\section{Introduction}
\label{sec:intro}
While both speech recognition and image classification remain active areas of research, extraordinary progress has been made on both problems---ones that appeared intractable only a decade
ago~\cite{LeCun:2015dt}. By harvesting the power of neural networks
while simultaneously benefiting from advances in computational
hardware, complex tasks such as automatic language translation are now
routinely performed by computers with a high degree of
reliability. The underlying explanation for these significant advances
seems to be related to the expressive power of neural networks, and
their ability to accurately represent high dimensional functions.
These successes open exciting possibilities in applied and
computational mathematics
that are
only beginning to be explored~\cite{Behler:2007fe,Schneider:2017dn,
Khoo:2018wz, Berg:2017tg,E:2017a,E2017b,Zhang:2018}. Any numerical
calculation that uses a given function begins with a
finite-dimensional approximation of that function. Because standard
approximations, e.g., Galerkin truncations or finite element
decompositions, suffer from the curse of dimensionality, it is nearly
impossible to scale such methods to large
dimensions~$d$. Fundamentally, these representations are linear
combinations of basis functions. The issue arises because the
dimensionality of the representation is equal to that of the
truncation. Neural networks, on the other hand, are highly nonlinear
in their adjustable parameters. As a result, the effective
dimensionality of the space of functions a neural network can represent is much higher than its number of parameters,
which may explain the impressive function approximation capabilities observed in practice, even when $d$ is large. Characterizing this
observation with analysis is non-trivial though, precisely because the
representation of a function by a neural network is nonlinear in its
parameters. This renders many of the standard tools of numerical
analysis useless, since they are in large part based on linear
algebra.
The significant achievements of machine learning have inspired many
efforts to provide theoretical justification to a vast and growing
body of empirical knowledge. At the core of our understanding of
the approximation properties of neural networks are the well-known ``Universal Approximation Theorems'' that
specify the conditions under which a neural network can represent a
target function with arbitrary
accuracy~\cite{Cybenko:1989fm,Barron:1993ba,Park:2008ka}.
Despite the power of these results, they do not indicate how the network parameters should be
optimized to achieve maximal accuracy in practice~\cite{Bottou:2003wp}. In particular, these theorems do not
provide general guidance on how the error of the network scales with its size at the end of training.
Several recent papers have focused on the analysis of the
shape and properties of the objective or ``loss'' function
landscape~\cite{Sagun:2014tg, Choromanska:2014ui,Baitsy:2018}. These
studies have mainly focused on the fine features of this landscape,
trying to understand how non-convex it is and making analogies with
glassy landscapes. Additionally, some analysis has been performed in
cases where the number of parameters vastly exceeds the amount of
training data, a setting that guarantees convexity and dramatically
simplifies the landscape. Further studies have examined the dynamics
of the parameters on the loss landscape to understand the properties
of optimization procedures based on stochastic gradient descent.
In this paper, we adopt a different perspective which enables powerful
tools for analysis. Similar to what was recently
proposed in~\cite{Mei:2018,Sirignano:2018vg,Chizat:2018}, we view the parameters
in the network as particles and the loss function as a potential that
dictates the interaction between them. Correspondingly, training the
network can be interpreted as the evolution of the particles in this
interaction potential. Using the exchangeability of the $n$ interacting particles / parameters in the neural representation, we focus on their empirical distribution and analyze its properties
when $n$ is large using standard limit
theorems~\cite{kipnis2013scaling,Serfaty:2015,leble2017large,Serfaty:2017kb}.
This viewpoint allows us to bypass many of the difficulties that arise
with approaches that attempt to study the dynamics of the individual
particles. In particular:
\begin{enumerate}
\item We derive an evolution equation for the empirical distribution of the
particles, and show that it evolves by gradient descent in the 2-Wasserstein metric on a convex energy landscape. This observation allows us to assert that convergence towards equilibrium of the empirical
distribution occurs on a time scale that is independent of $n$ to
leading order---similar results were obtained
in~\cite{Mei:2018,Sirignano:2018vg,Chizat:2018}. The results are
obtained in the form of a Law of Large Numbers (LLN) for the empirical
distribution of the parameters. As a consequence, we rederive the
Universal Approximation Theorem and establish that it can be realized dynamically.
\item We quantify the fluctuations of the empirical distribution at finite $n$ above its limit. We show that these fluctuations are of order $O(n^{-1/2})$ and controlled at all $t<\infty$. In addition, we establish conditions under which these fluctuations heal and become $O(n^{-1})$ as $t\to\infty$. These results rely on a Central Limit Theorem (CLT) and indicate that the neural network approximation error is universal and scales as $O(n^{-1})$ as $n\to\infty$ in any $d$.
\end{enumerate}
\noindent
These results are established first in situations where gradient
descent (GD) on the loss function is used to optimize or ``train'' the parameters in the
network, and then shown to also apply in the context of stochastic
gradient descent (SGD). In the latter case, our analysis sheds light
on the nature of the noise introduced in SGD, and indicates how the
time step and the batch size should be scaled to achieve the optimal
error. We briefly elaborate on these statements below, first precisely formulating the problem.
\subsection{Problem set-up}
\label{sec:setup}
Given a function $f:\Omega\to\mathbb{R}$ defined on the closed manifold
$\Omega \subseteq \mathbb{R}^d$, consider its approximation by a neural network of the form
\begin{equation}
\label{eq:21}
f^{(n)}(\boldsymbol{x}) = \frac1n \sum_{i=1}^n c_i \hat \varphi(\boldsymbol{x},\boldsymbol{z}_i)
\end{equation}
where $n\in \NN$, $(c_i,\boldsymbol{z}_i) \in D \equiv \mathbb{R} \times \hat D $ are
parameters to be learned for $i=1,\ldots, n$, and
$\hat\varphi: \Omega \times \hat D \to \mathbb{R}$ is some function---we assume
throughout this paper that $\hat D$ is a closed manifold in
$\mathbb{R}^N$. The function $\hat \varphi$ is usually referred to as the
`nonlinearity' or `unit' and $n$ as the width of the network. To
simplify notation, we use
$\boldsymbol{\theta} = (c,\boldsymbol{z})\in D$ and
$\varphi(\boldsymbol{x},\boldsymbol{\theta}) = c\hat \varphi(\boldsymbol{x},\boldsymbol{z})$, in terms of
which~\eqref{eq:21} reads
\begin{equation}
\label{eq:21compact}
f^{(n)}(\boldsymbol{x}) = \frac1n \sum_{i=1}^n \varphi(\boldsymbol{x},\boldsymbol{\theta}_i)
\end{equation}
Many models used in machine learning can be cast in
the form~\eqref{eq:21}-\eqref{eq:21compact}:
\begin{itemize}
\item \textbf{Radial basis function networks.} In this case
$\hat D \equiv\Omega$ and $\hat\varphi( \boldsymbol{x},\boldsymbol{z}) \equiv \phi(\boldsymbol{x}-\boldsymbol{z})$
where $\phi$ is some kernel, for example that of a radial function
such as
\begin{displaymath}
\phi(\boldsymbol{x}) = \exp\left(-\tfrac12 \kappa |\boldsymbol{x}|^2\right)
\end{displaymath}
where $\kappa>0$ is a fixed constant.
\item \textbf{Single hidden layer neural networks.} In this case,
$\hat D \subset \SS^{d}$ and $\hat \varphi( \boldsymbol{x},\boldsymbol{z})= \hat \varphi(\boldsymbol{x},\boldsymbol{a},b)$
with e.g. $\boldsymbol{a}\in \SS^{d-1}$, $b\in[-1,1]$, and
\begin{displaymath}
\hat\varphi(\boldsymbol{x},\boldsymbol{a},b) = h(\boldsymbol{a}\cdot \boldsymbol{x}+b)
\end{displaymath}
where $h:\mathbb{R}\to\mathbb{R}$ is e.g. a sigmoid function $h(z) = 1/(1+e^{-z})$
or a rectified linear unit (ReLU) $h(z) = \max(z,0)$.
\item \textbf{Iterated neural networks.} These are structurally similar
to single hidden layer neural networks. For example, to construct
a two-layer network we take $h$ as above and for $m\in \NN$,
$m\le d$ define $\boldsymbol{h}^{(1)}:\mathbb{R}^m \to \mathbb{R}^m$ such that
\begin{displaymath}
h_j^{(1)}(\boldsymbol{v}) = h(v_j), \qquad \boldsymbol{v} =(v_1,\ldots, v_m)\in \mathbb{R}^m, \quad j=1,\ldots, m
\end{displaymath}
then set
\begin{displaymath}
f^{(n)}(\boldsymbol{x}) = \frac1n \sum_{i=1}^n c_i h\left(\boldsymbol{a}_i^{(0)} \cdot
\boldsymbol{h}^{(1)}\left(A_i^{(1)} \boldsymbol{x} + \boldsymbol{b}_i^{(1)}\right)+b_i^{(0)}\right)
\end{displaymath}
where $\boldsymbol{a}_i^{(0)} \in \mathbb{R}^m$, $b_i^{(0)}\in\mathbb{R}$,
$A_i^{(1)} \in \mathbb{R}^{m\times d}$, $\boldsymbol{b}_i^{(1)}\in \mathbb{R}^m$,
$i=1,\ldots,n$. Therefore here we have
$\boldsymbol{z} = (\boldsymbol{a}^{(0)},b^{(0)},A^{(1)} ,\boldsymbol{b}^{(1)})\in \hat D\subset
\mathbb{R}^{m+1+m\times d + m}$ (where with a slight abuse of notation we
view the matrix $A^{(1)}$ as a vector in $\mathbb{R}^{m\times
d}$). Three-layer networks, etc., can be constructed similarly. Note
that our results apply to deep neural networks when their final layer
grows large, with fixed depth.
\end{itemize}
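As a concrete sketch, the normalized parametrization~\eqref{eq:21} for the first two examples can be written out in NumPy; this is a minimal illustration of ours, with arbitrary choices of $n$, $d$, the kernel parameter $\kappa$, and the random draws:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 64, 3
X = rng.standard_normal((100, d))        # 100 sample inputs in R^d

# Radial basis function network: phi_hat(x, z) = exp(-(kappa/2)|x - z|^2)
kappa = 1.0
c = rng.standard_normal(n)
Z = rng.standard_normal((n, d))
sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # |x - z_i|^2, shape (100, n)
f_rbf = np.exp(-0.5 * kappa * sq) @ c / n             # f^(n)(x) = (1/n) sum_i c_i phi_hat

# Single hidden layer network: phi_hat(x, a, b) = h(a.x + b), h a sigmoid
a = rng.standard_normal((n, d))
a /= np.linalg.norm(a, axis=1, keepdims=True)         # a_i on the sphere S^{d-1}
b = rng.uniform(-1.0, 1.0, n)
h = lambda u: 1.0 / (1.0 + np.exp(-u))
f_nn = h(X @ a.T + b) @ c / n

print(f_rbf.shape, f_nn.shape)           # (100,) (100,)
```

Both outputs are averages over $n$ units, so they remain $O(1)$ as $n$ grows, which is the scaling used throughout the paper.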
To measure the discrepancy between the target function $f$ and its neural network approximation $f^{(n)}$, we need to introduce a distance, or loss
function, between~$f$ and~$f^{(n)}$. A natural candidate often used in
practice is
\begin{equation}
\label{eq:21a}
\mathcal{L}[f^{(n)}] = \tfrac12\int_{\Omega} \big|f(\boldsymbol{x}) -
f^{(n)}(\boldsymbol{x})\big|^2 \nu(d\boldsymbol{x}) = \tfrac12 \mathbb{E}_\nu \big|f -
f^{(n)}\big|^2
\end{equation}
where $\nu$, the data distribution, is some positive
measure on $\Omega$ such that $\nu(\Omega)<\infty$ (for example the
Hausdorff measure on $\Omega$, which we will denote by $d\boldsymbol{x}$). We can view $\mathcal{L}[f^{(n)}]$ as an objective
function for $\{\boldsymbol{\theta}_i\}_{i=1}^n$:
\begin{equation}
\label{eq:51}
\mathcal{L}[f^{(n)}] = C_f
-\frac1n\sum_{i=1}^n F(\boldsymbol{\theta}_i) + \frac1{2n^2}
\sum_{i,j=1}^n K(\boldsymbol{\theta}_i,\boldsymbol{\theta}_j)
\end{equation}
where $C_f=\frac12\mathbb{E}_\nu\left|f\right|^2$ and
we defined
\begin{equation}
\label{eq:18}
F(\boldsymbol{\theta}) = c \hat F(\boldsymbol{z}), \qquad K(\boldsymbol{\theta},\boldsymbol{\theta}') = cc' \hat K(\boldsymbol{z},\boldsymbol{z}')
\end{equation}
with
\begin{equation}
\label{eq:22a}
\begin{aligned}
\hat F(\boldsymbol{z}) &= \mathbb{E}_\nu [f \hat\varphi(\cdot,\boldsymbol{z})],
\\
\hat K(\boldsymbol{z},\boldsymbol{z}') & = \mathbb{E}_\nu [\hat\varphi(\cdot,\boldsymbol{z})
\hat\varphi(\cdot,\boldsymbol{z}')] \equiv \hat K(\boldsymbol{z}',\boldsymbol{z}).
\end{aligned}
\end{equation}
Trying to minimize~\eqref{eq:51} over $\{\boldsymbol{\theta}_i\}_{i=1}^n$ leads
to difficulties, however, since this is potentially (and presumably) a
non-convex optimization problem, which typically has local
minimizers. In particular, if we perform training by making
$\{\boldsymbol{\theta}_i\}_{i=1}^n$ evolve via gradient descent (GD) over the
loss, i.e. if we use
\begin{equation}
\label{eq:34aa}
\dot\boldsymbol{\theta}_{\!i} = \nabla F(\boldsymbol{\theta}_{\!i}) -
\frac1n\sum_{j=1}^n \nabla K(\boldsymbol{\theta}_{\!i},\boldsymbol{\theta}_{\!j}),
\end{equation}
there is no guarantee \textit{a~priori} that these parameters will
reach the global minimum of the loss or even a local minimum with a
value for the loss that is close to that of the global minimum. As a
result, determining the value of~\eqref{eq:21a} after training (and its
scaling with $n$, say) is nontrivial. It is therefore
natural to ask:
\begin{quote}
{\it How accurate is the approximation~\eqref{eq:21}-\eqref{eq:21compact} if we optimize
$\{\boldsymbol{\theta}_i\}_{i=1}^n$ by applying the algorithms commonly used in machine learning?}
\end{quote}
This is the main question
we investigate in the present paper.
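For intuition, the GD flow~\eqref{eq:34aa} can be integrated by forward Euler in a toy instance of our own choosing (not taken from the paper) in which $F$ and $K$ are available in closed form: take $\Omega=\hat D=\SS^1$, $\nu$ uniform, $f(x)=\sin x$, and $\hat\varphi(x,z)=\cos(x-z)$, so that~\eqref{eq:22a} gives $\hat F(z)=\tfrac12\sin z$, $\hat K(z,z')=\tfrac12\cos(z-z')$, and $C_f=\tfrac14$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 50, 0.2, 1000
c = np.zeros(n)                          # initialization of the type delta_0(dc) x uniform(dz)
z = rng.uniform(0.0, 2 * np.pi, n)

def loss(c, z):
    # exact loss (eq. 51) for this toy model: C_f = 1/4,
    # F(c, z) = c sin(z)/2, K(theta, theta') = c c' cos(z - z')/2
    F = c * np.sin(z) / 2
    K = np.outer(c, c) * np.cos(z[:, None] - z[None, :]) / 2
    return 0.25 - F.mean() + K.sum() / (2 * n * n)

L_start = loss(c, z)
for _ in range(steps):
    dzz = z[:, None] - z[None, :]
    Kc = (c[None, :] * np.cos(dzz)).mean(axis=1) / 2                 # (1/n) sum_j dK/dc_i
    Kz = -(c[:, None] * c[None, :] * np.sin(dzz)).mean(axis=1) / 2   # (1/n) sum_j dK/dz_i
    dc = np.sin(z) / 2 - Kc              # dF/dc_i minus interaction term
    dzi = c * np.cos(z) / 2 - Kz         # dF/dz_i minus interaction term
    c += dt * dc
    z += dt * dzi
L_end = loss(c, z)
print(L_start, "->", L_end)
```

Here the loss decays toward zero because this target is exactly representable by the unit; the point of the analysis below is to quantify when and how fast this happens for large $n$.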
\subsection{Main results and organization}
We will consider the evolution in time of the representation
\begin{equation}
\label{eq:68aa}
f^{(n)}_t = \frac1n \sum_{i=1}^n \varphi(\cdot,\boldsymbol{\theta}_i(t))
\end{equation}
and study the behavior of this function for large $n$ and large
$t$. To this end we use tools from interacting particle systems, and
also build on known results about the nature of the loss
function~\eqref{eq:51} in the limit as $n\to\infty$---these results
are recalled in Sec.~\ref{sec:continuousintro}.
\medskip
In Sec.~\ref{sec:weighted} we consider the situation where
$\boldsymbol{\theta}_i(t)$ are the solution of the GD flow~\eqref{eq:34aa}---as
explained below, this is somewhat of an idealized situation since we
typically must work with the empirical loss rather than the exact one,
but it is more easily amenable to analysis. By looking at the
evolution of the empirical distribution of $\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$
rather than that of the parameters themselves, we establish a Law of
Large Numbers (LLN) for $f^{(n)}_t$ namely that
$\lim_{n\to\infty} f^{(n)}_t= f_t$, where $f_t$ evolves as
\begin{equation}
\label{eq:25}
\partial_t f_t(\boldsymbol{x}) = -\int_\Omega M_t(\boldsymbol{x},\boldsymbol{x}') \left(f_t(\boldsymbol{x}') -
f(\boldsymbol{x}')\right) \nu(d\boldsymbol{x}')
\end{equation}
where $f$ is the target function and $M_t(\boldsymbol{x},\boldsymbol{x}')$ is a positive
semi-definite kernel whose form is explicit---see Proposition~\ref{th:llnft}.
The evolution equation~\eqref{eq:25} can be interpreted as GD for $f_t$ over the
loss in some metric inherited from the 2-Wasserstein metric, and in Proposition~\ref{th:lln} we show the flow converges to the target function, i.e.,
\begin{equation}
\label{eq:82}
\lim_{t\to\infty} f_t =\lim_{t\to\infty} \lim_{n\to\infty} f^{(n)}_t = f.
\end{equation}
We also establish that the limits in $n$ and $t$ commute.
Regarding the scaling of the fluctuations above $f_t$ when $n$ is
finite, in Proposition~\ref{th:cltg} we establish a Central Limit Theorem (CLT) that asserts that
these fluctuations are of size $O(n^{-1/2})$, i.e.
$n^{1/2} (f^{(n)}_t - f_t)$ has a limit in law as $n\to\infty$. In
addition, in Proposition~\ref{th:cltglt} we show that these fluctuations are controlled at all times, and in Proposition~\ref{th:cltgvlt} that under certain conditions they heal as $t\to\infty$, in the sense that
\begin{equation}
\label{eq:55zT}
f^{(n)}_{a_n} = f + O(n^{-1}) \qquad \text{as} \quad
n\to\infty \quad \text{with} \quad a_n /\log n
\to\infty.
\end{equation}
\medskip
In Sec.~\ref{sec:stochgrad} we analyze the typical situation in which it is not possible to calculate~\eqref{eq:21a} or~\eqref{eq:22a} exactly. Rather, we must approximate these
expectations using a ``training set'', i.e. a set of points
$\{\boldsymbol{x}_p\}_{p=1}^P$ distributed according to~$\nu$ on which $f$ is known, so that instead of
$\mathcal{L}[f^{(n)}]$ we must use
\begin{equation}
\label{eq:57}
\mathcal{L}_P[f^{(n)}] = \frac1{2P} \sum_{p=1}^P |f(\boldsymbol{x}_p)-f^{(n)}(\boldsymbol{x}_p)|^2
\end{equation}
and instead of $\hat F$ and $\hat K$
\begin{equation} \label{eq:58} \hat F_P(\boldsymbol{z}) = \frac1P \sum_{p=1}^P
f(\boldsymbol{x}_p) \hat \varphi(\boldsymbol{x}_p,\boldsymbol{z}), \quad \hat K_P(\boldsymbol{z},\boldsymbol{z}') =
\frac1P \sum_{p=1}^P \hat \varphi(\boldsymbol{x}_p,\boldsymbol{z})
\hat\varphi(\boldsymbol{x}_p,\boldsymbol{z}').
\end{equation}
If in~\eqref{eq:34aa} we replace $F(\boldsymbol{\theta})$ and
$K(\boldsymbol{\theta},\boldsymbol{\theta}')$ by their empirical estimates over a subset of the training points
$F_P(\boldsymbol{\theta}) = c \hat F_P(\boldsymbol{z})$ and
$K_P(\boldsymbol{\theta},\boldsymbol{\theta}') = cc' \hat K_P(\boldsymbol{z},\boldsymbol{z}')$, we arrive at what is referred
to as stochastic gradient descent (SGD)---the method of
choice to train neural networks.
We focus on situations in which we
can redraw the training set as often as we need, namely, at every step
during the learning process, a setting known as online learning.
In this
case, in the limit as the optimization time step $\Delta t$ used in SGD
tends to zero, SGD becomes asymptotically equivalent to an SDE whose
drift terms coincide with those of GD but with multiplicative noise
terms added. In this context, we establish that~\eqref{eq:25}
and~\eqref{eq:82} also hold if we choose the size $P$ of the batch used
in~\eqref{eq:58} at every SGD step such that $P=O(n^{2\alpha})$
with $\alpha>0$. Regarding the scaling of the fluctuations, if we set
$\alpha\in (0,1)$, we lose accuracy
and~\eqref{eq:55zT} is replaced by
\begin{equation}
\label{eq:55zTSGD}
f^{(n)}_{a_n} = f + O(n^{-\alpha}) \qquad \text{as} \quad
n\to\infty \quad \text{with} \quad a_n /\log n
\to\infty.
\end{equation}
However, if $\alpha\ge1$, we get \eqref{eq:55zT} back (meaning also
that there is no advantage in taking $\alpha$ bigger than 1). These results are stated in Propositions~\ref{th:train} and \ref{th:train2}.
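Online SGD can be sketched in a self-contained toy setup of our own (again $f(x)=\sin x$ on $[0,2\pi)$ with uniform $\nu$ and unit $\varphi(x;c,z)=c\cos(x-z)$): at every step a fresh batch of $P$ points is drawn from $\nu$ and $F$, $K$ in the GD flow are replaced by their batch estimates~\eqref{eq:58}, which amounts to moving each particle along $\frac1P\sum_p r(\boldsymbol{x}_p)\nabla_{\boldsymbol{\theta}}\varphi(\boldsymbol{x}_p,\boldsymbol{\theta}_i)$ with residual $r=f-f^{(n)}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, P, dt, steps = 50, 200, 0.2, 2000
c = np.zeros(n)
z = rng.uniform(0.0, 2 * np.pi, n)

xg = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)   # grid only to monitor the loss

def loss(c, z):
    r = np.sin(xg) - c @ np.cos(xg[None, :] - z[:, None]) / n
    return 0.5 * np.mean(r ** 2)

L_start = loss(c, z)
for _ in range(steps):
    x = rng.uniform(0.0, 2 * np.pi, P)                   # fresh batch: online learning
    phi = np.cos(x[None, :] - z[:, None])                # phi_hat(x_p, z_i), shape (n, P)
    r = np.sin(x) - c @ phi / n                          # residual f - f^(n) on the batch
    c += dt * phi @ r / P                                # batch estimate, c-component
    z += dt * c * (np.sin(x[None, :] - z[:, None]) @ r) / P   # batch estimate, z-component
L_end = loss(c, z)
print(L_start, "->", L_end)
```

Because the noise is multiplicative in the residual, it vanishes as $f^{(n)}$ approaches $f$, so the stochastic iteration still drives the loss to zero in this example.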
In Sec.~\ref{sec:examples} we illustrate these results, using
a spherical $p$-spin model with $p=3$ as the target
function to represent with a neural network. We show that the network
accurately approximates this function in up to $d=25$ dimensions, with
a scaling of the error consistent with the results established in
Secs.~\ref{sec:weighted} and~\ref{sec:stochgrad}. These results are
obtained using both a radial basis function network, and a
single hidden layer network using sigmoid functions.
Concluding remarks are made in Sec.~\ref{sec:conclu} and in
an Appendix we establish a finite-temperature
variant (Langevin dynamics) of~\eqref{eq:55zT} which applies when
additive noise terms are added in the GD
equations~\eqref{eq:34aa}. This result reads
\begin{equation}
\label{eq:55}
\lim_{T\to-\infty} f^{(n)}_t = f + n^{-1} \tilde f_t + o(n^{-1}) \quad \text{with} \quad
\tilde f_t = \beta^{-1} \epsilon^* + \beta^{-1/2}\tilde \epsilon_t
\end{equation}
where $T$ is the time at which we initiate the training. Here
$\beta> 0$ is a parameter playing the role of inverse temperature,
$\epsilon^*:\Omega\to\mathbb{R}$ is some given (non-random) function and
$\tilde \epsilon_t :\Omega\to\mathbb{R}$ is a Gaussian process with mean
zero and covariance
$\mathbb{E} [\tilde \epsilon_t(\boldsymbol{x}) \tilde\epsilon_t(\boldsymbol{x}')]\propto
\delta(\boldsymbol{x}-\boldsymbol{x}') $. Note that \eqref{eq:55} gives~\eqref{eq:55zT} back
after quenching (i.e. by sending $\beta\to\infty$). The result
in~\eqref{eq:55} is stated in Proposition~\ref{th:cltfT}.
\bigskip
As we have emphasized, our approach has strong ties with the statistical
mechanics of systems of large numbers of interacting particles.
Our main aim here
is to introduce a framework showing how results and concepts developed
in this context are useful to address
questions in machine learning.
Conversely, we seek to illustrate that machine learning provides new mathematical questions about an interesting class of particle systems. With this in mind, we adopt a presentation style that relies on formal asymptotic arguments to
derive our results, though we are confident that providing rigorous proofs
to our propositions is achievable. To a certain extent, this program
was already started in~\cite{Mei:2018,Chizat:2018, Sirignano:2018vg}.
\section{Functional formulation of the learning problem}
\label{sec:continuousintro}
As discussed in Bach~\cite{Bach:2017}, it is useful to give conditions
under which~\eqref{eq:21} has a limit as $n\to\infty$, for two main
reasons: First it shows which functions can be represented as
in~\eqref{eq:21} if we allow the number of units $n$ to grow to
infinity, and second, while the loss
function~\eqref{eq:21a} may be non-convex for $\{\boldsymbol{\theta}_i\}_{i=1}^n$, the limiting functional for the parameter distribution is convex.
\subsection{Universal Approximation Theorem}
Consider the space $\mathcal{F}_1$ of all functions that can be represented as
\begin{equation}
\label{eq:52}
f = \int_{\hat D} \hat \varphi(\cdot,\boldsymbol{z}) \gamma(d\boldsymbol{z})
\end{equation}
where $\gamma$ is some (signed) Radon measure on $\hat D$ with finite total
variation ($L^1$-norm),
$|\gamma|_{\text{TV}}=\int_{\hat D} |\gamma(d\boldsymbol{z})|<\infty$: we will
denote the space of these Radon measures by $\mathcal{M}(\hat D)$ and
that of probability measures by $\mathcal{M}_+(\hat D)$. The space $\mathcal{F}_1$ is important in our context, since any $f\in \mathcal{F}_1$ can be realized as
\begin{equation}
\label{eq:116}
f = \lim_{n\to\infty} \frac1n \sum_{i=1}^n c_i\hat \varphi(\cdot, \boldsymbol{z}_i)
\end{equation}
by drawing $\{c_i,\boldsymbol{z}_i\}_{i\in \NN}$ as follows. Start from the
Jordan decomposition for $\gamma$~\cite{billing},
\begin{equation}
\label{eq:17}
\gamma = \gamma_+-\gamma_-,
\end{equation}
where $\gamma_+$ and $\gamma_-$ are positive measures with
$\supp \gamma_+ \cup \supp\gamma_- = \supp \gamma$ and
$\supp \gamma_+ \cap \supp\gamma_- = \emptyset$. Using this
decomposition, we can draw $\boldsymbol{z}_i$'s independently from
$(\gamma_++\gamma_-)/|\gamma|_{\text{TV}}\in\mathcal{M}_+(\hat
D)$, where
$|\gamma|_{\text{TV}} = \int_{\hat
D}(\gamma_+(d\boldsymbol{z})+\gamma_-(d\boldsymbol{z})) < \infty$, and set $c_i=+|\gamma|_{\text{TV}}$
if $\boldsymbol{z}_i\in \supp \gamma_+$ and $c_i=-|\gamma|_{\text{TV}}$ if
$\boldsymbol{z}_i\in \supp \gamma_-$. By the Law of Large Numbers we then have
\begin{equation}
\gamma_n = \frac1n \sum_{i=1}^n c_i\delta_{\boldsymbol{z}_i}\rightharpoonup
\gamma \qquad \text{as} \quad n\to\infty\label{eq:28}
\end{equation}
which implies~\eqref{eq:116}. We can also use the Central Limit
Theorem to get an approximation error at finite~$n$, a calculation we carry out in Sec.~\ref{sec:clt}.
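The sampling construction above can be checked numerically. The sketch below uses an illustrative choice of ours: on the circle with $\hat\varphi(x,z)=\cos(x-z)$, the signed measure $\gamma(dz)=\pi^{-1}\sin(z)\,dz$ represents $f(x)=\sin x$, with $\gamma_\pm$ supported on $[0,\pi]$ and $[\pi,2\pi)$ and $|\gamma|_{\text{TV}}=4/\pi$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
tv = 4.0 / np.pi                         # |gamma|_TV for gamma(dz) = sin(z)/pi dz

# draw z_i from (gamma_+ + gamma_-)/|gamma|_TV, density |sin z|/4, by rejection
z = np.empty(0)
while z.size < n:
    cand = rng.uniform(0.0, 2 * np.pi, 2 * n)
    keep = rng.uniform(size=cand.size) < np.abs(np.sin(cand))
    z = np.concatenate([z, cand[keep]])
z = z[:n]
c = np.where(np.sin(z) > 0, tv, -tv)     # +|gamma|_TV on supp gamma_+, else -|gamma|_TV

x = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
f_n = c @ np.cos(x[None, :] - z[:, None]) / n   # (1/n) sum_i c_i phi_hat(x, z_i)
err = np.max(np.abs(f_n - np.sin(x)))
print(err)
```

For this $n$ the observed error is of the size suggested by the CLT scaling $O(n^{-1/2})$.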
Since the space $\mathcal{F}_1$ depends on the choice of unit $\hat\varphi$, to characterize it we make:
\begin{assumption}
\label{th:ascomp} Both the input space $\Omega$ and the feature space $\hat D$ are closed (i.e. compact and without boundary) smooth Riemannian manifolds. The unit is continuously differentiable in $\boldsymbol{z}$, i.e. $\forall \boldsymbol{x}\in \Omega$, $\hat\varphi(\boldsymbol{x},\cdot) \in C^1(\hat D)$.
\end{assumption}
\begin{assumption}[Discriminating unit]
\label{th:as1} The unit satisfies
\begin{equation}
\label{eq:50}
\int_{\Omega} g(\boldsymbol{x}) \hat \varphi(\boldsymbol{x},\cdot) \nu(d\boldsymbol{x}) = 0 \quad
\text{a.e.\ \ in \ \ $\hat D$} \quad
\Rightarrow \quad g = 0 \quad \text{a.e.\ \ in\ \ $\Omega$}
\end{equation}
\end{assumption}
\noindent
The differentiability of $\hat\varphi$ in $\boldsymbol{z}$ is required to guarantee uniqueness of the GD flow.
\begin{theorem}[Universal Approximation Theorem~\cite{Cybenko:1989fm,Barron:1993ba, Park:2008ka}]
\label{th:1}
Under Assumptions~\ref{th:ascomp} and~\ref{th:as1}, $\mathcal{F}_1$ is a dense subspace of $L^2(\Omega,\nu)$, i.e. given any
$f\in L^2(\Omega,\nu)$ and $\epsilon >0$, there exists
$\gamma^* \in \mathcal{M}(\hat D)$
such that $|\gamma^*|_{\text{TV}}<\infty $ and
\begin{equation}
\label{eq:29}
f^* = \int_{\hat D} \hat \varphi(\cdot, \boldsymbol{z}) \gamma^*(d\boldsymbol{z}) \in \mathcal{F}_1
\end{equation}
satisfies
\begin{equation}
\label{eq:123}
\| f- f^*\|_{L^2(\Omega,\nu) } \le \epsilon.
\end{equation}
\end{theorem}
\noindent A similar theorem was originally stated in~\cite{Cybenko:1989fm}. Since its proof is elementary, let us reproduce it here:
\begin{proof}The space $\mathcal{F}_1$ is a linear subspace of $L^2(\Omega, \nu)$ since, if $f=\int_{\hat D} \hat \varphi(\cdot,\boldsymbol{z}) \gamma(d\boldsymbol{z}) \in \mathcal{F}_1$,
\begin{displaymath}
\begin{aligned}\|f\|^2_{L^2(\Omega,\nu)} &= \int_\Omega \left(\int_{\hat D} \hat \varphi(\boldsymbol{x},\boldsymbol{z}) \gamma(d\boldsymbol{z})\right)^2\nu(d\boldsymbol{x}) \\
&= \int_{\hat D \times \hat D} \hat K(\boldsymbol{z},\boldsymbol{z}') \gamma(d\boldsymbol{z})\gamma(d\boldsymbol{z}')\\
& \le \|\hat K\|_\infty |\gamma |^2_{\text{TV}} < \infty
\end{aligned}
\end{displaymath}
where we used $\|\hat K\|_\infty = \sup_{(\boldsymbol{z},\boldsymbol{z}')\in \hat D \times \hat D} |\hat K(\boldsymbol{z},\boldsymbol{z}')|<\infty$ which follows from Assumption~\ref{th:ascomp}.
To show that $\mathcal{F}_1$ is dense in $L^2(\Omega, \nu)$, we proceed by contradiction. Assuming that $\mathcal{F}_1$ is not dense, by the Hahn-Banach theorem, there exists a nonzero linear functional $L:L^2(\Omega, \nu)\to\mathbb{R}$ such that $Lf=0$ for all $f\in\mathcal{F}_1$. By the Riesz representation theorem, the action of $L$ on $f$ can be represented as the inner product in $L^2(\Omega, \nu)$ between $f$ and some $g\in L^2(\Omega, \nu)$, i.e. there must exist $g\not=0$ such that for all $f=\int_{\hat D} \hat \varphi(\cdot,\boldsymbol{z}) \gamma(d\boldsymbol{z})\in \mathcal{F}_1$ (i.e. all $\gamma \in \mathcal{M}(\hat D)$ with finite total variation)
\begin{displaymath}
\begin{aligned}0 &= \int_\Omega g(\boldsymbol{x}) \left(\int_{\hat D} \hat \varphi(\boldsymbol{x},\boldsymbol{z}) \gamma(d\boldsymbol{z})\right)\nu(d\boldsymbol{x}) \\
&= \int_{\hat D} \left( \int_\Omega g(\boldsymbol{x}) \hat \varphi(\boldsymbol{x},\boldsymbol{z}) \nu(d\boldsymbol{x}) \right) \gamma(d\boldsymbol{z}),
\end{aligned}
\end{displaymath}
Since this must hold for every $\gamma\in\mathcal{M}(\hat D)$, it requires that
\begin{displaymath}
0=\int_\Omega g(\boldsymbol{x}) \hat \varphi(\boldsymbol{x},\cdot ) \nu(d\boldsymbol{x})\quad \text{a.e.\ \ in \ \ $\hat D$},
\end{displaymath}
which, by Assumption~\ref{th:as1}, implies that $g=0$ a.e. in $\Omega$, a contradiction.
\end{proof}
From now on, we will make
\begin{assumption}
\label{as:inF1}
The target function is representable by the network, i.e., $f\in \mathcal{F}_1$.
\end{assumption}
\noindent
This means that we can take $f=f^*$ in Theorem~\ref{th:1}.
\subsection{Convexification at distributional level}
\label{sec:convex}
Another advantage of taking the $n\to\infty$ limit of~\eqref{eq:21} is
that it turns \eqref{eq:51} into a quadratic
objective function for~$\gamma$:
\begin{equation}
\label{eq:21ab}
\mathcal{L}[{\textstyle\int_{\hat D}}\hat \varphi(\cdot,\boldsymbol{z}) \gamma(d\boldsymbol{z})] = C_f -\int_{\hat D} \hat F(\boldsymbol{z}) \gamma(d\boldsymbol{z})+\tfrac12
\int_{\hat D\times\hat D} \hat K(\boldsymbol{z},\boldsymbol{z}') \gamma(d\boldsymbol{z}) \gamma(d\boldsymbol{z}')
\end{equation}
This means that minimizing~\eqref{eq:21ab} over $\gamma$ rather
than~\eqref{eq:51} over $\{\boldsymbol{\theta}_i\}_{i=1}^n$ is conceptually
simpler. In particular, any minimizer $\gamma^*$ of~\eqref{eq:21ab}
solves the linear Euler-Lagrange equation:
\begin{equation}
\label{eq:42}
\forall \boldsymbol{z} \in \hat D \ : \qquad \hat F(\boldsymbol{z}) = \int_{\hat D}
\hat K(\boldsymbol{z},\boldsymbol{z}') \gamma^*(d\boldsymbol{z}')
\end{equation}
and the loss evaluated on any $\gamma^*$ has value zero. Indeed, using
the definitions of $ \hat F$ and $\hat K$ in~\eqref{eq:22a},
\eqref{eq:42} can be written as
\begin{equation}
\label{eq:32}
\int_\Omega \hat \varphi(\boldsymbol{x},\boldsymbol{z}) \left( f(\boldsymbol{x})-
\int_{\hat D} \hat \varphi(\boldsymbol{x},\boldsymbol{z}') \gamma^*(d\boldsymbol{z}')\right)
\nu(d\boldsymbol{x}) =0
\end{equation}
which, by Assumptions~\ref{th:as1} and \ref{as:inF1}, has a solution such that
$f = \int_{\hat D} \hat \varphi(\cdot,\boldsymbol{z}) \gamma^*(d\boldsymbol{z})$ and, as
a result, the loss evaluated on
$\int_{\hat D} \hat \varphi(\cdot,\boldsymbol{z}) \gamma^*(d\boldsymbol{z})$ is zero.
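The Euler-Lagrange equation~\eqref{eq:42} can be verified numerically in a simple instance of our own choosing: on the circle with $\hat\varphi(x,z)=\cos(x-z)$, uniform $\nu$, and target $f(x)=\sin x$, one has $\hat F(z)=\tfrac12\sin z$ and $\hat K(z,z')=\tfrac12\cos(z-z')$, and the measure $\gamma^*(dz')=\pi^{-1}\sin(z')\,dz'$ solves~\eqref{eq:42}:

```python
import numpy as np

# Check F_hat(z) = int K_hat(z, z') gamma*(dz') for gamma*(dz') = sin(z')/pi dz'
z = np.linspace(0.0, 2 * np.pi, 7, endpoint=False)         # a few test points z
zp = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)  # quadrature grid in z'

lhs = np.sin(z) / 2                                        # F_hat(z)
# int_0^{2pi} (1/2) cos(z - z') sin(z')/pi dz' via uniform-grid quadrature,
# which is exact for these trigonometric integrands
rhs = np.array([np.mean(0.5 * np.cos(zi - zp) * np.sin(zp) / np.pi) * 2 * np.pi
                for zi in z])
print(np.max(np.abs(lhs - rhs)))
```

The two sides agree to machine precision, and the resulting $f=\int\hat\varphi(\cdot,z)\gamma^*(dz)=\sin$ indeed achieves zero loss.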
\bigskip
Of course, the results above are not necessarily an assurance of convergence in practice.
Indeed, we do not
know how to pick $\gamma\in \mathcal{M}(\hat D)$ to represent an $f\in \mathcal{F}_1$ nor can we manipulate these Radon measures explicitly:
rather we will have to learn finite $n$ approximations of the form $\gamma^{(n)} = n^{-1} \sum_{i=1}^n c_i \delta_{\boldsymbol{z}_i}$ by
adjusting the parameters
$\{\boldsymbol{\theta}_i\}_{i=1}^n = \{c_i,\boldsymbol{z}_i\}_{i=1}^n$ dynamically.
Furthermore, even though the energy can be expressed
in terms of $\gamma^{(n)}$, as we will see the dynamics can only be closed at the level of the empirical distribution
\begin{equation}
\label{eq:27}
\mu^{(n)}(dc,d\boldsymbol{z}) = \frac1n \sum_{i=1}^n \delta
_{c_i}(dc)\delta_{\boldsymbol{z}_i}(d\boldsymbol{z}) \equiv \frac1n \sum_{i=1}^n
\delta_{\boldsymbol{\theta}_i}(d\boldsymbol{\theta}) = \mu^{(n)}(d\boldsymbol{\theta})
\end{equation}
with $\gamma^{(n)}$ given by
\begin{equation}
\label{eq:76}
\gamma^{(n)} = \int_\mathbb{R} c \mu^{(n)}(dc,\cdot).
\end{equation}
Viewed as a functional of $\mu\in\mathcal{M}_+(D)$ such that $\int_\mathbb{R} c \mu(dc,\cdot) = \gamma \in \mathcal{M}(\hat D)$,
the loss function~\eqref{eq:21ab}
becomes
\begin{equation}
\label{eq:64a}
\begin{aligned}
\mathcal{E}[\mu] &= C_f - \int_{D} F(\boldsymbol{\theta}) \mu(d\boldsymbol{\theta}) +
\tfrac12 \int_{D\times D} K(\boldsymbol{\theta},\boldsymbol{\theta}') \mu(d\boldsymbol{\theta})\mu(d\boldsymbol{\theta}')\\
& = \tfrac12 \mathbb{E}_\nu\left(f - \int_{D}
\varphi(\cdot,\boldsymbol{\theta}) \mu (d\boldsymbol{\theta}) \right)^2 \ge 0
\end{aligned}
\end{equation}
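The identity between the two lines of~\eqref{eq:64a} can be checked numerically for an empirical measure $\mu^{(n)}$. The sketch below uses an illustrative circle setup of ours ($f=\sin$, $\hat\varphi(x,z)=\cos(x-z)$, $\nu$ uniform), for which $F(c,z)=\tfrac{c}{2}\sin z$, $K(\boldsymbol{\theta},\boldsymbol{\theta}')=\tfrac{cc'}{2}\cos(z-z')$, and $C_f=\tfrac14$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
c = rng.standard_normal(n)
z = rng.uniform(0.0, 2 * np.pi, n)

# first line of eq. (64a) evaluated on mu^(n) = (1/n) sum_i delta_{theta_i}
first = (0.25 - np.mean(c * np.sin(z) / 2)
         + (np.outer(c, c) * np.cos(z[:, None] - z[None, :]) / 2).sum() / (2 * n * n))

# second line: (1/2) E_nu |f - int phi dmu^(n)|^2 by quadrature on a uniform grid,
# which is exact for these trigonometric integrands
x = np.linspace(0.0, 2 * np.pi, 20_000, endpoint=False)
r = np.sin(x) - c @ np.cos(x[None, :] - z[:, None]) / n
second = 0.5 * np.mean(r ** 2)
print(first, second)
```

The two evaluations agree up to rounding, illustrating that $\mathcal{E}[\mu^{(n)}]$ is nothing but the original loss and is nonnegative.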
\section{Training by gradient descent on the exact loss}
\label{sec:weighted}
Here we assume that we train the network by dynamically evolving the
parameters $\{\boldsymbol{\theta}_{\!i}(t)\}_{i=1}^n$ according to the GD
flow~\eqref{eq:34aa}, which we recall is given by the coupled ODEs,
\begin{equation}
\label{eq:34}
\dot\boldsymbol{\theta}_{\!i} = \nabla F(\boldsymbol{\theta}_{\!i}) -
\frac1n\sum_{j=1}^n \nabla K(\boldsymbol{\theta}_{\!i},\boldsymbol{\theta}_{\!j}),
\end{equation}
for $i=1,\ldots, n$. As we show in Sec.~\ref{sec:stochgrad},
\eqref{eq:34} shares many properties with the stochastic gradient
descent (SGD) used in applications, though in SGD a
multiplicative noise term persists in the equations. The ODEs
in~\eqref{eq:34} are the GD flow on the energy:
\begin{equation}
\label{eq:66}
\begin{aligned}
E(\boldsymbol{\theta}_1,\cdots,\boldsymbol{\theta}_n)= n C_f -\sum_{i=1}^n F(\boldsymbol{\theta}_i) +
\frac1{2n} \sum_{i,j=1}^n K(\boldsymbol{\theta}_i,\boldsymbol{\theta}_j)
\end{aligned}
\end{equation}
This energy is simply the loss function in~\eqref{eq:51} rescaled by
$n$.
\smallskip
We consider~\eqref{eq:34} with initial conditions such that every
$\boldsymbol{\theta}_{\!i}(0)$ for $i=1, \dots,n$ is drawn independently
from some probability distribution $\mu_{\text{in}}$ satisfying
\begin{assumption}
\label{as:wellprep}
The distribution $\mu_{\text{in}}$ is such that: (i)
its support contains a smooth manifold that separates the regions in $D=\mathbb{R}\times \hat D$ where $c>c_0$ and $c<-c_0$ for some large enough $c_0>0$;
(ii) $\gamma_{\text{in}} = \int_{\mathbb{R}} c \mu_{\text{in}}(dc,\cdot)$
has finite total variation,
$|\gamma_{\text{in}}|_{\text{TV}} < \infty$; and (iii) $\forall b\in \mathbb{R}\ : \ \int_{\mathbb{R}\times \hat D} e^{b c}\mu_{\text{in}}(dc,d\boldsymbol{z}) <\infty.$
\end{assumption}
\noindent
Note that property (i) guarantees that $\hat \mu_{\text{in}} = \int_{\mathbb{R}} \mu_{\text{in}}(dc,\cdot)$ has
full support in $\hat D$, $\supp \hat \mu_{\text{in}} = \hat D$---we show below that this property is preserved by the dynamics.
Distributions $\mu_{\text{in}}$ that satisfy Assumption~\ref{as:wellprep} include e.g.
$$\delta_0(dc) \, \hat \mu_{\text{in}}(d\boldsymbol{z})\qquad \text{and} \qquad (2\pi)^{-1/2} e^{-\frac12 c^2} dc\, \hat\mu_{\text{in}}(d\boldsymbol{z}),$$
if $\supp \hat \mu_{\text{in}}=\hat D$ in both.
We denote the measure for the infinite set
$\{\boldsymbol{\theta}_{i}(0)\}_{i\in \NN}$ constructed this way by
$\PP_{\text{in}}$. Initial conditions of this type are used
in practice.
\subsection{Empirical distribution and nonlinear Liouville equation}
\label{sec:empiricaldist}
To proceed, we consider the empirical distribution
\begin{equation}
\label{eq:37}
\mu_t^{(n)} = \frac1n \sum_{i=1}^n
\delta_{\boldsymbol{\theta}_{\!i}(t)}
\end{equation}
in terms of which we can express~\eqref{eq:68aa} as
\begin{equation}
\label{eq:68}
f^{(n)}_t = \frac1n \sum_{i=1}^n \varphi(\cdot,\boldsymbol{\theta}_{\!i}(t))
= \int_{D} \varphi(\cdot,\boldsymbol{\theta}) \mu_t^{(n)}(d\boldsymbol{\theta}).
\end{equation}
The empirical distribution~\eqref{eq:37} is useful to work with
because it satisfies a nonlinear Liouville type equation
\begin{equation}
\label{eq:38}
\begin{aligned}
\partial_t \mu_t^{(n)} & = \nabla \cdot\left( \nabla V(\boldsymbol{\theta}, [\mu_t^{(n)}])
\mu_t^{(n)} \right)
\end{aligned}
\end{equation}
where we defined
\begin{equation}
\label{eq:16}
V(\boldsymbol{\theta}, [\mu]) = -F(\boldsymbol{\theta}) + \int_{D} K(\boldsymbol{\theta},\boldsymbol{\theta}') \mu(d\boldsymbol{\theta}')
\end{equation}
Throughout, we will interpret \eqref{eq:38} in the standard weak
sense, as in~\eqref{eq:38weak} below. When there is a Laplacian term
in~\eqref{eq:38} this equation is called the McKean-Vlasov
equation~\cite{McKean:1966ha, dawson_large_1987, gartner_mckean-vlasov_1988,sznit:2006}; with an additional noise term added it is often
referred to as Dean's equation~\cite{Dean:1999df}. To prove
asymptotic trainability results, we analyze the properties of the
solution to this equation as $n\to\infty$ and $t\to\infty$.
\paragraph{Derivation of~\eqref{eq:38}}
Let $\chi:D \to \mathbb{R}$ be a test function, and consider
\begin{equation}
\label{eq:85}
\int_D \chi(\boldsymbol{\theta})
\mu^{(n)}_t (d\boldsymbol{\theta}) = \frac1n \sum_{i=1}^n \chi(\boldsymbol{\theta}_i(t))
\end{equation}
Taking the time derivative of this equation and using~\eqref{eq:34}
we deduce
\begin{equation}
\label{eq:38weak}
\begin{aligned}
& \int_D \chi(\boldsymbol{\theta}) \partial_t\mu_t^{(n)}(d\boldsymbol{\theta})\\ &= \frac1n
\sum_{i=1}^n \nabla \chi(\boldsymbol{\theta}_i(t)) \cdot \dot \boldsymbol{\theta}_i(t)\\
& = \frac1n
\sum_{i=1}^n \nabla \chi(\boldsymbol{\theta}_i(t)) \cdot \left (\nabla
F(\boldsymbol{\theta}_i(t))
-\frac1n \sum_{j=1}^n \nabla K(\boldsymbol{\theta}_i(t),\boldsymbol{\theta}_j(t))\right)\\
& = \int_D \nabla \chi(\boldsymbol{\theta}) \cdot \left (\nabla F(\boldsymbol{\theta})
-\int_D \nabla K(\boldsymbol{\theta},\boldsymbol{\theta}') \mu_t^{(n)}(d\boldsymbol{\theta}')\right)
\mu_t^{(n)}(d\boldsymbol{\theta})
\end{aligned}
\end{equation}
This is the weak form of~\eqref{eq:38}.
\subsection{Law of Large Numbers (LLN)---mean field limit}
\label{sec:zero}
Since we know that, $\PP_{\text{in}}$-almost surely as $n\to\infty$,
$\mu_0^{(n)} \rightharpoonup \mu_{\text{in}}$ by the Law of Large
Numbers, we can take the limit as $n\to\infty$ of~\eqref{eq:38} to
formally deduce:
\begin{proposition}
Let $\mu^{(n)}_t$ be given by~\eqref{eq:37} with
$\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ the solution of~\eqref{eq:34} with initial
condition drawn from $\PP_{\text{in}}$. Then, as $n\to\infty$,
$\mu_t^{(n)}\rightharpoonup \mu_t$ a.s., where $\mu_t$ satisfies
\begin{equation}
\label{eq:39}
\begin{aligned}
\partial_t \mu_t & = \nabla \cdot\left( \nabla
V(\boldsymbol{\theta},[\mu_t]) \mu_t \right) \qquad \mu_0 =
\mu_{\text{in}},
\end{aligned}
\end{equation}
interpreted in the weak sense.
\end{proposition}
Note that \eqref{eq:39} is the same as~\eqref{eq:38} but with
a different initial condition. Note also that~\eqref{eq:39} is the GD
flow in the Wasserstein metric~\cite{villani_optimal_2009,ambrosio_gradient_2005}. Indeed this equation can be written
as the $\tau\to0$ limit of the Jordan-Kinderlehrer-Otto (JKO) proximal scheme~\cite{Jordan:1998b}
\begin{equation}
\label{eq:40}
\mu_{t+\tau} \in \argmin \left(\mathcal{E}[\mu] + \tfrac12 \tau^{-1}
W_2^2 (\mu,\mu_t)\right),\quad \mu_0 = \mu_{\text{in}}
\end{equation}
where $W_2 (\mu,\mu_t)$ is the 2-Wasserstein distance between $\mu$
and $\mu_t$ and $\mathcal{E}[\mu]$ is defined in ~\eqref{eq:64a}.
Finally, note that the weak solutions of~\eqref{eq:39}
satisfy: for any test function $\chi:D\to \mathbb{R}$,
\begin{equation}
\label{eq:23}
\int_D \chi(\boldsymbol{\theta}) \mu_t(d\boldsymbol{\theta}) = \int_D
\chi(\boldsymbol{\Theta}_t(\boldsymbol{\theta}))
\mu_{\text{in}}(d\boldsymbol{\theta})
\end{equation}
where $\boldsymbol{\Theta}_t(\boldsymbol{\theta})$ solves the characteristic equations
\begin{equation}
\label{eq:30}
\dot \boldsymbol{\Theta}_t(\boldsymbol{\theta}) = -\nabla V(\boldsymbol{\Theta}_t(\boldsymbol{\theta}),[\mu_t]), \qquad
\boldsymbol{\Theta}_0(\boldsymbol{\theta}) = \boldsymbol{\theta}.
\end{equation}
Of course, \eqref{eq:23} is not explicit since~\eqref{eq:30} depends
on $\mu_t$, but this representation formula is useful in the
sequel. In particular, notice that it implies that: (i) $\mu_t\in \mathcal{M}_+(D)$ for all $t<\infty$ since the velocity field in~\eqref{eq:30} is globally Lipschitz on $\mathbb{R}\times \hat D$ by Assumption~\ref{th:ascomp} and hence the solutions to this equation exist for all $t<\infty$; and (ii) $\supp \hat \mu_t = \hat D$ with $\hat \mu_t =\int_{\mathbb{R}} \mu_t(dc,\cdot)$ by Assumption~\ref{as:wellprep}, and $\supp \mu_t= D$ if $\supp \mu_{\text{in}}=D$.
\paragraph{The dynamics of $f_t=\lim_{n\to\infty} f^{(n)}_t$}
\label{sec:dynoff0}
We now discuss the implications of the limiting PDE for the evolution of
\begin{equation}
\label{eq:125}
\lim_{n\to\infty} f^{(n)}_t = \lim_{n\to\infty} \int_{D}
\varphi(\cdot,\boldsymbol{\theta}) \mu^{(n)}_t(d\boldsymbol{\theta})
= \int_{D} \varphi(\cdot,\boldsymbol{\theta}) \mu_t(d\boldsymbol{\theta}) \equiv f_t
\end{equation}
To begin, note that from \eqref{eq:16} we can express $V(\boldsymbol{\theta},[\mu_t])$ as
\begin{equation}
\label{eq:17b}
V(\boldsymbol{\theta},[\mu_t]) =
\int_{\Omega} \left(f_t(\boldsymbol{x})- f(\boldsymbol{x}) \right) \varphi(\boldsymbol{x},\boldsymbol{\theta}) \nu(d\boldsymbol{x})
\end{equation}
As a result~\eqref{eq:39} can be written as
\begin{equation}
\label{eq:39sb}
\partial_t \mu_t = \nabla \cdot\left( \int_{\Omega}
\nabla_{\boldsymbol{\theta}} \varphi(\boldsymbol{x},\boldsymbol{\theta}) \left( f_t(\boldsymbol{x})
-f(\boldsymbol{x})\right)\nu(d\boldsymbol{x})\mu_t\right)
\end{equation}
and we deduce, using~\eqref{eq:125},
\begin{equation}
\label{eq:99lln}
\begin{aligned}
\partial_t f_t &= \int_{D}
\varphi(\cdot,\boldsymbol{\theta}) \partial_t\mu_t(d\boldsymbol{\theta})\\
& = -\int_{D} \nabla_{\boldsymbol{\theta}}\varphi(\cdot,\boldsymbol{\theta}) \cdot\left(
\int_{\Omega} \nabla_{\boldsymbol{\theta}} \varphi(\boldsymbol{x}',\boldsymbol{\theta})\left(
f_t(\boldsymbol{x}')
-f(\boldsymbol{x}')\right)\nu(d\boldsymbol{x}')\mu_t(d\boldsymbol{\theta})\right)\\
\end{aligned}
\end{equation}
Interchanging the order of integration
gives:
\begin{proposition}[LLN]
\label{th:llnft}
Let $f^{(n)}_t$ be given by~\eqref{eq:68} with
$\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ solution of~\eqref{eq:34} with initial
condition drawn from $\PP_{\text{in}}$. Then, as $n\to\infty$,
$f_t^{(n)}\to f_t$ a.s. pointwise, where $f_t$ satisfies
%
\begin{equation}
\label{eq:48}
\partial_t f_t(\boldsymbol{x}) = -\int_{\Omega} M([\mu_t],\boldsymbol{x},\boldsymbol{x}')
\left(f_t(\boldsymbol{x}') - f(\boldsymbol{x}') \right)\nu(d\boldsymbol{x}')
\end{equation}
where we defined the kernel
\begin{equation}
\label{eq:96}
\begin{aligned}
M([\mu],\boldsymbol{x},\boldsymbol{x}') &= \int_{D} \nabla_{\boldsymbol{\theta}}
\varphi(\boldsymbol{x},\boldsymbol{\theta})
\cdot\nabla_{\boldsymbol{\theta}} \varphi(\boldsymbol{x}',\boldsymbol{\theta}) \mu(d\boldsymbol{\theta})\\
&= \int_{\mathbb{R}\times \hat D} \left(c^2\nabla_{\boldsymbol{z}} \hat
\varphi(\boldsymbol{x},\boldsymbol{z}) \cdot\nabla_{\boldsymbol{z}} \hat\varphi(\boldsymbol{x}',\boldsymbol{z}) +
\hat\varphi(\boldsymbol{x},\boldsymbol{z}) \hat\varphi(\boldsymbol{x}',\boldsymbol{z}) \right) \mu(dc,d\boldsymbol{z}).
\end{aligned}
\end{equation}
\end{proposition}
\noindent
The kernel~\eqref{eq:96} is symmetric in $\boldsymbol{x}$ and $\boldsymbol{x}'$ for any
$\mu\in \mathcal{M}(D)$ and positive semidefinite if
$\mu\in \mathcal{M}_+(D)$ since, given any $r\in L^2(\Omega,\nu)$, we
then have
\begin{equation}
\label{eq:97}
\begin{aligned}
&\int_{\Omega^2} r(\boldsymbol{x}) r(\boldsymbol{x}') M([\mu],\boldsymbol{x},\boldsymbol{x}') \nu(d\boldsymbol{x})
\nu(d\boldsymbol{x}') \\
& = \int_{\mathbb{R}\times \hat D} \left(c^2|\nabla_{\boldsymbol{z}} R(\boldsymbol{z})|^2 +
|R(\boldsymbol{z})|^2\right)\mu(dc,d\boldsymbol{z}) \ge 0
\end{aligned}
\end{equation}
where
\begin{equation}
\label{eq:98}
R(\boldsymbol{z}) = \int_\Omega r(\boldsymbol{x}) \hat\varphi(\boldsymbol{x},\boldsymbol{z}) \nu(d\boldsymbol{x}).
\end{equation}
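The positive semidefiniteness expressed by~\eqref{eq:97} is also easy to verify numerically. The sketch below is illustrative only: it takes $\hat\varphi(x,z)=\tanh(zx)$ in one dimension and a standard Gaussian measure for $\mu$ (both our assumptions, not choices made in the text), builds a Monte Carlo estimate of the kernel matrix $M$ on a grid, and checks symmetry and nonnegativity of its spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choices (ours): hat phi(x, z) = tanh(z x) in one dimension,
# and mu a standard Gaussian in (c, z), which lies in M_+(D).
m = 5000
c = rng.normal(size=m)
z = rng.normal(size=m)

x = np.linspace(-1.0, 1.0, 40)

phi = np.tanh(np.outer(z, x))                     # hat phi(x, z_k), shape (m, nx)
dphi = x[None, :] / np.cosh(np.outer(z, x)) ** 2  # grad_z hat phi(x, z_k)

# Monte Carlo estimate of the kernel M([mu], x, x') of eq. (96):
# E_mu[ c^2 grad_z hat phi(x) grad_z hat phi(x') + hat phi(x) hat phi(x') ]
M = (dphi.T * c**2) @ dphi / m + phi.T @ phi / m

sym_err = np.abs(M - M.T).max()        # symmetry in x and x'
min_eig = np.linalg.eigvalsh(M).min()  # positive semidefiniteness
```

The matrix is symmetric by construction, and its smallest eigenvalue is nonnegative up to floating-point error, as the quadratic-form identity~\eqref{eq:97} predicts.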
Equation~\eqref{eq:48} also confirms that $f_t$ evolves on a
quadratic landscape, namely the loss function~\eqref{eq:21a} itself:
indeed, this equation can be written as
\begin{equation}
\label{eq:134}
\partial_t f_t(\boldsymbol{x}) = -\int_{\Omega}
M([\mu_t],\boldsymbol{x},\boldsymbol{x}')\, D_{f_t(\boldsymbol{x}')}\mathcal{L}[f_t] \nu(d\boldsymbol{x}')
\end{equation}
where $D_{f(\boldsymbol{x})}$ denotes the gradient with respect to $f(\boldsymbol{x})$ in the
$L^2(\Omega,\nu)$-norm, i.e. given a functional $\mathcal{F}[f]$,
\begin{equation}
\label{eq:104}
\forall h: \Omega \to \mathbb{R} \ : \
\frac{d}{dz}\mathcal{F}[f+z h]\Big|_{z=0} = \< h, D_{f} \mathcal{F}[f]
\>_{L^2(\Omega, \nu)} = \int_\Omega h(\boldsymbol{x}) D_{f(\boldsymbol{x})} \mathcal{F}[f] \nu(d\boldsymbol{x}).
\end{equation}
That is, $D_{f(\boldsymbol{x})}$ reduces to $\delta/\delta f(\boldsymbol{x}) $ if
$\nu(d\boldsymbol{x}) = d\boldsymbol{x}$.
\subsection{Long time behavior---global convergence}
\label{sec:lln}
Let us now analyze the long-time solutions of~\eqref{eq:39} for the
weak limit $\mu_t$ of $\mu_t^{(n)}$ and~\eqref{eq:48} for the limit
$f_t$ of $f^{(n)}_t$. As is well-known, \eqref{eq:39} has more
stationary points than $\mathcal{E}[\mu]$ has minimizers. Since
\eqref{eq:39} is the Wasserstein GD flow on $\mathcal{E}[\mu]$, a
direct calculation shows that $E_t=\mathcal{E}[\mu_t]$ satisfies
\begin{equation}
\label{eq:65}
\frac{dE_t}{dt} = - \int_{D}
|\nabla V(\boldsymbol{\theta},[\mu_t])|^2\mu_t(d\boldsymbol{\theta})
\end{equation}
This equation implies that the stationary points $\mu^s$
of~\eqref{eq:39} are the solutions of
\begin{equation}
\label{eq:45s}
\nabla V(\boldsymbol{\theta},[\mu^s]) = 0 \qquad \text{for} \quad \boldsymbol{\theta}\in
\supp \mu^s.
\end{equation}
This should be contrasted with the minimizers of
$\mathcal{E}[\mu]$, which satisfy:
\begin{equation}
\label{eq:45}
\left\{\begin{aligned}
&V(\boldsymbol{\theta},[\mu^*]) \ge \bar V[\mu^*]\qquad \text{for} \quad
\boldsymbol{\theta}\in D\\
&V(\boldsymbol{\theta},[\mu^*]) = \bar V[\mu^*] \qquad \text{for} \quad
\boldsymbol{\theta}\in \supp \mu^*
\end{aligned}
\right.
\end{equation}
where $\bar V[\mu] = \int_D V(\boldsymbol{\theta},[\mu]) \mu(d\boldsymbol{\theta})$. In
general, we cannot guarantee that the solutions to~\eqref{eq:45s} also
solve~\eqref{eq:45}.
However, due to the specific form of the
unit, $\varphi(\boldsymbol{x},\boldsymbol{\theta}) = c\hat \varphi(\boldsymbol{x},\boldsymbol{z})$, the rate of decay of the energy~\eqref{eq:65} reads
\begin{equation}
\label{eq:65b}
\begin{aligned}
\frac{dE_t}{dt} &= - \int_{\mathbb{R}\times \hat D} \left(c^2 |\nabla \hat
V(\boldsymbol{z},[\mu_t])|^2 + |\hat
V(\boldsymbol{z},[\mu_t])|^2\right)\mu_t(dc,d\boldsymbol{z})\\
& = - \int_{\mathbb{R}\times \hat D} c^2 |\nabla \hat
V(\boldsymbol{z},[\mu_t])|^2 \mu_t(dc,d\boldsymbol{z}) - \int_{\hat D} |\hat
V(\boldsymbol{z},[\mu_t])|^2 \hat \mu_t(d\boldsymbol{z})
\end{aligned}
\end{equation}
where $\hat \mu_t = \int_{\mathbb{R}} \mu_t(dc,\cdot)$ and
\begin{equation}
\label{eq:54}
\hat V(\boldsymbol{z},[\mu]) = - \hat F(\boldsymbol{z}) + \int_{\mathbb{R}\times \hat D} c' \hat
K(\boldsymbol{z},\boldsymbol{z}') \mu(dc',d\boldsymbol{z}')
\end{equation}
Equation~\eqref{eq:65b} implies that the stationary points $\mu^s$
of~\eqref{eq:39} satisfy
\begin{equation}
\label{eq:24}
\begin{aligned}
&\hat V(\boldsymbol{z},[\mu^s]) = 0 \quad \text{for} \quad \boldsymbol{z} \in \supp \hat
\mu^s, \quad \text{where} \quad \hat \mu^s = \int_{\mathbb{R}} \mu^s(dc,\cdot)\\
\end{aligned}
\end{equation}
As a result $V(\boldsymbol{\theta},[\mu^s]) = c\hat V(\boldsymbol{z},[\mu^s]) =0 $ for
$\boldsymbol{\theta}=(c,\boldsymbol{z}) \in \supp \mu^s$, and this shows that the second
equation in~\eqref{eq:45} is automatically satisfied, noting that $\bar V
=0$ for a global minimizer.
To show that the first equation of~\eqref{eq:45} also holds, we establish that
$\hat V(\boldsymbol{z},[\mu^s]) = 0$ everywhere in $\hat D$. We proceed by
contradiction: Suppose that $\mu_t$ converges to some
$\mu^s$ such that $\hat V(\boldsymbol{z},[\mu^s]) \not= 0$ for $\boldsymbol{z} \in \hat D^c_s$
where $\hat D^c_s$ is the complement in $\hat D $ of
$\hat D_s = \supp \hat \mu^s$---the relevant case is when $\hat D^c_s$ has nonzero Hausdorff measure in $\hat D$. Looking at the characteristic
equations~\eqref{eq:30} written in terms of $\boldsymbol{\Theta}_t=(C_t,\boldsymbol{Z}_t)$ as
\begin{equation}
\label{eq:12}
\left\{
\begin{aligned}
\dot C_t(c,\boldsymbol{z}) &= - \hat V(\boldsymbol{Z}_t(c,\boldsymbol{z}),[\mu_t]), \qquad &C_0(c,\boldsymbol{z})=c\\
\dot \boldsymbol{Z}_t(c,\boldsymbol{z}) &= -C_t(c,\boldsymbol{z}) \nabla \hat V(\boldsymbol{Z}_t(c,\boldsymbol{z}),[\mu_t]),
\qquad &\boldsymbol{Z}_0(c,\boldsymbol{z})=\boldsymbol{z}
\end{aligned}
\right.
\end{equation}
Since we know that
$\supp\hat \mu_t = \hat D$ at all positive times $t<\infty$, in order that
$\mu_t \to \mu^s$ as $t\to\infty$, all the mass must be expelled from $\hat D^c_s$. That is, all the solutions to~\eqref{eq:12} must leave this domain, or at
least accumulate at its boundary, while at the same time we must have
$\lim_{t\to\infty}\hat V(\boldsymbol{z},[\mu_t]) \not= 0$. To show that this scenario
is impossible, note that (using the fact that $\hat D$ is compact)
\begin{equation}
\label{eq:81}
\forall \delta >0 \quad \exists t_c>0 \ : \
\sup_{\boldsymbol{z}} |\hat V(\boldsymbol{z},[\mu_t]) - \hat V(\boldsymbol{z},[\mu_s])|\le \delta
\qquad \text{if} \quad t \ge t_c.
\end{equation}
This means that, for $t\ge t_c$, to leading order in
$\delta$, \eqref{eq:12} reads
\begin{equation}
\label{eq:12s}
\left\{
\begin{aligned}
\dot C_t(c,\boldsymbol{z}) &= - \hat
V(\boldsymbol{Z}_t(c,\boldsymbol{z}),[\mu^s]), \qquad &C_0(c,\boldsymbol{z})=c\\
\dot \boldsymbol{Z}_t(c,\boldsymbol{z}) &= -C_t(c,\boldsymbol{z}) \nabla \hat
V(\boldsymbol{Z}_t(c,\boldsymbol{z}),[\mu^s]),
\qquad &\boldsymbol{Z}_0(c,\boldsymbol{z})=\boldsymbol{z}
\end{aligned}
\right.
\end{equation}
which is the GD flow on
\begin{equation}
\label{eq:26}
c \hat V(\boldsymbol{z},[\mu^s])
\end{equation}
Suppose that $\hat V(\boldsymbol{z},[\mu^s])>0$ somewhere in $\hat D_s^c$---the case when $\hat V(\boldsymbol{z},[\mu^s])<0$ somewhere in $\hat D_s^c$ can be treated similarly. Since $\hat D_s^c$ is compact, $\hat V(\boldsymbol{z},[\mu^s])$ must then have a maximum in $\hat D_s^c$, i.e. (using the differentiability of the unit) there exists $\boldsymbol{z}_1 \in\hat D_s^c$ with $\boldsymbol{z}_1\not \in \partial \hat D_s^c$ such that $\nabla \hat V(\boldsymbol{z}_1,[\mu^s])=0$, $\hat V(\boldsymbol{z}_1,[\mu^s])= \hat V_1 >0$, and $\hat V(\boldsymbol{z}_1,[\mu^s])\ge\hat V(\boldsymbol{z},[\mu^s])$ for all $\boldsymbol{z}\in\hat D_s^c$. Consider the solutions to~\eqref{eq:12s} for initial $(c,\boldsymbol{z})$ such that
$\boldsymbol{Z}_t(c,\boldsymbol{z})$ is very close to $\boldsymbol{z}_1$ at $t=t_c$---these
solutions must exist since $\supp \hat\mu_t= \hat D$ for all $t<\infty$. If among these solutions there are some such that $C_t(c,\boldsymbol{z})$ is negative at time $t=t_c$ (which is always the case if $\supp \mu_{\rm in}=D$ since $\supp \mu_t = D$ for all $t<\infty$ in that case), then by~\eqref{eq:12s} $C_t(c,\boldsymbol{z})$ becomes more negative and $\boldsymbol{Z}_t(c,\boldsymbol{z})$ gets closer to $\boldsymbol{z}_1$ for $t>t_c$. If instead all $C_t(c,\boldsymbol{z})$ are positive at time $t=t_c$, then the corresponding $\boldsymbol{Z}_t(c,\boldsymbol{z})$ move away from $\boldsymbol{z}_1$ for as long as their $C_t(c,\boldsymbol{z})$ remains positive; however, eventually some $C_t(c,\boldsymbol{z})$ become negative (since $C_{t+\tau}(c,\boldsymbol{z}_1) = C_{t}(c,\boldsymbol{z}_1)- \tau\hat V(\boldsymbol{z}_1,[\mu^s])= C_{t}(c,\boldsymbol{z}_1)- \tau\hat V_1$ under~\eqref{eq:12s}), at which point we are back to the first case and $\boldsymbol{Z}_t(c,\boldsymbol{z})$ gets closer to $\boldsymbol{z}_1$. Either way, we can always find solutions with $\boldsymbol{Z}_t(c,\boldsymbol{z})$ sufficiently close to $\boldsymbol{z}_1$ at time $t_c$ that eventually converge to $\boldsymbol{z}_1$ rather than exiting $\hat D_s^c$, contradicting our assumption that all solutions must either exit this domain or accumulate at its boundary. This argument is based on~\eqref{eq:12s} rather than the original~\eqref{eq:12}, but by taking $\delta$ small enough (and $t_c$ large enough) we can make the terms neglected in~\eqref{eq:12s} arbitrarily small so that they do not affect the conclusion.
This concludes the justification that the stationary points $\mu^s$ of~\eqref{eq:39} are such that $\hat V(\boldsymbol{z},[\mu^s])=0$ everywhere in $\hat D$, i.e. they are minimizers of $\mathcal{E}[\mu]$, which from~\eqref{eq:24} implies
\begin{equation}
\label{eq:44}
\forall \boldsymbol{z} \in \hat D \ : \qquad 0= \int_{\Omega} \hat \varphi(\boldsymbol{x},\boldsymbol{z})\left( f(\boldsymbol{x}) - \int_{\hat
D}\hat\varphi(\boldsymbol{x},\boldsymbol{z}')\gamma^s(d\boldsymbol{z}') \right) \nu(d\boldsymbol{x})
\end{equation}
where $\gamma^s = \int_{\mathbb{R}} c \mu^s(dc,\cdot)$. As a result, by
Assumptions~\ref{th:as1} and~\ref{as:inF1},
\begin{equation}
f =\int_{\hat D}\hat\varphi(\cdot,\boldsymbol{z})\gamma^s(d\boldsymbol{z}).
\label{eq:56}
\end{equation}
In other words, we have established:
\begin{proposition}[Global convergence]
\label{thm:limrhot}
Let $\mu_t$ be the solution to~\eqref{eq:39} for an initial
condition $\mu_0=\mu_{\text{in}}$ that satisfies
Assumption~\ref{as:wellprep}. If
$\mu_t \to \mu^*\in \mathcal{M}_+(D)$ as $t\to\infty$, then under
Assumptions~\ref{th:as1} and~\ref{as:inF1} $\mu^*$
is a minimizer of $\mathcal{E}[\mu]$ and we have
\begin{equation}
\label{eq:31}
\lim_{t\to\infty} \int_{D} \varphi(\cdot, \boldsymbol{\theta})
\mu_t(d\boldsymbol{\theta}) =\int_{D} \varphi(\cdot, \boldsymbol{\theta})
\mu^*(d\boldsymbol{\theta})= f.
\end{equation}
\end{proposition}
\noindent
Note that we assume that $\mu_t$ converges to some probability
measure to establish this proposition. This is because we cannot
exclude \textit{a~priori} that the dynamics eventually loses mass at
infinity, e.g. if some of the solutions of the characteristic
equation~\eqref{eq:30} eventually diverge as $t\to\infty$. We do not expect this
scenario to occur for most initial conditions and one way to preclude it
altogether is to add regularizing terms in the loss function.
The argument that leads to~\eqref{eq:31} would be simple if it were the case that
$\hat \mu^*= \int_{\mathbb{R}} \mu^*(dc,\cdot)$ has full support in $\hat D$.
Indeed this would imply that the kernel~\eqref{eq:96} evaluated on $\mu_t$ is positive
definite not only for all $t\ge 0$ but also in the limit as
$t\to\infty$ and hence the only fixed point of~\eqref{eq:48} is $f$.
It is reasonable to assume that $\supp \hat \mu^*=\hat D$ because: (i)
$\supp \hat \mu_t = \hat D$ for all $t<\infty$ as mentioned before
and (ii) there is no energetic incentive to shrink the support, even when
$t\to\infty$. To see why, note that if $\mu^*$ is an energy
minimizer such that $\supp \hat \mu^*\not =\hat D$, then a direct
calculation shows that for any $\alpha\in(0,1)$ and any
$\hat \mu\in \mathcal{M}_+(\hat D)$ with $\supp \hat \mu = \hat D$,
\begin{equation}
\label{eq:63}
\mu^{**}(dc,d\boldsymbol{z}) = (1-\alpha)^2 \mu^*((1-\alpha) dc, d\boldsymbol{z}) + \alpha
\delta_0(dc) \hat \mu(d\boldsymbol{z})
\end{equation}
is also an energy minimizer in $\mathcal{M}_+(D)$ such that
$\hat \mu^{**} = \int_{\mathbb{R}} \mu^{**}(dc,\cdot)$ has support $\hat
D$.
In the Appendix, we analyze the behavior of
$\mu_t$ on a longer timescale and show that, with noise and certain
regularizing terms added in~\eqref{eq:34}, $\mu_t$ reaches a unique
fixed point $\mu^*\in \mathcal{M}_+(D)$ such that
$\int_{D} \log (d\mu^*/d\mu^0) d\mu^* <\infty$, where $\mu^0$ is some
prior measure used for regularization.
\bigskip
We can summarize the results of Secs.~\ref{sec:zero} and \ref{sec:lln}
into:
\begin{proposition}[LLN \& global convergence]
\label{th:lln}
Let $f^{(n)}_t$ be given by~\eqref{eq:68} with
$\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ solution of~\eqref{eq:34} with
initial condition drawn from $\PP_{\text{in}}$. Then under the
conditions of Proposition~\ref{thm:limrhot} we have
\begin{equation}
\label{eq:86lln}
\lim_{n\to\infty} f^{(n)}_t = f_t
\qquad \text{pointwise, \ $\PP_{\text{in}}$-almost surely}
\end{equation}
where $f_t$ solves~\eqref{eq:48} and satisfies
\begin{equation}
\label{eq:127}
\lim_{t\to\infty} f_t = f \quad \text{a.e.\ \ in \ \ $\Omega$}.
\end{equation}
\end{proposition}
\noindent
The convergence in~\eqref{eq:127} is equivalent to the statement in
Proposition~\ref{thm:limrhot}. Notice that, since the evolution of
$f_t$ occurs via~\eqref{eq:48}, which is independent of~$n$, for any $\delta>0$ we can find $t_c$ independent of $n$ such that for $t>t_c$, $\mathbb{E}_\nu|f_t-f|^2<\delta$. Since for any $\delta >0$ and $t>0$ we can also find $n_c$ such that for $n>n_c$, $\mathbb{E}_\nu|f^{(n)}_t-f_t|^2<\delta$, we can
interchange the limits in $n$ and $t$ in Proposition~\ref{th:lln}, i.e. we
also have
\begin{equation}
\label{eq:15}
\lim_{n\to\infty} \lim_{t\to\infty} f^{(n)}_t = f.
\end{equation}
\subsection{Central Limit Theorem (CLT)}
\label{sec:clt}
Let us now consider the fluctuations of $\mu_t^{(n)}$ around its limit
$\mu_t$. To this end, we define~$\omega^{(n)}_t$ via:
\begin{equation}
\label{eq:74}
\omega^{(n)}_t = n^{1/2} \left( \mu_t^{(n)}-\mu_t\right).
\end{equation}
Explicitly, \eqref{eq:74} means:
\begin{equation}
\label{eq:106}
\omega^{(n)}_t = n^{-1/2} \sum_{i=1}^n
\left(\delta_{\boldsymbol{\theta}_{i}(t)} - \mu_t\right)
\end{equation}
The scaling factor $n^{1/2}$ is set by the fluctuations in the initial
conditions: if we pick a test function $\chi:D\to\mathbb{R}$ the
CLT tells us that under $\PP_{\text{in}}$
\begin{equation}
\label{eq:112init}
\begin{aligned}
\int_{D} \chi(\boldsymbol{\theta}) \omega^{(n)}_0(d\boldsymbol{\theta}) =
n^{-1/2} \sum_{i=1}^n \tilde \chi(\boldsymbol{\theta}_{i}(0)) \to
N(0,C_\chi) \quad \text{in law as\ \ $n\to\infty$}
\end{aligned}
\end{equation}
where
$\tilde \chi(\boldsymbol{\theta}) = \chi(\boldsymbol{\theta}) -\int_{D} \chi(\boldsymbol{\theta})
\mu_{\text{in}}(d\boldsymbol{\theta})$ and $N(0,C_\chi) $ denotes the Gaussian
random variable with mean zero and variance
\begin{equation}
\label{eq:135}
C_\chi = \int_{D} \left |\tilde \chi(\boldsymbol{\theta})\right|^2
\mu_{\text{in}}(d\boldsymbol{\theta}).
\end{equation}
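The scaling in~\eqref{eq:112init} can be verified directly by sampling. In the sketch below (our toy choices, not the paper's), $\mu_{\text{in}}$ draws $c\sim N(0,1)$ and $\boldsymbol{z}\sim\mathrm{Uniform}(-1,1)$, and the test function is $\chi(\boldsymbol{\theta})=c\sin \boldsymbol{z}$, for which $\int\chi\, d\mu_{\text{in}}=0$ so that $\tilde\chi=\chi$; the empirical variance of $n^{-1/2}\sum_i\tilde\chi(\boldsymbol{\theta}_i(0))$ over many independent draws is then compared with $C_\chi$ from~\eqref{eq:135}.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative mu_in (our choice): c ~ N(0,1), z ~ Uniform(-1,1).
# Test function chi(theta) = c sin(z); its mean under mu_in is 0,
# so tilde chi = chi.
n, trials = 500, 4000
c = rng.normal(size=(trials, n))
z = rng.uniform(-1.0, 1.0, size=(trials, n))

# n^{-1/2} sum_i tilde chi(theta_i(0)), one value per trial
s = (c * np.sin(z)).sum(axis=1) / np.sqrt(n)

# C_chi = E_in |tilde chi|^2 = E[c^2] E[sin^2 z] = (1 - sin(2)/2) / 2
C_chi = 0.5 * (1.0 - np.sin(2.0) / 2.0)

var_s = s.var()
```

Here $\mathbb{E}[\sin^2 z] = \tfrac12\int_{-1}^1 \sin^2 z\, dz = \tfrac12(1-\sin(2)/2)$, and the sample variance of the rescaled sums matches $C_\chi$ for any $n$, as the CLT predicts.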
To see what happens at later times, we derive an equation
for~$\omega^{(n)}_t$ by subtracting~\eqref{eq:39}
from~\eqref{eq:38} and using~\eqref{eq:74}
\begin{equation}
\label{eq:38om}
\partial_t \omega^{(n)}_t
= \nabla \cdot\left( \omega^{(n)}_t \nabla
V(\boldsymbol{\theta},[\mu_t]) + \left(\mu_t + n^{-1/2} \omega_t^{(n)} \right)\nabla F(\boldsymbol{\theta},[\omega^{(n)}_t])\right)
\end{equation}
where we defined
\begin{equation}
\label{eq:46}
F(\boldsymbol{\theta},[\mu]) = \int_{D} K(\boldsymbol{\theta},\boldsymbol{\theta}') \mu(d\boldsymbol{\theta}')
\end{equation}
If we take the limit as $n\to\infty$, the term involving
$n^{-1/2} \omega_t^{(n)}$ on the right-hand side
of~\eqref{eq:38om} disappears (we quantify its rate of
convergence to zero in more detail in Sec.~\ref{sec:first}) and we
formally deduce that
\begin{proposition}
Let $\omega^{(n)}_t$ be given by~\eqref{eq:106} with
$\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ solution of~\eqref{eq:34} with initial
conditions drawn from $\PP_{\text{in}}$ and $\mu_t$ the solution
to~\eqref{eq:39sb}. Then
\begin{equation}
\label{eq:33}
\omega_t^{(n)} \rightharpoonup
\omega _t \quad \text{in law as \ \ $n\to \infty$}
\end{equation}
where $\omega _t$ satisfies
\begin{equation}
\label{eq:38omlim}
\begin{aligned}
\partial_t \omega_t = \nabla \cdot\left( \omega_t \nabla
V(\boldsymbol{\theta},[\mu_t]) + \mu_t \nabla F(\boldsymbol{\theta},[\omega_t])\right)
\end{aligned}
\end{equation}
to be solved in the weak sense with the Gaussian initial conditions
read off from~\eqref{eq:112init}:
\begin{equation}
\label{eq:1}
\int_{D} \chi(\boldsymbol{\theta}) \omega_0(d\boldsymbol{\theta}) =
N(0,C_\chi)
\end{equation}
\end{proposition}
Note that since the mean of $\omega_0$ is zero initially
and~\eqref{eq:38omlim} is linear, this mean remains zero for all
times, and we can focus on the evolution of its covariance:
\begin{equation}
\label{eq:143}
\Sigma_t(d\boldsymbol{\theta},d\boldsymbol{\theta}') = \mathbb{E}_{\text{in}} [ \omega_t(d\boldsymbol{\theta})
\omega_t(d\boldsymbol{\theta}') ]
\end{equation}
From~\eqref{eq:38omlim} it satisfies
\begin{equation}
\label{eq:38tildeG1}
\begin{aligned}
\partial_t \Sigma_t& = \nabla \cdot\left( \Sigma_t \nabla
V(\boldsymbol{\theta},[\mu_t]) + \mu_t(d\boldsymbol{\theta})
\nabla G(\boldsymbol{\theta},d\boldsymbol{\theta}',[\Sigma_t])\right)\\
& + \nabla' \cdot\left( \Sigma_t \nabla
V(\boldsymbol{\theta}',[\mu_t]) + \mu_t(d\boldsymbol{\theta}')
\nabla G(\boldsymbol{\theta}', d\boldsymbol{\theta},[\Sigma_t])\right)
\end{aligned}
\end{equation}
where we defined
\begin{equation}
\label{eq:64}
G(\boldsymbol{\theta},\cdot,[\Sigma]) = \int_{D} K(\boldsymbol{\theta},\boldsymbol{\theta}'') \Sigma(d\boldsymbol{\theta}'',\cdot)
\end{equation}
Equation~\eqref{eq:38tildeG1} should be interpreted in the weak sense
and solved for the initial condition
\begin{equation}
\label{eq:144}
\Sigma_0(d\boldsymbol{\theta},d\boldsymbol{\theta}')=
\mu_{\text{in}}(d\boldsymbol{\theta}) \delta_{\boldsymbol{\theta}}(d\boldsymbol{\theta}')
\end{equation}
\medskip
\paragraph{The dynamics of
$g_t = \lim_{n\to\infty} n^{1/2} ( f^{(n)}_t - f_t)$} We can also
test these equations against the unit, to deduce that, as
$n\to\infty$,
\begin{equation}
\label{eq:73}
\begin{aligned}
g^{(n)}_t & = \int_D
\varphi(\cdot, \boldsymbol{\theta}) \omega^{(n)}_t(d\boldsymbol{\theta}) = n^{1/2} \big(f^{(n)}_t - f_t\big) \\
& = n^{-1/2} \sum_{i=1}^n \left(\varphi(\cdot,\boldsymbol{\theta}_{i}(t))-
f_t\right)
\end{aligned}
\end{equation}
converges in law, $g^{(n)}_t\to g_t$, where $g_t$ is a Gaussian
process satisfying
\begin{equation}
\label{eq:75}
\begin{aligned}
\partial_t g_t &= -\int_\Omega M([\omega_t],\boldsymbol{x},\boldsymbol{x}')
\left(f_t(\boldsymbol{x}')-f(\boldsymbol{x}')\right)\nu(d\boldsymbol{x}') \\
& \quad - \int_\Omega M([\mu_t],\boldsymbol{x},\boldsymbol{x}') g_t(\boldsymbol{x}')\nu(d\boldsymbol{x}')
\end{aligned}
\end{equation}
This equation should be solved with Gaussian initial conditions with
mean zero and covariance
\begin{equation}
\label{eq:49}
\begin{aligned}
C_0(\boldsymbol{x},\boldsymbol{x}') = \mathbb{E}_{\text{in}} [g_0(\boldsymbol{x}) g_0(\boldsymbol{x}') ] &= \int_{D}
\varphi(\boldsymbol{x},\boldsymbol{\theta}) \varphi(\boldsymbol{x}',\boldsymbol{\theta})
\mu_{\text{in}}(d\boldsymbol{\theta})\\
&- \int_{D\times D} \varphi(\boldsymbol{x},\boldsymbol{\theta}) \varphi(\boldsymbol{x}',\boldsymbol{\theta}')
\mu_{\text{in}}(d\boldsymbol{\theta}) \mu_{\text{in}}(d\boldsymbol{\theta}')
\end{aligned}
\end{equation}
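At $t=0$ the covariance~\eqref{eq:49} can likewise be checked by direct sampling. With the toy unit $\varphi(x,\theta)=c\tanh(zx)$ and $\mu_{\text{in}}$ a standard Gaussian in $(c,z)$ (our illustrative assumptions, not the paper's), $f_0(x)=\int\varphi\, d\mu_{\text{in}}=0$, so the variance of $g^{(n)}_0(x)=n^{1/2}f^{(n)}_0(x)$ should match $C_0(x,x)$ independently of $n$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy unit (our choice): phi(x, theta) = c * tanh(z x),
# with mu_in given by c ~ N(0,1), z ~ N(0,1), so f_0(x) = E[phi] = 0.
x = 0.7
n, trials = 200, 20000

c = rng.normal(size=(trials, n))
z = rng.normal(size=(trials, n))
fn0 = (c * np.tanh(z * x)).mean(axis=1)   # f^{(n)}_0(x), one per trial

g0 = np.sqrt(n) * fn0                     # g^{(n)}_0(x) = sqrt(n)(f^{(n)}_0 - f_0)

# C_0(x,x) = E[phi^2] - (E[phi])^2, estimated from one large
# independent sample of mu_in
cc = rng.normal(size=10**6)
zz = rng.normal(size=10**6)
C0 = np.var(cc * np.tanh(zz * x))

var_g0 = g0.var()
```

The agreement between `var_g0` and `C0` does not depend on the value of $n$ used, which is the content of the $n^{1/2}$ scaling in~\eqref{eq:73}.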
Since~\eqref{eq:75} is linear, the mean of $g_t$ remains zero at all
times and we can again focus on the evolution of its covariance:
\begin{equation}
\label{eq:53}
C_t(\boldsymbol{x},\boldsymbol{x}') =\mathbb{E}_{\text{in}} [g_t(\boldsymbol{x}) g_t(\boldsymbol{x}') ]
\end{equation}
We obtain
\begin{equation}
\label{eq:86}
\begin{aligned}
\partial _t C_t & = -\int_\Omega N(\boldsymbol{x},\boldsymbol{x}',\boldsymbol{x}'', [\Sigma_t])
\left(f_t(\boldsymbol{x}'')-f(\boldsymbol{x}'')\right)\nu(d\boldsymbol{x}'')\\
& \quad - \int_\Omega M([\mu_t],\boldsymbol{x},\boldsymbol{x}'')
C_t(\boldsymbol{x}'',\boldsymbol{x}')\nu(d\boldsymbol{x}'') \\
&\quad - \int_\Omega M([\mu_t],\boldsymbol{x}',\boldsymbol{x}'')
C_t(\boldsymbol{x}'',\boldsymbol{x})\nu(d\boldsymbol{x}'')
\end{aligned}
\end{equation}
where $\Sigma_t$ solves~\eqref{eq:38tildeG1} and
\begin{equation}
\label{eq:88}
\begin{aligned}
N(\boldsymbol{x},\boldsymbol{x}',\boldsymbol{x}'', [\Sigma]) & = \int_{D\times D}
\nabla_{\boldsymbol{\theta}} \varphi(\boldsymbol{x},\boldsymbol{\theta}) \cdot \nabla_{\boldsymbol{\theta}}
\varphi(\boldsymbol{x}'',\boldsymbol{\theta}) \varphi(\boldsymbol{x}',\boldsymbol{\theta}') \Sigma(d\boldsymbol{\theta},d\boldsymbol{\theta}')\\
& + \int_{D\times D} \nabla_{\boldsymbol{\theta}}
\varphi(\boldsymbol{x}',\boldsymbol{\theta}) \cdot \nabla_{\boldsymbol{\theta}}
\varphi(\boldsymbol{x}'',\boldsymbol{\theta}) \varphi(\boldsymbol{x},\boldsymbol{\theta}')
\Sigma(d\boldsymbol{\theta},d\boldsymbol{\theta}')
\end{aligned}
\end{equation}
Summarizing, we have established:
\begin{proposition}[CLT]
\label{th:cltg}
Let $g^{(n)}_t$ be given by~\eqref{eq:73} with
$\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ solution of~\eqref{eq:34} with initial
conditions drawn from $\PP_{\text{in}}$ and $\mu_t$ the solution
to~\eqref{eq:39sb}. Then, as $n\to\infty$, $g^{(n)}_t\to g_t$ in
law, where $g_t$ is the zero mean Gaussian process whose covariance
solves to~\eqref{eq:86} for the initial condition~\eqref{eq:49}.
\end{proposition}
\noindent
\subsection{Scaling of the fluctuations at long and very long times}
\label{sec:first}
To analyze the behavior of the fluctuations as $t\to\infty$,
we revisit the results from the last section from a
different perspective. Suppose that, instead of~\eqref{eq:106}
and~\eqref{eq:73}, we would consider
\begin{equation}
\label{eq:79}
\bar \omega^{(n)}_t = n^{-1/2} \sum_{i=1}^n \left(\delta_{\boldsymbol{\Theta}_{i}(t)}-
\mu_t\right)
\end{equation}
and
\begin{equation}
\label{eq:79g}
\bar g^{(n)}_t = n^{-1/2} \sum_{i=1}^n \left(\varphi(\cdot,\boldsymbol{\Theta}_{i}(t))-
\int_D \varphi(\cdot,\boldsymbol{\theta}) \mu_t(d\boldsymbol{\theta})\right)
\end{equation}
where $\boldsymbol{\Theta}_{i}(t)$ are independent copies of the mean-field
characteristic equation~\eqref{eq:30}. Then, direct calculations show
that $\bar \omega^{(n)}_t\rightharpoonup \bar\omega_t$ and
$\bar g^{(n)}_t\to\bar g_t$ in law as $n\to\infty$, where $\bar\omega_t$ and
$\bar g_t$ are Gaussian processes with mean zero and covariance given
explicitly by
\begin{equation}
\label{eq:80}
\bar \Sigma_t(d\boldsymbol{\theta},d\boldsymbol{\theta}') =
\mathbb{E}_{\text{in}} [ \bar\omega_t (d\boldsymbol{\theta}) \bar\omega_t(d\boldsymbol{\theta}')] =
\mu_t(d\boldsymbol{\theta}) \delta_{\boldsymbol{\theta}} (d\boldsymbol{\theta}') -
\mu_t(d\boldsymbol{\theta}) \mu_t(d\boldsymbol{\theta}')
\end{equation}
and
\begin{equation}
\label{eq:80C}
\bar C_t(\boldsymbol{x},\boldsymbol{x}') =
\mathbb{E}_{\text{in}} [ \bar g_t (\boldsymbol{x}) \bar g_t(\boldsymbol{x}')] =
\int_D \varphi(\boldsymbol{x},\boldsymbol{\theta})\varphi(\boldsymbol{x}',\boldsymbol{\theta}) \mu_t(d\boldsymbol{\theta}) -
f_t(\boldsymbol{x}) f_t(\boldsymbol{x}')
\end{equation}
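The covariance formula~\eqref{eq:80C} for iid particles lends itself to a quick numerical sanity check: for $\boldsymbol{\Theta}_i$ drawn iid from $\mu_t$, we should have $n\,\mathrm{Var}[f^{(n)}_t(\boldsymbol{x})] = \int_D \varphi^2(\boldsymbol{x},\boldsymbol{\theta})\mu_t(d\boldsymbol{\theta}) - f_t^2(\boldsymbol{x})$. The sketch below uses a hypothetical scalar feature $\varphi(x,\theta)=\cos(\theta x)$ and takes $\mu_t$ to be a standard Gaussian; both choices are purely illustrative, not taken from the model above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the iid covariance formula: for theta_i drawn iid from mu_t,
# n * Var[f_n(x)] should equal  int phi^2 dmu - (int phi dmu)^2.
# phi(x, theta) = cos(theta * x) and mu_t = N(0, 1) are illustrative choices.
def phi(x, theta):
    return np.cos(theta * x)

x, n, trials = 0.7, 200, 20000

thetas = rng.standard_normal((trials, n))   # 20000 resamplings of n particles
empirical = n * phi(x, thetas).mean(axis=1).var()

samples = rng.standard_normal(10**6)        # Monte-Carlo estimate of the
predicted = phi(x, samples).var()           # one-particle variance
```

Here `empirical` and `predicted` agree up to sampling error, illustrating the $O(n^{-1/2})$ scale of the fluctuations of $f^{(n)}_t$ around $f_t$ for iid particles.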
We can also easily write down evolution equations for $\bar\omega_t$
and $\bar g_t$: they read
\begin{equation}
\label{eq:38omlimmf}
\partial_t \bar\omega_t = \nabla \cdot\left( \bar\omega_t \nabla
V(\boldsymbol{\theta},[\mu_t]) \right)
\end{equation}
and
\begin{equation}
\label{eq:75mf}
\partial_t \bar g_t = -\int_\Omega M(\boldsymbol{x},\boldsymbol{x}',[\bar \omega_t])
\left(f_t(\boldsymbol{x}')-f(\boldsymbol{x}')\right)\nu(d\boldsymbol{x}')
\end{equation}
Let us focus on this last equation: it is similar to \eqref{eq:75}, but without the last term,
$-\int_\Omega M(\boldsymbol{x},\boldsymbol{x}',[\mu_t]) g_t(\boldsymbol{x}') \nu(d\boldsymbol{x}')$.
Since the kernel $M$ is positive semi-definite, we know that
the solutions to \eqref{eq:75} are controlled by those of
\eqref{eq:75mf}. In particular,
\begin{equation}
\label{eq:91}
\mathbb{E}_{\text{in}} \int_{\Omega} |g_t(\boldsymbol{x})|^2 \nu(d\boldsymbol{x}) = \int_{\Omega}
C_t(\boldsymbol{x},\boldsymbol{x}) \nu(d\boldsymbol{x}) \le \int_{\Omega}
\bar C_t(\boldsymbol{x},\boldsymbol{x}) \nu(d\boldsymbol{x})
\end{equation}
If we assume that $\mu_t\to\mu^*\in \mathcal{M}_+(D)$ as
$t\to\infty$, from~\eqref{eq:80C} we have
\begin{equation}
\label{eq:93}
\lim_{t\to\infty} \int_{\Omega}
\bar C_t(\boldsymbol{x},\boldsymbol{x}) \nu(d\boldsymbol{x}) = \int_{D} K(\boldsymbol{\theta},\boldsymbol{\theta})
\mu^*(d\boldsymbol{\theta}) - \int_\Omega |f(\boldsymbol{x})|^2 \nu(d\boldsymbol{x})
\end{equation}
and therefore
\begin{equation}
\label{eq:91lim}
\lim_{t\to\infty} \int_{\Omega} C_t(\boldsymbol{x},\boldsymbol{x})
\nu(d\boldsymbol{x}) \le \int_{D} K(\boldsymbol{\theta},\boldsymbol{\theta})
\mu^*(d\boldsymbol{\theta}) - \int_\Omega |f(\boldsymbol{x})|^2 \nu(d\boldsymbol{x}).
\end{equation}
Because
\begin{equation}
\label{eq:100}
\int_{\Omega} C_t(\boldsymbol{x},\boldsymbol{x}) \nu(d\boldsymbol{x})= \lim_{n\to\infty}
n \mathbb{E}_{\text{in}}\int_{\Omega} |f^{(n)}_t(\boldsymbol{x})-f_t(\boldsymbol{x})|^2\nu(d\boldsymbol{x})
\end{equation}
the previous result gives a Monte-Carlo type error bound on the loss. Note that this bound is only nontrivial if
\begin{equation}
\label{eq:condonbound}
\begin{aligned}
\int_{D} K(\boldsymbol{\theta},\boldsymbol{\theta}) \mu^*(d\boldsymbol{\theta})
& = \int_{\mathbb{R} \times \hat D } c^2 \hat K(\boldsymbol{z},\boldsymbol{z}) \mu^*(dc,d\boldsymbol{z}) \\
& \le \|\hat K\|_\infty \int_{\mathbb{R} \times \hat D } c^2 \mu^*(dc,d\boldsymbol{z}) < \infty
\end{aligned}
\end{equation}
During training, we have $\int_{\mathbb{R} \times \hat D } c^2 \mu_t(dc,d\boldsymbol{z})<\infty$ for all $t<\infty$; to guarantee that this moment does not blow up as $t\to\infty$, or more generally to control its value in that limit, we may need to add a regularizing term to the loss function. If~\eqref{eq:condonbound} holds,
there is a situation in which we can deduce an even better bound: if
$\supp \hat \mu^* = \hat D$, then $M(\boldsymbol{x},\boldsymbol{x},[\mu_t])$ is positive
definite for all $t\ge0$ and in the limit $t\to\infty$, indicating
that the last term in~\eqref{eq:75} is always dissipative. In this
case the argument above shows that
\begin{equation}
\label{eq:103}
\lim_{t\to\infty} \int_{\Omega} C_t(\boldsymbol{x},\boldsymbol{x})
\nu(d\boldsymbol{x}) = 0.
\end{equation}
Summarizing:
\begin{proposition}[Fluctuations at long times]
\label{th:cltglt}
Let $f^{(n)}_t$ be given by~\eqref{eq:68} with $\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ solution
of~\eqref{eq:34} with initial conditions drawn from $\PP_{\text{in}}$
and $f_t$ solution to~\eqref{eq:48}. Then, under the conditions
of Proposition~\ref{thm:limrhot} and assuming that~\eqref{eq:condonbound} holds, we have
\begin{equation}
\label{eq:105}
\begin{aligned}
& \lim_{t\to\infty} \lim_{n\to\infty} n\, \mathbb{E}_{\text{in}}
\int_{\Omega} |f^{(n)}_t(\boldsymbol{x})-f_t(\boldsymbol{x})|^2\nu(d\boldsymbol{x})\\
& \quad \le \int_{D} K(\boldsymbol{\theta},\boldsymbol{\theta}) \mu^*(d\boldsymbol{\theta}) -
\int_\Omega |f(\boldsymbol{x})|^2 \nu(d\boldsymbol{x})
\end{aligned}
\end{equation}
%
In addition, if $\supp\hat \mu^* = \hat D$, we have
\begin{equation}
\label{eq:105b}
\begin{aligned}
& \lim_{t\to\infty} \lim_{n\to\infty} n \, \mathbb{E}_{\text{in}}
\int_{\Omega} |f^{(n)}_t(\boldsymbol{x})-f_t(\boldsymbol{x})|^2\nu(d\boldsymbol{x})=0.
\end{aligned}
\end{equation}
\end{proposition}
\bigskip
In situations where $\supp\hat \mu^* = \hat D$ and~\eqref{eq:105b} holds, we
see that the fluctuations, initially detectable on the scale $n^{-1/2}$, become higher order as
time increases.
To understand the scale at which the fluctuations eventually settle, consider
\begin{equation}
\label{eq:106lt}
\tilde \omega^{(n)}_t (d\boldsymbol{\theta}) = n^{\xi(t)-1} \sum_{i=1}^n
\left(\delta_{\boldsymbol{\theta}_{i}(t)}(d\boldsymbol{\theta}) - \mu_t(d\boldsymbol{\theta})\right)
\end{equation}
where $\xi(t)$ is some time-dependent exponent to be specified. By
proceeding as we did to derive~\eqref{eq:38om}, we have that
$\tilde \omega^{(n)}_t$ satisfies
\begin{equation}
\label{eq:38tildelt}
\begin{aligned}
\partial_t \tilde \omega^{(n)}_t & = \nabla \cdot\left(
\tilde\omega^{(n)}_t \nabla V(\boldsymbol{\theta},[\mu_t]) + \mu_t \nabla
F(\boldsymbol{\theta},[\tilde\omega^{(n)}_t])\right)\\
& + n^{-\xi(t)} \nabla \cdot\left(\tilde\omega_t^{(n)} \nabla
F(\boldsymbol{\theta},[\tilde\omega^{(n)}_t])\right) + \dot \xi(t) \log n \,
\tilde \omega^{(n)}_t.
\end{aligned}
\end{equation}
In order to take the limit as $n\to\infty$ of this equation, we need
to consider carefully the behavior of the factors
in~\eqref{eq:38tildelt} that contain $n$ explicitly, that is,
$\tilde\omega_t^{(n)} \nabla F(\boldsymbol{\theta},[\tilde\omega^{(n)}_t])$ and
$\dot \xi(t) \log n \, \tilde\omega^{(n)}_t$. Regarding the former,
for any $p\in \NN$ and $\xi\in \mathbb{R}$,
\begin{equation}
\label{eq:87}
\begin{aligned}
& \mathbb{E}_{\text{in}} \left(n^{-\xi} \int_{D\times D}
\chi(\boldsymbol{\theta})\chi(\boldsymbol{\theta}') \tilde\omega^{(n)}_0(d\boldsymbol{\theta})
\tilde\omega^{(n)}_0(d\boldsymbol{\theta}')\right)^p = O\left(n^{(\xi-1)p}\right),
\end{aligned}
\end{equation}
which can be verified by a direct calculation.
For example if $p=1$, this expectation is $n^{(\xi-1)} C_\chi$ where
$C_\chi$ is given in~\eqref{eq:135}.
Equation~\eqref{eq:87} implies
that
$n^{-\xi} \tilde\omega^{(n)}_0(d\boldsymbol{\theta})
\tilde\omega^{(n)}_0(d\boldsymbol{\theta}')\rightharpoonup 0$ weakly in $L^{2p}$ at
$t=0$ for any $\xi<1$. To see whether we can bring the fluctuations to
that scale, notice that if we set
\begin{equation}
\label{eq:3}
\dot \xi(t) \log n = o(1)
\end{equation}
the last term at the right hand side of~\eqref{eq:38tildelt} is also
higher order---\eqref{eq:3} means that we can vary $\xi(t)$, but only
slowly. Condition~\eqref{eq:3} can be achieved by choosing, e.g.,
\begin{equation}
\label{eq:admixi}
\xi(t) = \bar \xi(t/a_n)
\end{equation}
with $\bar \xi(0) =
\tfrac12$, $\bar\xi'(u) >0$, $\lim_{u\to\infty} \bar\xi(u)
< 1$, and $a_n$ growing with
$n$ and such that $\lim_{n\to\infty} a_n/\log n =\infty$.
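The slow-variation requirement~\eqref{eq:3} under a schedule of the form~\eqref{eq:admixi} can be sanity-checked numerically. In the sketch below, the profile $\bar\xi(u) = \tfrac12 + 0.4\,u/(1+u)$ (so that $\bar\xi(0)=\tfrac12$, $\bar\xi'>0$, $\lim_{u\to\infty}\bar\xi(u)=0.9<1$) and the choice $a_n = (\log n)^2$ are hypothetical examples satisfying the stated requirements.

```python
import numpy as np

# Hypothetical admissible schedule: xi_bar(u) = 1/2 + 0.4 u/(1+u) increases
# from 1/2 to 0.9 < 1, and a_n = (log n)^2 so that a_n / log n -> infinity.
def xi_dot_times_log_n(t, n):
    a_n = np.log(n) ** 2
    u = t / a_n
    dxi_bar = 0.4 / (1.0 + u) ** 2       # derivative of xi_bar at u
    return (dxi_bar / a_n) * np.log(n)   # xi'(t) * log n, maximal at t = 0

# The perturbation xi'(t) log n = O(1/log n) shrinks as n grows:
vals = [xi_dot_times_log_n(0.0, n) for n in (10**3, 10**6, 10**9)]
```

With this schedule $\dot\xi(t)\log n \le 0.4/\log n$, so the last term in~\eqref{eq:38tildelt} is indeed $o(1)$ uniformly in $t$.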
With this choice, the last two terms at the right hand side
of~\eqref{eq:38tildelt} are a small perturbation that vanishes as
$n\to\infty$. Therefore, if we test $\tilde \omega^{(n)}_t$ against
$\varphi(\boldsymbol{x},\cdot)$ and define
\begin{equation}
\label{eq:73lt}
\tilde g^{(n)}_t = n^{\xi(t)} \big(f^{(n)}_t - f_t\big) = \int_D \varphi(\cdot,
\boldsymbol{\theta}) \tilde \omega^{(n)}_t(d\boldsymbol{\theta})
\end{equation}
we know that, if~$\xi(t)$ is as in~\eqref{eq:admixi} and $\supp\hat\mu^* = \hat D$, this
field will be controlled and go to zero eventually. Summarizing, we
have established:
\begin{proposition}[Fluctuations at very long times]
\label{th:cltgvlt} Assume that the conditions of
Proposition~\ref{th:cltglt} hold and $\supp\hat\mu^* = \hat D$. Then
\begin{equation}
\label{eq:86clt}
\forall \xi < 1 \quad : \quad \lim_{n\to\infty} n^{2\xi} \mathbb{E}_{\text{in}}
\int_{\Omega} |f^{(n)}_{a_n}(\boldsymbol{x})-f(\boldsymbol{x})|^2\nu(d\boldsymbol{x})=0
\end{equation}
%
if $a_n$ grows with $n$ and is such that
$\lim_{n\to\infty} a_n/\log n =\infty$.
\end{proposition}
\noindent This proposition can be restated as~\eqref{eq:55zT}. It shows
a remarkable self-healing property of the dynamics: the fluctuations
at scale $O(n^{-1/2})$ of $f^{(n)}_t$ around $f_t$ that were present
initially decrease in amplitude as time progresses, and become
$O(n^{-1})$ or smaller as $t\to\infty$.
\section{Training by online stochastic gradient descent}
\label{sec:stochgrad}
In most applications, it is not possible to evaluate the expectation over the data
in~\eqref{eq:22a} defining $\hat F(\boldsymbol{z})$ and $\hat K(\boldsymbol{z},\boldsymbol{z}')$. This is
especially true for $\hat F(\boldsymbol{z})$, since we typically have limited access
to $f(\boldsymbol{x})$: often, we can only evaluate it pointwise or only know its value on a discrete set of
points. In these cases, we typically need to
approximate the expectation in~\eqref{eq:22a} by a sum over a finite subset
of $\boldsymbol{x}$'s obtained by sampling from the measure $\nu$.
If we were to fix this training data set, $\{\boldsymbol{x}_p\}_{p=1}^P$, and denote by $\nu_P = P^{-1}\sum_{p=1}^P \delta_{\boldsymbol{x}_p}$ the corresponding empirical measure, then all the results in Sec.~\ref{sec:weighted} apply at empirical level if we replace everywhere $\nu$ by $\nu_P$. This, however, is not the question we are typically interested in, which is rather:
\begin{quote}
{\it How does the test error (that is, the error obtained using the exact loss defined with the original $\nu$) scale if we train the network on the empirical loss associated to $\nu_P$?}
\end{quote}
Here we will address this question in the specific setting of ``online'' learning algorithms, in which we can draw a training data set of batch size $P$ at every step of the learning.
This effectively assumes that we have access to infinite data, but cannot use it all at the same time, and the finite size of the batch
introduces noise into the learning algorithm.
The algorithm in which the gradient is estimated from a subset of training data at each step is known as stochastic gradient descent.
It reads
\begin{equation}
\label{eq:34discrete2}
\hat \boldsymbol{\theta}_{i}(t+\Delta t) = \hat \boldsymbol{\theta}_{i}(t) + \nabla
F_{P}(t,\hat\boldsymbol{\theta}_{i}(t) ) \Delta t - \frac1n\sum_{j=1}^n
\nabla K_P(t,\hat\boldsymbol{\theta}_{i}(t),\hat\boldsymbol{\theta}_{j}(t)) \Delta t
\end{equation}
where $i=1,\ldots,n$, $\Delta t>0$ is some time-step, and we defined
\begin{equation}
\begin{aligned}
\label{eq:47}
F_{P} (t,\boldsymbol{\theta}) &= \frac1P \sum_{p=1}^P
f(\boldsymbol{x}_{p}(t))\varphi(\boldsymbol{x}_{p}(t),\boldsymbol{\theta}), \\
K_P (t,\boldsymbol{\theta},\boldsymbol{\theta}') &= \frac1P \sum_{p=1}^P
\varphi(\boldsymbol{x}_{p}(t),\boldsymbol{\theta}) \varphi(\boldsymbol{x}_{p}(t),\boldsymbol{\theta}')
\end{aligned}
\end{equation}
in which $\{\boldsymbol{x}_{p}(t)\}_{p=1}^P$ are $P$ iid variables which are
redrawn from $\nu$ independently at every time step~$t$. Next we analyze how the results from Sec.~\ref{sec:weighted} must be modified when we use~\eqref{eq:34discrete2} rather than~\eqref{eq:34} to perform the training.
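As an illustration, the online update~\eqref{eq:34discrete2} with the empirical quantities~\eqref{eq:47} can be sketched in a few lines. The feature $\varphi(x,\theta)=\cos(\theta x)$, the target $f(x)=\cos x$, the data measure $\nu = N(0,1)$, and all parameter values below are toy choices made for the sake of the example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy online SGD: theta_i <- theta_i - dt * (1/P) sum_p (f_n - f)(x_p) * dphi/dtheta_i,
# with a fresh batch {x_p} ~ nu drawn at every step (here nu = N(0,1)).
def phi(x, theta):
    return np.cos(np.outer(x, theta))                  # shape (P, n)

def dphi(x, theta):
    return -x[:, None] * np.sin(np.outer(x, theta))    # d phi / d theta

f = np.cos                                             # toy target function
n, P, dt, steps = 50, 64, 0.05, 400
theta = 2.0 * rng.standard_normal(n)

def loss(xs, theta):
    return np.mean((phi(xs, theta).mean(axis=1) - f(xs)) ** 2)

x_test = rng.standard_normal(10**4)                    # held-out test points
loss_before = loss(x_test, theta)

for _ in range(steps):
    xb = rng.standard_normal(P)                        # fresh batch from nu
    residual = phi(xb, theta).mean(axis=1) - f(xb)     # f_n(x_p) - f(x_p)
    theta -= dt * (residual[:, None] * dphi(xb, theta)).mean(axis=0)

loss_after = loss(x_test, theta)
```

Note that the test loss is evaluated with fresh samples from $\nu$, in keeping with the question posed above about the test error rather than the empirical one.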
\subsection{Limiting stochastic differential equation}
\label{lim:sde}
To analyze the properties of~\eqref{eq:34discrete2}, we start by noticing that the term
\begin{equation}
\boldsymbol{R}_i(\vec\boldsymbol{\theta}) = \nabla F_{P}(t,\boldsymbol{\theta}_{i} ) - \frac1n\sum_{j=1}^n
\nabla K_P(t,\boldsymbol{\theta}_{i},\boldsymbol{\theta}_{j}), \qquad \vec\boldsymbol{\theta}=(\boldsymbol{\theta}_1, \ldots,\boldsymbol{\theta}_n)
\end{equation}
is an unbiased estimator of the right hand side of the GD equation~\eqref{eq:34}. Indeed, conditional on $\{\boldsymbol{\theta}_i\}_{i=1}^n$ fixed, we have
\begin{equation}
\mathbb{E}_\nu \boldsymbol{R}_i(\vec\boldsymbol{\theta}) = \nabla F(\boldsymbol{\theta}_{i} ) - \frac1n\sum_{j=1}^n
\nabla K(\boldsymbol{\theta}_{i},\boldsymbol{\theta}_{j})
\end{equation}
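This unbiasedness is easy to confirm numerically: averaging the minibatch drift over many independent batches recovers the full GD drift. In the sketch below, the scalar feature $\varphi(x,\theta)=\cos(\theta x)$, the target $f(x)=\cos x$, and $\nu = N(0,1)$ are hypothetical stand-ins chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# R_i on a sample {x_p}: (1/P) sum_p (f - f_n)(x_p) * dphi(x_p, theta_i)/dtheta,
# for the toy choices phi(x, theta) = cos(theta x), f(x) = cos x, nu = N(0,1).
def drift(theta, xs):
    ph = np.cos(np.outer(xs, theta))                    # (P, n)
    residual = np.cos(xs) - ph.mean(axis=1)             # f - f_n on the sample
    dph = -xs[:, None] * np.sin(np.outer(xs, theta))    # d phi / d theta
    return (residual[:, None] * dph).mean(axis=0)       # one entry per theta_i

theta = rng.standard_normal(5)

exact = drift(theta, rng.standard_normal(10**6))        # near-exact expectation over nu
minibatch = np.mean(
    [drift(theta, rng.standard_normal(16)) for _ in range(20000)], axis=0
)
```

Up to Monte-Carlo error, `minibatch` matches `exact` componentwise, as the conditional expectation above predicts.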
This means that, if we split the right hand side of~\eqref{eq:34discrete2} into its expectation plus a zero-mean fluctuation, the expression resembles
an Euler-Maruyama scheme for a stochastic differential equation (SDE), except that the scaling of the
noise term involves $\Delta t$ rather than
$\sqrt{\Delta t}$. To write this SDE explicitly, we compute the covariance of $\boldsymbol{R}_i(\vec\boldsymbol{\theta})$ conditional on $\{\boldsymbol{\theta}_i\}_{i=1}^n$ fixed,
\begin{equation}
\cov_\nu[ \boldsymbol{R}_i(\vec\boldsymbol{\theta}),\boldsymbol{R}_j(\vec\boldsymbol{\theta})] = A([f-f^{(n)}],\boldsymbol{\theta}_i,\boldsymbol{\theta}_j)
\end{equation}
where $f^{(n)}= n^{-1} \sum_{i=1}^n \varphi(\cdot,\boldsymbol{\theta}_i)$ and we defined
\begin{equation}
\begin{aligned}
\label{eq:161}
A([f],\boldsymbol{\theta},\boldsymbol{\theta}') & = \mathbb{E}_\nu [|f|^2
\nabla_{\boldsymbol{\theta}}\varphi(\cdot,\boldsymbol{\theta}) \otimes \nabla_{\boldsymbol{\theta}'}\varphi(\cdot,\boldsymbol{\theta}')]\\
& - \mathbb{E}_\nu [f \,\nabla_{\boldsymbol{\theta}}\varphi(\cdot,\boldsymbol{\theta})] \otimes \mathbb{E}_\nu
[f \, \nabla_{\boldsymbol{\theta}'}\varphi(\cdot,\boldsymbol{\theta}')]
\end{aligned}
\end{equation}
The SDE capturing the behavior of the solution to~\eqref{eq:34discrete2} is
\begin{equation}
\label{eq:34trainz}
d\boldsymbol{\theta}_{i} = \nabla F(\boldsymbol{\theta}_{i}) dt -
\frac1n\sum_{j=1}^n \nabla K(\boldsymbol{\theta}_{i},\boldsymbol{\theta}_{j})
dt +
\sqrt{\sigma}\, d\boldsymbol{B}_i,
\end{equation}
where $\sigma = \Delta t/P$ and $\{d\boldsymbol{B}_i\}_{i=1}^n$ is a white-noise process with
quadratic variation
\begin{equation}
\label{eq:59}
\< d\boldsymbol{B}_i,d\boldsymbol{B}_j\> =
A([f-f^{(n)}],\boldsymbol{\theta}_{i},\boldsymbol{\theta}_{j}) dt.
\end{equation}
More precisely~\cite{Li:2015,Hu:2017uj},
\begin{lemma}
\label{th:lem1}
Given any test function $\chi:D\to\mathbb{R}$ and any $T>0$, there is a constant $C>0$ such that
\begin{equation}
\label{eq:92}
\sup_{0\le k\Delta t\le T} \Big|\frac1n \sum_{i=1}^n \left(\mathbb{E} \chi(\hat\boldsymbol{\theta}_i(k\Delta t)) - \mathbb{E} \chi(\boldsymbol{\theta}_i(k\Delta t))
\right) \Big| \le C \Delta t.
\end{equation}
where $\hat \boldsymbol{\theta}_i(t)$ and $\boldsymbol{\theta}_i(t)$ denote the solutions to~\eqref{eq:34discrete2} and \eqref{eq:34trainz}, respectively.
\end{lemma}
\noindent
This lemma is a direct consequence of the fact
that~\eqref{eq:34discrete2} can be viewed as the Euler-Maruyama
discretization scheme for~\eqref{eq:34trainz}, and this scheme has weak
order of accuracy $1$. Note that if we let $\Delta t \to0$,
\eqref{eq:34trainz} reduces to the ODEs in~\eqref{eq:34} since
$\sigma = \Delta t/P\to0$ in that limit. We should stress, however,
that this limit is not reached in practice since the scheme
\eqref{eq:34discrete2} is used at small but finite $\Delta t$. We analyze next what happens when we
adjust the size of $\sigma$ by
changing $\Delta t$ and/or the batch size~$P$.
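The origin of the factor $\sigma = \Delta t / P$ is the familiar $1/P$ decay of the variance of a batch average: over one step of size $\Delta t$, the noise increment has variance $\Delta t^2/P = \sigma \Delta t$. This scaling can be checked directly; the per-sample quantity below is an arbitrary toy choice standing in for a gradient entry.

```python
import numpy as np

rng = np.random.default_rng(3)

# The variance of a batch average of P iid per-sample terms decays like 1/P;
# over one step this yields a noise increment of variance dt^2/P = sigma * dt.
def batch_mean_var(P, trials=20000):
    g = np.cos(rng.standard_normal((trials, P)))  # toy per-sample gradient entries
    return g.mean(axis=1).var()

v_small, v_large = batch_mean_var(16), batch_mean_var(256)
ratio = v_small / v_large                         # should be close to 256/16 = 16
```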
\subsection{Dean's equation for particles with correlated noise}
\label{sec:deancorr} Lemma~\ref{th:lem1} indicates that we can
analyze the properties of~\eqref{eq:34trainz} instead of those
of~\eqref{eq:34discrete2}.
To this end, we derive an equation for the empirical distribution $\mu^{(n)}_t$ in \eqref{eq:37} when $\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ satisfy the SDE~\eqref{eq:34trainz}; this calculation is operationally similar to the derivation of~\eqref{eq:38} but takes into account the extra drift term and the noise term in~\eqref{eq:34trainz}. By applying It\^{o}'s formula to~\eqref{eq:85} we deduce
\begin{equation}
\label{eq:42sgd}
\begin{aligned}
d \int_D \chi(\boldsymbol{\theta}) \mu^{(n)}_t(d\boldsymbol{\theta}) & = \frac1n \sum_{i=1}^n
\nabla \chi(\boldsymbol{\theta}_i(t)) \cdot d\boldsymbol{\theta}_i(t) \\
&+ \frac{\sigma}{2n} \sum_{i=1}^n \nabla
\nabla\chi(\boldsymbol{\theta}_i(t)) : A([f-f_t^{(n)}],\boldsymbol{\theta}_{i}(t), \boldsymbol{\theta}_{i}(t) )dt
\end{aligned}
\end{equation}
where $f_t^{(n)} = n^{-1} \sum_{i=1}^n \varphi(\cdot, \boldsymbol{\theta}_i(t)) = \int_D \varphi(\cdot, \boldsymbol{\theta})\mu_t^{(n)}(d\boldsymbol{\theta})$. Using~\eqref{eq:34trainz} and the definition of $\mu^{(n)}_t$, this relation can be written as
\begin{equation}
\label{eq:42sgd2}
\begin{aligned}
d \int_D \chi(\boldsymbol{\theta}) \mu^{(n)}_t(d\boldsymbol{\theta}) & = \int_D
\nabla \chi(\boldsymbol{\theta}) \cdot \nabla V(\boldsymbol{\theta},[\mu^{(n)}_t])\mu^{(n)}_t(d\boldsymbol{\theta})dt\\
& + \frac{\sigma}{2}\int_D \nabla \nabla\chi(\boldsymbol{\theta}) : A([f-f_t^{(n)}],\boldsymbol{\theta}, \boldsymbol{\theta} ) \mu^{(n)}_t(d\boldsymbol{\theta}) dt\\
&+ \frac{\sqrt{\sigma}}n \sum_{i=1}^n
\nabla \chi(\boldsymbol{\theta}_i(t)) \cdot d\boldsymbol{B}_i(t)
\end{aligned}
\end{equation}
The drift terms in this equation are expressed in terms of $\mu^{(n)}_t$; for the noise term, notice that its quadratic variation is
\begin{equation}
\begin{aligned}
&\Big\< \frac{\sqrt{\sigma}}n \sum_{i=1}^n
\nabla \chi(\boldsymbol{\theta}_i(t)) \cdot d\boldsymbol{B}_i(t), \frac{\sqrt{\sigma}}n \sum_{i=1}^n
\nabla \chi(\boldsymbol{\theta}_i(t)) \cdot d\boldsymbol{B}_i(t)\Big\>\\
& = \sigma
\int_{D\times D}
\nabla\chi(\boldsymbol{\theta}) \nabla\chi(\boldsymbol{\theta}') : A([f-f_t^{(n)}],\boldsymbol{\theta}, \boldsymbol{\theta}' )\mu^{(n)}_t(d\boldsymbol{\theta})
\mu^{(n)}_t(d\boldsymbol{\theta}') dt
\end{aligned}
\end{equation}
This means that, in law, \eqref{eq:42sgd2} is equivalent to
\begin{equation}
\label{eq:38trainz}
\begin{aligned}
d \int_D \chi(\boldsymbol{\theta}) \mu^{(n)}_t(d\boldsymbol{\theta}) & = \int_D
\nabla \chi(\boldsymbol{\theta}) \cdot \nabla V(\boldsymbol{\theta},[\mu^{(n)}_t])\mu^{(n)}_t(d\boldsymbol{\theta})dt\\
& + \frac{\sigma}{2}
\int_D \nabla \nabla\chi(\boldsymbol{\theta}) : A([f-f_t^{(n)}],\boldsymbol{\theta}, \boldsymbol{\theta} ) \mu^{(n)}_t(d\boldsymbol{\theta})dt \\
&+ \sqrt{\sigma} \int_D
\nabla \chi(\boldsymbol{\theta}) \cdot d\boldsymbol{\eta}^{(n)}_t(d\boldsymbol{\theta})
\end{aligned}
\end{equation}
where $d\boldsymbol{\eta}^{(n)}_t(d\boldsymbol{\theta}) $ is a vector-valued random measure, white in time, and with quadratic variation
\begin{equation}
\label{eq:quadeta}
\< d\boldsymbol{\eta}^{(n)}_t(d\boldsymbol{\theta}), d\boldsymbol{\eta}^{(n)}_t(d\boldsymbol{\theta}')\> = A([f-f_t^{(n)}],\boldsymbol{\theta}, \boldsymbol{\theta}' )
\mu^{(n)}_t(d\boldsymbol{\theta})
\mu^{(n)}_t(d\boldsymbol{\theta}') dt
\end{equation}
The first term at the right hand side of~\eqref{eq:38trainz} is
the same as in the weak form of~\eqref{eq:38}, because it comes
from the drift terms in~\eqref{eq:34trainz}, which coincide with
those in~\eqref{eq:34}. However, \eqref{eq:38trainz} also contains
additional terms that were absent
in~\eqref{eq:38}---note that these terms differ from those in the standard Dean's equation, because the noise term in~\eqref{eq:34trainz} is correlated between the particles, instead of being independent.
\subsection{LLN for SGD}
\label{sec:llntrain}
If we want the result established in Proposition~\ref{th:lln} to apply, and the approximation error to vanish as $n\to\infty$, we need to make the terms in~\eqref{eq:38trainz} that are absent from~\eqref{eq:38} higher order. This can be done by scaling $\sigma$
with some inverse power of~$n$. Specifically, we will
set
\begin{equation}
\label{eq:83}
\sigma = a n^{-2\alpha}\quad \text{for some} \quad a>0\quad
\text{and}\quad \alpha>0
\end{equation}
This scaling can be achieved by choosing, e.g., $ P = O(n^{2\alpha})$, which amounts to increasing the batch size with $n$.
The choice~\eqref{eq:83} implies that the last two terms in~\eqref{eq:38trainz}
disappear in the limit as
$n\to\infty$. Therefore, we formally conclude that
$\mu^{(n)}_t\rightharpoonup\mu_t$ as $n\to\infty$, where $\mu_t$ solves the same
deterministic equation~\eqref{eq:39} as before. This implies that
$\lim_{n\to\infty} f^{(n)}_t =f_t= \int_{D} \varphi(\cdot,\boldsymbol{\theta}) \mu_t(d\boldsymbol{\theta})
$ satisfies \eqref{eq:48} and is such that $f_t\to f $ as
$t\to\infty$.
In particular, both the LLN and the global convergence result in Proposition~\ref{th:lln} still hold if the assumptions in this proposition are met and we use the solution
of~\eqref{eq:34trainz} in~\eqref{eq:68}.
In turn, we can also conclude that
this proposition holds up to discretization errors in $\Delta t$
if we use the solution of~\eqref{eq:34discrete2} in~\eqref{eq:68}.
Importantly, the covariance associated with the estimator for the gradient,
defined in~\eqref{eq:161}, satisfies
\begin{equation}
\label{eq:126}
\forall (\boldsymbol{\theta},\boldsymbol{\theta}')\in D\times D \ : \qquad \lim_{t\to\infty} A([f_t-f],\boldsymbol{\theta},\boldsymbol{\theta}') = 0.
\end{equation}
This property will be useful later.
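Property~\eqref{eq:126} follows from the fact that $A([f],\cdot,\cdot)$ in~\eqref{eq:161} is quadratic in its function argument, $A([\epsilon f]) = \epsilon^2 A([f])$, so the covariance of the gradient noise shrinks like the squared residual as training converges. A toy numerical check (the feature, the residual, and all values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# The per-sample gradient is residual(x) * dphi(x, theta)/dtheta, so its
# variance scales as eps^2 when the residual f - f_t is scaled by eps,
# mirroring A([eps f]) = eps^2 A([f]).
def grad_var(eps, theta=0.9, m=10**6):
    x = rng.standard_normal(m)
    residual = eps * np.sin(x)               # toy stand-in for f - f_t
    g = residual * (-x * np.sin(theta * x))  # per-sample gradient samples
    return g.var()

ratio = grad_var(1.0) / grad_var(0.1)        # should be close to (1/0.1)^2 = 100
```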
\subsection{CLT for SGD}
\label{sec:llnclt}
Turning our attention to the fluctuations of $\mu^{(n)}_t$ around $\mu_t$,
notice that there are two sources of them: some are intrinsic to the
discrete nature of the particles apparent in $\mu^{(n)}_t$, and scale
as $O(n^{-1/2})$ for all $t<\infty$ and possibly as $O(n^{-\xi})$ for any
$\xi<1$ as $t\to\infty$, as discussed in Sec.~\ref{sec:first}. Other fluctuations
come from the noise term in~\eqref{eq:38trainz}, and scale as
$O(n^{-\alpha})$ when~\eqref{eq:83} holds. The It\^o drift terms
proportional to $\sigma = a n^{-2\alpha}$ in~\eqref{eq:38trainz}
always make higher order contributions.
We first consider $t<\infty$ and subsequently examine the limit $t\to\infty$ in Sec.~\ref{sec:SGDlvlt}.
In the present case, we first observe that if $\alpha\ge\frac12$, then for all $t<\infty$ the fluctuations due to the noise
in~\eqref{eq:38trainz} are negligible compared to the intrinsic
ones from discreteness, and we are back to the GD situation studied in
Sec.~\ref{sec:weighted}.
In contrast, if $\alpha\in (0,\frac12)$, for all $t<\infty$ the fluctuations due to the noise in~\eqref{eq:38trainz} dominate the intrinsic ones
from discreteness, so let us focus on this case from now on.
To quantify these fluctuations, we can introduce
$ n^{\alpha} (\mu^{(n)}_t-\mu_t)$, write an equation for this scaled discrepancy,
and take the limit as $n\to\infty$.
The derivation proceeds along the same lines as that of~\eqref{eq:38omlim} and leads to the conclusion
that, as $n\to\infty$, $n^{\alpha} (\mu^{(n)}_t-\mu_t) \rightharpoonup \omega^{(\alpha)}_t$ in law, where $\omega^{(\alpha)}_t$
satisfies
\begin{equation}
\label{eq:38trainzlim}
\begin{aligned}
d \int_D \chi(\boldsymbol{\theta}) \omega^{(\alpha)}_t(d\boldsymbol{\theta}) & = \int_D
\nabla \chi(\boldsymbol{\theta}) \cdot \nabla V(\boldsymbol{\theta},[\mu_t])\omega^{(\alpha)}_t(d\boldsymbol{\theta}) dt \\
& + \int_D
\nabla \chi(\boldsymbol{\theta}) \cdot \nabla F(\boldsymbol{\theta},[\omega^{(\alpha)}_t]) \mu_t(d\boldsymbol{\theta}) dt\\
& + \sqrt{a} \int_D
\nabla \chi(\boldsymbol{\theta}) \cdot d\boldsymbol{\eta}_t(d\boldsymbol{\theta})
\end{aligned}
\end{equation}
in which $d\boldsymbol{\eta}_t(d\boldsymbol{\theta}) $ is a vector-valued random measure, white in time, and with quadratic variation (compare~\eqref{eq:quadeta})
\begin{equation}
\label{eq:quadetalim}
\< d\boldsymbol{\eta}_t(d\boldsymbol{\theta}), d\boldsymbol{\eta}_t(d\boldsymbol{\theta}')\> = A([f-f_t],\boldsymbol{\theta}, \boldsymbol{\theta}' )
\mu_t(d\boldsymbol{\theta})
\mu_t(d\boldsymbol{\theta}') dt
\end{equation}
Equation~\eqref{eq:38trainzlim} should be solved with zero initial condition, since the $O(n^{-1/2})$ fluctuations arising from the initial condition are higher order compared to the scaling $O(n^{-\alpha})$ we picked to obtain~\eqref{eq:38trainzlim}. Since~\eqref{eq:38trainzlim} is linear in $\omega^{(\alpha)}_t$ with additive noise, it indicates that $\omega^{(\alpha)}_t$ is a Gaussian process with mean zero and thereby fully characterized by its covariance (we omit the equation for brevity).
This also implies that
\begin{equation}
n^{\alpha} (f^{(n)}_t - f_t) \to g^{(\alpha)}_t \qquad \text{in law as $n\to\infty$}
\end{equation}
where $g^{(\alpha)}_t $ is a Gaussian process whose evolution equation, derived as for~\eqref{eq:75}, reads
\begin{equation}
\label{eq:38train2}
\begin{aligned}
d g^{(\alpha)}_t & = - \int_\Omega M([\omega^{(\alpha)}_t],\boldsymbol{x},\boldsymbol{x}') \left(f_t(\boldsymbol{x}')
-f(\boldsymbol{x}')\right) \nu(d\boldsymbol{x}') dt \\
& \quad-\int_\Omega M([\mu_t],\boldsymbol{x},\boldsymbol{x}')
g^{(\alpha)}_t(\boldsymbol{x}') \nu(d\boldsymbol{x}') dt + \sqrt{a}\, d\!\zeta_t(\boldsymbol{x})
\end{aligned}
\end{equation}
where $M([\mu],\boldsymbol{x},\boldsymbol{x}')$ is given in~\eqref{eq:96}, and the quadratic
variation of $d\zeta_t$ is that of $\int_{D} \nabla_{\boldsymbol{\theta}}\varphi(\cdot,\boldsymbol{\theta}) \cdot d\boldsymbol{\eta}_t(d\boldsymbol{\theta})$.
Explicitly,
\begin{equation}
\label{eq:60}
\begin{aligned}
&\< d\zeta_t(\boldsymbol{x}),d\zeta_t(\boldsymbol{x}')\>\\
&= \int_{\Omega}
M([\mu_t],\boldsymbol{x},\boldsymbol{x}'')M([\mu_t],\boldsymbol{x}',\boldsymbol{x}'')
\left|f_t(\boldsymbol{x}'')-f(\boldsymbol{x}'')\right|^2 d\nu(\boldsymbol{x}'') dt\\
& -\int_{\Omega} M([\mu_t],\boldsymbol{x},\boldsymbol{x}'') \left(f_t(\boldsymbol{x}'')-f(\boldsymbol{x}'')\right) d\nu(\boldsymbol{x}'')\\
& \times \int_{\Omega} M([\mu_t],\boldsymbol{x}',\boldsymbol{x}'') \left(f_t(\boldsymbol{x}'')-f(\boldsymbol{x}'')\right) d\nu(\boldsymbol{x}'')dt.
\end{aligned}
\end{equation}
The SDE \eqref{eq:38train2} should be solved with zero initial condition, $g_0^{(\alpha)}=0$. Since it is linear in $g^{(\alpha)}_t$ with additive noise, it defines a Gaussian process with mean zero and is specified by its covariance
\begin{equation}
C^{(\alpha)}_t(\boldsymbol{x},\boldsymbol{x}') = \mathbb{E} [ g^{(\alpha)}_t(\boldsymbol{x}) g^{(\alpha)}_t(\boldsymbol{x}')]
\end{equation}
where $\mathbb{E}$ denotes expectation over the noise $d\zeta_t$ (that is, over the data in the batches used in SGD). With this calculation, we have established
\begin{proposition}[CLT for SGD]
\label{th:cltsgd1}
Consider
\begin{equation}
\label{eq:flucta}
g^{(\alpha,n)}_t = n^{\alpha-1} \sum_{i=1}^n \left(\varphi(\cdot, \boldsymbol{\theta}_i(t)) - f_t\right) = n^{\alpha} \big(f^{(n)}_t - f_t\big)
\end{equation}
with $\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ solution to the SDE~\eqref{eq:34trainz} with $\sigma = an^{-2\alpha}$, $\alpha \in (0,\frac12)$, and $f_t$ solution to~\eqref{eq:48}. Then, as $n\to\infty$, $g^{(\alpha,n)}_t$ converges in law towards the Gaussian process $g^{(\alpha)}_t$ solution of~\eqref{eq:38train2} for $g^{(\alpha)}_0=0$.
\end{proposition}
\subsection{Fluctuations in SGD at long and very long times}
\label{sec:SGDlvlt}
The noise $d\zeta_t$ in~\eqref{eq:38train2}
has the remarkable property that it
self-quenches as $t\to\infty$: if the conditions of Proposition~\ref{th:lln} are met, then $f_t\to f$ as $t\to\infty$ and therefore, from~\eqref{eq:60}:
\begin{equation}
\forall \boldsymbol{x},\boldsymbol{x}'\in \Omega \ : \quad \lim_{t\to\infty}\< d\zeta_t(\boldsymbol{x}),d\zeta_t(\boldsymbol{x}')\> =0.
\end{equation}
Since the first drift term in~\eqref{eq:38train2} also goes to zero when $f_t\to f$, and the second drift term is a damping term because $M([\mu_t],\boldsymbol{x},\boldsymbol{x}')$ is positive definite for all $t<\infty$, we know that $g^{(\alpha)}_t$ will be controlled as $t\to\infty$, i.e.\ $C^{(\alpha)}_t$ has a limit. In addition, if $\supp \hat \mu^* = \hat D$ where $\hat \mu^* = \int_\mathbb{R} \mu^*(dc,\cdot) = \lim_{t\to\infty} \int_\mathbb{R} \mu_t(dc,\cdot)$, then $M([\mu_t],\boldsymbol{x},\boldsymbol{x}')$ is positive definite for all $t<\infty$ and in the limit as $t\to\infty$, and the solution to~\eqref{eq:38train2} goes to zero. Using the definition of the Gaussian process $g^{(\alpha,n)}_t$ in~\eqref{eq:flucta}, we can summarize this result into:
\begin{proposition}[Fluctuations in SGD at long time]
\label{th:train}
Under the conditions of Proposition~\ref{th:lln}, if $f_t^{(n)}$ is given by~\eqref{eq:68} with
$\{\boldsymbol{\theta}_i(t)\}_{i=1}^n$ solution of~\eqref{eq:34trainz} with $\sigma = an^{-2\alpha}$, $\alpha \in (0,\frac12)$, and initial
condition drawn from $\PP_{\text{in}}$, and $f_t$ solves~\eqref{eq:48}, then
\begin{equation}
\lim_{t\to\infty} \lim_{n\to\infty} n^{2\alpha} \mathbb{E}\int_\Omega |f^{(n)}_t(\boldsymbol{x}) - f_t(\boldsymbol{x})|^2 \nu(d\boldsymbol{x})
= \lim_{t\to\infty} \int_\Omega C^{(\alpha)}_t(\boldsymbol{x},\boldsymbol{x})\, \nu(d\boldsymbol{x}) \quad \text{exists}
In addition, if $\supp \hat \mu^* = \hat D$, then this limit is zero.
\end{proposition}
As noted above, if $\supp \hat \mu^* = \hat D$, then $M([\mu_t],\boldsymbol{x},\boldsymbol{x}')$ is positive definite for all $t<\infty$ and in the limit as $t\to\infty$. In that case, the only
fixed point of~\eqref{eq:38train2} is zero. Since in this case we also know that the fluctuations from the initial conditions disappear on scale $O(n^{-\xi})$ for any $\xi<1$, we can proceed as in Sec.~\ref{sec:first} and adjust $\alpha$ all the way up to $1$ instead of $\frac12$. That is, we can generalize Proposition~\ref{th:cltgvlt} into
\begin{proposition}[Fluctuations in SGD at very long times]
\label{th:train2}
Under the conditions of Proposition~\ref{th:train}, if $\supp \hat \mu^* = \hat D$, then for any $\alpha\in(0,1)$,
\begin{equation}
\label{eq:147}
\lim_{n\to\infty} n^{2\alpha} \mathbb{E}\int_\Omega |f^{(n)}_{a_n}(\boldsymbol{x}) - f(\boldsymbol{x})|^2 \nu(d\boldsymbol{x})
= 0
\end{equation}
if $a_n$ grows with $n$ and is such that $\lim_{n\to\infty} a_n/\log n = \infty$---here $\mathbb{E}$ denotes expectation over both the initial condition, $\mathbb{P}_{\text{in}}$, and the noise in~\eqref{eq:34trainz}.
\end{proposition}
\section{Illustrative example: 3-spin model on the high-dimensional
sphere}
\label{sec:examples}
To test our results, we use a function known for its complex
features in high-dimensions: the spherical 3-spin model,
$f: S^{d-1}(\sqrt{d}) \to \mathbb{R}$, given by
\begin{equation}
\label{eq:psin}
f(\boldsymbol{x}) = \frac1d \sum_{p,q,r=1}^d a_{p,q,r} x_p x_q x_r, \qquad \boldsymbol{x}
\in S^{d-1}(\sqrt{d}) \subset \mathbb{R}^d
\end{equation}
where the coefficients $\{a_{p,q,r}\}_{p,q,r=1}^d$ are independent
Gaussian random variables with mean zero and variance one. The
function~\eqref{eq:psin} is known to have a number of critical points
that grows exponentially with the dimensionality
$d$~\cite{Auffinger:2013kq,Sagun:2014tg,Auffinger:2012gh}. We note
that previous works have sought to draw a parallel between the glassy
3-spin function and generic loss
functions~\cite{Choromanska:2014ui}, but we are not exploring such an
analogy here. Rather, we simply use the function~\eqref{eq:psin} as a
difficult target for approximation by neural networks. That is,
throughout this section, we train networks to learn $f$ with a
particular realization of $a_{p,q,r}$ and study the accuracy of that
representation as a function of the number of particles $n$.
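As a concrete illustration, one fixed realization of the target~\eqref{eq:psin} can be set up in a few lines. The sketch below (Python/NumPy; the seed, and the names `three_spin` and `value`, are our own choices, not from the paper) draws the Gaussian coefficients once and evaluates $f$ at a point of $S^{d-1}(\sqrt{d})$:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5
# One fixed realization of the i.i.d. N(0,1) coefficients a_{p,q,r}.
a = rng.standard_normal((d, d, d))

def three_spin(x, a):
    """f(x) = (1/d) * sum_{p,q,r} a_{p,q,r} x_p x_q x_r  (eq:psin)."""
    return np.einsum('pqr,p,q,r->', a, x, x, x) / x.shape[0]

# A point on S^{d-1}(sqrt(d)): normalize a Gaussian sample to radius sqrt(d).
x = rng.standard_normal(d)
x *= np.sqrt(d) / np.linalg.norm(x)
value = three_spin(x, a)
```

The `einsum` contraction is just the triple sum over $p,q,r$ written in array form.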
\subsection{Learning with Gaussian kernels}
\label{sec:gauss}
We first consider the case when $D = S^{d-1}(\sqrt{d})$ and we use
\begin{equation}
\label{eq:90}
\varphi(\boldsymbol{x},\boldsymbol{z}) = e^{-\tfrac12 \alpha |\boldsymbol{x}-\boldsymbol{z}|^2}
\end{equation}
for some fixed $\alpha >0$. In this case, the parameters
are elements of the domain of the function (here the $d$-dimensional
sphere).
Note that, since $|\boldsymbol{x}|=|\boldsymbol{z}| = \sqrt{d}$, up to an
irrelevant constant that can be absorbed in the weights $c$, we can
also write~\eqref{eq:90} as
\begin{equation}
\label{eq:90red}
\varphi(\boldsymbol{x},\boldsymbol{z}) = e^{-\alpha \boldsymbol{x}\cdot \boldsymbol{z}}
\end{equation}
This setting allows us to simplify the problem. Using
\begin{equation}
\label{eq:119}
f^{(n)}(\boldsymbol{x}) = \frac1n \sum_{i=1}^n c_i \varphi(\boldsymbol{x},\boldsymbol{z}_i)
= \frac1n \sum_{i=1}^n c_i e^{-\alpha \boldsymbol{x}\cdot \boldsymbol{z}_i},
\end{equation}
we can use as alternative loss
\begin{equation}
\label{eq:otherloss}
\mathcal{L}[f^{(n)}] = -\frac1n \sum_{i=1}^n c_i f(\boldsymbol{z}_i) + \frac1{2n^2} \sum_{i,j=1}^n c_i c_j \varphi(\boldsymbol{z}_i,\boldsymbol{z}_j)
\end{equation}
i.e., eliminate the need for data besides the set $\{\boldsymbol{z}_i\}_{i=1}^n$. In terms of the empirical distribution, the loss can be represented as
\begin{equation}
\begin{aligned}
\mathcal{L}[f^{(n)}] &= -\int_{\hat D} f(\boldsymbol{z}) \gamma^{(n)}(d\boldsymbol{z}) + \frac1{2} \int_{\hat D\times\hat D}
\varphi(\boldsymbol{z},\boldsymbol{z}')\gamma^{(n)}(d\boldsymbol{z})\gamma^{(n)}(d\boldsymbol{z}')
\end{aligned}
\end{equation}
where $\gamma^{(n)}=\int_\mathbb{R} c\, \mu^{(n)}(dc,\cdot)$. Viewed as an integral kernel, $\varphi$ is positive definite; as a result, the loss is a convex functional of $\gamma^{(n)}$ (or $\mu^{(n)}$). Hence, the results established above apply to this special case as well.
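A minimal sketch of the data-free loss~\eqref{eq:otherloss}, using the reduced kernel~\eqref{eq:90red} and assuming the values $f(\boldsymbol{z}_i)$ have been precomputed (the function name `kernel_loss` is ours):

```python
import numpy as np

def kernel_loss(c, Z, f_vals, alpha):
    """Data-free loss (eq:otherloss):
    L = -(1/n) sum_i c_i f(z_i) + 1/(2 n^2) sum_{i,j} c_i c_j phi(z_i, z_j),
    with phi(z, z') = exp(-alpha z . z') as in eq:90red.
    c: (n,) weights; Z: (n, d) particle positions; f_vals: (n,) values f(z_i)."""
    n = len(c)
    K = np.exp(-alpha * Z @ Z.T)   # Gram matrix phi(z_i, z_j)
    return -np.mean(c * f_vals) + (c @ K @ c) / (2.0 * n**2)
```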
The GD flow on the loss~\eqref{eq:otherloss} can now be written explicitly as
\begin{equation}
\label{eq:34RBFpspin}
\left\{
\begin{aligned}
\dot{\boldsymbol{z}}_i &= c_i \nabla f(\boldsymbol{z}_i) +
\frac{\alpha}n\sum_{j=1}^n
c_i c_j \boldsymbol{z}_j e^{-\alpha \boldsymbol{z}_i\cdot \boldsymbol{z}_j} -\lambda_i \boldsymbol{z}_i \\
\dot c_i &= f(\boldsymbol{z}_i) - \frac1n\sum_{j=1}^n c_j e^{-\alpha
\boldsymbol{z}_i\cdot \boldsymbol{z}_j}
\end{aligned}
\right.
\end{equation}
where $ -\lambda_i \boldsymbol{z}_i $ is a Lagrange multiplier term added to
enforce $|\boldsymbol{z}_i|= \sqrt{d}$ for all $i=1,\ldots, n$, $f(\boldsymbol{x})$ is
given by~\eqref{eq:psin}, and $\nabla f(\boldsymbol{z})$ is given componentwise by
\begin{equation}
\label{eq:132}
\frac{\partial f}{\partial z_p} = \frac1d\sum_{q,r=1}^d
\left(a_{p,q,r} + a_{r,p,q}+ a_{q,r,p}\right) z_q z_r.
\end{equation}
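As an illustration, one explicit Euler step of~\eqref{eq:34RBFpspin} can be sketched as follows. Here the Lagrange multiplier term is realized by rescaling each $\boldsymbol{z}_i$ back onto the sphere after the step, one standard way (not the only one) to enforce $|\boldsymbol{z}_i|=\sqrt{d}$; the function name `gd_step` is our own:

```python
import numpy as np

def gd_step(c, Z, a, alpha, dt):
    """One explicit Euler step of the flow (eq:34RBFpspin) for the 3-spin
    target with coefficients `a`. c: (n,); Z: (n, d); a: (d, d, d)."""
    n, d = Z.shape
    # Symmetrized coefficients from eq:132: a_{p,q,r} + a_{r,p,q} + a_{q,r,p}
    A = a + np.transpose(a, (1, 2, 0)) + np.transpose(a, (2, 0, 1))
    E = np.exp(-alpha * Z @ Z.T)                      # e^{-alpha z_i . z_j}
    f_vals = np.einsum('pqr,ip,iq,ir->i', a, Z, Z, Z) / d
    grad_f = np.einsum('pqr,iq,ir->ip', A, Z, Z) / d
    # dz_i/dt = c_i grad f(z_i) + (alpha/n) sum_j c_i c_j z_j e^{-alpha z_i.z_j}
    dZ = c[:, None] * grad_f + (alpha / n) * ((c[:, None] * E * c[None, :]) @ Z)
    # dc_i/dt = f(z_i) - (1/n) sum_j c_j e^{-alpha z_i.z_j}
    dc = f_vals - (E @ c) / n
    Z_new = Z + dt * dZ
    # Project back onto the sphere in lieu of the Lagrange multiplier term.
    Z_new *= np.sqrt(d) / np.linalg.norm(Z_new, axis=1, keepdims=True)
    return c + dt * dc, Z_new
```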
As is apparent from~\eqref{eq:34RBFpspin} the advantage of using radial
basis function networks (or, in fact, any unit $\hat \phi$ which is (i) such that $\hat D = \Omega$ and (ii) positive definite) is that we can use $f(\boldsymbol{x})$ and the unit
$\varphi(\boldsymbol{x},\boldsymbol{z})$ directly, and do not need to evaluate $\hat F(\boldsymbol{z})$ and
$\hat K(\boldsymbol{z},\boldsymbol{z}')$ (that is, we need no batch). In other words, the cost of
running~\eqref{eq:34RBFpspin} scales like $(dn)^2$, instead of $P(Nn)^2$ in
the case of a general network optimized by SGD with a batch of size
$P$ and $\boldsymbol{z}\in \hat D \subset \mathbb{R}^N$. If we make $P$ scale with $n$, like
$P=C n^{2\alpha}$ for some $C>0$, as we need to do to obtain the
scalings discussed in Sec.~\ref{sec:stochgrad}, the cost of SGD
becomes $N^2n^{2+2\alpha}$, which quickly becomes much worse than
$(dn)^2$ as $n$ grows.
\begin{figure}[t]
\includegraphics[width=0.45\linewidth]{fig5Dcomp_final}
\includegraphics[width=0.45\linewidth]{rbf_scaling}
\caption{Left panel: Comparison between the level sets of the original
function $f$ in~\eqref{eq:psin} (black dotted curves) and its approximation
by the neural network in~\eqref{eq:119} with $n= 128$ and $d=5$ in
the slice defined by~\eqref{eq:slice}. Also shown are the projections
onto the slice of the particle positions. Right panel: empirical loss
in~\eqref{eq:lossempirical} vs $n$ at the end of the calculation. The
stars show the empirical loss for 10 independent
realizations of the coefficients $a_{p,q,r}$ in~\eqref{eq:psin}.}
\label{fig:5DRBF}
\end{figure}
We tested the representation~\eqref{eq:119} in $d=5$ using $n=16$, 32,
64, 128, and 256 and setting $\alpha = 5/d= 1$. The training was done
by running a time-discretized version of~\eqref{eq:34RBFpspin} with
time step $\Delta t = 10^{-3}$ for $2 \times 10^5$ steps: during the
first $10^5$ steps we added thermal noise to
\eqref{eq:34RBFpspin}, which we then removed during the second half of
the run. The representation~\eqref{eq:119} proves to be accurate even
at rather low values of $n$: for example, the left panel of
Fig.~\ref{fig:5DRBF} shows a contour plot of the original function $f$
and its representation $f^{(n)}$ with $n=128$ through a slice of the
sphere defined as
\begin{equation}
\label{eq:slice}
\boldsymbol{x}(\theta,\phi) = \sqrt{d} \left(\sin(\theta)\cos(\phi),
\sin(\theta)\sin(\phi),
\cos(\theta), 0,0\right),
\end{equation}
with $\theta\in[0,\pi]$ and $\phi\in [0,2\pi)$.
The level sets of both functions are in good agreement. Also shown in
this figure is the projection onto the slice of the positions of the
$n=128$ particles on the sphere. In this result, the parameters $c_i$ take
values that are initially uniformly distributed between about
$-40 d^2 = -10^3$ and $40 d^2 = 10^3$. To test the accuracy of the
representation, we used the following Monte Carlo estimate of the loss
function
\begin{equation}
\label{eq:lossempirical}
\mathcal{L}_P[f^{(n)}_t] = \frac{1}{2P} \sum_{p=1}^P \left|f(\boldsymbol{x}_p) - f^{(n)}_t(\boldsymbol{x}_p)\right|^2.
\end{equation}
This empirical loss function was computed with a batch of $10^6$
points $\boldsymbol{x}_p$ uniformly distributed on the sphere. The value
\eqref{eq:lossempirical} calculated at the end of the calculation is
shown as a function of $n$ in the right panel of
Fig.~\ref{fig:5DRBF}: the empty circles show~\eqref{eq:lossempirical}
for 4 individual realizations of the coefficient $a_{p,q,r}$
in~\eqref{eq:psin}, the full circle shows the average
of~\eqref{eq:lossempirical} over these 4 realizations. The blue line
scales as $n^{-1}$, the red one as $n^{-2}$: as can be seen, the
empirical loss decays with $n$ faster than $n^{-1}$, which is as
expected.
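The Monte Carlo estimate~\eqref{eq:lossempirical} used here can be sketched as follows; uniform points on $S^{d-1}(\sqrt{d})$ are obtained by normalizing Gaussian samples, and the function name and seed argument are our own:

```python
import numpy as np

def empirical_loss(f, f_n, d, P, seed=0):
    """Monte Carlo estimate (eq:lossempirical) over a batch of P points
    drawn uniformly on the sphere S^{d-1}(sqrt(d)).
    f, f_n: callables mapping a (P, d) array to a (P,) array of values."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((P, d))
    X *= np.sqrt(d) / np.linalg.norm(X, axis=1, keepdims=True)
    err = f(X) - f_n(X)
    return 0.5 * np.mean(err**2)
```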
\subsection{Learning with single layer networks with sigmoid
nonlinearity}
\label{sec:sigmo}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{quench}
\caption{The $\log$ of the empirical loss in~\eqref{eq:lossempirical}
as a function of training time by SGD for the sigmoid neural network
in $d=10$ (left panel) and $d=25$ (right panel). At time $t=2\times 10^6$, the
batch size is increased to initiate a quench. The insets show the
$\log$ of the empirical loss as a function of time during the final
$10^5$ time steps of training. }
\label{fig:quench}
\end{figure}
To further test our predictions and also assess the learnability of
high dimensional functions, we used $3$-spin models in $d=10$ and 25
dimensions, which we approximated with a single-layer neural network
with sigmoid nonlinearity parameterized by
$\boldsymbol{z}=(\boldsymbol{a},b) \in D= \mathbb{R}^{d+1}$, with $\boldsymbol{a} \in \mathbb{R}^{d}$, $b\in \mathbb{R}$,
and
\begin{equation}
\label{eq:133}
\varphi(\boldsymbol{x},\boldsymbol{z}) = h(\boldsymbol{a}\cdot \boldsymbol{x} + b).
\end{equation}
This gives
\begin{equation}
\label{eq:fnsigmo}
f^{(n)}(\boldsymbol{x}) = \frac1n \sum_{i=1}^n c_i h(\boldsymbol{a}_i \cdot \boldsymbol{x} + b_i)
\end{equation}
where $h(z) = 1/(1+e^{-z})$. Simple networks like these, as opposed
to deep neural networks with many parameters, provide greater assurance that we
have trained sufficiently to test the scaling.
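The representation~\eqref{eq:fnsigmo} amounts to a few lines of array code; in the sketch below (our own naming), `A` stacks the weight vectors $\boldsymbol{a}_i$ row-wise:

```python
import numpy as np

def sigmoid_net(X, c, A, b):
    """f^(n)(x) = (1/n) sum_i c_i h(a_i . x + b_i), h(z) = 1/(1+e^{-z})
    (eq:fnsigmo). X: (P, d) batch of inputs; A: (n, d); b, c: (n,)."""
    pre = X @ A.T + b                      # (P, n) pre-activations a_i.x + b_i
    return (1.0 / (1.0 + np.exp(-pre))) @ c / len(c)
```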
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{errNN}
\caption{Error scaling for single layer neural network with sigmoid
nonlinearities. Upper row: $d=10$; lower row: $d=25$. The first
column shows the empirical loss in~\eqref{eq:lossempirical}, the
second column shows~\eqref{eq:test2}, and the third column
shows~\eqref{eq:test2} with $\Theta(f)$ replaced by
$\Theta(-f)$. The stars show the results for 10 different
realizations of the coefficients $a_{p,q,r}$ in~\eqref{eq:psin}: the
dashed lines decay as $n^{-1}$, consistent with the predictions
in~\eqref{eq:147} and Proposition~\ref{th:cltg}.}
\label{fig:scaling}
\end{figure}
We trained the model in~\eqref{eq:fnsigmo} using SGD with an initial
batch size of $P = \lfloor n/5\rfloor$ points uniformly sampled on the
sphere for $2\times 10^6$ time steps, resampling a new batch at every
time step: this corresponds to choosing $\alpha = 1/2$ in the notation
of Sec.~\ref{sec:stochgrad}. Towards the end of the trajectory, we
initiated a partial quench by increasing the batch size to
$P=\lfloor (n/5)^2\rfloor$ (i.e., $\alpha = 1$), which we ran for an
additional $2\times 10^5$ time steps. Fig.~\ref{fig:quench} shows the
empirical loss in~\eqref{eq:lossempirical} calculated over the batch
as a function of training time during the optimization with $n=256$
particles and $d=10$ (left panel) and $d=25$ (right panel). Note that
the lack of intermediate plateaus in the loss during training is
consistent with our conclusion that the dynamics effectively descends
on a quadratic energy landscape (i.e. the loss function itself) at the
level of the empirical distribution of the particles. After the
quench, the empirical loss shows substantially smaller fluctuations as
a function of time, which helps to reduce the error due to fluctuations. The
inset shows the final $10^5$ time steps in which there is negligible
downward drift, indicating convergence towards stationarity at this
batch size.
In these higher dimensional examples, we tested the scaling with three
different observables. First, we considered the empirical loss
function in~\eqref{eq:lossempirical} which we computed over a batch of
size $\hat P=10^5$ larger than $P$. As shown in the two left panels
of Fig.~\ref{fig:scaling}, $\mathcal{L}_{\hat P}[f^{(n)}_t]$ scales as $n^{-1}$, as expected.
We
also tested the estimate in~\eqref{eq:147} using
\begin{equation}
\label{eq:test2}
\frac{1}{\hat P} \sum_{p=1}^{\hat P}
\Theta\left(f(\boldsymbol{x}_p)\right)\left(f(\boldsymbol{x}_p)- f^{(n)}_t(\boldsymbol{x}_p)\right),
\end{equation}
and similarly with $\Theta\left(-f(\boldsymbol{x}_p)\right)$: here $\Theta$
denotes the Heaviside function. The result is shown in the four right
panels in Fig.~\ref{fig:scaling}: \eqref{eq:test2} scales as $n^{-1}$,
consistent with~\eqref{eq:147} and our choice of $\alpha =1$.
To provide further confidence in the quality of the representations,
we also made a visual comparison by plotting $f$ and $f^{(n)}$ along great
circles of the sphere. We do so by picking $i\neq j$ in
$\{1,\cdots, d\}$ and setting
$\boldsymbol{x} = \boldsymbol{x}(\theta) = (x_1(\theta), \dots x_d(\theta))$ with
\begin{equation}
x_i(\theta) = \sqrt{d}\cos(\theta), \qquad x_j(\theta) = \sqrt{d}
\sin(\theta),
\qquad
x_k(\theta) = 0 \quad \forall k\not=i,j.
\end{equation}
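The great-circle slices can be generated as follows (our own naming; indices are zero-based in code):

```python
import numpy as np

def great_circle(d, i, j, n_points=200):
    """Points x(theta) on the great circle through axes i != j:
    x_i = sqrt(d) cos(theta), x_j = sqrt(d) sin(theta), x_k = 0 otherwise."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    X = np.zeros((n_points, d))
    X[:, i] = np.sqrt(d) * np.cos(theta)
    X[:, j] = np.sqrt(d) * np.sin(theta)
    return theta, X
```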
In Fig.~\ref{fig:slices} we plot $f(\boldsymbol{x}(\theta))$ and
$f^{(n)}(\boldsymbol{x}(\theta))$ along three great circles for $d=10$ and $d=25$. As
can be seen, the agreement is quite good and confirms the quality of
the final fit. A strong signal is present in $d=25$ with $n=1024$, a
remarkable fact when considering that if we had only two grid points
per dimension, the total number of points in the grid would be
$2^{25} = 33,554,432$.
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{fitsNN}
\caption{One dimensional slices through the $d=10$ (upper row) and
$d=25$ (lower row) neural net representation $f^{(n)}$ are shown along with
the target function $f$ (yellow curve). In $d=10$, the function
representations clearly capture the main features of the target
function, with only small scale deviations. In $d=25$ there is
remarkably good signal when $n=1024$ while the smaller neural
network is less able to faithfully represent the target function. }
\label{fig:slices}
\end{figure}
\section{Concluding remarks}
\label{sec:conclu}
Viewing parameters as particles with the loss function as interaction
potential enables us to leverage a powerful theoretical apparatus
developed to analyze problems from statistical physics.
Using these ideas, we can analyze the approximation quality and the
trainability of neural network representations of high-dimensional
functions. Several insights emerge from our analysis based on this
viewpoint: First, these tools show the dynamical realizability of the Universal Approximation
Theorems, a direct consequence of the Law of Large Numbers for the empirical
distribution of the parameters.
Specifically, we conclude that
the empirical distribution effectively descends on the quadratic loss
function landscape when the number $n$ of parameters in the network is
large. This confirms the empirical observation that wide neural networks
are trainable despite the non-convexity of the loss function viewed
from the individual particles perspective (as opposed to that of their
empirical distribution). Secondly, we have derived a Central Limit
Theorem for the empirical distribution of the parameters, specifying
the approximation error of the neural network representation and showing
that it is universal.
We derived these results first in the context of gradient descent
dynamics; however, our conclusions also apply to
stochastic gradient descent. The analysis indicates how the parameters
in SGD should be chosen, in particular how the batch size should be
scaled with $n$ given the time step used in the scheme, which can be
done towards the end of training.
These results were derived for a quadratic loss, $\mathcal{L}[f^{(n)}]= \tfrac12 \mathbb{E}_\nu|f-f^{(n)}|^2$. However, they do generalize to other losses as long as they are convex in~$f^{(n)}$.
We also worked in the limit of an infinite amount of training data, an idealized setting that does not address the error incurred from a finite data set.
For a neural network trained on a dataset of $P$ points, $\{\boldsymbol{x}_p\}_{p=1}^P$, we can decompose the ``generalization'' error into components that involve the approximation error and the error from the finiteness of the data,
\begin{equation}
\label{eq:testerror}
\mathbb{E}_\nu |f-f_P^{(n)}|^2 \le \mathbb{E}_\nu |f-f_P|^2+ \mathbb{E}_\nu |f_P-f_P^{(n)}|^2
\end{equation}
where $f_P$ and $f^{(n)}_P$ are the approximations of $f$ we can get if we train the network on the empirical loss built on $\{\boldsymbol{x}_p\}_{p=1}^P$ with infinitely ($n\to\infty$) or finitely ($n<\infty$) many units, respectively. Our results give direct insight into the second term on the right-hand side of
\eqref{eq:testerror}.
We leave assessments of the first term for future work.
Our numerical results not only confirm our predictions, they emphasize the
capability of neural networks to represent high-dimensional functions
accurately with a relatively modest number of adjustable
parameters. Needless to say, the computational achievements of neural networks open the door to developments
in scientific computing that we are only beginning to grasp. Such
applications may benefit from better understanding how the specific
architecture of the neural networks affects the approximation error
and trainability, not in the general terms of their scaling with $n$
that we analyzed here, but in the details of the constants involved.
\section{Introduction}
The process of automatically determining the sensor that produced a given image is referred to as {\em sensor identification}. While a number of sensor identification methods have been discussed in the literature~\cite{Relwork2, Geradts_SPIE_01, Bayram_ICIP_05}, the ones based on Photo Response Non-Uniformity (PRNU)~\cite{Lukas_TIFS_06, Lukas_TIFS_08, Jess_ISP_09} have gained prominence in the recent literature. PRNU refers to the non-uniform response of individual pixels across the sensor array to the same illumination as a consequence of manufacturing defects introduced during sensor production. PRNU manifests itself as a noise pattern in the images generated by a sensor. This noise pattern is believed to be unique to every sensor~\cite{UniquePRNU}. A number of schemes have been designed to compute the PRNU noise of a sensor based on the images generated by it~\cite{SPN_Ref1_2017}.
More recently, the principle of PRNU has been used to perform sensor identification in the context of iris biometrics by processing the near-infrared (NIR) ocular images acquired by typical iris sensors~\cite{Uhl_2012_Baseline2,Kalka_CVPRW_15, Uhl0_IWBF_14,Uhl1_IWBF_15,Uhl3_IWBF_16,Ross_IWBF_17,IrisPRNU_18,SPN_Ref2_2017,Vatsa_18}. In this case, sensor identification (or device identification) can be used in conjunction with biometric recognition to authenticate both the identity of a device (e.g., a smartphone) as well as the individual using the device~\cite{Galdi_PRL_15}.
Given the forensic value of PRNU in determining the origin of an image (\ie, the sensor or device that produced it), we explore if it is possible to alter an image such that its source, as assessed by a PRNU estimation scheme, is confounded. We impose two constraints:
\begin{enumerate}
\item The modified image must spoof the PRNU pattern of a pre-specified target sensor.
\item The biometric utility of the modified image must be retained, viz., the modified ocular image must match successfully with the original image.
\end{enumerate}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/isba2019-figure1-Page-1.png}
\end{center}
\caption{The objective of this work is to perturb an ocular (iris) image such that its PRNU pattern is modified to spoof that of another sensor, while not adversely impacting its biometric utility.}
\label{fig:paperoutline}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/isba2019-figure2.png}
\end{center}
\caption{The proposed algorithm for deriving perturbations for the input image using the candidate image. (a) Steps involved in modifying the original image from the source sensor using a candidate image from the target sensor (see Algorithm~\ref{alg:find-candidate}), and (b) role of the candidate image in the perturbation engine (see Algorithm~\ref{alg:perturbations}).}
\label{fig:algorithm-graphic}
\end{figure*}
This kind of attack can be considered as a `targeted attack', since the sensor whose PRNU pattern has to be spoofed is pre-specified. In the literature, it is also referred to as fingerprint-copy attack~\cite{Ref_Fingerprintcopy2, Uhl_2012_Baseline2}, because the objective is to copy the sensor pattern or `fingerprint' corresponding to the target sensor to an image acquired using a different source sensor. The proposed work has two distinct benefits. Firstly, it allows us to assess the feasibility of PRNU spoofing from a counter-forensic perspective. The widespread use of forensic techniques for examining the validity and origin of digital media~\cite{CounterImageForensics_Ref1, CounterImageForensics_Ref2} necessitates the study of attacks that can potentially undermine the performance of such forensic methods. For example, an adversary may maliciously attempt to link an image to a different camera in an effort to mislead law enforcement investigators~\cite{Ref_Fingerprintcopy2}. Secondly, establishing the viability of such spoof attacks would promote the development of more robust PRNU estimation schemes~\cite{SPN_Ref2_2017}. In addition, effective methods to detect such attacks can be developed if the process of spoofing is better understood. Figure~\ref{fig:paperoutline} summarizes the objective of this work.
The remainder of the paper is organized as follows. Section~\ref{PRNU} briefly reviews the PRNU based sensor identification scheme used in this work. Section~\ref{RelatedWork} presents methods that have been described in the literature for sensor anonymization and spoofing. Section~\ref{ProposedMethod} describes the proposed method for spoofing PRNU patterns. Section~\ref{Expts} provides details about the datasets used, the experimental protocols employed, and reports the results obtained using the proposed method. Section~\ref{Summary} summarizes the paper and indicates future work.
\section{Photo Response Non-Uniformity (PRNU)}
\label{PRNU}
PRNU estimation entails computing the {\em reference pattern} of a sensor based on a set of training images acquired using the sensor. This reference pattern is then used by a sensor classifier to identify the sensor that was used to acquire a given test image. This is accomplished by correlating the reference pattern of the sensor with the {\em noise residual} of the test image to compute a correlation score. The image is assigned to the sensor whose reference pattern yields the highest correlation value. Here, we used Normalized Cross-Correlation (NCC) for computing the correlation score~\cite{Uhl1_IWBF_15, Ross_IWBF_17}. PRNU estimation can be done using numerous approaches~\cite{Lukas_TIFS_06, Li_TIFS_10, Kang_TIFS_12, Li2_TIFS_16, Li3_SPL_16, SPN_Ref1_2017}. In this work, we used the Maximum Likelihood Estimation (MLE) based PRNU estimation scheme~\cite{Lukas_TIFS_08}, which has been demonstrated to suppress image artifacts not associated with the sensor-specific pattern and has resulted in very good performance~\cite{Uhl0_IWBF_14, Uhl1_IWBF_15}.
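The attribution rule described above (correlate the test noise residual against each sensor's reference pattern and pick the maximizer) can be sketched as follows; function names are our own, and residuals/patterns are assumed to be same-size float arrays:

```python
import numpy as np

def ncc(w, K):
    """Normalized cross-correlation between a test noise residual w and a
    sensor reference pattern K (2-D arrays of the same size)."""
    w = w - w.mean()
    K = K - K.mean()
    return float((w * K).sum() / (np.linalg.norm(w) * np.linalg.norm(K)))

def attribute_sensor(w, reference_patterns):
    """Assign the residual to the sensor whose reference pattern gives the
    highest NCC. `reference_patterns` maps sensor name -> pattern array."""
    scores = {name: ncc(w, K) for name, K in reference_patterns.items()}
    return max(scores, key=scores.get), scores
```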
MLE based PRNU estimation uses a weighted averaging of the noise residuals extracted from a set of training images pertaining to the sensor; each noise residual is weighted by its corresponding training image, to derive the maximum-likelihood estimate of the reference pattern. Wiener filtering and zero-mean operations are applied to the noise residuals to address interpolation artifacts arising due to the Bayer pattern. In our experiments, $L_2-$normalization of the test noise residual is performed to account for the variations in the PRNU strength of different sensors~\cite{Uhl5_IJCB_17}. The MLE of the reference pattern corresponding to a sensor is computed as, $\mathbf{\hat{K}} = \frac{\sum_{i=1}^{N} \mathbf{w}_i\mathbf{I}_i}{\sum_{i=1}^{N} \mathbf{I}_i^2}.$
Here, $\mathbf{w}_i$ is the noise residual obtained using a wavelet-based denoising filter applied to training image $\mathbf{I}_i$ and $\mathbf{w}_i = \mathbf{I}_i - F (\mathbf{I}_i)$, where, $F$ denotes the Daubechies Quadrature Mirror Filter~\cite{Lukas_TIFS_06}.
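The MLE estimate above can be sketched as follows. Note this is only a sketch: a simple 3$\times$3 box blur stands in for the Daubechies wavelet filter $F$, and the Wiener-filtering and zero-mean post-processing steps are omitted for brevity:

```python
import numpy as np

def denoise(I):
    # Stand-in denoiser (3x3 box blur); the paper uses a Daubechies
    # wavelet filter F, substituted here only to keep the sketch short.
    P = np.pad(I, 1, mode='edge')
    h, w = I.shape
    return sum(P[r:r + h, s:s + w] for r in range(3) for s in range(3)) / 9.0

def mle_reference_pattern(images):
    """K_hat = (sum_i w_i I_i) / (sum_i I_i^2), with w_i = I_i - F(I_i)
    (MLE estimate of Sec. 2)."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for I in images:
        I = I.astype(float)
        num += (I - denoise(I)) * I
        den += I ** 2
    return num / np.maximum(den, 1e-12)   # guard against all-zero pixels
```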
\section{Perturbing the PRNU Pattern}
\label{RelatedWork}
The counter-forensics literature describes techniques that can be used to suppress or perturb the PRNU pattern embedded in an image. This is often referred to as \textit{source anonymization}~\cite{PRNU_Perturb1_2014}, \ie, obscuring the `fingerprint' of the source sensor in an image so as to anonymize the origin of the image. Source anonymization can be used as a privacy preservation scheme, particularly relevant when the sensor-specific details can be used to associate a sensor with its owner. Assuming that each device is typically associated with a single user, device identification can be indirectly used to reveal the identity of the person possessing that specific device~\cite{AnonRef_2011}. There have been primarily two approaches to perturb the PRNU pattern for this purpose, namely, (i) compression and filtering based schemes, which typically use strong filtering schemes such as, flat-field subtraction~\cite{PRNU_attack2} or Wiener filtering~\cite{PRNU_attack4_2013} that can degrade the PRNU pattern leading to incorrect source attribution; and (ii) geometric perturbation based schemes such as `seam carving'~\cite{PRNU_attack3_2014,PRNU_attack4_2013} that distorts the alignment between the sensor reference pattern and the test noise residual, thereby impeding the process of correlating the reference pattern with the test noise residual.
In contrast to source anonymization, \textit{PRNU spoofing} not only suppresses the fingerprint of the source sensor, but it also inserts the fingerprint of the target sensor. An adversary may tamper with the digital evidence to maliciously exculpate a guilty person or worse, incriminate an innocent person. In recent literature, PRNU spoofing has been performed by two methods, namely, (i) PRNU injection and (ii) PRNU substitution. The first method adds the weighted reference pattern of a pre-selected target sensor to the input image, $\mathbf{I}$~\cite{Ref_Fingerprintcopy2}. The modified image becomes $\mathbf{I'}=[\mathbf{I}+\mathbf{I}\times\gamma \mathbf{\hat{K}_{T}}]$. Here, $\mathbf{\hat{K}_{T}}$ is the reference pattern of the target sensor $T$ and $\gamma$ is a scalar parameter. The second method subtracts the PRNU pattern of the source sensor in an image and then adds the PRNU pattern of a target sensor~\cite{PRNU_attack5_2010}. The modified image is represented as $\mathbf{I'}=\mathbf{I} - \gamma \mathbf{\hat{K}_{S}} + \beta \mathbf{\hat{K}_{T}}$. $\mathbf{I}$ belongs to the source sensor $S$, whose reference pattern is $\mathbf{\hat{K}_{S}}$. $\gamma$ and $\beta$ are scalar terms. We will use the two methods described above, \ie, PRNU injection and PRNU substitution, as baseline algorithms for comparative evaluation. The first method will be referred to as Baseline 1 and the second method will be referred to as Baseline 2. Both baseline algorithms have been shown to be successful on images acquired using commercial cameras that employ RGB sensors.
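The two baselines reduce to one-line image operations; here $\gamma$ and $\beta$ are the scalar strengths to be tuned, and all arrays are assumed to be same-size floats:

```python
import numpy as np

def inject_prnu(I, K_t, gamma):
    """Baseline 1 (PRNU injection): I' = I + gamma * I * K_t."""
    return I + gamma * I * K_t

def substitute_prnu(I, K_s, K_t, gamma, beta):
    """Baseline 2 (PRNU substitution): I' = I - gamma * K_s + beta * K_t."""
    return I - gamma * K_s + beta * K_t
```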
In~\cite{Uhl_2012_Baseline2}, the authors examine the viability of PRNU spoofing via injection in the context of \textit{iris sensors} operating in the NIR spectrum~\cite{PRNU_attack_NEW}. In their work, they computed the forged image as $\mathbf{I'}=[\textit{F}(\mathbf{I}) + \gamma \mathbf{\hat{K}_{T}}]$. Here, $F(\cdot)$ is the wavelet based denoising filter discussed in Section~\ref{PRNU}, and $\gamma$ is a scalar parameter. The authors further performed the triangle test to detect the spoof attack, but did not analyze the impact of the PRNU spoofing on iris recognition performance.
In this paper, our objective is to perform PRNU spoofing in a principled manner that works for any arbitrary pair of source and target iris sensors. In addition, we wish to retain the biometric utility of the PRNU-spoofed image. The task of spoofing can potentially be accomplished through different techniques; an example would be the use of adversarial networks, which have been successfully utilized for perturbing images in the current literature~\cite{SAN,mirjalili_semi_2018}. However, a significant bottleneck of deep-learning based techniques is the need for a large amount of training data to drive the perturbation process. We will demonstrate the success of the proposed PRNU spoofing scheme using a small number of images ($<$1000).
\section{Proposed method}
\label{ProposedMethod}
In this section, we formally describe the objective and the method used to address this objective.
\subsection{Problem formulation}
Let $X$ denote an NIR iris image of width $w$ and height $h$, and $\mathbf{S} = \{S_1, S_2, \ldots, S_n\}$ denote a set of $n$ sensors. Let $\phi(X,S_i)$ be the function that computes the normalized cross-correlation (NCC) between the noise residual of $X$ and the PRNU reference pattern of sensor $S_i$. Then, the sensor label for the input iris image $X$ can be determined using $\displaystyle \arg \max_{i}\{\phi(X,S_i)\}$. Furthermore, let $M$ be a biometric matcher where $M(X_1, X_2)$ determines the match score between two iris samples $X_1$ and $X_2$. Given an input iris image $X$ acquired using sensor $S_o$, a candidate image $X_c$ from the target sensor $S_t$, and an iris matcher $M$, our goal is to devise a perturbation engine $\Psi$ that can modify the input image as $Y = \Psi(X,X_c)$ such that $\phi(Y,S_o)<\phi(Y,S_t)$, and thereby predict $S_t$ as the sensor label of the perturbed image $Y\!$, while the iris matcher, $M\!$, will successfully match $Y$ with $X$. As a result, the target sensor will be spoofed, while the biometric utility of the image will be retained. This implies that the match score between a pair of perturbed images $[M(Y_1, Y_2)]$, as well as that of a perturbed sample with an original sample, $[M(X_1, Y_2)]$ and $[M(Y_1,X_2)]$, is expected to be similar to the match scores between the original samples $[M(X_1,X_2)]$. The steps used to achieve this task are described next.
\begin{table*}
\centering
\caption{Specifications of the datasets used in this work.}
\label{Tab1:Datasets}
\scalebox{0.75}{
\begin{tabular}{|llccc|}
\hline
\textbf{Dataset} & \textbf{Sensor Name (Abbreviation) } & \textbf{Image Size} & \begin{tabular}[c]{@{}l@{}}\textbf{Number of Images Used} \\ \textbf{(Training set+Testing set)}\end{tabular}& \textbf{Number of Subjects} \\
\hline \hline
BioCOP 2009 Set I & Aoptix Insight (Aop) & 640$\times$480 & 995 (55+940) & 100 \\
IITD~\cite{IITD} & Jiristech JPC 1000 (JPC) & 320$\times$240 & 995 (55+940) & 100 \\
CASIAv2 Device2~\cite{CASv2} & CASIA-IrisCamV2 (IC) & 640$\times$480 & 995 (55+940) & 50 \\
IIITD Multi-spectral Periocular (NIR subset)~\cite{IIITD} & Cogent (Cog) & 640$\times$480 & 588 (55+533) & 62 \\
ND CrossSensor Iris 2013 Set II~\cite{ND} & LG 4000 (LG40) & 640$\times$480 & 615 (55+560) & 99 \\
MMU2~\cite{MMU} & Panasonic BM-ET 100US Authenticam (Pan) & 320$\times$238 & 55 (55+0) & 6\\
ND Cosmetic Contact Lens 2013~\cite{ND} & IrisGuard IG AD100 (AD) & 640$\times$480 & 55 (55+0)&4\\
WVU Off-Axis~\cite{WVU} & EverFocus Monochrome CCD (Ever) & 640$\times$480 & 55 (55+0)&7\\
CASIAv2 Device 1~\cite{CASv2} & OKI IrisPass-h (OKI) & 640$\times$480 & 55 (55+0)&3\\
CASIAv4-Iris Thousand subset~\cite{CASv4} & IrisKing IKEMB100 (IK) & 640$\times$480 & 55 (55+0)&3 \\
ND CrossSensor Iris 2013 Set I~\cite{ND} & LG 2200 (LG22) & 640$\times$480 & 55 (55+0)&5 \\ \hline
\end{tabular}}
\end{table*}
\begin{algorithm}
\small{
\caption{\label{alg:find-candidate}Selection of the candidate image.}
\KwIn{An image $X$ from sensor $S_o$, a gallery of images $G = \{X_1, ..., X_L\}$ from the target sensor $S_t$}
\KwOut{A candidate image, $X_{c}$, selected from the gallery.}
Set static parameters $K=10$ (number of random patches) and $w_p=10, h_p=10$ (patch width and height).
Generate a set of $K$ random patch locations $P=\{p_1, \cdots, p_K\}$, where each patch size is $h_p \times w_p$.
Compute the average pixel intensity in each patch $p_k\in P$ of the input image $X$ to obtain a vector $\mathbf{v}_X$ (of size $K$).
Repeat step 3 for each of the gallery images to obtain a set of vectors $\mathbf{v}_{G_i}$, where, $i = 1,\cdots,L$. The value of $L$ (the target gallery size) depends on the number of test images indicated in the fourth column in Table~\ref{Tab1:Datasets}.
Compute the correlation between $\mathbf{v}_X$ and $\mathbf{v}_{G_i}$ corresponding to each gallery image to obtain a set of $L$ correlation scores.
Return candidate image $X_{c} \in G$ that has the highest correlation, \ie $X_{c} = X_{f}$ where $f = \displaystyle \argmax_{i\in [1,\cdots,L]} \{Corr(\mathbf{v}_X,\mathbf{v}_{G_i})\}$.}
\end{algorithm}
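A minimal NumPy sketch of Algorithm~\ref{alg:find-candidate} is given below (the function name is illustrative, and patch corners are drawn uniformly at random, one plausible reading of the random patch locations in the algorithm):

```python
import numpy as np

def select_candidate(X, gallery, K=10, hp=10, wp=10, seed=0):
    """Algorithm 1: pick the gallery image whose patch-intensity
    profile is maximally correlated with that of the input image X."""
    rng = np.random.default_rng(seed)
    h, w = X.shape
    # K random top-left patch corners, reused for all images
    rows = rng.integers(0, h - hp + 1, size=K)
    cols = rng.integers(0, w - wp + 1, size=K)

    def profile(img):
        # average pixel intensity inside each of the K patches
        return np.array([img[r:r + hp, c:c + wp].mean()
                         for r, c in zip(rows, cols)])

    vX = profile(X)
    corrs = [np.corrcoef(vX, profile(G))[0, 1] for G in gallery]
    return int(np.argmax(corrs))
```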
\subsection{Deriving perturbations and PRNU Spoofing}
\label{Derivingperturbations}
Given a single image $X$ from the source sensor $S_o$, a gallery of images $G = \{X_1, ..., X_L\}$ from the target sensor $S_t$, and a set of $K$ random patch locations $P=\{p_1, ..., p_K\}$, we first select a candidate image, $X_c$, $c\in [1,\cdots,L]$, from the gallery to perturb the input image. The candidate image is selected from the gallery such that it is maximally correlated with the input image $X$. To accomplish this goal, we select 10 patches in the input image, each of size $10\times10$ (\ie, $K=10,~h_p=10,~w_p=10$ in Algorithm~\ref{alg:find-candidate}). We then compute the average pixel intensity in each of these patches and create a $K$-dimensional vector $\mathbf{v}_X$. Next, for each of the $L$ gallery images, we create $\mathbf{v}_{G_i}$, $i=1, \cdots, L$, by computing the average pixel intensity at the same 10 patch locations selected for the input image. Finally, we compute the correlation between the vectors $\mathbf{v}_X$ and $\mathbf{v}_{G_i}$, and select the candidate image with the maximum correlation value. The steps for selecting the candidate image are summarized in Algorithm~\ref{alg:find-candidate}.
After obtaining the candidate image $X_{c}$ from the gallery of the target sensor $S_t$, the perturbations for image $X$ are derived with the help of $X_{c}$ as described in Algorithm~\ref{alg:perturbations}. The perturbation routine employs the following parameters: (i) $\alpha$ (the learning rate), (ii) $\eta$ (the termination criterion), and (iii) $m$ (the maximum number of iterations). Initially, the output perturbed image $Y^{(0)}$ is identical to the input image $X$. Next, we select a random patch location from $Y^{(0)}$, and create a mask matrix, $\mathit{Mask}$, of the same size as $Y^{(0)}$, such that the elements in $\mathit{Mask}$ are set to 1 for the row and column indices corresponding to the selected patch location. Then, the image $Y^{(0)}$ is perturbed iteratively using pixels from the same patch location in $X_c$. In each iteration, the pixels inside the selected patch are updated along two directions, with the candidate image guiding the direction of perturbation~\cite{vahidRef}. In the first case, the perturbation is along a positive direction (the positive-direction update in Algorithm~\ref{alg:perturbations}), which generates $Y^u$; the other direction corresponds to a negative perturbation (the negative-direction update in Algorithm~\ref{alg:perturbations}), which produces $Y^v$. Figure~\ref{fig:algorithm-graphic}(b) illustrates the role of the candidate image in the perturbation routine. Next, the noise residuals extracted from $(Y^u,Y^v)$ are correlated with the reference pattern of the target sensor. The perturbed image yielding the maximum correlation value is then selected as the seed image for the next iteration. This process is repeated until the relative difference between the NCC values of the perturbed image $Y^{(iter)}$ with respect to the target sensor $S_t$ and the original sensor $S_o$ exceeds 10\%, \ie, $\eta=0.1$, or the maximum number of iterations is reached.
The parameters employed in the perturbation routine are selected intuitively; for example, the learning rate is set to a small value $\alpha=0.01$ because our objective is to perturb the image while preserving its biometric utility.
\begin{algorithm}
\small{
\caption{\label{alg:perturbations}Spoofing PRNU pattern.}
\KwIn{An image $X_{h\times w}$ from sensor $S_o$, a candidate image $X_{c}$ from sensor $S_t$, a function $\phi(X,S_i)$ that returns the NCC value when image $X$ is correlated with the PRNU pattern of sensor $S_i$ ($i\in\{o,t\}$).}
\KwOut{perturbed image $Y$.}
Set static parameters $\alpha=0.01$ (learning rate), $\eta = 0.1$ (threshold), $m = 3000$ (maximum number of iterations) and $h_p=10, w_p=10$ (patch size).
Initialize $iter = 0$ and $Y^{(0)} = X$.
\Repeat{$\displaystyle \frac{\phi(Y^{(iter)}, S_t) - \phi(Y^{(iter)}, S_o)}{\phi(X, S_o)} > \eta$}{
\nl -- Choose a random patch location $\left(p_x,p_y\right)$ with $0\le p_x<\frac{h}{h_p}$ and $0 \le p_y < \frac{w}{w_p}$.
\nl -- Construct the mask matrix $\mathit{Mask}$ such that $\mathit{Mask}[i,j]=1$ if $\left(\lfloor \frac{i}{h_p} \rfloor , \lfloor \frac{j}{w_p} \rfloor \right) = \left(p_x,p_y\right)$, and $\mathit{Mask}[i,j]=0$ elsewhere.
\nl -- Create a perturbed image in the positive direction $Y^{u} = Y^{(iter)} + \alpha \mathit{Mask}\odot \left(X_c - Y^{(iter)}\right)$.
\nl -- Create a perturbed image in the negative direction $Y^{v} = Y^{(iter)} - \alpha \mathit{Mask}\odot \left(X_c - Y^{(iter)}\right)$.
\nl -- Compute the NCC values of $Y^u$ and $Y^v$ for the target sensor $S_t$, $\phi(Y^{u},S_t)$ and $\phi(Y^{v},S_t)$, respectively.
\nl -- Set $Y^{(iter+1)} = Y^u$ if $\phi(Y^{u},S_t) > \phi(Y^{v},S_t)$; otherwise set $Y^{(iter+1)} = Y^v$.
\nl -- $iter \leftarrow iter + 1$.
\nl -- If $iter>m$, break the loop.
}
Return the final perturbed image, $Y^{(iter)}.$}
\end{algorithm}
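The loop in Algorithm~\ref{alg:perturbations} can be sketched as follows (a simplified single-channel version with illustrative names; the NCC function $\phi$ is passed in as a black box):

```python
import numpy as np

def spoof_prnu(X, Xc, phi, So, St, alpha=0.01, eta=0.1, m=3000,
               hp=10, wp=10, seed=0):
    """Algorithm 2: iteratively pull random patches of X toward (or away
    from) the candidate image Xc until the NCC against the target sensor
    St exceeds that against the source sensor So by the threshold eta."""
    rng = np.random.default_rng(seed)
    h, w = X.shape
    Y = X.astype(float).copy()
    base = phi(X, So)
    for _ in range(m):
        if (phi(Y, St) - phi(Y, So)) / base > eta:
            break
        # random patch on the (h/hp) x (w/wp) grid, expressed as a mask
        px = rng.integers(0, h // hp) * hp
        py = rng.integers(0, w // wp) * wp
        mask = np.zeros_like(Y)
        mask[px:px + hp, py:py + wp] = 1.0
        step = alpha * mask * (Xc - Y)
        Yu, Yv = Y + step, Y - step        # positive / negative direction
        Y = Yu if phi(Yu, St) > phi(Yv, St) else Yv
    return Y
```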
At the end of the routine, the perturbed image should be incorrectly attributed to $S_t$ by the sensor classifier. The steps of the PRNU spoofing algorithm are illustrated in Figure~\ref{fig:algorithm-graphic}(a). The sequence of modified images undergoing the perturbation routine is illustrated for two example iris images in Figure~\ref{fig:Illus_moreeg}.
\begin{figure*}
\begin{center}
\includegraphics[width=0.85\linewidth]{figs/DemoPerturbation_View2_Lessimgs.png}
\end{center}
\caption{Illustration of PRNU spoofing using images belonging to the source sensor JPC and the candidate images belonging to the target sensor Aoptix.}
\label{fig:Illus_moreeg}
\end{figure*}
\section{Experiments and Results}
\label{Expts}
In this section, we describe the datasets and sensors employed in this work, followed by the experiments conducted on the datasets. Results are reported and analyzed in the context of PRNU spoofing and iris recognition.
\subsection{Datasets}
Experiments are conducted using 11 different sensors from 11 iris datasets. The PRNU spoofing process typically involves a single source sensor and a single target sensor from the set of 11 sensors; the sensor details and image specifications are described in Table~\ref{Tab1:Datasets}. Thus, there can be a total of $\Perms{2}{11} =110$ combinations for PRNU spoofing. However, for the sake of brevity, we perform PRNU spoofing experiments involving 5 sensors: $\{Aop, JPC, IC, Cog, LG40\}$. Each of these 5 sensors serves as the source sensor while the remaining 4 sensors serve as target sensors one at a time, resulting in 20 different PRNU spoofing experiments.
\begin{table*}[h]
\centering
\caption{Confusion matrix for sensor identification involving unperturbed but resized images. The test noise residuals of images from 5 sensors are compared against reference patterns from 11 sensors. The last column indicates sensor identification accuracy.}
\label{ConfMat_Ori}
\scalebox{0.85}{
\begin{tabular}{|l|lllllllllll||l|}
\hline
\diagbox[width=5em]{\scriptsize{Actual}}{\scriptsize{Predicted}} & Aop & JPC & IC & Cog & \small{LG 40} & Pan&AD&Ever&OKI&IK& \small{LG 22} & \begin{tabular}[c]{@{}l@{}}Accuracy \\(\%)\end{tabular} \\ \hline
Aop & \textbf{900} & 1 & 2 & 1 & 9 &4&3&3&9&7&1 & 95.74 \\
JPC & 2 & \textbf{919} & 4 & 2 &5 &0&0&4&1&2&1 & 97.77 \\
IC & 0 & 0 & \textbf{940} & 0 & 0 &0&0&0&0&0&0 & 100 \\
Cog & 2 & 1 & 2 & \textbf{546} & 2 &0&2&0&0&5&0 & 97.51 \\
LG40 & 0 & 0 & 0 & 0 & \textbf{529} & 0&0&3&1&0&0 & 99.25 \\ \hline
\end{tabular}}
\end{table*}
\subsection{Sensor identification before PRNU spoofing}
Due to variations in the image sizes of the source and target sensors, all images were resized to a fixed spatial resolution of 160 $\times$ 120 to facilitate PRNU spoofing. We then evaluated the sensor identification accuracy on these resized images prior to PRNU spoofing, to determine whether resizing impacts sensor identification accuracy. Sensor identification involves deriving sensor reference patterns using 55 training images from each of the 11 sensors, as in~\cite{Ross_IWBF_17}, extracting test noise residuals from images belonging to the 5 sensors, and finally correlating them. The subjects in the training set and the test set are disjoint. The sensor identification accuracies and the corresponding confusion matrix are presented in Table~\ref{ConfMat_Ori}. The results indicate a very high sensor identification accuracy using the MLE PRNU scheme on the resized images; hence, we use the resized images in all subsequent experiments.
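As a sketch of the reference-pattern step, the MLE combining rule from the standard PRNU literature can be written as follows (the function name is illustrative, and the wavelet-based extraction of the noise residuals $W_i$ is taken as given):

```python
import numpy as np

def mle_reference_pattern(images, residuals):
    """MLE estimate of a sensor's PRNU factor from training images I_i
    and their noise residuals W_i:
        K_hat = sum_i(W_i * I_i) / sum_i(I_i ** 2)."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for I, W in zip(images, residuals):
        num += W * I
        den += I ** 2
    return num / np.maximum(den, 1e-12)  # guard against division by zero
```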
\subsection{Sensor identification after PRNU spoofing}
\label{B1B2_Reference}
The PRNU spoofing process involves perturbing the original image from a source sensor using a candidate image belonging to the target sensor, whose PRNU needs to be spoofed. The impact of the perturbations on spoofing the PRNU pattern is reported in terms of the \textit{Spoof Success Rate} (SSR), \ie, the proportion of test images from the source sensor classified as belonging to the target sensor after perturbation using Algorithm~\ref{alg:perturbations}. The results of spoofing are presented in Table~\ref{Tab:PerturbAop}.
We implemented the Baseline 1 and Baseline 2 algorithms described in Section~\ref{RelatedWork}. Baseline 2 is implemented after normalizing the source and target reference patterns with respect to the maximum intensity of the PRNU present in the two reference patterns. The normalization is required to account for the variation in PRNU strength across sensors. Ideally, the scalar terms $\gamma$ and $\beta$, which serve as parameters in the baseline algorithms, need to be optimized through grid search for each specific pair of source ($S_o$) and target ($S_t$) sensors. However, we set the scalars to a static value of 1 for two reasons: (i) ease of computation, and (ii) fair comparison with the proposed algorithm, which also uses fixed parameter values for all pairs of sensors. The baseline algorithms are state-of-the-art to the best of our knowledge and are, therefore, used for comparative evaluation. Examples of perturbed outputs of images spoofed using Baseline 1, Baseline 2, and the proposed algorithm are presented in Figure~\ref{fig:demoperturbed}.
\begin{table*}[t]
\centering
\caption{Results of PRNU spoofing where the target sensors (second column) are spoofed by perturbing the images from 5 source sensors, namely, Aop, JPC, IC, Cog and LG40 (first column). The test noise residual after the perturbation process is compared against the reference patterns of 11 sensors (see Table~\ref{Tab1:Datasets}). The last 3 columns indicate the proportion of perturbed images successfully classified as belonging to the target sensor, denoted as the Spoof Success Rate (SSR). The highest SSR values are bolded.}
\label{Tab:PerturbAop}
\scalebox{0.8}{
\begin{tabular}{|l|l|lllllllllll|c|c|c|}
\hline
\begin{tabular}[c]{@{}l@{}}Original\\ Sensor\end{tabular} & \begin{tabular}[c]{@{}l@{}}Target \\ Sensor\end{tabular} & \multicolumn{11}{c|}{Sensor classes compared against perturbed PRNU} & \begin{tabular}[c|]{@{}l@{}}SSR (\%) for \\proposed method \end{tabular} & \begin{tabular}[c|]{@{}l@{}}SSR (\%) for \\Baseline 1\end{tabular} & \begin{tabular}[c|]{@{}l@{}}SSR (\%) for \\Baseline 2\end{tabular}\\ \hline
\multirow{5}{*}{Aop} & & Aop & JPC & IC & Cog & LG40 & Pan & AD & Ever & OKI & IK & LG22 & & & \\ \cline{3-13}
& JPC & 4 & \textbf{894} & 3 & 3 & 8 & 2 & 2 & 2 & 9 & 12 & 1 &\textbf{ 95.11} & 92.55 &67.98 \\
& IC & 21 & 0 & \textbf{891} & 0 & 6 & 2 & 1 & 5 & 6 & 5 & 3 & \textbf{94.79} & 92.77 &13.51 \\
& Cog & 7 & 2 & 3 & \textbf{890} & 7 & 5 & 2 & 2 & 13 & 5 & 4 & \textbf{94.68} & 79.89 &0.21 \\
& LG40 & 66 & 4 & 4 & 4 & \textbf{836} & 3 & 0 & 4 & 7 & 8 & 4 & \textbf{88.94} & 79.15 &10.00 \\ \hline \hline
\multirow{4}{*}{JPC}
& Aop & \textbf{905} & 18 & 3 & 3 & 4 & 0 & 0 & 2 & 2 & 2 & 1 & \textbf{96.28} & 49.15 &1.91 \\
& IC & 2 & 209 & \textbf{712} & 2 & 4 & 2 & 1 & 3 & 2 & 2 & 1 & 75.74 & 99.79 &\textbf{100} \\
& Cog & 3 & 94 & 4 & \textbf{817} & 5 & 5 & 0 & 5 & 2 & 1 & 4 & \textbf{86.91 } & 35.53 &0.21 \\
& LG40 & 1 & 61 & 3 & 1 & \textbf{861} & 5 & 0 & 3 & 1 & 2 & 2 & \textbf{91.60} & 8.09 &9.26 \\ \hline \hline
\multirow{4}{*}{IC}
& Aop & \textbf{910} & 0 & 30 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{96.81 } & 48.72 &0 \\
& JPC & 0 & \textbf{797} & 143 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 84.79 & \textbf{100}&53.09 \\
& Cog & 0 & 0 & 243 & \textbf{697} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{74.15} & 46.70&0 \\
& LG40 & 0 & 0 & 46 & 0 & \textbf{894} & 0 & 0 & 0 & 0 & 0 & 0 &\textbf{95.11 } & 1.91 &0.11 \\ \hline \hline
\multirow{4}{*}{Cog}
& Aop & \textbf{552} & 0 & 0 & 0 & 2 & 0 & 2 & 0 & 0 & 4 & 0 & 98.57 & \textbf{100}&38.57 \\
& JPC & 1 & \textbf{546} & 0 & 0 & 1 & 0 & 2 & 2 & 0 & 8 & 0 & 97.50 & \textbf{100}&\textbf{100} \\
& IC & 2 & 0 & \textbf{545} & 2 & 2 & 0 & 2 & 0 & 1 & 5 & 1 & 97.32 & \textbf{100}&\textbf{100} \\
& LG40 & 1 & 0 & 0 & 0 & \textbf{550} & 0 & 2 & 1 & 0 & 6 & 0 & \textbf{98.21} & 82.32 &35.00 \\ \hline \hline
\multirow{4}{*}{LG40}
& Aop & \textbf{330} & 0 & 3 & 0 & 198 & 0 & 0 & 0 & 1 & 1 & 0 & \textbf{61.91} & 9.94&1.31 \\
& JPC & 0 & \textbf{491} & 0 & 0 & 38 & 0 & 0 & 2 & 1 & 1 & 0 &\textbf{92.12} & 9.38 &24.20 \\
& IC & 0 & 0 & \textbf{393} & 0 & 136 & 0 & 0 & 3 & 1 & 0 & 0 & 73.73 & 11.44&\textbf{99.44} \\
& Cog & 0 & 0 & 0 & \textbf{479} & 50 & 0 & 0 & 2 & 1 & 1 & 0 &\textbf{89.87} & 4.69 &0.19 \\ \hline \hline
\multicolumn{13}{|l|}{\textbf{Average SSR (\%)}}& \textbf{89.21}& 57.60&32.75 \\ \hline
\end{tabular}}
\end{table*}
\textbf{Results in Table~\ref{Tab:PerturbAop} indicate that the proposed algorithm outperforms the Baseline 1 technique in 15 out of 20 cases, and performs considerably better than the Baseline 2 method in 16 out of 20 cases. The average SSR of the proposed algorithm exceeds that of the baseline algorithms by a significant margin.} We believe that the parameters $\gamma$ and $\beta$ need to be tuned accurately for each pair of source and target sensors to ensure the success of the baseline algorithms. In contrast, the proposed algorithm succeeds with static parameter values: the patch size ($h_p\times w_p$), the threshold $\eta$, the learning rate $\alpha$, and the number of patches ($K$) (see Section~\ref{Derivingperturbations}). The PRNU is successfully spoofed by the proposed method in most cases, barring the case where the target sensor is Aoptix and the source sensor is LG 4000 ($\approx$62\% SSR). Inspection of the images acquired using the LG 4000 sensor reveals the presence of image padding, which may negatively impact the PRNU spoofing process.
Figure~\ref{Fig:Results_Perturb} shows an input image undergoing iterative perturbations. The original (unperturbed) image belongs to the Aoptix sensor and is perturbed using a candidate image from the target sensor, Cogent. The subsequent shift of the NCC values from being the highest for the source sensor (Aoptix) to being the highest for the target sensor (Cogent), indicates the success of the proposed method.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figs/Figure4_NEWEDIT.png}
\caption{\label{fig:demoperturbed}Example of PRNU spoofed images originating from the JPC 1000 sensor (first column) is illustrated for Baseline 1 (second column), Baseline 2 (third column) and the proposed method (last column). Here, the target sensor is Aoptix.}
\end{figure}
The average number of iterations required for successful PRNU spoofing varied between 200 and 2,200. Another experiment was conducted to study the impact of increasing the number of iterations on the proposed PRNU spoofing process, for the specific case where the source sensor is LG 4000 and the target to be spoofed is the Aoptix sensor. This pair was selected due to the poor SSR reported for it (see the fifth block in Table~\ref{Tab:PerturbAop}). We speculated that increasing the number of iterations would allow the PRNU spoofing process to succeed and thereby improve the SSR. In the new experimental set-up, the maximum number of iterations was therefore set to 6,000 (twice the earlier limit). As a result, the SSR increased considerably from 61.91\% to 79.73\%, \ie, a $\approx$18\% increase: 425 out of 533 test images belonging to the LG 4000 sensor were successfully classified as originating from the Aoptix sensor.
\begin{figure}[h]
\centering
\includegraphics[width=0.98\linewidth]{figs/Figure5_NEW.png}
\caption{\label{Fig:Results_Perturb}Intermediate images generated when an image from the Aoptix ($S_o$) sensor is perturbed using a candidate image from Cogent ($S_t$). For the sake of brevity, NCC values corresponding to the reference patterns of the first 5 sensors in Table~\ref{Tab1:Datasets} are mentioned in the figure. The arrows indicate the increase in the NCC values corresponding to the target sensor.}
\end{figure}
\subsection{Retaining biometric matching utility}
\label{IrisMatching}
The impact of the perturbations on iris recognition performance is evaluated next using the VeriEye iris matcher~\cite{VeriEye}. We designed three experiments for analyzing biometric matching performance. First, the match scores between all pairs of iris samples before perturbation were computed. In the second experiment, we computed the match scores between all pairs of perturbed samples. In the third experiment, we computed match scores between all iris samples before perturbation and all samples after perturbation. This is referred to as the cross-matching scenario. In the third set of experiments, the genuine scores are computed by employing 2 sample images (from the same subject): one sample belonging to the set of unperturbed images and the other sample from the set of perturbed images. The impostor scores are generated by pairing samples belonging to different subjects: one image is taken from the set of unperturbed images, while the other is taken from the set of perturbed images.
\begin{figure*}
\centering
\includegraphics[width=0.98\linewidth]{figs/results-verieye-matching_NEW.png}
\caption{\label{fig:roc-matching}ROC curves of matching performance obtained using the VeriEye iris matcher software. The terms `Original', `Perturbed' and `Original vs. Perturbed' indicate the three different matching scenarios (see Section~\ref{IrisMatching}). `Original' indicates matching only unperturbed images; `Perturbed' indicates matching only perturbed images; `Original vs. Perturbed' indicates the cross-matching case where unperturbed images are matched against perturbed images. Note that the curves obtained from perturbed images match very closely with the curves corresponding to the unperturbed images illustrating preservation of iris recognition for each sensor depicted in each column. The results are compared with Baseline 1 and 2 algorithms discussed in Section~\ref{B1B2_Reference}.}
\end{figure*}
Figure~\ref{fig:roc-matching} shows the ROC curves obtained from these three experiments. The ROC curves confirm that the perturbations do not negatively impact the matching utility. For all sensors except IrisCam (IC), the ROC curves of the perturbed images are within a $1\%$ deviation from those of the original samples before perturbation. Further, we note that the matching performance of original samples from the Cogent (Cog) sensor is degraded to begin with; we believe this is due to the low quality of the original images. Yet, the perturbations have not further deteriorated the matching performance, as evidenced by the before- and after-perturbation ROC curves being very similar to each other.
In addition, the iris recognition performance after PRNU spoofing using the baseline algorithms is analyzed. The results indicate that the proposed method is comparable to the baseline algorithms in terms of iris recognition performance. Furthermore, we conducted a fourth experiment, where we analyzed the matching performance of those LG4000 iris images that were perturbed to spoof the Aoptix sensor after increasing the number of iterations. The result confirms that increasing the number of iterations to improve the SSR does not degrade matching performance, as is evident in Figure~\ref{fig:roc-matching-und-more}.
\paragraph{In summary, the following salient observations in the context of both PRNU spoofing and iris recognition preservation can be made.}
\begin{itemize}
\item The PRNU pattern of a sensor can be successfully spoofed by \textit{directly} modifying an input image, without invoking the sensor reference pattern of the target sensor. Experiments are conducted using 11 iris sensors, and the PRNU spoofing process is demonstrated using 5 sensors and compared with existing approaches. Results show that the proposed spoofing method outperforms Baseline 1 by 31.6\% and Baseline 2 by 56.4\% in terms of average spoof success rate.
\item The proposed spoofing algorithm uses identical parameters, such as the size of patches and learning rate for all pairs of source and target sensors. This obviates the need to fine tune the method for different pairs of sensors.
\item The iris recognition performance of the images perturbed using the proposed algorithm is retained within 1\% of the original. This suggests the success of the proposed spoofing method in retaining the biometric utility of the modified images.
\end{itemize}
\section{Summary and Future Work}
\label{Summary}
In this work, we design a method for PRNU spoofing that preserves biometric recognition in the context of NIR iris images. In the proposed strategy, a test image belonging to a particular sensor is modified iteratively using patches from a candidate image belonging to a target sensor, whose PRNU is to be spoofed. We examine the impact of these perturbations on PRNU spoofing as well as iris recognition performance. Experiments are conducted in this regard using 11 sensors and compared with two existing PRNU spoofing algorithms. Results show that the proposed method can successfully spoof the PRNU pattern of a target sensor and does not significantly impact the iris recognition performance in a majority of the cases.
\par
Future work will involve testing the proposed PRNU spoofing process on a larger set of sensors and analyzing the impact of the number of candidate images on the spoof success rate. The iterative spoofing routine can be expedited by perturbing multiple image patches simultaneously (instead of one patch at a time). Finally, we will look into developing new sensor identification schemes that are resilient to spoof attacks, as well as methods to detect such attacks.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figs/results-verieye-matching_und-aop-more.eps}
\caption{\label{fig:roc-matching-und-more} Impact of increase in the number of iterations on iris recognition performance for the pair of LG 4000 (source) and Aoptix (target) sensors. }
\end{figure}
\section*{Acknowledgement}
We would like to thank Denton Bobeldyk and Steven Hoffman from the iPRoBe Lab for sharing their iris recognition code. This material is based upon work supported by the National Science Foundation under Grant Number $1618518$.
{\small
\bibliographystyle{ieee}
\balance
\section{Introduction}
While the Bayesian's obligation to specify a prior has been challenged the most, the obligation to specify a likelihood is perhaps even more consequential.
Implicit in the Bayesian paradigm is the assumption that a probabilistic model can be formulated which links parameters with data.
When constructing such a model, however, one has to reckon with the bias implications of model mis-specification (\cite{muller2013risk,grunwald2017inconsistency}).
Very often,
the primary aim is not modeling the data but rather estimating a statistic. Examples include M-estimators (\cite{huber2009robust}) or other extremum estimators such as censored quantile regression (\cite{powell1986censored}), instrumental and robust median regression (\cite{mood1950introduction}), nonlinear IV and GMM (\cite{hansen1982generalized}). For statistical inference, one may wish to obtain a post-data density summary of the parameter of interest rather than just a point estimate.
Bayesian prior-to-posterior inference can be carried out even when one is reluctant about committing to a particular generative model.
One way to liberate the inferential parameter $\theta_0$ from the likelihood is to relate it to data through a general loss function. In medicine, for instance, estimating the minimal clinically important difference (\cite{syring2017gibbs}) or performing boundary detection in image analysis (\cite{syring2020robust})
can be formulated as loss minimization problems. It is then possible to perform coherent Bayesian-style updating of prior beliefs, expressed in the prior $\pi(\theta)$, through the so-called Gibbs posteriors (\cite{zhang2006epsilon,zhang2006information,bissiri2016general}). This is a parametric generalization of classical Bayesian inference where the loss function is converted into a pseudo-likelihood function. Another way of expressing uncertainty about $\theta_0$ is through a prior $\pi(F)$, as opposed to the prior $\pi(\theta)$, on the unknown data generating distribution function $F_0$. Such non-parametric Bayesian inference (\cite{chamberlain2003nonparametric,lyddon2019general}) is based on the posterior of $F$ rather than $\theta$. We revisit both the non-parametric and parametric Bayesian approach (with Gibbs posteriors) to inference about parameter targets defined through loss functions.
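Concretely, for a loss function $\ell(\theta,x)$ tying $\theta$ to the data and a learning rate $\omega>0$, the Gibbs posterior of \cite{bissiri2016general} takes the form
\begin{equation*}
\pi_n(\theta) \;\propto\; \exp\Big\{-\omega \sum_{i=1}^{n}\ell(\theta, x_i)\Big\}\,\pi(\theta),
\end{equation*}
which reduces to the usual Bayesian posterior when $\omega=1$ and $\ell(\theta,x)=-\log p_\theta(x)$.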
Recalling the optimal information processing interpretation of the Bayes' rule (\cite{zellner1988optimal,knoblauch2019generalized}), this paper surveys various generalizations of Bayesian inference (including variational inference (\cite{jordan1999introduction,wainwright2008graphical}) and Gibbs posteriors (\cite{catoni2004statistical,zhang2006information})) under one unifying hat.
In particular, we adopt the optimization-centric point of view on Bayes' rule which allows a re-interpretation of Bayesian inference as regularized optimization.
Any commitment to a Bayesian posterior is a commitment to a particular optimization objective parametrized by the prior, the loss (log-likelihood) function and the class of post-data inferential densities (\cite{knoblauch2019generalized}). For example, variational Bayes forces the posterior belief into a specific parametric form, transforming the optimization from an infinite-dimensional into a finite-dimensional one. Gibbs posteriors, on the other hand, force priors and data into an exponentially additive relationship through loss functions.
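This optimization-centric view can be made explicit: the exact posterior is itself the solution of a variational problem (\cite{zellner1988optimal,knoblauch2019generalized}),
\begin{equation*}
q^{\star} \;=\; \arg\min_{q\in\mathcal{Q}}\;\Big\{\mathbb{E}_{q(\theta)}\big[-\log p_\theta^{(n)}(\Xn)\big] \;+\; \mathrm{KL}\big(q\,\|\,\pi\big)\Big\},
\end{equation*}
where taking $\mathcal{Q}$ to be the set of all densities on $\Theta$ recovers the exact posterior, restricting $\mathcal{Q}$ to a parametric family yields variational Bayes, and replacing the negative log-likelihood with a general loss yields Gibbs posteriors.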
\cite{alquier2016properties} combined the two by providing a VB computational alternative for Gibbs posteriors.
The repertoire of sampling methods for computing Gibbs posteriors depends on the availability of closed-form conditionals or computational resources. For example, classical MCMC sampling (using e.g. Metropolis-style samplers) may incur large computational costs (\cite{quiroz2018speeding,johndrow2020scalable}). As an alternative, this work investigates the
recently proposed generative bootstrap sampler (\cite{shin2020scalable}) in the context of Bayesian simulation. This generative sampler is trained by learning a deterministic (deep-learning) mapping between bootstrap weights and parameters in order to perform iid sampling. The iid aspect is particularly appealing because, once the mapping has been trained, the simulation cost is negligible compared to sequential samplers. We tailor the strategy of \cite{shin2020scalable} to Bayesian simulation from (1) approximate Gibbs posteriors for parametric Bayesian inference, and (2) non-parametric Bayesian posteriors.
The goal is to learn an implicit distribution prescribed by a deterministic mapping that
filters out bootstrap weights to produce samples from an approximate posterior.
Implicit distributions have been loosely defined as distributions whose likelihoods are unavailable but which can be sampled from (\cite{mohamed2016learning,li2018gradient}). While implicit distributions have been deployed for Bayesian computation before in the context of variational Bayes (\cite{ruiz2019contrastive}), we explore implicit bootstrap distributions to generate samples from approximate posteriors.
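To fix ideas, the non-amortized version of this scheme---the weighted likelihood bootstrap, where every posterior draw solves a Dirichlet-weighted loss minimization---can be written in a few lines for the simplest case of a mean (the function name is illustrative; the generative bootstrap sampler of \cite{shin2020scalable} replaces the per-draw optimization with a single trained mapping from weights to minimizers):

```python
import numpy as np

def wlb_posterior_mean(x, n_draws=1000, seed=0):
    """Weighted likelihood bootstrap for the mean: each posterior draw
    minimizes the Dirichlet-weighted squared-error loss
        sum_i w_i * (x_i - theta)**2,
    whose minimizer is the weighted average of the data."""
    rng = np.random.default_rng(seed)
    n = len(x)
    W = rng.dirichlet(np.ones(n), size=n_draws)  # rows w ~ Dir(1,...,1)
    return W @ x                                 # one draw per weight vector
```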
Our main purpose is to (1) highlight several recent developments in loss-based Bayesian inference, and to (2) draw attention to generative samplers which are potentially very promising for Bayesian simulation. We investigate their benefits as well as limitations. We start off by describing loss-based Bayesian inference in Section 2. Section 3 is dedicated to an overview of bootstrap techniques for Bayesian computation. Section 4 provides some theoretical insights and Section 5 describes the performance of the generative bootstrap sampler for Bayesian inference in some classical examples.
\subsection{Notation}
We denote by $[n]$ the set $\{1,2,\cdots,n\}$ and by $\bm 1_p\in\R^p$ the vector of all ones. We denote by $\|\cdot\|$ the Euclidean norm. For $X$, a random variable on a probability space $(\Omega,\Sigma,P)$, and $f$, a function on $\Omega$, we denote $\mathbb{P}(A)=\int_A dP$ for any $A\in\Sigma$, $\E[f(X)]=\int_\Omega fdP$, and also $Pf=\int_\Omega fdP$ to emphasize the underlying probability measure $P$.
\section{Setting the Stage}
Assume that we have observed a sequence of iid observations $\Xn=(x_1,\dots,x_n)'$ from an unknown sampling distribution $F_0$, i.e. $x_i \sim F_0$.
Bayesian inference traditionally requires the knowledge of the true underlying model for $F_0$.
This essentially boils down to specifying a family of likelihood functions $\mathcal F_\Theta=\{p_\theta^{(n)}(\Xn):\theta\in\Theta\}$ indexed by an inferential parameter $\theta\in\Theta\subseteq\R^p$.
When there is uncertainty about the parametric family $\mathcal F_\Theta$ and mis-specification occurs, likelihood-based inference can be misleading (\cite{kleijn2006misspecification}).
Without the obligation to construct a probabilistic model, it may often be easier to draw inference about a target parameter $\theta$ that is directly tied to $F_0$, such as the mean, median or another quantile. In other words, one can merely express an interest in some functional of $F_0$ as opposed to a parameter attached to a particular model $p_\theta^{(n)}(\Xn)=\prod_{i=1}^np_\theta(x_i)$. Allowing $\mathcal F_\Theta$ to be unknown, \cite{bissiri2016general} formalized a Bayesian framework for rational updating of beliefs by connecting data to parameters of interest via loss functions.
We revisit their development in the context of optimization-centric Bayesian inference.
\subsection{Optimization-Centric Bayes}\label{sec:ocb}
Bayesian learning from experience and evidence typically processes information from two sources: (1) a prior density $\pi(\theta)$ which extracts domain expertise, and (2) a likelihood function $p_\theta^{(n)}(\Xn)$ which distills the data.
Famously, the Bayes' rule produces a post-data density summary for the parameters in a form of a posterior distribution $\pi(\theta\,|\:\Xn)$.
There is, however, a conceptually different path for arriving at the posterior. Dating back to at least \cite{csiszar1975divergence} and \cite{donsker1975asymptotic}, Bayes' theorem can be interpreted as the
optimal information processing rule solving an infinite-dimensional optimization problem (\cite{zellner1988optimal}). Through the optimization-centric lens (\cite{knoblauch2019generalized}), Bayesian inference is viewed as regularized optimization
\begin{equation}\label{eq:opcentric}
\mathcal L(\ell; D;\Pi(\Theta))\equiv \arg\min_{q\in \Pi(\Theta)}\left\{\E_q\left[\sum_{i=1}^n \ell(\theta;x_i) \right] + D(q \| \pi)\right\}
\end{equation}
indexed by a triplet of parameters: (1) a loss function $\ell$, (2) a discrepancy function $D(\cdot \| \cdot)$ gauging the departure from the prior, and (3) a class of probability distributions $\Pi(\Theta)$ to optimize over. The classical likelihood-based Bayesian inference yields
\begin{equation}\label{eq:posterior_motivate}
\pi(\theta\,|\:\Xn)=\mathcal L\left(-\log p_\theta(x); KL;\Pi(\Theta)\right)
\end{equation}
where $\Pi(\Theta)$ is unconstrained and where $KL$ stands for the Kullback-Leibler divergence.
The solution of the optimization problem in \eqref{eq:posterior_motivate} can be rewritten more transparently as (see proof of Theorem 1 in \cite{knoblauch2019generalized} or \cite{csiszar1975divergence})
\begin{equation}\label{eq:insight}
\pi(\theta\,|\:\Xn)=\arg\min_{q\in \Pi(\Theta)} KL\left[q\,\,\big\|\,\, \pi(\theta)\exp\left\{-\sum_{i=1}^n\ell(\theta;x_i)\right\}Z^{-1} \right],
\end{equation}
where $Z=\int_\theta\exp\{-\sum_{i=1}^n\ell(\theta;x_i)\}\pi(\theta)d\theta$ is the norming constant (assumed to be finite).
From \eqref{eq:insight} it can be seen that the optimization problem \eqref{eq:opcentric} forces priors and losses (log-likelihoods) into an exponentially additive relationship.
If the KL term were absent from \eqref{eq:opcentric}, the solution would be a Dirac measure concentrated at the maximum likelihood estimator.
The incorporation of the prior $\pi(\theta)$ through the KL term in \eqref{eq:opcentric} allows one to obtain post-data densities for parameters in order to quantify uncertainty and to perform belief updating.
The formula \eqref{eq:insight} is a template for generating belief distributions in more general situations, as will be seen below, through information theory.
The information-theoretic representation \eqref{eq:opcentric} reveals that committing to any particular Bayesian
posterior is equivalent to committing to a particular optimization problem determined by (a) the loss function $\ell$, (b) the discrepancy metric $D$, and (c) the space of probability measures $\Pi(\Theta)$.
\cite{knoblauch2019generalized} presents various generalizations of Bayesian inference by altering the parameters $(\ell; D; \Pi(\Theta))$ of the optimization objective in \eqref{eq:opcentric}.
For example, constraining the class of distributions $\Pi(\Theta)$ to some parametric form is equivalent to the variational Bayes approach (\cite{wainwright2008graphical}).
Alternatively, replacing the self-information loss $\ell(\theta;x)=-\log p_\theta(x)$ with any other loss function, one obtains the so-called Gibbs posteriors (more below).
Following \cite{knoblauch2019generalized}, we regard \eqref{eq:opcentric} as the unifying umbrella behind various generalized Bayesian inference methods.
\subsection{Bayesian Inference with Gibbs Posteriors}\label{sec:loss}
Sometimes, the parameter of interest $\theta$ is defined indirectly through a general loss function $\ell(\theta;x)$ rather than through a likelihood function.
For an unknown distribution function $F_0$, from which we observe an iid vector $\Xn$, one can define such an inferential target $\theta_0$
as the minimizer of the average loss \citep{bissiri2016general}
\begin{equation}\label{eq:theta0}
\theta_0= \theta(F_0)\equiv\arg\min\limits_{\theta\in\Theta}\int\ell(\theta;x)dF_0(x).
\end{equation}
Replacing $F_0$ with the empirical distribution function $P_n$ of $\Xn$, one obtains an empirical risk minimizer, for example an M-estimator \citep{huber2004robust,maronna2006robust,huber2009robust}. In econometrics, extremum estimators (e.g. the censored quantile regression of \cite{powell1986censored} or the instrumental and robust median regression of \cite{mood1950introduction}) are also defined as maximizers of a random finite-sample criterion function whose population counterpart is maximized uniquely at some point $\theta_0\in\Theta$.
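As a toy numerical illustration (ours, not taken from the references above), the following Python sketch verifies that the empirical risk minimizer in \eqref{eq:theta0} with $F_0$ replaced by $P_n$, under the absolute-error loss $\ell(\theta;x)=|x-\theta|$, is the sample median:

```python
import numpy as np

# Toy check (illustrative): for the absolute-error loss, the empirical risk
# minimizer over theta is the sample median.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=501)

# average loss under the empirical measure P_n, evaluated on a grid of thetas
grid = np.linspace(x.min(), x.max(), 5001)
risk = np.abs(x[:, None] - grid[None, :]).mean(axis=0)
theta_hat = grid[risk.argmin()]   # empirical risk minimizer (an M-estimate)
```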
In these examples, $\theta_0$ generally cannot be understood as a model parameter but rather as a solution to an optimization (loss minimization) problem.
\cite{chernozhukov2003mcmc} note that implementing such estimators can be challenging and introduce quasi-Bayesian estimators for estimation and inference.
\cite{bissiri2016general} propose a related general framework for updating belief distributions as a Bayesian extension of M-estimation.
Indeed, even when the parameter cannot be directly assigned any particular model interpretation, it is still possible to perform Bayesian belief updating and uncertainty quantification.
In particular, \cite{bissiri2016general} suggest a decision-theoretic representation of beliefs about $\theta$ via a composite loss function over probability measures which gauges fidelity to data and departure from the prior (\cite{berger1993statistical}).
Curiously, their loss function corresponds to the optimization objective in \eqref{eq:opcentric} where the log-likelihood has been replaced by a general loss function $\ell(\theta;x)$ and where $D(\cdot\|\cdot)$ is the KL divergence.
Similarly as in Section \ref{sec:ocb}, it can be shown that the optimal distribution which minimizes this cumulative loss function (without constraining $\Pi(\Theta)$) has an exponentially additive form
\begin{equation}\label{eq:gibbs}
\tilde \pi(\theta\,|\:\Xn)= \frac{\pi(\theta)\exp\left\{-\alpha \sum_{i=1}^n\ell(\theta;x_i)\right\}}{\int_{\Theta}\pi(\theta)\exp\left\{-\alpha\sum_{i=1}^n\ell(\theta;x_i)\right\}d\theta},
\end{equation}
where $\alpha=1$. Other values of $\alpha>0$ have been considered to regulate the learning rate (see \cite{holmes2017assigning,grunwald2012safe}).
The distribution \eqref{eq:gibbs} is the ``quasi-Bayesian'' posterior introduced in \cite{chernozhukov2003mcmc} and it became known as the Gibbs posterior, a probability distribution for random estimators defined by an empirical measure of risk (\cite{catoni2004statistical,zhang2006information}).
The inferential object \eqref{eq:gibbs} now does not have the interpretation of the usual posterior but rather an optimal prior-to-posterior updating distribution that satisfies coherence and information preservation requirements.
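For intuition, the Gibbs posterior \eqref{eq:gibbs} can be evaluated directly on a grid in simple univariate problems. The following Python sketch (with illustrative choices of data, prior and loss that are ours, not the cited papers') normalizes the Gibbs posterior for the median under the absolute-error loss:

```python
import numpy as np

# Gibbs posterior on a grid: absolute-error loss, learning rate alpha = 1,
# and a diffuse N(0, 10^2) prior (all illustrative choices).
rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, scale=1.0, size=200)
alpha = 1.0

theta = np.linspace(-4.0, 6.0, 4001)
dtheta = theta[1] - theta[0]
log_prior = -0.5 * theta**2 / 10.0**2                      # up to a constant
log_post = log_prior - alpha * np.abs(x[:, None] - theta[None, :]).sum(axis=0)
log_post -= log_post.max()                                 # stabilize before exp
post = np.exp(log_post)
post /= post.sum() * dtheta                                # normalize on the grid

post_mean = (theta * post).sum() * dtheta
```

The posterior mode sits exactly at the sample median, since the exponent is minimized there.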
The motivation for Gibbs posteriors can be traced back to the early work of Laplace (\cite{laplace1774memoire}) who regarded a transformation of a least square criterion function as a statistical belief and obtained point estimates of that distribution ``without any assumption about the error distribution'' (\cite{stigler1975studies}).
In thermodynamics, the risk is interpreted as an energy function.
In the PAC-Bayesian approach (\cite{shawe1997pac,McAllester1998somepac-bayesian}), the Gibbs distribution appears as the probability distribution that minimizes the upper bound of an oracle inequality on the risk of estimators.
Estimators derived from Gibbs posteriors, such as quasi-Bayesian mean or median \citep{chernozhukov2003mcmc}, usually show excellent performance and yet their actual implementation can be challenging. The usual recommendation (\cite{dalalyan2012mirror,alquier2013sparse}) is to sample from a Gibbs posterior using MCMC (see e.g. \cite{green2015bayesian} or \cite{ridgway2014pac} who propose tempering sequential Monte Carlo which may be too slow for practical use). \cite{alquier2016properties} propose a variational Bayes approximation which can achieve the same rate of convergence as the actual Gibbs posterior
and which has a polynomial time complexity in convex problems. Our work explores bootstrap techniques and generative bootstrap samplers for approximating Gibbs posteriors, going beyond the development in \cite{lyddon2019general}.
Before delving into the implementation aspects, we distinguish the parametric inference approach with Gibbs posteriors (\cite{chernozhukov2003mcmc,bissiri2016general}) from the non-parametric Bayesian learning approach (\cite{chamberlain2003nonparametric,lyddon2019general}).
\subsection{Bayesian Non-parametric Learning}\label{sec:nbl}
Gibbs posteriors lock priors and losses in an exponentially additive relationship in order to achieve coherent Bayesian updating of beliefs.
The uncertainty about the inferential target is a-priori represented in the prior distribution $\pi(\theta)$. The Bayesian non-parametric learning (NPL) approach (\cite{lyddon2019general,chamberlain1996nonparametric}), on the other hand,
expresses uncertainty about $\theta$ through a prior on the unknown distribution function $F_0$.
Defining the parameter of interest as a functional of $F_0$ (as in \eqref{eq:theta0}), the focus is shifted from $\theta_0$ to $F_0$.
\cite{lyddon2019general} propose a two-step Bayesian learning process by assigning a Dirichlet process (DP) prior $F\sim DP(\alpha, F_\pi)$ on the unknown distribution function $F$.
The base measure $F_\pi$ conveys prior knowledge about the sampling distribution $F_0$ and, indirectly, also the parameter $\theta_0$.
In the first step, Bayesian non-parametric learning is used to form beliefs about the joint nonparametric density of the data; draws of the non-parametric density are then made to repeatedly compute the extremum parameter of interest. \cite{lyddon2019general} call this sampling procedure the loss-likelihood bootstrap. \cite{chamberlain1996nonparametric} first introduced this strategy using a particular DP posterior (supported only on observations $\Xn$ with Dirichlet-distributed probabilities for each state) that corresponds to the Bayesian bootstrap (\cite{rubin1981bayesian}).
A suitable choice for the base measure $F_\pi$ is the empirical distribution of historical data. The second component of the DP prior is the concentration parameter $\alpha>0$, which can be re-interpreted as the effective sample size from the prior $F_\pi$.
Assigning a DP prior on $F_0$, the posterior $F\,|\:\Xn\sim DP(\alpha+n, G_n)$ is also DP with the base measure updated as $G_n=\frac{\alpha}{\alpha+n}F_\pi+\frac{1}{\alpha+n}\sum_{i=1}^n\delta_{x_i}$. The posterior for the inferential target $\theta$ is then determined from this posterior through the mapping \eqref{eq:theta0}. In particular, under the stick-breaking representation (\cite{sethuraman1994constructive}) of the DP posterior, draws from $F\,|\: \Xn$ are almost surely discrete and the parameter of interest, for each $F$, can be computed as
$$
\theta(F)=\arg\min_\theta\sum_{k=1}^\infty w_k \times \ell(\theta;y_k)
$$
where $w_k$'s are the stick-breaking beta-products and where $y_k$ are iid samples from $G_n$. Drawing $F$ from the DP posterior requires infinite time when $F_\pi$ is continuous. \cite{fong2019scalable} suggest an approximate sampling scheme based on a truncation approximation of $F_\pi$ and bootstrap-style sampling. The idea is to generate $T$ fake data points $\tilde x_j$ from $F_\pi$ and assign each one a random weight $\tilde w_j$. The weights $\bm{w}=(w_1,\dots, w_n)'$ for the observed data and $\tilde{\bm{w}}=(\tilde w_1,\dots,\tilde w_T)'$ for the fake data are jointly sampled from a Dirichlet distribution $Dir(1,\dots, 1,\alpha/T,\dots,\alpha/T)$. The posterior sample of the parameter $\theta(F)$ is then obtained by minimizing an augmented objective $\sum_{i=1}^n w_i \ell(\theta;x_i)+\sum_{j=1}^T\tilde w_j\ell(\theta;\tilde x_j)$. The Bayesian bootstrap is recovered as a special case when $\alpha=0$.
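The following Python sketch illustrates this truncated scheme for the mean functional $\theta(F)=\int x\,dF(x)$, chosen because the weighted minimizer of the squared-error loss is a weighted average in closed form (all tuning values are illustrative):

```python
import numpy as np

# Posterior bootstrap sketch for the mean functional: Dirichlet weights over
# observed data and T pseudo data points drawn from the centering measure F_pi.
rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=1.0, size=100)       # observed data
alpha, T = 5.0, 50                                 # concentration, truncation level
x_fake = rng.normal(loc=0.0, scale=1.0, size=T)    # pseudo data from F_pi (here N(0,1))

n = len(x)
draws = []
for _ in range(2000):
    # joint Dirichlet weights: Dir(1,...,1, alpha/T,...,alpha/T)
    w = rng.dirichlet(np.r_[np.ones(n), np.full(T, alpha / T)])
    # the weighted minimizer of sum_i w_i (x_i - theta)^2 is the weighted mean
    draws.append(w[:n] @ x + w[n:] @ x_fake)
draws = np.array(draws)
```

The posterior mean of the draws shrinks the sample mean toward the prior mean by the factor $\alpha/(\alpha+n)$, as the DP posterior base measure suggests.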
The non-parametric Bayesian learning approach (with the Bayesian bootstrap (\cite{rubin1981bayesian}) or the loss-likelihood bootstrap (\cite{lyddon2019general})) requires numerous re-computations of the extremum estimates in order to construct the posterior distribution over the parameter of interest. This can be prohibitively slow for optimization problems that are costly to solve. In this work, we investigate the possibility of deploying the generative bootstrap sampler of \cite{shin2020scalable}, which learns a deterministic mapping of the weights $(\bm w,\tilde{\bm w})$ to obtain samples of $\theta(F)$.
\section{Bootstrap for Bayesian Computation}
Sampling methods are innate to Bayesian computation. Parallelizable iid sampling (with Approximate Bayesian Computation (\cite{pritchard1999population,beaumont2002approximate}) or the bootstrap (\cite{newton1994approximate,newton2021weighted})) has certain advantages over sequential sampling with MCMC.
ABC techniques are useful when the likelihood is easier to sample from than to evaluate. Bootstrap-style samplers (\cite{newton1994approximate,rubin1981bayesian,efron2012bayesian,newton2021weighted,nie2020bayesian,shin2020scalable}), on the other hand, are beneficial when optimization is easier than, for example, sampling from conditionals.
Below, we review recent developments in bootstrap-style posterior computation for Bayesian inference with loss functions.
Weighted likelihood bootstrap (WLB) of \cite{newton1994approximate} is a method for approximately sampling from a posterior distribution of a well-specified parametric statistical model. Samples are generated by computing randomly-weighted maximum likelihood estimates with the weights drawn from a suitable Dirichlet distribution.
Such bootstrap samples are obtained as minimizers of randomly re-weighted objective functions, e.g.
\begin{equation}\label{eq:theta_hat}
\wh \theta_{\bm w}=\arg\min_\theta \left\{\sum_{i=1}^n w_i \times \ell(\theta;x_i)-\lambda\log\pi(\theta)\right\}
\end{equation}
for a given set of weights $\bm w\in\R^n$ drawn from some distribution $H$ and for the negative log-likelihood loss function $\ell(\cdot;x_i)$.
The WLB method is not an exact method and does not accommodate a prior (i.e. \cite{newton1994approximate} assume $\lambda=0$). \cite{newton2021weighted} added a log-prior penalty to incorporate the prior with $\lambda>0$.
Conceivably, one can consider any general loss function $\ell(\cdot;x_i)$ and use a similar strategy to sample from approximate Gibbs posteriors. We illustrate this in a Bayesian least absolute deviation regression example in Section \ref{sec:LAD}.
The WLB sampler can in principle be used to construct approximate quasi-posteriors (\cite{chernozhukov2003mcmc}), where the prior may need to be incorporated in some form, either through the distribution $H$ or through a penalty term in \eqref{eq:theta_hat}.
This aims at parametric-type inference with Gibbs posteriors (as described in Section \ref{sec:loss}).
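As a simple hedged illustration of WLB draws via \eqref{eq:theta_hat}, consider a unit-variance Gaussian likelihood with a mean-zero Gaussian prior (our choices, not those of the cited papers); the penalized weighted minimizer is then available in closed form:

```python
import numpy as np

# WLB sketch: Gaussian likelihood with unit variance, N(0, tau2) prior,
# penalty scale lam; the weighted, prior-penalized minimizer is closed-form.
rng = np.random.default_rng(3)
x = rng.normal(loc=1.5, scale=1.0, size=200)
n = len(x)
lam, tau2 = 1.0, 4.0

def wlb_draw():
    w = n * rng.dirichlet(np.ones(n))   # random weights with sum_i w_i = n
    # argmin_theta  sum_i w_i * 0.5*(x_i - theta)^2 + lam * theta^2 / (2*tau2)
    return (w @ x) / (w.sum() + lam / tau2)

draws = np.array([wlb_draw() for _ in range(2000)])
```

Each draw solves the randomly re-weighted, log-prior-penalized objective exactly, so the sampler is embarrassingly parallel.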
Loss-likelihood bootstrap (\cite{lyddon2019general}) reinterprets WLB as a sampler from an exact posterior over a parameter of interest defined through a loss function under an unknown sampling distribution.
This aims at non-parametric Bayesian learning (as described in Section \ref{sec:nbl}).
This is also the strategy pursued in \cite{chamberlain2003nonparametric} based on the Bayesian bootstrap (\cite{rubin1981bayesian}).
\subsection{Deep Bootstrap Sampling}\label{sec:deep_bootstrap_sampling}
Recall that the loss-likelihood bootstrap of \cite{lyddon2019general} generates samples
\begin{equation}
\wh\theta_{\bm w}=\theta(F_{\bm w})\quad \text{where}\quad F_{\bm w}=\sum_{i=1}^n w_i\,\delta_{x_i}\label{eq:thetaw}
\end{equation}
with the $w_i$'s drawn from, e.g., a Dirichlet distribution and where $\wh\theta_{\bm w}$ can be viewed as a minimizer of the expected loss under $F_{\bm w}$, as defined in \eqref{eq:theta0}.
At a more intuitive level, each $\wh \theta_{\bm w}$ in \eqref{eq:theta_hat} can be seen as a flexible functional of $\bm w$.
This suggests a compelling possibility of treating the distribution of $\wh\theta_{\bm w}$ (be it an approximation to the Gibbs posterior or the non-parametric Bayesian posterior)
as an implicit distribution. A distribution is implicit when it is not possible to evaluate its density but it is possible to draw samples from it. One typical way to draw from an implicit distribution is to first sample a noise vector and then push it through a deep neural network (\cite{mohamed2016learning}). Implicit distributions have been deployed within variational Bayes (\cite{ruiz2019contrastive,pequignot2020implicit}) to obtain flexible distributional approximations to the posterior. Treating bootstrap distributions implicitly,
the generative bootstrap sampler (\cite{shin2020scalable}) draws samples by learning a flexible mapping which transports weights $\bm w$ onto parameters $\theta(F)$. A similar strategy can be used for the WLB approach of \cite{newton1994approximate} where $\wh\theta_{\bm w}$ is linked to $\bm w$ through \eqref{eq:theta_hat}.
Instead of re-computing the optimization problem \eqref{eq:theta_hat} for a freshly drawn set of weights $\bm w$ at each step, \cite{shin2020scalable} suggest training a generator mapping, say $\wh \theta(\bm w)$,
which has to be learned only once. This mapping is designed to pass random weights $w_j$'s to yield samples from the bootstrap distribution, a sampling process which has negligible cost once the mapping has been learned.
The following Lemma (Theorem 2.1 in \cite{shin2020scalable}) justifies this line of reasoning.
\begin{lemma}(\cite{shin2020scalable})\label{lemma1}
Assume that the function $\hat G:\R^n\rightarrow\R^p$ is defined as
\begin{equation}\label{eq:Ghat}
\hat G(\cdot)=\arg\min_{G\in\mathcal G} \E_{\bm w}\sum_{i=1}^n w_i \times \ell(G(\bm w); x_i)
\end{equation}
where $\E_{\bm w}$ is the expectation with respect to $\bm w\sim H$. Moreover, assume that the solution $\wh\theta_{\bw}$ defined in \eqref{eq:theta_hat} is unique for each given $\bm w\in\R^n$. Then,
if $\mathcal G$ is rich enough to express any function $G$, we have
\begin{equation}\label{eq:ghat}
\wh G(\bm w)=\wh \theta_{\bm w}.
\end{equation}
\end{lemma}
\proof
The proof is given in Section 2.2 of \cite{shin2020scalable} and rests on the simple observation that, since
$\sum_{i=1}^n w_i\ell(\theta;x_i)\geq \sum_{i=1}^n w_i\ell(\wh\theta_{\bw};x_i)$ for every $\theta$,
one has
$$
\E_{\bm w}\sum_{i=1}^n w_i \ell(\wh G(\bm w); x_i)\geq \E_{\bm w}\sum_{i=1}^n w_i \ell(\wh\theta_{\bw}; x_i),
$$
which, together with the optimality of $\wh G$ and the uniqueness of $\wh\theta_{\bw}$, implies that $\wh G(\bw)=\wh\theta_{\bw}.$
The result in Lemma \ref{lemma1} implies an important ``isomorphism'' between bootstrap weights $\bm w$ and parameters $\theta$ which can be exploited for faster computation of belief distributions (Gibbs posteriors in Section \ref{sec:loss}) or non-parametric Bayesian learning posteriors (Section \ref{sec:nbl}).
For training $G(\cdot)$, one may want to search within mappings $\mathcal G$ that are compositions of non-linear transformations, i.e. deep learning mappings.
Due to the expressive power of neural networks (see e.g. \cite{barron1993universal}), the neural network estimator $\hat G(\cdot)$ can be made arbitrarily close to the optimal mapping that satisfies \eqref{eq:ghat}.
This work uses feed-forward deep learning mappings $\mathcal G$, hence the name Deep Bootstrap Sampler.
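To see the mechanics of \eqref{eq:Ghat} in the simplest possible case, note that for the squared-error loss the exact bootstrap map $\wh\theta_{\bm w}$ is the weighted mean, which is linear in $\bm w$ once the weights are normalized to sum to $n$; a linear generator then suffices in place of a deep network. The following numpy sketch (ours, a deliberately minimal stand-in for a neural network) trains such a generator by stochastic gradient descent:

```python
import numpy as np

# The exact bootstrap map for the squared-error loss is the weighted mean
# G(w) = (w @ x) / n when the weights sum to n, which is linear in w; a linear
# generator G(w) = a @ w is therefore rich enough in this special case.
rng = np.random.default_rng(4)
n = 20
x = rng.normal(size=n)
a = np.zeros(n)                          # parameters of the linear generator

lr = 0.05
for _ in range(10000):
    w = n * rng.dirichlet(np.ones(n))    # bootstrap weights with sum_i w_i = n
    theta = a @ w                        # generator output G(w)
    # gradient of sum_i w_i * (theta - x_i)^2 with respect to a
    grad = 2.0 * (n * theta - w @ x) * w
    a -= lr / n**2 * grad
# after training, a ~= x / n, so a @ w reproduces the weighted mean for any w
```

Since the objective is minimized pointwise in $\bm w$ at the optimum, the stochastic gradient vanishes there and the iterates converge to the exact map rather than to a noise floor.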
\begin{figure}[t]\centering
\includegraphics[width=.75\linewidth, height=.23\textheight]{figures/pic.png}
\caption{Prototype deep learning architecture from \cite{shin2020scalable}.}
\label{fig:architecture}
\end{figure}
The nested structure of neural networks allows gradients to be efficiently evaluated using back-propagation (\cite{hecht1992theory,rumelhart1986learning}). Once the gradient is computed, a stochastic gradient descent algorithm can be used to update the parameters iteratively. \cite{shin2020scalable} also suggest a specific neural network architecture which re-introduces the weights at each layer.
Such a deep approximating class with $L$ network layers, each with $n_l$ neurons, can be written as
\begin{align*}
g(\bm w)&=g_L(\bm Z_L)\\
\bm Z_{l+1}&=\{g_l(\bm Z_l),\bm w\}\quad \text{for}\quad 1\leq l\leq L-1.
\end{align*}
where $\bm Z_1=\bm w$ and where each function $g_l:\R^{n_l}\rightarrow \R^{n_{l+1}}$ squashes a multivariate linear combination of the input variables $\bm Z_l$ by a squashing function $T$, e.g. the Rectified Linear Unit (ReLU) or a sigmoid function. The parameters of the linear combinations for each layer (i.e. intercepts and slopes) are encapsulated in a vector parameter $\bm\phi$.
{In the neural network architecture proposed by \cite{shin2020scalable}, the number of trainable parameters in the $l$-th layer is $O((n_l+n)n_l)$. When $n\gg n_l$, this number grows linearly in $n$ and thus reducing the input dimension $n$ significantly improves computational efficiency. \cite{shin2020scalable} proposed a subgroup bootstrap strategy which ties the $n$ weights together in $S\ll n$ equi-sized blocks. The weights in the same group are assigned the same value and the $S$ group-level weights are drawn from some distribution $H_\alpha$. The number of trainable parameters in the $l$-th layer is then reduced to $O((n_l+S)n_l)$. The subgroup strategy significantly boosts the computational advantage of Deep Bootstrap samplers. The architecture incorporating this strategy is shown in Figure \ref{fig:architecture}. }
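A minimal sketch of the subgroup weight construction (with illustrative values of $n$ and $S$):

```python
import numpy as np

# Subgroup bootstrap weights: S group-level weights are drawn and expanded so
# that all n_S observations within a block share the same weight.
rng = np.random.default_rng(5)
n, S = 1000, 10
n_S = n // S                              # size of each subgroup

w_tilde = S * rng.dirichlet(np.ones(S))   # S x Dir(1,...,1) group weights
w = np.repeat(w_tilde, n_S)               # tie weights within each block
```

The expanded weights still sum to $n$, matching the scaling of the un-grouped bootstrap weights, while the generator only ever sees the $S$-dimensional input.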
We summarize the Deep Bootstrap sampler for Bayesian parametric learning with Gibbs posteriors in Algorithm \ref{alg:gibbs} and for Bayesian non-parametric learning in Algorithm \ref{alg:nonparam}.
{\begin{algorithm}[t]\scriptsize
\spacingset{0.7}
\KwData{Data $\{x_i,1\leq i\leq n\}$, number of training epochs $T$, number of points to sample $N$, number of Monte Carlo samples $K$, prior $\pi$, learning rate $\eta$, number of subgroups $S$ for observed data, $n_S=n/S$ the size of each subgroup.}
\KwResult{$\theta^i, i=1,2,\cdots,N$.}
\hspace{-0.005\linewidth}\colorbox{gray!30}{\makebox[0.99\linewidth][l]{Training stage:}}
\textbf{Initialize weights $\phi$ of the fitted function $\hat G$.}
\For{$t=1,2,\cdots, T$}{
\For{$k=1,2,\cdots,K$}{
\textbf{Draw $\tilde w^k_{1:S}\sim S\times Dir(1,\cdots,1)$}.
\textbf{Set} $w_{(j-1)n_S+m}^k=\tilde w_j^k$ for all $m=1,2,\cdots,n_S$, $j=1,2,\cdots,S$.
}
\textbf{Update} $\phi\leftarrow\phi-\eta\partial_{\phi}\left[\sum_{k=1}^K\left[\sum_{j=1}^{n} w_j^k\ell\left(G(\tilde w^k_{1:S}); x_j\right)-\log\pi\left(G(\tilde w^k_{1:S})\right)\right]\right]$.
}
\hspace{-0.005\linewidth}\colorbox{gray!30}{\makebox[0.99\linewidth][l]{Sampling stage:}}
\For{$i=1,2,\cdots,N$}{
\textbf{(2.1)} Draw $\tilde w_{1:S}\sim S\times Dir(1,\cdots,1)$.\\
\textbf{(2.2)} Evaluate $\theta^i=\hat G_\phi(\tilde w_{1:S})$.
}
\caption{\bf : Deep Bootstrap Sampler for Gibbs Posterior Learning}\label{alg:gibbs}
\end{algorithm}
}
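To make the training/sampling split of Algorithm \ref{alg:gibbs} concrete, the following toy sketch instantiates it for a normal-mean model with squared loss and a flat prior (so the prior term drops), taking the generator $\hat G_\phi$ to be linear in the group weights; this linearity is an assumption for illustration only (the paper uses the deep architecture of Figure \ref{fig:architecture}). In this case the weighted mode is the weighted average, so the trained map can be checked in closed form:

```python
import random

random.seed(0)
n, S = 200, 10
n_s = n // S
x = [random.gauss(2.0, 1.0) for _ in range(n)]

def draw_group_weights():
    g = [random.gammavariate(1.0, 1.0) for _ in range(S)]
    t = sum(g)
    return [S * gi / t for gi in g]              # ~ S x Dir(1,...,1)

# Linear "generator": theta = b + sum_s a[s] * w~_s, trained by stochastic
# gradient descent on the weighted squared loss sum_j w_j (x_j - theta)^2 / 2.
a, b, eta = [0.0] * S, 0.0, 1e-4
for _ in range(4000):
    wt = draw_group_weights()
    w = [wt[j // n_s] for j in range(n)]         # expand to per-observation weights
    theta = b + sum(ai * wi for ai, wi in zip(a, wt))
    grad = sum(wj * (theta - xj) for wj, xj in zip(w, x))   # d(loss)/d(theta)
    b -= eta * grad
    a = [ai - eta * grad * wi for ai, wi in zip(a, wt)]

# Sampling stage: new weights in, approximate weighted mode out, no re-fitting.
wt = draw_group_weights()
sample = b + sum(ai * wi for ai, wi in zip(a, wt))
exact = sum(wt[j // n_s] * x[j] for j in range(n)) / n      # closed-form mode
print(abs(sample - exact) < 0.1)
```

The point of the sketch is the amortization: once $\hat G_\phi$ is trained, each posterior draw costs a single Dirichlet draw and one forward pass, with no per-draw optimization.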
{
\begin{algorithm}[t]\scriptsize
\spacingset{0.7}
\KwData{Data $\{x_i,1\leq i\leq n\}$, number of prior pseudo samples $n'$, number of training epochs $T$, number of points to sample $N$, number of Monte Carlo samples $K$, concentration parameter $\alpha$, learning rate $\eta$, number of subgroups $S$ for observed data, number of subgroups $S'$ for prior pseudo data, $n_S=n/S, n_{S'}=n/S'$.}
\KwResult{$\theta^i, i=1,2,\cdots,N$.}
\hspace{-0.005\linewidth}\colorbox{gray!30}{\makebox[0.99\linewidth][l]{Training stage:}}
\textbf{Initialize weights $\phi$ of the fitted function $\hat G$.}
\textbf{Approximate the prior} by drawing ${x}'_{1:n'}\stackrel{\iid}{\sim}F_\pi$.
\textbf{Create} the enlarged (observed + prior pseudo) sample $\{x_1,\cdots,x_n,x_1',\cdots,x_{n'}'\}$.
\For{$t=1,2,\cdots, T$}{
\For{$k=1,2,\cdots,K$}{
{\vspace{-0.3cm} \hspace{5cm}\color{white} \tiny ahoj}\\
\textbf{Draw $\tilde w^k_{1:(S+S')}\sim (S+S')\times Dir(1,\cdots,1,\alpha/n',\cdots,\alpha/n')$.}
\textbf{Set} $w_{(j-1)n_S+m}^k=\tilde w_j^k$ for any $m\in[n_S],\,j\in[S]$ and $w_{n+(j'-1)n_{S'}+m'}^k=\tilde w_{j'+S}^k,$ for any $m'\in[n_{S'}], j'\in[S']$.
}
\textbf{Update} $\phi\leftarrow\phi-\eta\partial_{\phi}\left[\sum_{k=1}^K\left[\sum_{j=1}^{n} w_j^kl\left(\hat G_\phi(\tilde w^k_{1:(S+S')}); x_j\right)+\sum_{j=1}^{n'} w_{j+n}^kl\left(\hat G_\phi(\tilde w^k_{1:(S+S')}); x'_j\right)\right]\right]$.
}
\hspace{-0.005\linewidth}\colorbox{gray!30}{\makebox[0.99\linewidth][l]{Sampling stage:}}
\For{$i=1,2,\cdots,N$}{
{\vspace{-0.3cm} \hspace{5cm}\color{white} \tiny ahoj}\\
\textbf{(2.1)} Draw $\tilde w^i_{1:(S+S')}\sim (S+S')\times Dir(1,\cdots,1,\alpha/n',\cdots,\alpha/n')$.\\
\textbf{(2.2)} Evaluate $\theta^i=\hat G_\phi(\tilde w^i_{1:(S+S')})$.
}
\caption{\bf : Deep Bootstrap Sampler for Bayesian NPL}\label{alg:nonparam}
\end{algorithm}
}
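The asymmetric-Dirichlet weight construction of Algorithm \ref{alg:nonparam} (the Draw/Set steps over the enlarged observed + prior pseudo sample) can be sketched as follows, again using the normalized-Gamma representation (a sketch, not the paper's code):

```python
import random

def npl_weights(n, n_prime, alpha, S, S_prime, rng=random):
    """Draw (S + S') group weights ~ (S + S') x Dir(1,..,1, a/n',..,a/n')
    and expand them over the observed sample and the prior pseudo sample."""
    assert n % S == 0 and n_prime % S_prime == 0
    shapes = [1.0] * S + [alpha / n_prime] * S_prime
    g = [rng.gammavariate(s, 1.0) for s in shapes]
    total = sum(g)
    wt = [(S + S_prime) * gi / total for gi in g]
    n_s, n_sp = n // S, n_prime // S_prime
    w_obs = [wt[j // n_s] for j in range(n)]               # weights on x_1..x_n
    w_prior = [wt[S + j // n_sp] for j in range(n_prime)]  # weights on x'_1..x'_n'
    return w_obs, w_prior

random.seed(1)
w_obs, w_prior = npl_weights(n=200, n_prime=200, alpha=1.0, S=10, S_prime=10)
print(sum(w_prior) < sum(w_obs))   # small alpha/n' shapes downweight the prior part
```

With a small concentration parameter $\alpha$ the Gamma shapes $\alpha/n'$ are tiny, so the prior pseudo observations receive near-zero weight in most draws, consistent with larger $\alpha$ encoding stronger prior beliefs.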
\section{Theory}
The quantification of the speed of concentration around the truth (as the sample size goes to infinity) is now a standard way of assessing the quality of posteriors (\cite{ghosal2000convergence}).
As \cite{shalizi2009dynamics} states, such Bayesian asymptotic results are ``frequentist license for Bayesian practice''. Below, we review recent literature related to our development and provide theoretical support for certain aspects of the weighted likelihood bootstrap using connections to model misspecification.
\cite{bhattacharya2019bayesian} studied concentration of the so-called fractional $\alpha$-posteriors obtained by raising the likelihood to some fixed power $\alpha\in(0,1)$. We study bootstrap-style posteriors where each likelihood contribution is raised
to a \emph{different random} power $w_i$ with $\sum_{i=1}^nw_i=n$. {\cite{bhattacharya2019bayesian} proved that the fractional $\alpha$-posteriors concentrate on the so-called $\alpha$-divergence neighborhoods around the truth. The $\alpha$-divergence is a valid divergence measure when $\alpha\in(0,1)$, and the rate of contraction is inflated by a multiplicative factor $\frac{1}{1-\alpha}$. This line of argument, unfortunately, does not extend easily to our bootstrap-style posteriors.\footnote{Unless $w_i=1$ for all $i$'s, there must be some weight $w_i> 1$. The existence of such weights invalidates the upper bound in \cite{bhattacharya2019bayesian}, and we find it difficult to define a similar (and valid) divergence measure that could be properly upper bounded.}} We study the distribution of extremal (modal) estimators
$\wh\theta_{\bm w}$ defined in \eqref{eq:theta_hat} when $\lambda=1$.
While the $\alpha$-posteriors keep the weight fixed and the randomness stems from treating $\theta$ as a random variable with a prior, we treat $\hat\theta_{\bm w}$ as an estimator where the randomness comes from $\bm w$. In a related paper, \cite{han2019statistical} study contraction of weighted posterior distributions incorporating both the randomness of $\theta$ (through the weighted posterior distribution under the prior $\pi(\theta)$) and the randomness of $\bm w$ (from the distribution of weights $\pi(\bm w)$).
Our theory has two parts. First, in Theorem \ref{thm:concentration}, we obtain a conclusion similar to \cite{han2019statistical} but using a different proof technique. There we focus on the entire weighted posteriors, assuming that $\ell(x;\theta)$ in \eqref{eq:theta_hat} is the unit information loss. We refer to \cite{syring2020gibbs} for concentration-rate results for actual Gibbs posteriors with sub-exponential loss functions.
Second, we are interested in the contraction of a distribution of weighted posterior modes $\hat\theta_{\bm w}$ around $\theta_0$. This property is summarized in Theorem \ref{thm:mode}, where we consider losses other than just the unit information loss.
First, we want to understand the behavior of the weighted posteriors
\[
\pi_{\bm w}(\theta\mid\Xn)=\frac{\pi(\theta)p_{\theta}^{(n),\bm w}(\Xn)}{\int_{\Theta}\pi(\theta)p_{\theta}^{(n),\bm w}(\Xn)d\theta}\propto e^{u_{\theta,\bm w}(\Xn)}p_{\theta}^{(n)}(\Xn)\pi(\theta),
\]
where the exponential tilting factor
$$
u_{\theta,\bm w}(\Xn)=\log\frac{p_{\theta}^{(n),\bm w}(\Xn)p_{\theta_{0}}^{(n),1-\bm w}(\Xn)}{p_{\theta}^{(n)}(\Xn)}
$$
combines individually weighted likelihood terms (with weights $w_i$ and $1-w_i$)
$$
p_{\theta}^{(n),\bm w}(\Xn)=\prod_{i=1}^np_{\theta}^{w_i}(X_i)\quad\text{and}\quad
p_{\theta_0}^{(n),1-\bm w}(\Xn)=\prod_{i=1}^np_{\theta_0}^{1-w_i}(X_i).
$$
Following \cite{kaji2021metropolis}, for any fixed $\bm w$, $\pi_{\bm w}(\theta\mid\Xn)$ can be viewed as the posterior density under a mis-specified likelihood
\[
\wt{p}_{\theta}^{(n),\w}(\Xn)=\frac{e^{u_{\theta,\bm w}(\Xn)}p_{\theta}^{(n)}(\Xn)}{C_{\theta,\bm w}}\quad\text{where}\quad C_{\theta,\bm w}=\int_{\mathcal{X}}e^{u_{\theta,\bm w}(\Xn)}p_{\theta}^{(n)}(\Xn)d\Xn,
\]
and a modified prior
\begin{equation}\label{eq:modified_prior}
\wt{\pi}_{\bm w}(\theta)=\frac{C_{\theta,\bm w}\times \pi(\theta)}{\int_{\Theta}C_{\theta,\bm w}\times \pi(\theta)\,d\theta}.
\end{equation}
Note that $P_{\theta_0}^{(n)}$ is \emph{not} of the same form as $\mathcal{\wt{P}}^{(n),\bm w}=\{\tilde P_\theta^{(n),\bm w}:\theta\in\Theta\}$. This new mis-specification perspective allows us to use a proof technique different from \cite{bhattacharya2019bayesian} and \cite{han2019statistical}.
It is known (\cite{kleijn2006misspecification}) that under the mis-specified model $\wt{p}_{\theta}^{(n),\w}$, the posterior will concentrate around the KL-projection point $\theta_{\bm w}^*$ defined as
\begin{equation}\label{eq:dfn_theta_W^*}
\theta_{\bm w}^*=\arg\min_{\theta\in\Theta}-P_{\theta_0}^{(n)}\log\frac{\wt{p}_{\theta}^{(n),\w}(\Xn)}{p_{\theta_0}^{(n)}(\Xn)}.
\end{equation}
At first sight, since the model $\wt{p}_{\theta}^{(n),\w}$ is mis-specified in a way that depends on the value of $\bm w$, the projection point $\theta_{\bm w}^*$ will not necessarily be the true target $\theta_0$. Surprisingly, under some mild conditions on $\bm w$, $\theta_0$ is the unique minimizer of Equation \eqref{eq:dfn_theta_W^*}, i.e., $\theta_0=\theta_{\bm w}^*$.
This is stated in the next lemma.
\begin{lem}\label{lemma:unique_minimizer}
Assume that: (1)
there exists $i\in[n]$ such that $w_i>0$, and (2) the random variable $Z(X_i)=\frac{p_{\theta}\left(X_{i}\right)}{p_{\theta_0}\left(X_{i}\right)}$ is degenerate only if $\theta=\theta_0$.
Then
\begin{equation*}
\theta_0=\arg\min_{\theta\in\Theta}-P_{\theta_0}^{(n)}\log\frac{\wt{p}_{\theta}^{(n),\w}(\Xn)}{p_{\theta_0}^{(n)}(\Xn)},\quad\text{i.e.},\quad \theta_{\bm w}^*=\theta_0.
\end{equation*}
\end{lem}
\proof
See Appendix Section \ref{sec_appendix:unique_minimizer_proof}.
The lemma says that even though we perturb the likelihood, the truth $\theta_{0}$ still
minimizes the KL divergence between the mis-specified likelihood $\wt{p}_{\theta}^{(n),\w}$
and the true likelihood $p_{\theta_{0}}^{(n)}$.
{The intuitive explanation
is that
\[
\wt{p}_{\theta}^{(n),\w}=C^{-1}_{\theta,\w}p_{\theta_{0}}^{(n),1-\w}p_{\theta}^{(n),\w}.
\]
The mis-specified likelihood $\wt{p}_{\theta}^{(n),\w}$
can be thereby viewed as the original likelihood $p_{\theta_{0}}^{(n)}$ adjusted
towards $p_{\theta}^{(n)}$, where the strength of adjustment depends
on $\bm w$. When $\theta=\theta_{0}$, we always have $\wt{p}_{\theta}^{(n),\w}=p_{\theta_{0}}^{(n)}$,
regardless of the weights $\bm w$. }
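Lemma \ref{lemma:unique_minimizer} can be sanity-checked numerically. For a unit-variance normal-mean model (an illustrative assumption, not a case treated specially in the text), $p_{\theta_0}^{1-w_i}p_{\theta}^{w_i}$ normalizes to $N((1-w_i)\theta_0+w_i\theta,1)$, so the objective in \eqref{eq:dfn_theta_W^*} becomes a sum of Gaussian KL terms $w_i^2(\theta-\theta_0)^2/2$, and a grid search recovers $\theta_{\bm w}^*=\theta_0$:

```python
import random

random.seed(1)
n, theta0 = 50, 1.3
g = [random.gammavariate(1.0, 1.0) for _ in range(n)]
w = [n * gi / sum(g) for gi in g]                 # n x Dir(1,...,1) weights

def objective(theta):
    # For unit-variance normals, p_{theta0}^{1-w_i} p_theta^{w_i} normalizes
    # to N((1 - w_i) * theta0 + w_i * theta, 1), so each KL term in the
    # objective is the Gaussian KL (theta0 - m_i)^2 / 2 with tilted mean m_i.
    total = 0.0
    for wi in w:
        m_i = (1 - wi) * theta0 + wi * theta
        total += (theta0 - m_i) ** 2 / 2
    return total

best = min((theta0 + 0.01 * k for k in range(-200, 201)), key=objective)
print(abs(best - theta0) < 1e-9)   # the KL-projection point equals theta0
```

The objective vanishes at $\theta=\theta_0$ and is strictly positive elsewhere as soon as some $w_i>0$, mirroring assumption (1) of the lemma.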
Using Lemma \ref{lemma:unique_minimizer}, we can now deduce a concentration-rate result for the whole posterior $\pi_{\bm w}(\theta\mid X^{(n)})$ around the truth $\theta_0$.
Define $R_n(c_0)=\{(a_1,a_2,\cdots,a_n)'\in\mathbb{R}^n: a_i\ge 0, \sum_{i=1}^n a_i=n, \max_{i\in[n]} a_i\le c_0\log n\}$.
\begin{thm}\label{thm:concentration}
Assume that there exists a constant $c_0>0$ and
a sequence of real numbers $\epsilon_{n}>0$ with
$\epsilon_{n}\rightarrow0,n\epsilon_{n}^{2}\rightarrow\infty$, such that the sequence of sets $R_n(c_0)$
satisfies
\begin{equation}\label{eq:weight_regularization}
\mathbb{P}_{\bm w}[\w_n\in R_n(c_0)]\rightarrow 1.
\end{equation}
Under technical Assumptions \ref{assump_appendix:metric}, \ref{assump_appendix:test}, \ref{assump_appendix:prior} in the Appendix (Section \ref{sec_appendix:concentration_proof}) and the assumptions in Lemma \ref{lemma:unique_minimizer} we have
\[
P_{\theta_{0}}^{(n)}P_{\bm w}\left[\Pi_{\boldsymbol{w}}\left(\left\Vert \theta-\theta_{0}\right\Vert >M_{n}\epsilon_{n}\mid X^{(n)}\right)\right]\rightarrow0
\]
for any sequence $M_{n}\rightarrow\infty$ as $n\rightarrow\infty$, where $\Pi_{\bm w}(\cdot\mid X^{(n)})$ is the probability measure associated with the posterior density $\pi_{\bm w}(\cdot\mid X^{(n)})$.
\end{thm}
\proof See Appendix Section \ref{sec_appendix:concentration_proof}.
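The weight-regularization condition \eqref{eq:weight_regularization} is easy to probe by simulation for the $n\times Dir(1,\cdots,1)$ weights used throughout: the maximum of $n$ normalized exponentials concentrates near $\log n$, comfortably below $c_0\log n$ for moderate $c_0$. A Monte Carlo sketch with illustrative constants:

```python
import math
import random

def max_dirichlet_weight(n, rng=random):
    """Max coordinate of w ~ n x Dir(1,...,1), via normalized exponentials."""
    g = [rng.gammavariate(1.0, 1.0) for _ in range(n)]
    t = sum(g)
    return max(n * gi / t for gi in g)

random.seed(2)
n, c0, reps = 2000, 3.0, 200
inside = sum(max_dirichlet_weight(n) <= c0 * math.log(n) for _ in range(reps))
print(inside / reps)   # empirical P[w_n in R_n(c0)], essentially 1
```

Since the maximum of $n$ standard exponentials is $\log n + O_P(1)$, the empirical frequency of $\{\w_n\in R_n(c_0)\}$ is essentially one already for $c_0=3$, consistent with \eqref{eq:weight_regularization}.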
\cite{han2019statistical} obtained a similar result under stronger differentiability assumptions on both the prior and the likelihood to achieve a parametric rate (up to a logarithmic factor). Our Theorem \ref{thm:concentration} assumes only prior-mass and testing conditions, yielding a more general concentration rate (not necessarily $1/\sqrt{n}$). In addition, our proof technique is different and relies on the model mis-specification perspective.
We are ultimately interested in the convergence of posterior modes $\hat\theta_{\bm w}$ around $\theta_0$ where the randomness is driven by $\bm w$.
{Theorem \ref{thm:concentration} implies the existence of a point estimator based on $\Pi_{\bm w}(\cdot\mid X^{(n)})$ that converges to the truth at a rate $\epsilon_n$ for each $\bm w$ (e.g., using Theorem 8.7 in \cite{ghosal2017fundamentals}).
In Theorem 8.8 of \cite{ghosal2017fundamentals}, for example, it is shown that for convex distances, the posterior mean converges at a rate $\epsilon_n$.
The convergence rate may also carry over to the posterior mode under suitable local asymptotic normality conditions. We have chosen a different, more direct, route to show the convergence of the posterior modes $\hat\theta_{\bm w}$.
}
We utilize tools for establishing convergence rates of M-estimators (\cite{wellner2013weak}). {We denote by $p_\theta(x)=\exp\{-l(\theta;x)\}$ an exponentiated loss, not necessarily a likelihood, and we show that $\hat\theta_{\bm w}$ concentrates around the inferential target $\theta_0$ defined in Equation \eqref{eq:theta0}.
}
We denote $\mathcal{M}_\epsilon(\theta_0)=\left\{\log[p_{\theta}/p_{\theta_0}]:d(\theta,\theta_0)<\epsilon\right\}$, {and $P_0$ the probability measure of $X_i$'s, i.e., $X_i\stackrel{\text{iid}}{\sim} P_0$.}
For any function class $\mathcal{F}$, we write its Rademacher complexity with respect to $P_{0}$ for the sample size $n$ as $\mathcal{R}_n(\F)$, i.e.,
\[
\mathcal{R}_n(\F)=P^{(n)}_{0}\left[\sup_{f\in\F}\left|\frac{1}{n}\sum_{i=1}^{n}\sigma_if(X_i)\right|\right],
\]
where $\P(\sigma_i=1)=\P(\sigma_i=-1)=1/2$ for any $i\in[n]$.
Denote $\phi_n(\epsilon)=\sqrt{n}\mathcal{R}_n(\mathcal{M}_\epsilon(\theta_0))$.
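For intuition, $\mathcal{R}_n(\F)$ can be estimated by Monte Carlo whenever the supremum over $\F$ is tractable. The toy class below, $\F=\{x\mapsto\theta x:|\theta|\le 1\}$, is an illustrative choice (not $\mathcal{M}_\epsilon(\theta_0)$); its supremum is attained at $\theta=\pm 1$:

```python
import random

def rademacher_complexity(xs, reps=2000, rng=random):
    """Monte Carlo estimate of R_n(F) for the toy class
    F = {x -> theta * x : |theta| <= 1}; the supremum over F equals
    |n^{-1} sum_i sigma_i x_i|, attained at theta = +-1."""
    n, est = len(xs), 0.0
    for _ in range(reps):
        s = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        est += abs(sum(si * xi for si, xi in zip(s, xs)) / n)
    return est / reps

random.seed(3)
xs = [random.gauss(0.0, 1.0) for _ in range(400)]
rc = rademacher_complexity(xs)
print(rc)   # of order n^{-1/2}, roughly 0.04 for n = 400 here
```

For this one-parameter class the complexity decays at the parametric order $n^{-1/2}$; richer classes $\mathcal{M}_\epsilon(\theta_0)$ yield the slower growth rates of $\phi_n(\epsilon)$ governing Theorem \ref{thm:mode}.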
The following theorem establishes the convergence rate of the posterior modes $\hat\theta_{\bm w}$. We allow the prior $\pi$ to depend on the sample size $n$ and therefore write $\pi_n$ for emphasis.
\begin{thm}\label{thm:mode}
Assume that:
\begin{itemize}
\item[(1)] (Existence of a suitable semi-metric) There exists a semi-metric $d(\cdot,\cdot)$ on $\Theta$ such that \[P_0\log [p_{\theta_0}/p_{\theta}]\ge d^2(\theta,\theta_0)\quad\text{for all}\quad \theta\in\Theta,\]
\item[(2)] (Bounded loss) $\sup_{\theta\in\Theta}\left|\log p_{\theta}(X)\right|<\infty$,
\item[(3)] (Weight regularization) The weights satisfy
\begin{equation}
w_i\,\text{i.i.d. with}\,\,\E w_i=1, \|w_i\|_{2,1}=\int_0^\infty\sqrt{P\left(|w_i|>x\right)}dx<\infty,\quad\text{or}
\end{equation}
\begin{equation}
\bm w_n=(w_1,\cdots,w_n)\sim n\times Dir(c,\cdots,c),\quad\text{for some fixed }c>0.
\end{equation}
\item[(4)] (Proper growth of Rademacher complexity) There exist constants $C>0$, $\gamma\in(0,2)$ such that for all $\epsilon>0,c>1$,
\[
\phi_n(\epsilon)\le C\sqrt{n}\epsilon^2,\quad
\phi_n(c\epsilon)\le c^{\gamma}\phi_n(\epsilon).
\]
\end{itemize}
Then, for any $M_n\rightarrow\infty$ as $n\rightarrow\infty$, we have
\[
P_{0}^{(n)}\P_{\bm w}\left(d(\hat\theta_{\bm w},\theta_0)>M_n\epsilon_{n}\right)\rightarrow 0,
\]
where $\epsilon_{n}\rightarrow 0$ satisfies
\begin{equation}\label{eq:mode_rate}
\epsilon_{n}^{-2}\phi_n(\epsilon_{n})\le\sqrt{n}\,\,\text{for all }n\quad\text{and}\,\,
\sup_{\epsilon\ge\epsilon_{n}}\frac{\sup_{\theta\in\Theta:\epsilon<d(\theta,\theta_0)\le 2\epsilon}\log\frac{\pi_n(\theta)}{\pi_n(\theta_0)}}{n\epsilon^{2}}\rightarrow 0.
\end{equation}
\end{thm}
\proof See Appendix \ref{sec_appendix:proof_mode}.
\begin{rem}[Discussion on the convergence rate]
Theorem \ref{thm:mode} establishes a general (not necessarily $\sqrt{n}$) rate of convergence for the distribution of posterior modes.
The first part of Equation \eqref{eq:mode_rate} is similar to previous conclusions for M-estimators (\cite{wellner2013weak}). It shows that the convergence rate for $\hat\theta_{\bm w}$ is driven by the growth of $\mathcal{R}_n(\mathcal{M}_\epsilon(\theta_0))$ around $\epsilon=0$, which is determined by the richness of the function class $\{\log p_{\theta}:\theta\in\Theta\}$ around $\theta=\theta_0$. For example, with monotone densities, $\epsilon_n=n^{-1/2}$; with convex densities in $\mathbb{R}^d$, $\epsilon_n=n^{-2/(d+4)}$ for $d\le 3$, $\epsilon_n=n^{-1/4}[\log n]^{1/2}$ for $d=4$, and $\epsilon_n=n^{-1/d}$ for $d>4$. We refer interested readers to \cite{wellner2013weak,pollard1991asymptotics} for more examples.
The second part of \eqref{eq:mode_rate} is less common; it says that the convergence rate is also affected by the prior $\pi_n(\theta)$. In particular, sufficient prior mass has to be placed around $\theta=\theta_0$, otherwise the convergence rate is slowed down.
\end{rem}
{
\begin{rem}[Errors from deep learning approximation]
We note that all previous results refer to the actual posterior, not its Deep Bootstrap approximation. In other words, we do not consider the estimation error incurred in obtaining $\hat\theta_{\bm w}$ through the deep learning approximation. Theoretically, there always exists a sufficiently large neural network whose approximation error is sufficiently small (\cite{hornik1989multilayer,guhring2020error}). Thus, if we allow the network size to grow at a proper rate, one could show the existence of a sequence of networks whose mappings converge at a rate no slower than $\epsilon_n$. The actual estimation error of the trained neural network, however, would also need to account for the optimization of the network.
\end{rem}
}
\section{Deep Bootstrap in Bayesian Practice}
This section presents several stereotypical toy examples of inference about parameters determined by loss-functions.
We aim to illustrate the potential of the deep bootstrap sampler for Bayesian inference.
\subsection{Bayesian Support Vector Machines}\label{sec:svm}
We first demonstrate the performance of the Deep Bootstrap sampler for Bayesian non-parametric learning (Section \ref{sec:nbl}) in binary classification tasks.
Given data $\{(y_i,\bm x_i)\in\{-1,1\}\bigotimes\mathbb{R}^p\}_{i=1}^n$, where $\x_i$ denotes the covariates of the $i^{th}$ observation with a binary label $y_i\in\{-1,+1\}$, binary classification aims to predict $y$ from $\x$ using the sign of $f(\x)$, where $f:\mathbb{R}^p\rightarrow \mathbb{R}$ is a function to be learned. Various loss functions have been designed to learn $f$, including the Support Vector Machine (SVM) loss (\cite{cortes1995support})
\begin{equation*}
L(y,f(\x))=\max\{0,1-yf(\x)\},
\end{equation*}
and the logistic loss (\cite{pearl1920rate})
\begin{equation*}
L(y,f(\x))=\log(1+e^{-yf(\x)}).
\end{equation*}
As suggested by \cite{rosasco2004loss}, we choose the SVM loss with a linear $f$,
i.e., we minimize the empirical loss $\sum_{i=1}^n l(\beta,\bm\theta;y_i,\x_i)$ with
\begin{equation}\label{eq:svm}
l(\beta,\bm\theta;y_i,\x_i)=\max\{0,1-y_i\left(\beta+\x_i'\bm\theta\right)\},
\end{equation}
where $\beta\in\mathbb{R}$ is the bias and $\bm\theta=(\theta_1,\cdots,\theta_p)'\in\R^p$ are the regression coefficients.
For the DP prior, following \cite{fong2019scalable}, we use the prior centering measure
\begin{align}
&F_\pi(y,\x)=F_\pi(\x)F_\pi(y),\quad\text{where}\label{eq:NPL_prior}
\\
&F_\pi(\x)=\frac{1}{n}\sum_{i=1}^n\delta_{\x_i},\quad\text{with }\delta_{\x}\text{ the Dirac delta measure centered at }\x,\nonumber
\\
&F_\pi(y)=\text{Bernoulli}(0.5).\nonumber
\end{align}
As discussed in \cite{fong2019scalable}, this choice of $F_\pi$ assumes that $y$ and $\x$ are independent and is thus equivalent to assuming $\bm\theta=\bm 0_p$ a priori, which induces effects similar to shrinkage priors on $\bm\theta$ (for example, penalties on $\|\bm\theta\|_1$ or $\|\bm\theta\|_2$). Regarding the choice of the concentration parameter $\alpha$, a larger $\alpha$ represents a stronger belief in the prior. Here, following \cite{fong2019scalable}, we set $\alpha=1.0$.
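Sampling the prior pseudo data from the centering measure \eqref{eq:NPL_prior} amounts to resampling covariate vectors from the observed ones and flipping a fair coin for the labels; a minimal sketch (function and variable names are ours):

```python
import random

def draw_prior_pseudo_sample(xs, n_prime, rng=random):
    """Draw n' iid pseudo observations (y', x') from the centering measure
    F_pi(y, x) = F_pi(x) * F_pi(y): x' is resampled uniformly from the
    observed covariate vectors, y' is a fair +/-1 coin independent of x'."""
    return [(1 if rng.random() < 0.5 else -1, rng.choice(xs))
            for _ in range(n_prime)]

random.seed(7)
xs = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(20)]
pseudo = draw_prior_pseudo_sample(xs, n_prime=50)
print(len(pseudo))   # n' pseudo pairs with labels in {-1, +1}
```

Because the labels are independent of the covariates under $F_\pi$, the pseudo data carry no discriminative signal, which is exactly what encodes the prior belief $\bm\theta=\bm 0_p$.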
To generate simulated data sets, we adopt the setting in \cite{lyddon2019general} and extend it to multivariate $\x$'s: we sample $n$ i.i.d. data points $\left\{(y_i, \bm x_i)\right\}_{i=1}^n$ from
\begin{equation}\label{eq:svm_generation}
\P\left(y_i=1\right)=\P\left(y_i=-1\right)=1/2, \quad \bm x_i\mid y_i\sim N(y_i\bm 1_p,\Sigma),
\end{equation}
where $\Sigma$ is a $p\times p$ matrix with $1$'s on the diagonal and $\rho$'s off the diagonal. We consider $\rho=0$ (independent covariates) and a more challenging case $\rho=0.6$ (equi-correlated covariates). For brevity, results for the independent case are deferred to the Appendix (Section \ref{sec_appendix:svm}).
Note that {with Equation \eqref{eq:svm_generation}, the inferential target $\theta_0=(\beta,\bm\theta)$ is a solution to the optimization problem defined in Equation \eqref{eq:theta0}, which does not have a closed form solution for the loss \eqref{eq:svm} and which does not necessarily satisfy $\beta=0,\bm\theta=\bm 1_p$. For example, when $p=10,\rho=0.6$, we have $\beta\approx 0,\bm\theta\approx0.2\times\bm 1_p$ by solving \eqref{eq:theta0} numerically. The misalignment between the inferential target $\theta_0=(\beta,\bm\theta)$ and the truth $(0,\bm 1_p)$ is not harmful for binary classification tasks as prediction is usually of more interest.
}
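The numerical value $\bm\theta\approx 0.2\times\bm 1_p$ quoted above can be reproduced with a one-dimensional search: by the symmetry of \eqref{eq:svm_generation} the target has $\beta=0$ and $\bm\theta=c\,\bm 1_p$, and $Z=y(\bm 1_p^\top\x)\sim N(p,\,p+p(p-1)\rho)$, so the population SVM risk reduces to $E[\max(0,1-cZ)]$. A Monte Carlo sketch (grid and sample sizes are illustrative):

```python
import random

random.seed(4)
p, rho, reps = 10, 0.6, 20000
mu = float(p)                              # mean of Z = y * (1_p' x)
sd = (p + p * (p - 1) * rho) ** 0.5        # sd of Z under the generative model
z = [random.gauss(mu, sd) for _ in range(reps)]

def risk(c):
    # Monte Carlo estimate of E[max(0, 1 - c * Z)]; common draws across c
    return sum(max(0.0, 1.0 - c * zi) for zi in z) / reps

c_star = min((k / 200 for k in range(1, 101)), key=risk)
print(0.1 < c_star < 0.3)   # the minimizer is close to the 0.2 quoted above
```

Using common random draws of $Z$ across the grid of $c$ values keeps the comparison between candidate risks stable despite Monte Carlo noise.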
We consider varying sample sizes $n\in\{50,500, 1000, 2000, 5000\}$ and dimensions of covariates $p\in\{10, 50, 100, 200, 500\}$.
\begin{figure}[t]
\centering
\includegraphics[width=.45\linewidth, height=.15\textheight]{figures/NPL_final_cor_fig1}
\includegraphics[width=.45\linewidth, height=.15\textheight]{figures/NPL_final_cor_fig2}
\caption{\small Two-dimensional posterior density plot for $\theta_i$'s from deep Bootstrap sampler and the truth for the Bayesian support vector machine example. We set $n=50,p=10,\rho=0.6,\alpha=1.0$.
}
\label{fig:NPL_2d_density}
\end{figure}
We use the deep learning architecture (Figure \ref{fig:architecture}) introduced in \cite{shin2020scalable} to fit the Deep Bootstrap sampler. As discussed in \cite{shin2020scalable}, the benefit of this structure is that the re-introduction of weights at each hidden layer helps ``gradient flow in training deep neural networks'' and thus alleviates potential variance underestimation issues. Using the notation in Section \ref{sec:deep_bootstrap_sampling}, we set $L-1=3$ fully connected (linear function + ReLU) hidden layers, each containing 128 neurons. Our sensitivity analysis in Appendix \ref{sec_appendix:complexity} shows that the network architecture has an impact on the approximation performance of the Deep Bootstrap sampler, yet this impact is minimal once the network complexity is moderately large (for example, $L-1=3$ and $n_l=128$ for $l=1,2,3$ suffices for all experiments we tried). The implementation follows Algorithm \ref{alg:nonparam} and is coded in the machine learning framework \texttt{PyTorch} (\cite{paszke2017automatic}).
As suggested by \cite{shin2020scalable}, we use the subgroup bootstrap with $S=100$ and $S'=10$. The subgroup bootstrap strategy significantly boosts the computational benefit of the Deep Bootstrap sampler (DBS) without hurting its performance much in our experiments.
We set the number of Monte Carlo samples to $K=100$; the learning rate $\eta$ is initialized at $0.0003$ and follows a decay rate of $t^{-0.3}$, where $t$ is the current epoch (\cite{shin2020scalable}). For all $(n,p)$ settings we tried, the training usually stabilizes after around 2\,000 iterations, and we set $T=4\,000$ to ensure convergence.
The RMSprop algorithm (\cite{graves2013generating}) is used to update the parameters instead of the vanilla gradient descent step in Algorithm \ref{alg:nonparam}.
Sampled points $\{\theta^1,\cdots,\theta^N\}$ from the true bootstrap distribution are generated following Algorithm 2 in \cite{fong2019scalable}, which requires solving the optimization problem
\begin{equation}\label{eq:svm_target}
\theta^j=\arg\min_{(\beta,\bm\theta)} \sum_{i=1}^n w_i\times l(\beta,\bm\theta;y_i,\x_i)
+\sum_{i=1}^{n'} w_{i+n}\times l(\beta,\bm\theta;y_i',\x_i'),\quad\forall j=1,2,\cdots,N,
\end{equation}
with $l(\beta,\bm\theta;y_i,\x_i)$ defined in \eqref{eq:svm}, $(\x_i',y_i')\stackrel{\text{iid}}{\sim}F_\pi(y,\x)$ with $F_\pi$ defined in \eqref{eq:NPL_prior}, and the weights $(w_1,\cdots,w_{n+n'})$ drawn anew for each $j$ from $Dir(1,\cdots,1,\alpha/n',\cdots,\alpha/n')$. Here $n'$ is the number of pseudo-samples and we set $n'=n$.
We solve the optimization problem \eqref{eq:svm_target} using the function \texttt{linear$\_$model.SGDClassifier} in the Python library \texttt{sklearn} (\cite{scikit-learn}). This function supports various loss functions and a different weight $w_i$ for each sample point, and optimizes the loss via stochastic gradient descent. We use the adaptive learning rate schedule (\cite{scikit-learn}) and tune the initial learning rate over $\{0.0001, 0.001, 0.01, 0.1\}$. The maximum number of epochs is set to $20\,000$ with early stopping turned on to save computation.
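Leaving the \texttt{SGDClassifier} details aside, one weighted-bootstrap draw of the form \eqref{eq:svm_target} can also be sketched directly as plain subgradient descent on the weighted hinge loss. The toy data, tuning constants, and the omission of prior pseudo data (the $\alpha\to 0$ limit) are illustrative simplifications, not the paper's setup:

```python
import random

random.seed(5)
n, p = 200, 3
data = []
for _ in range(n):
    y = random.choice((-1, 1))
    x = [random.gauss(float(y), 1.0) for _ in range(p)]
    data.append((y, x))

def wlb_draw(data, eta=0.01, epochs=200, rng=random):
    """One bootstrap draw: weights ~ n x Dir(1,...,1), then the weighted
    hinge loss is minimized by plain (sub)gradient descent."""
    n = len(data)
    g = [rng.gammavariate(1.0, 1.0) for _ in range(n)]
    w = [n * gi / sum(g) for gi in g]
    beta, theta = 0.0, [0.0] * p
    for _ in range(epochs):
        gb, gt = 0.0, [0.0] * p
        for wi, (y, x) in zip(w, data):
            if y * (beta + sum(t * xi for t, xi in zip(theta, x))) < 1:
                gb -= wi * y                # subgradient of max(0, 1 - margin)
                for j, xi in enumerate(x):
                    gt[j] -= wi * y * xi
        beta -= eta * gb / n
        theta = [t - eta * gj / n for t, gj in zip(theta, gt)]
    return beta, theta

draws = [wlb_draw(data) for _ in range(5)]
acc = sum(y * (b + sum(t * xi for t, xi in zip(th, x))) > 0
          for b, th in draws for (y, x) in data) / (5 * n)
print(acc > 0.8)   # bootstrap draws classify the training data well
```

Each call to \texttt{wlb\_draw} corresponds to one $\theta^j$ in \eqref{eq:svm_target}; the spread across repeated calls is the bootstrap uncertainty that the Deep Bootstrap sampler learns to emit without re-optimizing.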
To evaluate the performance of the Deep Bootstrap sampler, we first consider the two-dimensional density plots of $\theta_j$'s. Figure \ref{fig:NPL_2d_density} depicts an example where $n=50,p=10,\rho=0.6$. We observe that the Deep Bootstrap sampler captures the bootstrap posterior mean, but less so its variance.
This issue has been discussed in \cite{shin2020scalable}, who attribute it to the vanilla feed-forward architecture, which prevents variation in the input weights from {properly} transmitting to the output layers as the neural network grows deeper; the network structure in Figure \ref{fig:architecture} was proposed to alleviate it. Here, we observe that this issue still persists when applied to Bayesian models. The study in Appendix \ref{sec_appendix:regularization} shows that for Bayesian models, {as the sparsity-inducing prior grows stronger, the Deep Bootstrap sampler moves from accurate (or slightly over-estimated) variance to increasingly severe underestimation. }
We emphasize that, usually, the variance {mismatch} issue is not severe as long as the regularization strength is properly selected (e.g., using the BIC criterion (\cite{schwarz1978estimating}) as in the next example).
In this example (with $\alpha=1.0$), it does not affect the predictive inference on testing data, which is shown in Table \ref{table:svm}.
\begin{table}[t]
\centering
\small
\scalebox{0.7}{
\begin{tabular}{l|l|l|l|l|l|l|l|l}
\hline\hline
\bf \large Setting&&\multicolumn{7}{|l}{ \cellcolor[gray]{0.8}\large \bf Equi-correlated $\rho=0.6$} \\
\cline{1-9}
& metric
& accuracy & precision & recall & F1 & ROC & PR & time\\
\hline
\multirow{2}{*}{$p=10,n=50$} & DBS
& 0.83 & 0.82 & 0.86 & 0.83 & 0.91 & 0.92 &48.59+0.58\\
& WLB
& 0.86 & 0.85 & 0.90 & 0.87 & 0.94 & 0.94 &110.99\\
\hline
\multirow{2}{*}{$p=50,n=500$} & DBS
& 0.87 & 0.88 & 0.87 & 0.88 & 0.94 & 0.94 & 97.01+0.65 \\
& WLB
& 0.86 & 0.87 & 0.87 & 0.87 & 0.94 & 0.94 & 208.10 \\
\hline
\multirow{2}{*}{$p=100,n=1000$} & DBS
& 0.86& 0.88 & 0.85 & 0.86 & 0.93 & 0.94 &194.80+0.64\\
& WLB
& 0.85 & 0.83 & 0.85 & 0.86 & 0.93&0.94 & 436.86 \\
\hline
\multirow{2}{*}{$p=200,n=2000$} & DBS
& 0.87 & 0.87 & 0.87 & 0.87 & 0.93 & 0.93 & 315.19+0.66\\
& WLB
& 0.86 & 0.86 & 0.86 & 0.86 & 0.93 & 0.93 & 1187.21 \\
\hline
\multirow{2}{*}{$p=500,n=5000$} & DBS
& 0.89 & 0.89 & 0.89 & 0.89 & 0.95 & 0.92 & 565.05+0.63\\
& WLB
& 0.87 & 0.86 & 0.87 & 0.87 & 0.94 & 0.93 & 5677.71 \\
\hline\hline
\end{tabular}
}
\caption{\small Evaluation of approximation properties based on 10 independent runs for the Bayesian support vector machine example. DBS stands for `deep Bootstrap sampler'; WLB stands for samples from the true bootstrap distribution. `ROC' refers to the area under the receiver operating characteristic curve; `PR' refers to the area under the precision-recall curve. The last column in each setting reports the time (in seconds) to generate 10\,000 sample points. For the deep Bootstrap sampler, the time reported is of the form training time $+$ sampling time.}
\label{table:svm}
\end{table}
In Table \ref{table:svm}, we measure the ability of the Deep Bootstrap sampler to approximate the bootstrap target in terms of predictive performance. {For each $x_i$, we assign an `average' label by the majority vote based on samples of $(\beta,\bm\theta)$.} We report quantitative metrics that reflect the quality of this average (accuracy, precision, recall, F1 score).
{In addition, we report metrics} that reflect the shape of the whole Deep Bootstrap posterior (the area under the receiver operating characteristic curve and under the precision-recall curve). The metrics are calculated for different choices of $(n,p)$ on a separate testing set of size 100. The results, summarized in Table \ref{table:svm}, show that samples generated from the Deep Bootstrap sampler are of high quality across various prediction-related metrics and are close to the target. In terms of predictive performance, the actual bootstrap and DBS are comparable. The computing times, however, are dramatically different, and the timing benefit of DBS increases as $n$ or $p$ grows.
In summary, this example shows that DBS approximates the true bootstrap posterior well in terms of predictive performance. However, the deep sampler is dramatically faster, especially in large $(n,p)$ settings. The variance {mismatch}
issue mentioned in \cite{shin2020scalable} exists, but is not fatal.
\subsection{Bayesian LAD Regression}\label{sec:LAD}
Least squares regression estimators tend to be less robust to outliers or heavy-tailed errors. When robustness is a concern, M-estimators (\cite{huber2009robust}) are often used instead of least-squares estimators. Given data $\{(y_i,\bm x_i)\in\R\bigotimes\mathbb{R}^p\}_{i=1}^n$, where $\x_i$ denotes the covariates of an observation with response $y_i$, a regression M-estimator minimizes the empirical loss $\sum_{i=1}^n l(\beta,\bm\theta;y_i,\x_i)$ with
\begin{equation}
l(\beta,\bm\theta;y_i,\x_i)=g(y_i-\beta-\x_i^\top\bm\theta),
\end{equation}
where $\beta\in\R$ is the intercept, $\bm\theta=(\theta_1,\cdots,\theta_p)\in\R^p$ is the vector of regression coefficients, and $g: \mathbb{R}\rightarrow [0, \infty)$ is a residual function satisfying (i) $g(0)=0$; (ii) $g(t)=g(-t)$ for all $t\in\R$; (iii) $g$ is monotonically increasing on $[0,\infty)$ (\cite{rousseeuw2005robust}).
Statistical literature on robustness has proposed a variety of residual functions $g$, including the Huber function (\cite{huber1964robust})
\begin{equation*}
g(t;\delta)=
\begin{cases}
\frac{1}{2}t^2,\quad&\text{for }|t|\le\delta\\
\delta\left(|t|-\frac{1}{2}\delta\right),\quad&\text{otherwise}
\end{cases}
\end{equation*}
and the absolute value function (\cite{boscovich1757litteraria})
$
g(t)=|t|
$. Here, we consider the case where $g$ is the absolute value function, i.e., least absolute deviation (LAD) regression (\cite{boscovich1757litteraria}). A penalized LAD regression model is investigated in \cite{wang2006regularized,zou2006adaptive,wang2007robust,lambert2011robust,wang2013l1}, which minimizes
\begin{equation}\label{eq:penalized_LAD}
\sum_{i=1}^n\left|y_i-\beta-\bm\theta^\top\bm x_i\right|+\lambda\sum_{j=1}^p|\theta_j|.
\end{equation}
Note that the intercept $\beta$ is excluded from the penalty term, as suggested by \cite{wang2006regularized}. In \eqref{eq:penalized_LAD}, the regularization strength $\lambda$ can be selected with the classical BIC criterion, as recommended by \cite{schwarz1978estimating,lambert2011robust}.
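For concreteness, the residual functions and the penalized objective above translate directly into code (a minimal Python sketch; function and variable names are ours, not from the paper's implementation):

```python
def huber(t, delta):
    """Huber residual: quadratic near zero, linear in the tails."""
    return 0.5 * t * t if abs(t) <= delta else delta * (abs(t) - 0.5 * delta)

def absolute(t):
    """LAD residual g(t) = |t|."""
    return abs(t)

def penalized_lad(beta, theta, y, X, lam):
    """Sum of absolute residuals plus an l1 penalty on the slopes;
    the intercept beta is left unpenalized."""
    fit = sum(absolute(y_i - beta - sum(t * x for t, x in zip(theta, x_i)))
              for y_i, x_i in zip(y, X))
    return fit + lam * sum(abs(t) for t in theta)
```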
From a Bayesian viewpoint, minimizing \eqref{eq:penalized_LAD} is equivalent to estimating the mode of a Bayesian model with loss
\begin{equation}\label{eq:gibbs_loss}
l(\beta,\bm\theta;y_i,\x_i)=\left|y_i-\beta-\bm\theta^\top\bm x_i\right|,
\end{equation}
and a prior
\begin{equation}\label{eq:gibbs_prior}
\pi(\bm\theta)=\prod_{j=1}^p\left[\frac{\lambda}{2}e^{-\lambda|\theta_j|}\right].
\end{equation}
However, one might be interested not only in the posterior mode but also in uncertainty quantification. The Gibbs posterior defined by \eqref{eq:gibbs} in Section \ref{sec:loss} provides one such uncertainty measure. We compute the Gibbs posterior using MCMC. Another possibility is to obtain approximations, either by directly solving the optimization problem defined in \eqref{eq:theta_hat}, or by using the Deep Bootstrap sampler (Algorithm \ref{alg:nonparam}). This example investigates these three approaches to uncertainty quantification.
We simulate data following the settings in \cite{lambert2011robust}: $n$ i.i.d. data points $\{(y_i,\x_i)\}_{i=1}^n$ are generated from
\begin{equation}
y_i=\beta^*+\x_i^\top\bm\theta^*+\sigma\epsilon_i,\quad
\x_i\stackrel{\text{iid}}{\sim}N(0,\Sigma),
\end{equation}
where $\beta^*=1$, $\bm\theta^*=(1.5, 2, 3,0,0,\cdots,0)\in\R^p$, $\Sigma$ is a Toeplitz matrix with $\Sigma_{i,j}=(0.5)^{|i-j|}$.
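The Toeplitz design covariance used in this simulation can be constructed in a couple of lines (a pure-Python sketch):

```python
def toeplitz_ar1(p, rho=0.5):
    """Covariance Sigma with Sigma[i][j] = rho ** |i - j|, as used for the design x_i."""
    return [[rho ** abs(i - j) for j in range(p)] for i in range(p)]
```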
Following \cite{lambert2011robust}, we consider two challenging cases with heavy-tailed noise distributions:
\begin{itemize}
\item Model 1: large outliers. $\epsilon_i=v_i/\sqrt{\text{var}(v_i)}$, $\sigma=9.67$, and $v_i\stackrel{\text{iid}}{\sim}0.9N(0,1)+0.1N(0,225)$.
\item Model 2: sensible outliers. $\epsilon_i=v_i/\sqrt{\text{var}(v_i)}$, $\sigma=9.67$, and $v_i\stackrel{\text{iid}}{\sim}Laplace(1)$.
\end{itemize}
One may show that the solution to \eqref{eq:theta0} equals the truth (i.e., $\theta_0=(\beta,\bm\theta)=(\beta^*,\bm\theta^*)$) under mild conditions as in \cite{pollard1991asymptotics,gross1979least,ruzinsky1989strong}. We note that this equality holds for both Models 1 and 2.
We investigate different choices of $p\in\{8, 10, 20, 50\}$ and $n\in\{100, 200, 500, 1000\}$.
Both the Gibbs posterior \eqref{eq:gibbs} and the bootstrap samples \eqref{eq:theta_hat} use the loss \eqref{eq:gibbs_loss} and the prior \eqref{eq:gibbs_prior}, where the regularization strength $\lambda$ in prior \eqref{eq:gibbs_prior} is set to the value minimizing the BIC criterion (\cite{schwarz1978estimating}) over $\log(\lambda)\in\{-6,\cdots,1\}$ (an equi-spaced sequence of length 20 starting at $-6$ and ending at $1$), as suggested by \cite{lambert2011robust}.
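The candidate grid for $\log(\lambda)$ described above can be generated as follows (a sketch; the exponentiation back to $\lambda$ is our addition):

```python
import math

def log_lambda_grid(start=-6.0, stop=1.0, length=20):
    """Equi-spaced grid on the log scale; returns (log-lambdas, lambdas)."""
    step = (stop - start) / (length - 1)
    logs = [start + i * step for i in range(length)]
    return logs, [math.exp(v) for v in logs]
```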
For the Gibbs posterior, we set the learning rate $\alpha=1$ to match \eqref{eq:penalized_LAD} which is used to generate bootstrap samples. We refer readers to \cite{bissiri2016general} and \cite{lyddon2019general} for more discussion on the problem of determining $\alpha$.
\iffalse
This is still an open question, and various methods has been proposed (\cite{bissiri2016general,lyddon2019general}). \cite{lyddon2019general} recommend setting
\begin{align}\label{eq:gibbs_alpha}
&\alpha=\frac{\text{tr}\left(J_n(\hat\theta)I_n(\hat\theta)^{-1}J_n(\hat\theta)^\top\right)}{\text{tr}\left(J_n(\hat\theta)\right)},
\quad\text{with}
\\&J_n(\theta)=\frac{1}{n}\sum_{i=1}^n\partial^2l(\theta;\x_i,y_i),
\quad
I_n(\theta)=\frac{1}{n}\sum_{i=1}^n\left[\partial l(\theta;\x_i,y_i)\partial l(\theta;\x_i,y_i)^\top\right],\nonumber
\end{align}
which calibrates $\alpha$ by matching its asymptotic Fisher information to the loss-likelihood bootstrap.
Equation \eqref{eq:gibbs_alpha}, unfortunately, is not well-defined for this example as $J_n(\theta)=O$ for any $\theta\ne \bm 0_p$. Various other approaches were suggested in \cite{bissiri2016general}. Here we adopt the calibration in Section 3.2 of \cite{bissiri2016general}, which stems from `objective Bayes' methods (\cite{kass1996selection}) and requires no subjective choice of any hyperparameters. For our particular loss and prior, it recommends
\begin{equation}\label{eq:gibbs_empirical_alpha}
\alpha=\frac{p}{(1/n)\sum_{i=1}^nl(\hat\theta_{-i},\x_i)},\quad\text{where}\quad
\hat\theta_{-i}=\arg\min_{\theta_0,\bm\theta}\left[\sum_{j\ne i}l(\theta_0,\bm\theta;y_i,\x_i)\right].
\end{equation}
To speed up, we replace the average of $n$ terms in the denominator by an average of the first $m$ terms where $m=\min(n,100)$. And $\hat\theta_{-i}$ is solved using the function \texttt{linear\_model.QuantileRegressor} in \texttt{sklearn} Python library (\cite{scikit-learn}).
Also note that the learning rate $\alpha$ estimated using \eqref{eq:gibbs_empirical_alpha} scales linearly with $p$. When data dimension grows, the estimated $\alpha$ becomes very large, and thus the corresponding Gibbs posterior \eqref{eq:gibbs} becomes almost a point mass around its mode (e.g., in Model 1, this happens when $n=500,p=50$). Recall that \cite{bhattacharya2019bayesian} showed good properties of the Gibbs posterior \eqref{eq:gibbs} with $\alpha\in(0,1)$. Thus, we use a modified rule with
\begin{equation}
\alpha=\min\left(1,\frac{p}{(1/n)\sum_{i=1}^nl(\hat\theta_{-i},\x_i)}\right),
\end{equation}
with $\hat\theta_{-i}$'s defined in \eqref{eq:gibbs_empirical_alpha}.
\fi
In practice, the Gibbs posterior is generated using the Metropolis-Hastings algorithm described in \cite{chernozhukov2003mcmc}. We implement it in Python. For faster convergence, the starting values for $(\theta_0,\bm\theta)$ are set to the mode of the Gibbs posterior obtained with \texttt{QuantileRegressor}. The proposal density $q(x\mid y)$ is set to
the density of $N(y,\sigma'I_p)$. We set $\sigma'=0.1$ for small data sets and $\sigma'=0.01$ for large ones, as the true Gibbs posterior tends to have smaller variance as the dimension grows.
We run MCMC for $1\,000\,000$ iterations and discard the first $10\,000$ as burn-in. The Gelman-Rubin diagnostic (\cite{gelman1992inference}), computed with the R package \texttt{coda} (\cite{coda}), is checked to ensure convergence.
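The random-walk Metropolis-Hastings loop used here has a very small core; the sketch below shows it in one dimension with an illustrative Laplace-type log-target (in the actual example, the target is the Gibbs posterior over $(\theta_0,\bm\theta)$ and the proposal is $N(y,\sigma' I_p)$):

```python
import math
import random

def rw_metropolis(log_target, x0, sigma, n_iter, burn_in, rng):
    """Random-walk Metropolis-Hastings with a N(y, sigma^2) proposal."""
    x, lp, samples = x0, log_target(x0), []
    for t in range(n_iter):
        prop = x + rng.gauss(0.0, sigma)
        lp_prop = log_target(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        if t >= burn_in:
            samples.append(x)
    return samples

# toy target: an (unnormalized) Laplace log-density, mimicking the LAD loss
rng = random.Random(0)
draws = rw_metropolis(lambda x: -abs(x), 0.0, 1.0, 20000, 1000, rng)
```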
\begin{figure}[t]
\centering
\includegraphics[width=.45\linewidth, height=.15\textheight]{figures/LAD_large_center_1.pdf}
\includegraphics[width=.45\linewidth, height=.15\textheight]{figures/LAD_large_center_2.pdf}
\caption{\small Two-dimensional posterior density plots for the $\theta_i$'s in Model 1 for the Bayesian LAD example. We set $n=100,p=8$. The true parameters are marked with a red star. {The minimizer of Equation \eqref{eq:penalized_LAD} is marked with a black triangle.}}
\label{fig:LAD_2d_density}
\end{figure}
For the bootstrap samples, we solve the optimization problem \eqref{eq:theta_hat} with $w\sim n\times Dir(1,1,\cdots,1)$, loss \eqref{eq:gibbs_loss}, and prior \eqref{eq:gibbs_prior} using the built-in function \texttt{QuantileRegressor}. This function solves \eqref{eq:theta_hat} by linear programming; we use its default interior-point solver with default parameters.
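Since $Dir(1,\dots,1)$ weights can be obtained by normalizing i.i.d.\ $\mathrm{Exp}(1)$ draws, the bootstrap weights $w\sim n\times Dir(1,\dots,1)$ can be sampled as follows (a sketch):

```python
import math
import random

def dirichlet_weights(n, rng):
    """Draw w ~ n * Dir(1, ..., 1) by normalizing i.i.d. Exp(1) draws."""
    e = [-math.log(1.0 - rng.random()) for _ in range(n)]  # Exp(1) via inverse CDF
    s = sum(e)
    return [n * x / s for x in e]

w = dirichlet_weights(100, random.Random(1))
```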
For the Deep Bootstrap sampler, we implement Algorithm \ref{alg:nonparam} using \texttt{Pytorch}, with a subgroup size $S=100$. The other settings, including network architecture, training schedule and hyper-parameters, are identical to the Bayesian support vector machine example.
Let us first consider the posterior density of the $\theta_j$'s. Figure \ref{fig:LAD_2d_density} shows the two-dimensional posterior density plots for Model 1. Results for Model 2 are very similar and are included in Appendix \ref{sec_appendix:LAD}.
We observe that samples from all three methods are centered around the same location. {The center is close to the minimizer of Equation \eqref {eq:penalized_LAD}, which is different from the truth due to shrinkage effects introduced from the prior.}
The Gibbs posterior tends to have the smallest variance, in contrast to the true bootstrap samples, which have the largest variance.
To quantitatively compare the methods, we report the bias, the lengths of 90\% credible intervals, and their coverage for various choices of $(n,p)$ in Table \ref{table:LAD}. We distinguish between active and inactive coordinates. Table \ref{table:LAD} also includes the time cost of each method. For fairness, we compare the time required to generate the same number of effective samples. Since each bootstrap sample is independent, we treat the total number of samples as the effective sample size for both bootstrap samplers. The effective sample size for the Gibbs posterior is calculated using the R package \texttt{coda}.
Table \ref{table:LAD} shows that the Deep Bootstrap sampler is much faster than both the true bootstrap and the Gibbs posterior, especially on large datasets.
We conclude that, in this example, DBS reconstructions could be a viable alternative to Gibbs posteriors. The Deep Bootstrap sampler achieves a similar bias and a larger variance, but is much faster than the Metropolis-Hastings algorithm.
\begin{table}[t]
\centering
\small
\scalebox{0.65}{
\begin{tabular}{l|l|l|l|l|l|l|l|l||l|l|l|l|l|l|l}
\hline\hline
\bf \large Setting&&\multicolumn{7}{l||}{ \cellcolor[gray]{0.8}\large \bf Model 1} &\multicolumn{7}{|l}{ \large\cellcolor[gray]{0.8} \bf Model 2} \\
\cline{1-16}
\multirow{2}{*}{} & \multirow{2}{*}{metric} & \multicolumn{2}{l|}{coverage} & \multicolumn{2}{l|}{length of 90\% CI} & \multicolumn{2}{l|}{bias} & time
& \multicolumn{2}{l|}{coverage} & \multicolumn{2}{l|}{length of 90\% CI} & \multicolumn{2}{l|}{bias} & time\\
\cline{3-16}
& & $+$ & $-$ & $+$ & $-$ & $+$ & $-$ &
& $+$ & $-$ & $+$ & $-$ & $+$ & $-$ & \\
\hline
\multirow{2}{*}{$p=8,n=100$} & DBS & 0.90 & 0.98 &1.10 & 1.23 & 0.23 & 0.268 & 50.51+0.63
& 0.90& 0.88&3.43 & 3.82&0.79 & 0.92 & 57.20+0.76 \\
& WLB & 0.98 & 1.00 & 1.37 & 1.52 & 0.23& 0.26 &534.08
&0.90 &0.95 & 4.19& 4.71&0.79 & 0.90 & 546.73\\
& Gibbs & 0.78& 0.80 & 0.71& 0.80 & 0.23& 0.29 & 42.71
& 0.45&0.43&1.22 &1.32 & 0.82&0.96 & 82.12\\
\hline
\multirow{2}{*}{$p=10,n=100$} & DBS & 0.85&0.85 & 0.99&1.15 &0.26 & 0.33 & 79.83+1.01
& 0.90 & 0.83 & 3.20 & 3.93 & 0.77 & 1.14 & 68.39+0.85 \\
&WLB & 0.95& 0.97& 1.27& 1.44& 0.25& 0.34 &889.63
& 0.95 & 0.93 & 4.11 & 4.86& 0.74 & 1.18 & 780.12 \\
& Gibbs & 0.75 & 0.65 &0.67 & 0.76 & 0.27 & 0.34 & 58.58
& 0.43&0.33 &1.14 & 1.39& 0.79& 1.16 & 103.04\\
\hline
\multirow{2}{*}{$p=20,n=200$} & DBS & 0.90&0.88 & 0.72&0.76 &0.18 & 0.19 & 70.05+0.86
& 0.78 & 0.82 & 2.28 & 2.46 & 0.66 & 0.71 & 74.97+0.94 \\
& WLB & 1.00& 0.95& 0.93& 1.00& 0.19& 0.20 &2891.89
& 0.93 & 0.91 & 2.94 & 3.29& 0.69 & 0.73 & 3041.20 \\
& Gibbs & 0.35 & 0.35 & 0.26 & 0.27 & 0.19& 0.29 &696.23
& 0.43& 0.39& 0.85& 0.93& 0.64& 0.71&1550.77 \\
\hline
\multirow{2}{*}{$p=50,n=500$} & DBS & 0.78 & 0.83 & 0.41 & 0.43 & 0.13 & 0.12 & 74.43+0.91
& 0.85& 0.82& 1.35&1.42 & 0.42& 0.41 & 81.64+1.01\\
& WLB & 0.93 & 0.96 & 0.60 & 0.63 & 0.13 & 0.12&> 6 hours
& 0.95&0.95 & 1.89& 2.04& 0.42& 0.42 &> 6 hours \\
& Gibbs & 0.68& 0.74 & 0.32 & 0.34 & 0.13& 0.12 &703.52
&0.38 & 0.43& 0.54& 0.59& 0.40& 0.41& 1529.81\\
\hline
\multirow{2}{*}{$p=50,n=1000$} & DBS & 0.93 & 0.80 & 0.27 & 0.30 & 0.08 & 0.09 & 75.08+0.87
&0.88 & 0.86& 0.86& 0.95& 0.23& 0.25 & 81.90+0.93\\
& WLB & 0.98 & 0.91 & 0.38 & 0.41 & 0.15 & 0.20 & > 2 days
& 0.98& 0.94&1.17&1.24 &0.23 & 0.26 & > 2 days \\
& Gibbs & 0.73& 0.68 & 0.21& 0.23 & 0.08 & 0.09 & 545.54
&0.38 &0.47 &0.35 & 0.39& 0.24& 0.25 & 1140.76\\
\hline\hline
\end{tabular}
}
\caption{\small Evaluation of approximation properties in different settings based on 10 independent runs of Bayesian LAD regression. DBS stands for `Deep Bootstrap Sampler'. WLB stands for samples from the true bootstrap distribution. Coverage stands for the empirical coverage of 90\% credible intervals. `Bias' refers to the $l_1$ distance between estimated posterior means and the truth. We denote with $+$ an average over active coordinates, and with $-$ an average over inactive coordinates. Times reported are the number of seconds (unless otherwise noted) taken to generate 10\,000 effective samples for each procedure. For the Deep Bootstrap sampler, the time reported is in the form training time $+$ sampling time.
}
\label{table:LAD}
\end{table}
\section{Discussion}
This paper surveys several recent contributions to the Bayesian literature on learning about parameters defined by loss functions. We highlighted a new promising direction for Bayesian computation using generative bootstrap.
We demonstrated the potential of this new strategy on several examples. This paper aims to draw practitioners' attention towards posterior sampling techniques beyond the traditional MCMC technology.
\bibliographystyle{apa}
\section{Introduction}
Recent years have seen an increase in social media usage and a corresponding increase in hateful and offensive speech. Solutions to this problem vary from manual moderation to rule-based filtering systems; however, these methods are either time-consuming or prone to errors when the full context is not taken into consideration while assessing the sentiment of the text \cite{saif2016}.
In Subtask-A of the shared task of Multilingual Offensive Language Identification (OffensEval2020), we focus on detecting offensive language on social media platforms, more specifically, on Twitter. The organizers provided data in five different languages, of which we worked on three: Arabic \cite{mubarak2020arabic}, Greek \cite{pitenis2020}, and Turkish \cite{coltekikin2020}. More details about the annotation process are described in the task description paper \cite{zampieri-etal-2020-semeval}.
The approach used combines the knowledge embedded in the pre-trained deep bidirectional transformer BERT \cite{devlin-etal-2019-bert} with Convolutional Neural Networks (CNN) for text \cite{kim-2014-convolutional}, one of the most widely used approaches for text classification tasks. This combination of models has been shown to yield better results than using BERT or CNN on their own, as was shown in \cite{li-2019-feature-bert} and as we show in this paper. This model, with minimal text pre-processing, ranked 4\textsuperscript{th} in Arabic, 4\textsuperscript{th} in Greek, and 3\textsuperscript{rd} in Turkish among more than 40 participants.
In the following sections of this paper, previous work is reviewed in Section \ref{section1}; next, the data is described in Section \ref{section2}; then, the details of the model are given in Section \ref{section3}. Finally, the submissions and the other experiments are detailed in Section \ref{section4}.
\blfootnote{
\hspace{-0.65cm}
This work is licensed under a Creative Commons
Attribution 4.0 International License.
License details:
\url{http://creativecommons.org/licenses/by/4.0/}.
}
\blfootnote{
\hspace{-0.65cm}
Our source code of the main model and the other experiments can be accessed through: \url{https://github.com/alisafaya/OffensEval2020}
}
\section{Background}\label{section1}
Extensive work has been performed on the task of offensive speech identification, which falls under text classification. Approaches to this problem vary from using lexical resources, linguistic features, and meta-information \cite{schmidt2017survey}, to machine learning (ML) models \cite{davidson2017automated}, and more recently, deep neural models such as CNN and Long Short-Term Memory (LSTM) networks and their derivatives \cite{zhang2018detecting}.
In more recent work, \newcite{zampieri2019predicting} presented the Offensive Language Identification Dataset (OLID), a new dataset of tweets annotated for offensive content, and experimented with various ML models such as SVM, BiLSTM, and CNN.
\section{Data}\label{section2}
The data provided for this task \cite{zampieri-etal-2020-semeval} consists of sets of tweets annotated as either \textbf{Offensive} (positive) or \textbf{Non-offensive} (negative). As shown in Table \ref{data-table} below, each set contains a number of positive and negative tweets. In addition, the provided training data had not been split into training and development sets, so we split it into 90\% training and 10\% development sets, respectively.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|lll|lll|lll|}
\hline
& \multicolumn{3}{c}{\bf Arabic} & \multicolumn{3}{|c|}{\bf Greek} & \multicolumn{3}{c|}{\bf Turkish} \\
& Train & Dev & Test & Train & Dev & Test & Train & Dev & Test \\
\hline
\bf Negative & 5,785 & 626 & 1,607 & 5,642 & 616 & 1,120 & 23,084 & 2,543 & 2,740 \\
\bf Positive & 1,415 & 174 & 393 & 2,228 & 258 & 424 & 4,885 & 632 & 788 \\ \hline
\bf Total & 7,200 & 800 & 2,000 & 7,869 & 874 & 1,545 & 28,581 & 3,175 & 3,528 \\ \hline
\end{tabular}
\end{center}
\caption{\label{data-table} Tweets distribution over data sets }
\end{table}
\subsection{Data Pre-processing}
Since the processed texts were obtained from Twitter, a pre-processing step was needed to maximize the features that can be extracted and to obtain clean text. Hashtags were converted into raw text by splitting them into words; for example, \url{#SomeHashtagText} becomes \url{Some} \url{Hashtag} \url{Text}. As an additional step, for Greek texts only, all letters were converted to lowercase and all Greek diacritics were removed. The text was subsequently tokenized using the corresponding pre-trained BERT Wordpiece tokenizer for each language and model.
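The hashtag-splitting step can be implemented with a simple camel-case regular expression (a sketch; it assumes hashtags follow the CamelCase pattern of the example above):

```python
import re

def split_hashtag(token):
    """Turn '#SomeHashtagText' into 'Some Hashtag Text'."""
    return " ".join(re.findall(r"[A-Z]+[a-z0-9]*|[a-z0-9]+", token.lstrip("#")))
```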
\section{Model Description}\label{section3}
The proposed model maximizes the utilization of the knowledge embedded in pre-trained BERT language models by feeding the contextualized embeddings output by its last four hidden layers into several convolutional filters of a CNN. Finally, the output of the CNN was passed to a dense layer and the predictions were obtained.
\subsection{Convolutional Neural Networks}
CNNs for textual tasks, introduced by \newcite{kim-2014-convolutional}, have shown strong performance in text classification. CNNs operate on learned vector representations of the text (embeddings). These embeddings may either be initialized randomly and trained along with the model, or be pre-trained vectors.
\subsection{BERT}
Bidirectional Encoder Representations from Transformers (BERT) \cite{devlin-etal-2019-bert} is a state-of-the-art language model that can be fine-tuned or used directly as a feature extractor for various textual tasks. In our experiments, three pre-trained language-specific BERT models were used along with the Multilingual-BERT (mBERT)\footnote{Multilingual: \url{https://github.com/google-research/bert}} model. These models are the GreekBERT\footnote{GreekBERT: \url{https://github.com/nlpaueb/greek-bert}} model for Greek, BERTurk \cite{stefan-bert} for Turkish, and ArabicBERT for Arabic.
\subsection{ArabicBERT}
Since no pre-trained BERT model for Arabic existed at the time of our work, four Arabic BERT language models were trained from scratch and made publicly available.
\textbf{ArabicBERT}\footnote{ArabicBERT: \url{https://github.com/alisafaya/arabic-bert}} is a set of BERT language models that consists of four models of different sizes trained using masked language modeling with whole word masking \cite{devlin-etal-2019-bert}. Models of sizes \textbf{Large}, \textbf{Base}, \textbf{Medium}, and \textbf{Mini} \cite{turc2019well} were trained on the same data for 4M steps.
Using a corpus consisting of the unshuffled version of the OSCAR data \cite{ortiz-suarez-etal-2020-monolingual} and a recent data dump from Wikipedia, which sum up to 8.2B words, a vocabulary of 32,000 Wordpieces was constructed. The final version of the corpus contains some non-Arabic words inline, which were not removed from the sentences since removing them would hurt tasks such as Named Entity Recognition. Although non-Arabic characters were lowercased as a pre-processing step, Arabic characters do not have upper or lower case, so there are no separate cased and uncased versions of the model. Moreover, the corpus and the vocabulary are not restricted to Modern Standard Arabic; they also contain some dialectal (spoken) Arabic, which boosted the models' performance on data from social media platforms.
\begin{figure}
\centering
\includegraphics[scale=0.4]{fig1_1.png}
\caption{BERT-CNN model structure}
\end{figure}
\subsection{BERT-CNN Model Structure}
As mentioned above, the main model consists of two parts. The first is the BERT Base model, in which the text is passed through 12 layers of self-attention to obtain contextualized vector representations; the second is a CNN, which is used as a classifier.
\newcite{devlin-etal-2019-bert} showed, by comparing different combinations of BERT's layers, that the concatenated output of the last four hidden layers encodes more information than the output of the top layer alone.
After setting the maximum sequence length of each text sample (tweet) to 64 tokens, the text was input to BERT, and the outputs of the last four hidden layers of the base-sized pre-trained BERT were concatenated to obtain vector representations of size 768x4x64, as shown in Figure 1. Next, these embeddings were passed in parallel through 160 convolutional filters of five different sizes (768x1, 768x2, 768x3, 768x4, 768x5), 32 filters per size. Each kernel treats the outputs of the last four hidden layers of BERT as 4 different channels and applies a convolution operation to them. The output is then passed through a ReLU activation function and a global max-pooling operation. Finally, the pooled outputs are concatenated and flattened, then passed through a dense layer and a Sigmoid function to obtain the final binary label.
This model was trained for 10 epochs with learning rate of 2e-5, and the model with the best macro averaged F1-Score on the development set was saved.
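The sizes quoted above can be sanity-checked with a few lines of arithmetic (a sketch; all numbers are taken from the description of the model):

```python
seq_len, hidden, channels = 64, 768, 4        # BERT output: 768 x 4 x 64
kernel_sizes = [1, 2, 3, 4, 5]                # widths of the 768 x k filters
filters_per_size = 32

total_filters = filters_per_size * len(kernel_sizes)     # 160 filters in total
# each filter slides over the sequence, producing seq_len - k + 1 values...
conv_lengths = {k: seq_len - k + 1 for k in kernel_sizes}
# ...which global max-pooling reduces to a single value per filter, so the
# flattened vector fed to the dense layer has `total_filters` entries
dense_input_dim = total_filters
```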
\section{Experiments and Results}\label{section4}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\bf Model & \bf Arabic & \bf Greek & \bf Turkish & \bf Average \\
\hline
\bf SVM with TF-IDF & 0.772 & 0.823 & 0.685 & 0.760 \\
\bf Multilingual BERT & 0.808 & 0.807 & 0.774 & 0.796 \\
\bf Bi-LSTM & 0.822 & 0.826 & 0.755 & 0.801 \\
\bf CNN-Text & 0.840 & 0.825 & 0.751 & 0.805 \\
\bf BERT\tablefootnote{\label{bert-note}Language specific pre-trained BERT models were used for this experiment} & 0.884 & 0.822 & \bf0.816 & 0.841 \\
\bf BERT-CNN (Ours)\footnotemark[4] & \bf0.897 & \bf0.843 & 0.814 & \bf0.851 \\
\hline
\end{tabular}
\end{center}
\caption{\label{results-table} Macro averaged F1-Scores of our submissions and the other experiments on test data}
\end{table}
The macro-averaged F1-Score was used as the evaluation metric in this shared task. The results of our submissions are shown in comparison with the other experiments in Table \ref{results-table}.
These experiments began with a baseline model built using a classic ML approach for text classification. Additionally, the main model was compared with more recent approaches. All models use the same train/dev/test splits.
\subsection*{SVM with TF-IDF\footnote{This model was built using Scikit-learn: \url{scikit-learn.org}}}
The baseline model used Term Frequency-Inverse Document Frequency (TF-IDF) \cite{Salton1988} with a Support Vector Machine (SVM) \cite{Boser92atraining}. A count vectorizer with a feature set size of 3,000 was used to achieve the results reported in Table \ref{results-table}.
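For reference, the TF-IDF weighting behind the baseline can be sketched in pure Python (this uses the plain $tf\cdot\log(N/df)$ form; Scikit-learn's implementation applies additional smoothing and normalization):

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists -> list of {term: weight} dicts."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    out = []
    for d in docs:
        tf = Counter(d)                            # term frequency
        out.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return out
```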
\subsection*{CNN-Text\footnote{\label{pytorch} This model was built using PyTorch: \url{pytorch.org}}} This model uses a CNN with the same structure as the main model, but without pre-trained BERT as an embedder. CNN-Text uses randomly initialized embeddings of size 300, trained along with the model. The difference between the results obtained with pre-trained BERT and with randomly initialized embeddings is significant, as shown in Table \ref{results-table} above.
\subsection*{BiLSTM\footnotemark[6]}
While CNNs capture local features of the text, LSTMs, which have shown remarkable performance in text classification tasks, capture temporal information. In our experiments, two layers of bidirectional LSTM (BiLSTM) with a hidden size of 128 and randomly initialized embeddings of size 300 were used to achieve the results shown in Table \ref{results-table}; however, this was still outperformed by CNN-Text on average.
\subsection*{BERT\footnotemark[6]\textsuperscript{,}\footnote{Transformers library was used for BERT \cite{Wolf2019HuggingFacesTS}}}
By looking at the average results of the BERT model on its own, we can see the improvement achieved by combining BERT with CNN. Additionally, we can clearly observe the advantage of \textbf{language-specific} pre-trained models over \textbf{multilingual} ones.
\section{Conclusion}
In this paper, the structure of BERT-CNN was described and compared with other models on the task of identifying offensive text in social media. It was shown that combining BERT with CNN yields better results than using BERT on its own. Additionally, the pre-training process of ArabicBERT was explained.
The proposed model, with minimal text pre-processing, was able to achieve very good results on average, and our team ranked among the top four participating teams for all languages within the scope of OffensEval2020.
\section*{Acknowledgements}
The hardware infrastructure of this study is provided by the European Research Council (ERC) Starting Grant 714868. Also, we would like to thank Google for providing free TPUs and Credits for the pre-training process of ArabicBERT and for Huggingface.co for hosting these models on their servers.
\bibliographystyle{coling}
\section{Introduction}
Tensor decompositions have been recently popular for unsupervised learning of a wide range of latent variable models such as independent component analysis~\cite{de2007fourth}, topic models, Gaussian mixtures, hidden Markov models~\cite{AnandkumarEtal:tensor12}, network community models~\cite{AnandkumarEtal:community12}, and so on.
The decomposition of a certain low order multivariate moment tensor (typically up to fourth order) in these models is guaranteed to provide a consistent estimate of the model parameters. Moreover, the sample and computational requirements are only a low order polynomial in the rank of the tensor~\cite{AnandkumarEtal:tensor12,SongEtal:NonparametricTensorDecomp}.
In practice, the tensor decomposition techniques have been shown to be effective in a number of applications such as blind source separation~\cite{comon2002tensor}, computer vision~\cite{vasilescu2003multilinear}, contrastive topic modeling~\cite{zou2013contrastive}, and community detection~\cite{AnandkumarEtal:communityimplementation13}. In many cases, the tensor approach is shown to be orders of magnitude faster than existing techniques such as the stochastic variational approach.
The state of the art for guaranteed tensor decomposition involves
two steps: converting the input tensor to an orthogonal symmetric form, and then solving the orthogonal decomposition through tensor eigen decomposition~\cite{Comon94,SIMAX-080148-Tensor-Eigenvalues,ZG01,AnandkumarEtal:tensor12}. The first step of converting the input tensor to an orthogonal symmetric form is known as {\em whitening}. For the second step, the tensor eigen pairs can be found through a simple tensor {\em power} iteration procedure.
While having efficient guarantees, the above procedure suffers from a number of theoretical and practical limitations.
For instance, in practice, the learning performance is especially sensitive to whitening~\cite{le2011ica}. Moreover, whitening is computationally the most expensive step in deployments~\cite{AnandkumarEtal:communityimplementation13}, and it can suffer from numerical instability in high dimensions due to ill-conditioning. Lastly, the above approach is unable to learn {\em overcomplete representations} (the case when the number of features/components is much larger than the dimension) due to the orthogonality constraint, which is especially limiting, given the recent popularity of overcomplete feature learning in many domains~\cite{bengio2012unsupervised,lewicki2000learning}.
The current practice for tensor decomposition is the {\em alternating least squares} (ALS) procedure, which has been described as the ``workhorse'' of tensor decomposition~\cite{kolda_survey}. This involves solving the least squares problem on a {\em mode} of the tensor, while keeping the other modes fixed, and alternating between the tensor modes. The method is extremely fast since it involves calculating linear updates, but is not guaranteed to converge to the global optimum in general~\cite{kolda_survey}.
In this paper, we provide local and global convergence guarantees for a modified alternating method, for which the main step is making rank-$1$ updates along different modes of the tensor. This update is basically a rank-1 ALS update. This method is extremely fast to deploy, trivially parallelizable, and does not suffer from ill-conditioning issues faced by both ALS~\cite{kolda_survey} and whitening approaches~\cite{le2011ica}. Our analysis assumes the presence of {\em incoherent} tensor components, which can be viewed as a {\em soft-orthogonality} constraint. Incoherent representations have been extensively considered in literature in a number of contexts, e.g., compressed sensing~\cite{donoho2006compressed} and sparse coding~\cite{Arora2013,AgarwalEtal:SparseCoding2013}. Incoherent representations provide flexible modeling, can handle overcomplete signals, and are robust to noise~\cite{lewicki2000learning}. Moreover, when the latent variable model parameters are {\em generic} or when we have randomly constructed (multiview) features~\cite{mcwilliams2013correlated}, the moment tensors have incoherent components, as assumed here. In this work, we establish that incoherence leads to efficient guarantees for tensor decomposition. The guarantees also include a tight perturbation analysis. In a subsequent work \cite{OvercompleteLVMs2014}, we apply the tensor decomposition guarantees of this paper to various learning settings, and derive sample complexity bounds through novel covering arguments.
\subsection{Summary of results}
In this paper, we propose and analyze an algorithm for non-orthogonal CP (Candecomp/Parafac) tensor decomposition; see Figure~\ref{fig:TensorDecomposition} for the details of the algorithm. The main step of the algorithm is a simple alternating rank-$1$ update which is the alternating version of the tensor power iteration adapted for asymmetric tensors. In each iteration, one of the tensor modes is updated by projecting the other modes along their estimated directions, and the process is alternated between all the modes of the tensor; see~\eqref{eqn:asymmetric power update} for this update.
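A minimal sketch of the alternating rank-1 update for a third-order tensor (pure Python over nested lists; for a noiseless rank-1 tensor, a single pass already recovers the component):

```python
def normalize(v):
    nrm = sum(x * x for x in v) ** 0.5
    return [x / nrm for x in v]

def mode1(T, b, c):
    """u[i] = sum_{j,k} T[i][j][k] * b[j] * c[k]: project modes 2 and 3."""
    return [sum(T[i][j][k] * b[j] * c[k]
                for j in range(len(b)) for k in range(len(c)))
            for i in range(len(T))]

def mode2(T, a, c):
    return [sum(T[i][j][k] * a[i] * c[k]
                for i in range(len(a)) for k in range(len(c)))
            for j in range(len(T[0]))]

def mode3(T, a, b):
    return [sum(T[i][j][k] * a[i] * b[j]
                for i in range(len(a)) for j in range(len(b)))
            for k in range(len(T[0][0]))]

def alternating_rank1(T, a, b, c, iters=25):
    """Alternate rank-1 (power-type) updates over the three modes."""
    for _ in range(iters):
        a = normalize(mode1(T, b, c))   # rank-1 ALS update on mode 1
        b = normalize(mode2(T, a, c))   # ... on mode 2
        c = normalize(mode3(T, a, b))   # ... on mode 3
    weight = sum(mode1(T, b, c)[i] * a[i] for i in range(len(a)))  # T(a, b, c)
    return a, b, c, weight
```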
For the above update, we provide local convergence guarantees under incoherent tensor components for a rank-$k$ third order tensor in $d$ dimensions. We prove a linear rate of convergence under appropriate initialization when $k = o(d^{3/2})$. Due to incoherence, the actual tensor components are not the stationary points of the update (even in the noiseless setting), and thus, there is an approximation error in the estimate after this update. The approximation error depends on the extent of overcompleteness, and scales as\,\footnote{$\tilde{O}$ is $O$ up to $\polylog$ factors.} $\tilde{O} (\sqrt{k}/d )$, which is small since $k = o(d^{3/2})$.
The generalization to higher order tensors is also provided.
To the best of our knowledge, we give the first guarantees for overcomplete tensor decomposition under mild incoherence conditions.
In order to remove the approximation error $\tilde{O} (\sqrt{k}/d )$ left by the above rank-1 updates, we augment the algorithm with an additional update, which is essentially a coordinate descent step; see~\eqref{eqn:BiasRemoval}. We run this update after the main rank-1 updates and show that it removes the approximation error at a linear rate of convergence; thus, we can consistently recover the tensor decomposition.
In the undercomplete or mildly overcomplete settings $(k = O(d))$, we provide a simple initialization procedure (see Procedure~\ref{algo:SVD init}) based on rank-$1$ SVD of random tensor slices. This initialization procedure lands the estimate in the basin of attraction of the alternating update procedure within a polynomial number of trials (in the tensor rank $k$). This leads to global convergence guarantees for tensor decomposition.
We then extend the global convergence guarantees to settings where two modes of the tensor are (sufficiently) undercomplete (the dimension $d_u$ is much larger than the tensor rank $k$), and the third tensor mode is (highly) overcomplete (the dimension $d_o$ is much smaller than the tensor rank $k$). For instance, consider tensors arising from multi-view mixture models such as ${\mathbb E}[x_1\otimes x_2 \otimes y]$, where $x_i$ are multi-view high dimensional features and $y$ is a low dimensional label. Previous procedures in~\cite{AnandkumarEtal:tensor12}, which rely on transforming the input tensor to an orthogonal symmetric form, cannot handle this setting. Algorithms based on simultaneous diagonalization \cite{harshman1994parafac} can handle this case, but are not as robust to noise.
We prove global convergence guarantees by considering
rank-$1$ SVD of random tensor slices along the $y$-mode as initialization for the $x_i$-modes of the tensor, and then running the alternating update procedure.
\paragraph{Overview of techniques:} Greedy or rank-$1$ updates are perhaps the most natural procedure for CP tensor decomposition. For orthogonal tensors, they lead to guaranteed recovery~\cite{ZG01}. However, when the tensor is non-orthogonal, the greedy procedure is not optimal in general~\cite{kolda2001orthogonal}, and finding a tensor decomposition is NP-hard in general~\cite{TensorNPHard}. We circumvent this obstacle by limiting ourselves to tensors with incoherent components. We exploit incoherence to prove error contraction under each step of the alternating update procedure, with a decaying approximation error, when $k = o(d^{1.5})$. To this end, we require tools from random matrix theory, bounds on the $2 \to p$ norm of random matrices~\cite{guedon2007lp, adamczak2011chevet} for some $p<3$, and matrix perturbation results to provide tight bounds on error contraction.
\subsection{Related work}
CP tensor decomposition~\cite{carroll1970analysis}, also known as PARAFAC decomposition~\cite{harshman1970foundations, harshman1994parafac}, is a classical formulation of tensor decomposition with many applications. The most commonly used algorithm for CP decomposition is Alternating Least Squares (ALS)~\cite{comon2009tensor}, which has no convergence guarantees in general.
\cite{kolda2001orthogonal} and \cite{ZG01} analyze the greedy or rank-1 updates in the orthogonal setting. In the noisy setting, \cite{AnandkumarEtal:tensor12} analyze the deflation procedure for orthogonal decomposition, and~\cite{SongEtal:NonparametricTensorDecomp} extend the analysis to the nonparametric setting.
For non-orthogonal tensors, a common strategy is to first apply a procedure called {\em whitening} to reduce the problem to the orthogonal case. But as discussed earlier, the whitening procedure can lead to poor performance and bad sample complexity. Moreover, it requires the tensor factors to have full column rank, which rules out overcomplete tensors.
Learning overcomplete tensors is challenging, and they may not even be identifiable in general. \cite{Kruskal:76,Kruskal:77} provided an identifiability result based on the {\em Kruskal} rank of the factor matrices of the tensor. However, this result is limiting since it requires $k=O(d)$, where $k$ is the tensor rank and $d$ is the dimension. The FOOBI procedure by~\cite{de2007fourth} overcomes this limitation by assuming {\em generic} factors, and shows that a polynomial-time procedure can recover the tensor components when $k =O(d^2)$ and the tensor is fourth order. However, the procedure does not work for third-order overcomplete tensors, and comes with no polynomial sample complexity bounds. Simple procedures can recover overcomplete decompositions for higher order tensors (fifth order or higher). For instance, for a fifth order tensor with $k=O(d^2)$, we can utilize random slices along a mode of the tensor, and perform simultaneous diagonalization on the matricized versions. Note that this procedure cannot handle the same level of overcompleteness as FOOBI, since an additional dimension is required for obtaining two (or more) fourth order tensor slices. The simultaneous diagonalization procedure entails careful perturbation analysis, carried out by~\cite{fourierpca,bhaskara2013smoothed}. In addition, \cite{fourierpca} provide stronger results for independent components analysis (ICA), where the tensor slices can be obtained in the Fourier domain.
There are other recent works which can learn overcomplete models, but under different settings than the ones considered in this paper. For instance,~\cite{Arora2013,AgarwalEtal:SparseCoding2013} provide guarantees for the sparse coding problem.~\cite{AnandkumarEtal:NIPS13} learn overcomplete sparse topic models, and provide guarantees for {\em Tucker} tensor decomposition under sparsity constraints. Specifically, the model is identifiable using $(2n)^{{\mbox{\tiny th}}}$ order moments when the latent dimension $k=O(d^n)$ and the sparsity level of the factor matrix is $O(d^{1/n})$, where $d$ is the observed dimension. The Tucker decomposition is different from the CP decomposition considered here (it has weaker assumptions and guarantees), and the techniques in~\cite{AnandkumarEtal:NIPS13} differ significantly from the ones considered here.
The algorithm employed here falls under the general framework of alternating minimization. There are many recent works which provide guarantees on local/global convergence for alternating minimization, e.g., for matrix completion~\cite{jain2013low, hardt2013provable}, phase retrieval~\cite{netrapalli2013phase} and sparse coding~\cite{AgarwalEtal:SparseCoding2013}. However, the techniques in this paper are significantly different, since they involve tensors, while the previous works only required matrix analysis.
\subsection{Notations and tensor preliminaries}
Let $[n]$ denote the set $\{1,2,\dotsc,n\}$.
Notice that while the standard asymptotic notation is to write $f(d) = O(g(d))$ and $g(d) = \Omega(f(d))$, we sometimes use $f(d) \leq O(g(d))$ and $g(d) \geq \Omega(f(d))$ for additional clarity.
We also use the asymptotic notation $f(d) = \tilde{O}(g(d))$ if and only if $f(d) \leq \alpha g(d)$ for all $d \geq d_0$, for some $d_0 >0$ and $\alpha = \polylog(d)$, i.e., $\tilde{O}$ hides $\polylog$ factors.
\subsubsection*{Tensor preliminaries}
A real \emph{$p$-th order tensor} $T \in \bigotimes_{i=1}^p \R^{d_i}$ is a member of the outer product of Euclidean spaces $\R^{d_i}$, $i \in [p]$.
For convenience, we restrict to the case where $d_1 = d_2 = \dotsb = d_p = d$, and simply write $T \in \bigotimes^p \R^d$.
As is the case for vectors (where $p=1$) and matrices (where $p=2$), we may
identify a $p$-th order tensor with the $p$-way array of real numbers $[
T_{i_1,i_2,\dotsc,i_p} \colon i_1,i_2,\dotsc,i_p \in [d] ]$, where
$T_{i_1,i_2,\dotsc,i_p}$ is the $(i_1,i_2,\dotsc,i_p)$-th coordinate of $T$
with respect to a canonical basis. For convenience, we restrict our analysis to third order tensors $(p=3)$; results for higher order tensors are also provided.
The different dimensions of the tensor are referred to as {\em modes}. For instance, for a matrix, the first mode refers to columns and the second mode refers to rows.
In addition, {\em fibers} are higher order analogues of matrix rows and columns. A fiber is obtained by fixing all but one of the indices of the tensor (and is arranged as a column vector). For instance, for a matrix, its mode-$1$ fiber is any matrix column while a mode-$2$ fiber is any row. For a
third order tensor $T\in \R^{d \times d \times d}$, the mode-$1$ fiber is given by $T(:, j, l)$, mode-$2$ by $T(i, :, l)$ and mode-$3$ by $T(i, j, :)$.
Similarly, {\em slices} are obtained by fixing all but two of the indices of the tensor. For example, for the third order tensor $T$, the slices along the $3$rd mode are given by $T(:, :, l)$.
For $r \in \{1,2,3\}$, the mode-$r$ matricization of a third order tensor $T\in \R^{d \times d \times d}$, denoted by $\mat(T,r) \in {\mathbb R}^{d \times d^2}$, consists of all mode-$r$ fibers arranged as column vectors.
We view a tensor $T \in {\mathbb R}^{d \times d \times d}$ as a multilinear form.
Consider matrices $M_r \in \R^{d\times d_r}, r \in \{1,2,3\}$. Then tensor $T(M_1,M_2,M_3) \in \R^{d_1}\otimes \R^{d_2}\otimes \R^{d_3}$ is defined as
\begin{align} \label{eqn:multilinear form def}
T(M_1,M_2,M_3)_{i_1,i_2,i_3} := \sum_{j_1, j_2,j_3\in[d]} T_{j_1,j_2,j_3} \cdot M_1(j_1, i_1) \cdot M_2(j_2, i_2) \cdot M_3(j_3, i_3).
\end{align}
In particular, for vectors $u,v,w \in \R^d$, we have\,\footnote{Compare with the matrix case where for $M \in \R^{d \times d}$, we have $ M(I,u) = Mu := \sum_{j \in [d]} u_j M(:,j) \in \R^d$.}
\begin{equation} \label{eqn:rank-1 update}
T(I,v,w) = \sum_{j,l \in [d]} v_j w_l T(:,j,l) \ \in \R^d,
\end{equation}
which is a multilinear combination of the tensor mode-$1$ fibers.
Similarly $T(u,v,w) \in \R$ is a multilinear combination of the tensor entries, and $T(I, I, w) \in \R^{d \times d}$ is a linear combination of the tensor slices.
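As a concrete illustration of the multilinear form~\eqref{eqn:multilinear form def} and the fiber combination~\eqref{eqn:rank-1 update}, the following is a minimal NumPy sketch; the function name \texttt{multilinear} and all data are illustrative, not part of the paper.

```python
import numpy as np

# Hypothetical sketch: evaluate the multilinear form T(M1, M2, M3) by contraction.
d = 5
rng = np.random.default_rng(0)
T = rng.standard_normal((d, d, d))

def multilinear(T, M1, M2, M3):
    # T(M1,M2,M3)_{abc} = sum_{jkl} T_{jkl} * M1[j,a] * M2[k,b] * M3[l,c]
    return np.einsum('jkl,ja,kb,lc->abc', T, M1, M2, M3)

u, v, w = rng.standard_normal((3, d))
I = np.eye(d)

# T(I, v, w) is a multilinear combination of the mode-1 fibers T(:, j, l).
lhs = multilinear(T, I, v[:, None], w[:, None]).ravel()
rhs = sum(v[j] * w[l] * T[:, j, l] for j in range(d) for l in range(d))
assert np.allclose(lhs, rhs)

# T(u, v, w) is a scalar combination of all tensor entries.
assert np.isclose(multilinear(T, u[:, None], v[:, None], w[:, None]).item(),
                  np.einsum('jkl,j,k,l->', T, u, v, w))
```

Passing identity matrices in all three modes returns the tensor itself, matching the definition.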
A $3$rd order tensor $T \in {\mathbb R}^{d \times d \times d}$ is said to be rank-$1$ if it can be written in the form
\begin{align} \label{eqn:rank-1 tensor}
T= w \cdot a \otimes b\otimes c \Leftrightarrow T(i,j,l) = w \cdot a(i) \cdot b(j) \cdot c(l),
\end{align}
where notation $\otimes$ represents the {\em outer product} and $a \in {\mathbb R}^d$, $b \in {\mathbb R}^d$, $c \in {\mathbb R}^d$ are unit vectors (without loss of generality).
A tensor $T \in {\mathbb R}^{d \times d \times d}$ is said to have a CP rank $k\geq 1$ if it can be written as the sum of $k$ rank-$1$ tensors
\begin{equation}\label{eqn:tensordecomp}
T = \sum_{i\in [k]} w_i a_i \otimes b_i \otimes c_i, \quad w_i \in {\mathbb R}, \ a_i,b_i,c_i \in {\mathbb R}^d.
\end{equation}
This decomposition is closely related to the multilinear form. In particular, for vectors $\hat{a},\hat{b},\hat{c} \in {\mathbb R}^d$, we have
$$T(\hat{a},\hat{b},\hat{c}) = \sum_{i\in [k]} w_i \langle a_i, \hat{a}\rangle\langle b_i, \hat{b}\rangle\langle c_i,\hat{c}\rangle.$$
Consider the decomposition in equation~\eqref{eqn:tensordecomp},
denote matrix $A:=[a_1 \ a_2 \ \dotsb \ a_k] \in \R^{d \times k}$, and similarly $B$ and $C$. Without loss of generality, we assume that the matrices have normalized columns (in $2$-norm), since we can always rescale them, and adjust the weights $w_i$ appropriately.
Throughout, $\|v\| := (\sum_i v_i^2)^{1/2}$ denotes the Euclidean ($\ell_2$) norm
of a vector $v$, and $\|M\|$ denotes the spectral (operator) norm of a matrix $M$.
Furthermore, $\|T\|$ and $\|T\|_F$ denote the spectral (operator) norm and the Frobenius norm of a tensor, respectively. In particular, for a $3$rd order tensor, we have
$$
\|T\| := \sup_{\|u\| = \|v\| = \|w\| = 1} |T(u,v,w)|, \quad \|T\|_F := \sqrt{\sum_{i,j,l \in [d]} T_{i,j,l}^2}.
$$
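The decomposition~\eqref{eqn:tensordecomp} and the norms above can be checked numerically; the sketch below (hypothetical NumPy code with illustrative dimensions) builds a rank-$k$ tensor from unit-norm factors and verifies the multilinear identity stated after~\eqref{eqn:tensordecomp}.

```python
import numpy as np

# Sketch: build a rank-k CP tensor T = sum_i w_i a_i (x) b_i (x) c_i with
# unit-norm columns, and verify T(a^, b^, c^) = sum_i w_i <a_i,a^><b_i,b^><c_i,c^>.
d, k = 8, 4
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((d, k)) for _ in range(3))
A, B, C = (M / np.linalg.norm(M, axis=0) for M in (A, B, C))   # normalize columns
w = rng.uniform(1.0, 2.0, size=k)

T = np.einsum('i,ai,bi,ci->abc', w, A, B, C)

ah, bh, ch = rng.standard_normal((3, d))
lhs = np.einsum('abc,a,b,c->', T, ah, bh, ch)                  # T(a^, b^, c^)
rhs = np.sum(w * (A.T @ ah) * (B.T @ bh) * (C.T @ ch))
assert np.isclose(lhs, rhs)

# Frobenius norm as defined in the text.
assert np.isclose(np.sqrt((T ** 2).sum()), np.linalg.norm(T.ravel()))
```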
\section{Tensor Decomposition Algorithm} \label{sec:algorithm}
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}
[
scale=1,
nodestyle/.style={fill = gray!30, shape = rectangle, rounded corners, minimum width = 2cm},
]
\small
\matrix [column sep=2mm,row sep=6mm] {
\node[nodestyle](a1){Input: Tensor $T=\sum_{i \in [k]} w_i \cdot a_i \otimes b_i \otimes c_i$}; & \\
\node[nodestyle, align=left](a){\quad \quad Algorithm Initialization: \\ 1) Random initialization \\ 2) SVD-base method: Procedure~\ref{algo:SVD init}}; & \\
\node[nodestyle, align=center](b){Tensor {\em Power Iterations}}; & \\
\node[nodestyle, align=center](c){Clustering the output of tensor \\ power method into $k$ clusters}; &
\\
\node[nodestyle, align=center](d){Coordinate descent updates \\ for removing the residual error}; & \\
\node[nodestyle](e){Output: estimates $\lbrace (\hat{w}_i, \hat{a}_i, \hat{b}_i, \hat{c}_i) \rbrace_{i \in [k]}$}; & \\
};
\draw [->, line width = 1pt] (a1) to (a);
\draw [->, line width = 1pt] (a) to (b);
\draw [->, line width = 1pt] (b) to (c);
\draw [->, line width = 1pt] (c) to (d);
\draw [->, line width = 1pt] (d) to (e);
\node [draw, dashed, rounded corners, violet, line width=0.5pt,
fit = {(a) (b) ($(b.east)+(2mm,0)$) ($(b.west)-(3mm,0)$) ($(a.north)+(0,0.5mm)$) ($(b.south)-(0,0.5mm)$)},
label=left:{\begin{tabular}{c} Algorithm~\ref{algo:Power method form} \end{tabular}}
] {};
\node [draw, dashed, rounded corners, green!40!black, line width=0.5pt,
fit = {(c) ($(c.east)+(2mm,0)$) ($(c.west)-(3mm,0)$) ($(c.north)+(0,0.5mm)$) ($(c.south)-(0,0.5mm)$)},
label=left:{\begin{tabular}{c} Procedure~\ref{alg:cluster} \end{tabular}}
] {};
\node [draw, dashed, rounded corners, violet, line width=0.5pt,
fit = {(d) ($(d.east)+(2mm,0)$) ($(d.west)-(2mm,0)$) ($(d.north)+(0,0.5mm)$) ($(d.south)-(0,0.5mm)$)},
label=left:{\begin{tabular}{c} Algorithm~\ref{algo:coordinate-descent}\\ \& Procedure~\ref{algo:fix-procedure} \end{tabular}}
] {};
\end{tikzpicture}
\end{center}
\vspace{-0.2in}
\caption{\small Overview of tensor decomposition algorithm.}
\label{fig:TensorDecomposition}
\end{figure}
In this section, we introduce the alternating tensor decomposition algorithm; the guarantees are provided in Section~\ref{sec:analysis}. The goal of the tensor decomposition algorithm is to recover the rank-1 components of the tensor; see~\eqref{eqn:tensordecomp} for the notion of tensor rank.
Figure~\ref{fig:TensorDecomposition} depicts an overview of our tensor decomposition method, where the corresponding algorithms and procedures are also specified. Our algorithm consists of two main steps: 1) alternating tensor power iteration, and 2) coordinate descent iteration for removing the residual error. The former is performed in Algorithm~\ref{algo:Power method form} (see equation~\eqref{eqn:asymmetric power update}), and the latter in Algorithm~\ref{algo:coordinate-descent} (see equation~\eqref{eqn:BiasRemoval}). We now describe these steps of the algorithm in more detail, and also provide the auxiliary procedures required to complete the algorithm.
\subsection{Tensor power iteration in Algorithm~\ref{algo:Power method form}}
The main step of the algorithm is the tensor power iteration, which performs alternating {\em asymmetric power updates}\,\footnote{This is exactly the generalization of the asymmetric matrix power update to $3$rd order tensors.}
on different modes of the tensor as
\begin{equation} \label{eqn:asymmetric power update}
\hat{a}^{(t+1)} = \frac{T \left( I, \hat{b}^{(t)}, \hat{c}^{(t)} \right)}{\left\| T \left( I, \hat{b}^{(t)}, \hat{c}^{(t)} \right) \right\|}, \ \
\hat{b}^{(t+1)} = \frac{T \left( \hat{a}^{(t)}, I, \hat{c}^{(t)} \right)}{\left\| T \left( \hat{a}^{(t)}, I, \hat{c}^{(t)} \right) \right\|}, \ \
\hat{c}^{(t+1)} = \frac{T \left( \hat{a}^{(t)}, \hat{b}^{(t)},I \right)}{\left\| T \left( \hat{a}^{(t)}, \hat{b}^{(t)},I \right) \right\|},
\end{equation}
where $\{\hat{a}^{(t)},\hat{b}^{(t)},\hat{c}^{(t)}\}$ denotes the estimate at the $t$-th iteration.
Recall that for vectors $v,w \in \R^d$, the multilinear form $T(I,v,w) \in \R^d$ used in the above update formula is defined in \eqref{eqn:rank-1 update}, where $T(I,v,w)$ is a multilinear combination of the tensor mode-$1$ fibers.
Notice that the updates alternate among the different modes of the tensor, which can be viewed as a rank-$1$ form of the standard Alternating Least Squares (ALS) method. We discuss this relation in more detail later.
\paragraph{Optimization viewpoint:}
Consider the problem of best rank-$1$ approximation of tensor $T$ as
\begin{equation} \label{eqn:power-opt}
\min_{\substack{a,b,c \in {\cal S}^{d-1} \\ w \in {\mathbb R}}} \| T - w \cdot a \otimes b \otimes c \|_F,
\end{equation}
where ${\cal S}^{d-1}$ denotes the unit sphere in $d$ dimensions. This optimization program is non-convex and has multiple local optima. It can be shown that the updates in \eqref{eqn:asymmetric power update} perform alternating optimization for this program: in each update, the optimization is over one vector while the other two vectors are held fixed. This alternating minimization approach does not converge to the true components of tensor $T$ in general; in this paper, we provide sufficient conditions under which convergence is guaranteed.
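As a minimal sketch of the alternating update~\eqref{eqn:asymmetric power update} (illustrative NumPy code, not the paper's implementation; the toy orthogonal tensor is chosen only so that convergence is easy to verify):

```python
import numpy as np

# Minimal sketch of the alternating asymmetric power updates: all three modes
# are updated from the iteration-t estimates, then normalized.
def power_iteration(T, a, b, c, n_iter=50):
    for _ in range(n_iter):
        a_new = np.einsum('ijk,j,k->i', T, b, c)   # T(I, b, c)
        b_new = np.einsum('ijk,i,k->j', T, a, c)   # T(a, I, c)
        c_new = np.einsum('ijk,i,j->k', T, a, b)   # T(a, b, I)
        a, b, c = (v / np.linalg.norm(v) for v in (a_new, b_new, c_new))
    w = np.einsum('ijk,i,j,k->', T, a, b, c)       # weight estimate
    return w, a, b, c

# On an orthogonal rank-2 toy tensor, the iteration recovers the dominant component.
d = 6
T = np.zeros((d, d, d))
T[0, 0, 0], T[1, 1, 1] = 2.0, 1.0
x = np.ones(d) / np.sqrt(d)
w, a, b, c = power_iteration(T, x, x, x)
assert np.isclose(w, 2.0) and np.isclose(abs(a[0]), 1.0)
```

For non-orthogonal components the fixed points are only approximate, which is exactly the residual error discussed below.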
\begin{algorithm}[t]
\caption{Tensor decomposition via alternating asymmetric power updates}
\label{algo:Power method form}
\begin{algorithmic}[1]
\REQUIRE Tensor $T \in {\mathbb R}^{d \times d \times d}$, number of initializations $L$, number of iterations $N$.
\FOR{$\tau=1$ \TO $L$}
\STATE \textbf{Initialize} unit vectors $\hat{a}_\tau^{(0)} \in {\mathbb R}^d$, $\hat{b}_\tau^{(0)} \in {\mathbb R}^d$, and $\hat{c}_\tau^{(0)} \in {\mathbb R}^d$ as
\begin{itemize}[itemsep=-1mm]
\vspace{-2mm}
\item Option 1: SVD-based method in Procedure~\ref{algo:SVD init} when $k\leq \beta d$ for arbitrary constant $\beta$.
\item Option 2: random initialization.
\end{itemize}
\FOR{$t=0$ \TO $N-1$}
\STATE Asymmetric power updates (see \eqref{eqn:rank-1 update} for the definition of the multilinear form):
\begin{align*}
\hat{a}_\tau^{(t+1)} = \frac{T \left( I, \hat{b}_\tau^{(t)}, \hat{c}_\tau^{(t)} \right)}{\left\| T \left( I, \hat{b}_\tau^{(t)}, \hat{c}_\tau^{(t)} \right) \right\|}, \quad
\hat{b}_\tau^{(t+1)} = \frac{T \left( \hat{a}_\tau^{(t)}, I, \hat{c}_\tau^{(t)} \right)}{\left\| T \left( \hat{a}_\tau^{(t)}, I, \hat{c}_\tau^{(t)} \right) \right\|}, \quad
\hat{c}_\tau^{(t+1)} = \frac{T \left( \hat{a}_\tau^{(t)}, \hat{b}_\tau^{(t)},I \right)}{\left\| T \left( \hat{a}_\tau^{(t)}, \hat{b}_\tau^{(t)},I \right) \right\|}.
\end{align*}
\ENDFOR
\STATE Weight estimation:
\begin{align} \label{eqn:weight update}
\hat{w}_\tau = T \left( \hat{a}_\tau^{(N)}, \hat{b}_\tau^{(N)}, \hat{c}_\tau^{(N)} \right).
\end{align}
\ENDFOR
\STATE Cluster set $\left\{ \left( \hat{w}_\tau,\hat{a}_\tau^{(N)},\hat{b}_\tau^{(N)},\hat{c}_\tau^{(N)} \right), \tau \in [L] \right\}$ into $k$ clusters as in Procedure~\ref{alg:cluster}.
\RETURN the center member of these $k$ clusters as estimates $(\hat{w}_j,\hat{a}_j,\hat{b}_j,\hat{c}_j), j \in [k]$.
\end{algorithmic}
\end{algorithm}
\paragraph{Intuition:}
We now provide an intuitive argument on the functionality of power updates in~\eqref{eqn:asymmetric power update}.
Consider a rank-$k$ tensor $T$ as in \eqref{eqn:tensordecomp}, and suppose we start at the correct vectors $\hat{a}=a_j$ and $\hat{b}=b_j$, for some $j \in [k]$. Then for the numerator of update formula~\eqref{eqn:asymmetric power update}, we have
\begin{equation}\label{eqn:intuition}
T \left( \hat{a}, \hat{b}, I \right)
= T \left( a_j, b_j, I \right)
= w_j c_j + \sum_{i \neq j} w_i \langle a_j,a_i \rangle \langle b_j,b_i \rangle c_i,
\end{equation}
where the first term is along $c_j$ and the second is an error term due to non-orthogonality. For an orthogonal decomposition, the second term is zero, and the true vectors $a_j, b_j$ and $c_j$ are stationary points of the power update procedure. However, since we consider non-orthogonal tensors, this procedure cannot recover the decomposition exactly, leaving a residual error after this step.
Under incoherence conditions, which impose a soft-orthogonality constraint\,\footnote{See Assumption~\ref{cond:incoherence} in Appendix~\ref{sec:assumptions} for the precise description.} (and some other conditions), we show that the residual error is small (see Lemma~\ref{thm:local convergence-poweriteration}, where the guarantees for the tensor power iteration step are provided); with the additional step we propose in Section~\ref{sec:coordinate-descent}, we can then remove this residual error as well.
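The identity~\eqref{eqn:intuition} can be verified numerically; the following hypothetical NumPy sketch builds a small non-orthogonal tensor and checks the decomposition of $T(a_j, b_j, I)$ into the signal term and the cross term.

```python
import numpy as np

# Numeric check of the intuition equation: starting from the true a_j, b_j, the
# unnormalized update T(a_j, b_j, I) is w_j c_j plus a non-orthogonality cross term.
d, k, j = 10, 3, 0
rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((d, k)) for _ in range(3))
A, B, C = (M / np.linalg.norm(M, axis=0) for M in (A, B, C))
w = np.ones(k)
T = np.einsum('i,ai,bi,ci->abc', w, A, B, C)

update = np.einsum('abc,a,b->c', T, A[:, j], B[:, j])          # T(a_j, b_j, I)
cross = sum(w[i] * (A[:, j] @ A[:, i]) * (B[:, j] @ B[:, i]) * C[:, i]
            for i in range(k) if i != j)
assert np.allclose(update, w[j] * C[:, j] + cross)
```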
\paragraph{Initialization and clustering procedures:}
We discussed that the tensor power updates in~\eqref{eqn:asymmetric power update} are the alternating iterations for the problem of rank-1 approximation of the tensor; see~\eqref{eqn:power-opt}. This is a non-convex problem and has many local optima. Thus, the power update requires careful initialization to ensure convergence to the true rank-1 tensor components.
For generating the initialization vectors $\bigl( \hat{a}^{(0)}, \hat{b}^{(0)}, \hat{c}^{(0)} \bigr)$, we introduce two possibilities. One is simple random initialization, where $\hat{a}^{(0)}$ and $\hat{b}^{(0)}$ are drawn uniformly from the unit sphere ${\cal S}^{d-1}$. The other option is the SVD-based technique in Procedure~\ref{algo:SVD init}, where the top left and right singular vectors of $T(I,I,\theta)$ (for some random $\theta \in {\mathbb R}^d$) serve as $\hat{a}^{(0)}$ and $\hat{b}^{(0)}$, respectively. Under both initialization procedures, vector $\hat{c}^{(0)}$ is generated through the update formula in \eqref{eqn:asymmetric power update}. We establish in Section \ref{sec:global convergence} that when $k=O(d)$, the SVD procedure leads to global convergence guarantees within a polynomial number of trials. In practice, random initialization also works well; however, its analysis remains an open problem.
Notice that the algorithm is run with $L$ different initialization vectors, and we do not know in advance which of them are good. In order to identify which initializations are successful in the end, we also need a {\em clustering} step, proposed in Procedure~\ref{alg:cluster}, to obtain the final estimates of the vectors. The detailed analysis of the clustering procedure is provided in Appendix~\ref{sec:clustering}.
\floatname{algorithm}{Procedure}
\begin{algorithm}[t]
\caption{SVD-based initialization when $k\leq \beta d$ for arbitrary constant $\beta$}
\label{algo:SVD init}
\begin{algorithmic}[1]
\REQUIRE Tensor $T \in {\mathbb R}^{d \times d \times d}$.
\STATE Draw a random standard Gaussian vector $\theta \sim \mathcal{N}(0,I_d).$
\STATE Compute $u_1$ and $v_1$ as the top left and right singular vectors of $T(I,I,\theta) \in \R^{d \times d}$.
\STATE $\hat{a}^{(0)} \leftarrow u_1$, $\hat{b}^{(0)} \leftarrow v_1$.
\STATE Initialize $\hat{c}^{(0)}$ by update formula in \eqref{eqn:asymmetric power update}.
\RETURN $\bigl( \hat{a}^{(0)}, \hat{b}^{(0)}, \hat{c}^{(0)} \bigr)$.
\end{algorithmic}
\end{algorithm}
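A minimal sketch of Procedure~\ref{algo:SVD init} in NumPy (illustrative; the function name \texttt{svd\_init} and the toy tensor are hypothetical, not the paper's code):

```python
import numpy as np

# Sketch of the SVD-based initialization: contract the tensor with a random
# Gaussian vector theta along the third mode and take the top singular pair
# of the resulting slice combination T(I, I, theta).
def svd_init(T, rng):
    d = T.shape[0]
    theta = rng.standard_normal(d)                # theta ~ N(0, I_d)
    M = np.einsum('ijk,k->ij', T, theta)          # T(I, I, theta), a d x d matrix
    U, _, Vt = np.linalg.svd(M)
    a0, b0 = U[:, 0], Vt[0]                       # top left/right singular vectors
    c0 = np.einsum('ijk,i,j->k', T, a0, b0)       # one power update for c^(0)
    return a0, b0, c0 / np.linalg.norm(c0)

# On a toy rank-1 tensor 3 e1 (x) e1 (x) e1, the initializer aligns with e1.
T = np.zeros((5, 5, 5)); T[0, 0, 0] = 3.0
a0, b0, c0 = svd_init(T, np.random.default_rng(0))
assert np.isclose(abs(a0[0]), 1.0) and np.isclose(abs(c0[0]), 1.0)
```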
\begin{algorithm}[t]
\caption{Clustering process}
\label{alg:cluster}
\begin{algorithmic}[1]
\REQUIRE Tensor $T \in {\mathbb R}^{d \times d \times d}$, set of $4$-tuples
$\left\{(\hat{w}_\tau, \hat{a}_\tau,\hat{b}_\tau, \hat{c}_\tau),\tau\in [L]\right\}$, parameter $\nu$.
\FOR{$i = 1$ \TO $k$}
\STATE Among the remaining 4-tuples, choose $\hat{a},\hat{b},\hat{c}$ which correspond to the largest $|T(\hat{a},\hat{b},\hat{c})|$.
\STATE Do $N$ more iterations of alternating updates in \eqref{eqn:asymmetric power update} starting from $\hat{a},\hat{b},\hat{c}$.
\STATE Let the output of these iterations, denoted by $(\hat{a},\hat{b},\hat{c})$, be the center of cluster $i$.
\STATE Remove all the tuples with $\max\{|\langle \hat{a}_\tau,\hat{a}\rangle|,|\langle \hat{b}_\tau,\hat{b}\rangle|,|\langle \hat{c}_\tau,\hat{c}\rangle|\} > \nu/2$.
\ENDFOR
\RETURN the $k$ cluster centers.
\end{algorithmic}
\end{algorithm}
\subsection{Coordinate descent iteration in Algorithm~\ref{algo:coordinate-descent}} \label{sec:coordinate-descent}
We discussed in the previous section that the tensor power iteration recovers the rank-1 components of the tensor only up to some residual error. We now propose Algorithm~\ref{algo:coordinate-descent} to remove this residual error. The core of this algorithm is the coordinate descent iteration
\begin{align}
\tilde{c}_i^{(t+1)} = \operatorname{Norm} \biggl( T \left(\h{a}_i^{(t)},\h{b}_i^{(t)},I \right) - \sum_{j\ne i} \h{w}_j^{(t)} \inner{\h{a}_i^{(t)},\h{a}_j^{(t)}} \inner{\h{b}_i^{(t)},\h{b}_j^{(t)}} \cdot \h{c}_j^{(t)} \biggr), \quad i \in [k], \label{eqn:BiasRemoval}
\end{align}
where for a vector $v$, $\operatorname{Norm} (v) := v/\|v\|$ normalizes the vector. Analogous updates are applied to $\tilde{a}^{(t+1)}_i$ and $\tilde{b}^{(t+1)}_i$.
Unlike the power iteration, it can be immediately seen that $a_i$, $b_i$ and $c_i$ are stationary points of the above update even if the components are not orthogonal to each other. Inspired by this intuition, we prove that when the residual error is small enough (as guaranteed in the analysis of tensor power iteration), this step removes it.
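A minimal sketch of the update~\eqref{eqn:BiasRemoval} (hypothetical NumPy code; \texttt{coordinate\_step} is an illustrative name), which also checks numerically that the true non-orthogonal factors are a fixed point:

```python
import numpy as np

# Sketch of the coordinate descent update: subtract the estimated contributions
# of the other components before normalizing.
def coordinate_step(T, A, B, C, w, i):
    v = np.einsum('abc,a,b->c', T, A[:, i], B[:, i])      # T(a_i, b_i, I)
    for j in range(A.shape[1]):
        if j != i:
            v -= w[j] * (A[:, i] @ A[:, j]) * (B[:, i] @ B[:, j]) * C[:, j]
    return v / np.linalg.norm(v)

# At the true (non-orthogonal) factors, the update is a fixed point.
d, k = 8, 3
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((d, k)) for _ in range(3))
A, B, C = (M / np.linalg.norm(M, axis=0) for M in (A, B, C))
w = np.ones(k)
T = np.einsum('i,ai,bi,ci->abc', w, A, B, C)
assert np.allclose(coordinate_step(T, A, B, C, w, 0), C[:, 0])
```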
The analysis of this algorithm requires that the estimate matrices $\hat{A}, \hat{B}, \hat{C}$ satisfy some bound on the spectral norm and some column-wise error bounds; see Definition~\ref{def:nice-property} in Appendix~\ref{sec:convergence proof-coordinate descent} for the details. The optimization program in~\eqref{eqn:coordinate descent-opt} (which is only run in the first iteration) and projection Procedure~\ref{algo:fix-procedure} ensure that these conditions are satisfied.
\floatname{algorithm}{Algorithm}
\begin{algorithm}[h]
\caption{Coordinate descent algorithm for removing the residual error}
\label{algo:coordinate-descent}
\begin{algorithmic}[1]
\REQUIRE Tensor $T \in {\mathbb R}^{d \times d \times d}$, initialization set $\left\{\h{A}, \h{B}, \h{C}, \h{w}^{(0)} \right\}$, number of iterations $N$.
\STATE Initialize $\h{A}^{(0)}$ as (similarly for $ \h{B}^{(0)}, \h{C}^{(0)}$)
\begin{equation} \label{eqn:coordinate descent-opt}
\h{A}^{(0)} := \argmin_{\tilde{A}} \|\tilde{A}\| \quad
\operatorname{s.t.} \ \|\tilde{a}_i - \widehat{a}_i \| \le \tilde{O} \left( \sqrt{k}/d \right), \ i \in[k].
\end{equation}
\FOR{$t=0$ \TO $N-1$}
\FOR{$i=1$ \TO $k$}
\STATE
\begin{align*}
\tilde{w}_i^{(t+1)} &= \biggl\| T \left(\h{a}_i^{(t)},\h{b}_i^{(t)},I \right) - \sum_{j\ne i} \h{w}_j^{(t)} \inner{\h{a}_i^{(t)},\h{a}_j^{(t)}} \inner{\h{b}_i^{(t)},\h{b}_j^{(t)}} \cdot \h{c}_j^{(t)} \biggr\|, \nonumber \\
\tilde{c}_i^{(t+1)} &= \frac{1}{\tilde{w}_i^{(t+1)}} \biggl( T \left(\h{a}_i^{(t)},\h{b}_i^{(t)},I \right) - \sum_{j\ne i} \h{w}_j^{(t)} \inner{\h{a}_i^{(t)},\h{a}_j^{(t)}} \inner{\h{b}_i^{(t)},\h{b}_j^{(t)}} \cdot \h{c}_j^{(t)} \biggr).
\end{align*}
\ENDFOR
\STATE Update $\h{C}^{(t+1)}$ by applying Procedure~\ref{algo:fix-procedure} with inputs $\tilde}\renewcommand\t{{\scriptscriptstyle\top}{C}^{(t+1)}$ and $\h{C}^{(t)}$.
\STATE Repeat the above steps (with appropriate changes) to update $\h{A}^{(t+1)}$ and $\h{B}^{(t+1)}$.
\STATE Update $\h{w}^{(t+1)}$: for any $i \in [k]$,
$
\h{w}_i^{(t+1)} =
\left\{\begin{array}{ll}
\tilde{w}_i^{(t+1)}, & \left| \tilde{w}_i^{(t+1)} - \h{w}^{(t)}_i \right| \leq \eta_0 \frac{\sqrt{k}}{d}, \\
\h{w}^{(t)}_i + \operatorname{sgn} \left(\tilde{w}_i^{(t+1)} - \h{w}^{(t)}_i \right) \cdot \eta_0 \frac{\sqrt{k}}{d}, & \operatorname{o.w.}
\end{array} \right.
$
\ENDFOR
\RETURN $\left\{\h{A}^{(N)}, \h{B}^{(N)}, \h{C}^{(N)}, \h{w}^{(N)} \right\}$.
\end{algorithmic}
\end{algorithm}
\floatname{algorithm}{Procedure}
\begin{algorithm}[h]
\caption{Projection procedure}
\label{algo:fix-procedure}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{input}}
\renewcommand{\algorithmicensure}{\textbf{output}}
\REQUIRE Matrices $\tilde{C}^{(t+1)}$, $\h{C}^{(t)}$.
\STATE Compute the SVD of $\tilde{C}^{(t+1)} = UDV^\top$.
\STATE Let $\h{D}$ be the truncated version of $D$ as $\h{D}_{i,i} := \min \left\{ D_{i,i},\eta_1\sqrt{\frac{k}{d}} \right\}.$
\STATE Let $Q := U\h{D}V^\top$.
\STATE Update $\h{C}^{(t+1)}$: for any $i \in [k]$,
$
\h{c}_i^{(t+1)} =
\left\{\begin{array}{ll}
Q_i, & \left\|Q_i-\h{c}^{(t)}_i \right\| \le \eta_0 \frac{\sqrt{k}}{d}, \\
\h{c}^{(t)}_i + \eta_0 \frac{\sqrt{k}}{d} \frac{\left( Q_i-\h{c}^{(t)}_i \right)}{\left\|Q_i-\h{c}^{(t)}_i \right\|}, & \operatorname{o.w.}
\end{array} \right.
$
\RETURN $\h{C}^{(t+1)}$.
\end{algorithmic}
\end{algorithm}
\subsection{Discussions}
We now provide some further discussions and comparisons about the algorithm.
\paragraph{Implicit tensor operations:}
In many applications, the input tensor $T$ is not available in advance and must be computed from samples. It is discussed in~\cite{OvercompleteLVMs2014} that the tensor need not be computed and stored explicitly: the multilinear tensor updates~\eqref{eqn:asymmetric power update}~and~\eqref{eqn:BiasRemoval} in the algorithm can be computed efficiently through multilinear operations on the samples directly.
\paragraph{Comparison with symmetric orthogonal tensor power method:}
Algorithm~\ref{algo:Power method form} is similar to the symmetric tensor power method
analyzed by~\cite{AnandkumarEtal:tensor12}, with the following main differences:
\begin{itemize}[itemsep=-0.5mm]
\item Symmetric and non-symmetric tensors: Our algorithm can be applied to both symmetric and non-symmetric tensors, while the tensor power method in~\cite{AnandkumarEtal:tensor12} applies only to symmetric tensors.
\item Linearity: The updates in Algorithm \ref{algo:Power method form} are linear in each variable, while the symmetric tensor power update is a quadratic operator given a third order tensor.
\item Guarantees: In~\cite{AnandkumarEtal:tensor12}, guarantees for the symmetric tensor power update under orthogonality are obtained, while here we consider non-orthogonal tensors under the alternating updates.
\end{itemize}
\paragraph{Comparison with Alternating Least Squares (ALS):}
The updates in Algorithm \ref{algo:Power method form} can be viewed as a rank-$1$ form of the standard alternating least squares (ALS) procedure. This is because the unnormalized update for $c$ in \eqref{eqn:asymmetric power update} can be rewritten as
\begin{align}
\tilde{c}_\tau^{(t+1)}
:= T \left( \hat{a}_\tau^{(t)}, \hat{b}_\tau^{(t)},I \right)
= \mat(T,3) \cdot \left( \hat{b}_\tau^{(t)}\odot \hat{a}_\tau^{(t)} \right), \label{eqn:approx update-C}
\end{align}
where $\odot$ denotes the {\em Khatri-Rao} product, and $\mat(T,3) \in \R^{d \times d^2}$ is the mode-$3$ matricization of tensor $T$. On the other hand, the ALS update has the form
$$
\tilde{C}^{(t+1)} =\mat(T,3) \cdot \left( \left( \hat{B}^{(t)}\odot \hat{A}^{(t)} \right)^\top \right)^\dagger,
$$
where $k$ vectors (all columns of $\tilde{C}^{(t+1)} \in \R^{d \times k}$) are simultaneously updated given the current estimates of the other two modes $\hat{A}^{(t)}$ and $\hat{B}^{(t)}$. In contrast, our procedure updates only one vector (targeting one column of $C$) in each iteration. Our update does not require computing matrix (pseudo-)inverses, which leads to efficient computational complexity, and we also show that our update procedure is more robust to perturbations.
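The identity~\eqref{eqn:approx update-C} can be checked numerically; the sketch below is illustrative NumPy code and assumes one common ordering convention for the mode-3 matricization and the Khatri-Rao product.

```python
import numpy as np

# Check: the unnormalized update T(a, b, I) equals the mode-3 matricization of T
# times the Khatri-Rao product b (.) a (column-wise Kronecker; here single columns).
d = 4
rng = np.random.default_rng(3)
T = rng.standard_normal((d, d, d))
a, b = rng.standard_normal((2, d))

lhs = np.einsum('ijk,i,j->k', T, a, b)             # T(a, b, I)
mat3 = T.transpose(2, 1, 0).reshape(d, d * d)      # mode-3 fibers as columns
rhs = mat3 @ np.kron(b, a)                         # Khatri-Rao of single columns
assert np.allclose(lhs, rhs)
```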
\section{Analysis} \label{sec:analysis}
In this section, we provide the local and global convergence guarantees for the tensor decomposition algorithm proposed in Section~\ref{sec:algorithm}.
Throughout the paper, we assume tensor $\hat{T} \in {\mathbb R}^{d \times d \times d}$ is of the form $\hat{T} = T + \Psi$, where $\Psi$ is the error or perturbation tensor, and\footnote{For 4th and higher order tensors, the same techniques introduced in this paper can be exploited to argue similar results.}
$$
T = \sum_{i\in [k]} w_i \cdot a_i\otimes b_i\otimes c_i,
$$
is a rank-$k$ tensor such that $a_i,b_i,c_i \in {\mathbb R}^d, i \in [k],$ are unit vectors. Let $A := [a_1 \ a_2 \ \dotsb \ a_k] \in {\mathbb R}^{d \times k}$, and let $B$ and $C$ be defined similarly.
The goal of the robust tensor decomposition algorithm is to recover the rank-1 components $\{(a_i, b_i, c_i), i \in [k]\}$ given the noisy tensor $\hat{T}$. Our analysis emphasizes the challenging {\em overcomplete} regime, where the tensor rank is larger than the dimension, i.e., $k>d$.
Without loss of generality we also assume $w_{\max} = w_1 \ge w_2\ge \cdots\ge w_k = w_{\min} > 0$.
We require natural deterministic conditions on the tensor components to argue the convergence guarantees; see Appendix~\ref{sec:assumptions} for the details. We show that all of these conditions are satisfied if the true rank-1 components of the tensor are i.i.d.\ drawn uniformly from the unit $d$-dimensional sphere ${\cal S}^{d-1}$. Thus, for simplicity we make this randomness assumption in the main part, and state the deterministic assumptions in Appendix \ref{sec:assumptions}. Note that it is also reasonable for these deterministic assumptions to hold for some non-random matrices. Among the deterministic assumptions, the most important is the {\em incoherence} condition, which imposes a soft-orthogonality constraint between different rank-1 components of the tensor.
The convergence guarantees are provided in terms of the distance between the estimated and the true vectors, defined below.
\begin{definition}
For any two vectors $u, v \in \R^d$, the distance between them is defined as
\begin{align} \label{eqn:dist function definition}
\dist(u,v) := \sup_{z \perp u} \frac{\langle z,v \rangle}{\| z \| \cdot \| v \|}
= \sup_{z \perp v} \frac{\langle z,u \rangle}{\| z \| \cdot \| u \|}.
\end{align}
\end{definition}
Note that the distance function $\dist(u,v)$ is invariant w.r.t.\ the norms of the input vectors $u$ and $v$. The distance also provides an upper bound on the error between unit vectors $u$ and $v$ as (see Lemma A.1 of \cite{AgarwalEtal:SparseCoding2013})
$$
\min_{z \in \{-1,1\}} \|zu-v \| \leq \sqrt{2} \dist(u,v).
$$
Incorporating this distance notion resolves the sign ambiguity in recovering the components: note that a third order tensor is unchanged if the sign of the vector along one of the modes is fixed while the signs of the corresponding vectors in the other two modes are flipped.
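The distance in the definition above can be computed in closed form: the supremum over $z \perp u$ of $\langle z,v \rangle / (\|z\| \|v\|)$ is attained at the projection of $v$ onto the orthogonal complement of $u$, so $\dist(u,v)$ is the sine of the angle between the two vectors. The following sketch (our own, not from the paper) implements this and checks the stated properties.

```python
import numpy as np

def dist(u, v):
    """sin of the angle between u and v; invariant to the norms of u and v."""
    u_unit = u / np.linalg.norm(u)
    v_unit = v / np.linalg.norm(v)
    # norm of the component of v_unit orthogonal to u_unit
    return np.linalg.norm(v_unit - np.dot(v_unit, u_unit) * u_unit)

rng = np.random.default_rng(1)
u = rng.standard_normal(6)
v = rng.standard_normal(6)
```

The checks below confirm symmetry, scale invariance (including sign flips, which is how the sign ambiguity is absorbed), and the $\sqrt{2}$ upper-bound relation quoted above.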
\subsection{Local convergence guarantee} \label{sec:local convergence}
In the local convergence guarantee, we analyze the convergence properties of the algorithm assuming we have good initialization vectors for the non-convex tensor decomposition algorithm.
\paragraph{Settings of Algorithm in Theorem~\ref{thm:local convergence}:}
\begin{itemize}[itemsep=-1mm]
\item Number of iterations: $N = \Theta \left( \log \left( \frac{1}{\gamma \epsilon_R} \right) \right)$, where $\gamma := \frac{w_{\max}}{w_{\min}}$ and $\epsilon_R := \min \left\{ \frac{\psi}{w_{\min}}, \tilde{O} \left( \gamma \frac{\sqrt{k}}{d} \right) \right\}$.
\end{itemize}
\paragraph{Conditions for Theorem~\ref{thm:local convergence}:}
\begin{itemize}[itemsep=-1mm]
\item Rank-$k$ true tensor with random components: Let
$$
T= \sum_{i\in [k]} w_i \cdot a_i \otimes b_i \otimes c_i,\quad w_i>0, a_i, b_i, c_i \in {\cal S}^{d-1},
$$
where $a_i,b_i,c_i, i \in [k],$ are uniformly i.i.d.\ drawn from the unit $d$-dimensional sphere ${\cal S}^{d-1}$. We state the deterministic assumptions in Appendix \ref{sec:assumptions}, and show that random matrices satisfy these assumptions.
\item Rank condition: $k = o \left( d^{1.5} \right).$
\item Perturbation tensor $\Psi$ satisfies the bound
$$\psi := \|\Psi\| \le \frac{w_{\min}}{6}.$$
\item Weight ratio: The maximum ratio of weights $\gamma := \frac{w_{\max}}{w_{\min}}$ satisfies the bound
$$
\gamma = O \left( \min \left\{ \sqrt{d}, \frac{d^{1.5}}{k} \right\} \right).
$$
\item Initialization: Assume we have good initialization vectors $\hat{a}^{(0)}_j, \hat{b}^{(0)}_j , j \in [k]$ satisfying
\begin{align} \label{eqn:good init}
\epsilon_0 := \max \left\{ \dist \left(\hat{a}^{(0)}_j, a_j\right), \dist \left(\hat{b}^{(0)}_j, b_j \right) \right\}
= O (1/\gamma), \quad \forall j \in [k],
\end{align}
where $\gamma := \frac{w_{\max}}{w_{\min}}$. In addition, given $\hat{a}^{(0)}_j$ and $\hat{b}^{(0)}_j$, suppose $\hat{c}^{(0)}_j$ is also calculated by the update formula in \eqref{eqn:asymmetric power update}.
\end{itemize}
\begin{theorem}[Local convergence guarantee of the tensor decomposition algorithm]
\label{thm:local convergence}
Consider noisy rank-$k$ tensor $\hat{T} = T + \Psi$ as the input to the tensor decomposition algorithm, and assume the conditions and settings mentioned above hold.
Then the algorithm outputs estimates $\hat{A}:= [\hat{a}_1 \dotsb \hat{a}_k] \in \R^{d \times k}$ and $\hat{w} := [\hat{w}_1 \dotsb \hat{w}_k]^\top \in \R^k$, satisfying w.h.p.\
\begin{equation*}
\left\| \widehat{A} - A \right\|_F \leq \tilde{O} \left( \frac{\sqrt{k} \cdot \psi}{w_{\min}} \right), \quad
\left\| \hat{w} - w \right\| \leq
\tilde{O} \left( \sqrt{k} \cdot \psi \right).
\end{equation*}
The same error bounds hold for the other factor matrices $B := [b_1 \dotsb b_k]$ and $C := [c_1 \dotsb c_k]$.
\end{theorem}
See the proof in Appendix \ref{sec:convergence proof}.
Thus, we can efficiently decompose the tensor in the highly overcomplete regime $k = o \left( d^{1.5} \right)$ under incoherent factors and the other assumptions mentioned above. The deterministic version of the assumptions is stated in Appendix~\ref{sec:assumptions}; we show that these assumptions hold for the random components assumed here for simplicity. If $k$ is significantly smaller than $d^{1.5}$ (i.e., $k\ll d^{1.25}$), then many of the assumptions can be derived from incoherence alone. See Appendix~\ref{sec:assumptions} for the details.
The above local convergence result can also be interpreted as a local identifiability result for tensor decomposition under incoherent factors.
The $\sqrt{k}$ factor in the error bound of the above theorem arises because the final recovery guarantee is on the Frobenius norm of the whole factor matrix $A$.
In the following, we provide stronger column-wise guarantees (with no $\sqrt{k}$ factor) at the expense of an additional residual error term.
Recall that our algorithm includes two main update steps including tensor power iteration in~\eqref{eqn:asymmetric power update} and residual error removal in~\eqref{eqn:BiasRemoval}.
The guarantee for the first step --- tensor power iteration --- is provided in the following lemma.
\begin{lemma}[Local convergence guarantee of the tensor power updates, Algorithm~\ref{algo:Power method form}]
\label{thm:local convergence-poweriteration}
Consider the same settings as in Theorem~\ref{thm:local convergence}. Then, the outputs of the tensor power iteration steps (output of Algorithm~\ref{algo:Power method form}) satisfy w.h.p.\
\begin{equation*}
\dist(\widehat{a}_j, a_j) \leq \tilde{O} \left( \frac{\psi}{w_{\min}} \right) + \tilde{O} \left( \gamma \frac{\sqrt{k}}{d} \right), \quad
\left| \hat{w}_j - w_j \right| \leq
\tilde{O} \left( \psi \right) + \tilde{O} \left( w_{\max} \frac{\sqrt{k}}{d} \right), \quad
j \in [k].
\end{equation*}
The same error bounds hold for the other factor matrices $B$ and $C$.
\end{lemma}
The above result carries the additional residual error $\tilde{O} \left( \gamma \frac{\sqrt{k}}{d} \right)$, but we believe it also has independent importance for the following reasons.
It provides column-wise guarantees, which are stronger than the guarantees on the whole factor matrix in Theorem~\ref{thm:local convergence}. Furthermore, we can obtain recovery guarantees for only a subset of the rank-1 components of the tensor (the ones for which we have good initializations) without worrying about the rest of the components. Finally, in the high-dimensional regime (large $d$), the residual error term goes to zero.
The result in the above lemma is actually stated in non-asymptotic form, where the constants are explicitly provided in Appendix \ref{sec:assumptions}.
%
\paragraph{Symmetric tensor decomposition:} The above local convergence result also holds for recovering the components of a rank-$k$ {\em symmetric} tensor. Consider symmetric tensor $T$ with CP decomposition $T = \sum_{i \in [k]} w_i a_i \otimes a_i \otimes a_i$. The proposed algorithm can also be applied to recover the components $a_i, i \in [k]$, where the main updates are adapted to the symmetric tensor. The tensor power iteration becomes
\begin{align} \label{eqn:symmetric power update}
\hat{a}^{(t+1)} = \frac{T \left( \hat{a}^{(t)}, \hat{a}^{(t)}, I \right)}{\left\| T \left( \hat{a}^{(t)}, \hat{a}^{(t)}, I \right) \right\|},
\end{align}
and the coordinate descent update is changed to the form stated in~\eqref{eqn:BiasRemoval-symmetric}.
Then, the same local convergence result as in Theorem \ref{thm:local convergence} holds for this algorithm. The proof is very similar to that of Theorem \ref{thm:local convergence}, with slight modifications accounting for the symmetric structure.
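A minimal sketch of the symmetric power iteration in the update above is given below. For an easily checkable illustration we use orthonormal components (the paper handles the harder incoherent, non-orthogonal case), for which the iteration converges to one of the true components; all names are ours, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 8, 3
# Orthonormal components via QR, for illustration only.
A, _ = np.linalg.qr(rng.standard_normal((d, k)))
w = np.array([2.0, 1.5, 1.0])
# Symmetric rank-k tensor T = sum_i w_i a_i ⊗ a_i ⊗ a_i.
T = np.einsum('r,ir,jr,kr->ijk', w, A, A, A)

# Symmetric power update: a ← T(a, a, I) / ||T(a, a, I)||.
a_hat = rng.standard_normal(d)
a_hat /= np.linalg.norm(a_hat)
for _ in range(50):
    a_hat = np.einsum('ijk,i,j->k', T, a_hat, a_hat)
    a_hat /= np.linalg.norm(a_hat)
```

With orthonormal components, the coordinates evolve as $c_i \propto w_i c_i^2$ per step, so the dominant component takes over at a doubly exponential rate.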
\paragraph{Extension to higher order tensors:}
We also provide the generalization of the tensor decomposition guarantees to higher order tensors. We state and prove the result for the tensor power iteration part in detail, while the generalization of the coordinate descent part (for removing the residual error) to higher order tensors can be argued by the same techniques we introduce in this paper.
For brevity, Algorithm \ref{algo:Power method form} and the local convergence guarantee in Lemma \ref{thm:local convergence-poweriteration} are stated for a $3$rd order tensor. The algorithm readily extends to higher order tensors to compute the corresponding CP decomposition.
Consider $p$-th order tensor $T \in \bigotimes^p {\mathbb R}^d$ with CP decomposition
\begin{align} \label{eqn:4th order CP decomp}
T = \sum_{i \in [k]} w_i \cdot a_{(1),i} \otimes a_{(2),i} \otimes \dotsb \otimes a_{(p),i},
\end{align}
where $a_{(r),i} \in \R^d$ is the $i$-th column of $r$-th component $A_{(r)} := \left[a_{(r),1} \ a_{(r),2} \ \dotsb \ a_{(r),k} \right] \in {\mathbb R}^{d \times k},$ for $r \in [p]$.
Algorithm \ref{algo:Power method form} can be extended to recover the components of above decomposition where update formula for the $p$-th mode is modified as
\begin{align} \label{eqn:asymmetric power update 4th}
\hat{a}_{(p)}^{(t+1)} = \frac{T \left( \hat{a}_{(1)}^{(t)}, \hat{a}_{(2)}^{(t)}, \dotsc, \hat{a}_{(p-1)}^{(t)}, I \right)}{\left\| T \left( \hat{a}_{(1)}^{(t)}, \hat{a}_{(2)}^{(t)}, \dotsc, \hat{a}_{(p-1)}^{(t)}, I \right) \right\|},
\end{align}
and similarly the other updates are changed.
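The $p$-th mode update above can be sketched as follows for a 4th order tensor: contract $T$ with the current estimates of the first $p-1$ modes, then normalize the resulting vector. Function and variable names here are ours, for illustration only.

```python
import numpy as np

def update_last_mode(T, estimates):
    """T(a_(1), ..., a_(p-1), I) / ||.||: contract the first p-1 modes of T."""
    out = T
    for v in estimates:
        # Each tensordot removes the current leading mode of `out`.
        out = np.tensordot(v, out, axes=([0], [0]))
    return out / np.linalg.norm(out)

rng = np.random.default_rng(3)
d = 5
T4 = rng.standard_normal((d, d, d, d))
a1, a2, a3 = (rng.standard_normal(d) for _ in range(3))
c_hat = update_last_mode(T4, [a1, a2, a3])
```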
Then, we have the following generalization of Lemma~\ref{thm:local convergence-poweriteration} to higher order tensors.
\begin{corollary}[Local convergence guarantee of the tensor power updates in Algorithm~\ref{algo:Power method form} for $p$-th order tensor] \label{thm:local convergence 4th}
Consider the same conditions and settings as in Lemma~\ref{thm:local convergence-poweriteration}, except that tensor $T$ is $p$-th order with CP decomposition in \eqref{eqn:4th order CP decomp}, where $p\geq 3$ is a constant. In addition, the bounds on $\gamma := \frac{w_{\max}}{w_{\min}}$ and $k$ are modified as
\begin{equation*}
\gamma = O \left( \min \left\{ d^{\frac{p-2}{2}}, \frac{d^{p/2}}{k} \right\} \right), \quad k = o \left( d^{\frac{p}{2}} \right).
\end{equation*}
Then, the outputs of tensor power iteration steps (output of Algorithm~\ref{algo:Power method form}) satisfy w.h.p.
\begin{equation*}
\dist \left( \hat{a}_{(r),j}, a_{(r),j} \right) \leq \tilde{O} \left( \frac{\psi}{w_{\min}} \right) + \tilde{O} \left( \gamma \sqrt{\frac{k}{d^{p-1}}} \right), \quad
\left| \hat{w}_j - w_j \right| \leq \tilde{O} \left( \psi \right) + \tilde{O} \left( w_{\max} \sqrt{\frac{k}{d^{p-1}}} \right),
\end{equation*}
for $j \in [k]$ and $r \in [p]$.
The number of iterations is $N = \Theta \left( \log \left( \frac{1}{\gamma \tilde{\epsilon}_R} \right) \right)$, where $\gamma := \frac{w_{\max}}{w_{\min}}$ and $\tilde{\epsilon}_R := \min \left\{ \frac{\psi}{w_{\min}}, \tilde{O} \left( \gamma \sqrt{k/d^{p-1}} \right) \right\}$.
\end{corollary}
\subsection{Global convergence guarantee when $k=O(d)$} \label{sec:global convergence}
Theorem \ref{thm:local convergence} provides local convergence guarantee given good initialization vectors.
In this section, we exploit the SVD-based initialization method in Procedure~\ref{algo:SVD init} to provide good initialization vectors when $k = O(d)$. This method uses the top singular vectors of random slices of the moment tensor as the initialization. Combining the theoretical guarantees of this initialization method (provided in Appendix \ref{sec:initialization}) with the local convergence guarantee in Theorem \ref{thm:local convergence}, we obtain the following global convergence result.
\paragraph{Settings of Algorithm in Theorem~\ref{thm:global convergence}:}
\begin{itemize}[itemsep=-1mm]
\item Number of iterations: $N = \Theta \left( \log \left( \frac{1}{\gamma \epsilon_R} \right) \right)$, where $\gamma := \frac{w_{\max}}{w_{\min}}$ and $\epsilon_R := \min \left\{ \frac{\psi}{w_{\min}}, \tilde{O} \left( \gamma \frac{\sqrt{k}}{d} \right) \right\}$.
\item The initialization in each run of Algorithm\ \ref{algo:Power method form} is performed by SVD-based technique proposed in Procedure~\ref{algo:SVD init}, with the number of initializations as
$$L \geq k^{\Omega \left( \gamma^4 \left( k/d \right)^2 \right)}.$$
\end{itemize}
\paragraph{Conditions for Theorem~\ref{thm:global convergence}:}
\begin{itemize}[itemsep=-1mm]
\item Rank-$k$ decomposition and perturbation conditions as\,\footnote{Note that the perturbation condition is stricter than the corresponding condition in the local convergence guarantee (Theorem~\ref{thm:local convergence}).}
$$
T = \sum_{i\in[k]} w_i \cdot a_i\otimes b_i\otimes c_i, \quad \psi := \|\Psi\| \le \frac{w_{\min} \sqrt{\log k}}{\alpha_0 \sqrt{d}},
$$
where $a_i,b_i,c_i, i \in [k],$ are uniformly i.i.d.\ drawn from the unit $d$-dimensional sphere ${\cal S}^{d-1}$, and $\alpha_0 >1$ is a constant.
\item Rank condition: $k = O(d)$, i.e., $k\le \beta d$ for arbitrary constant $\beta>1$.
\end{itemize}
\begin{theorem}[Global convergence guarantee of tensor decomposition algorithm when $k=O(d)$] \label{thm:global convergence}
Consider noisy rank-$k$ tensor $\hat{T} = T + \Psi$ as the input to the tensor decomposition algorithm, and assume the conditions and settings mentioned above hold.
Then, the same guarantees as in Theorem~\ref{thm:local convergence} hold.
\end{theorem}
See the proof in Appendix \ref{sec:convergence proof}.
Thus, we can efficiently recover the tensor decomposition, when the tensor is undercomplete or mildly overcomplete (i.e., $k\le \beta d$ for arbitrary constant $\beta>1$), by initializing the algorithm with a simple SVD-based technique.
The number of initialization trials $L$ is polynomial when $\gamma$ is a constant, and $k =O(d)$.
Note that the argument in Lemma~\ref{thm:local convergence-poweriteration} can be similarly adapted, leading to a global convergence guarantee for the tensor power iteration step.
\subsubsection*{Two undercomplete, and one overcomplete component}
Here, we apply the global convergence result to the regime of two undercomplete and one overcomplete components. This regime arises in supervised learning problems under a multi-view mixtures model employing the moment tensor ${\mathbb E}[x_1\otimes x_2\otimes y]$, where $x_i\in {\mathbb R}^{d_u}$ are multi-view high-dimensional features and $y\in {\mathbb R}^{d_o}$ is a low-dimensional label.
Since in the SVD initialization Procedure~\ref{algo:SVD init}, two components $\hat{a}^{(0)}$ and $\hat{b}^{(0)}$ are initialized through SVD, and the third component $\hat{c}^{(0)}$ is initialized through update formula \eqref{eqn:asymmetric power update}, we can generalize the global convergence result in Theorem \ref{thm:global convergence} to the setting where $A$, $B$ are undercomplete, and $C$ is overcomplete.
\begin{corollary} \label{corollary:TwoUnder OneOver}
Consider the same setting as in Theorem \ref{thm:global convergence}, but now in the regime of undercomplete components $A \in {\mathbb R}^{d_u \times k}$, $B \in {\mathbb R}^{d_u \times k}$, and an overcomplete component $C \in {\mathbb R}^{d_o \times k}$ such that $d_u \geq k \geq d_o$. In addition, in this case the bound on $\gamma := \frac{w_{\max}}{w_{\min}}$ is
$$
\gamma = O \left( \min \left\{ \sqrt{d_o}, \frac{d_u \sqrt{d_o}}{k} \right\} \right).
$$
Then, if $k=O(d_u)$ and $d_o \geq \polylog(k)$, the same convergence guarantee as in Theorem \ref{thm:global convergence} holds.
\end{corollary}
See the proof in Appendix \ref{sec:convergence proof}.
We observe that given undercomplete modes $A$ and $B$, mode $C$ can be arbitrarily overcomplete, and we can still provide global recovery of $A$, $B$, and $C$ by employing the SVD initialization procedure along modes $A$ and $B$.
\subsection{Proof outline}
The global convergence guarantee in Theorem \ref{thm:global convergence} is established by combining the local convergence result in Theorem \ref{thm:local convergence} and the SVD initialization result in Appendix \ref{sec:initialization}.
The local convergence result in Theorem~\ref{thm:local convergence} is derived by establishing error contraction in each iteration of the tensor power iteration and the coordinate descent for removing the residual error. Note that these convergence properties are broken down in Lemmata~\ref{thm:local convergence-poweriteration}~and~\ref{thm:coordinate descent}, respectively.
Since we assume generic factor matrices $A$, $B$ and $C$, we can utilize many useful properties such as incoherence, bounded spectral norm of the matrices $A$, $B$ and $C$, bounded tensor spectral norm, and so on. We list the precise set of deterministic conditions required to establish the local convergence result in Appendix~\ref{sec:assumptions}. Under these conditions, with a good initialization (i.e., $\max \{\dist(\hat{a},a_j), \dist(\hat{b},b_j)\} \leq \epsilon_0$ for small enough $\epsilon_0$), we show that the iterative update in \eqref{eqn:asymmetric power update} provides an estimate $\hat{c}$ with
$$\dist(\hat{c},c_j)< \tilde{O} \left( \frac{\psi}{w_{\min}} \right) + \tilde{O} \left( \gamma \frac{\sqrt{k}}{d} \right) + q \epsilon_0,$$
for some contraction factor $q<1/2$. The incoherence condition is crucial for establishing this result. See Appendix \ref{sec:convergence proof} for the complete proof.
The initialization argument for the SVD-based technique in Procedure~\ref{algo:SVD init} has two parts. The first part claims that by performing a sufficiently large number of initializations (large enough $L$), a gap condition is satisfied, meaning that we obtain a vector $\theta$ which is relatively close to $c_j$ compared to any $c_i, i \neq j$. This is a standard result for Gaussian vectors; e.g., see Lemma B.1 of~\cite{AnandkumarEtal:tensor12}. In the second part of the argument, we analyze the dominant singular vectors of $T(I,I,\theta)$, for a vector $\theta$ with a good relative gap, to obtain an error bound on the initialization vectors. This is obtained through standard matrix perturbation results (Weyl's and Wedin's theorems). See Appendix \ref{sec:initialization} for the complete proof.
\section{Experiments}
In this section, we provide synthetic experiments to evaluate the performance of Algorithm \ref{algo:Power method form}. Note that the tensor power update in Algorithm~\ref{algo:Power method form} is the main step of our algorithm, and it is the focus of these experiments.
A random true tensor $T$ is generated as follows. First, three components $A \in {\mathbb R}^{d \times k}$, $B \in {\mathbb R}^{d \times k}$, and $C \in {\mathbb R}^{d \times k}$ are randomly generated with i.i.d.\ standard Gaussian entries. Then, the columns of these matrices are normalized, and the normalization factors are aggregated as coefficients $w_j, j \in [k]$. Following the decomposition form in \eqref{eqn:tensordecomp}, tensor $T$ is built from these random components. For each new initialization, $\hat{a}^{(0)}$ and $\hat{b}^{(0)}$ are randomly generated with i.i.d.\ standard Gaussian entries, and then normalized\,\footnote{Drawing i.i.d.\ standard Gaussian entries and normalizing them is equivalent to drawing vectors uniformly from the $d$-dimensional unit sphere.}. Initialization vector $\hat{c}^{(0)}$ is generated through the update formula in \eqref{eqn:asymmetric power update}.
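The synthetic setup described above can be sketched as follows (function and variable names are ours): draw Gaussian factor matrices, normalize their columns, aggregate the normalization factors into the weights $w_j$, and build $T$.

```python
import numpy as np

def random_tensor(d, k, rng):
    """Random rank-k tensor with unit-norm factors and aggregated weights."""
    weights = np.ones(k)
    factors = []
    for _ in range(3):
        M = rng.standard_normal((d, k))
        norms = np.linalg.norm(M, axis=0)
        weights *= norms          # fold the column norms into w_j
        factors.append(M / norms)  # unit-norm columns
    A, B, C = factors
    # T = sum_j w_j a_j ⊗ b_j ⊗ c_j
    T = np.einsum('r,ir,jr,kr->ijk', weights, A, B, C)
    return T, factors, weights

T, factors, w = random_tensor(d=6, k=3, rng=np.random.default_rng(4))
```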
For each initialization $\tau \in [L]$, an alternative to running the algorithm for a fixed number of iterations $N$ is to stop the iterations based on a stopping criterion. In this experiment, we stop the iterations when the improvement in subsequent steps is small, i.e.,
$$
\max \left( \left\| \hat{a}_\tau^{(t)} - \hat{a}_\tau^{(t-1)} \right\|^2, \left\| \hat{b}_\tau^{(t)} - \hat{b}_\tau^{(t-1)} \right\|^2, \left\| \hat{c}_\tau^{(t)} - \hat{c}_\tau^{(t-1)} \right\|^2 \right) \leq t_{\Stopping},
$$
where $t_{\Stopping}$ is the stopping threshold. According to the bound in Theorem \ref{thm:local convergence}, we set
\begin{align} \label{eqn:stopping threshold}
t_{\Stopping} := t_1 (\log d)^2 \frac{\sqrt{k}}{d},
\end{align}
for some constant $t_1>0$.
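The stopping rule above can be sketched as follows (names and the default $t_1$ are illustrative; the natural logarithm is assumed): stop once the largest squared change across the three factor estimates falls below $t_{\Stopping} = t_1 (\log d)^2 \sqrt{k}/d$.

```python
import numpy as np

def should_stop(prev, curr, d, k, t1=1e-8):
    """True if max squared change over the factor estimates is below threshold."""
    t_stop = t1 * np.log(d) ** 2 * np.sqrt(k) / d
    return max(np.sum((c - p) ** 2) for p, c in zip(prev, curr)) <= t_stop
```

In the main loop, `prev` and `curr` would hold $(\hat{a}^{(t-1)}, \hat{b}^{(t-1)}, \hat{c}^{(t-1)})$ and $(\hat{a}^{(t)}, \hat{b}^{(t)}, \hat{c}^{(t)})$.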
\subsubsection*{Effect of size $d$ and $k$}
Algorithm \ref{algo:Power method form} is applied to random tensors with $d=1000$ and $k= \{ 10, 50, 100, 200, 500, 1000, 2000 \}$. The number of initializations is $L=2000$. The parameter $t_1$ in \eqref{eqn:stopping threshold} is fixed as $t_1 = 10^{-8}$. Figure \ref{fig:RecoveryRatio vs Init} and Table \ref{table:RecoveryRatio vs Init} illustrate the results, averaged over 10 random runs.
Figure \ref{fig:RecoveryRatio vs Init} depicts the ratio of recovered columns versus the number of initializations. Both horizontal and vertical axes are plotted in $\log$-scale. We observe that it is much easier to recover the columns in the undercomplete settings ($k \leq d$), while it becomes harder as $k$ increases. The linear start in Figure \ref{fig:RecoveryRatio vs Init} suggests that recovering the first batch of columns only needs a polynomial number of initializations. For highly undercomplete settings such as $d=1000$ and $k=10$, almost all columns are recovered in this linear phase. After this start, the concave part indicates that many more initializations are needed to recover the next batch of columns. As we proceed, it becomes harder to recover the remaining true columns, which is intuitive.
\begin{figure}\centering{
\begin{psfrags}
\psfrag{d eq 1000, k eq 10}[l]{\tiny $d\!=\!1000, k\!=\!10$}
\psfrag{d eq 1000, k eq 50}[l]{\tiny $d\!=\!1000, k\!=\!50$}
\psfrag{d eq 1000, k eq 100}[l]{\tiny $d\!=\!1000, k\!=\!100$}
\psfrag{d eq 1000, k eq 200}[l]{\tiny $d\!=\!1000, k=\!200$}
\psfrag{d eq 1000, k eq 500}[l]{\tiny $d\!=\!1000, k\!=\!500$}
\psfrag{d eq 1000, k eq 1000}[l]{\tiny $d\!=\!1000, k\!=\!1000$}
\psfrag{d eq 1000, k eq 2000}[l]{\tiny $d\!=\!1000, k\!=\!2000$}
\psfrag{d eq 1000, k eq 5000}[l]{\tiny $d\!=\!1000, k\!=\!5000$}
\psfrag{recovery rate of algorithm}[l]{\scriptsize recovery rate of algorithm}
\psfrag{number of initializations}[l]{\scriptsize number of initializations}
\psfrag{ratio of recovered columns}[l]{\scriptsize ratio of recovered columns}
\includegraphics[width=3.0in]{figures/RecoveryRatio_vs_Init_d1000_LogXLogY.eps}\end{psfrags}}
\caption{\small Ratio of recovered columns versus the number of initializations for $d=1000$, and $k= \{ 10, 50, 100, 200, 500, 1000, 2000 \}$. The number of initializations is $L=2000$. The stopping parameter is set to $t_1 = 10^{-8}$. The figure is an average over 10 random runs.}
\label{fig:RecoveryRatio vs Init}
\end{figure}
Table \ref{table:RecoveryRatio vs Init} reports further results from the experiments: the parameter $k$, the stopping threshold $t_{\Stopping}$, the average square error of the output, the average weight error, and the average number of iterations.
The output averages are over all initializations and random runs. The square error is given by
$$
\frac{1}{3} \left[ \left\| a_j - \hat{a} \right\|^2 + \left\| b_j - \hat{b} \right\|^2 + \left\| c_j - \hat{c} \right\|^2 \right],
$$
for the corresponding recovered $j$. The error in estimating the weights is defined as $|\hat{w} - w_j |^2/w_j^2$, which is the square relative error of the weight estimate. The number of iterations performed before stopping the algorithm is reported in the last column.
We observe that as $k$ increases, all of these quantities increase, which means we obtain less accurate estimates at higher computational cost. This shows that recovering overcomplete components is much harder. Note that by running the coordinate descent Algorithm~\ref{algo:coordinate-descent}, we can also remove the additional residual error left after the tensor power iteration step.
Similar results and observations as above are seen when $k$ is fixed and $d$ is changed.
Running experiments with SVD initialization instead of random initialization yields nearly the same recovery rates, but with a slightly smaller number of iterations. However, since the SVD computation is more expensive, in practice it is desirable to initialize with random vectors. Our theoretical results for random initialization appear highly pessimistic compared to the efficient recovery observed in our experiments, which suggests room for improving our theoretical guarantees under random initialization.
\begin{table}
\caption{\small Parameters and further outputs related to the results of Figure \ref{fig:RecoveryRatio vs Init}. Note that $d=1000$.} \label{table:RecoveryRatio vs Init}
\begin{center}{\small \begin{tabular}{c|c||c|c|c}
\hline
\multicolumn{2}{c||}{Parameters} & \multicolumn{3}{|c}{Outputs} \\
\hline
$k$ &
\begin{tabular}{c} $t_{\Stopping}$ \end{tabular} &
\begin{tabular}{c} avg. square \\ error \end{tabular} &
\begin{tabular}{c} avg. weight \\ error \end{tabular} &
\begin{tabular}{c} avg. \# of \\ iterations \end{tabular} \\
\hline \hline
10 & 1.51e-08 & 1.03e-05 & 9.75e-09 & 7.71 \\
50 & 3.37e-08 & 5.54e-05 & 6.69e-08 & 8.53 \\
100& 4.77e-08 & 1.08e-04 & 1.51e-07 & 8.81 \\
200 & 6.75e-08 & 2.07e-04 & 3.41e-07 & 9.09 \\
500 & 1.07e-07 & 5.09e-04 & 1.14e-06 & 9.52 \\
1000 & 1.51e-07 & 1.01e-03 & 3.40e-06 & 10.01 \\
2000 & 2.13e-07 & 2.00e-03 & 1.12e-05 & 10.69 \\
\hline
\end{tabular}}\end{center}
\end{table}
\subsubsection*{Acknowledgements}
We acknowledge detailed discussions with Sham Kakade and Boaz Barak. We thank Praneeth Netrapalli for discussions on alternating minimization. We also thank Sham Kakade, Boaz Barak, Jonathan Kelner, Gregory Valiant and Daniel Hsu for earlier discussions on the $2 \to p$ norm bound for random matrices, used in Lemma~\ref{lem:2 to p norm bound}. We also thank Niranjan U.N. for discussions on running experiments.
A. Anandkumar is supported in part by Microsoft Faculty Fellowship, NSF Career award CCF-$1254106$, NSF Award CCF-$1219234$, and ARO YIP Award W$911$NF-$13$-$1$-$0084$. M. Janzamin is supported by NSF Award CCF-1219234, ARO Award W911NF-12-1-0404 and ARO YIP Award W911NF-13-1-0084.
2,877,628,091,059 | arxiv | \section{Introduction}
Theoretical calculations on single molecule conduction have typically employed coherent
Non-Equilibrium Green's function (NEGF) theories (``Landauer limit") \cite{rdatta1,mwl1} coupled with Self Consistent Fields (SCF) to describe charging effects.
Though fairly successful in describing many aspects of single molecule conduction \cite{diven,damle,jt,rmrs,rasymm}, there have been important discrepancies
between theory and experiment \cite{rreed}. The most common ones include
poor match between theoretical and experimental current levels and zero-bias currents \cite{diven,damle,rreed}.
It was also pointed out in \cite{rbhasko} that a whole class of experimental I-V's show features,
which cannot be captured even qualitatively using an SCF theory.
Charging energies of short molecules ($\sim 3$~eV for benzene) are often larger than their electrode coupling
($< 0.2 $ eV for benzene di-thiol on gold),
and thus could be in the Coulomb Blockade (CB) regime where single electron charging effects could dominate.
It is thus debatable whether it is better described as a quantum wire in the SCF regime, or as a quantum dot array
in the CB regime. Nevertheless, the wisdom of SCF approaches must be
scrutinized especially for conduction through shorter molecules. The purpose of this paper
is to present a Coulomb Blockade approach to
molecular conduction using a benzene molecule as prototype, and establish it
as a different viewpoint from the conventional NEGF-SCF treatment. Furthermore features obtained via
the CB approach can semi-quantitatively explain several
non-trivial features commonly observed \cite{jpark,rweber1,rweber2,pnas,rscott} in experiments.
\begin{figure}[ht]
\hskip 0.7cm\centerline{\epsfxsize=6.0in\epsfbox{CB_fig1.ps}}
\caption{Our system is a benzene molecule coupled to metallic contacts.
Single molecule transport calculations typically employ the NEGF-SCF prescription.
The Block diagram depicts the basic scheme. While quantities such as the Hamiltonian are
in the one-electron space of dimensions $N \times N$, N being the number of
basis functions. Our Coulomb Blockade description
involves the full many electron Fock space of dimensions $2^N \times 2^N$ as shown in b)
using a single spin-degenerate level as an example. The use of full many-electron space
captures the correlations exactly within the framework of the given
$N \times N$ one-electron Hamiltonian.}
\label{fig_sim}
\end{figure}
It is common to distinguish between two regimes of transport: a) an SCF regime where
the dominant
energy scale is the contact coupling, allowing for fractional charge transfer through the system;
and b) a Coulomb Blockade (CB) regime where the dominant energy scale is the single electron charging,
leading to integral charge transfer. In the SCF regime the description of transport via
non-interacting single particle energy levels can be justified. In this limit, it is common to
use the SCF-NEGF scheme that takes charging effects into account.
Here the molecular Hamiltonian is described by a set of single particle levels,
which are coupled to reservoirs through their self energies. The electron interactions are taken into account using SCF schemes as shown in the block diagram in Fig. 1a.
All quantities in the NEGF formalism are matrices of dimension $N \times N$, $N$
being the number of single particle basis functions used. This allows for an accurate
description of quantum chemistry of both the isolated molecule and its bonding to the contacts \cite{liang}.
In the CB regime with weak contact coupling, charging effects dominate, and the use of single
particle basis sets may be questionable. In such cases, it may be preferential to employ a multi-electron description or
Configuration Interaction (CI) where feasible. The central quantities in this CI method are now matrices of
dimension $2^N \times 2^N$, thereby accounting for strong interaction accurately. The weakly coupled contacts are treated perturbatively
using transition rates between states differing by a single electron [5].
It is interesting to note that most theoretical efforts in molecular conduction have
been in the SCF regime, while energy scales favor the CB regime. Our paper is thus a concrete attempt
towards CI based transport.
This paper is organized as follows: we begin by defining an appropriate many-body Hamiltonian for
Benzene whose parameters are benchmarked based on well-established mean-field techniques.
We then illustrate how a CB treatment is conceptually different from the standard SCF treatment in the weak coupling limit, not only
under non-equilibrium conditions, but even under equilibrium conditions. We then point out the importance of
inclusion of excited states in transport, that naturally arise
within our CI approach. The progressive access of these excited states leads to transport signatures under various
non-equilibrium conditions. Before we conclude, a few CB fits to experimental data are presented in
support of our analysis.
\section{The Model Hamiltonian and Equilibrium Properties}
An appropriate model Hamiltonian is usually described with an adequate basis set.
In this paper, we use a tight binding Hamiltonian with one $p_z$ orbital per site to describe our CI based scheme.
Although this generates just a minimal $6 \times 6$ single particle basis set, its many-electron space is $2^{12} \times 2^{12}$ in size.
Besides, our objective here is to describe the CI approach for transport and compare it with
the SCF approach for the same Hamiltonian. Better quantum chemical descriptions within the CI approach can be achieved by starting
with a reduced but more accurate one-particle Hamiltonian, but we leave these for future work.
One begins with the model Hamiltonian in second quantized notation:
\begin{eqnarray}
\hat{H} &=& \sum_{\alpha} \epsilon_{\alpha} n_{\alpha}
+ \sum_{\alpha \neq \beta} t_{\alpha \beta} c_{\alpha}^{\dagger} {c_\beta} \nonumber\\
&+& \sum_{\alpha,\sigma} U_{\alpha \alpha} n_{\alpha\sigma}
n_{\alpha\bar{\sigma}} + \frac{1}{2} \sum_{\alpha \neq \beta} U_{\alpha \beta}
n_{\alpha} n_{\beta} ,
\label{eq:mbh}
\end{eqnarray}
where $\alpha,\beta$
correspond to the orbital indices of the frozen $p_z$ orbitals for carbon sites on the Benzene
ring, and $\sigma$, $\bar{\sigma}$ represent a particular spin and its reverse.
In connection to its equilibrium configuration, it is more convenient to work with onsite
energies $\tilde{\epsilon}$ defined as:
\begin{equation}
\tilde{\epsilon}_{\alpha}
= \epsilon_{\alpha} + U_{\alpha \alpha} \langle n_{\alpha \bar{\sigma}}\rangle + \frac{1}{2}
\sum_{\alpha \neq \beta} U_{\alpha \beta} \langle n_{\beta} \rangle ,
\label{eq:mf}
\end{equation}
where $\tilde{\epsilon}_{\alpha}$'s denote the
mean-field on-site energies in the equilibrium charge neutral configuration of the molecule
and $\langle n\rangle$ represents its mean-field value.
Now the model Hamiltonian is simply re-written as:
\begin{eqnarray}
\hat{H} &=& \sum_{\alpha} \tilde{\epsilon}_{\alpha}
n_{\alpha} + \sum_{\alpha \neq \beta} t_{\alpha \beta} c_{\alpha}^{\dagger}
{c_{\beta}} \nonumber\\&+& \sum_{\alpha,\sigma} U_{\alpha \alpha}
(n_{\alpha\sigma} - \langle n_{\alpha \sigma}\rangle) (n_{\alpha \bar{\sigma}} - \langle n_{\alpha
\bar{\sigma}}\rangle) \nonumber\\&+& \frac{1}{2} \sum_{\alpha \neq \beta} U_{\alpha
\beta} (n_{\alpha}-\langle n_{\alpha}\rangle) ( n_{\beta} -
\langle n_{\beta}\rangle).
\label{eq:mfh}
\end{eqnarray}
The mean-field Hamiltonian derived from the above Hamiltonian is:
\begin{equation}
\hat{h}=\sum_{\alpha} \tilde{\epsilon}_{\alpha}
n_{\alpha} + \sum_{\alpha \neq \beta} t_{\alpha \beta} c_{\alpha}^{\dagger}
{c_{\beta}} + U^{SCF}_{\alpha \alpha},
\label{rscf1}
\end{equation}
where
\begin{equation}
U^{SCF}_{\alpha \alpha}= U_{\alpha \alpha}(n_{\alpha} -
\frac{\langle n_{\alpha}\rangle}{2}) + \frac{1}{2} \sum_{\alpha \neq \beta} U_{\alpha \beta}
(n_{\beta} - \langle n_{\beta}\rangle)
\label{eq:rscf}
\end{equation}
is the self-consistent field; the calculation of $\langle n_\alpha\rangle$ is performed self-consistently with
the one-electron Hamiltonian $\hat{h}$. In the following sections, we derive appropriate parameters
$\tilde{\epsilon}$, $t$ and $U$ for benzene, to describe the two different approaches i.e., the CI (Eq.~\ref{eq:mfh}) and the
SCF approaches (Eq.~\ref{eq:rscf}), and compare them in parallel in the case of both equilibrium and
non-equilibrium conditions.
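To make the distinction between the $N \times N$ one-electron space and the $2^N \times 2^N$ Fock space concrete, the following sketch builds the many-body Hamiltonian of Eq.~\ref{eq:mbh} for a toy two-site, spin-degenerate model (4 spin-orbitals, so a $16 \times 16$ Fock space) and diagonalizes it exactly. The parameter values are illustrative only, not the benzene fit derived below:

```python
import numpy as np

# Toy illustration of the Fock-space construction behind the model
# Hamiltonian: a two-site, spin-degenerate model with 4 spin-orbitals,
# so the many-electron space is 2^4 = 16 dimensional. Parameter values
# are illustrative, not the benzene fit derived in the text.
eps, t, U = -4.42, -2.0, 4.0   # onsite energy, hopping, onsite charging (eV)
norb = 4                        # spin-orbitals: (site0,up),(site0,dn),(site1,up),(site1,dn)

def sign(state, p):
    """Fermionic sign from orbitals below p that are occupied in |state>."""
    return (-1) ** bin(state & ((1 << p) - 1)).count("1")

def c_dag(state, p):
    """Apply c†_p to a Fock state (bitmask); returns (sign, new state) or None."""
    return None if state >> p & 1 else (sign(state, p), state | 1 << p)

def c_(state, p):
    """Apply c_p; returns (sign, new state) or None."""
    return (sign(state, p), state & ~(1 << p)) if state >> p & 1 else None

dim = 2 ** norb
H = np.zeros((dim, dim))
for s in range(dim):
    n = [s >> p & 1 for p in range(norb)]
    # diagonal part: onsite energies plus Hubbard U per doubly occupied site
    H[s, s] = eps * sum(n) + U * (n[0] * n[1] + n[2] * n[3])
    # hopping t c†_p c_q between like spins on the two sites
    for p, q in [(0, 2), (2, 0), (1, 3), (3, 1)]:
        r1 = c_(s, q)
        if r1 is None:
            continue
        sg1, s1 = r1
        r2 = c_dag(s1, p)
        if r2 is None:
            continue
        sg2, s2 = r2
        H[s2, s] += t * sg1 * sg2

E = np.linalg.eigvalsh(H)
print("ground-state energy:", round(E[0], 4))  # Hubbard-dimer singlet, ≈ -11.3121 eV
```

For benzene the same construction runs over $N=12$ spin-orbitals, producing the $2^{12} \times 2^{12}$ Fock space quoted in Fig.~1b.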
\begin{figure}
\centerline{\epsfig{figure=CB_fig2.ps,width=5in,height=5in}}
\caption{Model Hamiltonian and Equilibrium Properties. (a) Selection of on-site $\epsilon_{\alpha}$
and hopping parameter $t_{\alpha \beta}$. Comparison of our model Hamiltonian levels
with frontier LDA/6-31g levels. Parameters are fixed based on a close match between
the doubly degenerate HOMO and LUMO levels and singly degenerate HOMO-1, LUMO+1 levels. (b)
Charging parameter matched according to a consistent Restricted SCF based $N-\mu$ plot (shown as a continuous line).
The total energy based many-body calculation (shown as a dotted line), as well as the RSCF calculation, is consistent
with the Gaussian based calculation \cite{rrak}. (c) The one-particle spectral function shows
peaks at the energy levels of the single-particle Hamiltonian $\hat{h}$.
(d) The Lehmann spectral function evaluated via the many-electron spectrum yields many
more spectral peaks corresponding to removal (addition) of electrons
from the neutral ground state into various charge configurations (excitations) of singly charged species.
Notice that the IP-EA and HOMO-LUMO gaps are equal to the corresponding charge-stability
plateaus $N=N_0$ for many-body and SCF calculations shown in b).}
\label{fig:ben}
\end{figure}
Fig.2(a) shows the selection of mean field on-site energies $\tilde{\epsilon}_{\alpha}$ and hopping
parameter $t_{\alpha \beta}$, by comparing the eigen-energies of our model SCF Hamiltonian (Eq.~\ref{rscf1})
with the frontier orbitals within the local density
approximation (LDA) in the 6-31g basis set, shown in the left and right sections of Fig. 2a respectively.
The carbon-carbon hopping term $t_{\alpha \beta}=-2.0$~eV
has been taken from already tabulated data \cite{ralbert}, which yields $\tilde{\epsilon}_{\alpha}=-4.42$~eV for the above fit.
Note that the Highest Occupied Molecular Orbital (HOMO) levels and Lowest Unoccupied Molecular Orbital (LUMO)
levels are doubly degenerate in our model tight binding Hamiltonian as well as in the LDA basis set.
\subsection{Equilibrium Electron Number vs. Chemical Potential: Choosing Charging Parameters}
A distinguishing aspect of CB is the abrupt charge addition as opposed to a gradual one in
an SCF calculation shown in Fig.2(b). This fact is readily seen in the figure,
in which the SCF and CB calculations are presented using the one-electron and many-electron
Hamiltonians described by Eq.~\ref{rscf1} and Eq.~\ref{eq:mfh} respectively. In the weak coupling limit,
the CB result is more physical, and the SCF calculation does not do justice to this integer charge transfer.
However, schemes such as self interaction correction could be introduced within the one-electron Hamiltonian
\cite{rpal,ssan,pals} to incorporate this. But it turns out that even such schemes may not capture non-equilibrium correctly \cite{rbhasko}. It
is however expected that as coupling strength to the electrodes is increased,
the electron transfer resembles the SCF result. There is as yet no clear formalism that addresses \cite{rmat}
this crossover, even in the equilibrium case although the two
opposite limits namely SCF and CB are well understood. While the two limits can individually be handled
by perturbative expansion in the small parameters $U/\Gamma$ and $\Gamma/U$, $U$ being the single electron
charging energy and $\Gamma$ being the level broadening, the intermediate regime is hard to handle owing to
the non-existence of a suitable small parameter or `fine structure constant' for transport.
The plateau in the charge addition diagram $N$ versus $\mu$, in which the electron number is stabilized,
spans the HOMO-LUMO gap in the SCF case and the Ionization potential-Electron Affinity (IP-EA gap) in the
CB calculation. The IP (EA) is defined as the energy at which an electron can be removed from (added to) the neutral
molecule carrying $N_0$ electrons. This occurs when the chemical potential $\mu$ equals the energy difference between
ground states differing by an electron number $\mu=E_G^{N_0}-E_G^{N_0-1}$ for IP,
and $\mu=E_G^{N_0+1}-E_G^{N_0}$ for EA. The situation is however different in the case of SCF.
Here the charge transfer dictated by a self consistent potential (Eq.~\ref{eq:rscf}) is gradual,
with two electrons transferred adiabatically over a span of $2U$.
This is usually referred to as the restricted SCF (RSCF mentioned in Fig. 2b). Most SCF calculations
in the literature \cite{rfulde} employ different variants of this scheme. There are also
spin unrestricted SCF techniques \cite{ssan,rpal,pals} which take into account the
abrupt charge transfer in a weakly coupled system, due to self-interaction correction,
but it is not yet clear whether they work out of equilibrium \cite{rbhasko,rbhasko2}.
One expects that the IP occurs roughly midway during the gradual charge removal in the RSCF scheme \cite{rmat,rfulde}.
We use this fact to estimate our charging parameters $U_{\alpha \beta}$, with the aid of a Gaussian-98 based
calculation for the equilibrium electron number v/s chemical potential ($N-\mu$),
published elsewhere \cite{rrak}. The calculation corresponding to the equilibrium number of
$N_0=42$ maps onto our model calculations for $N_0=6$, focussing thus on the frontier
orbitals and ignoring the inner core that is frozen in our estimate
for $\tilde{\epsilon}_\alpha$.
By implementing a Restricted SCF scheme using Eq.~\ref{eq:rscf} within our
model Hamiltonian, we obtain a close match of the $N-\mu$ plots in the range between $N_0$
and $N_0-1$ in comparison with the Gaussian-98 calculation in \cite{rrak}. Using an estimate of the onsite
charging $U_{\alpha \alpha}$, we calculate $U_{\alpha \beta}$ using the Mataga--Nishimoto
approximation:
\begin{equation}
U_{\alpha \beta}=\frac{e^2}{4\pi\epsilon_0
r_{\alpha \beta} +\frac{2e^2}{U_{\alpha \alpha}+U_{\beta \beta}}},
\label{eq:ppp}
\end{equation}
where $r_{\alpha \beta}$ is the inter carbon distance in benzene. In
each case, evaluation of $n_{\alpha}$ is done self-consistently using an equilibrium value $N_0=6$, and
$\langle n_{\alpha \sigma}\rangle=\frac{1}{2}$.
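The Mataga--Nishimoto interpolation of Eq.~\ref{eq:ppp} can be evaluated numerically as in the sketch below. The onsite value $U_{\alpha\alpha}$ used here is an assumed placeholder, not the fitted benzene parameter; only the C--C bond length is the standard value:

```python
import numpy as np

# Sketch of the Mataga-Nishimoto interpolation:
#   U_ab = e^2 / (4*pi*eps0*r_ab + 2 e^2 / (U_aa + U_bb)),
# written with k = e^2/(4*pi*eps0) = 1.440 eV*nm. The onsite charging
# U_onsite below is an assumed placeholder, not the fitted value.
k = 1.440          # e^2/(4 pi eps0) in eV*nm
U_onsite = 10.0    # assumed onsite charging (eV)
d = 0.139          # C-C bond length in benzene (nm); ring radius = d

# six carbons on a regular hexagon (circumradius equals the side length)
angles = np.arange(6) * np.pi / 3
xy = d * np.column_stack([np.cos(angles), np.sin(angles)])

U = np.empty((6, 6))
for a in range(6):
    for b in range(6):
        if a == b:
            U[a, b] = U_onsite
        else:
            r = np.linalg.norm(xy[a] - xy[b])
            U[a, b] = k / (r + 2 * k / (U_onsite + U_onsite))
print(np.round(U, 2))
```

Note the two limits built into the formula: $U_{\alpha\beta} \to (U_{\alpha\alpha}+U_{\beta\beta})/2$ as $r_{\alpha\beta} \to 0$, and the bare Coulomb tail $e^2/4\pi\epsilon_0 r$ at large separation.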
Using exact eigen-energies of the many-electron Hamiltonian Eq.~\ref{eq:mfh} with the above parameters,
an $N-\mu$ calculation using these total energies (shown dotted red in Fig.2b)
is in excellent agreement with respect to Gaussian calculations in \cite{rrak}.
Note that the $\mu=IP$ in Fig.2b occurs midway between $N=N_0$ and $N=N_0-2$ in the RSCF charging diagram.
It is worth mentioning that the many-body calculation presented
in this figure takes all correlation energies into account and is the exact ground state
energy within our defined model Hamiltonian.
\subsection{Equilibrium Spectral function}
Conduction through molecules via molecular orbitals is well understood in the SCF picture \cite{rfer}.
In the strongly coupled regime (most appropriate for an SCF treatment), fractional
charge transfer occurs, and Density of States (DOS) is evaluated at
equilibrium \cite{rdatta1} in order to capture the effect of the strong coupling
with contact. An interplay of molecular DOS and charging treated self-consistently
determines the non-equilibrium response (current-voltage or I-V characteristics).
The density of states calculated from the one-electron Green's function \cite{rdatta1}
in Fig. 2c shows peaks at the single electron eigen spectrum.
As the coupling to electrodes gets stronger, the single electron DOS will show signatures and
artifacts of contact bondings \cite{liang,rfer}.
In the weak coupling (CB) limit however, integer charge addition is favored, and
transitions between states that differ by a single electron appear as spectral signatures \cite{rralph}. At
equilibrium, it is convenient to introduce the {\it{Ground State Spectral
Function}} by defining the Green's function in the Lehmann
representation \cite{rfulde}:
\begin{eqnarray}
G_{\alpha \beta}(E) &=& \sum_j\frac{\langle N,0|c_{\alpha}|N+1,j\rangle\langle N+1,j|c^\dagger_{\beta}|N,0\rangle}{E+i0^+ - (E^{N+1}_j -E^N_0)} \nonumber\\
&+& \sum_j\frac{\langle N,0|c^{\dagger}_{\beta}|N-1,j\rangle \langle N-1,j|c^{}_{\alpha}|N,0\rangle}{E+i0^+ - (E^N_0 - E^{N-1}_j)} \nonumber\\
A_{\alpha \beta}(E)&=& i[G(E)-G^\dagger (E)]
\label{eq:leh}
\end{eqnarray}
where $\alpha, \beta$ correspond to the orbital index,
which in our case are the sites of the benzene molecule, and $| N, i \rangle$
denotes the $i^{th}$ excited state of
a charge configuration of $N$ electrons. The poles of this spectral function represent various
transition energies for addition (removal) of electrons from the neutral ground
state:
\begin{eqnarray}
\epsilon^{Nr}_{0j}&=& E^{N}_{0}-E^{N-1}_{j} \nonumber\\
\epsilon^{Na}_{0j}&=& E^{N+1}_{j}-E^{N}_{0}
\label{eq:sp0}
\end{eqnarray}
whose spectral strengths are given by:
\begin{eqnarray}
{\tau}^{Nr}_{0j,\alpha \beta} &=& \langle
N,0|c^{\dagger}_{\beta}|N-1,j\rangle\langle N-1,j|c^{}_{\alpha}|N,0\rangle \nonumber \\
{\tau}^{Na}_{0j,\alpha \beta} &=& \langle
N,0|c_{\alpha}|N+1,j\rangle\langle N+1,j|c^\dagger_{\beta}|N,0\rangle
\label{eq:w0}
\end{eqnarray}
The first (removal) term, read right to left, removes an electron from orbital $\alpha$, taking the system from the N electron
ground state to the $j^{th}$ (N$-$1) electron state, and then adds it back to orbital $\beta$, returning the system to the ground state.
The second (addition) term adds an electron to orbital $\beta$, taking the system to the $j^{th}$ (N+1) electron state,
and then removes it from orbital $\alpha$.
One can re-write the expression in terms of {\it{diagonal terms only}}, setting $\beta=\alpha$ and keeping a single orbital index, in a more convenient form as:
\begin{equation}
A^N_{0\alpha}(E)=\sum_{j}\left[{\tau}^{Nr}_{0
j \alpha}\delta(E-\epsilon^{Nr}_{0j})+{\tau}^{Na}_{0j
\alpha}\delta(E-\epsilon^{Na}_{0j}) \right]
\label{eq:spd}
\end{equation}
The spectral function shown in Fig. 2d represents the removal and addition strength of various transitions at
their energies given by Eq.~\ref{eq:sp0}. Notice that there are numerous peaks in this spectrum calculated
from the many-electron transitions, due to the possible transfer to
various excited states of charged species shown in Fig. 2d. It is
important to note that although each transition has a non-trivial spectral weight given by Eq.~\ref{eq:w0},
{\it{they satisfy an overall sum rule}} that amounts to the total electron number in the system. We will
see in
subsequent sections
that these transitions involving excited states show up directly as transport signatures frequently
observed in experiments.
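The structure of Eqs.~\ref{eq:leh}--\ref{eq:spd} can be made concrete with the single spin-degenerate level of Fig.~1b, where the matrix elements can be worked out by hand. The energies below are illustrative placeholders:

```python
# Toy Lehmann spectral function for a single spin-degenerate level
# (the example of Fig. 1b): Fock states |0>, |up>, |dn>, |updn> with
# energies 0, eps, eps, 2*eps + U. Values of eps and U are illustrative.
eps, U = -2.0, 6.0
E = {"0": 0.0, "up": eps, "dn": eps, "updn": 2 * eps + U}

# Take the N=1 ground state |up> (the |dn> member of the doublet gives
# the spin-reversed result). Matrix elements worked out by hand:
#   c_up|up> = |0>,  c_dn|up> = 0,  c†_up|up> = 0,  c†_dn|up> = |updn>
peaks = {
    "up": [(E["up"] - E["0"],     1.0, "removal")],   # pole at eps
    "dn": [(E["updn"] - E["up"],  1.0, "addition")],  # pole at eps + U
}
for orb, pk in peaks.items():
    for energy, weight, kind in pk:
        print(f"A_{orb}: {kind} peak at E = {energy:+.1f} eV, weight {weight}")

# sum rule check: total removal weight equals the electron number <N> = 1
total_removal = sum(w for pk in peaks.values() for _, w, kind in pk if kind == "removal")
print("total removal weight:", total_removal)
```

The removal pole sits at $E^N_0 - E^{N-1}_0 = \epsilon$ and the addition pole at $E^{N+1}_0 - E^N_0 = \epsilon + U$, so the "gap" in the spectral function is the charging energy $U$, not a one-electron level spacing.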
\section{Non-Equilibrium}
This section is devoted to the various unique transport signatures in the weak coupling (CB) regime, many
of
which have experimental significance. We elaborate on how various excited states get accessed as a result
of contacts maintained at different potentials (non-equilibrium), and what the SCF theory completely
misses in this regime. Throughout this paper we describe the electrodes (contacts)
using corresponding electrochemical potentials $\mu_L$ and $\mu_R$ and coupling strengths $\gamma_L$ and $\gamma_R$.
\subsection{Coulomb Blockade approach: Rate equation model}
Transport in the CB limit \cite{rlikharev,rralph,rhettler} is often modeled with a rate equation approach,
in which the steady state addition and removal of electrons is described with a rate equation for the
nonequilibrium probability $P^N_i$ of each N electron many-body state
$|N,i\rangle$ with total energy $E^N_i$. The master equation
involves transition rates $R_{(N,i)\rightarrow(N\pm 1,j)}$ between states differing by a
single electron, leading to a set of independent equations defined by the size of the
Fock space \cite{rralph}
\begin{equation}
\frac{dP^N_i}{dt} =
-\sum_j\left[R_{(N,i)\rightarrow(N \pm 1,j)}P^N_i -R_{(N\pm
1,j)\rightarrow(N,i)}P^{N\pm 1}_j\right]
\label{ebeenakker}
\end{equation}
along with the normalization equation $\sum_{i,N}P^N_i = 1$. We define rate constants
\begin{eqnarray}
\Gamma_{ij\alpha}^{Nr} &=& \gamma_\alpha|{\tau}^{Nr}_{ij\alpha}|^2\nonumber\\
\Gamma_{ij\alpha}^{Na} &=& \gamma_\alpha|{\tau}^{Na}_{ij\alpha}|^2,
\end{eqnarray}
where $\gamma_{\alpha}$
represents lead molecule broadening or coupling via the end atoms, described
using Fermi's Golden rule. These constants represent
the partial probability for the electron to be injected by the end atom
into a given many-electron ground or excited state. The transition rates are
now given by
\begin{eqnarray}
R_{(N,i)\rightarrow(N-1,j)} &=& \sum_{\alpha=L,R}\Gamma_{ij\alpha}^{Nr}\left[1-f(\epsilon^{Nr}_{ij}-\mu_\alpha)\right]\nonumber\\
R_{(N-1,j)\rightarrow(N,i)} &=&\sum_{\alpha=L,R}\Gamma_{ij\alpha}^{Nr}f(\epsilon^{Nr}_{ij}-\mu_\alpha).
\end{eqnarray}
for the removal levels $(N,i \leftrightarrow N-1,j)$, and replacing $(r
\rightarrow a, f \rightarrow1-f)$ for the addition levels $(N,i \leftrightarrow N+1,j)$. $\mu_\alpha$ are the contact electrochemical potentials, $f$ is the
corresponding Fermi function, with single particle removal and addition energies
$\epsilon^{Nr}_{ij} = E^N_i - E^{N -1}_j$, and $\epsilon^{Na}_{ij} = E^{N+1}_j -
E^N_i$. Finally, the steady-state solution to Eq.(\ref{ebeenakker}) is used to
get the left terminal current as
\begin{equation}I =
\pm\frac{e}{\hbar}\sum_{ij}\left[R^L_{(N,i)\rightarrow(N\pm 1,j)}P^N_i-
R^L_{(N\pm 1, j)\rightarrow(N,i)}P^{N\pm 1}_j \right]
\end{equation}
where states
corresponding to a removal of electrons by the left electrode involve a negative
sign.
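A minimal numerical solution of the steady state of Eq.~\ref{ebeenakker} can be sketched for the single spin-degenerate level, with illustrative parameters chosen so the level sits inside the bias window while the charging energy $U$ pushes the doubly occupied state far outside it:

```python
import numpy as np

# Minimal steady-state solution of the master equation for a single
# spin-degenerate level (Fock states 0, up, dn, 2). All parameters are
# illustrative: the level lies inside the bias window, while U pushes
# the doubly occupied state far outside it.
eps, U = 0.0, 4.0
gL = gR = 1.0               # symmetric bare rates (units absorbed)
kT = 0.025                  # eV
muL, muR = 0.5, -0.5        # symmetric bias window around E_F = 0

f = lambda x: 1.0 / (1.0 + np.exp(x / kT))

states = ["0", "up", "dn", "2"]
# (initial, final) pairs differing by one added electron, and their energies
add = {("0", "up"): eps, ("0", "dn"): eps,
       ("up", "2"): eps + U, ("dn", "2"): eps + U}

dim = len(states)
W = np.zeros((dim, dim))    # W[j, i] = rate from state i to state j
for (lo, hi), e in add.items():
    i, j = states.index(lo), states.index(hi)
    W[j, i] += gL * f(e - muL) + gR * f(e - muR)               # add an electron
    W[i, j] += gL * (1 - f(e - muL)) + gR * (1 - f(e - muR))   # remove it
W -= np.diag(W.sum(axis=0))  # probability conservation

# steady state: W P = 0 together with the normalization sum(P) = 1
A = np.vstack([W, np.ones(dim)])
P, *_ = np.linalg.lstsq(A, np.r_[np.zeros(dim), 1.0], rcond=None)

# left-terminal current (in units of e/hbar times the bare rate)
I = sum(gL * f(e - muL) * P[states.index(lo)]
        - gL * (1 - f(e - muL)) * P[states.index(hi)]
        for (lo, hi), e in add.items())
print("occupations:", np.round(P, 3), " I_L ≈", round(I, 3))
```

With these parameters the occupations come out $\approx (1/3, 1/3, 1/3, 0)$ and $I \approx 2/3$ in the chosen units, whereas setting $U=0$ in the same code gives $I \approx 1$: the fractional current plateau is a genuinely many-body signature of blocked double occupancy.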
\subsection{STM Limit:}
We briefly elucidate the relationship between spectral functions defined earlier and STM conduction spectra.
Electrical conduction depends on the measurement geometry \cite{rfer} and charging determined by
{\it{capacitive}} voltage-division ratio $\eta$ between leads in the Laplace solution as
opposed to the
{\it{resistive}}
voltage-division ratio $\gamma = \gamma_R/\gamma_L$ which determines the extent
to which the levels are filled or emptied by the leads. The source and
drain potentials are then given by $\mu_L = E_F + \eta V_d$ and $\mu_R = E_F -(1-\eta)V_d$.
Consider a simple picture in which one contact is very weakly coupled ($\gamma \ll 1, \eta=0$),
equivalent to the molecule being in equilibrium with left contact $\mu_L$. In the $\eta=0$
limit the molecular energy levels are pinned to this contact implying that for a positive voltage $\mu_R < \mu_L$,
and $\mu_L$ remains at the equilibrium position. This picture is analogous to STM {\it{shell tunneling}} experiments \cite{rbanin}, in which the weakly
coupled STM tip acts as a voltage probe, thereby generating the
single particle spectrum, with the molecule/dot held in equilibrium with the more
strongly coupled contact, in this case the substrate.
\begin{itemize}
\item {\it{Ground State Spectral function}}: With the left contact more strongly coupled,
the right contact acts as a voltage probe, such as an STM tip, which can add electrons to or withdraw them from the
dot at energies corresponding to the addition or removal energies defined in Eq.~\ref{eq:spl}. The
stronger coupling to the left contact ensures that an electron is added or removed as soon as the tip removes
or adds an electron, thus maintaining overall charge neutrality. In this case a
conductance spectrum proportional to the equilibrium spectral function is
obtained, as shown in Fig.3b.
\begin{figure}
\centerline{\epsfig{figure=CB_fig3.ps,height=4.5in,width=6in}}
\caption{STM limit - mapping spectral signatures: a) Removal Spectral functions of ground state $A^{Nr}_{0,L/R}(E)$ (continuous) and first excited state $A^N_{1,\alpha}(E)$ reproduced in the STM spectra in b). The STM spectra can also show
signatures of charge neutral excited states shown dotted in a),
depending on the position of the equilibrium chemical potential (see text). c) Simple schematic depicting the interplay of $A^{Nr}_{0,L/R}(E)$ and $A^N_{1,L/R}(E)$ resulting in satellite peaks S1 and S2 in conduction spectra of lower right half. d) Conductance spectra reproducing features of both ground and excited state spectral functions $A^{Nr}_{0,L/R}(E)$ and $A^N_{1,\alpha}(E)$. }
\label{fig:spec}
\end{figure}
\item {\it{Excited Spectral Functions}}: In the previous case, the chemical
potential of the left contact is fixed above the transition level
$\epsilon^{Nr}_{00}$ but below $\epsilon^{Nr}_{10}$, thus maintaining the molecule's charge neutrality in its
ground state (i.e., $| N,0 \rangle$), and hence only the Ground State spectral
signature $A^{N}_{0,j}(E)$ is observed. However, in a general non-equilibrium scenario,
access to excited states of the neutral
and charged molecule becomes feasible and hence description in terms of spectral
functions corresponding to addition/removal from the $i^{th}$ excited state of
the neutral molecule is required:
\begin{equation}
A^N_{i,\alpha}(E)=\sum_{j}\left[{\tau}^{Nr}_{i j
\alpha}\delta(E-\epsilon^{Nr}_{ij})+{\tau}^{Na}_{ij
\alpha}\delta(E-\epsilon^{Na}_{ij}) \right],
\label{eq:spi}
\end{equation}
where $\alpha$ now corresponds to the two sites that are coupled to the left (L)/
right (R) contacts. For example, let the equilibrium chemical potential be
situated at a position above $\epsilon^{Nr}_{10}=E^{N}_{1}-E^{N-1}_{0}$, shown dotted in Fig.3a.
Given a positive bias (${\mu}_L>{\mu}_R$), the above transition
is energetically feasible only if the ground state of the cation ($| N-1,0
\rangle$) is accessed, which occurs for a tip voltage corresponding to $\mu_R$ below $\epsilon^{Nr}_{00}$. Once this transition is accessed, the spectral function $A^N_{1,L/R}$ involving the first excited state comes into play, because the initial condition $\mu_L > \epsilon^{Nr}_{10}$ allows the neutral excited state $| N,1 \rangle$ to be accessed. This results in additional satellite peaks S1 and S2 in Fig. 3d.
A schematic of transitions that constitute the satellite peaks S1 and S2 due to $A^N_{1,\alpha}(E)$ is shown in
Fig.3c. In this figure, we have only shown the removal levels for brevity, and extension of
the argument by including addition levels is trivial. In general, in the STM
regime, one can write a simple expression to evaluate the conductance formula
as a weighted average over various excited state spectral functions:
\begin{equation}
\frac{dI}{dV_R} \approx \frac{e^2}{h}\gamma^{}_R\sum_iP^N_i A_{iR}^{N}(\mu_R).
\end{equation}
\end{itemize}
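The weighted-average conductance estimate above can be sketched by broadening each delta peak of the spectral function with the thermal kernel $-\partial f/\partial E$. The peak positions and weights below are illustrative placeholders, echoing the single-level example:

```python
import numpy as np

# Sketch of the STM conductance estimate
#   dI/dV ≈ (e^2/h) γ_R Σ_i P_i A_iR(μ_R),
# with each delta peak of the spectral function broadened by the thermal
# kernel -df/dE = 1/(4 kT cosh^2(E/2kT)). Peak data are illustrative.
kT = 0.025                          # eV
peaks = [(-2.0, 1.0), (4.0, 1.0)]   # (transition energy, spectral weight)
P0, gammaR = 1.0, 1.0               # occupation of |N,0>, tip coupling

def dIdV(muR):
    kern = lambda x: 1.0 / (4 * kT * np.cosh(x / (2 * kT)) ** 2)  # -df/dE
    return gammaR * P0 * sum(w * kern(muR - e) for e, w in peaks)

V = np.linspace(-3.0, 5.0, 801)
G = np.array([dIdV(v) for v in V])
print("max dI/dV:", round(G.max(), 2))  # thermal peak height 1/(4 kT) = 10
```

At low temperature each transition thus appears as a conductance peak of width $\sim 3.5\,k_BT$ centered on the corresponding transition energy, which is how the spectral functions of Fig.~3a map onto the STM traces of Fig.~3b and 3d.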
We have thus shown a simple signature that indicates the access of excitations in the many-body spectrum of
the neutral molecule. In general, one may view near-equilibrium conduction in the CB regime using
single particle energy levels:
\begin{equation}
\epsilon^N_{ij} = E^N_i -E^{N-1}_j
\label{eq:spl}
\end{equation}
and their corresponding spectral
weights.
\subsection{Break Junction limit}
The break junction limit is achieved by setting $\eta=0.5, \gamma=1$,
implying that both contacts are equally coupled to the molecular
dot and half the applied voltage appears across the molecular levels, which in our case are the transition energies
$\epsilon^{N}_{ij}$. The many-body configuration of the
molecule consists of its ground state $|N,0 \rangle$ and the first excited state $|
N,1 \rangle$ separated by a gap similar to the HOMO-LUMO gap $\Delta$, followed by a
set of closely spaced excitations denoted by $|N,i \rangle, i > 1$.
The I-V characteristics
in this limit show certain key signatures which result from how these excitations are accessed.
The onset of conduction is established by the offset between the equilibrium Fermi energy $E_F$ and the
first accessible transition energy $\epsilon^{Nr}_{00}$. The qualitative shape of the I-Vs depends
on how the excitations are accessed. Recall that
$\eta=0.5$ implies that the molecular levels are displaced with respect to the
contact electrochemical potentials by the applied voltage. If the excited states are not accessed
simultaneously or prior to the threshold
transition $\epsilon^{Nr}_{00}$, as shown in Fig.4b, the I-V has a brief staircase of plateaus
before a quasi linear rise in current.
This quasilinear current rise occurs due to a huge number of closely spaced transport
channels that are triggered only when
transitions involving an excitation appear within
the bias window. However, the quasilinear current can also appear prematurely without an intervening plateau,
if a feasible transition to an excited state appears in the bias window at or before
the threshold transition. This situation is shown in Fig.4b, where
$\epsilon^{Nr}_{10}$ also appears at threshold, resulting in a quasilinear regime immediately
following the onset.
The two distinct I-Vs have been observed
experimentally \cite{rdekker,jpark,rweber1,rweber2,pnas} and depend merely on the position of the equilibrium
electrochemical potential (Fermi energy) with respect to the transition energies.
Meanwhile, similar SCF based I-V characteristics show adiabatically smeared out currents
whose onsets get postponed by the changing position of the equilibrium $E_F$, as shown in Fig. 4c.
The SCF potential from Eq. 5, determines how levels float with respect to their non-equilibrium occupation \cite{rdatta1}.
It is readily seen by comparing Fig.4a,b with Fig.4c that any self consistent potential cannot change
the qualitative features of the I-Vs in order to resemble the CB features.
\begin{figure}
\centerline{\epsfig{figure=CB_fig4.ps,height=4.5in,width=6in}}
\caption{CB transport under identical contact coupling. a) Schematic of CB conduction resulting in qualitatively different I-V characteristics. For 1) $\mu_F=\mu_0$ one observes I-V with a Coulomb staircase with a plateau followed by a quasi-linear rise. 2) $\mu_F=\mu_1$ one observes the quasi linear I-V upon reaching threshold. (b) This occurs with the intersection of $\mu_L=\epsilon^{Nr}_{10}$ and $\mu_R=\epsilon^{Nr}_{00}$ line in the stability diagram. Stability diagram shown for $N=6$ particle blockade region. c) Distinct I-V's under cross sections $\mu_F=\mu_0$ and $\mu_F=\mu_1$. }
\label{fig:bj}
\end{figure}
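The Coulomb-staircase shape discussed above can be sketched by sweeping the bias in a steady-state solution of Eq.~\ref{ebeenakker} for a single spin-degenerate level with $\eta=0.5$. The two transition energies below are illustrative placeholders, not benzene values:

```python
import numpy as np

# Self-contained break-junction I-V sketch (eta = 0.5): a single
# spin-degenerate level with illustrative addition energies e1 (0 <-> 1
# electron) and e2 (1 <-> 2 electrons), solved with a master-equation
# steady state. Shows the Coulomb-staircase plateau at I = 2/3.
kT, gL, gR = 0.025, 1.0, 1.0
e1, e2 = 0.3, 2.3                       # illustrative transition energies (eV)
f = lambda x: 1.0 / (1.0 + np.exp(np.clip(x / kT, -500, 500)))

def current(V):
    muL, muR = V / 2, -V / 2            # eta = 0.5, E_F = 0
    states = ["0", "up", "dn", "2"]
    add = {("0", "up"): e1, ("0", "dn"): e1, ("up", "2"): e2, ("dn", "2"): e2}
    W = np.zeros((4, 4))
    for (lo, hi), e in add.items():
        i, j = states.index(lo), states.index(hi)
        W[j, i] += gL * f(e - muL) + gR * f(e - muR)
        W[i, j] += gL * (1 - f(e - muL)) + gR * (1 - f(e - muR))
    W -= np.diag(W.sum(axis=0))
    P, *_ = np.linalg.lstsq(np.vstack([W, np.ones(4)]),
                            np.r_[np.zeros(4), 1.0], rcond=None)
    return sum(gL * f(e - muL) * P[states.index(lo)]
               - gL * (1 - f(e - muL)) * P[states.index(hi)]
               for (lo, hi), e in add.items())

V = np.linspace(0.0, 6.0, 121)
I = np.array([current(v) for v in V])
print("I(1.2 V) ≈", round(current(1.2), 3), " I(5.5 V) ≈", round(current(5.5), 3))
```

The onset sits at $V = 2 e_1$ (half the bias drops on each junction), the plateau carries the fractional current $2/3$, and the second step at $V = 2 e_2$ restores $I = 1$; reproducing the quasilinear rise of the text requires the dense ladder of excited-state transitions rather than this two-level toy.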
\subsection{Connection to experiments: Fitting Data using Coulomb Blockade model}
We consider matching the I-V shapes with our CB model using consistent fitting parameters.
The experiments conducted on conjugated phenylenes \cite{rweber1,rweber2} at low temperatures ($T=30K$)
suggest strong Coulomb Blockade effects. It is worth noting that an `orthodox' theory simply involving
junction resistors and capacitors would also manage to capture the zero-bias suppressed conductance,
the subsequent sharp onset and the linear current rise; however, it would not capture the intervening plateaus,
fine structures in the I-Vs, and the gateability of the current levels and their asymmetry features that arise
due to discrete transitions in the molecular configuration space \cite{jpark,rscott}. In contrast with
metallic islands, a molecular dot shows significant size quantization that leads
to quantum corrections to the junction capacitance, and gets further modified
at high bias to involve nonlinear corrections to it arising from partial densities of states
filled separately by the two contacts.
While our model explains salient features of a lot of Coulomb Blockade experiments \cite{jpark,rscott,rdekker}
it is interesting to note that in some cases the same molecule showed CB behavior at low temperature and SCF behavior at higher
temperatures \cite{rweber1,rweber2}. A possible explanation is that
at low temperature the molecule could be frozen into a configuration where the
plane of the middle ring is oriented perpendicular to the side rings,
while room temperature structures sample other configurations and are
rotated on average. This is supported by the fact that current levels at room temperature are an
order of magnitude greater, which can be attributed to an increased average degree of
conjugation along the molecular backbone.
In contrast at low temperature the rotated central ring has a weaker coupling with the
rest of the backbone, which could reduce its broadening while increasing electron localization and charging,
leading to CB behavior.
In fact, some of the experiments feature bulky middle groups like
anthracene, with steric side groups deliberately inserted to facilitate this
rotation of the central rings and enforce CB \cite{rweber2}. While doing exact
calculations on these molecular structures is beyond the scope of the
present paper, we consider making simple fits by considering the following facts:
\begin{itemize}\item Current Levels: Using the fitting parameters
$\gamma_1=\gamma_2 \approx 5$--$10$~meV we obtain current levels similar to experimental
data. It is important to note that changing $\gamma$ does not affect
the conductance before the threshold voltage, which shows a vanishingly small pre-threshold current.
\item Threshold Voltage: We noted in the last section that the
gap $\Delta$ between ground and first excited states of the neutral molecule
is important in determining the qualitative shape
of the I-V. When the equilibrium electrochemical
potential $E_F$ lies above mid-gap between $\epsilon^{Nr}_{10}$ and $\epsilon^{Nr}_{00}$,
the first excited state becomes voltage-accessible before the ground state of the charged
species is accessed and populated simultaneously via $\mu_R=\epsilon^{Nr}_{00}$, giving
rise to the quasi linear I-V immediately following the very first current onset.
In all the experimental data, we observe a threshold voltage
between $0.5$ and $0.7$~V, thus tuning the gap to $\Delta \approx 0.6$--$0.8$~eV.
\end{itemize}
\begin{figure}
\centerline{\hskip 1cm\epsfig{figure=CB_fig1a.ps,height=5in,width=6in}}
\caption{Experimental fits for data \cite{rweber1,rweber2}. a) $T=30K$, $\gamma \approx 5meV$. b) $T=295K$, $\gamma \approx 5meV$ c)$T=30K$, $\gamma \approx 0.25 meV$. }
\label{fig:wfit}
\end{figure}
Fig.~\ref{fig:wfit}(a) (b) and (c) are fits obtained for experiments \cite{rweber1,rweber2}.
In case of molecular asymmetries \cite{rweber1}, only positive bias is considered \cite{rbhasko},
the I-V asymmetries themselves being attributed to polarization
effects \cite{rasymm}. In obtaining an experimental fit for \cite{rweber2}, in
Fig.~\ref{fig:wfit}(b), we used $T=295$ K consistent
with experiment. Notice that the first peak has broadened significantly.
The higher temperature Coulomb Blockade could possibly be attributed to the fact that the molecule involved has an anthracene based middle ring that is much bulkier, thus leading to a
frozen configuration that remains stabilized by steric
interactions up to higher temperatures.
The molecular system we consider is a simple prototypical molecule
(benzene in our case) with calculations based on
simple parameters that are associated with
this minimal system. Performing calculations on a
real molecule-electrode system will be needed to
yield a quantitative fit in terms of threshold voltage, current levels and
positions of peaks. However, the conduction mechanism remains the same. The exponentially
larger configuration space of even a minimal Coulomb Blockaded molecule makes a first-principles calculation of its transport
properties inordinately challenging compared to SCF treatments in the literature. However, the SCF calculations
do not capture the non-equilibrium transition rates between the many-body states, which as we argued earlier
carry crucial correlation signatures that are experimentally observable for ultrashort molecules.
Such a ``real'' calculation involving the quantum chemistry of larger molecules and
contact bondings within this nonequilibrium full CI treatment
is still at a very early stage \cite{rdelaney2}. Furthermore, it needs to be supplemented
with the broadening of the many-particle states that could affect the interference between nearby levels,
an issue that has received relatively little attention \cite{rjauho,rbraig,mwl2,rgurevich,aw}
and requires further work.
\section{Conclusion}
In this paper, we have developed a Coulomb Blockade approach for molecular conduction through
short molecules using benzene as a prototype. We have shown how equilibrium and non-equilibrium
signatures are very different from the traditional NEGF-SCF viewpoint, and that the CB approach
is appropriate in the weak coupling limit. Many I-V features specific to the CB regime are often seen
in experiments. These features, which are easily obtained using a full Configuration Interaction
master equation approach, are potentially very hard to obtain within any
effective one-electron potential, even for a minimal model. A particular challenge
therefore lies in bridging the SCF and CB regimes while paying close attention to coherent level
broadening and associated interferences. The emergence of many recent experiments on molecular dots,
exploring the interplay between charging, quantization and level-broadening, should prove invaluable
in further theoretical developments in this regard.
\ifCLASSOPTIONcompsoc
\IEEEraisesectionheading{\section{Introduction}
\label{sec:introduction}}
\else
\section{Introduction}
\label{sec:introduction}
\fi
\IEEEPARstart{A}{s} the number of \emph{Internet of Things (IoT)} devices deployed dramatically increases worldwide~\cite{Kolias2017DDoSBotnets}, and the traffic volume of IoT-based DDoS attacks reaches unprecedented levels~\cite{Kolias2017DDoSBotnets, Bertino2017BotnetsSecurity, Hallman2017IoDDoSBotnets}, the need for timely detection of IoT botnet attacks
has become imperative for mitigating the risks associated with these attacks.
Instantaneous detection promotes network security, as it expedites the alerting and disconnection of compromised IoT devices from the network, thus stopping the botnet from propagating and preventing further outbound attack traffic.
\begin{table*}[!htbp]
\caption{Prior studies conducted on the detection of IoT-related anomalies, botnets, and malware attacks}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c||cccccccc}
\hhline{=========}
\textbf{Paper}
&\begin{tabular}[t]{@{}c@{}}\textbf{Detected}\\\textbf{Botnet}\end{tabular}
&\begin{tabular}[t]{@{}c@{}}\textbf{Botnet}\\\textbf{Operational}\\\textbf{Step}\end{tabular}
&\textbf{Attack(s)}
&\begin{tabular}[t]{@{}c@{}}\textbf{Detection}\\\textbf{Approach}\end{tabular}
&\begin{tabular}[t]{@{}c@{}}\textbf{Deployment}\\\textbf{Level}\end{tabular}
&\begin{tabular}[t]{@{}c@{}}\textbf{Assumed}\\\textbf{Environment}\end{tabular}
&\begin{tabular}[t]{@{}c@{}}\textbf{Research}\\\textbf{Type}\end{tabular}
&\begin{tabular}[t]{@{}c@{}}\textbf{Data}\\\textbf{for}\\\textbf{Evaluation}\end{tabular}
\\
\hhline{=========}
\cite{Bertino2017BotnetsSecurity}&\makecell{Linux.Darlloz\\worm, Mirai}&Infection&DDoS&\makecell{Intrusion prevention,\\traffic monitoring}&\makecell{Network\\(routers, gateways)}&-&\makecell{Survey}&-\\
\hline
\cite{Hallman2017IoDDoSBotnets}&Mirai&\makecell{Various operational\\steps, depending\\on the malware}&DDoS&-&-&-&Survey&-\\
\hline
\cite{Ozcelik2017Software-DefinedDDoS}&Mirai&\makecell{Scanning\\(propagation)}&\makecell{Mirai-infected\\IoT devices scan \\for further devices}&\makecell{Dynamic\\updating\\of flow rules}&``Thin fog''&\makecell{Critical\\ infrastructures}&Experimental&\makecell{Emulated\\IoT nodes,\\simulated data}\\
\hline
\cite{Summerville2016Ultra-lightweightDevices}&-&-&\makecell{Worm propagation,\\code injection,\\ tunneling attack}&\makecell{Deep packet\\anomaly detection}&Host&-&Experimental&\makecell{Two real\\devices}\\
\hline
\cite{Pa2016IoTPOT:Threats}
&\makecell{ZORRO, *.sh,\\GAFGYT,\\ KOS, nttpd}&All&-&\makecell{Honeypot to\\collect and\\analyze attacks}&Both&-&Experimental&\makecell{Real\\data}\\
\hline
\cite{Sedjelmaci2016AMethodology}&-&-&\makecell{Devices are\\attacked by\\a DoS attack}&\makecell{Hybrid: signature-\\based and anomaly\\detection (BPN)}&Host&WSN&Experimental&Simulation\\
\hline
\cite{Bostani2017HybridApproach}&-&-&\makecell{Routing attacks\\(sinkhole and\\selective-forwarding)}&\makecell{Hybrid: specification-\\based and anomaly\\detection (OFPC)}&\makecell{Network\\(routers and\\root nodes)}&\makecell{6LoWPAN WSN,\\representing\\a smart city}&Experimental&Simulation\\
\hline
\cite{Butun2015AnomalyThings}&-&-&-&\makecell{Several methods,\\including\\anomaly detection}&\makecell{Network\\(cloud)}&\makecell{Sensing systems\\and distributed\\cloud platforms}&\makecell{Survey on\\challenges and\\detection approaches}&-\\
\hline
\cite{Midi2017KalisThings}&-&-&\makecell{ICMP flood, replication, wormhole,\\TCP SYN flood, HELLO jamming, data\\modification, selective forwarding, smurf}&\makecell{Knowledge\\driven,\\ anomaly detection}&Network&\makecell{Adapts to ZigBee/XBee/\\6LoWPAN (on IEEE 802.15.4),\\WiFi (on IEEE 802.11), and BT}&Experimental&\makecell{Real devices,\\simulated data}\\
\hline
\cite{Raza2013SVELTE:Things}&-&-&\makecell{Routing attacks like spoofed\\or altered information,\\sinkhole, selective-forwarding}&\makecell{Hybrid: signature-\\based and\\anomaly detection}&\makecell{Hybrid:\\border router\\and hosts}&6LoWPAN&Experimental&Simulation\\
\hline
\cite{2017AThings}&-&-&-&\makecell{Several methods,\\including\\anomaly detection}&\makecell{Host and\\network}&-&Survey&-\\
\hhline{=========}
\end{tabular}
}
\label{tab:related_work}
\end{table*}
Botnets such as Mirai are typically constructed in several distinct operational steps~\cite{Kolias2017DDoSBotnets}, namely \emph{propagation}, \emph{infection}, \emph{C\&C communication}, and \emph{execution of attacks}. Unlike most previous studies on botnet detection (see Table~\ref{tab:related_work}), which addressed the early operational steps, we focus on the last step.
We concentrate on large enterprises, which are
expected to face an ever-growing range and quantity of IoT devices, normally connecting to their networks via Wi-Fi (short-range communications like Bluetooth and ZigBee are not in our current scope). These devices can be either self-deployed (e.g., \textit{smart} smoke detectors) or dynamically introduced from the outside by employees and visitors (e.g., BYO wearables).
Assuming that botnet attacks are unlikely to disappear, the fundamental question we address is as follows. Given a large number of heterogeneous IoT devices connected to an organizational network, can we devise a centralized, automated method that is highly effective and accurate in detecting compromised IoT devices which have been added to a botnet and have been used to launch attacks?
For detecting attacks launched from IoT bots we propose a network-based approach, which uses deep learning techniques to perform anomaly detection. Specifically, we extract statistical features which capture behavioral snapshots of benign IoT traffic, and train a deep autoencoder \emph{(one for each device)} to learn the IoT's normal behaviors. The deep autoencoder attempts to compress
snapshots. When an autoencoder fails to reconstruct a snapshot, this is a strong indication that the observed behavior is anomalous (i.e., the IoT device has been compromised and is exhibiting an unknown behavior). An advantage of using deep autoencoders is their ability to learn complex patterns, e.g., of various device functionalities. This results in an anomaly detector with hardly any false alarms. We empirically show that the autoencoders' false alarm rate is considerably lower than that of three other algorithms commonly used for anomaly detection~\cite{tuor2017deep}.
The following are the benefits of using this approach to detect infected IoT devices:
\textbf{Heterogeneity tolerance}. Compared to classical computing environments, the IoT domain is highly diverse~\cite{Bertino2017BotnetsSecurity, Hallman2017IoDDoSBotnets}. However, by profiling each device with a separate autoencoder, our method addresses the growing heterogeneity of IoT devices.
\textbf{Open World}. Typically in deep learning applications, models are trained to classify based on labels provided by experts (e.g., malicious or benign). In contrast, our autoencoders are trained to detect when a behavior is abnormal. Thus, our method can detect new, previously unseen botnet behaviors, which is important given the continuously evolving variants~\cite{Bertino2017BotnetsSecurity} and new botnets, which already render most detection methods obsolete~\cite{garcia2014survey}.
\textbf{Efficiency}. In the enterprise scenario, it is common that the traffic data of all connected hosts is monitored, but the amount of monitored traffic is prohibitively large to store and use for training deep neural networks. Our method uses incremental statistics for feature extraction, and the training of the autoencoders can be performed in a semi-online manner (train on a batch of observations and then discard it). Therefore, the training is practical, and there is no storage concern. Additionally,
our method is network-based so it does not consume any computation, memory, or energy resources from the (typically constrained) IoT devices. Thus, our method does not jeopardize their functionality or impair their lifespan. Our focus on the attack operational step (as opposed to the early steps) also makes our method indifferent to the botnet propagation protocols
and the possibly encrypted~\cite{garcia2014survey} C\&C channels.
The contributions of this paper can be summarized as follows:
\begin{enumerate}
\item To the best of our knowledge, we are the first to apply autoencoders to IoT network traffic for anomaly detection, as a complete means of detecting botnet attacks. Even in the larger domain of network traffic analysis, autoencoders have not been used as fully automated standalone malware detectors, but rather as preliminary tools for either feature learning~\cite{arnaldo2017learning} or dimensionality reduction~\cite{li2015hybrid}, or at most as semi-manual outlier detectors which substantially depend on human labeling for subsequent classification~\cite{veeramachaneni2016ai} or further inspection by security analysts~\cite{tuor2017deep}.
\item Unlike previous experimental studies on the detection of IoT botnets or IoT traffic anomalies which relied on emulated or simulated data (\hspace{1sp}\cite{Sedjelmaci2016AMethodology, Ozcelik2017Software-DefinedDDoS, Bostani2017HybridApproach, Midi2017KalisThings}), we perform empirical evaluation with real traffic data, gathered from nine commercial IoT devices infected by authentic botnets from two families. We examine Mirai and BASHLITE, two of the most common IoT-based botnets, which have already demonstrated~\cite{Kolias2017DDoSBotnets} their harmful capabilities. To enable reproducibility and address the lack of public botnet datasets~\cite{garcia2014survey}, particularly for the IoT, we share our network traces at http://archive.ics.uci.edu/ml/datasets/detection\_of\_IoT\_botnet\_attacks\_N\_BaIoT.
\end{enumerate}
\section{Related Work}\label{sec:related_work}
The botnet detection methods suggested thus far can be categorized based on (1) the specific operational step to be detected, and (2) the detection approach. Table~\ref{tab:related_work} is based on this categorization and further summarizes previous studies on the detection of IoT-related anomalies, botnets, and malware attacks.
Among the \emph{botnets' operational steps}, previous IoT-related detection studies (e.g.,~\cite{Ozcelik2017Software-DefinedDDoS} and~\cite{Summerville2016Ultra-lightweightDevices}) focused mainly on the early steps of propagation and communication with the C\&C server.
However, given that botnet attacks continue to mutate on a daily basis~\cite{Kolias2017DDoSBotnets} and become increasingly sophisticated~\cite{Bertino2017BotnetsSecurity}, we anticipate that some of these mutations will eventually succeed at bypassing existing methods of early detection.
Moreover, mobile IoT devices
might get contaminated when connected to external networks.
For instance, smartwatches may connect to dubious \emph{free Wi-Fi} networks when their owners arrive at airports.
Hence, monitoring organizational networks for identifying the early steps of infection alone is insufficient.
Accordingly, we focus on a later step of a botnet operation, when IoT bots begin launching cyberattacks. In that sense, our method adds a \emph{last line of defense} security layer.
It instantly detects the IoT-based attacks and minimizes their impact by issuing an immediate alert which recommends the isolation of any compromised device from the network until it is sanitized.
\begin{table*}[!t]
\caption{Extracted features}
\centering
\begin{tabular}{lllc}
\hhline{====}
\multicolumn{1}{c}{\textbf{Value}} & \multicolumn{1}{c}{\textbf{Statistic}} & \multicolumn{1}{c}{\textbf{Aggregated by}} & \begin{tabular}[t]{@{}c@{}}\multicolumn{1}{c}{\textbf{Total Number}}\\\multicolumn{1}{c}{\textbf{of Features}}\end{tabular} \\
\hhline{====}
Packet size (of outbound packets only) & Mean, Variance & \makecell[l]{Source IP,\textsuperscript{1} Source MAC-IP,\textsuperscript{2}\\Channel, Socket\textsuperscript{3}} & 8 \\
\hline
Packet count & Number & \makecell[l]{Source IP, Source MAC-IP,\\Channel, Socket} & 4 \\
\hline
\makecell[l]{Packet jitter (the amount of time\\between packet arrivals)} & Mean, Variance, Number & Channel & 3 \\
\hline
\makecell[l]{Packet size (of both inbound and\\outbound together)} & \makecell[l]{Magnitude, Radius, Covariance,\\Correlation coefficient} & Channel, Socket & 8 \\
\hhline{====}
\multicolumn{4}{l}{\textsuperscript{1} The source IP is used to track the host as a whole.}\\
\multicolumn{4}{l}{\textsuperscript{2} The source MAC-IP adds the capability to distinguish between traffic originating from different gateways and spoofed IP addresses.}\\
\multicolumn{4}{l}{\makecell[l]{\textsuperscript{3} The sockets are determined by the source and destination TCP or UDP port numbers. For example, all of the traffic sent from\\192.168.1.12:1234 to 192.168.1.50:80 (traffic flowing from one socket to another).}}\\
\\
\multicolumn{4}{l}{\makecell[l]{Further details and the datasets themselves are publicly available at\\ http://archive.ics.uci.edu/ml/datasets/detection\_of\_IoT\_botnet\_attacks\_N\_BaIoT.}}\\
\hhline{====}
\end{tabular}
\label{tab:extracted_features}
\end{table*}
Among the suggested \emph{botnet detection approaches}, a primary distinction is made between host-based~\cite{Sedjelmaci2016AMethodology, Summerville2016Ultra-lightweightDevices} and network-based~\cite{Ozcelik2017Software-DefinedDDoS, Bostani2017HybridApproach, Butun2015AnomalyThings, Midi2017KalisThings} approaches. We consider host-based techniques less realistic for detecting compromised IoT devices, because (1) we cannot rely on the good will of all IoT manufacturers to install designated host-based anomaly detectors on their products; (2) there is limited access to some IoT devices (e.g., wearables), so the installation of software on end devices cannot be enforced; (3) the constrained computation and power of most IoT devices impose constraints on the complexity and efficiency of host-based anomaly detection algorithms, which also might consume energy and computation from the devices and thus harm their functionality; and (4) in the enterprise scenario we assume, where various and numerous IoT devices connect to the organizational network, a single non-distributed solution is preferred.
A hierarchical taxonomy of network-based botnet detection approaches, not limited to the IoT domain, is proposed by~\cite{garcia2014survey}. Honeypots are one of the detection sources surveyed in this study. Honeypots have commonly been used for collecting, understanding, characterizing, and tracking
botnets~\cite{Pa2016IoTPOT:Threats}. However, they are not necessarily useful for detecting compromised endpoints or the attacks emanating from them. Moreover, honeypots normally require a substantial investment in the procurement or emulation of real devices,
data inspection, signature extraction,
and keeping up with mutations.
As per~\cite{garcia2014survey}, normal networks constitute an alternative detection source, where network intrusion detection systems (NIDSs) monitor traffic data continuously and automatically, while using pattern matching to detect signs of undesirable activities.
Such patterns may rely on (1) signatures identified by honeypots, (2) DNS traffic with a potential C\&C server, (3) traffic anomalies~\cite{Summerville2016Ultra-lightweightDevices}, (4) data mining, or (5) hybrid approaches~\cite{Sedjelmaci2016AMethodology, Bostani2017HybridApproach}.
Similar to~\cite{Summerville2016Ultra-lightweightDevices}, we find that the anomaly-based approach is best suited for detecting compromised IoT devices, because these connected appliances are typically task-oriented (e.g., specifically designed to detect motion or measure humidity).
Accordingly, they execute fewer, and potentially less complex, network protocols, and exhibit traffic with less variance than PCs. As such, detecting deviations from their normal patterns should be more accurate and robust.
Many detection algorithms were surveyed in~\cite{garcia2014survey}; however,
artificial neural networks were left uncited, and autoencoders were not mentioned at all.
Such works within the greater domain of cybersecurity have been published more recently, yet they are dissimilar to our approach, unrelated to the IoT, and often not directly connected to botnets. For instance,~\cite{arnaldo2017learning, li2015hybrid} and~\cite{yu2017network} applied shallow autoencoders for preliminary feature learning and dimensionality reduction, followed by Random Forest, Deep Belief Networks, and Softmax, respectively for classification and fine-tuning.
Although autoencoders were extended for outlier detection in~\cite{veeramachaneni2016ai}, they still required security analysts to actively label data for subsequent supervised learning.
Closer to our approach, the authors of~\cite{tuor2017deep} apply deep learning to system logs for detecting insider threats.
Differently from us, they use DNNs and RNNs (LSTMs),
and depend on further manual inspection.
In conclusion, our method differs from previous studies as we learn from benign data by training deep autoencoders for each device, and use them as standalone automatic tools for instantaneous detection of existing and unseen IoT botnet attacks.
\section{Proposed Detection Method}\label{sec:proposed_detection_method}
The method we propose for detecting IoT botnet attacks relies on deep autoencoders for each device, trained on statistical features extracted from benign traffic data.
When applied to new (possibly infected) data of an IoT device, detected anomalies may indicate that the device is compromised. This method consists of the following main stages: (1) data collection, (2) feature extraction, (3) training an anomaly detector,
and (4) continuous monitoring.
\textbf{Data collection.} We capture the raw network traffic data (in \emph{pcap} format)
using port mirroring on the switch through which the organizational traffic typically flows.
To ensure that the training data is clean of malicious behaviors, the normal traffic of an IoT is collected immediately following its installation in the network.
\textbf{Feature extraction.} Whenever a packet arrives, we take a behavioral snapshot of the hosts and protocols that communicated this packet. The snapshot obtains the packet's context by extracting 115 traffic statistics over several temporal windows to summarize all of the traffic that has (1) originated from the same IP in general, (2) originated from both the same source MAC and the same IP address, (3) been sent between the source and destination IPs (\emph{channel}), and (4) been sent between the source to destination TCP/UDP sockets (\emph{socket}).
We extract the same set of 23 features (capturing the above, see Table~\ref{tab:extracted_features}) from five time windows of the most recent 100ms, 500ms, 1.5sec, 10sec, and 1min.
These features can be computed very quickly and incrementally, and thus facilitate real-time detection of malicious packets. Additionally, although generic, these features can capture specific behaviors like source IP spoofing~\cite{Bertino2017BotnetsSecurity}, characteristic of Mirai's attacks. For instance, when a compromised IoT device spoofs an IP, the features aggregated by the Source MAC-IP, Source IP, and Channel will immediately indicate a large anomaly due to the unseen behavior originating from the spoofed IP address.
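The per-stream statistics above can be maintained in constant time per packet. The following sketch uses a damped (exponentially decaying) window; the decay-factor formulation and the \texttt{IncStat} class are an illustrative assumption about how such incremental statistics can be kept, not the exact implementation:

```python
class IncStat:
    """Damped incremental statistic for one traffic stream (e.g., one
    channel or socket).  Only a (weight, linear sum, squared sum) triple
    is stored, so mean and variance are O(1) per packet."""

    def __init__(self, lam):
        self.lam = lam                      # decay factor: larger = shorter memory
        self.w = self.ls = self.ss = 0.0    # weight, linear sum, squared sum
        self.t_last = None                  # timestamp of the last update

    def update(self, x, t):
        if self.t_last is not None:         # decay the old sums first
            d = 2.0 ** (-self.lam * (t - self.t_last))
            self.w, self.ls, self.ss = d * self.w, d * self.ls, d * self.ss
        self.t_last = t
        self.w += 1.0
        self.ls += x
        self.ss += x * x

    def mean(self):
        return self.ls / self.w

    def var(self):
        # variance from the running sums; clamp tiny negative rounding errors
        return max(0.0, self.ss / self.w - self.mean() ** 2)
```

One such object per aggregation key (source IP, source MAC-IP, channel, socket) and per time window yields the mean and variance features of Table~\ref{tab:extracted_features} without storing any raw traffic.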
\textbf{Training an anomaly detector.} As our base anomaly detector, we use deep autoencoders and maintain a model for each IoT device separately. An autoencoder is a neural network which is trained to reconstruct its inputs after some compression. The compression ensures that the network learns the meaningful concepts and the relations among its input features. If an autoencoder is trained on benign instances only, then it will succeed at reconstructing normal observations, but fail at reconstructing abnormal observations (unknown concepts). When a significant reconstruction error is detected, we classify the given observation as an anomaly.
We optimize the parameters and hyperparameters of each trained model such that when applied to unseen traffic the model maximizes the true positive rate (TPR, detecting attacks once they occur) and minimizes the false positive rate (FPR, wrongly marking benign data as malicious).
For training and optimization, we use two separate datasets which only contain benign data, from which the model \emph{learns} patterns of normal activity.
The first dataset is the \emph{training set} (\(DS_{trn}\)), and it is used for training the
autoencoder, given input parameters such as the \emph{learning rate}
(\(\eta\), the size of the gradient descent step) and the number of \emph{epochs} (complete passes through the entire \(DS_{trn}\)).
The second dataset is the \emph{optimization set} (\(DS_{opt}\)), and it is used to optimize these two hyperparameters (\(\eta\) and \(epochs\)) iteratively until the mean square error (\(MSE\)) between a \(model\)'s input (the original feature vector) and output (the reconstructed feature vector) stops decreasing. Stopping at this point
prevents overfitting \(DS_{trn}\), thus promoting better detection results with future data. \(DS_{opt}\) is later used to optimize a threshold (\(tr\)) which discriminates between benign and malicious observations; finally, it is also used to optimize the window size (\(ws\)), by which the FPR is minimized.
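The iterative tuning of \(epochs\) described above amounts to an early-stopping loop. The sketch below is an illustrative assumption about that loop; the \texttt{train\_step} and \texttt{eval\_mse} callbacks and the \texttt{patience} parameter are hypothetical names, not part of the paper's implementation:

```python
def optimize_epochs(train_step, eval_mse, max_epochs, patience=1):
    """Run training epochs until the reconstruction MSE on DS_opt stops
    decreasing; return the best epoch count and its MSE."""
    best_mse, best_epoch, stale = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        train_step()              # one complete pass over DS_trn
        mse = eval_mse()          # reconstruction MSE on DS_opt
        if mse < best_mse:
            best_mse, best_epoch, stale = mse, epoch, 0
        else:                     # MSE stopped decreasing
            stale += 1
            if stale >= patience:
                break
    return best_epoch, best_mse
```

Stopping as soon as the held-out MSE rises is what prevents the model from overfitting \(DS_{trn}\).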
Once the \(model\) training and optimization are complete, the threshold
\(tr^*\) is set. This anomaly threshold, above which an instance is considered anomalous, is calculated as the sum of the sample mean and sample standard deviation of the \(MSE\) over \(DS_{opt}\) (see Equation~\ref{eq:threshold}).
\begin{equation}
tr^*=\overline{MSE}_{DS_{opt}}+s(MSE_{DS_{opt}})
\label{eq:threshold}
\end{equation}
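Equation~\ref{eq:threshold} reduces to a single line of code; the sketch below assumes the per-instance reconstruction errors on \(DS_{opt}\) are already available as a plain list:

```python
import statistics

def anomaly_threshold(mse_opt):
    """tr* = sample mean + sample standard deviation of the
    reconstruction MSEs observed on the benign optimization set."""
    return statistics.mean(mse_opt) + statistics.stdev(mse_opt)
```

Note that `statistics.stdev` computes the sample (n-1) standard deviation, matching the sample statistics of Equation~\ref{eq:threshold}.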
Preliminary experiments revealed that deciding whether a device's packet stream is anomalous based on a single instance enables very accurate detection of IoT-based botnet attacks (high TPR). However, benign instances were too often (in approximately 5--7\% of cases) falsely marked as anomalous.
Thus, we base the abnormality decision on a \emph{sequence} of instances by implementing a majority vote on a moving window. We determine the minimal window size \(ws^*\) as the shortest sequence of instances whose majority vote produces a 0\% FPR on \(DS_{opt}\) (see Equation~\ref{eq:window_size}).
\begin{equation}
ws^*=\operatorname*{arg\,min}_{|ws|}(|\{packet \in ws|MSE(packet)>tr^*\}|>\frac{|ws|}{2})
\label{eq:window_size}
\end{equation}
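The moving-window majority vote can be sketched as follows; emitting a verdict for the partial windows at the start of the stream is an illustrative choice, not something the paper specifies:

```python
from collections import deque

def stream_verdicts(mse_stream, tr, ws):
    """Flag the stream as anomalous at each packet when a strict
    majority of the last ws instance-level flags exceeds threshold tr."""
    window = deque(maxlen=ws)   # keeps only the ws most recent flags
    verdicts = []
    for mse in mse_stream:
        window.append(mse > tr)                     # instance-level flag
        verdicts.append(sum(window) > len(window) // 2)  # majority vote
    return verdicts
```

A single flagged instance thus cannot trigger an alert; only a sustained run of anomalous instances can, which is what drives the FPR on \(DS_{opt}\) down to 0\%.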
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]
{lab_architecture_v3.png}
\caption{Lab setup for detecting IoT botnet attacks}
\label{fig:lab_architecture}
\end{figure*}
\textbf{Continuous monitoring for anomaly detection.} Eventually, we apply the optimized \(model\) to feature vectors extracted from continuously observed packets, to mark each instance as benign or anomalous.
Then, a majority vote on a sequence (the length of \(ws^*\)) of marked instances is used to decide whether the entire respective stream is benign or anomalous.
Consequently, an alert can be issued upon the detection of an anomalous stream, as it might indicate malicious activity on the IoT device.
\section{Empirical Evaluation}\label{sec:empirical_evaluation}
In our experiments, we strived to authentically represent IoT devices deployed in an enterprise setting, infected by real-world botnets, and executing genuine attacks.
\textbf{Lab setup.}
To replicate a typical organizational data flow, we collected the traffic data from IoT devices that were connected via Wi-Fi to several access points, which were wired to a central switch that also connects to a router.
For sniffing the network traffic, we performed port mirroring on the switch, and recorded the data using Wireshark.
To evaluate our detection method as realistically as possible, we also deployed all of the components of two botnets (see Figure~\ref{fig:lab_architecture}) in our isolated lab and used them to infect nine commercial IoT devices (see Table~\ref{tab:device_overview_and_params}).
\begin{table*}[!hb]
\caption{
Overview of the training stage: dataset properties and training summary, optimized hyperparameters for autoencoders, and botnet infections}
\centering
\resizebox{\textwidth }{!}{
\begin{tabular}{c||llrcc||cccc||cc}
\hhline{============}
& \multicolumn{5}{c}{\textbf{Dataset Properties and Training Summary}} & \multicolumn{4}{c}{\textbf{Optimized Hyperparameters of Autoencoders}} & \multicolumn{2}{c}{\textbf{Botnet Infections}}\\
\hline
\begin{tabular}[t]{@{}c@{}}\textbf{Device}\\\textbf{ID}\end{tabular}
& \textbf{Device Make and Model}
& \textbf{Device Type}
& \begin{tabular}[t]{@{}c@{}}\textbf{Number}\\\textbf{of Benign}\\\textbf{Instances}\end{tabular}
& \begin{tabular}[t]{@{}c@{}}\textbf{Training}\\\textbf{Time}\\\textbf{(\(sec\))}\end{tabular}
& \begin{tabular}[t]{@{}c@{}}\textbf{Object}\\\textbf{Size}\\\textbf{(\(kB\))}\end{tabular}
& \begin{tabular}[t]{@{}c@{}}\textbf{Learning}\\\textbf{Rate}\\\textbf{(\(\eta\))}\end{tabular}
& \begin{tabular}[t]{@{}c@{}}\textbf{Number}\\\textbf{of Epochs}\\\textbf{(\(epochs\))}\end{tabular}
& \begin{tabular}[t]{@{}c@{}}\textbf{Anomaly}\\\textbf{Threshold}\\\textbf{(\(tr^*\))}\end{tabular}
& \begin{tabular}[t]{@{}c@{}}\textbf{Window}\\\textbf{Size}\\\textbf{(\(ws^*\))}\end{tabular}
& \textbf{Mirai}
& \textbf{BASHLITE} \\
\hhline{============}
1 & Danmini & Doorbell & 49,548 & 555 & 172 & 0.012 & 800 & 0.042 & 82 & \Checkmark & \Checkmark \\
2 & Ennio & Doorbell & 39,100 & 215 & 172 & 0.003 & 350 & 0.011 & 22 & - & \Checkmark \\
3 & Ecobee & Thermostat & 13,113 & 54 & 172 & 0.028 & 250 & 0.011 & 20 & \Checkmark & \Checkmark \\
4 & Philips B120N/10 & Baby Monitor & 175,240 & 292 & 172 & 0.016 & 100 & 0.030 & 65 & \Checkmark & \Checkmark \\
5 & Provision PT-737E & Security Camera & 62,154 & 275 & 172 & 0.026 & 300 & 0.035 & 32 & \Checkmark & \Checkmark \\
6 & Provision PT-838 & Security Camera & 98,514 & 795 & 172 & 0.008 & 450 & 0.038 & 43 & \Checkmark & \Checkmark \\
7 & SimpleHome XCS7-1002-WHT & Security Camera & 46,585 & 220 & 172 & 0.017 & 230 & 0.056 & 23 & \Checkmark & \Checkmark \\
8 & SimpleHome XCS7-1003-WHT & Security Camera & 19,528 & 190 & 172 & 0.006 & 500 & 0.004 & 25 & \Checkmark & \Checkmark \\
9 & Samsung SNH 1011 N & Webcam & 52,150 & 150 & 172 & 0.013 & 150 & 0.074 & 32 & - & \Checkmark \\
\hhline{============}
\end{tabular}
}
\label{tab:device_overview_and_params}
\end{table*}
\textbf{Botnets deployed.}
We focused on two of the most common IoT botnet families: BASHLITE and Mirai. We deployed both of them in our labs and collected traffic data before and after the infection.
\emph{BASHLITE} (also known as Gafgyt, Q-Bot, Torlus, LizardStresser, and Lizkebab) is one of the most infamous types of IoT botnets, and its code and behavior can be found in other IoT malware as well.
To launch an attack, the botnet infects Linux-based IoT devices by brute forcing default credentials of devices with open Telnet ports.
In our research, the IoT devices were infected using the binaries from the IoTPOT dataset~\cite{Pa2016IoTPOT:Threats} (namely Gafgyt).
In order to adjust the attacks to our lab, the IP address of the C\&C server was extracted from the malware's binary, and all of the network traffic to this IP was routed to a server in our lab that functions as a C\&C server. Once a new bot connected to this server and was under its control, this server was able to command the infected device to launch attacks.
\emph{Mirai} is the second botnet we deployed in our isolated network, using its published source code~\cite{GitHubPurposes}. The experimental setup included a C\&C server and a server with a scanner and loader.
The scanner and loader components are responsible for scanning and identifying vulnerable IoT devices, and loading the malware to the vulnerable IoT devices detected.
Once a device was infected, it automatically started scanning the network for new victims while waiting for instructions from the C\&C server.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]
{TPR_FPR_algos.jpg}
\caption[TPR_FPR_algos]%
{{\small Methods' detection accuracy}}
\label{fig:TPR_FPR_algos}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{avg_detection_times_devices.jpg}
\caption[]%
{{\small Methods' detection time (seconds)}}
\label{fig:detection_times_algorithms}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{fpr_d_score.jpg}
\caption[]%
{{\small Average FPR explained by traffic characteristics}}
\label{fig:fpr_d_score}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{detction_time_d_score.jpg}
\caption[]%
{{\small Detection time explained by traffic characteristics}}
\label{fig:detection_time_d_score}
\end{subfigure}
\caption[]
{\small Experimental results using the test set: comparison of methods and potential explanations}
\label{fig:results}
\end{figure*}
\textbf{Attacks executed.} The following is the list of attacks executed and tested in our lab.
\begin{itemize}[leftmargin=*]
\item BASHLITE Attacks
\begin{enumerate}
\item Scan: Scanning the network for vulnerable devices
\item Junk: Sending spam data
\item UDP: UDP flooding
\item TCP: TCP flooding
\item COMBO: Sending spam data and opening a connection to a specified IP address and port
\end{enumerate}
\item Mirai Attacks
\begin{enumerate}
\item Scan: Automatic scanning for vulnerable devices
\item Ack: Ack flooding
\item Syn: Syn flooding
\item UDP: UDP flooding
\item UDPplain: UDP flooding with fewer options, optimized for higher PPS
\end{enumerate}
\end{itemize}
\textbf{Experimental results and discussion.}
Each of the nine sets of \emph{benign} data we collected in our lab, corresponding to the nine IoT devices, was divided chronologically into three equidimensional sets: (1) $DS_{trn}$ for training the autoencoder, (2) $DS_{opt}$ for parameter optimization, and (3) the benign part of $DS_{tst}$ for estimating FPR.
In order to imitate real-world settings and thus assess our method more realistically, we made sure to incorporate traffic from the entire (normal) life cycle of the devices. Particularly, in each of the three sets of each IoT device we included not only traffic data of frequent actions (e.g., a webcam transmitting video) but also infrequent actions (e.g., accessing a webcam via the mobile app, moving in front of it, or booting it).
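The chronological three-way split described above can be sketched in a few lines; the function name and the list-based representation are illustrative, not taken from the paper's code.

```python
def chronological_split(records):
    """Split an ordered sequence of benign traffic records into three
    equally sized, chronologically contiguous parts: one for training
    the autoencoder, one for hyperparameter optimization, and one for
    estimating the FPR on the benign part of the test set."""
    third = len(records) // 3
    ds_trn = records[:third]             # autoencoder training
    ds_opt = records[third:2 * third]    # optimizing tr and ws
    ds_tst = records[2 * third:]         # benign part of the test set
    return ds_trn, ds_opt, ds_tst

# Example with nine timestamped records:
trn, opt, tst = chronological_split(list(range(9)))
```

Splitting chronologically (rather than randomly) matters here because it keeps the evaluation faithful to deployment: the model is always tested on traffic recorded after the traffic it was trained on.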
For training and optimization we used Keras. Each autoencoder had an input layer whose dimension is equal to the number of features in the dataset (i.e., 115). As noted by~\cite{li2015hybrid} and~\cite{arnaldo2017learning}, autoencoders effectively perform dimensionality reduction internally, such that the code layer between the encoder(s) and decoder(s) efficiently compresses the input layer and reflects its essential characteristics.
In our experiments, four hidden layers of encoders were set at decreasing sizes of 75\%, 50\%, 33\%, and 25\% of the input layer's dimension.
The subsequent layers were decoders of the same sizes as the encoders, but in increasing order (starting from 33\%).
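As a concrete illustration of the architecture just described, the layer dimensions can be computed from the input dimension of 115 features; the exact rounding rule is our assumption, while the ratios come from the text.

```python
def autoencoder_dims(input_dim, ratios=(0.75, 0.50, 0.33, 0.25)):
    """Layer dimensions of the symmetric autoencoder: four encoder
    layers at decreasing fractions of the input dimension, then
    decoder layers of the same sizes in increasing order (starting
    from the 33% layer), ending back at the input dimension."""
    encoder = [int(input_dim * r) for r in ratios]
    decoder = sorted(encoder[:-1]) + [input_dim]
    return [input_dim] + encoder + decoder

dims = autoencoder_dims(115)
```

For the 115-feature input this yields the bottleneck at 25\% of the input dimension, which is what constrains the model from learning the identity function.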
Table~\ref{tab:device_overview_and_params} provides technical details about the training stage, while focusing on the dataset properties, the optimized hyperparameters of the autoencoders, and the botnet infections.
Following the stage of autoencoder training and optimization, we used the same (benign) data to train three other algorithms commonly used~\cite{tuor2017deep} for anomaly detection: \emph{Local Outlier Factor} (LOF), \emph{One-Class SVM}, and \emph{Isolation Forest}.
We optimized their hyperparameters exactly as we did for the autoencoders, including \(tr\) and \(ws\).
Finally, we executed all of the above attacks with the same duration via Mirai and BASHLITE's C\&C servers.
Then we extracted the features from the malicious data and appended each benign part of $DS_{tst}$ (previously mentioned) to the respective malicious part of $DS_{tst}$, to form a single test dataset per IoT device with both benign and malicious instances.
The experimental results on \(DS_{tst}\)
(see Figure \ref{fig:results}) are promising:
\begin{itemize}[leftmargin=*]
\item Our method succeeded in detecting every single attack launched by every compromised IoT device, i.e., a TPR of 100\%. As evident in Figure \ref{fig:TPR_FPR_algos}, LOF and SVM reached similar TPRs, far better than Isolation Forest, which demonstrated an inferior and highly variable TPR.
\item Our method also raised the fewest false alarms. It demonstrated a mean FPR of 0.007$\pm$0.01, lower and more consistent than SVM (0.026$\pm$0.029), Isolation Forest (0.027$\pm$0.041) and LOF (0.086$\pm$0.081).
\item Moreover, our method required only 174$\pm$212 milliseconds to detect the attacks, and frequently much less time. As evident in Figure~\ref{fig:detection_times_algorithms}, for most of the evaluated IoT devices the average detection time of our method was lower than all the other methods. Assuming that the detection of attack-related anomalies can automatically trigger an immediate isolation of the compromised IoT device from the network, launched attacks can be stopped in less than a second.
This is a substantial reduction from the typical duration of DDoS attacks~\cite{Blenn2017QuantifyingBackscatter}, whose distribution normally ranges between 20 and 90 seconds, plus a long tail in which 10\% of the attacks continue for more than a day and 2\% last longer than a month.
\end{itemize}
In terms of TPR, FPR, and detection time, the deep autoencoders outperformed the other methods for most devices.
This is probably due to the ability of deep architectures to learn nonlinear structure mapping and approximate complex functions~\cite{li2015hybrid}. Additionally, the constrained complexity of deep autoencoders, imposed by the reduced dimensionality in the hidden layers, prevents them from learning the trivial identity function~\cite{tuor2017deep}.
Therefore, deep autoencoders tend to fit common patterns better than uncommon ones.
This is beneficial for IoT devices, as they are normally task-oriented, so their specific functionality should translate into a small number of normal traffic patterns.
Despite this tendency to fit common traffic patterns (generated by frequent actions), the autoencoders succeeded in capturing the patterns of infrequent actions (e.g., boots) as well, as demonstrated by the low FPR.
In real-world applications, the FPR can be adjusted by tuning $tr^*$ and/or $ws^*$, although at some cost to TPR and detection time.
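A minimal sketch of such a detection rule, assuming a threshold \(tr\) on the reconstruction error and a majority vote over a sliding window of \(ws\) instances (the exact vote rule and function name are our assumptions):

```python
def first_alarm(mses, tr, ws):
    """Return the index at which an alarm first fires, or None.
    An instance is marked anomalous when its reconstruction error
    (MSE) exceeds tr; an alarm fires when the majority of the last
    ws instances are anomalous."""
    window = []
    for i, mse in enumerate(mses):
        window.append(mse > tr)
        if len(window) > ws:
            window.pop(0)          # keep only the last ws votes
        if len(window) == ws and sum(window) > ws // 2:
            return i
    return None
```

Under this rule, raising \(tr\) or widening \(ws\) suppresses false alarms from isolated benign outliers, but delays (or can miss) short attacks, which is exactly the trade-off noted above.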
\section{Conclusion}\label{sec:conclusion}
Although the autoencoders in our experiments obtained an FPR of zero on most IoT devices in a test set, the difference in the FPR among the remaining IoT devices led us to further analyze our data.
We observed that the Philips B120N/10 baby monitor demonstrated the highest FPR relative to the other devices;
it also produced the largest amount of traffic (see Table~\ref{tab:device_overview_and_params}), so one could expect that the abundance of training instances would result in more robust machine learning models.
However, this device also has the most diverse set of capabilities, as it is equipped with a two-way intercom function, motion detection, audio detection, and several other sensors for ambient light, temperature, and humidity.
Given this, it might be more difficult to capture its normal behavior, and therefore future observations may be subject to more categorization errors.
Accordingly, we hypothesize that the difficulty in capturing the normal traffic behavior varies among IoT devices, and that this difficulty may be correlated with (1) the device's capabilities, and (2) the network communications it normally produces.
A similar notion was raised by~\cite{Bertino2017BotnetsSecurity}, who argues that the specialized functionality of today's IoT devices leads to predictable behaviors.
In turn, the ease of establishing baseline behaviors for IoT devices facilitates anomaly detection as a means of detecting attacks. To this end, interesting questions arise:
\begin{itemize}[leftmargin=*]
\item Can the predictability of traffic behavior of IoT devices be quantified?
\item Can the relation between the predictability level and the static features of IoT devices (e.g., number and type of sensors, memory size, operating system) or dynamic features (e.g., number of unique destination IPs per hour, variance of the ratio between outgoing and incoming traffic) be formalized?
\item Can these features be ranked based on their influence on this predictability level?
\end{itemize}
We presume that the predictability of traffic behavior can be directly translated into performance measures of anomaly detection.
For example, an IoT device with a high level of traffic predictability would make any anomalous action stand out, and thus the TPR should increase and detection times should decrease in this case.
For empirical validation we extracted static and dynamic features from the (benign) training set.
Then we trained regression models to study these features' effect on the average FPR and detection times, obtained on the test set by the four detection methods we evaluated. Figures~\ref{fig:fpr_d_score} and~\ref{fig:detection_time_d_score} depict our preliminary findings via the features found most significant. Figure~\ref{fig:fpr_d_score} shows how an increase in the variability of inbound traffic translates (\emph{p}-value=0.019) into larger average FPR. This makes sense, as lower predictability is prone to manifest through unpredictable (yet benign) traffic behaviors, falsely identified as anomalous.
Figure~\ref{fig:detection_time_d_score} shows how an increase in the maximal volume of inbound traffic promotes (\emph{p}-value=0.001) longer detection times.
As we optimize $ws^*$ to reach 0\% FPR on $DS_{opt}$, lower predictability leads to higher $ws^*$ (more instances for majority voting) and subsequently higher detection times.
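The univariate regressions behind these findings can be sketched with ordinary least squares; the data below are synthetic and the variable names illustrative.

```python
import numpy as np

def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ x,
    e.g., average FPR regressed on inbound-traffic variability."""
    X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
    (slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)
    return slope, intercept

# Synthetic example: higher variability -> higher average FPR
variability = np.array([0.1, 0.2, 0.3, 0.4])
avg_fpr = np.array([0.00, 0.01, 0.02, 0.03])
slope, intercept = ols_fit(variability, avg_fpr)
```

A positive fitted slope corresponds to the reported effect direction; in the paper the significance of such effects is assessed via the regression \emph{p}-values.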
Ultimately, a solid predictability score can be leveraged by large organizations in order to ensure network functionality and limit the impact that compromised devices might have on the network.
That is, security policies may not allow the connection of IoT devices with low predictability scores to their networks, since they pose difficulties in attack detection.
In our future work we plan to further define and investigate the subject of traffic predictability, both theoretically and empirically.
As another extension to the current study, we also plan to evaluate transfer learning techniques by assessing the accuracy of models trained on specific devices when they are applied to identical devices, possibly connected to other organizational networks. This can help (1) save time (e.g., organizations can deploy models previously learned elsewhere, without the need to collect data and train the models themselves), and (2) detect compromised IoT devices which were contaminated before connecting to the organizational network, and from which the organization therefore has no benign data for model training.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}\label{acknowledgments}
\else
\section*{Acknowledgment}
\fi
The authors would like to
thank Yan Lin Aung, Amit Subhashchandra Tambe,
Simon Dzanashvili and Tar Wolfson for their valuable contribution.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\newcommand{\parref}[1]{\hyperref[#1]{\S\ref*{#1}}}
\newcommand{\chapref}[1]{\hyperref[#1]{Chapter~\ref*{#1}}}
\makeatletter
\newcommand*\if@single[3]{%
\setbox0\hbox{${\mathaccent"0362{#1}}^H$}%
\setbox2\hbox{${\mathaccent"0362{\kern0pt#1}}^H$}%
\ifdim\ht0=\ht2 #3\else #2\fi
}
\newcommand*\rel@kern[1]{\kern#1\dimexpr\macc@kerna}
\newcommand*\widebar[1]{\@ifnextchar^{{\wide@bar{#1}{0}}}{\wide@bar{#1}{1}}}
\newcommand*\wide@bar[2]{\if@single{#1}{\wide@bar@{#1}{#2}{1}}{\wide@bar@{#1}{#2}{2}}}
\newcommand*\wide@bar@[3]{%
\begingroup
\def\mathaccent##1##2{%
\if#32 \let\macc@nucleus\first@char \fi
\setbox\z@\hbox{$\macc@style{\macc@nucleus}_{}$}%
\setbox\tw@\hbox{$\macc@style{\macc@nucleus}{}_{}$}%
\dimen@\wd\tw@
\advance\dimen@-\wd\z@
\divide\dimen@ 3
\@tempdima\wd\tw@
\advance\@tempdima-\scriptspace
\divide\@tempdima 10
\advance\dimen@-\@tempdima
\ifdim\dimen@>\z@ \dimen@0pt\fi
\rel@kern{0.6}\kern-\dimen@
\if#31
\overline{\rel@kern{-0.6}\kern\dimen@\macc@nucleus\rel@kern{0.4}\kern\dimen@}%
\advance\dimen@0.4\dimexpr\macc@kerna
\let\final@kern#2%
\ifdim\dimen@<\z@ \let\final@kern1\fi
\if\final@kern1 \kern-\dimen@\fi
\else
\overline{\rel@kern{-0.6}\kern\dimen@#1}%
\fi
}%
\macc@depth\@ne
\let\math@bgroup\@empty \let\math@egroup\macc@set@skewchar
\mathsurround\z@ \frozen@everymath{\mathgroup\macc@group\relax}%
\macc@set@skewchar\relax
\let\mathaccentV\macc@nested@a
\if#31
\macc@nested@a\relax111{#1}%
\else
\def\gobble@till@marker##1\endmarker{}%
\futurelet\first@char\gobble@till@marker#1\endmarker
\ifcat\noexpand\first@char A\else
\def\first@char{}%
\fi
\macc@nested@a\relax111{\first@char}%
\fi
\endgroup
}
\makeatother
\mapcitekey{Saito:HodgeModules}{Saito-HM}
\mapcitekey{Saito:MixedHodgeModules}{Saito-MHM}
\mapcitekey{Durfee:Neighborhoods}{Durfee}
\mapcitekey{Ramanujam:KodairaVanishing}{Ramanujam}
\mapcitekey{Esnault+Viehweg:VanishingTheorems}{EV}
\mapcitekey{Esnault+Viehweg:LogarithmicDeRham}{EV-log}
\mapcitekey{Saito:DModules}{Saito-an}
\mapcitekey{Kodaira:DifferentialGeometricMethod}{Kodaira}
\mapcitekey{Deligne+Illusie}{DI}
\mapcitekey{Saito:Theory}{Saito-th}
\mapcitekey{Saito:Kollar}{Saito-K}
\mapcitekey{Popa:SaitoVanishing}{Popa}
\mapcitekey{Schnell:sanya}{sanya}
\newcommand{\HM}[2]{\operatorname{HM}(#1, #2)}
\newcommand{\HMZ}[3]{\operatorname{HM}_{#1}(#2, #3)}
\newcommand{\HMp}[2]{\operatorname{HM}(#1, #2)}
\newcommand{\HMZp}[3]{\operatorname{HM}_{#1}(#2, #3)}
\newcommand{\MHMp}[1]{\operatorname{MHM}(#1)}
\newcommand{\MHMps}[2]{\operatorname{MHM}(#1, #2)}
\newcommand{\MHMpS}[2]{\operatorname{MHM} \bigl( #1, #2 \bigr)}
\newcommand{\operatorname{Perv}_{\QQ}}{\operatorname{Perv}_{\mathbb{Q}}}
\newcommand{\mathrm{D}_{\mathit{coh}}^{\mathit{b}}}{\mathrm{D}_{\mathit{coh}}^{\mathit{b}}}
\newcommand{\shO_P}{\shf{O}_P}
\newcommand{\shO_U}{\shf{O}_U}
\newcommand{\OU^{\times}}{\shO_U^{\times}}
\newcommand{\OX^{\times}}{\shf{O}_X^{\times}}
\newcommand{\shO_Z^{\times}}{\shO_Z^{\times}}
\newcommand{\shO_Z}{\shf{O}_Z}
\newcommand{\tilde{U}}{\tilde{U}}
\newcommand{\tilde{X}}{\tilde{X}}
\newcommand{\Omega_U}{\Omega_U}
\newcommand{\Omega_D}{\Omega_D}
\newcommand{\tilde{K}}{\tilde{K}}
\newcommand{\tilde{\delta}}{\tilde{\delta}}
\newcommand{\DR_{\log}}{\DR_{\log}}
\newcommand{\mathscr{I}}{\mathscr{I}}
\newcommand{\omega_X}{\omega_X}
\renewcommand{\mathbb{D}}{\mathbf{D}}
\newcommand{\mathit{dx}}{\mathit{dx}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\begin{document}
\title{On Saito's vanishing theorem}
\author{Christian Schnell}
\address{%
Department of Mathematics\\
Stony Brook University\\
Stony Brook, NY 11794-3651}
\email{cschnell@math.sunysb.edu}
\begin{abstract}
We reprove Saito's vanishing theorem for mixed Hodge modules by the method of Esnault
and Viehweg. The main idea is to exploit the strictness of direct images on certain
branched coverings.
\end{abstract}
\date{\today}
\maketitle
\section{Overview}
\subsection{Introduction}
The Kodaira vanishing theorem is one of the most useful results in algebraic
geometry. Besides the original differential-geometric proof by Kodaira \cite{Kodaira}
and the famous algebraic proof by Deligne and Illusie \cite{DI}, there are at
least two other proofs that are based on Hodge theory. One is due to
Ramanujam \cite{Ramanujam}, and uses the weak Lefschetz theorem; the other is due to
Esnault and Viehweg \cite{EV-log}, and uses branched coverings and the degeneration
of the Hodge-de Rham spectral sequence.
Saito's vanishing theorem \cite[\S2.g]{Saito-MHM} is a generalization of Kodaira's theorem to
mixed Hodge modules; it contains as special cases several other results, such as
Koll\'ar's vanishing theorem for higher direct images of dualizing sheaves. More
precisely, Saito uses Artin's vanishing theorem for perverse sheaves on affine
varieties to obtain a vanishing theorem for the graded quotients of the de Rham
complex of any graded-polarizable mixed Hodge module; his proof is therefore a
distant cousin of Ramanujam's. In this paper, we show that Saito's theorem can
also be proved by the method of Esnault and Viehweg: the key point is to exploit the
strictness of direct images on certain branched
coverings. The argument is perhaps less elegant than Saito's, but it has
two advantages:
\begin{enumerate}
\item The vanishing theorem follows from results about polarizable Hodge
modules, without appealing to vanishing theorems for perverse sheaves.
\item The result can be stated and proved entirely in terms of pure Hodge
modules, without the need for using mixed Hodge modules.
\end{enumerate}
Since mixed Hodge modules are more complicated than pure ones, the second point may
be useful to someone who is trying to understand Saito's vanishing theorem
with a minimum of theoretical background. Those who are interested in the original
proof can also consult Popa's expository paper \cite{Popa}.
\subsection{Statement of the result}
We will first state the vanishing theorem for mixed Hodge modules, because this is the
version that Saito gives; but in fact, the general case follows very easily from
the special case of pure Hodge modules.
Let $Z$ be a reduced projective algebraic variety. We denote by $\MHMp{Z}$ the
abelian category of graded-polarizable mixed Hodge modules on $Z$. It is defined by
embedding $Z$ into a complex manifold $X$, for example into complex projective space,
and then looking at all graded-polarizable mixed Hodge modules on $X$ whose support
is contained in $Z$. One can show that this definition is independent of the choice of
embedding; for the convenience of the reader, an outline of the proof is included in
\parref{par:MHM} below. Given $M \in \MHMp{Z}$, we write $(\mathcal{M}, F_{\bullet} \mathcal{M})$
for the underlying filtered $\mathscr{D}$-module: $\mathcal{M}$ is a regular holonomic left
$\mathscr{D}$-module on $X$ whose support is contained in $Z$, and $F_{\bullet} \mathcal{M}$ is a
good filtration by coherent $\shf{O}_X$-modules. We set $n = \dim X$, and denote by
\[
\DR(\mathcal{M}) = \Bigl\lbrack
\mathcal{M} \to \Omega_X^1 \otimes \mathcal{M} \to \dotsb \to \Omega_X^n \otimes \mathcal{M}
\Bigr\rbrack \decal{n}
\]
the de Rham complex of $\mathcal{M}$; by a theorem of Kashiwara, it is a
perverse sheaf on $X$ with support in $Z$. The complex $\DR(\mathcal{M})$ is naturally
filtered by the family of subcomplexes
\[
F_p \DR(\mathcal{M}) = \Bigl\lbrack
F_p \mathcal{M} \to \Omega_X^1 \otimes F_{p+1} \mathcal{M} \to \dotsb
\to \Omega_X^n \otimes F_{p+n} \mathcal{M}
\Bigr\rbrack \decal{n},
\]
and one can use the properties of mixed Hodge modules to show that each
\[
\gr_p^F \DR(\mathcal{M}) = \Bigl\lbrack
\gr_p^F \mathcal{M} \to \Omega_X^1 \otimes \gr_{p+1}^F \mathcal{M} \to \dotsb
\to \Omega_X^n \otimes \gr_{p+n}^F \mathcal{M}
\Bigr\rbrack \decal{n}
\]
is a well-defined complex of coherent $\shO_Z$-modules, whose isomorphism class in the
derived category $\mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shO_Z)$ does not depend on the choice of ambient complex
manifold $X$. For the convenience of the reader, the argument is recalled in
\lemmaref{lem:subquotient}.
As one of the first applications of his theory of mixed Hodge modules, Saito
proved the following vanishing theorem for those complexes \cite[\S2.g]{Saito-MHM}.
\begin{theorem} \label{thm:Saito}
Let $M \in \MHMp{Z}$ be a graded-polarizable mixed Hodge module on a reduced
projective variety $Z$. If $L$ is an ample line bundle on $Z$, one has
\begin{align*}
H^i \bigl( Z, \gr_p^F \DR(\mathcal{M}) \otimes L \bigr) = 0
\quad \text{for $i > 0$ and $p \in \mathbb{Z}$,} \\
H^i \bigl( Z, \gr_p^F \DR(\mathcal{M}) \otimes L^{-1} \bigr) = 0
\quad \text{for $i < 0$ and $p \in \mathbb{Z}$.}
\end{align*}
\end{theorem}
Note that Kodaira's vanishing theorem is a special case: the pair
$(\shf{O}_X, F_{\bullet} \shf{O}_X)$, with $\gr_p^F \shf{O}_X = 0$ for $p \neq 0$, is part of a
polarizable Hodge module, and one has
\[
\omega_X = \gr_{-n}^F \DR(\shf{O}_X) \quad \text{and} \quad
\shf{O}_X \decal{n} = \gr_0^F \DR(\shf{O}_X).
\]
Although \theoremref{thm:Saito} is stated in terms of mixed Hodge modules, it is
really a result about pure ones; we will see below that the same is true for
the proof.
\subsection{Reduction to the pure case}
We now explain how to obtain \theoremref{thm:Saito} from a statement about pure Hodge
modules. For a reduced and irreducible projective variety $Z$, we denote by
$\HMZp{Z}{Z}{w}$ the abelian category of polarizable Hodge modules of weight $w$ with
strict support $Z$; the precise definition is
\[
\HMZp{Z}{Z}{w} = \HMZp{Z}{X}{w},
\]
where $X$ is a complex manifold containing $Z$, and where $M \in \HMp{X}{w}$ belongs
to $\HMZp{Z}{X}{w}$ iff the support of every nonzero subobject or quotient object of
$M$ is equal to $Z$. As before, one can show that this category does not depend on the
choice of embedding; in fact, an important result by Saito
\cite[Theorem~3.21]{Saito-MHM} says that $\HMZp{Z}{Z}{w}$ is equivalent to the
category of generically defined polarizable variations of Hodge
structure of weight $w - \dim Z$.
\begin{theorem} \label{thm:Saito-pure}
Let $Z$ be a reduced and irreducible projective variety, and let $M \in \HMZp{Z}{Z}{w}$
be a polarizable Hodge module with strict support $Z$. Then one has
\begin{align*}
H^i \bigl( Z, \gr_p^F \DR(\mathcal{M}) \otimes L \bigr) = 0
\quad \text{for $i > 0$ and $p \in \mathbb{Z}$,} \\
H^i \bigl( Z, \gr_p^F \DR(\mathcal{M}) \otimes L^{-1} \bigr) = 0
\quad \text{for $i < 0$ and $p \in \mathbb{Z}$,}
\end{align*}
where $L$ is any ample line bundle on $Z$.
\end{theorem}
It is easy to deduce \theoremref{thm:Saito} from this special case. Suppose first
that $Z$ is a reduced projective variety, and that $M \in \HMp{Z}{w}$ is a
polarizable Hodge module of weight $w$. Then $M$ admits a decomposition by
strict support, and because the vanishing theorem is true for each summand by
\theoremref{thm:Saito-pure}, it is true for $M$ as well. To deal with the general
case, recall that every $M \in \MHMp{Z}$ has a finite weight filtration $W_{\bullet}
M$ with the property that $\gr_w^W M \in \HMp{Z}{w}$; because the functor $\gr_p^F
\DR$ is exact, we obtain the vanishing theorem for arbitrary graded-polarizable mixed
Hodge modules.
\subsection{Idea of the proof}
To prove \theoremref{thm:Saito-pure}, we shall use a method invented by Esnault and
Viehweg. The general idea, explained for example in \cite[\S1]{EV}, is to deduce
vanishing theorems from the $E_1$-degeneration of certain spectral sequences.
As a motivation for what follows, let us briefly recall how Esnault and Viehweg prove
the Kodaira vanishing theorem. Let $L$ be an ample line bundle on a smooth projective
variety $X$. For sufficiently large $N$, the line bundle $L^N$ becomes very ample,
and we can find a smooth divisor $D \subseteq X$ with $L^N \simeq \shf{O}_X(D)$. Such a
divisor determines a branched covering $\pi \colon Y \to X$ (see
\parref{par:coverings}), and one can show that
\[
\pi_{\ast} \shO_Y \simeq \shf{O}_X \oplus \bigoplus_{i=1}^{N-1} L^{-i} \quad \text{and} \quad
\pi_{\ast} \Omega_Y^1 \simeq \Omega_X^1 \oplus
\bigoplus_{i=1}^{N-1} \Omega_X^1(\log D) \otimes L^{-i}.
\]
Now $Y$ is again a smooth projective variety, and so its Hodge-de Rham spectral
sequence degenerates at $E_1$; in particular, the mapping $d \colon H^i(Y, \shO_Y) \to
H^i(Y, \Omega_Y^1)$ is equal to zero. From this, one can deduce that the restriction mapping
\[
H^i(X, L^{-1}) \to H^i(D, \shf{O}_D \otimes L^{-1})
\]
is also equal to zero: the key point is that $d \colon \shO_Y \to \Omega_Y^1$ induces a
$\mathbb{C}$-linear mapping $L^{-1} \to \Omega_X^1(\log D) \otimes L^{-1}$, whose composition
with the residue mapping is, up to a constant factor, equal to the $\shf{O}_X$-linear mapping
$L^{-1} \to \shf{O}_D \otimes L^{-1}$. Consequently,
\[
H^i(X, L^{-N-1}) \to H^i(X, L^{-1})
\]
must be surjective; because of Serre duality,
\[
H^i(X, \omega_X \otimes L) \to H^i(X, \omega_X \otimes L^{N+1})
\]
must be injective. But now we can kill the right-hand side by taking $N \gg 0$, and
so we get the vanishing of $H^i(X, \omega_X \otimes L)$ for $i > 0$.
The proof of \theoremref{thm:Saito-pure} follows the same path. Since $Z$ may be
singular, we first extend the line bundle $L$ to a small open neighborhood $X$ in
some projective embedding of $Z$. We then take a sufficiently generic branched
covering $\pi \colon Y \to X$, and use the strictness of direct images for
polarizable Hodge modules to prove that
\[
H^i \Bigl( Z, L \otimes \gr_p^F \DR(\mathcal{M}) \Bigr) \to
H^i \Bigl( Z, L^{N+1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr)
\]
must be injective for $i > 0$. Because the complex $\gr_p^F \DR(\mathcal{M})$ is
concentrated in non-positive degrees, we can again kill the right-hand side by taking
$N \gg 0$. This proves half of \theoremref{thm:Saito-pure}; the other half follows by
Serre duality, because the de Rham complex is compatible with the duality functor (see
\lemmaref{lem:polarization}).
\subsection{Note to the reader}
I wrote this paper for those who already know the definitions and basic results from
the theory of polarizable Hodge modules \cite{Saito-HM}. If you are not familiar with
Saito's theory, but nevertheless interested in the proof of the vanishing theorem, I
would recommend taking a look at Saito's nicely-written survey article
\cite{Saito-th} or at the more recent \cite{sanya}. Two of the results that we need
-- about Hodge modules on singular varieties and about non-characteristic inverse
images -- are distributed among several of Saito's papers; in the interest of
readability, I therefore decided to include an outline of their proof.
Please note that I chose to use left $\mathscr{D}$-modules throughout: this
is more convenient when working with inverse images and simplifies certain
arguments with differential forms. However, because Saito uses right $\mathscr{D}$-modules,
a little bit of translation is needed when looking up results in
\cite{Saito-HM,Saito-MHM}. The rules are as follows. Suppose that $(\mathcal{M},
F_{\bullet} \mathcal{M})$ is a filtered left $\mathscr{D}$-module on an $n$-dimensional complex
manifold $X$. Then the associated filtered right $\mathscr{D}$-module is
\[
\Bigl( \omega_X \otimes_{\shf{O}_X} \mathcal{M}, \, \omega_X \otimes_{\shf{O}_X} F_{\bullet+n} \mathcal{M} \Bigr),
\]
with $\mathscr{D}$-module structure given by $(\omega \otimes m) \cdot \xi =
(\omega \cdot \xi) \otimes m - \omega \otimes (\xi \cdot m)$, where $\omega$, $m$,
and $\xi$ are sections of $\omega_X$, $\mathcal{M}$, and the tangent sheaf $\mathscr{T}_X$,
respectively.
The conventions for indexing the $V\!$-filtration are also different for left and right
$\mathscr{D}$-modules. In the case of $\mathcal{M}$, the rational $V\!$-filtration along $t = 0$ is
a decreasing filtration $V^{\bullet} \mathcal{M}$ with the property that $t \partial_t -
\alpha$ acts nilpotently on $\gr_V^{\alpha} \mathcal{M}$. The corresponding filtration on
$\omega_X \otimes \mathcal{M}$ \cite[D\'efinition~3.1.1]{Saito-HM} is the increasing filtration
\[
V_{\bullet} \bigl( \omega_X \otimes_{\shf{O}_X} \mathcal{M} \bigr)
= \omega_X \otimes_{\shf{O}_X} V^{-\bullet-1} \mathcal{M};
\]
the change is needed to keep the (right) action of $t \partial_t - \alpha$ on
$\gr_{\alpha}^V(\omega_X \otimes \mathcal{M})$ nilpotent.
\subsection{Acknowledgements}
I thank Mihnea Popa for many useful conversations about mixed Hodge modules and
vanishing theorems. During the preparation of this paper, I have been supported in
part by NSF-grant DMS-1331641.
\section{Some background}
\subsection{Mixed Hodge modules on singular varieties}
\label{par:MHM}
Since \theoremref{thm:Saito} is stated for mixed Hodge modules on projective
varieties, it may be helpful to review the definition of the category $\MHMp{Z}$ in
the case where $Z$ is a possibly singular projective algebraic variety. The idea is
to embed $Z$ into a complex manifold $X$ (such as projective space), and then to
define
\[
\MHMp{Z} \subseteq \MHMp{X}
\]
as the full subcategory of all graded-polarizable mixed Hodge modules on $X$ whose
support is contained in $Z$. To make this definition meaningful, one has to show that
the resulting category does not depend on the embedding; we shall explain below how
this is done.
\begin{note}
The same definition works for any analytic space that can be embedded into a
complex manifold. On an arbitrary analytic space $Z$, such embeddings may only exist
locally, and so one has to cover $Z$ by embeddable open subsets and work with
collections of mixed Hodge modules on the open sets that are compatible on
intersections. This idea is developed in \cite{Saito-an}.
\end{note}
From now on, let $Z$ be a reduced projective algebraic variety. Given an embedding $i
\colon Z \hookrightarrow X$ into a complex manifold, we consider the full subcategory
\[
\MHMps{X}{i} \subseteq \MHMp{X}
\]
of all graded-polarizable mixed Hodge modules on $X$ whose support is contained in
the image of $Z$. Obviously, we can always take $X$ to be projective space of some
dimension; but for the proof of \theoremref{thm:Saito}, it will be useful to allow
other complex manifolds, too. It is easy to see that $\MHMps{X}{i}$ is an abelian
category. The main result is that this category does not depend on the choice of
embedding.
\begin{proposition} \label{prop:independence}
Given two embeddings $i \colon Z \hookrightarrow X$ and $j \colon Z \hookrightarrow Y$,
one has a canonical equivalence of categories between $\MHMps{X}{i}$ and
$\MHMps{Y}{j}$.
\end{proposition}
The tool for proving this is the following version of Kashiwara's equivalence
for mixed Hodge modules.
\begin{proposition} \label{prop:Kashiwara}
Let $f \colon X \hookrightarrow Y$ be a closed embedding between two complex manifolds. For any
closed analytic subspace $i \colon Z \hookrightarrow X$, the direct image functor
\[
f_{\ast} \colon \MHMp{X} \to \MHMp{Y}
\]
induces an equivalence of categories between $\MHMps{X}{i}$ and $\MHMps{Y}{f \circ i}$.
\end{proposition}
\begin{proof}
This is proved in \cite[Lemme~5.1.9]{Saito-HM} for pure Hodge modules, and asserted in
\cite[2.17.5]{Saito-MHM} for mixed ones. The essential point is to show that the
underlying filtered $\mathscr{D}$-module $(\mathcal{M}, F_{\bullet} \mathcal{M})$ of a mixed Hodge
module $M \in \MHMps{Y}{f \circ i}$ comes from $X$. For $\mathcal{M}$, this follows
from a more general result for coherent $\mathscr{D}$-modules in Kashiwara's thesis; in
order to deal with the filtration $F_{\bullet} \mathcal{M}$, one has to use one of the
axioms characterizing mixed Hodge modules \cite[Proposition~3.2.2]{Saito-HM}.
\end{proof}
Now let us prove \propositionref{prop:independence}. Since we cannot directly compare
$X$ and $Y$, we use the product embedding $(i,j) \colon Z \hookrightarrow X \times Y$, as in
the following diagram:
\begin{equation} \label{eq:embeddings}
\begin{tikzcd}
Z \arrow{dr}{(i,j)} \arrow[bend left=25]{drr}{j} \arrow[bend right=30]{ddr}{i} \\
& X \times Y \dar{p} \rar{q} & Y \\
& X
\end{tikzcd}
\end{equation}
Because the situation is symmetric, it suffices to show that the direct image functor
\[
p_{\ast} \colon \MHMpS{X \times Y}{(i,j)} \to \MHMps{X}{i}
\]
is an equivalence of categories. Note that $p_{\ast}$ is obviously faithful: in fact, this
is true for the underlying perverse sheaves because $p$ is an isomorphism on the
image of $Z$, and the functor from mixed Hodge modules to perverse sheaves is
faithful. So the issue is to show that $p_{\ast}$ is essentially surjective.
Let $M \in \MHMp{X}$ be a graded-polarizable mixed Hodge module whose support is
contained in $i(Z)$. To construct from $M$ an object on $X \times Y$, we use the
existence of good local sections for $p$. More precisely, for every point of $Z$,
there is an open neighborhood $U \subseteq X$ and a holomorphic mapping $f \colon U
\to Y$ such that $f \circ i = j$; this follows from the basic properties of
holomorphic functions. Now
\[
(\id, f) \colon U \hookrightarrow U \times Y
\]
is a closed embedding with the property that $(\id, f) \circ i = (i,j)$, and so $(\id,
f)_{\ast} M$ is a graded-polarizable mixed Hodge module on $U \times Y$
whose support is contained in the image of $(i,j)$. If we choose a different
holomorphic mapping $f' \colon U \to Y$, then $(\id, f)_{\ast} M$ and $(\id,
f')_{\ast} M$ are canonically isomorphic by virtue of \propositionref{prop:Kashiwara}.
This fact allows us to glue the local objects together into a well-defined object of
$\MHMpS{X \times Y}{(i,j)}$; it is clear from the construction that its image under
$p_{\ast}$ is isomorphic to the original mixed Hodge module $M$.
\subsection{Subquotients of the de Rham complex}
\label{par:DR}
In this section, we collect a few general results about the graded quotients of the
de Rham complex. Let $X$ be a complex manifold, and let $(\mathcal{M}, F_{\bullet} \mathcal{M})$
be a filtered $\mathscr{D}$-module on $X$. We begin with a more careful local description
of the differentials in the complex
\[
\DR(\mathcal{M}) = \Bigl\lbrack
\mathcal{M} \to \Omega_X^1 \otimes \mathcal{M} \to \dotsb \to \Omega_X^n \otimes \mathcal{M}
\Bigr\rbrack \decal{n}.
\]
Let $x_1, \dotsc, x_n$ be local holomorphic coordinates on $X$. Then the
differentials
\[
\nabla \colon \Omega_X^k \otimes \mathcal{M} \to \Omega_X^{k+1} \otimes \mathcal{M}
\]
in the de Rham complex are given by the formula
\begin{equation} \label{eq:DR-local}
\nabla(\alpha \otimes m) = (-1)^n d\alpha \otimes m + (-1)^n
\sum_{i=1}^n (\mathit{dx}_i \wedge \alpha) \otimes \frac{\partial}{\partial x_i} m.
\end{equation}
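One can check directly that \eqref{eq:DR-local} squares to zero (the two factors of
$(-1)^n$ cancel): since $d(\mathit{dx}_i \wedge \alpha) = -\mathit{dx}_i \wedge d\alpha$, we get
\[
\nabla \bigl( \nabla(\alpha \otimes m) \bigr)
= d(d\alpha) \otimes m
+ \sum_{i=1}^n \bigl( d(\mathit{dx}_i \wedge \alpha) + \mathit{dx}_i \wedge d\alpha \bigr)
\otimes \frac{\partial}{\partial x_i} m
+ \sum_{i,j=1}^n (\mathit{dx}_j \wedge \mathit{dx}_i \wedge \alpha)
\otimes \frac{\partial}{\partial x_j} \frac{\partial}{\partial x_i} m,
\]
and all three terms vanish: the first because $d \circ d = 0$, the second by the
Leibniz rule just quoted, and the third because the wedge product is antisymmetric in
$i$ and $j$ while the partial derivatives commute.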
The extra factor of $(-1)^n$ is due to the shift in the definition of $\DR(\mathcal{M})$;
it is part of a consistent set of sign conventions. Because $F_{\bullet} \mathcal{M}$ is a
good filtration, it is obvious from this description that each
\begin{equation} \label{eq:DR-F}
F_p \DR(\mathcal{M}) = \Bigl\lbrack
F_p \mathcal{M} \to \Omega_X^1 \otimes F_{p+1} \mathcal{M} \to \dotsb
\to \Omega_X^n \otimes F_{p+n} \mathcal{M}
\Bigr\rbrack \decal{n}
\end{equation}
is a subcomplex. When we go to one of the graded quotients $\gr_p^F \DR(\mathcal{M})$, we
obtain the following formula for the differentials:
\begin{align*}
\Omega_X^k \otimes \gr_{p+k}^F \mathcal{M} \to \Omega_X^{k+1} \otimes \gr_{p+k+1}^F \mathcal{M}, \quad
\alpha \otimes m \mapsto (-1)^n
\sum_{i=1}^n (\mathit{dx}_i \wedge \alpha) \otimes \frac{\partial}{\partial x_i} m.
\end{align*}
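Note that, unlike $\nabla$ itself, these induced differentials are
$\shf{O}_X$-linear: for a local holomorphic function $f$, we have
\[
\frac{\partial}{\partial x_i}(fm)
= f \, \frac{\partial}{\partial x_i} m + \frac{\partial f}{\partial x_i} \, m,
\]
and the second term lies in $F_{p+k} \mathcal{M}$, hence vanishes in $\gr_{p+k+1}^F \mathcal{M}$.
This is the reason why each $\gr_p^F \DR(\mathcal{M})$ is a complex of coherent
$\shf{O}_X$-modules.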
Now let us consider the case where $(\mathcal{M}, F_{\bullet} \mathcal{M})$ is part of a mixed
Hodge module on $X$. In the case where the support of $\mathcal{M}$ is contained in an
analytic subset $Z$, the properties of mixed Hodge modules imply that each
$\gr_p^F \DR(\mathcal{M})$ is actually a complex of coherent $\shO_Z$-modules.
\begin{lemma} \label{lem:subquotient}
Let $M \in \MHMp{X}$ be a mixed Hodge module on a complex manifold $X$. If the
support of $M$ is contained in an analytic subset $Z \subseteq X$, then each
\[
\gr_p^F \DR(\mathcal{M}) = \Bigl\lbrack
\gr_p^F \mathcal{M} \to \Omega_X^1 \otimes \gr_{p+1}^F \mathcal{M} \to \dotsb
\to \Omega_X^n \otimes \gr_{p+n}^F \mathcal{M}
\Bigr\rbrack \decal{n}
\]
is a well-defined complex of coherent $\shO_Z$-modules; its isomorphism class in
$\mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shO_Z)$ is independent of the embedding of $Z$ into a complex manifold.
\end{lemma}
\begin{proof}
We first prove that each $\gr_p^F \mathcal{M}$ is a coherent sheaf on $Z$. Let $f$ be an
arbitrary local section of the ideal sheaf $\mathscr{I}_Z$; by the definition of mixed Hodge
modules, $(\mathcal{M}, F_{\bullet} \mathcal{M})$ is quasi-unipotent and regular along $f = 0$. By
\cite[Lemme~4.2.6]{Saito-HM}, this implies that $f \cdot F_p \mathcal{M} \subseteq F_{p-1}
\mathcal{M}$, which means that $f$ annihilates $\gr_p^F \mathcal{M}$. As each $\gr_p^F \mathcal{M}$ is
coherent over $\shf{O}_X$, the filtration being good, it is therefore a coherent
$\shO_Z$-module.
The independence of the choice of embedding follows from
\propositionref{prop:independence} and the compatibility of the de Rham complex with
direct images. Suppose we have another embedding $j \colon Z \hookrightarrow Y$ into a complex
manifold. As in \eqref{eq:embeddings}, we consider the product embedding $(i,j)
\colon Z \hookrightarrow X \times Y$. By \propositionref{prop:independence}, we have $M \simeq
p_{\ast} M'$ for a graded-polarizable mixed Hodge module $M' \in \MHMp{X \times Y}$ whose
support is contained in $(i,j)(Z)$; as the situation is symmetric, it suffices to
prove that
\[
p_{\ast} \Bigl( \gr_p^F \DR(\mathcal{M}') \Bigr) \simeq
\mathbf{R} p_{\ast} \Bigl( \gr_p^F \DR(\mathcal{M}') \Bigr) \simeq \gr_p^F \DR(\mathcal{M}).
\]
But because $p$ is an isomorphism over the image of $Z$, this follows from the
definition of the direct image functor for filtered $\mathscr{D}$-modules \cite[\S2.3.7]{Saito-HM}.
\end{proof}
Another very useful result is the compatibility of the de Rham complex with the
duality functor. In combination with Serre duality, it can be used to show that the two
assertions for $L$ and $L^{-1}$ in \theoremref{thm:Saito} are equivalent.
\begin{lemma} \label{lem:polarization}
Let $M \in \HMp{X}{w}$ be a polarizable Hodge module on an $n$-dimensional complex
manifold $X$. Then any polarization on $M$ induces an isomorphism
\[
\mathbf{R} \mathcal{H}\hspace{-1pt}\mathit{om}_{\shf{O}_X} \Bigl( \gr_p^F \DR(\mathcal{M}), \omega_X \decal{n} \Bigr)
\simeq \gr_{-p-w}^F \DR(\mathcal{M}).
\]
\end{lemma}
\begin{proof}
Recall that a polarization of $M$ induces an isomorphism $M(w) \simeq \mathbb{D} M$ with the
dual Hodge module; in particular, the filtered $\mathscr{D}$-module underlying $\mathbb{D} M$ is
isomorphic to $(\mathcal{M}, F_{\bullet-w} \mathcal{M})$. The assertion therefore follows from
the compatibility of the filtered de Rham complex with the duality functor
\cite[\S2.4.3]{Saito-HM}. Since the result is not explicitly stated there, we shall
quickly sketch the proof.
The main tool is the equivalence, on the level of derived categories, between
filtered $\mathscr{D}$-modules and filtered differential complexes \cite[\S2.2]{Saito-HM};
under this equivalence, the pair $(\mathcal{M}, F_{\bullet} \mathcal{M})$ goes to the de Rham
complex $\DR(\mathcal{M})$, endowed with the filtration in \eqref{eq:DR-F}. Now choose an
injective resolution
\[
0 \to \omega_X \to \mathcal{K}^{-n} \to \dotsb \to \mathcal{K}^{-1} \to \mathcal{K}^0 \to 0
\]
by right $\mathscr{D}_X$-modules that are injective as $\shf{O}_X$-modules; such a resolution
exists because the injective dimension of $\omega_X$ is equal to $n$. As explained in
\cite[\S2.4.11]{Saito-HM}, the de Rham complex of $\mathbb{D} M$ is isomorphic, as
a filtered differential complex, to the simple complex associated with the double
complex
\[
\mathcal{H}\hspace{-1pt}\mathit{om}_{\shf{O}_X} \bigl( \DR(\mathcal{M}), \mathcal{K}^{\bullet} \bigr).
\]
Here the filtration on the double complex is given by the rule
\begin{align*}
F_p \mathcal{H}\hspace{-1pt}\mathit{om}_{\shf{O}_X}& \bigl( \Omega_X^{n-i} \otimes \mathcal{M}, \mathcal{K}^j \bigr) \\
= &\menge{\phi \colon \Omega_X^{n-i} \otimes \mathcal{M} \to \mathcal{K}^j}
{\phi \bigl( \Omega_X^{n-i} \otimes F_{n-i-p-1} \mathcal{M} \bigr) = 0},
\end{align*}
due to the fact that $\gr_p^F \mathcal{K}^j = 0$ for $p \neq 0$. In particular, we have
\[
\gr_p^F \mathcal{H}\hspace{-1pt}\mathit{om}_{\shf{O}_X} \bigl( \Omega_X^{n-i} \otimes \mathcal{M}, \mathcal{K}^j \bigr) \simeq
\mathcal{H}\hspace{-1pt}\mathit{om}_{\shf{O}_X} \bigl( \Omega_X^{n-i} \otimes \gr_{n-i-p}^F \mathcal{M}, \mathcal{K}^j \bigr).
\]
This gives us a canonical isomorphism in the derived category between the associated
graded of the de Rham complex of $\mathbb{D} M$ and
\[
\mathbf{R} \mathcal{H}\hspace{-1pt}\mathit{om}_{\shf{O}_X} \Bigl( \gr_{-\bullet}^F \DR(\mathcal{M}), \omega_X \decal{n} \Bigr),
\]
using that $\mathcal{K}^{\bullet}$ is quasi-isomorphic to $\omega_X \decal{n}$. Together with
the remark about the polarization from above, this implies the asserted isomorphism.
\end{proof}
This result also bounds the range in which the graded quotients of the de Rham
complex are nontrivial. Since $F_p \mathcal{M} = 0$ for $p \ll 0$, it makes sense to define
\begin{equation} \label{eq:pM}
p(M) = \min \menge{p \in \mathbb{Z}}{\gr_p^F \DR(\mathcal{M}) \neq 0}.
\end{equation}
By definition, we have $\gr_p^F \DR(\mathcal{M}) = 0$ for $p < p(M)$;
\lemmaref{lem:polarization} shows that the complex $\gr_p^F \DR(\mathcal{M})$ is also exact
for $p > -p(M) - w$. Another consequence is that the Grothendieck dual of the
coherent sheaf
\[
\gr_{p(M)}^F \DR(\mathcal{M}) = \omega_X \otimes F_{p(M)+n} \mathcal{M}
\]
is isomorphic to the complex $\gr_{-p(M)-w}^F \DR(\mathcal{M})$.
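\begin{example}
For orientation, consider the polarizable Hodge module $M$ of weight $w = n$
determined by the constant variation of Hodge structure of weight $0$ on $X$; here
$\mathcal{M} = \shf{O}_X$, with $F_p \shf{O}_X = \shf{O}_X$ for $p \geq 0$ and $F_p \shf{O}_X = 0$ for
$p < 0$. Since $\gr_{p+k}^F \shf{O}_X$ is nonzero only for $k = -p$, we find that
\[
\gr_p^F \DR(\shf{O}_X) \simeq \Omega_X^{-p} \decal{n+p}
\quad \text{for $-n \leq p \leq 0$,}
\]
and that the complex is zero otherwise. In particular, $p(M) = -n$ and
$\gr_{p(M)}^F \DR(\shf{O}_X) = \omega_X$, in agreement with the formula above; the
duality in \lemmaref{lem:polarization} reduces to the familiar isomorphism
$\mathcal{H}\hspace{-1pt}\mathit{om}_{\shf{O}_X} \bigl( \Omega_X^{-p}, \omega_X \bigr) \simeq \Omega_X^{n+p}$.
\end{example}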
\subsection{Non-characteristic inverse images}
\label{par:inverse-image}
In this section, we review the construction of inverse images for polarizable Hodge
modules under sufficiently generic morphisms. Let $f \colon Y \to X$ be a holomorphic
mapping between complex manifolds, and let $r = \dim Y - \dim X$ denote its relative
dimension. In this situation, we have the following morphisms between the cotangent
bundles of $X$ and $Y$:
\[
\begin{tikzcd}
Y \times_X T^{\ast} X \dar{p_2} \rar{\mathit{df}} & T^{\ast} Y \\
T^{\ast} X
\end{tikzcd}
\]
Let $\mathcal{M}$ be a regular holonomic left $\mathscr{D}$-module on $X$, and $F_{\bullet} \mathcal{M}$ a
good filtration by coherent $\shf{O}_X$-modules. Recall that the characteristic variety
$\Ch(\mathcal{M}) \subseteq T^{\ast} X$ is the support of the coherent sheaf determined by
the coherent $\gr_{\bullet}^F \! \mathscr{D}_X$-module $\gr_{\bullet}^F \mathcal{M}$.
The following definition is a slightly modified version of \cite[\S3.5.1]{Saito-HM}.
\begin{definition}
We say that the morphism $f$ is \define{non-characteristic} for $(\mathcal{M}, F_{\bullet}
\mathcal{M})$ if the following two conditions are satisfied:
\begin{aenumerate}
\item The restriction of $\mathit{df}$ to $p_2^{-1} \Ch(\mathcal{M})$ is a finite mapping.
\item We have $L^i f^{\ast}(\gr_p^F \mathcal{M}) = 0$ for every $i < 0$ and every $p \in \mathbb{Z}$.
\end{aenumerate}
\end{definition}
The first condition is a transversality property. Since $\mathcal{M}$ is regular holonomic,
one can find a Whitney stratification adapted to it; note that every irreducible
component of $\Ch(\mathcal{M})$ is the conormal variety of the closure of a stratum.
Given a point $y \in Y$, let $S \subseteq X$ be the stratum containing $f(y)$; then
we are asking that
\[
T_{f(y)} S + f_{\ast} \bigl( T_y Y \bigr) = T_{f(y)} X.
\]
In the case where $Y$ is a submanifold of $X$, for example, this is saying that $Y$ is
transverse to every stratum. The second condition, on the other hand, is a kind of
flatness property: we are asking that the higher derived functors are trivial when we
pull back the $\shf{O}_X$-modules $\gr_p^F \mathcal{M}$.
\begin{example}
Smooth morphisms are always non-characteristic.
\end{example}
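\begin{example}
If $M$ comes from a polarizable variation of Hodge structure on all of $X$, so that
$\mathcal{M}$ is a vector bundle with flat connection, then every holomorphic mapping $f
\colon Y \to X$ is non-characteristic for $(\mathcal{M}, F_{\bullet} \mathcal{M})$: the
characteristic variety $\Ch(\mathcal{M})$ is the zero section of $T^{\ast} X$, so the
restriction of $\mathit{df}$ to $p_2^{-1} \Ch(\mathcal{M}) \simeq Y$ is the embedding of the zero
section into $T^{\ast} Y$, hence finite; and every $\gr_p^F \mathcal{M}$ is locally free, so
the condition $L^i f^{\ast}(\gr_p^F \mathcal{M}) = 0$ for $i < 0$ holds automatically.
\end{example}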
The point of the two conditions is that the naive pullback $f^{\ast} \mathcal{M}$ is again a
regular holonomic $\mathscr{D}$-module on $Y$, and that the filtration $F_{\bullet} f^{\ast}
\mathcal{M} = f^{\ast} F_{\bullet} \mathcal{M}$ is again a good filtration. Except for regularity,
this is proved in \cite[Lemme~3.5.5]{Saito-HM}; the point is that
$f^{\ast} \gr_{\bullet}^F \mathcal{M}$ is coherent over $\gr_{\bullet}^F \! \mathscr{D}_Y$, because
pushing forward by finite morphisms preserves coherence.
From now on, we consider the case of a polarizable Hodge module $M \in \HM{X}{w}$. We
say that $f \colon Y \to X$ is non-characteristic for $M$ if it is non-characteristic
for the underlying filtered $\mathscr{D}$-module $(\mathcal{M}, F_{\bullet} \mathcal{M})$. The
following result shows that the naive inverse image of $M$ is then again a polarizable
Hodge module.
\begin{theorem} \label{thm:inverse-image}
Let $M \in \HMZp{Z}{X}{w}$ be a polarizable Hodge module on $X$, with strict support
$Z$. If $f \colon Y \to X$ is non-characteristic for $M$, then we have
\[
f^{\ast} \mathcal{M} \simeq \mathcal{M}_Y \quad \text{and} \quad
f^{\ast} F_{\bullet} \mathcal{M} \simeq F_{\bullet} \mathcal{M}_Y
\]
for a polarizable Hodge module $M_Y \in \HMZp{f^{-1}(Z)}{Y}{w+r}$.
\end{theorem}
This can be proved in several ways, but perhaps the cleanest one is to use the
relationship between polarizable Hodge modules and polarizable variations of Hodge
structure. According to \cite[Theorem~3.21]{Saito-MHM}, the Hodge module $M$ comes
from a polarizable variation of Hodge structure of weight $w - \dim Z$ on a
Zariski-open subset of the smooth locus of $Z$. We may clearly assume that $f(Y)$
intersects $Z$; the transversality condition implies that the preimage of the smooth
locus of $Z$ is dense in $f^{-1}(Z)$, and that $\dim f^{-1}(Z) = \dim Z + r$. We can
therefore pull the variation of Hodge structure back to a Zariski-open subset of
$f^{-1}(Z)$, and use Saito's result again to extend it to a polarizable Hodge module
$M_Y \in \HMZp{f^{-1}(Z)}{Y}{w+r}$; this procedure explains why the weight changes by
the relative dimension $r$. We denote the underlying filtered $\mathscr{D}$-module by
$(\mathcal{M}_Y, F_{\bullet} \mathcal{M}_Y)$.
It remains to show that $\mathcal{M}_Y \simeq f^{\ast} \mathcal{M}$ and that $F_{\bullet} \mathcal{M}_Y
\simeq f^{\ast} F_{\bullet} \mathcal{M}$. By construction, this is true on the Zariski-open
subset to which we pulled back the variation of Hodge structure; what we have to
prove is that both sides are extended to $Y$ in the same way. Here the strategy is to
use some of the conditions in the definition of Hodge modules, in particular the
compatibility between the Hodge filtration and the $V\!$-filtration.
We observe first that $f$ factors through its graph as
\[
\begin{tikzcd}
Y \rar{i} \arrow[bend left=40]{rr}{f} & Y \times X \rar{p_2} & X;
\end{tikzcd}
\]
because both $p_2$ and $i$ are again non-characteristic for $M$, it suffices to
deal separately with the case of a smooth morphism and the case of a closed
embedding.
\begin{lemma}
When $f \colon Y \to X$ is a smooth morphism, \theoremref{thm:inverse-image} is true.
\end{lemma}
\begin{proof}
The question is local on $X$, and so we may assume that there is a holomorphic
function $g \colon X \to \mathbb{C}$ such that $Z_0 = Z \cap g^{-1}(0)$ contains the
singular locus of $Z$, and such that $M$ comes from a polarizable variation of Hodge
structure on $Z \setminus Z_0$. We now define $h = g \circ f$, and consider the
following diagram:
\[
\begin{tikzcd}
Y \rar{(\id, h)} \dar{f} & Y \times \mathbb{C} \dar{f \times \id} \\
X \rar{(\id, g)} & X \times \mathbb{C}
\end{tikzcd}
\]
Because of \propositionref{prop:Kashiwara}, it suffices to prove the assertion for
the direct image $(\id, g)_{\ast} M$ on $X \times \mathbb{C}$; this amounts to replacing
$(\mathcal{M}, F_{\bullet} \mathcal{M})$ by
\[
\mathcal{M}_g = \bigoplus_{i \geq 0} \mathcal{M} \otimes \partial_t^i
\quad \text{and} \quad
F_{\bullet} \mathcal{M}_g
= \bigoplus_{i \geq 0} F_{\bullet-i-1} \mathcal{M} \otimes \partial_t^i,
\]
where $t$ denotes the coordinate on $\mathbb{C}$ and $\partial_t = \partial/\partial t$
the corresponding vector field. After making the obvious replacements, we may
therefore assume that $g^{-1}(0)$ and $h^{-1}(0)$ are complex manifolds, and that we
have holomorphic vector fields $\partial_g$ and $\partial_h$ with the property that
$\lie{\partial_g}{g} = 1$ and $\lie{\partial_h}{h} = 1$.
We will first prove that $\mathcal{M}_Y \simeq f^{\ast} \mathcal{M}$. Let $V^{\bullet} \mathcal{M}$ and
$V^{\bullet} \mathcal{M}_Y$ denote the rational $V\!$-filtrations along $g = 0$ and $h =
0$, respectively; for left $\mathscr{D}$-modules, the conventions are that $g \colon
V^{\alpha} \mathcal{M} \to V^{\alpha+1} \mathcal{M}$ and $\partial_g \colon V^{\alpha} \mathcal{M} \to
V^{\alpha-1} \mathcal{M}$, and that the operator $g \partial_g - \alpha$ acts nilpotently
on $\gr_V^{\alpha} \mathcal{M} = V^{\alpha} \mathcal{M} / V^{> \alpha} \mathcal{M}$. Now the point is
that $\mathcal{M}$ has strict support $Z$, which is not contained in $g^{-1}(0)$; this
implies that $\partial_g \colon \gr_V^0 \mathcal{M} \to \gr_V^{-1} \mathcal{M}$ is surjective,
and hence that
\[
\mathcal{M} = \mathscr{D}_X \cdot V^{-1} \mathcal{M}
= \sum_{i=0}^{\infty} \partial_g^i \bigl( V^{>-1} \mathcal{M} \bigr).
\]
Recall that $V^{>-1} \mathcal{M}$ only depends on the restriction of $\mathcal{M}$ to $Z
\setminus Z_0$, which is the flat bundle underlying our variation of Hodge structure.
By construction, $V^{>-1} \mathcal{M}_Y \simeq f^{\ast} V^{>-1} \mathcal{M}$, and so we obtain
\[
f^{\ast} \mathcal{M} \simeq \sum_{i=0}^{\infty} \partial_h^i \bigl( f^{\ast} V^{>-1} \mathcal{M} \bigr)
\simeq \sum_{i=0}^{\infty} \partial_h^i \bigl( V^{>-1} \mathcal{M}_Y \bigr)
\simeq \mathcal{M}_Y.
\]
To get the corresponding statement for the filtrations, we will use the fact that $M$ and
$M_Y$ are Hodge modules. One of the conditions in the definition is that the mapping
$\partial_g \colon F_p \gr_V^{\alpha+1} \mathcal{M} \to F_{p+1} \gr_V^{\alpha} \mathcal{M}$ is
surjective for every $p \in \mathbb{Z}$ and every $\alpha < -1$; because $M$ has strict
support $Z$, this is also true when $\alpha = -1$. According to
\cite[Remarque~3.2.3]{Saito-HM}, we therefore have
\[
F_p \mathcal{M} = \sum_{i = 0}^{\infty}
\partial_g^i \bigl( V^{>-1} \mathcal{M} \cap j_{\ast} j^{\ast} F_{p-i} \mathcal{M} \bigr),
\]
where $j \colon X \setminus X_0 \hookrightarrow X$ denotes the open embedding of the complement
of $X_0 = g^{-1}(0)$; the right-hand
side is again determined by the variation of Hodge structure on $Z \setminus
Z_0$. Now the flatness condition in the definition of being non-characteristic
implies that
\begin{align*}
f^{\ast} F_p \mathcal{M} &\simeq \sum_{i = 0}^{\infty}
\partial_h^i \bigl( f^{\ast} V^{>-1} \mathcal{M} \cap j_{\ast} j^{\ast} f^{\ast} F_{p-i} \mathcal{M} \bigr) \\
&\simeq \sum_{i = 0}^{\infty}
\partial_h^i \bigl( V^{>-1} \mathcal{M}_Y \cap j_{\ast} j^{\ast} F_{p-i} \mathcal{M}_Y \bigr)
= F_p \mathcal{M}_Y,
\end{align*}
which is the result we were after.
\end{proof}
\begin{lemma}
When $f \colon Y \to X$ is a closed embedding, \theoremref{thm:inverse-image} is
true.
\end{lemma}
\begin{proof}
The problem is again local on $X$, and so we may assume that $f$ is a complete
intersection. If we factor $f$ into a composition of closed embeddings of
codimension $1$, then each step is again non-characteristic by
\cite[Lemme~3.5.4]{Saito-HM}; in this way, we reduce the problem to the case where
$Y$ is defined by a single holomorphic function $g \colon X \to \mathbb{C}$. Because the
embedding is non-characteristic, it is not hard to show that $V^{\bullet} \mathcal{M}$ is
the $g$-adic filtration \cite[Lemme~3.5.6]{Saito-HM}, and hence that
\[
\gr_V^0 \mathcal{M} \simeq f^{\ast} \mathcal{M} \quad \text{and} \quad
\gr_V^{-1} \mathcal{M} \simeq 0;
\]
moreover, the flatness condition implies that $F_{\bullet} \gr_V^0 \mathcal{M} \simeq f^{\ast}
F_{\bullet} \mathcal{M}$. In particular, the action of $N = g \partial_g$ on $\gr_V^0
\mathcal{M}$ is trivial; according to the definition of Hodge modules, this means that the
pair $(f^{\ast} \mathcal{M}, f^{\ast} F_{\bullet} \mathcal{M})$ is part of a polarizable Hodge module of
weight $w-1$ on $Y$. Because $f^{\ast} \mathcal{M}$ has strict support $f^{-1}(Z)$, the
uniqueness statement in \cite[Theorem~3.21]{Saito-MHM} implies that this polarizable
Hodge module must be isomorphic to $M_Y$.
\end{proof}
\subsection{Branched coverings}
\label{par:coverings}
The proof of \theoremref{thm:Saito-pure} makes use of certain branched coverings. We
briefly review the construction in the special case that we need; for a more complete
discussion, including proofs, see \cite[\S3]{EV}.
Let $X$ be a complex manifold, and let $L$ be a holomorphic line bundle on $X$.
Suppose that for some integer $N \geq 1$, there is a global section $s \in H^0(X,
L^N)$ whose zero scheme is a smooth divisor $D \subseteq X$. In this situation, one
can construct another complex manifold $Y$ and a branched covering
\[
\pi \colon Y \to X
\]
in the following way. Let $p \colon L \to X$ denote the projection from the line
bundle, now considered as a complex manifold. Then $p^{\ast} L$ has a tautological section
$s_L$, and we may define $Y$ as the zero scheme of the section $s_L^N - p^{\ast} s$. It is
easy to see that $Y$ is a complex manifold: over any open subset $U \subseteq X$
where $L$ is trivial, the section $s$ is represented by a holomorphic function $f
\colon U \to \mathbb{C}$, and $\pi^{-1}(U)$ is the submanifold of $U \times \mathbb{C}$ defined by
the equation $t^N = f$. The local description can be used to prove that
\[
\pi_{\ast} \shO_Y \simeq \shf{O}_X \oplus \bigoplus_{i=1}^{N-1} L^{-i},
\]
and, more generally, that
\[
\pi_{\ast} \Omega_Y^k \simeq \Omega_X^k \oplus
\bigoplus_{i=1}^{N-1} \Omega_X^k(\log D) \otimes L^{-i}
\]
for $k = 1, \dotsc, \dim X$; here $\Omega_X^k(\log D)$ is the sheaf of logarithmic
differential forms. For instance, the summand $L^{-1}$ in the decomposition of $\pi_{\ast}
\shO_Y$ corresponds to $t \cdot \shf{O}_X$, and the summand $\Omega_X^k(\log D) \otimes
L^{-1}$ corresponds to
\[
t \cdot \Omega_X^k + \mathit{dt} \wedge \Omega_X^{k-1}
= t \cdot \left( \Omega_X^k + \frac{\mathit{df}}{f} \wedge \Omega_X^{k-1} \right),
\]
remembering that $\mathit{df}/f = N \mathit{dt}/t$. In both formulas, the $N$ summands on the
right-hand side are in one-to-one correspondence with the characters of the group of
$N$-th roots of unity, which acts on $Y$ in the obvious way.
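To illustrate the two decompositions, take $N = 2$ and $\dim X = 1$, with local
coordinate $x$ and $f(x) = x$, so that $\pi^{-1}(U)$ is the curve $t^2 = x$. A
holomorphic function on $\pi^{-1}(U)$ can be written uniquely in the form $a_0(t^2) +
t \, a_1(t^2)$, which gives the asserted decomposition $\pi_{\ast} \shO_Y \simeq \shf{O}_X
\oplus t \cdot \shf{O}_X$. For one-forms, we use the relation $2t \, \mathit{dt} = \mathit{dx}$ to
write
\[
\bigl( a_0(t^2) + t \, a_1(t^2) \bigr) \, \mathit{dt}
= \frac{1}{2} \, a_1(x) \, \mathit{dx}
+ \frac{1}{2} \, a_0(x) \cdot t \, \frac{\mathit{dx}}{x},
\]
in agreement with $\pi_{\ast} \Omega_Y^1 \simeq \Omega_X^1 \oplus \Omega_X^1(\log D)
\otimes L^{-1}$; note that the summand $\Omega_X^1$ is exactly the part invariant
under the involution $t \mapsto -t$.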
\section{Proof of the theorem}
\subsection{Extending line bundles}
We now begin with the preparations for the proof of \theoremref{thm:Saito-pure}.
Let $Z$ be a reduced and irreducible projective variety, and let $L$ be an ample line
bundle on $Z$. Fix an integer $N \geq 2$ such that $L^N$ is very ample; then $Z$
embeds into the projective space $P = \mathbb{P} \bigl( H^0(Z, L^N) \bigr)$, and the restriction of
$\shO_P(1)$ is isomorphic to $L^N$. The purpose of this section is to
extend $L$ to a small open neighborhood of $Z$ in $P$; this will allow us to work with
branched coverings that are again complex manifolds.
\begin{lemma} \label{lem:extension}
It is possible to extend $L$ to a holomorphic line bundle $L_X$ on an open
neighborhood $X \supseteq Z$, in such a way that $L_X^N \simeq \shf{O}_X(1)$.
\end{lemma}
\begin{proof}
According to a result by Durfee \cite[Proposition~1.6 and \S2]{Durfee}, we can find
an open set $X \subseteq P$ containing $Z$, with the property that the inclusion $Z
\hookrightarrow X$ is a homotopy equivalence. From the exponential sequence -- which is also
valid on $Z$ by definition of the sheaf $\shO_Z$ -- we obtain a commutative diagram
\[
\begin{tikzcd}
H^1 \bigl( X, \mathbb{Z}(1) \bigr) \arrow[equal]{d} \rar & H^1(X, \shf{O}_X) \dar \rar{\exp} &
H^1(X, \OX^{\times}) \dar \rar & H^2 \bigl( X, \mathbb{Z}(1) \bigr) \arrow[equal]{d} \\
H^1 \bigl( Z, \mathbb{Z}(1) \bigr) \rar & H^1(Z, \shO_Z) \rar{\exp} &
H^1(Z, \shO_Z^{\times}) \rar & H^2 \bigl( Z, \mathbb{Z}(1) \bigr)
\end{tikzcd}
\]
with exact rows. By construction, the first Chern class $c_1 \bigl( \shf{O}_X(1) \bigr) \in
H^2 \bigl( X, \mathbb{Z}(1) \bigr)$ maps to $c_1(L^N) = N \cdot c_1(L)$, and is therefore
divisible by $N$. This means that we can find a holomorphic line bundle $M_X$ with
the property that
\[
N \cdot c_1(M_X) = c_1 \bigl( \shf{O}_X(1) \bigr) \quad \text{and} \quad
c_1(M_X) \restr{Z} = c_1(L).
\]
Consequently, there are two elements $\alpha \in H^1(X, \shf{O}_X)$ and $\beta \in H^1(Z,
\shO_Z)$ such that
\[
\exp(\alpha) \cdot \class{M_X}^N = \class{\shf{O}_X(1)} \quad \text{and} \quad
\exp(\beta) \cdot \class{M_X} \restr{Z} = \class{L};
\]
square brackets mean the isomorphism class of the corresponding line bundle.
The element $\alpha \restr{Z} - N \beta$ belongs to the image of $H^1 \bigl( Z,
\mathbb{Z}(1) \bigr)$; by adjusting $\alpha$, we can arrange that $\beta$ is equal to the
restriction of $\alpha/N$. Now let $L_X$ be any holomorphic line bundle on $X$ with
\[
\class{L_X} = \exp(\alpha/N) \cdot \class{M_X}.
\]
The formulas above show that $\class{L_X}^N = \class{\shf{O}_X(1)}$ and $\class{L_X}
\restr{Z} = \class{L}$, and so we have found the desired extension of $L$.
\end{proof}
\subsection{Hodge modules and strictness}
For the remainder of the argument, we may assume that $Z$ is embedded into
a complex manifold $X$, in such a way that the given ample line bundle on $Z$ is the
restriction of a holomorphic line bundle $L$ on $X$. We may also assume that $M \in
\HMZp{Z}{X}{w}$ is a polarizable Hodge module on $X$ with strict support $Z$; this is
because the graded quotients of the de Rham complex do not depend on the embedding
(by \lemmaref{lem:subquotient}). It is important to keep in mind that the underlying
filtered $\mathscr{D}$-module $(\mathcal{M}, F_{\bullet} \mathcal{M})$ lives on $X$.
Now let $D \subseteq X$ be the divisor of a sufficiently general section $s \in
H^0(X, L^N)$ or, more concretely, the intersection of $X$ with a sufficiently general
hyperplane $H \subseteq P$. Then $D$ is non-characteristic for $M$, and so we obtain
from \theoremref{thm:inverse-image} a polarizable Hodge module $M_D \in \HMZp{D \cap
Z}{D}{w-1}$ with the property that
\[
\mathcal{M}_D \simeq \mathcal{M} \restr{D} \quad \text{and} \quad
F_p \mathcal{M}_D \simeq F_p \mathcal{M} \restr{D}.
\]
By construction, we have $L^N \simeq \shf{O}_X(D)$, and we denote by
\[
\pi \colon Y \to X
\]
the resulting branched covering of $X$; since $D$ is smooth, $Y$ is again a complex
manifold. It is easy to see that $\pi$ is also non-characteristic for $M$;
this gives us another polarizable Hodge module $M_Y \in \HMZp{\pi^{-1}(Z)}{Y}{w}$
with
\[
\mathcal{M}_Y \simeq \pi^{\ast} \mathcal{M} \quad \text{and} \quad
F_p \mathcal{M}_Y \simeq \pi^{\ast} F_p \mathcal{M}.
\]
Both $M_Y$ and $M_D$ will play a role in the proof of \theoremref{thm:Saito-pure}.
We begin by deducing the vanishing of certain morphisms from the fact that $M_Y$ is a
polarizable Hodge module. By construction, the support of $M_Y$ is equal to the
projective variety $\pi^{-1}(Z)$. According to Saito's direct image theorem
\cite[Th\'eor\`eme~5.3.1]{Saito-HM}, the direct image of $(\mathcal{M}_Y, F_{\bullet}
\mathcal{M}_Y)$ under the morphism from $Y$ to a point is therefore strict; concretely,
this means that the spectral sequence
\begin{equation} \label{eq:spectral-sequence}
E_1^{p,q} = H^{p+q} \bigl( Y, \gr_{-p}^F \DR(\mathcal{M}_Y) \bigr)
\Longrightarrow H^{p+q} \bigl( Y, \DR(\mathcal{M}_Y) \bigr)
\end{equation}
degenerates at $E_1$. Since the spectral sequence comes from a filtered complex, it
is easy to describe the $E_1$-differentials in terms of $\DR(\mathcal{M}_Y)$. For each $p \in
\mathbb{Z}$, we have a short exact sequence of complexes
\[
0 \to \gr_{p-1}^F \DR(\mathcal{M}_Y) \to
F_p \DR(\mathcal{M}_Y) \big/ F_{p-2} \DR(\mathcal{M}_Y) \to \gr_p^F \DR(\mathcal{M}_Y) \to 0.
\]
In the derived category of complexes of sheaves of $\mathbb{C}$-vector spaces, it is part of
a distinguished triangle; the third morphism in this triangle is
\[
\gr_p^F \DR(\mathcal{M}_Y) \to \gr_{p-1}^F \DR(\mathcal{M}_Y) \decal{1}.
\]
As in the case of $d \colon \shO_Y \to \Omega_Y^1$, this morphism is in general not
$\shO_Y$-linear. The degeneration of the spectral sequence in
\eqref{eq:spectral-sequence} has the following consequence.
\begin{lemma} \label{lem:strictness}
For every $i,p \in \mathbb{Z}$, the induced morphism on cohomology
\[
H^i \bigl( Y, \gr_p^F \DR(\mathcal{M}_Y) \bigr)
\to H^{i+1} \bigl( Y, \gr_{p-1}^F \DR(\mathcal{M}_Y) \bigr)
\]
is equal to zero.
\end{lemma}
\subsection{Comparison with the original complex}
The purpose of this section is to obtain information about the complex $\gr_p^F
\DR(\mathcal{M})$ from \lemmaref{lem:strictness}. The first step is to take the direct
image of $\DR(\mathcal{M}_Y)$ by the finite morphism $\pi \colon Y \to X$.
\begin{lemma} \label{lem:subcomplex}
The complex $\pi_{\ast} \DR(\mathcal{M}_Y)$ has a direct summand isomorphic to
\[
\Bigl\lbrack
L^{-1} \otimes \mathcal{M} \to \Omega_X^1(\log D) \otimes L^{-1} \otimes \mathcal{M} \to \dotsb
\to \Omega_X^n(\log D) \otimes L^{-1} \otimes \mathcal{M}
\Bigr\rbrack \decal{n},
\]
compatible with the filtration $\pi_{\ast} F_{\bullet} \DR(\mathcal{M}_Y)$.
\end{lemma}
\begin{proof}
Note that the functor $\pi_{\ast}$ is exact because $\pi$ is a finite morphism; the
isomorphism $\mathcal{M}_Y \simeq \pi^{\ast} \mathcal{M}$ and the projection formula therefore imply that
\[
\pi_{\ast} \DR(\mathcal{M}_Y) \simeq \Bigl\lbrack
\pi_{\ast} \shO_Y \otimes \mathcal{M} \to \pi_{\ast} \Omega_Y^1 \otimes \mathcal{M} \to \dotsb
\to \pi_{\ast} \Omega_Y^n \otimes \mathcal{M}
\Bigr\rbrack \decal{n}.
\]
We now take the summand with $L^{-1}$ in the decomposition of each term (see
\parref{par:coverings}). To show that
this leads to a subcomplex, we can exploit the group action: the group of $N$-th roots of
unity acts on the entire complex, and we are taking the summand corresponding to the
standard character. That the decomposition respects the filtration is obvious.
\end{proof}
We shall give a second proof in local coordinates in \parref{par:computations} below.
To simplify the notation, let us denote by
\[
\tilde{K} \subseteq \pi_{\ast} \DR(\mathcal{M}_Y) \quad \text{and} \quad
F_{\bullet} \tilde{K} \subseteq \pi_{\ast} F_{\bullet} \DR(\mathcal{M}_Y)
\]
the subcomplex in \lemmaref{lem:subcomplex}, together with the induced filtration. As
before, we have a collection of $\mathbb{C}$-linear connecting morphisms
\[
\tilde{\delta}_p \colon \gr_p^F \tilde{K} \to \gr_{p-1}^F \tilde{K} \decal{1}
\]
in the derived category; because $\tilde{K}$ is a direct summand, the degeneration of the
spectral sequence in \eqref{eq:spectral-sequence} means that the induced morphisms
\[
H^i \bigl( X, \gr_p^F \tilde{K} \bigr)
\to H^{i+1} \bigl( X, \gr_{p-1}^F \tilde{K} \bigr)
\]
are also equal to zero. To exploit this fact, we are now going to relate the graded
quotients $\gr_p^F \tilde{K}$ to the two complexes $\gr_p^F \DR(\mathcal{M})$ and $\gr_p^F
\DR(\mathcal{M}_D)$.
\begin{proposition} \label{prop:morphisms}
Let $L_D$ denote the restriction of $L$ to the divisor $D$.
\begin{aenumerate}
\item We have a morphism of complexes
\[
f_p \colon L^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \to \gr_p^F \tilde{K},
\]
induced by the natural inclusions $\Omega_X^k \hookrightarrow \Omega_X^k(\log D)$.
\item We have a morphism of complexes
\[
r_p \colon \gr_p^F \tilde{K} \to L_D^{-1} \otimes \gr_{p+1}^F \DR(\mathcal{M}_D),
\]
induced by the residue mappings $\Res_D \colon \Omega_X^k(\log D) \to \Omega_D^{k-1}$.
\end{aenumerate}
\end{proposition}
The proof requires a small calculation in local coordinates; we postpone it until
\parref{par:computations} and first state the main result.
\begin{proposition} \label{prop:composition}
Up to a constant factor of $(-1)^n N$, the composition
\[
\begin{tikzcd}
L^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \rar{f_p} & \gr_p^F \tilde{K} \rar{\tilde{\delta}_p} &
\gr_{p-1}^F \tilde{K} \decal{1} \rar{r_{p-1}} &
L_D^{-1} \otimes \gr_p^F \DR(\mathcal{M}_D) \decal{1}
\end{tikzcd}
\]
is equal to the restriction mapping.
\end{proposition}
The proof can be found in \parref{par:computations}. The point is of course
that, because of the above factorization, the induced morphism on cohomology
\begin{equation} \label{eq:restriction}
H^i \Bigl( X, L^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr)
\to H^{i+1} \Bigl( D, L_D^{-1} \otimes \gr_p^F \DR(\mathcal{M}_D) \Bigr)
\end{equation}
is equal to zero. Once this is known, \theoremref{thm:Saito-pure} can be proved very
easily by using Serre's vanishing theorem and induction on the dimension.
\subsection{Proof of Saito's theorem}
We are now ready to prove \theoremref{thm:Saito-pure}. We first observe that the two
assertions
\begin{align}
H^i \bigl( Z, \gr_p^F \DR(\mathcal{M}) \otimes L \bigr) = 0
\quad \text{for $i > 0$ and $p \in \mathbb{Z}$,} \label{eq:assertion-1} \\
H^i \bigl( Z, \gr_p^F \DR(\mathcal{M}) \otimes L^{-1} \bigr) = 0
\quad \text{for $i < 0$ and $p \in \mathbb{Z}$,} \label{eq:assertion-2}
\end{align}
are equivalent to each other by virtue of \lemmaref{lem:polarization}; it is
therefore enough to prove the second one. This will be done by induction on the
dimension. Since $D \subseteq X$ is a smooth divisor with $L^N \simeq \shf{O}_X(D)$, we
have a short exact sequence
\[
0 \to L_D^{-N} \to \Omega_X^1 \restr{D} \to \Omega_D^1 \to 0.
\]
As shown in \parref{par:computations} below, it induces a short exact sequence of
complexes
\begin{equation} \label{eq:short-exact}
0 \to L_D^{-N} \otimes \gr_{p+1}^F \DR(\mathcal{M}_D) \to
\gr_p^F \DR(\mathcal{M}) \restr{D} \to \gr_p^F \DR(\mathcal{M}_D) \decal{1} \to 0.
\end{equation}
By induction, we can assume that the $i$-th cohomology of $L_D^{-N-1} \otimes
\gr_{p+1}^F \DR(\mathcal{M}_D)$ vanishes for every $i < 0$; it follows that
\begin{equation} \label{eq:induction}
H^i \Bigl( D, L_D^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \restr{D} \Bigr)
\to H^{i+1} \Bigl( D, L_D^{-1} \otimes \gr_p^F \DR(\mathcal{M}_D) \Bigr)
\end{equation}
is injective. Because we already know that the morphism in \eqref{eq:restriction} is
equal to zero, the injectivity of \eqref{eq:induction} means that the morphism
\[
H^i \Bigl( X, L^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr)
\to H^i \Bigl( D, L_D^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \restr{D} \Bigr)
\]
is also equal to zero. This obviously implies the surjectivity of
\[
H^i \Bigl( X, L^{-N-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr) \to
H^i \Bigl( X, L^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr)
\]
for $i < 0$; note that the morphism is nothing but multiplication by the
global section $s \in H^0(X, L^N)$ that we chose at the beginning of the proof.
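To spell out the deduction (a routine long exact sequence argument): tensoring the
short exact sequence $0 \to L^{-N} \to \shf{O}_X \to \shf{O}_D \to 0$ determined by
$s$ with $L^{-1} \otimes \gr_p^F \DR(\mathcal{M})$ and passing to hypercohomology, we obtain
an exact sequence
\[
H^i \Bigl( X, L^{-N-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr) \to
H^i \Bigl( X, L^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr) \to
H^i \Bigl( D, L_D^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \restr{D} \Bigr),
\]
in which the first morphism is multiplication by $s$ and the second is the
restriction morphism that we have just shown to be zero; the surjectivity follows by
exactness.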
Now we can easily complete the proof of \theoremref{thm:Saito-pure} with the help of
Serre's vanishing theorem. Recall that $\gr_p^F \DR(\mathcal{M}) \in \mathrm{D}_{\mathit{coh}}^{\mathit{b}}(\shO_Z)$ does not
depend on the choice of embedding; what we have shown above is that the
multiplication morphism
\begin{equation} \label{eq:multiplication}
H^i \Bigl( Z, L^{-N-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr) \to
H^i \Bigl( Z, L^{-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr)
\end{equation}
is surjective for every $i < 0$ and every sufficiently general section $s \in H^0(Z, L^N)$.
By \lemmaref{lem:polarization} and Serre duality, we have
\[
H^i \Bigl( Z, L^{-N-1} \otimes \gr_p^F \DR(\mathcal{M}) \Bigr)
\simeq H^{-i} \Bigl( Z, L^{N+1} \otimes \gr_{-p-w}^F \DR(\mathcal{M}) \Bigr).
\]
This becomes equal to zero for $N \gg 0$, because $\gr_{-p-w}^F \DR(\mathcal{M})$ is
concentrated in non-positive degrees. The surjectivity of \eqref{eq:multiplication}
therefore implies the desired vanishing for $\gr_p^F \DR(\mathcal{M})$.
\begin{note}
The proof becomes simpler in the case of the lowest graded quotient
\[
\gr_{p(M)}^F \DR(\mathcal{M}) = \omega_X \otimes F_{p(M) + n} \mathcal{M}
\]
of the de Rham complex. This amounts to taking $p = -p(M)-w$ in the argument
above; the point is that the complex $\gr_{p+1}^F \DR(\mathcal{M}_D)$ is now exact, because
\[
p + 1 = 1 - p(M) - w = 1 - p(M_D) - (w - 1).
\]
Consequently, \eqref{eq:induction} is automatically injective, and so we do not need
any vanishing on $D$ to conclude that \eqref{eq:multiplication} is surjective.
Many other interesting results about the coherent $\shO_Z$-module $\gr_{p(M)}^F
\DR(\mathcal{M})$ can be found in \cite{Saito-K}.
\end{note}
\subsection{Computations in local coordinates}
\label{par:computations}
We now prove \propositionref{prop:morphisms} and \propositionref{prop:composition},
as well as the exactness of the sequence of complexes in \eqref{eq:short-exact}.
Since it is easiest to do this by a calculation in local coordinates, we shall first
give a description of the complex $\tilde{K}$ in a neighborhood of the divisor $D$.
Let $x_1, \dotsc, x_n$ be local holomorphic coordinates on $X$, with the property
that the divisor $D$ is defined by the equation $x_n = 0$. On $Y$, we can choose
local holomorphic coordinates $y_1, \dotsc, y_n$ in such a way that $\pi \colon Y
\to X$ is represented by
\[
(x_1, \dotsc, x_{n-1}, x_n) = (y_1, \dotsc, y_{n-1}, y_n^N).
\]
In particular, the line bundle $L$ is trivial on the open set in question; note that
the summand $L^{-1}$ in the decomposition of $\pi_{\ast} \shO_Y$ corresponds to $y_n \cdot \shf{O}_X$.
The $\mathscr{D}$-module structure on $\mathcal{M}_Y \simeq \pi^{\ast} \mathcal{M}$ comes from the natural
morphism $\mathscr{D}_Y \to \pi^{\ast} \mathscr{D}_X$. As in \parref{par:DR}, the differentials
$\nabla_Y \colon \Omega_Y^k \otimes \mathcal{M}_Y \to \Omega_Y^{k+1} \otimes \mathcal{M}_Y$ in the de
Rham complex of $\mathcal{M}_Y$ are therefore given in local coordinates by
\[
\nabla_Y(\alpha \otimes m) = (-1)^n d\alpha \otimes m + (-1)^n
\sum_{i=1}^n (\pi^{\ast} \mathit{dx}_i \wedge \alpha) \otimes \frac{\partial}{\partial x_i} m.
\]
Since $L$ is trivial on the open set in question, the induced differentials
\[
\pi_{\ast} \nabla_Y \colon \Omega_X^k(\log D) \otimes \mathcal{M}
\to \Omega_X^{k+1}(\log D) \otimes \mathcal{M}
\]
are represented by the formula
\begin{equation} \label{eq:new-differential}
\begin{split}
\pi_{\ast} \nabla_Y(\alpha \otimes m) &= (-1)^n \frac{d(y_n \alpha)}{y_n} \otimes m + (-1)^n
\sum_{i=1}^n (\mathit{dx}_i \wedge \alpha) \otimes \frac{\partial}{\partial x_i} m \\
&= (-1)^n \frac{1}{N} \frac{\mathit{dx}_n}{x_n} \wedge \alpha \otimes m
+ \nabla(\alpha \otimes m),
\end{split}
\end{equation}
where $\nabla$ is defined as in \eqref{eq:DR-local}. With this description,
\propositionref{prop:morphisms} is easy.
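Note that the coefficient $1/N$ in \eqref{eq:new-differential} is simply the result
of the substitution $x_n = y_n^N$: from $\mathit{dx}_n = N y_n^{N-1} \, \mathit{dy}_n$ we get
\[
\frac{\mathit{dy}_n}{y_n} = \frac{1}{N} \frac{\mathit{dx}_n}{x_n},
\]
and the Leibniz rule gives $d(y_n \alpha)/y_n = (\mathit{dy}_n / y_n) \wedge \alpha + d\alpha$.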
\begin{proof}[Proof of \propositionref{prop:morphisms}]
The formula in \eqref{eq:new-differential} shows that the morphisms
\[
\Omega_X^k \otimes L^{-1} \otimes \gr_{p+k}^F \mathcal{M} \to
\Omega_X^k(\log D) \otimes L^{-1} \otimes \gr_{p+k}^F \mathcal{M}
\]
are compatible with the differentials in $L^{-1} \otimes \gr_p^F \DR(\mathcal{M})$ and
$\gr_p^F \tilde{K}$; this proves the first assertion. Our definition of the residue mapping
is
\[
\Res_D \colon \Omega_X^k(\log D) \to \Omega_D^{k-1}, \quad
\Res_D \left( \frac{df}{f} \wedge \alpha \right) = \alpha \restr{D}
\]
where $f$ is an arbitrary local defining equation for $D$; it interacts better with
the sign conventions for the de Rham complex than the usual definition. The residue
mapping induces morphisms
\[
r_p \colon \Omega_X^k(\log D) \otimes L^{-1} \otimes \gr_{p+k}^F \mathcal{M}
\to \Omega_D^{k-1} \otimes L_D^{-1} \otimes \gr_{p+k}^F \mathcal{M}_D,
\]
and we have to check that they are compatible with the differentials in both
complexes. This is straightforward: the residue of
\[
(-1)^n \sum_{i=1}^n (\mathit{dx}_i \wedge \alpha) \otimes \frac{\partial}{\partial x_i} m
\]
is evidently
\[
(-1)^n \sum_{i=1}^n \Res_D (\mathit{dx}_i \wedge \alpha) \otimes
\frac{\partial}{\partial x_i} m \restr{D}
= (-1)^{n-1} \sum_{i=1}^{n-1} \mathit{dx}_i \wedge \Res_D(\alpha) \otimes
\frac{\partial}{\partial x_i} m \restr{D},
\]
hence equal to what we get when we apply $\nabla_D$ to $\Res_D(\alpha) \otimes m \restr{D}$.
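The two identities behind this computation are worth recording: taking $f = x_n$ in
the definition of $\Res_D$, one checks that
\[
\Res_D ( \mathit{dx}_n \wedge \alpha ) = (x_n \alpha) \restr{D} = 0
\quad \text{and} \quad
\Res_D ( \mathit{dx}_i \wedge \alpha ) = - \mathit{dx}_i \wedge \Res_D(\alpha)
\quad \text{for $i < n$;}
\]
the first identity removes the term with $i = n$, and the sign in the second
accounts for the change from $(-1)^n$ to $(-1)^{n-1}$.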
\end{proof}
Now we can prove the main technical result, namely \propositionref{prop:composition}.
\begin{proof}[Proof of \propositionref{prop:composition}]
It suffices to check this in a neighborhood of any given point of $D$. After choosing
local coordinates as above, the line bundle $L$ becomes trivial on the open set in
question, and the differentials in the complex $\tilde{K}$ are given by the formula in
\eqref{eq:new-differential}. To simplify the notation, we define
\[
K = \DR(\mathcal{M}) \quad \text{and} \quad
F_{\bullet} K = F_{\bullet} \DR(\mathcal{M}),
\]
and denote by $\delta_p \colon \gr_p^F K \to \gr_{p-1}^F K \decal{1}$ the connecting
morphisms in the derived category. Since we have trivialized $L$, the morphisms
$\Omega_X^k \otimes \mathcal{M} \to \Omega_X^k(\log D) \otimes \mathcal{M}$ give rise to a
commutative diagram
\[
\begin{tikzcd}
0 \rar & \gr_{p-1}^F K \rar \dar{f_{p-1}} & F_p K / F_{p-2} K \rar \dar[dashed] &
\gr_p^F K \rar \dar{f_p} & 0 \\
0 \rar & \gr_{p-1}^F \tilde{K} \rar & F_p \tilde{K} / F_{p-2} \tilde{K} \rar & \gr_p^F \tilde{K} \rar & 0
\end{tikzcd}
\]
where both rows are exact and the solid arrows are morphisms of complexes (by
\propositionref{prop:morphisms}); the dashed arrow is not a morphism of complexes. In
the derived category, we now consider the following square, which is in general
not commutative:
\[
\begin{tikzcd}
\gr_p^F K \rar{\delta_p} \dar{f_p} & \gr_{p-1}^F K \decal{1} \dar{f_{p-1}} \\
\gr_p^F \tilde{K} \rar{\tilde{\delta}_p} & \gr_{p-1}^F \tilde{K} \decal{1}
\end{tikzcd}
\]
According to \lemmaref{lem:connecting} below, the difference
\[
\tilde{\delta}_p f_p - f_{p-1} \delta_p \colon \gr_p^F K \to \gr_{p-1}^F \tilde{K} \decal{1}
\]
can be computed by comparing the differential in $\tilde{K}$ and the differential in $K$;
going back to \eqref{eq:new-differential}, the result is that $\tilde{\delta}_p f_p -
f_{p-1} \delta_p$ equals
\[
\Omega_X^k \otimes \gr_{p+k}^F \mathcal{M} \to \Omega_X^{k+1}(\log D) \otimes \gr_{p+k}^F \mathcal{M},
\quad
\alpha \otimes m \mapsto (-1)^n \frac{1}{N} \frac{\mathit{dx}_n}{x_n} \wedge \alpha \otimes m.
\]
If we compose this with the residue mapping, we find that the morphism $r_{p-1}
\tilde{\delta}_p f_p = r_{p-1}(\tilde{\delta}_p f_p - f_{p-1} \delta_p)$ is equal to
\[
\Omega_X^k \otimes \gr_{p+k}^F \mathcal{M} \to \Omega_D^k \otimes \gr_{p+k}^F \mathcal{M}_D,
\quad
\alpha \otimes m \mapsto (-1)^n \frac{1}{N} \alpha \restr{D} \otimes m \restr{D},
\]
and therefore agrees with the restriction mapping up to a factor of $(-1)^n N$.
\end{proof}
It remains to say a few words about the sequence of complexes in
\eqref{eq:short-exact}. Starting from the short exact sequence of locally free
sheaves
\[
0 \to L_D^{-N} \to \Omega_X^1 \restr{D} \to \Omega_D^1 \to 0,
\]
we can take exterior powers to obtain a family of short exact sequences
\[
0 \to L_D^{-N} \otimes \Omega_D^{k-1} \to \Omega_X^k \restr{D} \to \Omega_D^k \to 0
\]
for $k = 0, 1, \dotsc, n$. Since $\Omega_D^k$ is locally free, the resulting sequences
\[
0 \to L_D^{-N} \otimes \Omega_D^{k-1} \otimes \gr_{p+k}^F \mathcal{M}_D
\to \bigl( \Omega_X^k \otimes \gr_{p+k}^F \mathcal{M} \bigr) \restr{D}
\to \Omega_D^k \otimes \gr_{p+k}^F \mathcal{M}_D \to 0
\]
are still short exact. Using the formulas in \parref{par:DR}, it is an easy exercise to
show that both morphisms are compatible with the differentials.
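The exterior power step uses the standard fact that, for a short exact sequence $0
\to L \to E \to Q \to 0$ of locally free sheaves in which $L$ has rank one, there
are induced short exact sequences
\[
0 \to L \otimes \Lambda^{k-1} Q \to \Lambda^k E \to \Lambda^k Q \to 0;
\]
indeed, the filtration of $\Lambda^k E$ determined by $L \subseteq E$ has graded
quotients $\Lambda^j L \otimes \Lambda^{k-j} Q$, and these vanish for $j \geq 2$.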
\subsection{Connecting morphisms}
In this section, we prove a small lemma about connecting morphisms in short exact
sequences of complexes. Let $B_1$ and $B_2$ be two complexes in an abelian category,
and suppose that we have a family of morphisms
\[
f^n \colon B_1^n \to B_2^n
\]
that do not necessarily commute with the differentials; this is of course precisely
the situation that we encountered during the proof of \propositionref{prop:composition}.
If we define
\[
\varphi^n = f^{n+1} d_1^n - d_2^n f^n
\colon B_1^n \to B_2^{n+1},
\]
then $\varphi \colon B_1 \to B_2 \decal{1}$ is a morphism of complexes; here it is
necessary to remember that the $n$-th differential in the shifted complex $B_2
\decal{1}$ is equal to $-d_2^{n+1}$. Suppose in addition that we have the
following commutative diagram:
\[
\begin{tikzcd}
0 \rar & A_1 \rar{i_1} \dar{e} & B_1 \rar{p_1} \dar[dashed]{f} & C_1 \rar \dar{g} & 0 \\
0 \rar & A_2 \rar{i_2} & B_2 \rar{p_2} & C_2 \rar & 0
\end{tikzcd}
\]
In this diagram, all solid arrows are morphisms of complexes; both squares commute;
and both rows are exact. Because $e$ and $g$ commute with the differentials, it is
easy to see that $\varphi = i_2 \psi p_1$ for a unique morphism of complexes $\psi
\colon C_1 \to A_2 \decal{1}$. Note that although $\varphi$ is chain homotopic to
zero, this need not be the case for $\psi$; in particular, viewed as a morphism in
the derived category, $\psi$ is typically nonzero.
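For completeness, here is the verification that $\varphi$ commutes with the
differentials: using $d_1^{n+1} d_1^n = 0$ and $d_2^{n+1} d_2^n = 0$, we compute
\[
\varphi^{n+1} d_1^n
= f^{n+2} d_1^{n+1} d_1^n - d_2^{n+1} f^{n+1} d_1^n
= - d_2^{n+1} \bigl( f^{n+1} d_1^n - d_2^n f^n \bigr)
= \bigl( - d_2^{n+1} \bigr) \varphi^n,
\]
which is exactly compatibility with the differential of the shifted complex $B_2
\decal{1}$.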
In the derived category, each row of the diagram is part of a distinguished triangle,
and we denote the third morphism in this triangle by $\delta_k \colon C_k \to A_k
\decal{1}$. We can now consider the following square of morphisms:
\[
\begin{tikzcd}
C_1 \rar{\delta_1} \dar{g} & A_1 \decal{1} \dar{e} \\
C_2 \rar{\delta_2} & A_2 \decal{1}
\end{tikzcd}
\]
Unless $f$ is a morphism of complexes, the square is not commutative. The following
lemma measures the failure of commutativity.
\begin{lemma} \label{lem:connecting}
In the derived category, we have $\delta_2 g - e \delta_1 = \psi$.
\end{lemma}
\begin{proof}
We begin by describing the morphism $\delta_k$. Let
\[
M_k = B_k \oplus A_k \decal{1}
\]
denote the mapping cone of $i_k \colon A_k \to B_k$, with differential given by the
matrix
\[
\begin{pmatrix}
d_k & i_k \\
0 & -d_k
\end{pmatrix}.
\]
There are two obvious morphisms $p_k \colon M_k \to C_k$ and $q_k \colon M_k \to A_k
\decal{1}$; the first one is a quasi-isomorphism, and $\delta_k p_k =
q_k$. Next, we observe that
\[
\begin{pmatrix}
d_2 & i_2 \\
0 & -d_2
\end{pmatrix} \begin{pmatrix}
f & 0 \\
\psi p_1 & e
\end{pmatrix} = \begin{pmatrix}
f & 0 \\
\psi p_1 & e
\end{pmatrix} \begin{pmatrix}
d_1 & i_1 \\
0 & -d_1
\end{pmatrix},
\]
which means exactly that
\[
h = \begin{pmatrix}
f & 0 \\
\psi p_1 & e
\end{pmatrix} \colon M_1 \to M_2
\]
is a morphism of complexes with $p_2 h = g p_1$ and $q_2 h = \psi p_1 + e q_1$. But then
\[
(\delta_2 g - e \delta_1) p_1 = \delta_2 p_2 h - e q_1 = q_2 h - e q_1
= \psi p_1,
\]
which proves the assertion because $p_1$ is a quasi-isomorphism.
\end{proof}
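\begin{note}
For the reader's convenience, we indicate how the matrix identity in the proof of
\lemmaref{lem:connecting} is checked. The entry in the upper left-hand corner
asserts that
\[
d_2 f + i_2 \psi p_1 = f d_1,
\]
which is precisely the relation $i_2 \psi p_1 = \varphi = f d_1 - d_2 f$ defining
$\psi$; the remaining three entries follow from the commutativity of the two squares
in the diagram, together with the identity $p_1 i_1 = 0$ and the fact that $p_1$,
$e$, and $\psi$ are morphisms of complexes (keeping in mind the sign of the
differential in $A_k \decal{1}$).
\end{note}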
\bibliographystyle{amsalphax}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\ZM}{\relax\ifhmode\unskip\space\fi Zbl }
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\arXiv}[1]{\relax\ifhmode\unskip\space\fi\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction} \label{sec:intro}
Let $\{ \mathcal{P}_i \}_{i=1}^n$ be a collection of disjoint ideal polyhedra in $\mathbb{H}^3$. A \textit{face pairing} on $\{ \mathcal{P}_i \}$ is a collection of isometries $\{\phi_f \,|\, f \in \mathcal{P}_i^{(2)},\, 1 \leq i \leq n\}$ of $\mathbb{H}^3$ with the following properties. If $f$ is a face of $\mathcal{P}_i$, $\phi_f$ takes $f$ onto a face $f'$ of some $\mathcal{P}_j$, with $\phi_f(\mathcal{P}_i) \cap \mathcal{P}_j = f'$, and $\phi_{f'} = \phi_f^{-1}$. Now let $M$ be a complete hyperbolic $3$-manifold with finite volume. An \textit{ideal polyhedral decomposition} of $M$ is an isometry between $M$ and a quotient $\bigsqcup_i \mathcal{P}_i/\sim$, where $\sim$ is the equivalence relation generated by a face pairing on $\{\mathcal{P}_i\}$. If the dihedral angles of every polyhedron $\mathcal{P}_i$ are all equal to $\pi/2$ then the decomposition is called an \emph{ideal right-angled polyhedral decomposition}.
Our first result relates fundamental groups of 3-manifolds that admit ideal right-angled polyhedral decompositions to the class of right-angled Coxeter groups. A \emph{right-angled Coxeter group} $W$ is defined by a finite, simplicial graph $\Delta$ (called the \emph{nerve} of $W$) and has an easily described presentation: the generators are the vertices; every generator is an involution; and the commutator of two generators is trivial if and only if they are joined by an edge in $\Delta$. We will refer to the vertices of the nerve as the \emph{standard generating set} for $W$. The properties of such $W$ discovered in \cite{Agol} and \cite{Hag} will particularly concern us.
\begin{thm} \label{rt ang cox} Suppose $M$ is a complete hyperbolic $3$-manifold with finite volume that admits a decomposition into right-angled ideal polyhedra. Then $\pi_1M$ has a subgroup of finite index isomorphic to a word-quasiconvex subgroup of a right-angled Coxeter group (equipped with the standard generating set). \end{thm}
See Section \ref{sec:QCERF} for the definition of word quasiconvexity. In the terminology of \cite{HW}, Theorem \ref{rt ang cox} asserts that $\pi_1M$ is \emph{virtually special}. The proof relies on work of Haglund--Wise \cite{HW} defining a class of \textit{special} cube complexes --- non-positively curved cube complexes whose hyperplanes lack certain pathologies --- which admit local isometries into cube complexes associated to right-angled Coxeter groups. In Section \ref{cube cx} we review the relevant definitions and in Section \ref{std square} describe a \textit{standard} square complex associated with an ideal polyhedral decomposition of a hyperbolic $3$-manifold.
When an ideal polyhedral decomposition is right-angled, the associated standard square complex is non-positively curved, and hyperplanes are carried by totally geodesic surfaces. We will establish these properties in Subsection \ref{std square} and Section \ref{sec:geodesic hyperplanes}. Separability properties of totally geodesic surfaces then imply that pathologies may be removed in finite covers. We describe these properties and prove Theorem \ref{rt ang cox} in Section \ref{sec:separability}.
This result has important consequences for the geometry and topology of such manifolds. The first follows directly from work of Agol \cite{Agol}, and confirms that the manifolds we consider satisfy Thurston's famous Virtually Fibered Conjecture.
\begin{cor} Suppose $M$ is a complete hyperbolic $3$-manifold with finite volume that admits a decomposition into right-angled ideal polyhedra. Then $M$ is virtually fibered. \end{cor}
A $3$-manifold that satisfies the hypotheses of Theorem \ref{rt ang cox} is necessarily not compact, so its fundamental group is not hyperbolic in the sense of Gromov, but rather hyperbolic \emph{relative to} the collection of its cusp subgroups. Nonetheless, our second theorem implies that its subgroup structure shares the separability properties of its compact cousins. This generalizes \cite[Theorem 1.3]{HW} to the relatively hyperbolic setting. We say a subgroup $H$ of a group $G$ is a \emph{virtual retract} if $H$ is contained in a finite-index subgroup $K$ of $G$ and the inclusion map $H\hookrightarrow K$ has a left inverse. (See \cite{long_subgroup_2008} for further details of virtual retractions.)
\begin{thm}\label{rel qc still sep}
Let $\mathcal{S}$ be a compact, virtually special cube complex and suppose that $\pi_1\mathcal{S}$ is hyperbolic relative to a collection of finitely generated abelian subgroups. Then every relatively quasiconvex subgroup of $\pi_1\mathcal{S}$ is a virtual retract. \end{thm}
As in the case of \cite[Theorem 1.3]{HW}, the proof of Theorem \ref{rel qc still sep} relies on a result of Haglund for separating subgroups of right-angled Coxeter groups \cite[Theorem A]{Hag}, but it also requires new ingredients to surmount the technical obstacle that not every relatively quasiconvex subgroup is word-quasiconvex. The first is Theorem \ref{t: Fully rel. qc subgroups}, a variation of \cite[Theorem 1.7]{manning_separation_2008}, which establishes that every relatively quasiconvex subgroup is a retract of a \textit{fully relatively quasiconvex} subgroup (see the definition above Theorem \ref{t: Fully rel. qc subgroups}). The second ingredient, Proposition \ref{p: Fully qc implies combinatorially qc}, extends work in \cite{hruska_relative_2008} to show that fully relatively quasiconvex subgroups satisfy the hypotheses of \cite[Theorem A]{Hag}.
Even without any restrictions on the types of parabolic subgroups allowed, our results prove that certain subgroups of relatively hyperbolic groups are virtual retracts: see Theorem \ref{t: Retracts with no hypotheses} and its corollaries for precise statements.
The consequences of Theorem \ref{rel qc still sep} follow a long-standing theme in the study of $3$-manifolds and their fundamental groups. For a group $G$ and a subgroup $H$, we say $H$ is \textit{separable} in $G$ if for every $g \in G - H$, there is a finite-index subgroup $K < G$ such that $H<K$ and $g \not\in K$. If $G = \pi_1M$ for some manifold $M$, work of G.P. Scott links separability of $H$ with topological properties of the corresponding cover $M_H \to M$ \cite{Scott}. A group is called \emph{LERF} if every finitely generated subgroup is separable.
\begin{cor}\label{c: Corollary 2}
Suppose $M$ is a complete hyperbolic $3$-manifold with finite volume that admits a decomposition into right-angled ideal polyhedra. Then:
\begin{enumerate}
\item \label{item 1} $\pi_1M$ is LERF.
\item \label{item 2} Every geometrically finite subgroup of $\pi_1M$ is a virtual retract.
\end{enumerate}
\end{cor}
We will prove Theorem \ref{rel qc still sep} and Corollary \ref{c: Corollary 2} at the end of Section \ref{sec:QCERF}.
The study of LERF 3-manifold groups dates back to \cite{Scott}. Although there are examples of graph manifolds with non-LERF fundamental group \cite{burns_notegroups_1987}, it remains unknown whether every hyperbolic 3-manifold group is LERF. Gitik \cite{Gitik} constructed examples of hyperbolic 3-manifolds with totally geodesic boundary whose fundamental groups are LERF, and it is a consequence of Marden's Tameness Conjecture that her closed examples are also LERF. Agol, Long and Reid proved that the Bianchi groups are LERF \cite{ALR}.
It is natural to ask to what extent Theorem \ref{rt ang cox} describes \textit{new} examples of $3$-manifold groups that virtually embed into right-angled Coxeter groups, and more generally to what extent it describes new examples of LERF 3-manifold groups. Hitherto, there have only been a limited number of techniques for proving that finite-volume 3-manifolds are LERF. The techniques of \cite{Gitik} did not produce non-compact, finite-volume examples, so we shall not consider them here.
Agol, Long and Reid \cite{ALR} proved that geometrically finite subgroups of right-angled, hyperbolic reflection groups are separable. They deduced a similar result for the Bianchi groups by embedding them as totally geodesic subgroups of higher-dimensional, arithmetic right-angled reflection groups. One might na\"ively suppose that the fundamental group of a $3$-manifold that decomposes into right-angled polyhedra $\{\mathcal{P}_i\}$ is commensurable with the reflection group in one of the $\mathcal{P}_i$, or perhaps a union of several, and therefore that Theorem \ref{rt ang cox} could be deduced using the techniques of \cite{ALR}.
We address the above possibility in Sections \ref{sec:examples} and \ref{sec: augmented}. There we describe infinite families of hyperbolic $3$-manifolds that decompose into right-angled polyhedra but are not commensurable with \textit{any} $3$-dimensional reflection orbifold. Indeed, Section \ref{sec: augmented} considers a very broad class of hyperbolic $3$-manifolds, the augmented link complements (previously considered in \cite{LAD} and \cite{Purcell_cusps}, for example), which decompose into right-angled polyhedra. Our investigations there strongly support the following hypothesis: a ``generic'' augmented link complement is not commensurable with any $3$-dimensional reflection orbifold.
If $M$ decomposes into isometric copies of a single, highly symmetric polyhedron $\mathcal{P}$, we show in Proposition \ref{symm comm} that $\pi_1 M$ is indeed commensurable with the reflection group in the sides of $\mathcal{P}$. The lowest-complexity right-angled ideal polyhedra (measured by number of ideal vertices) are the $3$- and $4$-antiprisms (see Figure \ref{P_1 and P_2}), and these are sufficiently symmetric for the hypotheses of Proposition \ref{symm comm} to apply. However, in Section \ref{sec:One cusp}, we describe hybrid examples not commensurable with reflection groups.
\begin{thm}\label{t: One-cusp examples}
For each $n \in \mathbb{N}$, there is a complete, one-cusped hyperbolic $3$-manifold $N_n$ that decomposes into right-angled ideal polyhedra, such that $N_n$ is not commensurable with $N_m$ for any $m \neq n$, nor to any $3$-dimensional reflection orbifold.
\end{thm}
Recently, Haglund and Wise have proved that every Coxeter group is virtually special \cite{haglund_coxeter_????}. Since $\pi_1 N_n$ is not commensurable with any $3$-dimensional reflection group, the results of \cite{haglund_coxeter_????} do not apply to it. The proof of Theorem \ref{t: One-cusp examples} uses work of Goodman--Heard--Hodgson \cite{GHH} to explicitly describe the commensurator of $\pi_1 N_n$.
A rich class of manifolds that satisfy the hypotheses of Theorem \ref{rt ang cox} consists of the \textit{augmented links} introduced by Adams \cite{Adams}. Any link $L$ in $S^3$ with hyperbolic complement determines (not necessarily uniquely) an augmented link using a projection of $L$ which is \textit{prime} and \textit{twist-reduced}, by adding a ``clasp'' component encircling each crossing region. (See Section \ref{sec: augmented} for precise definitions.) Each link with hyperbolic complement admits a prime, twist-reduced diagram, and the augmented link obtained from such a diagram also has hyperbolic complement (cf.~\cite[Theorem 6.1]{Purcell_cusps}). Ian Agol and Dylan Thurston showed in an appendix to \cite{LAD} that each augmented link satisfies the hypotheses of Theorem \ref{rt ang cox}.
\begin{example}[Agol--Thurston] \label{fully augmented} Let $M$ be a complete hyperbolic manifold homeomorphic to the complement in $S^3$ of an augmented link. Then $M$ admits a decomposition into two isometric right-angled ideal polyhedra. \end{example}
In Section \ref{sec: augmented}, we describe another polyhedron, the ``\crush'', that distills the most important combinatorial features of the Agol--Thurston ideal polyhedral decomposition. We record criteria, in Lemmas \ref{symmetric links} and \ref{hidden symmetries}, that describe certain situations in which one may conclude that an augmented link complement is commensurable with the reflection orbifold in the associated right-angled polyhedron. Section \ref{sec:low cx augmented} describes the scissors congruence classification of the complements of augmented links with up to $5$ crossing regions. Finally, in Section \ref{sec:lobel} we prove:
\newtheorem*{Lobellthm}{Theorem \ref{Lobell thm}}
\begin{Lobellthm} There is a class of augmented links $L(n)$, $n \geq 3$, such that for all but finitely many $n$, $M(n) \doteq S^3 - L(n)$ is neither arithmetic nor commensurable with any $3$-dimensional hyperbolic reflection orbifold. Moreover, at most finitely many $M(n)$ occupy any single commensurability class. \end{Lobellthm}
The \crush s of the links of Theorem \ref{Lobell thm} are the famous L\"obell polyhedra. We believe that the behavior recorded in the theorem is generic among augmented links, but these are particularly amenable to analysis.
While this work was in preparation, we became aware of \cite{BHW} and \cite{B}, which provide other examples of virtually special hyperbolic manifolds. The former mostly concerns arithmetic lattices, while the latter deals with finite-sheeted covers of the 3-sphere that branch over the figure-eight knot.
\subsection*{Acknowledgements}
The authors would like to thank Ian Agol, Dave Futer, Alan Reid and Matthew Stover for useful conversations. Thanks also to Jack Button for confirming some of our Alexander polynomial computations, and to Jessica Purcell for a helpful reference to \cite{Purcell_cusps}. Finally, we thank the referee for a careful reading and helpful comments.
\section{Preliminaries}
\subsection{Cube complexes} \label{cube cx}
In this subsection we review relevant notions about cube complexes following the treatment of Haglund--Wise \cite{HW}. Another helpful reference is \cite{BrH}, particularly Chapters I.7 and II.5.
\begin{dfn}(\cite[Definition 2.1]{HW}) Let $I = [-1,1] \subset \mathbb{R}$. A \textit{cube complex} $X$ is a $CW$-complex such that each $k$-cell has a homeomorphism to $I^k \subset \mathbb{R}^k$ with the property that the restriction of the attaching map $\partial I^k \to X^{k-1}$ to each $(k-1)$-face of $\partial I^k$ is an isometry onto $I^{k-1}$ followed by the inclusion of a $(k-1)$-cell. A map $f \colon\thinspace X \to Y$ between cube complexes is \textit{combinatorial} if for each $k$-cell $\phi \colon\thinspace I^{k} \to X$, the map $f \circ \phi$ is the composition of an isometry of $I^k$ with a $k$-cell of $Y$. A \textit{square complex} is a $2$-dimensional cube complex, and we will refer by \textit{vertex}, \textit{edge}, or \textit{square} to the image in $X$ of a $0$-, $1$- or $2$-cell, respectively. \end{dfn}
Now let $X$ be a square complex. We will take the \textit{link} of the vertex $(1,1) \in I^2$ to be the line segment in $I^2$ joining $(0,1)$ to $(1,0)$ (the midpoints of the edges abutting $(1,1)$), and the link of another vertex $v$ to be the image of the link of $(1,1)$ under the symmetry taking it to $v$. The link of a vertex $v \in X$ is the $1$-complex obtained by joining the links of $v$ in the squares of $X$ attaching to it. We say $X$ is \textit{simple} if for each vertex $v$ there is a combinatorial map from the link of $v$ to a simplicial graph. In particular, if $X$ is simple then no two squares meet along consecutive edges.
We will say a square complex $X$ is \textit{nonpositively curved} if for each vertex $v$ in $X$, the link of $v$ does not contain any simple cycle with fewer than four edges. (We are taking Gromov's link condition as a definition; see, e.g., \cite[Ch.~II.5]{BrH} for a discussion.) In particular, $X$ is simple. If $X$ is simply connected and nonpositively curved, we will say $X$ is $\ensuremath\mathrm{CAT}(0)$. For a more general discussion, see \cite{BrH}, in particular Chapter II.5.
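For instance, the square complex obtained from a single square by identifying opposite edges (the standard cell structure on the torus, with one vertex $v$ and two edges) is nonpositively curved: the link of $v$ is a simple cycle with exactly four edges, one for each corner of the square.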
The notion of a \textit{hyperplane} is very important in defining ``special'' cube complexes. Here we will specialize the definition in \cite{HW} to square complexes.
\begin{dfn}(\cite[Definition 2.2]{HW}) The \textit{midlines} of $I^2$ are the subsets $I \times \{0\}$ and $\{0\} \times I$, each parallel to two edges of $I^2$. The \textit{center} of a square $\phi\colon\thinspace I^2 \to X$ is $\phi(0,0)$, and the \textit{midpoint} of an edge $\phi \colon\thinspace I \to X$ is $\phi(0)$. A midline of $I^2$ meets its two \textit{dual} edges perpendicularly at their midpoints.
Given a square complex $X$, we define a graph $Y$, the associated \textit{midline complex}, as follows. The $0$-cells of $Y$ are the midpoints of the edges of $X$, and the $1$-cells of $Y$ are midlines of squares of $X$, attached by the restrictions of the corresponding attaching maps. A \textit{hyperplane} of $X$ is a component of the associated midline complex $Y$. \end{dfn}
By the definition of the midline complex, each hyperplane $Y$ has an immersion into $X$, taking an edge to the midline of the square that contains it. Definition 3.1 of \cite{HW} describes the following pathologies of hyperplane immersions: \textit{self-intersection}, \textit{one-sidedness}, \textit{direct} or \textit{indirect self-osculation}, or \textit{inter-osculation}. If the hyperplanes of $X$ do not have any such pathologies, and its one-skeleton is bipartite, we will say that $X$ is $C$-\textit{special}.
The following theorem of Haglund--Wise is our main concern.
\begin{thm}[\cite{HW}, Lemma 4.3]\label{t: Haglund--Wise} Let $X$ be a $C$-special square complex. Then there exists a right-angled Coxeter group $W$, an injective homomorphism $\pi_1 X\hookrightarrow W$ and a $\pi_1 X$-equivariant, combinatorial, isometric embedding from the universal cover of $X$ into the Davis--Moussong complex of $W$. In particular, $\pi_1X$ is isomorphic to a word-quasiconvex subgroup of $W$ (with respect to the standard generating set).\end{thm}
The Davis--Moussong complex of a right-angled Coxeter group $W$ is a certain CAT(0) cube complex on which $W$ acts naturally. The reader is referred to \cite{HW} for the definition. A square complex $X$ is called \emph{virtually special} if $X$ has a $C$-special finite-sheeted covering space. To prove Theorem \ref{rt ang cox}, we will prove that $\pi_1 M$ is isomorphic to the fundamental group of a virtually special square complex.
We will find the notion of a regular neighborhood of a hyperplane from \cite{HW} useful.
\begin{dfn}
Let $Y\to X$ be a hyperplane of a square complex $X$. A \emph{(closed) regular neighborhood} for $Y$ is a cellular $I$-bundle $p:N\to Y$ equipped with a combinatorial immersion $j:N\to X$ such that the diagram
\[\xymatrix{
N\ar@{>}[d]^{p}\ar@{>}[rd]^{j}& \\
Y\ar@{>}[r] & X
}\]
commutes. (Here the $I$-bundle $N$ is given the obvious square-complex structure: the preimage of a vertex is an edge and the preimage of an edge is a square.)
\end{dfn}
Every hyperplane of a non-positively curved square complex has a regular neighborhood \cite[Lemma 8.2]{HW}. The $I$-bundle $p \colon\thinspace N \to Y$ has a section taking each $e \in Y^{(1)}$ to a midline of the square $p^{-1}(e)$. We regard $Y$ as embedded in $N$ via this section. In \cite[Definition 8.7]{HW}, the \textit{hyperplane subgroup} $\pi_1 Y < \pi_1 X$ is defined as the image of $j_*$ after an appropriate choice of basepoint.
\subsection{A standard square complex} \label{std square}
In this subsection we will take $M$ to be a complete hyperbolic $3$-manifold of finite volume, with an ideal polyhedral decomposition $\{\mathcal{P}_i\}$. For a pair of faces $f$ and $f'$ of polyhedra $\mathcal{P}_i$ and $\mathcal{P}_j$ such that $f' = \phi_f(f)$, we say that $f$ and $f'$ represent a face of the decomposition. Similarly, let $\{e_j\}_{j=1}^n$ be a sequence of edges of polyhedra $\mathcal{P}_{i_j}$ with the property that for each $j <n$, there is a face $f_j$ of $\mathcal{P}_{i_j}$ containing $e_j$ such that $\phi_{f_j}(e_j) = e_{j+1}$. Then we say the edges $e_j$ represent an edge of the decomposition.
For each $i$, let $\overline{\mathcal{P}}_i$ be the union of $\mathcal{P}_i$ with its ideal vertices. (In the Poincar\'e ball model for $\mathbb{H}^3$, the ideal vertices of $\mathcal{P}_i$ are its accumulation points on $\partial B^3$.) Each face pairing isometry $\phi_f$ induces a homeomorphism from $\bar{f}$, the union of $f$ with its ideal vertices, to $\bar{f}'$, where $f' = \phi_f(f)$.
The extended face pairings determine a cell complex $\c$ such that $M$ is homeomorphic to $\c - \c^{(0)}$. The $0$-cells of $\c$ are equivalence classes of ideal vertices under the equivalence relation generated by $v \sim \phi_f(v)$ for ideal vertices $v$ of faces $f$. The $1$- and $2$- cells of $\c$ are equivalence classes of edges and faces of the $\mathcal{P}_i$ under the analogous equivalence relation, and the $3$-cells are the $\overline{\mathcal{P}}_i$.
Let $\c'$ be the barycentric subdivision of the cell complex $\c$ associated to an ideal polyhedral decomposition. If $v$ is a vertex of a cell $\overline{\mathcal{P}}$ of $\c'$, the open star of $v$ in $\overline{\mathcal{P}}$ is the union of the interiors of the faces of $\overline{\mathcal{P}}$ containing $v$. The open star $\mathfrak{st}(v)$ of $v$ in $\c'$ is the union of the open stars of $v$ in the cells of $\c'$ containing it. Take $\mathfrak{st}(\c^{(0)})$ to be the disjoint union of the open stars in $\c'$ of the vertices of $\c$. Then $\mathcal{S}_0 \doteq \c' - \mathfrak{st}(\c^{(0)})$ is the unique subcomplex of $\c'$, maximal with respect to inclusion, with the property that $\mathcal{S}_0^{(0)} = (\c')^{(0)} - \c^{(0)}$.
A simplex of $\mathcal{S}_0$ is determined by its vertex set, which consists of barycenters of cells of $\c$. We will thus refer to each simplex of $\mathcal{S}_0$ by the tuple of cells of $\c$ whose barycenters are its vertices, in order of increasing dimension. For example, a simplex of maximal dimension is a triangle of the form $(\bar{e},\bar{f},\overline{\mathcal{P}}_i)$, where $e$ is an edge and $f$ a face of some ideal polyhedron $\mathcal{P}_i$ in the decomposition of $M$, with $e \subset f$.
\begin{lemma} \label{spine}
There is a cellular deformation retraction $\Phi$ taking $M$ to $|\mathcal{S}_0|$.
\end{lemma}
\proof Let $v$ be an ideal vertex of $\overline{\mathcal{P}}_i$, and let $U$ be the open star in $\c'$ of the equivalence class of $v$ in $\c^{(0)}$. Let $U_0$ be the component of $U \cap \overline{\mathcal{P}}_i$ containing $v$. Then $\overline{U}_0$ is homeomorphic to the cone to $v$ of its frontier in $\overline{\mathcal{P}}_i$, a union of triangles of $\mathcal{S}_0$. Hence there is a ``straight line'' deformation retraction of $\overline{U}_0 - \{v\}$ to its frontier. These may be adjusted to match up along faces of the $\mathcal{P}_i$, determining $\Phi$.
\endproof
The standard square complex is obtained by taking a union of faces of $\mathcal{S}_0$.
\begin{dfn} Let $M$ be a complete hyperbolic $3$-manifold with a decomposition into ideal polyhedra $\{\mathcal{P}_i\}$, with associated cell complex $\c$ such that $M \cong \c - \c^{(0)}$, and let $\mathcal{S}_0 = \c' - \mathfrak{st}(\c^{(0)})$, where $\c'$ is the first barycentric subdivision of $\c$. Define the \textit{standard square complex} $\mathcal{S}$ associated to $\{\mathcal{P}_i\}$, with underlying topological space $|\mathcal{S}| = |\mathcal{S}_0|$, as follows: $\mathcal{S}^{(0)} = \mathcal{S}_0^{(0)}$, $\mathcal{S}^{(1)} = \mathcal{S}_0^{(1)} - \{(\bar{e},\overline{\mathcal{P}}_i)\,|\, e \subset \mathcal{P}_i\}$, and $\mathcal{S}^{(2)} = \{ (\bar{e},\bar{f},\overline{\mathcal{P}}_i) \cup (\bar{e},\bar{g},\overline{\mathcal{P}}_i)\,|\, f,g \subset \mathcal{P}_i\ \mbox{and}\ f\cap g = e\}$. \end{dfn}
Since each $2$-dimensional face $(\bar{e},\bar{f},\overline{\mathcal{P}}_i) \cup (\bar{e},\bar{g},\overline{\mathcal{P}}_i)$ of $\mathcal{S}$ is the union of two triangles of $\mathcal{S}_0$ which meet along the edge $(\bar{e},\overline{\mathcal{P}}_i)$, it may be naturally identified with a square. Furthermore, since the edges in $\mathcal{S}_0^{(1)} - \mathcal{S}^{(1)}$ are exactly those of the form $(\bar{e},\overline{\mathcal{P}}_i)$, $\mathcal{S}$ has the structure of a cell complex.
\begin{lemma}\label{bipartite} Let $\mathcal{S}$ be the standard square complex associated to an ideal polyhedral decomposition $\{\mathcal{P}_i\}$. Then $\mathcal{S}^{(1)}$ is bipartite. \end{lemma}
\begin{proof} By definition, the vertices of $\mathcal{S}$ are barycenters of cells of the cell complex $\c$ associated to $\{\mathcal{P}_i\}$. We divide them into two classes by parity of dimension. An edge of $\mathcal{S}$ is of the form $(\bar{f},\overline{\mathcal{P}}_i)$ for some $i$, where $f$ is a face of $\mathcal{P}_i$, or $(\bar{e},\bar{f})$, where $e$ is an edge and $f$ a face of some polyhedron. In either case, the endpoints belong to different classes. \end{proof}
Say a cell of $\mathcal{S}$ is \textit{external} if it is contained in $\mathcal{S} \cap \c^{(2)}$, and \textit{internal} otherwise. Each square of $\mathcal{S}$ has two adjacent external edges, of the form $(\bar{e},\bar{f})$ and $(\bar{e},\bar{f}')$ in the notation above, and two internal edges $(\bar{f},\overline{\mathcal{P}}_i)$ and $(\bar{f}',\overline{\mathcal{P}}_i)$. In particular, each external edge of each square is opposite an internal edge, and vice-versa.
\begin{lemma} \label{external_spine} As one-subcomplexes, $\mathcal{S} \cap \c^{(2)} = (\c^{(2)})' - \mathfrak{st}(\c^{(0)})$, where $(\c^{(2)})'$ is the barycentric subdivision of $\c^{(2)}$. In particular, $\Phi$ restricts to a deformation retraction from $\bigcup_i \partial \mathcal{P}_i$ to $|\mathcal{S} \cap \c^{(2)}|$. \end{lemma}
\proof By definition $\mathcal{S} \cap \c^{(2)} = \mathcal{S}_0 \cap \c^{(2)}$, whence the first claim of the lemma follows. The second claim now holds because $\Phi$ is cellular. \endproof
\begin{lemma} \label{two-sided} Suppose $H$ is a hyperplane of the standard square complex associated to an ideal polyhedral decomposition $\{\mathcal{P}_i\}$ of a complete hyperbolic $3$-manifold $M$, and let $p \colon\thinspace N \to H$ be the regular neighborhood of $H$. $N$ has boundary components $\partial_e N$ and $\partial_i N$, mapped by $j \colon\thinspace N \to M$ to a union of external and internal edges, respectively. \end{lemma}
\begin{proof} Let $s$ be a square of $\mathcal{S}$. The vertices of $s$ are the barycenters of $\bar{e}$, $\bar{f}$, $\bar{g}$, and $\overline{\mathcal{P}}_i$, where $\mathcal{P}_i$ is a polyhedron in the decomposition of $M$, $e$ is an edge of $\mathcal{P}_i$, and $f$ and $g$ are the faces of $\mathcal{P}_i$ intersecting in $e$. One midline of $s$ has vertices on the midpoints of the opposite edges $(\bar{e},\bar{f})$ and $(\bar{g},\overline{\mathcal{P}}_i)$ of $s$, and the other has vertices on the midpoints of $(\bar{f},\overline{\mathcal{P}}_i)$ and $(\bar{e},\bar{g})$. Take $H$ to be the hyperplane containing the midline $m$ with vertices on $(\bar{e},\bar{f})$ and $(\bar{g},\overline{\mathcal{P}}_i)$.
Let $s_0 = p^{-1}(m) \subset N$; then $s_0$ is a square which $j$ maps homeomorphically to $s$. The edges of $s_0 \cap \partial N$ are mapped by $j$ to the edges of $s$ parallel to $m$. These are $(\bar{f},\overline{\mathcal{P}}_i)$, which is internal, and $(\bar{e},\bar{g})$, which is external. Let $b_i$ be the edge mapped to $(\bar{f},\overline{\mathcal{P}}_i)$ by $j$, let $b_e$ be mapped to $(\bar{e},\bar{g})$, and let $\partial_i N$ and $\partial_e N$ be the components of $\partial N$ containing $b_i$ and $b_e$, respectively. It is \textit{a priori} possible that $\partial_i N = \partial_e N$, but we will show that $\partial_i N$ (respectively, $\partial_e N$) is characterized by the fact that its edges map to internal (resp.\ external) edges of $\mathcal{S}$.
Let $s_1$ be a square of $N$ adjacent to $s_0$. Then the edge $m_1 \doteq p(s_1)$ of $H$ is the midline of the square $s' = j(s_1)$ adjacent to $s$. Suppose first that $s'$ meets $s$ along the external edge $(\bar{e},\bar{f})$. Then there is a polyhedron $\mathcal{P}_j$ of the decomposition with a face $f'$ and edge $e' \subset f'$ with $\phi_f(f) = f'$ and $\phi_f(e)=e'$ (i.e., $f$ and $f'$ represent the same face of the decomposition of $M$, and $e$ and $e'$ the same edge), such that the vertices of $s'$ are the barycenters of $\bar{e}'$, $\bar{f}'$, $\bar{g}_1$, and $\overline{\mathcal{P}}_j$. Here $g_1$ is the other face of $\mathcal{P}_j$ containing $e'$.
Since $m_1$ meets $m$, it has an endpoint at the midpoint of $(\bar{e}',\bar{f}')$, which is identified with $(\bar{e},\bar{f})$ in $M$. Then the other endpoint of $m_1$ is on the opposite edge $(\bar{g}_1,\overline{\mathcal{P}}_j)$ of $s'$. The external edge $(\bar{e}',\bar{g}_1)$ of $s'$ which is parallel to $m_1$ meets the external edge $(\bar{e},\bar{g})$ of $s$ at the barycenter of the edge of the decomposition represented by $e$ and $e'$. It follows that $j$ maps the edge of $s_1 \cap \partial N$ adjacent to $b_e$ to $(\bar{e},\bar{g})$. Likewise, the edge of $s_1 \cap \partial N$ adjacent to $b_i$ is mapped to the internal edge $(\bar{f}',\overline{\mathcal{P}}_j)$ of $s'$.
Now suppose $s'$ meets $s$ along the internal edge $(\bar{g},\overline{\mathcal{P}}_i)$. Then there is an edge $e_1$ of $g$ such that the vertices of $s'$ are the barycenters of $\bar{e}_1$, $\bar{g}$, $\bar{f}_1$, and $\overline{\mathcal{P}}_i$. Here $f_1$ is the other face of $\mathcal{P}_i$ containing $e_1$. Then $m_1$ meets $m$ at the midpoint of $(\bar{g},\overline{\mathcal{P}}_i)$. Since $b_e$ is mapped by $j$ to $(\bar{e},\bar{g})$, the edge of $s_1 \cap \partial N$ adjacent to it is mapped to the external edge $(\bar{e}_1, \bar{g})$. It follows that the other edge of $s_1 \cap \partial N$ is mapped to the internal edge $(\bar{f}_1,\overline{\mathcal{P}}_i)$ of $s'$ parallel to $m_1$.
The above establishes that the union of the edges of $\partial_i N$ mapped to internal edges of $\mathcal{S}$ is open and nonempty in $\partial_i N$. Since it is clearly also closed, it is all of $\partial_i N$. An analogous statement holds for $\partial_e N$, establishing the lemma.
\end{proof}
It is occasionally useful to think of the standard square complex associated to an ideal polyhedral decomposition as a subdivision of the ``dual two-complex''. If $\c$ is the cell complex associated to the ideal polyhedral decomposition $\{\mathcal{P}_i\}$, let $D\c$ be the two-complex with a vertex at the barycenter of each $3$-cell of $\c$, for each $f \in \c^{(2)}$ an edge $Df$ crossing $f$, and for each $e \in \c^{(1)}$ a face $De$ crossed by $e$. The standard square complex $\mathcal{S}$ is obtained from $D\c$ by dividing each face along its intersections with the $2$-cells of $\c$ which meet at the edge.
\begin{lemma} \label{nonpos} Suppose $\{\mathcal{P}_i\}$ is a decomposition of $M$ into right-angled ideal polyhedra. The standard square complex $\mathcal{S}$ associated to $\{\mathcal{P}_i\}$ is non-positively curved.
\end{lemma}
\proof Recall that $\mathcal{S}$ is non-positively curved if and only if in the link of any vertex, each simple cycle has length at least $4$. If $v$ is a vertex of $\mathcal{S}$, a simple cycle of length $k$ in the link of $v$ is a sequence of squares $s_0,s_1, \hdots,s_{k-1}$ with the following properties: for each $i$ there is an edge $e_i$ with $v \subset e_i \subset s_i \cap s_{i+1}$ (taking $i+1$ modulo $k$), and $s_i \neq s_j$ and $e_i \neq e_j$ when $i \neq j$.
Since the decomposition $\{\mathcal{P}_i\}$ is into right-angled polyhedra, the dual two-complex $D\c$ described above the lemma is a square complex. This follows from the fact that each edge of $\c$ is contained in four faces of $\c$. We will show that $D\c$ is non-positively curved; since $\mathcal{S}$ is a subdivision of $D\c$, it will follow that $\mathcal{S}$ is non-positively curved.
Suppose $v$ is a vertex of $D\c$, and let $\{De_0,\hdots,De_{k-1}\}$ be a simple cycle in the link of $v$ in $D\c$. The associated sequence of edges $\{ Df_0, \hdots,Df_{k-1}\}$ determines a sequence of distinct faces $\{f_0,\hdots, f_{k-1}\}$ of the polyhedron $\mathcal{P}_i$ containing $v$, each meeting the next in an edge. It follows immediately from the necessary conditions of Andreev's theorem \cite{Andreev} and the fact that $\mathcal{P}_i$ is right-angled that every such cycle has length at least four. The conclusion of the lemma follows.
\endproof
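As a brief aside, not needed in the sequel, we note that the bound of Lemma \ref{nonpos} is sharp. Since every dihedral angle of $\mathcal{P}_i$ is $\pi/2$, the link of each ideal vertex $v$ of $\mathcal{P}_i$ is a Euclidean rectangle, so exactly four faces $f_0$, $f_1$, $f_2$, $f_3$ of $\mathcal{P}_i$ meet at $v$, each meeting the next along an edge with ideal endpoint $v$. The corresponding faces of $D\c$ form a simple cycle of length exactly four in the link of the vertex dual to $\mathcal{P}_i$:
\[
\{De_0, De_1, De_2, De_3\}, \qquad e_j = f_j \cap f_{j+1} \quad (\text{indices mod } 4).
\]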
\section{Totally geodesic hyperplane groups} \label{sec:geodesic hyperplanes}
Fix an orientable, complete hyperbolic manifold $M = \mathbb{H}^3/\Gamma$ of finite volume, equipped with a decomposition $\{\mathcal{P}_i\}$ into right-angled ideal polyhedra. Here we have identified $M$ with the quotient of $\mathbb{H}^3$ by a discrete group of isometries $\Gamma$, thus identifying $\pi_1M$ with $\Gamma$. Let $\mathcal{S}$ be the standard square complex associated to the polyhedral decomposition as in Section \ref{std square}. The goal of this section is, for each hyperplane $H \to \mathcal{S}$, to identify a totally geodesic surface immersed in $M$ which ``carries'' $H$.
Since each $\mathcal{P}_i$ is right-angled and the angle in $M$ around each edge is $2\pi$, the equivalence class of each edge has four members. If $f$ represents a face of the decomposition and $e$ an edge of $f$, define the {\it flat $e$-neighbor of $f$} to be the face of the decomposition that meets $f$ at angle $\pi$ along $e$ in $M$.
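For concreteness, the count of four may be recovered from the angle sum: each polyhedral edge in the equivalence class of $e$ contributes a dihedral angle of $\pi/2$, and these angles must fill the total angle $2\pi$ around the corresponding edge of $M$, so the class has
\[
\frac{2\pi}{\pi/2} = 4
\]
members.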
If $\mathcal{P}_i$ is the polyhedron containing $f$, let $g$ be the other face of $\mathcal{P}_i$ containing $e$. Let $g' = \phi_{g}(g)$, a face of some polyhedron $\mathcal{P}_j$, and let $e' = \phi_{g}(e)$. Then $e$ and $e'$ represent the same edge of the decomposition, and the flat $e$-neighbor of $f$ is represented by the face $f_1$ of $\mathcal{P}_j$ which intersects $g'$ along $e'$. Let $\Sigma_f$ be the collection of faces of the decomposition, minimal with respect to inclusion, satisfying the properties below.
\begin{enumerate}
\item $f \in \Sigma_f$, and
\item if $g\in \Sigma_f$ and $e$ is an edge of $g$, then every flat $e$-neighbor of $g$ is in $\Sigma_f$.
\end{enumerate}
Note that if $g \in \Sigma_f$ then $\Sigma_f = \Sigma_g$. Furthermore, for any such $g$ there is a sequence $\{f = f_0, f_1, \hdots, f_n = g\}$ such that for each $i>0$ there is an edge $e_i$ with $f_i$ a flat $e_i$-neighbor of $f_{i-1}$. Call such a sequence a {\it path of flat neighbors.}
Now let $\widehat{\Sigma}_f$ be the quotient of $\Sigma_f$ by the following edge pairings: if $g$ represents an element of $\Sigma_f$ and $e$ is an edge of $g$, glue $g$ to its flat $e$-neighbor $g'$ by the restriction of the face pairing isometry $\phi_{g}$ described above. Since each face of the decomposition has a unique flat $e$-neighbor along each of its edges, $\widehat{\Sigma}_f$ is topologically a surface without boundary. It is connected, since any two faces in $\Sigma_f$ are connected by a path of flat neighbors, and it inherits a hyperbolic structure from its faces, since the edge gluing maps are isometries.
The inclusion maps of faces $\{ g \hookrightarrow \mathcal{P}_i\, |\, g \subset \mathcal{P}_i,\ g \in \Sigma_f\}$ determine an immersion from $\widehat{\Sigma}_f$ to $\bigsqcup_i \mathcal{P}_i/ \sim$. This is not necessarily an embedding because the preimage of an edge may consist of two edges of $\widehat{\Sigma}_f$, each mapped homeomorphically. However, by construction it is a local isometry.
\begin{lemma} \label{lem:geodesic_immersion} Let $i \colon\thinspace \widehat{\Sigma}_f \to M$ be the composition of the inclusion-induced map to $\bigsqcup_i \mathcal{P}_i/\sim$ with the isometry to $M$. Then $i$ is a proper immersion which maps onto its image with degree one.
\end{lemma}
\begin{proof} If $g$ is a face of $\mathcal{P}_i$, the inclusion $g \hookrightarrow \mathcal{P}_i$ is proper by definition. Since the collection $\{\mathcal{P}_i\}$ is finite, it follows that $i$ is proper. By construction, the interior of each face in $\Sigma_f$ is mapped homeomorphically by $i$, thus it has degree one onto its image. \end{proof}
Since the map $i \colon\thinspace \widehat{\Sigma}_f \to M$ is a proper local isometry and $M$ is complete, the hyperbolic structure on $\widehat{\Sigma}_f$ is complete. Since it is contained in the union of finitely many polygons of finite area, $\widehat{\Sigma}_f$ has finite area. Choosing an isometric embedding of $f$ in $\mathbb{H}^2$ thus determines a developing map identifying the universal cover of $\widehat{\Sigma}_f$ with $\mathbb{H}^2$, and identifying $\pi_1 \widehat{\Sigma}_f$ with a subgroup $\Gamma_f$ of $\mathrm{Isom}(\mathbb{H}^2)$.
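As an aside, the finite-area claim can be made explicit via a standard Gauss--Bonnet computation: each face $g \in \Sigma_f$ is an ideal hyperbolic polygon, and an ideal polygon with $n_g$ vertices has area $(n_g - 2)\pi$, since all of its interior angles vanish. Since the edge gluings respect area,
\[
\operatorname{Area}\bigl(\widehat{\Sigma}_f\bigr) \;=\; \sum_{g \in \Sigma_f} (n_g - 2)\pi,
\]
which is finite because $\Sigma_f$ is a finite collection of faces of the decomposition.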
Now fix a component $\tilde{f}$ of the preimage of $i(f)$ under the universal cover $\mathbb{H}^3 \rightarrow M$. This choice determines a lift $\tilde{\imath} \colon\thinspace \mathbb{H}^2 \to \mathbb{H}^3$ of $i \colon\thinspace \widehat{\Sigma}_f \to M$, equivariant with respect to the actions of $\Gamma_f$ on $\mathbb{H}^2$ and $i_* (\pi_1 \widehat{\Sigma}_f)$ on $\mathbb{H}^3$.
\begin{lemma} \label{stabilizer} Let $\mathcal{H}$ be the geodesic hyperplane of $\mathbb{H}^3$ containing $\tilde{f}$. Then $\tilde{\imath}$ maps $\mathbb{H}^2$ isometrically onto $\calh$, and $i_*$ takes $\pi_1 \widehat{\Sigma}_f$ isomorphically onto $\mathrm{Stab}_{\Gamma}(\calh)$. \end{lemma}
\begin{proof} Since $i$ is a local isometry, $\tilde{\imath}$ maps $\mathbb{H}^2$ isometrically onto the geodesic hyperplane in $\mathbb{H}^3$ containing $\tilde{\imath}(f) = \tilde{f}$, hence onto $\calh$. Since $\pi_1 \widehat{\Sigma}_f$ acts faithfully on $\mathbb{H}^2$ by isometries, its action on $\calh$, and hence on all of $\mathbb{H}^3$, is also faithful. If $i_*(\pi_1 \widehat{\Sigma}_f)$ were properly contained in $\mathrm{Stab}_{\Gamma}(\calh)$, the map $i$ would factor through the covering map $\calh/i_*(\pi_1 \widehat{\Sigma}_f) \to \calh/\mathrm{Stab}_{\Gamma}(\calh)$, contradicting the fact that $i$ maps onto its image with degree one.
\end{proof}
Let us now take $\Gamma_f = i_* (\pi_1\widehat{\Sigma}_f)$ and $\widehat{M}_f = \mathbb{H}^3/\Gamma_f$. By Lemma \ref{stabilizer}, $i \colon\thinspace \widehat{\Sigma}_f \to M$ lifts to an embedding $\hat{\imath}$ to $\widehat{M}_f$, such that $\widehat{M}_f$ is homeomorphic to $\hat{\imath}(\widehat{\Sigma}_f) \times \mathbb{R}$. We thus obtain the following diagram.
\[ \xymatrix{ \mathbb{H}^2 \ar[d] \ar[r]^{\tilde{\imath}} & \mathbb{H}^3 \ar[d] \\
\widehat{\Sigma}_f \cong \mathbb{H}^2/\Gamma_f \ar[dr]_{i} \ar[r]^{\hat{\imath}} & \widehat{M}_f := \mathbb{H}^3/\Gamma_f \ar[d] \\
& M}
\]
Below we will write $\widehat{\Sigma}_f \subset \widehat{M}_f$ for the image of $\hat{\imath}$.
\begin{dfn} Let $M$ be a complete, orientable, hyperbolic $3$-manifold of finite volume equipped with a decomposition $\{\mathcal{P}_i\}$ into right-angled ideal polyhedra, and suppose $H$ is a hyperplane of the associated square complex, with regular neighborhood $(N,p,j)$. Choose a midline $m$ of $H$, let $s = p^{-1}(m)$, and let $\mathcal{P}_i$ contain $j(s)$. There is a unique face $f$ of $\mathcal{P}_i$ containing the external edge of $j(s\cap \partial N)$, and we define $\widehat{\Sigma}(H) = \widehat{\Sigma}_f$, $\Gamma(H) = \Gamma_f$, and $\widehat{M}(H) = \widehat{M}_f$. \end{dfn}
\begin{lemma} \label{lem:nbhd_lift} Using notation from the definition above, let $\widehat{\mathcal{S}}$ be the standard square complex associated to the decomposition that $\widehat{M}(H)$ inherits from $\{\mathcal{P}_i\}$. Then $j \colon\thinspace N \to \mathcal{S}$ lifts to an immersion $\hat{\jmath}$ to $\widehat{\mathcal{S}}$, taking $\partial_e N$ to a spine of $\widehat{\Sigma}(H)$, such that $\hat{\jmath}|_{\partial_e N}$ is an embedding if $\widehat{\Sigma}(H)$ is orientable, and a two-to-one cover if not. \end{lemma}
\begin{cor} \label{geod sfces carry} If $\widehat{\Sigma}(H)$ is orientable, $\pi_1 H = \Gamma(H)$; otherwise $\pi_1 H$ is the index-two orientation-preserving subgroup of $\Gamma(H)$. \end{cor}
\begin{proof}[Proof of Lemma \ref{lem:nbhd_lift}] Suppose $m_0$ and $m_1$ are two adjacent midlines of $H$, and let $s_0 = p^{-1}(m_0)$ and $s_1 = p^{-1}(m_1)$ in $N$. Take $\mathcal{P}_{i_0}$ and $\mathcal{P}_{i_1}$ to be the polyhedra containing $j(s_0)$ and $j(s_1)$, respectively, and let $f_0$ be the face of $\mathcal{P}_{i_0}$ and $f_1$ the face of $\mathcal{P}_{i_1}$ containing $j(s_0 \cap \partial_e N)$ and $j(s_1 \cap \partial_e N)$. If $m_0$ meets $m_1$ at the midpoint of an internal edge of $\mathcal{S}$, it is clear that $\mathcal{P}_{i_0} = \mathcal{P}_{i_1}$ and $f_0 = f_1$.
If $m_0$ meets $m_1$ in an external edge of $\mathcal{S}$, then $\mathcal{P}_{i_0}$ and $\mathcal{P}_{i_1}$ abut in $M$ along a face of the decomposition. Let $g \subset \mathcal{P}_{i_0}$ represent this face of the decomposition. Then $g$ and $f_0$ meet along an edge $e$, and $g' = \phi_g(g) \subset \mathcal{P}_{i_1}$ and $f_1$ meet along $e' = \phi_g(e)$. Hence if $m_0$ meets $m_1$ in an external edge of $\mathcal{S}$, there is an edge $e$ of the decomposition of $M$ such that $f_0$ and $f_1$ represent flat $e$-neighbors. It follows that a sequence of edges $m_0,m_1,\hdots,m_k$ of $H$, with the property that $m_i$ is adjacent to $m_{i-1}$ for each $i>0$, determines a path of flat neighbors in $\Sigma(H)$. Therefore $j$ maps $\partial_e N$ into $i(\widehat{\Sigma}(H))$.
Now let $f$ be a face of some polyhedron $\mathcal{P}_i$ representing a face of $\Sigma(H)$. The cover $\widehat{M}(H)$ inherits a polyhedral decomposition from that of $M$, and since the covering map is injective on a neighborhood of $\hat{\imath}(f)$, there is a unique polyhedron $\widehat{\mathcal{P}}_i$ of this decomposition with the property that $\widehat{\mathcal{P}}_i$ projects to $\mathcal{P}_i$ and contains $\hat{\imath}(f)$.
For a square $s$ of $N$, we thus define $\hat{\jmath}(s)$ to be the component of the preimage of $j(s)$ contained in $\widehat{\mathcal{P}}_i$, where $\mathcal{P}_i$ is the polyhedron containing $j(s)$.
Suppose $\mathcal{P}_{i_0}$ and $\mathcal{P}_{i_1}$ contain faces $f_0$ and $f_1$, respectively, each representing a face of $\Sigma(H)$, which are flat $e$-neighbors for some edge $e$. Let $g \subset \mathcal{P}_{i_0}$ satisfy $g \cap f_0 = e$ and $\phi_g(e) = \phi_g(g) \cap f_1 \subset \mathcal{P}_{i_1}$. Since $\hat{\imath}(f_0)$ and $\hat{\imath}(f_1)$ meet in $\widehat{M}(H)$ along the preimage of $e$, $\widehat{\mathcal{P}}_{i_0}$ and $\widehat{\mathcal{P}}_{i_1}$ meet along the face represented by the preimage of $g$. For adjacent squares $s_0$ and $s_1$ in $N$, it follows that if $j(s_0)$ and $j(s_1)$ meet along an external edge of $\mathcal{S}$, then $\hat{\jmath}(s_0)$ and $\hat{\jmath}(s_1)$ meet along an external edge of $\widehat{\mathcal{S}}$.
If $s_0$ and $s_1$ are adjacent squares of $N$ such that $j(s_0)$ meets $j(s_1)$ in an internal edge of $\mathcal{S}$ contained in a polyhedron $\mathcal{P}_i$, then $\hat{\jmath}(s_0)$ meets $\hat{\jmath}(s_1)$ in $\widehat{\mathcal{P}}_i$. Thus $\hat{\jmath}$ is continuous. Since $j$ is an immersion, $\hat{\jmath}$ is an immersion as well. We claim $\hat{\jmath}$ maps $\partial_e N$ onto $\widehat{\mathcal{S}} \cap \widehat{\Sigma}(H)$.
Since $\hat{\jmath}$ is continuous, the image of $\partial_e N$ is closed in $\widehat{\mathcal{S}} \cap \widehat{\Sigma}(H)$. Now suppose $e_0$ and $e_1$ are adjacent edges of $\widehat{\mathcal{S}} \cap \widehat{\Sigma}(H)$ such that $e_0 \subset \hat{\jmath}(\partial_e N)$. Let $s_0 \subset N$ be a square such that $\hat{\jmath}(s_0)$ contains $e_0$, and let $m_0 = p(s_0)$ be a midline of $j(s_0)$. There is a square $s$ of $\mathcal{S}$, containing the projection of $e_1$ to $M$, such that $s \cap j(s_0)$ is a union of edges containing the projection of $e_0 \cap e_1$. Let $m_1$ be the midline of $s$ meeting $m_0$; then $m_1 \in H$, so by definition $s_1 = p^{-1}(m_1)$ is mapped by $j$ to $s$. Now from the above it follows that $\hat{\jmath}(s_1)$ contains $e_1$. This implies that $\hat{\jmath}(\partial_e N)$ is open in $\widehat{\mathcal{S}} \cap \widehat{\Sigma}(H)$ and proves the claim.
Lemma \ref{external_spine} implies that $\widehat{\mathcal{S}} \cap \widehat{\Sigma}(H)$ is a spine for $\widehat{\Sigma}(H)$, hence $\hat{\jmath}$ maps $\partial_e N$ onto a spine of $\widehat{\Sigma}(H)$. Each square $s \subset N$ has the property that $s \cap \partial_e N$ is the unique edge of $s$ mapped by $\hat{\jmath}$ into $\widehat{\Sigma}(H)$. For let $f \in \Sigma(H)$ be the face of $\mathcal{P}_i$ containing $j(s \cap \partial_e N)$, where $\mathcal{P}_i$ contains $j(s)$, let $g$ be the face containing the other external edge of $j(s)$, and let $f_1$ be the flat $e$-neighbor of $f$, where $e = f \cap g$. Then $\hat{\imath}(f)$ and $\hat{\imath}(f_1)$ are in $\widehat{\Sigma}(H)$. If the face $\hat{g}$ of $\widehat{\mathcal{P}}_i$ projecting to $g$ were also in $\widehat{\Sigma}(H)$, $\hat{\imath}$ would not be an embedding.
Now suppose $\hat{\jmath}(s_0) = \hat{\jmath}(s_1)$ for squares $s_0$ and $s_1$ of $N$. By the property above, there is an edge $e$ of $\widehat{\mathcal{S}} \cap \widehat{\Sigma}(H)$ such that $\hat{\jmath}(s_0 \cap \partial_e N) = e = \hat{\jmath}(s_1 \cap \partial_e N)$. It follows that $j$ maps the external edge of each of $s_0$ and $s_1$ to the projection of $e$ in $M$. By definition, $p(s_0)$ is the midline of $j(s_0)$ parallel to $j(s_0 \cap \partial_e N)$, and the same holds true for $s_1$. Thus $p(s_0) = p(s_1)$, so $s_0 = s_1$.
The paragraph above implies that $\hat{\jmath}|_{\partial_e N}$ is at worst two-to-one, since each external edge of $\widehat{\mathcal{S}}$ is contained in exactly two squares. Since $\widehat{M}(H)$ is orientable, if $\widehat{\Sigma}(H)$ is orientable as well, then it divides any sufficiently small regular neighborhood into two components. Since $N$ is connected and $\hat{\jmath}$ is continuous, in this case its image is on one side of $\widehat{\Sigma}(H)$, so $\hat{\jmath}|_{\partial_e N}$ is an embedding.
If $\widehat{\Sigma}(H)$ is nonorientable, then a regular neighborhood is connected. Thus in this case, for any edge $e$ of $\widehat{\mathcal{S}} \cap \widehat{\Sigma}(H)$, both squares containing $e$ are in the image of $\hat{\jmath}$, and the restriction to $\partial_e N$ maps two-to-one.
\end{proof}
The final result of this section characterizes some behaviors of hyperplanes of $\mathcal{S}$ in terms of the behavior of their associated totally geodesic surfaces. Below we say distinct hyperplanes $H_1$ and $H_2$ are \textit{parallel} if $\Sigma(H_1) = \Sigma(H_2)$.
\begin{prop} \label{hyperplanes vs surfaces} Let $M$ be a complete, orientable hyperbolic $3$-manifold equipped with a decomposition $\{\mathcal{P}_i\}$ into right-angled ideal polyhedra, with associated standard square complex $\mathcal{S}$, and let $H_1$ and $H_2$ be hyperplanes of $\mathcal{S}$. If $H_1$ osculates $H_2$ along an external edge of $\mathcal{S}$, then either \begin{enumerate}
\item $H_1 = H_2$ and $\widehat{\Sigma}(H_1)$ is nonorientable; or
\item $H_1$ and $H_2$ are parallel and $\widehat{\Sigma}(H_1)=\widehat{\Sigma}(H_2)$ is orientable. \end{enumerate}
$H_1$ intersects $H_2$ if and only if $i(\widehat{\Sigma}(H_1))$ intersects $i(\widehat{\Sigma}(H_2))$ at right angles. \end{prop}
\begin{proof} Suppose $H_1$ osculates $H_2$ along an external edge $e$. Then there are squares $s_1$ and $s_2$ of $\mathcal{S}$ intersecting along $e$, such that the midline $m_1$ of $s_1$ parallel to $e$ is in $H_1$, and the midline $m_2 \subset s_2$ parallel to $e$ is in $H_2$. If $f$ is the face of the decomposition containing $e$, then by definition $f \in \Sigma(H_1)$ and $f \in \Sigma(H_2)$. Since $s_1$ and $s_2$ are on opposite sides of $f$ in $M$, Lemma \ref{lem:nbhd_lift} implies alternatives $\mathit{1}$ and $\mathit{2}$.
Suppose $H_1$ intersects $H_2$ in a square $s$ contained in some polyhedron $\mathcal{P}_i$, and for $j = 0,1$ let $m_j$ be the midline of $s$ in $H_j$. For each $j$, there is a unique external edge $e_j$ of $s$ parallel to $m_j$. By definition, the faces $f_1$ and $f_2$ of $\mathcal{P}_i$ containing $e_1$ and $e_2$ are contained in $\Sigma(H_1)$ and $\Sigma(H_2)$, respectively. Since $\mathcal{P}_i$ is right-angled they meet at right angles, establishing the lemma. \end{proof}
\section{Embedding in Coxeter groups} \label{sec:separability}
Let $M = \mathbb{H}^3/\Gamma$ be a complete, orientable hyperbolic $3$-manifold of finite volume, equipped with a decomposition $\{\mathcal{P}_i\}$ into right-angled ideal polyhedra. In this section we describe separability properties of hyperplane subgroups which allow pathologies to be removed in finite covers of $M$.
If $H$ is a subgroup of a group $G$, we say $H$ is \textit{separable} in $G$ if for each $g \in G - H$ there is a subgroup $K$, of finite index in $G$, such that $H <K$ and $g \notin K$. The separability result needed for the proof of Theorem 1.1 follows from \cite[Lemma 1]{Long} and extends its conclusion to a slightly more general class of subgroups.
\newcommand\Longlemma{Let $M = \mathbb{H}^3/\Gamma$ be a complete, orientable hyperbolic $3$-manifold with finite volume, and let $\calh \subset \mathbb{H}^3$ be a hyperplane such that $\mathrm{Stab}_{\Gamma}(\calh)$ acts on $\calh$ with finite covolume. Then the subgroup of $\mathrm{Stab}_{\Gamma}(\calh)$ that acts preserving an orientation of $\calh$ is separable in $\Gamma$.}
\begin{lemma}[Cf.\ \cite{Long} Lemma 1] \label{Long} \Longlemma \end{lemma}
\theoremstyle{plain} \newtheorem*{Longlemma_lemma}{Lemma \ref{Long}}
\begin{proof}
It follows from \cite[Lemma 1]{Long} that $\mathrm{Stab}_\Gamma(\calh)$ is separable. It remains to consider the case in which $\mathrm{Stab}_\Gamma(\calh)$ contains elements that reverse orientation on $\calh$, and to show that its orientation-preserving subgroup is separable.
As in \cite[Theorem 1]{Long}, there is a finite-sheeted covering $M'\to M$ such that the immersed surface $\calh/\mathrm{Stab}_\Gamma(\calh)$ lifts to an embedded surface $\Sigma$ in $M'$. Since $\mathrm{Stab}_\Gamma(\calh)$ contains orientation-reversing elements, $\Sigma$ is nonorientable, and because $M'$ is orientable, the surface $\Sigma$ is one-sided. Let $N$ be a closed regular neighborhood of $\Sigma$ and let $M_0$ be the complement of the interior of $N$ in $M'$. The boundary of $N$ is homeomorphic to $\widetilde\Sigma$, the orientable double cover of $\Sigma$.
The neighborhood $N$ has the structure of a twisted interval bundle over $\Sigma$, so $\pi_1N\cong\pi_1\Sigma$. The double cover $\widetilde N$ of $N$ obtained by pulling back the bundle structure along the covering map $\widetilde\Sigma\to\Sigma$ is an orientable interval bundle over $\widetilde\Sigma$ and hence homeomorphic to the product $\widetilde\Sigma\times [-1,+1]$. This homeomorphism can be chosen so that $\widetilde\Sigma\times\{0\}$ double covers $\Sigma$.
The inclusion map $i:\partial N\hookrightarrow N$ has precisely two lifts to $\widetilde N$; let $i^\pm$ be the lift that identifies $\partial N$ with $\widetilde\Sigma\times\{\pm 1\}$. Construct a new manifold $\widetilde M$ as follows: let $M_0^\pm$ be two copies of $M_0$ and let $\partial^\pm N$ be the corresponding copy of $\partial N$ in $M_0^\pm$; then $\widetilde M$ is obtained from
\[
M_0^+\sqcup \widetilde N\sqcup M_0^-
\]
by identifying $x\in \partial^\pm N$ with $i^\pm(x)$. By construction, $\widetilde M$ is a double cover of $M'$ and so a finite-sheeted cover of $M$. The image of $\calh$ in $\widetilde M$ is precisely the orientable double cover of $\Sigma$, so $\pi_1\widetilde M$ is a finite-index subgroup of $\Gamma$ that contains the orientation-preserving elements of $\mathrm{Stab}_\Gamma(\calh)$ but not the orientation-reversing ones, as required.
\end{proof}
If $H$ is a hyperplane of the standard square complex associated to the decomposition of $M$ into right-angled ideal polyhedra, Lemma \ref{stabilizer} and Corollary \ref{geod sfces carry} together describe a geodesic hyperplane $\calh$, such that $\mathrm{Stab}_{\Gamma}(\calh)$ acts on it with finite covolume and $\pi_1 H$ is the subgroup which preserves an orientation of $\calh$. Thus:
\begin{cor} \label{Longsep} Suppose $M = \mathbb{H}^3/\Gamma$ is a complete, orientable hyperbolic $3$-manifold of finite volume that admits a decomposition $\{\mathcal{P}_i\}$ into right-angled ideal polyhedra. If $H$ is a hyperplane of the standard square complex associated to $\{\mathcal{P}_i\}$, then $\pi_1 H$ is separable in $\Gamma$. \end{cor}
This implies, using \cite[Corollary 8.9]{HW}, that a hyperbolic manifold $M$ with a right-angled ideal polyhedral decomposition has a finite cover whose associated square complex avoids the pathologies of hyperplane self-intersection and self-osculation forbidden in the definition of special complexes.
\begin{prop} \label{no self osculate} Suppose $M = \mathbb{H}^3/\Gamma$ is a complete, orientable hyperbolic $3$-manifold with finite volume that admits a decomposition into right-angled ideal polyhedra $\{\mathcal{P}_i\}$. There is a cover $M' \to M$ of finite degree such that hyperplanes of the standard square complex of $M'$ do not self-intersect or -osculate. \end{prop}
\begin{proof} Let $X$ be the standard square complex associated to $\{\mathcal{P}_i\}$. Lemma \ref{spine} implies that the inclusion $X \hookrightarrow M$ induces an isomorphism $\pi_1 X \to \Gamma$. By Corollary \ref{Longsep}, each hyperplane subgroup is separable in $\pi_1 X$, so by \cite[Corollary 8.9]{HW}, $X$ has a finite cover $X'$ such that hyperplanes of $X'$ do not self-intersect or -osculate. Let $\Gamma'$ be the subgroup of $\pi_1 X$ corresponding to $X'$, and let $M' \to M$ be the cover corresponding to $\Gamma'$. The decomposition $\{\mathcal{P}_i\}$ of $M$ lifts to a right-angled ideal decomposition of $M'$ with standard square complex $X'$, proving the proposition.
\end{proof}
Proposition \ref{no self osculate} already implies that a large class of hyperbolic $3$-manifolds is virtually special. Below we will say that the decomposition $\{\mathcal{P}_i\}$ of $M$ is \textit{checkered} if the face pairing preserves a two-coloring --- an assignment of white or black to each face $f$ of each $\mathcal{P}_i$ such that if another face $f'$ of $\mathcal{P}_i$ intersects $f$ in an edge, it has the opposite color. The decompositions of augmented link complements described in the appendix to \cite{LAD} are checkered, for example.
\begin{thm} \label{check cox} Suppose $M$ is a complete hyperbolic $3$-manifold with finite volume that admits a checkered decomposition into right-angled ideal polyhedra. Then $\pi_1M$ has a subgroup of finite index that is isomorphic to a word-quasiconvex subgroup of a right-angled Coxeter group. \end{thm}
\begin{proof} Let $M = \mathbb{H}^3/\Gamma$ be a complete hyperbolic $3$-manifold of finite volume with a decomposition $\{\mathcal{P}_i\}$ into right-angled ideal polyhedra. If the decomposition is checkered, and $f$ represents a face of the decomposition, it is easy to see that for each edge $e \subset f$, the flat $e$-neighbor of $f$ has the same color as $f$. It follows that each face of the surface $\Sigma_f$ described in Section \ref{sec:geodesic hyperplanes} has the same color as $f$. If $H$ is a hyperplane of the square complex $X$ associated to $\{\mathcal{P}_i\}$, we will say $H$ is white if all faces of $\Sigma(H)$ are white, and black if they are black.
By Proposition \ref{hyperplanes vs surfaces}, a hyperplane intersects only hyperplanes of the opposite color and osculates only hyperplanes of the same color along an external edge. If hyperplanes $H_0$ and $H_1$ osculate along an internal edge, let $s_0$ and $s_1$ be squares of $X$, meeting along an internal edge $e$, with parallel midlines $m_0 \in H_0$ and $m_1 \in H_1$. Then $e$ is of the form $(\bar{g},\overline{\mathcal{P}}_i)$, where $\mathcal{P}_i$ is the polyhedron containing $s_0$ and $s_1$ and $g$ is a face of $\mathcal{P}_i$. The edges of $s_0$ and $s_1$ opposite $e$ are contained in faces $f_0$ and $f_1$ of $\mathcal{P}_i$ in $\widehat{\Sigma}(H_0)$ and $\widehat{\Sigma}(H_1)$, respectively. Then each of $f_0$ and $f_1$ intersects $g$, so each has the color opposite that of $g$; in particular $H_0$ and $H_1$ have the same color. Since intersecting hyperplanes have opposite colors while osculating hyperplanes share a color, it follows that hyperplanes of $X$ do not inter-osculate.
By Proposition \ref{no self osculate}, $M$ has a finite cover $M'$ such that hyperplanes of the square complex $X'$ associated to the lifted ideal polyhedral decomposition of $M'$ do not self-intersect or -osculate. The lifted ideal polyhedral decomposition of $M'$ inherits the checkered property from that of $M$, so by the above, hyperplanes of $X'$ do not inter-osculate. In addition, Lemma \ref{nonpos} implies that $X'$ is nonpositively curved, Lemma \ref{two-sided} implies that each hyperplane is two-sided, and Lemma \ref{bipartite} implies that $X'^{(1)}$ is bipartite. Thus $X'$ is $C$-special, and by Theorem \ref{t: Haglund--Wise}, the subgroup $\Gamma' < \Gamma$ corresponding to $M'$ embeds as a word-quasiconvex subgroup of a right-angled Coxeter group. \end{proof}
In fact, we will show below that every right-angled decomposition determines a twofold cover whose associated decomposition is checkered. This uses the lemma below, which is a well-known consequence of Andreev's theorem.
\begin{lemma} Let $\mathcal{P} \subset \mathbb{H}^3$ be a right-angled ideal polyhedron of finite volume. There are exactly two checkerings of the faces of $\mathcal{P}$. \end{lemma}
Theorem \ref{rt ang cox} follows quickly from this lemma and Theorem \ref{check cox}.
\begin{proof}[Proof of Theorem \ref{rt ang cox}] Suppose $\{ \mathcal{P}_i \}_{i=1}^n$ is a right-angled ideal decomposition of $M$. Let $\{ \mathcal{P}_i^{(0)}, \mathcal{P}_i^{(1)} \}_{i=1}^n$ be a collection of disjoint right-angled polyhedra such that for each $i$, $\mathcal{P}_i^{(0)}$ and $\mathcal{P}_i^{(1)}$ are each isometric to $\mathcal{P}_i$, and the faces of $\mathcal{P}_i^{(0)}$ have the opposite checkering of the faces of $\mathcal{P}_i^{(1)}$. Here we take for granted that we have fixed marking isometries $\mathcal{P}_i^{(j)} \to \mathcal{P}_i$ for each $j \in \{0,1\}$, so that each face $f$ of $\mathcal{P}_i$ has fixed correspondents $f^{(0)} \subset \mathcal{P}_i^{(0)}$ and $f^{(1)} \subset \mathcal{P}_i^{(1)}$.
For each $i$ and each face $f$ of $\mathcal{P}_i$, we determine face pairing isometries $\phi_{f^{(0)}}$ and $\phi_{f^{(1)}}$ for $\{ \mathcal{P}_i^{(0)}, \mathcal{P}_i^{(1)} \}$ using the following requirements: each $\phi_{f^{(j)}}$, $j \in \{0,1\}$, must commute with $\phi_f$ under the marking isometries, and each must preserve color. Thus if $f' = \phi_f(f)$ and $f'^{(0)}$ has the same color as $f^{(0)}$, we take $\phi_{f^{(j)}}(f^{(j)}) = f'^{(j)}$ for each $j$; otherwise we take $\phi_{f^{(j)}}(f^{(j)}) = f'^{(1-j)}$.
Let $\widetilde{M}$ be the quotient of $\{ \mathcal{P}_i^{(0)}, \mathcal{P}_i^{(1)} \}_{i=1}^n$ by the face pairing isometries described above. By construction, $\widetilde{M}$ is a double cover of $M$, and it is easy to see that $\widetilde{M}$ is disconnected if and only if the original decomposition $\{ \mathcal{P}_i \}$ admits a checkering. In that case Theorem \ref{check cox} applies directly to $M$, so we may assume that the decomposition does not admit a checkering. Then, by Theorem \ref{check cox}, the conclusion of Theorem \ref{rt ang cox} holds for $\widetilde{M}$; since $\pi_1\widetilde{M}$ has finite index in $\pi_1M$, it holds for $M$ as well.
\end{proof}
\section{Virtual retractions and quasiconvexity} \label{sec:QCERF}
This section contains the proof of Theorem \ref{rel qc still sep}. We will need to work with several different definitions of quasiconvexity for subgroups. These definitions all coincide in the case of a Gromov-hyperbolic group because Gromov-hyperbolic metric spaces enjoy a property sometimes known as the Morse Property, which asserts that quasigeodesics are uniformly close to geodesics. In our case, $M$ has cusps and therefore $\Gamma=\pi_1M$ is not Gromov hyperbolic but rather relatively hyperbolic. One of the results we use to circumvent this difficulty, Proposition \ref{p: Fully qc implies combinatorially qc}, makes use of \cite[Theorem 1.12]{druu_tree-graded_2005}, which the authors call the `Morse Property for Relatively Hyperbolic Groups'.
\begin{dfn}
Let $X$ be a geodesic metric space. A subspace $Y$ is \emph{quasiconvex} if there exists a constant $\kappa$ such that any geodesic in $X$ between two points of $Y$ is contained in the $\kappa$-neighbourhood of $Y$.
\end{dfn}
We will apply this notion in two contexts. If $U$ is a CAT(0) cube complex with base vertex $v$ and a group $G$ acts properly discontinuously by combinatorial isometries on $U$ then we consider the one-skeleton $X=U^{(1)}$ with the induced length metric (where each edge has length one). We say that a subgroup $H$ is \emph{combinatorially quasiconvex} if $Hv$ is a quasiconvex subspace of $X$. In fact, combinatorial quasiconvexity is independent of the choice of basepoint if the action of $G$ on $U$ is special \cite[Corollary 7.8]{HW}.
On the other hand, given a group $G$ with a generating set $S$ we can consider the Cayley graph $Cay_S(G)$. A subgroup $H$ is \emph{word quasiconvex} if $H$ is a quasiconvex subspace of $Cay_S(G)$.
Let $W$ be a right-angled Coxeter group with standard generating set $S$ and let $U$ be the universal cover of the Davis--Moussong complex for $W$. The one-skeleton of $U$ is very closely related to $Cay_S(W)$: because the generators are involutions, the edges of the Cayley graph come in pairs, and identifying these pairs gives $U^{(1)}$. Furthermore, the image of the universal cover of a special cube complex under the map defined by Haglund and Wise into the Davis--Moussong complex of $W$ is a convex subcomplex \cite[Lemma 7.7]{HW}. We therefore have the following relationship between combinatorial quasiconvexity and word quasiconvexity in special cube complexes.
\begin{remark}
Suppose that $G$ is the fundamental group of a C-special cube complex $\mathcal{S}$, so that $G$ is isomorphic to a word-quasiconvex subgroup of a right-angled Coxeter group $W$ \cite{HW}. If $H$ is a subgroup of $G$, then $H$ is combinatorially quasiconvex in $G$ (with respect to the action of $G$ on the universal cover of $\mathcal{S}$) if and only if $H$ is word quasiconvex in $W$ (with respect to the standard generating set).
\end{remark}
The idea is to prove Theorem \ref{rel qc still sep} by applying the following theorem of Haglund.
\begin{thm}[\cite{Hag} Theorem A]\label{t: Haglund's Theorem}
Let $W$ be a right-angled Coxeter group with the standard generating set and let $H$ be a word-quasiconvex subgroup. Then $H$ is a virtual retract of $W$.
\end{thm}
Theorem A of \cite{Hag} is not stated in this form. Nevertheless, as observed in the paragraph following Theorem A, this is what is proved.
\begin{cor}[Cf.\ \cite{HW} Corollary 7.9]\label{c: Word-qc implies separable}
If $G$ is the fundamental group of a compact, virtually special cube complex and $H$ is a combinatorially quasiconvex subgroup of $G$ then $H$ is a virtual retract of $G$.
\end{cor}
\begin{proof}
Let $G'$ be a special subgroup of finite index in $G$. It is clear that $H'=H\cap G'$ is combinatorially quasiconvex in $G'$. By the above remark, $H'$ is word-quasiconvex in the right-angled Coxeter group $W$, so $H'$ is a virtual retract of $W$ and hence of $G'$ by Theorem \ref{t: Haglund's Theorem}. By \cite[Theorem 4.4]{HW}, $G$ is linear. We can now apply the argument of \cite[Theorem 2.10]{long_subgroup_2008} to deduce that $H$ is a virtual retract of $G$.
\end{proof}
The reader is referred to \cite{manning_separation_2008} and \cite{hruska_relative_2008} for definitions of \textit{relatively hyperbolic} groups and \textit{relatively quasiconvex} subgroups, which are the subject of Theorem \ref{rel qc still sep}. (See Proposition \ref{p: Hruska's characterization} below for a characterization of relative quasiconvexity.) In order to deduce Theorem \ref{rel qc still sep} from Corollary \ref{c: Word-qc implies separable}, it would be enough to show that every relatively quasiconvex subgroup of the relatively hyperbolic fundamental group of a C-special cube complex is combinatorially quasiconvex. Unfortunately, this may be false. For instance, the diagonal subgroup of $\mathbb{Z}^2$ with the standard generating set is not quasiconvex. The next theorem, a minor modification of a result of \cite{manning_separation_2008}, gets round this difficulty.
\begin{dfn}
Suppose a group $G$ is hyperbolic relative to a finite set of subgroups $\mathcal{P}$. Then a relatively quasiconvex subgroup $H$ of $G$ is called \emph{fully relatively quasiconvex} if for every $P\in\mathcal{P}$ and every $g\in G$, either $H\cap gPg^{-1}$ is trivial or $H\cap gPg^{-1}$ has finite index in $gPg^{-1}$.
\end{dfn}
\begin{thm}[Cf.\ \cite{manning_separation_2008} Theorem 1.7]\label{t: Fully rel. qc subgroups}
Suppose that $G$ is hyperbolic relative to $\mathcal{P}$ and that every $P\in\mathcal{P}$ is finitely generated and abelian. If $Q$ is a relatively quasiconvex subgroup of $G$ then $G$ has a fully relatively quasiconvex subgroup $H$ such that $Q$ is a retract of $H$.
\end{thm}
\begin{proof}
In the proof of \cite[Theorem 1.7]{manning_separation_2008}, the authors construct a sequence of relatively quasiconvex subgroups
\[
Q=Q_0\subseteq Q_1\subseteq\ldots\subseteq Q_n=H
\]
with $H$ fully relatively quasiconvex. We recall a few details of the construction of $Q_k$ from $Q_{k-1}$. We will modify this construction slightly so that $Q_{k-1}$ is a retract of $Q_k$ for each $k$. For some maximal infinite parabolic subgroup $K_k$ of $Q_{k-1}$, there is $P_k\in\mathcal{P}$ and $f_k\in G$ such that $K_k\subseteq f_kP_kf_k^{-1}$. Manning and Martinez-Pedroza find a finite-index subgroup $R_k$ of $f_kP_kf_k^{-1}$ that contains $K_k$ and excludes a certain finite set $F$. We shall impose an extra condition on $R_k$ that is easily met when $P_k$ is abelian, namely that $K_k$ should be a direct factor of $R_k$. Just as in \cite{manning_separation_2008}, the next subgroup in the sequence is now defined as $Q_k=\langle Q_{k-1}, R_k\rangle$, and just as in that setting it follows that $Q_k$ is relatively quasiconvex.
It remains only to show that $Q_{k-1}$ is a retract of $Q_k$. By assertion (1) of \cite[Theorem 3.6]{manning_separation_2008}, the natural map
\[
Q_{k-1}*_{K_k} R_k\to Q_k
\]
is an isomorphism. But $K_k$ is a direct factor of $R_k$ and so there is a retraction $R_k\to K_k$, which extends to a retraction $Q_k\to Q_{k-1}$ as required.
\end{proof}
In light of Theorem \ref{t: Fully rel. qc subgroups}, to prove Theorem \ref{rel qc still sep} it will suffice to show that when $G$ is the relatively hyperbolic fundamental group of a non-positively curved cube complex, its fully relatively quasiconvex subgroups are combinatorially quasiconvex. This is the content of Proposition \ref{p: Fully qc implies combinatorially qc} below.
Hruska has extensively investigated various equivalent definitions of relative hyperbolicity and relative quasiconvexity \cite{hruska_relative_2008}. Corollary 8.16 of \cite{hruska_relative_2008} provides a characterization of relative quasiconvexity in terms of geodesics in the Cayley graph. Unfortunately, to prove Theorem \ref{rel qc still sep} we need to work in the one-skeleton of the universal cover of a cube complex. This is not actually a Cayley graph unless the cube complex in question has a unique vertex. It is, however, quasi-isometric to the Cayley graph. Therefore, we will need a quasigeodesic version of Hruska's Corollary 8.16. Fortunately, we shall see that Hruska's proof goes through.
In what follows, $S$ is any choice of finite generating set for $G$ and $d$ is the usual length metric on $Cay_S(G)$. For any $g\in G$ write $l(g)$ for $d(1,g)$, the word length of $g$ with respect to $S$. For $x\in Cay_S(G)$ we denote by $B(x,R)$ the open ball of radius $R$ about $x$. We define
\[
N_R(Y)=\bigcup_{y\in Y} B(y,R)
\]
for any subspace $Y\subseteq Cay_S(G)$ and any $R>0$. To keep notation to a minimum we will work with $\tau$-quasigeodesics, which are more usually defined as $(\tau,\tau)$-quasigeodesics. That is, a path $c$ is a $\tau$-quasigeodesic if
\[
\frac{1}{\tau}|s-t|-\tau\leq d(c(s),c(t))\leq \tau|s-t|+\tau
\]
for all suitable $s$ and $t$. We will always assume that our quasigeodesics are continuous, which we can do by \cite[Lemma III.H.1.11]{BrH}. The following definition is adapted from \cite{hruska_relative_2008}.
\begin{dfn}[Cf.\ \cite{hruska_relative_2008} Definition 8.9]
Let $H$ be a subgroup of $G$. Let $c$ be (the image of) a quasigeodesic in $Cay_S(G)$. If $x\in c$ is not within distance $R$ of the endpoints of $c$ and
\[
B(x,R)\cap c\subseteq N_\epsilon(gP)
\]
for some $g\in G$ and $P\in\mathcal{P}$ then $x$ is called \emph{$(\epsilon,R)$-deep} in $gP$. If $x\in c$ is not $(\epsilon,R)$-deep in any such coset $gP$ then $x$ is called an \emph{$(\epsilon,R)$-transition point} of $c$.
\end{dfn}
The next proposition characterizes relatively quasiconvex subgroups in terms of quasigeodesics in the Cayley graph. Roughly, it asserts that every point on a quasigeodesic between elements of $H$ is either close to $H$ or is close to some peripheral coset $gP$.
\begin{prop}[Cf.\ \cite{hruska_relative_2008} Corollary 8.16]\label{p: Hruska's characterization}
Suppose $G$ is hyperbolic relative to $\mathcal{P}$ and $H$ is a subgroup of $G$. Then $H$ is relatively quasiconvex in $G$ if and only if for every $\tau$ there are constants $\epsilon, R, \kappa$ such that the following two properties hold.
\begin{enumerate}
\item For any continuous $\tau$-quasigeodesic $c$ in $Cay_S(G)$, any connected component $\bar{c}$ of the set of all $(\epsilon,R)$-deep points of $c$ is $(\epsilon,R)$-deep in a unique peripheral left coset $gP$; that is, there exists a unique $P\in\mathcal{P}$ and $gP\in G/P$ such that every $x\in\bar{c}$ is $(\epsilon,R)$-deep in $gP$ and no $x\in\bar{c}$ is $(\epsilon,R)$-deep in any other peripheral left coset.
\item If the quasigeodesic $c$ joins two points of $H$ then the set of $(\epsilon,R)$-transition points of $c$ is contained in $N_\kappa(H)$.
\end{enumerate}
\end{prop}
The statement of \cite[Corollary 8.16]{hruska_relative_2008} only deals with the case when $c$ is a geodesic. However, the necessary results of Section 8 of \cite{hruska_relative_2008} also hold in the quasigeodesic case.
The following proposition completes the proof of Theorem \ref{rel qc still sep}.
\begin{prop}\label{p: Fully qc implies combinatorially qc}
Let $G$ be finitely generated and relatively hyperbolic. Suppose that $G$ acts properly discontinuously and cocompactly by isometries on a geodesic metric space $X$. Fix a basepoint $v\in X$. For any fully relatively quasiconvex subgroup $H\subseteq G$ there exists a constant $\nu$ such that any geodesic between two points of the orbit $Hv$ lies in the $\nu$-neighbourhood of $Hv$. In particular, if $G$ is the fundamental group of a non-positively curved cube complex then, taking $X$ to be the one-skeleton of the universal cover, it follows that $H$ is combinatorially quasiconvex.
\end{prop}
Proposition \ref{p: Hruska's characterization} implies that, to prove Proposition \ref{p: Fully qc implies combinatorially qc}, it is enough to prove that deep points of quasigeodesics between points of $H$ lie in a bounded neighbourhood of $H$. The key technical tool is the following lemma, which is nothing more than the Pigeonhole Principle.
\begin{lemma}\label{l: Pigeonhole Lemma}
Let $G$ be a finitely generated group. Fix a choice of finite generating set and the corresponding word metric on $G$. If $H,K$ are subgroups and $H\cap K=1$ then
\[
\#(H\cap N_r(K))<\infty
\]
for any $r>0$.
\end{lemma}
\begin{proof}
For a contradiction, suppose $h_i\in H\cap N_r(K)$ are distinct for all $i\in\mathbb{N}$. For each $i$, there is $k_i\in K$ with $d(h_i,k_i)< r$. Let $g_i=h_i^{-1}k_i$, so $l(g_i)<r$. The ball of radius $r$ in $G$ is finite, so $g_i=g_j$ for some $i\neq j$ by the Pigeonhole Principle. But now
\[
h_ih_j^{-1}=h_ig_ig_j^{-1}h_j^{-1}=k_ik_j^{-1}
\]
is a non-trivial element of $H\cap K$, a contradiction.
\end{proof}
It follows that only short elements of $H$ can be close to parabolic left cosets for which $H$ intersects the stabilizer trivially.
\begin{lemma}\label{l: Only short elements are close to empty parabolics}
Suppose $G$ is hyperbolic relative to $\mathcal{P}$ and $H$ is any subgroup of $G$. Let $g\in G$ and $P\in\mathcal{P}$ be such that $H\cap gPg^{-1}=1$. For any $r>0$ there exists finite $\lambda=\lambda(r,gP)$ such that if $h\in N_r(gP)\cap H$ then $l(h)\leq\lambda$.
\end{lemma}
\begin{proof}
Choose $g$ of minimal word length in $gP$ and set $k=l(g)$. For any $p\in P$, $d(gp,gpg^{-1})=k$ and it follows that
\[
N_r(gP)\subseteq N_{k+r}(gPg^{-1})
\]
by the triangle inequality. Therefore, by Lemma \ref{l: Pigeonhole Lemma} with $K=gPg^{-1}$, $N_r(gP)\cap H$ is finite and so
\[
\lambda=\max\{l(h)\mid h\in N_r(gP)\cap H\}
\]
is as required.
\end{proof}
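For the reader's convenience, we unwind the two displayed facts used in the proof above: given $x\in N_r(gP)$, choose $p\in P$ with $d(x,gp)<r$. Left-invariance of the word metric gives
\[
d(gp,gpg^{-1})=l\big((gp)^{-1}gpg^{-1}\big)=l(g^{-1})=l(g)=k,
\]
so the triangle inequality yields $d(x,gpg^{-1})<r+k$; that is, $x\in N_{k+r}(gPg^{-1})$.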
We are now ready to prove Proposition \ref{p: Fully qc implies combinatorially qc}.
\proof[Proof of Proposition \ref{p: Fully qc implies combinatorially qc}]
Consider a geodesic $b$ in $X$ joining two points of $Hv$. We need to show that $b$ is contained in a uniformly bounded neighbourhood of $Hv$.
By the \v{S}varc--Milnor Lemma, $G$ has a finite generating set $S$ and $X$ is quasi-isometric to the Cayley graph $Cay_S(G)$. The geodesic $b$ maps to a $\tau$-quasigeodesic in $Cay_S(G)$, which we denote $c$; here $\tau$ depends only on the quasi-isometry constants, not on $b$. Furthermore, we can assume that $c$ is continuous by \cite[Lemma III.H.1.11]{BrH}. It is therefore enough to show that $c$ is contained in a uniformly bounded neighbourhood of $H$ in the word metric $d$ on $Cay_S(G)$.
Let $\epsilon$, $R$ and $\kappa$ be as in Proposition \ref{p: Hruska's characterization}. By assertion 2 of Proposition \ref{p: Hruska's characterization}, the $(\epsilon,R)$-transition points of $c$ are contained in the $\kappa$-neighbourhood of $H$. Therefore, it remains to show that the $(\epsilon,R)$-deep points of $c$ are contained in a uniformly bounded neighbourhood of $H$.
Let $\bar{c}$ be a connected component of the set of all $(\epsilon,R)$-deep points of $c$. By definition, every $x\in\bar{c}$ is in the $\epsilon$-neighbourhood of some peripheral left coset $gP$. By assertion 1 of Proposition \ref{p: Hruska's characterization}, the component $\bar{c}$ is contained between two $(\epsilon,R)$-transition points of $c$, which we shall denote $y_1$ and $y_2$. We can take these points to be arbitrarily close to $\bar{c}$, and hence we can assume that $d(y_i,gP)\leq\epsilon$ for $i=1,2$. On the other hand, by assertion 2 of Proposition \ref{p: Hruska's characterization}, there exist $h_1,h_2\in H$ such that $d(h_i,y_i)<\kappa$ for $i=1,2$. Therefore, $h_i\in N_{\epsilon+\kappa}(gP)$ for $i=1,2$.
Let $h_0=h_1^{-1}h_2$ and let $g_0=h_1^{-1}g$, so $h_0\in N_{\epsilon+\kappa}(g_0P)$ and, without loss of generality, $l(g_0)\leq \epsilon+\kappa$. There are two cases to consider, depending on whether $h_0$ is long or short. Let
\[
\lambda_{\max}=\max\{\lambda(\epsilon+\kappa,gP)\mid P\in\mathcal{P},l(g)\leq\epsilon+\kappa, H\cap gPg^{-1}=1\}
\]
where $\lambda(\epsilon+\kappa,gP)$ is provided by Lemma \ref{l: Only short elements are close to empty parabolics}. In the first case, $l(h_0)\leq\lambda_{\max}$ so $d(h_1,h_2)\leq\lambda_{\max}$ and therefore $d(y_1,y_2)<\lambda_{\max}+2\kappa$. Because $c$ is a $\tau$-quasigeodesic, it follows that every $x\in\bar{c}$ satisfies, for some $i\in\{1,2\}$,
\[
d(x,y_i)<\lambda'=\frac{\tau^2}{2}(\lambda_{\max}+2\kappa+\tau)+\tau
\]
and so $d(x,h_i)<\lambda'+\kappa$.
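To make the appearance of $\lambda'$ explicit, write $y_i=c(s_i)$ for $i=1,2$ and $x=c(u)$ with $u$ between $s_1$ and $s_2$. The lower quasigeodesic inequality gives
\[
|s_1-s_2|\leq\tau\big(d(y_1,y_2)+\tau\big)<\tau(\lambda_{\max}+2\kappa+\tau),
\]
while $|u-s_i|\leq\frac{1}{2}|s_1-s_2|$ for at least one $i$, so the upper inequality gives
\[
d(x,y_i)\leq\tau|u-s_i|+\tau<\frac{\tau^2}{2}(\lambda_{\max}+2\kappa+\tau)+\tau=\lambda'.
\]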
In the second case, $l(h_0)>\lambda_{\max}$ and so $H\cap g_0Pg_0^{-1}\neq 1$ by Lemma \ref{l: Only short elements are close to empty parabolics}. Therefore $H\cap g_0Pg_0^{-1}$ has finite index in $g_0Pg_0^{-1}$ because $H$ is fully relatively quasiconvex. For each $g\in G$ and $P\in\mathcal{P}$ for which $H\cap gPg^{-1}$ has finite index in $gPg^{-1}$, let $\mu=\mu(gP)$ be a number such that $gPg^{-1}\subseteq N_\mu(H\cap gPg^{-1})$. Set
\[
\mu_{\max}=\max\{\mu(gP)\mid P\in\mathcal{P}, l(g)\leq\epsilon+\kappa, |gPg^{-1}:H\cap gPg^{-1}|<\infty\}.
\]
Therefore
\[
g_0Pg_0^{-1}\subseteq N_{\mu_{\max}}(H)
\]
and so
\[
g_0P\subseteq N_{\mu_{\max}+\epsilon+\kappa}(H)
\]
because $l(g_0)\leq\epsilon+\kappa$. For each $x\in\bar{c}$ we have $h_1^{-1}x\in N_\epsilon(g_0P)$ and so $h_1^{-1}x\in N_{\mu_{\max}+2\epsilon+\kappa}(H)$. Therefore $x\in N_{\mu_{\max}+2\epsilon+\kappa}(H)$.
In summary, we have shown the following: the $(\epsilon,R)$-transition points of the quasigeodesic $c$ are contained in the $\kappa$-neighbourhood of $H$; the short $(\epsilon,R)$-deep components of $c$ are contained in the $(\lambda'+\kappa)$-neighbourhood of $H$; and the long $(\epsilon,R)$-deep components of $c$ are contained in the $(\mu_{\max}+2\epsilon+\kappa)$-neighbourhood of $H$. Therefore, $c$ is completely contained in the $\nu$-neighbourhood of $H$, where
\[
\nu=\max\{\kappa,\lambda'+\kappa,\mu_{\max}+2\epsilon+\kappa\}.
\]
This completes the proof.
\endproof
We have assembled all the tools necessary to prove Theorem \ref{rel qc still sep}.
\proof[Proof of Theorem \ref{rel qc still sep}]
Let $Q$ be a relatively quasiconvex subgroup of $G=\pi_1\mathcal{S}$. By Theorem \ref{t: Fully rel. qc subgroups}, there exists a fully relatively quasiconvex subgroup $H$ of $G$ such that $Q$ is a retract of $H$. Let $X$ be the one-skeleton of the universal cover of $\mathcal{S}$, equipped with the induced length metric. By Proposition \ref{p: Fully qc implies combinatorially qc}, for any basepoint $v$ the orbit $Hv$ is quasiconvex in $X$; that is, $H$ is a combinatorially quasiconvex subgroup of $G$. Therefore, by Corollary \ref{c: Word-qc implies separable}, $H$ is a virtual retract of $G$ and so $Q$ is also a virtual retract of $G$, as required.
\endproof
Corollary \ref{c: Corollary 2} now follows easily.
\proof[Proof of Corollary \ref{c: Corollary 2}]
Let $\Gamma=\pi_1M$. As pointed out in \cite{canary_mardens_2008}, to prove that $\Gamma$ is LERF it is enough to prove that $\Gamma$ is GFERF --- that is, that the geometrically finite subgroups are separable. Furthermore, by \cite[Proposition 3.28]{Hag}, it is enough to prove that the geometrically finite subgroups of $\Gamma$ are virtual retracts.
First, suppose that $M$ is orientable. Let $Q$ be a geometrically finite subgroup of $\Gamma$. By \cite[Theorem 1.3]{manning_separation_2008}, for instance, $\Gamma$ is hyperbolic relative to its maximal parabolic subgroups and $Q$ is a relatively quasiconvex subgroup of $\Gamma$. The maximal parabolic subgroups of $\Gamma$ are isomorphic to $\mathbb{Z}^2$. By Theorem \ref{rt ang cox}, $\Gamma$ is the fundamental group of a virtually special cube complex $\mathcal{S}$, so $Q$ is a virtual retract of $\Gamma$ by Theorem \ref{rel qc still sep}.
If $M$ is nonorientable then we can pass to a degree-two orientable cover $M'$ with fundamental group $\Gamma'$. As above, we see that for every geometrically finite subgroup $Q$ of $\Gamma$, the intersection $Q'=Q\cap \Gamma'$ is a virtual retract of $\Gamma'$. Now, by the proof of \cite[Theorem 2.10]{long_subgroup_2008}, it follows that $Q$ is a virtual retract of $\Gamma$.
\endproof
{
We take this opportunity to note that the combination of Proposition \ref{p: Fully qc implies combinatorially qc} and Corollary \ref{c: Word-qc implies separable} shows that many subgroups of virtually special relatively hyperbolic groups are virtual retracts, even without any hypotheses on the parabolic subgroups. Indeed, we have the following.
\begin{thm}\label{t: Retracts with no hypotheses}
Let $\mathcal{S}$ be a compact, virtually special cube complex and suppose that $\pi_1\mathcal{S}$ is relatively hyperbolic. Then every fully relatively quasiconvex subgroup of $\pi_1\mathcal{S}$ is a virtual retract.
\end{thm}
Recall that an element $\gamma$ of a relatively hyperbolic group is called \emph{hyperbolic} if it is not conjugate into a parabolic subgroup. Denis Osin has shown that cyclic subgroups generated by hyperbolic elements are strongly relatively quasiconvex \cite[Theorem 4.19]{osin_relatively_2006}. In the torsion-free case this implies \emph{a fortiori} that such subgroups are fully relatively quasiconvex.
\begin{cor}
Let $\mathcal{S}$ be a compact, virtually special cube complex and suppose that $\pi_1\mathcal{S}$ is relatively hyperbolic. For any hyperbolic element $\gamma\in\pi_1\mathcal{S}$, the cyclic subgroup $\langle \gamma\rangle$ is a virtual retract of $\pi_1\mathcal{S}$.
\end{cor}
Combining Theorem \ref{t: Retracts with no hypotheses} with \cite[Theorem 1.7]{manning_separation_2008}, we obtain a slightly weaker version of Theorem \ref{rel qc still sep} that holds when the peripheral subgroups are only assumed to be LERF and slender. (A group is \emph{slender} if each subgroup is finitely generated.)
\begin{cor}\label{c: Virtual retracts with more general parabolics}
Let $\mathcal{S}$ be a compact, virtually special cube complex and suppose that $\pi_1\mathcal{S}$ is hyperbolic relative to a collection of slender, LERF subgroups. Then every relatively quasiconvex subgroup of $\pi_1\mathcal{S}$ is separable and every fully relatively quasiconvex subgroup of $\pi_1\mathcal{S}$ is a virtual retract. \end{cor}
This result would apply, for example, if $\pi_1\mathcal{S}$ were isomorphic to the fundamental group of a finite-volume negatively curved manifold of dimension greater than three, in which case the parabolic subgroups would be non-abelian but nilpotent. Note that the full conclusion of Theorem \ref{rel qc still sep} does not hold in this case: nilpotent groups that are not virtually abelian contain cyclic subgroups that are not virtual retracts.}
\section{Examples}\label{sec:examples}
In this section we describe many hyperbolic $3$-manifolds that decompose into right-angled ideal polyhedra. Our aim is to display the large variety of situations in which Theorem \ref{rt ang cox} applies, and to explore the question of when a manifold that decomposes into right-angled ideal polyhedra is commensurable with a right-angled reflection orbifold. When this is the case, the results of this paper follow from previous work, notably that of Agol--Long--Reid \cite{ALR}. The theme of this section is that this occurs among examples of lowest complexity, but that one should not expect it to hold in general.
Lemma \ref{symm comm} describes when one should expect a manifold $M$ that decomposes into right-angled ideal polyhedra to be commensurable with a right-angled reflection orbifold. This is the case when all of the polyhedra decomposing $M$ are isometric to a single right-angled ideal polyhedron $\mathcal{P}$, which furthermore is highly symmetric. A prominent example that satisfies this is the Whitehead link complement, which is commensurable with the reflection orbifold in the regular ideal octahedron.
The octahedron (also known as the $3$-antiprism, see Figure \ref{antiprisms}) is the simplest right-angled ideal polyhedron, as measured by the number of ideal vertices. Propositions \ref{P_1 comm} and \ref{P_2 comm} imply that any manifold that decomposes into isometric copies of the right-angled ideal octahedron or, respectively, the $4$-antiprism, is commensurable with the corresponding reflection orbifold. On the other hand, in Section \ref{sec:One cusp} we will describe an infinite family of ``hybrid'' hyperbolic $3$-manifolds $N_n$, each built from both the $3$- and $4$-antiprisms, that are not commensurable with any $3$-dimensional hyperbolic reflection orbifold. We use work of Goodman--Heard--Hodgson \cite{GHH} here to explicitly identify the commensurator quotients for the $N_n$.
\subsection{The simplest examples.}\label{sec: rt ang comm}
It may initially seem that a manifold that decomposes into right-angled polyhedra should be commensurable with the right-angled reflection orbifold in one or a collection of the polyhedra. This is not the case in general; however, the technical lemma below implies that it holds if all of the polyhedra are isometric and sufficiently symmetric.
\begin{lemma}\label{symm comm} Let $M$ be a complete hyperbolic $3$-manifold with a decomposition $\{\mathcal{P}_i\}$ into right-angled ideal polyhedra. For a face $f$ of $\mathcal{P}_i$, let $\gamma_f$ be the reflection in the hyperplane containing $f$. If for each such face, $\phi_f \circ \gamma_f$ takes $\mathcal{P}_i$ isometrically onto the polyhedron $\mathcal{P}_j$ containing $\phi_f(f)$, then $\pi_1 M$ is contained in $\Gamma \rtimes \mathrm{Sym}(\mathcal{P}_1)$, where $\Gamma$ is the reflection group in $\mathcal{P}_1$ and $\mathrm{Sym}(\mathcal{P}_1)$ is its symmetry group. \end{lemma}
\begin{proof} Let $M$ be a hyperbolic $3$-manifold satisfying the hypotheses of the lemma. There is a ``dual graph'' to the polyhedral decomposition $\{\mathcal{P}_i\}$ with a vertex for each $i$, such that the vertex corresponding to $\mathcal{P}_i$ is connected by an edge to that corresponding to $\mathcal{P}_j$ for every face $f$ of $\mathcal{P}_i$ such that $\phi_f(f)$ is a face of $\mathcal{P}_j$. Let $\mathcal{T}$ be the tiling of $\mathbb{H}^3$ by $\Gamma$-translates of $\mathcal{P}_1$. A maximal tree $T$ in the dual graph determines isometries taking the $\mathcal{P}_i$ into $\mathcal{T}$ as follows.
Suppose $f$ is a face of $\mathcal{P}_1$ that corresponds to an edge of $T$. Then by hypothesis $\phi_f^{-1}(\mathcal{P}_i) = \gamma_f(\mathcal{P}_1)$, where $\mathcal{P}_i$ contains $\phi_f(f)$. For arbitrary $i$, let $\alpha$ be an embedded edge path in $T$ from the vertex corresponding to $\mathcal{P}_1$ to that of $\mathcal{P}_i$, and suppose $\mathcal{P}_{i_0}$ corresponds to the vertex with distance one on $\alpha$ from that of $\mathcal{P}_i$. We inductively assume that there exists an isometry $\phi_{i_0}$ such that $\phi_{i_0}(\mathcal{P}_{i_0})$ is a $\Gamma$-translate of $\mathcal{P}_1$. Let $f$ be the face of $\mathcal{P}_{i_0}$ corresponding to the edge of $T$ between $\mathcal{P}_{i_0}$ and $\mathcal{P}_i$. Then $\phi_{i_0} \gamma_f \phi_{i_0}^{-1} = \gamma_{\phi_{i_0}(f)}\in \Gamma$, so by hypothesis,
$$ \phi_{i_0} \circ \phi_f^{-1}(\mathcal{P}_i) = \phi_{i_0} \gamma_f(\mathcal{P}_{i_0}) = (\phi_{i_0}\gamma_f\phi_{i_0}^{-1})(\phi_{i_0}(\mathcal{P}_{i_0}))$$
is a $\Gamma$-translate of $\mathcal{P}_1$.
Now for each $i$, after replacing $\mathcal{P}_i$ by $\phi_i(\mathcal{P}_i)$ we may assume that there is some $\gamma_i \in \Gamma$ such that $\mathcal{P}_i = \gamma_i(\mathcal{P}_1)$. For a face $f$ of $\mathcal{P}_i$, let $\mathcal{P}_j$ be the polyhedron containing $\phi_f(f)$. Then by hypothesis $\gamma_j^{-1} \phi_f \gamma_f \gamma_i \in \mathrm{Sym}(\mathcal{P}_1)$. Therefore $\phi_f \in \Gamma \rtimes \mathrm{Sym}(\mathcal{P}_1)$; thus the lemma follows from the Poincar\'e polyhedron theorem.
\end{proof}
\begin{figure}
\begin{center}
\input{antiprisms.pdf_t}
\end{center}
\caption{The $3$- and $4$-antiprisms.}
\label{antiprisms}
\end{figure}
A natural measure of the complexity of a right-angled ideal polyhedron is its number of ideal vertices. By this measure, the two simplest right-angled ideal polyhedra are the $3$- and $4$-antiprisms, pictured in Figure \ref{antiprisms}. (The general definition of a $k$-antiprism, $k \geq 5$, should be evident from the figure.)
\begin{lemma}\label{simple right} The only right-angled ideal polyhedra with fewer than ten ideal vertices are the $3$- and $4$-antiprisms. \end{lemma}
\begin{proof} By a \textit{polyhedron} we mean a $3$-complex with a single $3$-cell whose underlying topological space is the $3$-dimensional ball, such that no two faces that share an edge $e$ have vertices in common other than the endpoints of $e$. By Andreev's theorem, there is a right-angled ideal polyhedron in $\mathbb{H}^3$ with the combinatorial type of a given polyhedron if and only if each vertex has valence $4$, there are no prismatic $3$- or $4$-circuits, and the following criterion holds: given faces $f_0$, $f_1$, and $f_2$ such that $f_0$ and $f_2$ each share an edge with $f_1$, the faces $f_0$ and $f_2$ have no vertices in common that do not lie on $f_1$. (A prismatic $k$-circuit is a sequence of $k$ faces $f_0, f_1, \hdots, f_{k-1}$ such that no three of the faces have a common vertex but for each $i$, $f_i$ shares an edge with $f_{i-1}$ and $f_{i+1}$, taking indices modulo $k$.)
If $f$ is a $k$-gon face of a right-angled ideal polyhedron $\mathcal{P}$, the final criterion above implies that $\mathcal{P}$ has at least $2k$ ideal vertices, since each face that abuts $f$ contributes at least one unique vertex to $\mathcal{P}$. Thus any right-angled ideal polyhedron with fewer than $10$ ideal vertices has only triangular and quadrilateral faces. Let $v$, $e$, and $f$ be the number of vertices, edges and faces of $\mathcal{P}$, respectively. Since each vertex has valence $4$, we have $4v = 2e$. If $\mathcal{P}$ has only triangular faces, then $2e = 3f$, and an Euler characteristic calculation yields
$$ v - e + f = \frac{3f}{4} - \frac{3f}{2} + f = 2. $$
Therefore in this case $f=8$, and it is easy to see that $\mathcal{P}$ must be the $3$-antiprism.
If $\mathcal{P}$ has a quadrilateral face $f$ and only $8$ vertices, then by the final criterion of the first paragraph all faces adjacent to it are triangles. The union of $f$ with the triangular faces adjacent to it is thus a subcomplex that is homeomorphic to a disk and contains all vertices of $\mathcal{P}$. It follows that $\mathcal{P}$ is the $4$-antiprism. Since each vertex of a right-angled ideal polyhedron is $4$-valent, the number of vertices is even, and the lemma follows. \end{proof}
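The Euler-characteristic count in the proof above is easy to check by machine. The sketch below is purely illustrative (it is not part of the proof): it encodes the relations $4v = 2e$ (each vertex is $4$-valent) and $3t + 4q = 2e$ (each edge lies on two faces) for a $4$-valent polyhedron with $t$ triangular and $q$ quadrilateral faces, and confirms that Euler's formula forces $t = 8$ and $v = 6 + q$, matching the octahedron ($q=0$, $v=6$) and the $4$-antiprism ($q=2$, $v=8$).

```python
from fractions import Fraction

def counts_from_faces(t, q):
    """Vertex and edge counts for a 4-valent polyhedron with
    t triangular and q quadrilateral faces, via 4v = 2e and 3t + 4q = 2e."""
    e = Fraction(3 * t + 4 * q, 2)
    v = e / 2
    return v, e

def euler_characteristic(t, q):
    v, e = counts_from_faces(t, q)
    return v - e + (t + q)

# Euler's formula v - e + f = 2 forces t = 8, whatever q is:
solutions = [(t, q) for t in range(1, 30) for q in range(0, 10)
             if euler_characteristic(t, q) == 2]
assert all(t == 8 for t, q in solutions)

# With t = 8 fixed, the vertex count is v = 6 + q.
print([(q, int(counts_from_faces(8, q)[0])) for q in (0, 2, 4)])
```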
It is well known that the $3$-antiprism $\mathcal{P}$, better known as the octahedron, is \textit{regular}: there is a symmetry exchanging any two ordered triples $(v,e,f)$ where $v \subset e \subset f$ are faces of dimension $0$, $1$, and $2$, respectively. Now suppose $M$ is a manifold with a decomposition into polyhedra $\{\mathcal{P}_i\}$ such that for each $i$, there is an isometry $\gamma_i \colon\thinspace \mathcal{P} \to \mathcal{P}_i$. If $\mathcal{P}_i$ and $\mathcal{P}_j$ are polyhedra in this decomposition, containing faces $f$ and $f'$, respectively, such that $\phi_f (f) = f'$, then $\gamma_j^{-1} \phi_f \gamma_i$ takes one face of $\mathcal{P}$ isometrically to another; hence it is realized by a symmetry $\sigma$ of $\mathcal{P}$. It follows that $\gamma_{f'} \circ \gamma_j \sigma \gamma_i^{-1} = \phi_f$. Thus Lemma \ref{symm comm} implies:
\begin{prop}\label{P_1 comm} Let $\Gamma_1$ be the group generated by reflections in the sides of the octahedron $\mathcal{P}$, and let $\Sigma_1$ be its symmetry group. If $M$ is a complete hyperbolic manifold that decomposes into copies of $\mathcal{P}$, then $\pi_1 M < \Gamma_1 \rtimes \Sigma_1$. In particular, $\pi_1 M$ is commensurable to $\Gamma_1$. \end{prop}
The $4$-antiprism does not have quite enough symmetry to directly apply Lemma \ref{symm comm}, but its double across a square face is the cuboctahedron, the semi-regular polyhedron pictured on the right-hand side of Figure \ref{P_1 and P_2}. The cuboctahedron has a symmetry exchanging any two square or triangular faces, and each symmetry of each face extends over the cuboctahedron.
\begin{figure}
\begin{center}
\input{P1_and_P2.pdf_t}
\end{center}
\caption{The ideal octahedron $\mathcal{P}_1$ and cuboctahedron $\mathcal{P}_2$.}
\label{P_1 and P_2}
\end{figure}
\begin{prop}\label{P_2 comm} Let $\Gamma_2$ be the group generated by reflections in the sides of the cuboctahedron, and let $\Sigma_2$ be its group of symmetries. If $M$ is a complete hyperbolic $3$-manifold that decomposes into copies of the cuboctahedron, then $\pi_1(M)<\Gamma_2 \rtimes \Sigma_2$. If $M$ decomposes into $4$-antiprisms, then $\pi_1(M)$ has an index-$2$ subgroup contained in $\Gamma_2 \rtimes \Sigma_2$. \end{prop}
\begin{proof} Since face pairing isometries must in particular preserve combinatorial type, it follows from Lemma \ref{symm comm} as argued above Proposition \ref{P_1 comm} that if $M$ decomposes into copies of the cuboctahedron, then $\pi_1(M) < \Gamma_2 \rtimes \Sigma_2$.
Opposite square faces of the $4$-antiprism inherit opposite colors from any checkering. Thus if a hyperbolic $3$-manifold $M$ has a checkered decomposition into right-angled ideal $4$-antiprisms, they may be identified in pairs along, say, dark square faces, yielding a decomposition into right-angled ideal cuboctahedra. The proof of Theorem \ref{rt ang cox} shows that if the decomposition of $M$ is not checkered, there is a twofold cover $\widetilde{M} \to M$ that inherits a checkered decomposition. Hence if $M$ decomposes into $4$-antiprisms, $\widetilde{M}$ decomposes into copies of the cuboctahedron. The final claim of the proposition follows.
\end{proof}
The results of \cite{Hatcher} imply that for $j=1,2$, $\Gamma_j \rtimes \Sigma_j$ is isomorphic to the arithmetic group $\mathrm{PGL}_2(\mathcal{O}_j)$, where $\mathcal{O}_j$ is the ring of integers of $\mathbb{Q}(\sqrt{-j})$.
The fundamental domain for $\mathrm{Sym}(\mathcal{P}_1)$ pictured in Figure \ref{P_1 and P_2} intersects $\partial \mathcal{P}_1$ in a $(2,3,\infty)$ triangle. We denote by $\Lambda$ the group generated by reflections in the sides of this triangle. The fundamental domain for $\mathrm{Sym}(\mathcal{P}_2)$ intersects a triangular face in a $(2,3,\infty)$ triangle as well; thus $\Lambda$ embeds in $\Gamma_j \rtimes \mathrm{Sym}(\mathcal{P}_j)$ for $j=1$ and $2$. The lemma below records an observation we will find useful in the following sections.
\begin{lemma}\label{transitive} For $j=1,2$, let $\mathcal{T}_j$ be the tiling of $\mathbb{H}^3$ by $\Gamma_j$-conjugates of $\mathcal{P}_j$. The action of $\Gamma_j \rtimes \mathrm{Sym}(\mathcal{P}_j)$ is transitive on the set of all geodesic planes that contain a triangular face of a tile of $\mathcal{T}_j$. \end{lemma}
This lemma follows from the fact, evident by inspection of the fundamental domains in Figure \ref{P_1 and P_2}, that $\mathrm{Sym}(\mathcal{P}_j)$ acts transitively on triangular faces of $\mathcal{P}_j$.
\subsection{A family of one-cusped manifolds}\label{sec:One cusp}
In this section, we exhibit an infinite family $\{N_n\}$ of pairwise incommensurable manifolds that are not commensurable to \emph{any} 3-dimensional reflection group. Each of these manifolds has a single cusp, and they are constructed using an explicit right-angled ideal polyhedral decomposition.
\begin{dfn} For $n\geq 2$, let $\{ \mathcal{P}_i \}_{i=1}^{n+2}$ be a collection of right-angled ideal polyhedra embedded in $\mathbb{H}^3$ with the following properties.
\begin{enumerate}
\item $\mathcal{P}_i$ is an octahedron if $i \in \{1,n+2\}$, and a cuboctahedron otherwise.
\item There is an ideal vertex $\hat{v}$ shared by all the polyhedra.
\item $\mathcal{P}_i \cap \mathcal{P}_j \neq \emptyset$ if and only if $i = j\pm1$.
\item If $\mathcal{P}_i$ and $\mathcal{P}_j$ meet, then they share a triangular face.
\end{enumerate}
Define $\mathcal{D}_n = \bigcup_{i=1}^{n+2} \, \mathcal{P}_i$. \end{dfn}
An isometric copy in $\mathbb{H}^3$ of such a collection is determined by an embedding of $\mathcal{P}_1$, a choice of $\hat{v}$, and a choice of triangular face $\mathcal{P}_1 \cap \mathcal{P}_2$. If we use the upper half-space model for $\mathbb{H}^3$ then $\mathrm{Isom}^+(\mathbb{H}^3)$ is identified with $\mathrm{PSL}_2(\mathbb{C})$, by isometrically extending the action by M\"obius transformations on $\partial \mathbb{H}^3 = \mathbb{C}\cup\{\infty\}$. Using this model, we apply an isometry so that $\hat{v} = \infty$, and then project the faces of the $\mathcal{P}_i$'s to $\partial \mathbb{H}^3$ to get a cell decomposition of $\mathbb{C}$. This decomposition is pictured for $n=2$ in Figure \ref{one}.
Each 2-cell in the figure corresponds to a face of some $\mathcal{P}_i$ which is not shared by any other $\mathcal{P}_j$. Shade half of the faces of $\mathcal{P}_1$ and $\mathcal{P}_{n+2}$ gray and label them $A, B, C, D, E, F, G$, and $H$ as indicated in the figure. Label the square face of $\mathcal{P}_2$ which shares an edge with $B$ (respectively $A$, $D$) as $X_1$ (respectively $Y_1$, $Z_1$). Label the square face opposite $X_1$ as $X_1'$ and so on. Now use the parabolic translation ${\sf c}$ that takes $\mathcal{P}_2$ to $\mathcal{P}_3$ to translate the labeling to the other cuboctahedra, adding one to the subscript every time we apply ${\sf c}$.
\begin{figure}[h]
\setlength{\unitlength}{.1in}
\begin{picture}(40,17)
\put(-3,0) { \includegraphics[width=4.5in]{one.pdf} }
\put(7.2,8){$A$}
\put(5,14){$B$}
\put(2.75,8){$C$}
\put(5,2){$D$}
\put(13.8,14){$X_1$}
\put(13.8,2){$Z_1$}
\put(10.5,8){$Y_1$}
\put(17.5,8){$Y_1'$}
\put(13.8,9.6){$Z_1'$}
\put(13.8,6.5){$X_1'$}
\put(24.3,14){$X_2$}
\put(24.3,2){$Z_2$}
\put(21,8){$Y_2$}
\put(28,8){$Y_2'$}
\put(24.3,9.6){$Z_2'$}
\put(24.3,6.5){$X_2'$}
\put(36,8){$F$}
\put(33.8,14){$G$}
\put(31.5,8){$H$}
\put(33.8,2){$E$}
\end{picture}
\caption{$\mathcal{D}_2$.}
\label{one}
\end{figure}
Define the isometries ${\sf a, b, f, g, x, y, z} \in \text{Isom}^+(\mathbb{H}^3)$ as follows.
\begin{itemize}
\item ${\sf a}$ takes $A$ to $B$ so that their shared vertex is taken to the vertex shared by $B$ and $C$.
\item ${\sf b}$ takes $C$ to $D$ so that their shared vertex is taken to the vertex shared by $B$ and $D$.
\item ${\sf f}$ takes $E$ to $F$ so that their shared vertex is taken to the vertex shared by $F$ and $G$.
\item ${\sf g}$ takes $G$ to $H$ so that their shared vertex is taken to the vertex shared by $H$ and $F$.
\item ${\sf x}$ takes $Y_1'$ to $X_1$ so that their shared vertex is taken to the vertex shared by $X_1$ and $Z_1'$.
\item ${\sf y}$ takes $Z_1'$ to $Z_1$ so that the vertex shared by $Z_1'$ and $Y_1'$ is taken to the vertex shared by $X_1'$ and $Z_1$.
\item ${\sf z}$ takes $X_1'$ to $Y_1$ so that their shared vertex is taken to the vertex shared by $Y_1$ and $Z_1$.
\end{itemize}
The set $S_n$ defined below is a collection of face pairings for $\{ \mathcal{P}_i \}_1^{n+2}$. Here we write ${\sf x}^{\sf c} = {\sf c}\,{\sf x}\,{\sf c}^{-1}$.
\[ S_n \ = \ \left\{ {\sf a, b, f, g, x, y, z, x^c, y^c, z^c,} \ldots, {\sf x}^{{\sf c}^{n-1}}, {\sf y}^{{\sf c}^{n-1}}, {\sf z}^{{\sf c}^{n-1}}\right\}\]
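The conjugation notation ${\sf x}^{\sf c} = {\sf c}\,{\sf x}\,{\sf c}^{-1}$ can be illustrated concretely with $2\times 2$ matrices acting as M\"obius transformations. The matrices in the sketch below are hypothetical stand-ins, not the actual face pairings of $N_n$: ${\sf x}$ is a parabolic fixing $0$ and ${\sf c}$ is the translation $z \mapsto z+t$, and conjugation moves the fixed point of ${\sf x}$ to $t$, just as conjugating the face pairings of $\mathcal{P}_2$ by powers of ${\sf c}$ produces face pairings for the other cuboctahedra.

```python
def mul(A, B):
    """Multiply 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

t = 2 + 1j                   # hypothetical translation length for c
x = [[1, 0], [1, 1]]         # parabolic Mobius transformation fixing z = 0
c = [[1, t], [0, 1]]         # the translation z -> z + t
c_inv = [[1, -t], [0, 1]]

x_c = mul(mul(c, x), c_inv)  # x^c = c x c^{-1}

# x^c is still parabolic (trace 2), and its fixed point is now z = t:
# for [[a, b], [g, d]], fixed points solve g z^2 + (d - a) z - b = 0.
(a, b), (g, d) = x_c
assert abs(a + d - 2) < 1e-12
assert abs(g*t**2 + (d - a)*t - b) < 1e-12
```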
By examining the combinatorics of these face pairings, one deduces that the quotient by these side pairings is a complete hyperbolic manifold $N_n$ with finite volume and a single cusp. (See, for instance, \cite[Theorem 11.1.6]{Ratcliffe}.) By Poincar\'e's polyhedron theorem \cite[Theorem 11.2.2]{Ratcliffe}, $\Delta_n = \langle S_n \rangle$ is discrete and $\mathcal{D}_n$ is a fundamental domain for $\Delta_n$. Furthermore, in the manner of \cite{CD}, one can write down explicit matrices in $\text{PSL}_2(\mathbb{C})$ which represent these isometries and see that the trace field for $\Delta_n$ is $\mathbb{Q}(i, \sqrt{2})$. Hence, $N_n \cong \mathbb{H}^3/\Delta_n$ is non-arithmetic.
\begin{dfn} The \textit{commensurator} of $\Gamma < \mathrm{Isom}(\mathbb{H}^3)$ is defined as
$$ \mathrm{Comm}(\Gamma) \doteq \{ \sfg \in \mathrm{Isom}(\mathbb{H}^3)\,|\,[\Gamma:\sfg\Gamma\sfg^{-1} \cap \Gamma]<\infty\}. $$ \end{dfn}
It is easy to see that every group commensurable with $\Gamma$ is contained in $\mathrm{Comm}(\Gamma)$. A well-known theorem of Margulis asserts that if $\Gamma$ is discrete and acts with finite covolume, then $\mathrm{Comm}(\Gamma)$ is itself discrete if and only if $\Gamma$ is not arithmetic (see \cite[(1) Theorem]{Margulis}).
Let $G_n = \mathrm{Comm}(\Delta_n)$ and $O_n = \mathbb{H}^3/G_n$. Since $\Delta_n$ is a non-arithmetic Kleinian group, $G_n$ is discrete and $O_n$ is an orbifold. We will use the techniques of Goodman--Heard--Hodgson \cite{GHH} to prove the following proposition.
\begin{prop}\label{OP comm}
Every element of $G_n$ is orientation preserving. Hence, $\Delta_n$ is not commensurable to any 3-dimensional reflection group.
\end{prop}
Theorem \ref{t: One-cusp examples} will follow immediately from the proposition above upon observing that the $N_n$ are pairwise incommensurable. This follows most easily from a \textit{Bloch invariant} computation. The Bloch invariant of a hyperbolic $3$-manifold $M$ is the sum, considered as an element of $\mathcal{P}(\mathbb{C})$, of the parameters of the tetrahedra in a tetrahedral decomposition of $M$; each parameter is an element of $\mathbb{C}-\{0,1\}$. For a field $k$, the \textit{pre-Bloch group} $\mathcal{P}(k)$ is the quotient of the free $\mathbb{Z}$-module on $k-\{0,1\}$ by a ``five-term relation'' that can be interpreted geometrically as relating different decompositions of the union of two tetrahedra. The Bloch group $\calb(k)$ is a subgroup of $\mathcal{P}(k)$; see e.g.~\cite{Neumanns_bloch}.
We will use the decomposition of $N_n$ into a collection of $2$ right-angled ideal octahedra and $n$ cuboctahedra. These may each be divided into tetrahedra yielding a decomposition of $N_n$. The parameters of the tetrahedra contained in the octahedron sum to an element $\beta_1\in \calb(\mathbb{Q}(i))$, and those of the cuboctahedron sum to an element $\beta_2\in \calb(\mathbb{Q}(i\sqrt{2}))$. It can be shown that $\beta_1$ and $\beta_2$ are linearly independent in $\calb(\mathbb{Q}(i,\sqrt{2}))$, and since $\det\left(\begin{smallmatrix} 2 & n \\ 2 & m \end{smallmatrix}\right) = 2(m-n) \neq 0$ for $m \neq n$, this in turn implies that the invariants $2\cdot \beta_1 + n\cdot\beta_2$ of the $N_n$ are pairwise linearly independent. Hence the $N_n$ are pairwise incommensurable; see \cite[Prop.~4.5]{CD} for an analogous proof.
In proving Proposition \ref{OP comm}, we give a partial description of the commensurator $G_n$. We use the algorithm of \cite{GHH} to perform such computations here and in Section \ref{sec:lobel}, so we briefly introduce the set-up below. The \textit{Lorentz inner product} on $\mathbb{R}^4$ is the indefinite bilinear pairing
\[ \langle {\bf v, w} \rangle \ = \ v_1 w_1+v_2w_2+v_3w_3-v_4w_4.\]
The \textit{hyperboloid model} of $\mathbb{H}^3$ is the set $\{ \mathbf{v} \, | \, \langle {\bf v, v} \rangle=-1, \, v_4>0 \}$ equipped with the Riemannian metric on tangent spaces determined by the Lorentz inner product. The \textit{positive light cone} is the set $L^+ = \{ {\bf v} \, | \, \langle {\bf v,v} \rangle=0, \, v_4\geq0 \}$. The \textit{ideal boundary} $\partial \mathbb{H}^3$ is identified with the set $PL^+$ of equivalence classes of $\bv \in L^+$, where $\bv \sim \bw$ if $\bw = \lambda \bv$ for $\lambda \in \mathbb{R}^+$.
Given a nonzero vector ${\bf v} \in L^+$, we say the set $H_{\bf v} = \{ {\bf w} \in \mathbb{H}^3 \, | \, \langle {\bf v, w} \rangle =-1\}$ is a \textit{horosphere centered at $v = [\bv]$}. If $\alpha \in \mathbb{R}^+$, the horosphere $H_{\alpha {\bf v}}$ is centered at the same ideal point as $H_{\bf v}$, and if $\alpha \leq 1$, then $H_{\bf v}$ is contained in the horoball determined by $\alpha {\bf v}$. This correspondence between nonzero vectors in $L^+$ and horospheres in $\mathbb{H}^3$ is a bijection. Hence, we call the vectors in $L^+$ \emph{horospherical vectors}.
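These definitions lend themselves to quick numerical sanity checks. The sketch below is illustrative only (the vectors are convenient choices, not data from the constructions that follow): it verifies that a sample vector lies on $L^+$, that a sample point lies on the hyperboloid and on the corresponding horosphere, and that scaling the horospherical vector by $\alpha \leq 1$ enlarges the associated horoball.

```python
import math

def lorentz(v, w):
    """The Lorentz inner product <v, w> = v1 w1 + v2 w2 + v3 w3 - v4 w4."""
    return v[0]*w[0] + v[1]*w[1] + v[2]*w[2] - v[3]*w[3]

v_hat = (2.0, 0.0, 0.0, 2.0)    # a light-like vector, so [v_hat] is an ideal point
p = (0.75, 0.0, 0.0, 1.25)      # a point of the hyperboloid model

assert abs(lorentz(v_hat, v_hat)) < 1e-12    # v_hat lies on L^+
assert abs(lorentz(p, p) + 1.0) < 1e-12      # p lies on the hyperboloid H^3
assert abs(lorentz(v_hat, p) + 1.0) < 1e-12  # p lies on the horosphere H_{v_hat}

# For alpha <= 1, H_{v_hat} lies in the horoball bounded by H_{alpha v_hat},
# which consists of the points w with <alpha v_hat, w> >= -1.
alpha = 0.5
scaled = tuple(alpha * x for x in v_hat)
assert lorentz(scaled, p) >= -1.0
```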
The group $\text{Isom}(\mathbb{H}^3)$ is the subgroup $\text{O}_0(3,1) \subset \text{GL}_4(\mathbb{R})$ (acting by matrix multiplication) which preserves the Lorentz inner product and the sign of the last coordinate of each vector in $\mathbb{R}^4$.
Suppose $M=\mathbb{H}^3/\Lambda$ is a complete finite volume hyperbolic orbifold with $k$ cusps. For each cusp $c_i$ of $M$, choose a horospherical vector ${\bf v}_i$ for which $H_{{\bf v}_i}$ projects to a cross section of $c_i$ under the covering map $\mathbb{H}^3 \rightarrow M$. Then $V = \Lambda \cdot \{ {\bf v}_i \}_1^k$ is $\Lambda$-invariant and determines a $\Lambda$-invariant set of horospheres. The convex hull $C$ of $V$ in $\mathbb{R}^4$ is called the \emph{Epstein--Penner convex hull}. Epstein and Penner show that $\partial C$ consists of a countable set of 3-dimensional faces $F_i$, where each $F_i$ is a finite-sided Euclidean polyhedron in $\mathbb{R}^4$. Furthermore, this decomposition of $\partial C$ projects to a $\Lambda$--invariant tiling $\mathcal{T}$ of $\mathbb{H}^3$ \cite[Prop 3.5 and Theorem 3.6]{epstein_euclidean_1988}. If $M$ is a manifold then the quotient of this tiling by $\Lambda$ gives a cell decomposition of $M$. We refer to the tiling as a {\em canonical tiling} for $M$ and to the cell decomposition as a {\em canonical cell decomposition} of $M$. If we make a different choice for $\{ {\bf v}_i\}_1^k$ by multiplying each vector by a common positive scalar then the resulting Epstein--Penner convex hull differs from $C$ by multiplication by this scalar. The combinatorics of the boundary of this scaled convex hull is identical to that of $C$ and projects exactly to the tiling $\mathcal{T}$. Hence, we obtain all possibilities for canonical tilings using initial sets of the form $\{ {\bf v}_1, \alpha_2 {\bf v}_2, \ldots, \alpha_k {\bf v}_k \}$.
Consider the group of symmetries $\text{Sym}(\mathcal{T}) \subset \text{Isom}(\mathbb{H}^3)$. Since $\mathcal{T}$ is $\Lambda$-invariant, $\Lambda \subset \text{Sym}(\mathcal{T})$. On the other hand, $\text{Sym}(\mathcal{T})$ acts on the set $V$ of horospherical vectors. It follows that $\text{Sym}(\mathcal{T})$ is discrete \cite[Lemma 2.1]{GHH} and therefore $\mathbb{H}^3/\Lambda \rightarrow \mathbb{H}^3/\text{Sym}(\mathcal{T})$ is a finite cover between orbifolds.
Suppose that $\Lambda$ is non-arithmetic. Since $\text{Comm}(\Lambda)$ is the unique maximal discrete group that contains $\Lambda$, we have $\text{Sym}(\mathcal{T}) \subset \text{Comm}(\Lambda)$ for every canonical tiling $\mathcal{T}$. Furthermore, every canonical tiling for $\text{Comm}(\Lambda)$ is also a canonical tiling for $\Lambda$; hence $\text{Comm}(\Lambda) = \text{Sym}(\mathcal{T})$ for some canonical tiling $\mathcal{T}$ for $\Lambda$.
We say that a set $\{ \mathcal{P}_i \}$ of ideal polyhedra $\Lambda$-{\em generates} the tiling $\mathcal{T}$ if every tile of $\mathcal{T}$ is of the form $\gamma \mathcal{P}_i$ for some $\gamma \in \Lambda$ and some $i$. The canonical tilings can be determined using elementary linear algebra. According to \cite[Lemma 3.1]{GHH}, a set $\{ \mathcal{P}_i \}$ of ideal polyhedra $\Lambda$-generates the canonical tiling associated to the set $V$ if
\begin{enumerate}
\item $\Lambda \cdot \{ \mathcal{P}_i\}$ is a tiling of $\mathbb{H}^3$,
\item given any vertex of any $\mathcal{P}_i$ there is a horospherical vector ${\bf v} \in V$ so that the vertex lies at the center of the horosphere $H_{\bf v}$,
\item\label{coplanar} the set of horospherical vectors corresponding to the vertices of any given $\mathcal{P}_i$ lie on a single plane in $\mathbb{R}^4$,
\item\label{positive tilt} if $\mathcal{P}_i$ and $\gamma \mathcal{P}_j$ are two tiles that meet in a common face then the Euclidean planes in $\mathbb{R}^4$ determined by the two tiles meet convexly.
\end{enumerate}
The last two conditions can be rephrased using linear algebra. If $\{ {\bf v}_1,\hdots, {\bf v}_s \}$ are the horospherical vectors for $\mathcal{P}_i$ and ${\bf w}$ is a horospherical vector for a neighboring tile which is not shared by $\mathcal{P}_i$, then there exists a {\em normal vector} ${\bf n} \in \mathbb{R}^4$ for $\mathcal{P}_i$ such that
\begin{enumerate}
\item[(3)] (coplanar) ${\bf n} \cdot {\bf v}_k =1$ for every $k=1, \ldots, s$, and
\item[(4)] (positive tilt) ${\bf n} \cdot {\bf w} >1$,
\end{enumerate}
where $\cdot$ denotes the standard Euclidean inner product. Observe that these conditions are invariant under $\text{Isom}(\mathbb{H}^3)$, for if ${\bf n} \cdot \bv=\alpha$ and $A \in \text{Isom}(\mathbb{H}^3)$ then $({\bf n} A^{-1}) \cdot A\bv= \alpha$.
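The invariance assertion above can be verified numerically. In the sketch below (the boost parameter is an arbitrary choice), $A$ is a Lorentz boost in $\mathrm{O}_0(3,1)$, ${\bf n}=(0,0,0,1/2)^T$ is the normal vector that appears in the proof of Proposition \ref{tiling}, and we check both that $A$ preserves the Lorentz form and that $({\bf n}A^{-1})\cdot(A\bv) = {\bf n}\cdot\bv$.

```python
import math

def boost(t):
    """Lorentz boost in the (x1, x4)-plane; an element of O_0(3,1)."""
    ch, sh = math.cosh(t), math.sinh(t)
    return [[ch, 0, 0, sh],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [sh, 0, 0, ch]]

def apply(A, v):
    """Matrix times column vector."""
    return [sum(A[i][j] * v[j] for j in range(4)) for i in range(4)]

def row_apply(v, A):
    """Row vector times matrix, as in n A^{-1}."""
    return [sum(v[i] * A[i][j] for i in range(4)) for j in range(4)]

def lorentz(v, w):
    return v[0]*w[0] + v[1]*w[1] + v[2]*w[2] - v[3]*w[3]

def euclid(v, w):
    return sum(a * b for a, b in zip(v, w))

A, A_inv = boost(0.7), boost(-0.7)   # a sample isometry and its inverse
n = [0.0, 0.0, 0.0, 0.5]             # the normal vector used in the next proof
v = [2.0, 0.0, 0.0, 2.0]             # a horospherical (light-like) vector

Av = apply(A, v)
assert abs(lorentz(Av, Av) - lorentz(v, v)) < 1e-12   # A preserves the form
assert abs(euclid(row_apply(n, A_inv), Av) - euclid(n, v)) < 1e-12
```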
\begin{prop} \label{tiling} Let $\Delta_n < \mathsf{O}_0(3,1)$ be determined by the following embedding of the $\mathcal{P}_i$ in $\mathbb{H}^3$: the isometry group of $\mathcal{P}_2$ fixes $(0,0,0,1)^T$, the ideal vertex $\hat{v}$ shared by the $\mathcal{P}_i$ is $[\hat\bv]$, where $\hat\bv = (2,0,0,2)^T$, and $\mathcal{P}_1 \cap \mathcal{P}_2$ has ideal vertices $[\hat\bv],[\bv_9], [\bv_4]$, where $\bv_4 = (1,1,-\sqrt{2},2)^T$ and $\bv_9 = (1,-1,-\sqrt{2},2)^T$. Let $\mathcal{T}_n$ be the tiling of $\mathbb{H}^3$ determined by $V_n = \Delta_n \cdot \{\hat\bv\}$. The tiles of $\mathcal{T}_n$ are the $\Delta_n$-orbits of the $\mathcal{P}_i$.
\end{prop}
\proof
If $X$ is a matrix with four rows, we denote the $i^{\text{th}}$ column of $X$ by $x_i$. When the columns of $X$ lie in $L^+$ and the convex hull of the corresponding ideal points is an ideal polyhedron, we call this polyhedron $\mathcal{P}_X$. Consider the matrices
\[M \ = \ \left(
\begin{array}{llllllllllll}
2 & 1 & 0 & 1 & 0 & -1 & -2 & -1 & 1 & -1 & -1 & 1 \\
0 & 1 & 2 & 1 & -2 & -1 & 0 & -1 & -1 & 1 & 1 & -1 \\
0 & \sqrt{2} & 0 & -\sqrt{2} & 0 & \sqrt{2} & 0 & -\sqrt{2} & -\sqrt{2} & -\sqrt{2} & \sqrt{2} & \sqrt{2} \\
2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2
\end{array}
\right)\]
and
\[N \ = \ \left(
\begin{array}{llllll}
\sqrt{2} & 0 & 0 & -\sqrt{2} & 0 & 0 \\
0 & \sqrt{2} & 0 & 0 & -\sqrt{2} & 0 \\
0 & 0 & \sqrt{2} & 0 & 0 & -\sqrt{2} \\
\sqrt{2} & \sqrt{2} & \sqrt{2} & \sqrt{2} & \sqrt{2} & \sqrt{2}
\end{array}
\right).\]
The columns of $M$ and $N$ are horospherical vectors representing horospheres centered at the ideal vertices of an ideal cuboctahedron and a regular ideal octahedron, respectively. These matrices are chosen so that, for $X=M,N$, the isometries in $\text{Isom}(\mathcal{P}_X)$ all fix $(0,0,0,1)^T \in \mathbb{H}^3$ and the columns of $X$ are $\text{Isom}(\mathcal{P}_X)$--invariant. Furthermore, if ${\sf h}$ is the orientation-preserving hyperbolic isometry that takes the triangular face $(n_1, n_2, n_3)$ of $\mathcal{P}_N$ to the triangular face $(m_1, m_9, m_4)$ of $\mathcal{P}_M$ so that ${\sf h}(\mathcal{P}_N) \cap \mathcal{P}_M$ is exactly this face, then our choices of horospheres agree on this intersection. That is, ${\sf h}(n_1, n_2, n_3)=(m_1, m_9, m_4)$.
Let $\mathcal{P}_1={\sf h}(\mathcal{P}_N)$ and $\mathcal{P}_2=\mathcal{P}_M$. Embed the remaining polyhedra in $\{ \mathcal{P}_i \}_1^{n+2}$, as described above, so that the common ideal vertex is the center of the $m_1$ horosphere. Choose horospherical vectors for the $\mathcal{P}_i$'s so that they are $\text{Isom}(\mathcal{P}_i)$--invariant and coincide with the horospherical vectors of $\mathcal{P}_{i\pm1}$ wherever ideal vertices are shared.
Notice that the face pairings of $\mathcal{P}_i$ in $S_n$ are all compositions of elements of $\text{Isom}(\mathcal{P}_i)$ with parabolics that fix an ideal vertex of $\mathcal{P}_i$. Since we have chosen our horospherical vectors to be $\text{Isom}(\mathcal{P}_i)$--invariant, it follows that our choice of horospheres is compatible with the face pairings in $S_n$. Hence, the choice of horospheres descends to a choice of horospherical torus in $N_n$ and therefore determines a canonical cell decomposition of $N_n$ and a canonical tiling of $\mathbb{H}^3$ whose symmetry group is $G_n$. To prove the proposition, we need to show that this tiling is $\mathcal{T}_n$.
Take ${\bf n} = (0,0,0,1/2)^T$. Then ${\bf n} \cdot m_i =1$ for $i=1, \ldots, 12$ and $\sqrt{2}\, {\bf n} \cdot n_i =1$ for $i=1,\ldots, 6$. Therefore by Goodman--Heard--Hodgson's criterion (\ref{coplanar}), the horospherical vertices of ${\sf k} (\mathcal{P}_i)$ are coplanar for every ${\sf k} \in \Delta_n$. It remains only to show that condition (\ref{positive tilt}) holds for an adjacent pair of cuboctahedra that meet along a triangular face, an adjacent pair of cuboctahedra that meet along a square face, and an octahedron adjacent to a cuboctahedron.
If $Q$ is a cuboctahedron adjacent to $\mathcal{P}_M$ sharing the triangular face $(m_1, m_9, m_4)$, with $\text{Isom}(Q)$--invariant horospherical vectors which agree with $(m_1, m_9, m_4)$, then ${\bf w}=(7, 1, -5 \sqrt{2}, 10)^T$ is a horospherical vector for $Q$ which is not shared by $\mathcal{P}_M$, and ${\bf n} \cdot {\bf w} = 5 >1$. If $Q$ is a cuboctahedron adjacent to $\mathcal{P}_M$ sharing the square face $(m_1, m_2, m_3, m_4)$, with $\text{Isom}(Q)$--invariant horospherical vectors which agree with $(m_1, m_2, m_3, m_4)$, then ${\bf w}=(3, 5, - \sqrt{2}, 6)^T$ is a horospherical vector for $Q$ which is not shared by $\mathcal{P}_M$, and ${\bf n} \cdot {\bf w} = 3 >1$. Finally, the octahedron ${\sf h}(\mathcal{P}_N)$ is adjacent to $\mathcal{P}_M$, sharing the face $(m_1, m_9, m_4)$; its horospherical vectors are invariant under the isometry group of ${\sf h}(\mathcal{P}_N)$ and agree with those of $\mathcal{P}_M$ along the shared face. The vector ${\bf w}=(2+2\sqrt{2}, 0, -2-2\sqrt{2}, 4+2\sqrt{2})^T$ is a horospherical vector for ${\sf h}(\mathcal{P}_N)$ which is not shared by $\mathcal{P}_M$, and ${\bf n} \cdot {\bf w} = 2+\sqrt{2}>1$.
\endproof
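The numerical claims in the preceding proof are easy to verify by machine. The illustrative sketch below transcribes the columns of $M$ and $N$ and checks that each lies on the light cone, that ${\bf n} = (0,0,0,1/2)^T$ satisfies the coplanarity condition for both polyhedra, and that the two cuboctahedral tilt computations give $5$ and $3$ as stated.

```python
import math

r2 = math.sqrt(2)

def lorentz(v, w):
    return v[0]*w[0] + v[1]*w[1] + v[2]*w[2] - v[3]*w[3]

def euclid(v, w):
    return sum(a * b for a, b in zip(v, w))

# Columns of M (cuboctahedron) and N (octahedron) from the proof.
M_cols = [(2, 0, 0, 2), (1, 1, r2, 2), (0, 2, 0, 2), (1, 1, -r2, 2),
          (0, -2, 0, 2), (-1, -1, r2, 2), (-2, 0, 0, 2), (-1, -1, -r2, 2),
          (1, -1, -r2, 2), (-1, 1, -r2, 2), (-1, 1, r2, 2), (1, -1, r2, 2)]
N_cols = [(r2, 0, 0, r2), (0, r2, 0, r2), (0, 0, r2, r2),
          (-r2, 0, 0, r2), (0, -r2, 0, r2), (0, 0, -r2, r2)]

n = (0, 0, 0, 0.5)

# Every column is a horospherical vector (it lies on the light cone).
assert all(abs(lorentz(v, v)) < 1e-12 for v in M_cols + N_cols)

# Coplanarity: n . m_i = 1 and sqrt(2) n . n_i = 1.
assert all(math.isclose(euclid(n, v), 1.0) for v in M_cols)
assert all(math.isclose(r2 * euclid(n, v), 1.0) for v in N_cols)

# Positive tilt for the two cuboctahedral neighbors of P_M.
w_tri = (7, 1, -5*r2, 10)   # neighbor across the triangular face (m_1, m_9, m_4)
w_sq = (3, 5, -r2, 6)       # neighbor across the square face (m_1, m_2, m_3, m_4)
for w in (w_tri, w_sq):
    assert abs(lorentz(w, w)) < 1e-12 and euclid(n, w) > 1
print(euclid(n, w_tri), euclid(n, w_sq))
```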
For $i \in \{2, n+1\}$, shade each face of $\mathcal{P}_i$ gray if it is identified with a face of an octahedron in the quotient. For the other cuboctahedra $\mathcal{P}_i$, color each triangular face red if it is identified with a face of $\mathcal{P}_{i-1}$. (In Figure \ref{one}, every white triangular face of $\mathcal{P}_3$ should be colored red.)
The tiles of $\mathcal{T}_n$ inherit a coloring from the coloring of the $\mathcal{P}_i$'s. We can further classify the triangular faces in cuboctahedral tiles of $\mathcal{T}_n$ into \emph{type I} and \emph{type II} triangles. A face of a cuboctahedral tile $T$ is type I if it has exactly one ideal vertex that is shared by a triangular face of $T$ of the opposite color. Triangular faces of cuboctahedra that are not type I are type II.
\proof[Proof of Proposition \ref{OP comm}]
Suppose ${\sf h} \in G_n - \Delta_n$. By \cite{GHH}, ${\sf h}$ is a symmetry of the tiling $\mathcal{T}_n$. The region $\mathcal{D}_n$ is a fundamental domain for $\Delta_n$, so by composing ${\sf h}$ with some element of $\Delta_n$, we may assume that ${\sf h}(\mathcal{P}_2) \in \{ \mathcal{P}_i \}_1^{n+2}$. It is clear that ${\sf h}$ must preserve the set of gray faces in the tiling; hence ${\sf h}(\mathcal{P}_2)$ is either $\mathcal{P}_2$ or $\mathcal{P}_{n+1}$.
The isometry ${\sf h}$ must also preserve the types of the triangular faces of cuboctahedra. By examining the combinatorics of the face pairings in $S_n$, we see that every cuboctahedron in the tiling has exactly two vertices that are shared by a pair of type I triangles. There is one such vertex for each of the two triangular colors on the tile. Let $v$ be the vertex of $\mathcal{P}_2$ which is shared by the two gray type I triangles of $\mathcal{P}_2$ and $w$ the vertex shared by the two white type I triangles. If ${\sf h}(\mathcal{P}_2)=\mathcal{P}_2$ then, by considering the coloring of $\mathcal{P}_2$, we see that ${\sf h}$ must be the order-$2$ elliptic isometry fixing $v$ and $w$. If, on the other hand, we have ${\sf h}(\mathcal{P}_2)=\mathcal{P}_{n+1}$, then ${\sf h}(v)$ must be the vertex shared by the two gray type I triangles of $\mathcal{P}_{n+1}$ and ${\sf h}(w)$ must be the vertex shared by the two red type I triangles of $\mathcal{P}_{n+1}$. The gray pattern on $\mathcal{P}_{n+1}$ forces ${\sf h}$ to be orientation-preserving.
\endproof
\section{Augmented links}\label{sec: augmented}
A rich class of examples that satisfy the hypotheses of Theorem \ref{rt ang cox} is that of the \textit{augmented links}. These were introduced by Adams \cite{Adams} and further studied in e.g.~\cite{LAD}, \cite{Purcell}, and \cite{Purcell_cusps}. In this section we will describe their construction and, in Section \ref{sec:low cx augmented}, classify up to scissors congruence the complements of augmented links with at most $5$ twist regions. We will discuss when an augmented link complement is commensurable with a right-angled reflection orbifold, and in Section \ref{sec:lobel} describe an infinite family of augmented link complements that do not have this property.
A link $L$ in $S^3$ with hyperbolic complement determines (not necessarily uniquely) an augmented link using a projection of $L$ which is \textit{prime} and \textit{twist-reduced}. We will regard a projection of $L$ as a $4$-valent graph in the plane, together with crossing information at each vertex, and use the term \textit{twist region} to denote either a maximal collection of bigon regions of the complement arranged end-to-end or an isolated crossing that is not adjacent to any bigon.
A projection is prime if there is no simple closed curve $\gamma$ in the projection plane intersecting it in exactly two points, with the property that each component of the complement of $\gamma$ contains a crossing. A projection is twist-reduced if for every simple closed curve $\gamma$ in the projection plane which intersects it in four points, such that two points of intersection are adjacent to one crossing and the other two are adjacent to another, there is a single twist region containing all crossings in one component of the complement of $\gamma$.
\begin{figure}
\includegraphics{augmentfig8.pdf}
\caption{Augmenting the figure-$8$ knot.}
\label{augmentfig8}
\end{figure}
An augmented link is obtained from a prime, twist-reduced projection by encircling each twist region with a single unknotted component, which we call a \textit{clasp}. This process is illustrated in Figure \ref{augmentfig8} for the figure-$8$ knot, pictured on the left-hand side with its twist regions in boxes. The augmented link that it determines is pictured in the middle of the figure. Each link with hyperbolic complement admits a prime, twist-reduced diagram, and the augmented link obtained from such a diagram also has hyperbolic complement (a direct proof of this fact is given in Theorem 6.1 of \cite{Purcell_cusps}). Thus every hyperbolic link complement in $S^3$ is obtained by Dehn surgery on some cusps of the complement of an augmented link.
Each clasp of an augmented link $L$ bounds a disk that has two points of transverse intersection with $L$. Given such a disk $D$, a family of homeomorphisms of $S^3-L$ is determined by cutting along the twice-punctured open disk $D-L$ and re-gluing by a rotation of angle $n\cdot 2\pi$, where $n \in \mathbb{Z}$. This changes the number of crossings in the twist region of $L$ encircled by the clasp bounding $D$ by $2n$. It follows that the link on the right-hand side of Figure \ref{augmentfig8} has a complement homeomorphic to that of the link in the middle. The complements of two augmented links that differ by only a single crossing in a twist region are not necessarily homeomorphic; however, we will see below that they are scissors congruent. We also have:
\begin{lemma}\label{projection reflection} Let $L$ be an augmented link. Reflection through the projection plane determines an automorphism of $S^3 - L$. \end{lemma}
This is because while such a reflection changes the sign of each crossing, it does not change the parity of the number of crossings per twist region.
Given an augmented link projection, the appendix to \cite{LAD} describes a decomposition of its complement into two isometric ideal polyhedra. These polyhedra may be checkered so that each white face lies in the projection plane and each dark face is an ideal triangle in a ``vertical'' twice-punctured disk. This is illustrated in Figure \ref{augmentpoly} for an augmented link with two twist regions.
\begin{figure}
\includegraphics{augmentpoly.pdf}
\caption{An augmented link, the associated polyhedron, and its \crush.}
\label{augmentpoly}
\end{figure}
On the left-hand side of the figure, the dotted lines divide each twice-punctured clasp disk into the union of two ideal triangles. We arrange for these disks to meet the projection plane transversely in the dotted lines, so the darkened ideal triangles lie above the projection plane and the others below it. Cutting the link complement along the clasp disks and the projection plane yields two ideal polyhedra, one above and one below the projection plane, with edges coming from the dotted arcs. These are isomorphic by reflection through the projection plane. Flattening the two-skeleton of the polyhedron above it onto the plane yields the polyhedron in the middle of the figure, an ideal octahedron, where each of the darkened half-disks on the left-hand side gives rise to two ideal triangles and the link itself has been shrunken to darkened rectangles at the vertices. (See also \cite[Figure 3]{Purcell}.)
If $L$ is an augmented link, after removing all crossings in each twist region, we call the polyhedron produced by cutting along the projection plane and clasp disks the \textit{ideal polyhedron associated to $L$}. This polyhedron may be checkered by coloring black the triangular faces that lie in clasp disks and white the faces that lie in the projection plane. Note also that each black triangular face has a unique ideal vertex corresponding to a clasp. The following lemma summarizes the construction of the appendix to \cite{LAD}, in our language.
\begin{lemma}\label{LAD decomp} If $L$ is an augmented link with hyperbolic complement, there is a right-angled checkered ideal polyhedron $\mathcal{P}$ in $\mathbb{H}^3$ combinatorially isomorphic to the ideal polyhedron associated to $L$. For a face $f$ of $\mathcal{P}$, let $\rho_f$ denote reflection in the plane containing $f$. Fix a white face $f_0$ of $\mathcal{P}$, and let $\overline{\mathcal{P}} = \rho_{f_0}(\mathcal{P})$, $\bar{f} = \rho_{f_0}(f)$ for each face $f$ of $\mathcal{P}$, and $\bar{v} = \rho_{f_0}(v)$ for each ideal vertex. Then the quotient of $\mathcal{P} \cup \overline{\mathcal{P}}$ by the following face pairing gives a right-angled ideal decomposition of $S^3-L$. \begin{enumerate}
\item If $f \neq f_0$ is a white face of $\mathcal{P}$, let $\phi_f = \rho_{f_0} \circ \rho_f$, taking $f$ to $\bar{f} \subset \overline{\mathcal{P}}$.
\item If $f$ is a black triangular face of $\mathcal{P}$, let $f'$ be the black face of $\mathcal{P}$ that shares the ideal vertex $v$ of $f$ corresponding to a clasp. \begin{enumerate}
\item\label{even twist} If the corresponding twist region has an even number of crossings, let $\phi_f$ be the unique orientation-preserving isometry with $\phi_f(f) = f'$, $\phi_f(v) = v$, and $\phi_f(\mathcal{P}) \cap \mathcal{P}= f'$.
\item\label{odd twist} If the corresponding twist region has an odd number of crossings, let $\phi_f$ be the unique orientation-preserving isometry with $\phi_f(f) = \bar{f}'$, $\phi_f(v) = \bar{v}$, and $\phi_f(\mathcal{P})\cap\overline{\mathcal{P}} = \bar{f}'$. \end{enumerate} \end{enumerate}
Furthermore, $\rho_{f_0}$ induces the isometry of $S^3 - L$ supplied by Lemma \ref{projection reflection}. In particular, $\phi_{\bar{f}} = \rho_{f_0} \circ \phi_f \circ \rho_{f_0}$ for each face $f$ of $\mathcal{P}$. \end{lemma}
For another discussion of the content of Lemma \ref{LAD decomp}, see \cite[\S 2.3]{Purcell}. In particular, Figure 4 there clarifies the different gluings producing twist regions with even vs.~odd numbers of crossings. The last sentence of the lemma is not covered in \cite{LAD}; however, it follows easily from the discussion above.
On the right-hand side of Figure \ref{augmentpoly} is the compact polyhedron obtained from the checkered ideal octahedron by the following rule: it has a vertex corresponding to every dark face and an edge joining each pair of vertices that correspond to dark faces which share ideal vertices. We will call this the \textit{\crush} of $L$, since it may be regarded as obtained by crushing the darkened faces of the associated right-angled polyhedron to points. We note that each vertex of the \crush\ has valence 3, since each dark face is an ideal triangle. The right-angled ideal polyhedron associated to $L$ is recovered by truncation from its \crush.
For an alternative perspective on obtaining the \crush\ and a connection with Andreev's theorem, we refer the reader to Section 6 of \cite{Purcell_cusps}, in particular page 487. The one-skeleton of the \crush\ is the graph $\Gamma$ dual to the nerve $\gamma$ of the circle packing defined there. We thank Jessica Purcell for pointing this out.
Figure \ref{6prismlinks} illustrates two augmented links with the same underlying polyhedron, each depicted draped over the one-skeleton of its \crush, the $6$-prism. (More generally, for $k \geq 3$ we will call the $k$-\textit{prism} the polyhedron combinatorially isomorphic to the cartesian product of a $k$-gon with an interval.) Since the associated right-angled ideal polyhedron is obtained by truncating vertices of the \crush, its ideal vertices occur at midpoints of edges. Each triangular face resulting from truncation is paired with one of its neighbors across an ideal vertex producing a clasp; thus for each vertex of the \crush, exactly one edge which abuts it is encircled by a clasp. Each other edge carries a single strand of the ``horizontal'' component of the augmented link.
\begin{figure}
\includegraphics{6prismlinks.pdf}
\caption{Two augmented links with \crush\ the $6$-prism.}
\label{6prismlinks}
\end{figure}
Since the ideal polyhedron $\mathcal{P}$ associated to an augmented link is canonically obtained from its \crush, each symmetry of the \crush\ determines a combinatorial symmetry of $\mathcal{P}$. Together with Mostow rigidity, this implies:
\begin{lemma}\label{symmetric crush} Let $L$ be an augmented link, $\mathcal{P}$ the associated right-angled ideal polyhedron in $\mathbb{H}^3$, and $\mathcal{C}$ its \crush. There is a canonical injection $\mathrm{Sym}(\mathcal{C}) \to \mathrm{Sym}(\mathcal{P})$. \end{lemma}
Lemma \ref{symmetric crush} suggests that the complement of an augmented link with a highly symmetric \crush\ may be commensurable with the reflection group in the associated right-angled polyhedron.
\begin{lemma}\label{symmetric links} Let $L$ be an augmented link, $\mathcal{P}$ the associated right-angled polyhedron, and $\mathcal{C}$ its \crush, and suppose $\mathcal{C}$ has the property that for each clasp component $K$ of $L$, corresponding to an edge $e$ of $\mathcal{C}$ with vertices $v$ and $v'$, \begin{enumerate}
\item if $K$ encloses a twist region with an even number of crossings, there is a reflective involution of $\mathcal{C}$ preserving $e$ and exchanging $v$ with $v'$.
\item if $K$ encloses a twist region with an odd number of crossings, there is a rotational involution of $\mathcal{C}$ preserving $e$ and exchanging $v$ with $v'$. \end{enumerate}
Then $\pi_1(S^3-L) < \Gamma_{\mathcal{P}} \rtimes \mathrm{Sym}(\mathcal{P})$, where $\Gamma_{\mathcal{P}}$ is the group generated by reflections in $\mathcal{P}$ and $\mathrm{Sym}(\mathcal{P})$ is the group of symmetries of $\mathcal{P}$.\end{lemma}
\begin{proof} Lemma \ref{symmetric crush} implies that for each edge $e$ of $\mathcal{C}$ corresponding to a clasp $K$ of $L$, there is an involution $\iota_e$ of $\mathcal{P}$ that exchanges the triangular faces $f$ and $f'$ corresponding to $v$ and $v'$, and fixes the ideal vertex that they share. This involution is a reflection or $180$-degree rotation in case (1) or (2) above, respectively.
We now use the notation of Lemma \ref{LAD decomp}, and record that case (\ref{even twist}) there is the same as case (1) above. In this case, $\iota_e \circ \rho_f$ realizes the orientation-preserving isometry $\phi_f$ there. In case (2) above, corresponding to case (\ref{odd twist}) of Lemma \ref{LAD decomp}, the required isometry $\phi_f$ is realized by $\rho_{f_0}\circ\iota_e\circ\rho_f$. \end{proof}
Lemma \ref{symmetric links} implies for instance that the link on the left-hand side of Figure \ref{6prismlinks} is commensurable with the reflection group in the corresponding right-angled polyhedron, but it does not apply to the link on the right-hand side on account of the twist region with a single crossing. On the other hand, the commensurability classes of some links are entirely determined by their \crush s.
\begin{cor} \label{rt ang prism} Suppose $L$ is an augmented link such that the \crush\ of $L$ is a regular polyhedron. Then $\pi_1(S^3-L)$ is commensurable with the reflection group in the sides of the corresponding right-angled polyhedron. \end{cor}
In some cases the \crush\ of an augmented link may not have much symmetry, but it may be built from highly symmetric polyhedra. In such cases the link may have hidden symmetries. We will say a \crush\ is \textit{decomposable} if it contains a prismatic $3$-cycle --- that is, a sequence of three faces so that any two intersect along an edge but all three do not share a common vertex --- and \textit{indecomposable} otherwise.
\begin{figure}
\input{decompose3prism.pdf_t}
\caption{Decomposing the $3$-prism into two tetrahedra}
\label{decompose3prism}
\end{figure}
If $\mathcal{C}$ is a decomposable \crush, we decompose along a prismatic $3$-cycle by selecting a simple closed curve $\gamma$ which lies in the union of the faces of the cycle and intersects each of the edges of the cycle once, and using the following procedure: cut along $\gamma$, separate the components that result, and complete each by replacing $\gamma$ with a single vertex containing the endpoints of all three edges intersecting it. This is illustrated for the triangular prism in Figure \ref{decompose3prism}, with the dotted curve on the left-hand side representing $\gamma$. Decomposing results in a disjoint union of two tetrahedra.
Suppose $L$ is a link with a decomposable \crush\ $\mathcal{C}$, and let $f_0$, $f_1$, and $f_2$ determine a prismatic $3$-cycle of $\mathcal{C}$. Then the corresponding faces in the associated right-angled ideal polyhedron $\mathcal{P}$, obtained by truncating vertices of $\mathcal{C}$, do not pairwise intersect, but each pair shares an ideal vertex. It is an elementary fact of hyperbolic geometry that there is a single hyperplane $\calh$ which perpendicularly intersects the hyperplanes containing each of $f_0$, $f_1$, and $f_2$. Cutting $\mathcal{P}$ along $\calh$ decomposes it into two new right-angled ideal polyhedra, each with an ideal triangular face contained in $\calh$. Their \crush s are obtained by decomposing $\mathcal{C}$ along the prismatic cycle determined by $f_0$, $f_1$, and $f_2$.
\begin{lemma}\label{hidden symmetries} Suppose $L$ is an augmented link such that the \crush\ of $L$ decomposes into a disjoint union of copies of $\mathcal{C}$, where $\mathcal{C}$ is a regular polyhedron. Then $\pi_1(S^3-L)$ is contained in $\Gamma_{\mathcal{P}} \rtimes \mathrm{Sym}(\mathcal{P})$, where $\mathcal{P}$ is the right-angled ideal polyhedron obtained from $\mathcal{C}$ by truncating vertices. \end{lemma}
\begin{proof} There is a tiling $\mathcal{T}$ of $\mathbb{H}^3$ consisting of $\Gamma_{\mathcal{P}}$-translates of $\mathcal{P}$. If $\mathcal{C}_0$ is the \crush\ of $L$, then the hypothesis and the description above the lemma establish that the associated right-angled polyhedron $\mathcal{P}_0$ is a union of tiles of $\mathcal{T}$. Checkering $\mathcal{P}_0$ so that dark faces are triangles obtained by truncating vertices of $\mathcal{C}_0$, we claim that for each pair of dark faces $f$ and $f'$ which share an ideal vertex $v$, there exist in $\Gamma_{\mathcal{P}} \rtimes \mathrm{Sym}(\mathcal{P})$ both a reflective and a rotational involution of $\mathbb{H}^3$ exchanging $f$ and $f'$ and fixing $v$. We will prove the claim by induction on the number of tiles comprising $\mathcal{P}_0$. The case of one tile, $\mathcal{C}_0 = \mathcal{C}$, follows as in the proof of Lemma \ref{symmetric links} from the fact that $\mathcal{C}$ is regular.
Suppose that $\mathcal{P}_0$ is the union of more than one tile, and let $\gamma(\mathcal{P})$ be a $\Gamma_{\mathcal{P}}$-translate of $\mathcal{P}$ such that $\mathcal{P}_0$ is the union of $\gamma(\mathcal{P})$ and a polyhedron $\mathcal{P}_1$, itself a union of tiles of $\mathcal{T}$, meeting along a face $f$ which is an ideal triangle. The checkering of $\mathcal{P}_0$ determines checkerings of each of $\mathcal{P}_1$ and $\gamma(\mathcal{P})$ by declaring $f$ to be dark. The claim holds for $\mathcal{P}_1$ by induction and for $\gamma(\mathcal{P})$ by the base case. Thus it only remains to verify the claim for dark faces of $\mathcal{P}_0$ sharing an ideal vertex, one of which lies in $\mathcal{P}_1$ and one in $\gamma(\mathcal{P})$.
Suppose $f_0$ and $f_1$ are dark faces, of $\gamma(\mathcal{P})$ and $\mathcal{P}_1$ respectively, which share an ideal vertex $v$ in $\mathcal{P}_0$. Then each of $f_0$ and $f_1$ shares $v$ with $f$. Let $\rho_0$ (respectively, $\rho_1$) be a reflective involution in $\Gamma_{\mathcal{P}} \rtimes \mathrm{Sym}(\mathcal{P})$ fixing $v$ and exchanging $f_0$ (resp. $f_1$) with $f$, and let $\iota_0$ and $\iota_1$ be rotational involutions satisfying the same description. Then $\rho_1 \circ \rho_0$ and $\iota_1 \circ \rho_0$ are isometries of infinite order taking $f_0$ to $f_1$. This can be discerned by considering their actions on a horosphere centered at $v$, intersected by $\gamma(\mathcal{P})$ and $\mathcal{P}_1$ in adjacent rectangles. The first acts on this cross section as a translation and the second as a glide reflection. If $\rho_{f_1}$ is reflection in the hyperplane containing $f_1$, it follows that $\rho_{f_1} \circ \rho_1 \circ \rho_0$ and $\rho_{f_1} \circ \iota_1 \circ \rho_0$ satisfy the conclusion of the claim.
The conclusion of the lemma now follows from Lemma \ref{LAD decomp}. \end{proof}
\subsection{Examples with low complexity}\label{sec:low cx augmented}
The most natural measure of complexity of an augmented link is the number of twist regions, which is equal to half the number of dark faces of the associated right-angled polyhedron, or half the number of vertices of its \crush. Here we will classify the augmented link complements with up to five twist regions up to \textit{scissors congruence}. We will say that finite-volume hyperbolic 3-manifolds are scissors congruent if they can be cut into identical collections of ideal polyhedra. It is natural for us to use this invariant because many different augmented links may be produced by different choices of face pairing on the same underlying right-angled polyhedron.
\begin{lemma}\label{indecomp up to ten} The indecomposable \crush s with at most ten vertices are the tetrahedron, the cube (or $4$-prism), and the $5$-prism. \end{lemma}
\begin{proof} The only indecomposable \crush\ with a triangular face is the tetrahedron, since the family of faces adjacent to a triangular face determines a prismatic $3$-cycle unless they share a common vertex. On the other hand, if a \crush\ $\mathcal{C}$ with at most ten vertices has a face which is a $k$-gon for $k \geq 6$, then two edges which emanate from distinct vertices of this face must share a common endpoint. That $\mathcal{C}$ is decomposable follows from the claim below.
\begin{claim} Suppose $f$ is a face of a \crush\ $\mathcal{C}$, and $e_0$ and $e_1$ are distinct edges of $\mathcal{C}$, each with one endpoint on $f$, which share a vertex $v$. Then $e_0$ and $e_1$ bound a triangle face of $\mathcal{C}$ together with an edge of $f$. \end{claim}
\begin{proof}[Proof of claim] The set $f \cup e_0 \cup e_1$ cuts $\partial \mathcal{C}$ into two disks. Let $D$ be the closure of the disk that does not intersect the edge $e_2 \neq e_0, e_1$ with an endpoint at $v$. There is a face $f' \subset D$ of $\mathcal{C}$ which has $v$ as a vertex and $e_0$ and $e_1$ as edges. Then $f'$ intersects $f$ along an edge $e_0'$ with an endpoint at $e_0 \cap f$ and also along an edge $e_1'$ with an endpoint at $e_1\cap f$. But since $f$ and $f'$ cannot meet along more than one edge, we must have $e_0' = e_1'$. Thus since $e_0 \cup e_1 \cup e_0'$ forms a simple closed curve in the boundary of $f'$, $f' = D$ is a triangle. \end{proof}
Thus if $\mathcal{C}$ is indecomposable and not a tetrahedron, with at most ten vertices, then every face of $\mathcal{C}$ is a quadrilateral or pentagon. Let $j$ be the number of quadrilateral faces and $k$ the number of pentagon faces, and let $v$ and $e$ be the number of vertices and edges, respectively. Since each vertex is $3$-valent, we have $3v=2e$, and since each edge bounds two faces we have $2e = 4j+5k$. Computing the Euler characteristic thus yields:
$$ v - e + (j+k) = \frac{4j+5k}{3} - \frac{4j+5k}{2} + (j+k) = \frac{j}{3}+\frac{k}{6}=2. $$
Using the equation above we find that $j+k/2 = 6$. Since we require that $\mathcal{C}$ have at most ten vertices, the vertex and edge equations yield $4j+5k \leq 30$. Thus using the fact that $j$ and $k$ are non-negative integers, we find that either $j=6$ and $k=0$ (and hence $v=8$) or $j=5$ and $k=2$ (and $v= 10$). The cube and the $5$-prism respectively realize these possibilities. It remains to show that these are the unique \crush s with the prescribed numbers of quadrilateral and pentagon faces.
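In detail, substituting $j = 6 - k/2$ (which forces $k$ to be even) into the bound $4j+5k \leq 30$ gives
$$ 4\Bigl(6-\frac{k}{2}\Bigr) + 5k = 24 + 3k \leq 30 \quad\Longrightarrow\quad k \leq 2, $$
so that $k \in \{0,2\}$, recovering the two solutions above.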
In general, if a \crush\ $\mathcal{C}$ has a $k$-gon face which is adjacent to only quadrilaterals, then $\mathcal{C}$ is the $k$-prism. This immediately implies that the only \crush\ with six quadrilateral faces and no pentagons is a cube. Similarly, if $\mathcal{C}$ is an indecomposable \crush\ with two pentagonal faces and five quadrilaterals, then $\mathcal{C}$ is a $5$-prism unless the pentagonal faces are adjacent. In the latter case, we note that the union of the pentagonal faces has eight vertices, and by the claim above and indecomposability, the three ``free'' edges emanating from one of them have distinct vertices. Hence $\mathcal{C}$ has at least eleven vertices, a contradiction. Therefore the $5$-prism is the only indecomposable \crush\ with five quadrilateral faces and two pentagons. \end{proof}
\begin{lemma}\label{decomp up to ten} If $\mathcal{C}$ is a decomposable \crush\ with at most ten vertices, a maximal sequence of decompositions yields a disjoint union of up to four tetrahedra or of a single tetrahedron and a single cube. \end{lemma}
\begin{proof} Suppose $\mathcal{C}$ is a decomposable \crush, and let $\mathcal{C}_0$ and $\mathcal{C}_1$ be obtained by decomposing $\mathcal{C}$ along a prismatic $3$-cycle. If $v$, $v_0$, and $v_1$ are the numbers of vertices of $\mathcal{C}$, $\mathcal{C}_0$, and $\mathcal{C}_1$, respectively, then from the description of decomposition one finds that
$$v+2 = v_0 + v_1.$$
It is easy to see that each \crush\ has at least four vertices, and that the tetrahedron is the unique such with exactly four. Thus by the equation above, any \crush\ with six vertices decomposes into two tetrahedra. (By the classification of indecomposable \crush s, every \crush\ with six vertices is decomposable.) If $\mathcal{C}$ is a decomposable \crush\ with eight vertices, we thus find that a sequence of two decompositions yields a disjoint union of three tetrahedra.
Finally, suppose that $\mathcal{C}$ is a decomposable \crush\ with ten vertices, and decompose it along a prismatic $3$-cycle into \crush s $\mathcal{C}_0$ and $\mathcal{C}_1$ with $v_0 \leq v_1$ vertices, respectively. Since every \crush\ is $3$-valent, it has an even number of vertices, so either $v_0 = v_1 = 6$ or $v_0 = 4$ and $v_1=8$. In the former case, the above implies that neither $\mathcal{C}_0$ nor $\mathcal{C}_1$ is indecomposable; hence each decomposes into a disjoint union of two tetrahedra. In the case $v_0 = 4$ and $v_1 = 8$, $\mathcal{C}_0$ is a tetrahedron. If $\mathcal{C}_1$ is indecomposable, it is a cube; otherwise, a sequence of two decompositions cuts it into a disjoint union of three tetrahedra. \end{proof}
The scissors congruence classification of augmented links with up to five twist regions is now readily obtained. Below let $L$ be an augmented link. \begin{itemize}
\item If the \crush\ of $L$ decomposes into a disjoint union of tetrahedra, then $S^3-L$ is a union of right-angled ideal octahedra. It thus follows from Lemma \ref{hidden symmetries} and the results of \cite{Hatcher} that $\pi_1(S^3-L) < \mathrm{PGL}_2(\mathcal{O}_1)$. This holds in particular for all augmented links with at most three twist regions, or for any with four twist regions and a decomposable \crush.
\item If $L$ has four twist regions and an indecomposable \crush, then $S^3-L$ is a union of two right-angled ideal cuboctahedra, and by Corollary \ref{rt ang prism} and the results of \cite{Hatcher}, $\pi_1(S^3-L) < \mathrm{PGL}_2(\mathcal{O}_2)$. \end{itemize}
\begin{figure}
\begin{center}
\includegraphics{decomp5twist.pdf}
\end{center}
\caption{Augmented links with $5$ twist regions and a decomposable \crush.}
\label{decomp5twist}
\end{figure}
In particular, the commensurability class of an augmented link with at most four twist regions is determined by its \crush, and each such link falls into one of two commensurability classes. The augmented links with five twist regions display more variability. \begin{itemize}
\item If $L$ has five twist regions and an indecomposable \crush\ $\mathcal{C}$, then $\mathcal{C}$ is the $5$-prism. In most cases, we have $\pi_1(S^3-L) < \Gamma_{\mathcal{P}} \rtimes \mathrm{Sym}(\mathcal{P})$, where $\mathcal{P}$ is the associated right-angled polyhedron, the double of the $5$-antiprism across one of its pentagon faces. This holds by Lemma \ref{symmetric links}, unless $L$ has a twist region with an odd number of crossings that corresponds to an edge of a pentagon face of $\mathcal{C}$.
\item If $L$ has five twist regions and a decomposable \crush\ that does not decompose into tetrahedra, then $S^3-L$ is a union of two right-angled octahedra and two cuboctahedra. Two such links are pictured in Figure \ref{decomp5twist}. Using the techniques of \cite[\S 4.3]{CD}, one can show that the horizontal component that runs across all vertices of the \crush\ on the right-hand side has cusp parameter that is $\mathrm{PGL}_2(\mathbb{Q})$-inequivalent to the parameters of all cusps of the left-hand link. Hence their complements are incommensurable. \end{itemize}
From the classification above, we find that an augmented link with at most five twist regions is almost determined up to commensurability by its \crush. This is primarily because the indecomposable \crush s with at most ten vertices have so much symmetry. Already among those with twelve vertices, we find an example with less symmetry. This is pictured on the left-hand side of Figure \ref{indecomp}. On the right-hand side is an augmented link that has this polyhedron as a \crush.
\begin{figure}[ht]
\begin{center}
\includegraphics{indecomp.pdf}
\end{center}
\caption{An indecomposable \crush\ with $12$ vertices, and an augmented link built on it.}
\label{indecomp}
\end{figure}
\begin{lemma}\label{indecomp twelve} The indecomposable \crush s with twelve vertices are the $6$-prism and the polyhedron on the left-hand side of Figure \ref{indecomp}. \end{lemma}
\begin{proof} Reasoning as in the proof of Lemma \ref{indecomp up to ten}, we find that a \crush\ with twelve vertices and a face which is a $k$-gon for $k > 6$ is decomposable, and that such a \crush\ with a hexagonal face is the $6$-prism. Thus as in the proof of that lemma, we are left to consider \crush s with all quadrilateral and pentagon faces. If $j$ is the number of quadrilateral and $k$ the number of pentagonal faces, an Euler characteristic calculation again yields $j +k/2 = 6$. Counting vertices in this case yields $4j + 5k = 36$, and solving these two equations yields $j = 4$ and $k = 4$.
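In detail, substituting $j = 6 - k/2$ into $4j + 5k = 36$ gives
$$ 4\Bigl(6-\frac{k}{2}\Bigr) + 5k = 24 + 3k = 36 \quad\Longrightarrow\quad k = 4, \qquad j = 6 - \tfrac{4}{2} = 4. $$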
Let $\mathcal{C}$ be an indecomposable \crush\ with twelve vertices and $4$ each of quadrilateral and pentagon faces. Then every pentagon face of $\mathcal{C}$ is adjacent to at least one other pentagon face.
\begin{claim} No vertex of $\mathcal{C}$ is shared by three pentagon faces. \end{claim}
\begin{proof}[Proof of claim] Suppose $v$ is a vertex with this property, and let $v_0$, $v_1$, and $v_2$ be the vertices adjacent to $v$ in the one-skeleton of $\mathcal{C}$. Then for $i \in \{0,1,2\}$, let $f_i$ be the face of $\mathcal{C}$ which contains $v_i$ but not $v$. We may assume without loss of generality that $f_0$ and $f_1$ are quadrilaterals (at least two must be).
Consider the subcomplex of $\partial \mathcal{C}$ which is the union of $f_0$, $f_1$, and the pentagon faces containing $v$. If any edges on the boundary of this subcomplex were identified in $\partial \mathcal{C}$, then $\mathcal{C}$ would have a prismatic $k$-cycle for $k \leq 3$; hence this subcomplex is a disk embedded in $\partial \mathcal{C}$. It contains all twelve vertices, and sixteen out of the eighteen edges of $\mathcal{C}$. But it is easy to see that any way of joining the four ``free'' vertices by two edges in the complement yields a triangular face, contradicting indecomposability.
\end{proof}
One may similarly rule out the possibility of a quadrilateral face which meets only pentagonal faces --- the union of these faces would be an embedded disk containing all twelve vertices but only fourteen edges --- and establish that each pentagonal face meets at least two other pentagonal faces.
Thus the pentagonal faces form a prismatic $4$-cycle of $\mathcal{C}$, neither of whose complementary regions can be occupied by a single quadrilateral. It follows that $\mathcal{C}$ is as pictured in Figure \ref{indecomp}.
\end{proof}
\subsection{L\"obell links} \label{sec:lobel}
\begin{figure}[ht]
\centering
\input{Lobell4.pdf_t}
\caption{The L\"obell link $L(4)$ and its $4$-fold cyclic quotient.}
\label{fig:L4 and quot}
\end{figure}
For $n \geq 3$, we will denote by $\mathcal{L}(n)$ the $n$th L\"obell polyhedron. This is the unique polyhedron with vertices of valence $3$ and faces consisting of $n$-gons $F$ and $F'$, and $2n$ pentagons, such that $F$ has distance $3$ from $F'$ in the dual graph. The L\"obell polyhedron $\mathcal{L}(4)$ is pictured on the left-hand side of Figure \ref{fig:L4 and quot}, under a link that has it as a \crush. We denote this link $L(4)$. There is an evident rotational symmetry of $(S^3,L(4))$, with order $4$ and quotient the link on the right-hand side of Figure \ref{fig:L4 and quot}. An additional component, the fixed axis of this rotation, has been added to the diagram and labeled with $4$. For arbitrary $n \geq 3$, we define $L(n)$ to be the link with \crush\ $\mathcal{L}(n)$ that $n$-fold branched covers the diagram on the right-hand side. The main result of this section is:
\begin{thm}\label{Lobell thm} For all but finitely many $n \geq 4$, $M(n) \doteq S^3 - L(n)$ is neither arithmetic nor commensurable with any $3$-dimensional hyperbolic reflection orbifold. Moreover, at most finitely many $M(n)$ occupy any commensurability class. \end{thm}
\begin{remark} Since $\mathcal{L}(5)$ is the dodecahedron, $L(5)$ falls under the purview of Corollary \ref{rt ang prism} and so is commensurable with a right-angled reflection orbifold. Therefore the stipulation ``all but finitely many'' above is necessary. We do not know of any $M(n)$ that is arithmetic, however. We note also that $\mathcal{L}(3)$ decomposes into two tetrahedra and a cube, whereas $\mathcal{L}(n)$ is indecomposable for $n > 3$. \end{remark}
Proving the theorem requires identifying the commensurator quotient of $M(n)$. We begin by identifying the symmetry group of $\mathcal{L}(n)$.
\begin{fact} For $n \neq 5$, the symmetry group of $\mathcal{L}(n)$ has presentation
$$ \Sigma(n) = \langle\ \sfa, \sfb_n, \sfs\,|\, (\sfb_n)^n = \sfs^2 = \sfa^2 = 1, \sfs\sfb_n \sfs = (\sfb_n)^{-1}, \sfa\sfb_n\sfa = (\sfb_n)^{-1}, \sfa \sfs \sfa = \sfb_n\sfs\ \rangle. $$
The subgroup $\langle \sfa, \sfb_n\rangle$ preserves orientation, and $\sfs$ reverses it. The subgroup $\langle \sfb_n,\sfs \rangle$ preserves each $n$-gon face, and $\sfa$ exchanges them. \end{fact}
\begin{figure}
\begin{center}
\input{Lnsymm_1.pdf_t}
\end{center}
\caption{A fundamental domain for the action of $\langle \sfb_n\rangle$ on $\mathcal{L}(n)$, and the corresponding sub-polyhedron $\mathcal{O}(n)$ of $\mathcal{P}(n)$.}
\label{Lnsymm}
\end{figure}
\begin{proof} Since $n \neq 5$, $\mathcal{L}(n)$ has exactly two $n$-gon faces $F$ and $F'$. Let $e_0,e_1,\hdots,e_{n-1}$ be a cyclic ordering of the edges of $F$; i.e., for each $i$, $e_i$ shares a vertex with $e_{i+1}$, where $i+1$ is taken modulo $n$. The union of $F$ with the pentagonal faces of $\mathcal{L}(n)$ that abut it is a disk $D$ embedded in $\partial \mathcal{L}(n)$, with boundary consisting of $2n$ edges that can be cyclically ordered $f_1,f_2,\hdots,f_{2n}$ as follows: for $0 \leq i < n$, let $F_i$ be the pentagonal face of $\mathcal{L}(n)$ containing $e_i$ and let $f_{2i+1} \subset F_i \cap \partial D$ and $f_{2(i+1)} \subset F_{i +1} \cap \partial D$ be the unique pair of edges that share a vertex (with $i+1$ taken modulo $n$).
We now let $\sfb_n$ be the rotational symmetry of $F$ taking $e_i$ to $e_{i+1}$ for each $i$, and take $\sfs$ to be the reflection of $F$ preserving $e_0$ and exchanging $e_i$ with $e_{n-i}$ for $0 < i<n$. It is easy to see that these extend to a rotation and reflection of $\mathcal{L}(n)$, respectively, yielding the subgroup $\langle \sfb_n,\sfs \rangle$ described above (we refer to the extensions by the same name).
There is a symmetry $\sfa$ of the embedded circle $f_1 \cup f_2 \cup \hdots \cup f_{2n}$ that preserves $f_1$ and $f_{n+1}$, exchanging endpoints of each, and exchanges $f_i$ with $f_{2n+2-i}$ for $1<i\leq n$. This extends to a rotational symmetry of $\mathcal{L}(n)$ taking $F$ to $F'$. In particular, for $0 \leq i < n$, we can take $F_i'$ to be the pentagonal face adjacent to $F'$ that contains $f_{2i+1}$ and $f_{2(i+1)}$. Then $\sfa$ takes $F_i$ to $F'_{n-i}$.
The relations on $\sfb_n$, $\sfs$, and $\sfa$ follow by considering their actions on $F$. Since every automorphism of $\mathcal{L}(n)$ either exchanges $F$ and $F'$ or preserves each, there is a map to $\mathbb{Z}/2\mathbb{Z} = \{\pm 1\}$ taking such an element to $-1$ or $1$, respectively. The subgroup $\langle \sfb_n,\sfs\rangle$ is contained in the kernel of this map; since it is the entire symmetry group of $F$, it is the entire kernel. Hence the entire symmetry group of $\mathcal{L}(n)$ is generated by $\langle \sfb_n,\sfs\rangle$ and $\sfa$, which maps to $-1$.
\end{proof}
A fundamental domain for the action on $\mathcal{L}(n)$ of the cyclic group $\langle \sfb_n \rangle$ is depicted on the left-hand side of Figure \ref{Lnsymm}, cut out by the dotted line segments. These should be interpreted as meeting at the point at infinity, in addition to the center of $F$. The segment that runs through the edge joining endpoints of $e_0$ and $f_{2n}$ is fixed by the reflection $\sfs\sfb_n$, and the other is fixed by $\sfb_n\sfs$.
Recall that by Lemma \ref{symmetric crush}, each symmetry of $\mathcal{L}(n)$ determines a symmetry of the right-angled ideal polyhedron $\mathcal{P}(n)$ obtained by truncating vertices of $\mathcal{L}(n)$. In particular, $\sfs\sfb_n$ and $\sfb_n\sfs$ determine reflective symmetries of $\mathcal{P}(n)$. Cutting along the mirrors of these reflections yields the polyhedron $\mathcal{O}(n)$ pictured on the right-hand side of the figure. The three edges with ``free'' ends should again be interpreted as meeting at the point at infinity. The darkened vertices of $\mathcal{O}(n)$ are ideal; the remaining vertices, each the midpoint of an edge of $\mathcal{P}(n)$, are not.
The intersection of the mirror of $\sfs$ with $\partial \mathcal{O}(n)$ is the dotted axis on the right-hand side of Figure \ref{Lnsymm}. Clearly, $\sfs$ restricts to an isometry of $\mathcal{O}(n)$. Although $\sfa$ does not preserve $\mathcal{O}(n)$, it does preserve the sub-polyhedron, obtained by cutting along the mirror of $\sfs$, that contains the ideal vertex labeled $r_5$. Indeed, it acts on this polyhedron as a $180$-degree rotation fixing $r_5$ and the midpoint of the edge labeled $2\pi/n$, exchanging each of $r_3$ and $r_2$ with an unlabeled ideal vertex.
Since $\mathcal{P}(n)$ is right-angled, each edge of $\mathcal{O}(n)$ that is contained in an edge of $\mathcal{P}(n)$ has dihedral angle $\pi/2$. Since the mirrors of $\sfs\sfb_n$ and $\sfb_n\sfs$ meet each edge of $\mathcal{P}(n)$ transversely, each edge of $\mathcal{O}(n)$ that is the intersection of $\partial\mathcal{P}(n)$ with a mirror of one of these reflections has dihedral angle $\pi/2$ as well. Thus the only edge of $\mathcal{O}(n)$ with a dihedral angle different from $\pi/2$ is the intersection of the mirrors of $\sfs\sfb_n$ and $\sfb_n\sfs$, labeled $2\pi/n$ at the top of the figure. That this is the dihedral angle follows from the fact that the product of these reflections is the rotation $(\sfb_n)^2$, through an angle of $2\cdot 2\pi/n$.
Each symmetry of $\mathcal{L}(n)$, $n \neq 5$, exchanges edges enclosed by clasps of $L(n)$; hence the corresponding isometry of $\mathcal{P}(n)$ induces one of $M(n) = S^3-L(n)$. Since $\mathcal{O}(n)$ is a fundamental domain for the action of the rotation group $\langle \sfb_n\rangle$ on $\mathcal{L}(n)$, Lemma \ref{LAD decomp} implies $\mathcal{O}(n) \cup \overline{\mathcal{O}}(n)$ is a fundamental domain for the action on $\mathbb{H}^3$ of the orbifold fundamental group of $O(n) = M(n)/\langle\sfb_n\rangle$. Here $\overline{\mathcal{O}}(n) \doteq \sfd_1(\mathcal{O}(n))$, where $\sfd_1$ is the reflection through the white face of $\mathcal{O}(n)$ whose sole ideal vertex is $r_2$. Using the further symmetries $\sfa$ and $\sfs$ of $\mathcal{P}(n)$, we thus obtain the lemma below.
\begin{lemma}\label{O(n) group} Let $\sfd_2$ be the reflection through the white face of $\mathcal{O}(n)$ with ideal vertices $r_2$, $r_3$, $\sfs(r_3)$, $r_5$, $r_6$, and let $\sfc$ be the parabolic isometry fixing $r_3$ and taking $r_2$ to $r_5$. Then $O(n)$ is isometric to $\mathbb{H}^3/\Gamma(n)$, where $$
\Gamma(n) = \langle\ \sfd_1\sfd_2, \sfd_1\sfd_2^{\sfa}, \sfd_1\sfd_2^{\sfs\sfa}, \sfd_1\sfd_1^{\sfa}, \sfb_n, \sfc, \sfc^{\sfa}, \sfc^{\sfs}, \sfb_n^{\sfd_1}, \sfc^{\sfd_1}, \sfc^{\sfd_1\sfa}, \sfc^{\sfd_1\sfs}\ \rangle. $$
Furthermore, the isometry of $O(n)$ visible on the right-hand side of Figure \ref{fig:L4 and quot} as reflection through the projection plane is induced by $\sfd_1$. \end{lemma}
Let $L$ be the link in $S^3$ that is the union of the fixed locus of $O(n)$ with the other components pictured on the right-hand side of Figure \ref{fig:L4 and quot}. Then $O(n)$ is obtained from $S^3-L$ by $(n,0)$-Dehn filling on the added component, where the meridian here is chosen to lie in the projection plane and the longitude bounds a $3$-punctured disk. Because the singular locus of $O(n)$ is the image of the edge $e$ of $\mathcal{O}(n)$ with dihedral angle $2\pi/n$, $S^3 - L$ is obtained from $\mathcal{O}(n) - e$ by the restriction of the face pairings described in Lemma \ref{O(n) group}. Thus Poincar\'e's polyhedron theorem implies:
\begin{lemma}\label{L group} Let $\mathcal{O}$ be the all-right polyhedron in $\mathbb{H}^3$ homeomorphic to $O(n) - e$, and let $\sfa$, $\sfb$, $\sfc$, $\sfs$, $\sfd_1$ and $\sfd_2$ have the same combinatorial descriptions as the correspondingly-named isometries determined by $\mathcal{O}(n)$. Let
$$ \Gamma_L = \langle \sfd_1\sfd_2, \sfd_1\sfd_2^{\sfa}, \sfd_1\sfd_2^{\sfs\sfa}, \sfd_1\sfd_1^{\sfa}, \sfb, \sfc, \sfc^{\sfa}, \sfc^{\sfs}, \sfb^{\sfd_1}, \sfc^{\sfd_1}, \sfc^{\sfd_1\sfa}, \sfc^{\sfd_1\sfs} \rangle. $$
Then $S^3 - L$ is homeomorphic to $\mathbb{H}^3/\Gamma_L$. \end{lemma}
The only aspect of this lemma that requires comment is that Andreev's theorem implies that there is a right-angled polyhedron $\mathcal{O}$ with the requisite combinatorial description. An ideal vertex of $\mathcal{O}$ replaces the edge of $\mathcal{O}(n)$ with dihedral angle $2\pi/n$. Thus $\sfb$ is parabolic, rather than elliptic like $\sfb_n$.
Denote by $r_7$ the ideal vertex of $\mathcal{O}$ fixed by $\sfb$; that is, $r_7$ replaces the edge of $\mathcal{O}(n)$ with dihedral angle $2\pi/n$. The polyhedron obtained by cutting along the mirror of $\sfs$, that has $r_5$ as an ideal vertex, has $180$-degree rotational symmetry $\sfa$ fixing $r_5$ and $r_7$. Therefore a single geodesic plane contains the ideal vertices $r_2$, $r_5$, $r_7$, and $\sfa(r_2)$. Let $\mathcal{Q}_0$ be the polyhedron with $r_3$ as an ideal vertex that is obtained by cutting along this plane.
\begin{figure}
\begin{center}
\input{calq_0.pdf_t}
\end{center}
\caption{$\mathcal{Q}_0$ and $\mathcal{Q}$}
\label{calq_0}
\end{figure}
An ideal polyhedron $\mathcal{Q}$ may be obtained from $\mathcal{Q}_0$ as follows. The geodesic plane $[r_5,r_3,\sfa(r_2)]$ containing $r_5$, $r_3$, and $\sfa(r_2)$ cuts off a tetrahedron $\mathcal{T}$, with a finite vertex opposite this plane, from the remainder of $\mathcal{Q}_0$. Let $\mathcal{Q} = (\overline{\mathcal{Q}_0 - \mathcal{T}})\cup \sfc^{-1}(\mathcal{T})$. Since all edges abutting each finite vertex of $\mathcal{Q}_0$ have dihedral angle $\pi/2$, the finite vertices of $\mathcal{Q}_0$, which are identified in $\mathcal{Q}$, lie in the interior of an edge of $\mathcal{Q}$. We have depicted $\mathcal{Q}_0$ and $\mathcal{Q}$ on the left- and right-hand sides of Figure \ref{calq_0}, respectively, coloring black the face of $\mathcal{Q}$ in $[r_5,r_3,\sfa(r_2)]$ and its image under $\sfc^{-1}$.
The lemma below follows from Poincar\'e's polyhedron theorem and the descriptions from Lemma \ref{L group} of face pairing isometries on $\mathcal{O} \cup \sfd_1(\mathcal{O})$ yielding $S^3 - L$.
\begin{lemma}\label{commensurator} Let $\Gamma = \langle \sfa, \sfc,\sfd_1,\sfd_2,\sfd_3 \doteq \sfa\sfs\sfa \rangle$ be generated by face pairings for $\mathcal{Q}$. Then $\mathbb{H}^3/\Gamma$ is a three-cusped hyperbolic $3$-orbifold, and $\Gamma_L \lhd \Gamma$ with index $8$. \end{lemma}
The isometry $\sfd_3$ defined in Lemma \ref{commensurator} acts as reflection in the face of $\mathcal{Q}$ containing $r_7$, $\sfa(r_2)$, $r_3$, and $\sfc^{-1}\sfa(r_2)$, since $\sfa$ takes this face into the mirror of $\sfs$. That the other generators act as face pairings follows from previous observations. The index computation uses the fact that $\mathcal{O}$ is the union of $4$ isometric copies of $\mathcal{Q}$; namely, $\mathcal{O} = \mathcal{Q} \cup \sfa(\mathcal{Q}) \cup \sfs(\mathcal{Q} \cup \sfa(\mathcal{Q}))$. In verifying that each generator for $\Gamma_L$ lies in $\Gamma$, it is helpful to note that $\sfb = \sfd_1 \sfs \in \Gamma$.
The key result in the proof of Theorem \ref{Lobell thm} is the proposition below.
\begin{prop}\label{prop:comm} $\Gamma$ is its own commensurator. \end{prop}
We defer the proof of Proposition \ref{prop:comm} for now, and first apply it.
\begin{proof}[Proof of Theorem \ref{Lobell thm}] Since the orbifold fundamental group $\Gamma(n)$ of $O(n)$ contains the elliptic element $\sfb_n$, with order $n$, its invariant trace field $k\Gamma(n)$ contains the trace of $(\sfb_n)^2$ and thus $\mathbb{Q}(\cos(2\pi\cdot\tfrac{2}{n}))$ (cf.~\cite[\S 3.3]{MaR} for the definition and properties of the invariant trace field). This is a degree-two subfield of the cyclotomic field $\mathbb{Q}(\zeta_k)$, where $k = n$ if $n$ is odd and $k=n/2$ otherwise. Thus $\liminf_{n \to \infty} [k\Gamma(n):\mathbb{Q}]$ is infinite. It follows that at most finitely many $O(n)$ belong to any one commensurability class. Furthermore, at most finitely many are arithmetic, since non-compact arithmetic hyperbolic $3$-manifolds have imaginary quadratic invariant trace fields.
Throwing away the arithmetic $\Gamma(n)$, Margulis' theorem implies that $\mathrm{Comm}(\Gamma(n))$ is a finite extension of $\Gamma(n)$ for the remaining $n$. We remarked above Lemma \ref{O(n) group} that each symmetry of $\mathcal{L}(n)$ determines an isometry of $M(n) = S^3 - L(n)$. In particular, there are isometries determined by $\sfa$ and $\sfs$, and since $\langle \sfb_n \rangle \lhd \langle \sfa,\sfb_n,\sfs \rangle$, these generate a group of isometries of $O(n)= M(n)/\langle \sfb_n \rangle$ with order $4$. By Lemma \ref{O(n) group}, $\sfd_1$ determines an additional isometry of $O(n)$, which can easily be seen to commute with $\langle \sfa,\sfs\rangle$. Thus $\mathrm{Comm}(\Gamma(n))$ contains the degree-$8$ extension $\langle \Gamma(n),\sfa,\sfs,\sfd_1\rangle$ of $\Gamma(n)$.
As the right-hand side of Figure \ref{fig:L4 and quot} makes clear, $O(n)$ is obtained from $S^3-L$ by $(n,0)$-Dehn filling on a fixed component. Therefore the hyperbolic Dehn surgery theorem implies that the $O(n)$ converge geometrically to the hyperbolic structure on $S^3 - L$, and in particular, their volumes approach that of $S^3 - L$ from below. (See e.g.~\cite[\S E.5]{BenPet} for background on the hyperbolic Dehn surgery theorem.) Furthermore, the explicit descriptions above imply that the $\Gamma(n)$ converge algebraically to $\Gamma_L$, and the $\langle \Gamma(n), \sfa,\sfs,\sfd_1\rangle$ to $\Gamma$.
If, on an infinite subsequence, $\langle \Gamma(n),\sfa,\sfs,\sfd_1\rangle$ were contained in $\mathrm{Comm}(\Gamma(n))$ properly, then a further subsequence of the $\mathrm{Comm}(\Gamma(n))$ would converge to a discrete group $\Gamma_0$ with covolume a proper fraction of that of $\Gamma$. This follows from the fact that the Chabauty topology on discrete subgroups of $\mathrm{PSL}_2(\mathbb{C})$ with bounded covolume is compact, see e.g.~\cite[Corollary E.1.7]{BenPet}. In this case, since $\langle \Gamma(n), \sfa,\sfs,\sfd_1\rangle \to \Gamma$ and limits are unique in this topology (see e.g.~\cite[Lemma E.1.1]{BenPet}), we would have $\Gamma < \Gamma_0$ properly, contradicting Proposition \ref{prop:comm}. Thus for all but finitely many $n$, $\mathrm{Comm}(\Gamma(n)) = \langle\Gamma(n),\sfa,\sfs,\sfd_1\rangle$.
Fixing a horosphere $\calh$ centered at the ideal vertex $r_3$ of $\mathcal{O}(n)$, a fundamental domain for the action on $\calh$ of its stabilizer in $\mathrm{Comm}(\Gamma(n))$ is thus the rectangle $\mathcal{O}(n) \cap \calh$. Two parallel sides of this rectangle are given by the intersection of $\calh$ with the white sides of $\mathcal{O}(n)$ containing $r_3$. One of these, contained in the side with ideal vertices $r_2$, $r_3$, $\sfs(r_3)$, $r_5$, and $r_6$, is stabilized by the reflection $\sfd_2 \in \Gamma(n)$ defined in Lemma \ref{O(n) group}. The other is stabilized by the reflection $\sfd_3 \doteq \sfa\sfs\sfa \in \mathrm{Comm}(\Gamma(n))$, defined in analogy with the identically-named element of $\Gamma$ from Lemma \ref{commensurator}. For the other pair of parallel sides of this rectangle, the parabolic $\sfc$ fixing $r_3$ acts by translation taking one to the other.
The stabilizer of $\calh$ in $\mathrm{Comm}(\Gamma(n))$ is thus $\langle \sfc, \sfd_2,\sfd_3\rangle$. If $\Gamma'$ were a reflection group commensurable with $\Gamma(n)$, then $\mathrm{Stab}_{\Gamma'}(r_3)$ would be a reflection group contained in $\langle \sfc,\sfd_2,\sfd_3\rangle$, acting on $\calh$ with finite coarea. But since $\sfc$ translates parallel to the lines fixed by $\sfd_2$ and $\sfd_3$, every reflection in $\langle \sfc,\sfd_2,\sfd_3\rangle$ fixes a line parallel to the lines fixed by $\sfd_2$ and $\sfd_3$. Hence no reflection subgroup of $\langle \sfc,\sfd_2,\sfd_3\rangle$ acts on $\calh$ with finite coarea. Therefore $\mathrm{Comm}(\Gamma(n))$ is not commensurable with a reflection group.
\end{proof}
Proving Proposition \ref{prop:comm} requires an explicit description of $\Gamma$. This will follow from the lemma below, which describes an embedding of $\mathcal{Q}$ in the upper half-space model for $\mathbb{H}^3$.
\begin{lemma}\label{first calq embed} There is an isometric embedding of $\mathcal{Q}$ in $\mathbb{H}^3$ determined by the following ideal vertices: $r_2 = -1+i$, $r_3 = 0$, $r_5 = (\sqrt{3}+i)/2$, $r_7 = \infty$. \end{lemma}
\begin{proof} Our description of $\mathcal{Q}$ includes the following facts: its edge joining the ideal vertex $r_7$ to $\sfc^{-1}\sfa(r_2)$ has a dihedral angle of $\pi/2$, and there are two quadrilateral faces with ideal vertices $r_7$, $\sfa(r_2)$, $r_5$, $r_2$ and $r_7$, $\sfa(r_2)$, $r_3$, $\sfc^{-1}\sfa(r_2)$, respectively. We will choose an embedding of $\mathcal{Q}$ that sends the latter face into the geodesic plane of $\mathbb{H}^3$ with ideal boundary $\mathbb{R}\cup\{\infty\}$, taking $r_3$ to $0$ and $r_7$ to $\infty$ in particular.
\begin{figure}
\begin{center}
\input{calq_embed.pdf_t}
\end{center}
\caption{An embedding of $\mathcal{Q}$ in $\mathbb{H}^3$.}
\label{calq_embed}
\end{figure}
We have pictured such an embedding in Figure \ref{calq_embed}. The ideal vertices $\sfc^{-1}\sfa(r_2)$ and $\sfa(r_2)$ go to points $x$ and $z$, respectively, in $\mathbb{R}$ on either side of $r_3 = 0$. We take $x < 0$ and $z>0$. Since the edge joining $r_7$ to $\sfc^{-1}\sfa(r_2)$ has dihedral angle $\pi/2$, the image of $r_2$ is of the form $x+iy$ for some $y \in \mathbb{R}$. We may assume $y >0$, by reflecting through $\mathbb{R}$ if necessary. The final ideal vertex $r_5$ lies somewhere on the line segment joining $r_2$ with $\sfa(r_2)$, since it is in the ideal boundary of a plane containing $r_2$, $r_7$, and $\sfa(r_2)$. Its coordinates are determined by the fact that $\sfa$ preserves this plane, fixing $r_5$ and $r_7$.
We have darkened the triangles in $\mathbb{C}$ that lie under the dark faces of $\mathcal{Q}$ after the embedding described above. The parabolic isometry $\sfc$ takes one to the other, fixing $r_3$; thus it is of the form $\left(\begin{smallmatrix} 1 & 0 \\ w & 1 \end{smallmatrix}\right)$ for some $w \in \mathbb{C}$. Using the fact that $\sfc$ takes $\sfc^{-1}\sfa(r_2)=x$ to $\sfa(r_2)=z$, a computation implies $w = (x-z)/(xz)$. Another computation, using the fact that $\sfc(r_2) = r_5$, determines $z = -x(\sqrt{3}+1)$.
We are free to choose $x < 0$, since one choice may be changed to another by applying a hyperbolic isometry fixing $0$ and $\infty$ to $\mathcal{Q}$. Choosing $x = -1$ yields: \begin{align*}
& \sfc^{-1}\sfa(r_2) = x = -1 && r_2 = x+iy = -1+i && \sfa(r_2) = z = \sqrt{3}+1 && r_5 = \frac{\sqrt{3} + i}{2}. \end{align*}
This is the embedding described in the statement. \end{proof}
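The arithmetic in the proof above is easy to verify independently. The following sketch (our own sanity check, using sympy; variable names are ours) confirms that with $x = -1$, the parabolic $\sfc$ determined by $w = (x-z)/(xz)$ and $z = -x(\sqrt{3}+1)$ indeed takes $x$ to $z$ and $r_2$ to $r_5$:

```python
import sympy as sp

x = sp.Integer(-1)
z = -x*(sp.sqrt(3) + 1)           # z = sqrt(3) + 1
w = (x - z)/(x*z)                 # from c(x) = z for c = [[1, 0], [w, 1]]
c = lambda u: u/(w*u + 1)         # action of c on the ideal boundary

r2 = -1 + sp.I
r5 = (sp.sqrt(3) + sp.I)/2

assert sp.simplify(w - (sp.sqrt(3) + 1)/2) == 0  # w simplifies to (1+sqrt(3))/2
assert sp.simplify(c(x) - z) == 0                # c takes c^{-1}a(r2) = x to a(r2) = z
assert sp.simplify(c(r2) - r5) == 0              # c takes r2 to r5
```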
We name a few additional parabolic fixed points that will be useful below: let $r_1 = \sfd_3(r_2)$, $r_4 = \sfd_3(r_5)$, and $r_6 = \sfd_3^{\sfa}(r_5)$. Note that $r_2$, $r_4$, $r_5$, and $r_6$ are each $\Gamma$-equivalent to $r_1$.
In proving Proposition \ref{prop:comm}, it will be convenient to use a different embedding of $\mathcal{Q}$ than that described in Lemma \ref{first calq embed} above. Let us apply a M\"obius transformation taking $r_1$, $r_2$, and $r_3$ to $0$, $1$, and $\infty$, respectively. Such a map is given by $z \mapsto \frac{1+i}{2} + i/z$. This takes the other ideal vertices to: \begin{align*}
& r_4 = i\frac{1+\sqrt{3}}{2} && r_5 = 1+i\frac{1+\sqrt{3}}{2} && r_6 = 1+ i\frac{3+\sqrt{3}}{6} && r_7 = \frac{1+i}{2} \end{align*}
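The images listed above can be checked symbolically. A small sympy sketch (again our own verification, not part of the text) confirms that $z \mapsto \frac{1+i}{2} + i/z$ sends $r_2 \mapsto 1$, sends $r_5$ to the value listed, and has $-1-i$ as the preimage of $0$, which therefore locates $r_1$ in the first embedding:

```python
import sympy as sp

phi = lambda u: (1 + sp.I)/2 + sp.I/u   # the Moebius transformation from the text

r2 = -1 + sp.I                          # vertices in the first embedding
r5 = (sp.sqrt(3) + sp.I)/2

assert sp.simplify(phi(r2) - 1) == 0                              # r2 -> 1
assert sp.simplify(phi(r5) - (1 + sp.I*(1 + sp.sqrt(3))/2)) == 0  # r5 as listed
# r3 = 0 -> oo and r7 = oo -> (1+i)/2 are immediate from the formula.
# Solving phi(u) = 0 gives u = -2i/(1+i), locating r1 in the first embedding:
r1 = sp.simplify(-2*sp.I/(1 + sp.I))
assert sp.simplify(r1 + 1 + sp.I) == 0                            # r1 = -1 - i
```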
The representation of $\Gamma$ determined by the embedding described above is related to that determined by the embedding of Lemma \ref{first calq embed} by conjugation by
$$\left(\begin{smallmatrix} -i\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}(1-i) \\ -\frac{\sqrt{2}}{2}(1+i) & 0 \end{smallmatrix}\right).$$
Since $\mathcal{Q}$ is a fundamental domain for $\Gamma$, each cusp of $\mathbb{H}^3/\Gamma$ corresponds to a point on $\partial \mathbb{H}^3$ that is $\Gamma$-equivalent to an ideal vertex of $\mathcal{Q}$. Inspection of the face pairings of Lemma \ref{commensurator} thus reveals that $\mathbb{H}^3/\Gamma$ has exactly three cusps. We let $c_1$ correspond to the points of $\Gamma \cdot r_1$, $c_2$ to $\Gamma \cdot r_7$, and $c_3$ to $\Gamma \cdot r_3$.
Our explicit description of $\mathcal{Q}$ allows computation of the invariant trace field and cusp parameters. This implies:
\begin{lemma} $\Gamma$ is non-arithmetic. The cusps $c_1$ and $c_2$ are commensurable to each other and are not commensurable to $c_3$.
\end{lemma}
\begin{proof} An explicit description of generators for $\Gamma$, as may be obtained from Lemma \ref{first calq embed}, enables direct computation of the invariant trace field (see \cite[\S 3.5]{MaR}). Performing this calculation, we find that $\Gamma$ has invariant trace field $\mathbb{Q}(i,\sqrt{3})$. Alternatively, the link $L$ may be entered into the computer program SnapPea, and the resulting triangulation data into Snap, yielding the same description (see \cite{CGHN}). Since every non-compact arithmetic hyperbolic $3$-manifold has an imaginary quadratic invariant trace field, $\Gamma$ is not arithmetic.
Using the embedding described in Lemma \ref{first calq embed}, we find that an index-$8$ subgroup of $\mathrm{Stab}_{\Gamma}(\infty)$ is generated by $z \mapsto z + 2(2+\sqrt{3})$ and $z \mapsto z+ 2i$; thus the parameter of the associated cusp $c_2$ is $\mathrm{PGL}_2(\mathbb{Q})$-equivalent to $i(2+\sqrt{3})$ (cf.~\cite[\S 4.3]{CD}). After re-embedding as above, the stabilizer of $\infty$ corresponds to the cusp $c_3$. An index-$2$ subgroup of this lattice is generated by $\sfc \colon\thinspace z \mapsto z+i\frac{1+\sqrt{3}}{2}$ and the product of reflections $\sfd_2\sfd_3\colon\thinspace z \mapsto z + 1$. Thus the parameter of $c_3$ is $\mathrm{PGL}_2(\mathbb{Q})$-equivalent to $i(1+\sqrt{3})$. A similar computation reveals that $c_1$ has the same parameter as $c_2$. Since the complex modulus is a complete commensurability invariant for lattices in $\mathbb{C}$, and $i(1+\sqrt{3})$ is not $\mathrm{PGL}_2(\mathbb{Q})$-equivalent to $i(2+\sqrt{3})$, the lemma follows. \end{proof}
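The equivalence claimed for $c_2$ rests on the identity $(2+\sqrt{3})(2-\sqrt{3})=1$: the lattice generated by $z\mapsto z+2(2+\sqrt{3})$ and $z\mapsto z+2i$ has modulus $i(2-\sqrt{3}) = i/(2+\sqrt{3})$, which $z \mapsto -1/z \in \mathrm{PGL}_2(\mathbb{Q})$ carries to $i(2+\sqrt{3})$. A sympy check of this step (our own verification):

```python
import sympy as sp

s3 = sp.sqrt(3)
# Modulus of the lattice generated by 2(2+sqrt(3)) and 2i:
tau = 2*sp.I / (2*(2 + s3))

assert sp.simplify((2 + s3)*(2 - s3) - 1) == 0      # (2+sqrt(3))(2-sqrt(3)) = 1
assert sp.simplify(tau - sp.I*(2 - s3)) == 0        # tau = i(2 - sqrt(3))
assert sp.simplify(-1/tau - sp.I*(2 + s3)) == 0     # z -> -1/z sends it to i(2+sqrt(3))
```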
From Margulis' theorem, we immediately obtain:
\begin{cor} $\text{Comm}(\Gamma)$ is a finite extension of $\Gamma$, and the minimal orbifold $O \doteq \mathbb{H}^3/\text{Comm}(\Gamma)$ has either two or three cusps. \end{cor}
In particular, if $O$ has two cusps then $c_1$ and $c_2$ are identified by the covering map $\mathbb{H}^3/\Gamma \rightarrow O$. We have used the algorithm of Goodman--Heard--Hodgson \cite{GHH} to compute $\text{Comm}(\Gamma)$. Recall that we introduced the setting for this algorithm in Section \ref{sec:One cusp} between the statements of Propositions \ref{OP comm} and \ref{tiling}.
Let
\begin{align*}
{\bf v}_1& \ = \ \left( -2, \,2, \,-1, \,3 \right)^T \\
{\bf v}_7& \ = \ \left( 0, \,0, \,9-4\sqrt{3}, \,9-4\sqrt{3} \right)^T \\
{\bf v}_3& \ = \ \left( 0, \,0, \,-3, \,3 \right)^T.
\end{align*}
These vectors are chosen so that there is an isometry $\Phi$ from the upper half-space model to the hyperboloid model which takes the parabolic fixed point $r_i$ to the center of the horosphere $H_{{\bf v}_i}$ when $i=1,3,7$. Under $\Phi$, the isometries ${\sf a,b,c}, {\sf d}_1, {\sf d}_2, {\sf d}_3$ correspond to the matrices ${\sf A, B, C}, {\sf D}_1, {\sf D}_2, {\sf D}_3 \in \text{O}_0(3,1)$ listed below.
\begin{align*}
{\sf A}&\ = \ \left(
\begin{array}{cccc}
-1 & 0 & -1/2 & 1/2\\
0 & -1 & \sqrt{3}/2 & -\sqrt{3}/2 \\
-1/2 &\sqrt{3}/2 & 1/2 & 1/2 \\
-1/2 & \sqrt{3}/2 & -1/2 & 3/2
\end{array}
\right) &
{\sf D}_1& \ = \ \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & -1 & -1 & 1 \\
0 & -1 & \frac{1}{2} & \frac{1}{2} \\
0 & -1 & -\frac{1}{2} & \frac{3}{2}
\end{array}
\right)
\end{align*}\begin{align*}
{\sf B}& \ = \ \left(
\begin{array}{cccc}
1 & 0 & -1 & 1 \\
0 & 1 & 0 & 0 \\
1 & 0 & 1/2 & 1/2\\
1 & 0 & -1/2 & 3/2
\end{array}
\right) &
{\sf D}_2& \ = \ \left(
\begin{array}{cccc}
-1 & 0 & 2 & 2 \\
0 & 1 & 0 & 0 \\
2 & 0 & -1 & -2 \\
-2 & 0 & 2 & 3
\end{array}
\right)
\end{align*}\begin{align*}
{\sf C}&\ = \ \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & -1-\sqrt{3} & -1-\sqrt{3} \\
0 & 1+\sqrt{3} & -1-\sqrt{3} & -2-\sqrt{3} \\
0 & -1-\sqrt{3} & 2+\sqrt{3} & 3+\sqrt{3}
\end{array}
\right) &
{\sf D}_3& \ = \ \left(
\begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right)
\end{align*}
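Each of these matrices must preserve the Lorentz form. As a numerical sanity check (ours, not part of the text), one can verify $M^{T}JM = J$ with $J = \mathrm{diag}(1,1,1,-1)$ for all six generators:

```python
import numpy as np

s3 = np.sqrt(3.0)
J = np.diag([1.0, 1.0, 1.0, -1.0])   # Lorentz form preserved by O_0(3,1)

A = np.array([[-1, 0, -0.5, 0.5],
              [0, -1, s3/2, -s3/2],
              [-0.5, s3/2, 0.5, 0.5],
              [-0.5, s3/2, -0.5, 1.5]])
B = np.array([[1, 0, -1, 1],
              [0, 1, 0, 0],
              [1, 0, 0.5, 0.5],
              [1, 0, -0.5, 1.5]])
C = np.array([[1, 0, 0, 0],
              [0, 1, -1 - s3, -1 - s3],
              [0, 1 + s3, -1 - s3, -2 - s3],
              [0, -1 - s3, 2 + s3, 3 + s3]])
D1 = np.array([[1, 0, 0, 0],
               [0, -1, -1, 1],
               [0, -1, 0.5, 0.5],
               [0, -1, -0.5, 1.5]])
D2 = np.array([[-1, 0, 2, 2],
               [0, 1, 0, 0],
               [2, 0, -1, -2],
               [-2, 0, 2, 3]])
D3 = np.diag([-1.0, 1.0, 1.0, 1.0])

for M in (A, B, C, D1, D2, D3):
    assert np.allclose(M.T @ J @ M, J)   # M preserves the Lorentz form
```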
Thus, $\Phi$ allows us to also think of $\Gamma$ as a subgroup of $\text{O}_0(3,1)$. Each ${\bf v}_i$ is a horospherical vector for the cusp $c_i$ of $\mathbb{H}^3/\Gamma$, so $\{ {\bf v}_1, {\bf v}_3, {\bf v}_7 \}$ determines a $\Gamma$-invariant set $V$ as above. We have $\{ {\bf v}_i \}_{i=1}^{7}$ given by ${\bf v}_i = \Phi(r_i)$
and these vectors may be calculated explicitly by applying appropriate isometries from $\Gamma$. We have that ${\bf v}_i$ is the $i^\text{th}$ column of the matrix
\[ \left(
\begin{array}{ccccccc}
-2 & 2 & 0 & -2 & 2 & 6 & 0 \\
2 & 2 & 0 & -2 \sqrt{3} & -2 \sqrt{3} & -2 \sqrt{3} & 0 \\
-1 & -1 & -3 & -3 & -3 & -1 & 9-4 \sqrt{3} \\
3 & 3 & 3 & 5 & 5 & 7 & 9-4 \sqrt{3}
\end{array}
\right).\]
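Since each ${\bf v}_i$ is a horospherical vector, it must lie on the light cone of the Lorentz form; moreover ${\bf v}_1 = {\sf D}_3({\bf v}_2)$ and ${\bf v}_4 = {\sf D}_3({\bf v}_5)$, reflecting $r_1 = \sfd_3(r_2)$ and $r_4 = \sfd_3(r_5)$. A quick numpy check (ours):

```python
import numpy as np

s3 = np.sqrt(3.0)
J = np.diag([1.0, 1.0, 1.0, -1.0])         # Lorentz form
D3 = np.diag([-1.0, 1.0, 1.0, 1.0])        # the reflection d3

# Columns are v_1, ..., v_7 from the matrix in the text.
V = np.array([[-2, 2, 0, -2, 2, 6, 0],
              [2, 2, 0, -2*s3, -2*s3, -2*s3, 0],
              [-1, -1, -3, -3, -3, -1, 9 - 4*s3],
              [3, 3, 3, 5, 5, 7, 9 - 4*s3]])

for i in range(7):
    v = V[:, i]
    assert abs(v @ J @ v) < 1e-9           # each v_i is light-like

assert np.allclose(D3 @ V[:, 1], V[:, 0])  # v_1 = D3(v_2), since r_1 = d3(r_2)
assert np.allclose(D3 @ V[:, 4], V[:, 3])  # v_4 = D3(v_5), since r_4 = d3(r_5)
```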
As discussed above, we obtain all possibilities for canonical tilings associated to $\Gamma$ by using initial sets of the form $\{ {\bf v}_1, \beta {\bf v}_7, \gamma {\bf v}_3 \}$ where $\beta, \gamma \in \mathbb{R}^+$. We write $\calh(\beta, \gamma)$ to denote the set $\Gamma \cdot \{ {\bf v}_1, \beta {\bf v}_7, \gamma {\bf v}_3 \}$ and $\mathcal{T}(\beta, \gamma)$ to denote the associated canonical tiling.
Recall that $O=\mathbb{H}^3/\text{Comm}(\Gamma)$ has either 2 or 3 cusps. If $O$ has 3 cusps then, for any pair $(\beta, \gamma)$, $\calh(\beta, \gamma)$ descends to cusp cross sections of $O$ and so $\text{Comm}(\Gamma) = \text{Sym}(\mathcal{T}(\beta, \gamma))$. If $O$ has 2 cusps then there is some ${\sf g} \in \text{Comm}(\Gamma)$ and $\beta_0$ with ${\sf g}({\bf v}_1) = \beta_0 {\bf v}_7$. We have $\text{Comm}(\Gamma) = \text{Sym}(\mathcal{T}(\beta_0, \gamma))$ for any $\gamma \in \mathbb{R}^+$. Therefore, it suffices to compute the triangulations $\mathcal{T}(\beta, 1)$ for $\beta \in \mathbb{R}^+$. Either there exists a unique $\beta_0$ so that $\text{Sym}(\mathcal{T}(\beta_0,1))$ contains an isometry taking ${\bf v}_1$ to $\beta_0 {\bf v}_7$ or there is no such $\beta_0$. In the first case, $O$ has 2 cusps and $\text{Comm}(\Gamma) = \text{Sym}(\mathcal{T}(\beta_0,1))$. In the second case, $O$ has 3 cusps and $\text{Comm}(\Gamma) = \text{Sym}(\mathcal{T}(\beta, 1))$ for every $\beta$.
\begin{lemma} \label{lem:sym}
$O$ has 3 cusps and $\text{Comm}(\Gamma) = \text{Sym}(\mathcal{T}(\beta, 1))$ for every $\beta$.
\end{lemma}
\proof The proof follows by showing that there is no $\beta_0$ such that $\text{Sym}(\mathcal{T}(\beta_0,1))$ contains an isometry taking ${\bf v}_1$ to $\beta_0 {\bf v}_7$. We first describe the canonical triangulations as $\beta$ decreases from $\infty$ to $0$. The interval $(0,\infty)$ has a finite cell decomposition so that if two values for $\beta$ are chosen from the same cell then they determine the same canonical triangulation. As $\beta$ moves to the boundary of a $1$-cell there is a pair of neighboring tiles $T_1$ and $T_2$ so that the tilt at their common face changes from positive to zero. At the boundary value, these two tiles merge to form a tile in the new canonical triangulation. The decomposition of $(0,\infty)$ and the associated tilings of $\mathbb{H}^3$ are described in Tables \ref{table:first two} -- \ref{table:last eleven}. The triangulations $\mathcal{T}(\beta,1)$ can be checked by repeatedly verifying the coplanar and positive tilt conditions on sets of $\Gamma$-generating tiles. In the tables, we let $[p_1, \ldots , p_k]$ denote the convex hull in $\mathbb{H}^3$ of a collection $\{ p_1, \ldots , p_k\}\subset \partial \mathbb{H}^3$.
\begin{table
\begin{tabular}{lclcl} \\ \\ & &$\beta$ & & $\Gamma$-Generating Tiles \\
\hline \hline
&& && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf A}(\bv_2)]$ \\
$\mathcal{T}_1$&& $\beta > \frac{3}{11}(4+3\sqrt{3})$ && $\mathcal{P}_2=[\bv_3, \bv_5, {\sf A}(\bv_2), {\sf A}(\bv_3)]$\\
&& && $\mathcal{P}_3=[\bv_3, \bv_5, {\sf CA}(\bv_2), {\sf CA}(\bv_3)]$\\
&& && $\mathcal{P}_4=[\bv_3, \bv_4, \bv_5, {\sf CA}(\bv_2)]$\\
&& && $\mathcal{P}_5=[\bv_4, \bv_5, {\sf CA}(\bv_2), {\sf C}(\bv_7)]$\\
\hline && && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf A}(\bv_2)]$ \\
$\mathcal{T}_2$&& $\beta=\frac{3}{11}(4+3\sqrt{3})\sim 2.51$ && $\mathcal{P}_2=[\bv_3, \bv_5, {\sf A}(\bv_2), {\sf A}(\bv_3)]$\\
&& && $\mathcal{P}_3=[\bv_3, \bv_5, {\sf CA}(\bv_2), {\sf CA}(\bv_3)]$\\
&& && $\mathcal{P}_4=[\bv_3, \bv_4, \bv_5, {\sf CA}(\bv_2), {\sf C}(\bv_7)]$\\
\hline
\end{tabular}
\caption{The data that determine the first two canonical tilings.}
\label{table:first two}
\end{table}
\begin{table}
\centering
\begin{tabular}{lclcl} \\ \\ & &$\beta$ & & $\Gamma$-Generating Tiles \\
\hline \hline
&& && $ \mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf C}(\bv_7)]$ \\
$\mathcal{T}_3$&& $\frac{1}{22}(21+13\sqrt{3})<\beta<\frac{3}{11}(4+3\sqrt{3})$&& $\mathcal{P}_2=[\bv_3, \bv_4, \bv_5, {\sf A}(\bv_2)]$\\
&& && $\mathcal{P}_3=[\bv_3, \bv_5, {\sf A}(\bv_2), {\sf A}(\bv_3)]$\\
&& && $\mathcal{P}_4=[\bv_3,\bv_5, {\sf C}(\bv_7), {\sf CA}(\bv_2)]$\\
&& && $\mathcal{P}_5=[\bv_3, \bv_5, {\sf CA}(\bv_2), {\sf CA}(\bv_3)]$\\
\hline && && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf C}(\bv_7)]$ \\
$\mathcal{T}_4$&& $\beta =\frac{1}{22}(21+13\sqrt{3})\sim 1.978$&&$\mathcal{P}_2=[\bv_3, \bv_4, \bv_5, {\sf A}(\bv_2)]$\\
&& && $\mathcal{P}_3=[\bv_3, \bv_5, {\sf A}(\bv_2), {\sf A}(\bv_3)]$\\
&& && $\mathcal{P}_4=[\bv_3,\bv_5, {\sf C}(\bv_7), {\sf CA}(\bv_2), {\sf CA}(\bv_3)]$ \\
\hline && && $ \mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf C}(\bv_7)]$ \\
$\mathcal{T}_5$&& $\frac{1}{11}(9+4\sqrt{3})<\beta<\frac{1}{22}(21+13\sqrt{3})$&& $ \mathcal{P}_2=[\bv_3, \bv_4, \bv_5, {\sf A}(\bv_2)]$\\
&& && $ \mathcal{P}_3=[\bv_3,\bv_5, {\sf A}(\bv_2), {\sf A}(\bv_3)]$\\
&& && $ \mathcal{P}_4=[\bv_3,\bv_7, {\sf A}(\bv_2), {\sf A}(\bv_3)]$\\
\hline && && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf C}(\bv_7)]$ \\
$\mathcal{T}_6$&& $\beta = \frac{1}{11}(9+4\sqrt{3})\sim 1.45$&&$ \mathcal{P}_2=[\bv_3, \bv_4, \bv_5, {\sf A}(\bv_2)]$\\
&& &&$ \mathcal{P}_3=[\bv_3,\bv_5, \bv_7, {\sf A}(\bv_2), {\sf A}(\bv_3)]$\\
\hline && && $\mathcal{P}_1=[\bv_1, \bv_2, \bv_3, \bv_7]$ \\
$\mathcal{T}_7$&&$\frac{1}{121}(72+43\sqrt{3})<\beta<\frac{1}{11}(9+4\sqrt{3})$&& $\mathcal{P}_2=[\bv_2, \bv_3, \bv_5, \bv_7]$\\
&& &&$\mathcal{P}_3=[\bv_3,\bv_5,\bv_7, {\sf A}(\bv_2)]$\\
&& &&$\mathcal{P}_4=[\bv_3, \bv_4,\bv_5, {\sf A}(\bv_2)]$\\
\hline && && $\mathcal{P}_1=[\bv_1, \bv_2, \bv_3, \bv_7]$ \\
$\mathcal{T}_8$&& $\beta = \frac{1}{121}(72+43\sqrt{3}) \sim 1.21$&&$\mathcal{P}_2=[\bv_2, \bv_3, \bv_5, \bv_7]$\\
&& &&$\mathcal{P}_3=[\bv_3,\bv_4,\bv_5,\bv_7,{\sf A}(\bv_2)]$ \\
\hline && && $\mathcal{P}_1=[\bv_1, \bv_2, \bv_3, \bv_7]$ \\
$\mathcal{T}_9$&&$(-21+13\sqrt{3})^{-1}<\beta<\frac{1}{121}(72+43\sqrt{3})$&& $\mathcal{P}_2=[\bv_2, \bv_3, \bv_5, \bv_7]$\\
&& &&$\mathcal{P}_3=[\bv_3,\bv_4,\bv_5,\bv_7]$\\
&& &&$\mathcal{P}_4=[\bv_2, \bv_5, \bv_6, \bv_7]$\\
\hline && && $\mathcal{P}_1=[\bv_1, \bv_2, \bv_3, \bv_7]$ \\
$\mathcal{T}_{10}$&& $\beta = (-21+13\sqrt{3})^{-1} \sim 0.659$&&$\mathcal{P}_2=[\bv_2, \bv_3, \bv_5, \bv_7]$\\
&& &&$\mathcal{P}_3=[\bv_3,\bv_4,\bv_5,\bv_7]$\\
&& &&$\mathcal{P}_4=[\bv_2, \bv_5, \bv_6, \bv_7,{\sf D}_2(\bv_7)]$\\ \hline
\end{tabular}
\caption{More canonical tilings.}
\label{table:first ten}
\end{table}
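The exact threshold values of $\beta$ appearing in the tables can be sanity-checked against their stated decimal approximations, and one can confirm that they are strictly decreasing, so that they cut $(0,\infty)$ into the cells described. A short numerical check (ours, not part of the proof):

```python
import math

s3 = math.sqrt(3)
# Threshold values of beta from the tables, in the order they appear.
thresholds = [
    (3/11)*(4 + 3*s3),     # ~ 2.51
    (1/22)*(21 + 13*s3),   # ~ 1.978
    (1/11)*(9 + 4*s3),     # ~ 1.45
    (1/121)*(72 + 43*s3),  # ~ 1.21
    1/(-21 + 13*s3),       # ~ 0.659
    (1/143)*(48 + 25*s3),  # ~ 0.638
    (1/11)*(6 - s3),       # ~ 0.39
    (1/33)*(3 + 5*s3),     # ~ 0.353
    (1/143)*(24 + 7*s3),   # ~ 0.252
    (1/33)*(6 - s3),       # ~ 0.129
]
approx = [2.51, 1.978, 1.45, 1.21, 0.659, 0.638, 0.39, 0.353, 0.252, 0.129]

for t, a in zip(thresholds, approx):
    assert abs(t - a) < 5e-3                 # matches the stated approximation
# Strictly decreasing, so consecutive thresholds bound disjoint cells of (0, oo):
assert all(x > y for x, y in zip(thresholds, thresholds[1:]))
```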
\begin{table}
\centering
\begin{tabular}{lclcl} \\ \\ & &$\beta$ & & $\Gamma$-Generating Tiles \\
\hline \hline
&& && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf C}(\bv_7)]$\\
$\mathcal{T}_{11}$ && $\frac{1}{143}(48+25\sqrt{3})<\beta<(-21+13\sqrt{3})^{-1}$ && $\mathcal{P}_2=[\bv_2, \bv_3, \bv_5, \bv_7]$ \\
&& && $\mathcal{P}_3=[\bv_3, \bv_4, \bv_5, \bv_7]$\\
&& && $\mathcal{P}_4=[\bv_5, \bv_6, \bv_7, {\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_5=[\bv_2, \bv_5, \bv_7,{\sf D}_2(\bv_7) ]$\\
\hline && && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf C}(\bv_7)]$\\
$\mathcal{T}_{12}$ && $\beta = \frac{1}{143}(48+25\sqrt{3}) \sim 0.638$ && $\mathcal{P}_2=[\bv_2, \bv_3, \bv_5, \bv_7,{\sf D}_2(\bv_7) ]$\\
&& && $\mathcal{P}_3=[\bv_3, \bv_4, \bv_5, \bv_7]$\\
&& && $\mathcal{P}_4=[\bv_5, \bv_6, \bv_7, {\sf D}_2(\bv_7)]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, {\sf C}(\bv_7)]$\\
$\mathcal{T}_{13}$ && $\frac{1}{11}(6-\sqrt{3})<\beta<\frac{1}{143}(48+25\sqrt{3})$ && $\mathcal{P}_2=[\bv_3,\bv_4,\bv_5,\bv_7]$\\
&& && $\mathcal{P}_3=[\bv_3,\bv_5,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_4=[\bv_2, \bv_3,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_5=[\bv_5, \bv_6, \bv_7, {\sf D}_2(\bv_7)]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_4, \bv_5, \bv_7, {\sf C}(\bv_7)]$\\
$\mathcal{T}_{14}$ && $\beta = \frac{1}{11}(6-\sqrt{3}) \sim 0.39$ && $\mathcal{P}_2=[\bv_3,\bv_5,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_3=[\bv_2, \bv_3,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_4=[\bv_5, \bv_6, \bv_7, {\sf D}_2(\bv_7)]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_5, \bv_7, {\sf C}(\bv_7)]$\\
$\mathcal{T}_{15}$ && $\frac{1}{33}(3+5\sqrt{3})<\beta<\frac{1}{11}(6-\sqrt{3})$ && $\mathcal{P}_2=[\bv_4, \bv_5, \bv_7, {\sf C}(\bv_7)]$\\
&& && $\mathcal{P}_3=[\bv_3,\bv_5,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_4=[\bv_2, \bv_3,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_5=[\bv_5, \bv_6, \bv_7, {\sf D}_2(\bv_7)]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_5, \bv_7, {\sf C}(\bv_7)]$\\
$\mathcal{T}_{16}$ && $\beta=\frac{1}{33}(3+5\sqrt{3}) \sim 0.353$ && $\mathcal{P}_2=[\bv_4, \bv_5, \bv_7, {\sf C}(\bv_7), {\sf D}_1^{\sf C}(\bv_7)]$\\
&& && $\mathcal{P}_3=[\bv_3,\bv_5,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_4=[\bv_2, \bv_3,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_5=[\bv_5, \bv_6, \bv_7, {\sf D}_2(\bv_7)]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_5, \bv_7, {\sf C}(\bv_7)]$\\
$\mathcal{T}_{17}$ && $ \frac{1}{143}(24+7\sqrt{3}) <\beta<\frac{1}{33}(3+5\sqrt{3})$ && $\mathcal{P}_2=[\bv_5, \bv_7, {\sf C}(\bv_7), {\sf D}_1^{\sf C}(\bv_7)]$\\
&& && $\mathcal{P}_3=[\bv_3,\bv_5,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_4=[\bv_2, \bv_3,\bv_7,{\sf D}_2(\bv_7)]$\\
&& && $\mathcal{P}_5=[\bv_5, \bv_6, \bv_7, {\sf D}_2(\bv_7)]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_5, \bv_7, {\sf C}(\bv_7),{\sf D}_2(\bv_7), {\sf D}_2{\sf C}(\bv_7) ]$\\
$\mathcal{T}_{18}$ && $\beta=\frac{1}{143}(24+7\sqrt{3}) \sim 0.252$ && $\mathcal{P}_2=[\bv_5, \bv_7, {\sf AC}(\bv_7), {\sf D}_2(\bv_7)]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_7, {\sf C}(\bv_7), {\sf D}_2(\bv_7), {\sf D}_2{\sf C}(\bv_7) ]$\\
$\mathcal{T}_{19}$ && $\frac{1}{33}(6-\sqrt{3}) <\beta<\frac{1}{143}(24+7\sqrt{3})$ && $\mathcal{P}_2=[\bv_5, \bv_7, {\sf C}(\bv_7), {\sf D}_2(\bv_7), {\sf D}_2{\sf C}(\bv_7) ]$\\
&& && $\mathcal{P}_3=[\bv_5, \bv_7, {\sf AC}(\bv_7),{\sf D}_2(\bv_7) ]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_7, {\sf C}(\bv_7), {\sf D}_2(\bv_7), {\sf D}_2{\sf C}(\bv_7) ]$\\
$\mathcal{T}_{20}$ && $\beta=\frac{1}{33}(6-\sqrt{3})\sim 0.129 $ && $\mathcal{P}_2=[\bv_5, \bv_7, {\sf C}(\bv_7), {\sf D}_2(\bv_7), {\sf D}_2{\sf C}(\bv_7), {\sf AC}(\bv_7) ]$\\
\hline
&& && $\mathcal{P}_1=[\bv_3, \bv_7, {\sf C}(\bv_7), {\sf D}_2(\bv_7), {\sf D}_2{\sf C}(\bv_7) ]$\\
$\mathcal{T}_{21}$ && $ \beta<\frac{1}{33}(6-\sqrt{3}) $ && $\mathcal{P}_2=[ \bv_7, {\sf C}(\bv_7), {\sf D}_2(\bv_7), {\sf D}_2{\sf C}(\bv_7),{\sf AC}(\bv_7)]$\\
&& && $\mathcal{P}_3=[\bv_5, {\sf C}(\bv_7), {\sf D}_2{\sf C}(\bv_7),{\sf AC}(\bv_7)]$\\
\hline
\end{tabular}
\caption{The remaining tilings.}
\label{table:last eleven}
\end{table}
From our earlier observations, it remains only to check that there are no symmetries of the even numbered tilings that interchange vertices of $\Gamma.\bv_1$ with those of $\Gamma.\bv_7$. The arguments for each of the cases are very similar; we start with $\mathcal{T}_2$ as a model case. Recall that $\bv_2$, $\bv_4$, $\bv_5$, and $\bv_6$ are each $\Gamma$-equivalent to $\bv_1$.
Suppose there exists $\gamma \in \mathrm{Sym}(\mathcal{T}_2)$ exchanging $\Gamma.\bv_1$ with $\Gamma.\bv_7$. Then $\gamma(\mathcal{P}_4)$ is a tile of $\mathcal{T}_2$ with exactly five vertices. $\mathcal{P}_4$ is the unique generating tile with five vertices, so there exists $\gamma' \in \Gamma$ with $\gamma' \gamma(\mathcal{P}_4)=\mathcal{P}_4$. Since $\gamma' \in \Gamma$, it preserves the cusp classes of the vertices of tiles in $\mathcal{T}_2$. On the other hand, since $\gamma$ exists, the minimal orbifold must have exactly two cusps, hence $\gamma' \gamma$ must exchange the vertices of $\mathcal{P}_4$ in $\Gamma.\bv_1$ with those in $\Gamma.\bv_7$. But our explicit description implies that there are three of the former and only one of the latter, a contradiction.
The same sort of argument also works for the remaining even numbered triangulations with the exception of $\mathcal{T}_{12}$ and $\mathcal{T}_{14}$. Consider the case of $\mathcal{T}_{12}$. Suppose there is $\gamma \in \mathrm{Sym}(\mathcal{T}_{12})$ exchanging $\Gamma.\bv_1$ with $\Gamma.\bv_7$. Arguing as before, we have an element $\delta \in \text{Comm}(\Gamma)$ with $\delta(\mathcal{P}_2)=\mathcal{P}_2$ and which interchanges its vertices in $\Gamma.\bv_1$ with those in $\Gamma.\bv_7$. Since $\mathcal{P}_2$ has two vertices in $\Gamma.\bv_1$ and two in $\Gamma.\bv_7$, we have not yet arrived at a contradiction. But such a $\delta$ still cannot exist since it can be seen that the two vertices in $\Gamma.\bv_7$ are connected by an edge but those in $\Gamma.\bv_1$ are not.
The argument for $\mathcal{T}_{14}$ follows the outline of the argument for $\mathcal{T}_{12}$. Here $\mathcal{P}_1$ is the unique generating tile with five vertices, its two vertices in $\Gamma.\bv_1$ are connected by an edge, and those in $\Gamma.\bv_7$ are not.
\endproof
\proof[Proof of Proposition \ref{prop:comm}]
By Lemma \ref{lem:sym}, we have $\text{Comm}(\Gamma) = \text{Sym}(\mathcal{T}_{18})$, so to prove the proposition we need to show that $\text{Sym}(\mathcal{T}_{18})=\Gamma$. We already know that $\Gamma \subset \text{Sym}(\mathcal{T}_{18})$.
Suppose that $\gamma \in \text{Sym}(\mathcal{T}_{18})-\Gamma$ is non-trivial. Since $\mathcal{T}_{18}$ is $\Gamma$-generated by $\mathcal{P}_1$ and $\mathcal{P}_2$ and these two polyhedra have different numbers of vertices, we may assume that $\gamma(\mathcal{P}_1)=\mathcal{P}_1$. By composing $\gamma$ with ${\sf d}_2$, if necessary, we may assume that $\gamma$ is orientation preserving. $\mathcal{P}_1$ has one vertex in $\Gamma.\bv_1$, one in $\Gamma.\bv_3$, and four in $\Gamma.\bv_7$; thus by Lemma \ref{lem:sym}, $\gamma$ fixes $\bv_5$ (which is in $\Gamma.\bv_1$) and $\bv_3$.
Using our embedding in the upper half space model, the vertices of $\mathcal{P}_1$ in $\Gamma.\bv_7$ are taken to:
\begin{align*}
& r_7=\frac{1+i}{2}, && {\sf c}(r_7) =\frac{1+i(2+\sqrt{3})}{2}, && {\sf d}_2{\sf c}(r_7)=\frac{3+i(2+\sqrt{3})}{2}, && {\sf d}_2(r_7) = \frac{3+i}{2}. \end{align*}
Since $\gamma$ is an elliptic isometry preserving $\mathcal{P}_1$ and fixing $r_3=\infty$ and $r_5 = 1 + \frac{i}{2}(1+\sqrt{3})$, it must act as a cyclic permutation on the set $\{r_7, \,{\sf c}(r_7), \,{\sf d}_2{\sf c}(r_7), \,{\sf d}_2(r_7)\}$. But it is easy to see that the axis of $\gamma$ is not perpendicular to the plane that these points span, so this is impossible.
\endproof
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
\subsection{Motion in General Relativity}
The problem of motion in General Relativity is a complex but fundamental one.
It is the motion of objects {\it and} of light that allows for precise
tests of the theory, by connecting it to observations. Conversely, the way that objects move allows one to infer, study and map the amount of matter contributing to the motion. When the object (a star, a planet, etc.) is idealized as point-like, it moves
-- to first approximation -- along geodesics of the spacetime ``generated'' by the rest of the universe.
Static, spherically symmetric objects in otherwise empty spacetime give rise to a Schwarzschild geometry. Geodesic motion around a Schwarzschild background has been studied for decades. The symmetries of the spacetime allow for three constants of motion, which considerably simplify the analysis and make the problem integrable. However, spherical symmetry does not necessarily imply staticity when matter pervades the geometry. For example, radially oscillating stars produce an effective geometry which is time-dependent in their interior, but Birkhoff's theorem guarantees that the spacetime outside such a configuration is described by a Schwarzschild geometry \cite{bookWeinberg:1972}.
Here, we do not have in mind the interior of stars, but rather the motion of planets, stars and compact objects within extended, dynamical matter distributions. One important example is described by a self-gravitating massive scalar, and is used to describe dark matter (DM), either at a fundamental level or as an effective description~\cite{Barack:2018yly}. The field equations describing a (real) minimally coupled massive scalar can give rise to spatially bound, time-dependent, spherically symmetric solutions~\cite{Bogolyubsky:1976nx,Copeland:1995fq,Honda:2001xg,Fodor:2006zs,Brito:2015yga}.
Thus, the question arises as to how matter moves in time-periodic geometries and whether the motion exhibits characteristic behavior.
Some aspects of this question were considered previously within a very specific context -- that of oscillating bosonic DM -- and within some approximations~\cite{Khmelnitsky:2013lxt,Blas:2016ddr,Ferreira:2017pth, AokiSoda:2017a}.
Here we wish to consider the full problem of geodesic motion, in what looks like a classic problem in Newtonian physics and General Relativity: how do particles move in a time-dependent and periodic gravitational potential?
The general description and discussion of symmetries of motion of massive particles is provided in Section~\ref{sec:motionformalism}. In Section~\ref{sec:examples}, we construct and examine, both analytically and numerically, several examples. Our focus is mostly on whether and how resonances and instabilities manifest in this motion, as they are the most distinct signatures of time-periodic backgrounds.
\subsection{Galactic haloes}
Although extremely successful on a large variety of scales, the cold dark matter (CDM) model is met with several problems at galactic scales.
One way to alleviate these problems is to rely on baryonic feedback processes\footnote{Such as reionization, (baryonic) radiative cooling, star formation, supernovae explosions etc.}. However, these mechanisms are not at present consistently and uniformly successful in solving small-scale challenges. Other approaches, within the DM paradigm, rely on modifying linear or non-linear aspects of structure formation. These include a modification of the primordial power spectrum of the DM density fluctuations (a small-scale suppression) and/or the importance of DM self-interactions~\cite{Bullock:2017xww}. The first approach is natural in the warm DM model, where DM particles have significant thermal velocities, and in the case of ultra-light axions (ULA), where the small-scale suppression of the power spectrum occurs because of the particle's large de Broglie wavelength~\cite{Hui:2016ltb, Marsh:2016rep}.
It should be noted that there is no a priori reason why these alternative models should solve all of the small-scale problems (a ``catch-all'' solution) while reproducing the large-scale success of the CDM, as some (or all) of these problems may indeed be solved by progress in the understanding of baryonic feedback (however, see Ref. \cite{Famaey:2017xou}). It is thus necessary to think about direct tests of such models. Both warm DM and ULA have also been considered in the context of mixed DM cosmologies \cite{AnderhaldenDiemand:2013, MarshSilk:2013}.
When the ULA particle has a mass in the range $10^{-23}\,\mbox{{\scriptsize $\stackrel{<}{\sim}$}}\, m[\text{eV}] \,\mbox{{\scriptsize $\stackrel{<}{\sim}$}}\, 10^{-21}$ and a weak self-interaction\footnote{For scalar DM models with strong self-interactions see Appendix \ref{AppGPP}.}, it is a candidate for the dominant component of the DM and is usually referred to as fuzzy DM (FDM).
ULA particles are motivated by QCD and string theory~\cite{Marsh:2017hbv} and are described by a real scalar field, the subject of this work. Solutions to the field equations should, in this context, be interpreted as describing a region of the dark halo. Because the system is dilute, it can be analysed in the weak-field limit, along with the motion of test particles inside such a background. We discuss and study this scenario in Section~\ref{sec:NOscillatons}, Section~\ref{sec:darkhalo} and Appendices \ref{AppEliptical} and \ref{AppBinary}, and show that observable consequences of motion in a ULA background may be found in the analysis of the motion of S2-like stars at the Galactic center.
\subsection{Compact objects and BH mimickers}
In the other limit, compact configurations made from a real scalar field, dubbed oscillatons, and motion inside them, e.g. extreme mass ratio inspirals (EMRI), can be of relevance in understanding the nature of compact, massive and dark objects~\cite{Cardoso:2017njb,Macedo:2013qea,Macedo:2013jja,Ferreira:2017pth,HelferLim:2018, Barack:2018yly}. Thus, our results can also make close contact with attempts at explaining observations of compact objects.
We review the description of oscillatons in both the strong-field (numerically, in Section~\ref{sec:ROscillatons}) and the weak-field regime (analytically, in Section~\ref{sec:NOscillatons} and Appendices \ref{AppEKGNewt} and \ref{AppGPP}). We also discuss how dynamical oscillaton spacetimes are. In Section \ref{sec:applications_oscillatons} we confirm previous results~\cite{2006GReGr..38..633B} for the motion in oscillatons, show that no resonances can be excited by motion in them, and develop an analytical framework for the description of motion in dilute oscillatons. The motion in self-interacting Newtonian oscillatons is briefly discussed in Appendix~\ref{AppGPP_self_int_motion}. The motion of light in oscillaton spacetimes, in the context of gravitational redshift, is discussed in Appendix~\ref{AppGredshift}.
\section{Setup}
\subsection{Dynamic, time-periodic spacetimes} \label{sec:spacetime}
We adopt geometric units ($G=c=1$) and consider a spherically symmetric (Lorentzian) spacetime which in radial coordinates $(t,r,\vartheta,\varphi)$ takes the form
\begin{equation}
ds^2=g_{\mu\nu}d x^\mu dx^\nu=-Adt^2+Bdr^2+r^2d\Omega^2\,,\label{eq:metric}
\end{equation}
where $A\equiv A(t,r)>0,\, B\equiv B(t,r)>0$ and $d\Omega^2=d\vartheta^2+\sin^2(\vartheta)d\varphi^2$ is the metric on the two-sphere.
The functions $A$ and $B$ are taken to be periodic functions of the time coordinate.
In most of this work, the spacetime is to a good approximation static, and the time-dependence is but a small deviation away from staticity.
Thus, we find it convenient to expand the metric around a static reference spacetime $g^{(0)}_{\mu\nu}$
\begin{equation}
g_{\mu\nu}=g^{(0)}_{\mu\nu}+\epsilon g^{(1)}_{\mu\nu}\,.\label{metric_expansion}
\end{equation}
Having in mind specific applications, we consider spacetimes for which
\begin{eqnarray}
A(t,r)&=&a_0(r)+\epsilon a_1(r)\cos\left(2\omega t\right)\,,\label{eq:weakly1}\\
B(t,r)&=&b_0(r)+\epsilon b_1(r)\cos\left(2\omega t\right)\,,\label{eq:weakly2}
\eeq
thus describing time-periodic geometries with period $T=\pi/\omega$.
\subsection{An example: oscillatons} \label{sec:Oscillatons}
One possible time-periodic metric is obtained in the context of minimally coupled scalar fields, giving rise to
self-gravitating structures~\cite{Seidel:1991zh,Brito:2015yga}.
These are time-dependent, spherically symmetric, real~\footnote{Complex scalar field counterparts to oscillatons are known as boson stars, whose metric is stationary but whose field has a harmonic time dependence~\cite{Liebling:2012fv}. These configurations have a $\text{U}(1)$ symmetry and are hence protected by a conserved charge.} scalar field solutions $\Phi(t,r)$ of the coupled Einstein-Klein-Gordon (EKG) equations derived from the action
\begin{equation}
\label{eq:action}
S = \int d^4x \sqrt{-g} \left(\frac{1}{16 \pi} R - \frac{1}{2} g^{\alpha \beta}\partial_{\alpha}\Phi \partial_{\beta} \Phi - U(\Phi)\right)\,,
\end{equation}
where $U(\Phi)$ is the scalar potential.
The canonical axion potential is
$U(\Phi) = \mu^2 f_{a}^2 (1-\cos (\Phi/f_{a} ))$, and the object is then regarded as an axion star~\cite{Helfer:2017a}. Here,
$f_{a}$ is the relevant energy scale (the decay constant), $\mu = m/\hbar$ is the mass parameter and $m$ is the mass of the scalar field.
If the axion potential is expanded in a Taylor series for small field values, and the field is interpreted in a classical sense, we call such an object an oscillaton. Unless stated otherwise, we focus on non-self-interacting oscillatons with the scalar potential containing only the mass term
\begin{equation}
U(\Phi) = \frac{1}{2} \mu^2 \Phi^2\,.
\end{equation}
The dynamics and stability of these objects were studied in Refs.~\cite{Seidel:1991zh,Alcubierre:2003sx,Okawa:2013jba,Brito:2015yfh}, where a set of stable ground states was found (excited states are unstable and we do not discuss them here). These solutions actually have a small radiating tail~\cite{Grandclement:2011wz,Page:2003rd,Fodor:2009kg}, but the corresponding mass-loss timescale is, for much of the parameter space, longer than a Hubble time. Such solutions can be formed through gravitational collapse and cooling mechanisms~\cite{Seidel:1993zk,Guzman:2006yc,Guzman:2004wj,Okawa:2013jba,Brito:2015yfh}.
By extremizing the action in Eq.~\eqref{eq:action} we obtain:
\begin{eqnarray}
&&R_{\alpha \beta }-\frac{R}{2}g_{\alpha \beta }= 8\pi T_{\alpha \beta }\,,\label{eq:einstein}\\
&&T_{\alpha \beta} = \partial_{\alpha} \Phi \partial_{\beta} \Phi - \frac{1}{2} g_{\alpha \beta} \left( g^{\gamma \sigma} \partial_{\gamma}\Phi \partial_{\sigma}\Phi + \mu^2 \Phi^2 \right)\,,\label{eq:stressenergy}\\
&&\frac{1}{\sqrt{-g}} \partial_{\alpha} \left( \sqrt{-g} g^{\alpha \beta} \partial_{\beta} \Phi \right) = \mu^2 \Phi\,. \label{eq:kg}
\eeq
From now on, and for numerical convenience, we will be writing the spherically symmetric metric as
\begin{equation}
ds^2 = B(t,r) \left(- \frac{1}{C(t,r)}dt^2 + dr^2 \right) + r^2 d\Omega^2 \label{eq:metric2}\,,
\end{equation}
set units such that $\mu = 1$ and redefine the scalar through
\begin{equation}
\Phi = \frac{\Psi}{\sqrt{4 \pi}}\,.
\end{equation}
With these definitions, Eqs.~\eqref{eq:einstein} and \eqref{eq:kg} lead to:
\begin{align}
& -\frac{B'}{r B}+B \left(\Psi^2-\frac{1}{r^2}\right)+C \partial_t{\Psi}^2+\Psi'^2+\frac{1}{r^2} = 0\,, \label{eq:1of4}\\
& 2 \Psi' \partial_t{\Psi}-\frac{\partial_t{B}}{r B} = 0\,, \label{eq:2of4}\\
& \frac{B'}{B}+B \left(r \Psi^2-\frac{1}{r}\right)+\frac{1}{r} = \frac{C'}{C}+r C\partial_t{\Psi}^2+r \Psi'^2\,, \label{eq:3of4}\\
& r B \Psi+\frac{r C'}{2C} \Psi'+\frac{r \partial_t{C}}{2} \partial_t{\Psi}-2\Psi'-r \Psi''+r C \partial^2_t{\Psi} = 0\,. \label{eq:4of4}
\end{align}
Here and throughout, primes stand for radial derivatives, $\partial_t\equiv \partial/\partial t$, and overdots denote proper-time derivatives.
\subsubsection{Fully relativistic results} \label{sec:ROscillatons}
Oscillaton geometries can be obtained through the expansion:
\begin{align}
& B(t,r) = \sum_{j=0}^{\infty} b_{j}(r) \cos(2 j \omega t)\,, \label{EKGmetricB} \\
& C(t,r) = \sum_{j=0}^{\infty} c_{j}(r) \cos(2 j \omega t)\,, \label{EKGmetricC} \\
& \Psi(t,r) = \sum_{j=0}^{\infty} \Psi_{j+1}(r) \cos([2 j+1] \omega t)\,, \label{EKGmetricPhi}
\end{align}
truncated at a finite $j$, which depends on the accuracy required (in most cases $j=1$ is accurate at the $\sim 1\%$ level).
Accordingly, we will also use reference spacetimes for which,
\begin{align}
& B(t,r) = b_{0}(r) + b_{1}(r) \cos(2 \omega t)\,,\label{eq:expB}\\
& C(t,r) = c_{0}(r) + c_{1}(r) \cos(2 \omega t)\,,\label{eq:expC}\\
& \Psi(t,r) = \Psi_{1}(r) \cos(\omega t) + \Psi_{2}(r) \cos(3 \omega t)\,.\label{eq:expPhi}
\end{align}
The coefficients $b_{0},\,b_{1}...$ can be obtained by inserting the expansion above in the equations of motion, and requiring the vanishing of each harmonic term, order by order.
In this particular case, one finds six ODEs for the variables $\{b_0,b_1,c_0,c_1,\Psi_1,\Psi_2\}$. These equations are solved numerically using a shooting method; we have the freedom to choose the central value of $\Psi_1$, but we fix the radial metric functions, so that $b_0(0) = 1$ and $b_1(0) = 0$, and the derivatives of the scalar field components, $\Psi_1'(0) = \Psi_2'(0) = 0$. Finally, we impose asymptotic flatness, requiring that as $r \to \infty$ the functions $\Psi_1,\Psi_2, b_1, c_1 \to 0$ and $c_0,b_0 \to 1$. At the end of the process we obtain, for each value of the central magnitude of the scalar field, the profiles of all the components of the metric and the scalar field, as well as a value for the fundamental frequency $\omega$ of the oscillaton -- see Fig.~\ref{fig:OvsPhi0}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{OvsPhi0}
\caption{The fundamental frequency $\omega$ of a scalar oscillaton, as a function of the central value of the scalar field. The minimum value of the frequency is given by $\omega/\mu \sim 0.782$.
The dashed line is the weak-field prediction, discussed in the next section, and agrees well with the full relativistic results at low compactness.}
\label{fig:OvsPhi0}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{MvsR}
\caption{Mass of the scalar oscillatons as a function of their radius. The maximum mass of stable configurations is $\mu M_{\text{max}} \sim 0.6$ and corresponds to a compactness $\mathcal{C} \sim 0.07$. These values mark the boundary between stable and unstable oscillaton configurations \cite{UrenaLopez:2002gx}: oscillatons with larger radii are stable, while those with smaller radii are unstable. It is in the unstable branch of this plot that the maximum compactness is attained, $\mathcal{C}_{\text{max}} \sim 0.1$.}
\label{fig:MvsR}
\end{figure}
Notice that since $A(t,r) = B(t,r)/C(t,r)$ [see Eqs.~\eqref{eq:metric} and \eqref{eq:metric2}], the coefficients of $A$ are obtained as
\begin{eqnarray}
a_0&=& \frac{2 b_0 c_0- b_1 c_1}{2 c_0^2-c_1^2}\,,\\
a_1&=& \frac{2 b_1 c_0 - 2 b_0 c_1}{2c_0^2 - c_1^2}\,,
\eeq
such that $A$ is written as
\begin{equation}
A(t,r)\equiv \frac{B}{C} = a_0(r) + a_{1}(r) \cos(2 \omega t)\,.\label{EKGmetricA}
\end{equation}
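One way to check such harmonic identities numerically is to project $B/C$ onto its Fourier modes and compare with the closed-form coefficients obtained by projecting the identity $A\,C=B$ onto the lowest harmonics (discarding the induced $\cos 4\omega t$ term). A small Python sketch, with arbitrary illustrative values for the harmonic amplitudes (not taken from an actual oscillaton profile):

```python
import math

# Illustrative (hypothetical) harmonic amplitudes at some fixed radius.
b0, b1 = 1.10, 0.04
c0, c1 = 1.05, 0.03

# Projecting A*C = B onto {1, cos(2wt)} (and discarding the induced
# cos(4wt) term) gives the linear system
#   c0*a0 + (c1/2)*a1 = b0 ,   c1*a0 + c0*a1 = b1 ,
# solved by:
den = 2*c0**2 - c1**2
a0 = (2*b0*c0 - b1*c1)/den
a1 = (2*b1*c0 - 2*b0*c1)/den

# Direct Fourier coefficients of B/C over one period (phase p = 2*w*t).
N = 20000
ps = [2*math.pi*k/N for k in range(N)]
f = [(b0 + b1*math.cos(p))/(c0 + c1*math.cos(p)) for p in ps]
a0_num = sum(f)/N
a1_num = 2*sum(fi*math.cos(p) for fi, p in zip(f, ps))/N
print(a0, a0_num, a1, a1_num)
```

For small $b_1/b_0$ and $c_1/c_0$ the two estimates agree to high accuracy, the residual difference being of second order in the oscillating amplitudes.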
Given that the solutions are spherically symmetric and asymptotically flat, the effective mass of these configurations is given by the following expression~\footnote{In this expression we restore the mass parameter $\mu$ of the scalar field.}
\begin{equation}
M=\frac{1}{\mu} \lim_{r \to \infty}\frac{r}{2}\left(1 - \frac{1}{b_{0}(r)}\right)\,.\label{eq:OscillatonMass}
\end{equation}
We (arbitrarily) define the radius of the oscillaton as the radius within which $98\%$ of the total mass is contained. Using these definitions, we obtain good agreement with previous works on the subject -- see Figs.~\ref{fig:OvsPhi0}-\ref{fig:MvsR} and compare with Refs.~\cite{Seidel:1991zh,UrenaLopez:2001tw,Brito:2015yfh}.
The dynamical oscillaton spacetime can be characterized by comparing the magnitude of its time-dependent to its time-independent components.
These quantities depend on the compactness $\mathcal{C}$ of the spacetime,
\begin{equation}
\mathcal{C} = \frac{M}{R}\,.
\end{equation}
Our numerical results indicate that at small $\mathcal{C}$, and restoring the mass $\mu$, one has
\begin{equation}
\mu R \approx \frac{9.8697}{\mu M}\,.\label{eq_C_num}
\end{equation}
At large distances, the scalar profile decays exponentially and the spacetime is described by the Schwarzschild geometry. We thus focus on the metric components close to the origin, $r\ll 1/(M\mu^2)$. Our numerical results, for $\mathcal{C} < 0.07$, are described by:
\begin{eqnarray}
\frac{a_1(0)}{a_0(0)} &\sim& 6.2 \mathcal{C}+21.8 \mathcal{C}^2 -126 \mathcal{C}^3 + 6160.2 \mathcal{C}^4\,,\\
\frac{|b_1(0.5)|}{b_0(0.5)} &\sim& -0.0003\mathcal{C}+ 0.08 \mathcal{C}^2 - 6.3 \mathcal{C}^3 +325.8\mathcal{C}^4 \,.
\eeq
The error associated is of order $0.3\%$ for $a_1/a_0$ and $2\%$ for $b_1/b_0$.
From these fits, we see that the time-dependent part of the $g_{tt}$ component is not always subdominant with respect to the corresponding static part. Unlike the time-dependent part of $g_{rr}$, which remains subdominant for all oscillatons, the time-dependent part of $g_{tt}$ grows until its magnitude becomes comparable, and even dominant, with respect to the static part.
One can take a closer look at the way in which compactness influences the spacetime metric by observing that its components can be written, for $r\mu < 1$ and $\mathcal{C} < 0.01$, as:
\begin{eqnarray}
\label{eq:EKGmetricCRexpansion}
a_0(r) &=& f_{a0}(\mathcal{C}) + g_{a0}(\mathcal{C})\mu^2 r^2\\
a_1(r) &=& f_{a1}(\mathcal{C}) + g_{a1}(\mathcal{C})\mu^2 r^2 \\
b_0(r) &=& f_{b0}(\mathcal{C}) + g_{b0}(\mathcal{C})\mu^2 r^2 \\
b_1(r) &=& f_{b1}(\mathcal{C}) + g_{b1}(\mathcal{C}) \mu^2 r^2
\eeq
where the coefficients depend only on the compactness and are given in Table~\ref{tab:funcs}. We have also restored the mass $\mu$ for clarity.
The errors on the corresponding functions, in this range of values, are at most $(0.6,\,3.4,\,0.05,\,3.0)\%$ for $a_0,\,a_1,\,b_0,\,b_1$, respectively.
\begin{table}[]
\centering
\caption{The behavior of oscillaton spacetimes at small radii, as described by \eqref{eq:EKGmetricCRexpansion}, where $\mathcal{C} \equiv M/R$. These results were obtained for $\mathcal{C} < 0.01$.}
\label{tab:funcs}
\begin{tabular}{ll}
$(f_{a0},\,f_{a1})=$ & \!\!\!\!\!\!\!($1 - 6.456 \mathcal{C} - 1673.7 \mathcal{C}^3\,,\quad6.163 \mathcal{C} - 1400.5 \mathcal{C}^3$)\\
$(g_{a0},\,g_{a1})=$ & \!\!\!\!\!\!\!($1.808 \mathcal{C}^2 + 77.162 \mathcal{C}^3\,,\quad -5.486 \mathcal{C}^2 - 2.820 \mathcal{C}^3$)\\
\,\,\,\,\,\qquad$f_{b0}=$ & \!\!\!\!\!$1 - 0.819 \mathcal{C}^3 + 156.20 \mathcal{C}^4 - 6107.0 \mathcal{C}^5$\\
\,\,\,\,\,\qquad$g_{b0}=$ & \!\!\!\!\!$1434.94 \mathcal{C}^3 - 163338 \mathcal{C}^4 + 5.816 \mathcal{C}^5$\\
\,\,\,\,\,\qquad$f_{b1}=$ & \!\!\!\!\!$0.013 \mathcal{C}^3 - 2.34 \mathcal{C}^4 + 64.604 \mathcal{C}^5$\\
\,\,\,\,\,\qquad$g_{b1}=$ & \!\!\!\!\!$-5.56 \mathcal{C}^3 - 63.71 \mathcal{C}^4 - 976.58 \mathcal{C}^5$
\end{tabular}
\end{table}
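As a quick cross-check of Table~\ref{tab:funcs}, the central ratio $a_1(0)/a_0(0)$ computed from the tabulated coefficients can be compared with the independent fit for $a_1(0)/a_0(0)$ quoted earlier; the two fits were obtained over different ranges of $\mathcal{C}$ but should agree at small compactness. A minimal sketch:

```python
# Small-r coefficients from Table (tab:funcs), fitted for C < 0.01.
def f_a0(C): return 1 - 6.456*C - 1673.7*C**3
def f_a1(C): return 6.163*C - 1400.5*C**3

# Independent fit for a1(0)/a0(0), quoted for C < 0.07.
def ratio_fit(C): return 6.2*C + 21.8*C**2 - 126*C**3 + 6160.2*C**4

C = 0.005
ratio_table = f_a1(C)/f_a0(C)
print(ratio_table, ratio_fit(C))
```

At $\mathcal{C}=0.005$ the two estimates agree to better than a percent, within the quoted fit errors.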
\subsubsection{Newtonian regime} \label{sec:NOscillatons}
The previous results relied on numerical solutions of the field equations. We now wish to make contact with semi-analytical results, valid in a ``Newtonian'' regime,
in which the velocities and densities involved are small~\cite{Seidel:1990jh,UrenaLopez:2002gx,Guzman:2004wj}. In this regime, we assume (as borne out by the numerical calculations; see Fig.~\ref{fig:OvsPhi0}) that the frequency of the scalar field satisfies $\omega\sim \mu$. To be concrete, we take $\omega^2=\mu^2+k^2$ so that, up to second order in the wave number $k$, we can write
\begin{equation} \label{eq:Nomega}
\omega=\mu+\frac{k^2}{2\mu}+{\cal O}(k^4)\,.
\end{equation}
The group velocity of the field is equal, up to first order in $k$, to $v=k/\mu$ and is our expansion parameter. As the field is ``trapped'' by self-gravity, $k^{2}<0$ and we expect the long-range behaviour to be of the form $\psi \sim e^{i k r} \sim e^{-|k| r}$.
For the details of the expansion the reader is referred to Appendix \ref{AppEKGNewt}; here we only cite the form of the metric coefficients and the field ansatz:
\begin{eqnarray}
A(t,r)&=& 1 + 2V(r)+ 2V_{1}(r) \cos(2 \omega t)+{\cal O}(v^4)\,,\label{eq:NexpA}\\
B(t,r)&=& 1 + {\cal O}(v^4)\,,\label{eq:NexpB}\\
\Psi(t,r) &=& \psi(r) \cos(\omega t)\,.\label{eq:NexpPhi}
\eeq
We will from now on set $\mu=1$ and refer to $V(r)$ as the Newtonian potential and to $V_{1}(r)\cos(2\omega t)$ as the time-dependent potential. Working consistently up to second order in $v$, the EKG system reduces to:
\begin{eqnarray}
e\psi&=&-\frac{1}{2r}(r\psi)''+V\psi\,,\label{eq:NSchrodinger}\\
(rV)''&=&\frac{1}{2} r\psi^2\,,\label{eq:NPoisson}\\
V'_{1}&=&-\frac{1}{2} r\psi^{2}\,,\label{eq:NV2}
\eeq
where $e=k^2/2<0$. Note that equations \eqref{eq:NSchrodinger} and \eqref{eq:NPoisson} are decoupled from \eqref{eq:NV2} and form the Schrödinger-Poisson (SP) system~\cite{Hui:2016ltb, Guzman:2004wj}. When an additional self-interaction potential is present, this system is called the Gross-Pitaevskii-Poisson system \cite{Chavanis:2011} (see Appendix \ref{AppGPP}).
Equation \eqref{eq:NV2} is present for oscillatons (and not for boson stars, where the field is complex and harmonic) and is responsible for the time-dependence of the $A(t,r)$ metric coefficient. We can find the mass of the Newtonian oscillaton as $M=\int \rho(r)\, dV$, where $\rho(r) \equiv T_{tt}=\psi^2/(8\pi)$ and $dV$ is the volume element, and see that by definition it does not depend on the function $b_0(r)$, as it does in the general case \eqref{eq:OscillatonMass} and as expected from the fully relativistic analysis (see Figure 1 in Ref.~\cite{Brito:2015yfh}).
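The SP system can be solved with the same shooting strategy used in the relativistic case. A convenient trick: since $(e\,r)''=0$, the shifted potential $\tilde V\equiv V-e$ obeys the same Poisson equation \eqref{eq:NPoisson}, so one can shoot on $\tilde V(0)$ with the eigenvalue set to zero and read off $e=-\tilde V(\infty)$ afterwards. Moreover, choosing the central amplitude $\psi(0)=s_0$ (i.e. $\lambda=1$ in the scaling discussed below), the frequency relation $\omega=1-\psi(0)/(2 s_0)$ together with $\omega=1+e$ implies $e=-1/2$, and the mass should equal the scale-invariant value $\beta$. A minimal pure-Python sketch verifying this (the step sizes, bracket and thresholds are illustrative choices):

```python
def rhs(r, y):
    # y = (u, u', w, w') with u = r*psi and w = r*(V - e);
    # the eigenvalue has been absorbed into the shifted potential.
    u, up, w, wp = y
    return (up, 2.0*w*u/r, wp, u*u/(2.0*r))

def integrate(vt0, psi0=1.022, dr=0.01, rmax=14.0, rrec=12.0):
    """Shoot outward from the center; classify the trial solution and
    record (r, w, w') once r >= rrec, before any divergence sets in."""
    r, y, rec = 1e-6, [psi0*1e-6, psi0, vt0*1e-6, vt0], None
    while r < rmax:
        k1 = rhs(r, y)
        k2 = rhs(r + dr/2, [y[i] + dr/2*k1[i] for i in range(4)])
        k3 = rhs(r + dr/2, [y[i] + dr/2*k2[i] for i in range(4)])
        k4 = rhs(r + dr, [y[i] + dr*k3[i] for i in range(4)])
        y = [y[i] + dr/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]
        r += dr
        if rec is None and r >= rrec:
            rec = (r, y[2], y[3])
        if y[0] < 0.0:                 # psi crossed zero: potential too deep
            return 'node', rec
        if y[0] > 5.0*psi0*r:          # psi blowing up: potential too shallow
            return 'diverge', rec
    return 'diverge', rec

lo, hi = -1.2, 0.2                     # bracket for the central value of V - e
for _ in range(60):
    mid = 0.5*(lo + hi)
    kind, _ = integrate(mid)
    lo, hi = (mid, hi) if kind == 'node' else (lo, mid)

_, (rr, w, wp) = integrate(0.5*(lo + hi))
e = -wp                                # since w ~ -e*r - M at large r
M = rr*wp - w
print('e =', e, '  M =', M)
```

The converged eigenvalue is close to $-1/2$ and the mass close to $\beta\simeq1.753$, as expected from the scaling relations.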
Analytical solutions for these systems do not exist in general, but there is a high-precision approximate analytical solution
in the case of non-self-interacting fields~\cite{KlingRajaraman:2017}, which is the focus of this work.
Non-self-interacting oscillatons exhibit a Yukawa-like behavior at large distances. Thus, there is no well-defined notion of a surface, even at the Newtonian level. The radius of this kind of object is defined as in the fully relativistic case.
As the SP system admits a scale symmetry, solutions corresponding to different masses can be obtained from a unique solution by rescaling~\cite{Guzman:2004wj,KlingRajaraman:2017}.
The scaling that leaves the SP system invariant acts on the various parameters as,
\begin{eqnarray}
r &\to& \frac{r}{\lambda}\,,\,e \to \lambda^2 e\,,\nonumber\\
\psi &\to& \lambda^2 \psi\,,\,V \to \lambda^2 V\,,\, V_{1} \to \lambda^2 V_{1}\,,\, M \to \lambda M\,,\label{eq:NSPscale}
\eeq
where $\lambda$ is the scale factor. We will fix this factor as in Ref.~\cite{KlingRajaraman:2017} by identifying $\lambda^2=-2e$. A scale-independent field is found by expanding the field around $r=0$ and at infinity and matching these solutions. The free parameters are found by fitting this solution to the numerical solution of the scale-invariant SP system. These parameters are proportional to the scale-invariant central $(s_{0})$ and long-range $(\alpha)$ field expansion coefficients and the scale-invariant mass $(\beta)$, and are linearly related to the central value of the scale-invariant Newtonian potential $(v_{0})$. Technical details are left to Appendix \ref{AppGPP_non_self}. The numerical values of these parameters, along with the scale-invariant radius $Z$, are:
\begin{eqnarray}
s_{0} &=& 1.022\,,\quad v_{0} = 0.938 \nonumber\,,\\
\alpha &=& 3.495\,,\quad \beta = 1.753\nonumber\,,\\
Z &=& 5.172\,.
\eeq
From Eq.~\eqref{eq:NSPscale}, it is obvious that the scaling between mass and radius is of the form
\begin{equation} \label{eq:R_M_N_Osc}
R = \frac{Z\beta}{M}=\frac{9.065}{M}
\end{equation}
and $\lambda=\sqrt{\mathcal{C} Z/\beta}$. Notice the excellent agreement with the low compactness full numerical result,
Eq.~\eqref{eq_C_num}. From the scaling relations, we can find the dependence of the field frequency \eqref{eq:Nomega} on the central value of the field
\begin{equation}
\omega=1-\frac{\psi(0)}{2s_{0}}.
\end{equation}
This function is superimposed on the relativistic $\omega-\Psi(0)$ plot (Fig. \ref{fig:OvsPhi0}); the agreement for small values of $\Psi(0)$ is very good.
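In practice, the scaling relations \eqref{eq:NSPscale} turn the scale-invariant constants into a one-parameter family of configurations labelled by the mass. A small helper sketch (in units $G=c=\mu=1$; the function name is purely illustrative):

```python
S0, BETA, Z = 1.022, 1.753, 5.172      # scale-invariant constants

def newtonian_oscillaton(M):
    """Radius, central field value and frequency of a Newtonian
    oscillaton of mass M (units G = c = mu = 1)."""
    lam = M/BETA                        # M -> lambda*M
    R = Z/lam                           # r -> r/lambda
    psi0 = lam**2*S0                    # psi -> lambda^2*psi
    omega = 1.0 - psi0/(2.0*S0)         # frequency-amplitude relation
    return R, psi0, omega

R, psi0, omega = newtonian_oscillaton(0.1)
print(R, psi0, omega)
```

By construction $R\,M=Z\beta\simeq 9.07$ for every member of the family, and $\omega\to 1$ as the configuration becomes more dilute.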
We now compare the small-radius expansion of the metric coefficients in powers of the compactness $\mathcal{C}$, obtained in the fully relativistic analysis and summarized in \eqref{eq:EKGmetricCRexpansion} and Table \ref{tab:funcs}, with the Newtonian limit. The small-$r$ behaviour of the Newtonian oscillaton density is (see Appendix~\ref{AppGPP_non_self})
\begin{equation} \label{eq:Ndensitysmallr}
\rho(r) = \Lambda (a+b(\lambda r)^2) + {\cal O}(r^4)\,,
\end{equation}
where $\Lambda=\lambda^4/8\pi $, $a=s_{0}^2,\,b=-s_{0}^2 v_{0}/3$.
Newtonian non-self-interacting oscillatons do not have a well-defined surface, so the normalisation procedure for the Newtonian potential is not the same as for a uniform sphere in Newtonian gravity. We have
\begin{equation}
V(r) = -\int^{\infty}_{0}\frac{dy}{y^2}m(y)+\int^{r}_{0}\frac{dy}{y^2}m(y)\,,
\end{equation}
where $m(r)$ is the Newtonian mass function.
The first term -- proportional to $\mathcal{C}$ (as can be seen from a dimensional analysis) -- is integrated using the full expansion described in the Appendix~\ref{AppGPP_non_self}. The second term reduces to $2\pi \Lambda a r^2/3$ at ${\cal O}(r^3)$. Similarly
\begin{equation}
V_{1}(r) =4\pi \int^{\infty}_{0}dy\, y\rho (y)-4\pi \int^{r}_{0}dy\, y \rho(y)\,.
\end{equation}
The small-$r$ expansion for the second term gives us $-2\pi\Lambda a r^2$ at ${\cal O}(r^3)$. The first, of the order $\mathcal{C}$, is integrated using the full expansion. We get the following results for the parameters defined in \eqref{eq:EKGmetricCRexpansion},
\begin{eqnarray} \label{eq:Nmetricexpansion}
f_{a0}&=& 1 - 5.720 \mathcal{C}\,,\quad f_{a1}= 5.720 \mathcal{C}\,,\\
g_{a0}&=& 1.514 \mathcal{C}^2\,,\quad g_{a1}= - 4.543 \mathcal{C}^2\,,\\
f_{b0} &=& 1\,,\\
g_{b0} &=& g_{b1}=f_{b1}=0\,,
\eeq
in very good agreement with the fully relativistic expansion from Table~\ref{tab:funcs} (note that the fully relativistic expansion is restricted to mildly relativistic, nearly Newtonian, oscillatons).
For small $r$, $V_{1}$ is larger in magnitude than the Newtonian potential. This seemingly odd result was recognized in Ref.~\cite{UrenaLopez:2002gx}. The physical origin of this property can be traced to the scalar pressure, which is of the same order of magnitude as the energy density. We are thus in a weak-field but Newtonian-like limit. The gradient of the Newtonian potential is dominated by the magnitude of the gradient of the time-dependent potential for $\lambda r \,\mbox{{\scriptsize $\stackrel{<}{\sim}$}}\, 0.57Z$ and becomes an order of magnitude larger at $\lambda r \approx 1.06Z$~\cite{weakly}.
\section{Motion in dynamical spacetimes} \label{sec:motionformalism}
\subsection{Geodesics in the full geometry} \label{sec:motionfullgeo}
The action of a pointlike particle in the spacetime metric described by \eqref{eq:metric} is
\begin{equation}
S[\gamma(\tau)]= \int_{\gamma(\tau)}L(x,\dot{x})= \int_{\tau_i}^{\tau_f}g_{\alpha\beta}\dot{x}^\alpha \dot{x}^\beta d\tau \label{Lagrangean}\,,
\end{equation}
where $\gamma(\tau)$ is a curve on the spacetime. The geodesics on this metric are the curves that extremize the action above, when varied with respect to the curve $\gamma(\tau)$,
\begin{equation}
\ddot{x}^\mu+\Gamma^{\mu}_{\hphantom{\mu}\beta\alpha}\dot{x}^\beta\dot{x}^\alpha=0\,,
\end{equation}
where the Christoffel symbols are defined by
\begin{equation}
\Gamma^{\mu}_{\hphantom{\mu}\beta\alpha} \equiv \frac{g^{\mu \nu}}{2}\left(\partial_{\beta}g_{\nu\alpha} + \partial_{\alpha}g_{\nu\beta} - \partial_{\nu}g_{\alpha\beta}\right)\, .
\end{equation}
Since $\tau$ is an affine parameter we can always re-parametrize it such that $L=-1,0,1$ for
timelike, null or spacelike geodesics, respectively.
For the Lagrangian~\eqref{Lagrangean}, $\varphi$ is a cyclic coordinate. Thus, its conjugate momentum, the angular momentum along the $z$-axis, $r^2\sin^2(\vartheta)\dot{\varphi}=J$, is a conserved quantity. Due to the spherical symmetry of the metric, the geodesics are always planar. Therefore, without loss of generality, we set $\vartheta = \frac{\pi}{2}$. The geodesic equations then reduce to two nontrivial coupled equations,
\begin{eqnarray}
&&\ddot{t}+\frac{1}{2A}\left(\partial_t A\dot{t}^2+ 2A'\dot{r}\dot{t}+\partial_t B\dot{r}^2\right)=0 \,,\label{geodesics1}\\
&&\ddot{r}+\frac{1}{2B}\left(B'\dot{r}^2+A'\dot{t}^2- 2 r\dot{\varphi}^2 + 2\partial_t B\dot{r}\dot{t}\right)=0 \,. \label{geodesics2}
\eeq
Two examples of trajectories concern circular and radial motion, for which
\begin{eqnarray}
r(\tau)&=& r_0\,,\, \dot{r}=0\,,\, \ddot{r}=0\\
r(\tau=0)&=&r_{\rm init}\,,\,\dot{\varphi}=0\,,
\eeq
respectively.
Consider first circular motion in our chart representation of the manifold. The substitution $\dot{\varphi} = \Omega$ in equation \eqref{geodesics2} yields,
\begin{equation}
\frac {A'}{2B}\dot{t}^2-\frac{r_0\Omega^2}{B}=0\,.
\label{eq:dott0}
\end{equation}
There is a solution if $A'>0$. Solving Eq.~\eqref{eq:dott0} for $\dot{t}$ and differentiating to find $\ddot{t}$ we may rewrite equation~\eqref{geodesics1} as
\begin{equation}
\frac{\partial_{t} A'}{A'}-\frac{\partial_t A}{A}=0.
\end{equation}
Any non-null separable function $A(t,r)=a_t(t)a_r(r)$ satisfies this condition; separability of $A$ is thus sufficient for circular motion to be allowed.
For timelike ($L=-1$) motion we find,
\begin{equation}
\Omega=\frac{1}{\sqrt{\frac{2r_0a_r(r_0)}{a^{'}_r(r_0)}-r^{2}_0}},\label{Angular_freq}
\end{equation}
implying that $2a_r/a'_r>r_0$ at $r_0$. Note, incidentally, that the {\it coordinate} angular velocity is,
\begin{equation}
\tilde{\Omega}=\frac{d\varphi}{dt}=\frac{d\varphi}{d\tau}\frac{d\tau}{dt}=\frac{\Omega}{\dot{t}}\,.
\label{eq:OmtotildeOm}
\end{equation}
From \eqref{eq:dott0},
\begin{equation}
\dot{t}=\Omega\sqrt{\frac{2r_0}{a'_r\left(r_0\right)a_t\left(r_0\right)}}\,,
\end{equation}
thus we find
\begin{equation}
\tilde{\Omega}=\sqrt{\frac{a'_r\left(r_0\right)a_t\left(r_0\right)}{2r_0}}\,, \label{eq:coordinatangfreq}
\end{equation}
in agreement with standard results~\cite{Cardoso:2008bp}.
Following the same prescription, null geodesics must satisfy
\begin{equation}
r_0=\frac{2a_r(r_0)}{a^{'}_r(r_0)}\,.\label{constraintnull}
\end{equation}
When these conditions are applied to standard spacetimes, such as the Schwarzschild geometry, one recovers well-known results~\cite{Cardoso:2008bp}.
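As an illustrative cross-check (not part of the paper's derivation), the circular-orbit formulas can be evaluated for the Schwarzschild geometry, which is separable with $a_t=1$ and $a_r=1-2M/r$: Eq.~\eqref{eq:coordinatangfreq} then reduces to the Keplerian frequency $\sqrt{M/r_0^3}$, and the null condition \eqref{constraintnull} is solved by the photon sphere $r_0=3M$. A minimal numerical sketch:

```python
# Illustrative cross-check (not from the paper): Schwarzschild, A = a_t * a_r
# with a_t = 1 and a_r = 1 - 2M/r, in units M = 1.
import math

M = 1.0
a_r  = lambda r: 1.0 - 2.0 * M / r
da_r = lambda r: 2.0 * M / r**2          # a_r'(r)

def omega_proper(r0):
    # Proper angular velocity, Eq. (Angular_freq)
    return 1.0 / math.sqrt(2.0 * r0 * a_r(r0) / da_r(r0) - r0**2)

def omega_coord(r0):
    # Coordinate angular velocity, Eq. (eq:coordinatangfreq), with a_t = 1
    return math.sqrt(da_r(r0) / (2.0 * r0))

r0 = 10.0 * M
print(omega_coord(r0), math.sqrt(M / r0**3))    # matches the Keplerian value
print(omega_proper(r0),
      math.sqrt(M / r0**3) / math.sqrt(1.0 - 3.0 * M / r0))
# Null condition r0 = 2 a_r / a_r' is solved by the photon sphere r0 = 3M:
print(2.0 * a_r(3.0 * M) / da_r(3.0 * M))
```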
\subsection{Geodesics in weakly dynamic spacetimes}
Up to now, our results are generic and valid for any geometry of the form \eqref{eq:metric}. When the spacetime is {\it approximately}
static as in Eq.~\eqref{metric_expansion}, some simplifications occur.
Let $x^{\mu}_0(\tau)$ be the solution of the equations of motion derived from the Lagrangian
$L_0(x^\mu,\dot{x}^\mu)=g^{(0)}_{\alpha\beta}\dot{x}^\alpha\dot{x}^\beta$. Assume $x^\mu_0(\tau)$ to be known.
Let us consider what happens to the solution when the metric is slightly perturbed as in Eq.~\eqref{metric_expansion}, giving rise to a new Lagrangian
$L(x,\dot{x})=L_0(x^\mu,\dot{x}^\mu)+\epsilon L_1(x^\mu,\dot{x}^\mu)$. The associated geodesics will be different, but should lie sufficiently close to $x^\mu_0(\tau)$. Let us find them by expanding around $x_0^\mu$:
\begin{eqnarray}
&&x^\mu(\tau)=x_0^\mu(\tau)+\epsilon\,\eta^\mu(\tau)\,,\\
&&L(\eta,\dot{\eta},\tau)=L_0(x_0)+\epsilon L_1(x_0) + \epsilon \frac{\partial L_0}{\partial x^\alpha} \eta^\alpha+\epsilon \frac{\partial L_0}{\partial \dot{x}^\alpha} \dot{\eta}^\alpha\nonumber\\
&+& \epsilon^2 \left(\frac{1}{2}\frac{\partial^2L_0}{\partial x^\alpha \partial x^\beta}\eta^\alpha \eta^\beta+\frac{1}{2}\frac{\partial^2L_0}{\partial \dot{x}^\alpha \partial \dot{x}^\beta}\dot{\eta}^\alpha \dot{\eta}^\beta+\frac{\partial^2L_0}{\partial {x}^\alpha \partial \dot{x}^\beta}\eta^\alpha \dot{\eta}^\beta \right)\nonumber\\
&+& \epsilon^2 \frac{\partial L_1}{\partial x^\alpha} \eta^\alpha +\epsilon^2 \frac{\partial L_1}{\partial \dot{x}^\alpha}\dot{\eta}^\alpha+{\cal O}(\epsilon^3)\,.
\label{expansion}
\eeq
Although not explicitly indicated, each partial derivative of the Lagrangian is to be evaluated at $x_0(\tau)$, here and in the following.
Since $x_0(\tau)$ is known and solves the zeroth-order equations of motion, the above expansion depends only on $\eta$, $\dot{\eta}$ and $\tau$. Thus, to extremize the action of this Lagrangian, the variation must be taken with respect to $\eta$. The Euler-Lagrange equations of~\eqref{expansion} yield, up to second order in $\epsilon$, the following nontrivial result
\begin{equation}
\frac{d}{d \tau}\left(\frac{\partial \zeta}{\partial\dot{x}^i} \right)-\frac{\partial \zeta}{\partial x^i} = \frac{\partial L_1}{\partial x^i} - \frac{d}{d\tau}\left(\frac{\partial L_1}{\partial \dot{x}^i} \right)\,,
\label{initgeoress}
\end{equation}
where we defined
\begin{equation}
\zeta =\frac{\partial L_0}{\partial x^j}\eta^j + \frac{\partial L_0}{\partial \dot{x}^j}\dot{\eta}^j\,.
\end{equation}
Expressing $L_0$ as a function of the metric, equation \eqref{initgeoress} can be further simplified into
\begin{eqnarray}
&&\ddot{\eta}^\gamma+2\Gamma^{\gamma}_{(0)\alpha\beta}\dot{x}_0^\alpha\dot{\eta}^\beta+\left(\partial_\delta \Gamma^{\gamma}_{(0)\alpha\beta}\right)\dot{x}_0^\alpha\dot{x}_0^\beta \eta^\delta\nonumber\\
&&=-\frac{1}{2}g^{(0)\, \gamma \beta}\left(\frac{d}{d\tau}\left(\frac{\partial L_1}{\partial \dot{x}^\beta} \right) -\frac{\partial L_1}{\partial x^\beta} \right)\,.
\label{georessonance}
\eeq
If the LHS were equated to zero, one would have the geodesic deviation equation, describing the evolution of a perturbation of the geodesic itself. However, there is a ``force term'' present on the RHS, illustrating how the metric perturbations disturb the geodesics of the background static spacetime. This result can also be obtained by first applying the Euler-Lagrange equations to $L(x,\dot{x})$ and only then expanding around $x_0$.
By inspection of equation \eqref{initgeoress}, if $x_0^\gamma(\tau)$ is cyclic in $L_0$, the equation may be integrated on both sides, leading to a first order equation
\begin{eqnarray}
\frac{\partial \zeta}{\partial\dot{x}^\gamma} &=& \frac{\partial^2L_0}{\partial \dot{x}^\gamma \dot{x}^\beta} \dot{\eta}^\beta + \frac{\partial^2L_0}{\partial \dot{x}^\gamma x^\beta} \eta^\beta\nonumber\\
&=&2g^{(0)}_{\gamma\beta}\dot{\eta}^\beta+2\partial_\beta (g^{(0)}_{\gamma\alpha})\dot{x}_0^\alpha\eta^\beta\nonumber\\
&=&\int_{\tau_0}^{\tau}\left(-\frac{d}{dy}\left(\frac{\partial L_1}{\partial \dot{x}^\gamma} \right) +\frac{\partial L_1}{\partial x^\gamma} \right)dy+C_\gamma,
\label{cyclicgeoress}
\eeq
where $C_\gamma$ is a real constant depending on the initial conditions.
\subsection{Motion in weakly-dynamic, time-periodic spacetimes} \label{sec:periodic_spacetimes}
We will now specialize the discussion to weakly-dynamic \textit{and} time-periodic geometries, like the one described by \eqref{eq:weakly1} and \eqref{eq:weakly2}.
For the background metric, both $t$ and $\varphi$ are cyclic coordinates in $L_0$, allowing the corresponding two equations in system \eqref{georessonance} to be rewritten as first order equations using \eqref{cyclicgeoress}:
\begin{eqnarray}
\dot{\eta}^\varphi&=&-\frac{2}{r_0}\dot{\varphi}_0\eta^r+\frac{C_\varphi}{2r_0^2} +\frac{{\cal F}_{\varphi}}{2r_0^2}\,,\label{cyclivarsphi}\\
\dot{\eta}^t&=&-\frac{a_0'}{a_0}\dot{t}_0\eta^r-\frac{C_t}{2a_0}-\frac{{\cal F}_t}{2a_0}\,,
\label{cyclivarst}
\eeq
where
\begin{eqnarray}
EL_\gamma &=& \left(-\frac{d}{d\tau}\left(\frac{\partial L_1}{\partial \dot{x}^\gamma} \right) +\frac{\partial L_1}{\partial x^\gamma} \right)\,,\\
{\cal F}_t&=&\int_{\tau_0}^{\tau} EL_t(y) \, dy\,,\quad {\cal F}_\varphi=\int_{\tau_0}^{\tau} EL_\varphi(y) \, dy\,,
\eeq
and $a_0(r_0) = g^{(0)}_{tt}(r_0)$, $b_0(r_0) = g^{(0)}_{rr}(r_0)$.
As we discussed previously, geodesics on the background metric $g^{(0)}$ defined in Eq.~\eqref{metric_expansion} are planar, thus allowing the choice $\vartheta_0(\tau)=\pi/2$.
Replacing the relations \eqref{cyclivarsphi} and \eqref{cyclivarst} in the system \eqref{georessonance}, we are left with two decoupled second-order equations for $\eta^r$ and $\eta^\vartheta$:
\begin{eqnarray}
&&2 r_0 \left(\ddot{\eta}^\vartheta+\eta^\vartheta \dot{\varphi}_0^2\right)+4\dot{\eta}^\vartheta \dot{r}_0=\frac{EL_\vartheta }{r_0 }\label{2ordertheta}\,, \\
&& \eta^r \left(\dot{r}_{0}^2 b_0''+\dot{t}_{0}^2 a_0''+6 \dot{\varphi}_0^2\right)+2 \dot{\eta}^r \dot{r}_0 b_0'+2 b_0 \ddot{\eta}^r\nonumber\\
&&- \frac{\eta^r b_0'\left(\dot{r}_0^2 b'_0+\dot{t}_0^2 a_0'-2 r_0 \dot{\varphi}_0^2\right)}{b_0} = \frac{\dot{t}_0 a_0' \left({\cal F}_t+C_t+2 \eta^r \dot{t}_0 a_0'\right)}{a_0}\nonumber\\
&&+\frac{2 \dot{\varphi}_0 \left({\cal F}_\varphi+C_\varphi\right)}{r_0}+EL_r \,\label{2orderr}\,.
\eeq
\subsection{Symmetries of motion} \label{sec:symmetries}
Since the metric coefficients of a dynamic spacetime are time dependent, time homogeneity -- valid in the time-independent spacetime -- no longer holds generically and time is not a cyclic coordinate. Specifically, in the case of time-periodic spacetimes, symmetry is not completely lost but reduced to a discrete subgroup, akin to (space) translation symmetry in crystals~\cite{crystals}.
Breaking of the time homogeneity has interesting consequences for the study of the test particle initial value problem; take for example the case of circular background motion. In the weakly-dynamic spacetime, the time-dependent part of the metric is treated as a perturbation. Assume therefore a metric expansion of the form \eqref{eq:weakly1} and \eqref{eq:weakly2}, where the background metric
$g^{(0)}_{\alpha\beta}$ is static and spherically symmetric. If we want our ``initial time'' $t_{i}$ to correspond to a vanishing perturbation, we should set it to $\pi/(4\omega)$ or to $3\pi/(4\omega)$. The solution to the problem depends on the specific choice one makes, because of the different signs of the cosine derivatives in the time-dependent part of the metric.
Note that this situation is consistent with the spherical symmetry of the problem: for initial times different from zero, the relation $\varphi \propto t$ no longer holds; instead $\varphi \propto (t-t_{i})$. In other words, it is irrelevant at which point of the initial orbit (at fixed time) the particle is when the perturbation is turned on; however, it matters at which moment in time this happens. Time-periodic perturbations break time homogeneity, and the same initial conditions with different $t_i$ do not lead to solutions related by a time translation by $t_i$.
In the quantum treatment of electron motion in crystals, Bloch's theorem implies the conservation of crystal momentum. However, as the symmetry is discrete, Noether's theorem is not applicable, and the conservation law is a consequence of the linearity of Quantum Mechanics (the Schrödinger equation is subject to Floquet theory for the symmetry in question). As General Relativity is highly non-linear, we should not, in general, expect the point-particle energy to be periodic in $\pi/\omega$ in analogy with electrons in crystals. We can calculate the change of the particle energy function $E(t)=-\partial L/\partial \dot{t}$, at order ${\cal O}(\epsilon^2)$, between two arbitrary moments in time
\begin{eqnarray}
\label{eq:energy_periodicity_general}
&& E(t_2)-E(t_1)= \nonumber\\
&& 2\epsilon \omega \int^{\tau(t_2)}_{\tau(t_1)}\left(b_1(r_{0})\dot{r}_{0}^2-a_1(r_{0})\dot{t}_{0}^2\right)\sin{(2\omega t_0(y))} dy\,.
\eeq
The integrand is not necessarily periodic in $\pi/\omega$, so we cannot claim $E(t)=E(t+n\pi/\omega)$, $n \in \mathbb{N}$. However, this conclusion changes when the equations of motion become linear, as in Section \ref{sec:circularmotionexample}.
We should also note that spacetimes with a metric expansion as in the example above do admit time-inversion symmetry. This symmetry is broken when friction is present, as in Section~\ref{sec:dissipation}.
\section{The excitation of resonances} \label{sec:examples}
\subsection{Circular and radial background motion - linear regime \label{sec:circularmotionexample}}
We start by considering the perturbations presented in \eqref{eq:weakly1} and \eqref{eq:weakly2} on background circular geodesics. Imposing the circularity condition ($\dot{r}_0=0, \ddot{r}_0=0$) in Eq.~\eqref{geodesics2} one finds
\begin{equation}
r_0(\tau)=r_0\,,\quad \varphi_0(\tau)=\Omega\tau\,,\quad t_0(\tau)=\frac{\Omega}{\tilde{\Omega}}\tau+t_i\,,\label{Circular Solution}
\end{equation}
where $\tilde{\Omega}$ is given by equation \eqref{eq:coordinatangfreq} and $t_i$ is a constant. The requirement that the motion is timelike is equivalent to requiring that $\Omega$ be given by Eq.~\eqref{Angular_freq}. Furthermore, $EL_\vartheta$ and $EL_\varphi$ vanish; replacing $EL_t$ and $EL_r$ by their corresponding expressions, an analytical solution is found for $\eta^r(\tau)$ through \eqref{2orderr}:
\begin{equation}
\eta^r(\tau) = \mathcal{D}(\omega) \cos \left(2 \omega t_0(\tau)\right)+C_1 \cos (\Theta \tau + C_2) + C_3\,,
\label{circularsol}
\end{equation}
where $C_1$ and $C_2$ are real constants that depend on the initial conditions of $\eta^r$, $C_3$ is related to the initial conditions of $\eta^t$ and $\eta^\varphi$, and
\begin{eqnarray}
\mathcal{D}(\omega) &=& \frac{r_0 \left(a_0 a_1'-a_1 a_0'\right) }{8 r_0 \omega^2 b_0 a_0+2 r_0 (a'_0)^2-a_0 \left(r_0 a_0''+3 a_0'\right)}\,, \nonumber\\
\Theta &=& \Omega \sqrt{\frac{r_0 a_0 a_0''-2 r_0 a_0'^2+3 a_0 a_0'}{b_0a_0a_0'}}\,.\label{Theta}
\eeq
It is clear that there may exist {\it resonances} in the motion, when the amplitude $\mathcal{D}(\omega)$ diverges. This occurs at frequencies
$\omega=\omega_{\rm res}$ for which the denominator of $\mathcal{D}(\omega)$ above vanishes. We find
{\begin{equation} \label{eq:circresonant}
\omega_{\rm res}=\pm\frac{\Theta}{2\dot{t}_0}\,.
\end{equation}}
The frequency $\Theta$ corresponds to the proper radial epicyclic frequency for this static, axially symmetric spacetime \cite{RonaldoVieira:2017}.
To obtain the radial epicyclic frequency in coordinate time we divide $\Theta$ by $\dot{t}_0$. Note that the frequency of the metric perturbation in \eqref{eq:weakly1} and \eqref{eq:weakly2} is in fact $2\omega$. The effective resonance frequency then corresponds to $2\omega_{\rm res}=\Theta/\dot{t}_0$ which, as stated, is the radial epicyclic frequency in coordinate time.
Thus, our system behaves as a classic, driven harmonic oscillator: when the ``forcing'' frequency equals the natural (epicyclic) frequency, a resonance occurs.
If $\Theta$ differs from $\Omega$, the geodesics will precess. The above is a generic, smoking-gun prediction of time-periodic spacetimes.
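As an illustrative check of Eqs.~\eqref{Theta} and \eqref{eq:circresonant} (again for the Schwarzschild geometry, $a_0=1-2M/r$, $b_0=a_0^{-1}$, not part of the derivation), the coordinate-time resonance frequency $2\omega_{\rm res}=\Theta/\dot t_0$ reproduces the well-known radial epicyclic frequency $\sqrt{M/r_0^3}\,\sqrt{1-6M/r_0}$, which vanishes as the orbit approaches the ISCO at $r_0=6M$:

```python
# Illustrative check (not from the paper): Schwarzschild values in units M = 1.
import math

M = 1.0
a0   = lambda r: 1.0 - 2.0 * M / r
da0  = lambda r: 2.0 * M / r**2
dda0 = lambda r: -4.0 * M / r**3
b0   = lambda r: 1.0 / (1.0 - 2.0 * M / r)

def Omega(r):      # proper angular velocity, Eq. (Angular_freq)
    return 1.0 / math.sqrt(2.0 * r * a0(r) / da0(r) - r**2)

def Theta(r):      # proper radial epicyclic frequency, Eq. (Theta)
    num = r * a0(r) * dda0(r) - 2.0 * r * da0(r)**2 + 3.0 * a0(r) * da0(r)
    return Omega(r) * math.sqrt(num / (b0(r) * a0(r) * da0(r)))

def two_omega_res(r):   # resonance condition, Eq. (eq:circresonant): 2 w_res = Theta / tdot
    tdot = Omega(r) / math.sqrt(da0(r) / (2.0 * r))   # tdot = Omega / Omega_tilde
    return Theta(r) / tdot

r = 8.0 * M
closed_form = math.sqrt(M / r**3) * math.sqrt(1.0 - 6.0 * M / r)
print(two_omega_res(r), closed_form)
# The resonance frequency vanishes as the orbit approaches the ISCO, r -> 6M:
print(two_omega_res(6.001 * M))
```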
Finally, let us apply \eqref{eq:energy_periodicity_general} to this specific background motion. As $r_0$ and $\dot{t}_0$ do not depend on the proper time, the integral vanishes when $t \rightarrow t+n\pi/\omega$, $n \in \mathbb{N}$. This conclusion is not valid during resonant motion, when higher-order terms become important.
We now focus on perturbations of radial geodesics, imposing $\dot \varphi_0(\tau)=0$ and $\ddot \varphi_0(\tau)=0$. An explicit analytic solution for $\eta^r$ is not possible for general radial geodesics, so we specialize to motion of small amplitude around the geometric center of our spacetime. Expanding $L_0$ to first order around $r=0$ and $\dot{r}=0$, the geodesics following from $L_0(x,\dot x)$ admit the following solution:
\begin{eqnarray}
r_0(\tau)&=&\tilde{r}_0\cos\left(\Omega_0 \tau\right)+\frac{\dot{ \tilde{r}}_0}{\Omega_0}\sin\left(\Omega_0\tau\right)\,,\\
t_0(\tau)&=&\alpha\tau+t_i\,,\\
\Omega_0&=&\alpha\sqrt{\frac{a_0''}{2b_0}}\,,\quad\alpha=\dot{t}_0(\tau)\,,\label{Radial Solution}
\eeq
where $\tilde{r}_0$ and $\dot{\tilde{r}}_0$ are initial conditions. Although not explicitly stated, all quantities are to be evaluated at $r=0$. In the preceding derivation, we used the fact that, for regular spacetimes, the parity of $a_0$, $b_0$, $a_1$ and $b_1$ implies that their odd radial derivatives vanish at the origin.
Expanding \eqref{2orderr} on $r$ and $\dot{r}$ around 0, using the appropriate expressions for $EL_\varphi$, $EL_t$ and $EL_r$, we obtain the following analytic solution,
\begin{eqnarray}
\eta^r(\tau)&=&-\mathcal{Q}(\omega)\frac{\dot{r}_0(\tau ) \cos (2 \omega \alpha \tau )
}{(\alpha \omega ) \left(4 b_0 a_0\left(2 b_0\omega ^2-a_0''\right)\right)}\nonumber\\
&-& \mathcal{G}(\omega)\frac{r_0(\tau) \sin (2 \omega \alpha \tau )
}{4 b_0a_0 \left(2 b_0 \omega^2-a_0''\right)}\nonumber\\
&+&C_1 \cos (\Omega_0 \tau +C_2)-C_t\frac{\dot{r}_0(\tau )}{2 \alpha a_0} \tau\,, \label{nrradial}
\eeq
where $C_1$ and $C_2$ are constants depending on the initial conditions of $\eta^r$, $C_t$ is the integration constant in \eqref{cyclivarst}, and
\begin{eqnarray}
\mathcal{G}(\omega)&=&b_0\left(a_0a_1''-a_1a_0''\right)+b_1a_0 a_0''\,,\\
\mathcal{Q}(\omega)&=&4 b_0b_1 a_0\omega^2+\mathcal{G}(\omega)\,.
\eeq
The solution grows linearly in time unless $C_t=0$. This is expected: by inspection of equation \eqref{cyclivarst}, for a non-vanishing $C_t$, $\dot{\eta}^t$ is given by the sum of a constant and the integral of a periodic function, inducing a linear growth of $\eta^t$. As our equations rely on the perturbation remaining small, this result may be alarming. Nevertheless, one may always choose $C_t$ to be zero, as it is only related to the initial conditions of $\dot{\eta}^t$, which are decoupled from the initial perturbation in the radial direction. For these reasons, we set $C_t=0$.
Again, there is a frequency $\omega$ for which the two denominators in \eqref{nrradial} vanish, corresponding to a resonance. This frequency is
\begin{equation}\label{eq:freqrad}
\omega_{\rm res}=\sqrt{\frac{a_0''(0)}{2b_0(0)}}=\frac{\Omega_0}{\alpha}\,.
\end{equation}
Now, resonance occurs when the perturbation frequency $(2\omega)$ is twice the ``natural'' frequency with which $r_0(\tau)$ oscillates. This result is intuitive if one imagines the spacetime pushing the object away from the centre while the latter is moving away from it, and pulling inwards when the object starts moving towards the centre. Then, in each half period of the small oscillation in $r_0$, the metric perturbation must complete a full period. Similarly to the circular case, \eqref{eq:energy_periodicity_general} implies energy periodicity in $\pi/\omega$ for radial motion, when we restrict to small-amplitude deviations ${\cal O}(\tilde{r}_0^2)$.
\subsection{Non-relativistic motion in the weak-field regime} \label{sec:N_dynamics}
In order to establish precisely the analogy with the driven harmonic oscillator of the previous section, and to understand dynamical aspects beyond the linear regime, we consider non-relativistic particle motion in the weak-field limit. The Lagrangian for non-relativistic equatorial motion of a test particle in a weak, asymptotically flat, spherically symmetric spacetime, such as the one given by \eqref{eq:NexpA} and \eqref{eq:NexpB}, is
\begin{equation}
2L=-(1+\nu)\dot{t}^2+\dot{r}^2+r^2\dot{\varphi}^2.
\end{equation}
Here $\nu(t,r)=2V(r)+2\epsilon V_{1}(r)\cos(2 \omega t)$, where $V(r)$ is the Newtonian gravitational potential and $V_1(r)$ originates from the time-dependent part of the metric coefficient $A(t,r)$.
These quantities are defined in Eq.~\eqref{eq:NexpA2} and are related to the original metric coefficients as $1+2V=a_0$ and $2V_1=a_1$.
The Euler-Lagrange equations reduce to:
\begin{align}
& -(1+\nu)\ddot{t}=\frac{1}{2} \partial_t \nu\dot{t}^2+ \nu' \dot{t}\dot{r},\\
& \partial^2_t r+\partial_t r\frac{\ddot{t}}{\dot{t}^2}=-\frac{1}{2} \nu'+r\tilde{\Omega}^2. \label{eq:EL_weak_field_radial_0}
\end{align}
As we are interested in non-relativistic motion, $\dot{r}\ll\dot{t}$, this reduces to
\begin{align}
& \partial^2_t r-\frac{\tilde{J}^2}{r^3}=-\frac{1}{2}\nu', \label{eq:ELweakfieldradial}
\end{align}
where we have introduced the coordinate angular momentum $\tilde{J}=r^2\tilde{\Omega}$. The second term on the l.h.s. of \eqref{eq:EL_weak_field_radial_0} is of order $\sim v/c^2$ when we restore $c$. These equations are valid even for a highly-dynamical spacetime, as we will further elaborate in Section~\ref{sec:Mathieu_rapid}.
We now focus on a weakly-dynamical spacetime $(\epsilon\ll1)$ and weak orbital perturbations of the background circular motion, $r=r_0+\epsilon \eta^r$, as in \eqref{Circular Solution}. Equation \eqref{eq:ELweakfieldradial} then reduces to:
\begin{align}
& \tilde{\Omega}=\sqrt{\frac{1}{r_0}V'}, \\
& \partial^2_t\eta^r+(V'' +3\tilde{\Omega}^2 )\eta^r=-V_1'\cos{(2\omega t)} \label{eq:ELweakfieldcircpert}.
\end{align}
In the last equation, and until the end of this and the next section, both potentials and their derivatives with respect to $r$ are to be evaluated at $r_0$. The last equation is the equation of motion of a driven linear harmonic oscillator, as claimed. Resonance occurs when
\begin{equation} \label{eq:N_res_freq}
\omega_{\rm res}=\frac{1}{2}\sqrt{V''+3 \tilde{\Omega}^2 }\,.
\end{equation}
This result agrees with the appropriate limit of Eq.~\eqref{Theta}.
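The driven-oscillator analogy can be made concrete with a short numerical experiment (an illustrative sketch with hand-picked units and forcing amplitude, assuming a Kepler potential $V=-M/r$, for which $2\omega_{\rm res}$ equals the orbital frequency $\tilde\Omega$): on resonance, the amplitude of $\eta^r$ in Eq.~\eqref{eq:ELweakfieldcircpert} grows linearly in time, while a detuned drive remains bounded.

```python
# Illustrative sketch (not from the paper): Kepler potential V = -M/r in units
# M = r0 = 1, so sqrt(V'' + 3 Omega~^2) = 1 and omega_res = 1/2.  The forcing
# amplitude F = V1'(r0) is a hand-picked assumption.
import math

F = 1e-2
w_nat = 1.0                       # natural (epicyclic) frequency = 2 * omega_res

def max_amp(w_drive, T, dt=1e-2):
    """RK4 for eta'' + w_nat^2 eta = -F cos(2 w t), eta(0) = eta'(0) = 0."""
    def acc(eta, t):
        return -w_nat**2 * eta - F * math.cos(2.0 * w_drive * t)
    eta, v, t, amp = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        k1x, k1v = v, acc(eta, t)
        k2x, k2v = v + 0.5*dt*k1v, acc(eta + 0.5*dt*k1x, t + 0.5*dt)
        k3x, k3v = v + 0.5*dt*k2v, acc(eta + 0.5*dt*k2x, t + 0.5*dt)
        k4x, k4v = v + dt*k3v, acc(eta + dt*k3x, t + dt)
        eta += dt*(k1x + 2*k2x + 2*k3x + k4x)/6.0
        v   += dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
        t   += dt
        amp = max(amp, abs(eta))
    return amp

T = 100.0 * math.pi               # fifty natural periods
on_res  = max_amp(0.5, T)         # omega = omega_res: secular, linear-in-t growth
off_res = max_amp(0.8, T)         # detuned drive: bounded response
print(on_res, off_res)
```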
For radial motion $(\tilde{J}=0)$, equation \eqref{eq:ELweakfieldradial} yields:
\begin{align}
& \partial^2_t r_0=-V', \label{eq:EL_weak_field_radial_pert_0th} \\
& \partial^2_t\eta^r+V'' \eta^r=-V_1'\cos{(2\omega t)} \label{eq:EL_weak_field_radial_pert_1st}.
\end{align}
The solution of these equations depends on the form of the potential. However (see also Section~\ref{sec:circularmotionexample}), we can expand around the initial state $\{ r_0=0, \partial_t r_0=0 \}$ and for small amplitudes $\tilde{r}_0$ obtain
\begin{equation}
r_0(t) = \tilde{r}_0\cos\left(\tilde{\Omega}_0 t \right)+\frac{\dot{\tilde{r}}_0}{\tilde{\Omega}_0}\sin\left(\tilde{\Omega}_0 t \right)\,,
\end{equation}
with $\tilde{\Omega}_0=\sqrt{V''(0)}$. Expanding $V_1$ around the origin to the first non-vanishing term (as $V'(0)=V_1'(0)=0$; see Section~\ref{sec:circularmotionexample}), \eqref{eq:EL_weak_field_radial_pert_1st} becomes
\begin{equation}
\partial^2_t\eta^r+\tilde{\Omega}^2_0 \eta^r=-V_1''(0)\cos{(2\omega t)}\,r_0(t)\,.\label{eqeta2}
\end{equation}
When we solve this equation, we see that resonance occurs when
\begin{equation} \label{eq:N_res_freq_radial}
\omega_{\rm res}=\tilde{\Omega}_0\,.
\end{equation}
This result coincides with the relativistic one~\eqref{eq:freqrad}.
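A similar numerical sketch for Eq.~\eqref{eqeta2} (illustrative units, with a hand-picked forcing scale $c=V_1''(0)\,\tilde r_0$) shows the product forcing $\cos(2\omega t)\,r_0(t)$ driving secular growth only at $\omega=\tilde\Omega_0$, where its component at frequency $2\omega-\tilde\Omega_0$ matches the natural frequency:

```python
# Illustrative sketch (not from the paper), in units tilde-Omega_0 = 1.
import math

W0 = 1.0        # natural frequency tilde-Omega_0
c  = 1e-2       # c = V1''(0) * r0-amplitude, hand-picked forcing scale

def max_amp(w_drive, T, dt=1e-2):
    """RK4 for eta'' + W0^2 eta = -c cos(2 w t) cos(W0 t), zero initial data."""
    def acc(eta, t):
        return -W0**2 * eta - c * math.cos(2.0*w_drive*t) * math.cos(W0*t)
    eta, v, t, amp = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        k1x, k1v = v, acc(eta, t)
        k2x, k2v = v + 0.5*dt*k1v, acc(eta + 0.5*dt*k1x, t + 0.5*dt)
        k3x, k3v = v + 0.5*dt*k2v, acc(eta + 0.5*dt*k2x, t + 0.5*dt)
        k4x, k4v = v + dt*k3v, acc(eta + dt*k3x, t + dt)
        eta += dt*(k1x + 2*k2x + 2*k3x + k4x)/6.0
        v   += dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
        t   += dt
        amp = max(amp, abs(eta))
    return amp

T = 100.0 * math.pi / W0
on_res  = max_amp(W0, T)         # omega = Omega_0: secular growth
off_res = max_amp(1.5 * W0, T)   # detuned: bounded response
print(on_res, off_res)
```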
\subsection{Higher order corrections, instabilities and the Mathieu equation} \label{sec:Mathieu}
The motion described by \eqref{eq:ELweakfieldradial} is formally the same as that of a point particle (in Newtonian gravity) around a spherical body whose luminosity changes~\cite{Saslaw:1978}~\footnote{Such a scenario is relevant for the dynamics of dust or of small bodies in planetary systems around variable stars, where the time-dependent radiation pressure, acting as a perturbation, influences the orbital dynamics.}. In the following, we take a similar approach in order to assess the dynamics beyond the linear regime.
In the previous sections, we used a linear expansion
$x^{\mu}=x^{\mu}_0+\epsilon \eta^{\mu}$ in the small parameter $\epsilon$ to understand the evolution of the perturbation. This expansion is in fact a truncated version
of the full series $x^{\mu}=x^{\mu}_0+\sum^{\infty}_{n=1}\eta^{\mu}_{n} \epsilon^n$. To understand what new features can arise in the full theory,
we now expand in the radial coordinate, still at the linear level, but with a different parameter strength $\lambda < \epsilon$; this will allow us to ``effectively'' probe the higher-order terms. Using this expansion, the equation of motion for $\eta^r$ [Eqs.~\eqref{eq:EL_weak_field_radial_pert_1st} and \eqref{eq:ELweakfieldcircpert}] reduces to
\begin{eqnarray}
&& \lambda \partial^2_t\eta^r+\lambda \Big((2\omega_{\rm res})^2+\epsilon V_1 '' \cos{(2\omega t)} \Big)\eta^r \nonumber\\
&=& -\epsilon F\cos{(2\omega t)} +\mathcal{O}(\lambda^2)\,, \label{eq:ELweakfieldcircpertMathieu}
\eeq
where the forcing term is $F=V_1 '$ for circular orbits and $F=V_1''\,r_0(t)$ for radial motion. The corresponding resonance frequencies $\omega_{\rm res}$ are given by Eqs.~\eqref{eq:N_res_freq} and \eqref{eq:N_res_freq_radial} for circular and radial motion, respectively. For $\lambda=\epsilon$, we recover the previous results at linear order in these parameters. The $\epsilon\lambda$ term in \eqref{eq:ELweakfieldcircpertMathieu} impacts the equations of motion of $\eta^{r}_2$, which explains our claim that we are ``probing'' higher-order behaviour. From now on, we absorb $\epsilon$ into $V_1$ and $\lambda$ into $\eta^r$. Equation \eqref{eq:ELweakfieldcircpertMathieu} is known as the inhomogeneous Mathieu equation.
We can classify the motion described by \eqref{eq:ELweakfieldcircpertMathieu}, by comparing the driving and natural frequencies $\omega$ and $\omega_{\rm res}$ respectively. The motion can then be in {\it adiabatic} $(\omega_{\rm res} \gg \omega)$, nearly {\it parametric-resonant} $(\omega_{\rm res} \sim \omega)$ or {\it rapidly oscillating background} $(\omega_{\rm res} \ll \omega)$ regime. In the next few subsections we will focus on the circular background motion, but the analysis is easily generalized.
\subsubsection{Adiabatic regime} \label{sec:Mathieu_adiabatic}
Here it is natural to use an adiabatic approximation \cite{LLbookmechanics, binney2011galactic} in order to understand the motion of particles. For time scales of the order of $1/\omega_{\rm res}$, Eq.~\eqref{eq:ELweakfieldcircpertMathieu} describes a harmonic oscillator with a constant driving force. This type of external force does not deform the phase portrait of the oscillator, but only shifts it by a constant amount $\eta^r_{c,0}=-V_1 '/W(0)^2$ (think of a mass attached to a vertical elastic spring), where
\begin{equation}
W(t)=\sqrt{(2\omega_{\rm res})^2+V_1 '' \cos{(2\omega t)}}\,.
\end{equation}
The time dependence will, on the one hand, modify the center of the phase-space trajectory [non-homogeneous term in \eqref{eq:ELweakfieldcircpertMathieu}] as $\eta^r_c \approx (W(0)/W)^2\eta^r_{c,0}\cos{(2\omega t)}$. On the other hand, the phase-space trajectory will be itself deformed because of the time dependence of the effective frequency $W$ [homogeneous part of \eqref{eq:ELweakfieldcircpertMathieu}]. The Hamiltonian that effectively describes the $\{\eta^r,\partial_t\eta^r \}$ motion is
\begin{equation}
H=\frac{1}{2}(\partial_t\eta^r)^2+\frac{1}{2}W^2(\eta^r-\eta^r_{c})^2=IW\,,\label{eq:hamiltonian_adiabatic}
\end{equation}
where we introduced the action-angle variables $\{ I,\theta \}$ as $\eta^r-\eta^r_c=\sqrt{2I/W} \sin{\theta}$, $\partial_t\eta^r=\sqrt{2IW} \cos{\theta}$. As the action is approximately conserved during the adiabatic process, we can evaluate it at the initial time, $I(0)=I_0$, and find an approximate analytical solution of \eqref{eq:ELweakfieldcircpertMathieu} in this regime
\begin{equation} \label{eq:adiabatic_solution}
\eta^r(t) \approx \eta^r_c(t)+\sqrt{\frac{2I_0}{W}} \sin{\theta(t)},
\end{equation}
where $\theta(t) \approx (2\omega_{\rm res})t$ and we used the Hamiltonian equation of motion $\partial_t \theta \equiv \partial H / \partial I= W$. From \eqref{eq:hamiltonian_adiabatic} we can also read off the leading behavior of the energy function: it is periodic with period $\pi/\omega$.
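This behaviour is easy to verify numerically (an illustrative sketch with arbitrary parameter values, keeping only the homogeneous part of the motion): integrating $\partial^2_t\eta^r+W(t)^2\eta^r=0$ with a slowly varying $W$, the energy follows $W$ while the action $I=H/W$ is conserved to much higher accuracy.

```python
# Illustrative sketch (arbitrary units): adiabatic invariance of I = H/W.
import math

w_res, dV1, w = 1.0, 0.5, 0.01   # hand-picked values; drive much slower than W

def W(t):
    return math.sqrt((2.0*w_res)**2 + dV1 * math.cos(2.0*w*t))

def acc(x, t):
    return -W(t)**2 * x

# RK4 integration of eta'' + W(t)^2 eta = 0 over one slow period pi/w
dt, T = 5e-3, math.pi / w
x, v, t = 1.0, 0.0, 0.0
Ivals, Evals = [], []
for _ in range(int(T/dt)):
    k1x, k1v = v, acc(x, t)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x, t + 0.5*dt)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x, t + 0.5*dt)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x, t + dt)
    x += dt*(k1x + 2*k2x + 2*k3x + k4x)/6.0
    v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
    t += dt
    E = 0.5*v*v + 0.5*W(t)**2*x*x    # energy, cf. Eq. (hamiltonian_adiabatic)
    Ivals.append(E / W(t))           # adiabatic invariant I = H/W
    Evals.append(E)

spread = lambda s: (max(s) - min(s)) / (sum(s)/len(s))
print(spread(Evals), spread(Ivals))  # E follows W at the ~10% level; I barely moves
```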
\subsubsection{Parametric resonances} \label{sec:Mathieu_parametric}
When the driving and natural frequencies are similar, parametric resonance can occur. These resonances happen when $W(t)=2\omega$. The dominant resonance is $\omega=\omega_{\rm res}$, but others can have important cumulative effects in the energy transfer and can make the orbit unstable~\cite{Saslaw:1978}. The stability of the homogeneous Mathieu equation is a well-studied problem (e.g.\ Ref.~\cite{benderbook}). The inhomogeneity of the Mathieu equation does not change the conclusions of this stability analysis if the driving term is harmonic with the same driving frequency as the one in $W$~\cite{CairncrossPelster:2014}. We introduce a dimensionless time and rescale the parameters in \eqref{eq:ELweakfieldcircpertMathieu} as $t \to 2 \omega t$, $a=(\omega_{\rm res}/\omega)^2$, $2\epsilon=V_1 ''/(2 \omega)^2$ and $f=-V_1 '/(2 \omega)^2$:
\begin{equation} \label{eq:inhomogen_Mathie}
\partial^2_t\eta^r+(a+2\epsilon \cos{t})\eta^r=f\cos{t}.
\end{equation}
The stability of Mathieu's equation can be represented on the parametric $\epsilon-a$ (Ince-Strutt) stability diagram (see Fig.~11.11 in Ref.~\cite{benderbook}). For most values of the parameter $a$, there are regions of $\epsilon$ where the solutions of Mathieu's equation are bounded and the corresponding particle orbits are stable. However, if $a=n^2/4$ $(n \in \mathbb{N})$, instability occurs for any $\epsilon \neq 0$ \cite{benderbook}. In terms of the driving frequency, these values are
\begin{eqnarray}
\omega&=&2\omega_{\rm res}/n \nonumber\\
&=& \{ 2\omega_{\rm res},\omega_{\rm res}, 2\omega_{\rm res}/3, \omega_{\rm res}/2, 2 \omega_{\rm res}/5, \rm{...} \}. \label{eq:N_omega_instable}
\eeq
It should be noted that these results do not depend on the sign of $\epsilon$, as the Ince-Strutt stability diagram is symmetric under reflection about the $a$-axis.
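The Ince-Strutt structure can be reproduced with a short Floquet computation (an illustrative sketch, not from the paper): integrating two independent solutions of the homogeneous part of \eqref{eq:inhomogen_Mathie} over one period $2\pi$ yields the monodromy matrix ${\cal M}$, and solutions are bounded if and only if $|{\rm tr}\,{\cal M}|\leq 2$.

```python
# Illustrative Floquet analysis of y'' + (a + 2 eps cos t) y = 0.
import math

def monodromy_trace(a, eps, n=4000):
    """Trace of the monodromy matrix over one period t in [0, 2 pi] (RK4)."""
    def acc(y, t):
        return -(a + 2.0*eps*math.cos(t)) * y
    dt = 2.0*math.pi / n
    cols = []
    for y, v in ((1.0, 0.0), (0.0, 1.0)):   # two independent initial conditions
        t = 0.0
        for _ in range(n):
            k1y, k1v = v, acc(y, t)
            k2y, k2v = v + 0.5*dt*k1v, acc(y + 0.5*dt*k1y, t + 0.5*dt)
            k3y, k3v = v + 0.5*dt*k2v, acc(y + 0.5*dt*k2y, t + 0.5*dt)
            k4y, k4v = v + dt*k3v, acc(y + dt*k3y, t + dt)
            y += dt*(k1y + 2*k2y + 2*k3y + k4y)/6.0
            v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
            t += dt
        cols.append((y, v))
    return cols[0][0] + cols[1][1]   # y1(2 pi) + y2'(2 pi)

# Inside the dominant n = 1 tongue (a = 1/4): unstable for any eps != 0
print(abs(monodromy_trace(0.25, 0.1)))   # |trace| > 2: unbounded solutions
# Away from the tongues: stable, bounded orbits
print(abs(monodromy_trace(0.45, 0.1)))   # |trace| < 2
```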
\subsubsection{Rapidly oscillating background}
\label{sec:Mathieu_rapid}
Motion in a rapidly oscillating background is effectively dictated by the static background, because the perturber acts so rapidly that the system does not have time to adapt, similarly to the sudden approximation in Quantum Mechanics. As we shall see, this is the case even when the ``perturbing'' force is of the same order of magnitude as the ``non-perturbing'' one. We therefore keep the discussion general and start by rewriting \eqref{eq:ELweakfieldradial} as
\begin{equation} \label{eq:NII_law_osc}
\partial^2_t r = - U'(r)+F(r)\cos{(2\omega t)},
\end{equation}
where $U'(r)=V'(r)-\tilde{J}^2/r^3$ and $F(r)=-V_1'(r)$. Let the radial coordinate be decomposed as $r=r_{s}+\xi^r$, where $r_{s}$ and $\xi^r$ are the slowly and rapidly varying parts, respectively. After this decomposition, the slow and rapid parts of the equation of motion \eqref{eq:NII_law_osc} must be separately satisfied \cite{LLbookmechanics}\footnote{One should be careful with the initial conditions when there is a non-zero phase \cite{RidingerDavidson:2007}.}. The rapid part has the form
\begin{equation}
\partial^2_t\xi^r=F(r_s)\cos{(2\omega t)}+\mathcal{O}(\xi^r),
\end{equation}
where we assumed that the $\xi^r$ terms are small. This equation is easily integrated,
\begin{equation} \label{eq:osc_motion_eta}
\xi^r(t)=-\frac{1}{(2\omega)^2} F(r_s)\cos (2 \omega t)
\end{equation}
and we see that $\xi^r$ is indeed small, because of the $1/\omega^2$ suppression. This confirms our assumption that the coordinate can be decomposed perturbatively, irrespective of the fact that the ``perturbative'' force is larger than the ``non-perturbative'' one.
The slowly varying part of Eq.~\eqref{eq:NII_law_osc}, after averaging, has the form
\begin{equation}
\partial^2_t r_{s}=-U'(r_s)-\frac{1}{2(2\omega)^2} F(r_s)F'(r_s)\,.
\end{equation}
As claimed, the motion is governed by a time-independent effective potential, and the time-varying part is suppressed by $1/\omega^2$.
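This averaging procedure can be verified with a minimal numerical sketch (our own illustration, with an assumed toy choice $U'(r)=r$, $F(r)=r$ and rapid driving $2\omega=50$, none of which come from the models in this paper): the full and averaged equations should track each other up to the small rapid ripple.

```python
# Compare the full driven equation  r'' = -U'(r) + F(r) cos(2 w t)  with the
# averaged slow equation  r_s'' = -U'(r_s) - F(r_s) F'(r_s) / (2 (2 w)^2),
# for the illustrative choice U'(r) = r, F(r) = r.
import math

def rk4_step(rhs, t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2, [y[i] + dt/2 * k1[i] for i in range(2)])
    k3 = rhs(t + dt/2, [y[i] + dt/2 * k2[i] for i in range(2)])
    k4 = rhs(t + dt, [y[i] + dt * k3[i] for i in range(2)])
    return [y[i] + dt/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

two_w = 50.0                  # rapid driving: 2*omega >> natural frequency 1

def full_rhs(t, y):           # y = [r, dr/dt]
    return [y[1], -y[0] + y[0] * math.cos(two_w * t)]

def slow_rhs(t, y):           # averaged equation; here F(r) F'(r) = r
    return [y[1], -y[0] - y[0] / (2.0 * two_w**2)]

full, slow = [1.0, 0.0], [1.0, 0.0]
t, dt, max_diff = 0.0, 5e-4, 0.0
while t < 20.0:
    full = rk4_step(full_rhs, t, full, dt)
    slow = rk4_step(slow_rhs, t, slow, dt)
    t += dt
    max_diff = max(max_diff, abs(full[0] - slow[0]))
```

For these values the two trajectories differ by roughly the ripple amplitude $F/(2\omega)^2 \sim 4\times 10^{-4}$, as expected from the discussion above.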
\subsection{Numerical evolution in the homogeneous background} \label{sec:toy_model}
\begin{figure*}[th]
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth,clip]{RadiusvTime_1}&
\includegraphics[width=0.45\textwidth,clip]{RadiusvTime_2}
\end{tabular}
\caption{Evolution of an initially circular geodesic in the spacetime of a time-periodic geometry~\eqref{eq:toyA}-\eqref{eq:toyB}, for different spacetime frequencies $\omega$. The geodesic was circular in the static geometry of a constant density star with radius ${\mathcal C}=0.1$, placed at an initial radius $r=3.11 M$. Because the full geometry is now time-dependent with $\epsilon=10^{-3}$, the motion is not perfectly circular nor closed. For this example, there is a resonance at $M\omega=M\omega_{\rm res}=34.7\times 10^{-3}$. It is apparent that as $\omega$ is tuned close to resonance, the motion differs wildly from its unperturbed circular trajectory.}
\label{fig:GeoNumCirc}
\end{figure*}
\begin{figure*}[th]
\centering
\includegraphics[scale=1.3]{Radiusparametric.pdf}
\caption{Parametric representation of the geodesic corresponding to resonance; the central object is the same as that in Fig.~\ref{fig:GeoNumCirc}. The values of $t/M$ indicate the initial instant at which each segment of the geodesic is represented, each for a duration $\delta t/M=500$. As stated, the motion is neither perfectly circular nor closed. The geodesic oscillates periodically from the initial orbit to an ellipse of large eccentricity, returning to the unperturbed motion after a beating period.
}
\label{fig:ParametricCirc}
\end{figure*}
To explore these results in a specific setup, we numerically solved the equations of motion, without any perturbative approximation, in an artificial toy model:
a constant density star spacetime, on top of which a time-periodic fluctuation was superposed. To be specific, we set the metric components:
\begin{eqnarray}
A&=&A_{\rm star}(r)+\frac{M^2\,A_{\rm star}(r)}{M^2+r^2}\epsilon \cos(2\omega t)\,, \label{eq:toyA} \\
B&=&B_{\rm star}(r)+\frac{M^2\,B_{\rm star}(r)}{M^2+r^2}\epsilon \cos(2\omega t)\,,\label{eq:toyB}
\eeq
Here, $A_{\rm star}(r), B_{\rm star}(r)$ correspond to the geometry of a constant density star of mass $M$ and radius $R$ in General Relativity~\cite{bookWeinberg:1972, Macedo:2013jja}. This rather arbitrary choice could mimic, for example, radially oscillating stars or other geometries. For us, it is merely a toy arena where we can test our previous results.
We assume that there is no coupling between the fluid in the star and the orbiting object, and that therefore the object follows a geodesic. A straightforward analysis shows that
for $\epsilon=0$ there are circular timelike geodesics for any $r^2<R^3/(2M)$, and they are all {\it stable} if the compactness satisfies $\mathcal{C}<23/54$.
In the weak field limit of our toy model \eqref{eq:toyA}, $A_{\rm star}(r)=1+2V_{\rm star}(r)$ and
\begin{equation} \label{eq:N_star_potential}
V_{\rm star}=-\mathcal{C}\Big(\frac{3}{2}- \frac{r^2}{2R^2}\Big)
\end{equation}
corresponds to the potential of a homogeneous sphere in Newtonian gravity (a spherical harmonic oscillator). From \eqref{eq:N_res_freq} we obtain $\omega_{\rm res}=\tilde{\Omega}$. This result is consistent with the evaluation of \eqref{eq:circresonant} for dilute relativistic stars described by \eqref{eq:toyA}, when $\Theta \approx 2 \Omega$. Note that in this setup the homogeneous part of \eqref{eq:ELweakfieldcircpert} is the result expected from Newtonian gravity: radial perturbations of circular motion inside the homogeneous sphere precess with the epicyclic frequency $2\tilde{\Omega}$ \cite{binney2011galactic}.
For the relativistic numerical investigation we took a star with compactness $\mathcal{C}=0.1$ and $\epsilon=10^{-3}$. Using the above, the functions $a_0\,,\,a_1\,,\,b_0\,,\,b_1$ [defined in \eqref{eq:weakly1}-\eqref{eq:weakly2}] are trivially known, and the geodesics can be numerically solved without approximations, using the full metric. We imposed initial conditions corresponding to fully unperturbed circular geodesics,
and monitored the position $r(t)/M$. The trajectory is shown in Fig.~\ref{fig:GeoNumCirc} for three different ``driving'' frequencies $\omega$.
Since the equations of motion are accurate up to order ${\cal O}(\epsilon^2)$, an absolute resonance is not featured in our fully numerical solution. Instead, we find a beating pattern due to the interference of the two sinusoidal signals in \eqref{circularsol}. We applied a numerical Fourier analysis to these solutions to understand the spectrum of frequencies present in the data.
Our results show a clear, discrete spectrum of two frequencies for each example. These match, to within an error of $\sim 0.1\%$, the ones given by $\Theta(\omega)$ in \eqref{Theta} and $\omega$, confirming the validity of our perturbative analytic results. The beating frequency is given by half the difference between the frequencies of the two signals in \eqref{circularsol},
\begin{eqnarray} \label{eq_beating_freq}
\omega_{\rm beat}=\frac{\omega-\omega_{\rm res}}{2}\,.
\eeq
Therefore, the beating period becomes larger as $\omega$ approaches $\omega_{\rm res}$, given by Eq.~\eqref{eq:circresonant}, as seen in Fig.~\ref{fig:GeoNumCirc}. Likewise, the position $r(t)$ grows to larger
amplitudes as $\omega$ reaches $\omega_{\rm res}$, indicating that this is, in fact, a resonance.
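The kind of Fourier extraction used above can be mimicked with a short sketch (the frequencies and amplitudes below are illustrative assumptions, not our simulation data): a direct DFT scan recovers the two spectral peaks and hence the beating frequency.

```python
# Build a synthetic two-frequency signal, scan a direct DFT over a frequency
# grid, and read off the two spectral peaks; half their difference is the
# beating frequency. All numerical values here are illustrative.
import cmath
import math

w_res, w = 1.00, 1.10                 # assumed natural and driving frequencies
dt, n = 0.25, 4000                    # window T = 1000 -> resolution ~ 2 pi / T
signal = [0.7 * math.cos(w_res * dt * k) + 0.3 * math.cos(w * dt * k)
          for k in range(n)]

def amplitude(freq):
    """|sum_k x_k exp(-i freq t_k)|: the DFT evaluated at a single frequency."""
    return abs(sum(x * cmath.exp(-1j * freq * dt * k)
                   for k, x in enumerate(signal)))

grid = [0.5 + 0.002 * i for i in range(501)]       # scan frequencies 0.5 .. 1.5
amps = [amplitude(f) for f in grid]
# local maxima of the scanned spectrum; the two largest are the signal peaks
peaks = sorted(((amps[i], grid[i]) for i in range(1, len(grid) - 1)
                if amps[i] > amps[i - 1] and amps[i] > amps[i + 1]),
               reverse=True)
f1, f2 = sorted(f for _, f in peaks[:2])
w_beat = (f2 - f1) / 2                # recovered beating frequency
```

With a window much longer than the beating period, both peaks are recovered to within the grid resolution, and the beating frequency follows directly from their half difference.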
To understand whether instabilities, as predicted by the analysis of Section \ref{sec:Mathieu_parametric}, were possible, we numerically evolved the orbits for ``driving'' frequencies $\omega$ given by \eqref{eq:N_omega_instable} and explored the parameter space spanned by $\{ \mathcal{C},\,\epsilon,\,r_0,\, t_i\}$. The motion is confined to within the ``star'' at all times, i.e. $r(t)<R$. For all the parameter values that we explored, we found {\it resonant-like} behaviour for $\omega=2 \omega_{\rm res}$. For $\omega= \omega_{\rm res}/2$, the behaviour depends on the value of the parameters. For $\mathcal{C} \sim 0.1$, small $r_0$ and large $\epsilon$, we found resonant-like behaviour. However, for other points in the parameter space, the envelope of $r(t)$ seems to grow linearly and very slowly. This growth may be tamed at some proper time, but we have not observed this in all cases. The existence of resonances seems to be independent of the phase $t_i$.
We have not found any resonant or unbounded motion for $\omega=2\omega_{\rm res}/3$, $\omega=2\omega_{\rm res}/5$ or $\omega=\omega_{\rm res}/3$. As these conclusions did not change for dilute backgrounds (i.e. weak fields) with $\mathcal{C} \sim 10^{-3}$, we conclude that higher-order terms of the expansion are responsible for the taming of the instabilities predicted by the Mathieu equation. It should be noted that we numerically evolved trajectories until $\tau/M \sim 10^{9}$, and that resonant or unbounded motion may become apparent at later times in the cases where it was not found.
The orbital motion in the adiabatic and rapidly-oscillating background regimes is in very good agreement with the behaviour described by Eq.~\eqref{eq:adiabatic_solution} and in Section \ref{sec:Mathieu_rapid}, respectively, for small and, qualitatively, even for large compactness~\footnote{In these two regimes, the equations of motion are stiff: they contain two time scales separated by a large gap, and one should use appropriate integrators \cite{PressRecipes:2007}.}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{EnergyvTime_circular}
\caption{Time evolution of the variation of the orbit's energy ($E=-\partial L/\partial t$), weighted by its initial value. The particle has a background circular motion around the toy model under the same conditions that were simulated in Fig.~\ref{fig:GeoNumCirc}, with $\epsilon = 1/100$. The perturbative term has frequencies $2\omega_{\rm res}$, $\omega_{\rm res}$ and $0.5\,\omega_{\rm res}$, for which the evolution of the energy is periodic with the frequency of the envelope of the resonant motion.}
\label{fig:EnergyvTime_circular}
\end{figure}
The time evolution of the energy, $E=-\partial L/\partial t$, is as expected from our earlier general considerations. In particular, we focus on the behaviour of $\Delta E=|E(n\pi/\omega)-E_0|$, with $n \in \mathbb{N}$ and $E_0=E(0)$. When the driving frequency is not given by $\omega=\{\omega_{\rm res}/2,\omega_{\rm res},2\omega_{\rm res} \}$, we find $\Delta E/E_0 \ll 1$. On the other hand, when $\omega$ equals one of these resonant frequencies, the relative change is periodic, as seen in Fig.~\ref{fig:EnergyvTime_circular}. We find that the period is the same as that of the envelope of $r(t)$ in Fig.~\ref{fig:GeoNumCirc}.
Finally, the parametric representation of the geodesic in Cartesian coordinates, shown in Fig.~\ref{fig:ParametricCirc}, features the predicted precession of the geodesic. The initial circular geodesic is deformed into an ellipse whose eccentricity peaks when the deviation is also maximum, before returning to circular motion after a beating period.
We have also studied radial motion in this background. The features of the motion strongly depend on the parameters. In general, if the time-varying component of the metric is strong enough
(i.e. for large enough $\epsilon$), resonances can be excited for {\it any} initial conditions {\it and} for any background frequency. We observed such behavior for sufficiently dilute configurations.
The natural frequency of small-amplitude oscillations in general differs from the background's. However, the spacetime drives the object to frequencies which seem to be a multiple of the background's.
This drift in frequency is confirmed by numerical Fourier analysis, and the phenomenon is observed both for ``driving'' frequencies much larger and much smaller than the natural frequency (we find a drift even when the driving frequency is two orders of magnitude larger or smaller). The behaviour of the solution in this scenario departs strongly from the one described in Section~\ref{sec:circularmotionexample}; however, all the resonances are tamed at some point in proper time. This behaviour seems to be strongly model-dependent. In the small-$\epsilon$ regime, our results are similar to the circular case.
\subsection{Motion in oscillatons} \label{sec:applications_oscillatons}
We now briefly discuss how the previous results apply to oscillaton spacetimes, studied in Section~\ref{sec:Oscillatons}.
The motion in the spacetime describing oscillatons has been studied numerically for very specific conditions~\cite{2006GReGr..38..633B}. The results obtained there agree qualitatively with the conclusions drawn from Section~\ref{sec:toy_model} for homogeneous backgrounds, and with our own simulations of motion in oscillatons. In Ref.~\cite{2006GReGr..38..633B}, background circular geodesics were perturbed and numerically solved for an oscillaton, with initial conditions corresponding to a turning point. For different values of initial radius and angular momentum, bound orbits with elliptic-like behaviour were observed, similar to the ones in our toy model example of Fig.~\ref{fig:ParametricCirc}. Both the period and the amplitude of the oscillation were sensitive to these conditions, as seen in both our analytical and numerical results. Ref.~\cite{2006GReGr..38..633B} also found that there are initial conditions for which the amplitude of the oscillations is negligible.
Our results agree with these findings.
An obvious question concerns the existence of resonances for these objects. We used our numerical results from Section~\ref{sec:ROscillatons} to compute the ratio between the resonant and the oscillaton's frequency, for both circular, Eq.~\eqref{eq:circresonant}, and radial, Eq.~\eqref{eq:freqrad}, motion. We also paid special attention to multiples of this ratio, for which the Mathieu equation predicts instabilities, Eq.~\eqref{eq:N_omega_instable}. This ratio, by virtue of being dimensionless, can only depend on the product of the oscillaton mass $M$ and the scalar mass $\mu$, and the compactness $\mathcal{C}$ is a suitable choice of dimensionless combination. For the most compact oscillatons $(\mathcal{C} \sim 0.07)$, the difference between $\omega$ and $2\omega_{\rm res}$ was a factor of two too large, and the gap widens as $\mathcal{C}$ decreases.
In summary, our results indicate that neither circular nor radial motion is able to excite resonances in oscillaton spacetimes.
We can use the analytical solution in the Newtonian regime (Section~\ref{sec:NOscillatons}) to confirm these findings for dilute oscillatons. Taking the expansion~\eqref{eq:Nmetricexpansion} and using it in Eq.~\eqref{eq:freqrad}, we find that, for radial motion near the origin, the resonance frequency is
\begin{eqnarray}
\omega_{\rm res}\left(\mathcal{C}\right)=1.33791 \, \mathcal{C}\,.
\eeq
Therefore, the resonance frequencies are bounded from above by the maximum allowed compactness. For the maximum compactness for which the Newtonian analysis is valid, $\mathcal{C}\approx 0.01$, one finds $\omega_{\rm max}=0.013379$. This upper bound is considerably {\it lower} than the oscillatons' frequency of Fig.~\ref{fig:OvsPhi0} (even in the relativistic scenario).
Thus, no resonances are excited by radial motion.
For circular motion, using Eq.~\eqref{eq:circresonant} we find,
\begin{eqnarray}
\omega_{\rm res}(\mathcal{C})=\frac{\mathcal{C}}{2\sqrt{2}}\sqrt{\frac{14.32 - 88.6408 \, \mathcal{C}}{1 + \mathcal{C} \,(-6.19 + 1.79\, r^2 \, \mathcal{C})}}\,.
\eeq
Thus, we find again an upper bound
\begin{eqnarray}
\omega_{\rm res}<\frac{\mathcal{C}}{2\sqrt{2}}\sqrt{\frac{14.32}{1 -6.19\, \mathcal{C}}}\,.\label{eq:freqcircmaj}
\eeq
The r.h.s.\ grows with compactness in the range $0<\mathcal{C}<0.161551$. This means that in the Newtonian regime ($\mathcal{C}<0.01$) the resonance frequency for circular background motion is bounded by the r.h.s.\ of \eqref{eq:freqcircmaj} evaluated at $\mathcal{C}=0.01$, which is $\omega_{\rm max}= 0.0138134$. Once again, this value does not come close to the frequencies of oscillatons corresponding to the Newtonian regime in Fig.~\ref{fig:OvsPhi0}. We conclude that the motion in Newtonian oscillatons is in the rapidly oscillating background regime $(\omega \gg \omega_{\text{res}})$. It is thus appropriate to use the formalism developed in Section \ref{sec:Mathieu_rapid} to understand motion in these oscillatons, irrespective of whether the spacetime is highly dynamical or not.
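The numbers quoted above can be reproduced with a simple evaluation sketch (frequencies in the dimensionless units of the text, i.e. in units of the scalar field mass):

```python
# Evaluate the two Newtonian resonance-frequency expressions quoted above
# at the edge of validity of the Newtonian analysis, C = 0.01.
import math

def w_res_radial(C):
    """Resonance frequency for radial motion near the origin: 1.33791 * C."""
    return 1.33791 * C

def w_res_circular_upper(C):
    """Upper bound for circular motion, Eq. (freqcircmaj)."""
    return C / (2.0 * math.sqrt(2.0)) * math.sqrt(14.32 / (1.0 - 6.19 * C))

w_max_radial = w_res_radial(0.01)             # ~ 0.013379
w_max_circular = w_res_circular_upper(0.01)   # ~ 0.0138134
```

Both values sit well below the oscillaton frequencies of Fig.~\ref{fig:OvsPhi0}, consistent with the conclusion that no resonances are excited.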
We can focus the discussion on homogeneous oscillatons where, via \eqref{eq:NPoisson}, \eqref{eq:NV2} and \eqref{eq:N_star_potential},
\begin{eqnarray}
V'(r)=-\frac{1}{3}V_{1}'(r)=\frac{M}{R^3}r.
\eeq
This description is, by \eqref{eq:Ndensitysmallr}, valid for motion near the center. The amplitude of the rapidly varying part of the radial coordinate, Eq.~\eqref{eq:osc_motion_eta}, is proportional to $\tilde{\Omega}^2/ \omega^2$, where $\tilde{\Omega}^{-1}=\sqrt{R^3/M}$ is the dynamical time scale of the slowly-varying radial component. If the object has $\mathcal{C}=10^{-2}$, at the limit of validity of the weak-field description, then (see also Fig.~\ref{fig:res_self_int_osc}) $\tilde{\Omega}^2/\omega^2 \sim 10^{-4}$. Thus, the motion is always well described by a smooth transition between background (circular or radial) motion on which small perturbations are superposed.
We also studied motion in the presence of a scalar self-interaction, at the Newtonian level, and we find the same qualitative behaviour (see Appendix \ref{AppGPP_self_int_motion}). The attractive self-interaction makes the configuration more compact in the inner core, with respect to the non-self-interacting one. For highly compact oscillatons the gap between $2\omega_{\rm res}$ and $\omega$ is not too large, and the Newtonian analysis raises the interesting issue of whether or not such a self-interaction term could lead to resonances in highly compact oscillatons.
This question is especially interesting for the full axion potential~\cite{Helfer:2017a}.
\subsection{The effect of dissipation on resonances} \label{sec:dissipation}
\begin{figure}[th]
\includegraphics[width=0.5\textwidth]{DragRadiusvTime}
\caption{Radial motion of an object subject to very weak dissipation on the time-periodic geometry \eqref{eq:toyA}-\eqref{eq:toyB}, where resonance occurs for $r/M = 6$. The star has $\mathcal{C}=0.1$, the inspiralling object has $R_p = 10\, \mu_{p_0} =10^{-3}M$. The sound speed was taken to be $c_s=0.6 c$.\label{fig:DragGeoNumCirc}}
\end{figure}
The above analysis neglects dissipative effects. It is, in principle, possible that the resonances do not leave any observable imprint
in realistic situations: gravitational drag, along with gravitational radiation losses, could, for instance, drive the inspiralling body inwards without
it even being affected by resonances. To test this, we have added a dissipative force $F$ to the motion of the body, of initial mass $\mu_p(\tau=0)=\mu_{p_0}$ and radius $R_p$. The equations of motion are given by
\begin{eqnarray}
\mu_p\left(\ddot{x}^{\gamma} + \Gamma^{\gamma}_{\alpha\beta} \dot{x}^{\alpha} \dot{x}^{\beta}\right) = F^{\gamma}.
\label{eq:dragmotion}
\eeq
The force can describe several effects, such as gravitational radiation reaction, accretion or gravitational drag~\cite{Macedo:2013qea,Barausse:2014tra}. Regarding accretion, for a small compact object (radius $R_p$ much smaller than the mean free path) the mass growth is determined by
\begin{eqnarray}
\dot{\mu}_p = \frac{\pi \rho R_p^2}{v},
\label{eq:accretion}
\eeq
where $\rho$ is the density of the compact object ``generating'' the dynamical spacetime configuration and $v$ is the relative velocity between the orbiting body and the compact star. The gravitational drag force may be modelled by dynamical friction on a constant-density medium~\cite{Macedo:2013qea,Barausse:2014tra}
\begin{eqnarray}
F_{DF}=- \frac{4 \pi \mu_p ^2 \rho}{v^2} I_v\,,
\eeq
with
\begin{eqnarray}
I_v =
\begin{cases}
\frac{1}{2}\log \left(\frac{1 + v/c_s}{1 - v/ c_s}\right) - v/c_ s , \quad &v < c_s \\
\frac{1}{2}\log \left(1-\frac{c_s^2}{v^2}\right) + \log \left(\frac{v t}{r_{\rm min}}\right) , \quad &v > c_s \\
\end{cases}
\eeq
where $c_s$ is the velocity of sound in this medium and $r_{\rm min}\sim R_p/v$~\cite{Macedo:2013qea}.
In our simulations, we always used subsonic motion.
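For reference, the subsonic branch of $I_v$ can be coded directly (a sketch of the expression above; the supersonic branch is omitted precisely because our simulations are subsonic):

```python
# Subsonic Coulomb factor of the dynamical-friction force.
import math

def I_subsonic(v, c_s):
    """I_v for v < c_s: (1/2) ln((1 + v/c_s)/(1 - v/c_s)) - v/c_s."""
    mach = v / c_s
    if not 0.0 <= mach < 1.0:
        raise ValueError("subsonic branch requires 0 <= v < c_s")
    return 0.5 * math.log((1.0 + mach) / (1.0 - mach)) - mach

# For small Mach number, I_v ~ (v/c_s)^3 / 3: the drag switches off as v -> 0.
low = I_subsonic(0.1, 1.0)
medium = I_subsonic(0.3, 1.0)
high = I_subsonic(0.6, 1.0)
```

The small-Mach behaviour $I_v \approx (v/c_s)^3/3$ makes explicit that slow subsonic motion feels very little dynamical friction, which is why the drag chosen in our simulations only weakly perturbs the early inspiral.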
Choosing once again $\vartheta = \pi/2$ and taking into account the effects of accretion and gravitational drag, the dissipative force may be modelled in a Newtonian way:
\begin{eqnarray}
F_D^t &=& 0\,,\\
F_D^r &=& -\dot{\mu}_p\dot{r} + F_{DF} \frac{\dot{r}}{v}\,,\\
F_D^\varphi &=& -r\dot{\mu}_p\dot{\varphi} + F_{DF} \frac{r\dot{\varphi}}{v}\,.\label{eq:dissipationequation}
\eeq
The system \eqref{eq:dragmotion} and \eqref{eq:accretion} determines the motion of an object through this perturbed spacetime.
Fig.~\ref{fig:DragGeoNumCirc} shows the radial evolution of an originally circular orbit of a very small object in the previous homogeneous toy model, undergoing subsonic dissipation. As in Section \ref{sec:toy_model}, the compactness of the star is taken to be $\mathcal{C}=0.1$, the density of the medium is $M^2\rho=3M^3/(4 \pi R^3)\simeq 2.38732 \times 10 ^{-4}$, and the speed of sound $c_s$ was chosen to be $0.6 c$. Regarding the orbiting object, its initial mass and radius were chosen to be very small compared to the geometry ($R_p = 10 \,\mu_{p_0} = 10^{-3}M$), such that the effect of the drag would be small. The oscillating frequency of this toy model was chosen such that resonance occurs at $r/M = 6$. It is clear from Fig.~\ref{fig:DragGeoNumCirc} that the object undergoes a very slow inspiralling motion until it reaches $r/M \sim 6$. Then, the eccentricity of the orbit decreases, and the trajectory becomes similar to the resonant behaviour studied in the previous section. This implies that an object captured by the gravity of these periodic structures will undergo a resonance (if possible) when reaching the corresponding radius, enabling its observation with appropriate techniques.
Our results for larger damping indicate that the drag hastens the decay of the object to the center of the star. For large enough damping, the resonance (and forcing) has little impact on the motion, as expected. Consequently, if the friction is too large, resonance might be unobservable.
\section{Implications for ultra-light axion DM halos} \label{sec:darkhalo}
We will now use the previous results to investigate whether motion in the background of ultra-light axion DM can lead to observable consequences.
Previous works have considered the impact of FDM on motion of binary pulsars~\cite{Khmelnitsky:2013lxt, Blas:2016ddr, DeMartinoBroadhurst:2017} and laser interferometers~\cite{AokiSoda:2017a}. We are here interested not in the impact of FDM on the motion of binary pulsars around their center of mass but, for illustration, motion of that center of mass around the galactic halo. We discuss the former aspect of motion in Appendix \ref{AppBinary}.
Masses of axion particles in the range $10^{-27}\,\mbox{{\scriptsize $\stackrel{<}{\sim}$}}\, m[\text{eV}] \,\mbox{{\scriptsize $\stackrel{<}{\sim}$}}\, 10^{-23}$ are of interest in the context of mixed axion DM (MDM)~\cite{HlozekMarsh:2017}.
\subsection{Halo description} \label{sec:darkhalo_descr}
Galactic DM halos in the FDM scenario are stabilized by Heisenberg's uncertainty relation (or quantum pressure in a hydrodynamical perspective, see Appendix \ref{AppGPP}) and are related to the weak-field limit of the EKG system, studied in Section~\ref{sec:NOscillatons} and Appendix \ref{AppEKGNewt}. However, the connection between Newtonian oscillatons and DM halos is not straightforward. It is theoretically expected that a dark halo consists of a nearly homogeneous core surrounded by particles which behave like CDM \cite{MarshPop:2015}. Simulations of galaxy formation with FDM confirm this picture~\cite{SchiveChiueh:2014, Mocz:2017}. The density profile of this effectively cold DM region is described by the standard Navarro-Frenk-White (NFW) profile~\cite{binney2011galactic}. In the case of FDM, this core
is usually referred to as a pseudo-soliton~\footnote{``Pseudo'' because it is not protected by a charge. However, it is stable on cosmological scales (see Section \ref{sec:ROscillatons}).}. Simulations reveal non-local scaling relations between the parameters that describe the soliton and the whole halo \cite{Schive:2014hza, Mocz:2017, 4caveat}.
There are established procedures for constructing FDM halos and reconstructing observational galactic rotation curves~\cite{MarshPop:2015,GonzalezMoralesMarsh:2017, Bernaletal:2017a}. The dark halo is described by
\begin{equation} \label{eq:halodensity}
\rho(r)=\rho_{\text{sol}}(r)\theta(r_{\epsilon}-r)+\rho_{\text{NFW}}(r)\theta(r-r_{\epsilon})\,,
\end{equation}
where
\begin{equation} \label{eq:solitondensity}
\rho_{\text{sol}}(r)=\frac{\rho_c}{(1 + 0.091(r/r_c)^2)^8} \,,
\end{equation}
and
\begin{equation}
\rho_{\text{NFW}}(r)=\frac{\rho_s}{\frac{r}{r_s}\big(1 + \frac{r}{r_s} \big)^2}\,,
\end{equation}
are the soliton and NFW profiles, respectively, and $\theta$ is the Heaviside function. Here, $\rho_c$ is the central density of the soliton, $r_c$ (the core radius) is the point at which the density falls to half of its central value, $\rho_s$ is related to the density of the Universe at the moment the halo collapsed, $r_s$ is the NFW scale radius, and $r_{\epsilon}$ is the soliton-NFW transition radius. Demanding continuity of the soliton and NFW densities at the transition (and optionally of their first derivatives), we are left with only four (three) free parameters, which can be found by fitting galactic rotation curves.
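A minimal sketch of this matching is given below (the core values $\rho_c$, $r_c$ are the MW-like reference numbers used later in this section, while $r_s$ is an assumed, purely illustrative value): continuity at $r_\epsilon$ fixes $\rho_s$ in terms of the soliton parameters.

```python
# Piecewise halo density: soliton core matched continuously onto an NFW
# envelope at r_eps. Units: Msun/pc^3 for densities, kpc for radii.

def rho_sol(r, rho_c, r_c):
    """Soliton profile, Eq. (solitondensity)."""
    return rho_c / (1.0 + 0.091 * (r / r_c)**2)**8

def rho_nfw(r, rho_s, r_s):
    """NFW profile."""
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

rho_c, r_c = 146.0, 0.12      # MW-like core density and radius
r_eps = 3.5 * r_c             # soliton-NFW transition radius (r_eps ~ 3.5 r_c)
r_s = 10.0                    # assumed NFW scale radius (illustrative)

# Continuity rho_sol(r_eps) = rho_nfw(r_eps) fixes rho_s:
x_eps = r_eps / r_s
rho_s = rho_sol(r_eps, rho_c, r_c) * x_eps * (1.0 + x_eps)**2

def rho(r):
    """Halo density profile, Eq. (halodensity)."""
    if r < r_eps:
        return rho_sol(r, rho_c, r_c)
    return rho_nfw(r, rho_s, r_s)
```

Demanding continuity of the first derivative as well would instead fix $r_s$, removing one more free parameter, as described above.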
The soliton density function~\eqref{eq:solitondensity} was found by fitting to the results of galaxy formation simulations~\cite{SchiveChiueh:2014}. The fitted density distribution \eqref{eq:solitondensity} for the soliton is in excellent agreement with our approximate analytical solution of Section~\ref{sec:NOscillatons}. One of the two soliton parameters $(\rho_c, r_c)$ can be replaced by the axion particle mass $m$~\footnote{We remind the reader that $m=\mu\hbar$.}, a global parameter independent of the galactic details. From the definition of $r_c$ and the scaling in Eq.~\eqref{eq:NSPscale},
\begin{equation} \label{eq:fuzzy_cdensity}
\rho_c=1.94 \times 10^{-2} \Big ( \frac{r_c}{1\text{kpc}} \Big )^{-4} \Big ( \frac{m}{m_{22}} \Big )^{-2} \frac{M_{\odot}}{\text{pc}^3}.
\end{equation}
In the last equation, $m_{22}=10^{-22}\, \text{eV}$ and we used our analytical profile to obtain the numerical prefactor. Simulations indicate that the transition radius usually corresponds to $r_{\epsilon} \approx 3.5 r_c$ \cite{Mocz:2017}. The scale-invariant radius of that point is $Z_s \equiv \lambda \mu r_{\epsilon} \approx 1.035 Z$. In order to make a connection with our description of oscillatons in Section \ref{sec:Oscillatons}, we also determine how their compactness\footnote{Here, we define $\mathcal{C_{\text{s}}}=M_s/r_{\epsilon}$, where $M_s=M(r_{\epsilon})$. We are not considering whole Newtonian oscillatons, because in FDM halos the exponential tail is substituted by the NFW tail. From the scaling relations $\mathcal{C}=\mathcal{C_\text{s}}(Z_s/Z)(\beta/\beta_s)$ and using the full expansion we find $\beta_s=1.725$.} depends on the core radius and the particle's mass
\begin{equation}
\label{eq:compactness_halo}
\mathcal{C_{\text{s}}} = 3.08 \times 10^{-9} \Big ( \frac{r_c}{1\text{kpc}} \Big )^{-2} \Big ( \frac{m}{m_{22}} \Big )^{-2}.
\end{equation}
Note that rotation curve fits indicate changes of $r_{c}$ by at most two orders of magnitude, $r_c \sim (10^{-2},1)\, \text{kpc}$, over the FDM particle mass interval \cite{Bernaletal:2017a}. If we use the parameters for the Milky Way (MW) estimated in Refs.~\cite{SchiveChiueh:2014,DeMartinoBroadhurst:2017}, $m=0.8m_{22}$ and $r_c=120\, \text{pc}$, we find $\rho_c=146 M_{\odot}/\text{pc}^3$ and $\mathcal{C_{\text{s}}}=3.34 \times 10^{-7}$.
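The two scaling relations above can be checked directly against the MW reference numbers (a simple evaluation sketch):

```python
# Evaluate Eqs. (fuzzy_cdensity) and (compactness_halo) for the Milky Way
# reference parameters: m = 0.8 m22 and r_c = 120 pc = 0.12 kpc.

def central_density(r_c_kpc, m_over_m22):
    """Eq. (fuzzy_cdensity): soliton central density in Msun/pc^3."""
    return 1.94e-2 * r_c_kpc**-4 * m_over_m22**-2

def soliton_compactness(r_c_kpc, m_over_m22):
    """Eq. (compactness_halo): compactness C_s of the soliton."""
    return 3.08e-9 * r_c_kpc**-2 * m_over_m22**-2

mw_rho_c = central_density(0.12, 0.8)     # ~ 146 Msun/pc^3
mw_C_s = soliton_compactness(0.12, 0.8)   # ~ 3.34e-7
```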
\subsection{Halo spacetime dynamics and general features of motion} \label{sec:darkhalo_motion}
In Section \ref{sec:NOscillatons} we showed that the spacetime describing the soliton is highly dynamical, in the sense that the gradient of the time-dependent potential (the time-dependent force) $|V'_{1}|$ is of the same order of magnitude as, or larger than, the Newtonian gravitational force $V'$. Regarding the halo ``atmosphere'' (outer layer), as $r_s \sim r_\epsilon$ \cite{Bernaletal:2017a}, we are interested in the $r\gg r_s$ limit of the NFW profile, $\rho_{\text{NFW}}(r) \sim \rho_s ( r/r_s )^{-3}$. In this limit, the time-dependent force becomes only {\it logarithmically} smaller than the Newtonian force, $V'/|V'_{1}| \sim \ln(r/r_s)$. The dark halo radius, taken as $r_{200}$, the point at which the mean halo density is $200$ times larger than the cosmological critical density, is usually two orders of magnitude larger than $r_s$. Thus, even at the halo radius the dynamical component is of the same order of magnitude as the static component. We conclude that the whole halo is highly dynamical.
As we saw in Section~\ref{sec:applications_oscillatons}, the motion in solitons is in the rapidly oscillating background regime, as $\tilde{\Omega} \ll \omega$, where $1/\tilde{\Omega}$ is the timescale associated with the motion dictated by the Newtonian force. Near the soliton center, the ratio of a celestial object's Keplerian orbital frequency to the oscillaton frequency depends on the core radius and the particle's mass\footnote{A useful number to keep in mind is that the period of oscillation for a ULA with $m=m_{22}$ is $T \sim 1\, \text{yr}$.} [from Eq.~\eqref{eq:fuzzy_cdensity}]
\begin{equation} \label{eq:fuzzy_omega_ratio}
\frac{\tilde{\Omega}}{\omega} \approx 4 \times 10^{-9} \Big ( \frac{r_c}{1\text{kpc}} \Big )^{-2} \Big ( \frac{m}{m_{22}} \Big )^{-2}\,.
\end{equation}
For our reference MW parameters, we find $\tilde{\Omega}/\omega \sim 4.34 \times 10^{-7}$. As rotation curves in the outer regions of the halo ``atmosphere'' develop plateaus, the orbital frequencies decrease further at large distances. The effect of the time-dependent force on the orbital motion in this regime is, as explained in Sections~\ref{sec:Mathieu_rapid} and \ref{sec:applications_oscillatons}, suppressed by $(\tilde{\Omega}/\omega)^2$. In the absence of other matter sources, this suppression is extremely large in the galactic context (for the above-mentioned MW estimates, it is equal to $10^{-13}$), irrespective of the highly dynamical nature of the spacetime.
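The same MW reference numbers can be used to check Eq.~\eqref{eq:fuzzy_omega_ratio} and the resulting suppression of the orbital effect (evaluation sketch):

```python
# Evaluate Eq. (fuzzy_omega_ratio) for the MW reference parameters
# (m = 0.8 m22, r_c = 0.12 kpc) and the corresponding suppression factor.

def omega_ratio(r_c_kpc, m_over_m22):
    """Eq. (fuzzy_omega_ratio): Keplerian-to-oscillaton frequency ratio."""
    return 4e-9 * r_c_kpc**-2 * m_over_m22**-2

ratio = omega_ratio(0.12, 0.8)    # ~ 4.34e-7
suppression = ratio**2            # ~ 2e-13: suppression of the orbital effect
```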
This order-of-magnitude reasoning neglects the presence of other matter (baryon and CDM) components. Let us assume rigid-body rotation and estimate on which scales resonances -- in the presence of such other matter components -- may occur:
\begin{equation} \label{eq:fuzzy_circluar_velocity}
r \sim 2 \times 10^{-6} \Big ( \frac{v}{10\text{km}/\text{s}} \Big ) \Big ( \frac{m}{m_{22}} \Big )^{-1} \text{pc}\,.
\end{equation}
Thus, for typical orbital velocities, resonances are not encountered on $\text{kpc}$ scales for either the FDM or the MDM mass range. We need to rely on the gravity of the SMBH, and we address this in the next section.
As we saw, the impact of the time-dependent force on the object's radial velocity is suppressed, but ``only'' by $\tilde{\Omega}/\omega$ [as one can see from Eq.~\eqref{eq:osc_motion_eta}]. Can one test this effect in astrophysics? To understand this issue, we now estimate the amplitude of the radial velocity oscillation with respect to the value dictated by the time-independent forces, with the Doppler effect in mind. This ratio is
\begin{eqnarray} \label{eq:Doppler_ratio}
&&\frac{\Delta v}{v}(r)=2.78\times 10^{-7}\Big ( \frac{m}{m_{22}} \Big )^{-1}\frac{\rho_{\text{ULA}}(r)}{10^2\frac{M_{\odot}}{\text{pc}^3}} \\ \nonumber
&& \Big(\frac{r}{10^2\text{pc}}\Big )^{\frac{3}{2}} \Big(\frac{M_\text{tot}(r)}{10^{9} M_\odot}\Big)^{-\frac{1}{2}},
\eeq
where $M_\text{tot}(r)$ corresponds to the total (DM+baryon) enclosed halo mass\footnote{We model the baryon component of the MW Galaxy as a bulge+disk. As we are interested in a simple model, we describe the bulge with a Hernquist profile \cite{binney2011galactic}, with a total mass of $\sim 1 \times 10^{10} M_{\odot}$ and a scale length of $\sim 1\, \text{kpc}$ \cite{WidrowDubinski:2005}. For the description of the (thin) disk we used the 3MN profile, where MN stands for Miyamoto-Nagai disk, with the parameters for the potential taken from Ref.~\cite{RoryChris:2015} and a total mass of $4.6 \times 10^{10} M_{\odot}$.}.
For the MW parameters given in Section~\ref{sec:darkhalo_descr}, the value of this ratio is $\sim 10^{-7}$ outside the SMBH gravitational influence radius of $\sim 2\text{pc}$ \cite{MerrittD:2010} and inside $r \,\mbox{{\scriptsize $\stackrel{<}{\sim}$}}\, 200 \text{pc} $. What happens for even lower axion masses, in the MDM domain? $M(r)$ is dominated by baryons in the first few $\text{kpc}$ and is unchanged. For a toy model of the dark halo in mixed DM cosmologies we take $\rho_{\text{DM}}(r)=\rho_{\text{ULA}}(r)+\rho_{\text{CDM}}(r)$, where the ULA component $\rho_{\text{ULA}}(r)$ is described by \eqref{eq:halodensity} and the CDM component $\rho_{\text{CDM}}(r)$ by a separate NFW profile~\cite{AnderhaldenDiemand:2013}. These two profiles are related by demanding that their relative abundance equals the cosmological one\footnote{The relative cosmological abundance of CDM and axions is given by $f_{i}=\Omega_{i}/\Omega_{d}$, where $i=a,c$ stands for the axion $\Omega_{a}$ and cold $\Omega_{c}$ components, and the total dark sector density is $\Omega_{d}=\Omega_{a}+\Omega_{c}$. Present CMB constraints allow for $f_{a} \leq 0.53$ for $m=10^{-24} \text{eV}$, $f_{a} \leq 0.33$ $(m=10^{-25} \text{eV})$, $f_{a} \leq 0.05$ $(m=10^{-26} \text{eV})$ and $f_{a} \leq 0.03$ $(m=10^{-27} \text{eV})$ \cite{HlozekMarsh:2017}.}
at $\sim r_{200}$. For our simple estimate, we take the cosmological abundance ratio to be approximately the same throughout the halo~\cite{AnderhaldenDiemand:2013} and assume that the NFW parameters of the cold DM component are the same as in the pure-CDM MW~\cite{SalucciNesti:2013}. With these assumptions in mind, for $m \sim 10^{-24} \text{eV}$ the ratio in Eq.~\eqref{eq:Doppler_ratio} would have the value $\sim 10^{-6}$ for stars inside $r \,\mbox{{\scriptsize $\stackrel{<}{\sim}$}}\, 30 \text{pc} $. For orbital velocities of the order of $100 \text{km}/\text{s}$, detection over $100\text{yr}$ would require an unrealistic helioseismology-level \cite{Brito:2015yfh} precision of $0.1\text{m}/\text{s}$. For $m \sim 10^{-27} \text{eV}$ the ratio in Eq.~\eqref{eq:Doppler_ratio} can be as high as $\sim 10^{-4}$; however, the oscillation timescales become very large, $\sim 10^5 \text{yr}$. Note that these radial velocities refer to the halo center and that the Solar System itself would experience a radial velocity fluctuation, albeit one much more suppressed at the Solar galactocentric distance of $8 \text{kpc}$.
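A direct evaluation of Eq.~\eqref{eq:Doppler_ratio} makes these orders of magnitude explicit (a sketch; the parameter values below are the illustrative normalizations of the equation, not fitted quantities):

```python
# Amplitude of the radial-velocity oscillation relative to the orbital
# velocity, Eq. (Doppler_ratio), with the normalizations used in the text.
def doppler_ratio(m_over_m22, rho_ula_Msun_pc3, r_pc, M_tot_Msun):
    return (2.78e-7 / m_over_m22
            * (rho_ula_Msun_pc3 / 1e2)
            * (r_pc / 1e2) ** 1.5
            * (M_tot_Msun / 1e9) ** -0.5)

# FDM reference point: m = m22, rho_ULA = 100 Msun/pc^3 at r = 100 pc,
# enclosed mass 1e9 Msun -> ratio ~ 3e-7, the ~1e-7 scale quoted above.
print(doppler_ratio(1.0, 1e2, 1e2, 1e9))    # 2.78e-7
# Lowering the mass to m ~ 1e-24 eV (1e-2 m22) raises the ratio a hundredfold
# at fixed density, radius and enclosed mass.
print(doppler_ratio(1e-2, 1e2, 1e2, 1e9))   # 2.78e-5
```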
\subsection{Constraining ULA density at the Galactic center} \label{sec:darkhalo_res}
We now show that the motion of bright S stars can be used to constrain ULA DM densities at the center of our Galaxy.
Data from the last 20 years has been used to probe Yukawa-like modifications of gravity \cite{HeesPRL:2017}, and new data is expected to be of further help in this endeavour~\cite{HeesELT:2017}. This year the S0-2 star will be at its closest distance from the SMBH, and a redshift measurement is expected~\cite{HeesProc:2017}.
The behaviour of matter in the sub-parsec region is dominated by the SMBH gravity. We should stress that our understanding of DM behaviour in the presence of the SMBH and over galactic evolution timescales is still in its infancy and mostly focused on CDM~\cite{ GenzelEisenhauer:2001, MerrittD:2010} (but see also Refs.~\cite{Hui:2016ltb,Ferreira:2017pth, Helfer:2017a, LopezAlbores:2018, 3caveat}). The core density estimate from Section~\ref{sec:darkhalo_descr} may not apply in the sub-parsec region, since the SMBH can make the region denser, e.g. by adiabatic growth \cite{MerrittD:2010}. Thus, we use constraints from previous analyses of the orbits of S stars as a rough upper limit on the extended background and treat the fraction of the ULA component as a free parameter. Present constraints allow for a $1\sigma$ upper limit of $M_{\text{ext}}=10^{-2} M_{\text{SMBH}}$, where $ M_{\text{SMBH}}=4.02 \times 10^6 M_{\odot}$, and the background radius cutoff was fixed at $R=11 \text{mpc}$ in order to encompass the whole of the S0-2 orbit~\cite{BoehleGhez:2016}. Some CDM estimates in this region are of order $\sim 10^3 M_\odot$ \cite{GhezSalim:2008}. This background consists of faint stars, compact objects and DM. We arbitrarily take the maximum contribution of ULA, $\lambda_{\text{ULA}}$, to be $30\%$. In this approach we impose no prior restriction on the axion mass range that we probe, but orbital timescales focus the range on FDM and MDM.
To estimate the impact of the ULA time-dependent force, consider a simplified model where stellar orbits are influenced only by the (non-rotating) SMBH at the post-Newtonian (PN) level and all other matter components are incorporated in a homogeneous spherically-symmetric extended background $\rho_{\text{ext}}$. The stellar equations of motion are, from Sections \ref{sec:NOscillatons} and \ref{sec:N_dynamics} and Refs.~\cite{RubilarEckart:2001, poissonwillbook},
\begin{eqnarray}
\partial^2_t \vec{r} &=&-\frac{1}{r^3}\Big[\Big(1+\frac{4}{rc^2}+\frac{v^2}{c^2}\Big)\vec{r}-4\vec{v}\frac{\vec{v} \cdot\vec{r}}{c^2}\Big]\nonumber\\
&-&\frac{M_{\text{ext}}(r)}{r^3}\vec{r}+4\pi \lambda_{\text{ULA}} \rho_{\text{ext}}\vec{r}\cos{(2\omega t+2\Upsilon)}\,.\label{eq_motion_smbh}
\eeq
In the above we use dimensionless quantities,
\begin{equation} \label{eq_variables}
t =\frac{t}{\tau_{\text{dyn}}}\,,\quad r =\frac{r}{10\text{mpc}}\,,\quad M =\frac{M}{M_{\text{SMBH}}}\,,
\end{equation}
$\tau_\text{dyn}=\big(GM_{\text{SMBH}}/r_{I}^3\big)^{-1/2}=7.4\text{yr}$ is the dynamical timescale associated with the gravity of the SMBH at $r_{I}=10\text{mpc}$, and $\Upsilon$ represents the phase difference.
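The quoted dynamical timescale is straightforward to verify numerically (a sketch using standard physical constants; only $M_{\text{SMBH}}$ and $r_I$ come from the text):

```python
# Verify tau_dyn = (G * M_SMBH / r_I^3)^(-1/2) ~ 7.4 yr for
# M_SMBH = 4.02e6 Msun at r_I = 10 mpc.
G    = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
Msun = 1.989e30           # solar mass [kg]
pc   = 3.086e16           # parsec [m]
yr   = 3.156e7            # year [s]

M_smbh  = 4.02e6 * Msun
r_I     = 10e-3 * pc      # 10 mpc
tau_dyn = (G * M_smbh / r_I**3) ** -0.5
print(tau_dyn / yr)       # ~ 7.4 yr, as quoted
```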
In the context of our results from Sections \ref{sec:circularmotionexample}-\ref{sec:toy_model} for the circular example, the presence of the additional SMBH potential does not change the picture significantly, as it can be incorporated in $V(r_0)$.
Most studied S stars around Sgr $\text{A}^{\star}$ are on highly eccentric orbits, and the detailed treatment of such motion is outside the scope of this work. For such orbits, Eq.~\eqref{eq:N_res_freq} is not applicable, as the resonant frequency can change by as much as one order of magnitude along the orbit. From here on we focus only on the S0-2 star. Its (initial) orbital parameters are $a_{0}=4.878(8)\text{mpc}$, $e_{0}=0.892(2)$ and $T_{0}=15.92(4)\text{yr}$ \cite{BoehleGhez:2016}.
Present constraints on periastron precession are: $|\dot{w}_{0}|<1.7\times10^{-3}\,\text{rad}/\text{yr}$~\cite{HeesPRL:2017,HeesProc:2017}.
We have numerically solved the equations of motion for different extended masses $M_{\text{ext}}$, ULA abundances $\lambda_{\text{ULA}}$ and phase differences $\Upsilon$.
The system starts from the apocenter, and we monitor the secular (osculating) orbital parameters, averaging over one orbit. The motion is contained within a fixed orbital plane, even at the PN level \cite{poissonwillbook}, and, ipso facto, the orbital inclination and the longitude of the ascending node are fixed, which leaves us with the semi-major axis, eccentricity, orbital period and periastron precession. From this set of orbital elements, only the periastron precession is affected by PN effects \cite{poissonwillbook} and the homogeneous background \cite{RubilarEckart:2001, ZakharovNucita:2007, JiangLin:1985} when the time-dependent ULA force is neglected.
General orbital behaviour is similar to the one described in Section \ref{sec:toy_model}. Numerical calculations show, for the cases that we examined, that (anti-)resonant behaviour of the radial coordinate can be found both when $2\omega=(2n+1)\tilde{\Omega}$ (odd resonances) and when $2\omega=2n\tilde{\Omega}$ (even resonances), where $\tilde{\Omega}=2\pi/T$ is the orbital mean motion, $T$ is the orbital period and $n \in \mathbb{N}$\footnote{Similar ratios are known from resonant phenomena (mean-motion resonances) in celestial mechanics and galactic astronomy \cite{book_murraydermott, binney2011galactic}. This observation demands further investigation.}. Resonances become less pronounced, in absolute terms, with increasing $n$. When $\Upsilon=0$ only odd resonances occur and their shape is Gaussian-like. The ``sign'' of these resonances, i.e. whether they lead to an increase or decrease of the orbital radius, as well as the timescales involved, depend on the environment. We also find, as in Section \ref{sec:toy_model}, a (non-symmetric) window around the dominant resonance inside which oscillations are slightly amplified with respect to the other driving frequencies. These cases can be analytically understood with the help of first-order perturbation theory (Appendix \ref{AppEliptical}). In Fig.~\ref{fig:SMBH_semi_major} we show the secular evolution of the semi-major axis for the first three resonant frequencies, as well as for one non-resonant frequency and one close to the dominant resonance. The qualitative behaviour of $e$ and $T$ is similar. In Tables \ref{tab:S2_prim_res} and \ref{tab:S2_sec_res} we list the amplitudes of the secular changes of $a$ for the two dominant resonant frequencies, as well as their sign. Notice that the current observational precision of $0.16\%$ is enough to probe almost all the scenarios that we examined during the resonant amplification.
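The resonant axion masses quoted in Tables \ref{tab:S2_prim_res} and \ref{tab:S2_sec_res} follow directly from the resonance condition and the S0-2 orbital period (a sketch; only $T_0$ and the condition $2\omega=(2n+1)\tilde{\Omega}$ are taken from the text):

```python
import math

# Odd resonances satisfy 2*omega = (2n+1)*Omega_tilde, with omega = m c^2/hbar
# the field oscillation frequency and Omega_tilde = 2*pi/T the mean motion.
hbar_eVs = 6.582e-16                 # hbar [eV s]
yr       = 3.156e7                   # year [s]
T        = 15.92 * yr                # S0-2 orbital period [s]

Omega       = 2 * math.pi / T        # mean motion [1/s]
m_primary   = hbar_eVs * Omega / 2   # n = 0: 2*omega = Omega_tilde
m_secondary = 3 * m_primary          # n = 1: 2*omega = 3*Omega_tilde

# In units of m22 = 1e-22 eV these reproduce the tabulated resonant masses.
print(m_primary / 1e-22, m_secondary / 1e-22)   # ~0.041 and ~0.124
```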
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{semi_major_SMBH_B.pdf}
\caption{Relative change of the secular semi-major axis $a$ with respect to the value $a_0$ obtained in the absence of the time-dependent force, for three dominant resonant (blue; solid, dashed and dashed-dotted lines), one non-resonant (dashed red line) and one near-resonant (dashed black line) frequencies. The background is, for illustrative purposes, taken as the least conservative one: $M_{\text{ext}}=10^{-2}M_{\text{SMBH}}$, $\lambda_{\text{ULA}}=0.3$, and we take $\Upsilon=0$. The axion particle masses correspond to multiples of the mean motion, and some of the values can be found in Tables \ref{tab:S2_prim_res} and \ref{tab:S2_sec_res}. }
\label{fig:SMBH_semi_major}
\end{figure}
For $\Upsilon=\pi/2$ only even resonances occur, with the same phenomenology as their odd counterparts. For all other phase differences of the form $\Upsilon=\pi/m$, $m \in \mathbb{N} \setminus \{1,2\}$, the resonance evolution is akin to a sinusoidal shape (see Fig.~\ref{fig:SMBH_semi_major_phase}). For a given resonance, the amplitudes of the orbital parameters are mildly dependent on $\Upsilon$: at most (for $a$) they were lowered by $50 \%$ (for $m=3$) compared to the $m=1$ or $m=2$ case, but notice that now $a$ periodically becomes both larger and smaller than its undisturbed value (in the absence of the time-dependent force). This qualitative dependence on the relative phase is a consequence of the absence of time-translation symmetry, as discussed in Section \ref{sec:symmetries}.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Motion_around_SMBH_Phi_rel.pdf}
\caption{Relative change of the secular semi-major axis $a$, with respect to the reference one, for the dominant (odd) resonant frequency and different values of the phase difference $\Upsilon$. The background is the same as in Fig.~\ref{fig:SMBH_semi_major}.}
\label{fig:SMBH_semi_major_phase}
\end{figure}
\begin{table}[]
\centering
\caption{Secular amplitudes of the S0-2 semi-major axis $a$ corresponding to the primary resonance, with sign denoted by $\sigma$: $+$ for amplification and $-$ for depletion. In each case the homogeneous background mass $M_{\text{ext}}$ (in units of $[0.01 M_{\text{SMBH}}]$) and the ULA abundance $\lambda_{\text{ULA}}$ were varied. We fix the phase difference to $\Upsilon=0$.
Note that the resonance is located at frequencies $\omega=m/\hbar$.}
\label{tab:S2_prim_res}
\begin{tabular}{|c c | c | c c |}
\hline \hline
$M_{\text{ext}} $ & $m\,[m_{22}]$ & $\lambda_{\text{ULA}}$ & $\sigma$ & $\langle a \rangle \,[\text{mpc}]$ \\
\hline
$1$ & $0.0412$ & $0.3$ & $+$ & $5.089$ \\
$$ & $$ & $0.05$ & $+$ & $4.963$ \\
$$ & $$ & $0.005$ & $-$ & $4.854$ \\
\hline
$0.1$ & $0.0413$ & $0.3$ & $+$ & $4.931$ \\
$$ & $$ & $0.05$ & $+$ & $4.895$ \\
\hline
$0$ & $/$ & $0$ & $/$ & $4.878$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Same as Table \ref{tab:S2_prim_res} for secondary resonances.}
\label{tab:S2_sec_res}
\begin{tabular}{|c c | c | c c |}
\hline \hline
$M_{\text{ext}}$ & $m\,[m_{22}]$ & $\lambda_{\text{ULA}}$ & $\sigma$ & $\langle a \rangle \,[\text{mpc}]$ \\
\hline
$1$ & $0.124$ & $0.3$ & $+$ & $4.932$ \\
$$ & $$ & $0.05$ & $-$ & $4.859$ \\
$0.1$ & $0.124$ & $0.3$ & $+$ & $4.887$ \\
\hline
\end{tabular}
\end{table}
The extended background leads to a retrograde periastron shift of stellar orbits, as reviewed in Section \ref{sec:toy_model}. On the other hand, relativistic effects of the strong SMBH gravity lead to a prograde periastron shift, potentially masking the previous effect. The periastron precession was found by identifying successive radial maxima (in order to evade the numerical difficulties explained in Ref.~\cite{RubilarEckart:2001}). The sign of $\dot{w}$ depends on the background mass $M_{\text{ext}}$, i.e. on whether the SMBH PN contribution or the background Newtonian contribution dominates. The shape of the periastron precession as a function of time is similar to that of the other orbital parameters. For resonant motion, the base value of $\dot{w}$ tends to be lower (in relative terms) compared to the non-resonant one, and $\dot{w}$ increases when amplification occurs. Depending on the values of $M_{\text{ext}}$ and $\lambda_{\text{ULA}}$, this can lead to a change of sign and, as a consequence, of the direction of the periastron precession. This manifests itself in the apoastron developing a helix-like trajectory, as in Fig.~\ref{fig:SMBH_apoastron}. When the oscillating frequency is similar to the resonant one, the range of $\dot{w}$ values is similar to the one corresponding to the resonant frequency.
For all cases that we considered, the value of $\dot{w}$ was inside present constraints.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{apoastron_S2_v2.pdf}
\caption{Apoastron shift of the S0-2 star $(e=0.892)$ during $t=300\tau_{\text{dyn}}$ in the orbital plane, for $M_\text{ext}=8 \times 10^{-3}M_{\text{SMBH}}$, $\lambda_{\text{ULA}}=0.3$, $\Upsilon=0$ and resonant motion $(2\omega=\tilde{\Omega})$. The orbits are presented at the beginning (full line) and at the end (dashed line) of the interval. Black dots correspond to the apoastron position during this interval. Notice that the SMBH dominates over the background in determining the periastron shift direction (retrograde), but during resonant motion a short-lived change of direction of the apoastron shift occurs. Orbital coordinates correspond to $x=r\cos{\varphi}$ and $y=r\sin{\varphi}$ and are given in units of $10 \text{mpc}$.}
\label{fig:SMBH_apoastron}
\end{figure}
Inference of the ULA mass and abundance from the semi-major axis is highly degenerate, as the phase difference, the type of resonance and the timescales over which the resonance develops all contribute significantly to the problem\footnote{Whether the inference of other orbital elements could significantly lower the degeneracy of the problem should be the subject of further studies.}. A rough picture can be obtained by focusing on a near-resonant window and first-order perturbation theory, as described in Fig. \ref{fig:contur_plot_S2_A}. Long-term and precise observations of this and other S stars (and comparison with other constraints) will allow constraining ULA densities for axion masses that correspond to the resonant frequencies. Stars with longer orbital periods (or, equivalently, smaller axion masses) cannot be probed in this way, as the dynamical timescales become large. However, this type of resonant phenomenon is known, in celestial mechanics and galactic astronomy \cite{binney2011galactic, book_murraydermott}, to leave fingerprints in the orbital parameter space, something that deserves further scrutiny. One of the better known of these structures, and potentially similar to this problem, are the Kirkwood gaps in the distribution of semi-major axes of the main-belt asteroid orbits \cite{book_murraydermott}. Identification of these structures could be possible with the observation of a large number of stars on the central sub-parsec and parsec scales~\cite{ValluriDebattista:2012}.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{plots_1s_order_S2_A.pdf}
\caption{Relative change of the secular semi-major axis $a$ amplitude, with respect to the reference one, for different particle masses and ULA abundances. Results are obtained using first-order perturbation theory (see Appendix \ref{AppEliptical}) for the primary near-resonant window. We have neglected the dependence on the phase difference, as it is, in this case, part of the argument of a harmonic function and influences only the time at which the semi-major axis develops a maximum. Note that the maximum values are obtained for the $m=5 \times 10^{-2} m_{22}$ contour, as the primary resonance corresponds to $m=4.1 \times 10^{-2} m_{22}$ (see Table \ref{tab:S2_prim_res}).}
\label{fig:contur_plot_S2_A}
\end{figure}
\subsection{Spectroscopic fingerprints of halo spacetime} \label{sec:spectro}
In Appendix \ref{AppGredshift} we explore the gravitational redshift induced by the oscillaton spacetime oscillation. From the observational perspective, this effect may manifest itself as a systematic error in other spectroscopic measurements, such as radial velocity (RV) curve determination~\cite{2caveat}. For the S0-2 star, data reduction in a similar context has fortunately been addressed already: the presence of a binary companion of S0-2 would induce a periodic signature in RV measurements. In Ref.~\cite{ChuDoHees:2018} an RV residual was formed by subtracting the orbital variation of the radial velocity. A Lomb-Scargle periodogram was performed on this residual, with periods sampled between $1$ and $150$ days, which corresponds to $m$ between $4 \times 10^{-2} m_{22}$ and $0.63m_{22}$. No significant periodic signal was found. Note that the average uncertainties of the RV points are $\sim (30-50) \text{km}/\text{s}$. Estimates suggest that with Extremely Large Telescopes (ELT) such uncertainties will be lowered by roughly an order of magnitude \cite{HeesProc:2017}.
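The periodogram search described above can be sketched on synthetic data (everything here, the epochs, amplitudes, noise level and injected period, is invented for illustration; a simple least-squares sinusoid fit stands in for the Lomb-Scargle implementation of the cited analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
t  = np.sort(rng.uniform(0.0, 300.0, 60))   # irregular observation epochs [days]
true_period = 25.0                          # injected signal period [days]
rv = 2.0 * np.sin(2 * np.pi * t / true_period) + 0.3 * rng.normal(size=t.size)

periods = np.linspace(5.0, 150.0, 800)      # trial periods [days]
power = np.empty_like(periods)
for i, p in enumerate(periods):
    # Fit sin + cos + constant at trial period p; power = explained variance.
    A = np.column_stack([np.sin(2*np.pi*t/p), np.cos(2*np.pi*t/p), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
    power[i] = np.var(rv) - np.var(rv - A @ coef)

best_period = periods[int(np.argmax(power))]
print(best_period)   # recovers a period close to the injected 25 days
```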
Taking the MW estimates from Section~\ref{sec:darkhalo_descr}, we find that the gravitational redshift (modulation) amplitude is $z \sim 10^{-6}$ (see Appendix~\ref{AppGredshift}). This would lead to a periodic change of $\sim 0.1 \text{km}/\text{s}$, corresponding to a mass $m \sim 0.8m_{22}$. This value is smaller than the expected uncertainties with ELT. The presence of baryons, which contribute to the static metric component, would make this amplitude even lower.
\section{Discussion}
The motion of objects in time-periodic spacetimes has a number of distinctive features. Surprisingly,
and despite the potential application to dark matter physics at the galactic center, most of these features had not been discussed previously.
We have shown that the periodic forcing exerted by oscillating dark matter on orbiting stars and planets leads to unique
features in the motion of such objects. When applied to stars close to the galactic center, such features may well be observed in the near future.
Together with resonances excited by {\it non-axisymmetric} ultralight dark matter close to BHs~\cite{Ferreira:2017pth,Brito:2014wla,Fujita:2016yav,Brito:2017zvb,Cardoso:2018tly,Baumann:2018vus},
or by homogeneous oscillating DM in binary systems~\cite{Khmelnitsky:2013lxt}, our results strongly suggest that the next few years will see either stringent constraints on such DM models,
or strong indications that they describe our universe.
\begin{acknowledgments}
We thank Philippe Grandclement for useful correspondence and for sharing with us his spectral code to build oscillatons, which we used to check our own
numerics. We would like to thank the anonymous reviewer for useful comments. M.B. acknowledges discussions on galactic phenomenology with Nemanja Martinović and the generous support of the GWverse COST Action CA16104 on ``Black holes, gravitational waves and fundamental physics''.
M. F. acknowledges financial support provided by Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia Grant number PD/BD/113481/2015 awarded in the framework of the Doctoral Programme IDPASC - Portugal.
The authors acknowledge financial support provided under the European Union's H2020 ERC Consolidator Grant ``Matter and strong-field gravity: New frontiers in Einstein's theory'' grant agreement no. MaGRaTh--646597. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development $\&$
Innovation.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 690904.
\end{acknowledgments}
\label{secintro}
Extremal metrics were introduced by Calabi \cite{c1}. Let $(X,\omega)$ be a K\"ahler manifold of complex dimension $n$. An extremal metric is a critical point of the functional
\begin{eqnarray*}
g & \mapsto & \int_X (S(g))^2 \frac{\omega_g^n}{n!}
\end{eqnarray*}
defined on K\"ahler metrics $g$ representing the K\"ahler class $[\omega]$, where $S(g)$ is the scalar curvature of the metric $g$.
Constant scalar curvature K\"ahler metrics (CSCK for short), and in particular K\"ahler-Einstein metrics, are extremal metrics. In this work we will focus on the polarized case, assuming that there is an ample holomorphic line bundle $L\rightarrow X$ with $c_1(L)=[\omega]$. In this special case, it has been conjectured, by Yau in the K\"ahler-Einstein case \cite{yau} and then in the CSCK case through the work of Tian \cite{tian} and Donaldson \cite{Don02}, that the existence of a CSCK metric representing $c_1(L)$ should be equivalent to a GIT stability notion for the pair $(X,L)$. This conjecture has been extended to extremal metrics by Sz\'ekelyhidi \cite{sz} and Mabuchi \cite{ma12}.
\par
Let $(X,L)$ be a polarized K\"ahler manifold. Donaldson has shown \cite{Don01} that if $X$ admits a CSCK metric in $c_1(L)$, and if $\mathrm{Aut}(X,L)$ is discrete, then the CSCK metric can be approximated by a sequence of balanced metrics. This approximation result implies in particular the uniqueness of a CSCK metric in its K\"ahler class.
This method has been adapted by Mabuchi \cite{ma04} to the extremal metric setting to prove uniqueness of an extremal metric up to automorphisms in a polarized K\"ahler class. Then, Chen and Tian proved uniqueness of an extremal metric in its K\"ahler class up to automorphisms, with no polarization assumption \cite{ct}.
\par
In a sequel to his work on balanced metrics \cite{Don05}, Donaldson shows that if $\mathrm{Aut}(X,L)$ is discrete, a CSCK metric is an absolute minimum of the Mabuchi energy $E$, or K-energy, introduced by Mabuchi \cite{ma}.
The approximation result of Donaldson does not hold for CSCK metrics if the automorphism group is not discrete: there are counter-examples by Ono, Yotsutani and the first author \cite{osy}, and by Della Vedova and Zuddas \cite{dz}. However, Li managed to show that even if $\mathrm{Aut}(X,L)$ is not discrete, a CSCK metric provides an absolute minimum of $E$ \cite{li}.
\par
By a theorem of Calabi \cite{c2}, extremal metrics are invariant under a maximal connected compact subgroup $G$ of the reduced automorphism group $\mathrm{Aut}_0(X)$ \cite{fuj}. Any two such compact groups are conjugate in $\mathrm{Aut}_0(X)$, and the study of extremal metrics is done modulo one such group.
In the extremal setting, the modified K-energy $E^G$ (see definition \ref{def:modK}) plays the role of the K-energy for CSCK metrics. This functional has been introduced independently by Guan \cite{gu}, Simanca \cite{si}, and Chen and Tian \cite{ct}, and is defined on the space of $G$-invariant K\"ahler potentials with respect to a $G$-invariant metric. In \cite{ct}, Chen and Tian prove that extremal metrics minimize the modified K-energy up to automorphisms of the manifold, with no polarization assumption. In this paper, we give a different proof of this result in the polarized case. We generalize Li's work to extremal metrics, using certain weighted balanced metrics, called $\sigma$-balanced metrics (see definition \ref{def:sigma-balanced} in section 2):
\begin{theointro}
\label{theo:min}
Let $(X,L)$ be a polarized K\"ahler manifold and $G$ a maximal connected compact sub-group of the reduced automorphism group $\mathrm{Aut}_0(X)$. Then $G$-invariant extremal metrics representing $c_1(L)$ attain the minimum of the modified K-energy $E^G$.
\end{theointro}
The proof relies on two observations. We will consider a sequence of Fubini-Study metrics $\omega_k$ associated to Kodaira embeddings of $X$ into higher and higher dimensional projective spaces. The first observation is that if we define $\omega_k$ to be the metric associated by the map $Hilb_k$ (see definition in section~\ref{sec:background}, equation~(\ref{eq:Hilb})) to an extremal metric in $c_1(L)$, then $\omega_k$ will be close to a $\sigma$-balanced metric. The second point
is that $\sigma$-balanced metrics, if they exist, are minima of the functionals $Z^{\sigma}_k$ (section~\ref{sec:background}, equation~(\ref{eq:Zk})), which converge to the modified Mabuchi functional. A careful analysis of the convergence properties of the $\omega_k$ and $Z_k^{\sigma}$ then yields the proof of our main result.
\begin{remark}
We shall mention that Guan shows in \cite{gu} that extremal metrics are local minima, assuming the existence of $C^2$-geodesics in the space of K\"ahler potentials.
\end{remark}
\subsection{Plan of the paper}
In section~\ref{sec:background}, we review the definition of extremal metrics and recall quantization of CSCK metrics. We then introduce $\sigma$-balanced metrics and the relative functionals. Then in section~\ref{sec:min}, we prove the main theorem. In the Appendix, we collect some facts and proofs of properties of $\sigma$-balanced metrics.
\subsection{Acknowledgments}
The first author is supported by MEXT, Grant-in-Aid for Young Scientists (B), No. 22740041.
The part of the article concerning $\sigma$-balanced metrics was presented by the first author at the 2011 Complex Geometry and Symplectic Geometry Conference held at the University of Science and Technology of China.
He would like to thank the organizers for the invitation and the kind hospitality.
The second author would like to thank Hongnian Huang, Vestislav Apostolov and Andrew Clarke for useful discussions and their interest in this work, as well as Song Sun and Valentino Tosatti for their encouragement.
\section{Extremal metrics and Quantization}
\label{sec:background}
\subsection{Quantization}
Let $(X,L)$ be a polarized K\"ahler manifold of complex dimension $n$. Let $\mathcal{H}$ be the space of smooth K\"ahler potentials with respect to a fixed K\"ahler form $\omega \in c_1(L)$ :
\begin{eqnarray*}
\mathcal{H}= \lbrace \phi \in C^{\infty} (X) \;\vert \; \omega_{\phi}:=\omega + \sqrt{-1}\partial\overline\partial \phi > 0 \rbrace.
\end{eqnarray*}
For each $k$, we consider the space $\mathcal{H}_k$ of hermitian metrics on $L^{\otimes k}$. To each element $h\in \mathcal{H}_k$ one associates the metric $ -\sqrt{-1} \partial\overline\partial \log(h)$ on $X$, identifying the spaces $\mathcal{H}_k$ with $\mathcal{H}$. Write $\omega_h$ for the curvature of the hermitian metric $h$ on $L$. Fixing a base metric $h_0$ in $\mathcal{H}_1$ such that $\omega=\omega_{h_0}$, the correspondence reads
\begin{eqnarray*}
\omega_{\phi}=\omega_{e^{-\phi}h_0}=\omega+\sqrt{-1}\partial\overline\partial \phi .
\end{eqnarray*}
We denote by $\mathcal{B}_k$ the space of positive definite Hermitian forms on $H^0(X,L^{\otimes k})$, and let $N_k=\dim H^0(X,L^{\otimes k})$.
The spaces $\mathcal{B}_k$ are identified with $GL_{N_k}(\mathbb{C})/ U(N_k)$ using the base metric $h_0^k$. These symmetric spaces carry distances $d_k$ induced by the Riemannian metrics:
\begin{eqnarray*}
(H_1,H_2)_h=Tr(H_1H^{-1}\cdot H_2 H^{-1}).
\end{eqnarray*}
\noindent There are maps :
\begin{eqnarray*}
Hilb_k : \mathcal{H} & \rightarrow &\mathcal{B}_k \\
FS_k : \mathcal{B}_k &\rightarrow &\mathcal{H}
\end{eqnarray*}
defined by :
\begin{eqnarray*}
\forall h\in \mathcal{H}\;, \; s\in H^0(X,L^{\otimes k})\;, \; \vert\vert s\vert\vert ^2_{Hilb_k(h)}=\int_X \vert s \vert_{h^k}^2 d\mu_h
\end{eqnarray*}
and
\begin{eqnarray*}
\forall H \in \mathcal{B}_k\; , \;
FS_k(H)= \frac{1}{k} \log \bigg(\frac{1}{N_k} \sum_{\alpha} \vert s_{\alpha}\vert_{h_0^k}^2\bigg)
\end{eqnarray*}
where $\lbrace s_{\alpha}\rbrace$ is an orthonormal basis of $H^0(X,L^{\otimes k})$ with respect to $H$
and $d\mu_{h}=\dfrac{\omega_{h}^n}{ n!}$ is the volume form.
Note that $\omega_{FS_k(H)}$ is the pull-back of the Fubini-Study metric on $\mathbb{C}\mathbb{P}_{N_k-1}$ under the projective embedding induced by $\lbrace s_{\alpha}\rbrace$.
A result of Tian \cite{tian90} states that any K\"ahler metric $\omega_{\phi}$ in $c_1(L)$ can be approximated by projective metrics, namely
\begin{eqnarray*}
\lim_{k\rightarrow \infty} FS_k \circ Hilb_k (\phi) = \phi
\end{eqnarray*}
where the convergence is uniform on $C^2(X,\mathbb{R})$ bounded subsets of $\mathcal{H}$.
\noindent The metrics satisfying
$$
FS_k\circ Hilb_k(\phi)=\phi
$$
are called balanced metrics; by results of Zhang \cite{zha} and Wang \cite{wa}, the existence of such metrics is equivalent to the Chow stability of $(X,L^k)$.
Let $\mathrm{Aut}(X,L)$ be the group of automorphisms of the pair $(X,L)$.
From the work of Donaldson \cite{Don01}, if $X$ admits a CSCK metric in the K\"ahler class $c_1(L)$, and if $\mathrm{Aut}(X,L)$ is discrete, then there are balanced metrics $\omega_{\phi_k}$ for $k$ sufficiently large, with
$$
FS_k\circ Hilb_k(\phi_k)=\phi_k
$$
and these metrics converge to the CSCK metric on $C^{\infty}(X,\mathbb{R})$ bounded subsets of $\mathcal{H}$.
In the proof of these results, the density of state function plays a central role.
For any $\phi\in\mathcal{H}$ and $k>0$, let $\lbrace s_{\alpha} \rbrace$ be an orthonormal basis of $H^0(X,L^k)$ with respect to $Hilb_k(\phi)$. The $k^{th}$ Bergman function of $\phi$ is defined to be :
$$
\rho_k(\phi)=\sum_{\alpha}\vert s_{\alpha}\vert^2_{h^k}.
$$
It is well known that a metric $\phi\in Hilb_k(\mathcal{H})$ is balanced if and only if $\rho_k(\phi)$ is constant.
A key result in the study of balanced metrics is the following expansion:
\begin{theorem}[\cite{cat},\cite{ruan},\cite{tian90},\cite{zel}]
The following uniform expansion holds
$$
\rho_k(\phi)=k^n+A_1(\phi)k^{n-1}+A_2(\phi)k^{n-2}+...
$$
where $A_1(\phi)=\frac{1}{2}S(\phi)$ is half of the scalar curvature of the K\"ahler metric $\omega_\phi$,
and for any $l$ and $R\in \mathbb{N}$, there is a constant $C_{l,R}$ such that
$$
\vert\vert\rho_k(\phi) -\sum_{j\leq R} A_j k^{n-j} \vert \vert_{C^l} \leq C_{l,R}\, k^{n-R-1}.
$$
\end{theorem}
\noindent As a corollary, if $\phi_k=FS_k\circ Hilb_k(\phi)$, then
$$
\phi_k-\phi=\frac{1}{k}\log \rho_k(\phi)\rightarrow 0
$$
as $k \rightarrow \infty$.
In particular we have the convergence of metrics
\begin{equation}
\label{cor:ber}
\omega_{\phi_k}=\omega_{\phi}+O(k^{-2}).
\end{equation}
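The estimate \eqref{cor:ber} can be seen directly from the Bergman expansion (a sketch keeping only the leading terms; the constant factor $k^n$ drops out of $\partial\overline\partial\log$):
\begin{eqnarray*}
\omega_{\phi_k}-\omega_{\phi} = \frac{\sqrt{-1}}{k}\partial\overline\partial \log \rho_k(\phi)
= \frac{\sqrt{-1}}{k}\partial\overline\partial \log\Big(1+\frac{1}{2k}S(\phi)+\mathcal{O}(k^{-2})\Big)
= \mathcal{O}(k^{-2}).
\end{eqnarray*}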
By integration over $X$ we also deduce
$$
\int_X \rho_k(\phi) d\mu_{\phi}=k^n\int_X d\mu_{\phi}+ k^{n-1}\frac{1}{2}\int_X S(\phi) d\mu_{\phi}+\mathcal{O}(k^{n-2})
$$
where $S(\phi)$ is the scalar curvature of the metric $g_{\phi}$ associated to the K\"ahler form $\omega_{\phi}$.
Thus
\begin{equation}
\label{cor:Nk}
N_k=k^n\mathrm{Vol}(X)+\frac{1}{2} \mathrm{Vol}(X) \underline{S} k^{n-1}+\mathcal{O}(k^{n-2}).
\end{equation}
where
$$
\underline{S}=2n\pi\frac{c_1(X)\cup[\omega]^{n-1}}{[\omega]^n}
$$
is the average of the scalar curvature and $\mathrm{Vol}(X)$ is the volume of $(X,c_1(L))$.
\subsection{The relative setup}
In order to find a canonical representative of a K\"ahler class, Calabi suggested \cite{c1} to look for minima of the functional
\begin{eqnarray*}
Ca : \mathcal{H} & \to & \mathbb{R} \\
\phi & \mapsto & \int_X (S(\phi)-\underline{S})^2 d\mu_{\phi}.
\end{eqnarray*}
In fact, critical points of this functional, which are called extremal metrics, are local minima.
The associated Euler-Lagrange equation is equivalent to the fact that $\mathrm{grad}_{\omega_{\phi}}(S(\phi))$ is a holomorphic vector field. In particular, constant scalar curvature K\"ahler metrics, CSCK for short, are extremal metrics.
\par
By a theorem of Calabi \cite{c2}, the connected component of the identity of the isometry group of an extremal metric is a maximal compact connected subgroup of $\mathrm{Aut}_0(X)$. As all these maximal subgroups are conjugate, the search for extremal metrics can be carried out modulo a fixed group action. Note that $\mathrm{Aut}_0(X)$ is isomorphic to $\mathrm{Aut}_0(X,L)$, the connected component of the identity of $\mathrm{Aut}(X,L)$.
As we will see later, it will be useful to consider a less restrictive setup, working modulo a circle action. We therefore define the relevant functionals in a general situation: we fix a compact subgroup $G$ of $\mathrm{Aut}_0(X,L)$ and denote by $\mathfrak{g}$ its Lie algebra.
\subsubsection{Space of potentials}
We extend the quantization tools to the extremal metrics setup.
\par
Replacing $L$ by a sufficiently large tensor power if necessary, we can assume that $\mathrm{Aut}_0(X,L)$ acts on $L$ (see e.g. \cite{kob}).
Then the $G$-action on $X$ induces a $G$-action on the space of sections $H^0(X,L^k)$. This action in turn provides a $G$-action on the space $\mathcal{B}_k$ of positive definite hermitian forms on $H^0(X,L^k)$ and we define $\mathcal{B}_k^G$ to be the subspace of $G$-invariant elements.
The spaces $\mathcal{B}_k^G$ are totally geodesic in $\mathcal{B}_k$ for the distances $d_k$.
Define $\mathcal{H}^G$ to be the space of $G$-invariant potentials with respect to a $G$-invariant base point $\omega$.
We see from their definitions that we have the induced maps:
\begin{equation}
\label{eq:Hilb}
\begin{array}{cccc}
Hilb_k : & \mathcal{H}^G & \rightarrow &\mathcal{B}_k^G \\
FS_k :& \mathcal{B}_k^G &\rightarrow &\mathcal{H}^G.
\end{array}
\end{equation}
\subsubsection{Modified K-energy}
For a fixed metric $g$, we say that a vector field $V$ is a Hamiltonian vector field if there is a real valued function $f$ such that
$$
V=J\nabla_g f
$$
or equivalently
$$
\omega(V,\cdot)= -df.
$$
For any $\phi\in\mathcal{H}^G$, let $P_{\phi}^G$ be the space of normalized (i.e. mean value zero) Killing potentials with respect to $g_{\phi}$ whose corresponding Hamiltonian vector fields lie in $\mathfrak{g}$, and let $\Pi_{\phi}^G$ be the orthogonal projection from $L^2(X,\mathbb{R})$ onto $P_{\phi}^G$ given by the inner product on functions
$$
(f,g) \mapsto \int_X fg\, d\mu_{\phi}.
$$
\noindent Note that $G$-invariant metrics satisfying $S(\phi)-\underline{S}-\Pi_{\phi}^G S(\phi)=0$ are extremal.
\begin{definition}\cite[Section 4.13]{gbook}
The reduced scalar curvature $S^G$ with respect to $G$ is defined by
$$
S^G(\phi)=S(\phi)-\underline{S}-\Pi_{\phi}^G S(\phi).
$$
The extremal vector field $V^G$ with respect to $G$ is defined by the equation
$$
V^G=\nabla_{g_{\phi}} (\Pi_\phi^G S(\phi))
$$
for any $\phi$ in $\mathcal{H}^G$ and does not depend on $\phi$ (see e.g. \cite[Proposition 4.13.1]{gbook}).
\end{definition}
\begin{remark}
Note that by definition the extremal vector field is real-holomorphic and lies in $J\mathfrak{g}$ where $J$ is the almost-complex structure of $X$, while $JV^G$ lies in $\mathfrak{g}$.
\end{remark}
\begin{remark}
When $G=\lbrace 1 \rbrace$ we recover the normalized scalar curvature. When $G$ is a maximal compact connected subgroup, or a maximal torus, of $\mathrm{Aut}_0(X)$, we find the reduced scalar curvature
and the usual extremal vector field initially defined by Futaki and Mabuchi \cite{fm}.
\end{remark}
\par
We are now able to define the relative Mabuchi K-energy, introduced by Guan \cite{gu}, Chen and Tian \cite{ct}, and Simanca \cite{si}:
\begin{definition}\cite[Section 4.13]{gbook}
\label{def:modK}
The modified Mabuchi K-energy $E^G$ (relative to $G$) is defined, up to a constant, as the primitive of the following one-form on $\mathcal{H}^G$:
$$
\phi \mapsto - S^G(\phi) d\mu_{\phi}.
$$
\end{definition}
\noindent If $\phi\in \mathcal{H}^G$, then the modified K-energy admits the following expression:
$$
E^G(\phi)=-\int_0^1\Big(\int_X \phi\, S^G(t\phi)\, d\mu_{t\phi}\Big)\, dt .
$$
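\noindent This expression is obtained by integrating the defining one-form along the radial path $t\mapsto t\phi$ (a sketch; since the one-form is closed, see \cite[Section 4.13]{gbook}, the primitive does not depend on the choice of path): along this path the variation is $\delta\phi=\phi$ at each time $t$, so
$$
\frac{d}{dt}E^G(t\phi)=-\int_X \phi\, S^G(t\phi)\, d\mu_{t\phi},
$$
and integrating in $t$ from $0$ to $1$ recovers the formula above, up to the choice of additive constant.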
As for CSCK metrics, $G$-invariant extremal metrics whose extremal vector field lies in $J\mathfrak{g}$ are critical points of the relative Mabuchi energy.
\subsubsection{The $\sigma$-balanced metrics}
We present a generalization of balanced metrics adapted to the relative setting of extremal metrics.
\begin{definition}\label{def:sigma-balanced}
Let $\sigma_k(t)$ be a one-parameter subgroup of $\mathrm{Aut}_0(X,L^k)$.
Let $\phi \in \mathcal{H}$. Then $\phi$ is a $k^{th}$ $\sigma_k$-balanced metric if
\begin{equation}
\label{def:bal}
\omega_{kFS_k\circ Hilb_k (\phi)}=\sigma_k(1)^*\omega_{k\phi}
\end{equation}
\end{definition}
Conjecturally, $\sigma$-balanced metrics provide a generalization of the notion of balanced metric and approximate an extremal K\"ahler metric. Indeed, in one direction, assume that we are given $\sigma_k$-balanced metrics $\omega_{\phi_k}$, with $\sigma_k\in \mathrm{Aut}_0(X,L^k)$, such that the $\omega_{\phi_k}$ converge to $\omega_{\infty}$.
Suppose that the vector fields $k\frac{d}{dt}\vert_{t=0}\sigma_k(t)$ converge to a vector field $V_{\infty}\in \mathfrak{h}_0$.
A simple calculation then implies that $\omega_{\infty}$ must be extremal.
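\noindent Let us sketch this calculation, at a heuristic level and assuming the relevant expansions hold uniformly along the sequence. As in proposition \ref{prop:criticL} below, the balanced condition (\ref{def:bal}) can be rewritten as $\rho_k(\phi_k)=k^n e^{\psi_{\sigma_k,\phi_k}}$. The Bergman expansion gives $\rho_k(\phi_k)=k^n(1+\frac{S(\phi_k)}{2k}+\mathcal{O}(k^{-2}))$, while, as in lemma \ref{lem:exppsi}, $k\psi_{\sigma_k,\phi_k}$ converges to $\frac{\theta_{\infty}+c}{2}$, where $\theta_{\infty}$ denotes a holomorphy potential of $V_{\infty}$ with respect to $\omega_{\infty}$ and $c$ is a constant. Comparing the coefficients of $k^{-1}$ yields
$$
S(\omega_{\infty})=\theta_{\infty}+c,
$$
so that $\mathrm{grad}_{\omega_{\infty}}S(\omega_{\infty})$ is a holomorphic vector field, i.e. $\omega_{\infty}$ is extremal.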
\par
We now define the functionals that play the role of finite dimensional versions
of the modified Mabuchi K-energy on $\mathcal{B}_k^G$ and $FS_k(\mathcal{B}_k^G)$ respectively.
First define $I_k=\log\circ \det$ on $\mathcal{B}_k^G$. This functional is defined up to an additive constant when we regard $\mathcal{B}_k^G$ as a space of positive definite Hermitian matrices,
once a suitable basis of $H^0(X,L^k)$ is fixed.
It is shown in \cite{cs} that $I_k$ gives a quantization of the Aubin functional $I$.
However, in the extremal case we need a modified version of the Aubin functional, defined by the first author, in order to fit with the balanced metrics. Let $V\in Lie(\mathrm{Aut}_0(X,L))$ and denote by $\sigma(t)$ the associated one-parameter subgroup of $\mathrm{Aut}_0(X,L)$. Define,
up to a constant, for each $\phi\in \mathcal{H}$ the function $\psi_{\sigma,\phi}$ by
\begin{equation}
\label{def:psi}
\sigma(1)^*\omega_{\phi}=\omega_{\phi}+\sqrt{-1} \partial\overline\partial \psi_{\sigma,\phi}.
\end{equation}
We will see in the sequel how to choose a suitable normalization constant for these potentials.
We then consider a modified $I$ functional defined up to a constant by its
differential:
$$
\delta I^{\sigma}(\phi)(\delta\phi)=\int_X \delta \phi (1+\Delta_{\phi})e^{\psi_{\sigma,\phi}} d\mu_{\phi}
$$
where $\Delta_{\phi}=-g_{\phi}^{i\overline{j}}\frac{\partial}{\partial z_i}\frac{\partial}{\partial \overline{z}_j}$ is the complex Laplacian of $g_{\phi}$. We will also need to consider the potentials $\phi$ as metrics on the tensor powers $L^{\otimes k}$; we thus consider the normalized vector fields $V_k=-\frac{V}{4k}$ and the associated one-parameter subgroups $\sigma_k(t)$.
We choose the normalization
\begin{equation}
\label{eq:norm}
\int_X \exp{(\psi_{\sigma_k,\phi})}\; d\mu_{\phi} = \frac{N_k}{k^n}.
\end{equation}
Then we define for each $k$
$$
\delta I^{\sigma}_k(\phi)(\delta\phi)=\int_X k\delta \phi (1+\frac{\Delta_{\phi}}{k})e^{\psi_{\sigma_k,\phi}} k^n d\mu_{\phi}.
$$
\begin{remark}
If $\sigma$ is the identity, we recover the usual Aubin functional.
\end{remark}
\begin{remark}
This one-form integrates along paths in $\mathcal{H}^G$ to a functional $I_k^{\sigma}(\phi)$ on $\mathcal{H}^G$, which is independent of the path used from $0$ to $\phi$.
The proof of this fact is given in the Appendix, proposition \ref{prop:indep}.
\end{remark}
We define
$\mathcal{L}_k^{\sigma}$ on $\mathcal{H}^G$ and $Z^{\sigma}_k$ on $\mathcal{B}_k^G$ by
\begin{equation}
\label{eq:Lk}
\mathcal{L}^{\sigma}_k = I_k\circ Hilb_k + I_k^{\sigma}
\end{equation}
and
\begin{equation}
\label{eq:Zk}
Z^{\sigma}_k = I_k^{\sigma}\circ FS_k + I_k-k^n\log(k^n)\mathrm{Vol}(X).
\end{equation}
We will show in the following that these functionals converge to the modified $K$-energy in some sense. Note also that $\sigma_k$-balanced metrics are critical points for $\mathcal{L}_k^{\sigma}$ (proposition \ref{prop:criticL}) and, if $FS_k(H_k)$ is a $\sigma_k$-balanced metric for some $H_k\in \mathcal{B}_k^G$, then $H_k$ is a minimum for $Z_k^{\sigma}$ (proposition \ref{prop:Z_min}).
\section{Minima of the modified K-energy}
\label{sec:min}
The aim of this section is to prove Theorem \ref{theo:min}. For the convenience of the reader we give a sketch of the proof.
We will choose the special group $G$ corresponding to the Killing field $JV^*$ associated to the extremal vector field $V^*$ of the extremal K\"ahler metric $\omega^*=\omega_{\phi^*}$. We know that the metrics $\omega_k^*=\omega+\sqrt{-1}\partial\overline\partial \phi_k^*$ with K\"ahler potentials
$\phi^*_k=FS_k\circ Hilb_k (\phi^*)$ converge to $\omega^*$ (\cite{tian90}, \cite{cat} and \cite{zel}).
We begin our proof by showing
that the functionals $\mathcal{L}_k^{\sigma}$ converge to the modified Mabuchi functional on the space $\mathcal{H}^G$. Then we show that $Z_k^{\sigma}\circ Hilb_k$ and $\mathcal{L}_k^{\sigma}$ converge to the same functional, thus $Z_k^{\sigma}$ gives a quantization of the modified Mabuchi functional and we reduce our problem to studying the minima of $Z_k^{\sigma}$.
However the metrics $\omega_k^*$ constructed above are not in general critical points of $Z_k^{\sigma}$, as there is no reason for these metrics to be $\sigma_k$-balanced. We use instead an idea of Li \cite{li} relying on the Bergman kernel expansion to show that these metrics $\omega_k^*$ are almost $\sigma_k$-balanced metrics, in the sense that $Hilb_k(\phi^*)$ is a minimum of the functional $Z_k^{\sigma}$ up to an error which goes to zero as $k$ tends to infinity.
\par
Let $V^*$ be the extremal vector field of the class $c_1(L)$. In the polarized case, the vector field $JV^*$ generates a periodic action \cite{fm} by a one parameter-subgroup of automorphisms of $(X,L)$. Fix $G$ to be the one-parameter subgroup of $\mathrm{Aut}(X,L)$ associated to $JV^*$.
This group is isomorphic to $S^1$ or trivial by the theorem of Futaki and Mabuchi \cite{fm}. This will be a group of isometries for each of our metrics.
\begin{remark}
\label{rmk:mab}
The modified K-energy $E^{G_m}$ is defined to be the modified Mabuchi functional with respect to a maximal compact connected subgroup $G_m$ of $\mathrm{Aut}(X,L)$. Assume that $G$ is contained in such a $G_m$. Then $E^{G_m}$ is equal to $E^G$ when restricted to the space of $G_m$-invariant potentials.
Indeed, the projection of any $G_m$-invariant scalar curvature to the space of holomorphy potentials of $Lie(G_m)$ gives a potential for the extremal vector field by definition.
Thus a minimum of $E^G$ which is invariant under the $G_m$-action, such as an extremal metric, will be a minimum of the usual modified Mabuchi functional.
\end{remark}
Let $\sigma_k$ be the one-parameter subgroup of $\mathrm{Aut}(X,L)$ associated to the vector field $-\frac{V^*}{4k}$. We will also need to define for each $\phi$
in $\mathcal{H}^G$ the function
$
\theta(\phi)
$
to be the normalized (i.e. mean value zero) holomorphy potential of the vector field $V^*$ with respect to the metric $\omega_{\phi}$:
$$
g_\phi(V^*,\cdot )=d \theta(\phi)
$$
or
$$
\theta (\phi)=\Pi_\phi^G(S(\phi)).
$$
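\noindent These two descriptions of $\theta(\phi)$ agree: by definition of the extremal vector field, $V^*=\nabla_{g_{\phi}}(\Pi_{\phi}^G S(\phi))$ for every $\phi\in\mathcal{H}^G$, hence $g_{\phi}(V^*,\cdot)=d(\Pi_{\phi}^G S(\phi))$, and $\Pi_{\phi}^G S(\phi)$ has mean value zero since it lies in $P_{\phi}^G$.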
\subsection{The functionals $\mathcal{L}_k^{\sigma}$ converge to $E^G$}
In this section we prove the following fact:
\begin{proposition}
\label{prop:LquantizeE}
There are constants $c_k$ such that
$$
\frac{2}{k^n}\mathcal{L}_k^{\sigma} + c_k \rightarrow E^G
$$
as $k\rightarrow \infty$, where the convergence is uniform on $C^l(X,\mathbb{R})$ bounded subsets of $\mathcal{H}^G$.
\end{proposition}
\begin{proof}
We show that
$$
\frac{2}{k^n}\delta \mathcal{L}^{\sigma}_k \rightarrow \delta E^G
$$
uniformly on $C^l(X,\mathbb{R})$ bounded subsets of $\mathcal{H}^G$. First we compute $\delta \mathcal{L}^{\sigma}_k$. Following \cite{Don05}:
$$
\delta (I_k \circ Hilb_k)_{\phi}(\delta \phi)=- \int_X \delta \phi (\Delta_{\phi}+k) \rho_k(\phi) d\mu_{\phi}
$$
and by definition
$$
\delta (I_k ^{\sigma})_{\phi}(\delta \phi)=k^n \int_X \delta \phi (k+\Delta_{\phi})e^{\psi_k(\phi) } d\mu_{\phi}
$$
where we set $\psi_k(\cdot)=\psi_{\sigma_k,\cdot}$.
\\
Then
\begin{equation}
\label{eq:diffL}
\delta (\mathcal{L}^{\sigma}_k)_{\phi}(\delta \phi)=-\int_X \delta \phi (\Delta_{\phi}+k)(\rho_k(\phi)-k^n e^{\psi_k(\phi) }) d\mu_{\phi}.
\end{equation}
We need an expansion for the potential $\psi_k$:
$$
\psi_k(\phi)=\frac{\theta(\phi)+\underline{S}}{2k}+\mathcal{O}_0(k^{-1}),
$$
whose proof is postponed to lemma \ref{lem:exppsi}.
Then by the expansions of $\psi_k(\phi)$ and $\rho_k(\phi)$
$$
(\Delta_{\phi}+k)(\rho_k(\phi)-k^n e^{\psi_k(\phi)})=k^n(\Delta_{\phi}+k)\Big(1+\frac{S(\phi)}{2k}+\mathcal{O}(k^{-2})-1-\frac{\theta(\phi)+\underline{S}}{2k}+\mathcal{O}_0(k^{-1})\Big),
$$
$$
(\Delta_{\phi}+k)(\rho_k(\phi)-k^n e^{\psi_k(\phi)})=k^n \Big(\frac{S(\phi)-\underline{S}-\theta(\phi)}{2} +\mathcal{O}(k^{-1})\Big),
$$
and
$$
\frac{\delta (\mathcal{L}^{\sigma}_k)_{\phi}}{k^n} \rightarrow \frac{1}{2}\delta E^G_{\phi} .
$$
As the expansions of $\psi_k(\phi)$ and $\rho_k(\phi)$ are uniform on bounded subsets of $C^l(X,\mathbb{R})$ the result follows.
\end{proof}
\noindent The following lemma will be useful:
\begin{lemma}
\label{lem:exppsi}
The following expansion holds uniformly in $C^l(X,\mathbb{R})$ for $l\gg 1$:
\begin{equation}
\label{eq:exppsi}
\psi_k(\phi)=\frac{\theta(\phi)+\underline{S}}{2k}+\mathcal{O}_0(k^{-1})
\end{equation}
where $\mathcal{O}_0(k^{-1})$ denotes $k^{-1}$ times a function $\varepsilon(k)$ on $X$ with $\varepsilon(k)\rightarrow 0$ in $C^l(X,\mathbb{R})$ as $k \rightarrow \infty$.
\end{lemma}
\begin{proof}
By definition
$$
\sigma_k(1)^*\omega(\phi)-\omega(\phi)=\sqrt{-1} \partial \overline\partial \psi_k(\phi),
$$
then
$$
\sigma_1(\frac{1}{k})^*\omega(\phi)-\omega(\phi)=\sqrt{-1} \partial \overline\partial \psi_k(\phi),
$$
where $\sigma_1(\frac{1}{k})$ is equal to $\exp(-\frac{1}{4k}V^*)$.
Multiplying by $k$ and letting $k$ go to infinity,
$$
\mathcal{L}_{-\frac{1}{4}V^*}\omega(\phi)=\sqrt{-1}\partial\overline\partial \lim_{k\rightarrow \infty}(k\psi_k(\phi)).
$$
Then by Cartan's formula,
$$
\begin{array}{ccc}
\mathcal{L}_{-\frac{1}{4}V^*}\omega(\phi) & = & -\frac{1}{4}d\big(\omega_{\phi}(V^*,\cdot)\big) \\
& = & -\frac{1}{4}d\big(g_{\phi}(V^*,J \cdot)\big) \\
\end{array}
$$
and by definition of holomorphy potentials
$$
\mathcal{L}_{-\frac{1}{4}V^*}\omega(\phi)=-\frac{1}{4}d d^c \theta(\phi)=\frac{\sqrt{-1}}{2}\partial\overline\partial \theta(\phi)
$$
thus
$$
\lim_{k\rightarrow \infty}(k\psi_k(\phi))=\frac{\theta(\phi)+c}{2}
$$
for some constant $c$.
By the normalization~(\ref{eq:norm}) of the function $\psi_k(\phi)$ we deduce
$$
\frac{N_k}{k^n}=\int_X \exp{(\psi_{\sigma_k,\phi})}\; d\mu_{\phi} = \int_X \Big(1+\frac{\theta(\phi)+c}{2k}+\mathcal{O}_0(k^{-1})\Big) d\mu_{\phi}.
$$
Recall that we chose $\theta(\phi)$ normalized to have mean value zero. Using formula (\ref{cor:Nk}) to expand $N_k=\dim H^0(X,L^k)$, we conclude that $c=\underline{S}$.
\end{proof}
\noindent From the above computations we also deduce the following :
\begin{proposition}
\label{prop:criticL}
Let $\phi \in \mathcal{H}$ be a $k^{th}$ $\sigma_k$-balanced metric. Then $\phi$ is a critical point of $\mathcal{L}_k^{\sigma}$.
\end{proposition}
\begin{proof}
By equation $(\ref{def:bal})$ of $\sigma_k$-balanced metrics and by definition (\ref{def:psi}) of $\psi_k(\phi)$ we deduce
$$
\rho_k(\phi)=C\exp(\psi_k(\phi))
$$
for some constant $C$. Integrating over $X$ and using the expansions $(\ref{cor:Nk})$ and $(\ref{eq:exppsi})$ we deduce
$$
\rho_k(\phi)=k^n\exp(\psi_k(\phi)).
$$
The result follows from the computation of the differential of $\mathcal{L}^{\sigma}_k$, equation $(\ref{eq:diffL})$.
\end{proof}
\noindent
A direct computation yields the analogous result for $Z^\sigma_k$ (see proposition \ref{prop:Z_min} in the Appendix).
\subsection{Comparison of $Z_k^{\sigma}$ and $\mathcal{L}_k^{\sigma}$}
The aim of this section is to show that $Z_k^{\sigma}\circ Hilb_k$ and $\mathcal{L}_k^{\sigma}$ converge to the same functional. We will need the two following lemmas:
\begin{lemma}
\label{lem:hessian}
The second derivative of $I_k^{\sigma}$ along a path $\phi_s\in\mathcal{H}^G$
is equal to
$$
\frac{d^2}{ds^2}I_k^{\sigma}(\phi_s)=k^n\int_X (\phi''-\frac{1}{2} \vert d\phi'\vert^2)
(k+\Delta_{\phi_s})e^{\psi_k(\phi_s)}d\mu_{\phi_s}
$$
\end{lemma}
\begin{proof}
The proof of this result is given in the Appendix, section \ref{sec:hessian}.
\end{proof}
\begin{lemma}
\label{lem:concave}
Let $\phi\in\mathcal{H}^G$. Then there exists an integer $k_0$, depending on $\phi$,
such that for each $k\geq k_0$, the functional $I_k^{\sigma}$ is concave along
the path
$$
\begin{array}{ccc}
[0,1] & \rightarrow & \mathcal{H}^G \\
s & \mapsto & \phi+\frac{s}{k}\log(\rho_k(\phi))
\end{array}
$$
\end{lemma}
\begin{proof}
By lemma~\ref{lem:hessian}, the second derivative of $I_k^{\sigma}$ along
the path $\phi_k(s)=\phi+\frac{s}{k}\log(\rho_k(\phi))$ is
$$
k^n\int_X (\phi_k''-\frac{1}{2} \vert d\phi_k'\vert^2)
(k+\Delta_{\phi_k(s)})e^{\psi_k(\phi_k(s))}d\mu_{\phi_k(s)}.
$$
As $\phi_k'=\frac{1}{k}\log(\rho_k(\phi))$ and $\phi_k''=0$, this expression simplifies:
$$
\frac{d^2}{ds^2} I_k^{\sigma} (\phi_k(s))=-k^n\int_X \frac{1}{2} \vert d \frac{1}{k}\log(\rho_k(\phi)) \vert^2 (k+\Delta_{\phi_k(s)})e^{\psi_k(\phi_k(s))}d\mu_{\phi_k(s)}.
$$
We compute the leading term in the above expression as $k$ goes to
infinity. To simplify notation, let $T_k(\phi)=FS_k\circ Hilb_k(\phi)$. Note that $\omega_{\phi_1}=\omega_{T_k(\phi)}$. From (\ref{cor:ber}), the difference between $\omega_{\phi_0}$
and $\omega_{\phi_1}$ is $$\omega_{\phi_0}-\omega_{\phi_1}=\mathcal{O}(k^{-2}).$$
Thus we have the estimates
$$
\Delta_{\phi_k(s)}=\Delta_{\phi}+\mathcal{O}(k^{-1}),
$$
$$
d\mu_{\phi_k(s)}=d\mu_{\phi}+\mathcal{O}(k^{-1})
$$
and
$$
\psi_k(\phi_k(s))=\psi_k(\phi)+\mathcal{O}(k^{-1}).
$$
Then
$$
\frac{d^2}{ds^2} I_k^{\sigma} (\phi_k(s))=-k^n\int_X \frac{1}{2} \vert d \frac{1}{k}\log(\rho_k(\phi)) \vert^2 (k+\Delta_{\phi})e^{\psi_k(\phi)} d\mu_{\phi} + \mathcal{O}(k^{n-4}).
$$
From this we deduce that the leading term as $k$ tends to infinity is
$$
-\frac{k^{n-3}}{8} \int_X \vert dS(\phi)\vert ^2 d\mu_{\phi}<0
$$
where once again we used the expansions of Bergman kernel and of $\psi_k(\phi)$ from lemma \ref{lem:exppsi}.
\end{proof}
\noindent Now we can prove the main result of this section:
\begin{proposition}
\label{prop:compare}
For each potential $\phi\in\mathcal{H}^G$, we have
$$
\lim_{k\rightarrow \infty} k^{-n}(\mathcal{L}_k^{\sigma}(\phi)-Z_k^{\sigma}\circ Hilb_k (\phi)) =0
$$
\end{proposition}
\begin{proof}
By definition,
$$
k^{-n}(\mathcal{L}_k^{\sigma}(\phi)-Z_k^{\sigma}\circ Hilb_k (\phi)) =
-k^{-n}( I_k^{\sigma} (T_k(\phi)) - I_k^{\sigma}(\phi) -k^n\log(k^n)\mathrm{Vol}(X))
$$
where $T_k=FS_k\circ Hilb_k$.
From lemma~\ref{lem:concave}, for $k$ large enough, the functional $I_k^{\sigma}$ is concave along the path
$$
s \mapsto \phi+\frac{s}{k}\log(\rho_k(\phi))
$$
going from $\phi$ to $T_k(\phi)$ in $\mathcal{H}^G$.
\noindent Thus
\begin{equation}
(\delta I_k^{\sigma})_{\phi}(\frac{1}{k}\log \rho_k(\phi))
\geq (I_k^{\sigma} (T_k(\phi)) - I_k^{\sigma}(\phi))
\geq (\delta I_k^{\sigma})_{T_k(\phi)}(\frac{1}{k}\log \rho_k(\phi)) .
\end{equation}
\noindent We deduce from the definitions that
\begin{equation}
\label{eq:concave}
k^{-n}(\mathcal{L}^{\sigma}_k(\phi)-Z^{\sigma}_k\circ Hilb_k (\phi))
\geq
-k^{-n}(\delta I_k^{\sigma})_{\phi}(\frac{1}{k}\log \rho_k(\phi)) + \log(k^n)\mathrm{Vol}(X)
\end{equation}
and
\begin{equation}
\label{eq:concave2}
-k^{-n}(\delta I_k^{\sigma})_{T_k(\phi)}(\frac{1}{k}\log \rho_k(\phi)) + \log(k^n)\mathrm{Vol}(X)
\geq
k^{-n}(\mathcal{L}^{\sigma}_k(\phi)-Z^{\sigma}_k\circ Hilb_k (\phi))
\end{equation}
and it remains to show that the right hand side of (\ref{eq:concave}) and the left hand side of (\ref{eq:concave2}) tend to zero.
First
$$
k^{-n}(\delta I_k^{\sigma})_{\phi}(\frac{1}{k}\log \rho_k(\phi)) - \log(k^n)\mathrm{Vol}(X) =
\int_X (\frac{1}{k}\log(\rho_k(\phi)))(k+\Delta_{\phi})e^{\psi_k(\phi)} d\mu_{\phi} -\mathrm{Vol}(X)\log(k^n)
$$
$$
=\int_X (\log(k^n)+\frac{S(\phi)}{2k}+\mathcal{O}(k^{-2}))(1+\frac{\Delta_{\phi}}{k})(1+\frac{\theta(\phi)+\underline{S}}{2k}+\mathcal{O}_0(k^{-1})) d\mu_{\phi} -\mathrm{Vol}(X)\log(k^n)
$$
by the expansion of Bergman kernel and lemma \ref{lem:exppsi}.
It follows that
$$
k^{-n}(\delta I_k^{\sigma})_{\phi}(\frac{1}{k}\log \rho_k(\phi)) - \log(k^n)\mathrm{Vol}(X) =\mathrm{Vol}(X)\log(k^n) + \mathcal{O}(k^{-1}\log k) - \mathrm{Vol}(X)\log(k^n) \rightarrow 0
$$
as $k\rightarrow \infty$.
\par
Note that we did not make use of the fact that the derivative $\delta I_k^{\sigma}$
was evaluated at $\phi$, so the above argument extends to the last term of inequality (\ref{eq:concave2}), evaluated at $T_k(\phi)$, which thus tends to zero as well.
This ends the proof.
\end{proof}
\subsection{The metrics $Hilb_k(\phi^*)$ are almost $\sigma$-balanced}
We will need the following convexity property of $Z_k^{\sigma}$:
\begin{lemma}
\label{lem:Zconvex}
The functional $Z^{\sigma}_k$ is convex along geodesics in $\mathcal{B}_k^G$.
\end{lemma}
\begin{proof}
We follow the proof of Proposition 1 in \cite{Don05} (also Lemma 3.1 in \cite{PS03}).
Here we abbreviate the subscript $k$.
Take a geodesic $\{H(s)\}_{s\in \mathbb{R}}$ in $\mathcal{B}^G$.
By choosing an appropriate orthonormal basis $\{\tau_\alpha\}$ of $H(0)$, $H(s)$ is represented by
\begin{equation}\label{eq:diagonalize}
H(s)=\mathrm{diag}(e^{2\lambda_\alpha s}), \,\,\, \lambda_\alpha\in \mathbb{R},
\,\, \sum_\alpha \lambda_\alpha=0
\end{equation}
with respect to the basis $\{ \tau_\alpha\}$.
We denote the associated one parameter subgroup of $SL(H^0(X,L))$ by $\varrho(s)$.
We denote the K\"ahler potential $\phi_s=FS(H(s))$ by
$$
\phi_s=\log \big(\sum_\alpha |\varrho(s) \cdot\tau_\alpha|^2/\sum_\beta |\tau_\beta|^2\big).
$$
First of all, we compute the first variation of $Z^\sigma$ along $\phi_s$.
From (\ref{eq:derivative_psi}), we have
\begin{eqnarray}
\nonumber
\frac{d Z^\sigma}{ds} (s)
&=&
\int_X
\phi'_{s}
(1+\Delta_{FS(H(s))})
e^{\psi_s}
d\mu_{FS(H(s))}
\\
\nonumber
&=&
\int_X
\phi'_{s}e^{\psi_s}
+
\frac{d}{ds} e^{\psi_s}
d\mu_{FS(H(s))}
\\
\nonumber
&=&
\int_X
\frac{\sum_\alpha
2
\lambda_\alpha |\varrho(s)\cdot\tau_\alpha|^2}{\sum_\beta |\varrho(s)\cdot\tau_\beta|^2}
\bigg(\frac{\sum_\gamma |\varrho(s)\cdot\sigma^* \tau_\gamma|^2}{\sum_\beta |\tau_\beta|^2}\bigg)
\\
\nonumber
&& \qquad\qquad
+
\bigg\{\frac{d}{ds}
\bigg(\frac{\sum_\gamma |\varrho(s)\cdot\sigma^* \tau_\gamma|^2}{\sum_\beta |\varrho(s)\cdot \tau_\beta|^2}\bigg)
\bigg\}
d\mu_{FS(H(s))}
\\
\label{eq:1st_variation_Z}
&=&
\int_X
\frac{\sum_\alpha
2
\widetilde{\lambda}_\alpha|\varrho(s)\cdot\sigma^* \tau_\alpha|^2}{\sum_\beta |\varrho(s)\cdot\tau_\beta|^2}
d\mu_{FS(H(s))},
\end{eqnarray}
where $\psi_s$ denotes $\psi_{\sigma, FS(H(s))}$.
In (\ref{eq:1st_variation_Z}), $H(s)$ is represented by
$$
H(s)=\mathrm{diag}(e^{2\widetilde{\lambda}_\alpha s}), \,\,\, \widetilde{\lambda}_\alpha\in \mathbb{R},
\,\, \sum_\alpha \widetilde{\lambda}_\alpha =0
$$
with respect to the basis $\{ \sigma^*\tau_\alpha\}$.
Let
$$
\varphi'_s:=\frac{\sum_\alpha 2 \widetilde{\lambda}_\alpha |\varrho(s)\cdot(\sigma^* \tau_\alpha)|^2}{\sum_\beta |\varrho(s)\cdot\tau_\beta|^2}.
$$
Then, we have
\begin{equation}\label{eq:2nd_vari_Z(2)}
\frac{d^2 Z^{\sigma}}{ds^2}(0)
=
\int_{X} \big\{\varphi''_0 -(\nabla \varphi'_0, \nabla \phi'_0)\big\} d\mu_{FS(H(0))}.
\end{equation}
Here we denote the $(1,0)$-part of the connection by $\nabla$.
Following \cite{Don05}, it is sufficient to prove that the integrand of (\ref{eq:2nd_vari_Z(2)}) is equal to
\begin{equation}\label{eq:integrand}
\sum_\alpha|(\nabla \phi'_0, \nabla (\sigma^* \tau_\alpha)) - (2\widetilde{\lambda}_\alpha- \phi'_0)(\sigma^*\tau_\alpha) |_{FS(H(0))}^2
\end{equation}
pointwise on $X$.
Expanding out, (\ref{eq:integrand}) is equal to
\begin{eqnarray}
\nonumber
&&
\sum_\alpha |(\nabla \phi'_0, \nabla (\sigma^* \tau_\alpha))|_{FS(H(0))}^2
-2
\sum_\alpha(2\widetilde{\lambda}_\alpha - \phi'_0)((\nabla \phi'_0, \nabla(\sigma^* \tau_\alpha)), \sigma^*\tau_\alpha)
\\
\label{eq:expanding}
&& \qquad
+ \sum_\alpha(2\widetilde{\lambda}_\alpha - \phi'_0)^2|\sigma^* \tau_\alpha|_{FS(H(0))}^2.
\end{eqnarray}
The second term of (\ref{eq:expanding}) is equal to
\begin{eqnarray}
\nonumber
-2
\sum_\alpha(2\widetilde{\lambda}_\alpha - \phi'_0)(\nabla \phi'_0, ( \sigma^*\tau_\alpha, \nabla(\sigma^* \tau_\alpha)))
&=&
-2
\sum_\alpha(2\widetilde{\lambda}_\alpha - \phi'_0)
(\nabla \phi'_0,\nabla(|\sigma^*\tau_\alpha|_{FS(H(0))}^2))
\\
\nonumber
&=&
- 2(\nabla \phi'_0, \nabla \varphi'_0)
+
2\phi'_0(\nabla \phi'_0, \nabla e^{\psi_0})
\\
\nonumber
&=&
- 2(\nabla \phi'_0, \nabla \varphi'_0)
+
2\phi'_0 \psi'_0 e^{\psi_0}
\\
\label{eq:2ndterm}
&=&
- 2(\nabla \phi'_0, \nabla \varphi'_0)
+
2\phi'_0 (\varphi'_0 - \phi'_0 e^{\psi_0}).
\end{eqnarray}
In the above, we used (\ref{eq:derivative_psi}) in the Appendix and
$$
\psi'_0 e^{\psi_0}
=
\frac{d}{ds}\bigg|_{s=0} e^{\psi_s}=\varphi'_0 - \phi'_0 e^{\psi_0}.
$$
The third term of (\ref{eq:expanding}) is equal to
\begin{equation}\label{eq:3rdterm}
\sum_\alpha 4 \widetilde{\lambda}_\alpha^2 |\sigma^*\tau_\alpha|_{FS(H(0))}^2
- 2 \varphi'_0 \phi'_0 + (\phi'_0)^2 e^{\psi_0}.
\end{equation}
Substituting (\ref{eq:2ndterm}) and (\ref{eq:3rdterm}) into (\ref{eq:expanding}), we find that (\ref{eq:expanding}) is equal to
$$
\sum_\alpha |(\nabla \phi'_0, \nabla (\sigma^* \tau_\alpha))|_{FS(H(0))}^2
- 2(\nabla \phi'_0, \nabla \varphi'_0)
-
(\phi'_0)^2 e^{\psi_0}
+
\sum_\alpha 4 \widetilde{\lambda}_\alpha^2 |\sigma^*\tau_\alpha|_{FS(H(0))}^2.
$$
Since
$$
\varphi''_0
=
\sum_\alpha 4 \widetilde{\lambda}_\alpha^2 |\sigma^*\tau_\alpha|_{FS(H(0))}^2
- \varphi'_0 \phi'_0,
$$
what remains to be proved is
\begin{equation}\label{eq:remain}
\sum_\alpha |(\nabla \phi'_0, \nabla (\sigma^* \tau_\alpha))|_{FS(H(0))}^2
=
(\nabla \phi'_0, \nabla \varphi'_0)
+(\phi'_0)^2 e^{\psi_0}-\varphi'_0 \phi'_0.
\end{equation}
In the computation in (\ref{eq:2ndterm}), we found
$$
-(\phi'_0)^2 e^{\psi_0}+\varphi'_0 \phi'_0=(\nabla \phi'_0, \phi'_0\nabla e^{\psi_0}).
$$
Hence, (\ref{eq:remain}) is equivalent to
\begin{equation}\label{eq:FS_identity}
\sum_\alpha |(\nabla \phi'_0, \nabla (\sigma^* \tau_\alpha))|_{FS(H(0))}^2
=
(\nabla \varphi'_0, \nabla \phi'_0)
- (\nabla \phi'_0, \phi'_0\nabla e^{\psi_0}).
\end{equation}
This follows from the definition of the restriction $\omega_{FS(H(0))}$ of the Fubini-Study metric.
To see (\ref{eq:FS_identity}), recall that the Fubini-Study metric is given by
$$
\frac{\sum_{i}dz^i \wedge d\overline{z}^i}{1 + \sum |z^k|^2}
- \frac{(\sum \overline{z}^i dz^i) \wedge (\sum z^j d \overline{z}^j)}{(1 + \sum |z^k|^2)^2}
$$
on the coordinate chart $U_0= \{(1, z^2, \ldots, z^{N}) \in \mathbb{C}P^{N-1}\}$.
Then, we have
\begin{equation}\label{eq:FS_identity_left}
|(\nabla \phi'_0, \nabla \tau_\alpha)|_{FS(H(0))}^2
=
\frac{(\lambda_\alpha^2 +(\phi'_0)^2 -2 \phi'_0 \lambda_\alpha)|\tau_{\alpha}|^2}
{\sum_\beta |\tau_\beta|^2},
\end{equation}
\begin{equation}\label{eq:FS_identity_right1}
(\nabla \phi'_0, \nabla |\tau_\alpha|^2_{FS(H(0))})
=
\frac{(\lambda_\alpha -\phi'_0 )|\tau_{\alpha}|^2}
{\sum_\beta |\tau_\beta|^2},
\end{equation}
and
\begin{equation}\label{eq:FS_identity_right2}
(\nabla \phi'_0, \phi'_0 \nabla |\tau_\alpha|_{FS(H(0))}^2)
=
\frac{\phi'_0 \lambda_\alpha |\tau_\alpha|^2 - (\phi'_0)^2|\tau_\alpha|^2}
{\sum_{\beta} |\tau_\beta|^2}.
\end{equation}
We get (\ref{eq:FS_identity}) by substituting (\ref{eq:FS_identity_left}) into the left-hand side of (\ref{eq:FS_identity}), and (\ref{eq:FS_identity_right1}) and (\ref{eq:FS_identity_right2}) into the right-hand side.
This completes the proof.
\end{proof}
\noindent
The following corollary is fundamental to understanding the idea of this paper, although we do not use it as stated in the proof of the main theorem.
\begin{corollary}\label{prop:Z_min}
If $FS_k(H_k)$ is a $\sigma_k$-balanced metric for some $H_k\in \mathcal{B}_k^G$, then $H_k$ is a minimum of $Z^{\sigma}_k$ on $\mathcal{B}_k^G$.
\end{corollary}
\begin{proof}
Since $H_k$ is a $\sigma_k$-balanced metric, $\{c(\sigma^* \tau_\alpha)\}_\alpha$ is an orthonormal basis with respect to $T(H_k)$ for some $c>0$.
From (\ref{eq:1st_variation_Z}), $H_k$ is a critical point of $Z^\sigma_k$ on $\mathcal{B}_k^G$.
From Lemma \ref{lem:Zconvex}, this is an absolute minimum of $Z^\sigma_k$.
\end{proof}
\begin{proposition}
\label{prop:almostbal}
Let $\phi\in\mathcal{H}^G$. Then there are real numbers $\varepsilon_{\phi}(k)$ such that
$$
k^{-n}(Z_k^{\sigma}\circ Hilb_k(\phi)) \geq k^{-n}(Z_k^{\sigma}\circ Hilb_k(\phi^*)) + \varepsilon_{\phi}(k)
$$
and such that $\lim_{k \rightarrow \infty} \varepsilon_{\phi}(k)=0$.
\end{proposition}
\begin{proof}
We follow the proof of \cite[Lemma 3.3]{li}, adapted to our more general setting.
In the sequel, $C$ will stand for a constant depending on $\phi$, $\phi^*$ and the volume of the polarized manifold $(X,L)$, but independent of $k$.
The precise value of this constant may change from line to line, but this will not matter for us.
\par
Set $H_k^*=Hilb_k(\phi^*)$ and $H_k=Hilb_k(\phi)$.
We choose an orthonormal basis $\lbrace \tau_{\alpha}^{(k)} \rbrace$ of $H^*_k$
such that in this basis $H_k^*$ is represented by the identity and
$$
H_k=\mathrm{diag}(e^{2\lambda_{\alpha}^{(k)}}).
$$
Then evaluating $H_k$ on the orthonormal vectors $e^{\lambda_{\alpha}^{(k)}}\tau_{\alpha}^{(k)}$:
\begin{equation}
\label{eq1}
e^{-2\lambda_{\alpha}^{(k)}}=\int_X \vert \tau_{\alpha}^{(k)}\vert_{h_0^k}^2
d\mu_0.
\end{equation}
Comparing the metrics we have the existence of $C>0$ such that
$$
C^{-k} h_{\phi^*}^k
\leq
h_0^k\leq C^k h_{\phi^*}^k
$$
from which we deduce with (\ref{eq1}) the following estimate:
\begin{equation}
\label{eq:estimelambda}
\vert \lambda_{\alpha}^{(k)} \vert \leq C k.
\end{equation}
Consider the following path in $\mathcal{B}_k^G$:
$$
s \mapsto H_k(s)=\mathrm{diag}(e^{2s \lambda_{\alpha}^{(k)}}).
$$
This is a geodesic that goes from $H_k^*$ to $H_k$ in $\mathcal{B}_k^G$, thus by lemma~\ref{lem:Zconvex}:
$$
k^{-n}(Z_k^{\sigma}(H_k)-Z_k^{\sigma}(H_k^*))\geq k^{-n}f_k'(0)
$$
with
$$
f_k(s)=Z_k^{\sigma}(H_k(s)).
$$
\par
We then need to show that $\lim_{k\rightarrow \infty}k^{-n}f_k'(0)=0 $.
By a straightforward computation
$$
k^{-n}f_k'(0)=2k^{-n}\sum_{\alpha} \lambda_{\alpha}^{(k)} -\frac{2}{k}\int_X \frac{\rho_k^{\lambda}}{\rho_k} (k+\Delta)e^{\psi_k} d\mu
$$
where $\rho_k^{\lambda}=\sum_{\alpha}\lambda_{\alpha}^{(k)} \vert \tau_{\alpha}^{(k)}\vert^2_{h_0^k}$ and the quantities $\rho_k$, $\Delta$, $\psi_k$ and $d\mu$ are computed with respect to the extremal metric $\omega_{\phi^*}$.
Then
\begin{equation}
\label{eq:fprime}
2^{-1}k^{-n}f_k'(0)=k^{-n}\sum_{\alpha} \lambda_{\alpha}^{(k)}-\int_X \frac{\rho_k^{\lambda}}{\rho_k} e^{\psi_k} d\mu - \frac{1}{k} \int_X \frac{\rho_k^{\lambda}}{\rho_k} \Delta e^{\psi_k} d\mu.
\end{equation}
\noindent We first show that the last term on the right-hand side of (\ref{eq:fprime}) tends to zero.
Note that from (\ref{eq:estimelambda}),
$$
\vert\frac{\rho_k^{\lambda}}{\rho_k}\vert \leq Ck
$$
thus
$$
\vert\frac{1}{k} \int_X \frac{\rho_k^{\lambda}}{\rho_k} \Delta e^{\psi_k} d\mu \vert\leq
C \int_X \vert\Delta e^{\psi_k}\vert d\mu
$$
and using lemma \ref{lem:exppsi} we deduce that this term goes to zero as
$k$ tends to infinity.
\noindent Then consider the second term in the right hand side of equation (\ref{eq:fprime}).
Using the expansions of $\psi_k$ and $\rho_k$ we deduce:
$$
\rho_k^{-1} e^{\psi_k}=k^{-n}(1-\frac{S}{2k}+\mathcal{O}(k^{-2}))(1+\frac{\theta+\underline{S}}{2k}+\mathcal{O}_0(k^{-1})).
$$
Here we use our crucial assumption, namely that $\omega_{\phi^*}$ is extremal, so that $S=\theta + \underline{S}$, and thus
$$
\rho_k^{-1} e^{\psi_k}=k^{-n}(1+\mathcal{O}_0(k^{-1})).
$$
Then
$$
\int_X \frac{\rho_k^{\lambda}}{\rho_k} e^{\psi_k} d\mu =\int_X \frac{\rho_k^{\lambda}}{k^n}(1+\mathcal{O}_0(k^{-1})) d\mu.
$$
As
$$
\int_X \frac{\rho_k^{\lambda}}{k^n} d\mu= k^{-n}\sum_{\alpha} \lambda_{\alpha}^{(k)},
$$
the only remaining term in $k^{-n}f_k'(0)$ to control as $k\rightarrow\infty$ is
$$
\int_X \frac{\rho_k^{\lambda}}{k^n}\mathcal{O}_0(k^{-1}) d\mu.
$$
Using (\ref{eq:estimelambda}),
$$
\vert \frac{\rho_k^{\lambda}}{k^n}\mathcal{O}_0(k^{-1}) \vert \leq C k N_k k^{-n} \vert \mathcal{O}_0(k^{-1}) \vert.
$$
By (\ref{cor:Nk}), $N_k k^{-n}$ is bounded, and as $\mathcal{O}_0(k^{-1})=k^{-1}\epsilon(k)$ with $\epsilon(k)\rightarrow 0$,
$$
\lim_{k\rightarrow \infty}\int_X \frac{\rho_k^{\lambda}}{k^n}\mathcal{O}_0(k^{-1}) d\mu =0
$$
and
$$
\lim_{k\rightarrow \infty} k^{-n}f_k'(0)=0.
$$
\end{proof}
\subsection{Conclusion, proof of theorem \ref{theo:min}}
We conclude this section with the proof of Theorem~\ref{theo:min}. We show the following stronger theorem, which, together with remark \ref{rmk:mab}, implies theorem \ref{theo:min}:
\begin{theorem}
Let $(X,L)$ be a polarized manifold that carries extremal metrics representing $c_1(L)$.
The modified Mabuchi functional with respect to the $G$-action induced by the extremal vector field of $c_1(L)$ attains its minimum at the extremal metrics.
\end{theorem}
\begin{proof}
Let $\phi\in\mathcal{H}^G$ and $\phi^*$ be the potential of an extremal metric.
\begin{equation}
\begin{array}{ccc}
\mathcal{L}_k^{\sigma}(\phi) & = & Z_k^{\sigma}\circ Hilb_k(\phi)+(\mathcal{L}_k^{\sigma}(\phi)-Z_k^{\sigma}\circ Hilb_k(\phi)).
\end{array}
\end{equation}
By proposition~\ref{prop:almostbal}:
\begin{equation}
\begin{array}{ccc}
\mathcal{L}_k^{\sigma}(\phi) & \geq & Z_k^{\sigma}\circ Hilb_k(\phi^*)+ k^n\varepsilon_{\phi}(k)+(\mathcal{L}_k^{\sigma}(\phi)-Z_k^{\sigma}\circ Hilb_k(\phi)) \\
\end{array}
\end{equation}
Then
\begin{equation}
\label{eq:end}
\begin{array}{ccc}
\mathcal{L}_k^{\sigma}(\phi) & \geq & \mathcal{L}_k^{\sigma}(\phi^*)+(Z_k^{\sigma}\circ Hilb_k(\phi^*)-\mathcal{L}_k^{\sigma}(\phi^*))+ \\
& & k^n\varepsilon_{\phi}(k)+(\mathcal{L}_k^{\sigma}(\phi)-Z_k^{\sigma}\circ Hilb_k(\phi))\\
\end{array}
\end{equation}
To conclude, from proposition~\ref{prop:compare},
$$
k^{-n}(Z_k^{\sigma}\circ Hilb_k(\phi^*)-\mathcal{L}_k^{\sigma}(\phi^*)) \rightarrow 0
$$
and
$$
k^{-n}(Z_k^{\sigma}\circ Hilb_k(\phi)-\mathcal{L}_k^{\sigma}(\phi)) \rightarrow 0
$$
as $k$ tends to infinity. So does $\varepsilon_{\phi}(k)$ by construction, see proposition~\ref{prop:almostbal}. Thus the result follows from proposition~\ref{prop:LquantizeE}, multiplying by $k^{-n}$ and letting $k$ go to infinity in~(\ref{eq:end}).
\end{proof}
\section{Appendix}
We give the proofs of the results concerning the $\sigma$-balanced metrics.
We denote by $(\cdot, \cdot)$ any of the following Hermitian pairings
\begin{equation*}
\begin{array}{ll}
T^*X \times (T^*X\times L) \to L, & L\times (T^*X\times L) \to T^*X,
\\
L\times L \to \mathbb{C}, & T^*X \times T^*X \to \mathbb{C}
\end{array}
\end{equation*}
obtained by $\phi\in \mathcal{H}$ and $\omega_\phi$.
We denote the connection of type $(1,0)$ on the holomorphic tangent bundle $T'X$ by $\nabla$.
\subsection{The definition of $I^{\sigma}$}
\begin{proposition}
\label{prop:indep}
$I^\sigma(\phi)$ is independent of the choice of a path from $0$ to $\phi$.
\end{proposition}
\begin{proof}
Since $I^\sigma(\phi)$ satisfies the cocycle property
$$
I^\sigma(\phi_1,\phi_3)
=
I^\sigma(\phi_1,\phi_2)
+
I^\sigma(\phi_2,\phi_3)
$$
by definition, it is sufficient to prove that $\frac{\partial^2}{\partial s \partial t}I^\sigma(\phi_{0,0},\phi_{t,s})$ is symmetric with respect to $s$ and $t$ for any family of paths
$$
\{\Phi=\phi_{t,s} \mid (s,t)\in [0,1]\times [0,1],\, \phi_{0,s}=\phi_{1,s}\equiv 0\}
$$
in $\mathcal{H}$.
\begin{eqnarray}
\nonumber
&&
\frac{\partial^2}{\partial s \partial t}I^\sigma(\phi_{0,0},\phi_{t,s})
=
\frac{\partial}{\partial s}\int_X \big((1+\Delta_\Phi)\frac{\partial \Phi}{\partial t}\big)e^{\psi_{\sigma,\Phi}}d\mu_{\Phi}
\\
\nonumber
&=&
\int_X \big((\frac{\partial}{\partial s}\Delta_\Phi)\frac{\partial\Phi}{\partial t}\big)e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi}
+\int_X \big((1+\Delta_\Phi)\frac{\partial^2\Phi}{\partial s \partial t}\big)e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi}
\\
\label{eq:dtds_F0}
&&\quad
+\int_X \big((1+\Delta_\Phi)\frac{\partial\Phi}{\partial t}\big)\big(\frac{\partial e^{\psi_{\sigma,\Phi}}}{\partial s}\big)d\mu_{\Phi}
-\int_X \big((1+\Delta_\Phi)\frac{\partial\Phi}{\partial t}\big)e^{\psi_{\sigma,\Phi}}\big(\Delta_\Phi\frac{\partial\Phi}{\partial s}\big)d\mu_{\Phi}.
\end{eqnarray}
The first term in (\ref{eq:dtds_F0}) is
\begin{eqnarray*}
\int_X \big(\nabla\overline{\nabla}\frac{\partial\Phi}{\partial t},\nabla\overline{\nabla} \frac{\partial\Phi}{\partial s}\big)e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi}
\end{eqnarray*}
which is symmetric.
The second term is obviously symmetric.
The third term is
\begin{equation}\label{eq:dd_F0_3}
\int_X \frac{\partial\Phi}{\partial t}\big(\nabla \psi_{\sigma,\Phi}, \nabla \frac{\partial\Phi}{\partial s}\big)e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi}
+
\int_X \big(\Delta_\Phi\frac{\partial\Phi}{\partial t}\big)\big(\nabla\psi_{\sigma,\Phi}, \nabla \frac{\partial\Phi}{\partial s}\big)e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi}.
\end{equation}
Here we use the following equality.
\begin{lemma}
\begin{equation}\label{eq:derivative_psi}
\frac{\partial\psi_{\sigma,\Phi}}{\partial s}=\big(\nabla\psi_{\sigma,\Phi},\nabla\frac{\partial\Phi}{\partial s}\big).
\end{equation}
\end{lemma}
\begin{proof}
Let $v$ be the gradient vector field of $\frac{\partial\Phi}{\partial s}$, i.e.,
\begin{equation}\label{eq:cf}
v
=grad_{\omega_{\Phi}}\bigg(\frac{\partial\Phi}{\partial s}\bigg)
=\sum_{i,j}g^{i\bar{j}}\frac{\partial}{\partial\bar z^j}\bigg(\frac{\partial\Phi}{\partial s}\bigg)
\frac{\partial}{\partial z^i}.
\end{equation}
We have
\begin{eqnarray*}
\frac{\partial}{\partial s}(\sigma(1)^*\omega_{\Phi}-\omega_{\Phi})
&=&
L_{v}(\sigma(1)^*\omega_{\Phi}-\omega_{\Phi})
=
\frac{\sqrt{-1}}{2\pi}
d\iota_{v}
\partial\bar\partial\psi_{\sigma,\Phi}
\\
&=&
\frac{\sqrt{-1}}{2\pi}
\partial\bar\partial \big(\nabla\psi_{\sigma,\Phi},\nabla\frac{\partial\Phi}{\partial s}\big)
\end{eqnarray*}
where $L_{v}$ is the Lie derivative along $v$.
Then, there exists some constant $c$ such that
\begin{equation}
\label{eq:constant_c}
\frac{\partial\psi_{\sigma,\Phi}}{\partial s}
=
\big(\nabla\psi_{\sigma,\Phi},\nabla\frac{\partial\Phi}{\partial s}\big)+c.
\end{equation}
Recall that
$$
\int_X\psi_{\sigma,\Phi}d\mu_{\Phi}
$$
is constant with respect to $s,\,t$ by normalization of $\psi_{\sigma,\Phi}$.
Since
$$
0
=\frac{\partial}{\partial s}\int_X {\psi_{\sigma,\Phi}}d\mu_{\Phi}
=\int_X \bigg(\frac{\partial\psi_{\sigma,\Phi}}{\partial s}-\big(\nabla\psi_{\sigma,\Phi},\nabla\frac{\partial\Phi}{\partial s}\big)\bigg)
d\mu_{\Phi},
$$
the constant $c$ in (\ref{eq:constant_c}) is zero.
Hence, (\ref{eq:derivative_psi}) is proved.
\end{proof}
\noindent
The fourth term is
\begin{equation}\label{eq:dd_F0_4}
-\int_X e^{\psi_{\sigma,\Phi}}\frac{\partial\Phi}{\partial t}\Delta_\Phi\frac{\partial\Phi}{\partial s}d\mu_{\Phi}
-
\int_X e^{\psi_{\sigma,\Phi}}\Delta_\Phi\frac{\partial\Phi}{\partial t}\Delta_\Phi\frac{\partial\Phi}{\partial s}d\mu_{\Phi}.
\end{equation}
The sum of the first term in (\ref{eq:dd_F0_3}) and the first term in (\ref{eq:dd_F0_4}) is
$$
-\int_X \frac{\partial\Phi}{\partial t}\bigg(\Delta_\Phi\frac{\partial\Phi}{\partial s}+\big(\nabla\psi_{\sigma,\Phi}, \nabla\frac{\partial\Phi}{\partial s}\big)\bigg)e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi}.
$$
This is symmetric, because the operator $\Delta_\Phi+(\nabla\psi_{\sigma,\Phi}, \nabla )$ is self-adjoint with respect to the weighted volume form $e^{\psi_{\sigma,\Phi}}d\mu_{\Phi}$.
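Concretely, adopting the sign convention $\int_X u\,\Delta_\Phi v\, d\mu_{\Phi}=-\int_X \big(\nabla u,\nabla v\big)d\mu_{\Phi}$ for real functions $u$, $v$, a single integration by parts gives
$$
\int_X u\Big(\Delta_\Phi v+\big(\nabla\psi_{\sigma,\Phi},\nabla v\big)\Big)e^{\psi_{\sigma,\Phi}}d\mu_{\Phi}
=-\int_X \big(\nabla u,\nabla v\big)e^{\psi_{\sigma,\Phi}}d\mu_{\Phi},
$$
which is manifestly symmetric in $u$ and $v$.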
The remaining term is the second one in (\ref{eq:dd_F0_3}).
It is
$$
-\int_X \big(\nabla\overline{\nabla}\psi_{\sigma,\Phi},
\nabla\frac{\partial\Phi}{\partial t}\overline{\nabla}\frac{\partial\Phi}{\partial s}
\big)e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi}
-
\int_X\big(\nabla\frac{\partial\Phi}{\partial t},\nabla\psi_{\sigma,\Phi}\big)
\big(\nabla\frac{\partial\Phi}{\partial s},\nabla\psi_{\sigma,\Phi}\big)
e^{\psi_{\sigma,\Phi}}
d\mu_{\Phi},
$$
which is symmetric.
\end{proof}
\subsection{Second derivative of $I_k^{\sigma}$}
\label{sec:hessian}
We give a computation of the second derivative of $I^\sigma_k$.
\begin{proof}[Proof of Lemma \ref{lem:hessian}]
\begin{eqnarray}
\nonumber
\frac{d^2 }{ds^2}I^\sigma_k(\phi_s)
&=&
k^n\frac{d}{ds}\int_X (k+\Delta_\phi )\phi' e^{{\psi_{\sigma,\phi}}} d\mu_{\phi}
\\
\nonumber
&=&
k^n\int_X (\nabla\overline{\nabla}\phi',\nabla\overline{\nabla}\phi')e^{\psi_{\sigma,\phi}}
d\mu_{\phi}
+k^n\int_X(k+\Delta_\phi )\phi''e^{\psi_{\sigma,\phi}} d\mu_{\phi}
\\
\label{eq:2nd_vr_01}
&&\quad
+k^n\int_X((k+\Delta_\phi )\phi'){\psi'_{\sigma,\phi}}e^{\psi_{\sigma,\phi}} d\mu_{\phi}
-k^n\int_X((k+\Delta_\phi )\phi')e^{\psi_{\sigma,\phi}}\Delta_\phi \phi' d\mu_{\phi}.
\end{eqnarray}
From (\ref{eq:derivative_psi}), the third term in (\ref{eq:2nd_vr_01}) is equal to
\begin{equation}\label{eq:2nd_vr_02}
k^n\int_X((k+\Delta_\phi )\phi')(\nabla{\psi_{\sigma,\phi}},\nabla\phi')e^{\psi_{\sigma,\phi}} d\mu_{\phi}.
\end{equation}
Integrating by parts, the fourth term in (\ref{eq:2nd_vr_01}) is equal to
\begin{eqnarray}
\nonumber
&&
-k^{n+1}\int_X|\nabla\phi'|^2e^{\psi_{\sigma,\phi}} d\mu_{\phi}
-k^{n+1}\int_X\phi'e^{\psi_{\sigma,\phi}}(\nabla{\psi_{\sigma,\phi}},\nabla\phi') d\mu_{\phi}
\\
\label{eq:2nd_vr_03}
&&\quad
-k^n\int_X(\nabla\Delta_\phi \phi',\nabla\phi')e^{\psi_{\sigma,\phi}} d\mu_{\phi}
-k^n\int_X(\Delta_\phi \phi') (\nabla{\psi_{\sigma,\phi}},\nabla\phi')e^{\psi_{\sigma,\phi}} d\mu_{\phi}.
\end{eqnarray}
Note that the sum of the second and fourth terms in (\ref{eq:2nd_vr_03}) cancels (\ref{eq:2nd_vr_02}).
The third term in (\ref{eq:2nd_vr_03}) is
\begin{eqnarray}
\nonumber
&&
-k^n\int_X
(\nabla\overline{\nabla}\phi',\nabla\overline{\nabla}\phi')
e^{\psi_{\sigma,\phi}} d\mu_{\phi}
-k^n\int_X
(\nabla\overline{\nabla}\phi',\nabla{\psi_{\sigma,\phi}}\overline{\nabla}\phi')
e^{\psi_{\sigma,\phi}} d\mu_{\phi}
\\
\nonumber
&=&
-k^n\int_X
(\nabla\overline{\nabla}\phi',\nabla\overline{\nabla}\phi')
e^{\psi_{\sigma,\phi}} d\mu_{\phi}
-k^n\int_X
|\nabla\phi'|^2\Delta_\phi {\psi_{\sigma,\phi}}
e^{\psi_{\sigma,\phi}} d\mu_{\phi}
\\
\nonumber
&&\quad
+k^n\int_X
|\nabla\phi'|^2|\nabla{\psi_{\sigma,\phi}}|^2
e^{\psi_{\sigma,\phi}} d\mu_{\phi}
\\
\label{eq:2nd_vr_04}
&=&
-k^n\int_X
(\nabla\overline{\nabla}\phi',\nabla\overline{\nabla}\phi')
e^{\psi_{\sigma,\phi}} d\mu_{\phi}
-k^n\int_X
|\nabla\phi'|^2\Delta_\phi
e^{\psi_{\sigma,\phi}} d\mu_{\phi}.
\end{eqnarray}
Substituting (\ref{eq:2nd_vr_02}), (\ref{eq:2nd_vr_03}) and (\ref{eq:2nd_vr_04}) into (\ref{eq:2nd_vr_01}), we get the second derivative of $I^\sigma_k(\phi)$.
\end{proof}
\section{Introduction}
Seeking the physical origin of two accelerated expansion regimes of the Universe, namely, the primordial inflation and the present cosmic acceleration, is one of the most important theoretical challenges of cosmology today.
These unknown physical origins are referred to as the primordial dark energy (DE) and the present DE. Various theoretical models have been proposed to accelerate the cosmological expansion.
Among those theoretical models, $f(R)$ gravity is a simple and nontrivial generalization of General Relativity. For a recent review, see Refs.~\cite{Sotiriou:2008rp,DeFelice:2010aj}. It introduces a function $f(R)$ in the action, where $R$ is the Ricci scalar. This additional degree of freedom plays the role of a scalar field, called the scalaron, and it can cause the accelerated expansion of the Universe. The original idea, recognized as the $R^2$ inflation model, was proposed in Ref.~\cite{Starobinsky:1980te}, where de Sitter expansion was derived as a solution of the Einstein equation with the quantum one-loop correction. After the accelerated expansion, particles are gravitationally created through the oscillation of the scalaron, which leads to the radiation-dominated Universe~\cite{Zeldovich:1977,Starobinsky:1981vz,Vilenkin:1985md,Mijic:1986iv,Ford:1987}. The $R^2$ model predicts an almost scale-invariant spectrum, whose scalar and tensor components are consistent with recent observational data (see, {\it e.g.}, \cite{Komatsu:2009}).
Later, the $R^2$ gravity was extended to a general function of $R$, namely $f(R)$, to describe the late-time cosmic acceleration. After some early challenges, viable $f(R)$ models were proposed~\cite{Hu:2007nk,Appleby:2007vb,Starobinsky:2007hu}, which realize a stable matter-dominated regime and the subsequent late-time acceleration. In these models, the expansion history of the Universe is close to that in the ${\rm \Lambda}$-cold dark matter (${\rm \Lambda}$CDM) model. From the model-selection point of view, the key to distinguishing the models is to focus on small deviations from the ${\rm \Lambda}$CDM model. As for background quantities, the equation-of-state parameter $w_{\rm DE}$ well characterizes the models. While it remains constant, $w_{\rm DE}=-1$, in the ${\rm \Lambda}$CDM model, it is time dependent in $f(R)$ gravity and even crosses the phantom divide at redshift $z\sim O(1)$~\cite{Hu:2007nk,Motohashi:2010tb,Motohashi:2011wy}. On the other hand, the growth of the matter density fluctuations is also useful to measure the deviation. Since in $f(R)$ gravity the effective gravitational constant depends on time and scale, the matter power spectrum is enhanced~\cite{Zhang:2005vt,Tsujikawa:2007gd,Hu:2007nk,Starobinsky:2007hu,Gannouji:2008wt,Tsujikawa:2009ku,Motohashi:2009qn,Narikawa:2009ux}. This enhancement not only measures the deviation but also has another interesting consequence: $f(R)$ gravity admits a larger neutrino mass. This is because massive neutrinos suppress the evolution of matter fluctuations by free streaming, which cancels the enhancement in $f(R)$ gravity. As a result, $f(R)$ gravity allows a neutrino mass up to $\sim 0.5$~eV~\cite{Motohashi:2010sj}. The constraint on the sterile neutrino mass is also relaxed up to $\sim 1$~eV, which is consistent with recent experiments~\cite{Motohashi:2012wc}.
Other distinguishable features of $f(R)$ gravity would be imprinted on cosmological gravitational waves \cite{Ananda:2008,Capozziello:2008,Alves:2009}. Future pulsar timing experiments and gravitational-wave detectors will be able to probe them directly and test gravity theories \cite{Lee:2008,Chamberlin:2011,Nishizawa:2009,Nishizawa:2010}.
However, the above viable $f(R)$ models still have theoretical problems~\cite{Starobinsky:2007hu}. If we start from some initial condition and evolve backward in time, the scalaron mass quickly diverges and the scalaron oscillates rapidly~\cite{Tsujikawa:2007xu}. Another problem is that the Ricci scalar also diverges in the past even if we include the nonlinear effect~\cite{Appleby:2008tv}. This curvature singularity was also pointed out in Refs.~\cite{Frolov:2008uf,Kobayashi:2008tq}.
To solve these problems, the $R^2$-corrected $f(R)$ model has been proposed~\cite{Appleby:2009uf}. This model is constructed from the late-time acceleration part and an $R^2$ term. Consequently, the reheating following inflation in this model is different from that in the $R^2$ model, {\it i.e.}, the scalaron does not oscillate harmonically.
Of course, we can construct other specific functional forms that avoid singularities and describe both the primordial DE and the present DE. However, these functions belong to the same class and their behaviors are similar~\cite{Motohashi:2012}. In this sense, it is worth studying one specific model in detail as an example of such a class of extended $f(R)$ models.
The aim of this paper is to investigate the evolution during the inflation and reheating regimes of the $R^2$-corrected $f(R)$ model in detail.
In the previous work~\cite{Appleby:2009uf}, the reheating dynamics was analyzed in the Jordan frame. However, the field equation is a fourth-order differential equation, which makes the physical interpretation unclear. In addition, in that analysis, the inflation and the reheating are solved separately with different initial conditions. Therefore, we focus on the evolution of the scalaron rolling on a potential in the Einstein frame, which clarifies the physical picture and allows us to understand the dynamics intuitively. We start from a certain initial condition imposed during the inflationary regime and numerically solve the transition from the inflation to the reheating and the subsequent reheating dynamics. Thus, our analysis is more accurate than the previous work \cite{Appleby:2009uf} and is complementary to the analysis in the Jordan frame.
This paper is organized as follows. In Sec.~\ref{sec-be}, we review the basic equations of $f(R)$ gravity in both the Jordan frame and the Einstein frame. We present the Einstein-frame potential of the $R^2$-corrected $f(R)$ model and consider its characteristics analytically. In Sec.~\ref{sec-an}, we derive the analytic solutions for the inflation and reheating in the Einstein frame. We adopt the slow-roll and the fast-roll approximations and solve the field equations in each era. Sec.~\ref{sec-nr} contains the results of the numerical calculation. We confirm that the analytic solutions are in good agreement with the numerical results. We also consider the behavior at the end of the reheating. In Sec.~\ref{sec-rtcn}, we consider the connection between observables and model parameters and discuss their allowed ranges. Sec.~\ref{sec-cn} is devoted to conclusions and discussion. Throughout the paper, we adopt units $c=\hbar=1$.
\section{Basic equations}
\label{sec-be}
We start by reviewing the basic equations of $f(R)$ gravity in both the Jordan frame and the Einstein frame. To avoid confusion between the frames, we attach the subscripts $J$ and $E$ to physical quantities in the Jordan frame and the Einstein frame, respectively; otherwise, we declare the frame in which the quantity is defined. $f(R)$ gravity is defined by the action
\be S=\int d^4x \sqrt{-g_J} \left[ \f{M_{\rm{Pl}}^2}{2} f(R_J) + {\cal L}_M(g^J_{\mu\nu}) \right] \;, \ee
where ${\cal L}_M$ is the Lagrangian density for the matter sector and $M_{\rm{Pl}}$ is the reduced Planck mass. By varying the action, we obtain the field equation in the Jordan frame
\begin{equation}
R^J_{\mu\nu} F(R_J) -\frac{1}{2} g^J_{\mu\nu} f(R_J) + ( g^J_{\mu \nu} \Box -\nabla_{\mu} \nabla_{\nu} ) F(R_J)
= \frac{T^J_{\mu\nu}}{M_{\rm{Pl}}^2} \;,
\label{eq10}
\end{equation}
with
\begin{equation}
F(R_J) \equiv \frac{d\,f(R_J)}{dR_J} \;, \quad \quad T_{\mu \nu}^J \equiv -\frac{2}{\sqrt{-g_J}} \frac{\delta (\sqrt{-g_J} {\cal{L}}_{\rm{M}})}{\delta g_J^{\mu \nu}} \;. \label{defF}
\end{equation}
We regard the Jordan frame as the physical frame. However, for our purpose of analyzing the inflation and the reheating in $f(R)$ gravity, the formulation in the Einstein frame is useful because it contains the additional degree of freedom more explicitly as a scalar field and enables us to use the analogy with single-field inflation. We can recast the theory as Einstein gravity with a scalar field by choosing the conformal transformation of the metric as $g^E_{\mu\nu}\equiv F(R_J) g^J_{\mu\nu}$. The canonical scalar field $\phi$ is defined by
\be F(R_J)\equiv e^{\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}. \label{FRJ} \ee
By the conformal transformation, the action is rewritten as
\be S=\int d^4x \sqrt{-g_E} \kk{\f{M_{\rm{Pl}}^2}{2} R_E-\f{1}{2}g_E^{\mu\nu} \partial_\mu \phi \partial_\nu \phi-V(\phi) +{\cal L}_M \left(e^{-\sqrt{\f{2}{3}}\f{\phi}{\Mpl}} g^E_{\mu\nu} \right)}, \ee
with the potential term
\be V(\phi)=\frac{\Mpl^2}{2} \f{R_J(\phi)F(R_J(\phi))-f(R_J(\phi))}{F(R_J(\phi))^2}. \label{EFpot} \ee
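As a simple illustration of Eq.~\eqref{EFpot}, consider the pure $R^2$ model $f(R_J)=R_J+R_J^2/(6M^2)$, for which $F=1+R_J/(3M^2)$, {\it i.e.}, $R_J=3M^2(F-1)$. Then
\be R_JF(R_J)-f(R_J)=\f{R_J^2}{6M^2}=\f{3M^2}{2}(F-1)^2, \ee
so that
\be V=\f{3M^2\Mpl^2}{4}\mk{1-\f{1}{F}}^2=\f{3M^2\Mpl^2}{4}\mk{1-e^{-\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}}^2, \ee
which is the well-known potential of the $R^2$ inflation and reappears below as the $\phi>0$ branch of Eq.~\eqref{potER2}.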
Then, the Einstein equation in the Einstein frame reduces to
\bea
H_E^2&=&\f{1}{3\Mpl^2}\kk{\f{1}{2}\mk{\f{d\phi}{dt_E}}^2 + V(\phi) + \rho_E}, \label{Eeq1} \\
\f{dH_E}{dt_E}&=&-\f{1}{2\Mpl^2}\kk{\mk{\f{d\phi}{dt_E}}^2 + \rho_E+P_E}. \label{Eeq2}
\eea
The equation of motion for the scalar field is
\be \f{d^2\phi}{dt_E^2}+3H_E\f{d\phi}{dt_E}+V_{,\phi}(\phi)=\f{1}{\sqrt{6} \Mpl} (\rho_E-3P_E). \label{Eeq3} \ee
From the conformal transformation, the time and scale factor in both frames are connected by
\be dt_J=e^{-\f{1}{\sqrt{6}}\f{\phi}{\Mpl}}dt_E, \quad a_J=e^{-\f{1}{\sqrt{6}}\f{\phi}{\Mpl}}a_E. \label{ta} \ee
The transformation of the Hubble parameter is derived from the above definitions,
\be H_J=e^{\f{1}{\sqrt{6}}\f{\phi}{\Mpl}}\mk{H_E-\f{1}{\sqrt{6}\Mpl}\f{d\phi}{dt_E}}. \label{HJ} \ee
By definition in Eq.~\eqref{defF}, the energy momentum tensors of the matter sector in both frames are connected by
\be T^E_{\mu\nu}= e^{-\sqrt{\f{2}{3}}\f{\phi}{\Mpl}} T^J_{\mu\nu}. \ee
For perfect fluid $T^{\mu}_{\nu}={\rm diag}(-\rho, P,P,P)$ in each frame, the energy density and the pressure are related as
\be \rho_E=e^{-2\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}\rho_J,\quad P_E=e^{-2\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}P_J. \ee
Note that the energy density in the Einstein frame couples with the scalaron.
In the inflation and reheating in $f(R)$ gravity, there is no inflaton field from the point of view of the Jordan frame. Consequently, particle creation occurs not through the decay of an inflaton but through gravitational reheating~\cite{Zeldovich:1977,Starobinsky:1981vz,Vilenkin:1985md,Mijic:1986iv,Ford:1987}. Let us consider the gravitational particle creation in the Jordan frame. We introduce a minimally or nonminimally coupled massless scalar field $\chi$, which describes the created particles, into the matter action,
\be S=\int d^4x \sqrt{-g_J} \kk{\f{\Mpl^2}{2}f(R_J) -\f{1}{2}g_J^{\mu\nu} \partial_{\mu} \chi \partial_{\nu} \chi -\f{1}{2} \xi R\chi^2}. \ee
Since the scalar field $\chi$ couples with the metric in the Jordan frame, the radiation (massless scalar particle) is created purely gravitationally.
Adopting the standard treatment of the quantum field theory in curved spacetime, we can expand $\chi$ in Fourier modes with the annihilation and creation operators. Then, computing the Bogolubov coefficients in the expanding Universe, we obtain the number density of the created scalar particles~\cite{Zeldovich:1977,Starobinsky:1981vz,Vilenkin:1985md,Mijic:1986iv},
\be n_J(t_J)=\f{(1-6\xi)^2}{576\pi a_J^3} \int_{-\infty}^{t_J} dt'_J a_J^3R_J^2. \ee
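For an exactly de Sitter stage, $a_J\propto e^{Ht_J}$ with constant $R_J=12H^2$, the integral can be evaluated in closed form (an illustration, not used in what follows):
\be n_J=\f{(1-6\xi)^2R_J^2}{576\pi a_J^3}\int_{-\infty}^{t_J}dt'_J\,a_J^3=\f{(1-6\xi)^2R_J^2}{1728\pi H}=\f{(1-6\xi)^2H^3}{12\pi}, \ee
so the creation rate is set by the curvature scale and shuts off once $R_J$ drops after inflation.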
The above equation holds regardless of the functional form of $f(R)$. Note that the particle creation is sourced by the Ricci curvature. Since the Ricci curvature is significantly suppressed during the reheating era, as we shall see below, particle creation hardly occurs there. On the other hand, the inflationary dynamics is the same as that of the $R^2$ inflation. Therefore, we can use the approximate formula for the $R^2$ model and turn off the particle creation during the reheating era when we perform the numerical calculation. In the $R^2$ model with $f(R_J)=R_J+R_J^2/(6M^2)$, the energy density of the created particles is
\be \rho_J(t_J)=\f{g_* M (1-6\xi)^2}{1152\pi a_J^4} \int_{-\infty}^{t_J} dt'_J a_J^4 R_J^2 \label{rhoint}, \ee
where $g_*$ denotes the number of relativistic degrees of freedom. In the present paper, we consider a minimally coupled massless scalar field and set $\xi=0$ hereafter.
The evolution equation for the energy density of radiation is then
\be \f{d\rho_J}{dt_J}=-4H_J\rho_J+\f{g_*MR_J^2}{1152\pi}\;. \label{Jeq1} \ee
The pressure is obtained from the energy conservation equation,
\be P_J=\f{\rho_J}{3}-\f{g_*MR_J^2}{3456\pi H_J}. \ee
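As a consistency check, substituting this $P_J$ into the conservation law $d\rho_J/dt_J=-3H_J(\rho_J+P_J)$ reproduces Eq.~\eqref{Jeq1}:
\be -3H_J(\rho_J+P_J)=-3H_J\mk{\f{4\rho_J}{3}-\f{g_*MR_J^2}{3456\pi H_J}}=-4H_J\rho_J+\f{g_*MR_J^2}{1152\pi}. \ee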
Finally, we introduce the $gR^2$-AB model~\cite{Appleby:2009uf}, which describes the accelerated expansion in both the early and the present Universe,
\be f (R_J)=(1-g)R_J+g M^2\d \log\kk{\f{\cosh(R_J/\d M^2-b)}{\cosh b}}+\f{R_J^2}{6M^2}, \label{gR2AB} \ee
where $g,~b,~\d$, and $M$ are positive model parameters.
$\d$ describes the ratio of the energy scale of the present DE to that of the primordial DE and takes an extremely small value.
$M$ will be fixed to $M/\Mpl \approx 1.2\times 10^{-5}$ in Sec.~\ref{sec-rtcn} by the temperature fluctuations of the cosmic microwave background (CMB) anisotropy. $g$ is constrained to the range $0<g<1/2$ by the stability conditions of $f(R)$ gravity: $F(R_J)>0$ and $dF(R_J)/dR_J>0$. Moreover, $g$ and $b$ are further constrained by another stability condition, as we shall show in Sec.~\ref{sec-rtcn}.
The function $F(R_J)$ is given by
\be F(R_J)=1-g+\f{R_J}{3M^2}+g \tanh (R_J/M^2\d -b). \label{FgABR2} \ee
The above $f(R_J)$ function is equivalently written in the following form:
\begin{align}
f(R_J) &=R_J-\frac{R_{\rm{vac}}}{2} + g M^2 \delta \log \left[ 1+e^{-2(R_J/M^2\delta -b)} \right]+\frac{R_J^2}{6M^2} \;,
\label{eq58} \\
R_{\rm{vac}} &\equiv 2 g M^2 \delta \left\{ b+\log(2\cosh b) \right\}.
\end{align}
In Eq.~(\ref{eq58}), the fourth term dominates in the high-curvature regime $R_J \gg M^2$ and causes inflation~\cite{Starobinsky:1980te}. The third term alters the reheating dynamics after the inflation, which is characteristic of the $gR^2$-AB model. The second term plays the same role as the current cosmological constant. By substituting the action into Eq.~(\ref{eq10}), the equation of motion in the Jordan frame is given by
\begin{align}
&H_J^{''} H_J-\frac{1}{2} (H_J^{'})^2+3 H_J^{'} H_J^2 +\frac{1-g}{2} M^2 H_J^2 -\frac{g}{2} M^2 (H_J^{'}+H_J^2) \tanh \left[ \frac{R_J}{M^2\delta}-b \right] \nonumber \\
&+\frac{g}{12} M^4\delta \ln \left[ \frac{\cosh (R_J/M^2\delta -b)}{\cosh b} \right] + \frac{3g(H_J^{''} H_J + 4 H_J^{'} H_J^2)}{\delta \cosh^2 (R_J/M^2\delta -b)}= \frac{M^2}{6M_{\rm{Pl}}^2} \rho_J \;.
\label{eq11}
\end{align}
The primes denote derivatives with respect to $t_J$. As expected from the specific form of the $f(R_J)$ function, the equation of motion in the Jordan frame is a nonlinear differential equation and is quite complicated to solve.
The dynamics of inflation and reheating can be understood more intuitively from the potential in the Einstein frame, which is depicted in Fig.~\ref{fig:pot} (a). From the definition Eq.~\eqref{EFpot}, we can interpret the shape of the potential in the following way. First, we notice from Eq.~\eqref{FgABR2} that $F(R_J)$ is almost a step function at $R_J/M^2\simeq b\d$, with the amplitude changing from $F=1-2g$ to $1$. In terms of the scalaron, the step corresponds to the interval from $\phi/\Mpl=\sqrt{6}\log\gamma$ to $0$, where $\gamma\equiv \sqrt{1-2g}$. In other words, while the scalaron moves in the range $\sqrt{6}\log\gamma \leq \phi/\Mpl \leq 0$, $R_J$ remains almost constant, $R_J/M^2\simeq b\d$. Outside this interval, $F(R_J)$ is approximated by $F\simeq 1+R_J/3M^2$ for $R_J/M^2 > b\d$, {\it i.e.}, $\phi/\Mpl>0$, and by $F\simeq 1-2g+R_J/3M^2$ for $R_J/M^2 < b\d$, {\it i.e.}, $\phi/\Mpl<\sqrt{6}\log\gamma$. By using these approximations, we can derive the potential analytically in terms of $\phi$:
\be
\f{V(\phi)}{\Mpl^2M^2}\simeq \left\{
\begin{array}{ll}
\displaystyle \f{3}{4}\mk{1-e^{-\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}}^2, &\quad (\phi/\Mpl> 0) \\
\displaystyle \f{3}{4}\mk{1-\gamma^2 e^{-\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}}^2. &\quad (\phi/\Mpl< \sqrt{6}\log \gamma)
\end{array}
\right. \label{potER2}
\ee
In these two regions, the potential is the same as that in the $R^2$ inflation. On the other hand, the characteristic plateau shows up for $\sqrt{6}\log\gamma<\phi/\Mpl<0$. By using $R_J/M^2\simeq b\d$, we can approximate the potential as
\be \f{V(\phi)}{\Mpl^2M^2}\simeq\f{b\d e^{\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}-f(b\d M^2)/M^2}{2 e^{2\sqrt{\f{2}{3}}\f{\phi}{\Mpl}}}. \quad (\sqrt{6}\log \gamma<\phi/\Mpl< 0) \ee
From this expression, we can estimate the height of the bump in the plateau. We obtain the position of the local maximum by solving $V'=0$. The solution is $\phi/\Mpl\simeq \sqrt{3/2}\log[2(1-2g)+b\d/3]\equiv \phi_m/\Mpl$. The potential has a local maximum when $\phi_m$ satisfies $\sqrt{6}\log \gamma < \phi_m/\Mpl < 0$, namely, $(3+b\d)/12 \lesssim g \lesssim (3+b\d)/6$. For $\d\ll 1$, this condition amounts to $1/4 \lesssim g \lesssim 1/2$. Therefore, in Fig.~\ref{fig:pot} (c), the potential for $g=0.35$ possesses a local maximum and a false vacuum. However, as $g$ approaches $g=1/4$, the local maximum tends to disappear.
The potential heights at the right edge, the local maximum, and the left edge are given by
\be \f{V(0)}{\Mpl^2M^2}\simeq bg\d,\quad \f{V_\max}{\Mpl^2M^2}\simeq \f{b\d}{8(1-2g)} , \quad \f{V(\sqrt{6}\Mpl\log \gamma)}{\Mpl^2M^2}\simeq 0, \label{V0Vmax} \ee
where we used $\d\ll 1$ and $\log(\cosh b)\simeq b$. These estimates explain well the parameter dependence of the potential shape in Fig.~\ref{fig:pot} (b) - (d).
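These analytic estimates can be checked numerically. The following sketch (ours, not part of the original analysis) evaluates the exact potential \eqref{EFpot} for the parameters of Fig.~\ref{fig:pot}~(a) in units $M=\Mpl=1$; the grid range and the comparison are illustrative choices.

```python
import numpy as np

def logcosh(x):
    # overflow-safe log(cosh x)
    x = np.abs(x)
    return x + np.log1p(np.exp(-2.0 * x)) - np.log(2.0)

g, b, d = 0.35, 5.0, 5e-8  # parameters of Fig. 1(a)

def F(R):  # F(R_J), Eq. (FgABR2)
    return 1.0 - g + R / 3.0 + g * np.tanh(R / d - b)

def f(R):  # f(R_J), Eq. (gR2AB)
    return (1.0 - g) * R + g * d * (logcosh(R / d - b) - logcosh(b)) + R**2 / 6.0

R = np.linspace(1e-12, 60.0 * d, 400001)   # resolves the step region R_J ~ b*delta
phi = np.sqrt(1.5) * np.log(F(R))          # scalaron, Eq. (FRJ)
V = (R * F(R) - f(R)) / (2.0 * F(R)**2)    # potential, Eq. (EFpot)

gamma = np.sqrt(1.0 - 2.0 * g)
plateau = (phi > np.sqrt(6.0) * np.log(gamma)) & (phi < 0.0)
V0 = V[np.argmin(np.abs(phi))]             # height at the right edge, phi = 0
Vmax = V[plateau].max()                    # height of the bump
print(V0 / (b * g * d), Vmax / (b * d / (8.0 * (1.0 - 2.0 * g))))
```

The first ratio comes out very close to unity; the second is close to unity only up to the $\log(\cosh b)\simeq b$ approximation, which shifts it by $\mathcal{O}(10\%)$.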
Next, let us consider the evolution of the scalaron.
The scalaron starts slow rolling from $\phi>0$ and plays the role of the inflaton. For $\phi>0$, the potential is the same as that of the pure $R^2$ model and is almost independent of the model parameters $g,~b$ and $\d$. Thus, the scale factor in the Einstein frame experiences quasi-de-Sitter expansion. In this case, the scale factor in the Jordan frame also evolves exponentially, because the amplitude of $\phi$ varies slowly and thus the scale factors in both frames are related by an almost constant factor. As the scalaron approaches $\phi=0$, it rolls faster and enters the potential plateau with kinetic energy larger than the potential energy. Then the scalaron oscillates in the plateau, gradually loses its kinetic energy, and finally reaches the false vacuum at $\phi=0$, because the chameleon effect lifts up the potential when the energy density of matter is not negligible~\cite{Motohashi:2012}. During this oscillation, the scale factor in the Jordan frame undergoes periodic evolution due to the exponential factor in Eq.~(\ref{ta}). We shall see these situations in the next section.
\begin{figure}[t]
\centering
\includegraphics[width=85mm]{V.eps}
\includegraphics[width=85mm]{V_d.eps}
\includegraphics[width=85mm]{V_g.eps}
\includegraphics[width=85mm]{V_b.eps}
\caption{
Inflaton potential of the $gR^2$-AB model in the Einstein frame. (a): Potential for $g=0.35, b=5, \d=5\times 10^{-8}$. For $\phi>0$ and $\phi<\sqrt{6}\Mpl\log \gamma$, the potential is similar to that in the pure $R^2$ inflation. On the other hand, for $\sqrt{6}\Mpl\log \gamma<\phi<0$, there is the characteristic plateau. The scalaron starts slow rolling from $\phi>0$, enters the plateau with large kinetic energy, and oscillates inside it. (b) - (d): How the plateau changes its shape when one parameter is varied from its value in (a). The typical height of the plateau is $bg\d$ from the analytic estimation. }
\label{fig:pot}
\end{figure}
\section{Analytic Solutions}
\label{sec-an}
To investigate the reheating dynamics in the Jordan frame, we work in the Einstein frame and derive analytic expressions for the motion of the scalaron in the inflaton potential. For $\phi>0$, the potential almost reduces to that of the pure $R^2$ model, so we can use the slow-roll approximation in Sec.~\ref{ssec-sr} and derive the analytic solutions. After the end of the inflation, the kinetic energy of the scalaron is dominant. Therefore, we can neglect the small-scale structure of the potential during the oscillation in the plateau and use the approximation $R_J/M^2\sim b\d$. We shall explore this case in Sec.~\ref{ssec-fr}. Since in both the slow-roll and oscillation regimes the energy density of the created radiation is subdominant compared with the energy density of the inflaton, we neglect its backreaction on the background dynamics when deriving the analytic solutions. However, the energy density of radiation eventually becomes of the same order as the total energy of the inflaton, and the reheating ends.
In the following, we define the dimensionless variables $\hat t=Mt,~\hat\phi=\phi/\Mpl,~\hat V=V/M^2\Mpl^2,~\hat H=H/M,~\hat R=R/M^2,~\hat \rho_{\rm{r}}=\rho_{\rm{r}}/\Mpl^2M^2$ and omit the hats in this section to avoid notational clutter.
\subsection{Slow roll approximation}
\label{ssec-sr}
The inflaton starts to roll down the potential at $t_E=t_{E,\ini}$ ($t_J=t_{J,\ini}$) with kinetic energy small compared to the height of the potential. The energy density of the radiation is also negligible. The field equations \eqref{Eeq1} -- \eqref{Eeq3} under the slow-roll approximation are
\bea
H_E&=&\sqrt{\f{V(\phi)}{3}},\\
\dot H_E&=&-\f{V_\phi(\phi)^2}{6V},\\
\dot\phi&=&-\f{V_\phi(\phi)}{\sqrt{3V(\phi)}},
\eea
where the dot denotes the derivative with respect to the time in the Einstein frame and the potential is approximated by Eq.~\eqref{potER2}.
We set the initial condition as $\phi=\phi_\ini$; the other initial quantities are fixed from the above equations with the approximated potential.
By substituting the potential into the field equations, we can derive the following solutions:
\bea
\phi(t_E)&=&\sqrt{\f{3}{2}}\log\kk{d_\ini^2-\f{2}{3}(t_E-t_{E,\ini})}, \label{phisr} \\
\dot \phi(t_E)&=&-\sqrt{\f{2}{3}}\kk{d_\ini^2-\f{2}{3}(t_E-t_{E,\ini})}^{-1}, \label{dphisr} \\
H_E(t_E)&=&\f{1}{2}\kk{1-\mk{d_\ini^2-\f{2}{3}(t_E-t_{E,\ini})}^{-1}},
\label{HEsr} \\
a_E(t_E)&=& a_{E,\ini} e^{(t_E-t_{E,\ini})/2} \kk{1-\f{2}{3}d_\ini^{-2}(t_E-t_{E,\ini})}^{3/4}, \label{aesr}\\
R_J(t_E)&=&3(e^{\sqrt{\f{2}{3}}\phi(t_E)}-1), \label{rjsr}
\eea
where we use $d_\ini\equiv e^{\f{\phi_\ini}{\sqrt{6}}}$ instead of $\phi_\ini$ itself.
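As a sanity check, the closed-form solutions \eqref{phisr} -- \eqref{HEsr} can be verified against the slow-roll equations numerically. The Python sketch below assumes that the approximated potential of Eq.~\eqref{potER2} takes the standard dimensionless Einstein-frame $R^2$ form $V(\phi)=\tfrac{3}{4}(1-e^{-\sqrt{2/3}\phi})^2$ (an assumption, since that equation is not reproduced here); the value of $d_\ini$ is illustrative:

```python
import math

# Assumed form of the approximated potential (dimensionless, M = Mpl = 1):
# V(phi) = (3/4) * (1 - exp(-sqrt(2/3) * phi))**2
SQ23 = math.sqrt(2.0 / 3.0)

def V(phi):
    return 0.75 * (1.0 - math.exp(-SQ23 * phi)) ** 2

def V_phi(phi):
    e = math.exp(-SQ23 * phi)
    return 1.5 * SQ23 * (1.0 - e) * e

d_ini, t_E_ini = 6.0, 0.0        # illustrative; d_ini = exp(phi_ini / sqrt(6))

def u(t):                        # common building block of the solutions
    return d_ini ** 2 - (2.0 / 3.0) * (t - t_E_ini)

phi  = lambda t: math.sqrt(1.5) * math.log(u(t))
dphi = lambda t: -SQ23 / u(t)
H_E  = lambda t: 0.5 * (1.0 - 1.0 / u(t))

# The closed forms must satisfy H = sqrt(V/3) and dphi/dt = -V_phi/sqrt(3V).
for t in (0.0, 10.0, 30.0):
    assert abs(H_E(t) - math.sqrt(V(phi(t)) / 3.0)) < 1e-12
    assert abs(dphi(t) + V_phi(phi(t)) / math.sqrt(3.0 * V(phi(t)))) < 1e-12
print("slow-roll solutions verified")
```

The check passes identically for any $d_\ini>1$ and $t_E$ within the slow-roll window $u(t_E)>1$.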
The remaining task is to write down $t_E$ in terms of $t_J$ and to convert the above solutions.
The Jordan-frame time is obtained by integrating Eq.~\eqref{ta},
\be t_J(t_E)=t_{J,\ini}-3\sqrt{d_\ini^2-\f{2}{3}(t_E-t_{E,\ini})}+3d_\ini, \ee
and its inverse function is
\be t_E(t_J)=t_{E,\ini}+(t_J-t_{J,\ini})\kk{d_\ini-\f{1}{6}(t_J-t_{J,\ini})}. \label{tetjsr} \ee
Thus, substituting Eq.~\eqref{tetjsr} into the solutions~\eqref{phisr} -- \eqref{rjsr} and using Eqs.~\eqref{ta} and \eqref{HJ}, we obtain the analytic solutions in terms of the Jordan frame quantities.
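A quick numerical check (with an illustrative value of $d_\ini$) confirms that Eq.~\eqref{tetjsr} is indeed the inverse of the preceding relation between the two time coordinates:

```python
import math

d_ini = 6.0                     # illustrative
tE_ini = tJ_ini = 0.0

def tJ_of_tE(tE):               # integrated relation t_J(t_E)
    return (tJ_ini - 3.0 * math.sqrt(d_ini ** 2 - (2.0 / 3.0) * (tE - tE_ini))
            + 3.0 * d_ini)

def tE_of_tJ(tJ):               # Eq. (tetjsr)
    s = tJ - tJ_ini
    return tE_ini + s * (d_ini - s / 6.0)

# composing the two maps must return the original Einstein-frame time
for tE in (0.0, 5.0, 20.0, 40.0):
    assert abs(tE_of_tJ(tJ_of_tE(tE)) - tE) < 1e-9
print("inverse relation verified")
```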
Next, we derive an analytic solution for $\rho_J$ by performing the integration in Eq.~\eqref{rhoint} with Eqs.~\eqref{ta}, \eqref{phisr}, \eqref{aesr}, \eqref{rjsr} and \eqref{tetjsr},
\begin{align}
\rho_r(t_J)&=\f{g_*}{1152\pi}\mk{\f{M}{\Mpl}}^2
\f{1}{16S^2} \nonumber \\
& \times \kk{-2\sqrt{3}S(4S^4-14S^2+15)+3e^{S^2}\ck{-6d_\ini e^{-3d_\ini^2}(12d_\ini^4-14d_\ini^2+5)+5\sqrt{3\pi}\mk{\erf(\sqrt{3}d_\ini)+\erf(S)}}}, \label{rhoJsr}
\end{align}
where $S\equiv (t_J-t_{J,\ini}-3d_\ini)/\sqrt{3}$ and $\erf(x)$ is the error function. Hereafter, we denote $\rho_J$ by $\rho_r$ to emphasize that it is the radiation component.
Finally, we focus on the boundary conditions. By using the above analytic solutions, we define the time $t_E=t_{E0}$ or $t_J=t_{J0}$ when the inflaton reaches $\phi=0$ for the first time. Strictly speaking, when $\phi\simeq 0$, the slow-roll approximation is no longer valid and the boundary conditions are very sensitive because of the sudden transition of the potential at $\phi\simeq 0$. Therefore, we should use the boundary conditions obtained from the numerical computation and rely on the analytical boundary values only as estimates. We shall revisit this point in Sec.~\ref{sec-nr}.
The time $t_{E0}$ is analytically estimated by using the analytic solution \eqref{phisr},
\be t_{E0}=t_{E,\ini}+\f{3}{2}(d_\ini^2-1). \ee
In terms of Jordan frame time,
\be t_{J0}=t_{J,\ini}+3(d_\ini-1). \ee
The boundary conditions are
\be
\phi_0=0, \quad
\dot \phi_0=-\sqrt{\f{2}{3}}, \quad
a_{E0}=a_{E,\ini}d_\ini^{-2}e^{\f{3}{4}(d_\ini^2-1)}.
\ee
From Eq.~\eqref{Eeq1},
\be
H_{E0}=\f{1}{3},\quad
H_{J0}=\f{2}{3}.\label{eq8}
\ee
The energy density $\rho_r$ at $t_J=t_{J0}$ is given by setting $S=-\sqrt{3}$ in Eq.~\eqref{rhoJsr} and taking the large-$\phi_{\rm{ini}}$ limit, {\it i.e.}, $d_\ini \gg 1$,
\begin{equation}
\rho_{r0} = \f{c_0 g_*}{1152\pi}\mk{\f{M}{\Mpl}}^2 \;. \nonumber
\end{equation}
The coefficient $c_0$ is analytically given by $\kk{18+5e^3\sqrt{3\pi}\mk{1-\erf(\sqrt{3})}}/16 \approx 1.4$; however, at this intermediate stage between the slow roll and the fast roll, the slow-roll approximation does not give a precise value, so we will later use the result of the numerical calculation for the precise determination of $c_0$. We note that since $\rho_{r0}\ll 3H_{J0}^2$ for the typical values $g_*=106.75$ and $M \ll M_{\rm{Pl}}$, we can neglect the energy density of the created radiation for the purpose of deriving analytic formulas for the reheating dynamics in the next subsection.
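For reference, the quoted analytic estimate of $c_0$ can be reproduced with a few lines of Python:

```python
import math

# c0 = [18 + 5 e^3 sqrt(3 pi) (1 - erf(sqrt(3)))] / 16  ~  1.4
c0 = (18.0 + 5.0 * math.exp(3.0) * math.sqrt(3.0 * math.pi)
      * (1.0 - math.erf(math.sqrt(3.0)))) / 16.0
print(round(c0, 2))   # -> 1.4
```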
\subsection{Fast roll approximation}
\label{ssec-fr}
When the inflaton falls down to the plateau at the bottom of the potential, which lies in $\sqrt{6}\log\gamma<\phi<0$, its kinetic energy is much greater than the potential energy, which is typically of the order of $bg\d$. Moreover, it is also greater than the energy density of radiation. Thus, we will use two approximations to derive the analytic solutions. First, the energy density of radiation created in the slow-roll regime is subdominant, as we mentioned above. Second, particle creation is negligible during the fast-roll regime because the source term $R_J^2$ keeps an almost constant value $b^2\d^2 \ll 1$. Thus, we can use the fast-roll approximation in Eqs.~(\ref{Eeq1}) -- (\ref{Eeq3}):
\bea
&&H_E^2=\f{\dot\phi^2}{6},\label{sr1}\\
&&\dot H_E=-\f{\dot\phi^2}{2},\label{sr2}\\
&&\ddot\phi+3H_E\dot\phi=0.\label{sr3}
\eea
In the plateau, the inflaton repeatedly oscillates between the left and right walls of the potential. We separately consider the time intervals depending on the direction of motion of the inflaton and derive the analytic solution in each regime.
The inflaton is reflected at the left side of the plateau at $\phi=\sqrt{6}\log\gamma$. We regard the reflection as occurring instantly and define the first reflection time as $t=t_{E1}$. After that, the inflaton reaches the right wall at $\phi=0$ and is reflected at $t=t_{E2}$. Thus, we can define $t_{En}$ recursively until the inflaton stops.
First, let us consider the interval $t_{E0}<t_E<t_{E1}$.
Eliminating $\dot\phi$ from Eqs.~\eqref{sr1} and \eqref{sr2} and then solving the resulting equation, we obtain the Hubble parameter and the scale factor
\bea
H_E(t_E)&=&\f{H_{E0}}{3H_{E0}(t_E-t_{E0})+1}, \label{HE1} \\
a_E(t_E)&=&a_{E0}\kk{3H_{E0}(t_E-t_{E0})+1}^{1/3}. \label{aE1}
\eea
From Eq.~\eqref{sr1}, we obtain $\dot\phi=\pm \sqrt{6}H_E$. For $t_{E0}<t_E<t_{E1}$, since the inflaton moves in the left direction, we choose $\dot\phi=-\sqrt{6} H_E$. Therefore, the solution is given by
\be \phi(t_E)=-\sqrt{\f{2}{3}}\log\kk{3H_{E0}(t_E-t_{E0})+1}. \label{pE1} \ee
Next, we move to the interval $t_{E1}<t_E<t_{E2}$. By using $\dot\phi=+\sqrt{6}H_E$, we can derive
\bea
H_E(t_E)&=&\f{H_{E1}}{3H_{E1}(t_E-t_{E1})+1}, \label{HE2} \\
a_E(t_E)&=&a_{E1}\kk{3H_{E1}(t_E-t_{E1})+1}^{1/3}, \label{aE2} \\
\phi(t_E)&=&\sqrt{\f{2}{3}}\log\kk{3H_{E1}(t_E-t_{E1})+1}+\sqrt{6}\log\gamma \;. \label{pE2}
\eea
The matching conditions between the two intervals are determined by setting $\phi(t_{E1})=\sqrt{6}\log\gamma$ in the solution for $t_{E0}<t_E<t_{E1}$ as
\bea
t_{E1}&=&t_{E0}+\f{\gamma^{-3}-1}{3H_{E0}}, \\
H_{E1}&=&H_{E0}\gamma^{3},\\
a_{E1}&=&a_{E0}\gamma^{-1}.
\eea
By substituting them into the solutions \eqref{HE2} -- \eqref{pE2}, we obtain the same expressions as in Eqs.~\eqref{HE1} and \eqref{aE1} for $H_E(t_E)$ and $a_E(t_E)$. However, $\phi(t_E)$ is different from Eq.~\eqref{pE1}. It is given by
\be \phi(t_E)=\sqrt{\f{2}{3}}\log\kk{3H_{E0}(t_E-t_{E0})+1}+2\sqrt{6}\log\gamma. \ee
Likewise, by solving the recurrence equations, we conclude that the boundary conditions for general $n$ are
\begin{align}
t_{En}&=t_{E0}+\f{\gamma^{-3n}-1}{3H_{E0}}, \label{tEn} \\
H_{En}&=H_{E0}\gamma^{3n},
\label{eq9} \\
a_{En}&=a_{E0}\gamma^{-n},\\
\phi_{En}&=\left\{
\begin{array}{ll}
\displaystyle 0 &\quad (n:{\rm even}) \\
\displaystyle \sqrt{6}\log\gamma &\quad (n:{\rm odd})
\end{array}
\right. \;.
\end{align}
In particular, it is noteworthy that the following relation holds regardless of the parity of $n$:
\be 3H_{En-1}(t_{E}-t_{En-1})+1=\gamma^{3(n-1)}[3H_{E0}(t_{E}-t_{E0})+1] \;. \ee
Thus, the solutions for $t_{En-1}<t_E<t_{En}$ are
\bea
H_E(t_E)&=&\f{H_{E0}}{3H_{E0}(t_E-t_{E0})+1},
\label{eq5} \\
a_E(t_E)&=&a_{E0}\kk{3H_{E0}(t_E-t_{E0})+1}^{1/3},
\label{eq6} \\
\phi(t_E)&=&\left\{
\begin{array}{ll}
\displaystyle -\sqrt{\f{2}{3}}\log\kk{3H_{E0}(t_E-t_{E0})+1}-(n-1)\sqrt{6}\log\gamma &\quad (n:{\rm odd}) \\
\displaystyle \sqrt{\f{2}{3}}\log\kk{3H_{E0}(t_E-t_{E0})+1}+n\sqrt{6}\log\gamma &\quad (n:{\rm even})
\end{array}
\right. \;, \\
\dot\phi(t_E)&=&\left\{
\begin{array}{ll}
\displaystyle -\sqrt{6}H_E(t_E) &\quad (n:{\rm odd}) \\
\displaystyle \sqrt{6}H_E(t_E) &\quad (n:{\rm even})
\end{array}
\right. \;. \label{dphifrn}
\eea
Notice that the solutions for $\phi$ and $\dot \phi$ take different expressions for the two parities of $n$ because of the direction of the motion.
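The boundary-condition recurrence Eqs.~\eqref{tEn} -- \eqref{eq9} can be checked numerically; the values of $\gamma$ and $H_{E0}$ below are illustrative (any $0<\gamma<1$ gives the same result):

```python
H_E0, gamma = 1.0 / 3.0, 0.7        # H_E0 from the boundary conditions; gamma illustrative
t_E0 = 0.0

def t_En(n):
    return t_E0 + (gamma ** (-3 * n) - 1.0) / (3.0 * H_E0)

def H_En(n):
    return H_E0 * gamma ** (3 * n)

# identity: 3 H_{En-1} (t - t_{En-1}) + 1 = gamma^{3(n-1)} [3 H_E0 (t - t_E0) + 1]
for n in (1, 2, 3, 5):
    t = 0.5 * (t_En(n - 1) + t_En(n))   # a time inside the n-th interval
    lhs = 3.0 * H_En(n - 1) * (t - t_En(n - 1)) + 1.0
    rhs = gamma ** (3 * (n - 1)) * (3.0 * H_E0 * (t - t_E0) + 1.0)
    assert abs(lhs - rhs) < 1e-9 * rhs
print("recurrence relation verified")
```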
Once the solutions in the Einstein frame are at hand, it is straightforward to convert them to those in the Jordan frame. The Jordan frame time $t_J$ for $t_{En-1}<t_E<t_{En}$ is obtained by integrating $e^{-\phi/\sqrt{6}}$,
\be
t_J(t_E)=
\left\{
\begin{array}{ll}
\displaystyle t_{Jn-1}+\f{\gamma^{n-1}}{4H_{E0}} \left[ \left\{ 3 H_{E0}(t_E-t_{E0})+1\right\}^{4/3}-\gamma^{-4(n-1)} \right] &\quad (n:{\rm odd}) \\
\displaystyle t_{Jn-1}+\f{\gamma^{-n}}{2H_{E0}} \left[ \left\{ 3 H_{E0}(t_E-t_{E0})+1\right\}^{2/3}-\gamma^{-2(n-1)} \right] &\quad (n:{\rm even})
\end{array}
\right. \;, \label{tJtEfr}
\ee
where $t_{Jn}$ is given by
\be
t_{Jn}=
\left\{
\begin{array}{ll}
\displaystyle t_{J0}+\f{(\gamma^4+\gamma^2+2)(\gamma^{-3(n-1)}-1)}{4H_{E0}(\gamma^4+\gamma^2+1)}+\f{\gamma^{-4}-1}{4H_{E0}\gamma^{3(n-1)}} &\quad (n:{\rm odd}) \\
\displaystyle t_{J0}+\f{(\gamma^4+\gamma^2+2)(\gamma^{-3n}-1)}{4H_{E0}(\gamma^4+\gamma^2+1)} &\quad (n:{\rm even})
\end{array}
\right. \;.
\label{eq4}
\ee
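Equations \eqref{tJtEfr} and \eqref{eq4} can be cross-checked by demanding continuity: the $n$-th branch of $t_J(t_E)$, evaluated at $t_E=t_{En}$, must reproduce $t_{Jn}$. The sketch below verifies this with illustrative values of $\gamma$ and $H_{E0}$:

```python
H_E0, gamma, t_E0, t_J0 = 1.0 / 3.0, 0.7, 0.0, 0.0   # illustrative

def t_En(n):
    return t_E0 + (gamma ** (-3 * n) - 1.0) / (3.0 * H_E0)

def t_Jn(n):                    # Eq. (eq4)
    g2, g4 = gamma ** 2, gamma ** 4
    if n % 2 == 1:
        return (t_J0 + (g4 + g2 + 2.0) * (gamma ** (-3 * (n - 1)) - 1.0)
                / (4.0 * H_E0 * (g4 + g2 + 1.0))
                + (gamma ** -4 - 1.0) / (4.0 * H_E0 * gamma ** (3 * (n - 1))))
    return (t_J0 + (g4 + g2 + 2.0) * (gamma ** (-3 * n) - 1.0)
            / (4.0 * H_E0 * (g4 + g2 + 1.0)))

def t_J_of_t_E(t_E, n):         # Eq. (tJtEfr), valid for t_{En-1} < t_E < t_{En}
    u = 3.0 * H_E0 * (t_E - t_E0) + 1.0
    if n % 2 == 1:
        return (t_Jn(n - 1) + gamma ** (n - 1) / (4.0 * H_E0)
                * (u ** (4.0 / 3.0) - gamma ** (-4 * (n - 1))))
    return (t_Jn(n - 1) + gamma ** (-n) / (2.0 * H_E0)
            * (u ** (2.0 / 3.0) - gamma ** (-2 * (n - 1))))

for n in range(1, 6):           # continuity at every junction t_E = t_En
    assert abs(t_J_of_t_E(t_En(n), n) - t_Jn(n)) < 1e-6 * max(1.0, t_Jn(n))
print("Jordan-frame time formulas consistent")
```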
From Eqs.~\eqref{ta}, \eqref{HJ}, \eqref{eq5}, \eqref{eq6}, and \eqref{tJtEfr}, the Hubble parameter and the scale factor in the Jordan frame evolve as
\bea
H_J(t_J)&=&
\left\{
\begin{array}{ll}
\displaystyle \f{2\gamma^{3(n-1)}H_{E0}}{4\gamma^{3(n-1)}H_{E0}(t_J-t_{Jn-1})+1} &\quad (n:{\rm odd}) \\
\displaystyle 0 &\quad (n:{\rm even})
\end{array}
\right. \;, \label{HJfr} \\
a_J(t_J)&=&
\left\{
\begin{array}{ll}
\displaystyle a_{J0}\gamma^{-(n-1)}\kk{4\gamma^{3(n-1)}H_{E0}(t_J-t_{Jn-1})+1}^{1/2} &\quad (n:{\rm odd}) \\
\displaystyle a_{J0}\gamma^{-n} &\quad (n:{\rm even})
\end{array}
\right. \;.
\eea
$H_J(t_J)$ exhibits periodic discontinuities at $t_J=t_{Jn}$ and vanishes for even $n$. To be precise, however, $H_J$ is not exactly zero, because we have neglected the contributions from the energy density of the created radiation and from the potential. We shall numerically confirm this point in the next section.
We define $H_{Jn}$ by $\lim_{\epsilon\to\pm 0} H_J(t_{Jn}+\epsilon)$, where the sign is $+$ for even $n$ and $-$ for odd $n$. From the Hubble parameter \eqref{HJfr}, it is given by
\be H_{Jn}=2H_{En} = \gamma^{3n} H_{J0}. \ee
Here we used Eqs.~(\ref{eq8}) and (\ref{eq9}).
Let us remark on the time-averaged behavior during the oscillation. Since Eq.~\eqref{tEn} implies $\log (t_{En}-t_{E0}) \propto n$, the time intervals during which the inflaton moves to the left and to the right are equal in terms of the logarithmic Einstein-frame time. On the other hand, Eq.~\eqref{tJtEfr} yields $\log (t_J-t_{J0})\simeq p\log (t_E-t_{E0})$ with $p=4/3$ and $2/3$ for the left-directed and right-directed regimes, respectively. Hence, the left-directed regime is twice as long as the right-directed regime in terms of the logarithmic Jordan-frame time. Taking these facts into account, we can derive the average powers of the Jordan-frame quantities. For instance, as for the time duration, since the average power is $(4/3+2/3)/(1+1)=1$, $t_E$ is proportional to $t_J$ on average, {\it i.e.}, $\langle t_J-t_{J0}\rangle\propto t_E-t_{E0}$. Since $\log a_J\simeq q \log t_J$ with $q=1/2$ and $0$ for the left-directed and right-directed regimes respectively, the average power is $(1/2\times 2+0\times 1)/(2+1)=1/3$, namely, $\langle a_J(t_J)\rangle \propto (t_J-t_{J0})^{1/3}$.
Hence, the averaged Hubble parameter is written as
\begin{equation}
\langle H_J(t_J) \rangle = \frac{H_{J0}}{3 H_{J0} (t_J-t_{J0}) +1}
\label{eq7} \;.
\end{equation}
Integrating this gives the averaged scale factor
\begin{equation}
\langle a_J(t_J) \rangle = a_{J0} \left[ 3 H_{J0} (t_J-t_{J0}) + 1 \right]^{1/3} \;.
\label{eq14}
\end{equation}
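As a consistency check, Eq.~\eqref{eq14} indeed satisfies $d\log\langle a_J\rangle/dt_J = \langle H_J\rangle$ of Eq.~\eqref{eq7}. The sketch below verifies this with a finite difference; the values of $H_{J0}$ and $a_{J0}$ are illustrative:

```python
import math

H_J0, a_J0, t_J0 = 2.0 / 3.0, 1.0, 0.0   # H_J0 from Eq. (8); a_J0 illustrative

def H_avg(t):                   # averaged Hubble parameter, Eq. (eq7)
    return H_J0 / (3.0 * H_J0 * (t - t_J0) + 1.0)

def a_avg(t):                   # averaged scale factor, Eq. (eq14)
    return a_J0 * (3.0 * H_J0 * (t - t_J0) + 1.0) ** (1.0 / 3.0)

# central difference of log a_avg should reproduce H_avg
eps = 1e-6
for t in (1.0, 10.0, 100.0):
    dlog = (math.log(a_avg(t + eps)) - math.log(a_avg(t - eps))) / (2.0 * eps)
    assert abs(dlog - H_avg(t)) < 1e-6
print("averaged scale factor consistent with averaged Hubble rate")
```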
In summary, neglecting the small structure of the plateau and the energy density of the radiation enables us to solve the differential equations analytically. In the final phase of the reheating, however, these become important. We shall consider this effect in the next section.
\section{Numerical Results}
\label{sec-nr}
As we derived in Sec.~\ref{sec-be}, the typical height of the local maximum of the potential plateau is $V_\max\propto \d$. The oscillation regime ends when the kinetic energy of the scalaron is of the same order as $V_\max$. Thus, $\delta$ determines the end time of the reheating. However, since $\delta$ is the ratio between the energy scales of DE and inflation, it has to take an extremely tiny value $\sim 10^{-120}$. Therefore, it is difficult to carry out numerical calculations for realistic model parameters. In this section, using a modest value of $\d$, we confirm the validity of the analytic solutions obtained in the previous section by comparing them with numerical results. Then we extrapolate the analytic solutions to the end of the reheating and estimate the reheating temperature as a function of the model parameters. Note that hereafter we explicitly restore the hat on normalized dimensionless physical variables, which was omitted in the previous section.
\subsection{Comparison with analytic solutions}
\label{ssec-cwa}
\begin{figure}[t]
\centering
\includegraphics[width=85mm]{phi.eps}
\includegraphics[width=85mm]{NJ.eps}
\includegraphics[width=85mm]{HJ.eps}
\includegraphics[width=85mm]{RJ.eps}
\caption{Evolution of inflaton, $e$-folding number, Hubble parameter and Ricci scalar for model parameter $g=0.35, b=5, \d=5\times 10^{-8}, M/\Mpl=1.2\times 10^{-5}$. Numerical results (blue, solid), slow roll analytic solution (magenta, dashed) and fast roll analytic solution (green, dot-dashed) are presented.}
\label{fig:evol}
\end{figure}
We performed numerical calculations in the Einstein frame, {\it i.e.}, we solved the coupled evolution equations, Eqs.~\eqref{Eeq2}, \eqref{Eeq3}, and \eqref{Jeq1}. In particular, to avoid the complexity due to nonminimal couplings in the radiation and matter sectors, we use Eq.~\eqref{Jeq1} instead of the continuity equation in the Einstein frame, translating it to the Einstein frame with Eqs.~\eqref{FRJ}, \eqref{ta}, and \eqref{HJ}. After the numerical calculation, we obtain physical quantities in the Jordan frame by the inverse conformal transformation.
Figure~\ref{fig:evol} illustrates the numerical results and the analytic solutions for a model parameter set: $g=0.35, b=5, \d=5\times 10^{-8}, M/\Mpl=1.2\times 10^{-5}$. Since the observationally required value of $\d$ is too tiny for the numerical calculation, we chose this $\delta$ just for demonstration and compared the results with the analytic solutions. As mentioned before, the slow-roll approximation does not hold at the intermediate stage between the slow roll and the fast roll, so we used the boundary conditions obtained from the numerical calculation as the initial conditions for the fast-roll approximated solutions: $c_0 = 0.72$ and $\hat{H}_{J0} =0.26$ (these differ from the values obtained from the slow-roll analytic solutions by about a factor of 2). The analytic solutions approximate the numerical results well. In the slow-roll regime, the scale factor undergoes quasi-de Sitter expansion. In the fast-roll regime, the inflaton oscillates twice inside the potential plateau. We see that the scale factor evolves as $a_J(t_J)\propto (t_J-t_{J0})^{1/2}$ and $a_J(t_J)\simeq \text{const}$ in turn. The Ricci scalar stays at a constant value $\simeq b\d$ except for several spikes during the reheating. In contrast to the analytic solution, $H_J$ does not vanish when the inflaton moves to the right in the potential plateau, because of the contribution from the energy density $\rho_r$ of the created radiation.
At late times, the fast-roll approximation becomes worse as the inflaton loses its kinetic energy. Finally, the inflaton reaches the false vacuum at $\phi=0$.
\subsection{End of the reheating}
\label{ssec-er}
Since the above calculation was performed for $\d=5\times 10^{-8}$, the scalaron ends its oscillation quickly, before radiation dominates the Universe. However, the scenario is different for $\d\sim 10^{-120}$: the reheating properly ends with radiation domination. This is because $V(0)$ and $V_{\rm{max}}$ are too small to compete with the inflaton kinetic energy. In the following, by using the analytic solutions, we estimate both the time when the scalaron stops oscillating and the time when radiation dominates the Universe.
First, let us estimate when the oscillation stops. This is estimated from the time of equality between the kinetic energy of the scalaron and the local maximum of the plateau: $\dot\phi^2/2\sim V_\max$. We use the analytic solution for $\dot\phi$ in Eq.~\eqref{dphifrn} and the asymptotic behavior $\langle \hat{H}_E \rangle \sim 1/[3(\hat{t}_E-\hat{t}_{E0})]\sim 1/[3(\hat{t}_J-\hat{t}_{J0})]$. Thus, the scalaron ceases oscillating at
\be \hat{t}_{Js}-\hat{t}_{J0} =\sqrt{\f{8(1-2g)}{3b\d}} , \ee
where we used $V_\max$ from Eq.~\eqref{V0Vmax}. Substituting $\d\sim 10^{-120}$, $M=1.2\times 10^{-5}\Mpl$, and $g,b \sim {\cal{O}}(1)$ yields $t_{Js}\sim 10^{46}~\text{GeV}^{-1}$, which is close to the Hubble time $H_0^{-1}$. To be precise, we should take the radiation and matter dominated epochs into account in the subsequent evolution of the Universe, but this does not change the conclusion that the scalaron oscillation continues for a time of the order of the Hubble time.
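Plugging in the numbers quoted above reproduces this estimate; the reduced Planck mass $\Mpl\approx 2.44\times 10^{18}$~GeV is assumed for the unit conversion, and $g$, $b$ are set to the illustrative values used earlier:

```python
import math

g, b, delta = 0.35, 5.0, 1e-120     # g, b of order unity; realistic delta
Mpl_GeV = 2.44e18                   # reduced Planck mass (assumed)
M_GeV = 1.2e-5 * Mpl_GeV

t_hat = math.sqrt(8.0 * (1.0 - 2.0 * g) / (3.0 * b * delta))
t_Js_GeV = t_hat / M_GeV            # physical Jordan-frame time in GeV^-1
print("%.0e" % t_Js_GeV)            # -> 1e+46
```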
Next, let us estimate when the radiation dominates the Universe and the reheating ends. We compare the energy density of radiation due to particle creation with that of gravity. Since $R_J\simeq b\d$ during the oscillation in the plateau of the potential, we can expand $F$ around $R_J= b\d$ and keep only the linear order,
\be F\simeq 1-g+\f{b\d}{3}+\mk{\f{1}{3}+\f{g}{\d}}(\hat{R}_J-b\d)\equiv e^{\sqrt{\f{2}{3}}\hat{\phi}}. \ee
Thus, we can explicitly connect the Ricci scalar in the Jordan frame with the inflaton as
\be \hat{R}_J(t_E)=\f{3\d(e^{\sqrt{\f{2}{3}}\hat{\phi}(t_E)}-1+g(b+1))}{3g +\d}. \ee
As we mentioned at the beginning of Sec.~\ref{ssec-fr}, the particle creation during the plateau oscillation phase is negligible because $\hat{R}_J\simeq b\delta \ll 1$. Therefore, $\rho_r$ is approximately given by
\be \langle \rho_r(t_J) \rangle =\rho_{r0}\mk{\f{\langle a_J(t_J) \rangle }{a_{J0}}}^{-4} \approx \frac{\rho_{r0}}{\left[ 3 H_{J0} (t_J-t_{J0})\right]^{4/3}}, \label{eq99} \ee
where we used Eq.~(\ref{eq14}).
As for gravitational contribution, it is convenient to define the effective energy density of gravity by the equation of motion in the Jordan frame in Eq.~(\ref{eq11}):
\begin{align}
H_J^2 &= \frac{1}{3M_{\rm{Pl}}^2} (\rho_r+\rho_g) \;,
\label{eq45} \\
\rho_g &\equiv \frac{3M_{\rm{Pl}}^2}{M^2} \left( g M^2 H_J^2 -2 H_J^{''} H_J +(H_J^{'})^2-6 H_J^{'} H_J^2 +g M^2 (H_J^{'}+H_J^2) \tanh \left[ \frac{R_J}{M^2 \delta}-b \right] \right. \nonumber \\
&\left. -\frac{g}{6} M^4\delta \log \left[ \frac{\cosh (R_J/M^2\delta -b)}{\cosh b} \right] - \frac{6g (H_J^{''} H_J + 4 H_J^{'} H_J^2)}{\delta \cosh^2 (R_J/M^2\delta-b)} \right) \;.
\label{eq42}
\end{align}
When $\rho_r$ is negligible compared to $\rho_g$, from Eq.~(\ref{eq7}), the energy density of gravity is reduced to
\begin{equation}
\langle \hat{\rho}_g \rangle \approx 3 \langle \hat{H}_J \rangle ^2 \approx \frac{1}{3(\hat{t}_J-\hat{t}_{J0})^2}\;.
\label{eq44}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=85mm]{rhoJ.eps}
\caption{Evolution of energy density of radiation $\hat \rho_r$ (blue) and effective energy density of gravity $\hat \rho_g\approx 3\hat H_J^2$ (red) for model parameter $g=0.35, b=5, \d=5\times 10^{-8}, M/\Mpl=1.2\times 10^{-5}$. For energy density of radiation, the analytic solutions for slow roll regime (magenta, dashed) and fast roll regime (green, dot-dashed) are also presented.}
\label{fig:rho}
\end{figure}
Figure~\ref{fig:rho} shows the evolution of $\hat \rho_r$ and $\hat \rho_g\approx 3 \hat H_J^2$. Throughout the inflation and reheating, the energy density of the radiation is subdominant for our choice of $\d=5\times 10^{-8}$. The radiation is mainly created at the end of inflation and decays as $\rho_r\propto a^{-4}$. Since the scale factor alternately evolves as $t_J^{1/2}$ and stays constant, $\rho_r$ correspondingly evolves as $t_J^{-2}$ and stays constant during the left-directed and right-directed regimes, respectively. As we mentioned, $3 \hat H_J^2$ does not vanish because of the contribution from the radiation. At $Mt_J\simeq 50$ and $1000$, $3 \hat H_J^2$ drops below $\hat\rho_r$. This is because the step width required at the abrupt transition is so tiny that $\hat H_J$ overshoots downward. This tiny discrepancy is unimportant and does not affect the subsequent evolution.
The end time of the reheating and the reheating temperature $T_{\rm{reh}}$ are defined by the condition $\langle \rho_r \rangle =\langle \rho_g \rangle$. Using Eqs.~(\ref{eq99}) and (\ref{eq44}), we obtain
\begin{equation}
\hat{t}_{J,\rm{reh}} - \hat{t}_{J0} \approx \frac{\sqrt{3} \hat{H}_{J0}^2}{\hat{\rho}_{r0}^{3/2}}
\approx 3.8 \times 10^5 (c_0 g_*)^{-3/2} \hat{H}_{J0}^2 \left( \frac{M}{M_{\rm{Pl}}}
\right)^{-3} \;,
\label{eq54}
\end{equation}
and
\begin{equation}
\frac{T_{\rm{reh}}}{M} = \hat{H}_{J0} \left( \frac{a_{J0}}{\langle a_{J,\rm{reh}} \rangle} \right) \approx \sqrt{\frac{\hat{\rho}_{r0}}{3}} \approx 9.6 \times 10^{-3} (c_0 g_*)^{1/2} \left( \frac{M}{M_{\rm{Pl}}} \right) \;.
\label{eq12}
\end{equation}
Since these equations include $H_{J0}$ and $\rho_{r0}$, the end of the reheating is sensitive to the boundary conditions at the transition from the slow-roll regime to the fast-roll regime. For $g_*=106.75$, $M/\Mpl=1.2\times 10^{-5}$, and numerically determined values, $c_0=0.72$, $\hat{H}_{J0}\approx 0.26$, the time and temperature of the reheating are $\hat{t}_{J,\rm{reh}} - \hat{t}_{J0} \approx 2.2\times 10^{16}$ and $T_{\rm{reh}} \approx 3.0 \times 10^7\,{\rm{GeV}}$, respectively.
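These numbers follow directly from Eqs.~(\ref{eq54}) and (\ref{eq12}); the reduced Planck mass $\approx 2.44\times10^{18}$~GeV is assumed only for converting $T_{\rm reh}$ to GeV:

```python
c0, g_star = 0.72, 106.75           # numerically determined / standard values
H_J0_hat, M_over_Mpl = 0.26, 1.2e-5
Mpl_GeV = 2.44e18                   # reduced Planck mass (assumed)

t_reh_hat = 3.8e5 * (c0 * g_star) ** -1.5 * H_J0_hat ** 2 * M_over_Mpl ** -3
T_reh_GeV = 9.6e-3 * (c0 * g_star) ** 0.5 * M_over_Mpl * (M_over_Mpl * Mpl_GeV)
print("%.1e" % t_reh_hat)           # -> 2.2e+16
print("%.1e" % T_reh_GeV)           # -> 3.0e+07  (GeV)
```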
\section{Constraints on the model parameters}
\label{sec-rtcn}
In this section, based on the analytic solutions and numerical results obtained in the previous sections, we discuss the allowed ranges of the model parameters. First, we present the constraint on the energy scale $M$ in Sec.~\ref{ssec-conm}; it is fixed by the amplitude normalization of the CMB. Second, we consider the other parameters, $g$, $b$, and $\d$, in Sec.~\ref{ssec-congbd}. Once $M$ is determined, we can predict the other inflationary observables and discuss their consistency with observations. The parameters $g$, $b$, and $\delta$ cannot be constrained from observational data because the reheating temperature in Eq.~(\ref{eq12}) is independent of them. However, since the $gR^2$-AB model must realize the current cosmic acceleration, its magnitude and stability constrain the allowed ranges of $g$, $b$, and $\delta$.
\subsection{Constraint on $M$}
\label{ssec-conm}
When $R_J \gg M^2$, the $gR^2$-AB model~(\ref{eq58}) can be approximated by $R^2$ inflation, in which the primordial spectrum of the scalar mode at leading order in the slow-roll parameter $\epsilon_1$ is given by~\cite{Hwang:2001pu}
\begin{equation}
{\cal{P}}_S \approx \frac{1}{96 \pi^2 \epsilon_1^2} \left( \frac{M}{M_{\rm{Pl}}} \right)^2 \;,
\label{eq53}
\end{equation}
where $\epsilon_1 \equiv -H_J^{'}/H_J^2$. This slow-roll parameter is related to the $e$-folding number between the end of inflation and the horizon crossing of the mode whose comoving wave number $k$ corresponds to the CMB scale today.
From the analytic solutions during the slow-roll regime, Eqs.~(\ref{HJ}), (\ref{phisr}) - (\ref{HEsr}), the Hubble parameter in the Jordan frame is written as
\begin{equation}
\hat{H}_J(t_J) = \frac{3\, \tau(\hat{t}_J)-1}{6\,\tau^{1/2}(\hat{t}_J)} \;, \quad \quad \tau(\hat{t}_J) \equiv d_{\rm{ini}}^2 -\frac{2}{3} (\hat{t}_J-\hat{t}_{J,{\rm{ini}}}) \left[ d_{\rm{ini}} - \frac{1}{6} (\hat{t}_J-\hat{t}_{J,{\rm{ini}}}) \right]
\label{eq13} \;.
\end{equation}
For $d_{\rm{ini}} \gg 1$, expanding Eq.~(\ref{eq13}) in powers of $t_J$ around $t_{J,{\rm{ini}}}$ gives
\begin{equation}
\hat{H}_J(\hat{t}_J) \approx \hat{H}_{J,{\rm{ini}}} -\frac{1}{6} (\hat{t}_J-\hat{t}_{J,{\rm{ini}}}) \;,
\end{equation}
where $\hat{H}_{J,{\rm{ini}}} = d_{\rm{ini}}/2$.
The $e$-folding number between the end of inflation at $t_{J,{\rm{end}}}$ and the horizon crossing of the CMB mode at $t_{Jk}$ is given by
\begin{equation}
N_k = \int_{t_{Jk}}^{t_{J,\rm{end}}} H_J dt_J \approx -\frac{H_{Jk}^2}{2H^{'}_{Jk}} = \frac{1}{2 \epsilon_1 (t_{Jk})} \;,
\end{equation}
where $H_{Jk} \equiv H_J(t_{Jk})$ and we used the fact that $H^{'}_{Jk}$ is constant when performing the integration. Then Eq.~(\ref{eq53}) is expressed as
\begin{equation}
{\cal{P}}_S \approx \frac{N_k^2}{24\pi^2} \left( \frac{M}{M_{\rm{Pl}}} \right)^2 \;.
\end{equation}
Using Eq.~(\ref{eq99}), we obtain the $e$-folding number when the comoving scale of CMB crosses the horizon during inflation:
\begin{equation}
N_k \approx 66.2 -\frac{1}{2} \log \left( \frac{1-\Omega_m}{0.7} \right) -\frac{1}{4} \log \left[ \left( \frac{c_0}{0.72} \right) \left( \frac{g_{\ast}}{106.75} \right) \right] \;.
\end{equation}
Since the parameters $\Omega_m$ and $g_{\ast}$ hardly change $N_k$, we set $N_k=66$. From the temperature fluctuations of the CMB anisotropy~\cite{Komatsu:2009}, the amplitude of the power spectrum, ${\cal{P}}_S =(2.445 \pm 0.096) \times 10^{-9}$ at $k=0.002\,{\rm{Mpc}}^{-1}$, fixes the parameter $M$ to
\begin{equation}
\frac{M}{M_{\rm{Pl}}} \approx 1.2 \times 10^{-5} \;.
\end{equation}
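Solving ${\cal P}_S \approx N_k^2/(24\pi^2)\,(M/\Mpl)^2$ for $M/\Mpl$ with $N_k=66$ reproduces this number:

```python
import math

P_S = 2.445e-9       # observed amplitude at k = 0.002 Mpc^-1
N_k = 66.0

M_over_Mpl = math.sqrt(24.0 * math.pi ** 2 * P_S) / N_k
print("%.1e" % M_over_Mpl)   # -> 1.2e-05
```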
At the CMB scale, the spectral indices of the scalar and tensor modes and the tensor-to-scalar ratio are given by~\cite{Hwang:2001pu}
\begin{align}
n_S-1 &\equiv \left. \frac{d \log {\cal{P}}_S(k)}{d \log k} \right|_{k=aH}
\approx -\frac{2}{N_k} \;, \\
n_T &\equiv \left. \frac{d \log {\cal{P}}_T(k)}{d \log k} \right|_{k=aH} \approx -\frac{3}{2 N^2_k} \;, \\
r &\equiv \frac{{\cal{P}}_T}{{\cal{P}}_S} \approx \frac{12}{N_k^2} \;,
\end{align}
at the leading order in the slow-roll parameter. For the above choice of $N_k$, $n_S \approx 0.97$ and $r \approx 2.8 \times 10^{-3}$, which are consistent with observational bounds~\cite{Komatsu:2009}.
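For completeness, the quoted numbers follow directly from $N_k=66$:

```python
N_k = 66.0
n_S = 1.0 - 2.0 / N_k        # scalar spectral index
n_T = -1.5 / N_k ** 2        # tensor spectral index
r = 12.0 / N_k ** 2          # tensor-to-scalar ratio
print(round(n_S, 2), round(r, 4))   # -> 0.97 0.0028
```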
\subsection{Constraints on $g$, $b$, and $\delta$}
\label{ssec-congbd}
The parameters $g$, $b$, and $\delta$ considerably alter the dynamics of the reheating in the $gR^2$-AB model. However, as seen from Eqs.~(\ref{eq99}) and (\ref{eq12}), the reheating temperature and the radiation energy density at that time do not depend on these parameters. Thus, the constraint on $g$, $b$, and $\delta$ comes not from CMB observations but from the stability condition of a de Sitter vacuum.
From Eq.~(\ref{eq10}),
\begin{equation}
3 \Box F(R_J) +R_J F(R_J) -2 f(R_J) = 8\pi G T_J \;,
\end{equation}
where $T_J \equiv g_J^{\mu\nu} T^J_{\mu\nu}$ and $\Box \equiv (1/\sqrt{-g_J})\, \partial_{\mu} \left[ \sqrt{-g_J}\, \partial^{\mu} \right] $. For a stable de Sitter vacuum solution ($R_J={\rm{const}}.$, $T_J=0$) to exist, the following equation has to be satisfied:
\begin{equation}
R_J F(R_J) -2 f(R_J)=0 \;.
\label{eq59}
\end{equation}
For $R_J \ll M^2$, substituting Eq.~(\ref{eq58}) into Eq.~(\ref{eq59}) and using $R_{\rm{vac}} \approx 4 g b M^2 \delta$ lead to the equation
\begin{equation}
Q(y) \equiv y-4gb +2 g \left[ \log \left( 1+e^{-2(y-b)} \right) +\frac{y}{1+e^{2(y-b)}} \right] =0 \;,
\end{equation}
where $y \equiv R_J/M^2\delta$. The function $Q(y)$ typically has the shape shown in Fig.~\ref{fig17}. Therefore, a stable de Sitter vacuum exists if $Q^{'}(y_0)=0$ has a solution $y=y_0>1$ at which $Q^{''}(y_0)>0$ and $Q(y_0) \leq 0$. The boundary of the allowed parameter region of $b$ and $g$ can be obtained by solving $Q(y_0)=0$ and $Q^{'}(y_0)=0$ under the condition $Q^{''}(y_0)>0$. Since these equations cannot be solved analytically,
we fit the numerical solution and obtain the allowed region for $g$ as
\begin{equation}
\frac{1}{4} + \frac{0.28}{(b-0.46)^{0.81}} \leq g \leq \frac{1}{2} \;. \label{gcon}
\end{equation}
This region is shown in Fig.~\ref{fig16}.
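The stability criterion can be checked with a crude numerical scan; the scan range, step size, and the tested values of $g$ below are ad hoc choices for illustration:

```python
import math

def Q(y, g, b):
    return (y - 4.0 * g * b
            + 2.0 * g * (math.log(1.0 + math.exp(-2.0 * (y - b)))
                         + y / (1.0 + math.exp(2.0 * (y - b)))))

def has_stable_vacuum(g, b):
    # look for a local minimum y0 > 1 with Q(y0) <= 0
    ys = [1.0 + 0.001 * i for i in range(int(3.0 * b / 0.001))]
    qs = [Q(y, g, b) for y in ys]
    return any(qs[i] <= 0.0 and qs[i - 1] > qs[i] < qs[i + 1]
               for i in range(1, len(qs) - 1))

b = 10.0
g_min = 0.25 + 0.28 / (b - 0.46) ** 0.81   # fitted boundary, Eq. (gcon)
assert has_stable_vacuum(0.40, b)          # above the boundary: vacuum exists
assert not has_stable_vacuum(0.10, b)      # below the boundary: none
print("g_min(b=10) = %.2f" % g_min)        # -> g_min(b=10) = 0.30
```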
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{gAB-stable-deSitter-Q-b10.eps}
\caption{Function $Q(y)$ for $b=10$ and $g=0.1$ (blue), $0.2$ (red), $0.3$ (yellow), and $0.4$ (green).}
\label{fig17}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{gAB-stable-deSitter.eps}
\caption{Allowed parameter region of $g$ and $b$. Numerical solution (solid curve), fitting (dotted curve). Above this curve, stable de-Sitter solutions exist.}
\label{fig16}
\end{center}
\end{figure}
Once the parameters $M$, $g$, and $b$ are fixed, $\delta$ is determined so that the currently observed accelerating expansion is reproduced. With the Ricci curvature of the present Universe, $R_{\rm{vac}} \sim 10^{-120} M_{\rm{Pl}}^2$, the parameter $\delta$ is given by
\begin{equation}
\delta = \frac{R_{\rm{vac}}}{2g M^2 (b+\log [ 2\cosh b])} \approx \frac{1}{4g b} \frac{R_{\rm{vac}}}{M^2} \;.
\end{equation}
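Numerically, for the illustrative choice $g=0.35$, $b=5$, $R_{\rm vac}\sim 10^{-120}\Mpl^2$ and $M=1.2\times10^{-5}\Mpl$, the exact and approximate expressions agree to high accuracy:

```python
import math

g, b = 0.35, 5.0                          # illustrative
R_vac_over_M2 = 1e-120 / (1.2e-5) ** 2    # R_vac ~ 1e-120 Mpl^2 in units of M^2

delta_exact = R_vac_over_M2 / (2.0 * g * (b + math.log(2.0 * math.cosh(b))))
delta_approx = R_vac_over_M2 / (4.0 * g * b)
assert abs(delta_exact / delta_approx - 1.0) < 1e-3   # log(2 cosh b) ~ b for b >> 1
print("%.1e" % delta_exact)
```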
\section{Conclusions and discussion}
\label{sec-cn}
We have studied the inflation and reheating dynamics in $f(R)$ gravity, especially in the $gR^2$-AB model. This model is capable of describing the accelerated expansions both in the early Universe and at the present time. In the Einstein frame, the inflaton potential of this model possesses a plateau and a false vacuum at the bottom of the potential. These features differ from the original $R^2$ inflation model, and they significantly change the reheating process. We have derived the analytic solutions in the slow-roll inflation regime and the fast-roll oscillation (reheating) regime. We have also carried out the numerical computation including the backreaction from particle creation, and have confirmed that both results agree well. Owing to the existence of the potential plateau, particle creation via gravitational reheating mainly occurs in the slow-roll regime and is inefficient during the fast-roll oscillation regime. Consequently, in contrast to the $R^2$ inflationary scenario, the reheating era lasts longer. Another interesting feature of this model is that the averaged time evolution of the scale factor is proportional to $t_J^{1/3}$ because of the periodic abrupt changes of the Hubble parameter.
Based on the results obtained from our analytic and numerical calculations, we have derived constraints on the model parameters. The parameter $M$ is pinned down by the observed amplitude of CMB temperature fluctuations, and the value of $\delta$ is selected to correctly reproduce the current accelerated expansion of the Universe. On the other hand, the parameters $g$ and $b$ are poorly constrained because they affect only the reheating dynamics after inflation. To constrain $g$ and $b$ more tightly, we need observations that probe much smaller scales than those of the CMB and large-scale galaxy surveys. In the future, searches for primordial black holes and direct-detection experiments of gravitational waves would provide new observational windows on the reheating dynamics in modified gravity.
\begin{acknowledgments}
We would like to thank A.~A.~Starobinsky, T.~Suyama and J.~Yokoyama for helpful discussions and valuable comments. This work was supported in part by JSPS Research Fellowships for Young Scientists (H.M.) and Grant-in-Aid for JSPS Fellows (A.N.).
\end{acknowledgments}
\subsection{Ablation studies}
\label{sec:ablation}
We now ablate our key components under the \zeroshot{} LVIS setting with IN-L as the image-classification data.
We use our strong training recipe as described in \cref{sec:implementation_details} for all these experiments.
\vspace{0.05in}
\par \noindent \textbf{Variants of \OURS{}.}
~\reftab{weakloss} (bottom) lists variants of our \OURS loss (\cref{sec:image}).
All variants significantly improve over the Box-Supervised baseline.
Amongst these variants, max-size proposal performs considerably better than others on \zeroshot{} COCO.
This is likely because COCO images have more objects,
which makes the whole `image-box' less representative of all the objects
and `max-object-score' arbitrary in picking one object.
The max-size proposal provides a focused region containing multiple objects.
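As a rough illustration (not the authors' released implementation), the max-size variant can be sketched as picking the largest proposal and supervising it with every image-level label; the function name and the plain binary cross-entropy below are our own simplifications:

```python
import numpy as np

def max_size_image_loss(boxes, probs, image_labels, eps=1e-9):
    """Sketch of the max-size image-label loss: supervise the largest
    proposal with all image-level labels via binary cross-entropy.

    boxes:        (N, 4) proposals as (x1, y1, x2, y2)
    probs:        (N, C) per-proposal class probabilities in (0, 1)
    image_labels: iterable of ground-truth class indices for the image
    """
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    i = int(np.argmax(areas))  # pick the max-size proposal
    # positive BCE terms for each image label on that single proposal
    return float(-np.mean([np.log(probs[i, c] + eps) for c in image_labels]))
```

The image-box and max-object-score variants differ only in how the supervised proposal is chosen.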
\begin{table}[!b]
\vspace{-3mm}
\small
\begin{center}
\begin{tabular}{@{}l@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\toprule
& \multicolumn{2}{c}{\!\!\!\!LVIS-base+IN-L} & \multicolumn{2}{c}{\!\!\!\!LVIS-base+CC\!\!\!\! }& \multicolumn{2}{c}{ZS COCO}\\
& \footnotesize mAP$\mask$ & \footnotesize mAP$\mask_{\text{novel}}$ & \footnotesize mAP$\mask$ & \footnotesize mAP$\mask_{\text{novel}}$ & \footnotesize mAP50$^{\text{box}}_{\text{all}}$ & \footnotesize mAP50$^{\text{box}}_{\text{novel}}$ \\
\cmidrule(r){1-1}
\cmidrule(r){2-3}
\cmidrule(r){4-5}
\cmidrule(r){6-7}
Box-Sup. & 30.0 & 16.3 & 30.0 & 16.3 & 39.3 & 1.3 \\
Multi-task & 31.3 & 17.3 & 31.2 & 17.0 & 38.7 & 1.6 \\
\cmidrule(r){1-1}
\cmidrule(r){2-3}
\cmidrule(r){4-5}
\cmidrule(r){6-7}
obj.-score& 32.2 & 24.4 & 29.8 & 18.2 & 43.3 & 20.4 \\
Image-box & \bf 32.4 & 23.8 & 30.9 & 19.5 & 43.4 & 21.0\\
Max-size & \bf 32.4 & \bf 24.9 & \bf 31.0 & \bf 19.8 & \bf 44.7 & \bf 24.1\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{Variants of \OURS{}}.
We show overall mAP and novel class mAP on both \zeroshot{} LVIS~\cite{gu2021zero} and \zeroshot{} COCO~\cite{bansal2018zero}. The experiment setting follows~\reftab{imagelabel}. Obj.-score is short for max-object-score. \emph{Max-size} consistently performs the best among the variants and is our default choice.
}
\lbltab{weakloss}
\vspace{-3mm}
\end{table}
\begin{figure*}[!t]
\centering
\begin{tabular}{@{}c@{\ }c@{\ }c@{\ }c@{\ }}
\includegraphics[trim={0 0 0 4.0cm}, clip, width=0.24\textwidth]{qualitative/oid/113.jpg}
& \includegraphics[trim={0 0 0 4.0cm}, clip, width=0.24\textwidth]{qualitative/oid/11.jpg}
& \includegraphics[trim={0 0 0 2.3cm}, clip, width=0.24\textwidth]{qualitative/oid/308.jpg}
&\includegraphics[trim={0 0 0 0.0cm}, clip, width=0.24\textwidth]{qualitative/oid/110.jpg} \\
\includegraphics[width=0.24\textwidth]{qualitative/o365/160.jpg}
&\includegraphics[trim={0 2.0cm 0 0}, clip, width=0.24\textwidth]{qualitative/o365/310.jpg}
&\includegraphics[width=0.24\textwidth]{qualitative/o365/1450.jpg}
&\includegraphics[trim={0 0 0 2.0cm}, clip, width=0.24\textwidth]{qualitative/o365/225.jpg}\\
\end{tabular}
\vspace{-4mm}
\caption{
\textbf{Qualitative results of our 21k-class detector.}
We show random samples from images containing novel classes in OpenImages (top) and Objects365 (bottom) validation sets.
We use the CLIP embedding of the corresponding vocabularies.
We show
LVIS classes
in \textcolor{plum3}{purple} and novel classes in \textcolor{green4}{green}.
We use a score threshold of $0.5$ and
show the most confident class for each box.
Best viewed on screen.}
\lblfig{qualitative}
\vspace{-6mm}
\end{figure*}
\vspace{0.03in}
\par \noindent \textbf{Shared classification parameters.}
We study the importance of sharing the classification parameters $\bW$ across the image-supervised data and the detection data.
In~\reftab{weakloss} (row 2), we consider \OURS{} image-box but use a separate classification head (two linear layers) for image-supervised data.
This variant can also be interpreted as multi-task learning with image-labeled data and improves mAP for both novel and all classes.
However, it is less effective than \OURS{} that shares the classifier parameters.
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{}c@{\ \ }c@{\ \ }c@{\ \ }c@{}}
\toprule
& \multicolumn{2}{c}{Box-Supervised} & \multicolumn{2}{c}{\OURS{}} \\
Classifier & mAP$\mask$ & mAP$\mask_{\text{novel}}$ & mAP$\mask$ & mAP$\mask_{\text{novel}}$ \\
\cmidrule(r){1-1}
\cmidrule(r){2-3}
\cmidrule(r){4-5}
*CLIP~\cite{radford2021learning} & 30.2 & 16.4 & 32.4 & 24.9 \\
Trained & 27.4 & 0 & 31.7 & 17.4 \\
FastText~\cite{joulin2016fasttext} & 27.5 & 9.0 & 30.9 & 19.2 \\
OpenCLIP~\cite{openclip} & 27.1 & 8.9 & 30.7 & 19.4 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{
\textbf{\OURS{} with different classifiers.}
We vary the classifier used with \OURS{} and observe that it works well with different design choices.
While CLIP embeddings give the best overall performance (* indicates our default),
all classifiers benefit from our \OURS{}.
}
\lbltab{classifier}
\vspace{-4mm}
\end{table}
\vspace{0.03in}
\par \noindent \textbf{Classifier weights.}
We study the effect of different classifier weights $\bW$.
While our main \zeroshot{} experiments use CLIP~\cite{radford2021learning},
we show the gain of \OURS{} is independent of CLIP.
We train \baselineDet{} and \OURS{} with different classifiers, including a standard randomly initialized and trained classifier, and other \emph{fixed} language models~\cite{joulin2016fasttext,openclip}.
The results are shown in~\reftab{classifier}.
By default, a trained classifier cannot recognize novel classes.
However, \OURS{} enables novel class recognition ability even in this setting ($17.4$ mAP$_{\text{novel}}$ for classes without detection labels).
Using language models such as FastText~\cite{joulin2016fasttext}
or an open-source version of CLIP~\cite{openclip}
leads to better novel class performance.
CLIP~\cite{radford2021learning} performs the best among them.
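For intuition, a fixed embedding classifier of this kind reduces to scaled cosine similarity between region features and frozen class embeddings (e.g. CLIP text embeddings). The sketch below uses our own names and an illustrative temperature, not the exact detection head:

```python
import numpy as np

def embedding_classifier_logits(roi_feats, class_embeds, tau=0.01):
    """Classify RoI features against fixed per-class embeddings
    via scaled cosine similarity.

    roi_feats:    (N, D) region features
    class_embeds: (C, D) one fixed embedding per class
    returns:      (N, C) classification logits
    """
    f = roi_feats / np.linalg.norm(roi_feats, axis=1, keepdims=True)
    w = class_embeds / np.linalg.norm(class_embeds, axis=1, keepdims=True)
    return f @ w.T / tau
```

Because the class embeddings are fixed, swapping the vocabulary only requires recomputing the embedding matrix, which is what enables open-vocabulary evaluation.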
\vspace{0.03in}
\par \noindent \textbf{Pretraining \vs Co-training.}
Many existing methods use additional data only for pretraining~\cite{zhang2021mosaicos,zareian2021open,desai2021virtex},
while we use image-labeled data for co-training.
We present results of \OURS{} with different types of pretraining in~\reftab{pretrain}.
\OURS{} provides similar gains across different types of pretraining, suggesting that our gains are orthogonal to advances in pretraining.
We believe that this is the case because pretraining improves the overall features, while \OURS{} uses co-training which improves the classifier.
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{}}
\toprule
& Pretrain data & mAP$\mask$ & mAP$\mask_{\text{novel}}$ \\
\midrule
\baselineDet & IN-1K & 26.1 & 13.6 \\
\OURS{} & IN-1K & 28.8 \scriptsize \textcolor{green4}{(+2.7)} &
21.7 \scriptsize \textcolor{green4}{(+8.1)}\\
\midrule
\baselineDet & IN-21K & 30.2 & 16.4 \\
\OURS{} & IN-21K & 32.4 \scriptsize \textcolor{green4}{(+2.2)} & 24.9 \scriptsize \textcolor{green4}{(+8.5)} \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{
\textbf{\OURS{} with different pretraining data.}
Top: our method using ImageNet-1K as pretraining and ImageNet-21K as co-training;
Bottom: using ImageNet-21K for both pretraining and co-training. Co-training helps pretraining in both cases. }
\vspace{-6mm}
\lbltab{pretrain}
\end{table}
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ }c@{\ }c@{\ }c@{\ }c@{\ }@{}}
\toprule
& mAP$^{\text{box}}$ & mAP$^{\text{box}}_{\text{r}}$ & mAP$^{\text{box}}_{\text{c}}$ & mAP$^{\text{box}}_{\text{f}}$ \\
\midrule
\baselineDet & 31.7 & 21.4 & 30.7 & \bf 37.5 \\
\OURS{} & \bf 32.5 & \bf 26.2 & \bf 31.3 & 36.6 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{
\textbf{\OURS{} applied to Deformable-DETR~\cite{zhu2020deformable}.}
We report Box mAP on full LVIS. Our method improves Deformable-DETR.}
\vspace{-5mm}
\lbltab{detr}
\end{table}
\vspace{0.01in}
\par \noindent \textbf{Generalization to Deformable-DETR.}
We apply \OURS{} to the recent Transformer based Deformable-DETR~\cite{zhu2020deformable} to study its generalization.
We use their default training recipe, Federated Loss~\cite{zhou2021probabilistic} and train for a $4\times$ schedule ($\sim\!48$ LVIS epochs).
We apply the image supervision to the query from the encoder with the max predicted size.
~\reftab{detr} shows that \OURS{} improves over the baseline (+$0.8$ mAP and $+4.8$ mAP$_{\text{r}}$) and generalizes to Transformer based detectors.
\vspace{-0.05in}
\section{Limitations and Conclusions}
\vspace{-0.05in}
We present \OURS{} which is a simple way to use image supervision in large-vocabulary object detection.
While \OURS{} is simpler than prior assignment-based weakly-supervised detection methods, it assigns all image labels to the same region and does not consider overall dataset statistics.
We leave incorporating such information for future work.
The generalization capabilities of \OURS{} also benefit from the large-scale pretraining of CLIP, and it is an open question whether there are other ways to train the classifier.
Moreover, open-vocabulary generalization has no guarantees in extreme domains.
Our experiments show \OURS{} improves large-vocabulary detection with various weak data sources, classifiers, detector architectures, and training recipes.
We hope \OURS{} makes object detection easier to deploy and encourages future research in open-vocabulary detection.
{\footnotesize
\par \noindent \textbf{Ethical Considerations.}
Our technical contribution is in leveraging image-labeled data for training large vocabulary detectors.
We believe our technical contribution is neutral from an ethical standpoint, and our model needs further analysis before deployment.
Detic's wide range of detection capabilities may introduce similar challenges to many other visual recognition and open-set recognition methods~\cite{radford2021learning}.
As the user can define arbitrary detection classes, class design and semantics may impact the model output.
}
\section{Experiments}
\lblsec{experiment}
We evaluate \OURS{} on the large-vocabulary object detection dataset \lvis{}~\cite{gupta2019lvis}.
We mainly use the \zeroshot{} setting proposed by Gu \etal~\cite{gu2021zero},
and also report results on the standard \lvis{} setting.
Additionally, we compare to prior work on the popular \zeroshot{} COCO benchmark~\cite{bansal2018zero}.
We describe our LVIS setups below and provide full details of the COCO benchmark in \supplement{sec:coco-details}.
\par \noindent \textbf{\lvis{}.}
The \lvis{}~\cite{gupta2019lvis} dataset has object detection and instance segmentation labels for $1203$ classes with 100K images.
The classes are divided into three groups (frequent, common, and rare) based on the number of training images.
We refer to this standard \lvis{} training set as \emph{\lvisall}.
Following \vild{}~\cite{gu2021zero},
we remove the labels of the $317$ rare classes from training and consider them as novel classes in testing.
We refer to this partial training set with only frequent and common classes as \emph{\lvisbase}.
We report mask mAP which is the official metric for LVIS.
While our model is developed for box detection,
we use a standard class-agnostic mask head~\cite{He_2017_ICCV} to produce segmentation masks for boxes.
We train the mask head only on detection data.
\par \noindent \textbf{Image-supervised data.}
We use two sources of image-supervised data: \imnetFull~\cite{deng2009imagenet} and \ccFull~\cite{sharma2018conceptual}.
\imnetFull{} (IN-21K) contains 14M images for 21K classes.
For ease of training and evaluation, most of our experiments use the 997 classes that overlap with the \lvis{} vocabulary and denote this subset as \imnetLvis.
\ccFull~\cite{sharma2018conceptual} (CC) is
an image captioning dataset containing 3M images.
We extract image labels from the captions using exact text-matching and keep images whose captions mention at least one \lvis{} class.
See \supplement{sec:caption} for results of directly using captions.
The resulting dataset contains 1.5M images with 992 \lvis{} classes.
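The exact text-matching step can be approximated as a word-boundary match over the class vocabulary; this is our simplification for illustration, not the authors' released code:

```python
import re

def labels_from_caption(caption, vocabulary):
    """Extract image labels from a caption by exact, word-boundary
    matching of class names; returns a sorted list of matched names."""
    text = caption.lower()
    return sorted({name for name in vocabulary
                   if re.search(r'\b' + re.escape(name.lower()) + r'\b', text)})
```

Images whose captions match no class name are discarded, which is how the 3M CC images reduce to 1.5M.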
We summarize the datasets used in our experiments below.
\begin{table}[!h]
\small
\vspace{-3mm}
\begin{center}
\begin{tabular}{@{}l@{}c@{}c@{\ }c@{}}
\toprule
Notation & Definition & \#Imgs & \#Cls\\
\midrule
\lvisall & The original \lvis{} dataset~\cite{gupta2019lvis} & 100K & 1203 \\
\lvisbase & \lvis{} without rare-class annotations & 100K & 886 \\
\imnet & \!\!\!The original \imnetFull{} dataset~\cite{deng2009imagenet} & 14M & 21k \\
\imnetLvis & \!\!\!\!\!\!\!997 overlapping \imnet{} classes with \lvis{} & 1.2M & 997 \\
\cc & \!\!\!\!\!\!\!\ccFull~\cite{sharma2018conceptual} with \lvis{} classes & 1.5M & 992 \\
\bottomrule
\end{tabular}
\end{center}
\lbltab{datasets}
\vspace{-5mm}
\end{table}
\subsection{Implementation details}
\label{sec:implementation_details}
\begin{table*}[!t]
\begin{center}
\begin{tabular}{@{}ll@{\ \ \ \ }c@{\ \ \ \ }c@{\ \ \ \ }c@{\ \ \ \ }c@{\ \ \ \ }c@{\ \ }c@{}}
\toprule
\rowNumber{\#} & & \multicolumn{2}{c}{\!\!\!\!LVIS-base + IN-L} & \multicolumn{2}{c}{\!\!\!\!LVIS-base + CC} & \multicolumn{2}{c}{\!\!\!\!\Zeroshot{} COCO}\\
& & mAP$\mask$ & mAP$\mask_{\text{novel}}$ & mAP$\mask$ & mAP$\mask_{\text{novel}}$ & mAP50$^{\text{box}}_{\text{all}}$ & mAP50$^{\text{box}}_{\text{novel}}$ \\
\cmidrule(r){1-2}
\cmidrule(r){3-4}
\cmidrule(r){5-6}
\cmidrule(r){7-8}
\rowNumber{1} & Box-Supervised (base class) & 30.0 \scriptsize $\pm 0.4$ & 16.3 \scriptsize $\pm 0.7$
& 30.0 \scriptsize $\pm 0.4$ & 16.3 \scriptsize $\pm 0.7$
& 39.3 & 1.3 \\
\rowNumber{2} & Box-Supervised (base class, finetuned) & 29.7 \scriptsize $\pm 0.5$ & 15.7 \scriptsize $\pm 1.0$
& 29.7 \scriptsize $\pm 0.5$ & 15.7 \scriptsize $\pm 1.0$
& 40.6 & 1.0 \\
\cmidrule(r){1-2}
\cmidrule(r){3-4}
\cmidrule(r){5-6}
\cmidrule(r){7-8}
\rowNumber{3} & Self-training~\cite{sohn2020simple} & 30.3 \scriptsize $\pm 0.0$ & 15.6 \scriptsize $\pm 0.1$
& 30.1 \scriptsize $\pm 0.2$ & 15.9 \scriptsize $\pm 0.8$
& 39.5 & 1.8\\
\rowNumber{4} & Self-training~\cite{sohn2020simple} with image labels
& 31.7 \scriptsize $\pm 0.3$ & 19.4 \scriptsize $\pm 0.7$
& 30.7 \scriptsize $\pm 0.1$ & 18.2 \scriptsize $\pm 0.6$
& 39.2 & 0.9 \\
\cmidrule(r){1-2}
\cmidrule(r){3-4}
\cmidrule(r){5-6}
\cmidrule(r){7-8}
\rowNumber{5} & WSDDN~\cite{bilen2016weakly} & 29.8 \scriptsize $\pm 0.2$ & 15.6 \scriptsize $\pm 0.3$
& 30.0 \scriptsize $\pm 0.1$ & 16.5 \scriptsize $\pm 0.8$
& 39.9 & 5.9 \\
\rowNumber{6} & DLWL*~\cite{ramanathan2020dlwl} & 30.6 \scriptsize $\pm 0.1$ & 18.2 \scriptsize $\pm 0.2$
& 29.7 \scriptsize $\pm 0.3$ & 16.9 \scriptsize $\pm 0.6$
& 42.9 & 19.6 \\
\rowNumber{7} &
Predicted~\cite{redmon2017yolo9000}
& 31.2 \scriptsize $\pm 0.3$ & 20.4 \scriptsize $\pm 0.9$
& 29.4 \scriptsize $\pm 0.1$ & 15.9 \scriptsize $\pm 0.6$
& 41.9 & 18.7\\
\cmidrule(r){1-2}
\cmidrule(r){3-4}
\cmidrule(r){5-6}
\cmidrule(r){7-8}
\rowNumber{8} & \OURS{} (Ours) & \bf 32.4 \scriptsize $\pm 0.1$ & \bf 24.6 \scriptsize $\pm 0.3$
& \bf 30.9 \scriptsize $\pm 0.2$ & \bf 19.5 \scriptsize $\pm 0.3$
& \bf 44.7 & \bf 24.1\\
\cmidrule(r){1-2}
\cmidrule(r){3-4}
\cmidrule(r){5-6}
\cmidrule(r){7-8}
\rowNumber{9} & \color{gray} Box-Supervised (all class)
& \color{gray} 31.1 \scriptsize $\pm 0.4$ & \color{gray} 25.5 \scriptsize $\pm 0.7$
& \color{gray} 31.1 \scriptsize $\pm 0.4$ & \color{gray} 25.5 \scriptsize $\pm 0.7$
& \color{gray} 54.9 & \color{gray} 60.0 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{
\textbf{Different ways to use image supervision.}
We show overall and novel-class mAP on \zeroshot{} LVIS~\cite{gu2021zero} (middle two columns, with 886 base classes and 317 novel classes) and \zeroshot{} COCO~\cite{bansal2018zero}(right column, with 48 base classes and 17 novel classes).
The LVIS models are trained using our strong baseline~\cref{sec:implementation_details}.
The COCO models are trained using ResNet50-C4~\cite{bansal2018zero}.
All models are finetuned on the model in row \rowNumber{1}.
Rows \rowNumber{1-2}: supervised training on base-classes; It has non-zero novel-class mAP as it uses the CLIP classifier.
Rows \rowNumber{3-4}: self-training;
Rows \rowNumber{5-7}: prediction-based WSOD;
Row \rowNumber{8}: Our simple image-supervised WSOD.
{\color{gray} Row \rowNumber{9}: The top-line detector trained with novel-class annotations.}
We repeat LVIS experiments for 3 runs and report mean/std.
}
\lbltab{imagelabel}
\vspace{-5mm}
\end{table*}
\noindent \textbf{\baselineDet: a strong \lvis{} baseline.}
We first establish a strong baseline on \lvis{} to demonstrate that our improvements are orthogonal to recent advances in object detection.
The baseline only uses the supervised bounding box labels.
We use the CenterNet2~\cite{zhou2021probabilistic} detector with ResNet50~\cite{he2016deep} backbone.
We use Federated Loss~\cite{zhou2021probabilistic} and repeat factor sampling~\cite{gupta2019lvis}.
We use large scale jittering~\cite{ghiasi2021simple} with input resolution $640\!\times\!640$ and train for a $4\times$ ($\sim\!\!48$ LVIS epochs) schedule.
To show our method is compatible with better pretraining,
we use ImageNet-21k pretrained backbone weights~\cite{ridnik2021imagenet21k}.
As described in \cref{sec:prelim}, we use the CLIP~\cite{radford2021learning} embedding as the classifier.
Our baseline is $9.1$ mAP higher than the detectron2
baseline~\cite{wu2019detectron2} ($31.5$ vs. $22.4$ mAP$\mask$) and trains in a similar time (17 vs. 12 hours on 8 V100 GPUs).
See \supplement{sec:lvis-baseline} for more details.
\noindent\textbf{Resolution change for image-labeled images.}
ImageNet images are inherently smaller and more object-focused than LVIS images~\cite{zhang2021mosaicos}.
In practice, we observe it is important to use smaller image resolution for ImageNet images.
Using a smaller resolution additionally allows us to increase the batch size with the same computation.
In our implementation, we use $320\!\!\times\!\!320$ for ImageNet and CC and ablate this in~\supplement{sec:ratio-and-size}.
\noindent\textbf{Multi-dataset training.}
We sample detection and classification mini-batches in a $1:1$ ratio,
regardless of the original dataset size.
We group images from the same dataset on the same GPU to improve training efficiency~\cite{zhou2021simple}.
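A 1:1 sampling scheme of this kind can be sketched with a simple alternating iterator (the names are ours; a real implementation also handles per-GPU grouping and distributed sharding):

```python
import itertools

def alternate_1to1(det_batches, cls_batches):
    """Yield ('det', batch) and ('cls', batch) alternately, cycling
    both streams so the 1:1 ratio holds regardless of dataset size."""
    for det, cls in zip(itertools.cycle(det_batches),
                        itertools.cycle(cls_batches)):
        yield ('det', det)
        yield ('cls', cls)
```

Cycling both streams means the smaller dataset is simply revisited more often, which matches sampling "regardless of the original dataset size".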
\par \noindent\textbf{Training schedules.}
To shorten the experimental cycle and have a good initialization
for prediction-based WSOD losses~\cite{ramanathan2020dlwl,redmon2017yolo9000},
we always first train a converged base-class-only model ($4\times$ schedule)
and finetune on it with additional image-labeled data for another $4\times$ schedule.
The $4\times$ schedule for our joint training consists of $\sim\!\!24$ LVIS epochs plus $\sim\!\!4.8$ ImageNet epochs or $\sim\!\!3.8$ CC epochs.
Training our ResNet50 model takes $\sim\!22$ hours on 8 V100 GPUs.
The large 21K \swinB model trains in $\sim\!24$ hours on 32 GPUs.
\subsection{\Zeroshot{} detection with image labels}
We first explore how to use image-level supervision for \zeroshot{} detection.
We experiment under three settings: LVIS-base with IN-L~\cite{deng2009imagenet},
LVIS-base with CC~\cite{sharma2018conceptual}, and \zeroshot{} COCO with COCO-caption as the image-labeled data.
These three settings differ in the number of objects and the number of image labels per image\footnote{COCO images are designed to contain many objects; ImageNet images mostly contain one object; CC has no preference in the number of objects by design.}.
~\reftab{imagelabel} shows the results.
The baseline, Box-Supervised (base class), is trained without access to novel-class bounding box labels.
It uses the CLIP classifier~\cite{gu2021zero}, which gives it \zeroshot{} capabilities, and obtains 16.3 mAP$_{\text{novel}}$.
We first show that our improvements do not come trivially from the longer training in finetuning:
\reftab{imagelabel} row \rowNumber{2} shows that further finetuning the model using box supervision does not improve performance.
We compare with the following ways to use image-level supervision:
\par \noindent\textbf{Self-training} applies the \baselineDet{} model to the image-classification data and collects all predictions with score $>0.5$ as pseudo-labels for finetuning.
This self-training method~\cite{sohn2020simple,han2020joint} was originally proposed for unlabeled images and does not use image labels.
We improve it by filtering out pseudo-labels that do not match the image labels.
The results in~\reftab{imagelabel} rows \rowNumber{3}, \rowNumber{4}
show self-training with image labels is effective on LVIS.
However, self-training does not improve on COCO as the base model is too weak to generate meaningful novel-class pseudo labels.
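The label-filtering step we add to self-training amounts to keeping only confident predictions whose class appears among the image labels; a minimal sketch with our own data layout:

```python
def filter_pseudo_labels(predictions, image_labels, score_thresh=0.5):
    """Keep predictions above the score threshold whose class is
    listed in the image-level labels; drop everything else."""
    allowed = set(image_labels)
    return [p for p in predictions
            if p['score'] > score_thresh and p['class'] in allowed]
```

The surviving predictions are then used as pseudo-boxes for finetuning.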
\par \noindent\textbf{Weakly-supervised Object Detection} (WSOD) relies on assigning image-labels to the predicted boxes.
In~\reftab{imagelabel} rows \rowNumber{5-7}, we compare to these methods (see~\supplement{sec:predictionbaseddetails} for implementation details).
For DLWL~\cite{ramanathan2020dlwl}, we implement a simplified version that does not include bootstrapping and refer to it as DLWL*.
On the \lvis{} dataset, WSOD losses show an improvement when using ImageNet but do not provide gains with CC.
We believe that CC images contain more objects which makes the assignment problem harder.
\par \noindent\textbf{\OURS{} (Ours).}
~\reftab{imagelabel} row \rowNumber{8} shows the results of our \OURSFull{}.
Our simple loss significantly improves the baseline and other alternatives,
under all the three settings.
On the novel classes, \OURS{} provides a gain of $8.3$ points with ImageNet
and $3.2$ points with \cc.
Thus, \OURS{} with image-level labels leads to strong \zeroshot{} detection performance and can provide orthogonal gains to existing \zeroshot{} detectors~\cite{bansal2018zero}.
To further understand the \zeroshot{} capabilities of \OURS{}, we also
report the \emph{top-line} results trained with box labels for all classes denoted as {\color{gray} \baselineDet{} (all class)} (row \rowNumber{9}).
Despite not using box labels for the novel classes, \OURS{} with ImageNet
performs favorably compared to {\color{gray} \baselineDet{} (all class)}.
This result also suggests that bounding box annotations may not be required for new classes.
Our \OURS{} method combined with large image classification datasets is an easy and effective alternative for increasing detector vocabulary.
We provide more comparisons to WSOD methods in \supplement{sec:comparison-prediction-based}.
\subsection{Comparison to \zeroshot{} detectors}
We now use \OURS{} to train \zeroshot{} object detectors and compare them to state-of-the-art methods.
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ \ \ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{}}
\toprule
& mAP$\mask$ & mAP$\mask_{\text{novel}}$ & mAP$\mask_{\text{c}}$ & mAP$\mask_{\text{f}}$ \\
\midrule
ViLD-text~\cite{gu2021zero} & 24.9 & 10.1 & 23.9 & \bf 32.5 \\
ViLD~\cite{gu2021zero} & 22.5 & 16.1 & 20.0 & 28.3 \\
ViLD-ens.~\cite{gu2021zero} & 25.5 & 16.6 & 24.6 & 30.3 \\
\midrule
\OURS{} & \bf 26.8 & \bf 17.8 & \bf 26.3 & 31.6 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{\Zeroshot{} \lvis{} compared to \vild~\cite{gu2021zero}.} We train our model \emph{using their training settings and architecture} (MaskRCNN-ResNet50, training from scratch).
We report mask mAP and its breakdown to novel (rare), common, frequent classes.
Variants of \vild{} use distillation (\vild) or ensembling (\vild-ens.).
\OURS{} uses a single model and improves both mAP and mAP$_{\text{novel}}$.
}
\lbltab{main-lvis-vild}
\vspace{-5mm}
\end{table}
\begin{table}[!b]
\begin{center}
\vspace{-5mm}
\begin{tabular}{@{}l@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{}}
\toprule
& mAP50$\bbox_{\text{all}}$ & mAP50$\bbox_{\text{novel}}$ & mAP50$\bbox_{\text{base}}$ \\
\midrule
Base-only\dag & 39.9 & 0 & \bf 49.9 \\
Base-only (CLIP) & 39.3 & 1.3 & 48.7 \\
WSDDN \cite{bilen2016weakly}\dag & 24.6 &20.5 & 23.4 \\
Cap2Det \cite{ye2019cap2det}\dag & 20.1 & 20.3 & 20.1 \\
SB \cite{bansal2018zero}\ddag & 24.9 & 0.31 & 29.2 \\
DELO \cite{zhu2020don}\ddag & 13.0 & 3.41 & 13.8 \\
PL \cite{rahman2020improved}\ddag & 27.9 & 4.12 & 35.9 \\
{OVR-CNN}~\cite{zareian2021open}\dag & {39.9} & {22.8} & {46.0} \\
\midrule
\OURS{} & \bf 45.0 & \bf 27.8 & 47.1 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{\Zeroshot{} COCO~\cite{bansal2018zero}.}
We compare \OURS{} using the same training data and architecture from OVR-CNN~\cite{zareian2021open}.
We report bounding box mAP at IoU threshold 0.5 using Faster R-CNN with ResNet50-C4 backbone.
\OURS{} builds upon the CLIP baseline (second row) and shows significant improvements over prior work.
\dag: results quoted from OVR-CNN~\cite{zareian2021open} paper or code. \ddag: results quoted from the original publications.
}
\lbltab{zs-coco}
\vspace{-5mm}
\end{table}
\par \noindent\textbf{\Zeroshot{} LVIS.}
We compare to \vild~\cite{gu2021zero},
which first uses CLIP embeddings~\cite{radford2021learning} for \zeroshot{} detection.
We strictly follow their training setup and model architecture (\supplement{sec:vild-details}) and report results in~\cref{tbl:main-lvis-vild}.
Here \vild-text is exactly our Box-Supervised baseline.
\OURS{} provides a gain of $7.7$ points on mAP$_\text{novel}$.
Compared to \vild-text, \vild{}, which uses knowledge distillation from the CLIP visual backbone, improves mAP$_\text{novel}$ at the cost of hurting overall mAP.
Ensembling the two models, \vild-ens{} provides improvements for both metrics.
On the other hand, \OURS{} uses a single model which improves both novel and overall mAP, and outperforms the \vild{} ensemble.
\par \noindent\textbf{\Zeroshot{} COCO.}
Next, we compare with prior works on \zeroshot{} COCO benchmark~\cite{bansal2018zero}
(see benchmark and implementation details in \supplement{sec:coco-details} \& \supplement{sec:caption}).
We strictly follow OVR-CNN~\cite{zareian2021open} to use Faster R-CNN with ResNet50-C4 backbone and do not use any improvements from~\cref{sec:implementation_details}.
Following~\cite{zareian2021open}, we use COCO captions
as the image-supervised data.
We convert the captions into image labels and use both the image labels and captions as supervision.
~\reftab{zs-coco} summarizes our results.
As the training set contains only 48 base classes, the base-class-only model (second row) yields low mAP on novel classes.
\OURS{} improves the baseline and outperforms OVR-CNN~\cite{zareian2021open} by a large margin, using exactly the same model, training recipe, and data.
\begin{table}[!t]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{@{}l@{}c@{\ }c@{\ }c@{\ }c@{}}
\toprule
& \multicolumn{2}{c}{Objects365~\cite{shao2019objects365}} & \multicolumn{2}{c}{OpenImages~\cite{kuznetsova2020open}} \\
& mAP$\bbox$ & mAP$\bbox_{\text{rare}}$ & mAP50$\bbox$ & mAP50$\bbox_{\text{rare}}$ \\
\cmidrule(r){1-1}
\cmidrule(r){2-3}
\cmidrule(r){4-5}
\baselineDet & 19.1 & 14.0 & 46.2 & 61.7 \\
\OURS{} w. IN-L & 21.2 & 17.8 & 53.0 & 67.1 \\
\OURS{} w. IN-21k & \bf 21.5 & \bf 20.0 & \bf 55.2 & \bf 68.8 \\
\cmidrule(r){1-1}
\cmidrule(r){2-3}
\cmidrule(r){4-5}
\color{gray} {\small Dataset-specific oracles} & \color{gray} 31.2 & \color{gray} 22.5 & \color{gray} 69.9 & \color{gray} 81.8 \\
\bottomrule
\end{tabular}}
\end{center}
\vspace{-5mm}
\caption{\textbf{Detecting 21K classes across datasets.}
We use \OURS{} to train a detector and evaluate it on multiple datasets \emph{without retraining}.
We report the bounding box mAP on Objects365 and OpenImages.
Compared to the \baselineDet{} baseline (trained on \lvis-all),
\OURS{} leverages image-level supervision to train robust detectors.
The performance of \OURS{} is $70\%$-$80\%$ of {\color{gray}dataset-specific models} (bottom row) that use dataset specific box labels.
}
\lbltab{crossdataset}
\vspace{-5mm}
\end{table}
\subsection{Detecting 21K classes across datasets}
\lblsec{crossdataset}
Next, we train a detector with the full 21K classes of ImageNet.
We use our strong recipe with \swinB{}~\cite{liu2021swin} backbone.
In practice, training a classification layer of 21K classes is computationally involved.\footnote{This is more pronounced in object detection than classification, as the ``batch-size'' for the classification layer is $512 \times$ image-batch-size, where $512$ is the number of RoIs per image.}
We adopt a modified Federated Loss~\cite{zhou2021probabilistic} that uniformly samples $50$ classes from the vocabulary at every iteration.
We only compute classification scores and back-propagate on the sampled classes.
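Per-iteration class sampling of this flavor can be sketched as below. This is a simplification (helper name and seed are ours): we always retain the ground-truth classes of the iteration and fill the remaining slots uniformly at random:

```python
import numpy as np

def sample_loss_classes(num_classes, gt_classes, k=50, seed=0):
    """Return k class indices to compute the classification loss on:
    the ground-truth classes plus uniformly sampled other classes."""
    rng = np.random.default_rng(seed)
    keep = set(gt_classes)
    pool = np.array([c for c in range(num_classes) if c not in keep])
    extra = rng.choice(pool, size=k - len(keep), replace=False)
    return sorted(keep | {int(c) for c in extra})
```

Only the sampled columns of the classification layer are evaluated and back-propagated, which keeps the 21K-class head tractable.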
As there is no direct benchmark to evaluate detectors with such a large vocabulary,
we evaluate our detectors on new datasets \emph{without finetuning}.
We evaluate on two large-scale object detection datasets:
Objects365v2~\cite{shao2019objects365} and OpenImages~\cite{kuznetsova2020open}, both with around $1.8$M training images.
Following LVIS, we designate the $\frac{1}{3}$ of classes with the fewest training images as rare classes.
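This rare split is just a count-based partition of the vocabulary; sketched (function name ours):

```python
def rare_classes(train_counts):
    """Return the third of classes with the fewest training images,
    mirroring how the LVIS-style rare split is defined."""
    order = sorted(train_counts, key=train_counts.get)
    return set(order[:len(order) // 3])
```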
\reftab{crossdataset} shows the results.
On both datasets, \OURS{} improves the
Box-Supervised baseline by a large margin, especially on classes with fewer annotations.
Using all the 21k classes further improves performance owing to the large vocabulary.
Our single model significantly reduces the gap towards the dataset-specific oracles
and reaches $70\%$-$80\%$ of their performance without using the corresponding $1.8$M detection annotations.
See ~\reffig{qualitative} for qualitative results.
\subsection{The standard \lvis{} benchmark}
\begin{table}[!t]
\small
\begin{center}
\begin{tabular}{@{}l@{}c@{}c@{}c@{}c@{}c@{}}
\toprule
& Backbone& mAP$\mask$ & mAP$\mask_{\text{r}}$ & mAP$\mask_{\text{c}}$ & mAP$\mask_{\text{f}}$\\
\midrule
\baselineDet & ResNet50 & 31.5 & 25.6 & 30.4 & 35.2 \\
\OURS{} & ResNet50 & \bf 33.2 & \bf 29.7 & \bf 32.5 & \bf 35.5 \\
\midrule
\baselineDet & \swinB & 40.7 & 35.9 & 40.5 & \bf 43.1 \\
\OURS{} & \swinB & \bf{41.7} & \bf{41.7} & \bf 40.8 & 42.6\\
\midrule
MosaicOS~\cite{zhang2021mosaicos} & ResNeXt-101 & 28.3 & 21.7 & 27.3 & 32.4 \\
CenterNet2~\cite{zhou2021probabilistic} & ResNeXt-101 & 34.9 & 24.6 & 34.7 & 42.5\\
AsyncSLL~\cite{han2020joint} & ResNeSt-269 &36.0 & 27.8 & 36.7 & 39.6 \\
SeesawLoss~\cite{wang2021seesaw}\! & ResNeSt-200 & 37.3 & 26.4 & 36.3 & \bf 43.1\\
Copy-paste~\cite{ghiasi2021simple} & EfficientNet-B7\!\!\!\! & 38.1 & 32.1 & 37.1 & 41.9\\
Tan et al.~\cite{tan20201st} & ResNeSt-269 & 38.8 & 28.5 & 39.5 & 42.7\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{Standard \lvis{}.}
We evaluate our baseline (\baselineDet) and \OURS{} using different backbones on the \lvis{} dataset.
We report the mask mAP.
We also report prior work on LVIS using large backbone networks (single-scale testing).
\OURS{} improves over the baseline with increased gains for the rare classes.
The improvements are consistent in the high-performance regime.
}
\lbltab{fulllvis}
\vspace{-5mm}
\end{table}
Finally, we evaluate \OURS{} on the standard \lvis{} benchmark~\cite{gupta2019lvis}.
In this setting, the baseline (\baselineDet) is trained with box and mask labels for all classes while \OURS{} uses additional image-level labels from \imnetLvis{}.
In~\cref{tbl:fulllvis}, we report the performance of these methods for two different backbones ResNet50~\cite{he2016deep} and Swin-B~\cite{liu2021swin}.
We report the mask mAP across all classes and also split the metric for rare, common, and frequent classes.
\OURS{} brings consistent improvement over the baseline
across these metrics for both the backbones.
On the rare classes, \OURS{} brings a significant improvement of $\sim\!4$ mAP$_\text{r}$ over the baseline for both backbones.
With the \swinB backbone, our model achieves $41.7$ mAP and $41.7$ \mAPr, closing the gap between the overall mAP and the rare mAP.
This suggests \OURS{} effectively uses image-level labels to improve the performance of classes with very few box labels.
In~\cref{tbl:fulllvis-mosaic}, we compare to MosaicOS~\cite{zhang2021mosaicos} which also uses image-level annotations to improve LVIS detectors.
We strictly follow their training recipe
(without any improvements in~\cref{sec:implementation_details})
and report results using the vanilla Mask R-CNN~\cite{He_2017_ICCV} architecture.
We train \OURS{} using \imnetLvis{} as the image-supervised data.
MosaicOS uses \imnetLvis{} and additional web-search images as image-supervised data.
\OURS{} outperforms MosaicOS~\cite{zhang2021mosaicos} in mAP and \mAPr,
without using their multi-stage training framework and mosaic augmentation.
See \supplement{sec:mosaic-details} for more detailed comparisons.
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{}}
\toprule
& mAP$\mask$ & mAP$\mask_{\text{r}}$ & mAP$\mask_{\text{c}}$ & mAP$\mask_{\text{f}}$\\
\midrule
\baselineDet & 22.6 & 12.3 & 21.3 & 28.6\\
MosaicOS~\cite{zhang2021mosaicos}\dag & 24.5 & 18.3 & \bf 23.0 & \bf 28.9\\
\OURS{} & \bf 24.9 & \bf 20.7 & \bf 23.0 & 28.7 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{Standard \lvis{} compared to MosaicOS.} We compare under the training settings of MosaicOS~\cite{zhang2021mosaicos} and report the mask mAP.
All methods use Mask R-CNN with the ResNet50.
MosaicOS uses additional images searched from Google (indicated by \dag).
\OURS{} provides a greater improvement while being easier to train.
}
\lbltab{fulllvis-mosaic}
\vspace{-5mm}
\end{table}
\section{Introduction}
Object detection consists of two sub-problems: finding the object (localization) and naming it (classification).
Traditional methods tightly couple these two sub-problems
and thus rely on box labels for all classes.
Despite many data collection efforts, detection datasets~\cite{lin2014microsoft,gupta2019lvis,shao2019objects365,kuznetsova2020open} are much smaller in overall size and object classes (vocabulary) than image classification datasets~\cite{deng2009imagenet}.
For example, the recent \lvis{} detection dataset~\cite{gupta2019lvis} has 1000+ classes with 120K images;
OpenImages~\cite{kuznetsova2020open} has 1.8M images for 500 classes.
Moreover, not all classes contain sufficient annotations to train a robust detector
(see \reffig{teaser:lvis-det}).
In classification, even the ten-year-old ImageNet dataset~\cite{deng2009imagenet} has 21K classes and 14M images (\reffig{teaser:stats}).
\begin{figure}[!t]
\centering
\begin{subfigure}{0.99\linewidth}
\includegraphics[page=2, width=\linewidth]{figs/teaser8.pdf}
\vspace{-5mm}
\caption{Number of images per class (smoothed) in LVIS, ImageNet, and CC.}
\lblfig{teaser:stats}
\end{subfigure}
\begin{subfigure}{0.99\linewidth}
\includegraphics[page=1, width=\linewidth]{figs/teaser8.pdf}
\vspace{-3mm}
\caption{Results from an LVIS detector.}
\lblfig{teaser:lvis-det}
\end{subfigure}
\begin{subfigure}{0.99\linewidth}
\includegraphics[page=3, width=\linewidth]{figs/teaser8.pdf}
\vspace{-3mm}
\caption{Results from \OURS{}.}
\lblfig{teaser:ours-det}
\end{subfigure}
\vspace{-3mm}
\caption{Qualitative results from a strong open-vocabulary LVIS detector (middle) and our open-vocabulary detector trained with image-level supervision (bottom).
Our \textbf{Det}ector with \textbf{i}mage \textbf{c}lasses (\OURS) leverages image-level supervision (top) with a simple loss and improves detection performance for all classes.
}
\lblfig{teaser}
\vspace{-5mm}
\end{figure}
\begin{figure*}[!t]
\centering
\begin{subfigure}{0.2\linewidth}
\includegraphics[page=1, width=\linewidth]{figs/framework8_masked.pdf}
\caption{Standard detection}
\end{subfigure}
\hspace{4mm}
\begin{subfigure}{0.355\linewidth}
\includegraphics[page=2, width=\linewidth]{figs/framework8_masked.pdf}
\caption{Label assignment in weakly-supervised detection}
\end{subfigure}
\hspace{4mm}
\begin{subfigure}{0.332\linewidth}
\includegraphics[page=3, width=\linewidth]{figs/framework8_masked.pdf}
\caption{Our image-supervised loss}
\end{subfigure}
\vspace{-3mm}
\caption{
\textbf{(left)} Standard detection requires ground-truth labeled boxes and cannot leverage image-level labels.
\textbf{(center)} Weakly supervised methods~\cite{redmon2017yolo9000,ramanathan2020dlwl,bilen2016weakly} use image-level labels by assigning them to the detector's predicted boxes (proposals).
Unfortunately, this assignment is error-prone, especially for large-vocabulary detection.
\textbf{(right)} Detic omits label-to-box assignment and supervises all image labels with the \emph{max-size} proposal.
We show that this loss is simpler than prior work and performs better.
}
\lblfig{framework}
\vspace{-5mm}
\end{figure*}
In this paper, we propose \textbf{Det}ector with \textbf{i}mage \textbf{c}lasses (\OURS) that uses
image-level supervision in addition to detection supervision.
We observe that the localization and classification sub-problems can be decoupled.
Modern region proposal networks already localize many `new' objects using existing detection supervision.
Thus, we focus on the classification sub-problem and use image-level labels to train the classifier and broaden the vocabulary of the detector.
We propose a simple classification loss that
applies the image-level supervision to the proposal with the largest spatial size,
and does not supervise other outputs for image-labeled data.
This is easy to implement and massively expands the detector's vocabulary.
Most existing weakly-supervised detection techniques~\cite{tang2018pcl,huang2020comprehensive,fang2021wssod,xu2021end,liu2021unbiased}
use the weakly labeled data to supervise \emph{both} the localization and classification sub-problems of detection.
Since image-classification data has no box labels, these methods develop various label-to-box assignment techniques to obtain boxes.
For example, YOLO9000~\cite{redmon2017yolo9000} and DLWL~\cite{ramanathan2020dlwl} assign the image labels to proposals with high prediction scores.
Unfortunately, this assignment requires good initial detections, which
leads to a chicken-and-egg problem: we need a good detector for good label assignment, but we need many boxes to train a good detector.
Our method completely side-steps the label assignment process by supervising the classification sub-problem alone when using classification data.
This also enables our method to learn detectors for new classes which would have been impossible to predict and assign.
Experiments on the \zeroshot{} LVIS~\cite{gupta2019lvis,gu2021zero} and
the \zeroshot{} COCO~\cite{bansal2018zero} benchmarks show
that our method can significantly improve over
a strong box-supervised baseline, on both novel and base classes.
With image-level supervision from ImageNet-21K~\cite{deng2009imagenet},
our model trained without novel class detection annotations improves the baseline
by $8.3$ points and matches the performance of using full class annotations in training.
With the standard LVIS annotations, our model reaches $41.7$ mAP and $41.7$ mAP$_{\text{rare}}$,
narrowing the gap between rare classes and all classes to $0$.
On \zeroshot{} COCO, our method outperforms the previous state-of-the-art OVR-CNN~\cite{zareian2021open} by $5$ points with the same detector and data.
Finally, we train a detector using the full ImageNet-21K with more than twenty-thousand classes.
Our detector generalizes much better to new datasets~\cite{shao2019objects365,kuznetsova2020open} with disjoint label spaces,
reaching $21.5$ mAP on Objects365 and $55.2$ mAP50 on OpenImages,
without seeing any images from the corresponding training sets.
\section{Approach}
We train object detectors using both object detection and image classification datasets.
We propose a simple way to leverage image supervision to learn object detectors, including for classes without box labels.
We first describe the object detection problem and then detail our approach.
\subsection{Notation and Preliminaries}
\lblsec{prelim}
Given an image $\bI \in \mathbb{R}^{3\times h \times w}$, object detection solves the two subproblems of (1) localization:
find all objects with their location, represented as a box $\bb_j \in \mathbb{R}^4$ and (2) classification:
assign a class label $c_j \in \cTest$ to the $j$-th object.
Here $\cTest$ is the class vocabulary provided by the user at test time.
During training, we use a detection dataset
$\dSup = \{(\bI, \{(\bb, c)_k\})_i\}_{i=1}^{|\dSup|}$ with vocabulary $\cSup$ that has both class and box labels.
We also use an image classification dataset
$\dWeak = \{(\bI, \{c_k\})_i\}_{i=1}^{|\dWeak|}$ with vocabulary $\cWeak$ that only has image-level class labels.
The vocabularies $\cTest$, $\cSup$, $\cWeak$ may or may not overlap.
\par \noindent\textbf{Traditional object detection} considers $\cTest = \cSup$ and $\dWeak = \emptyset$.
Predominant object detectors~\cite{ren2015faster,He_2017_ICCV} follow a two-stage framework.
The first stage, called the \emph{region proposal network} (RPN), takes the image $\bI$ and produces a set of object proposals $\{(\bb, \bof, o)_j\}$, where
$\bof_j \in \mathbb{R}^D$ is a $D$-dimensional region feature and $o \in \mathbb{R}$ is the objectness score.
The second stage takes the object feature and outputs a classification score and a refined box location for each object, $s_j = \bW \bof_j$, $\hat{\bb}_j = \bB \bof_j + \bb_j$,
where $\bW \in \mathbb{R}^{|\cSup| \times D}$
and $\bB \in \mathbb{R}^{4 \times D}$
are the learned weights of the classification layer and the regression layer, respectively\footnote{We omit the two additional linear layers and the bias in the second stage for notation simplicity.}.
Our work focuses on improving classification in the second stage.
We observe that the proposal network of a detector and the bounding box regressors can already generalize to new classes (see \supplement{sec:proposal}).
This allows us to focus on training the classifier.
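As a concrete sketch of the second-stage computation above (dimensions are illustrative and the variable names are ours), both heads are plain linear maps over the region feature:

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, N = 8, 5, 3                    # feature dim, |C_sup|, number of proposals

W = rng.normal(size=(C, D))          # classification layer weights
B = rng.normal(size=(4, D))          # box regression layer weights
feats = rng.normal(size=(N, D))      # region features f_j from the first stage
boxes = rng.uniform(size=(N, 4))     # proposal boxes b_j

scores = feats @ W.T                 # s_j = W f_j, one score vector per proposal
refined = boxes + feats @ B.T        # b_hat_j = B f_j + b_j (delta refinement)
```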
\par \noindent\textbf{\Zeroshot{} object detection} allows $\cTest \neq \cSup$.
Simply replacing the classification weights $\bW$ with language embeddings of class
names converts a traditional detector to an \zeroshot{} detector~\cite{bansal2018zero}.
We follow Gu \etal~\cite{gu2021zero} to use the CLIP embeddings~\cite{radford2021learning} as the classification weights.
CLIP~\cite{radford2021learning} trains aligned image and text features on a large corpus of image-text pairs and has demonstrated great zero-shot generalization ability in image classification.
In theory, this \zeroshot{} detector can detect any object class.
However, in practice, it yields unsatisfying detection results as shown in~\reffig{teaser}.
Our method uses image-level supervision to
improve object detection including in the \zeroshot{} setting.
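A minimal sketch of the classifier swap, assuming the class-name text embeddings are already computed; the L2 normalization follows ViLD~\cite{gu2021zero}, and the temperature `tau` is an illustrative value of ours:

```python
import numpy as np

def zero_shot_scores(roi_feats, class_embeddings, tau=0.01):
    """Score region features against class-name text embeddings (e.g., from
    the CLIP text encoder). Both sides are L2-normalized, so each logit is a
    scaled cosine similarity."""
    f = roi_feats / np.linalg.norm(roi_feats, axis=-1, keepdims=True)
    w = class_embeddings / np.linalg.norm(class_embeddings, axis=-1, keepdims=True)
    return f @ w.T / tau
```

Expanding the test-time vocabulary then only requires embedding new class names; no detector weights change.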
\subsection{\OURS: \OURSFull}
\lblsec{image}
As shown in~\reffig{framework}, our method leverages the box labels from detection datasets $\dSup$ and image-level labels from classification datasets $\dWeak$.
During training, we compose a mini-batch using images from both types of datasets.
For images with box labels, we follow the standard two-stage detector training~\cite{ren2015faster}.
For image-level labeled images, we only train the features from a fixed region proposal for classification.
Thus, we only compute the localization losses (RPN loss and bounding box regression loss) on images with ground truth box labels.
Below we describe our modified classification loss for image-level labels.
\par \noindent \textbf{Classifier-only training with image labels.}
A sample from the weakly labeled dataset $\dWeak$ contains an image $\bI$ and a set of $K$ labels $\{c_k\}_{k=1}^K$.
We use the region proposal network to extract $N$ object features $\{(\bb, \bof, o)_j\}_{j=1}^{N}$.
We propose simple ways to use the image labels $\{c_k\}_{k=1}^K$ and supervise the region proposal features.
Our first idea is to use the whole image as a new ``proposal'' box.
We call this loss \textbf{image-box}.
We ignore all proposals from the RPN, and instead use an injected box of the whole image $\bb' = (0, 0, w, h)$.
We then apply the classification loss to its RoI features $\bof'$ for all classes $c \in \{c_k\}_{k=1}^K$:
$$L_{\text{image-box}} = BCE(\bW \bof', c)$$
where $BCE(s, c) = -\log \sigma(s_c) - \sum_{k \neq c} \log (1 - \sigma(s_k))$ is the binary cross-entropy loss, and $\sigma$ is the sigmoid activation.
Thus, our loss uses the features from the same `proposal' for solving the classification problem for all the classes $\{c_k\}$.
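The binary cross-entropy above can be written out directly (pure-Python sketch, names ours):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(scores, c):
    """BCE(s, c) = -log sigma(s_c) - sum_{k != c} log(1 - sigma(s_k))."""
    loss = -math.log(sigmoid(scores[c]))
    loss -= sum(math.log(1.0 - sigmoid(s)) for k, s in enumerate(scores) if k != c)
    return loss
```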
In practice, the image-box can be replaced by smaller boxes.
We introduce two alternatives:
the proposal with the \textbf{max object score} or the proposal with the \textbf{max size}:
$$L_{\text{max-object-score}} = BCE(\bW\bof_j, c), j = \text{argmax}_j o_j$$
$$L_{\text{max-size}} = BCE(\bW\bof_j, c), j = \text{argmax}_j (\text{size}(\bb_j)) $$
We show that all three losses can effectively leverage the image-level supervision, while the max-size loss performs the best.
We thus use the max-size loss by default for image-supervised data.
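A sketch of the max-size loss (numpy, names ours): the single largest proposal receives the BCE loss for every image-level label, and no other proposal is supervised.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def max_size_loss(boxes, feats, W, image_labels):
    """L_max-size: j = argmax_j size(b_j), then sum BCE(W f_j, c) over labels."""
    sizes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    j = int(np.argmax(sizes))          # fixed, prediction-independent choice
    s = sigmoid(W @ feats[j])          # per-class sigmoid scores of proposal j
    loss = 0.0
    for c in image_labels:             # BCE(s, c), per image-level label
        loss += -np.log(s[c]) - np.log(1.0 - np.delete(s, c)).sum()
    return float(loss)
```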
We also note that the classification parameters $\bW$ are shared across both detection and classification data, which greatly improves detection performance (\cref{sec:ablation}).
The overall training objective is
\begin{equation*}
L(\bI)=\begin{cases}
L_{\text{rpn}} + L_{\text{reg}} + L_{\text{cls}}, & \text{if} \ \bI \in \dSup \\
\lambda L_{\text{max-size}}, & \text{if} \ \bI \in \dWeak
\end{cases}
\end{equation*}
where $L_{\text{rpn}}$, $L_{\text{reg}}$, $L_{\text{cls}}$ are standard losses in
a two-stage detector, and $\lambda=0.1$ is the weight of our loss.
\par \noindent \textbf{Relation to weakly-supervised detection.}
In traditional weakly-supervised
detection~\cite{bilen2016weakly,redmon2017yolo9000,ramanathan2020dlwl},
a popular idea is to assign the image labels to the proposals.
Let $\bF = (\bof_1, \dots, \bof_N)$ be the stacked feature of all object proposals
and $\bS = \bW\bF$ be their classification scores.
For each $c \in \{c_k\}_{k=1}^K$,
$L = BCE(\bS_j, c), j = \mathcal{F}(\bS, c)$,
where $\mathcal{F}$ is the label-to-box assignment process.
In most methods, $\mathcal{F}$ is a function of the prediction $\bS$.
For example, $\mathcal{F}$ selects the proposal with max score on $c$.
Our key insight is that $\mathcal{F}$ should \emph{not} depend on the prediction $\bS$.
In large-vocabulary detection,
the initial recognition ability of rare or novel classes is low,
making the label assignment process inaccurate.
Our method side-steps this prediction-and-assignment process entirely and relies on a fixed supervision criterion.
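The distinction can be made concrete: prior assignment functions depend on the current scores $\bS$, while the max-size criterion is a fixed function of geometry only (sketch; function names are ours).

```python
import numpy as np

def assign_max_score(S, c):
    """F(S, c) in YOLO9000/DLWL-style methods: pick the proposal whose current
    score for class c is highest -- unreliable when class c is rare or novel."""
    return int(np.argmax(S[:, c]))

def assign_max_size(boxes):
    """A prediction-independent criterion: pick the largest proposal,
    regardless of the current scores S."""
    sizes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return int(np.argmax(sizes))
```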
\section{Related Work}
\par \noindent\textbf{Weakly-supervised object detection (WSOD)} trains object detectors using image-level labels.
Many works use only image-level labels without any box supervision
~\cite{li2019weakly,shen2019cyclic,yang2019towards,wan2019c,shen2020enabling}.
WSDDN~\cite{bilen2016weakly} and OICR~\cite{tang2017multiple} use a subnetwork to predict per-proposal weights and sum proposal scores into a single image-level score.
PCL~\cite{tang2018pcl} first clusters proposals and then assigns image labels at the cluster level.
CASD~\cite{huang2020comprehensive} further introduces feature-level attention and self-distillation.
As no bounding box supervision is used in training, these methods rely on low-level region proposal techniques~\cite{uijlings2013selective,arbelaez2014multiscale},
which leads to reduced localization quality.
Another line of WSOD work uses bounding box supervision together with image labels,
known as \textbf{semi-supervised WSOD}~\cite{yan2017weakly,fang2021wssod,uijlings2018revisiting,li2018mixed,liu2021mixed,zhong2020boosting,dong2021boosting}.
YOLO9000~\cite{redmon2017yolo9000} mixes detection data and classification data in the same mini-batch,
and assigns classification labels to anchors with the highest predicted scores.
DLWL~\cite{ramanathan2020dlwl} combines self-training and clustering-based
WSOD~\cite{tang2018pcl}, and again assigns image labels to max-scored proposals.
MosaicOS~\cite{zhang2021mosaicos} handles domain differences between detection
and image datasets by mosaic augmentation~\cite{bochkovskiy2020yolov4}
and proposed a three-stage self-training and finetuning framework.
In segmentation, Pinheiro \etal~\cite{pinheiro2015weakly} use a log-sum-exponential function to aggregate pixel scores into a global classification score.
Our work belongs to semi-supervised WSOD.
Unlike prior work, we use a simple image-supervised loss.
Besides image labels, researchers have also studied complementary forms of
weak localization supervision such as points~\cite{chen2021points} or scribbles~\cite{ren2020ufo}.
\begin{figure*}[!t]
\centering
\begin{subfigure}{0.495\linewidth}
\includegraphics[page=4, width=\linewidth]{figs/framework8_masked.pdf}
\caption{Detection data}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[page=5, width=\linewidth]{figs/framework8_masked.pdf}
\caption{Image-labeled data }
\end{subfigure}
\vspace{-3mm}
\caption{
\textbf{Approach Overview.} We train on a mix of detection data and image-labeled data.
When using detection data, our model uses the standard detection losses to train the classifier ($\mathbf{W}$) and the box prediction branch ($\mathbf{B}$) of a detector.
When using image-labeled data, we only train the classifier using our modified classification loss.
Our classification loss trains the features extracted from the largest-sized proposal predicted by the network.
We use CLIP embeddings~\cite{radford2021learning} as the classification weights ($\bW$) for \zeroshot{} detection.
}
\lblfig{framework}
\vspace{-5mm}
\end{figure*}
\par \noindent\textbf{Open-vocabulary object detection}, also known as
\textbf{zero-shot object detection}, aims to
detect objects outside of the training vocabulary.
The basic solution~\cite{bansal2018zero} is to
replace the last classification layer with language embeddings
(e.g., GloVe~\cite{pennington2014glove}) of the class names.
Rahman \etal~\cite{rahman2020improved} and Li \etal~\cite{li2019zero} improve the classifier embedding by introducing external text information.
OVR-CNN~\cite{zareian2021open} pretrains the detector on image-text pairs using contrastive learning.
ViLD~\cite{gu2021zero} upgrades the language embedding to CLIP~\cite{radford2021learning}
and distills region features from CLIP image features.
Our work also uses CLIP~\cite{radford2021learning} embeddings as the classifier, but does not use distillation.
Instead, we incorporate additional image-labeled data for co-training.
\par \noindent\textbf{Large-vocabulary object detection}~\cite{gupta2019lvis} requires detecting 1000+ classes.
Many existing works focus on handling the long-tail problem
~\cite{pan2021model,li2020overcoming,zhang2021distribution,wu2020forest,Feng_2021_ICCV,chang2021image}.
Repeat factor sampling (RFS)~\cite{gupta2019lvis} oversamples classes with fewer annotations.
Equalization losses~\cite{tan2020equalization,tan2021equalization}
and Seesaw loss~\cite{wang2021seesaw} reweight the per-class loss by balancing the gradients~\cite{tan2021equalization} or the number of samples~\cite{wang2021seesaw}.
Federated Loss~\cite{zhou2021probabilistic} subsamples classes per-iteration to mimic the federated annotation~\cite{gupta2019lvis}.
Yang \etal~\cite{yang2019detecting} detect 11K classes with a label hierarchy.
Our method builds on these advances,
and we tackle the problem from a different aspect: using additional image-labeled data.
\par \noindent\textbf{Language supervision for object detection}
is a recent popular research topic.
VirTex~\cite{desai2021virtex} and OVR-CNN~\cite{zareian2021open} pretrain backbones on language tasks and show benefits for detection.
Cap2Det~\cite{ye2019cap2det} learns a mapping from sentences to image labels in the detector's vocabulary and then applies WSOD~\cite{bilen2016weakly}.
MDETR~\cite{kamath2021mdetr} performs instance-level detection
by training on paired object-caption annotations.
We use language data~\cite{sharma2018conceptual} in a similar way as Cap2Det, but simpler:
we extract image labels from captions using a naive text-match.
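The text-match can be as simple as a substring test over the class vocabulary (a sketch of ours; real class names may need synonym or plural handling, which we ignore here):

```python
def labels_from_caption(caption, vocabulary):
    """Naive text-match: a class becomes an image label if its name
    occurs in the lower-cased caption."""
    text = caption.lower()
    return [c for c in vocabulary if c.lower() in text]
```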
\section{Region proposal quality}
\lblsec{proposal}
\begin{table}[!t]
\small
\begin{center}
\begin{subtable}[t]{\linewidth}
\begin{center}
\begin{tabular}{@{}l@{}c@{\ }c@{\ \ }c@{\ \ }c@{\ \ }c@{}}
\toprule
& AR$_{r}$50@100 & AR$_{r}$50@300 & AR$_{r}$50@1k & AR50@1k \\
\midrule
LVIS-all & 63.3 & 76.3 & 79.7 & 80.9 \\
LVIS-base & 62.2 & 76.2 & 78.5 & 81.0\\
\bottomrule
\end{tabular}
\caption{
\textbf{Proposal networks trained with (top) and without (bottom) rare classes.} We report recalls on rare classes and all classes at IoU threshold 0.5 with different numbers of proposals. Proposal networks trained \emph{without} rare classes can generalize to rare classes in testing.
}
\lbltbl{proposal-rare}
\end{center}
\end{subtable}
\begin{subtable}[t]{\linewidth}
\begin{center}
\begin{tabular}{@{}l@{\ \ }c@{\ \ \ \ }c@{\ \ \ \ }c@{}}
\toprule
& AR$_{\text{half-1st}}$50@1k & AR$_{\text{half-2nd}}$50@1k \\
\midrule
LVIS-half-1st & 80.8 & 69.6 \\
LVIS-half-2nd & 62.9 & 82.2 \\
\bottomrule
\end{tabular}
\caption{
\textbf{Proposal networks trained on half of the LVIS classes.} We report recalls at IoU threshold 0.5 on the other half classes. Proposal networks produce non-trivial recalls on novel classes.
}
\lbltab{proposal-strict}
\end{center}
\end{subtable}
\vspace{-3mm}
\caption{\textbf{Proposal network generalization ability evaluation.}
\textbf{(a)}: Generalize from $886$ LVIS base classes to the $317$ rare classes;
\textbf{(b)}: Generalize from uniformly sampled half of the LVIS classes (601/602 classes) to the other half.}
\lbltab{proposal}
\end{center}
\vspace{-6mm}
\end{table}
In this section, we show that the region proposal network trained on LVIS~\cite{gupta2019lvis}
is satisfactory and can generalize to new classes by default.
We experiment with our strong baseline described in~\cref{sec:implementation_details}.
\reftab{proposal-rare} shows the proposal recalls with or without
rare classes in training.
First, we observe the recall gaps between the two models on rare classes are small (79.7 vs. 78.5);
second, the gaps between rare classes and all classes are small (79.7 vs. 80.9);
third, the absolute recall is relatively high
($\sim\!80\%$; note that recall at IoU threshold 0.5 can be translated into oracle
mAP-pool~\cite{dave2021evaluating} given a perfect classifier and regressor).
All observations indicate that the proposals can generalize to new classes even though these classes are supervised as background during training.
These results are consistent with ViLD~\cite{gu2021zero}.
We additionally evaluate a stricter setting, where we uniformly split the LVIS classes into two halves.
That is, we use class IDs $1, 3, 5, \dots$ as the first half, and the rest as the second half.
These two subsets have completely different definitions of ``objects''.
We then train a proposal network on each of them, and evaluate on both subsets.
As shown in \reftab{proposal-strict}, the proposal networks give non-trivial recalls on the complementary half.
This again supports that proposal networks trained on a diverse vocabulary learn a general concept of objects.
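For reference, the recall metric used in the tables above can be sketched as follows: AR50@$k$ counts a ground-truth box as recovered if any of the top-$k$ proposals overlaps it with IoU $\geq 0.5$ (pure-Python sketch, names ours).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def recall_at_iou(gt_boxes, proposals, thresh=0.5):
    """Fraction of ground-truth boxes covered by at least one proposal."""
    hits = sum(any(iou(g, p) >= thresh for p in proposals) for g in gt_boxes)
    return hits / max(len(gt_boxes), 1)
```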
\section{Open-vocabulary COCO benchmark details}
\lblsec{coco-details}
Open-vocabulary COCO is proposed by Bansal et al.~\cite{bansal2018zero}.
They manually select 48 classes from the 80 COCO classes as base classes,
and 17 classes as novel classes.
The training set is the same as the full COCO, but only images containing at least one base class are used.
During testing, we report results under the ``generalized zero-shot detection''
setting~\cite{bansal2018zero}, where all COCO validation images are used.
We strictly follow the literature~\cite{bansal2018zero,rahman2020improved,zareian2021open}
and use Faster R-CNN~\cite{ren2015faster} with a ResNet50-C4 backbone and the $1\times$ training schedule ($90k$ iterations).
We use horizontal flip as the only data augmentation in training and
keep the input resolution
fixed to $800\times1333$ in both training and testing.
We use the SGD optimizer with a learning rate of $0.02$ (dropped by $10\times$ at the $60k$ and $80k$
iterations) and a batch size of $16$.
The evaluation metric on open-vocabulary COCO is box mAP at IoU threshold 0.5.
Our reproduced baseline matches OVR-CNN~\cite{zareian2021open}.
Our model is finetuned from the baseline model for another $1\times$ schedule. We sample detection data and image-supervised data in a $1:1$ ratio.
\section{Direct captions supervision}
\lblsec{caption}
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ \ \ \ }l@{\ }c@{}c@{}}
\toprule
& Supervision & mAP$^\text{mask}$ & mAP$^\text{mask}_{\text{novel}}$ \\
\midrule
Box-Supervised & - & 30.2 & 16.4 \\
\OURS{} w. CC & Image label & \bf 31.0 & 19.8 \\
\OURS{} w. CC & Caption & 30.4 & 17.4 \\
\OURS{} w. CC & Both & \bf 31.0 & \bf 21.3 \\
\midrule
& & mAP50$^{\text{box}}_{\text{all}}$ & mAP50$^{\text{box}}_{\text{novel}}$ \\
\midrule
Box-Supervised & - & 39.3 & 1.3 \\
\OURS{} w. COCO-cap. & Image label & 44.7 & 24.1 \\
\OURS{} w. COCO-cap. & Caption & 43.8 & 21.0 \\
\OURS{} w. COCO-cap. & Both & \bf 45.0 & \bf 27.8 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{Direct caption supervision.} Top: open-vocabulary LVIS with Conceptual Captions as weakly-labeled data; Bottom: open-vocabulary COCO with COCO captions as weakly-labeled data. Directly using caption embeddings as a classifier is helpful on both benchmarks; the improvements are complementary to using image labels.}
\lbltab{caption}
\vspace{-4mm}
\end{table}
As we use the language model CLIP~\cite{radford2021learning} as the classifier,
our framework can seamlessly incorporate free-form caption text as image supervision.
Using the notation from~\cref{sec:image}, here
$\dWeak = \{(\bI, t)_i\}$, where $t$ is a free-form text.
In our open-vocabulary detection formulation, the text $t$ can naturally be converted to an embedding by the CLIP~\cite{radford2021learning} language encoder $\mathcal{L}$: $w = \mathcal{L}(t)$.
Given a minibatch of $B$ samples $\{(\bI, t)_i\}_{i=1}^{B}$, we compose a dynamic classification layer by stacking all caption features within the batch
$\widetilde{\bW} = \mathcal{L}(\{t_i\}_{i=1}^{B})$.
For the $i$-th image in the minibatch, its ``classification'' label is the $i$-th text, and other texts are negative samples.
We use the injected whole image box to extract RoI feature $\bof'_i$ for image $i$.
We use the same binary cross entropy loss as classifying image labels:
$$L_{cap} = \sum_{i=1}^{B} BCE(\widetilde{\bW} \bof'_{i}, i) $$
We do not back-propagate into the language encoder.
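A sketch of the batch-wise caption loss (numpy; the arrays stand in for the stacked caption embeddings $\widetilde{\bW}$ and the image-box RoI features, and the function name is ours):

```python
import numpy as np

def caption_loss(caption_emb, roi_feats):
    """L_cap: the i-th caption is the positive 'class' for the i-th image;
    the other B-1 captions in the minibatch act as negatives.
    caption_emb: (B, D) stacked caption embeddings; roi_feats: (B, D)."""
    S = 1.0 / (1.0 + np.exp(-(roi_feats @ caption_emb.T)))  # (B, B) sigmoid scores
    targets = np.eye(len(caption_emb))                      # positives on the diagonal
    return float(-(targets * np.log(S) + (1.0 - targets) * np.log(1.0 - S)).sum())
```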
We evaluate the effectiveness of the caption loss in \reftab{caption} on both \zeroshot{} LVIS and COCO.
We compare individually applying the max-size loss for image labels and the caption loss, and applying both of them.
Both image labels and captions can improve both overall mAP and novel class mAP.
Combining both losses gives a more significant improvement.
Our \zeroshot{} COCO results in \reftab{zs-coco} use both the max-size loss and the caption loss.
\section{LVIS baseline details}
\lblsec{lvis-baseline}
\begin{table}[!t]
\small
\begin{center}
\begin{tabular}{@{}l@{}c@{}c@{}c@{}c@{}c@{}}
\toprule
& mAP$^{\text{box}}$ & mAP$^{\text{box}}_{\text{r}}$ & mAP$^{\text{mask}}$ & mAP$^{\text{mask}}_{\text{r}}$ & T \\
\midrule
D2 baseline~\cite{wu2019detectron2} & 22.9 & 11.3 & 22.4 & 11.6 & 12h \\
+Class-agnostic box\&mask & 22.3 & 10.1 & 21.2 & 10.1 & 12h \\
+Federated loss~\cite{zhou2021probabilistic} & 27.0 & 20.2 & 24.6 & 18.2 & 12h \\
+CenterNet2~\cite{zhou2021probabilistic} & 30.7 & 22.9 & 26.8 & 19.4 & 13h \\
+LSJ $640\!\times\!640$, $4\!\times\!$ sched.\!~\cite{ghiasi2021simple}\!\!\! & 31.0 & 21.6 & 27.2 & 20.1 & 17h \\
+CLIP classifier~\cite{radford2021learning} & 31.5 & 24.2 & 28.0 & 22.5 & 17h\\
+Adam optimizer, lr$2e$-$4$~\cite{kingma2014adam} & 30.4 & 23.6 & 26.9 & 21.4 & 17h \\
\rowcolor{aluminium1}+IN-21k pretrain~\cite{ridnik2021imagenet21k}* & 35.3 & 28.2 & 31.5 & 25.6 & 17h\\
\color{gray} +Input size $896\!\times\!896$ & \color{gray} 37.1 & \color{gray} 29.5
& \color{gray} 33.2 & \color{gray} 26.9 & \color{gray} 25h\\
\color{gray} +Swin-B backbone~\cite{liu2021swin} & \color{gray} 45.4 &
\color{gray} 39.9 & \color{gray} 40.7 & \color{gray} 35.9 & \color{gray} 43h\\
\midrule
\rowcolor{aluminium1}*Remove rare class ann.\cite{gu2021zero}& 33.8 & 17.6 & 30.2 & 16.4 & 17h\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{
\textbf{LVIS baseline evolution.}
First row: the configuration from the detectron2 model zoo.
The following rows change components one by one.
Last row: removing rare class annotation from the ``+IN-21k pretrain*'' row.
The two \ctext[RGB]{238,238,236}{gray-filled rows} are the baselines in
our main paper, for full LVIS and open-vocabulary LVIS, respectively.
We show the rough wall-clock training time ($T$) on our machine with 8 V100 GPUs in the last column.}
\lbltab{lvis-baseline}
\vspace{-3mm}
\end{table}
We first describe the standard LVIS baseline from
the detectron2 model zoo\footnote{\url{https://github.com/facebookresearch/detectron2/blob/main/configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml}}, which is exactly the baseline used in MosaicOS~\cite{zhang2021mosaicos}.
This baseline uses ResNet-50 FPN backbone and a $2\times$ training schedule
($180k$ iterations with batch-size $16$)\footnote{We are aware that different projects use different notations for a $1\times$ schedule. In this paper, a $1\times$ schedule always refers to $16 \times 90k$ images.}.
Data augmentation includes horizontal flip and random resize short side [$640$, $800$], long side $<1333$.
The baseline uses the SGD optimizer with a learning rate of $0.02$ (dropped by $10\!\times$ at the $120k$ and $160k$ iterations).
The bounding box regression head and the mask head are class-specific.
\reftab{lvis-baseline} shows the roadmap from the detectron2 baseline to
our baseline (\cref{sec:implementation_details}).
First, we prepare the model for new classes by making the box and mask heads class-agnostic.
This slightly hurts performance.
We then use Federated loss~\cite{zhou2021probabilistic}
and upgrade the detector to CenterNet2~\cite{zhou2021probabilistic}
(i.e., replacing the RPN with CenterNet and multiplying the proposal score with the classification score).
Both modifications improve mAP and \mAPr significantly,
and CenterNet2 slightly increases the training time.
Next, we use the EfficientDet~\cite{tan2020efficientdet,ghiasi2021simple}
style large-scale jittering and train a longer schedule ($4\times$).
To balance the training time, we also reduce the training image size to
$640 \times 640$ (the testing size is unchanged at $800\times1333$)
and increase batch-size to $64$ (with the learning rate scaled up to $0.08$).
The resulting augmentation and schedule are slightly better than the default multi-scale training,
with $30\%$ more training time.
A longer schedule is beneficial when using more data, and performance can be further improved with a
larger resolution.
Next, we switch to the CLIP classifier~\cite{radford2021learning}. We follow ViLD~\cite{gu2021zero} and L2-normalize the embedding and RoI feature before the dot-product.
Note that CenterNet2 uses a cascade classifier~\cite{cai2018cascade};
we use CLIP embeddings for all cascade stages.
Using the CLIP classifier improves the rare-class mAP.
Finally, we use an ImageNet-21k pretrained ResNet-50 model from Ridnik
\etal~\cite{ridnik2021imagenet21k}.
We remark that the ImageNet-21k pretrained model requires the Adam optimizer
(with learning rate $2e$-$4$).
Combining all the improvements results in $35.3$ mAP$^{\text{box}}$
and $31.5$ mAP$^{\text{mask}}$, and trains in a favorable time ($17$h on 8 V100 GPUs).
We use this model as our baseline in the main paper.
Increasing the training resolution or using a larger backbone~\cite{liu2021swin}
can further increase performance significantly, at a cost of longer training time.
We use the large models only when compared to the state-of-the-art models.
\section{Resolution change for classification data}
\lblsec{ratio-and-size}
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ \ }c@{\ \ \ }c@{\ \ \ }c@{\ \ \ }c@{}c@{}}
\toprule
& Ratio & Size & mAP$^\text{mask}$ & mAP$^\text{mask}_{\text{novel}}$ \\
\midrule
Box-Supervised & 1: 0 & - & 30.2 & 16.4 \\
\midrule
\OURS{} w. IN-L & 1: 1 & 640 & 30.9 & 23.3 \\
\OURS{} w. IN-L & 1: 1 & 320 & 32.0 & 24.0\\
\OURS{} w. IN-L & 1: 4 & 640 & 31.1 & 23.5\\
\OURS{} w. IN-L & 1: 4 & 320 & \bf 32.4 & \bf 24.9 \\
\midrule
\OURS{} w. CC & 1: 1 & 640 & 30.8 & 21.6 \\
\OURS{} w. CC & 1: 1 & 320 & 30.8 & 21.5 \\
\OURS{} w. CC & 1: 4 & 640 & 30.7 & 21.0 \\
\OURS{} w. CC & 1: 4 & 320 & \bf 31.1 & \bf 21.8 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{Ablations of the resolution change.}
We report mask mAP on the open-vocabulary LVIS following the setting of ~\reftab{imagelabel}.
Top: ImageNet as the image-labeled data.
Bottom: CC as the image-labeled data.}
\lbltab{ratio-and-size}
\vspace{-3mm}
\end{table}
\begin{figure}[!t]
\centering
\begin{tabular}{@{}c@{\ }c@{}c@{}c@{}}
& $1/3$ schedule & $2/3$ schedule & Full schedule \\
\rotatebox[origin=c]{90}{Predicted\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!} & \includegraphics[width=0.33\linewidth]{dynamic/1_sc_30k.jpg}
& \includegraphics[width=0.33\linewidth]{dynamic/1_sc_60k.jpg}
& \includegraphics[width=0.33\linewidth]{dynamic/1_sc_90k.jpg} \\
\rotatebox[origin=c]{90}{Max-size\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}
& \includegraphics[width=0.33\linewidth]{dynamic/1_sz_30k.jpg}
& \includegraphics[width=0.33\linewidth]{dynamic/1_sz_60k.jpg}
& \includegraphics[width=0.33\linewidth]{dynamic/1_sz_90k.jpg} \\
\rotatebox[origin=c]{90}{Predicted\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}
& \includegraphics[trim={0 0 0 3.0cm}, clip, width=0.33\linewidth]{dynamic/2_sc_30k.jpg}
& \includegraphics[trim={0 0 0 3.0cm}, clip, width=0.33\linewidth]{dynamic/2_sc_60k.jpg}
& \includegraphics[trim={0 0 0 3.0cm}, clip, width=0.33\linewidth]{dynamic/2_sc_90k.jpg} \\
\rotatebox[origin=c]{90}{Max-size\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}
& \includegraphics[trim={0 0 0 3.0cm}, clip, width=0.33\linewidth]{dynamic/2_sz_30k.jpg}
& \includegraphics[trim={0 0 0 3.0cm}, clip, width=0.33\linewidth]{dynamic/2_sz_60k.jpg}
& \includegraphics[trim={0 0 0 3.0cm}, clip, width=0.33\linewidth]{dynamic/2_sz_90k.jpg} \\
\end{tabular}
\vspace{-3mm}
\caption{
\textbf{Visualization of the assigned proposals.} We show all proposals with score $>0.5$ in \textcolor{skyblue3}{blue} and the assigned proposal in \textcolor{DarkScarletRed}{red}. We compare the predicted loss with our max-size loss across different training iterations. The max-size loss in most cases covers the target object, and is consistent during training.}
\lblfig{qualitative-proposal}
\vspace{-5mm}
\end{figure}
~\reftab{ratio-and-size} ablates the resolution change in ~\cref{sec:implementation_details}.
Using a smaller input resolution improves both mAP and \mAPnoval{} by $\sim\!1$ point with ImageNet,
but has little impact with CC.
Using more batches for the weak datasets is slightly better than a $1:1$ ratio.
\section{WSOD losses implementation details}
\lblsec{predictionbaseddetails}
Following the notations in ~\cref{sec:image}, we implement the prediction-based WSOD losses as below:
\par \noindent \textbf{WSDDN}~\cite{bilen2016weakly}
learns a soft weight on the proposals to weight-sum the proposal classification scores into a single image classification score:
$$L_{\text{WSDDN}} = BCE(\sum_j \text{softmax}(\bW'\bF)_j \cdot \bS_j, c)$$
where $\bW'$ is a learnable network parameter.
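The WSDDN loss above can be sketched in numpy as follows. This is an illustrative single-class, single-image version with made-up shapes, not the actual training code ($\bF$ holds per-proposal features, $\bS$ per-proposal class scores):

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Scalar binary cross-entropy."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def wsddn_loss(F, S, c, W):
    """WSDDN-style loss: a learned softmax over proposals weights the
    per-proposal class scores S into a single image-level score for class c."""
    a = F @ W                              # (num_proposals,) detection logits
    w = np.exp(a - a.max()); w /= w.sum()  # softmax over proposals
    img_score = np.sum(w * S[:, c])        # weighted image-level score
    return bce(img_score, 1.0)             # image label c is present

rng = np.random.default_rng(1)
F = rng.normal(size=(8, 16))    # 8 proposals, 16-dim features (illustrative)
S = rng.uniform(size=(8, 5))    # per-proposal class probabilities
W = rng.normal(size=16)         # learnable weight W'
loss = wsddn_loss(F, S, c=2, W=W)
print(float(loss) > 0.0)  # True
```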
\par \noindent \textbf{Predicted}~\cite{redmon2017yolo9000}
selects the proposal with the max predicted score on class $c$:
$$L_{\text{Predicted}} = BCE(\bS_{j^*}, c), \quad j^* = \text{argmax}_j \bS_{jc}$$
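The Predicted assignment amounts to an argmax followed by a one-hot BCE on the selected proposal. A minimal sketch (scores and shapes are made up; the real implementation operates on logits within the detector):

```python
import numpy as np

def predicted_loss(S, c, eps=1e-7):
    """YOLO9000-style 'Predicted' assignment: supervise only the proposal
    whose predicted score on the image label c is highest, with a one-hot
    BCE target over that proposal's class scores."""
    j = int(np.argmax(S[:, c]))
    y = np.zeros(S.shape[1]); y[c] = 1.0
    p = np.clip(S[j], eps, 1 - eps)
    loss = float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).sum())
    return loss, j

S = np.array([[0.10, 0.70],
              [0.30, 0.20],
              [0.05, 0.90]])   # 3 proposals, 2 classes (illustrative)
loss, j = predicted_loss(S, c=1)
print(j)  # 2
```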
\par \noindent \textbf{DLWL*}~\cite{ramanathan2020dlwl} first runs a clustering algorithm with IoU threshold 0.5.
Let $\mathcal{J}$ be the set of cluster peaks
(i.e., the proposal within each cluster that has the max predicted score on class $c$).
We then select the top $N_c=3$ peaks with the highest prediction scores on class $c$:
\begin{align*}
L_{\text{DLWL*}} = \frac{1}{N_c}\sum_{t=1}^{N_c} & BCE(\bS_{j_t}, c), \\
& j_t = \text{argmax}_{j \in \mathcal{J},\, j \notin \{j_1, \dots, j_{t - 1}\}} \bS_{jc}\end{align*}
The original DLWL~\cite{ramanathan2020dlwl} in addition upgrades $\bS$ using an IoU-based assignment matrix from self-training and bootstrapping (See their Section 3.2).
In our implementation, we did not include this part, as our goal is to only compare the training losses.
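The peak-selection step can be sketched as below. Note this uses greedy NMS-style grouping as a stand-in for the clustering in DLWL, so it is only an approximation of the described procedure; the boxes and scores are made-up toy values:

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def dlwl_star_peaks(boxes, scores_c, iou_thresh=0.5, n_c=3):
    """Greedy NMS-style grouping: each kept box is a cluster peak;
    return (up to) the n_c highest-scoring peaks for class c."""
    order = np.argsort(-scores_c)
    peaks = []
    for j in order:
        if all(iou(boxes[j], boxes[p]) < iou_thresh for p in peaks):
            peaks.append(int(j))
    return peaks[:n_c]

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10],
                  [20, 20, 30, 30], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7, 0.6])
print(dlwl_star_peaks(boxes, scores))  # [0, 2, 3]
```

The second box overlaps the first with IoU $>0.5$, so it falls into the first cluster and is not a peak.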
\section{More comparison to predicted loss}
\lblsec{comparison-prediction-based}
\begin{table}[!t]
\begin{center}
\begin{subtable}[t]{\linewidth}
\begin{center}
\begin{tabular}{@{}l@{\ \ }c@{\ }c@{\ }c@{\ }c@{}}
\toprule
& Dataset & Backbone & mAP$^\text{mask}$ & mAP$^\text{mask}_{\text{novel}}$ \\
\midrule
Box-Supervised & & & 30.2 & 16.4 \\
Predicted & \multirow{1}{*}{LVIS-base} & \multirow{1}{*}{Res50} & 31.2 & 20.4 \\
Max-size &&& 32.4 \scriptsize \textcolor{green4}{(+1.2)} & 24.6 \scriptsize \textcolor{green4}{(+4.2)}\\
\midrule
Box-Supervised & & & 38.4 & 21.9 \\
Predicted & \multirow{1}{*}{LVIS-base} & \multirow{1}{*}{SwinB} & 40.0 & 31.7 \\
Max-size & & & 40.7 \scriptsize \textcolor{green4}{(+0.7)} & 33.8 \scriptsize \textcolor{green4}{(+2.1)}\\
\midrule
\midrule
Box-Supervised & & & 31.5 & 25.6 \\
Predicted & \multirow{1}{*}{LVIS-all} & \multirow{1}{*}{Res50} & 32.5 & 28.4 \\
Max-size & & & 33.2 \scriptsize \textcolor{green4}{(+0.7)} & 29.7 \scriptsize \textcolor{green4}{(+1.3)}\\
\midrule
Box-Supervised & & & 40.7 & 35.9 \\
Predicted & \multirow{1}{*}{LVIS-all} & \multirow{1}{*}{SwinB} & 40.6 & 39.8 \\
Max-size & & & 41.3 \scriptsize \textcolor{green4}{(+0.7)} & 40.9 \scriptsize \textcolor{green4}{(+1.1)}\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\textbf{Predicted loss and max-size loss with different prediction qualities.}
We show the mask mAP of the box-supervised baseline, Predicted loss~\cite{redmon2017yolo9000}, and our max-size loss.
We show the delta between max-size loss and predicted loss in \textcolor{green4}{green}.
Improving the backbone and including rare classes in training can both narrow the gap. Max-size consistently performs better.
}
\lbltab{max-score-gap}
\end{subtable}
\begin{subtable}[t]{\linewidth}
\begin{center}
\begin{tabular}{@{}l@{\ \ \ \ }c@{\ \ \ \ }c@{\ \ \ \ }c@{\ \ \ \ }c@{\ \ \ \ }c@{}}
\toprule
& \multicolumn{2}{c}{Cover rate} & \multicolumn{3}{c}{Consistency} \\
& IN-L & COCO & IN-L & CC & COCO\\
\cmidrule(r){1-1}
\cmidrule(r){2-3}
\cmidrule(){4-6}
Predicted & 69.0 & 73.8 & 71.5 & 30.0 & 57.7\\
Max-size & 92.8 & 80.0 & 87.9 & 73.0 & 62.8 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\textbf{Assigned proposal cover rate and consistency.}
Left: ratio of assigned proposals covering the ground-truth box, evaluated on an ImageNet subset that has box ground truth and on the annotated COCO training set; Right: average IoU of the assigned bounding boxes between the final model and the half-schedule model.}
\lbltab{analysis}
\end{subtable}
\end{center}
\vspace{-5mm}
\caption{\textbf{Comparison between predicted loss and max-size loss.}
\textbf{(a)}: comparison under different baselines.
\textbf{(b)}: comparison in customized metrics.}
\vspace{-5mm}
\end{table}
Our max-size loss performs significantly better than prediction-based WSOD losses, as shown in ~\reftab{imagelabel}.
In this section, we provide more detailed comparisons to the \textbf{Predicted} loss as a representative of prediction-based methods.
A straightforward reason is that the predicted loss
requires a good initial prediction to guide the pseudo-label-based training.
However, in the \zeroshot{} detection setting the initial predictions are inherently flawed.
To verify this, ~\reftab{max-score-gap} shows that both improving the backbone and including rare classes in training narrow the gap.
However, in the current performance regime, our max-size loss performs better.
We highlight two additional advantages of the max-size loss that may contribute to the good performance:
(1) the max-size loss is a safe approximation of object regions;
(2) the max-size loss is consistent during training.
\reffig{qualitative-proposal} provides qualitative examples of the assigned region for the predicted loss and the max-size loss.
First, we observe that while being coarse at the boundary, the max-size loss can \emph{cover} the target object in most cases.
Second, the assigned regions of the predicted loss are usually different across training iterations, especially in the early phase where the model predictions are unstable.
On the contrary, max-size loss supervises consistent regions across training iterations.
\reftab{analysis} quantitatively evaluates these two properties.
We use the ground truth box annotations in the full COCO detection dataset and a subset of ImageNet with bounding box annotations\footnote{\url{https://image-net.org/download-bboxes.php}. 213K of the 1.2M IN-L images have bounding box annotations.} to evaluate the cover rate.
We define cover rate as the ratio of image labels whose ground-truth box has $>0.5$ intersection-over-area with the assigned region.
We define the consistency metric as the average assigned-region IoU of the same image between the $1/2$ schedule and the final schedule.
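The cover-rate metric can be sketched as below. This is a toy numpy illustration with made-up boxes; we interpret "area" in intersection-over-area as the ground-truth box area, and the consistency metric is the same machinery with standard IoU between the two checkpoints' assignments:

```python
import numpy as np

def intersection_over_area(gt, assigned):
    """Intersection of gt with the assigned region, divided by the gt area."""
    x1, y1 = max(gt[0], assigned[0]), max(gt[1], assigned[1])
    x2, y2 = min(gt[2], assigned[2]), min(gt[3], assigned[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / gt_area if gt_area > 0 else 0.0

def cover_rate(gt_boxes, assigned_boxes, thresh=0.5):
    """Fraction of image labels whose gt box is covered by the assigned region."""
    hits = [intersection_over_area(g, a) > thresh
            for g, a in zip(gt_boxes, assigned_boxes)]
    return float(np.mean(hits))

# toy numbers, purely illustrative
gt = [(0, 0, 10, 10), (5, 5, 15, 15)]
assigned = [(0, 0, 12, 12), (20, 20, 30, 30)]
print(cover_rate(gt, assigned))  # 0.5
```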
\reftab{analysis} shows max-size loss is more favorable than predicted loss on these two metrics.
However, we highlight that these two metrics alone do not always correlate with the final performance, as the \textbf{image-box} loss is perfect on both metrics but underperforms the max-size loss.
\section{ViLD baseline details}
\lblsec{vild-details}
The baseline in ViLD~\cite{gu2021zero} is very different from detectron2.
They use a MaskRCNN detector~\cite{He_2017_ICCV} with a Res50-FPN backbone, but train the network
from scratch without ImageNet pretraining.
They use large-scale jittering~\cite{ghiasi2021simple} with input resolution $1024\times1024$ and train a $32\times$ schedule.
The optimizer is SGD with batch size $256$ and learning rate $0.32$.
We first reproduce their baselines (both the oracle detector and ViLD-text) under the same setting.
We observe that half of their schedule ($16\times$) is sufficient to closely match their numbers.
The half training schedule takes $4$ days on $4$ nodes (each with 8 V100 GPUs).
We then finetune another $16\times$ schedule using ImageNet data with our max-size loss.
\section{Details of comparison to MosaicOS}
\lblsec{mosaic-details}
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ \ \ }c@{\ \ \ }c@{\ \ \ }}
\toprule
& mAP$^\text{mask}$ & mAP$^\text{mask}_{\text{r}}$ \\
\midrule
Box-Supervised~\cite{zhang2021mosaicos} & 22.6 & 12.3 \\
MosaicOS~\cite{zhang2021mosaicos} & 24.5 \scriptsize \textcolor{green4}{(+1.9)} & 18.3 \scriptsize \textcolor{green4}{(+6.0)} \\
\midrule
Box-Supervised (Reproduced) & 22.6 & 12.3 \\
\OURS{} (default classifier) & 25.1 \scriptsize \textcolor{green4}{(+2.5)} & 18.6 \scriptsize \textcolor{green4}{(+6.3)} \\
\midrule
Box-Supervised (CLIP classifier) & 22.3 & 14.1 \\
\OURS{} (CLIP classifier) & \bf 24.9 \scriptsize \textcolor{green4}{(+2.6)} & \bf 20.7 \scriptsize \textcolor{green4}{(+6.5)} \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{
\textbf{Detailed comparison to MosaicOS~\cite{zhang2021mosaicos}.}
Top block: results quoted from the MosaicOS paper; Middle block: \OURS{} with the default randomly initialized and trained classifier; Bottom block: \OURS{} with the CLIP classifier.}
\lbltab{mosiacos}
\vspace{-5mm}
\end{table}
We compare to MosaicOS~\cite{zhang2021mosaicos} by strictly following their baseline setup.
The detailed hyper-parameters follow the detectron2 baseline as described in ~\cref{sec:lvis-baseline}.
We finetune the Box-Supervised model for an additional $2\times$ schedule with the Adam optimizer.
~\reftab{mosiacos} shows that our re-trained baseline exactly matches the results reported in their paper.
Our method is developed based on the CLIP classifier, and we also report our baseline with CLIP.
The baseline has slightly lower mAP and higher mAP$_{\text{r}}$.
Our relative improvements over the baseline are slightly higher than those of MosaicOS~\cite{zhang2021mosaicos}.
We highlight that our training framework is simpler and that we use less additional training data (Google-searched images).
\section{Improvements breakdown to classes}
\lblsec{class-breakdown}
\begin{table}[t]
\begin{center}
\begin{tabular}{@{}l@{\ \ \ \ \ \ }c@{\ \ \ \ \ \ }c@{\ \ \ \ \ \ }c@{}}
\toprule
& mAP$^\text{mask}$ & mAP$^\text{mask}_{\text{IN-L}}$ & mAP$^\text{mask}_{\text{non-IN-L}}$ \\
\midrule
Box-Supervised & 30.2 & 30.6 & 27.6 \\
Max-size & 32.4 & 33.5 & 28.1\\
\midrule
& mAP$^\text{mask}$ & mAP$^\text{mask}_{\text{CC}}$ & mAP$^\text{mask}_{\text{non-CC}}$ \\
\midrule
Box-Supervised & 30.2 & 30.1 & 29.5 \\
Max-size & 30.9 & 31.7 & 28.6 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{mAP breakdown into classes with and without image labels.}
Top: \OURS{} trained on ImageNet. Bottom: \OURS{} trained on CC. Most of the improvements are from classes with image-level labels. On ImageNet \OURS{} also improves classes without image labels thanks to the CLIP classifier.}
\lbltab{breakdown}
\vspace{-3mm}
\end{table}
\reftab{breakdown} shows mAP breakdown into classes with and without image labels for both the Box-Supervised baseline and \OURS{}. As expected, most of the improvements are from classes with image-level labels. On ImageNet, \OURS{} also improves classes without image labels thanks to the CLIP classifier which leverages inter-class relations.
\section{mAP$^{\text{Fixed}}$ evaluation}
\lblsec{mAp-fixed}
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l@{\ }c@{\ }c@{\ }c@{\ }c@{}c@{}}
\toprule
Datasets & mAP$^\text{box}$ & mAP$^\text{box}_{\text{novel}}$ & mAP$^{\text{Fixed}}$ & mAP$_{\text{novel}}^{\text{Fixed}}$ \\
\cmidrule(r){1-1}
\cmidrule(r){2-3}
\cmidrule(r){4-5}
\baselineDet & 30.2 & 16.4 & 31.2 & 18.2 \\
\OURS{} & 32.4 \scriptsize \textcolor{green4}{(+2.2)} & 24.9 \scriptsize \textcolor{green4}{(+8.5)} & 33.4 \scriptsize \textcolor{green4}{(+2.3)} & 26.7 \scriptsize \textcolor{green4}{(+8.5)} \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\textbf{mAP$^{\text{Fixed}}$ evaluation}. Middle: the original box mAP metric used in the main paper. Right: the new box mAP$^{\text{Fixed}}$ metric. Our improvements are consistent under the new metric.}
\lbltab{mAPfixed}
\vspace{-4mm}
\end{table}
\reftab{mAPfixed} compares our improvements under the new mAP$^{\text{Fixed}}$ metric proposed
in Dave \etal~\cite{dave2021evaluating}. Our improvements are consistent under the new metric.
\section{Image Attributions}
License for the images from OpenImages in ~\reffig{qualitative}:
\small
\begin{itemize}
\item ``Oyster'': Photo by The Local People Photo Archive (CC BY 2.0)
\item ``Cheetah'': Photo by Michael Gil (CC BY 2.0)
\item ``Harbor seal'': Photo by Alden Chadwick (CC BY 2.0)
\item ``Dinosaur'': Photo by Paxson Woelber (CC BY 2.0)
\end{itemize} |
\section{Introduction}
\label{1.Intro}
Multi-stage Manufacturing Systems (MMS) play an important role in the manufacturing industry.
In the manufacturing sector, automation, interconnection and electrification are increasingly adopted for better efficiency and cost-reduction. With the increase in software-based networking, monitoring, and control of manufacturing assets across networks, the risk of cyber-attacks also grows. Cybersecurity challenges in the manufacturing sector are serious and have significant impacts on the competitiveness of the national economy and defense. More than 400 manufacturers were targeted every day in 2016 and almost half of them were US manufacturers, resulting in more than \$3 billion in losses \cite{mahoney2017cybersecurity}. The variety and scale of cyber-sabotage on manufacturing systems have been growing in recent years and have received increasing attention due to their catastrophic consequences for the economy, people and the environment. Manufacturing systems are subject to a wide range of passive and active cyber threats, ranging from data and intellectual property (IP) theft to cyber-sabotage that alters the quality of the product or destroys manufacturing machines, reducing manufacturing productivity and increasing costs \cite{mahoney2017cybersecurity, tuptuk2018security, langner2011stuxnet}.
Most modern manufacturing systems are Multistage Manufacturing Systems (MMS) \cite{shi2006stream}, which consist of multiple components, stations, or stages to finish the final product. An MMS is an engineered system, with specifically designated control sequences and features to be fabricated. Thus, the cyber and physical signals have deterministic sequences and patterns in normal situations. Also, the cyber signals and physical signals are interrelated in a manufacturing system, which gives opportunities for data fusion to jointly analyze cyber and physical signals together. The quality of the final product is determined by complex interactions among multiple stages - the quality characteristics at one stage are not only influenced by local variations at that stage, but also by variations propagated from upstream stages. Thus, a cyber-threat may intrude into a controller/machine at one stage and impact other stages through the cyber or physical networks, which makes cyber-threat detection and diagnosis in MMS complex, challenging and interesting.
In this work, we seek to conduct a case study on the usefulness of cyber-physical information flow analysis in performing intrusion diagnosis in an MMS. Information flows are intrinsic properties of an MMS. In computer security, a basic information flow tracking technique is dynamic taint analysis (DTA). DTA tracks taint propagation from one data variable (e.g., a buffer holding an HTTP request) to another. Taint propagation paths are typically determined by data flows and implicit flows in a computer program, and the union of all the taint propagation paths forms a taint graph. It is clear that effective intrusion monitoring can be done via information flow tracking, and that taint graphs could significantly enhance intrusion diagnosis. However, the existing DTA techniques cannot be directly used in an MMS, and a main reason is as follows: without manufacturing-specific taint propagation rules, DTA cannot be implemented. For example, when a motor drive receives a distorted control signal from a compromised PLC, although the signal is a variable of the control logic program, the result (of the motor motion) is no longer a variable in any program. Restricted to cyberspace, no existing taint propagation rule can be used to propagate the taint onto the motion result.
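As a toy illustration of classic cyberspace DTA (not our tool; variable names are made up), taint propagates along assignments according to the rule "the destination inherits the taints of its sources", and the recorded propagation edges form the taint graph:

```python
# Minimal taint-propagation sketch. Real DTA instruments binaries; this
# toy tracker only mimics the core assignment rule and records the edges
# of the resulting taint graph.
class TaintTracker:
    def __init__(self):
        self.taints = {}   # variable -> set of taint labels
        self.edges = []    # (src, dst) taint-graph edges

    def source(self, var, label):
        """Mark a variable as a taint source (e.g., network input)."""
        self.taints[var] = {label}

    def assign(self, dst, *srcs):
        """dst = f(srcs): dst inherits the union of the sources' taints."""
        t = set()
        for s in srcs:
            if self.taints.get(s):
                t |= self.taints[s]
                self.edges.append((s, dst))
        self.taints[dst] = t

t = TaintTracker()
t.source("http_request", "untrusted")
t.assign("buf", "http_request")     # buf = parse(http_request)
t.assign("cmd", "buf", "config")    # cmd = combine(buf, config)
print(t.taints["cmd"])  # {'untrusted'}
```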
\begin{figure*}[htbp]
\centering
\centerline{\includegraphics[width=0.8\textwidth]{Framework.pdf}}
\caption{Workflow Overview}
\label{emu}
\end{figure*}
In this paper, a case study is conducted to test the effectiveness of a set of manufacturing-specific taint propagation rules proposed by us. In particular, the case study consists of the following activities and findings:
\begin{itemize}
\item We have developed a preliminary generic cyber-physical information flow model for an MMS. The model identifies several types of manufacturing-specific security-related information flows. The model also identifies a set of manufacturing-specific taint propagation rules.
\item A scaled-down testbed has been built to collect MMS-specific event data.
\item We have used the specific event data collected from the specific testbed to customize the generic cyber-physical information flow model. The customized model can be used to explain the information flow semantics reflected in the specific event data.
\item Using the customized set of manufacturing-specific taint propagation rules, we have developed a software tool to construct a taint graph directly based on the event data collected from the testbed.
\item Through answering several specific intrusion diagnosis questions, including (a) ``if control signal X is compromised, would Y be part of the damage?'' and (b) ``if damage Y is observed, would X be a causing factor?'', we evaluate the soundness and effectiveness of the proposed cyber-physical information flow model for an MMS and the manufacturing-specific taint propagation rules.
\end{itemize}
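In graph terms, questions (a) and (b) above are forward and backward reachability queries over the taint graph. A minimal sketch (the node and edge names below are hypothetical, not from the testbed):

```python
from collections import defaultdict, deque

def reachable(edges, start, reverse=False):
    """BFS reachability on a directed taint graph.
    Forward (reverse=False) answers "what can X damage?";
    backward (reverse=True) answers "what could have caused Y?"."""
    adj = defaultdict(list)
    for s, d in edges:
        adj[d if reverse else s].append(s if reverse else d)
    seen, q = {start}, deque([start])
    while q:
        n = q.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m); q.append(m)
    return seen

# hypothetical cyber-physical taint graph
edges = [("plc1.speed_cmd", "drive3.motor_speed"),
         ("drive3.motor_speed", "stage3.bar_tension"),
         ("stage3.bar_tension", "stage4.surface_quality")]

# (a) if plc1.speed_cmd is compromised, is stage4.surface_quality damaged?
print("stage4.surface_quality" in reachable(edges, "plc1.speed_cmd"))        # True
# (b) if damage is seen at stage4, is plc1.speed_cmd a candidate cause?
print("plc1.speed_cmd" in reachable(edges, "stage4.surface_quality", True))  # True
```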
To the best of our knowledge, this case study is the first one doing Cyber-Physical Taint Analysis (for intrusion diagnosis) in Multi-stage Manufacturing Systems.
The remainder of this paper is organized as follows. Section \ref{Back} provides a preliminary introduction to Multistage Manufacturing Systems, the threat model, and the existing works. Section \ref{Over} brings out both the challenges and the insights in extending the existing DTA method to a Multi-stage Manufacturing System (MMS). Following the workflow shown in Figure \ref{emu}, Section \ref{UGA} introduces the ``Mini-S\&A-Testbed'' used in this work and how the data is collected. Our proposed information flow model is demonstrated in Section \ref{Infor}. Section \ref{Taint} illustrates how the taint graph is generated based on both the information flow model and the collected data set. Section \ref{Case} provides two representative case studies for cause analysis. Section \ref{Con} concludes this work.
\section{Background}
\label{Back}
\subsection{Preliminary Introduction of Multistage Manufacturing System}
Multistage Manufacturing System (MMS) refers to a manufacturing system consisting of multiple components, stations, or stages required to finish the final product. Almost all modern manufacturing processes (e.g., assembly, machining, semiconductor fabrication, pharmaceutical manufacturing) fit this category. In this paper, a multi-stage hot rolling mill automation system testbed will be used for case studies and experiments. It consists of digital motor drives, Programmable Logic Controllers (PLC), control desks equipped with touchscreen LCD operator interfaces, remote I/O units, digital communication infrastructure and SCADA computer(s). All components of the system are connected to a digital communications network, to provide fast and safe data exchanges. Motor drives residing on this network are under the control of PLCs for a safe start and rolling operation. A multistage hot rolling process often has 10 to 30 rolling stations, and each rolling station has multiple rolls driven by motors and gear boxes. All the rolls (motors) need to be controlled at synchronized and cascaded speeds, which are designed to match the cross-section reduction ratio of the rolled bars. If a security attack alters one (or several) roll (motor) speeds, it may cause bar surface quality defects, stretching/compression of the rolled bar before/after the speed-altered station, and even more severe incidents such as broken bars or cobbles in the rolling process, inducing significant production system damage or failure.
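For intuition on the cascaded speed constraint, the sketch below assumes idealized constant volume flow, so that roll speed scales inversely with the bar's cross-section area; the reduction ratios and entry speed are made-up numbers, not testbed parameters:

```python
# Successive cross-section reduction ratios per rolling station.
reduction = [0.80, 0.75, 0.85]   # area_out / area_in at each station
v0 = 1.0                         # entry speed (m/s), illustrative

speeds, v = [v0], v0
for r in reduction:
    v = v / r                    # constant volume flow: v * A = const
    speeds.append(round(v, 3))
print(speeds)  # [1.0, 1.25, 1.667, 1.961]
```

An attack that alters a single station's speed breaks this chain, stretching or compressing the bar between neighboring stations, which is exactly the damage mode described above.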
\subsection{Threat Model}
In an MMS, each manufacturing machine is controlled by a Programmable Logic Controller (PLC), and those machines are interconnected in both cyber and power networks for coordinated/planned manufacturing processes. Cyber-threats may compromise the integrity of machine controllers and manufacturing machines through the communication networks or other means such as a Trojan or an insider attack. The integrity attacks are stealthy and difficult to detect in cyber-space, but may be observed in the physical world or in physical signals.
In our threat model, integrity attacks may be launched from Stuxnet-like malware (e.g. \cite{langner2011stuxnet}) or malicious insiders. Malware can compromise PLC programs. In addition, we assume that insiders can compromise PLC source code to intentionally result in integrity violations (e.g., PLC logic bombs \cite{govil2017ladder}). It is found in survey studies \cite{ginter2017top, perelman2016top} that insider attacks are top security challenges for air-gapped ICS (Industry Control Systems). As a result, (a) PLC source code and configurations may not be trustworthy; (b) the SCADA computer may not be trustworthy either. In our threat model, integrity attacks may also be launched from a manufacturing communication network. For example, the widely-known Maroochy Water Services Attack \cite{abrams2008malicious} injects bogus radio commands (i.e. messages) to the sewage equipment, causing 800,000 liters of raw sewage to spill out into local parks, rivers and even the grounds of a hotel. This example indicates that network intrusions should be a main part of our threat model. Regarding what are trusted, we assume that the rest of the manufacturing system, including the digital motor drives, the remote I/O units, the DSP components, the RTOS (real-time operating systems), as well as our data collection mechanisms (e.g., the sensors) are trusted.
The integrity attacks, in general, may bypass the traditional cyber-security measures but could be detected by monitoring physical-system signals, such as electric waveforms and product quality signals in an MMS. To further trace down the root causes, the system needs to audit cyber signals and perform taint analysis. The root causes could include intrusion attacks, malware, trojans, insiders, or hardware faults (the last of which is not a cyber-threat).
\subsection{State of the Art}
The existing cyber-security research \cite{wu2017taxonomy, wu2018cybersecurity, elhabashy2019cyber} mostly relies on cyber signals; integrating physical signals for cyber-threat detection is still in its infancy. Methods were developed to detect cyber threats by monitoring and analyzing the structural health of the finished parts \cite{vincent2015trojan}, multiple physical signals of the manufacturing process (such as acoustic signals, vibration, magnetic intensity, coal feed) \cite{wang2018sensor, gao2018watching}, and vision and acoustic signals \cite{wu2019detecting}. Traditionally, quality controls are used in MMS to reduce product variability and detect equipment aging or failure, with the goal of ensuring the quality of manufacturing processes. In recent years, efforts have been made to use quality control methods to address cyber security issues in manufacturing systems \cite{wells2014cyber, pan2017taxonomies, elhabashy2019cyber, zeltmann2016manufacturing}. While these works provide crucial insights into cyber-physical security in manufacturing, they are limited to analyzing the effect of a cyber threat on a single existing quality control tool, such as a control chart. As MMSs are increasingly vulnerable to cyber threats, those existing quality controls are not effective at detecting the malicious cyberattacks that cause machine failure and quality degradation \cite{elhabashy2019cyber, elhabashy2019quality}. There lacks a generalized methodology to detect and diagnose cyber and physical threats in manufacturing systems.
The cybersecurity of Industrial Control Systems (ICS), including PLC security, has been drawing increasing attention in the research community (e.g., \cite{garcia2017hey, feng2019systematic, zhang2019towards}). The existing works can be classified into six bodies.
\begin{itemize}
\item Novel cyber-attacks: In \cite{garcia2017hey}, a Rootkit attack is proposed to compromise PLCs. In \cite{formby2017out}, ransomware for ICS are proposed. In \cite{urbina2016limiting}, stealthy attacks (e.g., gradually decreasing the system integrity) on ICS are proposed. In \cite{soltan2018blackiot}, three classes of new cyber-attacks are proposed to disrupt the power grid with an IoT botnet.
\item Intrusion Detection: In \cite{formby2019temporal}, temporal execution behavior is analyzed to detect intrusions in PLCs. In \cite{cheng2017orpheus}, event-driven finite state machines are leveraged to detect data-oriented attacks on cyber-physical systems. In \cite{feng2019systematic}, a framework is proposed to generate invariants for anomaly detection in ICS. In \cite{aoudi2018truth}, a mechanism is proposed to detect stealthy attacks on ICS.
\item Statically verifying PLC logic: In \cite{aiken1998detecting, biallas2012arcade, nellen2014cegar, biha2011formal, park2000formal}, PLC logic is statically verified in a formal manner.
\item Detection of PLC safety violations: In \cite{janicke2015runtime, park2008plcstudio}, dynamic simulations of runtime behaviors are leveraged to detect PLC safety violations. In \cite{guo2017symbolic, mclaughlin2014trusted}, symbolic execution on PLC code is leveraged to detect PLC safety violations.
\item Safety vetting of PLC code: In \cite{zhang2019towards}, a new program-analysis-based approach is proposed to generate event sequences that can be used to automatically discover hidden safety violations.
\item Reverse engineering: In \cite{keliris2018icsref}, an ICS reverse engineering framework is proposed to reverse engineer PLC binaries.
\end{itemize}
As manufacturing machines and equipment are connected through power networks, cyber-threats will cause electrical signal/waveform changes (which might include energy consumption, voltage, current, harmonics) in power networks. A method was developed to detect data integrity attacks modifying the G-code movement commands of 3D printing systems by monitoring the current supplied to each electric machine \cite{moore2017power}; however, it is infeasible to monitor the current of each machine in large-scale manufacturing systems. In smart grids, cyber security studies used waveform data from phasor measurement units (PMU), microPMUs, and smart meters \cite{tan2016survey, amini2015detecting, zhou2017partial, lu2018coupled, tian2018data, xun2018detectors}. While smart grid security studies provide the necessary technical foundation, they mainly address the cyber threats affecting the functionality, stability, and cost of large-scale power networks rather than the function and precision of the devices and equipment, which are generally of concern in manufacturing systems. There are no existing works on analyzing electrical waveforms to detect threats in MMS. Researchers have obtained promising results from their preliminary works \cite{li2019detection, yang2019vulnerability} on the cyber-security of electrical machines in power networks.
\section{Challenges \& insights }
\label{Over}
Information flows are intrinsic properties of a Multistage Manufacturing System (MMS). In computer security, a basic information flow tracking technique is \textit{dynamic taint analysis (DTA)} \cite{newsome2005dynamic}. DTA tracks taint propagation from one data variable (e.g., a buffer holding an HTTP request) to another. Taint propagation paths are typically determined by data flows and implicit flows in a computer program, and the union of all the taint propagation paths forms a \textit{taint graph}. It is clear that effective intrusion monitoring can be done via information flow tracking, and that the taint graph could significantly enhance intrusion diagnosis. However, due to the following three gaps, the existing DTA techniques cannot be directly used in an MMS. (1) Without manufacturing-specific taint propagation rules, DTA cannot be implemented. Let's consider this MMS example: when a motor drive receives a distorted control signal from a compromised PLC, although the signal is a variable of the control logic program, the result (of the motor motion) is no longer a variable in any program. Restricted to cyberspace, no existing taint propagation rule can be used to propagate the taint onto the result. Hence, an MMS requires new rules to propagate taint from cyberspace to the physical world, and vice versa. (2) Since computer programs do not automatically track taint propagation, (dynamic) binary code instrumentation is widely used to let a program run extra taint tracking instructions. However, this often introduces substantial run-time overhead. Accordingly, due to strict real-time constraints, control logic programs running on PLCs are not suitable for (dynamic) binary code instrumentation. (3) Systematic DTA of an MMS has not yet been conducted in the literature.
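To make gap (1) concrete, the sketch below shows what a cyber-to-physical propagation rule could look like in principle. The actuation map, node names, and taint label are purely hypothetical illustrations, not the rules proposed in this work:

```python
# Hypothetical manufacturing-specific propagation rule: when a tainted
# PLC output register is written to a fieldbus command, taint crosses
# from the cyber node to the physical node it actuates. The mapping of
# registers to physical effects would come from the system design.
ACTUATION_MAP = {"plc.out_reg_7": "drive2.motor_motion"}  # illustrative

def propagate_cyber_to_physical(taints, edges, written_reg):
    """Apply the rule when `written_reg` is sent onto the fieldbus."""
    phys = ACTUATION_MAP.get(written_reg)
    if phys and taints.get(written_reg):
        taints[phys] = set(taints[written_reg])
        edges.append((written_reg, phys))

taints = {"plc.out_reg_7": {"network_intrusion"}}
edges = []
propagate_cyber_to_physical(taints, edges, "plc.out_reg_7")
print(taints["drive2.motor_motion"])  # {'network_intrusion'}
```

The physical node `drive2.motor_motion` is not a program variable, which is precisely why an ordinary cyberspace-only DTA rule could never have created this edge.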
\begin{figure*}[!t]
\centering
\centerline{\includegraphics[width=\textwidth]{testbed.jpg}}
\caption{Control Diagram of the Mini-S\&A-Testbed}
\label{test}
\end{figure*}
There are four main intrusion diagnosis questions for an MMS to be answered: (Q1) What steps have occurred in the intrusion? (Q2) Were any other components also compromised? (Q3) What was the entry point used to gain access to the MMS? (Q4) What is the root cause? Is it due to a cyber-attack or a physical fault? In an MMS, a product being manufactured passes through a linear sequence of mechanical operations: these operations form a ``physical production path''. Since the physical path is the outcome of a rigorous engineering design process, the designer is well aware of the intended information flows along the path. However, these information flows are not modeled by existing DTA tools. In an MMS, manufacturing-specific tailoring of existing forward and backward taint tracking algorithms is required.
This brings up significant new challenges: (a) On the one hand, the production path/line design literature does not provide an explicit physical-world information flow model; on the other hand, the modeling methodology of DTA is restricted to cyberspace. (b) Even if the cyberspace taint graph were expanded to hold the information flows along the production path, the existing DTA methodology still could not separate physical faults from cyber-threats.
The proposed model is based on two insights. (1) Some essential information flows associated with the production path are already being monitored by the existing sensing components of an MMS. (2) Despite being manufacturing-specific, existing physical fault models can be leveraged to enable an information flow model to separate physical faults from cyber-threats.
\section{Mini-S\&A-Testbed \& Data Collection}
\label{UGA}
\begin{figure*}[!t]
\centering
\centerline{\includegraphics[width=\textwidth]{flowchart.jpg}}
\caption{Control Information Flow of the Field Oriented Control for PMSM}
\label{CPS}
\end{figure*}
To test and validate the taint analysis at the device level, a real-world motor drive testbed was constructed to emulate the behaviors of industrial machines. This testbed, called the ``Mini-S\&A-Testbed'', consists of a permanent magnet synchronous machine (PMSM), a three-phase inverter, and an ARM-based digital control unit. Table~\ref{tab:specification} shows the detailed specifications of the testbed and Fig.~\ref{test} shows the control diagram of the motor drive.
As shown in Fig.~\ref{test}, the control unit adopts field oriented control algorithms to regulate the rotating speed of the PMSM according to the requirement from the PC. The PC and the control unit communicate through the NXP FreeMASTER interface and the Low Power UART (LPUART) module. The control algorithm has a two-level feedback control loop: an outer control loop and an inner control loop. The outer loop has a speed regulator associated with the field weakening module to control the motor rotating speed and the air gap flux. The outer loop generates the current references and sends them to the inner loop. The inner loop has two proportional-integral controllers for controlling the d- and q-axis currents, respectively. The outputs of the inner control loop are the d- and q-axis voltage commands. The Inverse Park Transformation then transforms these commands into the stationary reference frame, and the PWM modulation module converts them into PWM signals, which directly control the six power switches in the inverter. Fig.~\ref{CPS} shows a simplified diagram of the control information flow and Table~\ref{tab:variable} lists detailed descriptions of all the variables in the control algorithms.
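The two-loop structure described above can be sketched in a few lines of Python. This is a highly simplified, hypothetical sketch: the controller gains, the sample values, and holding the d-axis current reference at zero are illustrative assumptions, not the testbed firmware's actual parameters.

```python
import math

def pi_step(error, integral, kp, ki, dt):
    """One update of a proportional-integral controller."""
    integral += error * dt
    return kp * error + ki * integral, integral

def inverse_park(us_d, us_q, theta):
    """Rotate d-/q-axis voltage commands into the stationary alpha/beta frame."""
    c, s = math.cos(theta), math.sin(theta)
    return us_d * c - us_q * s, us_d * s + us_q * c

dt = 1e-3                                    # one control step per millisecond
speed_req, omega_actual_mech = 1000.0, 950.0

# Outer loop: speed error -> q-axis current reference (d-axis reference held at 0).
is_q_req, _ = pi_step(speed_req - omega_actual_mech, 0.0, kp=0.01, ki=0.1, dt=dt)

# Inner loop: d-/q-axis current errors -> d-/q-axis voltage commands.
is_d, is_q = 0.0, 0.5
us_d, _ = pi_step(0.0 - is_d, 0.0, kp=2.0, ki=50.0, dt=dt)
us_q, _ = pi_step(is_q_req - is_q, 0.0, kp=2.0, ki=50.0, dt=dt)

# Inverse Park transformation; the PWM stage would turn these into duty cycles.
us_alpha, us_beta = inverse_park(us_d, us_q, theta=0.0)
```

In the real firmware the alpha/beta commands are further compensated and fed to the PWM modulation module, as shown in Fig.~\ref{CPS}.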
\begin{table}[t!]
\centering
\caption{Mini-S\&A-Testbed Specifications}
\begin{tabular}{c|c}
\hline
Control Unit & NXP S32K144 (Arm Cortex-M4F) \\
\hline
Power Module & SMARTMOS GD3000 3-phase motor driver \\
\hline
Motor & LINIX 45ZWN24-40 \\
\hline
Motor Ratings & 24V, 40W, 4000rpm, 2.3A, 2 pole pairs \\
\hline
Power Supply & PS-1250APL05/S3: 12 V, 5 A \\
\hline
\multirow{4}{*}{Interfaces} & On-board for CAN \\
& On-board for LIN \\
& On-board OpenSDA debug interface \\
& SWD/JTAG debug interface \\
\hline
\end{tabular}
\label{tab:specification}
\end{table}
In addition, the testbed has a data collection module. This module uses a National Instruments c-DAQ compact data acquisition device to collect and pre-process the physical measurements from the pre-deployed sensors.
\begin{table}[t!]
\centering
\caption{Control Variable Detail Descriptions}
\begin{tabular}{c|c}
\hline
speed$\_$req & motor rotating speed command \\
omega$\_$actual$\_$mech & feedback motor rotating speed \\
is$\_$d$\_$req & d-axis current reference \\
is$\_$q$\_$req & q-axis current reference \\
is$\_$q$\_$lim & q-axis current limitation \\
is$\_$d & feedback d-axis current \\
is$\_$q & feedback q-axis current \\
is$\_$a & feedback motor phase-A current \\
is$\_$b & feedback motor phase-B current \\
is$\_$c & feedback motor phase-C current \\
is$\_$alpha & feedback $\alpha$-axis current \\
is$\_$beta & feedback $\beta$-axis current \\
us$\_$d & d-axis voltage command \\
us$\_$q & q-axis voltage command \\
us$\_$alpha & uncompensated $\alpha$-axis voltage command\\
us$\_$beta & uncompensated $\beta$-axis voltage command \\
us$\_$alpha$\_$comp & compensated $\alpha$-axis voltage command \\
us$\_$beta$\_$comp & compensated $\beta$-axis voltage command \\
u$\_$dc & feedback DC bus voltage \\
theta & feedback motor rotor position \\
theta$\_$enc & rotor position signal from encoder \\
\hline
\end{tabular}
\label{tab:variable}
\end{table}
\section{Information Flow Model}
\label{Infor}
\begin{figure*}[!htbp]
\centering
\centerline{\includegraphics[width=0.7\textwidth]{inforflowloops.png}}
\caption{Cyber-Physical Info Flow Loops}
\label{infor}
\end{figure*}
This section introduces our proposed information flow model and how it can be applied to the mini-testbed. There are three parts:
(1) the Cyber-Physical Information Flow Loops;
(2) the mapping relationship between the real testbed (Figure \ref{test}) and its control information flow (Figure \ref{CPS});
(3) explanations of how specific components of Figure \ref{CPS} correspond to the five new DTA notions in our information flow model (Figure \ref{infor}).
\subsection{Cyber-Physical Information Flow Loops}
As shown in Figure \ref{infor}, from the viewpoint of the MMS sensing components, the proposed information flow model is anchored by five new DTA notions: events, attributes, defect patterns, (physical) signals, and analysis results. In particular, (a) when a harmful \textit{event} (e.g., the motion of a motor) is tainted, all of the event's \textit{attributes} should be tainted. An event is recorded at every millisecond in our current data set. (b) Since one attribute of the event is usually a particular \textit{defect pattern} associated with the product, the defect pattern should be tainted. (c) When the defect pattern is being sensed, the \textit{physical signals} (e.g., quality and electrical signals) should be tainted. (d) All the \textit{analysis results} of any tainted signals should be tainted; for example, a previous event at an upstream motor may be identified as a main cause of the defect. (e) As soon as the previous event is tainted, the above-mentioned rules are reused to further propagate the taint.
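Rules (a)-(e) can be expressed as a small recursive procedure. The following Python sketch is our own illustration of the rules; the class and field names are hypothetical and not part of an existing DTA library.

```python
class Event:
    """One recorded event with its attributes (including a defect pattern),
    its sensed physical signals, and an optional upstream causing event
    identified by analysis (rules (d)-(e))."""
    def __init__(self, name, attributes, signals, upstream=None):
        self.name = name
        self.attributes = attributes
        self.signals = signals
        self.upstream = upstream
        self.tainted = False

def propagate(event, tainted_set):
    """Apply rules (a)-(e): taint the event, its attributes (with the defect
    pattern), its physical signals, and recurse into the upstream event."""
    if event.name in tainted_set:
        return
    event.tainted = True
    tainted_set.add(event.name)             # rule (a): taint the event
    tainted_set.update(event.attributes)    # rules (a)-(b): attributes, defect pattern
    tainted_set.update(event.signals)       # rule (c): sensed physical signals
    if event.upstream is not None:          # rules (d)-(e): analysis result, recurse
        propagate(event.upstream, tainted_set)

up = Event("upstream_motor_motion", ["defect_pattern_0"], ["current_A"])
ev = Event("motor_motion", ["us_d", "defect_pattern_1"], ["is_a", "omega"],
           upstream=up)
tainted = set()
propagate(ev, tainted)
```

The `tainted` set then contains the event, its attributes and signals, and everything reachable through the upstream chain.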
\subsection{Mapping Relationship}
To better understand the information flow embedded in the testbed, we use the following mapping relationship as a bridge between the testbed and the information flow. As shown in Figure \ref{test}, there are three main components in the current testbed setup: a PC running FreeMASTER, an S32K144 ARM board, and a motor. The PC communicates with the ARM board through LPUART (Figure \ref{test}), which corresponds to the PC providing the speed reference to the Speed Controller on the ARM board (Figure \ref{CPS}). The ARM board, as the cyberspace, sends the PWM control signals to the motor through the FTM(PWM) module (Figure \ref{test}), which corresponds to the current controller sending PWM control signals to the winding voltage in the motor drive (Figure \ref{CPS}). The motor, as the physical world, sends the current signals and the speed signal back to the ARM board through the ADC module and the FTM(QUAD) module, respectively, which can be mapped to the winding current sending current signals back to the current controller and the rotor speed sending the speed signal back to the speed controller, correspondingly. In this way, we can explicitly trace how the information flows through the testbed.
\subsection{Control Information Flow}
Next, we demonstrate how the control information flow is depicted with our proposed information flow model. Since our current focus is on how to use the collected data to generate the taint graph, the attack scenario uses a PC running FreeMASTER to represent an infected PLC program sending a malicious command to the ARM board. Under this assumption, we explain what the five new DTA notions in our information flow model stand for in the control information flow. Specifically,
\begin{itemize}
\item (Figure \ref{infor}) When a suspicious harmful event occurs (e.g., an unexpected speed change of a motor), which corresponds to the change of rotor speed in Figure \ref{CPS}, the event is tainted first.
\item Usually an event has intrinsic attributes, such as intermediate parameters in the control testbed (e.g., us\_d and us\_q in Figure \ref{test}) as well as a particular defect pattern associated with the product; the defect pattern should also be tainted.
\item When the defect pattern is being sensed, the associated physical signals (e.g., quality and electrical signals) should be tainted, which corresponds to the current signals and the speed signal in Figure \ref{CPS}.
\item All the analysis results of the tainted signals should be tainted. For example, the statistical analysis results could reveal that a previous event at an upstream motor is a main cause of the defect.
\item (Figure \ref{infor}) Once a sensor is tainted (e.g., the current sensor and encoder in Figure \ref{CPS}), the taint is propagated to the next event as part of its inputs.
\end{itemize}
At this point, we have shown how the Cyber-Physical information flow loops express the information flow in the mini-testbed. Next, we dive into the details of how to generate the taint graph.
\begin{figure*}[!htbp]
\centering
\centerline{\includegraphics[width=0.8\textwidth]{Sample.png}}
\caption{Sample Taint Graph}
\label{Sample}
\end{figure*}
\section{Taint Graph Generation}
\label{Taint}
The taint graph is generated based on the testbed described in Section~\ref{UGA}. There are three parts in this section:
(1) a description of the collected dataset;
(2) an introduction to the taint graph;
(3) the design and implementation of our software tool that generates the taint graph based on the model and the manufacturing-specific taint propagation rules.
\subsection{Collected Dataset}
As mentioned in Section \ref{UGA}, a set of physical measurements has been collected from the pre-deployed sensors on the testbed. This dataset is used to construct our taint graph in alignment with the Cyber-Physical information flow loops.
The dataset is recorded in 99 time cycles and every cycle has 64 milliseconds. In every millisecond, 8 variables are collected: is\_a, is\_b, is\_c, theta, omega\_actual\_mech, us\_d, us\_q, and m\_PWM. Detailed descriptions of the first seven variables can be found in Table \ref{tab:variable}; the last one, m\_PWM, represents the modulation index for PWM control. The modulation index is used to generate the switching signals (s1-s6 in the diagram), but the index is not shown in Figure \ref{test}.
\subsection{Taint Graph Concept}
At this point, we have shown how the information flow model expresses the information flow of the testbed. As shown in Figure \ref{Sample}, the taint graph explicitly shows the information flow embedded in the testbed.
The whole taint graph is generated following the time sequence of the collected data. At every specific millisecond (marked as one event), there is one sub-graph for the information flow of the currently running testbed. Every sub-graph has taint source nodes, taint propagation edges, and taint sink nodes. The following describes how the whole taint graph is constructed based on the proposed Cyber-Physical information flow model and the acquired dataset of the testbed.
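A single per-millisecond sub-graph can be sketched as an edge list over the variable IDs of Table \ref{input} (0: speed reference, 1: ARM board, 2-3: us\_d/us\_q, 4: m\_PWM, 5-9: sensor data). The edges from the control signal node to the sensor data nodes are our assumption about the rows elided ("...") in that table.

```python
def build_subgraph():
    """Edge list of one per-millisecond sub-graph, by variable ID:
    0 speed reference, 1 ARM board, 2-3 attribute nodes (us_d, us_q),
    4 control signal (m_PWM), 5-9 sensor data."""
    edges = [(0, 1)]                          # taint source -> ARM board
    edges += [(1, 2), (1, 3)]                 # ARM board -> attribute nodes
    edges += [(2, 4), (3, 4)]                 # attribute nodes -> control signal
    edges += [(4, v) for v in range(5, 10)]   # control signal -> sensor data (assumed)
    edges += [(v, 1) for v in range(5, 10)]   # sensor data back to ARM board
    return edges

subgraph = build_subgraph()
```

At each millisecond the same edge skeleton is instantiated with that event's recorded variable values.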
\begin{table*}[t!]
\centering
\caption{Dataset Snippet}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
Seconds & us\_d(2) & us\_q(3) & m\_PWM(4) & is\_a(5) & is\_b(6) & is\_c(7) & theta(8) & omega\_actual\_mech(9) \\
\hline
0 & 0.0591724 & -3.25714 & 0.490139 & 0.0040703 & 0.0557747 & -0.059845 & 63.3087 & 1023.32 \\
\hline
0.001 & 0.00780477 & -3.29621 & 0.411041 & -0.209606 & 0.162613 & 0.0469933 & 178.248 & 1001.01 \\
\hline
0.002 & -0.0758674 & -3.27923 & 0.341721 & -0.316444 & 0.299976 & 0.016468 & -32.333 & 961.48 \\
\hline
0.003 & -0.0818229 & -3.21599 & 0.290157 & -0.163818 & 0.406815 & -0.242996 & -159.07 & 954.77 \\
\hline
0.004 & 0.011224 & -3.18299 & 0.273483 & -0.011192 & 0.345764 & -0.334572 & -84.919 & 996.79 \\
\hline
... & ... & ... & ... & ... & ...& ... & ... & ...
\end{tabular}
\label{dataset}
\end{table*}
\begin{table*}[t!]
\centering
\caption{Graph Information at 0.000 s}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
Source Node(S) & Destination Node(D) & Value of S & Value of D & Taint Label(S) & Taint Label(D) & Node Type(S) & Node Type(D) \\
\hline
0 & 1 & N/A & 1.0 & 1 & 0 & ts(taint source) & arm \\
\hline
1 & 2 & 1.0 & 0.0591724 & 0 & 0 & arm & an(attribute node) \\
\hline
1 & 3 & 1.0 & -3.25714 & 0 & 0 & arm & an \\
\hline
2 & 4 & 0.0591724 & 0.490139 & 0 & 0 & an & cs(control signal) \\
\hline
3 & 4 & -3.25714 & 0.490139 & 0 & 0 & an & cs \\
\hline
... & ... & ... & ... & ... & ...& ... & ...\\
\hline
5 & 1 & 0.0040703 & 1.0 & 0 & 0 & sd(sensor data) & arm\\
\hline
6 & 1 & 0.0557747 & 1.0 & 0 & 0 & sd & arm\\
\hline
... & ... & ... & ... & ... & ...& ... & ...\\
\hline
9 & 1 & 1023.32 & 1.0 & 0 & 0 & sd & arm\\
\end{tabular}
\label{input}
\end{table*}
\subsubsection{Taint Source Selection}
In existing DTA techniques \cite{newsome2005dynamic}, a taint source is the point where untrusted data or a malicious command is introduced. In our information flow model, the received malicious command can modify different parameters to launch different attacks. For example, in Figure \ref{test}, if the speed reference is modified, which will cause a series of changes in the control loop, this speed reference is marked as the taint source in that attack scenario. If the current reference is modified, the control loop will have the corresponding changes, and the current reference is marked as the taint source in that attack scenario. In short, different taint source selections correspond to different attack scenarios and generate different taint graphs.
\subsubsection{Taint Sink Node}
In the sub-graph for every specific millisecond, there are taint sink nodes where the information flows to. In our proposed information flow model, the sensor data (e.g., from the current sensor and speed sensor in Figure \ref{CPS}) are marked as taint sink nodes.
\subsubsection{Other Node}
Besides the taint source nodes and taint sink nodes, there are two more types of nodes: one is the PWM control signal \textit{m\_pwm}, which is sent from the ARM board to the motor drive; the other is the intermediate parameters in the control loop (e.g., us\_d, us\_q), named attribute nodes, which are also regarded as attributes of one specific event.
\subsubsection{Hierarchy Node System}
There is a hierarchical node system for the whole taint graph, for the convenience of retrieving the values of specific nodes at a specific millisecond: a) the data is collected in units of cycles as the testbed runs; b) every cycle has sixty-four milliseconds, and each millisecond is considered one event; c) in addition, each node in the sub-graph has its variable ID. In a word, each node in the whole taint graph has a unique Node ID consisting of the cycle ID (prefix), the event ID (time stamp), and the variable ID (suffix). For example, in cycle 66, at the $35^{th}$ millisecond, the node \textit{m\_pwm} has Node ID 66-35-04.
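The Node ID scheme can be sketched as a pair of small helpers. Zero-padding the variable ID to two digits is inferred from the ``66-35-04'' example and may differ from the actual tool.

```python
def node_id(cycle, event, variable):
    """Build a hierarchical Node ID: cycle (prefix), event (time stamp),
    variable (suffix), e.g. cycle 66, event 35, variable 4 -> "66-35-04"."""
    return f"{cycle}-{event}-{variable:02d}"

def parse_node_id(nid):
    """Recover the (cycle, event, variable) triple from a Node ID string."""
    cycle, event, variable = nid.split("-")
    return int(cycle), int(event), int(variable)
```

This makes lookups straightforward: given any observation time, the corresponding node values can be retrieved by prefix.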
\begin{figure*}[htbp]
\centering
\centerline{\includegraphics[width=0.8\textwidth]{taintgraph.pdf}}
\caption{Two Sub-graphs with connection}
\label{taintgraph}
\end{figure*}
\subsection{Design and Implementation of the Software Tool}
To construct the taint graph, we developed a software tool written in Python. There are three stages to generate the taint graph: 1) reorganize the dataset and assign variable IDs to the collected 8 variables; 2) prepare a file for the event at millisecond 0, including the connection information between the variables and both the taint information and the node type of each variable, and use this file as input to our reading procedure to generate files of the same format containing the above information for all recorded events; 3) feed the generated files into our tainting procedure to construct the taint graph.
\subsubsection{Preliminary process of the dataset}
For our testbed, this stage assigns variable IDs to the collected 8 variables, e.g., 0 for the speed reference, 1 for the ARM board, and 2-9 for us\_d, us\_q, m\_PWM, is\_a, is\_b, is\_c, theta, and omega\_actual\_mech, as shown in Table \ref{dataset}.
\subsubsection{Input Preparation}
As shown in Table \ref{input}, we prepare a file for the event at millisecond 0, including the connection information between the variables and both the taint information and the node type of each variable, and use this file as input to our reading procedure to generate files of the same format containing the above information for all recorded events.
\subsubsection{Taint Graph Generation}
At second zero of cycle 1, the first sub-graph starts with our selected taint source (the speed reference). As mentioned before, the whole taint graph is generated following the time sequence of the collected data. Each sub-graph is connected to its adjacent one following the time sequence, and the whole taint graph is complete at the end of the last sub-graph in our dataset.
\paragraph{Sub-graph} For every event (at a specific millisecond), there is one corresponding sub-graph. Notably, there are two exceptions in the sub-graph construction. One is the first sub-graph, which has only the taint source as its input, while the following sub-graphs all have both the taint source and the sensor data (taint sink) as their inputs. The other is that the last sub-graph does not have out-edges connecting to a next sub-graph. As shown in Figure \ref{Sample}, the first sub-graph starts with our selected taint source and connects to the attribute nodes, followed by the control signal node, which connects to the sensor data nodes.
\paragraph{The Whole Taint Graph} Based on the control information flow in Figure \ref{CPS}, both the taint source and the sensor data are used as inputs for the next event to calculate a new set of attribute nodes, control signal node, and sensor data (sink nodes). The whole taint graph is finalized after the last event.
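The chaining of sub-graphs over time can be sketched as follows. This is our own minimal reconstruction, assuming node IDs as (cycle, event, variable) triples, a single cycle, and the within-event edge skeleton source \(\to\) ARM board \(\to\) attributes \(\to\) control signal \(\to\) sensor data.

```python
def event_edges(t, cycle=1):
    """Within-event edges of the sub-graph at event t (variable IDs:
    0 source, 1 ARM board, 2-3 attributes, 4 control signal, 5-9 sensors)."""
    nid = lambda v: (cycle, t, v)
    edges = [(nid(0), nid(1)), (nid(1), nid(2)), (nid(1), nid(3)),
             (nid(2), nid(4)), (nid(3), nid(4))]
    edges += [(nid(4), nid(v)) for v in range(5, 10)]
    return edges

def cross_event_edges(t, cycle=1):
    """Taint source and sensor data of event t feed the ARM board of event t+1."""
    nxt_arm = (cycle, t + 1, 1)
    return [((cycle, t, 0), nxt_arm)] + \
           [((cycle, t, v), nxt_arm) for v in range(5, 10)]

def whole_graph(n_events):
    edges = []
    for t in range(n_events):
        edges += event_edges(t)
        if t < n_events - 1:          # the last sub-graph has no out-edges
            edges += cross_event_edges(t)
    return edges

graph = whole_graph(3)
```

Running `whole_graph` over all 64 events of all 99 cycles would yield the complete taint graph of the dataset.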
\section{Cause Analysis}
\label{Case}
In this section, we explain how to utilize the generated taint graph to perform preliminary cause analysis through forward and backward taint tracking. There are two types of intrusion diagnosis questions to be answered: (a) ``if X is compromised, would Y be part of the damage?'' and (b) ``if damage Y is observed, would X be a causing factor?''. Case study I demonstrates how forward taint tracking identifies the damage caused by X, and case study II explains how to decide whether X should account for damage Y through backward taint tracking.
\subsection{Case Study I}
When a malicious command (e.g., modifying the speed reference to a bad value) is launched at the ARM board, a series of damaged and compromised components can be identified through forward taint tracking on the sample taint graph.
To answer the question ``if the speed reference is maliciously modified, what is the damage?'', we have the following answers.
\begin{itemize}
\item In Figure \ref{taintgraph}, once the speed reference node (1-0-0) is maliciously modified, following the forward taint propagation, the ARM board node (1-0-1), the connected attribute nodes (1-0-2) \& (1-0-3), the \textit{m\_PWM} node, and the connected sensor data nodes (1-0-5), (1-0-6), (1-0-7), (1-0-8), and (1-0-9) will all be part of the damage.
\item Furthermore, because the mini-testbed utilizes the control information flow in Figure \ref{CPS}, both the maliciously modified speed reference and the generated sensor data serve as the inputs for the next event, which means all the following events will also belong to the damage.
\end{itemize}
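Forward taint tracking amounts to a reachability search over the taint-graph edges. The sketch below uses a breadth-first search over a toy edge list that follows the cycle-event-variable node IDs of Case Study I; the toy edges are illustrative, not the full generated graph.

```python
from collections import deque

def forward_taint(edges, source):
    """Return every node reachable from the tainted source, i.e. the damage set."""
    succ = {}
    for s, d in edges:
        succ.setdefault(s, []).append(d)
    damaged, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in succ.get(node, []):
            if nxt not in damaged:
                damaged.add(nxt)
                queue.append(nxt)
    return damaged

# Toy edge list: speed reference -> ARM board -> attributes -> m_PWM ->
# a sensor node, which feeds the ARM board of the next event.
toy_edges = [("1-0-0", "1-0-1"), ("1-0-1", "1-0-2"), ("1-0-1", "1-0-3"),
             ("1-0-2", "1-0-4"), ("1-0-3", "1-0-4"), ("1-0-4", "1-0-9"),
             ("1-0-9", "1-1-1")]
damage = forward_taint(toy_edges, "1-0-0")
```

On the full graph, the same search shows that every subsequent event is reachable from the modified speed reference, matching the case-study conclusion.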
\subsection{Case Study II}
However, in a real-world setting, some attacks cannot be detected in real time: some damage is observed first, and then intrusion diagnosis steps in to identify how the intrusion happened and what the root cause is.
For example, if defect Y was found in a product produced by motor M at time T, how can we identify the causing factors and even the root cause?
To answer the question ``if damage Y is observed, would X be a causing factor?'', we have the following answers.
\begin{itemize}
\item In our collected dataset, the variable omega\_actual\_mech is the feedback motor rotating speed, which directly impacts the product quality. In Figure \ref{taintgraph}, once we know at what time damage Y was observed, we can locate the abnormal sensor data node omega\_actual\_mech (aa-bb-09), which is used as the taint source node for backward taint tracking.
\item Through backward taint tracking on the taint graph, as shown in Figure \ref{taintgraph}, the taint starting from the abnormal sensor data node omega\_actual\_mech is first propagated from (1-1-9) to (1-1-4). Through (1-1-2) \& (1-1-3), the taint is propagated from (1-1-4) to (1-1-1). At this point, the taint will be propagated within the current sub-graph to (1-1-0) and to the sensor data nodes in the previous event, respectively. In the former case, the speed reference could be one of the candidates that account for damage Y.
\item In the latter case, the taint is propagated backward through the events. During taint propagation, the values of the tainted nodes, e.g., (1-0-4), (1-0-5), (1-0-7), are compared with their normal values recorded under the same testbed configuration. Obvious mismatches are marked as suspicious candidates for causing damage Y.
\end{itemize}
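Backward taint tracking can be sketched as a reverse traversal that flags nodes deviating from their recorded normal values. The 5\% relative-deviation threshold, the toy edges, and the toy values below are our own illustrative assumptions, not the tool's actual configuration.

```python
def backward_taint(edges, sink, values, normal, tol=0.05):
    """Walk the taint graph backward from the abnormal sink node and return
    the visited nodes whose observed value deviates from the normal value by
    more than the relative tolerance."""
    pred = {}
    for s, d in edges:
        pred.setdefault(d, []).append(s)
    visited, stack, suspicious = set(), [sink], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        v, n = values.get(node), normal.get(node)
        if v is not None and n and abs(v - n) / abs(n) > tol:
            suspicious.append(node)
        stack.extend(pred.get(node, []))
    return suspicious

# Toy backward chain: abnormal omega node (1-1-9) <- m_PWM (1-1-4) <- ARM
# board (1-1-1), which is fed by the speed reference (1-1-0) and the
# previous event's sensor node (1-0-9).
toy_edges = [("1-0-9", "1-1-1"), ("1-1-0", "1-1-1"),
             ("1-1-1", "1-1-4"), ("1-1-4", "1-1-9")]
observed = {"1-1-9": 400.0, "1-1-0": 4000.0, "1-0-9": 1023.0}
normal   = {"1-1-9": 1000.0, "1-1-0": 1000.0, "1-0-9": 1020.0}
suspects = backward_taint(toy_edges, "1-1-9", observed, normal)
```

In this toy run the maliciously modified speed reference (1-1-0) is flagged, while the previous event's sensor node (1-0-9), whose value matches its normal record, is not.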
\section{Conclusions}
\label{Con}
In this work, we conducted a case study that (a) extends the existing DTA method with manufacturing-specific taint propagation rules, and (b) applies the extended method to perform preliminary intrusion diagnosis on a small-scale testbed.
Using the customized set of manufacturing-specific taint propagation rules, we have developed a software tool to construct a taint graph directly based on
the event data collected from the testbed.
Through answering several specific intrusion diagnosis questions, including (a) ``if control signal X is compromised, would Y be part of the damage?'' and (b) ``if damage Y is observed, would X be a causing factor?'', we have partially evaluated the soundness and effectiveness of the proposed cyber-physical DTA method.
\section{Introduction}
One-body transport models like the Boltzmann-Uehling-Uhlenbeck~(BUU) equation (see, e.g., Ref.~\cite{BerPR160}) provide a powerful tool to describe heavy-ion collisions.
Based on these models, valuable information on the details of nuclear reaction dynamics and nuclear matter properties has been obtained by analyzing data in heavy-ion collisions~\cite{BLIJMPE7,Dansc298,BarPR410,BLPR464}.
Among the obtained information, those concerning the equation of state~(EOS) of asymmetric nuclear matter at both subsaturation and suprasaturation densities and the in-medium nuclear effective interaction are of fundamental importance, since they are crucial in investigating the properties of various nuclear systems or astrophysical objects~\cite{BLIJMPE7,Dansc298,Latsc304,BarPR410,StePR411,LatPR442,BLPR464,TraIJMPE21,HorJPG41,BLEPJA50,HebARNPS65,BalPPNP91,OerRMP89,BLNPN27,BLPPNP99}.
Heavy-ion collisions not only serve as an alternative method in extracting information on subsaturation EOS through collective flows and particle production~(at intermediate beam energy)~\cite{BLPRL85,BarPR410,BLPR464,LCPRL94,TsaPRL102},
but also turn out to be the unique tool in terrestrial labs for the exploration of the suprasaturation density behaviors of the nuclear matter EOS~(at intermediate to high beam energy)~\cite{BLPRL88,XZGPRL102,FZQPLB683,RusPLB697,WXPLB718,CozPRC88,RusPRC94,CozEPJA54}.
In this sense, it is important to improve and upgrade the one-body transport models so that they can help to provide more reliable information on the EOS of asymmetric nuclear matter and the in-medium nuclear effective interaction.
To this end, the heavy-ion collision community at intermediate energies has made great efforts in recent years to improve transport models for heavy-ion collisions through the program of transport model code comparisons~\cite{XJPRC93,ZYXPRC97}.
One of the basic inputs in one-body transport models is the single-nucleon potential~(nuclear mean-field potential) under non-equilibrium conditions, which is generally dependent on nucleon momentum.
The momentum dependence of the single-nucleon potential, which has various origins such as the exchange term of the finite-range nuclear interaction and nuclear short-range correlations, is evident from the observed momentum or energy dependence of the nucleon optical model potential~\cite{HamPRC41,CooPRC47}.
This has driven the construction and development of many momentum-dependent mean-field potentials, which have been extensively employed to study both nuclear matter and
heavy-ion collisions~\cite{GalPRC35,PraPRC37,WelPRC38,GrePRC59,DanNPA673,Bom2001,PerPRC65,DasPRC67,LCPRL94,CXPRC81,JXPRC82,LCEPJA50,JXPRC91}.
Most momentum-dependent mean-field potentials applied so far in one-body transport models are parameterized phenomenologically and cannot be directly used in nuclear structure calculations.
It is thus interesting and constructive to directly employ momentum-dependent nuclear effective interactions, which are widely used in mean-field calculations of finite nuclei, in one-body transport models.
By doing this, experimental observables from finite nuclei and heavy-ion collisions can provide crosschecks for the single nucleon potentials, and thus enhance the understanding on in-medium nuclear effective interactions and the associated nuclear matter EOS.
The Skyrme interaction~\cite{SkyPM1,SkyNP9} has been used very successfully in describing the ground and lowly excited state properties of finite nuclei in mean-field calculations~\cite{BenRMP75,StoPPNP58}, as well as heavy-ion collisions at low energies in time-dependent Hartree-Fock (TDHF) calculations~\cite{MarCPC185,NakRMP88}.
However, its incorrect momentum dependence at high kinetic energies~(above about $300~\rm MeV/nucleon$), which fails to reproduce the empirical results on the nucleon optical potential obtained by Hama \textsl{et al.}~\cite{HamPRC41,CooPRC47}, hinders the application of the conventional Skyrme interaction~\cite{VauPRC5,VauPRC7,ChaNPA627,ChaNPA635} in transport model calculations for heavy-ion collisions at intermediate and high energies.
Recent development of quasi-local energy density functional and Skyrme pseudopotential~(the terminology represents effective interactions with quasi-local operators depending on spatial derivatives) makes it possible to incorporate the Skyrme interaction into the transport model calculations for intermediate to high energy heavy-ion collisions.
This is achieved by introducing additional higher order derivative terms (or higher order momentum dependence) in the conventional Skyrme effective interaction.
In Ref.~\cite{WRPRC98}, based on the next-to-next-to-next-to leading order~(N$3$LO) Skyrme pseudopotential, the Hamiltonian density and single nucleon potential under general non-equilibrium condition have been given, and three N$3$LO Skyrme pseudopotentials with single particle potential consistent with empirical optical potential up to $1~\rm GeV$ kinetic energy have been constructed.
This provides the possibility to study finite nuclei and heavy-ion collisions at incident energy up to about $1~\rm GeV/nucleon$ (where nuclear matter with about three times of saturation density can be formed~\cite{BLNPA708}) on the same footing by using the same nuclear effective interaction.
Another important aspect of developing a transport model for heavy-ion collisions is to improve the numerical stability as well as to guarantee energy and momentum conservation during the dynamic process. The lattice Hamiltonian method~\cite{LenPRC39} provides a good recipe for this purpose: it conserves the total energy exactly and the total momentum to a high degree of accuracy, and it has been successfully employed in the study of heavy-ion collisions~\cite{GalPRC44,HXPRL65,HXPLB261,HXPRL67,HXPRC46a,HXPRC46b,HXPLB299,GrePRC59,PerPRC65,DasPLB726,MalPRC89,MalPRC91}.
As the first step of building a transport model with the lattice Hamiltonian method by incorporating the N$3$LO Skyrme pseudopotential, in the present work we develop a one-body transport model without considering nucleon-nucleon collisions based on the lattice Hamiltonian Vlasov~(LHV) method~\cite{LenPRC39} with nuclear mean-field potential based on the N$3$LO Skyrme pseudopotential.
In order to ensure the reliability of the LHV method, the initial phase space distribution is obtained self-consistently by varying the total energy.
The ground state properties of the present LHV transport model are examined.
Such an LHV transport model without a nucleon-nucleon collision term can be applied to the study of the dynamical evolution of quantum systems where individual collisions are either inhibited by the Pauli exclusion principle or negligible due to the diluteness of the system.
In that case, collective motions of finite nuclei are ideal cases for testing the validity of the present LHV method.
To this end, isoscalar giant monopole and isovector giant dipole modes of \isotope[208]{Pb} are studied based on the present LHV method.
The obtained results are compared with those from the random-phase approximation~(RPA) and experimental data.
The present article is organized as follows.
In Sec.~\ref{S:SP}, we introduce the mean-field potential used in the LHV method, namely, N$3$LO Skyrme pseudopotential, and its Hamiltonian density under general nonequilibrium conditions.
In Sec.~\ref{S:LHV}, the details of the LHV method are given, and the initialization of nuclear ground state and the treatment of collective motions used in the present work are also introduced.
The ground state properties, isoscalar giant monopole, and isovector giant dipole modes of \isotope[208]{Pb} based on the LHV method with the Skyrme pseudopotentials are presented in Sec.~\ref{S:R&D}.
Finally, we summarize the present work and make a brief outlook in Sec.~\ref{S:S&O}.
\section{\label{S:SP}Mean-field potential}
\subsection{N3LO Skyrme pseudopotential}
The quasilocal nuclear energy density functional~(EDF) based on the density-matrix expansion provides an efficient way to investigate the universal EDF of a nuclear system.
In previous literature~\cite{CarPRC78,RaiPRC83}, the Skyrme interaction has been recognized as the corresponding pseudopotential of quasilocal nuclear EDF through Hartree-Fock~(HF) approximation, and a mapping has been established from the N$3$LO local EDF~\cite{CarPRC78} to N$3$LO Skyrme pseudopotential.
Such a mapping is worthwhile because it provides an order-by-order way to examine the validity of each term in the quasilocal effective interaction, since the precise structure of nuclear EDF can be derived from low-energy quantum chromodynamics with chiral perturbation theory~\cite{PugNPA723,KaiNPA724,FinNPA770}.
The N$3$LO Skyrme pseudopotential~\cite{CarPRC78,RaiPRC83} is a generalization of the standard Skyrme interaction~\cite{VauPRC5,VauPRC7,ChaNPA627,ChaNPA635} by adding terms that depend on derivative operators~(momentum operator) up to sixth order, which corresponds to the expansion of the momentum space matrix elements of a generic interaction in powers of the relative momenta up to the sixth order.
In this sense, the standard Skyrme interaction can be regarded as an N$1$LO Skyrme pseudopotential.
Such a generalization of the Skyrme interaction has been employed to describe the EOS of nuclear matter~\cite{DavJPG40,DavJPG41,DavPS90,DavPRC91,DavAA585}, as well as the properties of finite nuclei~\cite{CarCPC181,BecPRC96}.
The full Skyrme pseudopotential generally contains central, spin-orbit, and tensor components.
Since only spin-averaged quantities are taken into consideration in the present LHV method, as in most one-body transport models, we ignore the spin-orbit and tensor components, which are irrelevant to spin-averaged quantities.
The corresponding extended Skyrme interaction used in the present LHV method is written as
\begin{equation}\label{E:VSk}
v_{Sk} = V^{C}_{\text{N3LO}} + V^{DD}_{\text{N1LO}},
\end{equation}
with an overall factor $\hat{\delta}(\vec{r}_1-\vec{r}_2)$ omitted for the sake of clarity.
The central term is expressed as
\begin{widetext}
\begin{align}\label{E:VSkC}
V^{C}_{\text{N3LO}} = &~t_0(1+x_0\hat{P}_{\sigma}) + t^{[2]}_1(1+x^{[2]}_1\hat{P}_{\sigma})\frac{1}{2}(\hat{\vec{k}}'^2 + \hat{\vec{k}}^2) + t^{[2]}_2(1 + x^{[2]}_2\hat{P}_{\sigma})\hat{\vec{k}}'\cdot\hat{\vec{k}} + t^{[4]}_1(1+x^{[4]}_1\hat{P}_{\sigma})\Big[\frac{1}{4}(\hat{\vec{k}}'^2+\hat{\vec{k}}^2)^2+(\hat{\vec{k}}'\cdot\hat{\vec{k}})^2\Big]\notag\\
&+ t^{[4]}_2(1+x^{[4]}_2\hat{P}_{\sigma})(\hat{\vec{k}}'\cdot\hat{\vec{k}})(\hat{\vec{k}}'^2+\hat{\vec{k}}^2) + t^{[6]}_1(1 + x^{[6]}_1\hat{P}_{\sigma})(\hat{\vec{k}}'^2 + \hat{\vec{k}}^2)\Big[\frac{1}{2}(\hat{\vec{k}}'^2 + \hat{\vec{k}}^2)^2 + 6(\hat{\vec{k}}'\cdot\hat{\vec{k}})^2\Big]\notag\\
&+ t^{[6]}_2(1+x^{[6]}_2\hat{P}_{\sigma})(\hat{\vec{k}}'\cdot\hat{\vec{k}})[3(\hat{\vec{k}}'^2+\hat{\vec{k}}^2)^2+4(\hat{\vec{k}}'\cdot \hat{\vec{k}})^2],
\end{align}
\end{widetext}
where $\hat{\vec{k}}'$ and $\hat{\vec{k}}$ are derivative operators acting to the left and to the right, taking the conventional forms $-(\hat{\vec{\nabla}}_1-\hat{\vec{\nabla}}_2)/2i$ and
$(\hat{\vec{\nabla}}_1-\hat{\vec{\nabla}}_2)/2i$, respectively.
$\hat{P}_{\sigma}$ represents the spin exchange operator, defined as $\hat{P}_{\sigma} = \frac{1}{2}(1 + \hat{\vec{\sigma}}_1\cdot\hat{\vec{\sigma}}_2)$, with $\hat{\vec{\sigma}}_1$ and $\hat{\vec{\sigma}}_2$ the Pauli matrices acting on the first and second nucleon, respectively.
The density-dependent term $V^{DD}_{\text{N1LO}}$, which is introduced to mimic phenomenologically the effects of many-body interactions, is taken to be the same form as in the standard Skyrme interaction, i.e.,
\begin{equation}\label{E:VSkDD}
V^{DD}_{\text{N1LO}} = \frac{1}{6}t_3(1+x_3\hat{P}_{\sigma})\rho^{\alpha}\Big(\frac{\vec{r}_1 +\vec{r}_2}{2}\Big).
\end{equation}
In the above expressions, $t^{[n]}_i$, $x^{[n]}_i$ ($n=2, 4, 6$ and $i = 1, 2$), $t_0$, $t_3$, $x_0$, $x_3$ and $\alpha$ are Skyrme parameters. In particular, the parameters $t_0$ and $t^{[n]}_i$ set the strength of the mean central potential, with a spin-exchange character specified by $x_0$ and $x^{[n]}_i$.
The parameters $t_3$ and $x_3$ are their analogs for the density-dependent term, with $\alpha$ characterizing its density dependence.
The parameters $t^{[n]}_i$ and $x^{[n]}_i$ are related to the derivative terms which determine the momentum-dependent parts of the Hamiltonian density and the single-nucleon potentials.
With the introduction of the additional derivative terms in Eq.~(\ref{E:VSkC}), the N$3$LO Skyrme pseudopotential is able to describe the empirical single nucleon potential up to a kinetic energy of $1~\rm GeV$~\cite{WRPRC98}.
\subsection{\label{S:H}Hamiltonian density with N$3$LO Skyrme pseudopotential}
During the heavy-ion collision process, the dinuclear system is generally far from
equilibrium.
In one-body transport models, such nonequilibrium conditions are described by the phase space distribution function (Wigner function) $f_{\tau}(\vec{r},\vec{p})$, with $\tau$ $=$ $1$~(or n) for neutrons and $-1$~(or p) for protons.
In the LHV method, we thus need to express the Hamiltonian density ${\cal H}(\vec{r})$ of the collision system in terms of $f_{\tau}(\vec{r},\vec{p})$.
The Hamiltonian density in HF approximation is obtained by calculating the expectation value of the total energy of the collision system.
For the N$3$LO Skyrme pseudopotential expressed in Eq.~(\ref{E:VSk}), it takes the following form~(a detailed derivation can be found in Ref.~\cite{WRPRC98}):
\begin{equation}\label{E:H}
\begin{split}
{\cal H}(\vec{r}) & = {\cal H}^{\rm kin}(\vec{r}) + {\cal H}^{\rm loc}(\vec{r})\\
& + {\cal H}^{\rm MD}(\vec{r}) + {\cal H}^{\rm grad}(\vec{r}) + {\cal H}^{\rm DD}(\vec{r}),
\end{split}
\end{equation}
with ${\cal H}^{\rm kin}(\vec{r})$, ${\cal H}^{\rm loc}(\vec{r})$, ${\cal H}^{\rm MD}(\vec{r})$, ${\cal H}^{\rm grad}(\vec{r})$, and ${\cal H}^{\rm DD}(\vec{r})$ being the kinetic, local, momentum-dependent, gradient, and density-dependent terms, respectively.
The kinetic term
\begin{equation}\label{E:Hkin}
{\cal H}^{\rm kin}(\vec{r}) = \sum_{\tau = n,p}\int d^3p\frac{p^2}{2m_{\tau}}f_{\tau}(\vec{r},\vec{p})
\end{equation}
and the local term
\begin{equation}\label{E:Hloc}
{\cal H}^{\rm loc}(\vec{r}) = \frac{t_0}{4}\bigg[(2 + x_0)\rho(\vec{r})^2 - (2x_0 + 1)\sum_{\tau = n,p}\rho_{\tau}(\vec{r})^2\bigg]
\end{equation}
are the same as those from the conventional Skyrme interaction.
The $\rho_{\tau}(\vec{r})$ in the local term is the nucleon density, which is related to
$f_{\tau}(\vec{r},\vec{p})$ through $\rho_{\tau}(\vec{r})$ $=$ $\int f_{\tau}(\vec{r},\vec{p})d\vec{p}$, while $\rho(\vec{r})$ represents the total nucleon density with $\rho(\vec{r})$ $=$ $\rho_n(\vec{r})$ $+$ $\rho_p(\vec{r})$.
The momentum-dependent term and the gradient term contain the contributions from the additional derivative terms in Eq.~(\ref{E:VSkC}). The momentum-dependent term can be expressed as
\begin{equation}\label{E:HMD}
\begin{split}
{\cal H}^{\rm MD}(\vec{r}) & = \int d^3pd^3p'{\cal K}_s(\vec{p},\vec{p}')f(\vec{r},\vec{p})f(\vec{r},\vec{p}')\\
& + \sum_{\tau = n, p}\int d^3pd^3p'{\cal K}_v(\vec{p},\vec{p}')f_{\tau}(\vec{r},\vec{p})f_{\tau}(\vec{r},\vec{p}'),
\end{split}
\end{equation}
with $f(\vec{r},\vec{p})$ $=$ $f_n(\vec{r},\vec{p})$ $+$ $f_p(\vec{r},\vec{p})$.
The ${\cal K}_{\rm s}(\vec{p},\vec{p}')$ and ${\cal K}_{\rm v}(\vec{p},\vec{p}')$ in Eq.~(\ref{E:HMD}) represent the isoscalar and isovector momentum-dependent kernels of the mean-field potential, respectively.
For the N$3$LO Skyrme pseudopotential used in the present work, ${\cal K}_{\rm s}(\vec{p},\vec{p}')$ and ${\cal K}_{\rm v}(\vec{p},\vec{p}')$ take the following forms,
\begin{align}
{\cal K}_{\rm s}(\vec{p},\vec{p}') & = \frac{C^{[2]}}{16\hbar^2}(\vec{p} - \vec{p}')^2 + \frac{C^{[4]}}{32\hbar^4}(\vec{p} - \vec{p}')^4\notag\\
& + \frac{C^{[6]}}{16\hbar^6}(\vec{p} - \vec{p}')^6,\label{E:mdks}\\
{\cal K}_{\rm v}(\vec{p},\vec{p}') & = \frac{D^{[2]}}{16\hbar^2}(\vec{p} - \vec{p}')^2 + \frac{D^{[4]}}{32\hbar^4}(\vec{p} - \vec{p}')^4\notag\\
& + \frac{D^{[6]}}{16\hbar^6}(\vec{p} - \vec{p}')^6.\label{E:mdkv}
\end{align}
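For illustration, these kernels are low-order polynomials in $(\vec{p}-\vec{p}')^2$ and are therefore cheap to evaluate in a transport code. The following stand-alone Python sketch (not part of the published implementation; the SP$6$s values of Table~\ref{T:SPs} are used for the recombined parameters, and the $\hbar$ factors are arranged so that every term carries the same units) evaluates ${\cal K}_{\rm s}$ and ${\cal K}_{\rm v}$ for a pair of momenta:

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV fm; momenta below are taken in MeV/c

# Recombined SP6s parameters (Table T:SPs); units: C[2], D[2] in MeV fm^5,
# C[4], D[4] in MeV fm^7, C[6], D[6] in MeV fm^9.
C = {2: 597.877, 4: -26.2027, 6: 0.0903}
D = {2: -446.695, 4: 23.2525, 6: -0.0896}

def md_kernels(p, pp):
    """Isoscalar and isovector momentum-dependent kernels K_s, K_v
    for momenta p, pp (3-vectors in MeV/c); result in MeV fm^3."""
    # (p - p')^2 / hbar^2 in fm^-2, so each polynomial term is MeV fm^3
    q2 = float(np.sum((np.asarray(p) - np.asarray(pp))**2)) / HBARC**2
    Ks = C[2]/16*q2 + C[4]/32*q2**2 + C[6]/16*q2**3
    Kv = D[2]/16*q2 + D[4]/32*q2**2 + D[6]/16*q2**3
    return Ks, Kv
```

With kernels of this kind, the momentum-dependent Hamiltonian density of Eq.~(\ref{E:HMD}) reduces to double sums over test particles, as used in Eqs.~(\ref{E:mdr}) and (\ref{E:mdp}).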
In the present work, to simplify the numerical derivatives, the gradient term is truncated at the second order, and only the isospin symmetric part is taken into account.
In other words, we only keep the second order derivative of the total baryon density $\rho(\vec{r})$, i.e.,
\begin{equation}\label{E:Hgrad}
{\cal H}^{\rm grad}(\vec{r}) = \frac{1}{16}E^{[2]}\Big\{2\rho(\vec{r})\nabla^2\rho(\vec{r}) - 2\big[\nabla\rho(\vec{r})\big]^2\Big\}.
\end{equation}
The complete gradient term in the Hamiltonian density for the N$3$LO Skyrme pseudopotential can be found in Ref.~\cite{WRPRC98}.
In the above expressions, for convenience, we have recombined the Skyrme parameters
related to the derivative terms in Eq.~(\ref{E:VSkC}), namely, $t_1^{[n]}$, $t_2^{[n]}$,
$x_1^{[n]}$, and $x_2^{[n]}$, into the parameters $C^{[n]}$ and $D^{[n]}$,
\begin{align}
C^{[n]} & = t_1^{[n]}(2+x_1^{[n]})+t_2^{[n]}(2+x_2^{[n]}),\label{E:Cn}\\
D^{[n]} & = -t_1^{[n]}(2x_1^{[n]}+1)+t_2^{[n]}(2x_2^{[n]}+1),\label{E:Dn}
\end{align}
which are related to the momentum-dependent terms, and into the parameter $E^{[2]}$, which is related to the gradient term, with
\begin{equation}
E^{[2]} = -\frac{1}{4}\big[t_1^{[2]}(2+x_1^{[2]}) - t_2^{[2]}(2+x_2^{[2]})\big].\label{E:E2}
\end{equation}
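The recombination in Eqs.~(\ref{E:Cn})--(\ref{E:E2}) amounts to a few arithmetic operations; as an illustrative helper (not part of any published code), it can be sketched as:

```python
def recombine(t1, x1, t2, x2):
    """C^[n] and D^[n] of Eqs. (Cn) and (Dn) from the raw Skyrme
    parameters t_1^[n], x_1^[n], t_2^[n], x_2^[n]."""
    C = t1*(2.0 + x1) + t2*(2.0 + x2)
    D = -t1*(2.0*x1 + 1.0) + t2*(2.0*x2 + 1.0)
    return C, D

def gradient_E2(t1, x1, t2, x2):
    """E^[2] of Eq. (E2), built from the n = 2 parameters only."""
    return -0.25*(t1*(2.0 + x1) - t2*(2.0 + x2))
```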
The density-dependent term is expressed as
\begin{equation}\label{E:HDD}
\begin{split}
{\cal H}^{\rm DD}(\vec{r}) = \frac{t_3}{24}\bigg[(2 + x_3)\rho^2 - (2x_3 + 1)\sum_{\tau=n,p}\rho_{\tau}^2\bigg]\rho^{\alpha}.
\end{split}
\end{equation}
Based on the above expressions, one can see that the Hamiltonian density ${\cal H}(\vec{r})$ is explicitly dependent on $f_{\tau}(\vec{r},\vec{p})$, as well as the $\rho_{\tau}(\vec{r})$ and their derivatives.
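As a simple numerical check, the local and density-dependent terms, Eqs.~(\ref{E:Hloc}) and (\ref{E:HDD}), can be evaluated directly from the neutron and proton densities. The Python sketch below (illustrative only; the value of $t_0$ is a hypothetical round number, not a fitted parameter) also makes explicit that in symmetric matter ($\rho_n = \rho_p$) the $x_0$ dependence cancels and ${\cal H}^{\rm loc} = 3t_0\rho^2/8$:

```python
def h_loc(rho_n, rho_p, t0, x0):
    """Local term of Eq. (Hloc), in MeV fm^-3 for densities in fm^-3."""
    rho = rho_n + rho_p
    return 0.25*t0*((2.0 + x0)*rho**2 - (2.0*x0 + 1.0)*(rho_n**2 + rho_p**2))

def h_dd(rho_n, rho_p, t3, x3, alpha):
    """Density-dependent term of Eq. (HDD)."""
    rho = rho_n + rho_p
    return (t3/24.0)*((2.0 + x3)*rho**2
                      - (2.0*x3 + 1.0)*(rho_n**2 + rho_p**2))*rho**alpha

t0 = -2000.0   # hypothetical value (MeV fm^3), for illustration only
val = h_loc(0.08, 0.08, t0, 0.5)   # equals 3*t0*0.16**2/8 in symmetric matter
```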
\begin{table}[!htb]
\centering
\caption{Parameters related to the nuclear matter properties of three N$3$LO Skyrme pseudopotentials, SP$6$s, SP$6$m, and SP$6$h, and one conventional Skyrme interaction MSL$1$, where the recombined Skyrme parameters defined in Eqs.~(\ref{E:Cn}) and (\ref{E:Dn}) are used.}
\begin{tabular}{ccccc}
\hline\hline
~ & SP$6$s & SP$6$m & SP$6$h & MSL$1$\\
\hline
$t_0$~($\rm{MeV\cdot fm}^{3}$) & -1814.64 & -1956.75 & -1675.52 & -1963.23\\
$x_0$ & 0.5400 & 0.2306 & -0.0902 & 0.3208\\
$t_3$~($\rm{MeV\cdot fm}^{3 + 3\alpha}$) & 10796.2 & 11402.9 & 9873.1 & 12174.9\\
$x_3$ & 0.8257 & 0.1996 & -0.4990 & 0.3219\\
$\alpha$ & 0.2923 & 0.2523 & 0.3168 & 0.2694\\
$C^{[2]}$~($\rm{MeV\cdot fm}^{5}$) & 597.877 & 637.195 & 677.884 & 435.519\\
$D^{[2]}$~($\rm{MeV\cdot fm}^{5}$) & -446.695 & -524.373 & -601.990 & -367.583\\
$C^{[4]}$~($\rm{MeV\cdot fm}^{7}$) & -26.2027 & -28.5209 & -31.2026 & 0.0\\
$D^{[4]}$~($\rm{MeV\cdot fm}^{7}$) & 23.2525 & 27.6873 & 32.4607 & 0.0\\
$C^{[6]}$~($\rm{MeV\cdot fm}^{9}$) & 0.0903 & 0.1000 & 0.1121 & 0.0\\
$D^{[6]}$~($\rm{MeV\cdot fm}^{9}$) & -0.0896 & -0.1080 & -0.1292 & 0.0\\
\hline\hline
\end{tabular}
\label{T:SPs}
\end{table}
The calculations in the present work are mainly based on three N$3$LO Skyrme pseudopotentials, SP$6$s, SP$6$m, and SP$6$h~\cite{WRPRC98}, which can describe the empirical single nucleon potential up to $1~\rm GeV$ in kinetic energy.
The main difference among these three interactions lies in the suprasaturation behavior of the isospin-dependent part of the EOS, namely, the symmetry energy.
Since the nucleon momenta relevant to both the ground state properties and low energy collective excitations are not much larger than the Fermi momentum of saturated nuclear matter, conventional Skyrme interactions are still able to reproduce the empirical single nucleon potential in this regime.
In order to compare with RPA calculations, the conventional Skyrme interaction MSL$1$~\cite{ZZPLB726} is also adopted in the present work.
We list the Skyrme parameters related to nuclear matter properties for the above four Skyrme interactions in Table~\ref{T:SPs}, while more discussions about the gradient parameter $E^{[2]}$, which is irrelevant to nuclear matter properties, will be given in Sec.~\ref{S:INI}.
For further reference, the characteristic parameters of nuclear matter for these interactions are shown in Table~\ref{T:CPs}.
The definition of these quantities can be found, e.g., in Ref.~\cite{LCPRC80}.
\begin{table}[!htb]
\centering
\caption{Macroscopic characteristic quantities for SP$6$s, SP$6$m, SP$6$h, and MSL$1$, where $\rho_{\rm sc}$ $=$ $0.11/0.16\rho_0$ and $\rho_{\rm h}$ $=$ $0.5~{\rm fm}^{-3}$.}
\begin{tabular}{ccccc}
\hline\hline
~ & SP$6$s & SP$6$m & SP$6$h & MSL$1$\\
\hline
$\rho_{0}$~($\rm{fm}^{-3}$) & $0.1614$ & $0.1630$ & $0.1647$ & $0.1586$\\
$E_{0}$~($\rm{MeV}$) & $-16.04$ & $-15.94$ & $-15.61$ & $-16.00$\\
$K_{0}$~($\rm{MeV}$) & $240.9$ & $233.4$ & $240.8$ & $235.1$\\
$J_0$~($\rm{MeV}$) & $-377.0$ & $-384.2$ & $-358.2$ & $-372.7$\\
$E_{\rm{sym}}(\rho_{\rm{sc}})$~($\rm{MeV}$) & $25.43$ & $25.83$ & $25.98$ & $26.67$\\
$L(\rho_{\rm sc})$~($\rm MeV$) & $32.47$ & $46.75$ & $62.19$ & $46.19$\\
$E_{\rm sym}(\rho_0)$~($\rm MeV$) & $28.84$ & $31.93$ & $34.97$ & $32.33$\\
$L(\rho_0)$~($\rm MeV$) & $18.20$ & $49.10$ & $82.17$ & $45.25$\\
$K_{\rm sym}$~($\rm MeV$) & $-242.7$ & $-158.0$ & $-70.5$ & $-183.3$\\
$E_{\rm sym}(2\rho_0)$~($\rm MeV$) & $ 24.06 $ & $ 41.31 $ & $ 61.62 $ & $39.00$\\
$E_{\rm sym}(\rho_{\rm h})$~($\rm MeV$) & $0.03$ & $41.32$ & $79.82$ & $31.01$\\
$m_{s,0}^{\ast}/m$ & $0.759$ & $0.758$ & $0.755$ & $0.806$\\
$m_{v,0}^{\ast}/m$ & $0.678$ & $0.663$ & $0.648$ & $0.706$\\
\hline\hline
\end{tabular}\label{T:CPs}
\end{table}
\section{\label{S:LHV}Lattice Hamiltonian Vlasov method}
\subsection{Lattice Hamiltonian method for Vlasov equation}
Quantum theory formulated with phase-space distributions, with proper generalization or approximation, is well suited to formulating and solving many-particle dynamics~\cite{CarRMP55}.
It has been demonstrated that in the limit $\hbar \rightarrow 0$, quantum theory with one-body phase-space distributions reduces to the Vlasov equation~\cite{BerPR160},
\begin{equation}\label{E:VE}
\frac{\partial f}{\partial t} + \frac{\vec{p}}{E}\nabla_{\vec{r}}f + \nabla_{\vec{p}}U(\vec{r},\vec{p})\cdot\nabla_{\vec{r}}f - \nabla_{\vec{r}}U(\vec{r},\vec{p})\cdot\nabla_{\vec{p}}f = 0,
\end{equation}
where $f$ is the one-body phase-space distribution, or Wigner function, defined as the Wigner transform of the one-body density matrix $\rho(\vec{r}+\vec{s}/2,\vec{r}-\vec{s}/2)$, i.e.,
\begin{equation}
f(\vec{r},\vec{p}) = \frac{1}{(2\pi\hbar)^3}\int {\rm exp}\Big(-i\frac{\vec{p}}{\hbar}\cdot\vec{s}\Big)\rho(\vec{r}+\vec{s}/2,\vec{r}-\vec{s}/2)d\vec{s}.
\end{equation}
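As a one-dimensional toy illustration of this transform (with $\hbar = 1$ and arbitrary units, unrelated to the nuclear problem), the Wigner function of a Gaussian wave packet can be computed by direct numerical integration; the result is the familiar Gaussian phase-space distribution, normalized to unity:

```python
import numpy as np

hbar, sigma = 1.0, 1.0
x = np.linspace(-6.0, 6.0, 241)    # position grid
s = np.linspace(-12.0, 12.0, 481)  # relative-coordinate grid
p = np.linspace(-5.0, 5.0, 201)    # momentum grid
ds = s[1] - s[0]

def psi(y):
    """Gaussian wave packet (harmonic-oscillator ground state)."""
    return (np.pi*sigma**2)**(-0.25)*np.exp(-y**2/(2.0*sigma**2))

# Density matrix rho(x + s/2, x - s/2) on the (x, s) grid
X, S = np.meshgrid(x, s, indexing="ij")
rho_mat = psi(X + S/2.0)*psi(X - S/2.0)

# Wigner transform: f(x,p) = (2 pi hbar)^-1 * int exp(-i p s / hbar) rho ds
f = np.array([np.real(np.exp(-1j*pj*s/hbar) @ rho_mat.T)
              for pj in p]).T * ds/(2.0*np.pi*hbar)

norm = f.sum()*(x[1] - x[0])*(p[1] - p[0])  # should be close to 1
```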
In nuclear physics, Eq.~(\ref{E:VE}) with an additional nucleon-nucleon collision term on the right hand side, which takes into account Fermi statistics, i.e.,
\begin{equation}
\begin{split}
I_{\rm c} & = -\int\frac{d\vec{p}_2}{(2\pi\hbar)^3}\frac{d\vec{p}_3}{(2\pi\hbar)^3}\frac{d\vec{p}_4}{(2\pi\hbar)^3}|{\cal M}_{12\rightarrow34}|^2\\
&\times(2\pi)^4\delta^4(p_1 + p_2 - p_3 - p_4)\\
&\times[f_1f_2(1-f_3)(1-f_4) - f_3f_4(1 - f_1)(1 - f_2)],
\end{split}\label{E:Ic}
\end{equation}
is commonly referred to as the BUU equation~\cite{BerPR160}.
The Vlasov equation or BUU equation is normally solved by interpreting $f_{\tau}(\vec{r},\vec{p},t)$ as the semi-classical phase space distribution function.
If we treat each volume element as nuclear matter, the (quasi)nucleons inside the volume are in momentum eigenstates obeying the Pauli principle.
Under such a condition, the $f_{\tau}(\vec{r},\vec{p},t)$ obtained through the Wigner transform of the density matrix turns out to be the occupation probability of the momentum eigenstates, and thus can be regarded as a classical phase space distribution function.
The test particle method~\cite{CWPRC25}, where the semiclassical $f_{\tau}(\vec{r},\vec{p},t)$ is mimicked by a large number of test particles, i.e.,
\begin{equation}\label{E:fTP}
f_{\tau}(\vec{r},\vec{p},t) \propto\sum_i\delta\big[\vec{r}_i(t) - \vec{r}\big]\delta\big[\vec{p}_i(t) - \vec{p}\big],
\end{equation}
has been introduced to solve the Vlasov equation numerically.
In the conventional test particle method, the evolutions of coordinate $\vec{r}_i(t)$ and momentum $\vec{p}_i(t)$ of the $i$th test nucleon are governed by the mean-field potential, or the single particle potential, which can be obtained either by varying the Hamiltonian density or by direct parametrization.
In order to obtain a smooth mean-field potential, the density of a certain cell is averaged by neighboring cells.
This simple smoothing technique slightly violates the equation of motion, and fails to conserve either the total energy or the total momentum~\cite{BerPR160}.
The lattice Hamiltonian Vlasov method developed by Lenk and Pandharipande~\cite{LenPRC39} overcomes this disadvantage of the conventional test particle method.
It conserves the total energy exactly and the total momentum to a high degree of accuracy.
In the LHV method, instead of mean-field potential, the equation of motion of test nucleons is governed directly by the total Hamiltonian of the system, which is approximated by the lattice Hamiltonian, i.e.,
\begin{equation}
H = \int {\cal H}(\vec{r})d\vec{r} \approx l_xl_yl_z\sum_{\alpha}{\cal H}(\vec{r}_{\alpha})\equiv H_L,
\end{equation}
where $\vec{r}_{\alpha}$ represents the coordinate of lattice site $\alpha$, and $l_x$, $l_y$, and $l_z$ are the lattice spacings.
The above lattice Hamiltonian can be expressed in terms of the positions and momenta of test nucleons, if we write the semi-classical LHV phase space distribution at lattice site $\alpha$ as
\begin{equation}\label{E:f}
\tilde{f}_{\tau}(\vec{r}_{\alpha},\vec{p},t) = \frac{1}{2}\frac{(2\pi\hbar)^3}{N_{\rm E}}\sum_i^{\alpha,\tau}S\big[\vec{r}_i(t) - \vec{r}_{\alpha}\big]\delta\big[\vec{p}_i(t) - \vec{p}\big],
\end{equation}
where the factor $\frac{1}{2}$ is due to spin degeneracy.
$N_{\rm E}$ is the number of ensembles (or number of test particles in some literature) introduced in the calculation, and the sum runs over all test nucleons with isospin $\tau$ that contribute to the lattice site $\alpha$.
Compared with Eq.~(\ref{E:fTP}), this equivalently endows each test particle with a form factor $S$.
In this way, the movement of a test particle leads to a continuous variation of the local nucleon density at nearby lattice sites, which helps to smooth the nucleon distribution functions in phase space.
It should be noted that the form factor $S$ actually modifies the relation between test particles and the Wigner function $f$.
At this point, we would like to point out that in principle, a similar form factor in momentum space can also be introduced in Eq.~(\ref{E:f}), which is expected to improve the calculations with momentum-dependent mean-field potentials.
In the present work, we only adopt the form factor in coordinate space; it would certainly be interesting to perform in the future a systematic investigation of the effects of a form factor in momentum space in heavy-ion transport model calculations.
The local nucleon density at lattice sites, or LHV density, is then given by integrating $\tilde{f}_{\tau}(\vec{r}_{\alpha},\vec{p},t)$ with respect to momentum, i.e.,
\begin{equation}\label{E:rhoL}
\tilde{\rho}_{\tau}(\vec{r}_{\alpha},t) = 2\int\tilde{f}_{\tau}\frac{d\vec{p}}{(2\pi\hbar)^3} = \frac{1}{N_{\rm E}}\sum_i^{\alpha,\tau}S\big[\vec{r}_i(t) - \vec{r}_{\alpha}\big].
\end{equation}
Note that here we distinguish the realistic phase space distribution function $f(\vec{r},\vec{p})$ and local density $\rho(\vec{r})$ from the LHV phase space distribution function and density expressed in Eqs.~(\ref{E:f}) and (\ref{E:rhoL}), respectively, and will explore their distinctions in Sec.~\ref{S:INI}.
Substituting Eq.~(\ref{E:f}) into Eq.~(\ref{E:H}),
one expresses the lattice Hamiltonian $H_L$ in terms of the coordinates and momenta of the test nucleons, which can subsequently be treated as the canonical variables of the lattice Hamiltonian.
The equation of motion for the $i$th test nucleon is then governed by the Hamilton equation of total lattice Hamiltonian of all ensembles $N_{\rm E}H_L$, i.e.,
\begin{widetext}
\begin{align}
\frac{d\vec{r}_i}{dt} & = N_{\rm E}\frac{\partial H_L\big[\vec{r}_1(t),\cdots,\vec{r}_{A\times N_{\rm E}}(t);\vec{p}_1(t),\cdots,\vec{p}_{A\times N_{\rm E}}(t)\big]}{\partial\vec{p}_i} = \frac{\vec{p}_i(t)}{m} + N_{\rm E}l_xl_yl_z\sum_{\alpha\in V_i}\frac{\partial{\tilde{\cal H}}^{\rm MD}_{\alpha}}{\partial\vec{p}_i}\label{E:ri},\\
\frac{d\vec{p}_i}{dt} & = - N_{\rm E}\frac{\partial H_L\big[\vec{r}_1(t),\cdots,\vec{r}_{A\times N_{\rm E}}(t);\vec{p}_1(t),\cdots,\vec{p}_{A\times N_{\rm E}}(t)\big]}{\partial\vec{r}_i}\notag\\
& = - N_{\rm E}l_xl_yl_z\sum_{\alpha\in V_i}\bigg[\sum_{\tau}^{n,p}\Big(\frac{\partial{\tilde{\cal H}}^{\rm loc}_{\alpha}}{\partial\tilde{\rho}_{\tau,\alpha}} + \frac{\partial{\tilde{\cal H}}^{\rm Cou}_{\alpha}}{\partial\tilde{\rho}_{\tau,\alpha}} + \frac{\partial{\tilde{\cal H}}^{\rm DD}_{\alpha}}{\partial\tilde{\rho}_{\tau,\alpha}} + \frac{\partial{\tilde{\cal H}}^{\rm grad}_{\alpha}}{\partial\tilde{\rho}_{\tau,\alpha}} + \sum_n(-1)^n\nabla^n\frac{\partial{\tilde{\cal H}}^{\rm grad}_{\alpha}}{\partial\nabla^n\tilde{\rho}_{\tau,\alpha}} \Big)\frac{\partial\tilde{\rho}_{\tau,\alpha}}{\partial\vec{r}_i} + \frac{\partial{\tilde{\cal H}}^{\rm MD}_{\alpha}}{\partial\vec{r}_i}\bigg]\label{E:pi}.
\end{align}
In the above two equations, $A$ is the nucleon number of the system, while the subscript $\alpha$ on various quantities denotes their values at lattice site $\alpha$.
The sums run over all lattice sites inside $V_i$, the region covered by the form factor of the $i$th test nucleon.
A tilde above the Hamiltonian density, e.g., $\tilde{\cal H}^{\rm loc}(\vec{r}_{\alpha})$, denotes that in its expression in Sec.~\ref{S:H}, the realistic phase space distribution function and local density are replaced by the LHV phase space distribution function and density.
The Coulomb interaction contributes to the Hamiltonian density through
\begin{equation}
{\cal H}^{\rm Cou}(\vec{r}_{\alpha}) = e^2\rho_p(\vec{r}_{\alpha})\bigg\{\frac{1}{2}\int\frac{\rho_p(\vec{r}')}{|\vec{r}_{\alpha} - \vec{r}'|}d\vec{r}' - \frac{3}{4}\Big[\frac{3\rho_p(\vec{r}_{\alpha})}{\pi}\Big]^{1/3}\bigg\}\label{E:Cou}\approx\tilde{\cal H}^{\rm Cou}(\vec{r}_{\alpha}) = e^2\tilde{\rho}_p(\vec{r}_{\alpha})\bigg\{\frac{1}{2}\sum_{\alpha'\ne\alpha}\frac{\tilde{\rho}_p(\vec{r}_{\alpha'})l_xl_yl_z}{|\vec{r}_{\alpha} - \vec{r}_{\alpha'}|} - \frac{3}{4}\Big[\frac{3\tilde{\rho}_p(\vec{r}_{\alpha})}{\pi}\Big]^{1/3}\bigg\},
\end{equation}
where the negative term represents the contribution from the exchange term of the Coulomb energy.
Further tests show that the Coulomb energy $\tilde{\cal H}^{\rm Cou}(\vec{r}_{\alpha})$ defined in the above equation has already converged at the lattice spacing $l_x$ $=$ $l_y$ $=$ $l_z$ $=$ $0.5~\rm fm$ used in the present work.
The partial derivative of $\tilde{\rho}_{\tau,\alpha}$ in Eq.~(\ref{E:pi}) can be calculated in terms of the spatial derivative of $S$, i.e.,
\begin{equation}
\frac{\partial\tilde{\rho}_{\tau,\alpha}}{\partial\vec{r}_i} = \frac{\partial}{\partial\vec{r}_i}\sum_{\vec{r}_j\in V_{\alpha}}^{\tau_j=\tau}S(\vec{r}_j-\vec{r}_{\alpha})
= \begin{cases}
& \frac{\partial S(\vec{r}_i-\vec{r}_{\alpha})}{\partial\vec{r}_i},\quad \tau_i = \tau,\\
& 0,\quad \tau_i \ne \tau.
\end{cases}
\end{equation}
The momentum-dependent parts of the equations of motion of the test particles in Eqs.~(\ref{E:ri}) and (\ref{E:pi}) are obtained by substituting the momentum-dependent part of the Hamiltonian density, Eq.~(\ref{E:HMD}), into Eqs.~(\ref{E:ri}) and (\ref{E:pi}), after replacing $f_{\tau}(\vec{r},\vec{p})$ in Eq.~(\ref{E:HMD}) with the semi-classical LHV phase space distribution of Eq.~(\ref{E:f}).
The integrals in Eq.~(\ref{E:HMD}) then turn into summations over test particles,
\begin{align}
\frac{\partial\tilde{\cal H}^{\rm MD}(\vec{r}_{\alpha})}{\partial\vec{r}_i} & = 2\frac{\partial S\big[\vec{r}_i(t) - \vec{r}_{\alpha}\big]}{\partial\vec{r}_i}\bigg\{\sum_{j\in V_{\alpha}}S\big[\vec{r}_j(t) - \vec{r}_{\alpha}\big]{\cal K}_{\rm s}\big[\vec{p}_i(t),\vec{p}_j(t)\big] + \sum_{j\in V_{\alpha}}^{\tau_j = \tau_i}S\big[\vec{r}_j(t) - \vec{r}_{\alpha}\big]{\cal K}_{\rm v}\big[\vec{p}_i(t),\vec{p}_j(t)\big]\bigg\}\label{E:mdr},\\
\frac{\partial\tilde{\cal H}^{\rm MD}(\vec{r}_{\alpha})}{\partial\vec{p}_i} & = 2S\big[\vec{r}_i(t) - \vec{r}_{\alpha}\big]\bigg\{\sum_{j\in V_{\alpha}}S\big[\vec{r}_j(t) - \vec{r}_{\alpha}\big]\frac{\partial{\cal K}_{\rm s}\big[\vec{p}_i(t),\vec{p}_j(t)\big]}{\partial\vec{p}_i} + \sum_{j\in V_{\alpha}}^{\tau_j = \tau_i}S\big[\vec{r}_j(t) - \vec{r}_{\alpha}\big]\frac{\partial{\cal K}_{\rm v}\big[\vec{p}_i(t),\vec{p}_j(t)\big]}{\partial\vec{p}_i}\bigg\}\label{E:mdp}.
\end{align}
\end{widetext}
Based on Eqs.~(\ref{E:ri})--(\ref{E:mdp}), we can calculate the time evolution of $\vec{r}_i(t)$ and $\vec{p}_i(t)$ of the test nucleons, and then compute physical observables based on Eq.~(\ref{E:f}).
In the present work, the form factor $S(\vec{r}_i - \vec{r})$ is chosen to be of triangle form,
\begin{align}
S(\vec{r}_i - \vec{r}) & = \frac{1}{(nl/2)^6}g(\Delta x)g(\Delta y)g(\Delta z),\\
g(q) & = \Big(\frac{nl}{2} - |q|\Big)\theta\Big(\frac{nl}{2} - |q|\Big),
\end{align}
where $\theta$ is the Heaviside function, and $n$ is an integer which determines the range of $S$.
Calculations based on lattices generally break Galilean invariance, and thus violate momentum conservation.
In the present work we choose $n$ $=$ $4$, which is large enough to conserve the total momentum to a high degree of accuracy~\cite{LenPRC39}.
Generally speaking, the choice of $S(\vec{r}_i - \vec{r})$ is somewhat arbitrary.
Besides the triangle form used in the present work, alternative forms used in previous literature include the trapezoid~\cite{DanNPA673}, double parabolic~\cite{PerPRC65}, and Gaussian~\cite{UrbPRC85} forms.
However, in order to ensure particle number conservation, $S(\vec{r}_i - \vec{r})$ should satisfy the following equation:
\begin{equation}
\sum_{\alpha}\tilde{\rho}(\vec{r}_{\alpha})l_xl_yl_z = \frac{1}{N_{\rm E}}\sum_{\alpha}\sum_{i}S(\vec{r}_i - \vec{r}_{\alpha})l_xl_yl_z = A.
\end{equation}
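This sum rule can be verified directly. The following stand-alone Python sketch (illustrative, not the production code; a single ensemble, $N_{\rm E}=1$, and randomly placed test nucleons are assumed) deposits particles on a cubic lattice with the triangle form factor and checks that $\sum_{\alpha}\tilde{\rho}(\vec{r}_{\alpha})l_xl_yl_z$ recovers the particle number; because the half-width $nl/2$ is an integer multiple of the lattice spacing, the conservation is exact:

```python
import numpy as np

def g(q, a):
    """1D triangle profile (a - |q|) * theta(a - |q|), with a = n*l/2."""
    return np.where(np.abs(q) < a, a - np.abs(q), 0.0)

def deposit(positions, grid_1d, l, n=4):
    """LHV density rho_tilde on a cubic lattice (single ensemble)."""
    a = n*l/2.0
    X, Y, Z = np.meshgrid(grid_1d, grid_1d, grid_1d, indexing="ij")
    rho = np.zeros_like(X)
    for rx, ry, rz in positions:
        rho += g(rx - X, a)*g(ry - Y, a)*g(rz - Z, a)/a**6
    return rho

rng = np.random.default_rng(1)
l = 0.5                                     # fm, lattice spacing of the text
grid = np.arange(-8.0, 8.0 + l/2, l)        # box large enough for all particles
pos = rng.uniform(-3.0, 3.0, size=(50, 3))  # 50 "test nucleons"
rho_t = deposit(pos, grid, l)
A = rho_t.sum()*l**3                        # recovered particle number
```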
It should be mentioned that in the conventional test particle method, the Hamiltonian equations of motion for the test particles are derived from the {\it single particle} Hamiltonian, which makes it difficult to exactly conserve the energy of the system in the dynamic process~\cite{BerPR160,LenPRC39}. In the LHV method, on the other hand, the Hamiltonian equations of motion for the test particles, namely, Eqs.~(\ref{E:ri}) and (\ref{E:pi}), are derived from the {\it total} Hamiltonian of the system of test particles, which guarantees the exact energy conservation of the system in the dynamic process~\cite{LenPRC39}.
In addition, we would like to mention that the present LHV method is implemented with GPU parallel computing~\cite{Rue2013}, which increases the computational efficiency and makes it possible to obtain more reliable results with many more ensembles.
\subsection{\label{S:INI}Initialization of nuclear ground state}
In the present framework, the Vlasov ground state of nuclei at zero temperature is obtained by varying the Hamiltonian with respect to the nuclear radial density.
Such an initialization in one-body transport models is sometimes referred to as the Thomas-Fermi~(TF) initialization~\cite{LenPRC39,DanNPA673,GaiPRC81,LHPRC99}.
Within one-body transport models, at zero temperature, the Wigner function of a nucleus in its ground state satisfies
\begin{equation}\label{E:f0}
f_{\tau}(\vec{r},\vec{p}) = \frac{2}{(2\pi\hbar)^3}\theta\big[p^F_{\tau}(\vec{r}) - |\vec{p}|\big]
\end{equation}
where $p^F_{\tau}(\vec{r})$ is the local Fermi momentum, which fulfills
\begin{equation}\label{E:pF}
p^F_{\tau}(\vec{r}) = \hbar\big[3\pi^2\rho_{\tau}(\vec{r})\big]^{1/3}.
\end{equation}
For simplicity, in the following we assume the nucleus to be spherical.
The total energy of a ground state nucleus at zero temperature can then be treated as a functional of the radial density $\rho(r)$ and its spatial gradients $\nabla_r^n\rho(r)$, i.e.,
\begin{equation}
E = \int{\cal H}\big[r,\rho_{\tau}(r),\nabla{\rho_{\tau}(r)},\nabla^2{\rho_{\tau}(r)},\cdots\big]d\vec{r}.
\end{equation}
After varying the total energy with respect to $\rho_{\tau}(r)$ and its spatial gradients, and considering Eqs.~(\ref{E:H}) and (\ref{E:f0}), we obtain the equation determining the neutron/proton radial density in a ground state nucleus~(note that for protons the contribution from the Coulomb interaction in Eq.~(\ref{E:Cou}) should also be included in the Hamiltonian density),
\begin{equation}\label{E:GS}
\frac{1}{2m}\big\{p_{\tau}^F\big[\rho_{\tau}(r)\big]\big\}^2 + U_{\tau}\big\{p_{\tau}^{\rm F}\big[\rho_{\tau}(r)\big],r\big\} = \mu_{\tau},
\end{equation}
where $\mu_{\tau}$ is the chemical potential of proton or neutron inside the nucleus and $U_{\tau}\big\{p_{\tau}^{\rm F}\big[\rho_{\tau}(r)\big],r\big\}$ is the single nucleon potential of the nucleon at the Fermi surface.
The single nucleon potential is defined as the variation of the Hamiltonian density with respect to the phase space distribution function~(or the local density in the zero temperature case) and the density gradients.
For the N$3$LO Skyrme pseudopotential, the detailed expression can be found in Ref.~\cite{WRPRC98}.
Equation~(\ref{E:GS}) has an intuitive physical meaning: from a classical point of view, the nucleons at the Fermi surface within a ground state nucleus, though possessing different Fermi momenta at different radial positions, share the same chemical potential.
The local density $\rho(\vec{r})$ of a ground state spherical nucleus is obtained by solving Eq.~(\ref{E:GS}) subject to the boundary condition
\begin{equation}
\frac{\partial\rho(r)}{\partial r}\Big|_{r = 0} = \frac{\partial\rho(r)}{\partial r}\Big|_{r = r_{\rm B}} = 0,
\end{equation}
where $r_{\rm B}$ is the boundary of the nucleus satisfying $\rho(r_{\rm B})$ $=$ $0$, and needs to be determined when solving Eq.~(\ref{E:GS}).
Since our treatment of the ground state is semiclassical, the gradient parameter $E^{[2]}$ is readjusted for each interaction so that the solution of Eq.~(\ref{E:GS}) roughly reproduces the experimental binding energy and charge radius of \isotope[208]{Pb}.
For the MSL$1$ interaction, the results in Sec.~\ref{S:R&D} are based on the readjusted gradient parameters.
In the present LHV method, the initial coordinates of the test nucleons are generated according to the solution of Eq.~(\ref{E:GS}), while their initial momenta follow the zero-temperature Fermi distribution, with the Fermi momentum in Eq.~(\ref{E:pF}) determined by the local density.
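The momentum initialization described above can be sketched as follows (illustrative Python with hypothetical input values; momenta are drawn uniformly from the local zero-temperature Fermi sphere set by Eq.~(\ref{E:pF})):

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV fm

def fermi_momentum(rho_tau):
    """Local Fermi momentum of Eq. (pF) in MeV/c, for rho_tau in fm^-3."""
    return HBARC*(3.0*np.pi**2*rho_tau)**(1.0/3.0)

def sample_momenta(rho_tau, n, rng):
    """n momenta uniformly filling the local Fermi sphere (T = 0)."""
    pf = fermi_momentum(rho_tau)
    r = pf*rng.uniform(size=n)**(1.0/3.0)   # |p| distributed as p^2 dp
    cos_t = rng.uniform(-1.0, 1.0, n)       # isotropic directions
    phi = rng.uniform(0.0, 2.0*np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.column_stack([r*sin_t*np.cos(phi),
                            r*sin_t*np.sin(phi),
                            r*cos_t])
```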
Careful readers might realize that the obtained ground state LHV density $\tilde{\rho}(\vec{r})$ is smeared owing to the form factor introduced in Eq.~(\ref{E:f}), and thus differs slightly from the solution of Eq.~(\ref{E:GS}).
Contrary to the Gaussian wave packet in quantum molecular dynamics~\cite{AicPR202}, we do not attach any physical meaning to the form factor $S(\vec{r} - \vec{r}')$ and the smoothed LHV density $\tilde{\rho}$, and regard them as numerical techniques introduced in the test-particle approach so that we can obtain well-defined densities and mean fields.
In order to compensate for the smearing of the local density caused by the form factor, an additional gradient term should be introduced, based on the following argument.
The LHV density $\tilde{\rho}$ can be regarded as the convolution of the realistic local density with the form factor,
\begin{equation}
\tilde{\rho}(\vec{r}) = \int\rho(\vec{r}')S(\vec{r} - \vec{r}')d\vec{r}'.
\end{equation}
To express $\rho$ in terms of $\tilde{\rho}$, formally we have
\begin{align}
\rho(\vec{r}) & = \int\tilde{\rho}(\vec{r}')S^{-1}(\vec{r}' - \vec{r})d\vec{r}'\notag\\
& = \int\Big[\sum_{n = 0}^{\infty}\frac{1}{n!}\nabla^n\tilde{\rho}(\vec{r})(\vec{r}' - \vec{r})^n\Big]S^{-1}(\vec{r}' - \vec{r})d\vec{r}'\notag\\
& \approx\tilde{\rho}(\vec{r}) + c\nabla^2\tilde{\rho}(\vec{r})\label{E:rho},
\end{align}
where $S^{-1}(\vec{r} - \vec{r}')$ is the inverse of $S(\vec{r} - \vec{r}')$ which satisfies
\begin{equation}
\int S(\vec{r} - \vec{r}'')S^{-1}(\vec{r}'' - \vec{r}')d\vec{r}'' = \delta(\vec{r} - \vec{r}').
\end{equation}
The parameter $c$, defined as
\begin{equation}
c \equiv \int\frac{1}{2}(\vec{r}' - \vec{r})^2S^{-1}(\vec{r}' - \vec{r})d\vec{r}',
\end{equation}
is a constant that depends only on the form of $S$.
In the LHV method, a direct correction of $\tilde{\rho}(\vec{r})$ is not feasible, since numerically the density in Eq.~(\ref{E:rho}) is not always positive.
In practice, to compensate for the smearing of the density due to the form factor, we instead introduce an additional gradient term $\tilde{E}^{[2]}\nabla^2\tilde{\rho}(\vec{r})$ into the Hamiltonian.
To demonstrate this, one need only substitute Eq.~(\ref{E:rho}) into Eq.~(\ref{E:H}); after several necessary approximations, an additional term proportional to $c\tilde{\rho}(\vec{r})\nabla^2\tilde{\rho}(\vec{r})$ appears in the Hamiltonian.
Though $\tilde{\rho}$ is not a constant, for simplicity in the present work the additional gradient term is recast effectively into $\tilde{E}^{[2]}\nabla^2\tilde{\rho}$, with $\tilde{E}^{[2]}$ being a constant.
This is equivalent to replacing the gradient term coefficient $E^{[2]}$ by $E^{[2]} + \tilde{E}^{[2]}$.
Since in principle the rms radius of the exact ground state does not change with time, $\tilde{E}^{[2]}$ is adjusted roughly so that the evolution of the ground state rms radius exhibits the smallest oscillation.
It should be mentioned that this correction to the gradient term only slightly improves the stability of the ground state evolution~(rms radius and radial density profile), and makes little difference to the results for the collective motions in Sec.~\ref{S:R&D}.
Needless to say, in the ideal case with $N_{\rm E}$ approaching infinity and the lattice spacing approaching zero, the LHV local density approaches the realistic local density, and $\tilde{E}^{[2]}$ becomes zero.
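As a simple one-dimensional numerical check of Eq.~(\ref{E:rho}) (with a toy density profile and form-factor width, not the actual LHV parameters), the sketch below smears a smooth density with a Gaussian form factor and then largely undoes the smearing with the second-order gradient correction; for a one-dimensional Gaussian form factor of width $\sigma$ one finds $c = -\sigma^2/2$:

```python
import numpy as np

# Toy 1D check of Eq. (rho): smearing a smooth density with a Gaussian
# form factor of width sigma is undone, to second order, by
# rho ~ rho_tilde + c * laplacian(rho_tilde) with c = -sigma**2/2.
dx = 0.05
x = np.arange(-12.0, 12.0, dx)
rho = np.exp(-x**2 / 8.0)              # smooth "realistic" density

sigma = 0.5                            # hypothetical form-factor width
s = np.arange(-4.0, 4.0 + dx, dx)      # form-factor support
S = np.exp(-s**2 / (2.0 * sigma**2))
S /= S.sum() * dx                      # normalized Gaussian form factor
rho_t = np.convolve(rho, S, mode='same') * dx   # smeared (LHV-like) density

c = -sigma**2 / 2.0
lap = np.gradient(np.gradient(rho_t, dx), dx)   # discrete second derivative
rho_rec = rho_t + c * lap              # second-order deconvolution

# Restrict to the interior to avoid convolution edge effects.
m = np.abs(x) < 6.0
err_smear = np.max(np.abs(rho_t[m] - rho[m]))
err_rec = np.max(np.abs(rho_rec[m] - rho[m]))
assert err_rec < err_smear             # the gradient correction helps
```

The residual error after the correction is of order $\sigma^4$, consistent with truncating the Taylor expansion at the $\nabla^2$ term.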
\begin{table}[!htb]
\centering
\caption{The gradient parameters $E^{[2]}$ and $\tilde{E}^{[2]}$ of SP$6$s, SP$6$m, SP$6$h, and MSL$1$ used in the present work.
The binding energy and proton rms radius of \isotope[208]{Pb} obtained from the TF initialization and from the LHV calculations are also shown.}
\begin{tabular}{cccccc}
\hline\hline
~ & \rm{SP$6$s} & \rm{SP$6$m} & \rm{SP$6$h} & \rm{MSL$1$} & Exp. \\
\hline
$E^{[2]}$~($\rm MeV\cdot fm^{5}$) & -250.0 & -200.0 & -150.0 & -250.0 & - \\
$\tilde{E}^{[2]}$~($\rm MeV\cdot fm^{5}$) & -15.0 & -10.0 & -10.0 & -20.0 & - \\
\\
BE~($\rm MeV$) & $1637.2$ & $1669.7$ & $1654.5$ & $1632.7$ & $1636.4$\\
$\sqrt{\langle r^2\rangle_p}$~($\rm fm$) & $5.48$ & $5.44$ & $5.40$ & $5.51$ & $5.45$\\
BE~($\rm MeV$) in LHV & $1557.2$ & $1585.1$ & $1565.1$ & $1553.5$ & -\\
$\sqrt{\langle r^2\rangle_p}$~($\rm fm$) in LHV & $5.52$ & $5.49$ & $5.44$ & $5.56$ & -\\
\hline\hline
\end{tabular}\label{T:E2}
\end{table}
In Table~\ref{T:E2}, for the interactions used in the present work, namely SP$6$s, SP$6$m, SP$6$h, and MSL$1$, we list the parameters $E^{[2]}$ and $\tilde{E}^{[2]}$, as well as the binding energy and proton rms radius of \isotope[208]{Pb} obtained from the TF initialization, i.e., the solution of Eq.~(\ref{E:GS}), and from the LHV calculations.
We note from the table that after the smearing by $S$ in the LHV method, the total energy decreases and the proton rms radius increases slightly.
However, this difference affects the stability of the ground state very little, as we will see in Sec.~\ref{S:GS}.
\subsection{\label{S:GRLHV}Nuclear giant resonance within the Vlasov equation}
We consider a perturbative excitation of the Hamiltonian at the initial time, i.e.,
\begin{equation}
\hat{H}_{ex}(t) = \lambda\hat{Q}\delta(t),
\end{equation}
where $\hat{Q}$ is the excitation operator of interest and $\lambda$ is assumed to be small.
Within the linear response theory~\cite{Fet1971}, the response of the excitation operator $\hat{Q}$ as a function of time is given by
\begin{align}
\Delta\langle\hat{Q}\rangle(t) & = \langle f|\hat{Q}|f\rangle(t) - \langle0|\hat{Q}|0\rangle(t)\notag\\
& = -\frac{2\lambda\theta(t)}{\hbar}\sum_f|\langle f|\hat{Q}|0\rangle|^2\sin\frac{(E_f-E_0)t}{\hbar},
\label{E:dQt}
\end{align}
where $|0\rangle$ is the ground state of the unperturbed system, $|f\rangle$ are the energy eigenstates of the excited system, and $E_0$ and $E_f$ are the corresponding eigenenergies of the system before and after the excitation, respectively.
We define the strength function $S(E)$ as usual through
\begin{equation}
S(E) \equiv \sum_f|\langle f|\hat{Q}|0\rangle|^2\delta(E - E_f + E_0).
\end{equation}
$S(E)$ can be expressed as a Fourier integral of $\Delta\langle\hat{Q}\rangle(t)$ by taking advantage of Eq.~(\ref{E:dQt}), i.e.,
\begin{equation}\label{E:S-Q}
S(E) = -\frac{1}{\pi\lambda}\int_0^{\infty}dt\,\Delta\langle\hat{Q}\rangle(t)\sin\frac{Et}{\hbar}.
\end{equation}
By evaluating the time evolution of the response of $\hat{Q}$ within the LHV method, we can obtain the strength function.
We assume $\hat{Q}$ is a one-body operator, i.e., it can be expressed as the sum of identical single-particle operators $\hat{q}$ acting on each nucleon,
\begin{equation}
\hat{Q} = \sum_i^A\hat{q}.
\label{E:Qq}
\end{equation}
The expectation value of $\hat{Q}$ can then be calculated as follows,
\begin{align}
\langle\hat{Q}\rangle = & \langle f|\hat{Q}|f\rangle\notag\\
= & \int\langle f|\vec{r}_1\cdots\vec{r}_N\rangle\langle\vec{r}_1\cdots\vec{r}_N|\hat{Q}|\vec{r}_1'\cdots\vec{r}_N'\rangle\notag\\
&\times\langle\vec{r}_1'\cdots\vec{r}_N'|f\rangle d\vec{r}_1\cdots d\vec{r}_Nd\vec{r}_1'\cdots d\vec{r}_N'\label{E:Qt},
\end{align}
where we have inserted two complete sets of position eigenstates.
Considering the definition of the one-body density matrix,
\begin{equation*}
\rho(\vec{r}_1,\vec{r}_1') = A\int\langle\vec{r}_1\vec{r}_2\cdots\vec{r}_N|\Phi\rangle\langle\Phi|\vec{r}_1'\vec{r}_2\cdots\vec{r}_N\rangle d\vec{r}_2\cdots d\vec{r}_N,
\end{equation*}
and combining with Eq.~(\ref{E:Qq}), we rewrite Eq.~(\ref{E:Qt}) as
\begin{equation}
\langle\hat{Q}\rangle = \int\rho(\vec{r}_1',\vec{r}_1)\langle\vec{r}_1|\hat{q}|\vec{r}_1'\rangle d\vec{r}_1d\vec{r}_1'\label{E:Q2}.
\end{equation}
For convenience, in the following we change the integration variables to $\vec{r}_1$ $=$ $\vec{r} + \frac{\vec{s}}{2}$ and $\vec{r}_1'$ $=$ $\vec{r} - \frac{\vec{s}}{2}$.
Since $f(\vec{r},\vec{p})$ is the Wigner transform of the density matrix, in coordinate space the density matrix can be expressed as
\begin{equation}
\rho\Big(\vec{r} - \frac{\vec{s}}{2},\vec{r} + \frac{\vec{s}}{2}\Big) = \int f(\vec{r},\vec{p})\exp\Big(i\frac{\vec{p}\cdot\vec{s}}{\hbar}\Big)d\vec{p}\label{E:Q3}.
\end{equation}
We define the Wigner transform of $\hat{q}$ in coordinate space,
\begin{equation}
q(\vec{r},\vec{p}) \equiv \int\exp\Big(-i\frac{\vec{p}\cdot\vec{s}}{\hbar}\Big)q\Big(\vec{r}+\frac{\vec{s}}{2},\vec{r}-\frac{\vec{s}}{2}\Big)d\vec{s}\label{E:Q4},
\end{equation}
where
\begin{equation}
q\Big(\vec{r} + \frac{\vec{s}}{2},\vec{r} - \frac{\vec{s}}{2}\Big) = \Big\langle\vec{r} + \frac{\vec{s}}{2}|\hat{q}|\vec{r} - \frac{\vec{s}}{2}\Big\rangle
\end{equation}
is the matrix element of $\hat{q}$ in coordinate space.
Substituting Eq.~(\ref{E:Q3}) and the inverse transform of Eq.~(\ref{E:Q4}) into Eq.~(\ref{E:Q2}), we reduce the expectation value of $\hat{Q}$ to the following form,
\begin{equation}
\langle\hat{Q}\rangle = \int f(\vec{r},\vec{p})q(\vec{r},\vec{p})d\vec{r}d\vec{p}\label{E:Q5}.
\end{equation}
In the LHV method, $f(\vec{r},\vec{p})$ is replaced by the $\tilde{f}(\vec{r},\vec{p})$ given in Eq.~(20).
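In the test-particle representation, the phase-space integral above reduces, to leading order (neglecting the form-factor smearing), to a simple average over test nucleons, $\langle\hat{Q}\rangle \approx \frac{1}{N_{\rm E}}\sum_i q(\vec{r}_i,\vec{p}_i)$. The following toy sketch (all numbers hypothetical, not from an actual LHV initialization) illustrates this for the choice $q = \vec{r}^2/A$:

```python
import numpy as np

# Sketch: with f represented by A*N_E test particles of weight 1/N_E,
# <Q> = (1/N_E) * sum_i q(r_i, p_i).  Illustrated for q(r,p) = r^2/A,
# sampled from an isotropic Gaussian with 3 fm width per component.
rng = np.random.default_rng(1)
A, N_E = 208, 5000
r = rng.normal(0.0, 3.0, size=(A * N_E, 3))    # fm, toy spatial sample

q = np.sum(r**2, axis=1) / A                   # q(r, p) = r^2 / A
Q = q.sum() / N_E                              # test-particle estimate of <Q>

# For this sample <Q> should approach 3 * sigma^2 = 27 fm^2.
assert abs(Q - 27.0) < 1.0
```

The statistical error of the estimate shrinks as $1/\sqrt{A N_{\rm E}}$, which is one reason large ensemble numbers are used in the calculations below.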
It can be demonstrated that the effect of the external excitation $\lambda\hat{Q}\delta(t-t_0)$ is to change
the positions and momenta of the test nucleons as follows~\cite{UrbPRC85}:
\begin{equation}\label{E:q}
\vec{r}_i \longrightarrow \vec{r}_i + \lambda\frac{\partial q(\vec{r}_i,\vec{p}_i)}{\partial\vec{p}_i},\quad\quad \vec{p}_i \longrightarrow \vec{p}_i - \lambda\frac{\partial q(\vec{r}_i,\vec{p}_i)}{\partial\vec{r}_i}.
\end{equation}
For the isoscalar monopole and isovector dipole excitation, the specific form of Eq.~(\ref{E:q}) will be given in Secs.~\ref{S:MS} and \ref{S:DV}, respectively.
\section{\label{S:R&D}Results and discussion}
In the following, we examine the ability of the present LHV method to deal with (near-)equilibrium nuclear dynamics.
Specifically, we study the ground state evolution as well as the isoscalar monopole and isovector dipole modes of \isotope[208]{Pb}.
In such (near-)equilibrium states, most nucleon-nucleon collisions are blocked by the Pauli exclusion principle, so in principle the BUU equation without the collision term, i.e., the Vlasov equation, is still applicable.
\subsection{\label{S:GS}Ground state evolution of finite nuclei}
As mentioned in Sec.~\ref{S:INI}, the initial phase space information of a ground state nucleus is obtained self-consistently by varying the total energy with respect to nucleon radial density $\rho(r)$.
The initial coordinates of the test nucleons are generated according to the solution of Eq.~(\ref{E:GS}), and the LHV density $\tilde{\rho}$ is obtained via Eq.~(\ref{E:rhoL}).
Shown in Fig.~\ref{F:DP1} is the time evolution of the radial profile of the LHV density for a ground state \isotope[208]{Pb} up to $200~{\rm fm}/c$, obtained from the present LHV method with $N_{\rm E}$ $=$ $10000$ and time step $\Delta t$ $=$ $0.4~{\rm fm}/c$ using the N$3$LO Skyrme pseudopotential SP$6$m.
We notice from Fig.~\ref{F:DP1} that the LHV density approximates reasonably well the realistic ground state of the Vlasov equation, or the solution of Eq.~(\ref{E:GS}), since the profile of the LHV density only exhibits very small fluctuations.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\linewidth]{DP200.eps}
\caption{The time evolution of radial density profile of ground state \isotope[208]{Pb} based on the present LHV method, with N$3$LO Skyrme pseudopotential SP$6$m up to $200~{\rm fm}/c$.}
\label{F:DP1}
\end{figure}
To see more clearly the stability of the ground state evolution of \isotope[208]{Pb} within the present LHV method, the ground state evolution is continued up to $1000~{\rm fm}/c$, and we present in Fig.~\ref{F:DP2} the radial profiles of LHV density with a time interval of $200~{\rm fm}/c$.
Again, only small fluctuations are observed in Fig.~\ref{F:DP2}, which indicates that the present LHV method is capable of studying long-time nuclear processes, e.g., heavy-ion fusion reactions and nuclear spallation reactions.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\linewidth]{DP1000.eps}
\caption{Same as Fig.~\ref{F:DP1} but up to 1000~${\rm fm}/c$.}
\label{F:DP2}
\end{figure}
Apart from the radial density profile, other properties concerning the stability of the LHV method are also examined.
To that end, the time evolution of rms radius, fraction of bound nucleons and binding energy are presented in Fig.~\ref{F:TE}.
The calculations are performed with time step $\Delta t$ $=$ $0.4~{\rm fm}/c$, and $N_{\rm E}$ $=$ $5000$ and $10000$, respectively.
Free test nucleons are those that do not interact with other test nucleons~(i.e., their form factors $S$ do not overlap), and they are excluded when calculating the fraction of bound nucleons and the rms radius.
We notice from Fig.~\ref{F:TE}(a) that both cases with different $N_{\rm E}$ give a fairly stable evolution of the rms radius.
For the $N_{\rm E}$ $=$ $5000$ case, the rms radius starts to decrease after about $800~{\rm fm}/c$.
This decrease is due to the evaporation of nucleons from the bound nuclei, which is demonstrated in Fig.~\ref{F:TE}(b).
Such evaporation of nucleons is inevitable in transport model simulations due to the limited precision of the numerical realization, but it can be suppressed by increasing $N_{\rm E}$, as can be seen in Fig.~\ref{F:TE}(b), though the result with $N_{\rm E}$ $=$ $5000$ is already satisfactory.
Owing to the advantage of the lattice Hamiltonian framework, the binding energy of the given nucleus is conserved almost exactly in both cases, as shown in Fig.~\ref{F:TE}(c).
Figure \ref{F:TE} indicates that the present LHV method is able to give fairly stable time evolution.
Due to the high efficiency of GPU parallel computing, including more ensembles in the LHV calculation becomes possible.
However, as a balance between computational resources and numerical accuracy, unless otherwise specified, the following calculations are performed with time step $\Delta t$ $=$ $0.4~{\rm fm}/c$ and $N_{\rm E}$ $=$ $5000$.
\begin{figure}[!htp]
\centering
\includegraphics[width=1.0\linewidth]{TE.eps}
\caption{Time evolution of (a)~rms radius, (b) the fraction of bound nucleons and (c) binding energy of ground state \isotope[208]{Pb} with N$3$LO Skyrme pseudopotential SP$6$m up to $1000~{\rm fm}/c$.
Calculations are performed with time step $\Delta t$ $=$ $0.4~{\rm fm}/c$, and $N_{\rm E}$ $=$ $5000$ and $10000$, respectively.}
\label{F:TE}
\end{figure}
\subsection{\label{S:MS}Isoscalar monopole mode of \isotope[208]{Pb}}
During the past several decades, the isoscalar giant monopole resonance~(ISGMR) of finite nuclei has been studied extensively, since it provides information on both the symmetric and asymmetric parts of the nuclear matter incompressibility~\cite{YouPRL82,ShlPRC47,TLPRL99,PatPLB718,PatPLB726,GupPLB760}, which are fundamental quantities characterizing the EOS of nuclear matter.
Experimentally, the isoscalar monopole mode is measured through the scattering of light isoscalar particles off nuclei, and recent experiments have been performed with inelastic $\alpha$-particle and deuteron scattering~\cite{TLPRL99,MonPRL100,PatPLB718,PatPLB726,VanPRL113,GupPLB760}.
From the one-body transport model point of view, the isoscalar monopole mode is regarded as a compressional breathing of the nuclear fluid.
Such a mode can be generated in the LHV framework through the following procedure.
For the isoscalar monopole mode, we have
\begin{equation}
\hat{Q}_{\rm ISM} = \frac{1}{A}\sum_i^{\rm A}\hat{r}_i^2\label{E:QMS},\quad\hat{q}_{\rm ISM} = \frac{\hat{r}^2}{A},
\end{equation}
and thus according to Eq.~(\ref{E:Q4}), we obtain
\begin{equation}
q_{\rm ISM}(\vec{r},\vec{p}) = \frac{\vec{r}^2}{A}.
\end{equation}
Note that the square root of the expectation value of $\hat{Q}_{\rm ISM}$ is the rms radius of the given nucleus.
According to Eq.~(\ref{E:q}), to obtain the isoscalar monopole mode, the initial phase space coordinates of the test nucleons are changed with respect to those of the ground state by
\begin{equation}
\vec{p}_i\longrightarrow\vec{p}_i - 2\lambda\frac{\vec{r}_i}{A}.
\end{equation}
The spatial coordinates of the test nucleons remain unchanged since $q_{\rm ISM}$ is independent of momentum.
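As a minimal numerical sketch (toy coordinates and momenta, not from an actual LHV initialization), the monopole boost above conserves the total momentum of a centered distribution while injecting an inward radial (breathing) flow:

```python
import numpy as np

# Toy check: apply the isoscalar monopole boost p_i -> p_i - 2*lam*r_i/A
# to a random, centered test-particle sample.
rng = np.random.default_rng(0)
A = 208
r = rng.normal(0.0, 3.0, size=(A, 3))    # fm, toy spatial sample
r -= r.mean(axis=0)                      # center of mass at the origin
p = rng.normal(0.0, 150.0, size=(A, 3))  # MeV/c, toy momenta
p -= p.mean(axis=0)

lam = 100.0                              # perturbation parameter, as in the text
p_new = p - 2.0 * lam * r / A

# The boost injects a radial velocity field but no net momentum.
assert np.allclose(p_new.sum(axis=0), 0.0, atol=1e-6)
# The net radial momentum decreases: initial compression of the breathing mode.
rad0 = np.sum(p * r)
rad1 = np.sum(p_new * r)
assert rad1 < rad0
```

The decrease of $\sum_i \vec{p}_i\cdot\vec{r}_i$ by $2\lambda\sum_i \vec{r}_i^2/A$ is exactly the compression that subsequently drives the rms-radius oscillation.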
\begin{figure}[!hbt]
\centering
\includegraphics[width=1.0\linewidth]{Q-MS.eps}
\caption{The time evolution of $\Delta\langle\hat{Q}\rangle_{\rm ISM}$ of \isotope[208]{Pb} after a perturbation of $\hat{H}_{ex}(t)$ $=$ $\lambda\hat{Q}_{\rm ISM}\delta(t)$ with $\lambda$ $=$ $100~{\rm MeVfm^{-1}}/c$.
The results correspond to three N$3$LO Skyrme pseudopotentials, SP$6$s, SP$6$m, and SP$6$h, and one conventional Skyrme interaction, MSL$1$.
\label{F:QMS}
\end{figure}
In Fig.~\ref{F:QMS} we show the time evolution of $\Delta\langle\hat{Q}\rangle_{\rm ISM}$ with three N$3$LO Skyrme pseudopotentials, namely, SP$6$s, SP$6$m, and SP$6$h, as well as one conventional Skyrme interaction MSL$1$.
The perturbation parameter $\lambda$ is chosen to be $100~{\rm MeVfm^{-1}}/c$ in the calculation.
We notice from the figure that the time evolution of $\Delta\langle \hat{Q}\rangle_{\rm ISM}$, or equivalently the rms radius, exhibits a very regular oscillation, and the rapid increase commonly observed in BUU simulations using the conventional test particle method does not show up here.
Besides that, only a slight damping is observed in the oscillation, which is anticipated since the only damping mechanism in the Vlasov framework is Landau damping.
Landau damping is caused by one-body dissipation which is governed by a coupling of single-particle and collective motion.
It should be mentioned that in the RPA framework, the damping also only comes from one-body dissipation, since the coupling to more complex states like two-particle two-hole~($2p$-$2h$) states is missing in RPA~\cite{BerRMP55}.
In the semiclassical framework, effects analogous to the $2p$-$2h$ excitation can be included via a nucleon-nucleon collision term~\cite{BurNPA476}, and the width of the strength indeed increases due to the inclusion of the collision term~\cite{GaiPRC81}, but this is beyond the scope of the present work, and will be pursued in the future.
The strength function is obtained from the time evolution of the response presented in Fig.~\ref{F:QMS} through Eq.~(\ref{E:S-Q}).
When calculating the strength function, the response $\Delta\langle \hat{Q}\rangle(t)$ is multiplied by an exponential attenuation $e^{-\gamma t/2\hbar}$, which is a common practice in giant resonance calculations within the BUU model~\cite{UrbPRC85,HKPRC95}.
Such an attenuation is introduced to avoid the artificial oscillations in the Fourier transform of Eq.~(\ref{E:S-Q}) that are caused by the finite period of the evolution.
In this work the attenuation parameter $\gamma$ is set to $2~\rm MeV$, both here and in Sec.~\ref{S:DV} for the isovector dipole mode.
This exponential attenuation does not affect the peak energy of the strength function, on which we mainly focus.
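The procedure can be illustrated with a synthetic single-frequency response (toy parameter values; the assumed peak energy $E_0$ is hypothetical): the discretized Eq.~(\ref{E:S-Q}), including the attenuation, recovers the input frequency as the peak of the strength function:

```python
import numpy as np

# Recover the strength function from a synthetic response
# dQ(t) = -lam*sin(E0*t/hbar), with attenuation exp(-gamma*t/2hbar).
hbar = 197.327                 # MeV fm/c
lam = 100.0                    # perturbation strength (toy value)
E0 = 13.9                      # MeV, assumed peak energy
gamma = 2.0                    # MeV, attenuation parameter

dt = 0.5                                         # fm/c
t = np.arange(0.0, 2000.0, dt)
dQ = -lam * np.sin(E0 * t / hbar)                # synthetic response

E = np.linspace(5.0, 25.0, 401)                  # MeV grid
att = np.exp(-gamma * t / (2.0 * hbar))
integrand = dQ * att * np.sin(np.outer(E, t) / hbar)
S = -integrand.sum(axis=1) * dt / (np.pi * lam)  # discretized Eq. (S-Q)

E_peak = E[np.argmax(S)]
assert abs(E_peak - E0) < 0.2  # attenuation broadens but does not shift the peak
```

The attenuation turns the delta-like peak into a Lorentzian of width $\sim\gamma$, leaving its position unchanged, which is why the peak energies quoted below are insensitive to the choice of $\gamma$.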
The obtained strength functions of the isoscalar monopole mode with SP$6$s, SP$6$m, SP$6$h, and MSL$1$ are presented in Fig.~\ref{F:SMS}.
The grey band represents the peak energy of $13.91\pm0.11~\rm MeV$ from inelastic $\alpha$-scattering off \isotope[208]{Pb} performed at TAMU~\cite{YouPRL82} while the cyan band of $13.7\pm0.1~\rm MeV$ represents that from the Research Center for Nuclear Physics~(RCNP)~\cite{PatPLB726}.
We notice from the figure that all these interactions give peak energies consistent with the experimental ones, which is a consequence of the proper nuclear incompressibility $K_0$ of the given interactions, as listed in Table~\ref{T:CPs}.
In order to compare the results from the LHV method with those from other approaches, we also calculate the strength function of MSL$1$ based on the RPA.
The Skyrme-RPA code by Col\`o {\it et al.}~\cite{ColCPC184} is employed.
The obtained peak energy is indicated by a green arrow.
The peak energies of MSL$1$ from the different approaches are generally comparable, and the small discrepancy comes from the difference between the semiclassical and quantum treatments.
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\linewidth]{SE-MS.eps}
\caption{Strength function of isoscalar monopole mode calculated based on the LHV method with SP$6$s, SP$6$m, SP$6$h, and MSL$1$.
The experimental peak energy and that from RPA calculation with MSL1 are also included for comparison.}
\label{F:SMS}
\end{figure}
\subsection{\label{S:DV}Isovector dipole mode of \isotope[208]{Pb}}
The isovector giant dipole resonance~(IVGDR) of finite nuclei is the oldest known nuclear collective motion.
Systematic experimental studies of the IVGDR based on photonuclear reactions can be traced back more than forty years~\cite{BerRMP47}.
Recent measurements on isovector dipole response have been performed based on inelastic proton scattering at RCNP for \isotope[48]{Ca}~\cite{BirPRL118}, \isotope[120]{Sn}~\cite{HasPRC92}, and \isotope[208]{Pb}~\cite{TamPRL107}, as well as by using Coulomb excitation in inverse kinematics at GSI for \isotope[68]{Ni}~\cite{RosPRL111}.
It is interesting to mention that in recent years a low-lying mode called pygmy dipole resonance~(PDR) has attracted a lot of attention both experimentally~\cite{RyePRL89,AdrPRL95,WiePRL102,EndPRL105} and theoretically~\cite{CarPRC81,UrbPRC85,BarPRC88,BarEPJD68}.
It is well known from theoretical studies based on various models that the PDR, the IVGDR, and the electric dipole polarizability $\alpha_D$, which is dominated by these isovector dipole modes, provide sensitive probes of the density dependence of the nuclear symmetry energy~\cite{YilPRC72,TriPRC77,PiePRC85,RocPRC88,ZZPRC92,RocPRC92,HKPRC95}.
Within the LHV method, the external perturbation for the isovector dipole mode can be expressed in the following form:
\begin{equation}
\hat{Q}_{\rm IVD} = \frac{N}{A}\sum_i^{\rm Z}\hat{z}_i - \frac{Z}{A}\sum_i^{\rm N}\hat{z}_i\label{E:QDV},
\end{equation}
which is defined so that the center of mass of the nucleus stays at rest.
As in the isoscalar monopole case, the excited system is obtained by changing the initial phase space coordinates of the test nucleons according to
\begin{equation}
p_z \longrightarrow
\begin{cases}
p_z - \lambda\frac{N}{A}, & {\rm for~protons},\\
p_z + \lambda\frac{Z}{A}, & {\rm for~neutrons}.
\end{cases}
\end{equation}
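A quick arithmetic check, using the $Z$ and $N$ of \isotope[208]{Pb} and the $\lambda$ value quoted below, confirms that this boost imparts no net momentum, so the center of mass indeed stays at rest:

```python
# Check that the isovector dipole boost leaves the center of mass at rest:
# protons receive -lam*N/A, neutrons +lam*Z/A along the z axis.
Z, N = 82, 126
A = Z + N
lam = 25.0                        # MeV/c, as used in the text

dp_proton = -lam * N / A
dp_neutron = +lam * Z / A
total_dpz = Z * dp_proton + N * dp_neutron
assert abs(total_dpz) < 1e-9      # no net momentum: CM stays at rest

# Protons and neutrons oscillate against each other along z.
assert dp_proton < 0 < dp_neutron
```

The cancellation $Z(-\lambda N/A) + N(\lambda Z/A) = 0$ holds for any $Z$ and $N$, which is precisely why the operator in Eq.~(\ref{E:QDV}) carries the $N/A$ and $Z/A$ weights.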
\begin{figure}[!hbt]
\centering
\includegraphics[width=1.0\linewidth]{Q-DV.eps}
\caption{Same as Fig.~\ref{F:QMS} but for isovector dipole mode.}
\label{F:QDV}
\end{figure}
After initializing and exciting the nucleus, we observe in Fig.~\ref{F:QDV} the damped oscillations of the isovector dipole response in \isotope[208]{Pb} with SP$6$s, SP$6$m, SP$6$h, and MSL$1$.
In the isovector dipole case, $\lambda$ is set to be $25~{\rm MeV}/c$.
Similar to the isoscalar monopole case, in Fig.~\ref{F:SDV} the corresponding strength functions of the isovector dipole response are displayed.
The vertical cyan line represents the experimental peak energy of $13.4~\rm MeV$ obtained from $\isotope[208]{Pb}(p,p')$ reaction performed at RCNP~\cite{TamPRL107} and the green arrow indicates the peak energy from the RPA calculation in MSL$1$.
Since these interactions give a reasonable description of the empirical isospin-dependent behavior at and below saturation density, as indicated in Table~\ref{T:CPs}, the peak energies from the different interactions are consistent with the experimental data.
The peak energies of MSL$1$ from the LHV method and the RPA calculation are comparable as well.
In Fig.~\ref{F:QDV}, the responses obtained with SP$6$s, SP$6$m, and MSL$1$ clearly show non-single-frequency behavior.
This is also evident in Fig.~\ref{F:SDV} from the additional strength, or even extra bumps, in the high energy region.
Further tests show that the high-energy bump is related to the magnitude of the gradient terms in Eq.~(\ref{E:Hgrad}), which provide an additional restoring force out of phase with that from the local part.
This indicates that the effect of the gradient terms, usually omitted in one-body transport models, should not be overlooked.
We notice that such high-energy bumps or peaks have also been observed in previous studies based on one-body transport models with gradient terms~\cite{HZEPJWC117,HZPRC94}.
One possible reason is that the larger the gradient term, the more easily the isovector dipole excitation invokes other modes, which then react back on the isovector dipole mode and cause the additional high-energy strength.
In principle, such nonlinear modes can be avoided if the perturbation parameter $\lambda$ is small enough~\cite{SimPRC68}. However, this is impracticable in one-body transport model simulations due to the limited precision in the numerical realization.
Since the high-energy bump or peak is absent in experimental data~\cite{TamPRL107}, we attribute it to a numerical problem, and further investigation is needed in the future.
\begin{figure}[!hbt]
\centering
\includegraphics[width=1.0\linewidth]{SE-DV.eps}
\caption{Same as Fig.~\ref{F:SMS} but for isovector dipole mode.}
\label{F:SDV}
\end{figure}
\section{\label{S:S&O}Summary and outlook}
We have developed a one-body transport model by employing the lattice Hamiltonian method to solve the Vlasov equation with nuclear mean-field obtained from the N$3$LO Skyrme pseudopotential.
The ground states of nuclei are obtained with the same interactions through varying the total energy with respect to the nucleon density distribution.
Owing to the self-consistent treatment of initial nuclear ground state and the exact energy conservation of the LHV method, the present framework for solving the Vlasov equation exhibits very stable nuclear ground state evolution.
As a first application of the new LHV method, we have calculated the isoscalar monopole and isovector dipole modes of finite nuclei.
The obtained peak energies of ISGMR and IVGDR in \isotope[208]{Pb} with the N$3$LO Skyrme pseudopotentials are consistent with the experimental data.
In addition, the use of the Skyrme interaction enables us to compare the LHV results with those from a conventional nuclear structure method, i.e., the RPA, and the obtained peak energies of the ISGMR and IVGDR in \isotope[208]{Pb} have been shown to be comparable between the LHV method and the RPA calculation.
Our results have demonstrated the capability of the present LHV method in dealing with the ground state of finite nuclei and the near-equilibrium nuclear dynamics.
The present work provides a solid foundation not only for long-time Vlasov calculation of low energy heavy-ion reactions, but also for the BUU calculation with a nucleon-nucleon collision term.
Based on the future lattice Hamiltonian BUU method, one can use the Skyrme pseudopotentials to simultaneously explore both the structure properties of finite nuclei and heavy-ion collisions at intermediate to high energies.
Cross-checks of nuclear effective interactions, and thus of the EOS of asymmetric nuclear matter, from nuclear structure and heavy-ion collisions thus become possible.
Such studies are in progress and will be reported elsewhere.
\section*{Acknowledgments}
We thank P. Danielewicz and B.-A. Li for helpful discussions,
and M. Gao and X. Zhang for the maintenance of the GPU servers.
This work was supported in part by the National Natural Science
Foundation of China under Grant No. 11625521, the Major State Basic Research
Development Program (973 Program) in China under Contract No.
2015CB856904, the Program for Professor of Special Appointment (Eastern
Scholar) at Shanghai Institutions of Higher Learning, Key Laboratory
for Particle Physics, Astrophysics and Cosmology, Ministry of
Education, China, and the Science and Technology Commission of
Shanghai Municipality (11DZ2260700).
HD 5980 is an amazing system. Niemela (1988) was the first to point out that it consists of
two Wolf-Rayet-like components referred to as {\it star A} and {\it star B} in a relatively
close, eclipsing and excentric orbit (P$=$19.3, e$\sim$0.3) and a third source, referred to as
{\it star C}, that may simply lie along the line-of-sight to the close pair. The spectral
characteristics and visual brightness of the system underwent significant changes
between the late 1970's and 1993, when it entered an eruptive state that lasted
$\sim$1 year (Bateson \& Jones 1994; Barb\'a et al. 1995; Koenigsberger et al. 1995).
The activity involved an increase in visual brightness and mass-loss rate, and a
decrease in wind velocity and effective temperature, similar to the eruption
phenomena observed in luminous blue variables (LBVs). The radial velocity variations observed
in the very rich emission-line spectrum that appeared after the eruption led Barb\'a et al.
(1996; 1997) to conclude that the instability producing the outbursts originated in star A.
A detailed review of HD 5980's properties is provided by Koenigsberger (2004).
The ZAMS masses of star A and star B are inferred to be $\geq$100 M$_\odot$ (Koenigsberger 2004).
The substantial mass loss required for them to have reached their current masses
(M$_A\sim$50 M$_\odot$, M$_B\sim$30 M$_\odot$, Niemela et al. 1997) may have been achieved
through multiple events such as that of 1993/1994. LBVs are associated with the
evolutionary state during which large quantities of mass are ejected, allowing the star to
reach the W-R phases with a highly depleted hydrogen envelope. With the possible exception
of $\eta$ Carinae, there is no known LBV with such a developed W-R spectrum as that displayed
by HD 5980.
\subsection{A changing wind-momentum ratio}
The spectrum of HD 5980 in the late 1970's, with its broad He II and N V lines,
was typical of the ``early" W-R stars of the nitrogen sequence (WNE; van der Hucht, 2001).
This spectrum is believed to originate in the wind of star B. Over the next decade,
however, numerous lines from lower-ionization atomic species appeared, implying a growing
presence of a cooler stellar wind, which we now attribute to star A. Clearly, the
emerging dominance of star A's wind implies changes in the characteristics of the
wind-wind interaction (WWI) region.
Emission-line profile variations observed in optical and UV wavebands are phase-locked and
should, in principle, provide information on the geometry of the changing WWI region
(Moffat et al. 1998). Surprisingly, however, the nature of the variability has remained
the same ever since it was first reported by Breysacher \& Westerlund (1978) and quantified by
Breysacher, Moffat \& Niemela (1982). The variations consist of periodic changes in width
and degree of asymmetry as a function of orbital phase. Figure 1 illustrates the
N IV] 1486 \AA\ emission line variability: it is always narrower and sharply peaked
near the eclipse when star B is ``in front" ($\phi\sim$0.40), while becoming broader and
weaker when both stars are unocculted at $\phi=$0.83. The three epochs that are displayed
correspond to pre-eruption (1991), post-eruption (1999) and $\sim$ 1 year after maximum (1995).
This persistent trend is unexpected since the geometry of the WWI regions depends on the momentum
ratio of the stellar winds, a ratio that changed over time as star A's wind became
more dominant. Thus, HD 5980 appears to provide another example of the discrepancies that
arise when confronting the current WWI models with observations (see presentations
in this Volume by Rauw, Pollock, Williams, among others). But at the same time,
because of the large observational database available for HD 5980, its behavior
may provide a clue to identifying the source of the discrepancies.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{Line profiles of the semi-forbidden N IV emission line at $\lambda$ 1486 \AA~
observed at two different orbital phases ($\phi\sim$0.40 and 0.83 --dots) for each of
three different epochs (1991, 1995 and 1999). The qualitative nature of the line-profile
variations remains constant. Sets of profiles for different epochs are vertically displaced
for clarity in the figure.}
\end{figure}
\subsection{Tides and non-stationary, asymmetric winds}
Like many of the intriguingly active binaries, HD 5980 has an eccentric orbit. In such
systems, the tidal forces are time-variable. The preliminary exploration of
tidal effects in HD 5980 led to the conclusion that they are non-negligible (Koenigsberger
et al. 2002), and thus raised the question of whether they may be responsible for
some of the system's peculiarities.
Recent results of our calculations (Moreno \& Koenigsberger 2007) indicate that the tidal flows
near the stellar surface can liberate considerable amounts of energy through dissipative
processes. The magnitude of the energy dissipation rate, $\dot{E}$, depends on the stellar
and orbital parameters. In the following sections, we describe the two basic conclusions
of the calculations: 1) maximum $\dot{E}$ occurs {\em after} periastron passage, not at periastron;
and 2) at certain orbital phases, larger values of $\dot{E}$ are generated at intermediate
polar angles than in the equatorial belt. These results are relevant for WWI theory
if we assume that $\dot{E}$ contributes towards enhancing the stellar mass-loss rate.
The winds in eccentric binaries such as HD 5980 would then be {\em intrinsically}
non-spherically symmetric and time-dependent, thus leading to discrepancies when comparing
the observational diagnostics of WWI with the predictions of stationary models.
\section{Energy dissipation from tidal flows in HD 5980}
Tidal effects are important when a star's rotation, $\omega$, is not synchronized with the
orbital motion, $\Omega$. In eccentric binaries, this is generally the case since the orbital
motion varies with phase while the stellar rotation rate remains constant. There is no
direct observation of $v \sin i$ for star A or star B. However, the low-amplitude variations of
star C's narrow absorption lines over the 19.3-day orbit have been interpreted
in terms of contamination by very broad absorptions arising in star A but not visible due
to their superposition on the emission lines (Georgiev \& Koenigsberger 2004). Under
this assumption, $v \sin i\geq$200 km/s, thus providing an estimate for the ratio
$\omega$/$\Omega_{per}\sim$2.33, where $\Omega_{per}$ is the orbital angular velocity at
periastron. This means that star A rotates super-synchronously throughout the orbital
cycle.
The basic method is described in Moreno \& Koenigsberger (1999) and Moreno et al. (2003).
With the recent extensions (Moreno \& Koenigsberger 2007), the code now computes the amplitudes of
the tidal flow in a thin surface layer over the entire stellar surface. These amplitudes
are used to estimate the shear energy dissipation rates, $\dot{E}$, that arise from the
relative motions of different surface layers using an extension of the approach described
in Toledano et al. (2006).
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{Predicted energy dissipation rate per unit density due to the tidal
flows on the stellar surface of star A as a function of orbital phase. Maximum
rates occur {\em after} periastron passage. Star A is ``in front'' at $\phi=$0,
and periastron passage occurs at $\phi \sim$0.07.
}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{Predicted energy dissipation rate per unit density due to the tidal
flows on the stellar surface of star A as a function of latitude at periastron and
several times (in days) after periastron passage. Note that $\dot{E}$ is greater
at intermediate polar angles ($\theta \sim$20--50$^\circ$) than at the equator
around $\sim$days 3--6 after periastron.
}
\end{figure}
Figure 2 illustrates the time-dependence of $\dot{E}$ as a function of orbital
phase from our HD 5980 model calculation. The first thing to note is that the
maximum rates are generated after periastron passage, $\phi \sim$0.1--0.25, with
minimum values around $\phi=$0.8. Thus, it is tempting to suggest that the
persistent line-profile variability shown in Figure 1 may be associated with
the orbital-phase dependent changes in $\dot{E}$. But in what way do the tidal
effects produce these variations?
The answer may lie in the distribution in latitude of $\dot{E}$. Figure 3 illustrates
$\dot{E}=\dot{E}(\theta)$ for several different times within the orbital cycle.
At periastron (dotted curve) and until $\sim$2 days thereafter, maximum $\dot{E}(\theta)$
occurs at the equator and systematically decreases towards the pole ($\theta=$0 at the pole).
By day $\sim$3 and until apastron, however, there is a distinct change in this trend whereby
maximum $\dot{E}(\theta)$ now occurs at $\theta \sim$30--50$^\circ$. If we assume that
$\dot{E} \rightarrow \dot{M}$, the results shown in Figure 3 imply that between days $\sim$3
and $\sim$6 after periastron passage ($\phi \sim$0.2 -- 0.4) mass outflow from regions at
intermediate polar angles is more intense than from the equator. A denser ``polar'' wind
would produce a narrower and sharply peaked emission-line profile, as seen in Figure 1 for
$\phi \sim$0.40. Additional observational evidence for polar outflows within this orbital
phase interval is provided by Villar-Sbaffi et al. (2003) who state that ``the mass-loss of HD 5980
around $\phi=$0.36 presented fluctuations in axial symmetry ranging from very rapid density
enhancements along the orbital plane to polar ejections.'' At orbital phase $\phi\sim$0.8,
$\dot{E}(\theta)$ has a relatively small gradient between the pole and the equator and is
significantly weaker than near periastron. Thus, the stellar wind structure of star A should
be more spherically symmetric at this phase.
Clearly, there are additional contributions to producing the line-profile variations, such as
the physical and wind eclipses, as well as contributions from the WWI zone and all of these
need to be considered. However, it is encouraging that the asymmetries in wind structure
predicted by the tidal interaction model are consistent with the emission-line profile
variability.
\section{Final reflections}
A one-layer tidal interaction model for HD 5980 predicts that the energy dissipation
rate due to the tidal shearing flows is time-dependent and non-spherically symmetric.
Speculating that $\dot{E}$ contributes towards the stellar wind mass-loss leads to the
conclusion that $\dot{M}$ may also be locally enhanced near specific surface locations.
In particular, outflows at intermediate polar angles may be stronger than at the
equator at particular orbital phases. Furthermore, the model predicts that maximum $\dot{E}$
should occur after periastron passage, thus implying an overall enhanced $\dot{M}$ compared
to other orbital phases.
Post-periastron events seem to occur in a wide variety of binary systems, such
as WR 140, $\eta$ Carinae and others, raising the question of whether the tidal
effects described above are more prevalent than one may have anticipated. Is it
possible that the stronger WWI effects (``outbursts'') that occur after periastron passage are
associated with stronger mass-loss rates at these phases, induced by tidal instabilities? If
this were the case, the source of the discrepancies between wind-wind interaction model
predictions and the observations may simply reside in the assumption of stationary and
spherically symmetric winds.
It is interesting to note that the hypothesis of enhanced $\dot{M}$ arising
from $\dot{E}$ may not be entirely unreasonable. Given HD 5980's huge UV luminosity
(see, for example, Koenigsberger 2004), it is likely to be on the verge of the Eddington
Limit, as other W-R stars appear to be (Goeffner, this workshop). Thus, small additions
of energy in sub-photospheric layers could drive it to a super-Eddington state. We speculate
that viscous shear energy dissipation resulting from the tidal forces may be a non-negligible
contributor to this small needed additional energy.
Within this context, a final consideration concerns the sudden eruptive events in HD 5980.
Monitoring of HD 5980 at visual and UV wavebands prior to, during and after its eruptions has
yielded a unique data set that provides clues for constraining the eruption mechanism. For
example, ultraviolet observations suggest that the onset of the eruptive state involved rapid
transitions between a fast and a slow stellar wind (Koenigsberger 2004). Hence, it is likely
that the eruption occurred when the wind became so dense that the bistability limit was
crossed (Lamers, Snow \& Lindholm 1995). But this is only the symptom of a more deep-seated
phenomenon that causes the instability leading to the enhanced density wind in the first place.
Tidal effects are very sensitive to the star's radius. If HD 5980 (and other similar stars) are
undergoing an evolutionary transition by which outer layers are expanding, the amplitudes of
the tidal flows are expected to grow significantly. If the hypothesis that
$\dot{E} \rightarrow \dot{M}$ can be shown to stand on firm ground, this would provide a
mechanism to remove significant amounts of mass as the star tries to evolve towards the
red end of the H-R Diagram. Whether the mass-shedding occurs as episodic eruptions, such
as we have observed in HD 5980, or whether it occurs through a sustained high-density wind, requires
an understanding of the $\dot{E} \rightarrow \dot{M}$ process. Since our model neglects the
effects of intrinsic stellar oscillation modes, effective temperature variations and radiation
pressure, we are unable to go beyond the speculative realm at this time.
\begin{acknowledgements}
GK thanks Jeff Kuhn and Stan Owocki for very helpful discussions.
Support from PAPIIT/DGAPA grant IN119205 is gratefully acknowledged.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
Neural audio signal processing has set a new state of the art in many fields, such as audio source separation \cite{Stoller2018Wave-U-Net:Separation}, text-to-speech \cite{Ping2018ClariNet:Text-to-Speech}, timbre transfer \cite{Engel2019DDSP:Processing} and unconditional generation \cite{Dhariwal2020Jukebox:Music}.
Recent works on neural audio synthesis such as DDSP \cite{Engel2019DDSP:Processing}, melGAN \cite{Kumar2019MelGAN:Synthesis} or RAVE \cite{Caillon2021RAVE:Synthesis} have made it possible to perform deep audio synthesis faster than real-time. These methods pave the way towards the integration of neural synthesis and processing inside real-time audio applications.
Amongst these, models based on recurrent layers (DDSP \cite{Engel2019DDSP:Processing} or RNNoise \cite{Valin2017RNNoise}) are built to process time series sequentially. Therefore, they are naturally fit to process live audio streams by caching their recurrent state in-between DSP calls. However, this is not the case for models based on convolutional networks \cite{Lecun1995ConvolutionalTime-series}, since their reliance on \textit{padding} causes audible phase discontinuities between consecutive audio buffers (e.g., clicks), which prevents their use for real-time audio applications. A simple solution to address this problem would be to rely on the \textit{overlap-add} method, where we process large overlapping audio buffers and cross-fade them to smooth out phase discontinuities. While this method is straightforwardly compatible with any generative model, processing overlapping buffers leads to redundant computations and degraded quality during transition phases.
In addition, this method requires caching buffers that are large enough to fill the receptive field of the model in order to avoid edge effects. This results in a high latency between the input and output of the model during inference. A more specific solution has been proposed through the idea of \textit{streaming} models \cite{Rybakov2020StreamingDevices,Zeghidour2021SoundStream:Codec} that use \textit{causal} convolutional layers. These layers replace padding during inference with a cached internal or external state. Although this mechanism allows the use of convolutional models on live audio streams, it usually degrades the model accuracy due to the aforementioned causal constraint.
In this article, we propose a method to make non-causal convolutional neural networks streamable without impacting the audio quality or introducing computational redundancies.
We achieve this by making the model causal \textit{after training}, leveraging additional internal delays in order to preserve the original computational graph of the model. Hence, our method can be applied to models that were already trained in a non-causal way. As an application case, we use our method to make the recent RAVE model \cite{Caillon2021RAVE:Synthesis} streamable in real-time. However, our approach can be applied straightforwardly to any convolution-based model. We compare our method with several \textit{overlap-add} alternatives using both quantitative and qualitative metrics. We demonstrate that our method outperforms all other baselines in inference speed, while behaving exactly like the original model in terms of audio quality. Finally, we develop several applications leveraging the streaming RAVE model in order to provide regular digital audio workstations with real-time neural audio processing abilities. All of our experiments, methods and source code are packaged as an open-source Python library available online\footnote{\url{https://acids-ircam.github.io/cached_conv}}.
\section{State of art}
\label{sec:soa}
\subsection{Convolutional Neural Networks}
\begin{figure*}[ht]
\centering
\includegraphics[width=.75\linewidth]{img/convolution_cache.pdf}
\caption{Convolution applied on two split buffers using cached padding. The last $\mathbf N$ frames from input buffer 1 are cached and concatenated with the input buffer 2 (with $\mathbf N$ being the original amount of zero padding) in order to prevent discontinuities between buffers.}
\label{fig:cached_convolution}
\end{figure*}
We consider a 1-dimensional convolutional layer with a kernel $\omega \in \mathbb R^{N\times M\times K}$ applied on an input tensor $x \in \mathbb R^{M\times T}$. The resulting tensor $y$ is defined by
\begin{align}
\mathbf y^n[i] = \sum_{m=0}^{M-1}\sum_{k=0}^{K-1}\omega^{n,m}[k] \, \mathbf x^m[i + k]
\label{eq:correlation}
\end{align}
where $\mathbf y \in \mathbb R^{N \times (T-K+1)}$.
Due to the size of the kernel $\omega$, the temporal size of $y$ is smaller than the input $x$.
When stacking convolutional layers, this can lead to a significant dimensionality reduction that may be unwanted.
To tackle this issue, convolutional layers are often used in combination with zero-\textit{padding}.
Padding is used to artificially augment the dimensionality of a tensor in order to prevent the loss of dimensionality induced by a convolution with a kernel larger than 1.
As an example, in Equation~(\ref{eq:correlation}), padding the input tensor $x$ with $K-1$ zeros prior to the convolution results in an output tensor $y$ whose temporal dimensionality is the same as the original input.
We call \textit{left-padding} (resp. \textit{right-padding}) the padding of the left-hand side (resp. right-hand side) of the tensor.
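As an illustration, the operation of Equation~(\ref{eq:correlation}) and the effect of zero-padding can be sketched in a few lines of numpy (a toy implementation for illustration only; \texttt{conv1d} is a hypothetical helper, not the code used in our experiments):

```python
import numpy as np

def conv1d(x, w):
    """Valid cross-correlation per Eq. (1): x has shape (M, T), w has (N, M, K)."""
    N, M, K = w.shape
    _, T = x.shape
    y = np.zeros((N, T - K + 1))
    for i in range(T - K + 1):
        # Contract over the input channels (M) and kernel taps (K).
        y[:, i] = np.tensordot(w, x[:, i:i + K], axes=([1, 2], [0, 1]))
    return y

M, T, N, K = 2, 16, 3, 5
x = np.random.randn(M, T)
w = np.random.randn(N, M, K)
assert conv1d(x, w).shape == (N, T - K + 1)   # temporal size shrinks by K - 1
# Padding the input with K - 1 zeros restores the original temporal size.
x_pad = np.pad(x, ((0, 0), (K - 1, 0)))       # here: left-padding
assert conv1d(x_pad, w).shape == (N, T)
```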
Padding is useful to maintain a tensor's dimensionality across layers. However, there are situations where an increase or decrease in temporal dimensionality is required.
Convolutional layers with a stride $s > 1$ decrease a tensor's temporal dimensionality by a factor $s$, using the same padding strategy as regular convolutional layers.
On the other hand, transposed convolutional layers can be used to increase a tensor's temporal dimensionality.
\subsection{Causal streaming models}
\label{sec:cached_padding}
Processing audio buffers one after the other using a convolutional neural network is not trivial. Indeed, the use of padding in each layer of the model creates discontinuities in the data when processing two consecutive buffers sequentially.
In the context of neural audio synthesis, and more specifically raw waveform modelling, this causes audible phase discontinuities that are not acceptable for real-time audio applications.
To address this problem, Rybakov et al. \cite{Rybakov2020StreamingDevices} proposed to rely on \textit{causal} Convolutional Neural Networks (CNN), which are defined through a \textit{cached padding} mechanism.
Cached padding is implemented by retaining the end of one tensor and using it to left-pad the following one, as shown in Figure~\ref{fig:cached_convolution}. This maintains continuity between the computations of two consecutive audio buffers. It is meant to replace left-padding during inference, retaining the dimensionality increase provided by padding without creating discontinuities in-between buffers. Although this method provides a solution for the use of CNNs in real-time audio generation, it is constrained by the necessity to implement \textit{causal convolutions}, which are not widespread. This implies that existing pre-trained models might not be compatible with this method, as most of the existing CNNs in the literature do not satisfy this assumption.
Finally, it has been shown that a causal constraint makes the learning process more complex \cite{Rybakov2020StreamingDevices}, which could impact the final audio quality.
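The cached padding mechanism can be sketched as follows for a single-channel causal convolution (a minimal numpy sketch with a hypothetical \texttt{CachedConv1d} class; actual streaming implementations cache such a state inside every layer):

```python
import numpy as np

class CachedConv1d:
    """Causal convolution with cached left-padding: buffers processed one
    after the other yield the same output as offline processing."""
    def __init__(self, kernel):
        self.kernel = kernel
        self.cache = np.zeros(len(kernel) - 1)  # replaces left zero-padding

    def process(self, buf):
        x = np.concatenate([self.cache, buf])
        self.cache = x[-(len(self.kernel) - 1):]  # keep the tail for next call
        return np.convolve(x, self.kernel, mode="valid")

kernel = np.random.randn(5)
signal = np.random.randn(64)
# Offline reference: the whole signal, left-padded once.
offline = np.convolve(np.concatenate([np.zeros(4), signal]), kernel, mode="valid")
# Streaming: four consecutive 16-sample buffers with cached padding.
conv = CachedConv1d(kernel)
streamed = np.concatenate([conv.process(b) for b in signal.reshape(4, 16)])
assert np.allclose(streamed, offline)   # no discontinuity between buffers
```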
\subsection{RAVE}
The RAVE model \cite{Caillon2021RAVE:Synthesis} is a variational auto encoder \cite{Kingma2014Auto-encodingBayes} applied directly to the raw audio waveform.
It is trained using two separate stages, respectively named \textit{representation learning} and \textit{adversarial fine tuning}.
The representation learning stage uses a spectral distance between the input and output of the model as its main training objective.
The encoder is regularised with a standard Kullback Leibler divergence between the posterior distribution and an isotropic normal distribution.
In order to keep the learned representation as compact as possible, the encoder is only trained during the first stage.
During the second stage, the model is trained using elements from generative adversarial networks \cite{Goodfellow2014GenerativeNetworks} to improve its synthesized audio quality.
A post-training analysis of the latent space is performed as a way to reduce the representation to its informative latent dimensions.
This allows an easier exploration and manipulation of the latent space.
Overall, RAVE can be used to perform timbre transfer, latent manipulation and unconditional generation with unprecedented quality while synthesizing 20 to 80 times faster than real-time on a laptop CPU.
RAVE is a feed-forward model, composed of an encoder (a strided convolutional network), and a decoder (a residual transposed convolutional network).
The model also implements the noise synthesizer from the DDSP model \cite{Engel2019DDSP:Processing} to increase its synthesis quality when processing noisy signals.
It leverages zero-padding to maintain the temporal dimensionality of the tensors across convolutional layers.
Therefore, this model in its current state cannot be used to perform streaming inference, and is solely usable on pre-recorded audio files.
Nevertheless, its feed-forward architecture and adversarial fine-tuning make it a perfect candidate for the streaming task, as it is both fast and high quality.
\section{Non-causal streaming models}
\label{sec:post-training-causal}
The streaming models obtained following the method described in Section~\ref{sec:cached_padding} can readily process live audio streams.
However, this requires models that use only causal convolutions, which is not the case for most models proposed in the literature.
Indeed, training a model causally can lead to a loss of accuracy or audio quality \cite{Rybakov2020StreamingDevices}.
Here, we introduce our method that allows to make \textit{non-causal} models streamable.
Our proposal is constructed around the idea of performing a post-training causal reconfiguration of the model. This allows to consider convolutional networks trained using any type of padding (potentially non-causal) and turn them into streamable models. One idea to do so would be to extend the cached padding mechanism to right-padding. However, this is not possible by nature, as we are processing live audio streams where the next buffer is not known yet.
Therefore, we propose to reconfigure the model as causal \textit{after training}. This can be achieved by transforming right-padding into an additional left-padding.
While this reconfiguration allows the use of a cached padding mechanism, making the model causal after training alters its computational graph. Hence, this might produce unpredictable results if the model includes strided convolutions or has a computational graph with parallel branches (e.g., residual connections \cite{He2015DeepRecognition}).
In those cases, we propose the introduction of \textit{additional delays} to restore the original behavior of the model. In the following, we detail how we address each of these architectures, in order for our method to be universally applicable to any type of network.
\subsection{Aligning strided convolutions}
Strided convolutions are often used as a way to reduce the temporal or spatial dimensionality of an input tensor. This is done by skipping some steps in the application of the convolution kernel, as depicted in Figure~\ref{fig:stride_training}.
\begin{figure}[h]
\centering
\includegraphics[width=.6\linewidth]{img/stride_training.pdf}
\caption{A simplified view of a strided convolution using zero-padding during training.}
\label{fig:stride_training}
\end{figure}
Transforming right-padding to left-padding shifts the input tensor to the right (i.e., adds a lag to the input tensor). This has no consequence for convolutions with stride 1 or transposed convolutions, as it only delays the output tensor.
However, this lag may have an impact on convolutions with a stride greater than one, where a lag of $n$ samples on the input tensor results in a fractional lag of $n/s$ in the output tensor. We show in Figure~\ref{fig:stride_inference} how this fractional lag results in a change of behavior of the layer whenever $n$ is not a multiple of $s$.
\begin{figure}[h]
\centering
\includegraphics[width=.75\linewidth]{img/stride_inference.pdf}
\caption{A strided convolution with post-training causal re-configuration. Due to the input lag, the output of the layer is not the same as during training (see Figure~\ref{fig:stride_training} for the regular output).}
\label{fig:stride_inference}
\end{figure}
Therefore, we introduce an additional delay to the input in order to make its overall lag a multiple of the stride of the convolutional layer, as shown in Figure~\ref{fig:stride_inference_aligned}.
\begin{figure}[h]
\centering
\includegraphics[width=.85\linewidth]{img/stride_inference_aligned.pdf}
\caption{An additional delay (\textit{add}) is applied to the input tensor in order to recover the original behavior of the layer.}
\label{fig:stride_inference_aligned}
\end{figure}
In the case of a complex convolutional network, it is necessary to keep track of the overall cumulated lag for an input tensor after each convolutional layer.
Considering that a convolutional layer with stride $S$ and right-pad $R$ processes an input tensor with cumulated delay $D_c$, we need to set the additional delay $D_a$ to
\begin{equation}
D_a = \left( S - \left( (R + D_c) \bmod S \right) \right) \bmod S
\label{eq:strided_delay}
\end{equation}
This ensures that the overall delay is a multiple of the layer stride.
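Equation~(\ref{eq:strided_delay}) can be checked on a toy strided convolution (an illustrative numpy sketch under simplified assumptions; \texttt{extra\_delay} and \texttt{strided\_conv} are hypothetical helpers):

```python
import numpy as np

def extra_delay(stride, right_pad, cum_delay=0):
    """Additional delay D_a from Eq. (2)."""
    return (stride - (right_pad + cum_delay) % stride) % stride

def strided_conv(x, kernel, stride):
    K = len(kernel)
    return np.array([kernel @ x[i:i + K] for i in range(0, len(x) - K + 1, stride)])

S, K, R = 2, 3, 1                     # stride, kernel size, right-padding
kernel, x = np.random.randn(K), np.random.randn(32)
# Offline (training-time) behavior: non-causal padding on both sides.
offline = strided_conv(np.pad(x, (K - 1 - R, R)), kernel, S)
# Naively moving the right-padding to the left shifts x by R samples, which is
# not a multiple of S: the strided windows would land on the wrong positions.
D_a = extra_delay(S, R)               # here D_a = 1, so R + D_a = 2 = S
causal = strided_conv(np.pad(x, (K - 1 + D_a, 0)), kernel, S)
# Identical output, up to a lag of (R + D_a) / S frames.
assert np.allclose(causal[1:], offline)
```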
\subsection{Aligning parallel branches}
When introducing delays inside a computational graph, special care must be given to the alignment of parallel branches. A well-known example of parallel architectures is that of residual layers \cite{He2015DeepRecognition}.
Indeed, residual layers sum the input of a function to its output, in order to make the overall operation act as a perturbation of the identity function.
Hence, it is crucial to delay the residual branch in order to compensate for the delay induced in the main branch by our method enforcing post-training causality. More generally, models implementing parallel branches must introduce delays to re-synchronise the different branches, as shown in Figure~\ref{fig:branch_delay}. In this case, we set the additional delays $A_i$ to
\begin{align}
A_i = \max_j D_j - D_i,
\end{align}
where $D_i$ is the cumulated delay induced in the $i^\text{th}$ branch.
\begin{figure}[hbpt]
\centering
\includegraphics[width=.8\linewidth]{img/branch.pdf}
\caption{Aligning parallel branches using additional delays.}
\label{fig:branch_delay}
\end{figure}
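A minimal sketch of this branch alignment for a residual connection (illustrative numpy code; \texttt{delay} is a hypothetical helper standing in for a delayed convolutional branch): delaying the skip branch by $A = \max_j D_j - D_i$ restores the residual behavior up to a global lag.

```python
import numpy as np

def delay(x, n):
    """Delay a signal by n samples (prepend zeros, keep the length)."""
    return np.concatenate([np.zeros(n), x])[:len(x)]

x = np.random.randn(64)
D_main = 3                            # delay accumulated in the main branch
main = delay(x * 0.5, D_main)         # stand-in for a delayed processing branch
residual_bad = x + main               # misaligned: sums two shifted copies of x
# Compensating the skip connection with A = D_main - 0 re-synchronises the sum:
residual_ok = delay(x, D_main) + main
assert np.allclose(residual_ok, delay(x + x * 0.5, D_main))
```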
\subsection{Overlap-add baseline}
For comparison purposes, we use a simple yet effective baseline method to process live audio streams with non-causal convolutional neural networks. We implement the \textit{overlap-add} method by first collecting an audio buffer large enough to account for the receptive field of the model. Then, we apply the unmodified convolutional neural network on this buffer and window the output signal using the Hann window
$$
\mathbf w[n] = \sin \left ( \frac{\pi n}{N} \right )^2,
$$
where $N$ is the buffer size.
Finally, we add the resulting tensor to the previous output with a temporal offset of $N / 2$. This implements the overlap-add method with a 50$\%$ overlapping factor.
We compare this method to another having a 25\% overlapping ratio, implemented by scaling $w$ accordingly, as depicted in Figure~\ref{fig:overlapping_windows}.
This reduces the computational redundancy of the method and consequently makes it process audio faster. However, using a smaller overlapping window results in harsher transitions between buffers. Hence, we also consider the extreme case of a $0\%$ overlapping factor, where the model is applied on non-overlapping buffers. This last configuration can be seen as an ablation of our method where cached padding and causal constraints are removed.
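The 50\% overlap-add baseline can be sketched as follows (illustrative numpy code; note that the window above satisfies the constant overlap-add property $w[n] + w[n + N/2] = \sin^2 + \cos^2 = 1$):

```python
import numpy as np

N = 1024                                   # buffer size
w = np.sin(np.pi * np.arange(N) / N) ** 2  # Hann window from the text

def overlap_add(model, signal):
    """Apply `model` on N-sample buffers hopped by N/2 and cross-fade them."""
    out = np.zeros(len(signal) + N)
    for start in range(0, len(signal) - N + 1, N // 2):
        out[start:start + N] += model(signal[start:start + N]) * w
    return out[:len(signal)]

# Shifted copies of the window sum to one (constant overlap-add):
assert np.allclose(w[:N // 2] + w[N // 2:], 1.0)
# With an identity "model", the cross-faded sum reconstructs the input
# everywhere except near the edges of the signal.
sig = np.random.randn(8192)
y = overlap_add(lambda buf: buf, sig)
assert np.allclose(y[N // 2:-N // 2], sig[N // 2:-N // 2])
```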
\begin{figure}
\centering
\begin{subfigure}[b]{.6\linewidth}
\includegraphics[width=\linewidth]{img/window_50.pdf}
\caption{50\% overlap}
\end{subfigure}
\begin{subfigure}[b]{.6\linewidth}
\includegraphics[width=\linewidth]{img/window_25.pdf}
\caption{25\% overlap}
\end{subfigure}
\begin{subfigure}[b]{.6\linewidth}
\includegraphics[width=\linewidth]{img/window_0.pdf}
\caption{0\% overlap}
\end{subfigure}
\caption{Windows used by the three overlap-add baseline variants implemented.}
\label{fig:overlapping_windows}
\end{figure}
\section{Evaluation}
\label{sec:exp}
\subsection{Performances}
\label{sec:performances}
In this section, we evaluate the performances of our proposed non-causal streaming method. To do so, we compare it to different variants of the overlap-add method in the context of a model trained without a causal constraint.
In order to evaluate the inference speed, we rely on the Real-Time Factor (RTF) defined as the ratio between processing time and audio duration when processing an audio signal.
An RTF below 1 indicates that the algorithm processes data \textit{faster} than real-time. We also evaluate the amount of memory required during inference on live audio streams by analyzing the Random Access Memory (RAM) usage. We estimate both the memory usage and the RTF of the reconstruction process using the various methods applied to 60s-long random (white noise) audio signals with varying buffer sizes. We rely on white noise since the audio content is irrelevant when measuring the speed of the different methods.
All results are averaged over 10 trials in order to account for measurement errors.
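The RTF measurement itself can be sketched with simple wall-clock timing (a hypothetical helper, not our benchmarking code):

```python
import time
import numpy as np

def real_time_factor(process, signal, sr=44100):
    """RTF = processing time / audio duration; RTF < 1 means faster than real-time."""
    start = time.perf_counter()
    process(signal)
    elapsed = time.perf_counter() - start
    return elapsed / (len(signal) / sr)

# A trivial gain "model" on 60 s of audio runs comfortably faster than real-time.
rtf = real_time_factor(lambda x: 0.5 * x, np.random.randn(44100 * 60))
```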
We show in Figure~\ref{fig:memory} how our proposed \textit{streaming} and different \textit{overlap-add} methods all have a similar memory usage. The only difference comes from a constant 180kiB of additional RAM needed to store the cached padding of the streaming method.
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{.42\linewidth}
\includegraphics[width=\linewidth]{img/memory.pdf}
\caption{Memory usage}
\label{fig:memory}
\end{subfigure}
\begin{subfigure}[b]{.42\linewidth}
\includegraphics[width=\linewidth]{img/rtf.pdf}
\caption{Real-time factor}
\label{fig:rtf}
\end{subfigure}
\caption{Memory usage and real-time factor for the \textit{streaming} and \textit{overlap-add} methods on a regular RAVE model with varying buffer size. Memory usage is identical for all overlap-add methods. Dotted lines indicate that the model is applied on buffers smaller than its receptive field.}
\label{fig:performances_regular}
\end{figure*}
In terms of processing speed, as we can see in Figure~\ref{fig:rtf}, the overlap method with a $0\%$ overlap ratio is the fastest, while also being the least accurate (see Section~\ref{sec:fidelity}). Although increasing the overlap ratio to $25\%$ or $50\%$ can reduce the corresponding artifacts, it also makes the overlap method increasingly slower than the streaming method.
This is due to the computational redundancies involved in this method.
\subsection{Fidelity}
\label{sec:fidelity}
In contrast to our proposed streaming method, the \textit{overlap-add} approach only yields an approximation of the original model.
Hence, we aim to estimate the quality of this approximation by comparing signals coming from the overlap-add method with signals processed offline by a non-causal model.
To do so, we use the two following metrics
\begin{align}
\mathcal L_{s}(\mathbf x, \mathbf y) &= \| \log (S(\mathbf x)+\epsilon) - \log (S(\mathbf y)+\epsilon) \|_2 \label{eq:spectral_distance}\\
\mathcal L_{w}(\mathbf x, \mathbf y) &= \| \mathbf x - \mathbf y \|_2 ,\label{eq:waveform_distance}
\end{align}
where $\mathcal L_s$ is a spectral distance computed between the amplitude STFT spectra $S(\mathbf{x})$ and $S(\mathbf{y})$, and $\mathcal L_w$ is the Euclidean distance between the raw waveforms.
We set $\epsilon=1$ as proposed by Défossez et al. \cite{Defossez2018SING:Generator}.
The spectral distance is useful to assess how perceptually similar two audio signals are, regardless of their phase.
However, the waveform Euclidean distance is highly phase-dependent, and reflects a sample-wise dissimilarity between the raw waveforms.
Combined, those two metrics give us insights about how similar signals are both from a perceptual and sample-wise point of view.
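The two metrics of Equations~(\ref{eq:spectral_distance}) and (\ref{eq:waveform_distance}) can be sketched in numpy as follows (a plain framed STFT for illustration, not the exact transform used in our experiments):

```python
import numpy as np

def spectral_distance(x, y, n_fft=512, eps=1.0):
    """L_s from Eq. (3): L2 distance between log-amplitude STFT frames."""
    def stft_mag(s):
        frames = s[:len(s) // n_fft * n_fft].reshape(-1, n_fft)
        return np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=-1))
    return np.linalg.norm(np.log(stft_mag(x) + eps) - np.log(stft_mag(y) + eps))

def waveform_distance(x, y):
    """L_w from Eq. (4): Euclidean distance between raw waveforms."""
    return np.linalg.norm(x - y)

x = np.random.randn(4096)
assert spectral_distance(x, x) == 0.0 and waveform_distance(x, x) == 0.0
# A polarity-inverted sinusoid has the same amplitude spectrum (L_s ~ 0)
# but a large sample-wise distance: the two metrics are complementary.
t = np.arange(4096) / 44100
a, b = np.sin(2 * np.pi * 440 * t), -np.sin(2 * np.pi * 440 * t)
assert waveform_distance(a, b) > spectral_distance(a, b)
```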
We disable the noise synthesizer and set the encoder variance to 0 in order to make the model behave predictably. This is necessary as any randomness involved in the generation process would bias the fidelity measure.
We compare the \textit{overlap-add} methods with several overlapping ratios (0\%, 25\% and 50\%), and also include the \textit{streaming} method to ensure that it is an exact reproduction of the offline method. We compensate the latency present in the synthesized outputs for all methods prior to their evaluation. We test all methods with varying buffer sizes and report the results in Figure~\ref{fig:reconstruction}.
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{.42\linewidth}
\includegraphics[width=\linewidth]{img/spectral.pdf}
\caption{Spectral distance}
\label{fig:spectral}
\end{subfigure}
\begin{subfigure}[b]{.42\linewidth}
\includegraphics[width=\linewidth]{img/euclidean.pdf}
\caption{Euclidean distance}
\label{fig:euclidean}
\end{subfigure}
\caption{Spectral and euclidean distances between different \textit{overlap-add} processing methods (ola) and the offline processing method as a function of the buffer size. Dotted lines indicate that the model is applied on buffers smaller than its receptive field.}
\label{fig:reconstruction}
\end{figure*}
As we can see, all variants of the overlap-add method show decreasing spectral and Euclidean distances to the offline method as the buffer size increases. However, those distances never become null, even for buffer sizes larger than 8s, underlining the artifacts introduced by such methods. In contrast, our streaming method is exactly identical to the offline method, regardless of the buffer size. This confirms that the cached padding and post-training causal reconfiguration of the model allow its use on live audio streams without altering the quality of the output.
\subsection{Impact of pre-training causal constraint}
As discussed in Section~\ref{sec:cached_padding}, enforcing a causal constraint on the model prior to its training can make the modelling task more complex. We evaluate the impact of this constraint on the RAVE model trained with the following internal datasets \\
\noindent \textbf{Darbuka. } It has been shown that modelling percussive sounds using a causal model can be difficult \cite{Huang2018TimbreTron:Transfer}. Therefore, we rely on a dataset composed of various solo darbuka performances sampled at 44.1kHz, with a total duration of approximately 3 hours. \\
\noindent \textbf{Strings. } This dataset contains approximately 30 hours of various strings recordings sampled at 44.1kHz that were scraped from different real-life solo violin performances. Compared to the darbuka, it is composed of harmonic signals with smoother attacks. \\
\noindent \textbf{Speech. } The speech dataset is composed of approximately 8 hours of recordings sampled at 44.1kHz. All recordings are produced by a single speaker in a consistent acoustic environment.\\
All datasets are split into 90\%--10\% train and validation sets. We use all the augmentation strategies proposed in the original article \cite{Caillon2021RAVE:Synthesis}. We train two variants of the RAVE model for each dataset (pre-training and post-training causal re-configuration).
All models are trained for 2M iterations. We use the spectral distance defined in Section \ref{sec:fidelity} to measure the reconstruction error of audio samples from the validation set as input for a pretrained RAVE model. We report the resulting spectral distances in Table~\ref{tab:causal_impact}.
\begin{table}[ht]
\caption{\itshape Reconstruction errors for pre-training and post-training causal reconfiguration across different datasets.}
\centering
\begin{tabular}{c|c|c}
\hline ~& pre-training & post-training \\\hline
Darbuka & $0.228\pm 0.028$ & $\mathbf{0.178\pm 0.038}$ \\
Strings & $0.055\pm 0.012$ & $\mathbf{0.054\pm0.011}$ \\
Speech & $0.155\pm0.005$ & $\mathbf{0.138\pm0.005}$ \\\hline
\end{tabular}
\label{tab:causal_impact}
\end{table}
Using the pre-training causal configuration results in a small but consistent loss of accuracy compared to the regular training of models across all datasets.
However, the cumulative lag applied to the input tensor by the post-training reconfiguration is responsible for a processing latency when using the model on an audio stream.
In the case of the RAVE model, this latency adds up to $653$ms, compared to only $52$ms when using RAVE trained with a causal constraint.
\section{Application}
Alongside this article, we also introduce several applications leveraging the streaming RAVE model obtained using our method. This provides real-time neural audio synthesis inside different types of digital audio workstations. The source code and pre-built binaries for all applications are available online\footnote{\url{https://acids-ircam.github.io/cached_conv}}.
\subsection{Max/MSP and PureData externals}
We introduce the \textit{nn$\sim$} external for Max/MSP and PureData. This external allows the use of deep learning streaming models to process audio signals inside both applications. It leverages pre-trained models exported as \textit{torchscript} files.
By default, the \textit{nn$\sim$} external uses the \textit{forward} method of the model. However, it is possible to specify another method by passing an additional argument to the external during its initialization. The number of inlets and outlets of the external depends on both the model and the method used. For example, the \textit{forward} method of the RAVE model uses one inlet and one outlet, as both the input and output of this method are monophonic audio signals. However, choosing the \textit{encode} method will create one inlet and $N$ outlets, as the input of this method is a monophonic audio signal, while its output is an $N$-dimensional latent representation. Tensors with a lower sampling rate than audio signals are up-sampled to the audio rate using nearest-neighbour interpolation. This method of interfacing $N$-dimensional tensors as audio signals gives the user a lot of flexibility, as each individual dimension can be modified in real-time. To illustrate this, we show in Figure~\ref{fig:nn_tilde} an example Max/MSP patch where the first and last dimensions of the latent representation yielded by a RAVE model are respectively biased and replaced by a user-defined input.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\linewidth]{img/patch_article.png}
\caption{Screenshot of the \textit{nn$\sim$} external interfacing a RAVE model trained on a darbuka dataset. Using either a live audio stream or a pre-recorded audio file as an input, the \textit{nn$\sim$} external allows the user to modify the latent representation yielded by the encoder in real-time. In this example, the first (resp. last) dimension of the latent space is biased (resp. replaced) by a user-defined scalar.}
\label{fig:nn_tilde}
\end{figure}
This implements the high-level manipulations showcased in the original article \cite{Caillon2021RAVE:Synthesis}, and extends them by allowing real-time interaction with the generative process. Overall, the \textit{nn$\sim$} external can be used to combine deep learning streaming models with the large library of objects already available in both Max/MSP and PureData.
\subsection{VST audio plugin}
As an alternative to the \textit{nn$\sim$} external, we propose a VST audio plugin interfacing the RAVE model in order to expand its use to regular digital audio workstations supporting the VST3 plugin format. Our plugin is based on the JUCE framework for both the graphical interface and the audio engine. We depict a screenshot of the plugin in Figure~\ref{fig:vst}.
\begin{figure}[ht]
\centering
\includegraphics[width=.85\linewidth]{img/vst.png}
\caption{Screenshot of the RAVE VST interfacing a RAVE model trained on a darbuka dataset.}
\label{fig:vst}
\end{figure}
We generate a latent path, either by using the RAVE encoder on an audio input, or by sampling from the prior of the model. This latent path is then displayed as a circular graph (see Figure~\ref{fig:vst}), where each point corresponds to a single latent dimension. As the latent distribution produced by RAVE is close to a normal distribution, we define the distance $d_i$ of each point from the center of the graph using the following cumulative distribution function
\begin{align}
d_i = \frac{1}{2} \bigg(1 + \text{erf}\Big(\frac{z_i}{\sqrt{2}} \Big) \bigg),
\label{eq:cdf}
\end{align}
where $\text{erf}$ is the Gauss error function and $z_i$ is the value of the $i^{\text{th}}$ dimension of the latent representation. Applying Equation~(\ref{eq:cdf}) to a random variable $x$ sampled from a normal distribution $\mathcal N(0;1)$ results in a value uniformly distributed between 0 and 1.
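Equation~(\ref{eq:cdf}) is the standard Gaussian cumulative distribution function; a minimal sketch of the mapping follows (the function name is ours and not part of the released plugin code).

```python
import math

def latent_to_radius(z):
    """Map a latent coordinate z ~ N(0, 1) to a distance in (0, 1)
    from the center of the graph, using the Gaussian CDF of Eq. (cdf)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(latent_to_radius(0.0))  # -> 0.5 (a latent value of 0 sits mid-radius)
```

Because the mapping is the CDF of the latent prior, typical latent values spread the points evenly along the radius of the graph.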
We give the user the possibility to apply a scale, a bias, and random noise to each individual dimension. The resulting latent representation is then duplicated and fed to the decoder in order to produce a fake stereo image whose width can be adjusted by the user. We also provide several pre-trained models in a \textit{model explorer}, to which other models will be added over time.
\section{Conclusion and Future Perspectives}
In this paper, we introduced a novel method for transforming any convolutional network for audio generation into a streamable model compatible with real-time buffer-based digital signal processing. We showed that our method can be applied to already-trained models by introducing a post-training causal reconfiguration. By carefully handling delays, we showed that this method easily extends to complex architectures with parallel branches. By comparing our method on several speech and music datasets, we showed that it provides faster computation and has no impact on the resulting audio quality. Finally, we released several implementations using our method to provide real-time CNN processing inside digital audio workstations. We hope that this work will pave the way towards the broader integration of the extensive possibilities offered by neural audio synthesis inside creative workflows.
\section{Acknowledgments}
The authors would like to thank Maxime Mantovani for his help on debugging the MaxMSP external and Jean-Baptiste Dupuy and Axel Chemla--Romeu-Santos for their work on the VST audio plugin. This work is currently supported by the ACTOR Partnership funded by the Canadian SSHRC (SSHRC:895-2018-1023) and by the ACIDITeam - Emergence(s) project funded by Ville de Paris.
\bibliographystyle{unsrt}
\section{Introduction}
Neural audio signal processing has set a new state of the art in many fields, from audio source separation and text-to-speech systems to timbre transfer and unconditional generation, usually leveraging convolutional neural networks \cite{Lecun1995ConvolutionalTime-series} to build highly efficient models.
Early work on neural audio processing involved slower-than-real-time synthesis procedures \cite{Oord2016WaveNet:Audio, Mehri2017Samplernn:Model, Vasquez2019MelNet:Domain, Wang2017Tacotron:Synthesis}. However, the advent of adversarial learning \cite{Goodfellow2014GenerativeNetworks}, normalizing flows \cite{Rezende2015VariationalFlows} and diffusion models \cite{Ho2020DenoisingModels} has paved the way towards faster-than-real-time neural audio synthesis \cite{VanDenOord2018ParallelSynthesis, Kumar2019MelGAN:Synthesis, Prenger2019Waveglow:Synthesis, Chen2020WaveGrad:Generation}.
While theoretically compatible with live audio processing (i.e., processing live audio streams instead of previously recorded audio files), only a handful of models have been successfully ported to real-time audio applications (e.g., the recently proposed DDSP model \cite{Engel2019DDSP:Processing}), and none of them use convolutional layers; they are instead based on recurrent neural networks.
Recurrent models can easily be ported to live stream processing by caching the recurrent state in-between DSP calls, and hence are a target of choice when addressing deep-learning-based real-time audio processing. On the other hand, processing live audio streams with a convolutional neural network is far from trivial, since the receptive field of the model is often far bigger than the audio buffer it is applied to.
Furthermore, processing two separate audio buffers with a convolutional neural network and then concatenating them results in clearly audible waveform discontinuities (e.g., clicks), which are not acceptable for most audio applications. One possible solution to both problems would be to process large overlapping chunks of audio and cross-fade them to smooth out audible discontinuities, at the expense of a higher computational complexity, high-latency processing, and degraded synthesis quality during transitional phases.
In this article, we propose a method to make convolutional neural networks usable on live audio streams, without any approximation or increase in computational complexity during synthesis. We achieve this by extending the caching operation proposed in \cite{Paine2016FastAlgorithm} to non-autoregressive networks, and by re-arranging padding inside convolutional layers post-training to prevent the model from relying on future audio samples during generation.
We also consider the case of parallel computation branches (e.g., residual networks or skip connections), where additional delays are required for proper branch alignment.
We then apply our method to the recently proposed RAVE model \cite{Caillon2021RAVE:Synthesis} for comparison purposes against several baselines. We show how rearranging the padding pre-training significantly reduces the latency between model input and output to less than 30 ms when used on live audio streams.
We finally present several plugins that we developed in order to interface deep learning models with regular digital audio workstations, effectively presenting the first ever convolutional deep generative model applied in real-time on live audio streams, featuring 44.1kHz CPU synthesis and low-latency processing.
\section{Method}
\label{sec:method}
Digital processing of an analog audio signal requires its discretization using an analog-to-digital converter, producing a digital stream with a sampling rate and resolution large enough to preserve information relevant to the human ear.
While some latency-critical applications require the processing of this stream one sample at a time, it is rather common to pack a number of samples into a buffer for later processing, increasing the processing latency while decreasing the processor load.
Convolutional layers process data of variable size by convolving a learnable kernel over an input tensor. The kernel size, its dilation and the stride of the convolution altogether define the \textit{receptive field} of the layer (i.e., the area around a particular location in the input tensor that is needed to produce an output). Stacking layers on top of each other increases the receptive field of the entire model, allowing the modelling of larger temporal dependencies \cite{Oord2016WaveNet:Audio}. It is however a problem for live audio streams, since the processing of an audio buffer requires both past and future context, and cannot be achieved knowing only the current buffered audio samples.
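The receptive field of a stack of layers can be computed in closed form from each layer's kernel size, stride and dilation; a small helper illustrating this (the layer configuration below is hypothetical and not RAVE's actual architecture):

```python
def receptive_field(layers):
    """Compute the receptive field (in input samples) of a stack of
    1-D convolutions, each given as (kernel_size, stride, dilation)."""
    rf, jump = 1, 1  # receptive field and cumulative stride ("jump")
    for kernel_size, stride, dilation in layers:
        rf += dilation * (kernel_size - 1) * jump
        jump *= stride
    return rf

# A WaveNet-like stack of dilated convolutions (hypothetical configuration):
# kernel size 2, stride 1, dilations 1, 2, 4, ..., 512.
wavenet_like = [(2, 1, 2 ** i) for i in range(10)]
print(receptive_field(wavenet_like))  # -> 1024
```

Each additional layer extends the receptive field by its (dilated) kernel span scaled by the cumulative stride of all preceding layers.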
\begin{figure*}[ht]
\centering
\includegraphics[width=.75\linewidth]{img/convolution_cache.pdf}
\caption{Convolution applied on two split buffers using cached padding. The last $\mathbf N$ frames from input buffer 1 are cached and concatenated with the input buffer 2 (with $\mathbf N$ being the original amount of zero padding) in order to prevent discontinuities between buffers.}
\label{fig:cached_convolution}
\end{figure*}
\subsection{Processing buffers}
In order to maintain predictable tensor shapes, convolutional layers are often used in combination with \textit{padding}, which is a method to artificially increase a tensor's shape in order to account for the receptive field of the model.
The \textit{center} padding mode consists in applying the same amount of padding at each end of a particular tensor dimension, as shown in Figure (\ref{fig:regular_padding}), effectively making the output of the convolutional layer ``centered'' with respect to its input.
We call the padding at the left side (resp. right side) of the tensor \textit{pad L} (resp. \textit{pad R}), or \textit{left-padding} (resp. \textit{right-padding}). Left-padding is responsible for the need for past context when processing an audio sample, whereas right-padding makes the model rely on future audio samples.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.75\linewidth}
\vskip 0pt
\includegraphics[width=\linewidth]{img/convolution.pdf}
\caption{Regular ``center'' padding, used during training}
\label{fig:regular_padding}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.75\linewidth}
\vskip 0pt
\includegraphics[width=\linewidth]{img/convolution_causal.pdf}
\caption{Re-arranged ``causal'' padding, used during inference}
\label{fig:rearranged_padding}
\end{subfigure}
\caption{Comparison between convolutions using regular padding and re-arranged padding. Using re-arranged padding we implicitly delay the input in order to get the required overhead.}
\label{fig:convolutions}
\end{figure}
When processing live audio streams, using future samples can be achieved by introducing latency in the processing chain in order to get the required overhead.
We implement this by re-arranging padding for each convolutional layer after training, replacing right-padding with left-padding, as shown in Figure (\ref{fig:rearranged_padding}). Using this re-arranged padding, we keep the same tensor shapes while only taking into account past and present samples.
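This re-arrangement amounts to a pure delay: with a kernel of size $2p+1$, replacing ``center'' padding by left-only padding shifts the output by $p$ samples without changing its values. A NumPy sketch of this equivalence (illustrative, not the actual layer implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
x, w = rng.normal(size=32), rng.normal(size=7)  # kernel size 2p+1, p = 3
p = (len(w) - 1) // 2

# "Center" padding (training): p zeros on each side, output aligned with input.
centered = np.convolve(np.pad(x, (p, p)), w, mode="valid")

# "Causal" padding (inference): all 2p zeros on the left only.
causal = np.convolve(np.pad(x, (2 * p, 0)), w, mode="valid")

# The causal output is the centered output delayed by p samples.
assert np.allclose(causal[p:], centered[:-p])
```

Both variants produce an output with the same shape as the input; only the temporal alignment differs.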
Processing two separate buffers and concatenating the outputs results in audible discontinuities caused by the introduction of padding in-between buffers. We therefore replace the aforementioned zero padding with \textit{cached} padding, where we keep in memory the last $N$ samples of a tensor and use them to left-pad the next one, as demonstrated in Figure (\ref{fig:cached_convolution}). This cached padding method can be seen as an extension of the one proposed in \cite{Paine2016FastAlgorithm} to buffer-based neural audio processing. Using this mechanism, we can ``resume'' the convolution operation from one buffer to the next, addressing the previously mentioned audio discontinuities (see Section \ref{sec:audio-quality} for an ablation study).
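The mechanism can be sketched with a plain 1-D convolution (a NumPy stand-in for an actual convolutional layer): keeping the last samples of each input buffer as a cache makes buffer-by-buffer processing bit-exact with offline processing.

```python
import numpy as np

def causal_conv(x, w, cache):
    """Causal 1-D convolution on one buffer: left-pad with the cache
    (the last len(w)-1 samples seen so far) instead of zeros."""
    padded = np.concatenate([cache, x])
    y = np.convolve(padded, w, mode="valid")
    return y, padded[-(len(w) - 1):]  # output and updated cache

rng = np.random.default_rng(0)
signal, w = rng.normal(size=64), rng.normal(size=5)

# Offline: one causal pass over the whole signal (zero initial cache).
offline, _ = causal_conv(signal, w, np.zeros(len(w) - 1))

# Streaming: same signal processed buffer by buffer with cached padding.
cache, chunks = np.zeros(len(w) - 1), []
for buf in signal.reshape(4, 16):
    y, cache = causal_conv(buf, w, cache)
    chunks.append(y)
assert np.allclose(np.concatenate(chunks), offline)
```

Because the cache carries the exact samples a zero pad would have replaced, the streamed output is identical to the offline one, with no discontinuity at buffer boundaries.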
\subsection{Synchronizing parallel branches}
The aforementioned cached padding mechanism is sufficient to make simple convolutional neural networks operate on live streams; however, models with parallel computation branches (e.g., residual layers or skip connections) need an extra level of attention. Indeed, if the receptive fields of the parallel branches are not equal, the delay induced by re-arranging padding results in misaligned outputs for each branch, making the model behave unpredictably. It is therefore crucial to keep track of those parallel branches and introduce compensatory delays where needed to properly align the branches relative to each other, as demonstrated in Figure (\ref{fig:alignment}).
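For a residual block, the compensatory delay can be sketched as follows: with a centered kernel of size $2p+1$, the causal branch outputs the training-time (centered) result delayed by $p$, so the identity skip branch must be delayed by the same amount (a NumPy illustration under zero initial conditions):

```python
import numpy as np

def delayed(x, d):
    """Delay a signal by d samples (zero initial state)."""
    return np.concatenate([np.zeros(d), x])[:len(x)]

rng = np.random.default_rng(3)
x, w = rng.normal(size=64), rng.normal(size=7)
p = (len(w) - 1) // 2

# Offline residual block: centered convolution plus identity skip.
centered = np.convolve(np.pad(x, (p, p)), w, mode="valid")
offline = centered + x

# Streaming residual block: the causal branch outputs `centered` delayed
# by p samples, so the skip branch needs a compensatory delay of p.
causal = np.convolve(np.pad(x, (2 * p, 0)), w, mode="valid")
aligned = causal + delayed(x, p)

# The aligned streaming output equals the offline output delayed by p.
assert np.allclose(aligned[p:], offline[:-p])
```

Without the compensatory delay, the skip branch would be summed with a convolution output shifted by $p$ samples, which no longer corresponds to any configuration seen during training.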
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.49\linewidth}
\vskip 0pt
\includegraphics[width=\linewidth]{img/alignment_training.pdf}
\caption{during training}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.49\linewidth}
\vskip 0pt
\includegraphics[width=\linewidth]{img/alignment_inference.pdf}
\caption{during inference}
\end{subfigure}
\caption{Aligning a residual connection for real-time inference. Re-arranging padding for real-time inference implies a delay that must be applied to every other parallel branches for proper output alignment.}
\label{fig:alignment}
\end{figure}
\subsection{Architecture specific alignment}
Re-arranged padding shifts the way the model processes its input. For most classical architectures, combining cached padding with the branch alignment described above is sufficient. However, for less common combinations of layers, such as a transposed convolution followed by a strided convolution, this is not enough and a proper synchronization between the layers must be applied.
Another special case arises when parallel branches compress or expand the temporal dimension of the tensor. In this situation, care must be taken to apply the compensatory delay on the larger tensor (i.e., after the convolution for upsampling branches, and before the convolution for downsampling branches).
\section{Experiments}
\subsection{RAVE}
The RAVE model \cite{Caillon2021RAVE:Synthesis} is a Variational Auto-Encoder \cite{Kingma2014Auto-encodingBayes} aiming at generating high-quality 48kHz audio signals. It leverages a multiband decomposition of the raw waveform as a way to decrease its temporal complexity, allowing 20 to 80 times faster-than-real-time neural audio synthesis on a standard laptop CPU. RAVE is a feed-forward model, composed of an encoder (a regular strided convolutional network) and a decoder (a residual transposed convolutional network). Its overall architectural simplicity and synthesis quality make it a good candidate for experiments on the adaptation of deep convolutional models to live audio stream processing.
\subsection{Baselines}
\label{sec:baselines}
We compare our method presented in Section \ref{sec:method} against the following baselines.
\subsubsection{Offline reconstruction}
We compare all methods against the regular \textit{offline reconstruction} procedure of the model, where we reconstruct a pre-recorded audio signal all at once, setting an upper bound to the generation quality.
\subsubsection{Simple buffer-based}
The first baseline we implement is the \textit{simple buffer-based} method, which ignores the issues reported in Section \ref{sec:method} and applies the unmodified model directly on live audio streams, processing buffers one by one and concatenating the outputs prior to listening to the result.
\subsubsection{Overlap-add}
A convenient way to avoid harsh discontinuities between buffers is the \textit{overlap-add} method. We implement this method by caching a large chunk of audio (i.e., large enough to account for the receptive field of the model), and processing it the same way as in the offline setting. We then window the resulting signal using
$$
\mathbf w[n] = \sin^2 \left ( \frac{\pi n}{N} \right ),
$$
where $N$ is the chunk size. We finally overlap-add the resulting tensor over the previous output signal with a hop length of $N$.
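As written, successive windows must overlap by half their length for the $\sin^2$ crossfade to sum to one (since $\sin^2 + \cos^2 = 1$). A quick numerical check of this constant overlap-add property (the window length below is arbitrary, and we assume a hop of half the window length):

```python
import numpy as np

N = 1024                              # window length (hypothetical)
w = np.sin(np.pi * np.arange(N) / N) ** 2

# Windows shifted by N // 2 sum to one: sin^2(t) + cos^2(t) = 1,
# so cross-faded chunks keep a constant overall amplitude.
hop = N // 2
assert np.allclose(w[:hop] + w[hop:], 1.0)

# Overlap-adding windowed copies of a constant signal reconstructs the
# constant everywhere except the first and last fade regions.
out = np.zeros(6 * hop + N)
for start in range(0, 6 * hop + 1, hop):
    out[start:start + N] += w         # each chunk here is the constant 1.0
assert np.allclose(out[hop:-hop], 1.0)
```

Any window satisfying this constant overlap-add constraint avoids amplitude modulation in the reconstructed stream; the cross-faded regions can still exhibit phase artifacts, as discussed in Section \ref{sec:audio-quality}.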
\subsection{Addressing model latency}
\label{sec:addressing_latency}
The post-training padding re-arrangement introduced in Section \ref{sec:method} is crucial to prevent the model from relying on future audio samples when operating in real-time, which inevitably adds some latency to the model (i.e., the delay observable between the model input and output). Since the receptive field of the RAVE model extends up to 600ms into the future, this exact amount of delay will be heard when using the model in real-time, which can be a problem for latency-critical applications. We therefore experiment with re-arranging padding \textit{prior} to the training, making the model learn to reconstruct audio samples in a causal fashion. This allows us to drastically reduce the latency of the model from $\sim600$ms to $\sim 30$ms, as demonstrated in Figure \ref{fig:latency}. The remaining latency comes from the compression ratio of the model (i.e., the full stride of the encoder), which makes it require a fixed number of audio samples in order to produce a single latent point. Reducing the latency even further can thus be achieved by reducing the compression ratio of the model, which can have a negative impact on the performance of the model.
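The buffering part of this latency is simple arithmetic: one latent frame requires a full compression-ratio's worth of input samples. A toy computation (the strides below are hypothetical and not RAVE's exact configuration):

```python
def encoder_latency_ms(compression_ratio, sample_rate):
    """Minimum buffering latency: the encoder needs one full
    compression-ratio's worth of samples per latent frame."""
    return 1000.0 * compression_ratio / sample_rate

# Halving the compression ratio halves the buffering latency,
# potentially at the cost of model quality (hypothetical values).
print(round(encoder_latency_ms(2048, 48000), 1))  # -> 42.7
print(round(encoder_latency_ms(1024, 48000), 1))  # -> 21.3
```

This floor is independent of the receptive field: it remains even after the causal re-arrangement removes the dependency on future samples.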
\begin{figure}[ht]
\centering
\includegraphics[width=.9\linewidth]{img/latency.pdf}
\caption{Real-time reconstruction of a noise burst by two models trained on percussive sounds, one with post-training padding re-arranging (regular), the other with pre-training re-arranging (no latency).}
\label{fig:latency}
\end{figure}
Unlike post-training padding rearrangement, training the model in this causal configuration makes the modelling task more complex and may result in decreased model performance.
\subsection{Datasets}
We evaluate the performance of our method using RAVE trained on three datasets with different acoustic properties. We train RAVE twice for each dataset (pre-training padding rearrangement and regular training).
\subsubsection{Speech}
This dataset (\textit{wheel}) contains approximately 9h20mn of recordings.
\subsubsection{Darbouka}
This dataset contains approximately 2h33mn of darbouka recordings.
\subsubsection{Koto}
This dataset contains approximately 3h23mn of koto recordings.
\section{Results}
\subsection{Audio quality}
\label{sec:audio-quality}
We reconstruct a 2-second-long audio sample using RAVE with the methods listed in Section \ref{sec:baselines}, and display their rainbowgrams in Figure \ref{fig:comparison_rainbow}.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/rain_full.png}
\caption{Offline rendering}
\label{fig:rain_full}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/rain_naive.png}
\caption{Simple buffer-based}
\label{fig:rain_naive}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/rain_ola.png}
\caption{Overlap-add}
\label{fig:rain_ola}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/rain_cache.png}
\caption{\textbf{Our method}}
\label{fig:rain_cache}
\end{subfigure}
\caption{Rainbowgrams \cite{Engel2017NeuralAutoencoders} of audio samples reconstructed using different methods (offline, simple, overlap-add, and our method), with a buffer size $2^{14}$.}
\label{fig:comparison_rainbow}
\end{figure}
Using the naive method to generate sound yields clearly audible discontinuities, as shown in Figure \ref{fig:rain_naive}. While the overlap-add method in Figure \ref{fig:rain_ola} gets rid of those discontinuities, the cross-fade between contiguous buffers alters the phase and produces a noisy instantaneous frequency, perceptually resulting in a ``blurry'' sound. In comparison, the cache method in Figure \ref{fig:rain_cache} behaves exactly like the offline method, without any phase distortion or discontinuity. We quantitatively evaluate all methods by measuring their similarity with the offline generation procedure, using the following distances
\begin{align}
\nonumber \mathcal L_{s}(\mathbf x, \mathbf y) &= \| \log (S(\mathbf x)+\epsilon) - \log (S(\mathbf y)+\epsilon) \|_2 \\
\mathcal L_{c}(\mathbf x, \mathbf y) &= \frac{\sum_n \mathbf x[n]\mathbf y[n]}{\sqrt{\sum_n \mathbf x[n]^2 \sum_n \mathbf y[n]^2}},
\end{align}
where $\mathcal L_s$ is a spectral distance between amplitude spectrograms $S(\cdot)$, and $\mathcal L_c$ is a normalized correlation score. We compensate for the latency mentioned in Section \ref{sec:addressing_latency} prior to evaluating those distances, and report the results in Table \ref{tab:quantitative_results}.
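A simplified NumPy version of these two metrics (using a single amplitude spectrum in place of the full spectrogram $S(\cdot)$, so the constants differ from a framed STFT implementation):

```python
import numpy as np

def spectral_distance(x, y, n_fft=512, eps=1e-7):
    """Spectral distance L_s: L2 norm between log-amplitude spectra."""
    sx = np.abs(np.fft.rfft(x, n_fft))
    sy = np.abs(np.fft.rfft(y, n_fft))
    return np.linalg.norm(np.log(sx + eps) - np.log(sy + eps))

def correlation(x, y):
    """Normalized correlation score L_c (1.0 for identical signals)."""
    return np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))

rng = np.random.default_rng(2)
x = rng.normal(size=2048)
assert spectral_distance(x, x) == 0.0
assert abs(correlation(x, x) - 1.0) < 1e-9
```

Identical signals thus score a null spectral distance and a unit correlation, which is the behaviour expected of the streaming method relative to the offline one.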
\begin{table}[ht]
\caption{\itshape Quantitative reconstruction evaluation}
\centering
\begin{tabular}{|c|c|c|}
\hline Method & Spectral distance & Correlation\\\hline
simple & 0.00 & 0.00 \\
overlap-add & 0.00 & 0.00 \\
\textbf{ours} & 0.00 & 0.00 \\\hline
\end{tabular}
%
\label{tab:quantitative_results}
\end{table}
We also evaluate perceptually how those methods compare (offline, simple, overlap-add, cache, and cache nl), and report the results in Table \ref{tab:perceptual_results}.
\begin{table}[ht]
\caption{\itshape Perceptual evaluation}
\centering
\begin{tabular}{|c|c|}
\hline Method & MOS\\\hline
simple & 0.00 \\
overlap-add & 0.00 \\
\textbf{cache} & 0.00 \\
\textbf{cache nl} & 0.00 \\\hline
\end{tabular}
%
\label{tab:perceptual_results}
\end{table}
\subsection{Performances}
Processing high-resolution images or long audio samples with deep convolutional models has a non-negligible cost in terms of memory consumption and computation time. A system with a fixed amount of RAM will therefore be limited in the maximum resolution or duration it can handle for a particular model, as demonstrated in Figure \ref{fig:full_benchmark}.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/full_mem.pdf}
\caption{Memory usage}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/full_time.pdf}
\caption{Computation time}
\end{subfigure}
\caption{Offline audio reconstruction benchmark using RAVE on increasingly long audio samples. In order to account for memory and time measurement errors, we average the two quantities over 10 trials.}
\label{fig:full_benchmark}
\end{figure}
In contrast, processing audio using buffers results in a memory usage that only depends on the buffer size, and not on the total duration of the audio signal. Models operating on buffers can consequently process arbitrarily long audio samples, with a linearly increasing computation time. We compare in Figure \ref{fig:buffer_benchmark} the memory footprint and real-time factor of the \textit{overlap-add} and \textit{cache} methods for increasingly long buffer sizes, and demonstrate that the \textit{cache} method needs less memory and computation time than the \textit{overlap-add} method for identical buffer sizes. Furthermore, the \textit{overlap-add} method needs buffers at least $\sim 2.5$s long in order to fill the receptive field of the model in its entirety. On the contrary, the \textit{cache} method can use buffers as small as the compression ratio of the model, allowing the use of RAVE on arbitrarily long audio signals using less than 50MiB of RAM.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/buffer_mem.pdf}
\caption{Memory usage (lower is better)}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\includegraphics[width=\linewidth]{img/buffer_time.pdf}
\caption{Real-time factor (lower is better)}
\end{subfigure}
\caption{Buffer based audio reconstruction benchmark using RAVE on increasingly long audio samples for two methods, overlap-add (ola) and our method (cache). Dotted lines indicate that the model is operating on buffers that are smaller than its receptive field, degrading its modelling abilities. In order to account for memory and time measurement errors, we average the two quantities over 10 trials.}
\label{fig:buffer_benchmark}
\end{figure}
\subsection{Integration inside real-time frameworks}
We build several applications to showcase the real-time use of such models: a Max/MSP external (macOS and Windows), a PureData external (Linux), and a VST plugin (macOS and Windows).
Using a lightweight RAVE model, we also show that we can obtain 48kHz unconditional generation in real-time on a non-overclocked Raspberry Pi 4.
\section{Conclusion and Future Perspectives}
Usual deep learning applications work in an offline fashion, whereas classical DSP operates on live audio streams. This makes the use of deep learning models complex, as they do not integrate well with existing creative workflows. We proposed a method to circumvent this problem and deploy deep learning models inside real-time frameworks, using cached convolutions, causal padding and a careful temporal alignment of the parallel computation branches present inside the neural network. We applied our method to the RAVE model, thereby proposing the first CNN model operating on live audio streams without any approximation compared to its offline version.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
Cosmological observations suggest the existence of mysterious elements
in the history of the Universe,
such as the inflationary evolution at the beginning of the Universe,
and dark matter and dark energy at the present day.
General relativity (GR)
with these new elements on the right-hand side of the Einstein equation
as the energy-momentum sources
is mathematically equivalent to a certain modification of GR
where the left-hand side of the Einstein equation is modified
by new gravitational degrees of freedom
in addition to the metric tensor \cite{Clifton:2011jh,Berti:2015itd}.
In many cases,
the modification of GR can be described by
a scalar-tensor theory of gravitation
at least in a certain regime~\cite{Fujii:2003pa}.
Realistic modification of GR should not contain
the so-called Ostrogradsky ghosts
associated with the higher-derivative interactions~\cite{Woodard:2015zca},
and
should be endowed with
the mechanisms that could suppress the extra gravitational degrees of freedom
around the locally gravitating sources~\cite{Vainshtein:1972sx,Brax:2004qh},
in order to be compatible with the tests of GR in the weak gravity regime.
Scalar-tensor theories that could satisfy all these requirements
typically belong to the so-called generalized Galileon / Horndeski
theory~\cite{Horndeski:1974wa,Deffayet:2009mn,Deffayet:2011gz,Kobayashi:2011nu}
where the equations of motion are given by the second-order differential equations
despite the existence of the higher derivative interactions in the Lagrangian.
While the scalar-tensor Horndeski theory has been extensively investigated,
it is also interesting to look for the similar theories for the other field species.
In this paper, we consider a class of the generalized vector-tensor theories of gravitation
where the equations of motion are given by the second-order differential equations.
It was shown that if the gauge symmetry of the vector field is preserved,
the Galileon-like extension of the vector field theory does not exist
and only the Maxwell kinetic term is allowed \cite{Deffayet:2013tca}.
A way out for this no-go theorem was to abandon the gauge symmetry.
The introduction of the mass term of the vector field breaks the gauge symmetry.
In the vector field theory with the mass term $m^2 A_\mu A^\mu$,
where $A_\mu$ is the vector field and the Greek indices $(\mu,\nu,...)$ run over the four-dimensional spacetime,
the so-called Proca theory,
the vector field contains the three propagating degrees of freedom,
namely one longitudinal and two transverse degrees of freedom.
The generalization of the massive vector field theory to the Galileon-like theory
was first investigated in Refs. \cite{Tasinato:2014eka,Heisenberg:2014rta},
and then extended in Refs. \cite{Allys:2015sht,Jimenez:2016isa,DeFelice:2016cri,DeFelice:2016yws,DeFelice:2016uil}
including the generalization of the interaction of the field strength with the double dual of the Riemann tensor \cite{Horndeski1976}.
In the generalized Proca theory,
the screening mechanism and cosmology have been investigated in Refs. \cite{DeFelice:2016cri,DeFelice:2016yws,DeFelice:2016uil}.
In this paper,
we will investigate the static and spherically symmetric solutions
in the subclass of the generalized Proca theory
which possesses the nonminimal coupling of the vector field to the Einstein tensor
$G^{\mu\nu} A_\mu A_\nu$,
where $G_{\mu\nu}$ is the Einstein tensor associated with the metric $g_{\mu\nu}$.
First, we will show that
the solutions in the scalar-tensor Horndeski theory with the nonminimal derivative coupling
to the Einstein tensor $G^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi$
\cite{Sushkov:2009hk,Saridakis:2010mf,Germani:2010gm,Germani:2011ua,Gubitosi:2011sg}
can also be those in the above generalized Proca theory
with the vanishing field strength.
In this subclass of the scalar-tensor Horndeski theory,
the static and spherically symmetric solutions have been obtained
in Refs. \cite{Babichev:2013cya,Rinaldi:2012vy,Minamitsuji:2013ura,Anabalon:2013oea}
(see also
Refs. \cite{Kobayashi:2014eva,Charmousis:2014zaa,Babichev:2015rva} for the more general theories
and
Refs. \cite{Silva:2016smx,Babichev:2016rlq} for the reviews),
and the solutions particularly relevant for astrophysics or cosmology
are the stealth Schwarzschild and the Schwarzschild- (anti-) de Sitter solutions
which were originally obtained in Ref. \cite{Babichev:2013cya}.
On the other hand,
the nonexistence of the black hole solutions with the massive vector field charge
has been proven by Bekenstein in Refs. \cite{Bekenstein:1971hc,Bekenstein:1972ky}.
As shown in recent work \cite{Chagoya:2016aar}, however,
the no-hair argument can be avoided
once the nonminimal coupling $G^{\mu\nu} A_\mu A_\nu$ is introduced.
\footnote{
As argued in Ref.~\cite{Herdeiro:2016tmi},
the no-hair argument for the Proca theory can also be circumvented
for the complex massive Proca field.}
This corresponds to the simplest class of the ghost-free bilinear nonminimal couplings of the vector field
to the divergence-free Lovelock tensors \cite{Geng:2015kvs}.
Moreover,
as argued in Ref. \cite{Jimenez:2014rna},
the nonminimal coupling of the vector field to the Einstein tensor, $G^{\mu\nu} A_\mu A_\nu$,
can also arise from the quadratic gravitational theory in the Weyl geometry.
References \cite{Chagoya:2016aar,Geng:2015kvs} have obtained the black hole solutions
for a particular value of the nonminimal coupling constant.
Reference \cite{Fan:2016jnz} has investigated the black hole solutions in the generalized Proca theory
with the nonminimal coupling $R A^\mu A_\mu$,
where $R$ is the Ricci scalar curvature.
In this paper,
we will extend these former attempts in Refs. \cite{Chagoya:2016aar,Geng:2015kvs},
clarify the relations among the solutions,
and also investigate the first-order slow-rotation corrections within the Hartle-Thorne approximation \cite{Hartle:1967he,Hartle:1968si}
along the line of Refs. \cite{Maselli:2015yva,Cisterna:2015uya}.
The paper is organized as follows:
In Sec. \ref{sec:action}, we will introduce the generalized Proca theory with the nonminimal coupling to the Einstein tensor
and derive the equations of motion.
In Sec. \ref{sec:solutions},
we will show how the solutions in the scalar-tensor theory
can be described in the generalized Proca theory with the vanishing field strength.
In Sec. \ref{sec:solutions2},
we will obtain the solutions with the Coulomb potential in the temporal component of the vector field.
In Sec. \ref{sec:others},
we will explore the solutions with other forms of the vector field.
In Sec. \ref{sec:slow},
we will investigate the first-order slow-rotation corrections
to the static and spherically symmetric solutions.
The last section, Sec. \ref{sec:conclusions}, is devoted to the concluding remarks.
\section{The generalized Proca theory with the nonminimal coupling to the Einstein tensor}
\label{sec:action}
In this paper, we consider the generalized Proca theory given by
\begin{align}
\label{eq:action}
S
&=
\int d^4x \sqrt{-g}
\left[
\frac{m_p^2}{2}
\left(R-2\Lambda\right)
-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
\right.
\nonumber\\
&\left.
-\left(m^2 g_{\mu\nu}-\beta G_{\mu\nu}\right)
A^\mu A^\nu
\right],
\end{align}
where
$g_{\mu\nu}$ is the metric tensor,
$R$ and $G_{\mu\nu}$ are the Ricci scalar and the Einstein tensor associated with $g_{\mu\nu}$,
$m_p$ and $\Lambda$ are the reduced Planck mass and the cosmological constant, respectively.
$A_\mu$ is the vector field,
$m$ and $\beta$ are the mass and the (dimensionless) nonminimal coupling constant of the vector field,
and
$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the field strength.
After integrating by parts and dropping the boundary terms,
the action \eqref{eq:action} can be rewritten as
\begin{align}
\label{eq:action2}
S
&=
\int d^4x \sqrt{-g}
\left[
\frac{m_p^2}{2}
\left(R-2\Lambda\right)
-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
-m^2 g_{\mu\nu} A^\mu A^\nu
\right.
\nonumber\\
&\left.
+\beta
\left(
(\nabla_\mu A^\mu)^2
-\nabla_\mu A_\nu \nabla^\nu A^\mu
-\frac{1}{2}A_\mu A^{\mu} R
\right)
\right].
\end{align}
According to the formulation in Refs. \cite{Heisenberg:2014rta,DeFelice:2016cri,DeFelice:2016yws},
the action \eqref{eq:action2} corresponds to the case that
\begin{align}
G_2(X)=2m^2 X-\Lambda m_p^2,
\quad
G_4(X)= \frac{m_p^2}{2}+\beta X,
\end{align}
with $c_2=0$ and $G_3(X)=G_5(X)=0$, where we have defined $X:=-\frac{1}{2} g^{\mu\nu}A_\mu A_\nu$.
The action \eqref{eq:action} is quadratic in $A_\mu$ and hence reflection-symmetric under $A_\mu\to -A_\mu$.
Thus if a set of the metric and the vector field $(g_{\mu\nu},A_\mu)$ is a solution of the theory \eqref{eq:action2},
the set $(g_{\mu\nu},-A_\mu)$ is also a solution of it.
Varying the action \eqref{eq:action} with respect to
the metric $g_{\mu\nu}$ and the vector field $A_\mu$,
respectively,
the Einstein equation and the vector field equation of motion are obtained as
\begin{widetext}
\begin{subequations}
\label{eoms}
\begin{align}
\label{eom_a}
m_p^2
\left(
G_{\mu\nu}+\Lambda g_{\mu\nu}
\right)
&=
\left(
F_{\mu\rho}F_{\nu}{}^{\rho}
-\frac{1}{4}g_{\mu\nu}F^{\rho\sigma}F_{\rho\sigma}
\right)
+2m^2 \left(A_\mu A_\nu-\frac{1}{2}g_{\mu\nu} A^\rho A_\rho\right)
\nonumber\\
&+\beta \left(
A^\rho A_\rho G_{\mu\nu}
+A_\mu A_\nu R
\right)
\nonumber\\
&-\beta g_{\mu\nu}
\left[
\left(\nabla_\rho A^\rho\right)^2
-2\nabla_\rho A_\sigma \nabla^\rho A^\sigma
+\nabla_\rho A_\sigma \nabla^\sigma A^\rho
-2A_\rho\Box A^\rho
+2 A^\rho \nabla_\rho\nabla_\sigma A^\sigma
\right]
\nonumber\\
&-2\beta
\left[
\nabla_\mu A_\rho \nabla_\nu A^\rho
-\nabla_\rho A^\rho \nabla_{(\mu}A_{\nu)}
-\nabla_\rho A_{(\mu} \nabla_{\nu)} A^\rho
+\nabla_\rho A_\mu \nabla^\rho A_\nu
+ A_\rho \nabla_{(\mu}\nabla_{\nu)} A^\rho
\right.
\nonumber\\
&\left.
-A^\rho \nabla_\rho \nabla_{(\mu} A_{\nu)}
+A_{(\mu} \Box A_{\nu)}
-2 A_{(\mu}\nabla_{\nu)}\nabla_\sigma A^\sigma
+A_{(\mu|} \nabla_{\rho}\nabla_{|\nu)} A^\rho
\right],
\\
\label{eom_b}
\nabla_\mu F^{\mu\nu}
&=2
\left(m^2g^{\mu\nu}-\beta G^{\mu\nu}\right) A_\mu.
\end{align}
\end{subequations}
\end{widetext}
As expected, the equations of motion \eqref{eoms} are second-order differential equations.
Acting with $\nabla_\nu$ on Eq. \eqref{eom_b} and using the identity $\nabla_\nu\nabla_\mu F^{\mu\nu}=0$,
which follows from the antisymmetry of $F^{\mu\nu}$, we obtain
\begin{align}
\label{constr}
\nabla_\nu
\left[
\left(m^2g^{\mu\nu}-\beta G^{\mu\nu}\right) A_\mu
\right]
=0,
\end{align}
which gives the constraint relation
among the four components of the vector field $A_\mu$,
leaving the three physical degrees of freedom,
namely one longitudinal and two transverse degrees of freedom.
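The identity $\nabla_\nu \nabla_\mu F^{\mu\nu}=0$ invoked above follows from the antisymmetry of $F^{\mu\nu}$. As an illustrative consistency check (a sketch, not part of the derivation), the flat-spacetime version $\partial_\nu\partial_\mu F^{\mu\nu}=0$ can be verified symbolically with sympy for a completely generic vector field:

```python
import sympy as sp

# Flat-spacetime check that the double divergence of the field strength
# vanishes identically, d_nu d_mu F^{mu nu} = 0, by antisymmetry of F.
t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
A = [sp.Function(f'A{i}')(*coords) for i in range(4)]  # generic vector field

eta = sp.diag(-1, 1, 1, 1)  # Minkowski metric, signature (-,+,+,+)
F_dn = sp.Matrix(4, 4,
                 lambda mu, nu: sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu]))
F_up = eta * F_dn * eta     # raise both indices with the (diagonal) metric

double_div = sum(sp.diff(F_up[mu, nu], coords[mu], coords[nu])
                 for mu in range(4) for nu in range(4))
print(sp.simplify(double_div))  # -> 0
```

The same pairwise cancellation by antisymmetry, combined with the commutator identities for covariant derivatives, underlies the curved-spacetime identity used in the text.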
\section{From the scalar-tensor theory to the generalized Proca theory}
\label{sec:solutions}
In this section, we show
how the static and spherically symmetric solutions in the scalar-tensor theory \eqref{eq:action3}
are expressed in the generalized Proca theory \eqref{eq:action}
with the vanishing electric field strength.
\subsection{The correspondence}
We assume that the vector field $A_\mu$ can be decomposed
into the part given by the gradient of the scalar function $\varphi$
and the remaining vector field part $B_\mu$,
\begin{align}
\label{expansion}
A_\mu=\partial_\mu \varphi + B_\mu.
\end{align}
Since the scalar function obviously does not contribute to the field strength
$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu =\partial_\mu B_\nu -\partial_\nu B_\mu=: F^{(B)}_{\mu\nu}$,
plugging Eq. \eqref{expansion} into the action \eqref{eq:action},
we obtain
\begin{align}
\label{eq:action_int}
S
&=
\int d^4x \sqrt{-g}
\left[
\frac{m_p^2}{2}
\left(R-2\Lambda\right)
-\frac{1}{4}F^{(B)}_{\mu\nu}F^{(B)\mu\nu}
\right.
\nonumber \\
&\left.
-\left(m^2 g_{\mu\nu}-\beta G_{\mu\nu}\right)
\left(\nabla^\mu\varphi +B^{\mu} \right)
\left(\nabla^\nu\varphi +B^{\nu} \right)
\right].
\end{align}
In the case of $B_\mu= 0$,
namely when the vector field is given by the gradient of the scalar function $A_\mu= \partial_\mu \varphi$,
the action \eqref{eq:action_int}
reduces to the shift-symmetric scalar-tensor theory
with the nonminimal derivative coupling to the Einstein tensor,
\begin{align}
\label{eq:action3}
S
&=
\int d^4x \sqrt{-g}
\left[
\frac{m_p^2}{2}
\left(R-2\Lambda\right)
\right.
\nonumber\\
&\left.
-\left(m^2 g_{\mu\nu}-\beta G_{\mu\nu}\right)
\nabla^\mu\varphi
\nabla^\nu\varphi
\right],
\end{align}
which involves the metric $g_{\mu\nu}$ and the scalar field $\varphi$ as the physical degrees of freedom
\cite{Sushkov:2009hk,Saridakis:2010mf,Germani:2010gm,Germani:2011ua,Gubitosi:2011sg}.
The static and spherically symmetric black hole solutions in the scalar-tensor theory \eqref{eq:action3}
have been investigated in
Refs. \cite{Babichev:2013cya,Rinaldi:2012vy,Minamitsuji:2013ura,Anabalon:2013oea,Silva:2016smx,Babichev:2016rlq}
under the static and spherically symmetric metric ansatz
\begin{align}
\label{metric_ansatz}
ds^2=-f(r)dt^2+\frac{dr^2}{h(r)}+r^2(d\theta^2+\sin^2\theta d\phi^2),
\end{align}
where $t$ and $r$ are the time and radial coordinates,
$\theta$ and $\phi$ are the polar and azimuthal coordinates of the two-sphere,
respectively,
and $f(r)$ and $h(r)$ are the functions of only the radial coordinate $r$.
In the static and spherically symmetric background \eqref{metric_ansatz},
the vector field has the nonvanishing $t$ and $r$ components
\begin{align}
\label{proca_ansatz}
A_\mu=\left(A_0(r), A_1(r),0,0\right),
\end{align}
where $A_0(r)$ and $A_1(r)$ are also functions of $r$ only.
Because of the reflection symmetry of the generalized Proca theory \eqref{eq:action}
and the absence of the cross terms of $A_0(r)$ and $A_1(r)$
(and their derivatives)
in the Einstein equation \eqref{eom_a} under the ansatz Eqs. \eqref{metric_ansatz} and \eqref{proca_ansatz},
if a set $\left(A_0(r), A_1(r)\right)=\left(c_0(r),c_1(r)\right)$ is a solution,
other sets $\left(A_0(r), A_1(r)\right)=\left(c_0(r), -c_1(r)\right), \left(-c_0(r),c_1(r)\right), \left(-c_0(r), -c_1(r)\right)$
are also solutions.
Among these, in each case we will present the two independent branches $\left(A_0(r), A_1(r)\right)=\left(c_0(r), \pm c_1(r)\right)$.
We then derive the condition that the solution in the generalized Proca theory \eqref{eq:action}
is also the solution in the scalar-tensor theory \eqref{eq:action3}
within the ansatz Eqs. \eqref{metric_ansatz} and \eqref{proca_ansatz}.
Imposing the condition $B_\mu=0$,
the only nontrivial component of the field strength satisfies $F_{rt}= A_0'(r)=0$,
which upon integration gives
\begin{align}
\label{p}
A_0(r)= P,
\end{align}
where $P$ is a constant.
Then from Eq. \eqref{expansion} with $B_\mu=0$, we identify
\begin{align}
\label{derives}
\partial_t \varphi=P,
\quad
\partial_r \varphi=A_1(r).
\end{align}
Further integrating Eq. \eqref{derives},
the scalar function $\varphi$ is found to take the form of
\begin{align}
\label{scalar_ansatz}
\varphi(t,r)= P\, t +\psi(r),
\quad
\psi(r):=\int dr A_1(r).
\end{align}
The scalar function of the form \eqref{scalar_ansatz} is
exactly the same as in the black hole solutions found in the scalar-tensor theory \eqref{eq:action3}.
(See Ref. \cite{Babichev:2013cya}
for $P\neq 0$
and Refs. \cite{Rinaldi:2012vy,Minamitsuji:2013ura,Anabalon:2013oea}
for $P=0$.)
Thus Eq. \eqref{derives} prescribes how to express the solution
in the scalar-tensor theory \eqref{eq:action3} in the generalized Proca theory \eqref{eq:action}
with the vanishing field strength.
On the other hand,
the solution with the nonconstant $A_0(r)$,
giving rise to the nonvanishing electric field strength $F_{rt}=A_0'(r)$,
has no counterpart in the scalar-tensor theory \eqref{eq:action3}.
In the rest of this section,
we focus on the case that $B_\mu=0$
and show how the solutions in the scalar-tensor theory \eqref{eq:action3}
discussed in Refs. \cite{Babichev:2013cya,Rinaldi:2012vy,Minamitsuji:2013ura,Anabalon:2013oea}
can be expressed in the generalized Proca theory \eqref{eq:action}.
\subsection{The stealth Schwarzschild solution}
The first example of the static and spherically symmetric solution
in the scalar-tensor theory \eqref{eq:action3}
is the stealth Schwarzschild solution obtained for $m=\Lambda=0$ \cite{Babichev:2013cya}.
In the generalized Proca theory \eqref{eq:action},
for general $P$ in Eq. \eqref{p} the solution is given by
\begin{subequations}
\label{eq:stealth}
\begin{align}
f(r)&
=h(r)
=1-\frac{2M}{r},
\\
A_1(r)&=
\pm \sqrt{\frac{2M}{r}}\frac{P}{f},
\end{align}
\end{subequations}
where $M$ is an integration constant that physically corresponds to the mass of the black hole.
The parameter $M$ appearing in the solutions discussed in the rest of this paper
has the same physical meaning.
This is the stealth black hole solution
in the sense that the amplitude of the vector field $P$ does not appear in the metric.
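A characteristic feature of the stealth configuration \eqref{eq:stealth} is that the norm of the vector field is constant, $X=-\frac{1}{2}A_\mu A^\mu=P^2/2$. A minimal sympy sketch of this check, using the inverse metric components $g^{tt}=-1/f$ and $g^{rr}=h=f$:

```python
import sympy as sp

# Check that X = -(1/2) g^{mu nu} A_mu A_nu is constant on the stealth
# Schwarzschild solution: A_0 = P, A_1 = +/- sqrt(2M/r) P/f, f = h = 1 - 2M/r.
r, M, P = sp.symbols('r M P', positive=True)
f = 1 - 2*M/r
A0 = P
A1 = sp.sqrt(2*M/r)*P/f

# inverse metric: g^{tt} = -1/f, g^{rr} = h = f
X = -sp.Rational(1, 2)*(-A0**2/f + f*A1**2)
print(sp.simplify(X))  # -> P**2/2
```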
Introducing the tortoise coordinate $dr_\ast=\frac{dr}{f}$, the vector field can be written as
\begin{align}
A_\mu dx^\mu
=P\left(dt\pm \sqrt{\frac{2M}{r}} dr_\ast\right),
\end{align}
which near the event horizon $r= 2M$ reduces to
\begin{align}
\label{stealth_vector}
A_\mu dx^\mu
\approx
P\left(dt\pm dr_\ast\right)
=
P
\times
\begin{cases}
dv \\
du ,
\end{cases}
\end{align}
where $v:=t+r_\ast$ and $u:= t-r_\ast$
are the advanced and retarded null coordinates.
The null coordinates $v$ and $u$ are regular at the future and past event horizons,
respectively,
ensuring the regularity of the scalar field there for each branch
in the context of the scalar-tensor theory \eqref{eq:action3}
\cite{Babichev:2013cya,Kobayashi:2014eva,Babichev:2015rva}.
\subsection{The Schwarzschild- (anti-) de Sitter solution}
\label{sec:selftune}
Similarly, for $P=\pm \frac{m_p}{m} \sqrt{\frac{m^2+\beta\Lambda}{2\beta}}$ in Eq. \eqref{p}
the Schwarzschild- (anti-) de Sitter solution
in the scalar-tensor theory \eqref{eq:action3} obtained in Ref. \cite{Babichev:2013cya}
is also expressed in the generalized Proca theory by
\begin{subequations}
\label{eq:selftuned}
\begin{align}
\label{eq:selftuned1}
f(r)&
=h(r)
=1-\frac{2M}{r}+\frac{m^2}{3\beta}r^2,
\\
A_1(r)&
=
\pm
\frac{m_p}{m}
\sqrt{-\frac{(m^2+\beta\Lambda) (m^2 r^3-6M\beta)}{6\beta^2 r}}
\frac{1}{f(r)},
\end{align}
\end{subequations}
where the bare value of the cosmological constant $\Lambda$ does not appear
in the metric functions $f(r)$ and $h(r)$,
and from the metric functions \eqref{eq:selftuned1} the effective cosmological constant is read off as
$\Lambda_{\rm eff}=-\frac{m^2}{\beta}$.
Thus the spacetime is either asymptotically de Sitter or anti- de Sitter
for $\beta<0$ and $\beta>0$,
respectively.
The positivity of the argument of the square root in $A_0(r)=P$ requires
$\Lambda\geq -\frac{m^2}{\beta}$, irrespective of the sign of $\beta$.
Thus for $\beta>0$, $\Lambda$ can be either positive or negative,
while for $\beta<0$, $\Lambda$ is always positive.
For $m^2= -\beta\Lambda$,
$A_1(r)$ vanishes and
the solution \eqref{eq:selftuned} reduces to the Schwarzschild- (anti-) de Sitter solution in GR
with the cosmological constant $\Lambda$,
\begin{align}
\label{sch_ds}
f(r)=h(r)=1-\frac{2M}{r}-\frac{\Lambda}{3}r^2.
\end{align}
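That the solution collapses to GR at this point can be checked directly by substituting $m^2=-\beta\Lambda$ into Eq. \eqref{eq:selftuned}; a minimal sympy sketch:

```python
import sympy as sp

# Substitute m^2 = -beta*Lambda into the metric function of Eq. (eq:selftuned1)
# and into the amplitude (m^2 + beta*Lambda) under the square root of A_1(r).
r, M, b, L = sp.symbols('r M beta Lambda')
m2 = -b*L                      # the special point m^2 = -beta*Lambda

f = 1 - 2*M/r + m2*r**2/(3*b)  # f(r) = 1 - 2M/r + m^2 r^2/(3 beta)
print(sp.simplify(f - (1 - 2*M/r - L*r**2/3)))  # -> 0: Schwarzschild-(A)dS in GR

amp2 = m2 + b*L                # factor under the square root of A_1(r)
print(sp.simplify(amp2))       # -> 0, so A_1(r) vanishes identically
```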
In terms of the tortoise coordinate $dr_\ast=\frac{dr}{f}$, the vector field reads
\begin{align}
A_\mu dx^\mu
=
\frac{m_p}{m}
\sqrt{\frac{m^2+\beta\Lambda}{2\beta}}
\left(dt\pm \sqrt{\frac{-m^2 r^3+6M\beta}{3\beta r}} dr_\ast\right),
\end{align}
which near the (event or cosmological) horizons $r\approx r_h$,
where $r_h$ satisfies $-m^2r_h^3+6M\beta=3\beta r_h$ (i.e., $f(r_h)=0$),
becomes
\begin{align}
A_\mu dx^\mu
&\approx
\frac{m_p}{m}
\sqrt{\frac{m^2+\beta\Lambda}{2\beta}}
\times
\begin{cases}
dv \\
du ,
\end{cases}
\end{align}
where $v$ and $u$ are the null coordinates defined below Eq. \eqref{stealth_vector}.
In the limit of $\beta\to-\frac{m^2}{\Lambda}$ the vector field trivially vanishes and
the Schwarzschild- (anti-) de Sitter solution in GR with the cosmological constant $\Lambda$ is recovered.
The null coordinate $v$ is regular at the future event and past cosmological (only for $\beta<0$) horizons,
while
the null coordinate $u$ is regular at the past event and future cosmological (only for $\beta<0$) horizons,
ensuring the regularity of the scalar field there for each branch
in the context of the scalar-tensor theory \eqref{eq:action3}
\cite{Babichev:2013cya,Kobayashi:2014eva,Babichev:2015rva}.
\subsection{The asymptotically anti- de Sitter solution}
Finally,
for $P=0$ in Eq. \eqref{p},
for which the scalar field $\varphi$ in the theory \eqref{eq:action3} is time independent,
the asymptotically anti- de Sitter solution obtained in Refs. \cite{Rinaldi:2012vy,Minamitsuji:2013ura,Anabalon:2013oea}
is expressed in the generalized Proca theory by
\begin{subequations}
\label{nonflat}
\begin{align}
f(r)&=\frac{1}
{3mr\beta \left(m^2-\beta\Lambda\right)^2}
\left[
m^7 r^3
-3m r\beta^3\Lambda^2
\right.
\nonumber\\
&\left.
+m^3 r\beta^2\Lambda \left(-6+r^2\Lambda\right)
+m^5\beta \left(9r-2r^3\Lambda -24M\right)
\right.
\\
&\left.
+3\beta^{\frac{3}{2}}
\left(m^2+\beta\Lambda\right)^2{\rm arctan}\left(\frac{mr}{\sqrt{\beta}}\right)
\right],
\nonumber\\
h(r)&=\frac{ \left(m^2-\beta\Lambda\right)^2\left(m^2r^2+\beta\right)^2}
{m^4\left(m^2r^2+\beta \left(2-r^2\Lambda\right)\right)^2}
f(r),
\\
\label{adsa1}
A_1(r)
&=
\pm
\sqrt{
-\frac{m^2+\beta\Lambda}
{2\beta \left(m^2r^2+\beta\right)h(r)}}
m_p r.
\end{align}
\end{subequations}
We require $\beta>0$ so that the domain of $r$ is given by $0<r<\infty$.
Then,
in order for $A_1(r)$ to be real outside the event horizon, where $h(r)>0$,
we find from Eq. \eqref{adsa1}
that $\Lambda\leq -\frac{m^2}{\beta}$.
From the large-$r$ limit of the metric functions,
\begin{subequations}
\begin{align}
f(r)&\approx \frac{m^2r^2}{3\beta}
+\frac{3m^2+\beta\Lambda}
{m^2-\beta\Lambda}
+{\cal O}\left(\frac{1}{r}\right),
\\
h(r)&\approx \frac{m^2r^2}{3\beta}
+\frac{7m^2+\beta\Lambda}
{3\left(m^2-\beta\Lambda\right)}
+{\cal O}\left(\frac{1}{r}\right),
\end{align}
\end{subequations}
we find that
the effective cosmological constant is
given by $\Lambda_{\rm eff}=-\frac{m^2}{\beta}<0$,
and hence the spacetime is asymptotically anti- de Sitter.
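The quoted large-$r$ coefficients can be reproduced symbolically; the following sympy sketch (with $f(r)$ transcribed from Eq. \eqref{nonflat}) extracts the leading $r^2$ coefficient, giving $\Lambda_{\rm eff}=-\frac{m^2}{\beta}$, together with the constant term of $f(r)$:

```python
import sympy as sp

# Large-r behaviour of f(r) in the asymptotically AdS solution (nonflat).
r, M, m, b, L = sp.symbols('r M m beta Lambda', positive=True)
num = (m**7*r**3 - 3*m*r*b**3*L**2 + m**3*r*b**2*L*(-6 + r**2*L)
       + m**5*b*(9*r - 2*r**3*L - 24*M)
       + 3*b**sp.Rational(3, 2)*(m**2 + b*L)**2*sp.atan(m*r/sp.sqrt(b)))
f = num/(3*m*r*b*(m**2 - b*L)**2)

lead = sp.limit(f/r**2, r, sp.oo)          # coefficient of r^2
const = sp.limit(f - lead*r**2, r, sp.oo)  # constant term
print(sp.simplify(lead - m**2/(3*b)))
print(sp.simplify(const - (3*m**2 + b*L)/(m**2 - b*L)))
```

Both printed differences vanish, matching the expansion quoted in the text.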
For the parameters satisfying the above bound,
the function $f(r)$ has a single root that corresponds to the position of the unique event horizon.
For $\Lambda<\frac{m^2}{\beta}$,
the point $r=\sqrt{\frac{2\beta}{\beta\Lambda-m^2}}$ is not a curvature singularity.
For $m^2= -\beta\Lambda$ which for $\beta>0$ requires $\Lambda<0$,
$A_1(r)$ vanishes and
the solution \eqref{nonflat} reduces to the Schwarzschild- anti- de Sitter in GR
with the cosmological constant $\Lambda$, Eq. \eqref{sch_ds}.
\section{The case of the vector field with the form of the Coulomb potential}
\label{sec:solutions2}
In this section,
we consider the case that the temporal component of the vector field $A_0(r)$
is given by the Coulomb potential in addition to the constant term $P$,
\begin{align}
\label{pq}
A_0(r)=P+\frac{Q}{r},
\end{align}
where the constant $Q$ corresponds to the electric charge.
For $m=\beta=0$, where the gauge symmetry is recovered,
the Reissner-Nordstr\"om-(anti-) de Sitter solution is given by
\begin{subequations}
\label{reissner}
\begin{align}
f(r)&=h(r)=1-\frac{\Lambda}{3}r^2 -\frac{2M}{r}+\frac{Q^2}{2m_p^2 r^2},
\\
A_1(r)&=0.
\end{align}
\end{subequations}
\subsection{The stealth Schwarzschild solution}
First, we consider the case of $m=0$ and $\Lambda=0$.
As argued in Ref. \cite{Chagoya:2016aar},
only for $\beta=\frac{1}{4}$ is the stealth Schwarzschild solution \eqref{eq:stealth}
obtained, given by
\begin{subequations}
\label{stealth2}
\begin{align}
\label{stealth2_1}
f(r)&
=h(r)=1-\frac{2M}{r},
\\
\label{stealth2_2}
A_1(r)
&=
\pm
\frac{\sqrt{Q^2+2P\left(Q+MP\right)r}}
{r}
\frac{1}{f(r)}.
\end{align}
\end{subequations}
The positivity of the argument of the square root in Eq. \eqref{stealth2_2} for arbitrary $r$
requires $P\left(Q+MP\right)\geq 0$.
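In the limit $Q\to 0$, Eq. \eqref{stealth2_2} smoothly reduces to the stealth solution \eqref{eq:stealth}; a minimal sympy sketch:

```python
import sympy as sp

# Q -> 0 limit of A_1(r) in Eq. (stealth2_2) versus Eq. (eq:stealth).
r, M, P, Q = sp.symbols('r M P Q', positive=True)
f = 1 - 2*M/r

A1_charged = sp.sqrt(Q**2 + 2*P*(Q + M*P)*r)/(r*f)
A1_stealth = sp.sqrt(2*M/r)*P/f

delta = A1_charged.subs(Q, 0) - A1_stealth
print(sp.simplify(delta))  # -> 0
```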
\subsection{The Schwarzschild- (anti-) de Sitter solution}
We then consider the case that $m^2\neq 0$ and $\Lambda\neq 0$,
where the generalization of the Schwarzschild- (anti-) de Sitter solution \eqref{eq:selftuned} is obtained only for $\beta=\frac{1}{4}$.
In the case of $m^2>0$,
only for $P=\pm \frac{m_p}{\sqrt{2}m} \sqrt{4m^2+\Lambda}$ in Eq. \eqref{pq}
the Schwarzschild- anti- de Sitter solution
is obtained by
\begin{subequations}
\label{selftune2}
\begin{align}
f(r)&
=h(r)
=1-\frac{2M}{r}+\frac{4m^2}{3}r^2,
\\
\label{selftune2_c}
A_1(r)
&=\pm
\frac{1}{\sqrt{3} m r f(r)}
\left[
m_p^2 r (3M- 2m^2 r^3) (4m^2+\Lambda)
\right.
\nonumber\\
&\left.
\pm 3 m m_p r\sqrt{2\left(4m^2+\Lambda\right)} Q
+3m^2 Q^2
\right]^{\frac{1}{2}}.
\end{align}
\end{subequations}
Similarly in the case of $m^2<0$,
only for $P=\pm \frac{m_p}{\sqrt{2}|m|}\sqrt{4|m|^2-\Lambda}$ in Eq. \eqref{pq},
the Schwarzschild- de Sitter solution is obtained by
\begin{subequations}
\label{selftune3}
\begin{align}
f(r)&
=h(r)
=1-\frac{2M}{r}-\frac{4|m|^2}{3}r^2,
\\
\label{selftune3_c}
A_1(r)
&=
\pm
\frac{1}{\sqrt{3}|m|rf(r)}
\left[
m_p^2 r (3M+ 2|m|^2 r^3) (4|m|^2-\Lambda)
\right.
\nonumber \\
&\left.
\pm 3 |m| m_p r\sqrt{2\left(4|m|^2-\Lambda\right)} Q
+3|m|^2 Q^2
\right]^{\frac{1}{2}}.
\end{align}
\end{subequations}
For the solution \eqref{selftune2},
in order for $A_0(r)$ to be real, we require $\Lambda \geq -4m^2$.
For large $r$, however, the combination inside the square root in Eq. \eqref{selftune2_c}
always becomes negative, and hence the solution \eqref{selftune2} may not be regarded as physical.
On the other hand,
for the solution \eqref{selftune3},
in order for $A_0(r)$ to be real, we require $\Lambda <4|m|^2$;
the positivity of the argument of the square root in Eq. \eqref{selftune3_c}
between the event and cosmological horizons
can then be naturally realized.
\subsection{The asymptotically anti- de Sitter solution}
Finally, we consider the case of $P=0$ in Eq. \eqref{pq}.
As for the other cases,
only for $\beta=\frac{1}{4}$,
the generalization of the asymptotically anti- de Sitter solution \eqref{nonflat} is obtained by
\begin{subequations}
\label{chagoya2}
\begin{align}
f(r)
&=
\frac{1}
{6mr (\Lambda-4m^2)^2}
\nonumber\\
&\times
\left\{-6 \Lambda ^2 mr
+128 m^7 r^3-32 m^5
\left(24 M+2 \Lambda r^3-9 r\right)
\right.
\nonumber\\
&\left.
+8 \Lambda m^3 r \left(\Lambda r^2-6\right)
+3 \left(\Lambda +4
m^2\right)^2 {\rm arctan}(2 m r)
\right\},
\\
h(r)&=\frac{\left(\Lambda -4 m^2\right)^2 \left(4 m^2 r^2+1\right)^2}
{16 m^4 \left(4 m^2 r^2-\Lambda r^2+2\right)^2}
f(r),
\\
\label{chagoya2_3}
A_1(r)
&=\pm
\frac{\sqrt{Q^2\left(1+4m^2 r^2\right)-2m_p^2 \left(\Lambda +4m^2\right)r^4f(r)}}
{r \sqrt{f(r)h(r)} \sqrt{1+4r^2m^2}}.
\end{align}
\end{subequations}
In order for $A_1(r)$ to be real where $f(r)>0$,
we require $\Lambda+4m^2\leq 0$.
From the large-$r$ limit of the metric functions $f(r)$ and $h(r)$,
\begin{align}
f(r)&= \frac{4m^2}{3}r^2+\frac{12m^2+\Lambda}{4m^2-\Lambda} +{\cal O}\Big(\frac{1}{r}\Big),
\nonumber \\
h(r)&= \frac{4m^2}{3}r^2+\frac{28m^2+\Lambda}{12m^2-3\Lambda} +{\cal O}\Big(\frac{1}{r}\Big),
\end{align}
the effective cosmological constant is read off as $\Lambda_{\rm eff}=-4m^2<0$
and hence the spacetime is asymptotically anti- de Sitter.
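As a consistency check, the large-$r$ coefficients of $f(r)$ in Eq. \eqref{chagoya2} can be extracted symbolically, confirming $\Lambda_{\rm eff}=-4m^2$; a sympy sketch:

```python
import sympy as sp

# Large-r behaviour of f(r) in Eq. (chagoya2), transcribed from the text.
r, M, m, L = sp.symbols('r M m Lambda', positive=True)
num = (-6*L**2*m*r + 128*m**7*r**3 - 32*m**5*(24*M + 2*L*r**3 - 9*r)
       + 8*L*m**3*r*(L*r**2 - 6) + 3*(L + 4*m**2)**2*sp.atan(2*m*r))
f = num/(6*m*r*(L - 4*m**2)**2)

lead = sp.limit(f/r**2, r, sp.oo)          # coefficient of r^2
const = sp.limit(f - lead*r**2, r, sp.oo)  # constant term
print(sp.simplify(lead - 4*m**2/3))
print(sp.simplify(const - (12*m**2 + L)/(4*m**2 - L)))
```

Both printed differences vanish, reproducing the expansion quoted above.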
For $M>0$,
the function $f(r)$ has a single root which corresponds to the position of the unique event horizon.
For $\Lambda\leq 4m^2$,
the point $r=\sqrt{\frac{2}{\Lambda-4m^2}}$ is not a curvature singularity.
For $m=\pm \frac{\sqrt{-\Lambda}}{2}$
where the positivity of $m^2$ requires $\Lambda<0$,
the solution \eqref{chagoya2} reduces to the Schwarzschild- anti- de Sitter
\begin{align}
\label{grlimit}
h(r)&=f(r)=1-\frac{2M}{r}-\frac{\Lambda}{3}r^2,
\nonumber\\
A_1(r)&=\pm \frac{Q}{rf}.
\end{align}
Equation \eqref{grlimit} also corresponds to the $m\to \pm \frac{\sqrt{-\Lambda}}{2}$ limit of
Eqs. \eqref{selftune2} and \eqref{selftune3}.
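This limit can be verified by substituting $\Lambda=-4m^2$ into Eq. \eqref{selftune2_c}; a minimal sympy sketch:

```python
import sympy as sp

# Lambda -> -4 m^2 limit of A_1(r) in Eq. (selftune2_c).
r, m, M, Q, mp = sp.symbols('r m M Q m_p', positive=True)
L = -4*m**2                         # the limiting value of the cosmological constant
f = 1 - 2*M/r + 4*m**2*r**2/3       # = 1 - 2M/r - Lambda r^2/3

inner = (mp**2*r*(3*M - 2*m**2*r**3)*(4*m**2 + L)
         + 3*m*mp*r*sp.sqrt(2*(4*m**2 + L))*Q + 3*m**2*Q**2)
A1 = sp.sqrt(inner)/(sp.sqrt(3)*m*r*f)

print(sp.simplify(A1 - Q/(r*f)))  # -> 0, matching Eq. (grlimit)
```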
\section{The other specific solutions}
\label{sec:others}
In this section, we consider the cases
where the temporal component of the vector field
is not given by Eq. \eqref{pq}.
\subsection{The solutions for the more general form of $A_0(r)$}
First, we consider the case
where an additional (inverse) power-law function of the radial coordinate $r$
is added to $A_0(r)$ in Eq. \eqref{pq},
namely,
\begin{align}
\label{pqr}
A_0(r)&
=P+\frac{Q}{r}+Q_p r^p,
\end{align}
where $p$ is a real number ($p\neq -1$)
and $Q_p$ is a constant.
For the temporal component of the vector field given by Eq. \eqref{pqr},
the existence of the solution for an arbitrary $p$
requires $\beta=\frac{1}{4}$, $P=\pm 2m_p$ and $m=\pm \frac{\sqrt{\Lambda}}{2}$ ($\Lambda>0$).
For $p\neq -3, -\frac{3}{2}, -\frac{1}{2}$,
the solution is given by
\begin{widetext}
\begin{subequations}
\label{chagoya3}
\begin{align}
f(r)&=
\frac{1}{4m_p^2 r}
\left[-8 M m_p^2+\frac{4}{3} \Lambda m_p^2
r^3+4 m_p^2 r
\pm 4 m_p Q_p r^{p+1}
\left(\frac{\Lambda (p+1) r^2}{p+3}+1\right)+(p+1)^2
Q_p^2 r^{2 p+1} \left(\frac{\Lambda r^2}{2
p+3}+\frac{1}{2 p+1}\right)\right],
\\
h(r)&=\frac{1}{ \left(1\pm \frac{(p+1) Q_pr^p} {2m_p}\right)^2}f(r),
\\
A_1(r)
&=
\pm
2 \sqrt{3} m_p
\sqrt{(2p+1)(2p+3)(p+3)}
(\pm 2 m_p+(p+1)Q_p r^p)
\nonumber\\
&\times
\left\{
-r
\left[
r \left(
p^3 \big(16 \Lambda
m_p^2 r^2
\pm
48 \Lambda m_p Q_p
r^{p+2}+
3 Q_p^2 r^{2 p}
(11 \Lambda r^2+9)
\big)
\right.
\right.
\right.
\nonumber\\
&\left.
\left.
+p^2
\big(80 \Lambda m_p^2
r^2
\pm
144 \Lambda m_p Q_p r^{p+2}
+3Q_p^2 r^{2 p}
(19 \Lambda r^2+9)
\big)
+3 \Lambda p r^2
(36m_p^2
\pm
44 m_p Q_p r^p
+13 Q_p^2 r^{2 p})
\right.
\right.
\nonumber\\
&\left.
\left.
+9 \Lambda r^2
(\pm 2m_p+Q_p r^p)^2
+6 p^4 Q_p^2 r^{2 p}
(\Lambda r^2+1)
\right)
-24 M m_p^2
(2p+1)(2p+3)(p+3)
\right]
\nonumber\\
&\left.
+6
(2p+1)(2p+3)(p+3)Q r
(\pm 2 m_p+Q_p r^p)
+3
(2p+1)(2p+3)(p+3)Q^2
\right\}^{\frac{1}{2}}
\nonumber\\
&\times
\Big\{
r \left[4 m_p^2
(2p+1)(2p+3)(p+3)(\Lambda r^2+3)
\pm 12 m_p
(2p+1)(2p+3)Q_p r^p
(\Lambda p r^2+p+\Lambda r^2+3)
\right.
\nonumber\\
&
\left.
+3 (p+1)^2 (p+3) Q_p^2 r^{2 p}
(2 p (\Lambda r^2+1)+\Lambda r^2+3)
\right]
-24 M m_p^2
(2p+1)(2p+3)(p+3)
\Big\}^{-1},
\end{align}
\end{subequations}
\end{widetext}
where the upper and lower branches of Eq. \eqref{chagoya3}
correspond to $P=2m_p$ and $P=-2m_p$, respectively.
The point $r= \left[\mp \frac{2m_p}{(p+1)Q_p}\right]^{\frac{1}{p}}$
could be a curvature singularity other than the one at $r=0$.
For $P=2m_p$,
the appearance of the curvature singularity can be avoided
for $Q_p<0$ and $p<-1$ or for $Q_p>0$ and $p>-1$,
while
for $P=-2m_p$
it can be avoided
for $Q_p>0$ and $p<-1$ or for $Q_p<0$ and $p>-1$.
For any value of $p(\neq -3,-\frac{3}{2},-\frac{1}{2})$ and $M>0$,
the function $f(r)$ has a single root
which corresponds to the position of the unique event horizon.
For a larger value of $M$,
the singularity $r= \left[\mp \frac{2m_p}{(p+1)Q_p}\right]^{\frac{1}{p}}$
is hidden by the event horizon.
For the other values of $p=-3,-\frac{3}{2},-\frac{1}{2}$,
similar solutions are obtained only for $\beta=\frac{1}{4}$ and $P=\pm 2m_p$.
Here, we introduce the solution for each case:
\begin{widetext}
\begin{enumerate}
\item
For $p=-3$,
the solution for $P=\pm 2m_p$ is given by
\begin{subequations}
\label{pm3}
\begin{align}
f(r)&=
\frac{1}{15m_p^2 r^6}
\left[
\pm 15 m_p Q_{-3} r^3
-Q_{-3}^2 (3+5r^2\Lambda)
+
5m_p^2 r^5
(-6M + 3 r + r^3\Lambda)
\mp 30 m_p Q_{-3} r^5\Lambda \ln (r)
\right],
\\
h(r)&=\frac{m_p^2 r^6}
{\left(Q_{-3}\mp m_p r^3\right)^2}
f(r),
\\
A_1(r)
&=
\pm
\frac{\sqrt{15} m_p (Q_{-3}\mp m_p r^3)}
{\pm 15 m_p Q_{-3} r^3
-Q_{-3}^2 (3+5r^2\Lambda)
+
5m_p^2 r^5
(-6M + 3 r + r^3\Lambda)
\mp 30 m_p Q_{-3} r^5\Lambda \ln (r)}
\nonumber\\
&\times
\Big\{
30 Q Q_{-3} r^2
+ Q_{-3}^2
\left(27+20r^2\Lambda\right)
+5 r^4
\left(
3Q^2
\pm 12 \left(\pm 2Mm_p+Q\right) m_p r
-4m_p^2 r^4\Lambda
\right)
\pm 120 m_p Q_{-3} r^5\Lambda \ln(r)
\Big\}^{\frac{1}{2}}.
\end{align}
\end{subequations}
The point $r=\left(\pm \frac{Q_{-3}}{m_p}\right)^{\frac{1}{3}}$
could be a curvature singularity other than the one at $r=0$,
which is absent for $Q_{-3}<0$ in the positive branch
and for $Q_{-3}>0$ in the negative branch.
\item
For $p=-\frac{3}{2}$,
the solution for $P=\pm 2m_p$ is given by
\begin{subequations}
\label{p32}
\begin{align}
f(r)
&=\frac{1}{96m_p^2 r^3}
\left[
-3 Q_{-3/2}^2
\mp 32m_p Q_{-3/2} r^{\frac{3}{2}}\left(-3+r^2\Lambda\right)
+32m_p^2 r^2\left(-6M + r(3+r^2\Lambda)\right)
+6 Q_{-3/2}^2 r^2\Lambda \ln (r)
\right],
\\
h(r)&=\frac{16m_p^2 r^3}
{\left(Q_{-3/2}\mp 4m_p r^{\frac{3}{2}}\right)^2}
f(r),
\\
A_1(r)
&=
\pm
\frac{2 \sqrt{6} m_p \left(Q_{-3/2}\mp 4 m_p r^{3/2}\right)}
{-3 Q_{-3/2}^2
\mp 32m_p Q_{-3/2} r^{\frac{3}{2}}\left(-3+r^2\Lambda\right)
+32m_p^2r^2\left(-6M + r(3+r^2\Lambda)\right)
+6 Q_{-3/2}^2 r^2\Lambda \ln (r)}
\nonumber\\
&\times
\Big\{
8 r \big(
24m_p^2M r
-4 \Lambda m_p^2 r^4
\pm 12m_p Q r
+3 Q^2\big)
+16Q_{-3/2} \sqrt{r} (\pm 2 \Lambda m_p r^3+3 Q)
-6 \Lambda Q_{-3/2}^2 r^2 \ln (r)
+27 Q_{-3/2}^2
\Big\}^{\frac{1}{2}}.
\end{align}
\end{subequations}
The point $r=\left(\pm \frac{Q_{-3/2}}{4m_p}\right)^{\frac{2}{3}}$
could be a curvature singularity other than the one at $r=0$,
which is absent
for $Q_{-3/2}<0$ in the positive branch
and
for $Q_{-3/2}>0$ in the negative branch.
\item
For $p=-\frac{1}{2}$,
the solution for $P=\pm 2m_p$ is given by
\begin{subequations}
\label{p12}
\begin{align}
f(r)
&=\frac{1}{480 m_p^2 r}
\left[
-960 M m_p^2
+15 Q_{-1/2}^2 r^2\Lambda
+160 m_p^2 r \left(3+r^2\Lambda\right)
\pm 96 m_p Q_{-1/2} \sqrt{r} \left(5+r^2\Lambda\right)
+30 Q_{-1/2}^2 \ln (r)
\right],
\\
h(r)&=\frac{16m_p^2 r}
{\left(Q_{-1/2}\pm 4m_p r^{\frac{1}{2}}\right)^2}
f(r),
\\
A_1(r)
&=\pm
\frac{2 \sqrt{30} m_p \left(Q_{-1/2}\pm 4 m_p r^{\frac{1}{2}}\right)}
{ \sqrt{r}\left[
-960 M m_p^2
+15 Q_{-1/2}^2 r^2\Lambda
+160 m_p^2 r \left(3+r^2\Lambda\right)
\pm 96 m_p Q_{-1/2} \sqrt{r} \left(5+r^2\Lambda\right)
+30 Q_{-1/2}^2 \ln (r)
\right]}
\nonumber\\
&\times
\Big\{
960 M m_p^2 r
-160 \Lambda m_p^2 r^4
+240 Q (\pm 2 m_p r+Q_{-1/2}\sqrt{r})
\mp 96 \Lambda m_p Q_{-1/2} r^{7/2}
+120 Q^2
\nonumber\\
&-15 \Lambda Q_{-1/2}^2 r^3
+120 Q_{-1/2}^2 r
-30 Q_{-1/2}^2 r \ln (r)
\Big\}^{\frac{1}{2}}.
\end{align}
\end{subequations}
The point $r=\left(\mp \frac{Q_{-1/2}}{4m_p}\right)^2$
could be a curvature singularity other than the one at $r=0$,
which is absent
for $Q_{-1/2}>0$ in the positive branch
and
for $Q_{-1/2}<0$ in the negative branch.
\end{enumerate}
\end{widetext}
For all the values of $p=-3,-\frac{3}{2}, -\frac{1}{2}$,
the functions $f(r)$ and $h(r)$ have a single root
which corresponds to the position of the unique event horizon.
In the limit of $\Lambda\to 0$ and $Q_p\to 0$,
the solutions \eqref{chagoya3}, \eqref{pm3}, \eqref{p32} and \eqref{p12}
reduce to the stealth Schwarzschild solution \eqref{stealth2} with $P=\pm 2m_p$.
Moreover,
as argued in Ref. \cite{Chagoya:2016aar},
only for $m=0$, $\beta=\frac{1}{4}$ and $p=2$ in Eq. \eqref{pqr},
another type of solution can be obtained for
\begin{align}
Q_2= \frac{2m_p^2 P\Lambda}{3(P^2-4m_p^2)},
\end{align}
given by
\begin{subequations}
\label{chagoya}
\begin{align}
f(r)&=1-\frac{2M}{r}
+\frac{4m_p^2r^2\Lambda\left(5P^2+m_p^2\left(-20+3r^2\Lambda\right)\right)}
{15\left(P^2-4m_p^2\right)^2},
\\
h(r)&=\frac{\left(P^2-4m_p^2\right)^2}
{\left(P^2+2m_p^2\left(-2+r^2\Lambda\right)\right)^2}
f(r),
\\
A_1(r)
&=
\pm
\sqrt{\frac{A_0(r)^2}{f(r)h(r)}
-\frac{P^2+2m_p^2r^2\Lambda}{h(r)}}.
\end{align}
\end{subequations}
The spacetime is neither asymptotically Minkowski nor (anti-) de Sitter.
If $\frac{4m_p^2-P^2}{\Lambda}>0$,
the point $r=\frac{1}{m_p}\sqrt{\frac{4m_p^2-P^2}{2\Lambda}}$
is a curvature singularity
other than the one at $r=0$.
On the other hand,
if $\frac{4m_p^2-P^2}{\Lambda}<0$,
there is no curvature singularity except for $r=0$.
In both cases,
the function $f(r)$ always has a single root which corresponds to the position of the unique event horizon,
and for $\frac{4m_p^2-P^2}{\Lambda}>0$ the singularity at $r=\frac{1}{m_p}\sqrt{\frac{4m_p^2-P^2}{2\Lambda}}$
is hidden by the event horizon for
$\sqrt{\Lambda}M>\frac{4}{15m_p}\sqrt{\frac{4m_p^2-P^2}{2}}$.
For the other values of $p$,
an analytic solution similar to Eq. \eqref{chagoya} could not be found.
\subsection{The solution for $A_1(r)=0$}
So far, we have investigated the static and spherically symmetric solutions for several choices of $A_0(r)$.
Instead, we may specify $A_1(r)$ and then find the other variables $f(r)$, $h(r)$ and $A_0(r)$
by solving the equations of motion \eqref{eom_a} and \eqref{eom_b}
under the ansatz \eqref{metric_ansatz} and \eqref{proca_ansatz}.
For $m\neq0$ and/or $\beta\neq 0$,
the $r$ component of the vector field equation of motion \eqref{eom_b} becomes nontrivial as
\begin{align}
\label{Pr}
0&=
h(r) A_1(r)
\nonumber\\
&\times
\left[
-m^2 r^2 f(r)
+\beta
\left(
-f(r)(1-h(r))+r h(r) f'(r)
\right)
\right],
\end{align}
which,
since in general $h(r)\neq 0$,
gives two possibilities
\begin{subequations}
\begin{align}
\label{eqs}
&-m^2 r^2f (r)
+\beta
\left(
-f(r)(1-h(r))+r h(r) f'(r)
\right)
=0,
\\
&
{\rm or}
\nonumber\\
\label{eqs2}
&A_1(r)=0.
\end{align}
\end{subequations}
The solutions, Eqs. \eqref{eq:stealth}, \eqref{eq:selftuned}, \eqref{nonflat}, \eqref{stealth2}, \eqref{selftune2}, \eqref{selftune3},
\eqref{chagoya2}, \eqref{chagoya3}, \eqref{pm3}, \eqref{p32}, \eqref{p12}, and \eqref{chagoya},
all originate from the former choice \eqref{eqs}.
Under the ansatz Eqs. \eqref{metric_ansatz} and \eqref{proca_ansatz},
if the $r$ component of the vector field equation of motion \eqref{Pr} is satisfied
then the $(t,r)$ component of the Einstein equation is also automatically satisfied.
For the latter case \eqref{eqs2},
the specific solution is obtained
only for $m=0$, $\Lambda=0$ and $\beta=\frac{1}{4}$
by
\begin{align}
\label{a10}
f(r)&=h(r)=1\pm \sqrt{\frac{r_0}{r}},
\quad
A_0(r)= 2 m_p f(r),
\end{align}
where $r_0$ is the integration constant.
The solution \eqref{a10} was obtained in Ref. \cite{Geng:2015kvs}.
The spacetime is asymptotically flat.
The singularity at $r=0$ is naked in the positive branch,
while in the negative branch it is hidden by the event horizon at $r=r_0$.
No solution similar to Eq. \eqref{a10}
could be found for the more general cases
of $m\neq 0$, $\Lambda\neq 0$, or $\beta\neq \frac{1}{4}$.
\subsection{A short summary}
\label{sec:short_summary}
Throughout Secs. \ref{sec:solutions2} and \ref{sec:others},
we have obtained the static and spherically symmetric solutions for several nontrivial choices of
the temporal component of the vector field $A_0(r)$.
For $A_0(r)$ with the form of the Coulomb potential given by Eq. \eqref{pq},
we have obtained the stealth Schwarzschild, the Schwarzschild- (anti-) de Sitter and the asymptotically anti- de Sitter solutions
\eqref{stealth2}, \eqref{selftune2} and \eqref{selftune3}, and \eqref{chagoya2},
respectively.
Unexpectedly,
these solutions are present only
for the specific value of the nonminimal coupling constant, $\beta=\frac{1}{4}$,
and the electric charge $Q$ does not appear in the metric,
which is different from the case of the Reissner-Nordstr\"om [-(anti-) de Sitter] solution \eqref{reissner}.
For the other cases,
we could obtain the solutions
\eqref{chagoya3}, \eqref{pm3}, \eqref{p32}, \eqref{p12}, \eqref{chagoya} and \eqref{a10}.
All these solutions also exist only for $\beta=\frac{1}{4}$.
\section{The slowly rotating solutions}
\label{sec:slow}
Finally, we investigate the slowly rotating solutions
within the Hartle-Thorne approximation \cite{Hartle:1967he,Hartle:1968si},
where the rotational correction is obtained perturbatively about the static and spherically symmetric background,
expanding with respect to the angular velocity of the black hole $\Omega$.
The correction to the static and spherically symmetric metric \eqref{metric_ansatz}
appears in the $(t,\phi)$ component at ${\cal O}(\Omega)$
and in the other components at ${\cal O}(\Omega^2)$.
Thus within ${\cal O}(\Omega)$ the metric will take the form of
\begin{align}
\label{eq:framedrag1}
ds^2&=-f(r)dt^2 +\frac{dr^2}{h(r)}
+r^2\left(d\theta^2+\sin^2\theta d\phi^2\right)
\nonumber\\
&-2r^2\omega(r) \sin^2\theta dtd\phi,
\end{align}
where $\omega(r)$ is the unknown function of ${\cal O}(\Omega)$.
Similarly,
the correction to the vector field in the static and spherically symmetric background \eqref{proca_ansatz}
appears in the $\phi$ component at ${\cal O}(\Omega)$
and in the other components at ${\cal O} (\Omega^2)$.
Hence,
within ${\cal O}(\Omega)$ the vector field will take the form of
\begin{align}
A_\mu= \left(A_0(r), A_1(r),0,A_3(r,\theta)\right).
\end{align}
For the separability of the equations of motion at ${\cal O}(\Omega)$,
we assume that the azimuthal component of the vector field $A_3(r,\theta)$ takes the form
\begin{align}
\label{eq:framedrag2}
A_3(r,\theta)&= a_3 (r)\sin^2\theta,
\end{align}
where $a_3(r)$ is another unknown function of ${\cal O} (\Omega)$.
The unknown functions $\omega(r)$ and $a_3(r)$
in Eqs. \eqref{eq:framedrag1} and \eqref{eq:framedrag2}
are found as the solution of the field equations \eqref{eom_a} and \eqref{eom_b} at ${\cal O}(\Omega)$.
At ${\cal O}(\Omega)$,
only the $(t,\phi)$ component of the Einstein equation \eqref{eom_a} becomes nontrivial,
and similarly
only the $\phi$ component of the vector field equation of motion \eqref{eom_b}
becomes nontrivial.
Thus they will be solved
under the boundary conditions that $\omega(r)$ and $a_3(r)$ are finite in the large-$r$ limit.
Before starting,
we consider the case of $m=\beta=0$ where the gauge symmetry is recovered.
In this case,
the slow-rotation correction to the Reissner-Nordstr\"om-(anti-) de Sitter solution \eqref{reissner}
is obtained by
\begin{subequations}
\label{kerr_newman}
\begin{align}
\label{kn1}
\omega(r)
&=\omega_0+\frac{2J}{r^3}-\frac{JQ^2}{2m_p^2 M r^4},
\\
\label{kn2}
a_3(r)&=-\frac{JQ}{Mr},
\end{align}
\end{subequations}
which agrees with the Kerr-Newman-(anti-) de Sitter solution
when terms of ${\cal O} (\Omega^2)$ are neglected \cite{Dehghani:2002nt,Huaifan:2009nf}.
\subsection{For the background with the constant $A_0(r)$}
First, we consider the background solutions with the constant temporal component
of the vector field,
$A_0(r)=P$, discussed in Sec. \ref{sec:solutions},
namely,
Eqs. \eqref{eq:stealth} and \eqref{eq:selftuned} for $P\neq 0$
and Eq. \eqref{nonflat} for $P= 0$.
For these background solutions,
if we assume that $a_3(r)=0$,
we find that
$\omega(r)$ remains the same as the slow-rotation limit of the Schwarzschild- (anti-) de Sitter background in GR,
given by
\begin{align}
\label{sol:framedrag}
\omega (r)= \omega_0+\frac{2J}{r^3},
\end{align}
where the constant $\omega_0=0$ for the Schwarzschild background
and $\omega_0\neq 0$ for the Schwarzschild- (anti-) de Sitter background,
and the constant $J$ represents the angular momentum of the black hole.
This confirms the argument in Sec. \ref{sec:solutions} at first order in the slow-rotation approximation, ${\cal O} (\Omega)$,
as the solutions in the scalar-tensor theory \eqref{eq:action3} obtained in Refs. \cite{Cisterna:2015uya,Maselli:2015yva}
are expressed as those in the generalized Proca theory with the vanishing field strength.
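For backgrounds with $f(r)=h(r)$ in GR, the $(t,\phi)$ Einstein equation at ${\cal O}(\Omega)$ reduces to $(r^4\omega')'=0$, the standard vacuum frame-dragging equation of the Hartle-Thorne analysis (we assume this reduced form here). A minimal \texttt{sympy} check that Eq. \eqref{sol:framedrag} solves it:

```python
import sympy as sp

r, J, w0 = sp.symbols('r J omega_0', positive=True)
omega = w0 + 2*J/r**3                       # Eq. (sol:framedrag)
# vacuum frame-dragging equation for f = h backgrounds: (r^4 omega')' = 0
assert sp.diff(r**4*sp.diff(omega, r), r) == 0
```

The general solution of this equation is $\omega_0 + {\rm const}/r^3$, consistent with Eq. \eqref{sol:framedrag}.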
On the other hand,
if we assume that $a_3(r)\neq 0$,
a more general solution can formally be found.
For instance,
for the stealth Schwarzschild background \eqref{eq:stealth} with $P =\pm \frac{m_p}{\sqrt{\beta}}$ ($\beta>0$),
the solution is given by
\begin{subequations}
\label{nonzeroa2}
\begin{align}
\label{omega2}
\omega(r)
&=
\omega_0
+\frac{2J}{r^3}
\mp \frac{3{\cal Q} M}{2m_p \sqrt{\beta} r^4},
\\
\label{a32}
a_3(r)&=\frac{\cal Q}{r},
\end{align}
\end{subequations}
where ${\cal Q}$ is the integration constant.
The same solution as Eq. \eqref{nonzeroa2}
is also obtained for the Schwarzschild- (anti-) de Sitter background \eqref{eq:selftuned}
with $m=\pm\sqrt{\beta\Lambda}$
($\beta \Lambda>0$).
For the other background parameters,
we could obtain the solutions with the same leading behavior
as Eq. \eqref{nonzeroa2}
in the large-$r$ limit.
From Eq. \eqref{omega2},
the contribution of ${\cal Q}$ seems to appear
as an independent ``charge''.
In fact,
if we consider
the slow-rotation correction to the
Schwarzschild- (anti-) de Sitter solution with $m=\beta=0$
by assuming that $a_3(r)\neq 0$,
the solution
with the same leading behavior in the large-$r$ limit as Eq. \eqref{a32}
could be obtained.
However,
we have to set the integration constant ${\cal Q}=0$,
as the slow-rotation correction
to the electrically neutral background solution
cannot induce a nonzero magnetic field.
Similarly in our case,
for the background
with the vanishing electric field strength
it is reasonable to set ${\cal Q}=0$,
and hence
Eq. \eqref{sol:framedrag} with $a_3(r)=0$
can be regarded as the only physical solution.
\subsection{For the background with the nonconstant $A_0(r)$}
Second, we consider the background solutions with the nonconstant $A_0(r)$
discussed in Sec. \ref{sec:solutions2},
namely Eqs. \eqref{stealth2}, \eqref{selftune2}, \eqref{selftune3}
and \eqref{chagoya2} with $m=\pm \frac{\sqrt{-\Lambda}}{2}$ where the solution reduces to Eq. \eqref{grlimit}.
We find that $\omega(r)$ is given by Eq. \eqref{sol:framedrag} which is the same as the Kerr- (anti-) de Sitter solution,
and $a_3(r)$ is given by
\begin{align}
\label{a33}
a_3(r)=-\frac{JQ}{M r},
\end{align}
which is similar to the result obtained in Ref. \cite{Chagoya:2016aar} for the stealth Schwarzschild background \eqref{stealth2}.
Thus $a_3(r)$ is the same
as the slow-rotation limit of the Kerr-Newman-(anti-) de Sitter solution \eqref{kn2},
but $\omega(r)$ does not contain the term depending on the background electric charge $Q$.
This is the stealth property realized at the first order in the slow-rotation approximation, ${\cal O} (\Omega)$.
For Eq. \eqref{chagoya2} with $m\neq \pm \frac{\sqrt{-\Lambda}}{2}$,
no analytic solution of $\omega(r)$ and $a_3(r)$ could be obtained.
\subsection{A short summary}
In this section, we have investigated the slow-rotation corrections to the static and
spherically symmetric backgrounds
within the first order of the Hartle-Thorne approximation \cite{Hartle:1967he,Hartle:1968si}.
For the background with the vanishing electric field strength,
the slow-rotation correction to the metric was found to be the same as
the Kerr- (anti-) de Sitter solution \eqref{sol:framedrag}
with $A_3(r,\theta)=0$.
On the other hand,
for the background with the nonvanishing electric field strength,
the slow-rotation correction to the metric remains the same as the Kerr- (anti-) de Sitter solution \eqref{sol:framedrag},
but the azimuthal component $A_3(r,\theta)$ is the same as in the Kerr-Newman-(anti-) de Sitter solution \eqref{a33},
which is the realization of the stealth property in the slowly rotating case.
\section{Conclusions}
\label{sec:conclusions}
We have investigated the static and spherically symmetric solutions in the generalized Proca theory
with the nonminimal coupling of the vector field to the Einstein tensor \eqref{eq:action}.
First,
we have shown that the solutions obtained in the scalar-tensor theory with
the nonminimal derivative coupling to the Einstein tensor \eqref{eq:action3}
are also those in the generalized Proca theory \eqref{eq:action}
with the vanishing field strength,
and we have obtained the expressions of
the stealth Schwarzschild,
the Schwarzschild- (anti-) de Sitter
and
the asymptotically anti- de Sitter solutions in the generalized Proca theory.
Second, we have investigated these solutions
where the temporal component of the vector field contains a term of the Coulomb-potential form.
In this case,
as argued in Ref. \cite{Chagoya:2016aar},
the extension of these solutions requires the special value of the nonminimal coupling parameter,
irrespective of the value of the mass term of the vector field and the asymptotic property of the spacetime.
We have also obtained the other nontrivial solutions for the same value of the coupling constant.
Finally,
we have investigated the first-order slow-rotation corrections to the static and spherically symmetric solutions.
We have found that
for the background with the vanishing electric field strength
the slowly rotating solution remains the same as in GR.
For the background with the nonvanishing electric field strength,
the slow-rotation correction to the metric
does not depend on the electric charge
and may be regarded as the realization of the stealth property in the context of the slow-rotation approximation.
There are various possible extensions of the present work.
The first subject is to investigate the stability of the solutions obtained in this paper.
The stability of the black hole solutions in the scalar-tensor Horndeski theory \eqref{eq:action3}
has been investigated in Refs. \cite{Kobayashi:2012kh,Kobayashi:2014wsa,Cisterna:2015uya,Ogawa:2015pea}.
In particular, it was argued in Ref. \cite{Ogawa:2015pea} that
the static and spherically symmetric black hole solutions with
the constant canonical kinetic term $X_\varphi=-\frac{1}{2}g^{\mu\nu}\partial_\mu\varphi\partial_\nu\varphi$
are generically unstable in the vicinity of the event horizon.
It will be interesting
to investigate whether the same kind of instability arises in the vector-tensor theory.
Another interesting issue is
to investigate the rapidly rotating black hole solutions
and the spectrum of the quasinormal modes,
which could make the deviations from the GR solutions more evident.
Other than the vacuum solutions,
it will also be very important
to investigate solutions describing neutron stars and other compact objects.
(See, e.g., \cite{Cisterna:2015yla,Cisterna:2016vdx,Maselli:2016gxk,Brihaye:2016lin} for the related studies in the case of the scalar-tensor Horndeski theory.)
We hope to come back to these issues in future work.
\section*{Acknowledgements}
We thank E. Babichev, M. Kimura and H. O. Silva for comments.
This work was supported by FCT-Portugal through Grant No. SFRH/BPD/88299/2012.
\section{Introduction}
\label{sec:Introduction}
The Sloan Digital Sky Survey (SDSS, \citealp{York_etal_2000}), with
its homogeneous spectroscopic and photometric data on hundreds of
thousands of galaxies has revolutionized our perception of the world
of galaxies in the local Universe. The enormous number of objects
allowed one to reveal trends that had not been suspected before. For
example, while it was known since the work of
\citet*{Baldwin_Phillips_Terlevich_1981} that objects ionized by
massive stars and active galactic nuclei (AGN) live in different zones
of emission line ratios diagrams, the fact that emission line galaxies
are distributed in two well defined wings
(\citealp{Kauffmann_etal_2003c}) in the famous [OIII]5007/H$\beta$ vs
[NII]6583/H$\alpha$ diagnostic diagram (hereafter, the BPT diagram)
came as a surprise.
The left wing of the BPT diagram can be understood as a sequence in
metallicity of normal star-forming (SF) galaxies. The present-day
nebular metallicity ($Z_{neb}$) of a galaxy is intimately connected
with its past star formation history (SFH). The main goal of this
paper is to explore this link. The focus of many of the pioneering
studies of SFH of SF galaxies was instead the variation of SFH with
the Hubble type. \citet*{Searle_Sargent_Bagnuolo_1973}, for instance,
assumed a simple model for the SFH and calculated $UBV$ colors for
simulated galaxies. Comparing simulated and observed colors, they
concluded that morphological type alone does not explain the
differences in SFH, proposing that the galaxy mass should also be used
as a tracer of star formation.
\citet*{Gallagher_Hunter_Tutukov_1984} introduced a way to study the
star formation rates (SFR) in three different epochs of a galaxy's
history. In order to achieve such time resolution, manifold indices
were used: HI observations, dynamical masses, $B$-band and H$\alpha$
luminosities, and $UBV$ colors. \citet{Sandage_1986} applied some of
the techniques presented by \citet{Gallagher_Hunter_Tutukov_1984} to
investigate differences in SFH along the Hubble sequence. In the
same vein, \citet*{Kennicutt_Tamblyn_Congdon_1994} derived the SFR for
SF objects from the H$\alpha$ luminosity and $UBV$ colors, and found
that the SFH differences for galaxies of the same Hubble type have a
much stronger relation with their disk than with their bulge.
\citet{Gavazzi_etal_2002} measured the present and past SFRs of
late-type galaxies in nearby clusters from H$\alpha$ imaging and near
infrared observations, and also derived the global gas content from HI
and CO observations. Most of these studies had to rely on many
different indices and observations in order to measure an
instantaneous SFR, or at most a 2--3 age resolution star formation
history for SF galaxies.
\citet{Bica_1988} introduced a method to reconstruct SFHs in greater
detail by mixing the properties of a base of star clusters of various
ages ($t_\star$) and metallicities ($Z_\star$). In its original
implementation, this method used a set of 5--8 absorption line
equivalent widths as observables, a grid of clusters arranged in 35
combinations of $t_\star$ and $Z_\star$, and a simple parameter space
exploration technique limited to paths through the $t_\star$-$Z_\star$
plane constrained by chemical evolution arguments (see also
\citealp{Schmidt_etal_1991, Bica_Alloin_Schmitt_1994,
CidFernandes_etal_2001}). Its application to nuclear spectra of nearby
galaxies of different types revealed systematic variations of the SFH
along the Hubble sequence.
For over a decade the most attractive feature of Bica's method was
its use of observed cluster properties, empirically bypassing the
limitations of evolutionary synthesis models, which until recently
predicted the evolution of stellar systems at spectral resolutions
much lower than the data. This is no longer a problem. Medium and
high spectral resolution stellar libraries, as well as updates in
evolutionary tracks have been incorporated into evolutionary synthesis
models in the past few years. The current status of these models and
their ingredients is amply discussed in the proceedings of the IAU
Symposium 241 (\citealp{Vazdekis_Peletier_2007}).
These advances spurred the development of SFH recovery methods which
combine the non-parametric mixture approach of empirical population
synthesis with the ambitious goal of fitting galaxy spectra on a
pixel-by-pixel basis using a base constructed with this new
generation of evolutionary synthesis models. Methods based on
spectral indices have also benefitted from these new models, and
produced an impressive collection of results (e.g.,
\citealp{Kauffmann_etal_2003a, Kauffmann_etal_2003b,
Kauffmann_etal_2003c, Gallazzi_etal_2005, Brinchmann_etal_2004}).
However, current implementations of these methods do not reconstruct
detailed SFHs, although they do constrain it, providing estimates of
properties such as mass-to-light ratios, mean stellar age, fraction of
mass formed in recent bursts, and ratio of present to past SFR.
The first SFHs derived from full spectral fits of SDSS galaxies were
carried out with the MOPED (\citealp{Panter_etal_2003_Moped,
Panter_etal_2007_Moped}; \citealp*{Mathis_Charlot_Brinchmann_2006})
and \starlight\ codes. MOPED results have been recently reviewed by
Panter (2007), so we just give a summary of the results achieved with
\starlight.
\starlight\ itself was the main topic of the first paper in our
Semi-Empirical Analysis of Galaxies series (SEAGal). In SEAgal I
(\citealp{CidFernandes_etal_2005_SEAGal1}) we have thoroughly
evaluated the method by means of simulations, astrophysical
consistency tests and comparisons with the results obtained by
independent groups. In SEAGal II (\citealp{Mateus_etal_2006_SEAGal2})
we have revisited the bimodality of the galaxy population in terms of
spectral synthesis products. In SEAGal III
(\citealp{Stasinska_etal_2006_SEAGal3}), we combined the emission
lines dug out and measured from the residual spectrum obtained after
subtraction of the synthetic spectrum with photoionization models to
refine the criteria to distinguish between normal SF galaxies and AGN
hosts. SEAGal IV (\citealp{Mateus_etal_2007_SEAGal4}) deals with
environment effects, studied in terms of the relations between mean
age, current SFR, density, luminosity and mass.
Only in SEAGal V (\citealp{CidFernandes_etal_2007_SEAGal5}) did we turn
our attention to the detailed time dependent information provided by
the synthesis. We have used the entire Data Release 5
(\citealp{Adelman-McCarthy_etal_2007}) to extract the population of SF
galaxies, and study their chemical enrichment and mass assembly
histories. It was shown that the evolutionary properties of galaxies
vary continuously with their present-day properties: Massive
galaxies formed most of their stars very early and quickly reached the
high stellar metallicities they have today, whereas low mass (metal
poor) galaxies evolve slower. These findings are in agreement with
recent studies of the mass assembly of large samples of galaxies
through the fossil record of their stellar populations
(\citealp{Heavens_etal_2004_Moped}), and of studies of the $Z_\star$
distribution in small samples of galaxies (e.g.,
\citealp{Skillman_Cote_Miller_2003} and references therein), but the
generality of the result applied to the entire population of SF
galaxies was shown for the first time.
In the present paper, we aim at a more complete view of the properties
of SF galaxies, and their variations along the SF sequence in the BPT
diagram, improving and expanding upon the results only briefly
sketched in SEAGal V. In particular, we discuss in depth time
averaged values of quantities such as the SFR and the SFR per unit
mass, as well as their \emph{explicit} time dependence.
The paper is organized as follows. Section 2 describes our parent
sample and explains our criteria to define normal SF galaxies. This
section also explains how we deal with extinction and how we estimate
the nebular metallicity. In Section 3, we discuss the global
properties of galaxies along the SF sequence in the BPT diagram. In
Section 4, we then proceed to explain our formalism to uncover the
explicit time dependence of such quantities as the star formation
rate. In Section 5, we show that the current star formation rate as
estimated by the most commonly indicator -- the H$\alpha$
luminosity -- compares with that obtained from our stellar population
synthesis analysis. In Section 6, we analyse the SFH along the SF
sequence, binning galaxies in terms of their present-day nebular
metallicity. We show that, despite the substantial scatter at any
given $Z_{neb}$, the SFH varies systematically with $Z_{neb}$: in the
most metal-rich galaxies most of the stellar mass assembly occurred
very fast and early on, while metal-poor systems are currently forming
stars at much higher relative rates.
We also compute mean SFHs binning the galaxies with respect to the
stellar mass and the surface mass density, which are expected to
better express the causes of the evolution of galaxies. Section
\ref{sec:samples} discusses possible selection effects and other
caveats. Finally, Section \ref{sec:Summary} summarizes our main
results.
\section{Data}
\label{sec:Data}
\label{sec:BPTdiagram}
\begin{figure*}
\includegraphics[width=0.815\textwidth, bb=60 170 592 718]{Fig_example_fits.eps}
\includegraphics[width=0.150\textwidth, bb=0 -37 113 563]{mosaic.eps}
\caption{Five examples of the spectral fits. Left panels show the
observed (black) and fitted (red) spectra, both normalized at
$\lambda_0 = 4020$ \AA. Magenta lines mark regions not used in
the fits either because they contain emission lines or because of
artifacts in the data. Middle panels illustrate the fraction of
light at $\lambda_0$ associated to each of the 25 SSP ages used
in the fits. Curves represent a 0.5 dex smoothed version of the
population vector. Right panels show SDSS $25.6^{\prime\prime}
\times 25.6^{\prime\prime}$ images ($\sim$ $12 \times 12$--$34
\times 34$ kpc$^2$). Galaxies in this plot are ordered according
to their nebular metallicity ($Z_{neb}$; see Section
\ref{sec:Z_neb}). From top to bottom, $Z_{neb} = 0.29$, 0.43,
0.61, 0.84 and 0.97 $Z_\odot$.}
\label{fig:STARLIGHT_fits}
\end{figure*}
The data analysed in this work was extracted from the SDSS Data
Release 5 (DR5; \citealp{Adelman-McCarthy_etal_2007}). This release
contains data for 582471 objects spectroscopically classified as
galaxies, from which we have found $\sim 1.6$ per cent of duplicates,
that is, objects with multiple spectroscopic information in the parent
galaxy catalog.
From the remaining 573141 objects we have selected our parent sample
adopting the following selection criteria: $14.5 \le m_r \le 17.77$
and $z \ge 0.002$. The magnitude range comes from the definition of
the Main Galaxy Sample, whereas the lower redshift limit is used to
avoid inclusion of intragalactic sources. The resulting sample
contains 476931 galaxies, which corresponds to about 82 per cent of
all galaxies with spectroscopic data gathered by SDSS and publicly
available in the DR5. These limits imply a $\sim 17\%$ reduction
with respect to the sample studied in SEAGal V.
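The two cuts above amount to a straightforward boolean selection; a toy sketch in Python (the array values are illustrative, not actual SDSS entries):

```python
import numpy as np

# Toy catalogue (values illustrative only)
m_r = np.array([14.0, 15.2, 17.5, 18.0])    # r-band magnitudes
z   = np.array([0.0015, 0.05, 0.10, 0.20])  # redshifts
keep = (m_r >= 14.5) & (m_r <= 17.77) & (z >= 0.002)
parent = np.flatnonzero(keep)               # indices surviving both cuts
```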
\subsection{STARLIGHT fits}
\label{sec:STARLIGHTfits}
After correcting for Galactic extinction (with the maps of
\citealp*{Schlegel_Finkbeiner_Davis_1998} and the reddening law of
\citealp*{Cardelli_Clayton_Mathis_1989}, using $R_V = 3.1$), the
spectra were shifted to the rest-frame, resampled to $\Delta \lambda =
1$ \AA\ between 3400 and 8900 \AA, and processed through the
\starlight\ spectral synthesis code described in SEAGal I and II.
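The rest-frame shift and 1 \AA\ resampling can be sketched as below (a simplified illustration using linear interpolation; the Galactic-extinction correction and other details of the actual pipeline are omitted):

```python
import numpy as np

def preprocess(wl_obs, flux_obs, z):
    """Shift to the rest frame and resample to 1 AA between 3400 and 8900 AA."""
    wl_grid = np.arange(3400.0, 8901.0, 1.0)
    return wl_grid, np.interp(wl_grid, wl_obs/(1.0 + z), flux_obs)

# a flat toy spectrum must remain flat after the shift and resampling
wl_obs = np.linspace(3600.0, 9400.0, 4000)
wl, flux = preprocess(wl_obs, np.ones_like(wl_obs), z=0.05)
```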
\starlight\ decomposes an observed spectrum in terms of a sum of
simple stellar populations (SSPs), each of which contributes a
fraction $x_j$ to the flux at a chosen normalization wavelength
($\lambda_0 = 4020$ \AA). As in SEAGal II, we use a base of $N_\star =
150$ SSPs extracted from the models of
\citet[BC03]{Bruzual_Charlot_2003_BC03}, computed for a
\citet{Chabrier_2003} initial mass function (IMF), ``Padova 1994''
evolutionary tracks (\citealp{Alongi_etal_1993, Bressan_etal_1993,
Fagotto_etal_1994a, Fagotto_etal_1994b, Girardi_etal_1996}), and
STELIB library (\citealp{LeBorgne_etal_2003}). The base components
comprise 25 ages between $t_{\star,j} = 1$ Myr and 18 Gyr, and 6
metallicities, from $Z_{\star,j} = 0.005$ to 2.5 solar. Bad pixels,
emission lines and the NaD doublet are masked and left out of the
fits. The emission line masks were constructed in a galaxy-by-galaxy
basis, following the methodology outlined in SEAGal II and
\citet{Asari_2006}. \starlight\ outputs several physical properties,
such as the present-day stellar mass, stellar extinction, mean stellar
ages, mean metallicities as well as full time dependent star formation
and chemical evolution histories, which will be used in our analysis.
Section \ref{sec:STARLIGHT} describes aspects of the code relevant to
this work.
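The decomposition of an observed spectrum into SSP light fractions $x_j$ at $\lambda_0$ can be illustrated with a toy non-negative least-squares fit (a sketch only: \starlight\ itself also fits extinction and kinematics with a more elaborate scheme, and the random ``SSP'' spectra below are purely illustrative):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
wl = np.linspace(3400.0, 8900.0, 500)
base = np.abs(rng.normal(1.0, 0.3, size=(wl.size, 6)))  # toy SSP spectra
base /= base[np.argmin(np.abs(wl - 4020.0))]            # = 1 at lambda_0
x_true = np.array([0.5, 0.3, 0.0, 0.2, 0.0, 0.0])       # light fractions x_j
obs = base @ x_true                                     # noiseless "galaxy"
x_fit, rnorm = nnls(base, obs)                          # recover the x_j >= 0
```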
Fitting half a million spectra represented a massive computational
effort, carried out in a network of over 100 computers spread over 3
continents and controlled by a specially designed PHP code. This huge
database of spectral fits and related products, as well as \starlight\
itself, are {\em publicly available} in a Virtual Observatory
environment at www.starlight.ufsc.br (see \citealp[in
prep.]{CidFernandes_etal_2007b}).
Examples of the spectral fits obtained for 5 star-forming galaxies are
shown in Fig \ref{fig:STARLIGHT_fits}. We have ordered the galaxies
according to their nebular metallicity ($Z_{neb}$), as defined in
Section \ref{sec:Z_neb}, to illustrate how spectral characteristics
change along the $Z_{neb}$ sequence. Metal-poor galaxies (top) show
blue spectra and strong emission lines in comparison to the redder
spectra and weaker emission lines of galaxies with a metal-rich ISM
(bottom). The middle panels in Fig \ref{fig:STARLIGHT_fits} show the
fractional contribution to the total flux at $\lambda_0 = 4020$ \AA\
of simple stellar populations (SSPs) of age $t_\star$. These panels
show that young stellar populations make a dominant contribution in
galaxies with low $Z_{neb}$, whereas at higher nebular metallicities a
richer blend of stellar ages is present.
\subsection{Emission lines}
\subsubsection{General procedure}
Emission lines were measured fitting gaussians to the residual spectra
obtained after subtraction of the stellar light using an updated
version of the line-fitting code described in SEAGal III. The main
transitions used in this study are H$\beta$, [OIII]$\lambda5007$,
H$\alpha$ and [NII]$\lambda6584$. In the next section these lines are
used to define the sub-sample of star-forming galaxies which will be
studied in this paper.
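The basic operation of measuring a line from the residual spectrum can be sketched as a single-Gaussian fit (toy, noiseless data; the line-fitting code used here is more elaborate):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(lam, amp, lam0, sig):
    return amp*np.exp(-0.5*((lam - lam0)/sig)**2)

lam = np.linspace(6530.0, 6600.0, 350)
resid = gauss(lam, 5.0, 6562.8, 2.5)              # toy pure-emission residual
popt, _ = curve_fit(gauss, lam, resid, p0=[1.0, 6560.0, 3.0])
flux = popt[0]*abs(popt[2])*np.sqrt(2.0*np.pi)    # integrated Gaussian flux
```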
\subsubsection{The special case of H$\beta$}
We find that a zero level residual continuum is adequate to fit the
emission lines, except for H$\beta$. Inspection of the spectral fits
shows that the synthetic spectrum is often overestimated in the
continuum around H$\beta$, creating a broad, $\sim 200$ \AA\ wide
absorption trough in the residual spectrum. This problem, which can
hardly be noticed in Fig \ref{fig:STARLIGHT_fits}, becomes evident
when averaging many residual spectra (see SEAGal V and
\citealp{Panter_etal_2007_Moped}), and tends to be more pronounced for
older objects. The comparison between STELIB stars and theoretical
models presented by \citet{Martins_etal_2005} gives a clue to the
origin of this problem. A close inspection of Fig 21 in their paper
shows that the STELIB spectrum has an excess of flux on both sides of
H$\beta$ when compared to the model spectrum. This suggests that the
``H$\beta$ trough'' is related to calibration issues in the STELIB
library in this spectral range. This was confirmed by \starlight\
experiments which showed that the problem disappears using the SSP
spectra of \citet[based on the Martins \etal 2005
library]{Gonzalez-Delgado_etal_2005} or those constructed with the
MILES library (\citealp{Sanchez-Blazquez_etal_2006}).\footnote{We
thank Drs.\ Enrique Perez, Miguel Cervi\~no and Rosa
Gonz\'alez-Delgado for valuable help on this issue.}
Though this is a low amplitude mismatch (equivalent width $\sim 3$
\AA\ spread over $\sim 200$ \AA), it makes H$\beta$ sit in a region of
negative residual flux, so assuming a zero level continuum when fitting
a gaussian may chop the base of the emission line, leading to an
underestimation of its flux. To evaluate the magnitude of this effect
we have repeated the H$\beta$ fits, this time adjusting the continuum
level from two side bands (4770--4830 and 4890--4910 \AA). On average,
the new flux measurements are 2\% larger than the ones with the
continuum fixed at zero. The difference increases to typically 4\% for
objects with $W_{H\beta} < 5$ \AA, but $S/N > 3$ in the line, and 7\%
for $W_{H\beta} < 2$ \AA. Noise in the side bands introduces
uncertainties in the measurement of the flux, but at least it removes
the systematic effect described above, so the new measurements should
be considered as more accurate on average. We adopt these new
H$\beta$ measurements throughout this work. Using the zero-continuum
measurements would change the quantitative results reported in this
paper only minimally, with no impact on our general conclusions.
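The side-band correction can be sketched as follows (toy numbers; the trough amplitude and Gaussian parameters are illustrative only):

```python
import numpy as np

lam = np.linspace(4700.0, 4950.0, 1000)
trough = -0.015                                     # toy continuum offset
resid = trough + 4.0*np.exp(-0.5*((lam - 4861.3)/2.0)**2)
blue = (lam >= 4770.0) & (lam <= 4830.0)            # blue side band [AA]
red  = (lam >= 4890.0) & (lam <= 4910.0)            # red side band [AA]
cont = np.concatenate([resid[blue], resid[red]]).mean()  # side-band level
line = resid - cont                                 # profile to be fitted
```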
\subsection{Definition of the Star Forming Sample}
\label{sec:sample_definition}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig_BPT_SFwing.eps}
\caption{The SF sample in the BPT plane, chopped into bins of
nebular abundance $Z_{\rm neb} = \frac{({\rm O/H})}{({\rm
O/H})_\odot}$. All lines have been corrected by reddening (see
Section \ref{sec:AV_neb}). These same bins are used throughout
this paper. The number of galaxies in each bin is given on the
left. On the right, we show the corresponding mean $Z_{neb}$ and
log mean $M_\star$ values (in solar units). Galaxies close to bin
borders are not plotted for clarity. }
\label{fig:BPT}
\end{figure}
Since the pioneering work of \citet*{Baldwin_Phillips_Terlevich_1981},
emission line objects are classified in terms of their location in
diagrams involving pairs of line ratios. As explained in SEAGal III,
the [NII]$\lambda6584$/H$\alpha$ vs.\ [OIII]$\lambda5007$/H$\beta$
diagram (the BPT diagram) is the most useful for this purpose, mainly
due to the partially secondary nature of N (i.e., the increase of N/O
as O/H increases, e.g., \citealp{Liang_etal_2006, Molla_etal_2006}).
Our sample of star-forming galaxies is composed of objects which lie
below the line separating normal star-forming galaxies and AGN hosts
proposed in SEAGal III. We have imposed a lower limit of 3 in $S/N$ on
the 4 lines in the BPT diagram, and $S/N \ge 10$ in the 4730--4780
\AA\ continuum to constitute our main sample (the SF sample). The
82302 galaxies composing the SF sample are shown in Fig.
\ref{fig:BPT} on the BPT plane.
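In practice, the cuts defining the SF sample reduce to a boolean mask over the catalogue. The sketch below is schematic: the SF/AGN divider line of SEAGal III is not reproduced here and enters as a user-supplied callable, and the toy divider in the usage example below is purely illustrative.

```python
import numpy as np

def select_sf(sn_lines, sn_cont, log_n2ha, log_o3hb, divider,
              sn_line_min=3.0, sn_cont_min=10.0):
    """Boolean mask implementing the SF-sample cuts: S/N >= 3 in the
    four BPT lines (rows of `sn_lines`), S/N >= 10 in the 4730-4780 AA
    continuum, and a position below the SF/AGN divider curve.
    `divider` maps log [NII]/Ha to the maximum log [OIII]/Hb."""
    sn_ok = np.all(np.asarray(sn_lines, float) >= sn_line_min, axis=0)
    cont_ok = np.asarray(sn_cont, float) >= sn_cont_min
    below = np.asarray(log_o3hb, float) < divider(np.asarray(log_n2ha, float))
    return sn_ok & cont_ok & below
```

The SF$^{hq}$ sub-set corresponds to the same call with the two thresholds doubled (sn\_line\_min = 6, sn\_cont\_min = 20).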
Both the \starlight\ fits (and thus all SFH-related parameters) and the
emission line data are affected by the quality of the spectra. To
monitor this effect, we have defined a ``high-quality'' sub-set (the
SF$^{hq}$ sample) by doubling the $S/N$ requirements for the
SF sample, i.e., $S/N \ge 6$ in all 4 lines in the BPT diagram and a
continuum $S/N$ of 20 or better. A total of 17142 sources satisfy
these criteria.
Fig. \ref{fig:obs_prop} shows the distributions of observational and
physical properties for the samples. Naturally, the SF$^{hq}$ sample
is skewed towards closer and brighter galaxies with respect to the
SF sample, but in terms of physical properties such as stellar mass,
mean age and nebular metallicity the two samples are similar.
\begin{figure*}
\includegraphics[bb= 70 570 572 690,width=\textwidth]
{Fig_HistProperties.eps}
\caption{Normalized histograms of observed and physical properties
for the SF (solid) and SF$^{hq}$ (dotted) samples.}
\label{fig:obs_prop}
\end{figure*}
\subsection{Nebular Metallicity Estimate}
\label{sec:Z_neb}
It is well known that the SF-wing in the BPT diagram is a sequence in
nebular metallicity (SEAGal III and references therein), which we
quantify by the oxygen abundance obtained through the O$_3$N$_2 =$
[OIII]5007/[NII]6583 index as calibrated by \citet{Stasinska_2006}:
\begin{equation}
\label{eq:Z_neb}
\log Z_{neb} =
\log \frac{{\rm (O/H)}}{{\rm ~~(O/H)}_\odot} =
-0.14 - 0.25 \log {\rm O}_3 {\rm N}_2
\end{equation}
\noindent where we have adopted ${\rm (O/H)}_\odot = 4.9 \times
10^{-4}$ (\citealp*{AllendePrieto_Lambert_Asplund_2001}).
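Equation (\ref{eq:Z_neb}) translates directly into code (a minimal sketch; line fluxes are assumed to be already corrected for reddening, and the adopted ${\rm (O/H)}_\odot = 4.9 \times 10^{-4}$ corresponds to $12 + \log {\rm (O/H)}_\odot = 8.69$):

```python
import numpy as np

def log_zneb(f_oiii5007, f_nii6584):
    """Nebular metallicity log (O/H)/(O/H)_sun from the
    O3N2 = [OIII]5007/[NII]6584 index (Stasinska 2006 calibration).
    Input fluxes must already be corrected for reddening."""
    o3n2 = np.asarray(f_oiii5007, float) / np.asarray(f_nii6584, float)
    return -0.14 - 0.25 * np.log10(o3n2)
```

For O$_3$N$_2 = 1$ this gives $\log Z_{neb} = -0.14$, i.e.\ a slightly sub-solar nebular abundance.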
We have chosen to use the O$_3$N$_2$ indicator to estimate the average
oxygen abundance in the ISM of SF galaxies mainly because it is a
single-valued indicator and it can be easily related to the position
of galaxies in the classical BPT diagram. However, this indicator is
affected by the presence of diffuse ionized gas in galaxies and by
the fact that the N/O ratio depends on the way chemical evolution
proceeded (\citealp*{Chiappini_Romano_Matteucci_2003}). In addition,
for the lowest metallicity galaxies, O$_3$N$_2$ is not sensitive to
O/H anymore, as a wide range of metallicities correspond to the same
value of O$_3$N$_2$, as can be seen in Fig.\ 3 of
\citet{Stasinska_2006}. From that figure, the O/H given by equation
(\ref{eq:Z_neb}) lies towards the upper end of the possible values of
O/H. We considered using the [ArIII]7135/[OIII]5007 (Ar$_3$O$_3$)
index which, as argued by \citet{Stasinska_2006}, does not suffer from
the problems mentioned for the O$_3$N$_2$ index. It turns out that, in
objects where the [ArIII] line could be measured, Ar$_3$O$_3$ and
O$_3$N$_2$ are extremely well correlated (with a Spearman correlation
coefficient of $R_S = 0.58$). Unfortunately, the quality of the SDSS
spectra did not allow us to measure the [ArIII] line intensity with
sufficient accuracy in a large number of objects, especially in
the zone where it would have been helpful to break the O$_3$N$_2$ vs
O/H degeneracy. Using the \citet{Pilyugin_Thuan_2005} metallicity
calibration based on [OIII]5007/H$\beta$ and [OII]3727/H$\beta$ adds
only a tiny fraction of galaxies. The same applies when using the O/H
values obtained by \citet{Izotov_etal_2006} from direct methods using
the electron temperature derived from [OIII]4363/[OIII]5007. When
comparing our measurements with theirs, we find a systematic offset of
0.2 dex and an rms of 0.13 dex in $Z_{neb}$ for the 177 objects we have
in common, an effect which grows towards lower $Z_{neb}$. We thus
decided to use O$_3$N$_2$ as a nebular metallicity indicator all along
the SF galaxy sequence, keeping in mind that equation (\ref{eq:Z_neb})
will tend to attribute a metallicity $Z_{neb}$ of about 0.2 $Z_\odot$
to the galaxies with the lowest observed O$_3$N$_2$ in our sample.
Other metallicity estimates have been used for galaxies. For example,
\citet{Tremonti_etal_2004} obtained the nebular metallicities by
comparing the observed line ratios with a large data base of
photoionization models. While a priori appealing, this method is not
devoid of problems, as shown by \citet{Yin_etal_2007}. There is a
systematic offset of -0.28 dex and an rms of 0.09 dex between our nebular
metallicities and theirs. Their method also yields a larger range of
values for $Z_{neb}$. For the SF sample, their calibration covers from
$Z_{neb} = 0.78$ to 2.70 $Z_\odot$ for the 5 to 95 percentile ranges,
whereas our calibration covers from 0.47 to 1.13 $Z_\odot$ for the
same percentile ranges.
The calibration by \citet{Pettini_Pagel_2004} is more similar to our
own. There is a slight offset of -0.04 dex with respect to our
calibration and the dispersion for the SF sample is 0.03 dex. Their
calibration also stretches the $Z_{neb}$ range a little: $Z_{neb} =
0.45$ to 1.36 $Z_\odot$ for the 5 to 95 percentile ranges.
Although we believe that our calibration is likely more reliable, we
have performed all the computations in this paper also with the
\citet{Tremonti_etal_2004} and \citet{Pettini_Pagel_2004}
calibrations. While the results differ in absolute scales, the
qualitative conclusions remain identical.
\subsubsection{$Z_{neb}$ bins}
As seen above, both physical and mathematical motivations make
O$_3$N$_2$ a convenient index to map galaxy positions along the SF
wing in the BPT diagram. From equation (\ref{eq:Z_neb}) one sees that
a given value of $Z_{neb}$ using this index defines a straight line of
unit slope in the BPT diagram.
Our SF sample spans the $Z_{neb} = 0.2$--1.6 $Z_\odot$ range from the
tip of the SF-wing to its bottom. In Fig. \ref{fig:BPT} this
interval is chopped into 6 bins of width $\Delta \log Z_{neb} = 0.13$
dex, except for the one of lowest metallicity which is twice as wide
to include more sources. Table \ref{tab:ZnebBinsStats} lists some
properties of galaxies in each of these bins, which are hereafter
labeled A--F. Galaxies inside these bins will be grouped together in
the analysis of star-formation presented in Section
\ref{sec:SFH_results}. Note that the bias in the determination of
$Z_{neb}$ from O$_3$N$_2$ at low metallicities has no consequence for
our study, since almost all the objects from bin A remain in this bin.
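Using the bin limits listed in Table \ref{tab:ZnebBinsStats} (first and last rows), the assignment of galaxies to bins A--F can be sketched as:

```python
import numpy as np

# Bin limits in log Zneb (solar units) from Table tab:ZnebBinsStats;
# bin A is twice as wide (0.26 dex) as bins B-F (0.13 dex each).
EDGES = np.array([-0.71, -0.45, -0.32, -0.19, -0.06, 0.07, 0.20])
LABELS = np.array(list("ABCDEF"))

def zneb_bin(log_zneb):
    """Assign log Zneb values to bins A-F ('-' outside the range)."""
    idx = np.digitize(np.asarray(log_zneb, float), EDGES) - 1
    out = np.full(idx.shape, "-", dtype="<U1")
    inside = (idx >= 0) & (idx < len(LABELS))
    out[inside] = LABELS[idx[inside]]
    return out
```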
\begin{table*}
\begin{center}
\begin{tabular}{lcccccc}
\hline
& Bin A & Bin B & Bin C & Bin D & Bin E & Bin F \\
\hline
$\log Z_{neb}$ min [Z$_\odot$] & -0.710 & -0.450 & -0.320 & -0.190 & -0.060 & 0.070 \\
$\log Z_{neb}$ p05 & -0.608 & -0.435 & -0.311 & -0.180 & -0.053 & 0.071 \\
$\log Z_{neb}$ p50 & -0.494 & -0.364 & -0.242 & -0.111 & -0.001 & 0.081 \\
$\log Z_{neb}$ p95 & -0.454 & -0.324 & -0.194 & -0.064 & 0.054 & 0.112 \\
$\log Z_{neb}$ max & -0.450 & -0.320 & -0.190 & -0.060 & 0.070 & 0.200 \\
\hline
$\log M_\star$ p05 [M$_\odot$] & 7.146 & 7.938 & 8.693 & 9.277 & 9.800 & 10.089 \\
$\log M_\star$ p50 & 8.319 & 8.922 & 9.452 & 9.958 & 10.460 & 10.678 \\
$\log M_\star$ p95 & 9.313 & 9.634 & 10.083 & 10.646 & 11.073 & 11.154 \\
\hline
$\langle \log t_\star \rangle_L$ p05 [yr] & 6.755 & 7.428 & 7.695 & 7.915 & 8.127 & 8.167 \\
$\langle \log t_\star \rangle_L$ p50 & 7.762 & 8.155 & 8.385 & 8.567 & 8.726 & 8.724 \\
$\langle \log t_\star \rangle_L$ p95 & 8.556 & 8.900 & 9.139 & 9.255 & 9.276 & 9.255 \\
\hline
$\log L_{H\alpha}$ p05 [L$_\odot$] & 5.167 & 5.139 & 5.492 & 5.988 & 6.551 & 6.956 \\
$\log L_{H\alpha}$ p50 & 6.475 & 6.475 & 6.750 & 7.149 & 7.544 & 7.816 \\
$\log L_{H\alpha}$ p95 & 7.834 & 7.736 & 7.961 & 8.172 & 8.426 & 8.559 \\
\hline
$\log b$ p05 & -0.157 & -0.392 & -0.698 & -0.947 & -0.973 & -0.890 \\
$\log b$ p50 & 0.769 & 0.436 & 0.211 & -0.006 & -0.213 & -0.199 \\
$\log b$ p95 & 1.527 & 1.192 & 0.905 & 0.644 & 0.366 & 0.272 \\
\hline
\end{tabular}
\end{center}
\caption{Statistics of properties in bins A--F.}
\label{tab:ZnebBinsStats}
\end{table*}
\subsection{Extinctions}
\label{sec:AV_neb}
\subsubsection{Stellar extinction}
As explained in SEAGal I, \starlight\ also returns an estimate of the
stellar visual extinction, $A_V^\star$, modeled as due to a foreground
dust screen. This is obviously a simplification of a complex problem
(\citealp*{Witt_Thronson_Capuano_1992}), so that $A_V$ should be called
a dust attenuation parameter instead of extinction, although we do not
make this distinction. Previous papers in this series have used the
\citet*[CCM]{Cardelli_Clayton_Mathis_1989} reddening law, with $R_V =
3.1$. In order to probe different recipes for dust attenuation, we
have selected 1000 galaxies at random from the SF$^{hq}$ sample and
fitted them with four other functions: the starburst attenuation law
of \citet*{Calzetti_Kinney_Storchi-Bergmann_1994}, the SMC and LMC
curves from \citet{Gordon_etal_2003} and the $\lambda^{-0.7}$ law used
by \citet{Kauffmann_etal_2003a}.
We find that the quality of the spectral fits remains practically
unchanged with any of these 5 laws. Averaging over all galaxies the
SMC law yields slightly better $\chi^2$'s, followed closely by the
Calzetti, $\lambda^{-0.7}$, LMC, and CCM, in this order. As expected,
these differences increase with the amount of dust, as measured by the
derived $A_V^\star$ values or by H$\alpha$/H$\beta$. Yet, KS-tests
showed that in no case do the distributions of $\chi^2$'s differ
significantly. This implies that the choice of reddening law {\em
cannot} be made on the basis of fit quality. A wider spectral coverage
would be needed for a definitive empirical test.
When using different recipes for the dust attenuation, the synthesis
algorithm has to make up for the small variations from one curve to
another by changing the population vector and the value of
$A_V^\star$. To quantify these changes we compare results obtained
with the Calzetti and CCM curves, and consider only the most extincted
objects. Compared to the results for a CCM law, with the Calzetti law
the mean stellar age decreases by 0.09 dex in the median
(qualitatively in agreement with the results reported in Fig 6 of
\citealp{Panter_etal_2007_Moped}), the mean stellar metallicity
increases by 0.05 dex, $A_V^\star$ increases by 0.07 mag and stellar
masses decrease by 0.02 dex. These differences, which are already
small, should be considered upper limits, since they are derived from
the most extincted objects. Somewhat larger differences are found when
using the SMC and $\lambda^{-0.7}$ laws. For instance, compared to the
Calzetti law, the SMC law produces mean stellar ages 0.15 dex younger
and masses 0.07 dex smaller, again for the most extincted objects.
We have opted to use the Calzetti law in our \starlight\ fits and
emission line analysis. The reasons for this choice are twofold: (1)
The Calzetti law yields physical properties intermediate between the
SMC and CCM laws, and (2) this law was built up on the basis of
integrated observations of SF galaxies, similar to the ones studied in
this paper. In any case, the experiments reported above show that this
choice has little impact upon the results.
\starlight\ also allows for population dependent extinctions. Tests
with the same sample of 1000 galaxies show that, as expected, one
generally obtains larger $A_V^\star$ for young populations. However,
simulations show that this more realistic modeling of dust effects is
plagued by degeneracies which render the results unreliable (see also
\citealp{Panter_etal_2007_Moped}; SEAGal V). We
therefore stick to our simpler but more robust single $A_V^\star$
model.
\subsubsection{Nebular extinction}
\label{sec:av_neb}
The nebular V-band extinction was computed from the H$\alpha$/H$\beta$
ratio assuming a \citet{Calzetti_Kinney_Storchi-Bergmann_1994} law:
\begin{equation}
\label{eq:AV_neb}
A_V^{neb} = 7.96 \log {\rm
\frac{(H\alpha/H\beta)_{obs}}{(H\alpha/H\beta)_{int}} }
\end{equation}
\noindent where ${\rm (H\alpha/H\beta)_{obs}}$ and ${\rm
(H\alpha/H\beta)_{int}}$ are the observed and intrinsic ratios,
respectively. Instead of assuming a constant value, we take into
account the metallicity dependence of ${\rm (H\alpha/H\beta)_{int}}$,
which varies between 2.80 and 2.99 for $Z_{neb}$ in the 0.1 to 2.5
$Z_\odot$ range, as found from the photoionization models in SEAGal
III\footnote {Note that the models take into account collisional
excitation of Balmer lines, so that at low metallicities the intrinsic
H$\alpha$/H$\beta$ is different from the pure case B recombination
value.}.
We obtain the intrinsic ratio as follows. We start by assuming ${\rm
(H\alpha/H\beta)_{int}} = 2.86$, from which we derive a first guess
for $A_V^{neb}$. We then use the dereddened ${\rm [OIII]}\lambda5007$
and ${\rm [NII]}\lambda6584$ line fluxes to calculate $Z_{neb}$
(eq.~\ref{eq:Z_neb}). From our sequence of photoionization models
(SEAGal III) and $Z_{neb}$, we derive a new estimate for ${\rm
(H\alpha/H\beta)_{int}}$, and hence $A_V^{neb}$
(eq.~\ref{eq:AV_neb}). It takes a few iterations (typically 2--3) to
converge.
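The iteration can be sketched as follows. This is a simplified stand-in: the SEAGal III photoionization-model sequence is represented by a user-supplied callable mapping $\log Z_{neb}$ to the intrinsic H$\alpha$/H$\beta$, and, since [OIII]$\lambda5007$ and [NII]$\lambda6584$ lie close in wavelength to H$\beta$ and H$\alpha$ respectively, the differential extinction of the O$_3$N$_2$ ratio is approximated by that of the Balmer lines, $A_{H\beta} - A_{H\alpha} = (2.5/7.96)\,A_V$.

```python
import numpy as np

def av_neb(ha_obs, hb_obs, oiii_obs, nii_obs, intrinsic_ratio, n_iter=5):
    """Iterative nebular extinction for scalar observed fluxes.
    The 7.96 coefficient encodes the Calzetti et al. (1994) law, as in
    eq. (AV_neb); `intrinsic_ratio` is a stand-in for the SEAGal III
    model sequence, mapping log Zneb to the intrinsic Ha/Hb."""
    r_int = 2.86                       # starting guess
    av = 0.0
    for _ in range(n_iter):
        av = max(7.96 * np.log10((ha_obs / hb_obs) / r_int), 0.0)
        # Deredden the [OIII]5007/[NII]6584 ratio; its differential
        # extinction is approximated by A_Hb - A_Ha = (2.5/7.96) A_V,
        # so the ratio correction factor is 10^(A_V/7.96).
        o3n2_int = (oiii_obs / nii_obs) * 10.0 ** (av / 7.96)
        log_zneb = -0.14 - 0.25 * np.log10(o3n2_int)
        r_int = intrinsic_ratio(log_zneb)   # updated intrinsic Ha/Hb
    return av
```

With a constant intrinsic ratio the loop converges after the first pass; with the model-based ratio it typically takes the 2--3 iterations quoted in the text.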
For 1.6\% of the objects (1.1\% for the SF$^{hq}$ sample),
H$\alpha$/H$\beta$ is smaller than the intrinsic value, which leads to
$A_V^{neb} < 0$. In such cases, we assume $A_V^{neb} = 0$. We have
corrected both the [OIII]/H$\beta$ and [NII]/H$\alpha$ line ratios for
dust attenuation for the remainder of our analysis.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Fig_AVBalmer.eps}
\caption{(a) Relation between stellar and nebular extinctions for
the SF sample. Lines indicate the 5, 50 and 95 percentiles. (b)
Equivalent width of the ISM component of the NaD doublet (as measured
from residual spectra) against the stellar extinction.}
\label{fig:ResidualsNaD}
\end{figure}
We find that $A_V^{neb}$ and $A_V^\star$ are strongly correlated, as
shown in Fig \ref{fig:ResidualsNaD}a. A robust linear fit including
all points yields $A_V^{neb} = 0.34 + 2.28 A_V^\star$. The ionized
gas thus suffers $\sim$ twice as much extinction as the stellar
continuum, corroborating the results reported in
\citet{Stasinska_etal_2004} (with a different methodology) and SEAGal
I (obtained with a smaller sample, different version of \starlight\ and
a CCM extinction curve), and in agreement with detailed studies of
nearby SF-galaxies (\citealp{Calzetti_Kinney_Storchi-Bergmann_1994}).
We also find that the difference between nebular and stellar
extinctions increases systematically as the mean age of the stellar
population increases.
Given the spatial association of the line emitting gas and the
ionizing populations, these results ultimately imply a breakdown of
our simple single-$A_V^\star$ modelling. In fact, \starlight\
experiments with population dependent $A_V^\star$ point in the same
direction, i.e., the need to allow young populations to suffer more
extinction than older ones. To evaluate to which extent this
simplification affects the results reported in this paper, Sec.\
\ref{sec:samples_YAV} presents experiments where the extinction of
$t_\star \le 10$ Myr components is set according to the empirical
relation $A_V^{neb}(A_V^\star)$ found above.
\subsubsection{Interstellar absorption as traced by the NaD doublet}
The most conspicuous spectroscopic feature of the cold ISM in the
optical range is the NaD doublet at $\lambda\lambda$5890,5896 \AA.
For a constant gas to dust ratio, the strength of this feature, which
measures the amount of cold gas in front of the stars, should
correlate with $A_V^\star$, as found for far-IR bright starburst
galaxies (\citealp{Heckman_etal_2000}). To perform this test for our
sample, we first measure the flux of the NaD doublet in the residual
spectrum, integrating from 5883 to 5903 \AA. We thus remove the
stellar component of this feature, which is also present in stellar
atmospheres, particularly late type stars
(\citealp*{Jacoby_Hunter_Christian_1984}; \citealp{Bica_etal_1991}).
In principle, this is a more precise procedure than estimating the
stellar NaD from its relation to other stellar absorption lines
(\citealp{Heckman_etal_2000, Schwartz_Martin_2004}), but since the NaD
window was masked in all fits (precisely because of its possible
contamination by ISM absorption), the stellar NaD predicted by the
fits relies entirely on other wavelengths, so in practice this is also
an approximate correction. The residual flux is then divided by the
continuum in this range (defined as the median synthetic flux in the
5800--5880 plus 5906--5986 \AA\ windows), yielding the excess
equivalent width $\Delta W_{\rm NaD}$, which says how much stronger
(more negative) the NaD feature is in the data with respect to the
models.
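The measurement of $\Delta W_{\rm NaD}$ can be sketched as follows (window limits as quoted in the text; the sign convention follows the definition above, $\Delta W_{\rm NaD} > 0$ when the observed NaD absorption is deeper than in the model):

```python
import numpy as np

def delta_w_nad(wl, obs_flux, syn_flux):
    """Excess NaD equivalent width: the (obs - model) residual flux
    integrated over 5883-5903 AA, normalized by the median synthetic
    continuum in the 5800-5880 and 5906-5986 AA side windows."""
    wl = np.asarray(wl, float)
    resid = np.asarray(obs_flux, float) - np.asarray(syn_flux, float)
    side = ((wl >= 5800) & (wl <= 5880)) | ((wl >= 5906) & (wl <= 5986))
    cont = np.median(np.asarray(syn_flux, float)[side])
    m = (wl >= 5883) & (wl <= 5903)
    # Positive for absorption in excess of the model (resid < 0)
    y = -resid[m] / cont
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl[m]))
```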
Fig \ref{fig:ResidualsNaD}b shows the relation between $\Delta W_{\rm
NaD}$ and $A_V^\star$. The plot shows that these two independently
derived quantities correlate strongly. Intriguingly, $\Delta W_{\rm
NaD}$ converges to $0.8$ \AA\ in the median as $A_V^\star
\rightarrow 0$. We interpret this offset from $\Delta W_{\rm NaD} = 0$
as due to the fact that the stars in the STELIB library have a
Galactic ISM component in their NaD lines. This propagates to our
spectral models, leading to an overprediction of the stellar NaD
strength, and thus $\Delta W_{\rm NaD} > 0$ when the ISM absorption
approaches zero.
Regardless of such details, the discovery of this astrophysically
expected correlation strengthens our confidence in the
analysis. Furthermore, it opens the interesting prospect of measuring
the gas-to-dust ratio and studying its relation with all other galaxy
properties at hand, from nebular metallicities to SFHs. This goes
beyond the scope of the present paper, so we defer a detailed analysis
to a future communication.
\section{Correlations with nebular metallicity}
\label{sec:Correlations}
Galaxy properties change substantially from the tip of the SF-wing,
where small, metal-poor HII-galaxy-like objects live, to its bottom,
populated by massive, luminous galaxies with large bulge-to-disk
ratios and rich in metals (\citealp{Kennicutt_1998}). The simplest way
to investigate these systematic trends is to correlate various
properties with the nebular metallicity (e.g.,
\citealp{Tremonti_etal_2004, Brinchmann_etal_2004}).
In this section we correlate $Z_{neb}$ with both observed and physical
properties extracted from our stellar population fits. This
traditional analysis, based on current or time-averaged properties,
helps the interpretation of the more detailed study of time-dependent
SFHs presented in the next sections. In fact, this is the sole
purpose of this section. Since most of the results reported here are
already known or indirectly deducible from previous work, we will just
skim through these correlations.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig_ZnebCorrels.eps}
\caption{Correlations of $Z_{neb}$ and (a) the absolute r-band
magnitude, (b) the stellar mass, (c) the surface mass density, (d)
the nebular extinction, (e) the mean stellar metallicity, (f) the
mean stellar age, (g) the ratio of current to mean past SFR, and (h)
the H$\alpha$ luminosity. Numbers in each panel report the
Spearman rank correlation coefficient, and the lines mark the 5, 50
and 95\% percentiles of 25 bins, 3292 points in each bin. The right
hand scale in panel (h) is also $\log L_{H\alpha}$, but in units of
$2 \times 10^8 L_\odot$, such that it can also be read as an
estimate of the current SFR in $M_\odot\,$yr$^{-1}$ (Section
\ref{sec:SFRxHa}).}
\label{fig:ZnebCorrelations}
\end{figure*}
Fig \ref{fig:ZnebCorrelations}a shows $Z_{neb}$ against absolute
r-band magnitude. This is the luminosity-nebular metallicity relation,
previously studied by many authors, and interpreted in terms of a
mass-metallicity relation. Fig \ref{fig:ZnebCorrelations}b shows our
version of the $M_\star$--$Z_{neb}$ relation. Because of the expected
bias in our $Z_{neb}$ estimate at the lowest metallicities (see
Section \ref{sec:Z_neb}), we expect the real mass-metallicity relation
to be flatter at low $M_\star$ than seen in this plot.
As shown by \citet{Kauffmann_etal_2003b}, stellar mass and stellar
surface mass density ($\Sigma_\star$) are very strongly related. It is
thus no surprise to find that $\Sigma_\star$ also correlates with
$Z_{neb}$, as shown in Fig \ref{fig:ZnebCorrelations}c. Our definition
of $\Sigma_\star$ is the same as adopted by \citet{Kauffmann_etal_2003b},
namely $\Sigma_\star = M_\star / 2\pi r_{50,z}^2$, where $r_{50,z}$ is
the half light Petrosian radius in the z-band.
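For concreteness, this definition in code (units are the caller's choice; e.g.\ $M_\star$ in M$_\odot$ and $r_{50,z}$ in kpc gives $\Sigma_\star$ in M$_\odot\,$kpc$^{-2}$):

```python
import numpy as np

def sigma_star(m_star, r50):
    """Stellar surface mass density, Sigma = M_star / (2 pi r50^2),
    with r50 the z-band half-light Petrosian radius
    (Kauffmann et al. 2003 definition)."""
    return np.asarray(m_star, float) / (2.0 * np.pi * np.asarray(r50, float) ** 2)
```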
Fig \ref{fig:ZnebCorrelations}d shows how nebular extinction increases
systematically with $Z_{neb}$. One factor which surely contributes to
this relation is the rise in dust grain formation with increasing gas
metallicity, but other factors may come into play as well
(\citealp{Stasinska_etal_2004}).
Fig \ref{fig:ZnebCorrelations}e shows how nebular and stellar
metallicities correlate. This important relation, first presented in
SEAGal I (for a different sample, SSP base and $Z_{neb}$ scale), shows
that stellar and ISM chemical enrichment levels scale with each other,
as one would expect on the basis of simple chemical evolution
scenarios. The large scatter in Fig \ref{fig:ZnebCorrelations}e is
mostly intrinsic (as we verified by comparing the relation obtained for
data of different qualities), in qualitative agreement with the idea
that stellar and nebular metallicities reflect different evolutionary
phases and react differently to the several processes which regulate
the chemical evolution of galaxies. A similar relation was obtained by
\citet{Gallazzi_etal_2005} using different methods to estimate both
stellar and nebular abundances. Even though we express both
quantities in solar units, these two metallicities are derived by such
radically different means that, as discussed in SEAGal V, they should
not be compared in quantitative terms.
The relation between the mean stellar age $\langle \log t_\star
\rangle_L$ and $Z_{neb}$, shown in Fig \ref{fig:ZnebCorrelations}f,
reflects the fact that young stars have a larger share of the light
output at the tip of the SF wing than at its bottom, where old
populations have a greater weight. Metal-rich SF galaxies thus have a
more continuous star-forming history than metal-poor ones, which are
often dominated (in light, but not mass) by the latest generation of
stars (e.g., \citealp{Corbin_etal_2006}). This is another way to look
at the metallicity--age trend discussed previously in the analysis of
Fig.~\ref{fig:STARLIGHT_fits}. Ultimately, this relation represents a
summary of chemical evolution, in the sense that more evolved systems
have a more enriched ISM. In a related vein, Fig
\ref{fig:ZnebCorrelations}g shows how the ratio of current to mean
past SFR (defined in Section \ref{sec:bScalo_def}) varies along the
metallicity sequence of SF galaxies. This indicates that the
lower-metallicity galaxies are slower in forming stars. When one
considers the mass-metallicity relation (Fig
\ref{fig:ZnebCorrelations}b), this is just another way of looking at
the downsizing effect (\citealp{Heavens_etal_2004_Moped,
Thomas_etal_2005, Mateus_etal_2006_SEAGal2}). Finally, Fig
\ref{fig:ZnebCorrelations}h shows the relation between reddening
corrected H$\alpha$ luminosity and $Z_{neb}$. The y-axis is given in
units such that the values correspond approximately to the current SFR in
$M_\odot\,$yr$^{-1}$ (see Section \ref{sec:SFRxHa}). The correlation,
although statistically unquestionable, has a large scatter. This
implies that galaxies in the 6 $Z_{neb}$ bins defined in Fig
\ref{fig:BPT} have heavily overlapping current SFRs. Section
\ref{sec:SFH_results} presents independent confirmation of this fact.
As expected, all correlations discussed above are also present for the
SF$^{hq}$ sub-sample. For most relations they are in fact somewhat stronger,
whereas for samples defined with less stringent criteria the
correlation strengths weaken, indicating that noise in the data is
responsible for part of the scatter in these relations. Finally, as is
widely known and can be deduced from Fig \ref{fig:ZnebCorrelations}
itself, there are many inter-relations between galaxy properties. Our
use of $Z_{neb}$ as the ``independent'' variable axis in Fig
\ref{fig:ZnebCorrelations} is not meant to indicate that $Z_{neb}$ is
the underlying cause of the correlations; it simply reflects our
interest in mapping physical properties of galaxies along the SF wing
of the seagull in the BPT diagram.
\section{Methods to investigate star formation histories}
\label{sec:SFH_theory}
The main goal of this paper is to study how the SFH varies among SF
galaxies. Most other investigations in this same line used
absorption, emission or continuum spectral indices such as the 4000
\AA\ break, the H$\delta$ absorption, the K, G and Mg bands, or the
H$\alpha$ luminosity and equivalent widths to characterize the SFH
(e.g., \citealp{Raimann_etal_2000, Kong_etal_2003,
Kauffmann_etal_2003b, CidFernandes_Leao_Lacerda_2003,
Brinchmann_etal_2004, Westera_etal_2004}). Our approach, instead, is
to infer the SFH from detailed pixel-by-pixel fits to the full
observed spectrum, thus incorporating all available information.
Whereas our previous work concentrated on the first moments of the age
and metallicity distributions, here we present some basic formalism
towards a robust description of SFHs as a function of time. From the
point of view of methodology, these may be regarded as
``second-order'' products. Astrophysically, however, recovering the
SFH of galaxies is of prime importance. SEAGal V presented our first
results in this direction, including empirically derived
time-dependent mean stellar metallicities. In this section we expand
upon these results, exploring new ways to handle the output of the
synthesis, focusing on the SFHs.
\subsection{Compression methods}
\label{sec:STARLIGHT}
As reviewed in Section \ref{sec:STARLIGHTfits}, \starlight\ decomposes
an observed spectrum in terms of a sum of SSPs, estimating the $x_j$
($j = 1\cdots N_\star$) fractional contribution of each population to
the flux at $\lambda_0 = 4020$ \AA. For this work we used a base of
$N_\star = 150$ SSPs from BC03, spanning 25 ages between $t_{\star,j}
= 1$ Myr and 18 Gyr, and 6 metallicities ($0.005 \le Z_{\star,j} \le
2.5 Z_\odot$). Example fits were shown in Fig
\ref{fig:STARLIGHT_fits}.
Not surprisingly, the 150 components of the population vector
($\vec{x}$) are highly degenerate due to noise, and astrophysical plus
mathematical degeneracies, as confirmed by simulations in
\citet{CidFernandes_etal_2004} and SEAGal I. These same simulations,
however, proved that {\it compressed} versions of the population
vector are well recovered by the method.
Different compression approaches exist among spectral synthesis codes.
In MOPED (\citealp*{Heavens_Jimenez_Lavah_2000_Moped};
\citealp{Reichardt_etal_2001_Moped, Panter_etal_2003_Moped,
Panter_etal_2007_Moped, Heavens_etal_2004_Moped}), for instance,
compression is done {\it a priori}, replacing the full spectrum by a
set of numbers associated with each of the $N_\star + 1$ parameters (the
mass fractions and metallicities in several time bins plus a dust
parameter). STECMAP (\citealp{Ocvirk_etal_2006}) performs a
compression by requiring the resulting SFH to be relatively smooth.
The preference for a smooth solution over a ragged one is effectively
a prior, but the algorithm adjusts the degree of smoothing in a data
driven fashion, so we may call it an ``on the fly'' compression
method. The same can be said about VESPA
(\citealp{Tojeiro_etal_2007_Vespa}), a new code which combines
elements from these two approaches. \starlight\ is less sophisticated
in this respect. Its only built-in compression scheme is that the
final stages of the fit (after the Markov Chains reach convergence)
are performed with a reduced base comprising the subset of the
original $N_\star$ populations which account for $\ge 99\%$ of the
light. For our parent sample of 573141 galaxies the average size of
this subset is $\overline{N_\star^{\rm eff}} = 24$ populations, while
for the 82302 galaxies in the SF sample $\overline{N_\star^{\rm eff}}
= 41$. (This difference happens because the full sample has many old,
passive systems, which require relatively few SSPs, whereas SF
galaxies have more continuous SF regimes, thus requiring more SSPs to
be adequately fit.) Compression beyond this level must be carried out
{\it a posteriori} by the user. As explained in the next section, in
this study we in fact compress this information into only four age
bins by smoothing the population vectors.
Previous papers in this series have taken this {\it a posteriori}
compression approach to its limit, condensing the whole age
distribution to a single number, the mean stellar age:
\begin{equation}
\label{eq:logt_ave}
\langle \log t_\star \rangle_L =
\sum_{j=1}^{N_\star} x_j \log t_{\star,j}
\end{equation}
\noindent where the subscript $L$ denotes a light-weighted average.
Mass-weighted averages are readily obtained replacing $\vec{x}$ by the
mass-fraction vector $\vec{\mu}$. Similarly, stellar metallicities
were only studied in terms of their mass-weighted mean value:
\begin{equation}
\label{eq:Z_ave}
\langle Z_\star \rangle_M =
\sum_{j=1}^{N_\star} \mu_j Z_{\star,j}
\end{equation}
Simulations show that both of these quantities have small
uncertainties and essentially no bias. Regarding practical
applications, these first moments proved useful in the study of
several astrophysical relations, some of which have just been
presented in Section \ref{sec:Correlations} (see
Fig.~\ref{fig:ZnebCorrelations}). Notwithstanding their simplicity,
robustness and usefulness, these averages throw away all time
dependent information contained in the population vector, thus
hindering more detailed studies of galaxy evolution. In what follows
we explore novel ways to deal with the population vector which
circumvent this limitation.
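For reference, equations (\ref{eq:logt_ave}) and (\ref{eq:Z_ave}) in code (the division by the summed weights is redundant when $\vec{x}$ and $\vec{\mu}$ are normalized to unity, but is kept as a safeguard):

```python
import numpy as np

def mean_log_age_L(x, log_t):
    """Light-weighted mean log stellar age; x holds the flux
    fractions at 4020 AA of the base populations."""
    x = np.asarray(x, float)
    return np.sum(x * np.asarray(log_t, float)) / np.sum(x)

def mean_z_M(mu, z):
    """Mass-weighted mean stellar metallicity; mu holds the mass
    fractions of the base populations."""
    mu = np.asarray(mu, float)
    return np.sum(mu * np.asarray(z, float)) / np.sum(mu)
```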
\subsection{Star Formation Rate as a Function of Time}
One alternative to characterize higher moments of the SFH is to bin
$\vec{x}$ onto age-groups, a strategy that goes back to
\citeauthor{Bica_1988} (\citeyear{Bica_1988}; see also
\citealp{Schmidt_etal_1991, CidFernandes_etal_2001}). Though useful,
this approach introduces the need to define bin-limits, and produces a
discontinuous description of the SFH.
A method which circumvents these disadvantages is to work with a {\em
smoothed} version of the population vector. We do this by applying a
gaussian filter in $\log t_\star$, with a FWHM of 1 dex. Given that
our base spans $\sim 4$ orders of magnitude in $t_\star$, this heavy
smoothing is equivalent to a description in terms of $\sim 4$ age
groups, but with the advantage that $\vec{x}_s$ can be sampled
continuously in $\log t_\star$. This approach is analogous to
smoothing a noisy high-resolution spectrum to one of lower resolution,
but whose large-scale features (colours, in this analogy) are more
robust. From the results in SEAGal I, where it was shown that 3 age
groups are reliably recovered, we expect this smoothing strategy to be
a robust one. Furthermore, averaging over a large number of objects
minimizes the effects of uncertainties in the smoothed SFH for
individual galaxies.
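This smoothing step can be sketched as a gaussian kernel average in $\log t_\star$ (an illustration, not the exact post-processing applied to the \starlight\ output; each smoothed fraction is a weighted average of the base populations, so a constant population vector is left unchanged and a single burst is spread over $\sim 1$ dex):

```python
import numpy as np

def smooth_popvec(x, log_t, fwhm=1.0):
    """Smooth the population vector x with a gaussian in log t_star
    (FWHM of 1 dex), on a possibly irregular grid of base ages."""
    log_t = np.asarray(log_t, float)
    x = np.asarray(x, float)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    # Gaussian weights between every pair of base ages
    k = np.exp(-0.5 * ((log_t[:, None] - log_t[None, :]) / sigma) ** 2)
    k /= k.sum(axis=1, keepdims=True)   # each output is a weighted average
    return k @ x
```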
Technically, the issue of age resolution in population synthesis is a
complex one. \citet{Ocvirk_etal_2006}, for instance, find that bursts
must be separated by about 0.8 dex in $\log t_\star$ to be well
distinguished from one another with their SFH inversion
method. Considering that the age range spanned by our base is 4.2 dex
wide, one obtains 5 ``independent'' time bins. This number is similar
to that (6 bins) used by \citet*{Mathis_Charlot_Brinchmann_2006} to
describe the SFH of SDSS galaxies (covering a wider $\lambda$-range
than those simulated by \citeauthor{Ocvirk_etal_2006} but at lower
S/N) with a variant of the MOPED code. Up to 12 time bins were used in
other applications of MOPED. \citet{Panter_etal_2007_Moped} argue
that this may be a little too ambitious for individual galaxies, but
uncertainties in this overparameterized description average out in
applications to large samples. \citet{Tojeiro_etal_2007_Vespa} have a
useful discussion on the number of parameters that can be recovered
with synthesis methods. By using their VESPA code and calculating the
number of parameters on the fly for each individual object, they find
that typically 2--8 parameters can be robustly recovered for SDSS
spectra. Hence, despite the complexity of the issue, there seems to be
some general consensus that the age resolution which can be achieved
in practice is somewhere between 0.5 and 1 dex, so our choice of
smoothing length is clearly on the conservative side.
A further advantage of this continuous description of the SFH is that
it allows a straightforward derivation of a star-formation rate
(SFR). Recall that we describe a galaxy's evolution in terms of a
succession of instantaneous bursts, so a SFR is not technically
definable unless one associates a duration to each burst. The ${\rm
SFR}(t_\star)$ function is constructed by sampling the smoothed
mass-fraction vector $\vec{\mu}^c_s$ in a quasi-continuous grid from
$\log t_\star = 5.6$ to 10.5 in steps of $\Delta \log t_\star = 0.1$
dex, and doing
\begin{equation}
\label{eq:SFR}
{\rm SFR}(t_\star) = \frac {d M^c_\star(t_\star)}{dt_\star} \approx
\frac{\Delta M^c_\star(t_\star)}{\Delta t_\star} =
\frac{M_\star^c \log e}{t_\star}\frac {\mu^c_s(t_\star)}{\Delta
\log t_\star}
\end{equation}
\ni where $M_\star^c$ is the total mass {\em converted to stars} over
the galaxy history until $t_\star = 0$, and $\vec{\mu}^c_s(t_\star)$
is the fraction of this mass in the $t_\star$ bin.\footnote{The
superscript $c$ is introduced to distinguish $M_\star^c$ from the
mass still locked inside stars ($M_\star$), which must be corrected
for the mass returned to the ISM by stellar evolution. This
distinction was not necessary in previous SEAGal papers, which dealt
exclusively with $M_\star$ and its associated mass-fraction vector
($\vec{\mu}$). When computing SFRs, however, this difference must be
taken into account. From the BC03 models for a \citet{Chabrier_2003}
IMF, a $10^{10}$ yr old population has $M_\star^c \sim 2 M_\star$,
i.e., only half of its initial mass remains inside stars nowadays.}
We can also define the time dependent {\em specific} SFR:
\begin{equation}
\label{eq:SSFR}
{\rm SSFR}(t_\star) =
\frac {1}{M^c_\star}
\frac {d M^c_\star(t_\star)}{dt_\star} \approx
\frac{\log e}{t_\star}\frac {\mu^c_s(t_\star)}{\Delta \log t_\star}
\end{equation}
\ni which measures the pace at which star-formation proceeds with
respect to the mass already converted into stars. This is a better
quantity to use when averaging the SFH over many objects, since it
removes the absolute mass scale dependence of equation (\ref{eq:SFR}).
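Equations (\ref{eq:SFR}) and (\ref{eq:SSFR}) translate directly into code. The Python sketch below (our own naming; inputs are illustrative) takes $M^c_\star$ in $M_\odot$ and $t_\star$ in yr:

```python
import numpy as np

def sfr_ssfr(log_t, mu_s, M_c, dlog_t=0.1):
    """Eqs. (SFR)/(SSFR): SFR(t) = M_c * log10(e)/t * mu_s/dlog_t and
    SSFR = SFR/M_c, with t_star in yr and M_c (total mass converted to
    stars) in Msun.  Illustrative sketch; names are not from the paper."""
    t = 10.0 ** log_t
    sfr = M_c * np.log10(np.e) / t * mu_s / dlog_t
    return sfr, sfr / M_c
```

Since $dt_\star = t_\star \ln 10 \, d\log t_\star$, summing ${\rm SFR}\,\Delta t_\star$ over the grid recovers $M^c_\star \sum \mu^c_s$, a useful consistency check on the normalization.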
Three clarifying remarks are in order. (1) All equations above are
marginalized over $Z_\star$, i.e., ${\rm SFR}(t_\star) =
\sum_{Z_\star}{\rm SFR}(t_\star, Z_\star)$ measures the rate at which
gas is turned into stars of {\em any} metallicity. (2) The upper age
limit of our base (18 Gyr) is inconsistent with our adopted cosmology,
which implies a 13.5 Gyr Universe. Given the uncertainties in stellar
evolution, cosmology, observations and in the fits themselves, this is
a merely formal inconsistency, and, in any case, components older than
13.5 Gyr can always be rebinned to a cosmologically consistent time
grid if needed. (3) Finally, since our main goal is to compare the
{\em intrinsic} evolution of galaxies in different parts of the SF
wing in the BPT diagram, throughout this paper we will consider ages
and lookback times in the context of stellar-evolution alone. In other
words we will {\em not} translate $t_\star$ to a cosmological lookback
time frame, which would require adjusting the $t_\star$ scale by
adding the $z$-dependent lookback time of each galaxy.
\subsection{Mass Assembly Histories}
Another way to look at the population vector is to compute the total
mass converted into stars as a function of time:
\begin{equation}
\label{eq:MAH}
\eta^c_\star(t_\star) =
\sum_{t_{\star,j} > t_\star} \mu^c_j
\end{equation}
\ni which is a cumulative function that grows from 0 to 1, starting at
the largest $t_\star$, tracking what fraction of $M^c_\star$ was
converted to stars up to a given lookback time.
We sample $\eta^c_\star$ in the same $\log t_\star = 5.6$--10.5 grid
used to describe the evolution of the SFR, but here we operate on
the original population vector, not the smoothed one. Since most of
the mass assembly happens at large $t_\star$, computing $\eta^c_\star$
with the smoothed SFHs leads to too much loss of resolution. In
essence, however, $\eta^c_\star(t_\star)$ and ${\rm
SSFR}_\star(t_\star)$ convey the same physical information in
different forms.
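Equation (\ref{eq:MAH}) is a plain cumulative sum over the base ages. A short Python sketch (names and inputs illustrative):

```python
import numpy as np

def mass_assembly(t_base, mu, t_grid):
    """Eq. (MAH): eta(t) = sum of mu_j over base ages t_j > t.
    Returns the cumulative fraction of M_c converted to stars at
    lookback times larger than each t in t_grid.  Illustrative sketch
    operating on the raw (unsmoothed) mass-fraction vector."""
    return np.array([mu[t_base > t].sum() for t in t_grid])
```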
\section{The current SFR}
\label{sec:SFRxHa}
The most widely employed method to measure the ``current'' SFR is by
means of the H$\alpha$ luminosity (\citealp{Kennicutt_1983,
Kennicutt_1998, Hopkins_etal_2003}). We have just devised ways of
measuring the time dependent SFR which rely exclusively on the stellar
light, from which one can define a current SFR averaging over a
suitably defined time interval. Before proceeding to the application
of these tools to study the detailed SFHs of galaxies, this section
compares these two independent methods to estimate the current SFR.
The purpose of this exercise is three-fold. First, it serves as yet
another sanity check on the results of the synthesis. Secondly, it
allows us to define in an objective way the ratio of current to
past-average SFR, often referred to as Scalo's $b$ parameter
(\citealp{Scalo_1986}), which is a useful way to summarize the SFH of
galaxies (e.g., \citealp{Sandage_1986, Brinchmann_etal_2004}). Finally,
defining and calibrating a synthesis-based measure of current SFR
equivalent to that obtained with H$\alpha$, allows one to estimate the
current SFR in galaxies where H$\alpha$ is {\em not} powered
exclusively by young stars. This turns out to be very useful in
studies of AGN hosts (\citealp[in prep.]{Torres-Papaqui_etal_2007}).
\subsection{Current SFR from H$\alpha$ luminosity}
For a SFR which is constant over times-scales of the order of the
lifetime of massive ionizing stars ($t_{ion} \sim 10$ Myr), the rate
of H-ionizing photons converges to
\begin{equation}
\label{eq:QH}
Q_H = {\rm SFR} \, {\cal N}_H({\rm IMF},Z_\star)
\end{equation}
\ni where ${\cal N}_H$ is the number of $h\nu > 13.6$ eV photons
produced by a SSP of unit mass over its life (in practice, over 95\%
of the ionizing radiation is produced in the first 10 Myr of
evolution). We computed ${\cal N}_H$ by integrating the $Q_H(t)$
curves for SSPs using the tables provided by BC03, obtaining ${\cal
N}_H = 9.12$, 7.08, 6.17, 5.62, 4.47 and $3.16 \times 10^{60}$
photons$\,$M$_\odot^{-1}$ for $Z_\star = 0.005$, 0.02, 0.2, 0.4, 1 and
2.5 $Z_\odot$, respectively, for a \citet{Chabrier_2003} IMF between
0.1 and 100 M$_\odot$.\footnote{${\cal N}_H$ is 1.66 times smaller for
a Salpeter IMF within the same mass limits.}
One in every 2.226 ionizing photons results in emission of an
H$\alpha$ photon, almost independently of nebular conditions
(\citealp{Osterbrock_Ferland_2006}). This assumes Case B
recombination and that no ionizing photon escapes the HII region nor
is absorbed by dust. Adopting the Chabrier IMF and the $Z_\odot$ value
of ${\cal N}_H$ leads to:
\begin{equation}
\label{eq:SFR_LHa}
{\rm SFR}_{H\alpha}
= \frac{2.226 L_{H\alpha}}{{\cal N}_H h\nu_{H\alpha}}
= 2\,M_\odot\,{\rm yr}^{-1} \left(
\frac{L_{H\alpha}}{10^8 L_\odot} \right)
\end{equation}
This calibration is strongly dependent on the assumed IMF and
upper stellar mass limit. Given its reliance on the most massive
stars, which comprise a tiny fraction of all the stars formed in a
galaxy, SFR$_{H\alpha}$ involves a large IMF-dependent extrapolation,
and thus should be used with care.
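The normalization of equation (\ref{eq:SFR_LHa}) can be checked numerically. The Python sketch below (physical constants are standard values; ${\cal N}_H = 4.47 \times 10^{60}$ photons$\,$M$_\odot^{-1}$ is the solar-metallicity, Chabrier-IMF value quoted above) reproduces the factor of $\sim 2\,M_\odot\,$yr$^{-1}$ per $10^8 L_\odot$ in H$\alpha$:

```python
# Numerical check of SFR = 2.226 L_Ha / (N_H * h * nu_Ha).
L_sun = 3.826e33          # erg/s
h = 6.626e-27             # erg s
c = 2.998e10              # cm/s
lam_Ha = 6562.8e-8        # Halpha wavelength (cm)
N_H = 4.47e60             # ionizing photons per Msun (Chabrier IMF, Zsun)
yr = 3.156e7              # s

def sfr_halpha(L_Ha_Lsun):
    """Current SFR in Msun/yr from the Halpha luminosity in Lsun."""
    E_Ha = h * c / lam_Ha            # energy of one Halpha photon (erg)
    Q_Ha = L_Ha_Lsun * L_sun / E_Ha  # Halpha photon rate (1/s)
    Q_H = 2.226 * Q_Ha               # ionizing photon rate (Case B)
    return Q_H / N_H * yr            # Msun/yr
```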
\subsection{Current SFR from the spectral synthesis}
The SFR from spectral synthesis is based on all stars that contribute
to the visible light, and thus should be more representative of the
true SFR. We define a mean ``current'' SFR from our time dependent
SFHs using equation (\ref{eq:MAH}) to compute the mass converted into
stars in the last $\tau$ years, such that
\begin{equation}
\label{eq:SFR_synthesis}
\overline{{\rm SFR}_\star}(\tau) =
M_\star^c \frac{1 - \eta^c_\star(\tau)}{\tau}
\end{equation}
\ni is the mean SFR over this period. Because of the discrete nature
of our base, the function $\overline{{\rm SFR}_\star}(\tau)$ has a
``saw-tooth'' appearance, jumping every time $\tau$ crosses one of the
$t_{\star,j}$ bin borders.
For the reasons discussed in Section \ref{sec:STARLIGHT}, it is
desirable to include components spanning $\sim 1$ dex in age to obtain
robust results. Since our base starts at 1 Myr, $\tau \sim 10$ Myr
would be a reasonable choice. This coincides with the minimum
time-scale to obtain $\overline{{\rm SFR}_\star}(\tau)$ estimates
comparable to those derived from $L_{H\alpha}$, which are built upon
the assumption of constant SFR over $\tau \ge t_{ion} \sim 10$
Myr. Our base ages in this range are $t_{\star,j} = 10$, 14, 25, 40
and 55 Myr.
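Equation (\ref{eq:SFR_synthesis}) follows the same cumulative logic as equation (\ref{eq:MAH}). A Python sketch (illustrative names and inputs):

```python
import numpy as np

def mean_recent_sfr(t_base, mu, M_c, tau):
    """Eq. (SFR_synthesis): mean SFR over the last tau years,
    M_c * (1 - eta(tau)) / tau, where eta(tau) is the fraction of M_c
    formed at lookback times > tau.  Illustrative sketch."""
    eta = mu[t_base > tau].sum()
    return M_c * (1.0 - eta) / tau
```

Note that the returned value jumps whenever $\tau$ crosses one of the base ages, which is the ``saw-tooth'' behaviour described above.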
\subsection{Synthesis versus H$\alpha$-based current SFRs}
\label{sec:bScalo_def}
\begin{figure*}
\includegraphics[width=\textwidth]{Fig_SFR_Ha_X_Synthesis.eps}
\caption{(a) The solid line shows the Spearman coefficient ($R_S$) of
the $\Sigma_{SFR}({\rm synthesis}) \times \Sigma_{SFR}(H\alpha)$
correlation for different values of $\tau$ in equation
(\ref{eq:SFR_synthesis}). The dotted line indicates the strength of
the ${\rm SFR}({\rm synthesis}) \times {\rm SFR}(H\alpha)$
correlation. (b) Correlation between the SFR per unit area obtained
through H$\alpha$ and our synthesis (for $\tau = 24.5$ Myr). Units
are $M_\odot\,yr^{-1}\,kpc^{-2}$ for both axes. The dotted line
marks the identity line. (c) Correlation between the SFRs derived
from equations (\ref{eq:SFR_synthesis}) and (\ref{eq:SFR_LHa}).
Dashed lines indicate the $y(x)$ and $x(y)$ linear regressions,
while the solid line shows the bisector fit.}
\label{fig:SFR_Ha_X_Synthesis}
\end{figure*}
To compare the SFRs given by equations (\ref{eq:SFR_LHa}) and
(\ref{eq:SFR_synthesis}) we must first choose a specific value for
$\tau$. We do this by correlating the SFR {\em per unit area} obtained
with these two estimators, and seeking the value of $\tau$ which
yields the best correlation. Surface densities were used to remove the
$d^2$ factors common to both SFRs, thus avoiding distance-induced
correlations. Data for the SF$^{hq}$ sample was used in this
calibration. Also, since $L_{H\alpha}$ refers to the emission from
within the $3^{\prime\prime}$ SDSS fibers, $M^c_\star$ was not
extrapolated to the whole galaxy in this comparison.
Fig.\ \ref{fig:SFR_Ha_X_Synthesis} shows the results of this
exercise. Panel a shows the run of the Spearman coefficient ($R_S$)
for different values of $\tau$, with the best value indicated by an
arrow. Given the discreteness of our base, any value in the range of
the $t_\star = 25$ Myr bin yields identically strong correlations
(i.e., same $R_S$). We chose $\tau = 24.5$ Myr because this value
yields zero offset between these two SFRs. This is not a critical
choice, as values in the whole 10 to 100 Myr range yield correlations
of similar strength (Fig.\ \ref{fig:SFR_Ha_X_Synthesis}a). The
corresponding correlations between the synthesis and H$\alpha$-based
SFRs are shown in panels b and c in terms of SFR surface densities and
absolute SFRs, respectively. Robust fits to these relations yield
slopes very close to unity (1.09 in Fig.\
\ref{fig:SFR_Ha_X_Synthesis}b and 0.94 in Fig.\
\ref{fig:SFR_Ha_X_Synthesis}c).
The rms difference between these two SFR estimators is 0.3 dex,
corresponding to a factor of 2. We consider this an excellent
agreement, given that these estimators are based on entirely different
premises and independent data, and taking into account the
uncertainties inherent to both estimators. It is also reassuring that
$\tau$ turns out to be comparable to $t_{ion} \sim 10$ Myr, which is
(by construction) the smallest time-scale for SFR$_{H\alpha}$ to be
meaningful. That the scatter between SFR$_{H\alpha}$ and ${\rm
SFR}_\star$ is typically just a factor of two can be attributed to the
fact that we are dealing with integrated galaxy data, thus averaging
over SF regions of different ages and emulating a globally constant
SFR, which helps to reconcile the hypotheses
underlying equations (\ref{eq:SFR_LHa}) and (\ref{eq:SFR_synthesis}).
With these results, we define the ratio of ``current'' to mean past
SFR as
\begin{equation}
b = \frac{\overline{{\rm SFR}_\star}(\tau = 24.5 {\rm Myr})}
{\overline{{\rm SFR}_\star}(\tau = \tau_G)}
\end{equation}
\ni where $\tau_G$ is the age of the oldest stars in a galaxy. In
practice, since the overwhelming majority of galaxies contain
components as old as our base allows, the denominator is simply
$M_\star^c$ divided by the age of the Universe, such that $b$ is
ultimately a measure of the current specific SFR. This definition
was used in Section \ref{sec:Correlations}, where it was shown that
$b$ decreases by an order of magnitude in the median from the top to
the bottom of the SF-wing.
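Since $M_\star^c$ cancels in the ratio, $b$ depends only on the mass-fraction vector. A Python sketch under the approximation stated above (denominator $= M_\star^c/\tau_G$; names illustrative):

```python
import numpy as np

def scalo_b(t_base, mu, tau_now=24.5e6, tau_G=13.5e9):
    """Scalo's b: mean SFR over the last tau_now years divided by the
    lifetime average M_c/tau_G.  M_c cancels, so only the mass-fraction
    vector mu is needed.  Illustrative sketch."""
    recent_fraction = mu[t_base <= tau_now].sum()
    return (recent_fraction / tau_now) * tau_G
```

For example, a galaxy that formed 1 per cent of its stellar mass in the last 24.5 Myr has $b \approx 5.5$, i.e., it is currently forming stars faster than its lifetime average.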
\section{The Star Formation Histories of SF Galaxies}
\label{sec:SFH_results}
Spectral synthesis methods such as the one employed in this work have
historically been seen with a good deal of skepticism, best epitomized
by \citet{Searle_1986}, who, when talking about the spectral synthesis
of integrated stellar populations, said that ``too much has been
claimed, and too few have been persuaded''. Persuading the reader that
one can nowadays recover a decent sketch of the time-dependent SFR in
a galaxy from its spectrum requires convincing results. In this
section we apply the new tools to describe SFHs presented above
(equations \ref{eq:SFR} to \ref{eq:MAH}) to SF-galaxies in the
SDSS. As shown below, the temporal-dimension leads to a more detailed
view of SF-galaxies than that obtained with mean ages or current SFR
estimates.
\subsection{Distributions of Star Formation Histories}
\begin{figure}
\includegraphics[bb= 23 410 420 700, width=0.5\textwidth]
{Fig_Distributions.eps}
\caption{{\em Left:} Distributions of star formation histories for
$Z_{neb}$-bins A (black) and F (red), as defined in Fig
\ref{fig:BPT}. For each bin, we show the mean SSFR (solid
line), the median (dashed) and the 5 and 95 percentiles of the
distributions (dotted). {\em Right:} Normalized distributions of
log SSFR for $t_\star = 25$ Myr. Gaussians are superimposed to
illustrate that the distributions are log-normal.}
\label{fig:SFH_distrib}
\end{figure}
Our general strategy to explore the statistics of the sample is to
group galaxies according to certain similarity criteria and derive
mean SFHs for each group. Since all the results presented from
Section \ref{sec:SFH_Zneb_bins} onwards are based on mean SFHs, it is
fitting to first ask how representative such means are of the whole
distribution of SFHs.
This is done in Fig \ref{fig:SFH_distrib}, where we show the full
$t_\star$-by-$t_\star$ distribution of SSFR$(t_\star)$, computed with
equation (\ref{eq:SSFR}), for two of the six bins in $Z_{neb}$ defined
in Fig \ref{fig:BPT}: bins A and F, plotted in black and red, and
centered at $Z_{neb} = 0.31$ and 1.22, respectively. Solid lines
indicate the mean SSFR, dashed lines show the median and dotted lines
the corresponding 5 and 95 percentiles of the distributions. The first
thing one notices in this plot is that the distributions are very
wide. For instance, for most of the $t_\star < 1$ Gyr range, their 5
to 95 percentile ranges span over 2 orders of magnitude in SSFR. As
discussed further below, this is in part due to the choice of grouping
galaxies by $Z_{neb}$. Grouping by properties more directly related to
the SFHs should lead to narrower distributions. However, one must
realize that since galaxy evolution depends on many factors, grouping
objects according to any single property will {\em never} produce
truly narrow SFH distributions. Secondly, the distribution of SSFR
values at any $t_\star$ is asymmetric, as can be seen by the
fact that the mean and median curves differ. In fact, as illustrated
by the right panel in Fig \ref{fig:SFH_distrib}, these distributions
are approximately {\em log-normal}, indicating that SFHs result from
the multiplication of several independent factors, as qualitatively
expected on physical grounds.
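The log-normal shape is indeed what one expects when the SSFR results from a product of many independent positive factors, by the central limit theorem acting in log space. A small Python illustration of this point (purely pedagogical mock data, not a fit to the sample):

```python
import numpy as np

rng = np.random.default_rng(0)
# Each mock "galaxy" SSFR is a product of 8 independent positive
# factors; the product is then log-normally distributed, so the
# distribution of log SSFR is approximately Gaussian.
factors = rng.lognormal(mean=0.0, sigma=0.3, size=(10000, 8))
ssfr = factors.prod(axis=1)
```

For a log-normal, the linear-space mean exceeds the median by a factor $e^{\sigma^2/2}$, which is precisely the mean/median asymmetry seen in the left panel of Fig \ref{fig:SFH_distrib}.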
Despite their significant breadth and overlap, it is clear that the
SSFR-distributions for $Z_{neb}$ bins A and F in Fig
\ref{fig:SFH_distrib} are very different, particularly at low
$t_\star$. This is confirmed by KS tests, which show that these two
distributions are undoubtedly different. In fact, for {\em any} pair
of $Z_{neb}$ bins the distributions differ with $> 99\%$ confidence
for {\em any} $t_\star$.
In what follows we will present only mean SFHs, obtained grouping
galaxies according to a subset of the available physical parameters.
Whilst there is clearly more to be learned from the SFH distributions
discussed above, this is a useful first approach to explore the
intricate relations between galaxy properties and their SFHs.
\subsection{Trends along the SF-wing}
\label{sec:SFH_Zneb_bins}
\begin{figure}
\includegraphics[bb= 40 270 340 695, width=0.5\textwidth]
{Fig_SFH_ZnebBins.eps}
\caption{Mean star formation histories for the 6 different
$Z_{neb}$-bins defined in Fig \ref{fig:BPT}, in four different
representations: (a) smoothed population vector,
$\overline{x_s}(t_\star)$, (b) $\overline{\rm SFR}(t_\star)$, (c)
$\overline{\rm SSFR}(t_\star)$, and (d)
$\overline{\eta^c_\star}(t_\star)$.}
\label{fig:mean_SFHs}
\end{figure}
We start our statistical study of galaxy SFHs grouping galaxies in the
six $Z_{neb}$ bins defined in Section \ref{sec:BPTdiagram}. As shown
in Fig \ref{fig:BPT}, $Z_{neb}$ traces the location of a galaxy along
the SF-wing in the BPT diagram.
Fig \ref{fig:mean_SFHs} shows the derived SFHs for the A--F bins in
four different representations, from top to bottom: $x_s$, SFR, SSFR
and $\eta^c_\star$ as a function of stellar age $t_\star$. Each line
represents a $t_\star$-by-$t_\star$ average over all galaxies in the
bin. The plots show that young populations are present in a proportion
which increases systematically as $Z_{neb}$ decreases. This is
evident, for instance, in the $x_s(t_\star)$ panel, which shows how
$Z_{neb}$-related age distributions combine to produce the correlation
between $\langle \log t_\star \rangle_L$ and $Z_{neb}$ depicted in
Fig.~\ref{fig:ZnebCorrelations}f.
The SFR$(t_\star)$ curves (panel b) show that SF-galaxies of different
$Z_{neb}$ differ more in their past SFR, with low $Z_{neb}$ systems having
SFRs about 100 times lower than those of high $Z_{neb}$ a few Gyr ago. In
the more recent past, all curves converge to SFRs of a few
$M_\odot/yr$. At first sight, this convergence seems to be at odds
with the fact that galaxies with low and high $Z_{neb}$ differ by
about one order of magnitude in the median $L_{H\alpha}$ (Table
\ref{tab:ZnebBinsStats}), and thus should differ by a similar factor
in the recent SFR. In fact, there is no contradiction, since what
needs to be considered when comparing the synthesis-based SFR with
that derived from H$\alpha$ is the mean SFR over scales of at least 10
Myr, and these are clearly smaller for low $Z_{neb}$ galaxies than for
those of higher $Z_{neb}$. As shown in
Fig~\ref{fig:SFR_Ha_X_Synthesis}, H$\alpha$ and synthesis based SFRs
agree very well. Ultimately, the apparent coincidence of mean SFR
curves of different $Z_{neb}$ is due to the fact that the relation
between recent SFR and $Z_{neb}$ is a relatively weak and scattered
one (Fig.~\ref{fig:ZnebCorrelations}h), such that along the whole
SF-wing one may find galaxies that transform a similar amount of gas
into stars per year.
The clearest separation between SFHs of galaxies of different
$Z_{neb}$ is provided by the SSFR$(t_\star)$ curves. At ages $\ga$ a
few Gyr all SSFR curves merge. This behavior is a consequence of the
fact that most of the stellar mass is assembled early on in a galaxy's
history, irrespective of $Z_{neb}$ or other current properties. With
our $\Delta \log t_\star = 1$ dex smoothing, this initial phase, over
which $\int {\rm SFR} dt_\star \sim M^c_\star$, becomes a single
resolution element in the SFR curves. Division by $M_\star^c$ to
produce a specific SFR (equation \ref{eq:SSFR}) then makes all curves
coincide. At later times (smaller $t_\star$), however, the curves
diverge markedly, with the lowest and highest $Z_{neb}$ groups
differing in SSFRs by $\sim 2$ orders of magnitude nowadays. This
confirms that the relation between recent and past star-formation is a
key-factor in distributing galaxies along the SF-wing in the BPT
diagram (Fig.~\ref{fig:ZnebCorrelations}g).
Yet another way to visualize the SFH is through the mass-assembly
function defined in equation (\ref{eq:MAH}). Though
$\eta^c_\star(t_\star)$ is computed with the raw (unsmoothed)
population vector, for presentation purposes we apply a FWHM $= 0.2$
dex Gaussian in $\log t_\star$, just enough to smooth discontinuities
associated with the discrete set of $t_{\star,j}$'s in our base. Fig
\ref{fig:mean_SFHs}d shows the results. This is essentially a
cumulative representation of the same results reported in
Fig \ref{fig:mean_SFHs}c, namely, that low $Z_{neb}$ galaxies are
slower in assembling their stars. This plot is however better than
the previous ones in showing that despite these differences, all
galaxies have built up most of their stellar mass by $t_\star = 1$ Gyr.
These encouraging results indicate that synthesis methods have evolved
to a point where one can use them in conjunction with the fabulous
data sets currently available to sketch a fairly detailed
semi-empirical scenario for galaxy evolution. In the next section we
walk a few more steps in this direction by inspecting how
astrophysically plausible drivers of galaxy evolution relate to the
SFHs recovered from the data.
\subsection{Star Formation Histories and Chemical Evolution in Mass and
Surface-Density bins}
\label{sec:MassBins}
The value of grouping galaxies by $Z_{neb}$ is that it maps SFHs to a
widely employed diagnostic tool: the BPT diagram. Yet, present day
nebular abundance is not a cause, but a consequence of galaxy
evolution. In this section we leave aside our focus on the BPT
diagram and group galaxies according to properties more directly
associated to physical drivers of galaxy evolution. Two natural
candidates are the mass ($M_\star$) and surface mass-density
($\Sigma_\star$). Like $Z_{neb}$, both $M_\star$ and $\Sigma_\star$
can be considered the end product of a SFH, yet they are clearly more
direct tracers of the depth of the potential well and the degree of gas
compression, two key parameters affecting physical mechanisms which
regulate galaxy evolution (\citealp{Schmidt_1959,
Tinsley_1980,Kennicutt_1998}).
Fig \ref{fig:mean_SFHs_MassBins} shows our different representations
of the SFH of SF galaxies for five 1 dex-wide mass bins centered at
$\log M_\star/M_\odot = 7.5 \cdots 11.5$. Given that $M_\star$ and
$Z_{neb}$ are related, the overall evolutionary picture emerging from
this plot is similar to the one obtained binning galaxies in
$Z_{neb}$, i.e., massive galaxies assemble their stars faster than
low-mass galaxies. On the whole, Fig \ref{fig:mean_SFHs_MassBins}
provides a compelling visualization of galaxy downsizing.
The most noticeable difference with respect to $Z_{neb}$-binned
results is on the absolute SFR curves (compare Figs
\ref{fig:mean_SFHs}b and \ref{fig:mean_SFHs_MassBins}b). This
difference is rooted in the fact that galaxies of similar $Z_{neb}$
span a much wider range of SFRs than galaxies of similar $M_\star$.
This can be illustrated focusing on recent times, and inspecting the
$L_{H\alpha}$--$Z_{neb}$ relation (Fig.\ \ref{fig:ZnebCorrelations}h),
with the understanding that $L_{H\alpha}$ can be read as the current
SFR (Fig.\ \ref{fig:SFR_Ha_X_Synthesis}). Despite the statistically
strong correlation ($R_S = 0.49$), the typical 5--95 percentile range
in $L_{H\alpha}$ for a given $Z_{neb}$ is $\sim 2.1$ dex, comparable
to the full dynamic range spanned by the data (2.4 dex over the same
percentile range). In other words, the relation has a large scatter,
and hence $Z_{neb}$-binning mixes objects with widely different SFRs,
explaining why the $\overline{\rm SFR}(t_\star)$ curves in Fig
\ref{fig:mean_SFHs}b tend to merge at low $t_\star$. The
$L_{H\alpha}$-$M_\star$ relation (not shown), on the other hand, is
stronger ($R_S = 0.68$), partly due to the $d^2$ factors in common to
absolute SFRs and $M_\star$. Grouping by $M_\star$ then selects
galaxies in narrower SFR ranges, producing the well separated curves
seen in Fig \ref{fig:mean_SFHs_MassBins}b.
\begin{figure}
\includegraphics[bb= 40 270 340 695, width=0.5\textwidth]
{Fig_SFH_MassBins.eps}
\caption{As Fig \ref{fig:mean_SFHs}, but binning SF galaxies by their
stellar mass, using five 1 dex wide bins, centered at (from bottom
to top in panel b) $\log M_\star/M_\odot = 7.5$, 8.5, 9.5, 10.5 and
11.5, which contain 542, 4057, 26700, 47153, 3808 galaxies,
respectively.}
\label{fig:mean_SFHs_MassBins}
\end{figure}
Results grouping galaxies according to $\Sigma_\star$ are presented in
Fig \ref{fig:mean_SFHs_SurfDenBins}. Since $\Sigma_\star$ and
$M_\star$ correlate very strongly ($R_S = 0.73$ in our sample), the
results are similar to those obtained grouping galaxies by their
stellar mass. \citet{Kauffmann_etal_2003b}, based on an analysis of
two SFH-sensitive spectroscopic indices (namely $D_n(4000)$ and
H$\delta_A$), propose that $\Sigma_\star$ is more directly connected
to SFHs than $M_\star$. This is not obviously so comparing Figs
\ref{fig:mean_SFHs_MassBins} and \ref{fig:mean_SFHs_SurfDenBins}. A
more detailed, multivariate analysis is needed to evaluate which is
the primary driver of SFHs.
\begin{figure}
\includegraphics[bb= 40 270 340 695, width=0.5\textwidth]
{Fig_SFH_SMDBins.eps}
\caption{As Fig \ref{fig:mean_SFHs}, but binning SF galaxies by their
stellar surface densities, with five 0.5 dex wide bins centered at
$\log \Sigma_\star = 7.25$, 7.75, 8.25, 8.75 and 9.25
$M_\odot\,$kpc$^{-2}$, containing 1477, 12554, 37177, 27742, 3046
galaxies respectively.}
\label{fig:mean_SFHs_SurfDenBins}
\end{figure}
\section{Selection effects and modelling caveats}
\label{sec:samples}
This section deals with the effects of sample selection, synthesis
ingredients and model assumptions on our results.
\subsection{Selection effects}
We now study to which extent the mean SFHs derived in the last section
are affected by the way we have defined SF galaxies. We address this
issue recomputing mean SFHs for samples constructed with alternative
selection criteria, and comparing to the results obtained with our
default sample.
\begin{figure*}
\includegraphics[bb=40 460 572 700,width=\textwidth]
{Fig_Evol_AltSamples.eps}
\caption{Average mass assembly histories ($\eta^c_\star$, top
panels) and specific star formation rate (${\rm SSFR}$, bottom)
histories for $Z_{neb}$-bins B to F, color-coded as in Fig
\ref{fig:BPT}. Solid lines show the curves for different sample
definitions. For comparison, the dotted lines in all panels show
the curves for bins B (lower curves; magenta) and F (upper
curves; red) of the full SF sample.}
\label{fig:alt_samples}
\end{figure*}
We first ask how our emission line and continuum $S/N$ cuts influence
our results. Figs \ref{fig:alt_samples}a and b show the average mass
assembly and SSFR functions for the high-quality SF$^{hq}$ sub-sample
defined in Section \ref{sec:sample_definition}. The results for this
better-data sub-sample are very similar to those obtained with the
full SF sample. The SSFR curves in recent times are skewed to slightly
higher rates, which reflects the fact that objects in the SF$^{hq}$
sample are slightly younger than those in the full SF sample, as shown
in Fig \ref{fig:obs_prop}.
Our BPT-based selection of SF galaxies used the dividing line proposed
in SEAGal III, which is more restrictive than the empirical line
proposed by \citet{Kauffmann_etal_2003c}. We define the SF$^{kl}$
sample as the 111026 galaxies classified as SF according to the
\citet{Kauffmann_etal_2003c} line. Fig.~\ref{fig:alt_samples}c and d
show that the SFHs for $Z_{neb}$ bins in this sample are nearly
indistinguishable from those obtained with the SEAGal classification
criterion.
Another concern is the inclination effect. The spectra of edge-on
objects are biased by the metal-poorer outer parts of the galaxies,
leading to an underestimation of $Z_{neb}$. This may lead us to place
a galaxy in a lower $Z_{neb}$ bin than it would if it were seen face
on, possibly affecting the mean SFH in that bin. To investigate this
effect we have defined a sub-sample of nearly face-on galaxies,
SF$^{fo}$ (6842 objects), selecting by the inclination parameter, $b/a
\ge 0.9$. Figs~\ref{fig:alt_samples}e and f show that the SFHs derived
with this sample are practically the same as for the full sample.
Aperture effects are a common source of concern in studies of SDSS
spectra (e.g., \citealp{Gomez_etal_2003}; SEAGal I). To investigate
how such effects impact upon our SFHs we defined two samples: the
SF$^{z}$ sample, which comprises 58153 SF galaxies with $z \ge 0.05$
(as opposed to 0.002 for the full SF sample), and the SF$^{ap}$
sample, comprising only the 1096 objects with more than half of their
$z$ band luminosity inside the fiber. Both criteria preferentially
exclude the population of low $Z_{neb}$, low $M_\star$ galaxies
(distant galaxies of this kind are absent from the SDSS because of its
limiting magnitude). Accordingly, the change in SFHs is
only noticeable for the lowest $Z_{neb}$ bins, as shown in
Figs~\ref{fig:alt_samples}g--j. In particular,
Fig~\ref{fig:alt_samples}j shows that the mean SFH for bin C in the
SF$^{ap}$ sample matches that of bin B in the full sample. Of all
selection-induced changes discussed here, this is the largest one;
yet, all it does is to shift the SFHs from one group of galaxies to
the adjoining one.
To summarize, selection criteria may influence the derived mean SFHs
in quantitative terms, but do {\em not} modify the relative pattern of
mean SFHs of galaxies in different $Z_{neb}$ bins. The same applies to
grouping galaxies according to properties other than $Z_{neb}$. The
general trends in the SFH as a function of global galaxy properties
obtained in this work are therefore robust against variations in the
sample selection criteria.
\subsection{Experiments with different models}
One should also ask to which extent our results are robust against
changes in the base of evolutionary synthesis models. While answering
this question requires an in-depth study far beyond the scope of this
paper, we believe the choice of models has a much larger impact on
SFHs than selection effects.
Panter \etal (2007) reported results of MOPED experiments using
spectral models from different sources (\citealp{Jimenez_etal_2004,
Fioc_Rocca-Volmerange_1997, Maraston_2005, Bruzual_Charlot_1993};
and BC03), all sampled at $\Delta \lambda = 20$ \AA. For the 767
galaxies in their randomly selected test sample, the resulting mean
star-formation fractions (analogous to our $\vec{\mu}$ vector) differ
by factors of a few for the youngest and oldest ages, and close to a
full order of magnitude for $t_\star \sim 0.1$--1 Gyr. Recovering SFHs
in this intermediate age range is particularly hard, as discussed by
\citet{Mathis_Charlot_Brinchmann_2006}. Indeed, the experiments
reported by Panter \etal often find a suspiciously large 1 Gyr
component.
The behaviour of our \starlight\ fits is also anomalous in this age
range. This is clearly seen in the mean SFHs shown in Figs
\ref{fig:mean_SFHs}--\ref{fig:mean_SFHs_SurfDenBins}, particularly in
the $\vec{x}_s$ representation, which shows a hump at $\sim 1$
Gyr. Interestingly, \starlight\ experiments with a base of
evolutionary synthesis models using the MILES library of
\citet[instead of the STELIB library used in the base adopted for
this and previous SEAGal studies]{Sanchez-Blazquez_etal_2006} do not
produce this hump at $\sim 1$ Gyr. In fact, the whole mean SFHs
derived with this new set of models differs systematically from those
shown in Figs
\ref{fig:mean_SFHs}--\ref{fig:mean_SFHs_SurfDenBins}. The mass
assembly function $\eta^c_\star(t_\star)$, for instance,
rises more slowly and converges at somewhat smaller $t_\star$ than
those obtained in this work. There are also systematic differences in
stellar extinction, which comes out $\Delta A_V^\star \sim 0.4$ mag
larger with the MILES models. Extensive tests with these new bases,
including different prescriptions of stellar evolution as well as
different spectral libraries, are underway, but these first results
show that significant changes can be expected.
Reassuringly, however, these same experiments show that the pattern of
SFHs as a function of $Z_{neb}$, $M_\star$ and $\Sigma_\star$ reported
in this paper does {\em not} change with these new models. In any
case, these initial results provide an eloquent reminder of how
dependent semi-empirical SFH studies are on the ingredients used in
the fits.
\subsection{Experiments with differential extinction}
\label{sec:samples_YAV}
\begin{figure}
\includegraphics[bb= 40 180 340 600, width=0.5\textwidth]
{Plot_SFHandChemEvol_17kYAV_vs_SFhq.eps}
\caption{Comparison between mean star formation histories for the
SF$^{hq}$ sample modelled with only one stellar extinction (dotted
lines) and with differential extinction (solid lines). For clarity,
only $Z_{neb}$-bins B, D and F are drawn, and color-coding is the
same as in Fig \ref{fig:BPT}. Each panel shows a different
representation of SFHs: (a) smoothed population vector,
$\overline{x_s}(t_\star)$, (b) $\overline{\rm SFR}(t_\star)$, (c)
$\overline{\rm SSFR}(t_\star)$, and (d)
$\overline{\eta^c_\star}(t_\star)$.}
\label{fig:SFHs_17kYAV}
\end{figure}
We saw in Section \ref{sec:av_neb} that the nebular and stellar
extinctions are strongly correlated, but with $A_V^{neb} \sim$
twice $A_V^{\star}$. This indicates that the uniform stellar
extinction used in our fits is not adequate to model star-forming
regions, which should be subject to an extinction similar to that of
the line-emitting gas, i.e., $A_V^\star(t_\star \la 10^7 {\rm yr}) \sim
A_V^{neb}$. It is therefore fit to ask whether and how such a
differential extinction affects our general results.
Given the difficulties in recovering reliable population dependent
$A_V^\star$'s from spectral synthesis in the optical range alone, we
address this question by postulating that populations younger than
$10^7$ yr are extincted by $0.34 + 2.28 A_V^{\star}$, i.e., the
empirical $A_V^{neb}(A_V^\star)$ relation found in Section
\ref{sec:av_neb}, with $A_V^{\star}$ now denoting the extinction to $>
10^7$ yr stars.\footnote{We thank the referee for suggesting this
approach.} The 17142 galaxies in the SF$^{hq}$ sub-sample were
re-fitted with this more realistic modified recipe for extinction
effects.
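As a minimal numerical sketch of this recipe (the function name and code organization are ours, not those of the actual fitting code), the age-dependent extinction reads:

```python
def population_extinction(t_star_yr, av_star):
    """Extinction (in mag) assigned to a stellar population of age t_star_yr.

    Populations older than 1e7 yr keep the fitted stellar extinction
    av_star; younger populations receive the empirical nebular relation
    A_V^neb = 0.34 + 2.28 * A_V^star found for this sample.
    """
    if t_star_yr < 1e7:
        return 0.34 + 2.28 * av_star
    return av_star
```

For instance, a galaxy fitted with $A_V^\star = 0.5$ mag for its old populations would have its young populations extincted by $0.34 + 2.28 \times 0.5 = 1.48$ mag.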
Qualitatively, one expects that forcing a uniform $A_V^\star$ fit on a
galaxy where the young stars suffer more extinction than the others
should lead to an overestimation of the age of the young population.
This older and thus redder young population would compensate for the
mismatch in $A_V^\star$. Allowing for $A_V^\star(t < 10^7~{\rm yr})$
larger than the $A_V^\star$ of the $t > 10^7~{\rm yr}$ populations
makes it possible to fit the same spectrum with younger and dustier
populations. Hence, the recent SFRs should increase.
This expectation is fully confirmed by these new fits. Fig.
\ref{fig:SFHs_17kYAV} shows a comparison between three $Z_{neb}$ bins
(B, D and F) for the old and new fits of the SF$^{hq}$ sample. One
sees that the new average SFR and SSFR curves are shifted by $\sim
0.3$ dex upwards in the $t_\star \le 10^7$ yr range with respect to
the ones obtained with a single extinction. The rearrangements in the
population vector tend to be in the sense of shifting some light from
old populations to the $\le 10^7$ yr components.
Not surprisingly, the properties which change most are those directly
related to the strength of the young population, such as current SFR,
which increases by 0.3 dex on average, and the mean stellar age, which
decreases by $\Delta \langle \log t_\star \rangle_L \sim 0.1$ dex. The
changes in other global properties such as $A_V^\star$ and $M_\star$
are much smaller than this.
These experiments are obviously a simplification of the problem of
dust distribution in galaxies, yet they suggest that the choice of the
extinction modelling can have non-negligible effects on the derived
SFH curves. On the whole, however, the qualitative pattern of SFHs as
a function of $Z_{neb}$ or other variables stays the same. As found in
the sample selection studies, quantitative changes are at best
equivalent to moving from one bin to the next, so that our general
conclusion does not change.
\section{Summary}
\label{sec:Summary}
In this paper we have studied physical properties of 82302 normal
star-forming galaxies from the SDSS DR5, by using results from our
stellar population synthesis code, \starlight, and our emission-line
measurement algorithm.
Before reviewing our main results, we highlight some aspects of this
study which have relatively little impact upon our general
conclusions, but represent significant refinements in our methodology.
\begin{enumerate}
\item We have detected a systematically overestimated continuum level
around H$\beta$, whose origin we tentatively attribute to deficiencies
in the STELIB calibration in this range. Gaussian fits to the H$\beta$
emission which disregard this offset tend to underestimate the line
flux by 4\% on average, and $\sim 7$\% in the case of weaker lines.
These are relatively small, yet systematic effects, which propagate to
estimates of nebular extinction, metallicities and galaxy
classification.
\item SF galaxies were selected according to the theoretical
criterion proposed in SEAGal III, which minimizes contamination
by AGN emission.
\item Nebular extinctions and metallicities were derived
self-consistently, allowing for the metallicity dependence of the
Balmer decrement. Five different reddening laws were explored, but
found to produce equally good spectral fits and relatively small
differences in derived physical properties.
\item We have confirmed the strong correlation between $A_V^{neb}$ and
$A_V^\star$ found in SEAGal I. We have also identified a strong
correlation between the strength of the ISM component of the NaD
absorption doublet and the amount of dust derived from the synthesis.
\item Different recipes for nebular metallicity estimates were tried.
Some of them proved not to be adequate for this study, either because
of the lack of spectral data (e.g., measures of [ArIII]$\lambda$7135
and [OIII]$\lambda$4363 emission lines were available for few objects), or
because such calibrations were only valid in the low-$Z_{neb}$ regime,
thus encompassing a very small fraction of objects from our sample.
Therefore, throughout our analysis we use the O$_3$N$_2$ index and the
calibration by \citet{Stasinska_2006} to measure the nebular
metallicity. Although this is not a reliable calibrator at the lowest
metallicities, it is good enough for our analysis in
$Z_{neb}$-bins. Furthermore, it has the nice virtue of being directly
related to the position of the objects in the BPT diagram.
\end{enumerate}
We now summarize results related to the main goal of this paper,
namely, to investigate the SFH of galaxies along the SF wing in the
BPT diagram. In practice, this means studying how SFHs change as a
function of nebular metallicity, even though $Z_{neb}$ is more a
product than a cause of galaxy evolution.
\begin{enumerate}
\item We started our study with a traditional analysis, correlating
$Z_{neb}$ with several physical and observed properties. This analysis
confirms results obtained directly or indirectly in the past by other
works, such as relations between the nebular metallicity and galaxy
luminosity, mass, dust content, mean stellar metallicity and mean
stellar age.
\item Formalism towards a time-dependent analysis was then
presented. Simple ways to compress the output of our stellar
population synthesis code were proposed. These are based either on an
a posteriori smoothing of the age distribution, which allows the
derivation of time-dependent star formation rates, or on a cumulative
mass assembly history.
\item As a first application of this time dependent description of
SFHs we computed the current SFR obtained from our spectral fits. The
resulting values of SFR$_\star$ agree very well with more traditional
estimates based on the luminosity of H$\alpha$. The scatter between
SFR$_\star$ and SFR$_{H\alpha}$ is just a factor of 2, despite the
differences in the underlying assumptions and sensitivity to the IMF.
This result strengthens confidence in our method, and, more
importantly, opens the possibility of measuring current SFRs in
galaxies hosting AGN, where the H$\alpha$ method does not apply.
\item Fully time dependent SFHs were then derived grouping galaxies
into six $Z_{neb}$ bins spanning the entire SF wing of the BPT
diagram. Mean SFHs for each of these bins were presented in four
different representations: (a) the smoothed population vector,
$\overline{x_s}(t_\star)$, (b) the star formation rates
$\overline{\rm SFR}(t_\star)$, (c) specific star formation rates
$\overline{\rm SSFR}(t_\star)$, and (d) mass-assembly histories,
$\overline{\eta_\star^c}(t_\star)$.
\item We found that SFHs vary systematically along the SF
sequence. Though all galaxies assembled the bulk of their stellar
mass over 1 Gyr ago, low $Z_{neb}$ systems evolve at a slower pace.
Galaxies at the tip of the SF wing have current specific SFRs about 2
orders of magnitude larger than the metal-rich galaxies at the bottom
of the BPT diagram.
\item At any given time, the distribution of SSFRs for galaxies within
a $Z_{neb}$-bin is quite broad and approximately log-normal.
\item We performed the same SFH study grouping galaxies by their
stellar mass and surface mass density. Given the existence of
$Z_{neb}$--$M_\star$--$\Sigma_\star$ relations, the overall picture is
the same as that obtained grouping by $Z_{neb}$. Thus, low $M_\star$
(low $\Sigma_\star$) systems are the ones which evolve more slowly,
with current SSFRs much larger than those of more massive (denser)
galaxies.
\item Finally, we have analysed a number of selection and modelling
effects that might bias our results, and show that while they may
affect the derived SFHs quantitatively, the organization of SFHs as a
function of $Z_{neb}$, $M_\star$, $\Sigma_\star$ remains the same.
Experiments with new evolutionary synthesis models and differential
extinction fits were reported and
found to lead to substantially different SFHs, yet preserving this
same overall pattern.
\end{enumerate}
\section*{ACKNOWLEDGMENTS}
We are greatly indebted to several colleagues and institutions around
the globe who have contributed to this project by allowing access to
their computers. The \starlight\ project is supported by the
Brazilian agencies CNPq, CAPES, FAPESP, by the France-Brazil
CAPES/Cofecub program and by Observatoire de Paris.
Funding for the SDSS and SDSS-II has been provided by the Alfred
P. Sloan Foundation, the Participating Institutions, the National
Science Foundation, the U.S. Department of Energy, the National
Aeronautics and Space Administration, the Japanese Monbukagakusho, the
Max Planck Society, and the Higher Education Funding Council for
England. The SDSS Web Site is http://www.sdss.org/. The SDSS is
managed by the Astrophysical Research Consortium for the Participating
Institutions. The Participating Institutions are the American Museum
of Natural History, Astrophysical Institute Potsdam, University of
Basel, University of Cambridge, Case Western Reserve University,
University of Chicago, Drexel University, Fermilab, the Institute for
Advanced Study, the Japan Participation Group, Johns Hopkins
University, the Joint Institute for Nuclear Astrophysics, the Kavli
Institute for Particle Astrophysics and Cosmology, the Korean
Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos
National Laboratory, the Max-Planck-Institute for Astronomy (MPIA),
the Max-Planck-Institute for Astrophysics (MPA), New Mexico State
University, Ohio State University, University of Pittsburgh,
University of Portsmouth, Princeton University, the United States
Naval Observatory, and the University of Washington.
\subsection{Annotation process}
FQuAD2.0 is an extension of FQuAD1.1 \citep{fquad}.
This extension consists in the addition of unanswerable questions. These questions are hand-crafted in an adversarial manner in order to be difficult to distinguish from answerable ones. To achieve this goal we gave precise guidelines to the annotators:
\begin{itemize}
\item An adversarial question must be relevant to the context paragraph by addressing a topic also addressed in the context paragraph.
\item An adversarial question should be designed in the following way: ask an answerable question on the paragraph, and apply to it a transformation such as an entity swap, a negation or something else that renders the question unanswerable.
\end{itemize}
The articles and paragraphs used in the train, development and test sets of FQuAD1.1 and FQuAD2.0 are exactly the same.
An annotator is presented with a paragraph and the answerable questions already collected for this paragraph for FQuAD1.1. They are then asked to forge at least 4 adversarial questions, spending up to 7 minutes per paragraph. A total of 17,765 adversarial questions were collected over 3,100 paragraphs. As FQuAD contains 14,908 paragraphs in total, unanswerable questions were not annotated for every paragraph, nor for every article. In order to have reliable evaluations on the development and test sets for this new task, we chose to annotate a proportionally large number of adversarial questions in these two sets. They contain in total around 42\% adversarial questions, while the train set contains 16\% adversarial questions. More statistics can be found in Table \ref{tab:fquad2.0}.
We used the Étiquette annotation platform\footnote{\url{https://etiquette.illuin.tech/}} developed by Illuin Technology. It has a dedicated interface to annotate the Question Answering task. Unanswerable questions can be annotated by indicating that the answer is an empty string. A screenshot of the platform is displayed in Figure \ref{fig:platform}.
A total of 18 French students contributed to the annotation of the dataset. They were hired in collaboration with the Junior Enterprise of CentraleSupélec\footnote{\url{https://juniorcs.fr/en/}}.
To limit the bias introduced by an annotator's own style of forging adversarial questions, each annotator only contributed to a given subset: train, development or test.
\input{tables/statistics}
\input{tables/categories}
\subsection{Statistics}
To maximize the number of available questions for fine-tuning experiments, while keeping a sufficiently large set for evaluation, we decided to merge the train and test sets of FQuAD2.0 into a bigger training set, and keep the development set intact for evaluating the obtained models. The new training set contains a total of 13,591 unanswerable questions.
Main statistics for FQuAD1.1 and FQuAD2.0 are presented in Table \ref{tab:fquad2.0}.
\subsection{Challenges raised by adversarial questions}
\label{sec:questions-challenges}
To understand the different types of adversarial questions collected, we propose a segmentation of the challenges raised by adversarial questions in FQuAD2.0.
To do so, we randomly sampled 102 questions from the newly annotated questions in the FQuAD2.0 development set and manually inspected them to identify the challenges they pose. We then sorted these questions into the identified categories, in order to estimate the proportion of each category within the total dataset. Table \ref{tab:dataset-analysis} presents this analysis, where 5 main categories have been identified.
\subsection{Baselines}
\label{sec:baselines}
To fulfill our first goal, we choose to fine-tune CamemBERT models, because they are the best performing models on several NER, NLI and Question Answering French benchmarks \citep{camembert, fquad}. We could also have chosen FlauBERT models \citep{flaubert}, but \cite{fquad} tends to show that for the same size, CamemBERT models outperform FlauBERT models on the Question Answering task, hence our choice of CamemBERT models. We benchmark two different model sizes: CamemBERT\textsubscript{LARGE} (24 layers, 1024 hidden dimensions, 12 attention heads, 340M parameters) and CamemBERT\textsubscript{BASE} (12 layers, 768 hidden dimensions, 12 attention heads, 110M parameters).
The fine-tuning procedure used is identical to the one described in \citet{bert}, and an implementation can be found in HuggingFace’s Transformers library \citep{Wolf2019HuggingFacesTS}. All models were fine-tuned on 3 epochs, with a warmup ratio of 6\%, a batch size of 16 and a learning rate of $1.5\cdot 10^{-5}$. The optimizer used is AdamW with its default parameters. All experiments were carried out on a single Nvidia V100 16 GB GPU. Whenever necessary, gradient accumulation was used to train with batch size not fitting within the GPU memory. The results obtained on the FQuAD2.0 development set for the different metrics are presented in Table \ref{tab:baselines_scores}.
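As an illustration, the learning-rate schedule implied by these hyperparameters (linear warmup over the first 6\% of steps then, as is conventional with this setup, linear decay to zero; the function below is our own sketch, not code from the Transformers library) can be written as:

```python
def linear_warmup_lr(step, total_steps, peak_lr=1.5e-5, warmup_ratio=0.06):
    """Learning rate at a given optimizer step: linear warmup over the
    first warmup_ratio fraction of training, then linear decay to zero."""
    warmup_steps = int(warmup_ratio * total_steps)
    if step < warmup_steps:
        # ramp up linearly from 0 to peak_lr
        return peak_lr * step / max(1, warmup_steps)
    # decay linearly from peak_lr at the end of warmup down to 0 at total_steps
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

With 1,000 total steps, the peak learning rate of $1.5\cdot 10^{-5}$ is reached at step 60 and decays to zero at the final step.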
These first results allow us to draw the following conclusions:
\begin{itemize}
\item One can see that the best trained model, CamemBERT\textsubscript{LARGE}, obtains a rather high score of 82.3 \% for the NoAns\textsubscript{F1} metric, while keeping a high score of 90.1 \% for the F1\textsubscript{has ans} metric. It confirms that it is possible for a pre-trained French Language Model to learn to determine with high precision when a French question is unanswerable, while extracting the correct answer in most cases when a question is answerable.
\item As observed in \citet{fquad} or \citet{lepetit}, Question Answering seems to be a complex task for a small size (base, small) fine-tuned Language Model to solve, and hence the obtained performances are highly dependent to model size, bigger models performing much better than smaller ones. It appears that for Adversarial Question Answering this observation is even more important, with CamemBERT\textsubscript{LARGE} scoring a 20.2 \% absolute improvement in NoAns\textsubscript{F1} metric compared to CamemBERT\textsubscript{BASE}.
\end{itemize}
\input{tables/scores_fquad1_vs_fquad2}
\input{figures/learning_curve.tex}
\subsection{Comparison with FQuAD1.1 scores}
Whilst the models presented in the previous sub-section clearly learned to both extract accurate answers from answerable questions and determine when a question is unanswerable, one may also wonder whether these models extract answers as accurate as those of similar models solely fine-tuned on FQuAD1.1, i.e., only on answerable questions.
To address this question, we present in Table \ref{tab:comparison-with-fquad11} a comparison of our models of interest in two different set-ups: when fine-tuned solely on FQuAD1.1 and when fine-tuned on the entirety of FQuAD2.0. All evaluations are on the FQuAD1.1 dev set. By dataset construction, the F1 score on the FQuAD1.1 dev set is strictly equivalent to the F1\textsubscript{has ans} on the FQuAD2.0 dev set. Results for fine-tuning on FQuAD1.1 are extracted from \citet{fquad}.
With the addition of unanswerable questions during fine-tuning, the model is encouraged to predict that some questions are unanswerable. As the NoAns\textsubscript{P} is strictly lower than 100\% for every model, there are answerable questions in the dev set for which models wrongly predict that they are unanswerable. For these questions, the predicted answer is the empty string instead of the expected answer. Hence, we can expect a decrease of the F1\textsubscript{has ans} metric in comparison to the set-up where a model is fine-tuned solely on FQuAD1.1.
This assumption is confirmed in Table \ref{tab:comparison-with-fquad11}, with a gap that shrinks as model size grows. Indeed, F1\textsubscript{has ans} is only 1.7 absolute points lower for CamemBERT\textsubscript{LARGE} trained on FQuAD2.0 compared to the same model fine-tuned solely on FQuAD1.1. For CamemBERT\textsubscript{BASE}, the gap grows to 5.6 points.
This gap evolution also follows the evolution of the NoAns\textsubscript{P} metric which is equal to 82\% for CamemBERT\textsubscript{BASE} and 93.5\% for CamemBERT\textsubscript{LARGE}.
\subsection{Learning curves}
\label{sec:learning-curves}
To get a better grasp of how many adversarial questions are needed for a model to learn to determine when a question is unanswerable, we conduct several fine-tuning experiments with an increasing number of adversarial questions used for training.
For every training, all answerable questions of the training set of FQuAD2.0 (i.e. the training set of FQuAD1.1), and unanswerable questions are progressively added to the training set with increments of 2500 questions. We conduct such experiments for the two model architectures of CamemBERT\textsubscript{BASE} and CamemBERT\textsubscript{LARGE}. The results are displayed in Figure \ref{fig:learningcurve}.
\input{tables/multilingual}
From these experiments, we observe the following:
\begin{itemize}
\item The CamemBERT\textsubscript{LARGE} model needs relatively few adversarial examples to achieve decent performance. Indeed, the model trained with 5k adversarial questions achieves 88\% of the performance of the best model trained with 13.6k adversarial questions, i.e., 2.7 times more unanswerable questions.
\item The slope of the CamemBERT\textsubscript{BASE} learning curve is higher than for CamemBERT\textsubscript{LARGE}. For example, the CamemBERT\textsubscript{BASE} model trained with 5k adversarial questions achieves only 66\% of the performance of the best CamemBERT\textsubscript{BASE} model trained with 13.6k adversarial questions. We conclude that the value brought by additional data is more important for smaller models than for bigger ones. However, we also observe that the CamemBERT\textsubscript{LARGE} model trained with 2.5k adversarial questions performs on par with the CamemBERT\textsubscript{BASE} model trained with 12.5k adversarial questions (5 times more data).
\item For both models, the learning curve has not flattened yet, which means that both architectures would benefit from more adversarial training samples. To do so, one would need to annotate further adversarial questions, which we leave for future work.
\end{itemize}
\subsection{Baseline performances by question category}
We presented in Section \ref{sec:questions-challenges} a detailed analysis of the different challenges posed by FQuAD2.0 adversarial questions. To understand how well the trained baseline CamemBERT models perform on each of these challenges, we present in Table \ref{tab:results-by-category} evaluation results for each category. As the evaluation is made solely on adversarial questions, the chosen metric is the recall of the NoAns task: NoAns\textsubscript{R}.
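For reference, the answerability metrics used here (a sketch following the SQuAD2.0-style definitions as we understand them; this helper is not the official evaluation script) treat "predicted unanswerable" as the positive class:

```python
def noans_metrics(gold_unanswerable, pred_unanswerable):
    """Precision, recall and F1 of the 'no answer' decision.

    Both arguments are lists of booleans, one per question:
    True means the question is (predicted) unanswerable.
    """
    pairs = list(zip(gold_unanswerable, pred_unanswerable))
    tp = sum(g and p for g, p in pairs)          # correctly flagged unanswerable
    fp = sum((not g) and p for g, p in pairs)    # answerable flagged unanswerable
    fn = sum(g and (not p) for g, p in pairs)    # missed unanswerable
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

On a subset containing only adversarial questions, the recall output reduces to the fraction of questions the model flags as unanswerable, which is the quantity reported per category.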
\input{tables/scores_by_category}
\section{Introduction}
\label{sec:introduction}
\input{content/introduction.tex}
\section{Related work}
\label{sec:related_work}
\input{content/related_work.tex}
\section{Dataset collection \& analysis}
\label{sec:dataset_collection}
\input{content/dataset_collection.tex}
\input{tables/baseline_fquad2.0}
\section{Evaluation metrics}
\label{sec:eval_metrics}
\input{content/evaluation_metrics.tex}
\section{French monolingual experiments}
\label{sec:monolingual-experiments}
\input{content/monolingual_experiments.tex}
\section{Multilingual experiments}
\label{sec:multilingual-experiments}
\input{content/multilingual_experiments.tex}
\section{Conclusion \& future work}
\label{sec:conclusion}
\input{content/conclusion.tex}
\section*{Acknowledgments}
\input{content/acknowledgements.tex}
\bibliographystyle{acl_natbib}
\subsection{Wide neural networks are minimax optimal with early stopping}
Lin et al. \cite{lin2020optimal} considered the regression problem (Equation \eqref{equation:true_model}). They investigated the generalization ability of the solution obtained by the gradient descent method with $0\in \mathcal{H}_{\Phi}$ as the initialization point (function), and showed that a properly chosen early stopping strategy leads to a minimax optimal solution if the decay rate of the eigenvalues of the kernel is known.
\begin{theorem}\label{thm:K_decay_rate}
Let $\{\lambda_{j}, j=0, 1,2,\cdots\}$ be the eigenvalues associated to the NTK $K$ on $[0,1]$. Then we have
\begin{equation}
\frac{1}{2\pi^3} \frac{1}{j^{2}}\leq \lambda_j \leq \frac{14}{\pi^3} \frac{1}{j^{2}}, j=1,2,...
\end{equation}
\end{theorem}
\begin{remark}
The kernel $K$ can be reformulated as the sum of two kernels, one of which, the main kernel, carries the main properties of the NTK. The proof of Theorem \ref{thm:K_decay_rate} is based on bounds for the eigenvalues associated to this main kernel. Theorem \ref{thm:K_decay_rate} gives explicit upper and lower bounds on the eigenvalues and thus indicates the size of the RKHS of the NTK.
\end{remark}
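The two bounds can be evaluated directly (a trivial numerical sketch; it only tabulates the stated bounds, not the actual NTK spectrum):

```python
import math

def ntk_eigenvalue_bounds(j):
    """Lower and upper bounds of the theorem on the j-th eigenvalue, j >= 1."""
    lower = 1.0 / (2.0 * math.pi ** 3 * j ** 2)
    upper = 14.0 / (math.pi ** 3 * j ** 2)
    return lower, upper
```

Both bounds decay like $j^{-2}$, and their ratio is the constant $28$ for every $j$, so the theorem pins down the decay rate up to a constant factor.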
Proposition \ref{prop:early:stopping} is a direct consequence from \cite{lin2020optimal} by applying Theorem \ref{thm:K_decay_rate}.
\begin{proposition}[Optimality of early stopping]\label{prop:early:stopping}
For the NTK regression, if we stop the training process at $t_{\star}=c n^{2/3}$, the resulting (neural network) $f_{t_{\star}}^{\mathtt{NTK}}$ satisfies that
\begin{equation}
\lVert f_{t_{\star}}^{\mathtt{NTK}}-f_{\star}\rVert^{2} \leq cn^{-2/3},
\end{equation}
with high probability,
which matches the minimax lower bounds, since we have
\begin{equation}
\inf_{\hat{f}}\sup_{f_{\star}\in\mathcal{H}_{K},\lVert f_{\star}\rVert\leq R}\mathbf{E}\lVert \hat{f}-f_{\star}\rVert^{2} \geq cn^{-2/3}.
\end{equation}
\end{proposition}
Combining Proposition \ref{prop:early:stopping} and Proposition \ref{prop: generalization closeness}, Corollary \ref{cor: nn early stopping} follows, indicating that the overparameterized two-layer neural network with early stopping is minimax rate optimal.
\begin{corollary}\label{cor: nn early stopping}
If $m$ is sufficiently large, then
\begin{equation}
\lVert f_{\boldsymbol{\theta}(t_{\star})}-f_{\star}\rVert^{2} \leq cn^{-2/3},
\end{equation}
with probability at least $1-P(m)$ over random initialization and with high probability over the training data.
\end{corollary}
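To make the early-stopping mechanism concrete, here is a toy sketch of gradient descent on the kernel least-squares objective stopped after $t_{\star}\propto n^{2/3}$ iterations (we substitute a generic Gaussian kernel for the NTK and choose the constants arbitrarily, so this only illustrates the procedure, not the theorem's constants):

```python
import numpy as np

def early_stopped_kernel_gd(X, y, kernel, lr=0.1, c=1.0):
    """Run gradient descent on (1/2n)||K a - y||^2 in the coefficient
    vector a of f(x) = sum_i a_i k(x, x_i), stopping after
    t_star = ceil(c * n^{2/3}) steps instead of running to interpolation."""
    n = len(y)
    K = kernel(X[:, None], X[None, :])
    a = np.zeros(n)
    t_star = int(np.ceil(c * n ** (2.0 / 3.0)))
    for _ in range(t_star):
        # gradient of the objective with respect to a is K (K a - y) / n
        a -= lr * K @ (K @ a - y) / n
    return a, t_star
```

The early-stopped iterate fits the data only partially: its training residual has decreased but is not zero, which is exactly the regularizing effect exploited in the proposition.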
The statement in Section \ref{sec:main_result} that an overfitted neural network cannot generalize well seems to contradict the observation (S). However, we realize that this might be caused by the subtle difference between zero training label error and zero training loss. In other words, training until zero training label error actually stops earlier than training until the data are interpolated. We illustrate this subtle difference with the following example.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{100accuracy_vs_interpolation}
\caption{Overfitting vs. 100 \% label accuracy: 4 training data points $\{(0,0),(\frac{1}{3},1),(\frac{2}{3},0), (1,1) \}$. We use gradient descent to train a one-hidden-layer neural network with large width $m=1000$. Compared with the interpolation regime, the loss in the epoch with 100 \% label accuracy is still large. The function in the epoch with 100 \% label accuracy is also far different from the function in the interpolation regime.}
\label{fig: interpolation vs acc}
\end{figure}
Although we realize the stopping time with zero training label error can act as an implicit early stopping, we find that the occurrence of this implicit early stopping depends on the scale of the noise (the fraction of label corruption). In other words, the signal strength of the data plays a significant role in this case.
\subsection{On the role of signal strength}
Proposition \ref{prop:early:stopping} shows that a carefully chosen early stopping time will produce a neural network achieving the minimax rate.
Zhang et al. \cite{zhang2016understanding} observed that the implicit early stopping strategy, i.e., stopping the training process when 100\% label accuracy is reached, can generalize well. One may speculate that the implicit early stopping time is the optimal stopping time that appears in Proposition \ref{prop:early:stopping}. However, this sounds too good to be true. In fact, Zhang et al. \cite{zhang2016understanding} discussed the role of some explicit early stopping strategies in training neural networks. They observed that: 1. early stopping indeed improves generalization on ImageNet, and 2. early stopping does not help much on CIFAR10. These observations make the effects of implicit early stopping elusive. We cannot simply claim that an implicit early stopping strategy would produce a rate-optimal neural network.
We hypothesize that signal strength plays an indispensable role in the success of implicit early stopping strategies. More precisely, when the signal strength is strong enough, the implicit early stopping strategy produces a stopping time near the optimal stopping time $t_{\star}$; when the signal strength is deteriorated (e.g., after performing a label corruption procedure), the implicit early stopping strategy produces a stopping time far from the optimal stopping time $t_{\star}$.
\subsection{Noise of data}
In Section \ref{intro}, we assume the noise $\epsilon\sim N(0,\sigma)$, which is too good to be true in real cases. Nevertheless, we can still believe there is a ground-truth function $f_{\star}(x)$ behind the data. Thus, we can define the gap between the label and the ground-truth function as the noise, i.e.,
\begin{equation}
\epsilon = y - f_{\star}(x).
\end{equation}
In binary classification problems, the true label can be defined as
\begin{equation}
y_{\text{true}}=\boldsymbol{1}_{f_{\star}(x)>0.5}.
\end{equation}
The labels of the data $y$ are possibly different from the true labels $y_{\text{true}}$. If the noise does not affect the true label, i.e., $y=y_{\text{true}}$, then the noise is considered small. For example, suppose $f_{\star}(x_1)=0.1$ with label $y_1=0$, and $f_{\star}(x_2)=0.9$ with label $y_2=1$. These two data points have small noise, since the gap does not affect the true labels derived from $f_{\star}(x_1)$ and $f_{\star}(x_2)$. However, if the noise affects the true label, label corruption happens. For example, if $f_{\star}(x_3)=0.1$ but the label is $y_3=1$, the noise affects the true label of $f_{\star}(x_3)$.
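This distinction can be phrased as a one-line check (a sketch with our own naming, using the 0.5 threshold from the text):

```python
def noise_type(f_star_x, y):
    """Classify the noise on a single binary-labelled data point:
    'small' if the observed label y agrees with the true label
    1{f_star(x) > 0.5}, and 'label corruption' otherwise."""
    y_true = 1 if f_star_x > 0.5 else 0
    return "small" if y == y_true else "label corruption"
```

The examples from the text map directly onto this check: $(f_{\star}(x)=0.1, y=0)$ and $(f_{\star}(x)=0.9, y=1)$ are small noise, while $(f_{\star}(x)=0.1, y=1)$ is label corruption.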
In general, the epoch with the maximum accuracy on the validation set, or the epoch after which the training loss remains unchanged for a long time, is commonly chosen as the stopping time. In this case, early stopping is implicitly used. If the neural network is trained until it reaches 100\% training accuracy, we tend to believe the network violates the conditions of early stopping and cannot perform well on the testing set. However, if the noise is small, 100\% training accuracy does not imply a sufficiently small loss (Figure \ref{fig: interpolation vs acc}). The neural network may have a large training loss even if it reaches 100\% training accuracy, which means the training process is early stopped before the interpolation regime.
If the noise is large, the 100\%-training-accuracy neural network can not have good generalization. From Figure \ref{fig: MLP_diff_noise_mse} and \ref{fig: Alexnet_cifar10_diff_noise_ce}, we can find that as the percentage of label corruption is increasing, the training time to 100\% training accuracy is increasing. Moreover, the generalization gap between `100\% training accuracy' and `best generalization' becomes larger. In these cases, explicit early stopping(e.g. cross validation) is still significant to the generalization of the model.
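The classification of noise above can be sketched in code. The snippet below is our own minimal illustration (the helper names are hypothetical, not from the paper): it flags a data point as corrupted exactly when the observed label disagrees with $y_{\text{true}}=\boldsymbol{1}_{f_{\star}(x)>0.5}$.

```python
# Minimal sketch: per-point noise is "small" when the observed label matches
# the true label induced by the ground-truth function, and "label corruption"
# when the noise flips the true label.

def true_label(f_star_x):
    """y_true = 1_{f_star(x) > 0.5}."""
    return 1 if f_star_x > 0.5 else 0

def is_corrupted(f_star_x, y_observed):
    """The noise eps = y - f_star(x) is harmful iff it changes the true label."""
    return y_observed != true_label(f_star_x)

# The examples from the text:
print(is_corrupted(0.1, 0))  # small noise
print(is_corrupted(0.9, 1))  # small noise
print(is_corrupted(0.1, 1))  # label corruption
```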
In summary, we fill in the last piece of the puzzle in reconciling the controversial observation {\bf (S)} with the bias-variance trade-off doctrine of classical statistical learning theory.
\subsection{Overfitted Neural Network can be approximated by the linear interpolation}\label{ntk:linear interpolation}
Let
\begin{align}
f_{\mathtt{LI}}(x)=y_{i}+\frac{y_{i+1}-y_{i}}{x_{i+1}-x_{i}}(x-x_{i}), \mbox{ when } x\in [x_{i},x_{i+1}]
\end{align}
be the piecewise linear interpolation of the $n$ data points $\{(x_{i},y_{i})\}_{i=1}^{n}$, where $x_{1}<\dots<x_{n}$. We can prove the following statement.
\begin{proposition}[Bounded second order derivative of overfitted NTK model]\label{prop:bound_second_derivative}
Let $\{(x_{i},y_{i})\}$ be a set of $n$ equally spaced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With the probability in Lemma \ref{lem:bound_y}, the second-order derivative of the overfitted NTK model with zero initialization $f^{\mathtt{NTK}}_{\infty}(x) =K(x,\boldsymbol{X})K^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}$ is bounded, i.e.,
\begin{align}
\sup_{x\in (x_i,x_{i+1})}|K''(x,\boldsymbol{X})K^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}|\leq C\sqrt{\log(n)}
\end{align}
for all $i\in[n-1]$ and some constant $C$.
\end{proposition}
\begin{remark}
If all the labels $\boldsymbol{y}$ are bounded (e.g., in classification problems), i.e., $|y_i|\leq C$ for $i\in[n]$, the upper bound in Proposition \ref{prop:bound_second_derivative} becomes an absolute constant instead of $C\sqrt{\log(n)}$. We also note that the property in Proposition \ref{prop:bound_second_derivative} is uncommon among kernels; for instance, it does not hold for the Gaussian kernel or polynomial kernels.
\end{remark}
By a Taylor expansion, this property guarantees that the linear interpolation approximates the model increasingly well as $n$ increases. Thus, we have the following corollary:
\begin{corollary}
[Overfitted NTK model can be approximated by the linear interpolation]\label{LI}
Let $\{(x_{i},y_{i})\}$ be a set of $n$ equally spaced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With the probability in Lemma \ref{lem:bound_y}, the overfitted NTK model with zero initialization $f^{\mathtt{NTK}}_{\infty}(x) =K(x,\boldsymbol{X})K^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}$ can be approximated by the linear interpolation, i.e.,
\begin{align}
\sup_{x\in [0,1]}|f_{\infty}^{\mathtt{NTK}}(x)-f_{\mathtt{LI}}(x)|\leq C\sqrt{\log(n)} /(n-1)^{2}
\end{align}
for some constant $C$.
\end{corollary}
\begin{remark}
To the best of our knowledge, this is the first result showing how wide neural networks interpolate the data. Combined with Proposition \ref{prop: function closeness}, Corollary \ref{LI} shows that for overfitted neural networks, adding more parameters does not increase the model complexity but instead makes the models more concise (experimental results are shown in Figure \ref{fig:LI} (a)). This explains why overfitted neural networks are not affected much by overparameterization.
\end{remark}
In contrast to RBF-kernel (or high-order polynomial kernel) ridgeless regression, Figure \ref{fig:LI} (a) and (b) show that the overfitted NTK model $f^{\mathtt{NTK}}_{\infty}(x)$ and wide neural networks interpolate the data in a nearly linear fashion. Corollary \ref{LI} gives an upper bound of $O(\sqrt{\log(n)}/(n-1)^{2})$ on the gap between the overfitted NTK model and the linear interpolation, while Figure \ref{fig:LI} (c) shows the empirical decay of the gap as $n$ increases. These experimental results are in line with the conclusions of Proposition \ref{prop: function closeness} and Corollary \ref{LI}.
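This comparison can be reproduced numerically. The following sketch is our own illustration (not the paper's code): it uses the explicit one-dimensional NTK formula of Equation \eqref{NTK:d=1:explicit}, random $\pm1$ labels, and measures the gap between the overfitted NTK interpolant and the piecewise linear interpolation at interval midpoints.

```python
import numpy as np

def ntk_1d(x, xp):
    """Explicit 1-d NTK: K(x,x') = (2/pi)(pi - psi)(1 + x x') + |x - x'|/pi + 1."""
    num = 1.0 + np.multiply.outer(x, xp)
    den = np.sqrt(np.multiply.outer(1.0 + x**2, 1.0 + xp**2))
    psi = np.arccos(np.clip(num / den, -1.0, 1.0))
    return (2.0 / np.pi) * (np.pi - psi) * num \
        + np.abs(np.subtract.outer(x, xp)) / np.pi + 1.0

def ntk_vs_linear_gap(n, seed=0):
    rng = np.random.default_rng(seed)
    X = np.linspace(0.0, 1.0, n)                 # equally spaced inputs
    y = rng.choice([-1.0, 1.0], size=n)          # random +-1 labels
    alpha = np.linalg.solve(ntk_1d(X, X), y)     # K^{-1} y
    # sanity check: the overfitted NTK model interpolates the training data
    assert np.allclose(ntk_1d(X, X) @ alpha, y, atol=1e-6)
    mid = (X[:-1] + X[1:]) / 2.0                 # interval midpoints
    f_ntk = ntk_1d(mid, X) @ alpha               # overfitted NTK model
    f_li = np.interp(mid, X, y)                  # piecewise linear interpolation
    return np.max(np.abs(f_ntk - f_li))

print(ntk_vs_linear_gap(20), ntk_vs_linear_gap(80))
```

With this setup, the measured gap should shrink as $n$ grows, in line with Corollary \ref{LI}.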
\begin{figure}[htbp]
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{NN_with_diff_m.png}
\centerline{(a)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{LI_NTK_RBF.png}
\centerline{(b)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{gap_between_NTK_LI.png}
\centerline{(c)}
\end{minipage}
\caption{(a): Interpolation behavior of one-hidden-layer neural networks of different widths; (b): interpolation behavior of the linear interpolation, $f^{\mathtt{NTK}}_{\infty}(x)$, and RBF kernel regression with $\gamma=1$; (c): the gap between $f^{\mathtt{NTK}}_{\infty}(x)$ and the linear interpolation. The training inputs are the equally spaced one-dimensional points $\{x_{i}=\frac{i-1}{n-1}, i \in [n]\}$, and the training labels are drawn uniformly from $\{1,-1\}$ (the choice of labels does not affect the conclusion). The sample sizes are $n=100,200,\dots,1000$.}
\label{fig:LI}
\end{figure}
\subsection{Overfitted network generalizes poorly}
Corollary \ref{LI} implies that a lazily trained overfitted neural network can be approximated by the linear interpolation. On the other hand, it is well known that linear interpolation does not generalize well on noisy data. Thus, we have the following statement.
\begin{theorem}[Overfitted network generalizes poorly]\label{thm:bad_gen}
Let $\{(x_{i},y_{i})\}$ be a set of $n$ equally spaced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. The expected error of the overfitted NTK model with zero initialization $f^{\mathtt{NTK}}_{\infty}(x) =K(x,\boldsymbol{X})K^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}$ is bounded away from zero, i.e.,
\begin{equation}
\mathbf{E}\lVert f^{\mathtt{NTK}}_{\infty}-f_{\star}\rVert^2 \geq C
\end{equation}
for some constant $C$.
\end{theorem}
\begin{remark}
Theorem \ref{thm:bad_gen} provides an example showing that an overfitted neural network can generalize badly, which contradicts the observation {\bf (S)}. Since the lower bound $C$ in Theorem \ref{thm:bad_gen} depends on the scale of the noise, we believe that overfitted neural networks cannot generalize well in most cases unless the noise is small enough. Figure \ref{fig: NTK_poor_performance} gives further examples showing that, even for $d>1$, the generalization error of overfitted networks stays bounded away from zero as the sample size increases.
\end{remark}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{test_error_of_NTK.png}
\caption{The performance of overfitted neural networks on data of different dimensions: the input $x \sim \mathrm{Unif}([0,1]^d)$ with $d=1,3,5$, the ground-truth function $f_{\star}(x) = \sin\left(\sum_{j=1}^{d}x_j/\sqrt{d}\right)$, and $y=f_{\star}(x)+0.1\epsilon$ with $\epsilon \sim N(0,1)$. The training sample sizes are $n=100,200,\dots,1000$ and each test set contains $1000$ points.}
\label{fig: NTK_poor_performance}
\end{figure}
\subsection{Smoothness of data associated with NTK}
In Section \ref{ntk:recollect}, we defined the integral operator $T_K$ associated with the NTK. Building on this definition, we can also define the $s$-th power of $T_K$,
\begin{equation}
T_K^{s} = \sum_{j=0}^{\infty} \lambda_j^{s}\, \phi_{j}\otimes \phi_{j},
\end{equation}
and the interpolation space of power $s\geq 0$ associated with $T_K$,
\begin{equation}
[\mathcal{H}_K]^s:=\left\{\sum_{j=0}^{\infty}c_j\phi_j ~\bigg|~ \sum_{j=0}^{\infty} \frac{c_j^2}{\lambda_j^s}<\infty \right\}.
\end{equation}
Obviously, $[\mathcal{H}_K]^1\subseteq \mathcal{H}_K$. For $f\in[\mathcal{H}_K]^s$, the exponent $s$ quantifies the smoothness of $f$ with respect to the NTK, and it can also be regarded as the signal strength of the function relative to the kernel. We tend to believe that the ground-truth function $f^*$ of image data (e.g., MNIST, CIFAR-10) is very smooth with respect to the NTK (or CNTK).
A function with high smoothness can be fitted easily. For example, if the ground-truth function is the eigenfunction corresponding to the largest eigenvalue of the NTK, i.e.,
\begin{equation}\label{eq:strong_signal_eq}
f^*(x) = \phi_0(x),
\end{equation}
then $\sum_{j=0}^{\infty} c_j^2/\lambda_j^{s}=\lambda_0^{-s}<\infty$ for every $s$, so the ground-truth function is infinitely smooth. To simplify this case, we assume the data has no noise, i.e., $Y_n=f^*(X_n)$. Let $x_i\in[0,1]$ and $X_n=[x_1,\dots,x_n]$. Write the eigendecomposition $\frac{1}{n}K=P D P^{\tau}$ with $K=K(X_n,X_n)$, where $P=[p_1,\dots,p_n]$ is an orthogonal matrix whose columns $p_i$, $i\in [n]$, are the eigenvectors, and $D=\operatorname{diag}\{\hat{\lambda}_1,\dots,\hat{\lambda}_n\}$ is diagonal with $\hat{\lambda}_1\geq \hat{\lambda}_2 \geq \dots \geq \hat{\lambda}_n$. From Equation \eqref{ntk:solution}, we have
\begin{equation}
f_{t}^{\mathtt{NTK}}(X_n) = (I-e^{-\frac{t}{n}K})Y_n = P(I-e^{-tD})P^{\tau} Y_n.
\end{equation}
It is well-known that under mild conditions, the eigenvectors of the kernel matrix converge to the eigenfunctions of the kernel. By Theorem 2.1 in \cite{koltchinskii1998asymptotics}, we have
\begin{equation}
\frac{1}{\sqrt{n}}|p_i^{\tau}\phi_{0}(X_n)| \to \delta_{i,1} \quad a.s.,
\end{equation}
where $\delta_{1,1}=1$ and $\delta_{i,1}=0$ for $i\neq 1$, which means $\frac{1}{\sqrt{n}}P^{\tau} Y_n$ is close to $(1,0,\dots,0)^{\tau}$. In particular, the eigenvector corresponding to the largest eigenvalue of the kernel matrix also converges when $n$ is sufficiently large, i.e.,
\begin{equation}
\sqrt{n}p_1 \to \phi_0(X_n),
\end{equation}
Thus, since the component of the residual along $p_1$ decays at the fast rate $\hat{\lambda}_1$, within a finite time $t$ we have
\begin{equation}
f_t^{\mathtt{NTK}}(X_n) \to Y_n.
\end{equation}
In other words, the training loss can get close to zero and the training accuracy approaches 100\%. Moreover, the generalization is good because the noise is small.
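This spectral picture can be checked numerically. The sketch below is our own illustration (reusing the explicit one-dimensional NTK formula of Equation \eqref{NTK:d=1:explicit}): labels aligned with the top eigenvector $p_1$ of $\frac{1}{n}K$ are fitted almost instantly under the dynamics $f_t^{\mathtt{NTK}}(X_n)=P(I-e^{-tD})P^{\tau}Y_n$, while labels aligned with the bottom eigenvector barely move over the same training time.

```python
import numpy as np

def ntk_1d(x, xp):
    """Explicit 1-d NTK: K(x,x') = (2/pi)(pi - psi)(1 + x x') + |x - x'|/pi + 1."""
    num = 1.0 + np.multiply.outer(x, xp)
    den = np.sqrt(np.multiply.outer(1.0 + x**2, 1.0 + xp**2))
    psi = np.arccos(np.clip(num / den, -1.0, 1.0))
    return (2.0 / np.pi) * (np.pi - psi) * num \
        + np.abs(np.subtract.outer(x, xp)) / np.pi + 1.0

n, t = 100, 5.0
X = np.linspace(0.0, 1.0, n)
lam, P = np.linalg.eigh(ntk_1d(X, X) / n)      # eigh returns ascending order
lam, P = lam[::-1], P[:, ::-1]                  # sort eigenvalues descending

def residual(Y):
    """||f_t(X) - Y|| under kernel gradient flow: f_t(X) = P(I - e^{-tD})P^T Y."""
    return np.linalg.norm(P @ (np.exp(-t * lam) * (P.T @ Y)))

Y_top = np.sqrt(n) * P[:, 0]     # labels aligned with the top eigenvector
Y_bot = np.sqrt(n) * P[:, -1]    # labels aligned with the bottom eigenvector
print(residual(Y_top) / np.sqrt(n))   # ~ e^{-t lambda_1}: essentially fitted
print(residual(Y_bot) / np.sqrt(n))   # ~ e^{-t lambda_n}: almost untouched
```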
More generally, if the ground-truth function is a linear combination of the first few eigenfunctions and the noise is small, it is still infinitely smooth and is fitted as easily as in the example above.
However, if the ground-truth function is not in $\mathcal{H}_K$, the overfitted neural network cannot generalize even when the noise is small. For example, the overfitted neural network cannot generalize on odd/even prediction. More specifically, the input is $x\in \mathbb{N}$ and the ground-truth function is $f^*(x)=1$ if $x$ is odd and $0$ otherwise. In this case, $\sum_{j=0}^{\infty} \frac{c_j^2}{\lambda_j}$ is unbounded (the estimates of $c_j$ are shown in Figure \ref{fig:signal}). Thus, $f^*(x)$ is not in $\mathcal{H}_K$, and the overfitted neural network cannot generalize even if the data has no noise.
\section{Reproducing Kernel Hilbert Space} \label{sec:RKHS}
In this section, we recollect some essential concepts and theorems in the reproducing kernel Hilbert space (RKHS). For simplicity, we assume that $\mathcal{H}$ is a separable Hilbert space.
\begin{definition}[RKHS and reproducing kernel]\label{def:RKHS}
Let $\mathcal{H}$ be a Hilbert space of functions defined on a non-empty set $\mathcal{X}$. It is an RKHS if for all $x\in \mathcal{X}$, there exists a positive constant $M_{x}$, such that
\begin{align}
|f(x)|\leq M_{x}\|f\|_{\mathcal{H}}, \quad \forall f\in \mathcal{H}.
\end{align}
By Riesz representation theory, for any $x$, there is an element $K(\cdot,x) \in \mathcal{H}$ such that
\begin{align}
f(x)=\langle f, K(\cdot,x) \rangle_{\mathcal{H}}.
\end{align}
The function $K:\mathcal{X}\times\mathcal{X} \rightarrow \mathbb{R}$ such that
\begin{align}
K(x,y) = \langle K(\cdot,x), K(\cdot,y)\rangle_{\mathcal{H}},
\end{align}
is referred to as the reproducing kernel associated with $\mathcal{H}$. It is clear that $K$ is a positive semi-definite kernel on $\mathcal{X}$.
\end{definition}
\begin{lemma}
Suppose that $\{e_{j},j \geq 1\}$ is an orthonormal basis of $\mathcal{H}$. Then
\begin{align}
K(x,y)=\sum_{j=1}^{\infty}e_{j}(x)e_{j}(y)
\end{align}
where the sum on RHS converges in $\mathcal{H}$.
\proof
Since $\mathcal{H}$ is an RKHS, the Plancherel theorem shows that
\begin{align}
K(x,y) = \sum_{j=1}^{\infty}\langle K(\cdot,x),e_{j}\rangle e_{j}(y)
\end{align}
where the sum on the RHS converges in $\mathcal{H}$.
\qed
\end{lemma}
Suppose that there is a topological structure and a Borel measure (or its completion) $\mu_{\mathcal{X}}$ on $\mathcal{X}$ with $\operatorname{supp}(\mu_{\mathcal{X}})=\mathcal{X}$ such that $\mathcal{X}$ is compact and $K$ is continuous. One then can easily verify that the natural embedding inclusion operator $I_K:\mathcal{H} \to L^2(\mathcal{X},\mu_{\mathcal{X}})$ is a compact operator and the adjoint operator $I_K^*: L^2(\mathcal{X},\mu_{\mathcal{X}}) \to \mathcal{H}$ of $I_{K}$ is given by:
\begin{align*}
I_K^* f(x) =\int_{\mathcal{X}} K(x,x')f(x') \mathrm{d} \mu_{\mathcal{X}}(x').
\end{align*}
Thus we can define an integral operator
\begin{align}
T_K = I_K \circ I_K^*: L^2(\mathcal{X},\mu_{\mathcal{X}}) \to L^2(\mathcal{X},\mu_{\mathcal{X}})
\end{align}
which is a positive semi-definite, self-adjoint, compact operator. The spectral theorem of the positive semi-definite, self-adjoint, compact operator shows that
there exists a set of non-negative numbers $\lambda_{1} \geq \lambda_{2} \geq \cdots$ and an orthonormal basis $\{\phi_{j},j \geq 1\}$ of $L^{2} (\mathcal{X},\mu_{\mathcal{X}})$ such that
\begin{align}
T_{K}f=\sum_{j=1}^{\infty}\lambda_{j}\left<f,\phi_{j}\right>_{L^{2}}\phi_{j},\quad \forall f\in L^{2}(\mathcal{X},\mu_{\mathcal{X}}),
\end{align}
where the sum on the RHS converges in $L^{2}(\mathcal{X},\mu_{\mathcal{X}})$.
In addition, if the operator $I_{K}$ is injective, then each $\phi_{j}$ with $\lambda_{j}>0$ admits a continuous representative $\lambda_{j}^{-1}I_{K}^{*}\phi_{j}\in \mathcal{H}$, which we still denote by $\phi_{j}$, and $\{\sqrt{\lambda_{j}}\phi_{j},j \geq 1\}$ is an orthonormal basis of $\mathcal{H}$. Thus, we have
\begin{align}\label{Mercer's decomposition}
K(x,x')=\sum_{j=1}^{\infty}\lambda_{j}\phi_{j}(x)\phi_{j}(x')
\end{align}
where the sum on the RHS converges in $\mathcal{H}$. Note that for any $f,g\in \mathcal{H}$ and for any $x\in \mathcal{X}$, we have
\begin{align}
|f(x)-g(x)|=|\left<f-g,K(\cdot,x)\right>|\leq M_{x}\|f-g\|_{\mathcal{H}}.
\end{align}
Thus the equation \eqref{Mercer's decomposition} also holds pointwise.
This is the celebrated Mercer decomposition theorem.
The numbers $\{ \lambda_{j},j \geq 1\}$ and the functions $\{\phi_{j},j \geq 1\} \subseteq \mathcal{H}$ are referred to as the eigenvalues and the eigenfunctions associated with the kernel $K$, respectively. With these eigenvalues and eigenfunctions, $\mathcal{H}$ can be formulated as
\begin{equation}
\mathcal{H} = \left\{\sum_{j=1}^{\infty} c_j \phi_j ~\middle|~ \sum_{j=1}^{\infty}c_j^2/\lambda_j <\infty \right\}.
\end{equation}
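As a concrete illustration of the constructions above (a standard textbook example, not specific to the NTK), consider the Brownian-motion kernel $K(x,y)=\min(x,y)$ on $[0,1]$ with the Lebesgue measure; its Mercer decomposition can be computed in closed form by the same differentiation trick used for $G_{\alpha}$ in the appendix.

```latex
% Eigenproblem for K(x,y) = min(x,y) on [0,1]:
%   lambda * phi(x) = \int_0^1 min(x,y) phi(y) dy
%                   = \int_0^x y phi(y) dy + x \int_x^1 phi(y) dy.
% Differentiating twice gives  lambda * phi''(x) = -phi(x)
% with boundary conditions phi(0) = 0 and phi'(1) = 0, hence
\phi_{j}(x)=\sqrt{2}\,\sin\!\Big(\big(j-\tfrac{1}{2}\big)\pi x\Big),
\qquad
\lambda_{j}=\frac{1}{\big(j-\tfrac{1}{2}\big)^{2}\pi^{2}},\qquad j\geq 1,
% so that Mercer's decomposition reads
\min(x,y)=\sum_{j=1}^{\infty}\lambda_{j}\,\phi_{j}(x)\,\phi_{j}(y),
% and the induced RKHS norm is
\|f\|_{\mathcal{H}}^{2}=\sum_{j=1}^{\infty}\frac{c_{j}^{2}}{\lambda_{j}}
\quad\text{for } f=\sum_{j=1}^{\infty}c_{j}\phi_{j}.
```

Here $\mathcal{H}$ consists of the absolutely continuous functions with $f(0)=0$ and $f'\in L^{2}[0,1]$, matching the characterization of $\mathcal{H}$ given above.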
\section{Proof of Section \ref{sec:ntk}}\label{app:ntk properties}
\begin{lemma}\label{lem:strict:positive}
Let $K$ be an inner product kernel on $\mathbb{S}^{d}$, i.e., $K(\boldsymbol{x},\boldsymbol{x}')=f(\langle \boldsymbol{x},\boldsymbol{x}'\rangle)$ for some function
$f(t):[-1,1]\to \mathbb{R}$ such that $f(t)=\sum^{\infty}_{k=0}a_{k}t^{k}$ where $a_{k}\geq 0$ for any $k\geq 0$. If there are infinitely many $k$ such that $a_{k}>0$, then $K$ is positive definite on $\mathbb{S}^{d}_{+}:=\{\boldsymbol{x}=(x_{1},\ldots,x_{d+1})\in \mathbb{S}^{d}\mid x_{d+1}>0\}$.
\end{lemma}
\begin{proof}
For any $n$ distinct points $\boldsymbol{x}_{1}$, $\ldots$, $\boldsymbol{x}_{n}\in \mathbb{S}^{d}_{+}$, the Gram matrix $(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}$ has an explicit formula:
\[
(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}=\sum_{k\geq 0}a_{k}M_{k},
\]
where $M_{k}=\left(\langle \boldsymbol{x}_{i},\boldsymbol{x}_{j}\rangle^{k}\right)_{1\leq i,j\leq n}$.
It is obvious that each $M_{k}$ is positive semi-definite. The diagonal elements of $M_{k}$ equal $1$, while $|\langle \boldsymbol{x}_{i},\boldsymbol{x}_{j}\rangle|<1$ for $\boldsymbol{x}_{i}\neq \boldsymbol{x}_{j}\in \mathbb{S}^{d}_{+}$, so by the Gershgorin circle theorem, $M_{k}$ is positive definite for sufficiently large $k$. Since there are infinitely many positive $a_{k}$'s, we conclude that $(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}$ is positive definite.
\end{proof}
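Lemma \ref{lem:strict:positive} is easy to sanity-check numerically. The sketch below is our own illustration: it takes $f(t)=e^{t}$, whose Taylor coefficients $a_k=1/k!$ are all positive, samples random points on the upper hemisphere $\mathbb{S}^{d}_{+}$, and verifies that the resulting Gram matrix is positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 10
# Sample n points on the upper hemisphere S^d_+ (last coordinate > 0).
Z = rng.standard_normal((n, d + 1))
Z[:, -1] = np.abs(Z[:, -1]) + 1e-3          # force the last coordinate positive
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

# Inner-product kernel f(t) = exp(t): a_k = 1/k! > 0 for every k,
# so the lemma predicts a positive definite Gram matrix.
gram = np.exp(Z @ Z.T)
eig_min = np.linalg.eigvalsh(gram).min()
print(eig_min)   # strictly positive
```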
\begin{proof}[Proof of Proposition \ref{PD}]
Let $\boldsymbol{z}^{\mathsf{T}}=(\boldsymbol{x}^{\mathsf{T}},1)$. By the definition of the NTK, we have
\begin{align*}
K(\boldsymbol{x},\boldsymbol{x}') &= (1+\left<\boldsymbol{x},\boldsymbol{x}'\right>) \kappa_0(\boldsymbol{z},\boldsymbol{z}') + \| \boldsymbol{z}\|_2 \| \boldsymbol{z}' \|_2 \kappa_1(\boldsymbol{z},\boldsymbol{z}')+ 1\\
&=\kappa_{0}(\boldsymbol{z},\boldsymbol{z}') + \left<\boldsymbol{x},\boldsymbol{x}'\right> \kappa_0(\boldsymbol{z},\boldsymbol{z}') + \| \boldsymbol{z}\|_2 \| \boldsymbol{z}' \|_2 \kappa_1(\boldsymbol{z},\boldsymbol{z}')+ 1
\end{align*}
where $\kappa_{n}(\boldsymbol{z},\boldsymbol{z}^\prime):=2\mathbf{E}_{\omega\sim \mathcal{N}(0,\boldsymbol{I})}[\sigma'(\langle\omega, \boldsymbol{z}\rangle)\sigma'(\langle\omega, \boldsymbol{z}^\prime\rangle)(\langle\omega, \boldsymbol{z}\rangle)^{n}(\langle\omega, \boldsymbol{z}^\prime\rangle)^{n}]$ is the arc-cosine kernels of degree $n$ \cite{cho2009kernel} and $\sigma(x):=\max\{x,0\}$.
The arc-cosine kernels $\kappa_0$ and $\kappa_{1}$ of degree $0$ and $1$ have the explicit form (see e.g., \cite{cho2009kernel}):
\begin{align*}
\kappa_0(\boldsymbol{z},\boldsymbol{z}') & = \frac{1}{\pi}(\pi-\psi(\boldsymbol{z},\boldsymbol{z}'))\\
\kappa_1(\boldsymbol{z},\boldsymbol{z}') &= \frac{1}{\pi}\left ( \langle \frac{\boldsymbol{z}}{\lVert \boldsymbol{z} \rVert_2},\frac{\boldsymbol{z}'}{\lVert \boldsymbol{z}' \rVert_2} \rangle (\pi-\psi(\boldsymbol{z},\boldsymbol{z}')) + \sin (\psi(\boldsymbol{z},\boldsymbol{z}')) \right),
\end{align*}
where $\psi(\boldsymbol{z},\boldsymbol{z}') = \arccos \left(\langle \frac{\boldsymbol{z}}{\lVert \boldsymbol{z} \rVert_2},\frac{\boldsymbol{z}'}{\lVert \boldsymbol{z}' \rVert_2} \rangle \right)$. We can see that $\kappa_0(\boldsymbol{z},\boldsymbol{z}')$ and $\kappa_1(\boldsymbol{z},\boldsymbol{z}')$ can be considered inner product kernels on $\mathbb{S}^{d}_{+}$, i.e., $\kappa_0(\boldsymbol{z},\boldsymbol{z}')=f_0(\langle\hat{\boldsymbol{z}},\hat{\boldsymbol{z}}'\rangle)$ and $\kappa_1(\boldsymbol{z},\boldsymbol{z}')=f_1(\langle\hat{\boldsymbol{z}},\hat{\boldsymbol{z}}'\rangle)$ for some functions $f_0$ and $f_1$ satisfying the conditions of Lemma \ref{lem:strict:positive} and $\hat{\boldsymbol{z}}=\frac{\boldsymbol{z}}{\lVert \boldsymbol{z} \rVert_2}$ and $\hat{\boldsymbol{z}'}=\frac{\boldsymbol{z}'}{\lVert \boldsymbol{z}' \rVert_2}$ defined on $\mathbb{S}^{d}_{+}$. By Lemma \ref{lem:strict:positive}, $\kappa_{0}(\boldsymbol{z},\boldsymbol{z}')$ and $\kappa_{1}(\boldsymbol{z},\boldsymbol{z}')$ are positive definite, meaning that $K(\boldsymbol{x},\boldsymbol{x}')$ is also positive definite.
\end{proof}
In the following section, we consider the spectral properties of the NTK over one-dimensional data, assuming that $\mu_{\mathcal{X}}$ is the uniform distribution on $[0,1]$. Through Equation \eqref{eq: NTK_formular} with $d=1$, the NTK $K(x,x^{\prime})$ can be written as follows:
\begin{align}\label{NTK:d=1:explicit}
K(x,x^{\prime}) = \frac{2}{\pi}(\pi-\psi(x,x^{\prime}))(1+xx^{\prime}) + \frac{1}{\pi}|x-x^{\prime}| +1.
\end{align}
where $\psi(x,x^{\prime}) = \arccos \frac{1+xx^{\prime}}{\sqrt{(1+x^2)(1+(x^{\prime})^2)}}$. Define $\Pi_0$ and $\Pi_{1}$ as
\begin{align}
\Pi_0(x,x^{\prime}) &= \frac{1}{\pi}(\pi-\psi(x,x^{\prime}))\\
\Pi_1(x,x^{\prime}) &= \frac{1}{\pi}\left ( (1+xx^{\prime})(\pi-\psi(x,x^{\prime})) + |x-x^{\prime}| \right).
\end{align}
The following lemma shows the positive definiteness of $\Pi_0$ and $\Pi_{1}$:
\begin{lemma}\label{lem: Pi_positive_definite}
$\Pi_0(x,x')$ and $\Pi_{1}(x,x')$ are positive definite on $[0,1]$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: Pi_positive_definite}]
Suppose that $\boldsymbol{X}=\{x_{1},\dots,x_{n}\} \subseteq [0,1]$ and $\boldsymbol{Z}=\{\boldsymbol{z}_{1},\cdots,\boldsymbol{z}_{n}\}$ where $\boldsymbol{z}_i=(x_i,1)$. Denote $D_{z} = \operatorname{diag} \{\|\boldsymbol{z}_i\|_2 \}_{i\in[n]}$ where $\|\boldsymbol{z}_i\|_2\geq 1$. $\Pi_0(\boldsymbol{X},\boldsymbol{X}) = \kappa_0(\boldsymbol{Z},\boldsymbol{Z})$ and $\Pi_1(\boldsymbol{X},\boldsymbol{X}) = D_{z}\kappa_1(\boldsymbol{Z},\boldsymbol{Z}) D_{z}$. Since $\kappa_0(\boldsymbol{Z},\boldsymbol{Z})$, $\kappa_1(\boldsymbol{Z},\boldsymbol{Z})$ are positive definite and $D_{z}$ is invertible, $\Pi_0(\boldsymbol{X},\boldsymbol{X})$ and $\Pi_1(\boldsymbol{X},\boldsymbol{X})$ are positive definite.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:spectral:d=1:L=1} $i)$]
Suppose that $\boldsymbol{X}=\{x_{1},...,x_{n}\}\subseteq [0,\pi]$. Let $G_{\alpha}(x,x')=\alpha-\frac{|x-x'|}{\pi}$ where $\alpha\geq 1$. Since $\boldsymbol{X}$ is one-dimensional data, we have the following lemmas.
\begin{lemma}\label{lem:min:eigenvalue} Let $d_{\min}=\min\{|x_{i}-x_{j}|\}$. We then have
\begin{align}
\frac{d_{\min}}{2 \pi } \leq \lambda_{\min}(G_{\alpha}(\boldsymbol{X},\boldsymbol{X}))\leq \frac{2d_{\min}}{ \pi }
\end{align}
\end{lemma}
\begin{lemma}\label{lem:G_leq_K_leq_7G}
Suppose that $A$ and $B$ are two symmetric matrices. We use that notation $A \geq B$ if $A-B$ is a positive semi-definite matrix. Then we have
\begin{equation}
G_{1}(\boldsymbol{X},\boldsymbol{X}) \leq K(\boldsymbol{X},\boldsymbol{X})\leq 7G_{9/7}(\boldsymbol{X},\boldsymbol{X}).
\end{equation}
\end{lemma}
It is clear that Lemma \ref{lem:G_leq_K_leq_7G} implies that
\begin{align}
\lambda_{\min}(G_{1}(\boldsymbol{X},\boldsymbol{X})) \leq \lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X})) \leq 7 \lambda_{\min} (G_{9/7}(\boldsymbol{X},\boldsymbol{X}))
\end{align}
and Lemma \ref{lem:min:eigenvalue} implies Theorem \ref{thm:spectral:d=1:L=1} $i)$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:min:eigenvalue}]
Note that $G_{\alpha}^{-1}(\boldsymbol{X},\boldsymbol{X}) =$
\begin{equation}
\resizebox{0.9\hsize}{!}{$
\frac{\pi}{2} \begin{pmatrix}
\frac{1}{x_2-x_1} + \frac{1}{2\alpha\pi-x_n+x_1} & -\frac{1}{x_2-x_1}& 0 & \dots & 0 & \frac{1}{2\alpha\pi-x_n+x_1} \\
-\frac{1}{x_2-x_1} & \frac{1}{x_2-x_1} + \frac{1}{x_3-x_2} & -\frac{1}{x_3-x_2} & \dots & 0 & 0 \\
0 & \ddots & \ddots & \ddots& & \vdots \\
\vdots & & & & & \\
0 & & & &\frac{1}{x_{n-1}-x_{n-2}} + \frac{1}{x_{n}-x_{n-1}} & -\frac{1}{x_{n}-x_{n-1}} \\
\frac{1}{2\alpha\pi-x_n+x_1} & 0 & \dots & 0 &-\frac{1}{x_{n}-x_{n-1}} &\frac{1}{x_{n}-x_{n-1}} + \frac{1}{2\alpha\pi-x_n+x_1}
\end{pmatrix}.
$}
\end{equation}
By Gershgorin circle theorem, every eigenvalue of $G_{\alpha}^{-1} = G_{\alpha}^{-1}(\boldsymbol{X},\boldsymbol{X})$ lies in one of the Gershgorin discs $D_i = \left\{\lambda~\big\vert~|\lambda-(G_{\alpha}^{-1})_{i,i}| \leq \sum_{j\neq i} |(G_{\alpha}^{-1})_{i,j}|\right\}$. In particular, we have
\begin{equation}
\lambda_{\max}(G_{\alpha}^{-1}) \leq \max_{i\in [n]} \sup D_i \leq \max_{i\in [n]} \left\{\sum_{j\in [n]} |(G_{\alpha}^{-1})_{i,j}|\right\}\leq \frac{2\pi}{d_{\min}},
\end{equation}
which means $\lambda_{\min}(G_{\alpha})\geq \frac{d_{\min}}{2\pi}$.
On the other hand, assume that $x_{k+1}-x_k=d_{\min}$ for some $k$. Since $\lambda_{\max}(G_{\alpha}^{-1})\geq u^{\mathsf{T}} G_{\alpha}^{-1}u$ for any $u$ with $\|u\|_{2}=1$, we may take $u$ to be the vector with $1$ in the $k$-th entry and zeros elsewhere. Thus, we have $\lambda_{\max}(G_{\alpha}^{-1})\geq \frac{\pi}{2d_{\min}}$, which means $\lambda_{\min}(G_{\alpha})\leq \frac{2d_{\min}}{\pi}$. To sum up, we have
\begin{equation}\label{eq:min_eigen_G}
\frac{d_{\min}}{2\pi} \leq \lambda_{\min}(G_{\alpha})\leq \frac{2d_{\min}}{\pi}.
\end{equation}
\end{proof}
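Lemma \ref{lem:min:eigenvalue} can also be checked numerically. The sketch below is our own illustration: it verifies $\frac{d_{\min}}{2\pi}\leq \lambda_{\min}(G_{\alpha}(\boldsymbol{X},\boldsymbol{X}))\leq \frac{2d_{\min}}{\pi}$ for both equally spaced and random points in $[0,\pi]$.

```python
import numpy as np

def g_alpha(X, alpha):
    """G_alpha(x, x') = alpha - |x - x'| / pi."""
    return alpha - np.abs(np.subtract.outer(X, X)) / np.pi

def check_bounds(X, alpha):
    X = np.sort(X)
    d_min = np.diff(X).min()                          # minimal gap between points
    lam_min = np.linalg.eigvalsh(g_alpha(X, alpha)).min()
    lower, upper = d_min / (2.0 * np.pi), 2.0 * d_min / np.pi
    return lower <= lam_min <= upper

rng = np.random.default_rng(0)
X_grid = np.linspace(0.0, np.pi, 30)             # equally spaced points
X_rand = rng.uniform(0.0, np.pi, size=30)        # random points
for alpha in (1.0, 9.0 / 7.0):
    print(check_bounds(X_grid, alpha), check_bounds(X_rand, alpha))
```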
\begin{corollary}\label{cor:distance:positive}
The kernel function $G_{\alpha}$ is positive definite on $[0,\pi]$.
\end{corollary}
\begin{proof}[Proof of Lemma \ref{lem:G_leq_K_leq_7G}] We can easily verify the following equation from Equation \eqref{NTK:d=1:explicit} and the definition of $G_{\alpha}$:
\begin{equation}\label{eqn:K:decompostion}
\begin{aligned}
K(x,x^{\prime})&= G_{1}(x,x^{\prime}) + 2\Pi_1(x,x^{\prime})\\
&=2\Pi_0(x,x^{\prime})(1+xx') - G_{1}(x,x') +2.
\end{aligned}
\end{equation}
It is clear that $K(\boldsymbol{X},\boldsymbol{X})\geq G_{1}(\boldsymbol{X},\boldsymbol{X})$ from the first line in \eqref{eqn:K:decompostion}.
On the other hand, let $z(x)=2x-\psi(0,x)$. We can easily verify that
\begin{align*}
z(0)=0; \quad z'(x) = 2-\frac{\partial \psi(0,x)}{\partial x} = 2-\frac{1}{1+x^2}>0,\ \forall x\in [0,1]; \quad z(1)=2-\frac{\pi}{4} \leq \pi.
\end{align*}
Thus $z(x) \in[0,\pi]$. Let $\boldsymbol{Z}=\{z_1,...,z_n\}$, where $z_i=2x_i-\psi(0,x_i)$. Since $z(x)$ is increasing and $|\psi(0,x_i)-\psi(0,x_j)|=\psi(x_i,x_j)$, each entry of $G_{1}(\boldsymbol{Z},\boldsymbol{Z})$ is given by
\begin{equation*}
G_{1}(z_i,z_j)=1-\frac{|2x_i-2x_j-\psi(0,x_i)+\psi(0,x_j)|}{\pi}\\
=2\left(1-\frac{|x_i-x_j|}{\pi}\right)-\left(1-\frac{\psi(x_i,x_j)}{\pi}\right).
\end{equation*}
Thus $G_{1}(\boldsymbol{Z},\boldsymbol{Z})=2G_{1}(\boldsymbol{X},\boldsymbol{X})-\Pi_{0}(\boldsymbol{X},\boldsymbol{X})$.
Since $G_{1}(\boldsymbol{Z},\boldsymbol{Z})$ is positive definite by Corollary \ref{cor:distance:positive}, we know $\Pi_0(\boldsymbol{X},\boldsymbol{X}) < 2G_{1}(\boldsymbol{X},\boldsymbol{X})$.
\vspace{3mm}
Let $D_{X}=\operatorname{diag}\{ (x_i)_{i \in [n]}\}$ and $1_{n}=[1,\dots,1]^{\mathsf{T}}$.
Note that $D_{X}\Pi_0(\boldsymbol{X},\boldsymbol{X}) D_{X} \leq \Pi_0(\boldsymbol{X},\boldsymbol{X})$ since $x_{i}\in[0,1]$.
Thus, from the second line in \eqref{eqn:K:decompostion} we have
\begin{align*}
K(\boldsymbol{X},\boldsymbol{X}) &= 2\Pi_0(\boldsymbol{X},\boldsymbol{X}) + 2D_{X} \Pi_0(\boldsymbol{X},\boldsymbol{X}) D_{X} - G_{1}(\boldsymbol{X},\boldsymbol{X}) + 2 1_{n} 1_{n}^{\mathsf{T}}\\
&\leq 4\Pi_0(\boldsymbol{X},\boldsymbol{X}) - G_{1}(\boldsymbol{X},\boldsymbol{X}) + 21_{n} 1_{n}^{\mathsf{T}} \leq 7 G_{1}(\boldsymbol{X},\boldsymbol{X}) + 2 1_{n} 1_{n}^{\mathsf{T}} = 7G_{\frac{9}{7}}(\boldsymbol{X},\boldsymbol{X}).
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:spectral:d=1:L=1} $ii)$]
Theorem \ref{thm:spectral:d=1:L=1} $ii)$ is a direct corollary of the following lemmas.
\begin{lemma}\label{lemma:K_bound_by_G}
Let $\{\lambda^{(\alpha)}_{j}, j \geq 1\}$ and $\{\lambda^K_{j}, j \geq 1\}$ be the eigenvalues associated to the kernel $G_{\alpha}$ and $K$ on $[0,1]$, respectively. Then we have
\begin{equation}\label{equation:G_K_G_alpha}
\lambda^{(1)}_{j} \leq \lambda^K_{j} \leq 7\lambda^{(9/7)}_{j} ,\quad j \geq 1.
\end{equation}
\end{lemma}
\begin{lemma}\label{lemma:G_decay_rate} Suppose that $\alpha \in \{1,\frac{9}{7}\}$. There exist constants $c$ and $C$ such that
\begin{equation}
\frac{c}{j^{2}}\leq \lambda^{(\alpha)}_j \leq \frac{C}{j^{2}}, j \geq 1.
\end{equation}
\end{lemma}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:K_bound_by_G}]
It is a direct corollary of the following lemma.
\begin{lemma}[Corollary of Theorem 3.1 of \cite{koltchinskii2000random}]\label{lemma:sample_eigen_to_total_eigen}
Let $A$ be a kernel function on $\mathcal{X}\times \mathcal{X}$ with $\int_{\mathcal{X}} \int_{\mathcal{X}} A(x,x')^2 \mathrm{d} x \mathrm{d} x^{\prime}< \infty$. For $\boldsymbol{X}=\{x_1,\dots, x_n\}\subseteq \mathcal{X}$, let $\hat{\lambda}_j$ be the $j$-th largest eigenvalue of $\frac{1}{n}A(\boldsymbol{X},\boldsymbol{X})$ and $\lambda_j$ be the $j$-th largest eigenvalue of the integral operator associated with the kernel $A$. Then, for any fixed $j$, we have
\begin{equation}
\vert \hat{\lambda}_j -\lambda_j\vert \to 0, \mbox{as $n\to \infty$}.
\end{equation}
\end{lemma}
In fact, by Lemma \ref{lem:G_leq_K_leq_7G}, we have
$
G_{1}(\boldsymbol{X},\boldsymbol{X}) \leq K(\boldsymbol{X},\boldsymbol{X})\leq 7G_{\frac{9}{7}}(\boldsymbol{X},\boldsymbol{X}).
$
Thus, we have
$ \hat{\lambda}^{(1)}_{j} \leq \hat{\lambda}^K_{j} \leq 7\hat{\lambda}^{(9/7)}_{j}, j=1,2,...,n.$
Then Lemma \ref{lemma:sample_eigen_to_total_eigen} provides us that for any fixed $j \geq 1$,
\begin{equation}
\lambda^{(1)}_{j} \leq \lambda^K_{j} \leq 7\lambda^{(9/7)}_{j}.
\end{equation}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:G_decay_rate}]
Since the $G_{\alpha}$ ($\alpha=1, 9/7$) is a positive definite kernel on $[0,1]$, we know that $\lambda_{j}\geq 0$ for any $j=1,2,\cdots$.
Let $\lambda \neq 0$ be an eigenvalue of $G_{\alpha}$, i.e., there is an eigenfunction $f(x)$ such that
\begin{equation}\label{equation:origin_function}
\begin{split}
\lambda f(x) &= \left(T_{G_{\alpha}}f\right)(x)= \int_{0}^{1} \left(\alpha-\frac{|x-x'|}{\pi}\right) f(x') \mathrm{d} x'\\
&=\int_{0}^{1}\alpha f(x')\mathrm{d} x'-\frac{1}{\pi}\left(\int_{0}^{x}(x-x')f(x')\mathrm{d} x'+\int_{x}^{1}(x'-x)f(x')\mathrm{d} x'\right).
\end{split}
\end{equation}
After taking the first and second derivatives on both sides with respect to $x$, we get
\begin{equation}
\lambda f'(x)=-\frac{1}{\pi}\left(\int_{0}^{x}f(x')\mathrm{d} x' - \int_{x}^{1}f(x')\mathrm{d} x'\right),
\end{equation}
and
\begin{equation}\label{equation:second_derivative}
\lambda f''(x)=-\frac{2}{\pi} f(x).
\end{equation}
It is well known that the solutions of \eqref{equation:second_derivative} are of the following forms:
\begin{equation}\label{equation:eigenfunction}
f(x)=A\cos (\omega x)+B\sin(\omega x).
\end{equation}
Inserting equation \eqref{equation:eigenfunction} back to equation \eqref{equation:second_derivative}, we know that
\begin{equation}\label{equation:oemga_lambda}
\omega^2=\frac{2}{\pi\lambda}>0.
\end{equation}
Inserting Equation \eqref{equation:eigenfunction} and \eqref{equation:oemga_lambda} in Equation \eqref{equation:origin_function}, we have
\begin{equation}
\begin{split}
\frac{2}{\pi \omega^2} f(x) &= -\frac{1}{\pi}x\omega(B+B\cos(\omega)-A\sin(\omega)) +\frac{2}{\pi \omega^2} f(x)\\
&+ \frac{1}{\pi}A(\alpha\pi\omega\sin(\omega)-1-\omega\sin(\omega) - \cos(\omega)) \\
&+ \frac{1}{\pi}B(\alpha\pi\omega(1-\cos(\omega)) + \omega\cos(\omega)-\sin(\omega)),
\end{split}
\end{equation}
which holds for all $x$ if and only if
\begin{equation*}
\begin{cases}
-A\sin(\omega)+B(1+\cos(\omega))=0,\\
A(\alpha\pi\omega\sin(\omega)-1-\omega\sin(\omega) - \cos(\omega))+B(\alpha\pi\omega(1-\cos(\omega)) + \omega\cos(\omega)-\sin(\omega))=0.\\
\end{cases}
\end{equation*}
A necessary and sufficient condition for this system to be degenerate (i.e., to have a nontrivial solution $(A,B)$) is
\begin{equation}
\det \begin{pmatrix}
-\sin(\omega) & 1+\cos(\omega)\\
\alpha\pi\omega\sin(\omega)-1-\omega\sin(\omega) - \cos(\omega)& \alpha\pi\omega(1-\cos(\omega)) + \omega\cos(\omega)-\sin(\omega)
\end{pmatrix}=0,
\end{equation}
i.e.,
\begin{equation}\label{equation:omega}
2+2\cos(\omega)+ \omega \sin(\omega)(1-2\alpha\pi)=0.
\end{equation}
Denote the left-hand side of Equation \eqref{equation:omega} by $h(\omega)$, i.e., $h(\omega)=2+2\cos(\omega)+\omega \sin(\omega)(1-2\alpha\pi)$. Since $h(\omega)=h(-\omega)$, it suffices to prove the assertion \eqref{eqn:solution:omega} for $\omega > 0$. By Lemma \ref{lemma:h_equal_zero} and Equation \eqref{equation:oemga_lambda}, we have
\begin{equation}
\begin{cases}
\lambda_j \in [\frac{8}{\pi^3},\frac{72}{\pi^3}], & \mbox{$j=1$};\\
\lambda_j= \frac{2}{\pi^3}(j-1)^{-2}, & \mbox{$j$ is even}; \\
\lambda_j \in [\frac{2}{\pi^3}(j-\frac{1}{2})^{-2},\frac{2}{\pi^3}(j-1)^{-2}], & \mbox{$j>1$ and $j$ is odd}.
\end{cases}
\end{equation}
To sum up,
\begin{equation}
\frac{c}{j^{2}}\leq \lambda_j \leq \frac{C}{j^{2}}, \quad j \geq 1,
\end{equation}
for some absolute constants $c$ and $C$.
\end{proof}
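As a numerical sanity check of the decay rate $\lambda_j \asymp j^{-2}$ (this is our own illustration and not part of the proof; the grid size and tolerances below are chosen by us), one can discretize the integral operator $T_{G_{\alpha}}$ on a uniform grid and inspect its spectrum:

```python
import numpy as np

def G(alpha, x, xp):
    """Kernel G_alpha(x, x') = alpha - |x - x'| / pi on [0, 1]."""
    return alpha - np.abs(x[:, None] - xp[None, :]) / np.pi

# Nystrom discretization: eigenvalues of (1/n) * (G(x_i, x_j))_{ij}
# approximate the eigenvalues of the integral operator T_{G_alpha}.
n = 2000
grid = (np.arange(n) + 0.5) / n                         # midpoint grid on [0, 1]
lam = np.linalg.eigvalsh(G(1.0, grid, grid) / n)[::-1]  # descending order
```

The computed spectrum can then be compared with the predicted values: $\lambda_1\in[8/\pi^3,72/\pi^3]$, $\lambda_2=2/\pi^3$, and $\lambda_j \asymp j^{-2}$ in general.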
\begin{proof}[Proof of Lemma \ref{lemma:sample_eigen_to_total_eigen}]
By Theorem 3.1 of \cite{koltchinskii2000random}, we have
\begin{equation}
\sqrt{\sum_{j=1}^{\infty} (\hat{\lambda}_j-\lambda_j)^2}\to 0, \mbox{as $n\to \infty$},
\end{equation}
where we set $\hat{\lambda}_{j} =0$ for $j> n$. This implies that for any fixed $j \geq 1$,
\begin{equation}
\vert \hat{\lambda}_j - \lambda_j \vert \leq \sqrt{\sum_{l=1}^{\infty} (\hat{\lambda}_l-\lambda_l)^2} \to 0, \mbox{as $n\to \infty$}.
\end{equation}
\end{proof}
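The convergence of the sample eigenvalues $\hat{\lambda}_j$ to the population eigenvalues $\lambda_j$ is easy to observe numerically. The following sketch is our own illustration (the sample size, seed, and tolerances are chosen for demonstration); the population spectrum is approximated by a fine deterministic discretization:

```python
import numpy as np

rng = np.random.default_rng(0)

def gram(x):
    """(1/n) * Gram matrix of G_1(x, x') = 1 - |x - x'| / pi."""
    n = len(x)
    return (1.0 - np.abs(x[:, None] - x[None, :]) / np.pi) / n

# reference (population) eigenvalues from a fine uniform grid
grid = (np.arange(2000) + 0.5) / 2000
lam = np.linalg.eigvalsh(gram(grid))[::-1]

# sample eigenvalues from n i.i.d. uniform samples
x = rng.uniform(0.0, 1.0, size=2000)
lam_hat = np.linalg.eigvalsh(gram(x))[::-1]

# Hilbert-Schmidt-type gap over the leading part of the spectrum
gap = np.sqrt(np.sum((lam_hat[:100] - lam[:100]) ** 2))
```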
\begin{lemma}\label{lemma:h_equal_zero}
Let $h(\omega)=2+2\cos(\omega) + \omega \sin(\omega)(1-2\alpha \pi)$, where $\alpha \in \{1, \frac{9}{7}\}$. Then the solutions of
\begin{equation}\label{equation:h_equal_zero}
h(\omega)=0, \quad \omega>0
\end{equation}
are given by
\begin{equation}
\begin{cases}
\omega_{j} \in [\frac{1}{6}\pi, \frac{1}{2}\pi], & \mbox{$j=1$};\\
\omega_{j} = (j-1)\pi, & \mbox{$j$ is even};\\
\omega_{j} \in ((j-1)\pi, (j-\frac{1}{2})\pi), & \mbox{$j>1$ and $j$ is odd}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
When $\omega \in (0,\pi)$, we can easily verify the following facts
\begin{itemize}
\item[ ~~~~~(1).] $h(\frac{\pi}{6})=2+\sqrt{3} - (2\alpha\pi-1)\frac{\pi}{12}>0$;
\item[ ~~~~~(2).] If $\omega \in ( 0, \frac{1}{2}\pi]$, we have
\begin{equation}
h'(\omega) = -2\sin(\omega) + (1-2\alpha \pi)\sin(\omega) + (1-2\alpha \pi)\omega \cos(\omega)<0.
\end{equation}
\item[ ~~~~~(3).] If $\omega \in [\frac{1}{2}\pi,\pi)$, then $$h(\omega)<2+2\cos(\omega) - 2\sin(\omega) = 2+2\sqrt{2}\cos(\omega+\frac{\pi}{4}) \leq 0. $$
\end{itemize}
Since $h(\omega)$ is a continuous function on $(0,\pi)$, the above facts imply that $h(\omega)=0$ has a unique solution in $[\frac{1}{6}\pi,\frac{1}{2}\pi]$, which is denoted by $\omega_{1}$.

When $\omega \geq \pi$, it is clear that for any odd positive integer $j$, $j\pi$ is a solution of Equation \eqref{equation:h_equal_zero} with multiplicity one. We will show that there is exactly one additional solution of Equation \eqref{equation:h_equal_zero} in each interval $(j\pi,(j+2)\pi)$, where $j$ is an odd integer. Thus, the solutions of the equation can be indexed by $\mathbb{N}_{+}$ in the following way
\begin{align}\label{eqn:solution:omega}
\omega_{2k}=(2k-1)\pi \mbox{ and } \omega_{2k+1}\in ((2k-1)\pi, (2k+1)\pi), \quad k=1,2,\cdots.
\end{align}
When $\omega\in ((2k-1)\pi,(2k+1)\pi), k\in \mathbb{N}_{+}$, we can easily verify the following facts.
\begin{itemize}
\item[ ~~~~~(1).] If $\omega \in ((2k-1)\pi,2k\pi)$, then $h(\omega)>\omega \sin(\omega)(1-2\alpha \pi)>0$;
\item[ ~~~~~(2).] If $\omega \in [ 2k\pi, (2k+\frac{1}{2})\pi]$, we have
\begin{equation}
h'(\omega) = -2\sin(\omega) + (1-2\alpha \pi)\sin(\omega) + (1-2\alpha \pi)\omega \cos(\omega) <0.
\end{equation}
\item[ ~~~~~(3).] If $\omega \in ((2k+\frac{1}{2})\pi,(2k+1)\pi)$, then $$h(\omega)<2+2\cos(\omega) - 2\sin(\omega) = 2+2\sqrt{2}\cos(\omega+\frac{\pi}{4}) < 0. $$
\end{itemize}
Since $h(\omega)$ is a continuous function on $((2k-1)\pi,(2k+1)\pi)$, the above facts imply that $h(\omega)=0$ has a unique solution in $[2k\pi, (2k+\frac{1}{2})\pi]$, which is denoted by $\omega_{2k+1}$. Thus, we have
\begin{equation}
\begin{cases}
\omega_{j} \in [\frac{1}{6}\pi, \frac{1}{2}\pi], & \mbox{$j=1$};\\
\omega_{j} = (j-1)\pi, & \mbox{$j$ is even};\\
\omega_{j} \in ((j-1)\pi, (j-\frac{1}{2})\pi), & \mbox{$j>1$ and $j$ is odd}.
\end{cases}
\end{equation}
\end{proof}
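The root localization above is straightforward to confirm numerically. The following sketch (our own check, not part of the proof) verifies the sign pattern of $h$ and locates $\omega_1$ and $\omega_3$ by bisection:

```python
import numpy as np

def h(w, alpha):
    """h(omega) = 2 + 2 cos(omega) + omega sin(omega) (1 - 2 alpha pi)."""
    return 2 + 2 * np.cos(w) + w * np.sin(w) * (1 - 2 * alpha * np.pi)

def bisect(f, a, b, iters=80):
    """Locate a sign change of f on [a, b] by bisection."""
    for _ in range(iters):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

alpha = 1.0
w1 = bisect(lambda w: h(w, alpha), np.pi / 6, np.pi / 2)    # first root
w3 = bisect(lambda w: h(w, alpha), 2 * np.pi, 2.5 * np.pi)  # third root
```

The brackets $[\pi/6,\pi/2]$ and $[2\pi, 2.5\pi]$ are exactly the sign-change intervals established in the proof, and $\omega_2=\pi$ is an exact root.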
\end{appendix}
\subsection{Noise of data}
In Section \ref{intro}, we assume the noise $\epsilon\sim N(0,\sigma^{2})$, an assumption that is rarely satisfied in real cases. Nevertheless, we can still assume that there is a ground-truth function $f_{\star}(\boldsymbol{x})$ behind the data, and define the noise as the gap between the label and the ground-truth function, i.e.,
\begin{equation}
\epsilon = y - f_{\star}(x).
\end{equation}
In binary classification problems, the true label can be defined as
\begin{equation}
y_{\text{true}}=\boldsymbol{1}_{f_{\star}(x)>0.5}.
\end{equation}
The labels $y$ of the data may differ from the true labels $y_{\text{true}}$. Since the ground-truth function is unknown and we actually care about the true label, we define small noise as follows:
\begin{definition}
The noise of the data point $(x,y)$ is small if
\begin{equation}
y=y_{\text{true}}.
\end{equation}
\end{definition}
For example, suppose $f_{\star}(x_1)=0.1$ with label $y_1=0$, and $f_{\star}(x_2)=0.6$ with label $y_2=1$. These two data points have small noise, since the noise does not change the true labels of $x_1$ and $x_2$, even though the square losses of the two points differ. However, if the noise changes the true label, label corruption happens. For example, if $f_{\star}(x_3)=0.1$ but the label is $y_3=1$, then the noise has flipped the true label of $x_3$.
In general, the epoch with the maximum accuracy on the validation set, or the epoch after which the training loss remains unchanged for a long time, is commonly chosen as the stopping time; in this case, early stopping is implicitly used. If the neural network is trained until it reaches 100\% training accuracy, one may believe that the network violates the conditions of early stopping and cannot perform well on the test set. However, even if the noise is small, 100\% training accuracy does not imply a sufficiently small loss. The neural network may still have a large training loss when it reaches 100\% training accuracy, which means the training process is effectively stopped before the interpolation regime.
If the noise is large, a neural network trained to 100\% training accuracy cannot generalize well. From Figures \ref{fig: MLP_diff_noise_mse} and \ref{fig: Alexnet_cifar10_diff_noise_ce}, we find that as the percentage of label corruption increases, the training time needed to reach 100\% training accuracy increases, and the generalization gap between ``100\% training accuracy'' and ``best generalization'' becomes larger. In these cases, explicit early stopping (e.g., cross validation) is still crucial for the generalization of the model.
In summary, we fill in the last piece of the puzzle in reconciling the controversial observation (S) with the bias-variance trade-off doctrine of classical statistical learning theory.
\end{comment}
\subsection{Three stopping rules}
We first emphasize a subtle difference between the classification problem and the regression problem that might be ignored in the reported experiments. To be more concrete,
we have three choices of stopping time in the classification problem:
$i)$ the stopping time $t_{\text{opt}}$, at which the training process stops at the time suggested by our theory;
$ii)$ the stopping time $t_{\text{loss}}$, at which the training process stops when the value of the loss function is near zero;
$iii)$ the stopping time $t_{\text{label}}$, at which the training process stops when the label error rate is near zero.
Most of the reported experiments on the ``benign overfitting phenomenon'' utilize the stopping time $t_{\text{label}}$ and claim that the resulting neural network can overfit the data and generalize well \cite{zhang2016understanding}.
Our theoretical results suggest that the neural network at the stopping time $t_{\text{opt}}$ has the best generalization ability, whereas the neural network at the stopping time $t_{\text{loss}}$ cannot generalize.
Thus, there might be a significant difference between the stopping times $t_{\text{label}}$ and $t_{\text{loss}}$.
This difference can be clearly seen from a toy example consisting of 4 data points $\{(0,0),(\frac{1}{3},1),(\frac{2}{3},0), (1,1) \}$. We fit the data with a two-layer neural network of width $m=1000$ with respect to the square loss (regression) and the cross-entropy loss (classification) separately.
The results are reported in Figure \ref{fig: interpolation vs acc}.
It is clear that for both loss functions, the stopping time $t_{\text{label}}$ is much earlier than $t_{\text{loss}}$.
The fact that $t_{\text{label}}$ may be far earlier than $t_{\text{loss}}$ partially explains why training stopped at time $t_{\text{label}}$ produces a neural network with some generalization ability; if $t_{\text{label}}$ is close to $t_{\text{opt}}$, then training stopped at time $t_{\text{label}}$ produces a neural network with near-optimal generalization ability.
\begin{figure}
\centering
\begin{minipage}[b]{0.8\textwidth}
\includegraphics[width=1\textwidth]{100accuracy_vs_interpolation.png}
\end{minipage}
\begin{minipage}[b]{0.8\textwidth}
\includegraphics[width=1\textwidth]{100accuracy_vs_interpolation_classification.png}
\end{minipage}
\caption{Overfitting vs. 100\% label accuracy for regression and classification problems: The upper figures are from the regression setup and the lower ones from the classification setup. The left figures present the loss and the accuracy on the 4 training data points. The right figures show the fitted function at the epoch with 100\% label accuracy and in the interpolation regime.} \label{fig: interpolation vs acc}
\end{figure}
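The toy example can be reproduced in a few lines. The following is a minimal numpy sketch, not the exact setup behind Figure \ref{fig: interpolation vs acc}: we pick the learning rate, initialization, and thresholds ourselves and use the square loss only. It records the first iteration with 100\% label accuracy ($t_{\text{label}}$) and the first iteration with near-zero loss ($t_{\text{loss}}$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0])   # the 4 toy inputs
y = np.array([0.0, 1.0, 0.0, 1.0])               # their labels

m = 1000                    # network width (NTK-style 1/sqrt(m) scaling)
W = rng.normal(size=m)      # first-layer weights
b = rng.normal(size=m)      # first-layer biases
a = rng.normal(size=m)      # second-layer weights
lr = 0.2

t_label, t_loss = None, None
for t in range(300000):
    pre = np.outer(X, W) + b            # (4, m) pre-activations
    act = np.maximum(pre, 0.0)          # ReLU
    f = act @ a / np.sqrt(m)            # network outputs on the 4 points
    r = f - y
    loss = 0.5 * np.mean(r ** 2)
    if t_label is None and np.all((f > 0.5) == (y > 0.5)):
        t_label = t                     # first step with 100% label accuracy
    if loss < 1e-4:
        t_loss = t                      # first step with near-zero loss
        break
    # full-batch gradient descent on the square loss
    mask = (pre > 0.0) * (r[:, None] * a[None, :]) / (len(X) * np.sqrt(m))
    a -= lr * (act.T @ r) / (len(X) * np.sqrt(m))
    W -= lr * (X[:, None] * mask).sum(axis=0)
    b -= lr * mask.sum(axis=0)
```

Under this setup, 100\% label accuracy is reached well before the loss is anywhere near zero, which is the gap between $t_{\text{label}}$ and $t_{\text{loss}}$ discussed above.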
\subsection{Contributions}
In this paper, we focus on the training process of the over-parameterized two-layer neural network in the so-called ``lazy training'' regime. That is, the width of the neural network is sufficiently large so that the dynamics of the training process can be well approximated by the dynamics of the corresponding NTK.
{\it $\bullet$ Properties of NTK.}
We first show that the NTK is strictly positive definite on $\mathbb{R}^{d}$, which fills a long-standing gap in the literature. We then provide seemingly the sharpest bounds on the minimum eigenvalue of the Gram matrix $(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}$ for one-dimensional data. Since most existing studies of the spectral properties of the NTK focus on kernels defined on a sphere $\mathbb{S}^{d-1}$, we lack basic knowledge about the spectral properties of the NTK on $\mathbb{R}^{d}$. We show the decay rate of the eigenvalues on $[0,1]\subset \mathbb{R}$. Our work sheds light on the possibility of obtaining more explicit knowledge about the spectral properties of the NTK on $\mathbb{R}^{d}$, which would be of great interest to researchers.
{\it $\bullet$ Uniform NTK approximation for over-parameterized neural networks.} Many works have claimed that wide neural networks can be approximated by the NTK. However, they only prove pointwise kernel approximation and function approximation. Here we give a refined analysis showing that over-parameterized two-layer ReLU neural networks can be approximated by the NTK uniformly. Thus the generalization performance of the neural network can be safely characterized through the NTK.
{\it $\bullet$ Generalization performance of neural networks on $\mathbb{R}$.}
With the decay rate of the eigenvalues of the NTK, we prove that over-parameterized neural networks trained by GD achieve the minimax-optimal rate $n^{-2/3}$. Moreover, we show that if one trains an over-parameterized two-layer neural network until it overfits equally-spaced one-dimensional data, the resulting neural network is essentially a linear interpolation. Thus, an overfitted neural network cannot generalize well. Though this claim is simple, it is the first time that we have a concrete understanding of what an overfitted neural network looks like.
{\it $\bullet$ Why overfitted neural networks can generalize: (implicit) early stopping.} Though many experiments have reported that overfitted neural networks can generalize, a subtle difference between 100\% training accuracy on the labels and zero training loss might be ignored. This difference actually leads to the training process being stopped earlier than the time needed to overfit the data. We refer to this fact as implicit early stopping. However, we find that the occurrence of this implicit early stopping depends on the signal strength of the data. We further illustrate through several experiments that the signal strength explains when the implicit early stopping strategy results in a neural network achieving the optimal generalization ability.
\subsection{Related works}
\subsubsection*{NTK properties on $\mathbb{S}^{d-1}$} To the best of our knowledge, most works studying the generalization ability of neural networks either work on $\mathbb{S}^{d-1}$ or assume that $\lambda_{\min}$, the minimum eigenvalue of the Gram matrix $(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}$ associated to the NTK $K$ and the data set $\{\boldsymbol{x}_{i}, i=1,\cdots,n\}$, is positive (e.g., \cite{du2018gradient, hu2021regularization}). The strict positive definiteness of the NTK on $\mathbb{S}^{d-1}$, including the ReLU NTK, is proved by \cite{jacot2018neural}. The decay rate of the eigenvalues of the ReLU NTK on $\mathbb{S}^{d-1}$ can be used to analyze the reproducing kernel Hilbert space of the NTK, and many previous works \cite{bietti2019inductive, bietti2020deep, cao2019towards} have determined the decay rate of the eigenvalues $\lambda_j$ of the NTK on $\mathbb{S}^{d-1}$. The minimum eigenvalue of the $n\times n$ kernel matrix governs the convergence rate of the over-parameterized neural network; \cite{nguyen2021tight} gives a bound on the minimum eigenvalue in terms of the input dimension $d$ instead of the sample size $n$, which is not applicable when $n$ is large compared with $d$. In short, many results on NTK properties have been obtained on the sphere, but the corresponding questions on $\mathbb{R}^d$ remain open.
\subsubsection*{The generalization performance of neural networks trained by GD} Several works study the generalization ability of neural networks trained by GD, building on the spectral properties of the NTK on $\mathbb{S}^{d-1}$ mentioned above. \cite{hu2021regularization} claimed that if the ground-truth function belongs to the RKHS of the NTK, an over-parameterized two-layer neural network trained by $\ell_2$-regularized GD can reach the minimax optimal rate; \cite{suh2022nonparametric} further showed the same generalization property for deep neural networks. For vanilla GD without early stopping, \cite{hu2021regularization} and \cite{suh2022nonparametric} claimed that over-trained neural networks cannot generalize well, i.e., the generalization error is bounded away from zero as the sample size goes to infinity. However, their claims essentially rely on an unproved result (the second statement of Corollary 3 in \cite{raskutti2014early}).
\subsubsection*{Explanations of why over-fitted neural networks can generalize well}
Although some theoretical results claimed that overfitted neural networks cannot generalize well, other papers still try to show that overfitted models/neural networks can generalize well (e.g., \cite{belkin2019does, liang2020just}). In particular, \cite{belkin2019does} exhibited a Nadaraya--Watson estimator that interpolates the data, yet achieves minimax optimal rates of convergence. However, this kind of interpolation is far different from the interpolation that neural networks perform. \cite{liang2020just} showed that kernel ``ridgeless'' regression with non-linear kernels, which perfectly fits the training data, can still generalize well on test data. But such generalization ability requires a high-dimensional assumption: the ratio of the dimension to the sample size must be a constant.
\vspace{3mm}
\subsection{Preliminary}
Consider $n$ i.i.d. samples $\{(\boldsymbol{x}_i,y_i)\}_{i=1}^{n}$ from an unknown distribution supported on $\mathcal{X}\times \mathcal{Y}$, where $\mathcal{X}\subseteq\mathbb{R}^d$ and $\mathcal{Y}\subseteq\mathbb{R}$, and denote by $\mu_{\mathcal{X}}$ the marginal distribution on $\mathcal{X}$.
The samples are from the model
\begin{equation}\label{equation:true_model}
y=f_{\star}(\boldsymbol{x})+\varepsilon, \quad \varepsilon\sim N(0,\sigma^{2}).
\end{equation}
where $f_{\star}$ is an unknown regression function and $\varepsilon$ is a noise variable independent of $\boldsymbol{x}$. Based on these $n$ samples, we are interested in finding a function $\hat{f}_{n}$ such that the risk
\begin{equation}
\mathcal{L}(\hat{f}_{n})=\mathbf{E}_{(\boldsymbol{x},y)}\left[(\hat{f}_{n}(\boldsymbol{x})-y)^2\right]
\end{equation}
is as small as possible. The best one can do is to choose $\hat{f}_{n}$ to be $f_{\star}(\boldsymbol{x})=\mathbf{E}(y\mid \boldsymbol{x})$. From now on, we use the excess risk
\begin{equation}\label{eq:excrisk}
\mathcal{E}(\hat{f}_{n})=\mathcal{L}(\hat{f}_{n})-\mathcal{L}(f_{\star})=\int_{\mathcal{X}}(\hat{f}_{n}(\boldsymbol{x})-f_{\star}(\boldsymbol{x}))^{2} \mathrm{d} \mu_{\mathcal{X}}(\boldsymbol{x})
\end{equation}
to evaluate the generalization performance.
To better estimate the regression function $f_{\star}(\boldsymbol{x})$, we can consider the machinery of reproducing kernel Hilbert spaces (RKHS). Consider a Hilbert space $\mathcal{H}\subset L_{\mu_{\mathcal{X}}}^{2}$ equipped with an inner product $\langle \cdot, \cdot\rangle_{\mathcal{H}}$. If there exists a continuous, symmetric and positive definite function $K:\mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ such that for every $\boldsymbol{x}\in \mathbb{R}^d$ we have $K(\boldsymbol{x},\cdot)\in \mathcal{H}$ and $f(\boldsymbol{x}) = \langle f, K(\boldsymbol{x},\cdot) \rangle_{\mathcal{H}}$ for all $f\in\mathcal{H}$, then $\mathcal{H}$ is an RKHS. We denote by $\mathcal{H}_K$ the RKHS associated with the kernel $K$.
Mercer's decomposition theorem asserts that there exist $\phi_{0}(\boldsymbol{x}), \phi_{1}(\boldsymbol{x}),\phi_{2}(\boldsymbol{x}),\cdots \in L_{\mu_{\mathcal{X}}}^{2}(\mathcal{X})$ such that $\left<\phi_{i},\phi_{j}\right>_{L_{\mu_{\mathcal{X}}}^{2}}=\delta_{ij}$ and
\begin{align}
K(\boldsymbol{x},\boldsymbol{x}')=\sum_{j=0}^{\infty}\lambda_{j}\phi_{j}(\boldsymbol{x})\phi_{j}(\boldsymbol{x}'),
\end{align}
where $\lambda_0\geq \lambda_1\geq\cdots\geq 0$. We refer to $\{\lambda_{j},j=0,1,2,\cdots\}$ as the eigenvalues and to $\phi_0,\phi_1,\dots$ as the eigenfunctions associated to the kernel $K$. At the same time, an integral operator $T_{K}:L_{\mu_{\mathcal{X}}}^{2} \to L_{\mu_{\mathcal{X}}}^{2}$ associated with $K$ can be defined as
\begin{equation}
(T_K f)(\boldsymbol{x}) = \int_{\mathcal{X}} K(\boldsymbol{x},\boldsymbol{x}') f(\boldsymbol{x}') \mathrm{d}\mu_{\mathcal{X}}(\boldsymbol{x}'), \quad \forall f \in L_{\mu_{\mathcal{X}}}^{2},
\end{equation}
and we have the spectral decomposition of $T_K$:
\begin{equation}
T_K=\sum_{j=0}^{\infty} \lambda_{j}\phi_j \otimes \phi_j.
\end{equation}
With the eigenvalues and the eigenfunctions associated to the kernel $K$, $\mathcal{H}_K$ can be formulated as
\begin{equation}
\mathcal{H}_K = \left\{f=\sum_{j=0}^{\infty} c_j\lambda_j^{\frac{1}{2}} \phi_j ~\middle|~ \sum_{j=0}^{\infty}c_j^2 <\infty \right\}.
\end{equation}
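To make the Mercer machinery concrete, here is a small numerical sketch (our own illustration; we use the kernel $G_1(x,x')=1-|x-x'|/\pi$ from the appendix as a stand-in for a generic $K$), which discretizes the kernel, extracts approximate eigenpairs $(\lambda_j,\phi_j)$, and checks that the truncated Mercer sum reconstructs the kernel increasingly well:

```python
import numpy as np

n = 500
x = (np.arange(n) + 0.5) / n
K = 1.0 - np.abs(x[:, None] - x[None, :]) / np.pi   # stand-in kernel on [0, 1]

# eigenpairs of (1/n) K: lam[j] ~ lambda_j and sqrt(n) * V[:, j] ~ phi_j(x)
lam, V = np.linalg.eigh(K / n)
lam, V = lam[::-1], V[:, ::-1]                      # descending order
phi = np.sqrt(n) * V                                # approx. L2-normalized eigenfunctions

def mercer(J):
    """Truncated Mercer sum: sum_{j < J} lambda_j phi_j(x) phi_j(x')."""
    return (phi[:, :J] * lam[:J]) @ phi[:, :J].T

frob_err = [np.linalg.norm(K - mercer(J)) for J in (5, 20, 100, n)]
```

The Frobenius truncation error equals $n\,(\sum_{j\geq J}\lambda_j^2)^{1/2}$ here, so it decreases strictly in $J$ and vanishes at $J=n$.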
\subsection{Contributions}
In this paper, we focus on training a wide two-layer ReLU neural network in the so-called ``lazy training regime''. That is, the width $m$ of the neural network is sufficiently large so that the parameters of the neural network stay in a small neighbourhood of the initialization.
{\it $\bullet$ Spectral properties of the NTK.}
We first show that the NTK is positive definite on $\mathbb{R}^{d}$, filling a long-standing gap in the literature.
We then provide an optimal bound on the minimum eigenvalue of the Gram matrix $(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}$ for one-dimensional data.
Finally, we determine the decay rate of the eigenvalues of the NTK defined on $[0,1]$.
To the best of our knowledge, our work is the first result on the spectral properties of the NTK defined on a domain other than the sphere \cite{bietti2019inductive, bietti2020deep, cao2019towards}.
Though the eigenvalue decay rate of the NTK is obtained only for a one-dimensional interval in this paper, it sheds light on obtaining similar results for the NTK defined on $\mathbb{R}^{d}$. We believe this problem would be of great interest to researchers.
{\it $\bullet$ NNK converges to NTK uniformly.} Though many works have claimed that the dynamics of training a wide neural network can be well approximated by those of the NTK regression, all of them proved this claim only pointwise \cite{arora2019exact, lee2019wide}.
In this paper, we first show that the NNK converges to the NTK uniformly and that the dynamics of training the wide two-layer ReLU neural network converge to those of the NTK regression uniformly. Thus, the generalization performance of the wide neural network can be well approximated by that of the NTK regression.
{\it $\bullet$ Generalization performance of neural networks on $\mathbb{R}$.}
With the assumption that the regression function $f_{\star}\in\mathcal{H}_{1}$, the RKHS associated to the NTK $K_{1}$ defined on $\mathbb{R}$,
we prove that training a wide neural network with a proper early stopping strategy produces a neural network achieving the minimax-optimal rate $n^{-2/3}$, i.e., the early stopped neural network can generalize. On the other hand, we show that if one trains a wide neural network until it overfits equally-spaced one-dimensional data, the resulting neural network is essentially a linear interpolation and thus cannot generalize. To the best of our knowledge,
it is the first time that we have a concrete understanding of what an overfitted neural network looks like.
{\it $\bullet$ Implicit early stopping causes the ``benign overfitting phenomenon''.} Most reported experiments on the ``benign overfitting phenomenon'' in neural networks might ignore a subtle difference between 100\% training accuracy on the labels and (nearly) zero training loss.
This difference actually leads to the training process being stopped earlier than the time needed to overfit the data.
We call the strategy of stopping the training process at near 100\% training accuracy the implicit early stopping rule, and find that its occurrence depends on the signal strength of the data.
We further illustrate through several experiments how the signal strength affects the implicit early stopping rule and the generalization ability of the resulting neural networks.
\subsection{Related works}
Whether an overfitted neural network can generalize is arguably one of the most intriguing questions in explaining the superior performance of neural network methods in practice.
Inspired by the experiments reported in \cite{zhang2016understanding},
much effort has been devoted to explaining how overfitted models/neural networks can generalize well \cite{belkin2019does, liang2020just}. For example, \cite{belkin2019does} exhibited a singular Nadaraya--Watson estimator that interpolates the data yet achieves the corresponding minimax optimal rate;
\cite{liang2020just} illustrated that kernel ``ridgeless'' regression can perfectly fit high-dimensional data and still generalize well.
Though these interpolating estimators possess some generalization ability, more work is still needed to explain the ``benign overfitting phenomenon'' for neural networks.
On the other hand, there are a few results claiming that kernel interpolation cannot generalize well \cite{rakhlin2019consistency, buchholz2022kernel}.
For example, \cite{rakhlin2019consistency} showed that for fixed dimension, the Laplace kernel interpolation cannot have vanishing error for noisy data as $n\to \infty$, even with bandwidth adaptive to the training set; \cite{buchholz2022kernel} further extended this result to kernels whose associated reproducing kernel Hilbert space (RKHS) is a Sobolev space $H^{s}$ with $d/2< s< 3d/4$. However, these results do not imply the inconsistency of neural network interpolation.
Besides the aforementioned static nonparametric ERM approaches, a few works study the generalization ability of neural networks through the dynamics of gradient descent or stochastic gradient descent \cite{zhong2017recovery, zhang2019learning, lei2022stability}.
Most of them assumed that the data live on a sphere, since the NTK is an inner product kernel on the sphere and its spectral properties there are well understood \cite{bietti2019inductive,cao2019towards, geifman2020similarity, bietti2020deep}.
For example, Hu et al. \cite{hu2021regularization}, one of the most relevant works, considered the generalization performance of a two-layer ReLU neural network defined on the sphere $\mathbb{S}^{d}$ and trained by gradient descent with or without an $L^{2}$ penalty term.
They claimed that: 1) the overfitted neural network does not generalize well; 2) the properly early stopped neural network can achieve the optimal rate.
Unfortunately, their first claim essentially relies on an unproved result (the second statement of Corollary 3 in \cite{raskutti2014early}); their second claim secretly utilizes another unproved fact, namely that the NNK converges to the NTK uniformly, which is one of the major technical contributions of our current work.
\paragraph*{Notation} For every positive integer $n\in\mathbb{N}^{+}$, denote $\{1,\dots,n\}$ by $[n]$. For a real number $x\in\mathbb{R}$, denote by $\lceil x\rceil$ the smallest integer that is greater than or equal to $x$ and by $\lfloor x\rfloor$ the greatest integer that is less than or equal to $x$. For $\boldsymbol{v}\in\mathbb{R}^{d}$, denote by $\boldsymbol{v}_{(j)}$ the $j$-th component of $\boldsymbol{v}$ and denote the $\ell_{2}$ norm and supremum norm of $\boldsymbol{v}$ by $\|\boldsymbol{v}\|_{2}=(\sum_{j\in[d]}\boldsymbol{v}_{(j)}^{2})^{1/2}$ and $\|\boldsymbol{v}\|_{\infty}=\max_{j\in [d]}|\boldsymbol{v}_{(j)}|$ respectively. For a matrix $\boldsymbol{A}\in\mathbb{R}^{m\times n}$, denote by $a_{ij}$ the $(i,j)$-th component of $\boldsymbol{A}$ and denote the operator norm and the Frobenius norm of $\boldsymbol{A}$ by $\|\boldsymbol{A}\|_{2}=\sup_{\boldsymbol{v}\in\mathbb{R}^{n}}\|\boldsymbol{A}\boldsymbol{v}\|_{2}/\|\boldsymbol{v}\|_{2}$ and $\|\boldsymbol{A}\|_{\mathrm{F}}=(\sum_{i\in[m],j\in[n]}a_{ij}^{2})^{1/2}$ respectively. For a set $A$, denote by $|A|$ the number of elements of $A$. Let $\mu_{\mathcal{X}}$ be a positive measure on $\mathcal{X} \subseteq \mathbb{R}^d$. We define the space $L_{2}(\mathcal{X},\mu_{\mathcal{X}}) = \{f:\mathcal{X} \to \mathbb{R} : \int_{\mathcal{X}} |f(\boldsymbol{x})|^2 \mathrm{d} \mu_{\mathcal{X}} <\infty \}$. We use the notation $a_{m}=o_{m}(1)$ to mean that the sequence $\{a_{m}\}_{m=1}^{\infty}$ converges to zero as $m\to\infty$.
\vspace{3mm}
Let $f_{\star}$ be a continuous function defined on a compact subset $\mathcal{X} \subseteq [-B,B]^{d} \subseteq \mathbb{R}^{d}$ for some $B>0$ and let $\mu_{\mathcal{X}}$ be a distribution supported on $\mathcal{X}$. Suppose that we have observed $n$ i.i.d. samples $\{(\boldsymbol{x}_{i},y_{i}), i \in [n]\}$ sampled from the model:
\begin{equation}\label{equation:true_model}
y_i=f_{\star}(\boldsymbol{x}_i)+\varepsilon_{i}, \quad i=1,\dots,n,
\end{equation}
where $\boldsymbol{x}_{i}$'s are sampled from $\mu_{\mathcal{X}}$ and $\varepsilon_{i} \sim \mathcal{N}(0,\sigma^{2})$ for some fixed $\sigma>0$.
We are interested in finding an estimator $\hat{f}_{n}$ based on these $n$ samples that minimizes the excess risk, i.e., the difference between $\mathcal{L}(\hat{f}_{n})=\mathbf{E}_{(\boldsymbol{x},y)}\left[(\hat{f}_{n}(\boldsymbol{x})-y)^2\right]$ and $\mathcal{L}(f_{\star})=\mathbf{E}_{(\boldsymbol{x},y)}\left[(f_{\star}(\boldsymbol{x})-y)^2\right]$.
One can easily verify the following formula about the excess risk:
\begin{equation}\label{eq:excrisk}
\mathcal{E}(\hat{f}_{n})=\mathcal{L}(\hat{f}_{n})-\mathcal{L}(f_{\star})=\int_{\mathcal{X}}(\hat{f}_{n}(\boldsymbol{x})-f_{\star}(\boldsymbol{x}))^{2} \mathrm{d} \mu_{\mathcal{X}}(\boldsymbol{x}).
\end{equation}
It is clear that the excess risk is an equivalent evaluation of the generalization performance of $\hat{f}_{n}$. When the $\boldsymbol{x}_{i}$'s are assumed to be fixed, the excess risk can be interpreted as the squared $L^{2}(\mathcal{X},\nu)$ distance between $\hat{f}_{n}$ and $f_{\star}$, where $\nu$ is the Lebesgue measure.
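The identity \eqref{eq:excrisk}, i.e., that the excess risk equals the squared $L^2(\mu_{\mathcal{X}})$ distance between $\hat{f}_{n}$ and $f_{\star}$, is easy to verify by Monte Carlo. Here is a tiny sketch, with $f_{\star}$, $\hat{f}$, and the noise level chosen arbitrarily by us for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(0.0, 1.0, n)
f_star = np.sin(2 * np.pi * x)          # regression function (arbitrary choice)
y = f_star + rng.normal(0.0, 0.3, n)    # model y = f_star(x) + eps
f_hat = 0.8 * np.sin(2 * np.pi * x)     # some estimator (arbitrary choice)

# excess risk L(f_hat) - L(f_star) vs. squared L2 distance
excess = np.mean((f_hat - y) ** 2) - np.mean((f_star - y) ** 2)
l2_dist = np.mean((f_hat - f_star) ** 2)
```

The two quantities agree up to Monte Carlo error, since the cross term $2\mathbf{E}[(\hat{f}-f_{\star})\varepsilon]$ vanishes.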
\section{Introduction}\label{intro}
\input{intro.tex}
\section{Neural tangent kernel and its spectral properties}\label{sec:ntk}
\input{NTK.tex}
\section{Neural network kernel converges to neural tangent kernel uniformly}\label{sec:main_result}
\input{main_result.sava.tody.tex}
\section{Why overfitted neural networks generalize}\label{sec:explanation}
\input{explanation.tex}
\subsection{The effects of signal strength}\label{sec:experments}
\input{Experiment.tex}
\section{Discussion and conclusion}
\input{Discussion.tex}
\input{appendix.tex}
\begin{supplement}
\stitle{Supplement to ``Generalization Ability of Wide Neural Networks on $\mathbb{R}$''} \sdescription{This supplementary file contains the proofs of Theorem \ref{thm:risk:approx}, \ref{thm:nn:early:stopping:d=1} and \ref{thm:bad_gen}.}
\end{supplement}
\bibliographystyle{imsart-number}
\section{The generalization performance of wide neural networks on $\mathbb{R}$}\label{subsec:generalization}
In order to get a meaningful discussion about the generalization performance of a neural network, we have to specify a class of functions to which $f_{\star}$ belongs.
In this paper, we make the following assumption:
\begin{assumption}\label{assump:f_star}
The regression function $f_{\star}\in \mathcal{H}_{1}$ and $\| f_{\star}\|_{\mathcal{H}_{1}}\leq R$ for some constant $R$, where $\mathcal{H}_{1}$ is the RKHS associated to the kernel $K_{1}$.
\end{assumption}
Proposition \ref{prop:funct:approx} and Theorem \ref{thm:risk:approx} show that $f^{m}_{\boldsymbol{\theta}(t)}$ converges to $f^{\mathtt{NTK}}_{t}$ uniformly and that $\mathcal{E}(f_{\boldsymbol{\theta}(t)}^{m})$ is well approximated by $\mathcal{E}(f_{t}^{\mathtt{NTK}})$; thus we can focus on studying the generalization ability of the NTK regression function $f^{\mathtt{NTK}}_{t}$.
It is easiest to stick with the usual assumptions appearing in the kernel regression literature (see e.g., \cite{caponnetto2007optimal, yao2007early, raskutti2014early, blanchard2018optimal, lin2020optimal}), which is exactly Assumption \ref{assump:f_star}.
\subsection{Wide neural networks with early stopping achieve the minimax rate}
Early stopping, an implicit regularization strategy, is widely applied in training various models such as kernel ridgeless regression, neural networks, etc.
Much solid research has provided theoretical guarantees for early stopping (see e.g., \cite{yao2007early, raskutti2014early, blanchard2018optimal, lin2020optimal}), where the optimal stopping time depends on the eigenvalue decay rate of the kernel.
Note that Theorem \ref{thm:spectral:d=1:L=1} gives us the eigenvalue decay rate of $K_{1}$ and Theorem \ref{thm:risk:approx} guarantees the excess risk of the NTK regression function $f_{t}^{\mathtt{NTK}}$ is an accurate alternative of the excess risk of the neural network $f^{m}_{\boldsymbol{\theta}(t)}$, thus we have the following Theorem \ref{thm:nn:early:stopping:d=1}.
\begin{theorem}\label{thm:nn:early:stopping:d=1} Suppose Assumption \ref{assump:f_star} holds and we observed $n$ i.i.d. samples $\{(\boldsymbol{x}_{i},y_{i}),i\in[n]\}$ from the model \eqref{equation:true_model}.
For any given $\delta\in(0,1)$, if one trains a two-layer neural network with width $m$ that is sufficiently large and stops the gradient flow at time $t_{\star}\propto n^{2/3}$, then for sufficiently large $n$, there exists a constant $C$ independent of $\delta$ and $n$, such that
\begin{equation}\label{eq:excrisk nn}
\mathcal{E}(f_{\boldsymbol{\theta}(t_{\star})}^{m})\leq Cn^{-\frac{2}{3}}\log^{2}\frac{6}{\delta}
\end{equation}
holds with probability at least $(1-\delta)(1-o_{m}(1))$ where the randomness comes from the joint distribution of the random samples and the random initialization of parameters in the neural network $f_{\boldsymbol{\theta}(0)}^{m}$.
\end{theorem}
The following minimax rate of regression over the RKHS $\mathcal{H}_{1}$ associated to $K_{1}$ has been established \cite{blanchard2018optimal}:
\begin{equation} \inf_{\hat{f}_{n}}\sup_{f_{\star}\in\mathcal{H}_{1},\| f_{\star}\|_{\mathcal{H}_{1}}\leq R}\mathbf{E}\mathcal{E}(\hat{f}_{n})=\Omega(n^{-\frac{2}{3}}).
\end{equation}
Thus, we have proved that training a wide two-layer neural network with the early stopping strategy achieves the minimax optimal rate.
The proof of Theorem \ref{thm:nn:early:stopping:d=1} can be found in the Supplementary Material. Theorem \ref{thm:nn:early:stopping:d=1} rigorously shows that the fully trained wide two-layer ReLU neural network with early stopping is minimax rate optimal.
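The stopping rule $t_{\star}\propto n^{2/3}$ can be motivated by a heuristic bias--variance calculation; we stress that this is only a back-of-envelope sketch, not part of the formal proof, and it assumes the convention in which the gradient flow applies the spectral filter $1-e^{-\lambda_j t}$ to the $j$-th eigendirection. With eigenvalue decay $\lambda_j\asymp j^{-2}$ (Theorem \ref{thm:spectral:d=1:L=1}) and $\|f_{\star}\|_{\mathcal{H}_1}\leq R$, the squared bias after time $t$ is at most of order $R^{2}/t$, while the variance is of order $\sigma^{2}\mathcal{N}(1/t)/n$ with effective dimension $\mathcal{N}(\lambda)=\sum_{j}\lambda_j/(\lambda_j+\lambda)\asymp\lambda^{-1/2}$. Balancing the two terms,
\begin{equation*}
\underbrace{\frac{R^{2}}{t}}_{\text{bias}^{2}}\;\asymp\;\underbrace{\frac{\sigma^{2}\sqrt{t}}{n}}_{\text{variance}}
\quad\Longrightarrow\quad
t_{\star}\asymp n^{2/3},\qquad \mathcal{E}(f^{\mathtt{NTK}}_{t_{\star}})\asymp \frac{R^{2}}{t_{\star}}\asymp n^{-2/3},
\end{equation*}
matching both the stopping time in Theorem \ref{thm:nn:early:stopping:d=1} and the minimax rate above.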
\subsection{Overfitted Neural Networks generalize poorly}
In this subsection, we are interested in the generalization performance of $f_{\boldsymbol{\theta}(t)}^{m}(x)$ for sufficiently large $t$, i.e., when $f_{\boldsymbol{\theta}(t)}^{m}(x)$ (nearly) fits the given data.
To be more concrete, suppose that we observed $n$ equally-distanced one-dimensional data $\{(x_{i},y_{i}) \mid x_{i}=\frac{i-1}{n-1}, i \in [n]\}$. The following theorem shows that $f_{\boldsymbol{\theta}(t)}^{m}(x)$ almost linearly interpolates these data points when $t$ is sufficiently large, and therefore it cannot generalize well.
Recall that the linear interpolation of the equally-distanced data is given by:
\begin{align}
f_{\mathtt{LI}}(x)=y_{i}+\frac{y_{i+1}-y_{i}}{x_{i+1}-x_{i}}(x-x_{i}), \mbox{ when } x\in [x_{i},x_{i+1}].
\end{align}
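For equally-distanced nodes, $f_{\mathtt{LI}}$ is exactly what \texttt{numpy.interp} computes. A minimal sketch (the data below are illustrative, not those of our experiments):

```python
import numpy as np

# Equally-spaced design points on [0, 1], as in the setup above.
n = 11
x_nodes = np.linspace(0.0, 1.0, n)
y_nodes = np.sin(2 * np.pi * x_nodes)  # illustrative labels

def f_LI(x):
    """Piecewise-linear interpolation f_LI of the pairs (x_i, y_i)."""
    return np.interp(x, x_nodes, y_nodes)

# f_LI reproduces the labels at the nodes ...
assert np.allclose(f_LI(x_nodes), y_nodes)

# ... and matches the displayed formula on each interval [x_i, x_{i+1}].
i = 3
x_mid = 0.5 * (x_nodes[i] + x_nodes[i + 1])
slope = (y_nodes[i + 1] - y_nodes[i]) / (x_nodes[i + 1] - x_nodes[i])
manual = y_nodes[i] + slope * (x_mid - x_nodes[i])
assert np.isclose(f_LI(x_mid), manual)
```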
\begin{theorem}[Overfitted networks generalize poorly]\label{thm:bad_gen}
Suppose that we have observed $n$ data points $\{(x_{i},y_{i}), i\in [n]\}$ from the model \eqref{equation:true_model}, where $x_i=\frac{i-1}{n-1}$, $i\in [n]$. When the width $m$ is sufficiently large, the following statements hold.
$i)$ There exist some absolute constants $C_{1}$, $C_{2}$ and $C_{3}$ such that for any $t>C_{1}n^{2}\log n$, we have
\begin{align}
\sup_{x\in [0,1]}|f_{\boldsymbol{\theta}(t)}^{m}(x)-f_{\mathtt{LI}}(x)|\leq C_3 \sqrt{\log n} /(n-1)^{2}
\end{align}
holds with probability at least $1-\frac{C_{2}}{n}$.
\vspace{1mm}
$ii)$ There exist a positive constant $C_{4}$ depending only on $\sigma$ and an absolute constant $C_{5}$
such that for any $t>C_{1}n^{2}\log n$, we have
$\mathcal{E}(f_{\boldsymbol{\theta}(t)}^{m}) \geq C_4$
holds with probability at least $1-\frac{C_{5}}{n}$.
\end{theorem}
Theorem \ref{thm:bad_gen} $i)$ shows that the overfitted neural network is nearly a linear interpolation (e.g., as shown in Figure \ref{fig:LI}(a)). To the best of our knowledge, this is the first result explicitly showing how the overfitted neural network interpolates the data.
We have to emphasize that not every kernel interpolation (kernel ridgeless regression) is nearly a linear interpolation. For example, the radial basis function (RBF) kernel interpolation clearly interpolates the data nonlinearly, as shown in Figure \ref{fig:LI}(b).
Figure \ref{fig:LI}(c) shows that the maximum gap between the overfitted neural network and the linear interpolation is indeed of order $n^{-2}$, which is in line with Theorem \ref{thm:bad_gen} $i)$.
Theorem \ref{thm:bad_gen} $ii)$ shows that the generalization error of overfitted neural networks has a constant lower bound, at least for the equally-distanced data. It strongly suggests that overfitted neural networks cannot generalize well, which contradicts the ``benign overfitting phenomenon''. In the next section, we will provide an explanation to reconcile our theoretical result with the ``benign overfitting phenomenon''.
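The constant lower bound in part $ii)$ has a simple mechanism: if the fitted function is essentially $f_{\mathtt{LI}}$, then between two nodes it reproduces a convex combination of the noises $\varepsilon_{i},\varepsilon_{i+1}$, contributing about $\sigma^{2}\int_{0}^{1}\big((1-\lambda)^{2}+\lambda^{2}\big)\mathrm{d}\lambda=\tfrac{2}{3}\sigma^{2}$ to the excess risk regardless of $n$. The following simulation is a sanity check of this heuristic with an arbitrary smooth $f_{\star}$ (our own illustration, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
f_star = lambda x: np.sin(2 * np.pi * x)

def risk_of_linear_interpolation(n):
    """Monte Carlo excess risk of f_LI fitted to noisy equally-spaced data."""
    x = np.linspace(0.0, 1.0, n)
    y = f_star(x) + sigma * rng.standard_normal(n)
    grid = np.linspace(0.0, 1.0, 20_001)  # dense evaluation grid
    return np.mean((np.interp(grid, x, y) - f_star(grid)) ** 2)

for n in (200, 2000):
    r = risk_of_linear_interpolation(n)
    # The excess risk hovers near 2*sigma^2/3 and does not vanish with n.
    assert r > 0.08
```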
\begin{figure}[htbp]
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{NN_with_diff_m.png}
\centerline{(a)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{LI_NTK_RBF.png}
\centerline{(b)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{gap_between_NTK_LI.png}
\centerline{(c)}
\end{minipage}
\caption{(a): Interpolation by two-layer neural networks with different widths; (b): Interpolation by linear interpolation, $f^{\mathtt{NTK}}_{\infty}(x)$ and RBF kernel regression with bandwidth $\gamma=1$; (c): The maximum gap between $f^{\mathtt{NTK}}_{\infty}$ and the linear interpolation.
The input of the training data is the equally-distanced one-dimensional data $\{x_{i}=\frac{i-1}{n-1}, i \in [n]\}$ with randomly selected labels.
The sample size $n=100,200,\dots,1000$.}\label{fig:LI}
\end{figure}
\begin{comment}
\subsection{Speculation on $\mathbb{R}^{d}$}
\begin{figure}[htbp]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{min_eigenvalue_for_diff_d}
\centerline{(a) Minimum eigenvalue of NTK}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{test_error_of_NTK.png}
\centerline{(b) Generalization error for Overfitted Neural Networks}
\end{minipage}
\caption{(a) Minimum eigenvalue of NTK on $\mathbb{R}^{d}$: The input is the equally-distanced data $\{\boldsymbol{x}_i\}_{i=1}^n$, defined as $\{(\frac{i_1-1}{n_d-1},...,\frac{i_d-1}{n_d-1})\}$, $i_1,...,i_d\in[n_d]$, where $n_d=n^{\frac{1}{d}}$. We calculate the minimum eigenvalue of NTK with different minimum distances $\frac{1}{n_d-1}$; (b) Generalization error for Overfitted NTK $f^{\mathtt{NTK}}_{\infty}$ over $\mathbb{R}^{d}$: The input $x \sim \operatorname{unif}(0,1)^d$ where $d=2,3,4$. $f_{\star}(x) = \sin(\frac{\sum_{j=1}^{d}x_j}{\sqrt{d}})$ and $y=f_{\star}(x)+0.1\varepsilon$, where $\varepsilon \sim N(0,1)$. The sample size of training data $n=100,300,500,...,1900$ and the number of testing data is $1000$ for all experiments.}
\label{fig: d_larger_than_1}
\end{figure}
\subsubsection*{Spectral properties of NTK} The experiment of the minimum eigenvalue is presented in Figure \ref{fig: d_larger_than_1} (a), showing that for fixed $d$, the minimum eigenvalue $\lambda_{\min}$ of $K_d(\boldsymbol{X},\boldsymbol{X})$ tends to be $\Theta(d_{\min})$, i.e., $c d_{\min}\leq \lambda_{\min} \leq C d_{\min}$ for some constants $c$ and $C$.
\cite{bietti2019inductive} has shown that on $\mathbb{S}^{d-1}$, the eigenvalues associated with the bias-free NTK $\lambda_k \asymp k^{-\frac{d}{d-1}}$ (considering multiple roots). There are some zero eigenvalues for the bias-free NTK and adding a bias may prevent such zero eigenvalues. Thus, we believe that on $\mathbb{R}^{d}$, the eigenvalues of the NTK with biases $\lambda_k \asymp k^{-\frac{d+1}{d}}$. This conjecture is the key to the generalization ability of neural networks with early stopping.
\subsubsection*{Over-parameterized neural networks are minimax rate optimal with early stopping} We can make the following assumptions:
\begin{assumption}\label{assumption:EDR_for_R_d}
Let $\lambda_j,j=0,1,2\dots$ be the eigenvalues associated with the NTK $K_d$ defined on $[0,1]^{d}$ and satisfies
\begin{equation}
\lambda_{j} \asymp j^{-\frac{d+1}{d}}.
\end{equation}
\end{assumption}
\begin{assumption}\label{assumption:f_star_in_H_K_d}
$f_{\star}\in \mathcal{H}_{K_d}$ where $K_d$ is the NTK.
\end{assumption}
Then with these two assumptions, we have the following results:
\begin{proposition}\label{prop:early:stopping:d}
Suppose Assumption \ref{assumption:EDR_for_R_d} and \ref{assumption:f_star_in_H_K_d} hold. For the NTK regression, if we stop the training process at $t_{\star}=\Theta(n^{-(d)/(2d+1)})$, then
\begin{equation}
\mathcal{E}(f_{t_{\star}}^{\mathtt{NTK}})=O( n^{-(d+1)/(2d+1)}\log^{2}\frac{6}{\delta}),
\end{equation}
with probability at least $1-\delta$ and
\begin{equation}
\inf_{\hat{f}_{n}}\sup_{f_{\star}\in\mathcal{H}_{K},\lVert f_{\star}\rVert\leq R}\mathbf{E}\mathcal{E}(\hat{f}_{n}) =\Omega( n^{-(d+1)/(2d+1)}).
\end{equation}
\end{proposition}
Thus, combined with Theorem \ref{thm:risk:approx}, we show the generalization performance of neural networks with early stopping.
\subsubsection*{Over-fitted neural networks generalize poorly} The experiment of the generalization error is presented in Figure \ref{fig: d_larger_than_1} (b), showing that for fixed $d$, the generalization error of $f^{\mathtt{NTK}}_{\infty}$ is bounded away from zero even when $n$ is increasing.
In summary, for fixed $d$, the conjectures on the $\mathbb{R}^{d}$ are summarized as followed:
i) Let $\boldsymbol{X} = \{\boldsymbol{x}_i,...,\boldsymbol{x}_n\}\in [0,1]^{d}$ and $d_{\min} = \min_{i\neq j}\|\boldsymbol{x}_i-\boldsymbol{x}_j\|$. The minimum eigenvalue, $\lambda_{\min}$ of $K_d(\boldsymbol{X},\boldsymbol{X})$ satisfies
\begin{equation}
\lambda_{\min} \asymp d_{\min}
\end{equation}
for some constants $c$, $C$.
ii) Let $\lambda_j,j=0,1,2\dots$ be the eigenvalues associated with the NTK $K_d$ defined on $[0,1]^{d}$ and satisfies
\begin{equation}
\lambda_{j} \asymp j^{-\frac{d+1}{d}}.
\end{equation}
iii) Assume ii) holds and $f_{\star} \in \mathcal{H}_{K_d}$, then if the width $m$ of the two-layer ReLU neural network is sufficiently large and the training process is stopped at $t_{\star}=\Theta(n^{-(d)/(2d+1)})$, then
\begin{equation*}
\mathcal{E}(f_{\boldsymbol{\theta}(t_{\star})})=O( n^{-(d+1)/(2d+1)}\log^{2}\frac{6}{\delta}),
\end{equation*}
with probability converging to one over initialization and high probability over the training data.
iv) The generalization error of the over-fitted NTK model with zero initialization is bounded away from zero, i.e.,
\begin{equation}
\mathcal{E}(f_{\infty}^{\mathtt{NTK}}) \geq C
\end{equation}
for some constant $C$.
\end{comment}
\subsection{The excess risk $\mathcal{E}(f^{m}_{\boldsymbol{\theta}(t)})$ converges to the excess risk $\mathcal{E}(f^{\mathtt{NTK}}_{t})$ } \label{subsec:ntk:approx}
In this subsection, we show in the following Theorem \ref{thm:risk:approx} that the excess risk of the very wide two-layer ReLU neural network $f_{\boldsymbol{\theta}(t)}^{m}$ is well approximated by the excess risk of $f_{t}^{\mathtt{NTK}}$.
\begin{theorem}\label{thm:risk:approx}
Given the training data, for all $t\geq 0$, if the width $m$ of the two-layer ReLU neural network is sufficiently large, then
\begin{equation*}
|\mathcal{E}(f_{\boldsymbol{\theta}(t)})-\mathcal{E}(f_{t}^{\mathtt{NTK}})|=o_{m}(1)
\end{equation*}
holds with probability converging to one
as the width $m$ goes to infinity.
\end{theorem}
\begin{proof}
It is easy to see that $\mathcal{E}(f_{\boldsymbol{\theta}(t)})-\mathcal{E}(f_{t}^{\mathtt{NTK}})$ can be controlled by upper bounding $\int_{\mathcal{X}}(f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x}))^{2}\mathrm{d}\mu_{\mathcal{X}}(\boldsymbol{x})$ and $\int_{\mathcal{X}}(f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x}))(f_{t}^{\mathtt{NTK}}(\boldsymbol{x})-f_{\star}(\boldsymbol{x}))\mathrm{d} \mu_{\mathcal{X}}(\boldsymbol{x})$. The term $\sup_{\boldsymbol{x}}|f_{t}^{\mathtt{NTK}}(\boldsymbol{x})-f_{\star}(\boldsymbol{x})|$ can be bounded by noting that $f_{t}^{\mathtt{NTK}}$ and $f_{\star}$ are both continuous over the compact set $\mathcal{X}$. Thus
\begin{equation*}
|\mathcal{E}(f_{\boldsymbol{\theta}(t)})-\mathcal{E}(f_{t}^{\mathtt{NTK}})|=O(\sup_{\boldsymbol{x}}|f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|).
\end{equation*}
This shows that the uniform approximation of $f_{\boldsymbol{\theta}(t)}$ by $f_{t}^{\mathtt{NTK}}$, i.e., a bound on $\sup_{\boldsymbol{x}}|f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|$, lies at the heart of the proof of Theorem \ref{thm:risk:approx}; such a bound is provided in Proposition \ref{prop:funct:approx} below.
\end{proof}
\begin{proposition}\label{prop:funct:approx}
Given the training data, if the width $m$ of the two-layer ReLU neural network is sufficiently large, then for all $t\geq 0$,
\begin{equation*}
\sup_{\boldsymbol{x}}| f^{m}_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|=o_{m}(1)
\end{equation*}
holds with probability converging to one over initialization as the width $m$ goes to infinity.
\end{proposition}
It is clear from the proof of Theorem \ref{thm:risk:approx} that the uniform convergence of the neural network $f_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x})$ to the function $f_{t}^{\mathtt{NTK}}(\boldsymbol{x})$ with high probability as $m\rightarrow\infty$ is indispensable. The existing results
by \cite{lee2019wide, arora2019exact} only showed that for any fixed $\boldsymbol{x}$, the value of the neural network $f_{\boldsymbol{\theta}(t)}^{m}$ at $\boldsymbol{x}$ converges to the value of the function $f_{t}^{\mathtt{NTK}}$ at $\boldsymbol{x}$ with high probability as $m\rightarrow\infty$. Many inspiring works (see e.g., \cite{hu2021regularization, suh2022nonparametric}) ignored the subtle difference between uniform and pointwise convergence and claimed that the excess risk of the neural network $f_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x})$ is well approximated by the excess risk of the function $f_{t}^{\mathtt{NTK}}(\boldsymbol{x})$.
The proof of Proposition \ref{prop:funct:approx} is deferred to Appendix \ref{app:ntk:approx}. Since $f_{\boldsymbol{\theta}(t)}$ and $f_{t}^{\mathtt{NTK}}$ are fully characterized by the equations \eqref{nn:theta:flow}, \eqref{nn:f:flow} and \eqref{ntk:f:flow}, one of the key steps in the proof of Proposition \ref{prop:funct:approx} is showing that the time-varying neural network kernel $K_{\boldsymbol{\theta}(t)}$ converges uniformly to the time-independent neural tangent kernel $K_{d}$.
\begin{proposition}\label{prop:kernel:approx}
Given the training data, if the width $m$ of the two-layer ReLU neural network is sufficiently large, then
\begin{equation*}
\sup_{\boldsymbol{x},\boldsymbol{x}'}\sup_{t\geq 0}| K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K_d(\boldsymbol{x},\boldsymbol{x}')|=o_{m}(1)
\end{equation*}
holds with probability converging to one over initialization as the width $m$ goes to infinity.
\end{proposition}
The proof of Proposition \ref{prop:kernel:approx} is given in Appendix \ref{app:ntk:approx}. Previous works, such as \cite{jacot2018neural, du2018gradient}, only showed that the time-varying kernel $K_{\boldsymbol{\theta}(t)}$ converges pointwise to the time-independent neural tangent kernel $K_{d}$ as the width $m\rightarrow\infty$; the significance of this proposition is that it establishes uniform convergence. Thanks to the uniform convergence result, we are now able to claim that the excess risk of $f_{\boldsymbol{\theta}(t)}^{m}$ is well approximated by the excess risk of $f^{\mathtt{NTK}}_{t}$.
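Already at initialization ($t=0$), the sup-norm concentration of the empirical kernel can be observed numerically. The sketch below is our own illustration and assumes the standard parameterization $f(x)=\frac{1}{\sqrt{m}}\sum_{r} a_r\,\mathrm{ReLU}(w_r x+b_r)$ with $a_r=\pm 1$ and $w_r,b_r\sim N(0,1)$, which may differ in constants from the paper's setup; it compares the width-$m$ empirical kernel on a grid with a very wide network serving as a proxy for the limit:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 21)  # evaluation points x

def empirical_ntk(m):
    """Gram matrix of <grad_theta f(x), grad_theta f(x')> at initialization."""
    w = rng.standard_normal(m)
    b = rng.standard_normal(m)
    pre = np.outer(grid, w) + b            # (n_grid, m) pre-activations
    act = np.maximum(pre, 0.0)             # ReLU(w x + b)
    ind = (pre > 0).astype(float)          # ReLU'(w x + b)
    inner = 1.0 + np.outer(grid, grid)     # x x' + 1, from (w_r, b_r) gradients
    k_wb = (ind @ ind.T) / m * inner       # a_r^2 = 1
    k_a = (act @ act.T) / m                # outer-layer gradients
    return k_wb + k_a

k_ref = empirical_ntk(100_000)             # proxy for the m -> infinity limit
gap = {m: np.max(np.abs(empirical_ntk(m) - k_ref)) for m in (128, 8192)}

# The sup-norm gap decays (roughly like m^{-1/2}) as the width grows.
assert gap[8192] < gap[128]
```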
\subsection{The generalization performance of overparameterized neural networks over $\mathbb{R}$}\label{subsec:generalization}
In Section \ref{subsec:ntk:approx}, we showed that $f_{\boldsymbol{\theta}(t)}$ can be effectively approximated by $f_{t}^{\mathtt{NTK}}$, so we can analyze the generalization performance of $f_{\boldsymbol{\theta}(t)}$ via that of $f_{t}^{\mathtt{NTK}}$. In this section, we focus on data with a one-dimensional feature and show that the fully trained over-parameterized two-layer ReLU neural network with early stopping is minimax rate optimal, while the neural network trained until overfitting might not even be consistent.
It is well known that the rate of convergence of any estimator can be arbitrarily slow if no restrictions are imposed on the class in which the regression function lies \cite{devroye1996probabilistic}. Hence, in our setting, we make the following assumption on the ground truth function $f_{\star}$.
\begin{assumption}\label{assump:f_star}
$f_{\star}\in \mathcal{H}_{K_1}$ and $\| f_{\star}\|_{\mathcal{H}_{K_{1}}}\leq R$ for some constant $R$, where $K_1$ is the NTK.
\end{assumption}
Assumption \ref{assump:f_star} is very common in the literature for statistical learning theory, see \cite{caponnetto2007optimal, yao2007early, raskutti2014early, blanchard2018optimal, lin2020optimal}. For more applications of RKHS in statistical learning, we refer the readers to \cite{wahba1990spline, saitoh2016theory, scholkopf2018learning}.
\subsubsection{Over-parameterized neural networks are minimax rate optimal with early stopping}
Regularization is a well-known way to decrease the generalization error, and early stopping has been presented as an implicit regularization method in gradient descent \cite{morgan1989generalization, yao2007early}. The generalization of early stopping in kernel methods is widely discussed in previous works \cite{raskutti2014early, lin2020optimal}. In particular, \cite{lin2020optimal} showed that if the eigenvalues satisfy a polynomial decay condition, kernel methods with early stopping can reach the minimax optimal rate. Thus, with the eigenvalue decay rate of $K_1$ (Theorem \ref{thm:spectral:d=1:L=1}), the generalization error of $f_t^{\mathtt{NTK}}$ with early stopping can be stated as follows:
\begin{proposition}\label{prop:early:stopping}
Suppose Assumption \ref{assump:f_star} holds. For the NTK regression, if the training process is stopped at $t_{\star}=\Theta(n^{1/3})$, then
\begin{equation}
\mathcal{E}(f_{t_{\star}}^{\mathtt{NTK}})=O(n^{-\frac{2}{3}}\log^{2}\frac{6}{\delta})
\end{equation}
holds with probability at least $1-\delta$ over the training data and
\begin{equation}
\inf_{\hat{f}_{n}}\sup_{f_{\star}\in\mathcal{H}_{K_{1}},\| f_{\star}\|_{\mathcal{H}_{K_{1}}}\leq R}\mathbf{E}\mathcal{E}(\hat{f}_{n})=\Omega(n^{-\frac{2}{3}}).
\end{equation}
\end{proposition}
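The qualitative content of Proposition \ref{prop:early:stopping} can be rendered numerically by running the spectral filter of kernel gradient flow directly. The sketch below is a hedged illustration: it uses the Laplacian kernel $e^{-|x-x'|}$ as a computational stand-in for $K_1$ (like $K_1$, its Gram matrix has minimum eigenvalue of the order of the minimum spacing), and it assumes the flow solution $f_t(\cdot)=K(\cdot,\boldsymbol{X})K^{-1}(I-e^{-tK/n})\boldsymbol{y}$; the kernel choice and time normalization are our assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 100, 0.3
f_star = lambda x: np.sin(2 * np.pi * x)
x = np.linspace(0.0, 1.0, n)
y = f_star(x) + sigma * rng.standard_normal(n)

# Laplacian kernel as a computational stand-in for the NTK K_1.
kern = lambda s, t: np.exp(-np.abs(s[:, None] - t[None, :]))
lam, V = np.linalg.eigh(kern(x, x))        # K = V diag(lam) V^T
grid = np.linspace(0.0, 1.0, 2001)
k_grid = kern(grid, x)

def f_flow(t):
    """Gradient-flow kernel regression K(., X) K^{-1} (I - e^{-tK/n}) y."""
    filt = (1.0 - np.exp(-t * lam / n)) / lam
    return k_grid @ (V @ (filt * (V.T @ y)))

risk = lambda f: np.mean((f - f_star(grid)) ** 2)
risk_early = min(risk(f_flow(t)) for t in np.logspace(0, 5, 30))
risk_late = risk(f_flow(1e9))              # essentially interpolation

# A well-chosen stopping time beats (near-)interpolation on noisy data.
assert risk_early < risk_late
```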
With Theorem \ref{thm:risk:approx} controlling the gap between the generalization error of $f_{t}^{\mathtt{NTK}}$ and the one of $f_{\boldsymbol{\theta}(t)}$, we can bound the generalization error of neural networks with early stopping.
\begin{theorem}\label{thm:nn:early:stopping:d=1}
If the width $m$ of the two-layer ReLU neural network is sufficiently large and the training process is stopped at $t_{\star} =\Theta(n^{1/3})$, then
\begin{equation*}
\mathcal{E}(f_{\boldsymbol{\theta}(t_{\star})})=O(n^{-\frac{2}{3}}\log^{2}\frac{6}{\delta})
\end{equation*}
holds with probability converging to one over initialization and with probability at least $1-\delta$ over the training data.
\end{theorem}
Theorem \ref{thm:nn:early:stopping:d=1} rigorously shows that the fully trained over-parameterized two-layer ReLU neural network with early stopping is minimax rate optimal. However, in the absence of early stopping, the following result shows the poor performance of over-parameterized neural networks.
\subsubsection{Over-fitted Neural Networks generalize poorly}
We are interested in the behavior of $f^{\mathtt{NTK}}_{t}$ when $t$ is sufficiently large, which represents over-fitted neural networks. Given a kernel $K$, Kernel Ridgeless Regression is given by
\begin{equation}\label{eq:ridgeless}
f(x) = K(x,\boldsymbol{X})K^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}.
\end{equation}
In the absence of explicit regularization, Kernel Ridgeless Regression with a nonlinear kernel, such as the RBF or a polynomial kernel, can fit the data points perfectly. Comparing Equations \eqref{ntk:solution} and \eqref{eq:ridgeless}, as $t\to\infty$, Equation \eqref{ntk:solution} becomes Kernel Ridgeless Regression with the NTK. Thus, we will discuss the behavior of $f^{\mathtt{NTK}}_{\infty}(x) = K_1(x,\boldsymbol{X})K_1^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}$, which represents over-fitted neural networks that interpolate the data points.
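A minimal numerical rendering of \eqref{eq:ridgeless}, here with the Laplacian kernel $e^{-|x-x'|}$ standing in for a generic positive-definite kernel (any kernel with an invertible Gram matrix behaves the same way at the nodes):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = rng.standard_normal(30)                # arbitrary labels

# Stand-in positive-definite kernel; K_1 would be substituted here.
kern = lambda s, t: np.exp(-np.abs(s[:, None] - t[None, :]))
alpha = np.linalg.solve(kern(x, x), y)     # K^{-1}(X, X) y

f_hat = lambda s: kern(s, x) @ alpha       # K(s, X) K^{-1}(X, X) y

# Ridgeless regression fits every training point exactly.
assert np.max(np.abs(f_hat(x) - y)) < 1e-8
```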
To be more concrete, we consider the equally-distanced one-dimensional data $\{\boldsymbol{X},\boldsymbol{y}\}=\{(x_{i},y_{i}) \mid x_{i}=\frac{i-1}{n-1}, i \in [n]\}$ as an example. We will focus on the performance of $f^{\mathtt{NTK}}_{\infty}(x)$ on such data. Before analyzing the generalization performance, a special property of $f^{\mathtt{NTK}}_{\infty}(x)$ should be highlighted:
\begin{proposition}[Bounded second order derivative of over-fitted NTK regression]\label{prop:bound_second_derivative}
Let $\{(x_{i},y_{i})\}$ be a set of $n$ equally-distanced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With probability at least $1-\frac{2}{n}$, we have
\begin{align}
\sup_{x\in (x_i,x_{i+1})}|K_1''(x,\boldsymbol{X})K_1^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}|\leq C\sqrt{\log(n)}
\end{align}
for all $i\in[n]$ and for some constant $C$.
\end{proposition}
\begin{remark}
If we assume all the labels $\boldsymbol{y}$ are bounded (e.g., in classification problems), i.e., $|y_i|\leq C$, $i\in[n]$, the upper bound of Proposition \ref{prop:bound_second_derivative} becomes a constant $C$ instead of $C\sqrt{\log(n)}$. A bound such as Proposition \ref{prop:bound_second_derivative} does not hold for many kernels, such as the Gaussian kernel or polynomial kernels.
\end{remark}
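The role of the bounded second derivative can be checked numerically: for any $C^{2}$ function $g$ with $\sup|g''|\leq M$, the classical interpolation error bound gives $\sup_{x}|g(x)-g_{\mathtt{LI}}(x)|\leq Mh^{2}/8$ on a grid of spacing $h$, which is the mechanism behind the $\sqrt{\log n}/(n-1)^{2}$ rate below. A quick check with an arbitrary smooth function:

```python
import numpy as np

g = lambda x: np.sin(2 * np.pi * x)
M = (2 * np.pi) ** 2                       # sup |g''| = 4 pi^2

n = 51
x_nodes = np.linspace(0.0, 1.0, n)
h = 1.0 / (n - 1)

dense = np.linspace(0.0, 1.0, 100_001)
gap = np.max(np.abs(g(dense) - np.interp(dense, x_nodes, g(x_nodes))))

# The classical bound sup|g - g_LI| <= M h^2 / 8 holds, and is nearly tight.
assert gap <= M * h ** 2 / 8 + 1e-12
assert gap >= 0.5 * M * h ** 2 / 8
```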
Let
\begin{align}
f_{\mathtt{LI}}(x)=y_{i}+\frac{y_{i+1}-y_{i}}{x_{i+1}-x_{i}}(x-x_{i}), \mbox{ when } x\in [x_{i},x_{i+1}]
\end{align}
be the linear interpolation of these $n$ points. By Taylor expansion, Proposition \ref{prop:bound_second_derivative} guarantees that the linear interpolation approximates $f_{\infty}^{\mathtt{NTK}}$ increasingly well as $n$ grows. Since the generalization error of the linear interpolation has a constant lower bound on noisy data, we have the following theorem:
\begin{theorem}[Overfitted network generalizes poorly]\label{thm:bad_gen}
Let $\{(x_{i},y_{i})\}$ be a set of $n$ equally-distanced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With probability at least $1-\frac{2}{n}$, $f_{\infty}^{\mathtt{NTK}}(x)$ can be approximated by the linear interpolation, i.e.,
\begin{align}
\sup_{x\in [0,1]}|f^{\mathtt{NTK}}_{\infty}(x)-f_{\mathtt{LI}}(x)|\leq C\sqrt{\log(n)} /(n-1)^{2}
\end{align}
for some constant $C$. Moreover, the generalization error of $f_{\infty}^{\mathtt{NTK}}(x)$ is bounded away from zero, i.e.,
\begin{equation}
\mathcal{E}(f_{\infty}^{\mathtt{NTK}}) \geq C
\end{equation}
for some constant $C$.
\end{theorem}
\begin{remark}
To the best of our knowledge, this is the first result showing how wide neural networks interpolate the data. Combining Proposition \ref{prop:funct:approx} and Theorem \ref{thm:bad_gen}, our theory shows that for over-fitted neural networks, adding more parameters does not increase the effective model complexity; instead, the fitted function remains simple (nearly piecewise linear). The experimental results are shown in Figure \ref{fig:LI} (a). This explains why over-fitted neural networks are not affected much by over-parameterization.
\end{remark}
\begin{remark}
The second result of Theorem \ref{thm:bad_gen} gives an example showing that the overfitted neural network can generalize poorly, which contradicts the observation {\bf (S)}. Since the lower bound $C$ in Theorem \ref{thm:bad_gen} is related to the scale of the noise, we believe the overfitted neural network cannot generalize well in most cases unless the noise is small enough. Figure \ref{fig: d_larger_than_1} (b) gives more examples showing that, even for $d>1$, the generalization error of overfitted networks is bounded away from zero even as the sample size increases.
\end{remark}
In contrast with RBF kernel (or high-order polynomial kernel) ridgeless regression, Figure \ref{fig:LI} (a) and (b) show that the overfitted NTK model $f^{\mathtt{NTK}}_{\infty}(x)$ and the wide neural network interpolate the data nearly linearly. Theorem \ref{thm:bad_gen} gives an upper bound of order $n^{-2}$ on the gap between the overfitted NTK model and the linear interpolation, while Figure \ref{fig:LI} (c) shows that the gap is in fact of order $n^{-2}$. These experimental results are in line with the conclusions of Proposition \ref{prop:funct:approx} and Theorem \ref{thm:bad_gen}.
\begin{figure}[htbp]
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{NN_with_diff_m.png}
\centerline{(a)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{LI_NTK_RBF.png}
\centerline{(b)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{gap_between_NTK_LI.png}
\centerline{(c)}
\end{minipage}
\caption{(a): Interpolation by one-hidden-layer neural networks with different widths; (b): Interpolation by linear interpolation, $f^{\mathtt{NTK}}_{\infty}(x)$ and RBF kernel regression with $\gamma=1$; (c): The gap between $f^{\mathtt{NTK}}_{\infty}(x)$ and the linear interpolation. The input of the training data is the equally-distanced one-dimensional data $\{x_{i}=\frac{i-1}{n-1}, i \in [n]\}$. The labels of the training data are randomly selected from $\{1,-1\}$, which does not affect the experimental result. The sample size $n=100,200,\dots,1000$.}\label{fig:LI}
\end{figure}
\subsection{Uniform NTK approximation for over-parameterized neural networks}\label{subsec:ntk:approx}
Since the seminal work \cite{jacot2018neural} introduced the concept of the NTK, many works have tried to understand the generalization performance of neural networks through the NTK \cite{vyas2022limitations, arora2019exact, arora2019fine, hu2021regularization, suh2022nonparametric, montanari2022interpolation}. In this subsection, we show that the generalization performance of the over-parameterized two-layer ReLU neural network $f_{\boldsymbol{\theta}(t)}$ can be studied via the NTK regression $f_{t}^{\mathtt{NTK}}$; the precise statement is established in Theorem \ref{thm:risk:approx}, paving the way for the discussion of the generalization performance of neural networks in Section \ref{subsec:generalization}.
\begin{theorem}\label{thm:risk:approx}
Given the training data, for all $t\geq 0$, if the width $m$ of the two-layer ReLU neural network is sufficiently large, then
\begin{equation*}
|\mathcal{E}(f_{\boldsymbol{\theta}(t)})-\mathcal{E}(f_{t}^{\mathtt{NTK}})|=o_{m}(1)
\end{equation*}
holds with probability converging to one over initialization as the width $m$ goes to infinity.
\end{theorem}
\begin{proof}
Plugging $f_{t}^{\mathtt{NTK}}$ into the excess risk of the two-layer ReLU neural network $\mathcal{E}(f_{\boldsymbol{\theta}(t)})$ according to \eqref{eq:excrisk} shows that $\mathcal{E}(f_{\boldsymbol{\theta}(t)})-\mathcal{E}(f_{t}^{\mathtt{NTK}})$ can be controlled by upper bounding $\int_{\mathcal{X}}(f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x}))^{2}\mathrm{d}\mu_{\mathcal{X}}(\boldsymbol{x})$ and $\int_{\mathcal{X}}(f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x}))(f_{t}^{\mathtt{NTK}}(\boldsymbol{x})-f_{\star}(\boldsymbol{x}))\mathrm{d} \mu_{\mathcal{X}}(\boldsymbol{x})$. The term $\sup_{\boldsymbol{x}}|f_{t}^{\mathtt{NTK}}(\boldsymbol{x})-f_{\star}(\boldsymbol{x})|$ can be bounded since $f_{t}^{\mathtt{NTK}}$ and $f_{\star}$ are both continuous on the compact set $\mathcal{X}$. Thus
\begin{equation*}
|\mathcal{E}(f_{\boldsymbol{\theta}(t)})-\mathcal{E}(f_{t}^{\mathtt{NTK}})|=O(\sup_{\boldsymbol{x}}|f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|),
\end{equation*}
indicating that the uniform approximation of $f_{\boldsymbol{\theta}(t)}$ by $f_{t}^{\mathtt{NTK}}$, i.e., the control of $\sup_{\boldsymbol{x}}|f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|$, lies at the heart of the proof of Theorem \ref{thm:risk:approx}. This control is provided in Proposition \ref{prop:funct:approx} below.
\end{proof}
\begin{proposition}\label{prop:funct:approx}
Given the training data, if the width $m$ of the two-layer ReLU neural network is sufficiently large, then for all $t\geq 0$,
\begin{equation*}
\sup_{\boldsymbol{x}}| f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|=o_{m}(1)
\end{equation*}
holds with probability converging to one over initialization as the width $m$ goes to infinity.
\end{proposition}
\begin{remark}
Proposition \ref{prop:funct:approx} shows that the difference between $f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})$ and $f_{t}^{\mathtt{NTK}}(\boldsymbol{x})$ can be controlled uniformly over all $\boldsymbol{x}$ with high probability through the width $m$, whereas \cite{lee2019wide, arora2019exact} provide similar but pointwise guarantees. This uniform approximation of the function is vital for investigating the generalization performance of the fully trained over-parameterized two-layer ReLU neural network via NTK regression, since the generalization performance depends on every draw of $\boldsymbol{x}$ from $\mu_{\mathcal{X}}$.
\end{remark}
The proof of Proposition \ref{prop:funct:approx} is deferred to Appendix \ref{app:ntk:approx}. Since \eqref{nn:f:flow} and \eqref{ntk:f:flow} imply that the evolutions of $f_{\boldsymbol{\theta}(t)}$ and $f_{t}^{\mathtt{NTK}}$ are fully determined by the kernels $K_{\boldsymbol{\theta}(t)}$ and $K_{d}$, respectively, it suffices to establish the uniform approximation of $K_{d}$ by $K_{\boldsymbol{\theta}}$, stated in Proposition \ref{prop:kernel:approx} below, in order to prove Proposition \ref{prop:funct:approx}.
\begin{proposition}\label{prop:kernel:approx}
Given the training data, if the width $m$ of the two-layer ReLU neural network is sufficiently large, then
\begin{equation*}
\sup_{\boldsymbol{x},\boldsymbol{x}'}\sup_{t\geq 0}| K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K_d(\boldsymbol{x},\boldsymbol{x}')|=o_{m}(1)
\end{equation*}
holds with probability converging to one over initialization as the width $m$ goes to infinity.
\end{proposition}
The proof of Proposition \ref{prop:kernel:approx} is given in Appendix \ref{app:ntk:approx}.
\begin{remark}
It is worth emphasizing that Proposition \ref{prop:kernel:approx} gives a uniform approximation of the kernel over all $\boldsymbol{x},\boldsymbol{x}'$ with high probability. The results of previous works, such as \cite{jacot2018neural, du2018gradient}, can be summarized as point-wise approximations of the kernel: although $K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')$ converges to $K_{d}(\boldsymbol{x},\boldsymbol{x}')$ with high probability for each fixed pair, a simple union bound cannot upgrade this to uniform convergence with high probability. A more delicate analysis is needed.
\end{remark}
\subsection{The generalization performance of overparameterized neural networks over $\mathbb{R}$}\label{subsec:generalization}
In Section \ref{subsec:ntk:approx}, we showed that $f_{\boldsymbol{\theta}(t)}$ can be effectively approximated by $f_{t}^{\mathtt{NTK}}$, so we can analyze the generalization performance of $f_{\boldsymbol{\theta}(t)}$ via that of $f_{t}^{\mathtt{NTK}}$. In this section, we focus on data with one-dimensional features and show that the fully trained over-parameterized two-layer ReLU neural network with early stopping is minimax rate-optimal, while the neural network trained until it overfits may not even be consistent.
It is well known that the rate of convergence of any estimator can be arbitrarily slow if no restrictions are imposed on the class in which the regression function lies \cite{devroye1996probabilistic}. Hence, in our setting, we make the following assumption on the ground-truth function $f_{\star}$.
\begin{assumption}\label{assump:f_star}
$f_{\star}\in \mathcal{H}_{K_1}$ and $\| f_{\star}\|_{\mathcal{H}_{K_{1}}}\leq R$ for some constant $R$, where $K_1$ is the NTK.
\end{assumption}
Assumption \ref{assump:f_star} is very common in the literature for statistical learning theory, see \cite{caponnetto2007optimal, yao2007early, raskutti2014early, blanchard2018optimal, lin2020optimal}. For more applications of RKHS in statistical learning, we refer the readers to \cite{wahba1990spline, saitoh2016theory, scholkopf2018learning}.
\subsubsection{Over-parameterized neural networks are minimax rate optimal with early stopping}
Regularization is a well-known remedy for reducing the generalization error, and early stopping can be viewed as an implicit regularization method for gradient descent \cite{morgan1989generalization, yao2007early}. The generalization benefit of early stopping in kernel methods is widely discussed in previous works \cite{raskutti2014early, lin2020optimal}. In particular, \cite{lin2020optimal} showed that if the eigenvalues satisfy a polynomial decay condition, kernel methods with early stopping attain the minimax optimal rate. Thus, with the decay rate of the eigenvalues associated with $K_1$ (Theorem \ref{thm:spectral:d=1:L=1}), the generalization error of $f_t^{\mathtt{NTK}}$ with early stopping can be stated as follows:
\begin{proposition}\label{prop:early:stopping}
Suppose Assumption \ref{assump:f_star} holds. For the NTK regression, if the training process is stopped at $t_{\star}=\Theta(n^{1/3})$, then
\begin{equation}
\mathcal{E}(f_{t_{\star}}^{\mathtt{NTK}})=O(n^{-\frac{2}{3}}\log^{2}\frac{6}{\delta})
\end{equation}
holds with probability at least $1-\delta$ over the training data and
\begin{equation}
\inf_{\hat{f}_{n}}\sup_{f_{\star}\in\mathcal{H}_{K_{1}},\| f_{\star}\|_{\mathcal{H}_{K_{1}}}\leq R}\mathbf{E}\mathcal{E}(\hat{f}_{n})=\Omega(n^{-\frac{2}{3}}).
\end{equation}
\end{proposition}
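The early-stopping trade-off behind Proposition \ref{prop:early:stopping} can be visualized with a small simulation. This is a sketch only: the kernel $1+\min(s,t)$ stands in for $K_1$ (its eigenvalues also decay at the rate $j^{-2}$; the closed form of $K_1$ is not restated here), and the target $f_\star$, the noise level $\sigma$, and the time grid are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: equally spaced inputs on [0, 1], noisy observations of f_star.
n = 200
x = np.linspace(0.0, 1.0, n)
f_star = lambda t: np.sin(2.0 * np.pi * t)
sigma = 0.5
y = f_star(x) + sigma * rng.standard_normal(n)

# Stand-in kernel (assumption): K(s, t) = 1 + min(s, t), whose eigenvalues
# also decay like j^{-2}; the closed form of K_1 is not restated here.
K = 1.0 + np.minimum.outer(x, x)
lam, V = np.linalg.eigh(K)

x_test = rng.uniform(0.0, 1.0, 2000)
K_test = 1.0 + np.minimum.outer(x_test, x)

def excess_risk(t):
    # Spectral filter of the gradient flow: (1 - exp(-lambda t / n)) / lambda.
    filt = (1.0 - np.exp(-lam * t / n)) / lam
    pred = K_test @ (V @ (filt * (V.T @ y)))
    return np.mean((pred - f_star(x_test)) ** 2)

# Small t underfits, huge t interpolates the noise; intermediate t wins.
risks = {t: excess_risk(t) for t in (0.1, 2e2, 1e8)}
for t, r in risks.items():
    print(f"t = {t:>8g}:  excess risk = {r:.4f}")
```

The intermediate stopping time achieves a markedly smaller excess risk than either the underfitted or the fully trained (interpolating) flow, in line with the trade-off the proposition formalizes.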
With Theorem \ref{thm:risk:approx} controlling the gap between the generalization error of $f_{t}^{\mathtt{NTK}}$ and that of $f_{\boldsymbol{\theta}(t)}$, we can bound the generalization error of neural networks with early stopping.
\begin{theorem}\label{thm:nn:early:stopping:d=1}
If the width $m$ of the two-layer ReLU neural network is sufficiently large and the training process is stopped at $t_{\star} =\Theta(n^{1/3})$, then
\begin{equation*}
\mathcal{E}(f_{\boldsymbol{\theta}(t_{\star})})=O(n^{-\frac{2}{3}}\log^{2}\frac{6}{\delta})
\end{equation*}
holds with probability converging to one over initialization and with probability at least $1-\delta$ over the training data.
\end{theorem}
Theorem \ref{thm:nn:early:stopping:d=1} rigorously shows that the fully trained over-parameterized two-layer ReLU neural network with early stopping is minimax rate-optimal. However, in the absence of early stopping, the following results show the poor performance of over-parameterized neural networks.
\subsubsection{Over-fitted Neural Networks generalize poorly}
We are interested in the behavior of $f^{\mathtt{NTK}}_{t}$ when $t$ is sufficiently large, which represents an over-fitted neural network. Given a specific kernel $K$, we can consider kernel ridgeless regression, given by
\begin{equation}\label{eq:ridgeless}
f(x) = K(x,\boldsymbol{X})K^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}.
\end{equation}
In the absence of explicit regularization, kernel ridgeless regression with a nonlinear kernel, such as the RBF kernel or a polynomial kernel, can fit the data points perfectly. Comparing Equations \eqref{ntk:solution} and \eqref{eq:ridgeless}, as $t\to\infty$, Equation \eqref{ntk:solution} becomes kernel ridgeless regression with the NTK. Thus, we discuss the behavior of $f^{\mathtt{NTK}}_{\infty}(x) = K_1(x,\boldsymbol{X})K_1^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}$, which represents an over-fitted neural network that interpolates the data points.
To be more concrete, we consider the equally spaced one-dimensional data $\{\boldsymbol{X},\boldsymbol{y}\}=\{(x_{i},y_{i}) \mid x_{i}=\frac{i-1}{n-1}, i \in [n]\}$ as an example and focus on the performance of $f^{\mathtt{NTK}}_{\infty}(x)$ on such data. Before analyzing the generalization performance, we highlight a special property of $f^{\mathtt{NTK}}_{\infty}(x)$:
\begin{proposition}[Bounded second order derivative of over-fitted NTK regression]\label{prop:bound_second_derivative}
Let $\{(x_{i},y_{i})\}$ be the set of $n$ equally spaced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With probability at least $1-\frac{2}{n}$, we have
\begin{align}
\sup_{x\in (x_i,x_{i+1})}|K_1''(x,\boldsymbol{X})K_1^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}|\leq C\sqrt{\log(n)}
\end{align}
for all $i\in[n-1]$ and for some constant $C$.
\end{proposition}
\begin{remark}
If all the labels $\boldsymbol{y}$ are bounded (e.g., in classification problems), i.e., $|y_i|\leq C$ for $i\in[n]$, the upper bound in Proposition \ref{prop:bound_second_derivative} becomes a constant $C$ instead of $C\sqrt{\log(n)}$. Such a bound is uncommon for many kernels, such as the Gaussian kernel or polynomial kernels.
\end{remark}
Let
\begin{align}
f_{\mathtt{LI}}(x)=y_{i}+\frac{y_{i+1}-y_{i}}{x_{i+1}-x_{i}}(x-x_{i}), \mbox{ when } x\in [x_{i},x_{i+1}]
\end{align}
be the linear interpolation of these $n$ points. By Taylor expansion, Proposition \ref{prop:bound_second_derivative} guarantees that the linear interpolation approximates the model increasingly well as $n$ grows. Since the generalization error of the linear interpolation has a constant lower bound on noisy data, we have the following theorem:
\begin{theorem}[Overfitted network generalizes poorly]\label{thm:bad_gen}
Let $\{(x_{i},y_{i})\}$ be the set of $n$ equally spaced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With probability at least $1-\frac{2}{n}$, $f_{\infty}^{\mathtt{NTK}}(x)$ can be approximated by the linear interpolation, i.e.,
\begin{align}
\sup_{x\in [0,1]}|f^{\mathtt{NTK}}_{\infty}(x)-f_{\mathtt{LI}}(x)|\leq C\sqrt{\log(n)} /(n-1)^{2}
\end{align}
for some constant $C$. Moreover, the generalization error of $f_{\infty}^{\mathtt{NTK}}(x)$ is bounded away from zero, i.e.,
\begin{equation}
\mathcal{E}(f_{\infty}^{\mathtt{NTK}}) \geq C
\end{equation}
for some constant $C$.
\end{theorem}
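The second claim of Theorem \ref{thm:bad_gen} can be checked directly for the linear interpolant itself. A minimal simulation (the target $f_\star$ and the noise level are illustrative choices) shows that the excess risk of $f_{\mathtt{LI}}$ stays bounded away from zero as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative target and noise level.
f_star = lambda t: np.sin(2.0 * np.pi * t)
sigma = 0.5
x_test = rng.uniform(0.0, 1.0, 5000)

# Linear interpolation f_LI of noisy, equally spaced data: its excess risk
# stays bounded away from zero no matter how large n becomes.
errors = {}
for n in [100, 1000, 10000]:
    x = np.linspace(0.0, 1.0, n)
    y = f_star(x) + sigma * rng.standard_normal(n)
    f_li = np.interp(x_test, x, y)               # the interpolant f_LI
    errors[n] = np.mean((f_li - f_star(x_test)) ** 2)
    print(f"n = {n:>6}:  excess risk of f_LI = {errors[n]:.4f}")
```

The bias of $f_{\mathtt{LI}}$ vanishes as $n$ grows, but the interpolated noise contributes an error of order $\sigma^{2}$ at every sample size, which is exactly the mechanism behind the constant lower bound.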
\begin{remark}
To the best of our knowledge, this is the first result showing how wide neural networks interpolate the data. Combining Proposition \ref{prop:funct:approx} and Theorem \ref{thm:bad_gen}, our theory shows that for over-fitted neural networks, adding more parameters does not increase the model complexity but instead makes the models simpler. The experimental results are shown in Figure \ref{fig:LI} (a). This is why over-fitted neural networks are not affected much by over-parameterization.
\end{remark}
\begin{remark}
The second result of Theorem \ref{thm:bad_gen} gives an example showing that an over-fitted neural network can generalize poorly, which contradicts the observation {\bf (S)}. Since the lower bound $C$ in Theorem \ref{thm:bad_gen} is related to the scale of the noise, we believe the over-fitted neural network cannot generalize well in most cases unless the noise is small enough.
\end{remark}
Different from ridgeless regression with the RBF kernel (or a high-order polynomial kernel), Figure \ref{fig:LI} (a) and (b) show that the over-fitted NTK model $f^{\mathtt{NTK}}_{\infty}(x)$ and the wide neural network interpolate the data in a nearly linear fashion. Theorem \ref{thm:bad_gen} upper bounds the gap between the over-fitted NTK model and the linear interpolation by $O(\sqrt{\log n}/n^{2})$, while Figure \ref{fig:LI} (c) shows that the gap is in fact of order $1/n^{2}$. These experimental results are in line with the conclusions of Proposition \ref{prop:funct:approx} and Theorem \ref{thm:bad_gen}.
\begin{figure}[htbp]
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{NN_with_diff_m.png}
\centerline{(a)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{LI_NTK_RBF.png}
\centerline{(b)}
\end{minipage}%
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=\textwidth]{gap_between_NTK_LI.png}
\centerline{(c)}
\end{minipage}
\caption{(a): How one-hidden-layer neural networks with different widths interpolate the data; (b): Linear interpolation, $f^{\mathtt{NTK}}_{\infty}(x)$, and RBF kernel regression with $\gamma=1$; (c): The gap between $f^{\mathtt{NTK}}_{\infty}(x)$ and the linear interpolation. The inputs of the training data are the equally spaced one-dimensional points $\{x_{i}=\frac{i-1}{n-1}, i \in [n]\}$. The labels are drawn uniformly from $\{1,-1\}$; this choice does not affect the experimental results. The sample sizes are $n=100,200,\dots,1000$.}\label{fig:LI}
\end{figure}
\begin{comment}
\subsection{Speculation on $\mathbb{R}^{d}$}
\begin{figure}[htbp]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{min_eigenvalue_for_diff_d}
\centerline{(a) Minimum eigenvalue of NTK}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{test_error_of_NTK.png}
\centerline{(b) Generalization error for Overfitted Neural Networks}
\end{minipage}
\caption{(a) Minimum eigenvalue of NTK on $\mathbb{R}^{d}$: The input is the equally-distanced data $\{\boldsymbol{x}_i\}_{i=1}^n$, defined as $\{(\frac{i_1-1}{n_d-1},...,\frac{i_d-1}{n_d-1})\}$, $i_1,...,i_d\in[n_d]$, where $n_d=n^{\frac{1}{d}}$. We calculate the minimum eigenvalue of NTK with different minimum distances $\frac{1}{n_d-1}$; (b) Generalization error for Overfitted NTK $f^{\mathtt{NTK}}_{\infty}$ over $\mathbb{R}^{d}$: The input $x \sim \operatorname{unif}(0,1)^d$ where $d=2,3,4$. $f_{\star}(x) = \sin(\frac{\sum_{j=1}^{d}x_j}{\sqrt{d}})$ and $y=f_{\star}(x)+0.1\varepsilon$, where $\varepsilon \sim N(0,1)$. The sample size of training data $n=100,300,500,...,1900$ and the number of testing data is $1000$ for all experiments.}
\label{fig: d_larger_than_1}
\end{figure}
\subsubsection*{Spectral properties of NTK} The experiment of the minimum eigenvalue is presented in Figure \ref{fig: d_larger_than_1} (a), showing that for fixed $d$, the minimum eigenvalue $\lambda_{\min}$ of $K_d(\boldsymbol{X},\boldsymbol{X})$ tends to be $\Theta(d_{\min})$, i.e., $c d_{\min}\leq \lambda_{\min} \leq C d_{\min}$ for some constants $c$ and $C$.
\cite{bietti2019inductive} has shown that on $\mathbb{S}^{d-1}$, the eigenvalues associated with the bias-free NTK $\lambda_k \asymp k^{-\frac{d}{d-1}}$ (considering multiple roots). There are some zero eigenvalues for the bias-free NTK and adding a bias may prevent such zero eigenvalues. Thus, we believe that on $\mathbb{R}^{d}$, the eigenvalues of the NTK with biases $\lambda_k \asymp k^{-\frac{d+1}{d}}$. This conjecture is the key to the generalization ability of neural networks with early stopping.
\subsubsection*{Over-parameterized neural networks are minimax rate optimal with early stopping} We can make the following assumptions:
\begin{assumption}\label{assumption:EDR_for_R_d}
Let $\lambda_j,j=0,1,2\dots$ be the eigenvalues associated with the NTK $K_d$ defined on $[0,1]^{d}$ and satisfies
\begin{equation}
\lambda_{j} \asymp j^{-\frac{d+1}{d}}.
\end{equation}
\end{assumption}
\begin{assumption}\label{assumption:f_star_in_H_K_d}
$f_{\star}\in \mathcal{H}_{K_d}$ where $K_d$ is the NTK.
\end{assumption}
Then with these two assumptions, we have the following results:
\begin{proposition}\label{prop:early:stopping:d}
Suppose Assumption \ref{assumption:EDR_for_R_d} and \ref{assumption:f_star_in_H_K_d} hold. For the NTK regression, if we stop the training process at $t_{\star}=\Theta(n^{-(d)/(2d+1)})$, then
\begin{equation}
\mathcal{E}(f_{t_{\star}}^{\mathtt{NTK}})=O( n^{-(d+1)/(2d+1)}\log^{2}\frac{6}{\delta}),
\end{equation}
with probability at least $1-\delta$ and
\begin{equation}
\inf_{\hat{f}_{n}}\sup_{f_{\star}\in\mathcal{H}_{K},\lVert f_{\star}\rVert\leq R}\mathbf{E}\mathcal{E}(\hat{f}_{n}) =\Omega( n^{-(d+1)/(2d+1)}).
\end{equation}
\end{proposition}
Thus, combined with Theorem \ref{thm:risk:approx}, we show the generalization performance of neural networks with early stopping.
\subsubsection*{Over-fitted neural networks generalize poorly} The experiment of the generalization error is presented in Figure \ref{fig: d_larger_than_1} (b), showing that for fixed $d$, the generalization error of $f^{\mathtt{NTK}}_{\infty}$ is bounded away from zero even when $n$ is increasing.
\end{comment}
\subsection{Properties of NTK}\label{subsection:properties_of_ntk}
In Section \ref{ntk:positive}, we have shown the strict positive definiteness of the NTK. Strict positive definiteness only guarantees that $\lambda_{n}>0$; we can actually prove a stronger statement.
\begin{lemma}\label{min_eigenvalue} Let $X=\{x_{1},...,x_{n}\}\subset [0,\pi]$ and $d_{\min}=\min_{i\neq j}|x_{i}-x_{j}|$. The minimum eigenvalue, $\lambda_{\min}$, of $K=(K(x_{i},x_{j}))_{1\leq i,j\leq n}$ satisfies
\begin{equation}
\lambda_{\min}\geq \frac{d_{\min}}{2\pi}.
\end{equation}
\end{lemma}
The minimum eigenvalue of $K(X_{n},X_{n})$ is of particular interest in understanding the dynamics of (very) wide neural networks. Several groups of researchers have studied it for high-dimensional data (e.g., \cite{ghorbani2020neural},\cite{nguyen2021tight}). Most works implicitly or explicitly assumed that the minimum eigenvalue is a positive constant (\cite{hu2021regularization},\cite{suh2021non}). Lemma \ref{min_eigenvalue} shows that, even for equally spaced one-dimensional data ($d_{\min}=\frac{1}{n-1}$), the minimum eigenvalue depends on the sample size.
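This sample-size dependence is easy to observe numerically. The following sketch uses a substitute kernel (assumption: $1+\min(s,t)$, chosen only because it is strictly positive definite on distinct points; the closed form of $K$ is not restated here) and shows $\lambda_{\min}$ tracking $d_{\min}$ up to a constant:

```python
import numpy as np

# Substitute kernel (assumption): K(s, t) = 1 + min(s, t); the closed form of
# the NTK is not restated here. lambda_min tracks d_min up to a constant.
def min_eigenvalue(n):
    x = np.linspace(0.0, np.pi, n)             # equally spaced, d_min = pi/(n-1)
    K = 1.0 + np.minimum.outer(x, x)
    return np.linalg.eigvalsh(K)[0]

ratios = {}
for n in [50, 100, 200, 400]:
    d_min = np.pi / (n - 1)
    ratios[n] = min_eigenvalue(n) / d_min
    print(f"n = {n:>4}:  lambda_min / d_min = {ratios[n]:.3f}")
```

The ratio $\lambda_{\min}/d_{\min}$ stays roughly constant as $n$ grows, i.e., the minimum eigenvalue shrinks with the sample size rather than being a positive constant.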
\vspace{3mm}
\begin{lemma}\label{lemma:K_decay_rate}
Let $\{\lambda_{j}, j=0, 1,2,\cdots\}$ be the eigenvalues associated with the kernel $K$ on $[0,1]$. Then we have
\begin{equation}
\frac{1}{2\pi^3} \frac{1}{j^{2}}\leq \lambda_j \leq \frac{14}{\pi^3} \frac{1}{j^{2}}, \quad j=1,2,\dots
\end{equation}
\end{lemma}
\subsection{Overfitted neural networks cannot generalize well}\label{ntk:linear interpolation}
Since the gradient flow $f_{t}^{m}$ can be effectively approximated by the NTK flow $f_{t}^{\mathtt{NTK}}$, we focus on the asymptotic behaviour of $f_{t}^{\mathtt{NTK}}$ from now on, and in particular on its behaviour when $t$ is sufficiently large.
To be more concrete, we consider the equally spaced one-dimensional data $\{X_n,Y_n\}=\{(x_{i},y_{i}) \mid x_{i}=\frac{i-1}{n-1}, i \in [n]\}$.
Let
\begin{align}
f_{LI}(x)=y_{i}+\frac{y_{i+1}-y_{i}}{x_{i+1}-x_{i}}(x-x_{i}), \mbox{ when } x\in [x_{i},x_{i+1}]
\end{align}
be the linear interpolation of these $n$ points. We can prove the following statement.
\begin{proposition}
[Overfitted NTK model can be approximated by linear interpolation]\label{LI}
Let $\{(x_{i},y_{i})\}$ be the set of $n$ equally spaced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. The over-fitted NTK model with zero initialization $f^{\mathtt{NTK}}_{\infty}(x) =K(x,\boldsymbol{X})K^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}$ can be approximated by the linear interpolation, i.e.,
\begin{align}
\sup_{x\in [0,1]}|f_{\infty}^{\mathtt{NTK}}(x)-f_{LI}(x)|\leq C/(n-1)^{2}
\end{align}
for some constant $C$.
\end{proposition}
Proposition \ref{LI} implies that a lazily trained over-fitted neural network can be approximated by linear interpolation. On the other hand, it is well known that linear interpolation of noisy data does not generalize well. Thus, we have the following statement.
\begin{theorem}[Overfitted network generalizes poorly]\label{thm:bad_gen:v0}
Let $\{(x_{i},y_{i})\}$ be the set of $n$ equally spaced data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$, $i\in [n]$. The error of the over-fitted NTK model with zero initialization $f^{\mathtt{NTK}}_{\infty}(x) =K(x,X_n)K^{-1}(X_n,X_n)Y_n$ is bounded away from zero, i.e.,
\begin{equation}
\mathbf{E}\lVert f^{\mathtt{NTK}}_{\infty}-f_{\star}\rVert^{2} \geq C
\end{equation}
for some constant $C$.
\end{theorem}
Theorem \ref{thm:bad_gen} shows that an overfitted network cannot generalize well, meaning that the training process should be stopped before the network overfits the data.
\subsection{Wide neural networks are minimax optimal with early stopping}
We have shown in Section \ref{ntk:linear interpolation} that, when the equally spaced data are fitted, the lazily trained one-hidden-layer neural network can be approximated by linear interpolation. Thus, a lazily trained over-fitted neural network cannot generalize well.
Lin et al. \cite{lin2020optimal} considered the regression problem (Equation \eqref{equation:true_model}). They investigated the generalization ability of the solution obtained by the gradient descent method with $0\in \mathcal{H}_{\Phi}$ as the initialization point (function), and showed that a properly chosen early stopping strategy leads to a minimax optimal solution if the decay rate of the eigenvalues of the kernel is known. By Lemma \ref{lemma:K_decay_rate}, we have the following theorem.
\begin{theorem}[Optimality of early stopping]\label{thm:early:stopping}
For the NTK flow with zero initialization, if we stop the training process at $t_{\star}=c n^{2/3}$, the resulting estimator $f_{t_{\star}}^{\mathtt{NTK}}$ satisfies
\begin{equation}
\mathbf{E}\lVert f_{t_{\star}}^{\mathtt{NTK}}(x)-f_{\star}\rVert^{2} \leq Cn^{-2/3},
\end{equation}
which matches the minimax lower bounds, since we have
\begin{equation}
\inf_{\hat{f}}\sup_{f_{\star}\in\mathcal{H}_{K},\lVert f_{\star}\rVert\leq R}\mathbf{E}\lVert \hat{f}-f_{\star}\rVert^{2} \geq Cn^{-2/3}.
\end{equation}
\end{theorem}
Combining Theorem \ref{thm:early:stopping} and Proposition \ref{prop: generalization closeness}, Corollary \ref{cor: nn early stopping} follows.
\begin{corollary}\label{cor: nn early stopping}
If the width $m$ is sufficiently large, then we have
\begin{equation}
\mathbf{E}\lVert f_{t_{\star}}^{m}(x)-f_{\star}\rVert^{2} \leq Cn^{-2/3},
\end{equation}
i.e., the fact that the neural network with early stopping is minimax optimal holds with probability at least $1-P(m)$ over the random initialization.
\end{corollary}
The statement that a neural network that overfits cannot generalize well seems to contradict the observation (S); however, we realize that this might be caused by the subtle difference between zero training label-error and zero training loss. In other words, training is actually stopped earlier than the time needed to overfit the data.
We present this subtle difference through the following illustrative example.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{100accuracy_vs_interpolation}
\caption{Overfitting vs. 100\% label accuracy: 4 training data points $\{(0,0),(\frac{1}{3},1),(\frac{2}{3},0), (1,1) \}$. We use gradient descent to train a one-hidden-layer neural network with large width $m=1000$. Compared with the interpolation regime, the loss at the epoch reaching 100\% label accuracy is still large, and the function at that epoch is also far from the function in the interpolation regime.}
\label{fig: interpolation vs acc}
\end{figure}
\subsection{On the role of signal strength}
Theorem \ref{thm:early:stopping} shows that a carefully chosen early stopping time will produce a neural network achieving the minimax rate.
Zhang et al. \cite{zhang2016understanding} observed that the implicit early stopping strategy, i.e., stopping the training process once 100\% label accuracy is reached, can generalize well. One may speculate that this implicit early stopping time coincides with the optimal stopping time in Theorem \ref{thm:early:stopping}; however, this sounds too good to be true. In fact, Zhang et al. \cite{zhang2016understanding} discussed the role of explicit early stopping strategies in training neural networks. They observed that: 1.~early stopping indeed improves generalization on ImageNet; and 2.~early stopping does not help much on CIFAR10. These observations make the effect of implicit early stopping elusive; we cannot simply claim that the implicit early stopping strategy produces a rate-optimal neural network.
We hypothesize that the signal strength plays an indispensable role in the success of implicit early stopping strategies. More precisely, when the signal strength is strong enough, the implicit early stopping strategy produces a stopping time near the optimal stopping time $t_{\star}$; when the signal strength is deteriorated (e.g., after some label-corruption procedure), it produces a stopping time far from $t_{\star}$. We give a qualitative analysis of the signal strength from two aspects in Section \ref{sec:signal} and answer why over-fitted neural networks can generalize on real data. The analysis is justified in Section \ref{sec:experments} through several experiments on both synthetic and real data. In summary, we fill in the last piece of the jigsaw in reconciling the controversial observation (S) with the bias-variance trade-off doctrine of classical statistical learning theory.
\subsection{Settings of the neural networks}
We are interested in analyzing the training process of one-hidden-layer neural networks,
\begin{equation*}
\mathcal{F}^m=\left\{f_{\boldsymbol{\theta}}~\middle|~f_{\boldsymbol{\theta}}(\boldsymbol{x}) = \frac{1}{\sqrt{m}}\sum_{r=1}^{2m} \left(a_{r}\sigma\left(\langle \boldsymbol{w}_{r},\boldsymbol{x}\rangle+b_{r}\right)\right) + b\right\},
\end{equation*}
where $\sigma(z) = \max\{z,0\}$ and $\boldsymbol{\theta}=\operatorname{vec}(\{a_{r},\boldsymbol{w}_{r},b_{r},b,r\in [2m]\})$ collects the parameters of the neural network. We initialize $a_{r}(0),\boldsymbol{w}_{r,j}(0),b_{r}(0), b \sim \mathcal{N}(0,1)$ for $r\in [m]$, $j\in [d]$, and set $a_{r+m}(0)=a_{r}(0)$, $\boldsymbol{w}_{r+m,j}(0)=\boldsymbol{w}_{r,j}(0)$, $b_{r+m}(0)=b_{r}(0)$. Given the
data $\{(\boldsymbol{x}_{i},y_{i})\in \mathbb{R}^{d}\times\mathbb{R}, i\in[n]\}$, we are interested in analyzing the gradient flow of the empirical loss function
\begin{equation*}
\mathcal{L}_{n}(f_{\boldsymbol{\theta}})=\frac{1}{2n}\sum_{i=1}^{n}\left(y_{i}-f_{\boldsymbol{\theta}}(\boldsymbol{x}_{i})\right)^{2}=\frac{1}{2n}\lVert \boldsymbol{y}-f_{\boldsymbol{\theta}}(\boldsymbol{X})\rVert^{2}
\end{equation*}
where $\boldsymbol{X}\in\mathbb{R}^{n\times d}$, $\boldsymbol{y}\in\mathbb{R}^{n}$ and $f\in \mathcal{F}^{m}$.
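The network and the empirical loss above translate directly into code. A minimal sketch (the sizes $d$, $m$, $n$ and the synthetic data are illustrative; the duplicated second half of the hidden units follows the initialization described above):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sizes; 2m hidden units with the second half duplicating the
# first at initialization, as described above.
d, m, n = 2, 512, 64
a_half = rng.standard_normal(m)
W_half = rng.standard_normal((m, d))
c_half = rng.standard_normal(m)
a = np.concatenate([a_half, a_half])           # a_{r+m}(0) = a_r(0)
W = np.vstack([W_half, W_half])                # w_{r+m}(0) = w_r(0)
c = np.concatenate([c_half, c_half])           # b_{r+m}(0) = b_r(0)
b = rng.standard_normal()

def f_theta(X):
    # (1 / sqrt(m)) * sum_r a_r * ReLU(<w_r, x> + b_r) + b
    return (a * np.maximum(X @ W.T + c, 0.0)).sum(axis=1) / np.sqrt(m) + b

# Empirical squared loss L_n = (1 / 2n) * ||y - f_theta(X)||^2.
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
loss = 0.5 * np.mean((y - f_theta(X)) ** 2)
print("L_n at initialization:", loss)
```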
We collect some well-known results in this subsection to make the paper more accessible to readers.
Let $\boldsymbol{\Theta}=\left\{\boldsymbol{\theta}~\middle|~ \boldsymbol{\theta}\in\mathbb{R}^{(d+2)m+1}\right\}$ be the parameter space of $\mathcal{F}^{m}$. The gradient flow in $\boldsymbol{\Theta}$ induced by the loss function is given by
\begin{equation}
\begin{aligned}
\dot{\boldsymbol{\theta}}(t) &=\frac{\mathrm{d}}{\mathrm{d} t}\boldsymbol{\theta}(t)=-\nabla_{\boldsymbol{\theta}}\mathcal{L}_{n}(f_{\boldsymbol{\theta}})= - \frac{1}{n}\nabla_{\boldsymbol{\theta}} f_{\boldsymbol{\theta}(t)}(\boldsymbol{X})^{\mathsf{T}} (f_{\boldsymbol{\theta}(t)}(\boldsymbol{X})-\boldsymbol{y})
\end{aligned}
\end{equation}
where $\nabla_{\boldsymbol{\theta}} f_{\boldsymbol{\theta}(t)}(\boldsymbol{X})$ is an $n\times ((d+2)m+1)$ matrix.
This flow induces an associated flow in $\mathcal{F}^{m}$:
\begin{equation}\label{nn:f:flow}
\begin{aligned}
\dot{f}_{\boldsymbol{\theta}(t)}(\boldsymbol{x}) &=\frac{\mathrm{d} }{\mathrm{d} t}f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})=\nabla_{\boldsymbol{\theta}} f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})\dot{\boldsymbol{\theta}}(t)= -\frac{1}{n} K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{X}) (f_{\boldsymbol{\theta}(t)}(\boldsymbol{X})-\boldsymbol{y}),
\end{aligned}
\end{equation}
where $K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{X}) =\nabla_{\boldsymbol{\theta}} f_{\boldsymbol{\theta}(t)}(\boldsymbol{x}) \nabla_{\boldsymbol{\theta}} f_{\boldsymbol{\theta}(t)}(\boldsymbol{X})^{\mathsf{T}}$. We often refer to $K_{\boldsymbol{\theta}(t)}$ as the neural network kernel which is a time-varying kernel.
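The kernel $K_{\boldsymbol{\theta}}$ can be assembled from the explicit per-parameter gradients of the one-hidden-layer ReLU network. A sketch (illustrative sizes, $M$ hidden units without the duplication trick):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative sizes; M hidden units, no duplication trick.
d, M = 3, 4096
a = rng.standard_normal(M)
W = rng.standard_normal((M, d))
c = rng.standard_normal(M)

def K_theta(X):
    # K_theta(x, x') = <grad_theta f(x), grad_theta f(x')> for
    # f(x) = (1 / sqrt(M)) sum_r a_r ReLU(<w_r, x> + b_r) + b.
    Z = X @ W.T + c                  # pre-activations, shape (n, M)
    S = np.maximum(Z, 0.0)           # from d f / d a_r
    D = (Z > 0.0) * a                # from d f / d w_r and d f / d b_r
    G = X @ X.T + 1.0                # <x, x'> + 1 collects those two terms
    return (S @ S.T + G * (D @ D.T)) / M + 1.0   # + 1 from d f / d b

X = rng.standard_normal((5, d))
K = K_theta(X)
print("symmetric:", np.allclose(K, K.T))
print("min eigenvalue:", np.linalg.eigvalsh(K)[0])
```

By construction the result is a Gram matrix of gradient vectors, hence symmetric and positive semi-definite.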
\vspace{3mm}
Since Equation \eqref{nn:f:flow} is highly non-linear and an explicit solution is hard to find, various approximate solutions \cite{mei2019mean, karakida2019universal, sirignano2022mean, eldan2021non} have been proposed to characterize the asymptotic behaviour of these equations. In the seminal paper \cite{jacot2018neural}, Jacot et al. observed that
\begin{align}
K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')\xrightarrow{m\rightarrow \infty} K(\boldsymbol{x},\boldsymbol{x}')
\end{align}
where $K(\boldsymbol{x},\boldsymbol{x}')$ is a time independent kernel. This kernel, now often referred to as the neural tangent kernel (NTK), leads us to consider the following gradient flow
\begin{equation}\label{ntk:f:flow}
\begin{aligned}
\dot{f}^{\mathtt{NTK}}_{t}(\boldsymbol{x})=\frac{\mathrm{d}}{\mathrm{d} t}f^{\mathtt{NTK}}_{t}(\boldsymbol{x})=-\frac{1}{n}K(\boldsymbol{x},\boldsymbol{X})(f^{\mathtt{NTK}}_{t}(\boldsymbol{X})-\boldsymbol{y})
\end{aligned}
\end{equation}
in the space $\mathcal{F}$, the reproducing kernel Hilbert space (RKHS) associated with the kernel $K$, where $K(\boldsymbol{x},\boldsymbol{X}) = (K(\boldsymbol{x},\boldsymbol{x}_1),\dots,K(\boldsymbol{x},\boldsymbol{x}_n))\in \mathbb{R}^{1\times n}$. As with the neural network function, we assume the zero initialization $f^{\mathtt{NTK}}_{0}(\boldsymbol{x})=0$.
Though it is hard to analyze the solution $f_{\boldsymbol{\theta}(t)}$ of Equation \eqref{nn:f:flow} directly, when $m$ is sufficiently large we may first study the solution $f^{\mathtt{NTK}}_{t}$ of Equation \eqref{ntk:f:flow}. This simplifies the problem substantially, as Equation \eqref{ntk:f:flow} can be solved explicitly:
\begin{equation}\label{ntk:solution}
f_t^{\mathtt{NTK}}(\boldsymbol{x})=K(\boldsymbol{x},\boldsymbol{X})K(\boldsymbol{X},\boldsymbol{X})^{-1}(\boldsymbol{I}-e^{-\frac{1}{n}K(\boldsymbol{X},\boldsymbol{X})t})\boldsymbol{y}.
\end{equation}
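As a numerical sanity check, the closed-form solution \eqref{ntk:solution} can be evaluated directly. The sketch below stands in a generic strictly positive definite kernel matrix for $K(\boldsymbol{X},\boldsymbol{X})$; the Gaussian kernel used in the accompanying check is an assumption for illustration only.

```python
import numpy as np

def ntk_flow_solution(K_xX, K_XX, y, t, n):
    # f_t^NTK(x) = K(x,X) K(X,X)^{-1} (I - exp(-t K(X,X)/n)) y,
    # with the matrix exponential computed via the eigendecomposition
    # of the symmetric matrix K(X,X).
    vals, vecs = np.linalg.eigh(K_XX)
    decay = vecs @ np.diag(1.0 - np.exp(-(t / n) * vals)) @ vecs.T
    return K_xX @ np.linalg.solve(K_XX, decay @ y)
```

At $t=0$ the prediction vanishes, matching the zero initialization, and as $t\to\infty$ the flow interpolates the training labels on $\boldsymbol{X}$ whenever $K(\boldsymbol{X},\boldsymbol{X})$ is strictly positive definite.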
\subsection{Uniform NTK Approximation for Wide Neural Networks}\label{subsec: unif ntk approx}
One may naturally expect that the solution $f_{\boldsymbol{\theta}(t)}$ of the equation \eqref{nn:f:flow} and the solution $f^{\mathtt{NTK}}_{t}$ of the equation \eqref{ntk:f:flow} should behave in a similar way for sufficiently large $m$.
Furthermore, when $m$ is sufficiently large, the flow $f_{\boldsymbol{\theta}(t)}$ is often referred to as a lazily trained neural network, since the movement of $\boldsymbol{\theta}(t)$ is negligible in this regime (e.g., \cite{lee2019wide},\cite{du2018gradient},\cite{jacot2018neural}).
These statements have been justified in a sequence of works from various groups of researchers (see e.g., \cite{jacot2018neural},\cite{arora2019exact},\cite{lee2019wide}).
We are aware that \cite{hu2021regularization} and \cite{suh2022nonparametric} have shown that the generalization error of a neural network trained for too long is bounded away from zero. However, it is worth noting that both of their results use NTK regression as an intermediate quantity, so one must prove that the generalization error of the neural network can be approximated by that of NTK regression. Since the generalization error involves all $\boldsymbol{x}$, this requires the uniform convergence of the difference between the output of the neural network and that of NTK regression, i.e., $\sup_{\boldsymbol{x}}\abs{f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})}=o_{m}(1)$. Moreover, since \eqref{nn:f:flow} and \eqref{ntk:f:flow} show that the evolutions of $f_{\boldsymbol{\theta}(t)}$ and $f_{t}^{\mathtt{NTK}}$ are fully determined by the kernels $K_{\boldsymbol{\theta}(t)}$ and $K$ respectively, it is important to establish the uniform convergence of the difference between the two kernels, i.e., $\sup_{\boldsymbol{x},\boldsymbol{x}'}\abs{K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K(\boldsymbol{x},\boldsymbol{x}')}=o_{m}(1)$. To the best of our knowledge, we give the first concrete result showing that the generalization error of fully trained two-layer ReLU neural networks can be represented by NTK regression, namely Proposition \ref{prop: generalization closeness}.
\begin{theorem}\label{thm: kernel closeness}
Assume that $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))>0$. If $m$ is sufficiently large (depending on $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))$), then
\begin{equation*}
\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sup_{\boldsymbol{x},\boldsymbol{x}'}\sup_{t\geq 0}\left\lvert K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K(\boldsymbol{x},\boldsymbol{x}')\right\rvert\leq cm^{-(1-\gamma)}\log m\right)\geq 1-P(m),
\end{equation*}
where $P(m)\to 0$ as $m\to\infty$.
\end{theorem}
The key assumption that $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))>0$ will be verified in the next subsection.
Results from previous work (refs) can be summarized as point-wise convergence of the kernel. The problem is that, although each point-wise bound holds with high probability, the union bound gives no guarantee of uniform convergence with high probability, since the supremum ranges over uncountably many points. A more delicate analysis is needed.
For brevity, denote the pre-activation value and the activation pattern for the $r$-th neuron by $h_{\boldsymbol{\theta}}^{r}(\boldsymbol{x})=\langle \boldsymbol{w}_{r},\boldsymbol{x}\rangle+b_{r}$ and $\boldsymbol{1}_{\boldsymbol{\theta}}^{r}(\boldsymbol{x})=\boldsymbol{1}_{\{h_{\boldsymbol{\theta}}^{r}(\boldsymbol{x})\geq 0\}}$ respectively.
Lemma \ref{lem: event B}, Lemma \ref{lem: event C} and Lemma \ref{lem: event R} are the building blocks to prove Theorem \ref{thm: kernel closeness}. Proofs are left in Appendix \ref{app: unif ntk approx}.
Throughout this subsection, the exponents $\alpha,\beta,\gamma,\delta$ are chosen to satisfy:
\begin{itemize}
\item $\alpha<1/2$
\item $1-\gamma<\beta/2$
\item $\gamma>\max\{1-\alpha,\delta\}$
\item $\beta>\alpha$
\end{itemize}
\begin{lemma}\label{lem: event B}
Define the event
\begin{equation*}
\mathcal{B}=\left\{\omega\mid \lvert a_{r}(0)\rvert,\lvert \boldsymbol{w}_{r,j}(0)\rvert,\lvert b_{r}(0)\rvert\leq R_{B}, r\in[2m], j\in[d]\right\}, \text{~where~} R_{B}=\sqrt{3 \log m}.
\end{equation*}
Conditioning on the event $\mathcal{B}$, we have $\lvert h_{\boldsymbol{\theta}(0)}^{r}(\boldsymbol{x})\rvert\leq cR_{B}$ for all $r\in [2m]$ and $\boldsymbol{x}$. The event $\mathcal{B}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}(\mathcal{B})\geq 1-P_{\mathcal{B}}(m)$, where $P_{\mathcal{B}}(m)=cm^{-1/2}$.
\end{lemma}
\begin{remark}
Lemma \ref{lem: event B} controls the scale of the parameters of the neural network at initialization.
\end{remark}
Our main contribution is the uniform convergence of the kernel, which relies on analyzing the continuity of $K_{\boldsymbol{\theta}(0)}$ and $K$ after the domain $\mathcal{X}$ is discretized. On each dimension, we place $\lceil m^{\beta}\rceil$ points with distance $\epsilon=B/\lceil m^{\beta}\rceil$. Denote the resulting collection by $\mathcal{N}_{\epsilon}$, so that $\lvert\mathcal{N}_{\epsilon}\rvert=\lceil m^{\beta}\rceil^{d}$. The idea is to use $\mathcal{N}_{\epsilon}$ to discretize the domain $\mathcal{X}$ and then apply classical concentration inequalities to the points in $\mathcal{N}_{\epsilon}$, which yields probabilities decaying exponentially fast in $m$. The following three lemmas show that the events we condition on hold with probability converging to one as $m\to\infty$.
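The discretization step can be sketched as follows; the domain $\mathcal{X}=[0,B]^{d}$ and the exact placement of the grid points are assumptions consistent with the construction described above.

```python
import numpy as np
from itertools import product

def epsilon_net(B, m, beta, d):
    # ceil(m^beta) grid points per dimension, spaced eps = B / ceil(m^beta)
    # apart; the domain X = [0, B]^d is an assumption for illustration.
    k = int(np.ceil(m ** beta))
    eps = B / k
    grid_1d = np.arange(k) * eps            # {0, eps, ..., (k-1) eps}
    net = np.array(list(product(grid_1d, repeat=d)))
    return net, eps
```

Every point of $[0,B]^{d}$ then lies within $\sqrt{d}\,\epsilon$ (in Euclidean norm) of some point of $\mathcal{N}_{\epsilon}$, and $\lvert\mathcal{N}_{\epsilon}\rvert=\lceil m^{\beta}\rceil^{d}$, so a union bound over $\mathcal{N}_{\epsilon}$ only costs a factor polynomial in $m$.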
\begin{lemma}\label{lem: event R}
Define the events
\begin{gather*}
\mathcal{R}(\mathcal{N}_{\epsilon})=\left\{\omega ~\middle|~ \abs{h_{\boldsymbol{\theta}(0)}^{r}(\boldsymbol{z})}> 2cR\mbox{~holds for at least~}2(m-\lceil m^{\gamma}\rceil)\mbox{~of~} r \in [2m],\forall\boldsymbol{z}\in\mathcal{N}_{\epsilon}\right\}\\
\text{and~}\mathcal{R}=\left\{\omega ~\middle|~ \abs{h_{\boldsymbol{\theta}(0)}^{r}(\boldsymbol{x})}> cR\mbox{~holds for at least~}2(m-\lceil m^{\gamma}\rceil)\mbox{~of~} r\in [2m],\forall\boldsymbol{x}\right\},
\end{gather*}
where $R=cm^{-\alpha}$. If $m$ is sufficiently large, then $\mathcal{R}\supseteq\mathcal{R}(\mathcal{N}_{\epsilon})$ and the event $\mathcal{R}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{R}\right)\geq \mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{R}(\mathcal{N}_{\epsilon})\right)\geq 1-P_{\mathcal{R}}(m)$, where $P_{\mathcal{R}}(m)=\lceil m^{\beta}\rceil^{d}e^{-2m^{2\delta-1}}$.
\end{lemma}
\begin{remark}
Lemma \ref{lem: event R} shows that the pre-activation values of most neurons are large at initialization, which suggests that the activation patterns of these neurons are likely to stay unchanged during training: a large pre-activation value requires the parameters to travel far from their initialization before its sign can change. This is crucial for proving lazy training.
\end{remark}
\begin{lemma}\label{lem: event C}
Define the event
\begin{equation*}
\mathcal{C}=\left\{\omega ~\middle|~ \sup_{\boldsymbol{z},\boldsymbol{z}'\in\mathcal{N}_{\epsilon}}\lvert K_{\boldsymbol{\theta}(0)}(\boldsymbol{z},\boldsymbol{z}')-K(\boldsymbol{z},\boldsymbol{z}')\rvert\leq cm^{-1/2}\sqrt{\log m}\right\}.
\end{equation*}
If $m$ is sufficiently large, then the event $\mathcal{C}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{C}\right)\geq 1-P_{\mathcal{C}}(m)$, where $P_{\mathcal{C}}(m)=4m^{-(c_{0}c^{2}/c_{B}-2d\beta)}$, $c_{0}$ is an absolute constant, and $c_{B}$ is a constant depending on $B$.
\end{lemma}
\begin{remark}
It is intuitive that the point-wise convergence of $K_{\boldsymbol{\theta}(0)}-K$ holds simply by the law of large numbers. The result from Lemma \ref{lem: event C} shows this convergence is uniform for points in the collection $\mathcal{N}_{\epsilon}$.
\end{remark}
To prove Theorem \ref{thm: kernel closeness}, we only need Lemma \ref{lem: initial kernel close to fixed} and Lemma \ref{lem: kernel with lazy params close to initial kernel}, which establish the convergence of the kernel at initialization and during training respectively, together with the triangle inequality $\abs{K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K(\boldsymbol{x},\boldsymbol{x}')}\leq\abs{K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}(\boldsymbol{x},\boldsymbol{x}')}+\abs{K_{\boldsymbol{\theta}(0)}(\boldsymbol{x},\boldsymbol{x}')-K(\boldsymbol{x},\boldsymbol{x}')}$. The proofs of Lemma \ref{lem: initial kernel close to fixed} and Lemma \ref{lem: kernel with lazy params close to initial kernel} can be found in Appendix \ref{app: unif ntk approx}.
\begin{lemma}\label{lem: initial kernel close to fixed}
Conditioning on the event $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, if $m$ is sufficiently large, then
\begin{equation*}
\sup_{\boldsymbol{x},\boldsymbol{x}'}\lvert K_{\boldsymbol{\theta}(0)}(\boldsymbol{x},\boldsymbol{x}')-K(\boldsymbol{x},\boldsymbol{x}')\rvert\leq cm^{-(1-\gamma)}\log m.
\end{equation*}
\end{lemma}
\begin{remark}
Notice that Lemma \ref{lem: initial kernel close to fixed} is an extension of Lemma \ref{lem: event C}. The key reason Lemma \ref{lem: initial kernel close to fixed} holds is the continuity of $K_{\boldsymbol{\theta}(0)}$ and $K$.
\end{remark}
\begin{lemma}\label{lem: kernel with lazy params close to initial kernel}
Assume that $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))>0$. Conditioning on the event $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, if $m$ is sufficiently large (depending on $n,\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X})),\dots$), then
\begin{equation*}
\sup_{\boldsymbol{x},\boldsymbol{x}'}\sup_{t\geq 0}\lvert K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}(\boldsymbol{x},\boldsymbol{x}')\rvert\leq cm^{-(1-\gamma)}\log m.
\end{equation*}
\end{lemma}
\begin{remark}
Lemma \ref{lem: kernel with lazy params close to initial kernel} follows from the continuity of $K_{\boldsymbol{\theta}}$ with respect to $\boldsymbol{\theta}$, combined with the fact that $\boldsymbol{\theta}(t)$ stays close to $\boldsymbol{\theta}(0)$ throughout training.
\end{remark}
Using Theorem \ref{thm: kernel closeness}, we can further prove the following uniform function approximation, whose proof is deferred to Appendix \ref{app: unif ntk approx}.
\begin{proposition}\label{prop: function closeness}
Assume that $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))>0$. If $m$ is sufficiently large (depending on $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))$), then
\begin{equation*}
\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sup_{\boldsymbol{x}}\sup_{t\geq 0}\lvert f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})\rvert\leq cm^{-(1-\gamma)}\log m\right)\geq 1-P(m).
\end{equation*}
\end{proposition}
Thanks to the uniform function approximation, we can now formally show that the test performance of wide neural networks can be represented by NTK regression.
\begin{proposition}\label{prop: generalization closeness}
Assume that $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))>0$. Given the training data, if $m$ is sufficiently large, then
\begin{equation*}
\abs{\lVert f_{\boldsymbol{\theta}(t)}-f_{\star}\rVert^{2}-\lVert f_{t}^{\mathtt{NTK}}-f_{\star}\rVert^{2}}\leq cm^{-(1-\gamma)}\log m
\end{equation*}
holds with probability at least $1-P(m)$ w.r.t. the initialization and with probability at least $1-\delta$ w.r.t. the training data.
\end{proposition}
\begin{proof}
\begin{equation*}
\begin{aligned}
& \lVert f_{\boldsymbol{\theta}(t)}-f_{\star}\rVert^{2}-\lVert f_{t}^{\mathtt{NTK}}-f_{\star}\rVert^{2}\\
=&\int_{\mathcal{X}}(f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})+2(f_{t}^{\mathtt{NTK}}(\boldsymbol{x})-f_{\star}(\boldsymbol{x})))(f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x}))\mathrm{d} \mu(\boldsymbol{x})\\
\leq &c\sup_{\boldsymbol{x}}\sup_{t\geq 0}\lvert f_{\boldsymbol{\theta}(t)}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})\rvert,
\end{aligned}
\end{equation*}
where the last inequality uses the uniform boundedness of $f_{\boldsymbol{\theta}(t)}-f_{t}^{\mathtt{NTK}}$ and $f_{t}^{\mathtt{NTK}}-f_{\star}$; the claim then follows from Proposition \ref{prop: function closeness}.
\end{proof}
In the next subsection, we will discuss the strict positive definiteness of the NTK $K$ over $\mathbb{R}^{d}$.
\subsection{Strict positive definiteness of the NTK over $\mathbb{R}^{d}$}\label{ntk:positive}
To avoid potential confusion between positive definiteness and positive semi-definiteness, we introduce the notion of strict positive definiteness below.
\begin{definition}
A kernel $K$ is strictly positive definite over a domain $\mathcal{X}\subset \mathbb{R}^{d}$ if for any $n$ and any distinct points $\boldsymbol{x}_{1},\dots,\boldsymbol{x}_{n}\in \mathcal{X}$, the smallest eigenvalue $\lambda_{\min}$ of the matrix $(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}$ is larger than zero.
\end{definition}
We have to emphasize that, in order for the neural tangent approximation to hold, we need to know that $K$, the neural tangent kernel, is strictly positive definite on its domain. This has only been proved for NTKs defined on $\mathbb{S}^{d-1}$ (\cite{jacot2018neural}). If the neural tangent kernel $K$ is strictly positive definite, then one can easily see that the gradient flow $f^{\mathtt{NTK}}_{t}$ in $\mathcal{F}$ will achieve the global minimum. As a direct corollary, for sufficiently large $m$, the gradient flow $f_{\boldsymbol{\theta}(t)}$ in $\mathcal{F}^{m}$ will achieve the global minimum with high probability.
Combining the results in \cite{cho2009kernel} and \cite{jacot2018neural}, we can get the following explicit expression for the neural tangent kernel:
\begin{equation}
\begin{split}
K(\boldsymbol{x},\boldsymbol{x}') &= 2\left(1-\psi(\boldsymbol{x},\boldsymbol{x}')/\pi\right)(\langle \boldsymbol{x},\boldsymbol{x}' \rangle +1) \\
&+ \sqrt{\lVert\boldsymbol{x} - \boldsymbol{x}'\rVert^2_2 + \lVert \boldsymbol{x} \rVert^2_2 \lVert \boldsymbol{x}' \rVert^2_2 - \langle \boldsymbol{x},\boldsymbol{x}'\rangle^2 }/\pi + 1,
\end{split}
\end{equation}
where $\psi(\boldsymbol{x},\boldsymbol{x}')=\arccos\left(\frac{ \langle \boldsymbol{x},\boldsymbol{x}' \rangle+1}{\sqrt{ (\lVert \boldsymbol{x}\rVert^2_2 +1)(\lVert \boldsymbol{x}'\rVert^2_2 +1)}}\right)$.
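The displayed closed form can be implemented directly; the sketch below assumes $\boldsymbol{x},\boldsymbol{x}'$ are given as 1-D numpy arrays.

```python
import numpy as np

def ntk(x, xp):
    # Direct implementation of the displayed closed form; x, xp are
    # 1-D numpy arrays of the same dimension d.
    ip = float(x @ xp)
    nx2, nxp2 = float(x @ x), float(xp @ xp)
    cos_psi = (ip + 1.0) / np.sqrt((nx2 + 1.0) * (nxp2 + 1.0))
    psi = np.arccos(np.clip(cos_psi, -1.0, 1.0))   # clip guards rounding error
    # the argument of the square root is nonnegative by Cauchy-Schwarz
    root = np.sqrt(max(float(np.sum((x - xp) ** 2)) + nx2 * nxp2 - ip ** 2, 0.0))
    return 2.0 * (1.0 - psi / np.pi) * (ip + 1.0) + root / np.pi + 1.0
```

On the diagonal, $\psi(\boldsymbol{x},\boldsymbol{x})=0$ and the square root vanishes, so $K(\boldsymbol{x},\boldsymbol{x})=2(\lVert\boldsymbol{x}\rVert_{2}^{2}+1)+1$, which gives a quick sanity check.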
One of the most important observations in this study is the following:
\begin{theorem} \label{PD}
The neural tangent kernel $K(\boldsymbol{x},\boldsymbol{x}')$ is strictly positive definite on $\mathbb{R}^{d}$.
\end{theorem}
\begin{remark}
The idea of Theorem \ref{PD} is based on the fact that the inner-product kernel determined by a non-polynomial function is strictly positive definite on $\mathbb{S}^{d}_+ :=\{\boldsymbol{x}=(x_1,\dots,x_{d+1})\in \mathbb{S}^{d}\mid x_{d+1}>0\}$.
\end{remark}
We have shown the strict positive definiteness of the NTK, which only guarantees that $\lambda_{\min}>0$. However, the minimum eigenvalue can depend on the sample size $n$ (equivalently, on the minimum distance between samples), as shown in Figure \ref{fig:min_eigenvalue}. In fact, we can prove a stronger statement for one-dimensional data.
\begin{lemma}\label{min_eigenvalue} Let $X=\{x_{1},...,x_{n}\}\subset [0,\pi]$ and $d_{\min}=\min_{i\neq j}|x_{i}-x_{j}|$. The minimum eigenvalue, $\lambda_{\min}$, of $K=(K(x_{i},x_{j}))_{1\leq i,j\leq n}$ satisfies that
\begin{equation}
\lambda_{\min}\geq \frac{d_{\min}}{2\pi}.
\end{equation}
\end{lemma}
\begin{remark}
The minimum eigenvalue of $K(X_{n},X_{n})$ is of particular interest in understanding the dynamics of (very) wide neural networks. Several groups of researchers have studied it for high-dimensional data (e.g., \cite{ghorbani2020neural},\cite{nguyen2021tight}). Most works implicitly or explicitly assumed that the minimum eigenvalue is a positive constant (\cite{hu2021regularization},\cite{suh2021non}). Lemma \ref{min_eigenvalue} shows that, for the asymptotic analysis, the minimum eigenvalue depends on the sample size $n$. For example, for equally spaced one-dimensional data, the minimum distance is $d_{\min}=\frac{1}{n-1}$, and the result is shown in Figure \ref{fig:min_eigenvalue}\,(a). We also find that for $d>1$ the minimum eigenvalue depends on the minimum distance, as shown in Figure \ref{fig:min_eigenvalue}\,(b).
\end{remark}
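The dependence of $\lambda_{\min}$ on the number of points can be checked numerically with the closed-form kernel, which for $d=1$ reduces to $K(x,x')=2(1-\psi(x,x')/\pi)(xx'+1)+\lvert x-x'\rvert/\pi+1$. A minimal sketch:

```python
import numpy as np

def ntk_1d(x, xp):
    # d = 1 specialization of the closed-form NTK
    cos_psi = (x * xp + 1.0) / np.sqrt((x * x + 1.0) * (xp * xp + 1.0))
    psi = np.arccos(np.clip(cos_psi, -1.0, 1.0))
    return 2.0 * (1.0 - psi / np.pi) * (x * xp + 1.0) + abs(x - xp) / np.pi + 1.0

def min_eig(n):
    # lambda_min of the NTK matrix on n equally spaced points in [0, pi]
    xs = np.linspace(0.0, np.pi, n)
    K = np.array([[ntk_1d(u, v) for v in xs] for u in xs])
    return float(np.linalg.eigvalsh(K).min())
```

For nested grids, such as $5$ and $9$ equally spaced points in $[0,\pi]$ (the coarse grid is a subset of the fine one), Cauchy eigenvalue interlacing guarantees that $\lambda_{\min}$ cannot increase as points are added, consistent with Figure \ref{fig:min_eigenvalue}, while Lemma \ref{min_eigenvalue} keeps it above $d_{\min}/(2\pi)$.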
\begin{corollary}\label{cor: kernel closeness}
If $m$ is sufficiently large (depending on $\lambda_{\min}(K(\boldsymbol{X},\boldsymbol{X}))$), then
\begin{equation*}
\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sup_{\boldsymbol{x},\boldsymbol{x}'}\sup_{t\geq 0}\left\lvert K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K(\boldsymbol{x},\boldsymbol{x}')\right\rvert\leq cm^{-(1-\gamma)}\log m\right)\geq 1-P(m),
\end{equation*}
where $P(m)\to 0$ as $m\to\infty$.
\end{corollary}
\begin{figure}[htbp]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{min_eigenvalue_d_1.png}
\centerline{(a) $\lambda_{\min}$ for equally spaced data ($d=1$)}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{min_eigenvalue_for_diff_d.png}
\centerline{(b) $\lambda_{\min}$ for equally spaced data ($d>1$)}
\end{minipage}
\caption{Minimum eigenvalue vs.\ sample size (the minimum distance). (a) Let $\{x_i\}_{i=1}^n$ be the $n$ equally spaced one-dimensional data points in $[0,1]$, i.e., $x_i=\frac{i-1}{n-1}$ for $i\in[n]$. We calculate the minimum eigenvalue of the NTK for different sample sizes $n$. (b) For $d>1$, the equally spaced data $\{\boldsymbol{x}_i\}_{i=1}^n$ are defined as $\{(\frac{i_1-1}{n_d-1},\dots,\frac{i_d-1}{n_d-1})\}$, $i_1,\dots,i_d\in[n_d]$, where $n_d=n^{\frac{1}{d}}$. We calculate the minimum eigenvalue of the NTK for different minimum distances $\frac{1}{n_d-1}$.
\label{fig:min_eigenvalue}}
\end{figure}
\subsection{Proofs of Section \ref{subsec: unif ntk approx}}
For brevity, let $h_{\boldsymbol{x}}^{r}=\langle \boldsymbol{w}_{r},\boldsymbol{x}\rangle+b_{r}$, $\mathbf{1}_{\boldsymbol{x}}^{r}=\mathbf{1}_{\{h_{\boldsymbol{x}}^{r}\geq 0\}}$, $\mathbf{1}_{\boldsymbol{x}\boldsymbol{x}'}^{r}=\mathbf{1}_{\{h_{\boldsymbol{x}}^{r}\geq 0,h_{\boldsymbol{x}'}^{r}\geq 0\}}$ and $h_{\boldsymbol{x}}^{r}(t),\mathbf{1}_{\boldsymbol{x}}^{r}(t),\mathbf{1}_{\boldsymbol{x}\boldsymbol{x}'}^{r}(t)$ represent those at time $t$. To start with, we control the parameters at initialization.
\begin{lemma}\label{lem: event B}
Define the event
\begin{equation*}
\mathcal{B}=\left\{\omega\mid \lvert a_{r}(0)\rvert,\lvert \boldsymbol{w}_{r,j}(0)\rvert,\lvert b_{r}(0)\rvert\leq R_{B}, r\in[2m], j\in[d]\right\}, \text{~where~} R_{B}=\sqrt{3 \log m}.
\end{equation*}
Conditioning on the event $\mathcal{B}$, we have $\lvert h_{\boldsymbol{x}}^{r}(0)\rvert\leq (dB+1)R_{B}$ for all $r\in [2m]$ and $\boldsymbol{x}$. The event $\mathcal{B}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}(\mathcal{B})\geq 1-P_{\mathcal{B}}(m)$, where $P_{\mathcal{B}}(m)=\frac{2(d+2)}{\sqrt{2\pi}}m^{-1/2}$.
\end{lemma}
Our main contribution is the uniform convergence of the kernel. On each dimension, we place $\lceil m^{\beta}\rceil$ points with distance $\epsilon=B/\lceil m^{\beta}\rceil$, where $\beta>0$ is to be determined later. Denote the resulting collection by $\mathcal{N}_{\epsilon}$, so that $\lvert\mathcal{N}_{\epsilon}\rvert=\lceil m^{\beta}\rceil^{d}$. The idea is to use $\mathcal{N}_{\epsilon}$ to discretize the domain $\mathcal{X}$ and then apply classical concentration inequalities to the points in $\mathcal{N}_{\epsilon}$, which yields probabilities decaying exponentially fast in $m$. The following lemmas show that the events we condition on hold with probability converging to one as $m\to\infty$.
\begin{lemma}\label{lem: event R}
Define the events
\begin{gather*}
\mathcal{R}(\mathcal{N}_{\epsilon})=\left\{\omega ~\middle|~ \lvert h_{\boldsymbol{z}}^{r}(0)\rvert> 2(d^{1/2}B+1)R\mbox{~holds for at least~}m-\lceil m^{\gamma}\rceil\mbox{~of~} r \in [m],\forall\boldsymbol{z}\in\mathcal{N}_{\epsilon}\right\}\\
\text{and~}\mathcal{R}=\left\{\omega ~\middle|~ \lvert h_{\boldsymbol{x}}^{r}(0)\rvert> (d^{1/2}B+1)R\mbox{~holds for at least~}m-\lceil m^{\gamma}\rceil\mbox{~of~} r\in [m],\forall\boldsymbol{x}\right\},
\end{gather*}
where $R=\frac{\sqrt{2\pi}}{4(d^{1/2}B+1)}m^{-\alpha}$. If $m$ is larger than a constant depending on $d,B,\alpha,\beta,\gamma,\delta$, with $\gamma>\max\{1-\alpha,\delta\}$ and $\beta>\alpha$, then we have $\mathcal{R}\supseteq\mathcal{R}(\mathcal{N}_{\epsilon})$ and the event $\mathcal{R}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{R}\right)\geq \mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{R}(\mathcal{N}_{\epsilon})\right)\geq 1-P_{\mathcal{R}}(m)$, where $P_{\mathcal{R}}(m)=\lceil m^{\beta}\rceil^{d}e^{-2m^{2\delta-1}}$.
\end{lemma}
\begin{lemma}\label{lem: event C}
Define the event
\begin{equation*}
\mathcal{C}=\left\{\omega ~\middle|~ \sup_{\boldsymbol{z},\boldsymbol{z}'\in\mathcal{N}_{\epsilon}}\lvert K_{\boldsymbol{\theta}(0)}(\boldsymbol{z},\boldsymbol{z}')-K(\boldsymbol{z},\boldsymbol{z}')\rvert\leq c\frac{\log m}{\sqrt{m}}\right\}.
\end{equation*}
The event $\mathcal{C}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{C}\right)\geq 1-P_{\mathcal{C}}(m)$
\end{lemma}
We only need the following two lemmas, which control the kernel at initialization and during training respectively; Theorem \ref{thm: kernel closeness} then follows from the triangle inequality.
\begin{lemma}\label{lem: initial kernel close to fixed}
Conditioning on $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, we have
\begin{equation}
\sup_{\boldsymbol{x},\boldsymbol{x}'}\lvert K_{\boldsymbol{\theta}(0)}(\boldsymbol{x},\boldsymbol{x}')-K(\boldsymbol{x},\boldsymbol{x}')\rvert\leq \frac{1}{m^{1-\gamma}}R_{B}^{2}.
\end{equation}
\end{lemma}
\begin{lemma}\label{lem: kernel with lazy params close to initial kernel}
Conditioning on $\mathcal{B}\cap\mathcal{R}$, we have
\begin{equation}
\sup_{\boldsymbol{x},\boldsymbol{x}'}\sup_{t\geq 0}\lvert K_{\boldsymbol{\theta}(t)}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}(\boldsymbol{x},\boldsymbol{x}')\rvert\leq \frac{1}{m^{1-\gamma}}.
\end{equation}
\end{lemma}
\subsection{Proofs of Section \ref{ntk:positive}}
The proof of Theorem \ref{PD} is based on strict positive definiteness on $\mathbb{S}^d$.
\begin{proof}[Proof of Theorem \ref{PD}]
Let $\boldsymbol{z}^{\top}=(\boldsymbol{x}^{\top},1)$. We then have
\begin{align*}
K(\boldsymbol{x},\boldsymbol{x}') &= (\left<\boldsymbol{x},\boldsymbol{x}'\right>+1) \kappa_0(\boldsymbol{z},\boldsymbol{z}') +\kappa_1(\boldsymbol{z},\boldsymbol{z}')+ 1\\
&=\left<\boldsymbol{x},\boldsymbol{x}'\right> \kappa_0(\boldsymbol{z},\boldsymbol{z}') + 1 +\kappa_1(\boldsymbol{z},\boldsymbol{z}')+\kappa_{0}(\boldsymbol{z},\boldsymbol{z}')
\end{align*}
where $
\kappa_{n}(\boldsymbol{z},\boldsymbol{z}^\prime):=2\mathbb{E}_{\omega\sim \mathcal{N}(0,I)}[\Theta(\omega\cdot \boldsymbol{z})\Theta(\omega\cdot \boldsymbol{z}^\prime)(\omega\cdot \boldsymbol{z})^{n}(\omega\cdot \boldsymbol{z}^\prime)^{n}]$
and $\Theta(x):=\boldsymbol{1}_{\{x\geq 0\}}$ denotes the Heaviside step function, so that $\kappa_{0}$ and $\kappa_{1}$ are the arc-cosine kernels of degree $0$ and $1$.
\cite{cho2009kernel} showed that
\begin{equation}
\kappa_{n}(\boldsymbol{z},\boldsymbol{z}')=\lVert \boldsymbol{z}\rVert^{n}\lVert \boldsymbol{z}'\rVert^{n}J_{n}\left(\frac{\boldsymbol{z}}{\lVert \boldsymbol{z}\rVert},\frac{\boldsymbol{z}'}{\lVert \boldsymbol{z}'\rVert}\right)
\end{equation}
where $J_{n}$, defined on $\mathbb{S}^{d}\times \mathbb{S}^{d}$, depends only on the angle $\theta=\arccos\left(\frac{\boldsymbol{z}\cdot \boldsymbol{z}^\prime}{\lVert \boldsymbol{z}\rVert\lVert \boldsymbol{z}^\prime\rVert}\right)$.
We only need to prove that for $n$ distinct points $\boldsymbol{x}_{1},\cdots,\boldsymbol{x}_{n}\in \mathbb{R}^{d}$, the matrix $K_{n}=(K(\boldsymbol{x}_{i},\boldsymbol{x}_{j}))_{1\leq i,j\leq n}$ is strictly positive definite. Since $\|\boldsymbol{z}_{i}\|\geq 1$, $i=1,2,\cdots,n$, we have
\begin{align}
&\lambda_{\min}((\kappa_{1}(\boldsymbol{z}_{i},\boldsymbol{z}_{j}))_{1\leq i,j\leq n})\geq \lambda_{\min}((J_{1}(\boldsymbol{z}_{i}/\|\boldsymbol{z}_{i}\|,\boldsymbol{z}_{j}/\|\boldsymbol{z}_{j}\|))_{1\leq i,j\leq n}).
\end{align}
On the other hand, we have
\begin{align}
\lambda_{\min}((\kappa_{0}(\boldsymbol{z}_{i},\boldsymbol{z}_{j}))_{1\leq i,j\leq n})\geq \lambda_{\min}((J_{0}(\boldsymbol{z}_{i}/\|\boldsymbol{z}_{i}\|,\boldsymbol{z}_{j}/\|\boldsymbol{z}_{j}\|))_{1\leq i,j\leq n}).
\end{align}
Since \cite{jacot2018neural} showed that $J_{0}+J_{1}$ is strictly positive definite on $\mathbb{S}^{d}$, we know that
\begin{align}
\lambda_{\min}(K_{n})\geq \lambda_{\min}((\kappa_{1}(\boldsymbol{z}_{i},\boldsymbol{z}_{j}))_{1\leq i,j\leq n})+\lambda_{\min}((\kappa_{0}(\boldsymbol{z}_{i},\boldsymbol{z}_{j}))_{1\leq i,j\leq n})>0
\end{align}
as $\boldsymbol{z}_{i}/\|\boldsymbol{z}_{i}\|$ are distinct points in $\mathbb{S}^{d}$.
\end{proof}
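As a sanity check on the Cho--Saul identity used in the proof, the kernel $\kappa_{1}(\boldsymbol{z},\boldsymbol{z}')=2\mathbb{E}[\Theta(\omega\cdot\boldsymbol{z})\Theta(\omega\cdot\boldsymbol{z}')(\omega\cdot\boldsymbol{z})(\omega\cdot\boldsymbol{z}')]$, with Heaviside weight $\Theta(u)=\boldsymbol{1}_{\{u\geq 0\}}$, admits the closed form $\frac{1}{\pi}\lVert\boldsymbol{z}\rVert\lVert\boldsymbol{z}'\rVert(\sin\theta+(\pi-\theta)\cos\theta)$ from \cite{cho2009kernel}, which can be verified by Monte Carlo (the specific test vectors below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.array([0.5, -0.2, 1.0])
zp = np.array([1.0, 0.3, 1.0])

# Monte Carlo estimate of kappa_1 with the Heaviside weight Theta(u) = 1{u >= 0}
w = rng.standard_normal((400_000, 3))
u, v = w @ z, w @ zp
kappa1_mc = 2.0 * np.mean((u >= 0) * (v >= 0) * u * v)

# Cho-Saul closed form: kappa_1 = ||z|| ||z'|| (sin t + (pi - t) cos t) / pi
nz, nzp = np.linalg.norm(z), np.linalg.norm(zp)
t = np.arccos(np.clip(float(z @ zp) / (nz * nzp), -1.0, 1.0))
kappa1_cf = nz * nzp * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi
```

The two estimates agree up to Monte Carlo error, which decays like $O(1/\sqrt{\text{samples}})$.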
The proof of Theorem \ref{thm:bad_gen} is based on Proposition \ref{LI}, since Proposition \ref{LI} shows that $f_{\infty}^{NTK}$ is close to the linear interpolation. Thus, the proof of Proposition \ref{LI} is the key step and is also presented below.
Before we prove Proposition \ref{LI}, we need the following lemma:
\begin{lemma}\label{lemma:K_second_derivative_K_inv}
Suppose that $n>22$. There exists a constant $C$ such that for any $1\leq i \leq n-1$ and any $\xi_{i} \in(x_i,x_{i+1})$,
\begin{equation}\label{eqn:ess:bound}
\left|\left(K^{''}(\xi_{i},X_n)K^{-1}\right)_{j}\right| \leq
\begin{cases}
\quad C & j\in\{1,i,i+1,n\},\\
\quad \frac{C}{n-1} & j\not \in \{ 1,i,i+1,n\}.
\end{cases}
\end{equation}
\end{lemma}
The idea of Lemma \ref{lemma:K_second_derivative_K_inv} is to show that the second-order term in the Taylor expansion of $f_{\infty}^{NTK}$ is negligible when $n$ is large enough. That is why $f_{\infty}^{NTK}$ can be approximated by the linear interpolation.
\begin{proof}[Proof of Proposition \ref{LI}]
Since
\begin{equation*}
y_{i} = K(x_i,X_n)K^{-1}Y_n
\end{equation*}
and
\begin{equation*}
y_{i+1} = K(x_{i+1},X_n)K^{-1}Y_n,
\end{equation*}
the Taylor expansion with the mean value form of the remainder implies that for any $x\in(x_{i},x_{i+1})$, there exist $\xi_{i}$ and $\hat{\xi}_{i}\in (x_{i},x_{i+1})$ such that
\begin{align}\label{equation:taylor_expansion}
K(x,X_n)K^{-1}Y_n -y_{i}&= (x-x_i)K_{+}^{'}(x_i,X_n)K^{-1}Y_n + \frac{(x-x_i)^2}{2}K^{''}(\xi_i,X_n) K^{-1}Y_n,\\
y_{i+1}-y_{i}
&=(x_{i+1}-x_i)K_{+}^{'}(x_i,X_n)K^{-1}Y_n + \frac{(x_{i+1}-x_i)^2}{2}K^{''}(\hat{\xi}_i,X_n)K^{-1}Y_n
\end{align}
where
$K^{'}_{+}(x_{i},X_n)=\lim_{x\rightarrow x_{i}^{+}}\frac{\partial K(x,X_n)}{\partial x}$ denotes the right derivative. Thus,
\begin{equation}
\begin{aligned}\label{eq:higher:order}
K(x,X_n)K^{-1}Y_n& - y_i - \frac{(x-x_i)}{x_{i+1}-x_i} (y_{i+1}-y_{i}) \\
&=-\frac{(x-x_i)}{x_{i+1}-x_i}\frac{(x_{i+1}-x_i)^2}{2}K^{''}(\hat{\xi}_i,X_n)K^{-1}Y_n \\
&+\frac{(x-x_i)^2}{2}K^{''}(\xi_i,X_n) K^{-1}Y_n.
\end{aligned}
\end{equation}
By Lemma \ref{lemma:K_second_derivative_K_inv}, $K^{''}(\hat{\xi}_i,X_n) K^{-1}$ is a vector with at least $n-4$ entries of absolute value smaller than $C/(n-1)$ for some constant $C$. Thus, $\lvert K^{''}(\xi_i,X_n) K^{-1}Y_n\rvert<C$ for some constant $C$, and the RHS of \eqref{eq:higher:order} is bounded by $ \frac{C}{(n-1)^{2}}$ for some constant $C$.
\end{proof}
After showing that $f_{\infty}^{NTK}$ can be approximated by the linear interpolation, we also need the following lemma:
\begin{lemma}[Linear Interpolation cannot Generalize Well]\label{LI_not_good}
If $x \in [0,1]$ and $f_{\mathsf{LI}}$ is the linear interpolation estimator, then its generalization error is bounded away from zero, i.e.,
\begin{equation}
\int_0^1 (f_{\mathsf{LI}}(x)-f_{\star}(x))^2 dx \geq C
\end{equation}
for some constant $C>0$.
\end{lemma}
With Proposition \ref{LI} and Lemma \ref{LI_not_good}, we can prove Theorem \ref{thm:bad_gen}.
\begin{proof}[Proof of Theorem \ref{thm:bad_gen}]
\begin{equation}
\int_0^1 (f_{\infty}^{NTK}(x)-f_{\star}(x))^2 dx \geq \frac{1}{2}\int_0^1 (f_{\mathsf{LI}}(x)-f_{\star}(x))^2dx - \int_0^1 (f_{\mathsf{LI}}(x)-f_{\infty}^{NTK}(x))^2 dx,
\end{equation}
which follows from the elementary inequality $(u+v)^{2}\leq 2u^{2}+2v^{2}$. By Proposition \ref{LI}, $\int_0^1 ( f_{\mathsf{LI}}(x)-f_{\infty}^{NTK}(x))^2 dx $ is bounded by $\frac{C}{(n-1)^{4}}$. By Lemma \ref{LI_not_good}, $\int_0^1 (f_{\mathsf{LI}}(x)-f_{\star}(x))^2 dx \geq C$. To sum up, for $n$ large enough, $ \int_0^1 ( f_{\infty}^{NTK}(x)-f_{\star}(x))^2 dx\geq C$ for some constant $C>0$.
\end{proof}
\subsection{On the role of signal strength}
Proposition \ref{prop:early:stopping} shows that a carefully chosen early stopping time will produce a neural network achieving the minimax rate.
Zhang et al. \cite{zhang2016understanding} observed that the implicit early stopping strategy, i.e., stopping the training process once 100\% label accuracy is reached, can generalize well. One may speculate that this implicit early stopping time is the optimal stopping time appearing in Proposition \ref{prop:early:stopping}. However, this sounds too good to be true. In fact, Zhang et al. \cite{zhang2016understanding} discussed the role of explicit early stopping strategies in training neural networks. They observed that: 1.\ early stopping indeed improves generalization on ImageNet, and 2.\ early stopping does not help much on CIFAR10. These observations make the effect of implicit early stopping elusive. We cannot simply claim that the implicit early stopping strategy would produce a rate-optimal neural network.
We hypothesized that signal strength plays an indispensable role in the success of implicit early-stopping strategies. More precisely, when the signal strength is strong enough, the implicit early stopping strategy will produce a stopping time near the optimal stopping time $t_{\star}$; and when the signal strength deteriorates (e.g., after performing label corruption procedure), the implicit early stopping strategy will produce a stopping time far from the optimal stopping time $t_{\star}$.
Experiments from \cite{belkin2018understand} show that ``early stopping provides at most a minor improvement to classifier performance'' and MNIST labels are arguably close to a deterministic function of the features, as most (but not all) digit images are easily recognizable.
\subsection{Noise of data}
In Section \ref{intro}, we assume the noise $\epsilon\sim N(0,\sigma)$, which is rarely realistic in practice. Nevertheless, we can still posit a ground-truth function $f_{\star}(\boldsymbol{x})$ behind the data, and define the gap between the label and the ground-truth function as the noise, i.e.,
\begin{equation}
\epsilon = y - f_{\star}(x).
\end{equation}
In binary classification problems, the true label can be defined as
\begin{equation}
y_{\text{true}}=\boldsymbol{1}_{f_{\star}(x)>0.5}.
\end{equation}
The observed labels $y$ may differ from the true labels $y_{\text{true}}$. Since the ground-truth function is unknown and we actually care about the true label, we define small noise as follows:
\begin{definition}
The noise of the data point $(x,y)$ is small if
\begin{equation}
y=y_{\text{true}}
\end{equation}
\end{definition}
For example, suppose $f_{\star}(x_1)=0.1$ with label $y_1=0$, and $f_{\star}(x_2)=0.6$ with label $y_2=1$. These two data points have small noise, since the gap does not affect the true labels at $x_1$ and $x_2$, even though the square losses of the two points differ. However, if the noise flips the true label, label corruption happens. For example, if $f_{\star}(x_3)=0.1$ but the label is $y_3=1$, the noise changes the true label at $x_3$.
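The definition can be phrased as a small predicate; the threshold $0.5$ follows the binary classification rule above.

```python
def true_label(f_star_x, threshold=0.5):
    # y_true = 1{f_star(x) > threshold}
    return 1 if f_star_x > threshold else 0

def has_small_noise(y, f_star_x):
    # the noise at (x, y) is small iff the observed label equals the true label
    return y == true_label(f_star_x)
```

This reproduces the worked examples: $(f_{\star}(x_1),y_1)=(0.1,0)$ and $(f_{\star}(x_2),y_2)=(0.6,1)$ have small noise, whereas $(f_{\star}(x_3),y_3)=(0.1,1)$ is a corrupted label.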
In practice, the stopping time is commonly chosen as the epoch with maximum accuracy on the validation set, or an epoch after which the training loss has long stayed unchanged; in either case early stopping is implicitly at work. If the neural network is trained until it reaches 100\% training accuracy, one might suspect that the conditions of early stopping are violated and that the network cannot perform well on the test set. However, even when the noise is small, 100\% training accuracy does not imply a sufficiently small loss: the network may still have a large training loss at 100\% training accuracy, which means the training process is stopped before the interpolation regime.
If the noise is large, a neural network trained to 100\% training accuracy cannot generalize well. From Figure \ref{fig: MLP_diff_noise_mse} and Figure \ref{fig: Alexnet_cifar10_diff_noise_ce}, we find that as the percentage of label corruption increases, the training time to reach 100\% training accuracy increases; moreover, the generalization gap between `100\% training accuracy' and `best generalization' becomes larger. In these cases, explicit early stopping (e.g., cross-validation) remains essential for the generalization of the model.
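As a minimal sketch of explicit early stopping, one can pick the epoch maximizing validation accuracy; the training curve below is hypothetical:

```python
# Hypothetical validation curve; the function name is illustrative.

def early_stopping_epoch(val_accuracy):
    """Return the epoch with maximal validation accuracy (earliest on ties)."""
    return max(range(len(val_accuracy)), key=lambda t: (val_accuracy[t], -t))

# Under heavy label corruption, validation accuracy typically peaks well
# before the network reaches 100% training accuracy:
val_acc = [0.60, 0.72, 0.81, 0.84, 0.83, 0.79, 0.76, 0.75]
t_stop = early_stopping_epoch(val_acc)
assert t_stop == 3 and val_acc[t_stop] == 0.84
```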
In summary, we finally fill in the last piece of the jigsaw, reconciling the controversial observation (S) with the bias--variance trade-off doctrine of classical statistical learning theory.
\section{Proof of Section \ref{sec:main_result}}\label{app:ntk:approx}
We first prove Proposition \ref{prop:kernel:approx} and then prove Proposition \ref{prop:funct:approx}. For brevity, denote the pre-activation value and the activation pattern for the $r$-th neuron of the hidden layer of the neural network with parameters $\boldsymbol{\theta}$ by $h_{\boldsymbol{\theta},r}(\boldsymbol{x})=\langle \boldsymbol{w}_{r},\boldsymbol{x}\rangle+b_{r}$ and $\boldsymbol{1}_{\boldsymbol{\theta},r}(\boldsymbol{x})=\boldsymbol{1}_{\{h_{\boldsymbol{\theta},r}(\boldsymbol{x})\geq 0\}}$ respectively. For simplicity, we consider the neural network to have $2m$ neurons.
\subsection{Proof of Proposition \ref{prop:kernel:approx}}\label{app:kernel:approx}
We defer the proof of Proposition \ref{prop:kernel:approx} to the end of Section \ref{app:kernel:approx}. To start with, Lemma \ref{lem: event B}, Lemma \ref{lem: event R} and Lemma \ref{lem: event C} are the building blocks of the proof: in these lemmas we show that the events we need to condition on hold with probability converging to one as $m\to\infty$.
Lemma \ref{lem: event B} controls the scale of the parameters of the neural network at initialization.
\begin{lemma}\label{lem: event B}
Define the event
\begin{equation*}
\mathcal{B}=\left\{\omega\mid|a_{r}(0)|,|\boldsymbol{w}_{r,(j)}(0)|,|b_{r}(0)|\leq R_{B}, r\in[2m], j\in[d]\right\}, \text{~where~} R_{B}=\sqrt{3\log m}.
\end{equation*}
Conditioning on the event $\mathcal{B}$, we have $|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|\leq (dB+1)R_{B}$ for all $r\in [2m]$ and $\boldsymbol{x}\in\mathcal{X}$. The event $\mathcal{B}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}(\mathcal{B})\geq 1-P_{\mathcal{B}}(m)$, where $P_{\mathcal{B}}(m)=\frac{2(d+2)}{\sqrt{2\pi}}m^{-1/2}$.
\end{lemma}
\begin{proof}
Under our special initialization setting where $a_{r}(0)=-a_{r+m}(0)$, $\boldsymbol{w}_{r,(j)}(0)=\boldsymbol{w}_{r+m,(j)}(0)$, $b_{r}(0)=b_{r+m}(0)\sim \mathcal{N}(0,1)$ for $r\in[m]$, the total number of the elements in $\mathcal{B}$ that need to be controlled is $(d+2)m$.
For $Z\sim\mathcal{N}(0,1)$, the classical Gaussian tail bound gives (using $R_{B}=\sqrt{3\log m}\geq 1$ for the second inequality)
\begin{equation*}
\mathbf{P}(|Z|\geq R_{B})\leq\frac{2e^{-R_{B}^{2}/2}}{\sqrt{2\pi}R_{B}}\leq\frac{2e^{-R_{B}^{2}/2}}{\sqrt{2\pi}}=\frac{2}{\sqrt{2\pi}}m^{-3/2}.
\end{equation*}
Then $\mathbf{P}_{\boldsymbol{\theta}(0)}(\mathcal{B})\geq 1-\frac{2(d+2)}{\sqrt{2\pi}}m^{-1/2}$ by the union bound. Conditioning on $\mathcal{B}$, we have
\begin{equation*}
|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|=| \langle \boldsymbol{w}_{r}(0),\boldsymbol{x}\rangle+b_{r}(0)|\leq\| \boldsymbol{w}_{r}(0)\|_{2}\| \boldsymbol{x}\|_{2}+|b_{r}(0)|\leq (dB+1)R_{B}.
\end{equation*}
\end{proof}
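The Gaussian tail bound used in this proof can be checked numerically; the function names below are illustrative, and the only fact assumed is $\mathbf{P}(|Z|\geq R)=\mathrm{erfc}(R/\sqrt{2})$:

```python
import math

# Numerical check of the Gaussian tail bound; names are illustrative.

def gaussian_two_sided_tail(R):
    """P(|Z| >= R) for Z ~ N(0, 1)."""
    return math.erfc(R / math.sqrt(2.0))

def mills_bound(R):
    """The bound from the proof: 2 * exp(-R^2/2) / (sqrt(2*pi) * R)."""
    return 2.0 * math.exp(-R * R / 2.0) / (math.sqrt(2.0 * math.pi) * R)

# For R_B = sqrt(3 log m), dropping the 1/R_B factor (valid since R_B >= 1)
# leaves the bound (2 / sqrt(2*pi)) * m^{-3/2}:
for m in [10, 100, 1000, 10000]:
    R_B = math.sqrt(3.0 * math.log(m))
    tail = gaussian_two_sided_tail(R_B)
    assert tail <= mills_bound(R_B) <= (2.0 / math.sqrt(2.0 * math.pi)) * m ** -1.5
```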
Our main contribution is the uniform convergence of the kernel, which relies on the continuity of $K_{\boldsymbol{\theta}(0)}^{m}$ and $K_{d}$ together with a method similar to the epsilon-net argument. On each dimension, we place $\lfloor m^{\beta}\rfloor$ points with spacing
\begin{equation*}
\epsilon=2B/\lfloor m^{\beta}\rfloor
\end{equation*}
in $[-B,B]$ for some $\beta\in(0,1]$, and denote the resulting collection by $\mathcal{N}_{\epsilon}$, so that $|\mathcal{N}_{\epsilon}|=\lfloor m^{\beta}\rfloor^{d}$. The idea is to use $\mathcal{N}_{\epsilon}$ to discretize the domain $\mathcal{X}$ and then apply classical concentration inequalities at the points of $\mathcal{N}_{\epsilon}$, which makes the probability of the complement of the events decay exponentially fast in $m$. Then, by the continuity of $K_{\boldsymbol{\theta}(0)}^{m}$ and $K_{d}$, the events hold over all of $\mathcal{X}$ with high probability.
Lemma \ref{lem: event R} shows that the pre-activation values of most neurons are large, which hints that the activation pattern of these neurons is likely to stay unchanged during training, since a large pre-activation value requires the parameters to travel a long way from the initialization before the sign can change. This is crucial for proving that training wide neural networks falls into the lazy regime, where the parameters stay close to the initialization during training.
\begin{lemma}\label{lem: event R}
Define the events
\begin{equation*}
\mathcal{R}(\mathcal{N}_{\epsilon})=\left\{\omega ~\middle|~|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|\leq 2(dB+1)R\mbox{~holds for at most~}2\lfloor m^{\gamma}\rfloor\mbox{~of~}r\in [2m],\forall\boldsymbol{z}\in\mathcal{N}_{\epsilon}\right\}
\end{equation*}
and
\begin{equation*}
\mathcal{R}=\left\{\omega ~\middle|~|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|\leq (dB+1)R\mbox{~holds for at most~}2\lfloor m^{\gamma}\rfloor\mbox{~of~} r\in [2m],\forall\boldsymbol{x}\in [-B,B]^{d}\right\},
\end{equation*}
where $R=\frac{\sqrt{2\pi}}{4(dB+1)}m^{-\alpha}$ for some $\alpha\in(0,\beta)$ and $\gamma>\max\{1-\alpha,\delta\}$ with $\delta>1/2$. If $m$ is sufficiently large, then $\mathcal{R}\supseteq\mathcal{R}(\mathcal{N}_{\epsilon})$ and the event $\mathcal{R}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{R}\right)\geq \mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{R}(\mathcal{N}_{\epsilon})\right)\geq 1-P_{\mathcal{R}}(m)$, where $P_{\mathcal{R}}(m)=m^{d\beta}e^{-2m^{2\delta-1}}$.
\end{lemma}
\begin{proof}
Due to our special initialization setting, we only need to consider $r\in[m]$ since $|h_{\boldsymbol{\theta}(0),r+m}(\boldsymbol{z})|=|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|$ for $r\in[m]$. For every $\boldsymbol{z}\in\mathcal{N}_{\epsilon}$, let $T_{r}=\boldsymbol{1}_{\{|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|\leq 2(dB+1)R\}}$ with mean
\begin{equation*}
p=\mathbf{E}_{\boldsymbol{\theta}(0)}T_{r}=\mathbf{P}_{\boldsymbol{\theta}(0)}(|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|\leq 2(dB+1)R)\leq\frac{2}{\sqrt{2\pi}}\frac{2(dB+1)R}{\sqrt{\|\boldsymbol{z}\|_{2}^{2}+1}}\leq m^{-\alpha},
\end{equation*}
where the second inequality holds due to $h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})\sim\mathcal{N}(0,\lVert\boldsymbol{z}\rVert_{2}^{2}+1)$ and the density function of a standard Gaussian is upper bounded by $1/\sqrt{2\pi}$.
By Hoeffding's inequality (see Theorem 2.8 in \cite{boucheron2013concentration}), for all $\delta>0$, we have $\mathbf{P}_{\boldsymbol{\theta}(0)}\left( \sum_{r\in[m]}T_{r} \geq mp+m^{\delta}\right)\leq e^{-2m^{2\delta-1}}$.
Now we have
\begin{equation*}
\begin{aligned}
& \mathbf{P}_{\boldsymbol{\theta}(0)}\left( |h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|\leq 2(dB+1)R\mbox{~holds for at most~}\lfloor m^{\gamma}\rfloor\mbox{~of~} r \in[m]\right)\\
= & \mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sum_{r\in[m]}T_{r}\leq \lfloor m^{\gamma}\rfloor\right)=1-\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sum_{r\in[m]}T_{r}>\lfloor m^{\gamma}\rfloor\right)\\
\geq& 1-\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sum_{r\in[m]}T_{r}\geq mp+m^{\delta}\right)\geq 1-e^{-2m^{2\delta-1}},
\end{aligned}
\end{equation*}
where the first inequality holds when $m$ is large enough such that $mp+m^{\delta}\leq m^{1-\alpha}+m^{\delta}\leq \lfloor m^{\gamma}\rfloor$. Hence we have $\mathbf{P}_{\boldsymbol{\theta}(0)}(\mathcal{R}(\mathcal{N}_{\epsilon}))\geq 1-\lvert \mathcal{N}_{\epsilon}\rvert e^{-2m^{2\delta-1}}$ simply by the union bound. For every $\boldsymbol{x}$, we choose $\boldsymbol{z}\in\mathcal{N}_{\epsilon}$ such that $\|\boldsymbol{x}-\boldsymbol{z}\|_{2}\leq \sqrt{d}\epsilon$, so
\begin{equation*}
\begin{aligned}
\abs{h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})}&=\abs{h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})+\langle\boldsymbol{w}_{r}(0),\boldsymbol{z}-\boldsymbol{x}\rangle}\\
&\leq \abs{h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})}+\|\boldsymbol{w}_{r}(0)\|_{2}\|\boldsymbol{z}-\boldsymbol{x}\|_{2}\leq \abs{h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})}+dR_{B}\epsilon.
\end{aligned}
\end{equation*}
Thus $|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|\geq|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|-dR_{B}\epsilon>(dB+1)R$, where the last inequality holds when $m$ is large enough such that $dR_{B}\epsilon<(dB+1)R$.
\end{proof}
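A Monte-Carlo sketch of the event $\mathcal{R}$ at one fixed $\boldsymbol{z}$; all parameter values are illustrative, and we sample the pre-activations directly from their initialization law $\mathcal{N}(0,\|\boldsymbol{z}\|_{2}^{2}+1)$:

```python
import math, random

# Monte-Carlo sketch of event R at a fixed z (illustrative parameters): the
# number of neurons with |h_r| <= 2(dB+1)R stays below
# m*p + m^delta <= m^{1-alpha} + m^delta <= floor(m^gamma), as in the proof.

random.seed(0)
d, B, m = 2, 1.0, 200000
alpha, delta, gamma = 0.25, 0.6, 0.8          # gamma > max{1 - alpha, delta}
R = math.sqrt(2.0 * math.pi) / (4.0 * (d * B + 1.0)) * m ** -alpha
z = [B] * d                                    # any z in [-B, B]^d
sd = math.sqrt(sum(c * c for c in z) + 1.0)    # h_r ~ N(0, ||z||^2 + 1)

count = sum(abs(random.gauss(0.0, sd)) <= 2.0 * (d * B + 1.0) * R
            for _ in range(m))
assert count <= m ** (1.0 - alpha) + m ** delta <= math.floor(m ** gamma)
```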
It is intuitive that the point-wise convergence of $K_{\boldsymbol{\theta}(0)}^{m}$ to $K_{d}$ holds simply by the law of large numbers. Lemma \ref{lem: event C} shows that this convergence is uniform over the points in the collection $\mathcal{N}_{\epsilon}$.
\begin{lemma}\label{lem: event C}
Define the event
\begin{equation*}
\mathcal{C}=\left\{\omega ~\middle|~ \sup_{\boldsymbol{z},\boldsymbol{z}'\in\mathcal{N}_{\epsilon}}| K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{z},\boldsymbol{z}')-K_d(\boldsymbol{z},\boldsymbol{z}')|\leq C_{1}\sqrt{\frac{\log m}{m}}\right\},
\end{equation*}
where $C_{1}>0$ is a constant depending on $d,B,\beta$. If $m$ is sufficiently large, then the event $\mathcal{C}$ holds with high probability, i.e., $\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\mathcal{C}\right)\geq 1-P_{\mathcal{C}}(m)$, where $P_{\mathcal{C}}(m)=4m^{-d\beta}$.
\end{lemma}
Before giving the proof of Lemma \ref{lem: event C}, we need to examine the kernel of the neural network in more detail, so we introduce additional notation. Given the parameters $\boldsymbol{\theta}$ of the neural network, let $H_{\boldsymbol{\theta},r}(\boldsymbol{x},\boldsymbol{x}')=\left(\langle\boldsymbol{x},\boldsymbol{x}'\rangle+1 \right)a_{r}^{2}\boldsymbol{1}_{\boldsymbol{\theta},r}(\boldsymbol{x})\boldsymbol{1}_{\boldsymbol{\theta},r}(\boldsymbol{x}')$ and $G_{\boldsymbol{\theta},r}(\boldsymbol{x},\boldsymbol{x}')=\sigma(h_{\boldsymbol{\theta},r}(\boldsymbol{x}))\sigma(h_{\boldsymbol{\theta},r}(\boldsymbol{x}'))$ be the contributions to the kernel from the $r$-th neuron at the first and the second layer, respectively. Then we decompose
\begin{equation*}
K_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')=\langle\nabla_{\boldsymbol{\theta}}f_{\boldsymbol{\theta}}^{m}(\boldsymbol{x}),\nabla_{\boldsymbol{\theta}}f_{\boldsymbol{\theta}}^{m}(\boldsymbol{x}')\rangle=1+H_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')+G_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}'),
\end{equation*}
where
\begin{equation*}
H_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')=\frac{1}{m}\sum_{r\in[2m]}H_{\boldsymbol{\theta},r}(\boldsymbol{x},\boldsymbol{x}'),G_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')=\frac{1}{m}\sum_{r\in[2m]}G_{\boldsymbol{\theta},r}(\boldsymbol{x},\boldsymbol{x}').
\end{equation*}
A similar decomposition for the NTK is
\begin{equation*}
K_{d}(\boldsymbol{x},\boldsymbol{x}')=1+H(\boldsymbol{x},\boldsymbol{x}')+G(\boldsymbol{x},\boldsymbol{x}'),
\end{equation*}
where $H(\boldsymbol{x},\boldsymbol{x}')=\mathbf{E}_{\boldsymbol{\theta}(0)}H_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')$ and $G(\boldsymbol{x},\boldsymbol{x}')=\mathbf{E}_{\boldsymbol{\theta}(0)}G_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')$. Thanks to the decomposition, we can analyze each part of the kernel separately and then apply the triangle inequality to the whole kernel.
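The decomposition $K^{m}_{\boldsymbol{\theta}}=1+H^{m}_{\boldsymbol{\theta}}+G^{m}_{\boldsymbol{\theta}}$ can be verified numerically. We assume the parameterization $f(\boldsymbol{x})=b_{0}+m^{-1/2}\sum_{r=1}^{2m}a_{r}\sigma(\langle\boldsymbol{w}_{r},\boldsymbol{x}\rangle+b_{r})$ with a trainable output bias $b_{0}$ producing the constant $1$; this is our reading of the $1/m$-normalized kernel, not necessarily the exact setup of the paper:

```python
import math, random

# Assumption (not stated verbatim in the text): f(x) = b0 + m^{-1/2} *
# sum_{r=1}^{2m} a_r * relu(<w_r, x> + b_r); the output bias b0 contributes
# the constant 1 to the kernel. All values are illustrative.

random.seed(1)
d, m = 3, 500
relu = lambda v: max(v, 0.0)

# Symmetric initialization: a_{r+m} = -a_r, w_{r+m} = w_r, b_{r+m} = b_r.
a = [random.gauss(0, 1) for _ in range(m)]
w = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]
b = [random.gauss(0, 1) for _ in range(m)]
a, w, b = a + [-v for v in a], w + w, b + b

def features(x):
    """Gradient of f with respect to (b0, {a_r}, {w_r}, {b_r}), flattened."""
    phi = [1.0]                                          # df/db0
    for r in range(2 * m):
        h = sum(wi * xi for wi, xi in zip(w[r], x)) + b[r]
        act = 1.0 if h >= 0 else 0.0
        phi.append(relu(h) / math.sqrt(m))               # df/da_r
        phi.extend(a[r] * act * xi / math.sqrt(m) for xi in x)  # df/dw_r
        phi.append(a[r] * act / math.sqrt(m))            # df/db_r
    return phi

def H_G(x, xp):
    """H^m and G^m from the per-neuron contributions H_r and G_r."""
    dot = sum(xi * xpi for xi, xpi in zip(x, xp))
    Hs = Gs = 0.0
    for r in range(2 * m):
        h = sum(wi * xi for wi, xi in zip(w[r], x)) + b[r]
        hp = sum(wi * xi for wi, xi in zip(w[r], xp)) + b[r]
        ind = (1.0 if h >= 0 else 0.0) * (1.0 if hp >= 0 else 0.0)
        Hs += (dot + 1.0) * a[r] ** 2 * ind
        Gs += relu(h) * relu(hp)
    return Hs / m, Gs / m

x, xp = [0.2, -0.5, 0.9], [-0.3, 0.7, 0.1]
K = sum(u * v for u, v in zip(features(x), features(xp)))
H, G = H_G(x, xp)
assert abs(K - (1.0 + H + G)) < 1e-9
```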
\begin{proof}[Proof of Lemma \ref{lem: event C}]
Notice that $H_{\boldsymbol{\theta}(0),r}(\boldsymbol{z},\boldsymbol{z}')$ and $G_{\boldsymbol{\theta}(0),r}(\boldsymbol{z},\boldsymbol{z}')$ are both sub-exponential and their sub-exponential norms are bounded by a constant $c'$ depending on $d,B$. Then by Bernstein's inequality (see Theorem 2.8.1 in \cite{vershynin2018highdimensional}), for every $c>0$,
\begin{equation*}
\begin{aligned}
\mathbf{P}_{\boldsymbol{\theta}(0)}\left(|H_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{z},\boldsymbol{z}')-H(\boldsymbol{z},\boldsymbol{z}')|\geq c\sqrt{\frac{\log m}{m}}\right)\leq& 2e^{-c_{0}\min\{\frac{c^{2}}{c'^{2}}\log m,\frac{c}{c'}\sqrt{m\log m}\}}\\
=&2m^{-c_{0}c^{2}/c'^{2}},
\end{aligned}
\end{equation*}
where $c_{0}$ is an absolute constant and the equality holds when $m$ is large enough such that $\frac{c^{2}}{c'^{2}}\log m\leq \frac{c}{c'}\sqrt{m\log m}$.
Likewise, we have the same inequality for $G_{\boldsymbol{\theta}(0)}^{m}$, so that
\begin{equation*}
\begin{aligned}
\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sup_{\boldsymbol{z},\boldsymbol{z}'\in\mathcal{N}_{\epsilon}}\lvert K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{z},\boldsymbol{z}')-K_{d}(\boldsymbol{z},\boldsymbol{z}')\rvert\leq 2c\sqrt{\frac{\log m}{m}}\right)\geq&1-4\binom{|\mathcal{N}_{\epsilon}|}{2} m^{-c_{0}c^{2}/c'^{2}}\\
\geq&1-4m^{-(c_{0}c^{2}/c'^{2}-2d\beta)}\\
=&1-4m^{-d\beta}
\end{aligned}
\end{equation*}
simply by the triangle inequality and the union bound, where we set $c=\sqrt{3c'^{2}d\beta/c_{0}}$ in the last equality.
\end{proof}
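A Monte-Carlo sketch of the $m^{-1/2}$ concentration behind Lemma \ref{lem: event C} at one fixed pair $(\boldsymbol{z},\boldsymbol{z}')$, using the same illustrative network parameterization assumed above:

```python
import math, random

# The empirical kernel at a fixed (z, z') fluctuates around its mean with
# standard deviation of order m^{-1/2}, so quadrupling m should roughly halve
# it. Parameterization and values are illustrative.

random.seed(2)
d = 2
z, zp = [0.4, -0.8], [-0.6, 0.3]
dot = sum(u * v for u, v in zip(z, zp)) + 1.0
relu = lambda v: max(v, 0.0)

def kernel_sample(m):
    """One draw of K^m_{theta(0)}(z, z') at width m (paired neurons)."""
    s = 0.0
    for _ in range(m):
        a = random.gauss(0, 1)
        w = [random.gauss(0, 1) for _ in range(d)]
        b = random.gauss(0, 1)
        h = sum(wi * zi for wi, zi in zip(w, z)) + b
        hp = sum(wi * zi for wi, zi in zip(w, zp)) + b
        ind = 1.0 if (h >= 0 and hp >= 0) else 0.0
        s += dot * a * a * ind + relu(h) * relu(hp)
    return 1.0 + 2.0 * s / m

def std(vals):
    mu = sum(vals) / len(vals)
    return math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))

s_small = std([kernel_sample(250) for _ in range(200)])
s_large = std([kernel_sample(1000) for _ in range(200)])
assert s_large < s_small              # fluctuations shrink with the width m
assert 1.3 < s_small / s_large < 3.0  # consistent with the m^{-1/2} rate
```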
For an initialization that lies in the intersection of the events, i.e., in $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, Lemma \ref{lem: initial kernel close to fixed} and Lemma \ref{lem: kernel with lazy params close to initial kernel} show how the width $m$ controls the convergence of the kernel at the initialization and during training. The proofs of Lemma \ref{lem: initial kernel close to fixed} and Lemma \ref{lem: kernel with lazy params close to initial kernel} can be found in Section \ref{app:subsec:init:kernel:close:fixed} and Section \ref{app:subsec:lazy:kernel:close:init}, respectively.
\begin{lemma}\label{lem: initial kernel close to fixed}
Conditioning on the event $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, if we set $\gamma>1-\beta/4$ and $m$ is sufficiently large, then
\begin{equation*}
\sup_{\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}}|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')|\leq C_{2}m^{-(1-\gamma)}\log m,
\end{equation*}
where $C_{2}>0$ is a constant depending on $d,B$.
\end{lemma}
\begin{lemma}\label{lem: kernel with lazy params close to initial kernel}
Conditioning on the event $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, if we set $\alpha<1/2$ and $m$ is sufficiently large, then
\begin{equation*}
\sup_{t\geq 0}\sup_{\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}}| K_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|\leq C_{3}m^{-(1-\gamma)}\log m,
\end{equation*}
where $C_{3}>0$ is a constant depending on $d,B$.
\end{lemma}
\begin{proof}[Proof of Proposition \ref{prop:kernel:approx}]
Consider the initialization $\omega\in\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$. Then for all $\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}$, we have
\begin{equation*}
\begin{aligned}
|K_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')|&\leq|K_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|+|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')|\\
&\leq (C_{2}+C_{3})m^{-(1-\gamma)}\log m,
\end{aligned}
\end{equation*}
where the last inequality follows from Lemma \ref{lem: initial kernel close to fixed} and Lemma \ref{lem: kernel with lazy params close to initial kernel}. With Lemma \ref{lem: event B}, Lemma \ref{lem: event R} and Lemma \ref{lem: event C}, we show that
\begin{equation*}
\begin{aligned}
&\mathbf{P}_{\boldsymbol{\theta}(0)}\left(\sup_{\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}}|K_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')|\leq (C_{2}+C_{3})m^{-(1-\gamma)}\log m\right)\\
\geq&1-P_{\mathcal{B}}(m)-P_{\mathcal{R}}(m)-P_{\mathcal{C}}(m).
\end{aligned}
\end{equation*}
\end{proof}
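Plugging the expressions for $P_{\mathcal{B}},P_{\mathcal{R}},P_{\mathcal{C}}$ into the total failure probability shows that it vanishes as $m$ grows; the parameter values below are illustrative choices satisfying $\beta\in(0,1]$ and $\delta>1/2$:

```python
import math

# Illustrative computation of the failure probability P_B(m) + P_R(m) + P_C(m)
# from the lemmas above; d, beta, delta are arbitrary admissible choices.

d, beta, delta = 2, 0.5, 0.6

def failure_prob(m):
    P_B = 2.0 * (d + 2) / math.sqrt(2.0 * math.pi) * m ** -0.5
    P_R = m ** (d * beta) * math.exp(-2.0 * m ** (2.0 * delta - 1.0))
    P_C = 4.0 * m ** (-d * beta)
    return P_B + P_R + P_C

probs = [failure_prob(10 ** k) for k in range(2, 7)]
assert all(q < p for p, q in zip(probs, probs[1:]))  # monotonically decreasing
assert probs[-1] < 1e-2                              # vanishing as m grows
```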
\subsubsection{Proof of Lemma \ref{lem: initial kernel close to fixed}}\label{app:subsec:init:kernel:close:fixed}
Conditioning on $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, for all $\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}$, decomposition of $\abs{K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')}$ by the triangle inequality gives
\begin{equation*}
\begin{aligned}
|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')|\leq&|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{z},\boldsymbol{z}')|+|K_{d}(\boldsymbol{z},\boldsymbol{z}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')| \\
&+|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{z},\boldsymbol{z}')-K_{d}(\boldsymbol{z},\boldsymbol{z}')| \\
\leq & 2C_{4}m^{-(1-\gamma)}\log m+2C_{5}(\sqrt{d}\epsilon)^{1/4}+C_{1}\sqrt{\frac{\log m}{m}}\leq C_{2}m^{-(1-\gamma)}\log m
\end{aligned}
\end{equation*}
by Lemma \ref{lem: continuity of K_0}, Lemma \ref{lem: continuity of K} and Lemma \ref{lem: event C} when $m$ is sufficiently large. The last inequality holds since $(\sqrt{d}\epsilon)^{1/4}=O(m^{-\beta/4})$ and $\gamma>1-\beta/4$ give $(\sqrt{d}\epsilon)^{1/4}=O(m^{-(1-\gamma)})$, while $\gamma>\delta>1/2$ gives $\sqrt{\log m/m}\leq m^{-(1-\gamma)}\log m$.
\paragraph{The continuity of $K_{\boldsymbol{\theta}(0)}^{m}$}
Using the triangle inequality again yields
\begin{equation*}
|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{z},\boldsymbol{z}')|\leq |K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{z}')|+|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{z}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{z},\boldsymbol{z}')|.
\end{equation*}
We here illustrate how to control the first term, as the control of the second term follows from the symmetry of $K_{\boldsymbol{\theta}(0)}^{m}(\cdot,\cdot)$.
\begin{lemma}\label{lem: init activation pattern for close points}
For all $\boldsymbol{x}\in [-B,B]^{d}$ and $\boldsymbol{z}\in\mathcal{N}_{\epsilon}$ such that $\|\boldsymbol{x}-\boldsymbol{z}\|_{2}\leq\sqrt{d}\epsilon$, conditioning on $\mathcal{B}\cap\mathcal{R}$, if $m$ is sufficiently large, then $| I(\boldsymbol{x},\boldsymbol{z})|\geq2(m-\lfloor m^{\gamma}\rfloor)$, where $I(\boldsymbol{x},\boldsymbol{z})=\{r\mid\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})=\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})\}$ is the index set of neurons whose activation pattern at $\boldsymbol{\theta}(0)$ is the same for $\boldsymbol{x}$ and $\boldsymbol{z}$.
\end{lemma}
\begin{proof}
Notice that
\begin{equation*}
|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})-h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|=|\langle \boldsymbol{w}_{r}(0),\boldsymbol{x}-\boldsymbol{z}\rangle|\leq\|\boldsymbol{w}_{r}(0)\|_{2}\|\boldsymbol{x}-\boldsymbol{z}\|_{2}\leq dR_{B}\epsilon.
\end{equation*}
For $r\in I_{\text{in}}(\boldsymbol{x})=\{r\mid |h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|>(dB+1)R\}$, if $m$ is large enough such that $dR_{B}\epsilon\leq(dB+1)R$, we have
\begin{equation*}
|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})-h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})|<|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|,
\end{equation*}
which implies $\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})=\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{z})$, thus $|I(\boldsymbol{x},\boldsymbol{z})|\geq|I_{\text{in}}(\boldsymbol{x})|\geq 2(m-\lfloor m^{\gamma}\rfloor)$.
\end{proof}
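The mechanism of the lemma can be checked empirically: a sign flip of $h_{r}$ between $\boldsymbol{x}$ and a nearby $\boldsymbol{z}$ forces $|h_{r}(\boldsymbol{x})|\leq|\langle\boldsymbol{w}_{r},\boldsymbol{x}-\boldsymbol{z}\rangle|$, so only neurons with a small pre-activation at $\boldsymbol{x}$ can disagree. The inputs and perturbation size below are illustrative:

```python
import random

# If neuron r's ReLU pattern differs between x and a nearby z, then h_r(x)
# and h_r(z) have opposite signs, hence |h_r(x)| <= |h_r(x) - h_r(z)| =
# |<w_r, x - z>|. Inputs and perturbation size are illustrative.

random.seed(3)
d, trials = 3, 2000
x = [0.5, -0.2, 0.8]
z = [xi + random.uniform(-0.01, 0.01) for xi in x]   # a nearby grid point

for _ in range(trials):
    w = [random.gauss(0, 1) for _ in range(d)]
    b = random.gauss(0, 1)
    hx = sum(wi * vi for wi, vi in zip(w, x)) + b
    hz = sum(wi * vi for wi, vi in zip(w, z)) + b
    if (hx >= 0) != (hz >= 0):                       # activation pattern flips
        gap = sum(wi * (xi - zi) for wi, xi, zi in zip(w, x, z))
        assert abs(hx) <= abs(gap) + 1e-12
```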
\begin{lemma}\label{lem: continuity of K_0}
For all $\boldsymbol{x},\boldsymbol{x}'\in [-B,B]^{d}$ and $\boldsymbol{z}'\in\mathcal{N}_{\epsilon}$ such that $\|\boldsymbol{x}'-\boldsymbol{z}'\|_{2}\leq\sqrt{d}\epsilon$, conditioning on the event $\mathcal{B}\cap\mathcal{R}$, if we set $\gamma > 1 - \beta$ and $m$ is sufficiently large,
\begin{equation*}
|K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{z}')|\leq C_{4}m^{-(1-\gamma)}\log m,
\end{equation*}
where $C_{4}$ is a constant depending on $d,B$.
\end{lemma}
\begin{proof}
For simplicity, let $I=I(\boldsymbol{x}',\boldsymbol{z}')$. Then
\begin{equation*}
\begin{aligned}
&|H_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-H_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{z}')|\\
\leq&\left|\langle\boldsymbol{x},\boldsymbol{x}'-\boldsymbol{z}'\rangle\frac{1}{m}\sum_{r\in[2m]}a_{r}^{2}(0)\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')\right|\\
&+\left|(\langle\boldsymbol{x},\boldsymbol{z}'\rangle+1)\frac{1}{m}\sum_{r\in[2m]}a_{r}^{2}(0)\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})(\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')-\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{z}'))\right|\\
\leq &dB\epsilon\cdot 2R_{B}^{2}+\frac{(dB^{2}+1)R_{B}^{2}}{m}\sum_{r\in I^{\mathsf{c}}}\abs{\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')-\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{z}')}\\
\leq &2dBR_{B}^{2}\epsilon+(dB^{2}+1)R_{B}^{2}\frac{2\lfloor m^{\gamma}\rfloor}{m}\\
\leq & 4(dB^{2}+1)R_{B}^{2}m^{-(1-\gamma)},
\end{aligned}
\end{equation*}
where
the first inequality holds by plugging in $\langle\boldsymbol{x},\boldsymbol{z}'\rangle\frac{1}{m}\sum_{r\in[2m]}a_{r}^{2}(0)\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')$ and using the triangle inequality,
the third inequality follows from Lemma \ref{lem: init activation pattern for close points}, and the last inequality holds if $m$ is large enough such that $2dBR_{B}^{2}\epsilon\leq2(dB^{2}+1)R_{B}^{2}\frac{\lfloor m^{\gamma}\rfloor}{m}$. Similarly,
\begin{equation*}
\begin{aligned}
&|G_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')-G_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{z}')|\\
\leq&\left|\langle\boldsymbol{x},\boldsymbol{x}'-\boldsymbol{z}'\rangle\frac{1}{m}\sum_{r\in[2m]}\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}))\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}'))\right|\\
&+\left|\langle\boldsymbol{x},\boldsymbol{z}'\rangle\frac{1}{m}\sum_{r\in[2m]}\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}))(\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}'))-\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z}')))\right|\\
\leq&dB\epsilon\cdot 2(dB+1)^{2}R_{B}^{2}\\
&+\frac{1}{m}\sum_{r\in[2m]}|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})||\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}'))-\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z}'))|\\
\leq & 2dB(dB+1)^{2}R_{B}^{2}\epsilon\\
&+\frac{1}{m}\sum_{r\in I}|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})||h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')-h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z}')|\\
&+\frac{1}{m}\sum_{r\in I^{\mathsf{c}}}|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|\max\{|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')|,|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{z}')|\}\\
\leq&2dB(dB+1)^{2}R_{B}^{2}\epsilon+2(dB+1)R_{B}\cdot dR_{B}\epsilon+(dB+1)^{2}R_{B}^{2}\frac{2\lfloor m^{\gamma}\rfloor}{m}\\
\leq&6(dB+1)^{2}R_{B}^{2}m^{-(1-\gamma)},
\end{aligned}
\end{equation*}
where
the first inequality holds by plugging in $\langle\boldsymbol{x},\boldsymbol{z}'\rangle\frac{1}{m}\sum_{r\in[2m]}\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}))\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}'))$ and using the triangle inequality, the third and the second-to-last inequalities follow from Lemma \ref{lem: init activation pattern for close points}, and the last inequality holds if $m$ is large enough such that
\begin{equation*}
\max\{2dB(dB+1)^{2}R_{B}^{2}\epsilon,2d(dB+1)R_{B}^{2}\epsilon\}\leq 2(dB+1)^{2}R_{B}^{2}\frac{\lfloor m^{\gamma}\rfloor}{m}.
\end{equation*}
\end{proof}
\paragraph{The continuity of $K_{d}$}
The triangle inequality shows
\begin{equation*}
\abs{K_{d}(\boldsymbol{z},\boldsymbol{z}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')}\leq \abs{K_{d}(\boldsymbol{z},\boldsymbol{z}')-K_{d}(\boldsymbol{z},\boldsymbol{x}')}+\abs{K_{d}(\boldsymbol{z},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')}.
\end{equation*}
Similarly, we only need to control the first term, since the second term is handled in the same way by the symmetry of $K_{d}(\cdot,\cdot)$.
\begin{lemma}\label{lem: continuity of K}
For every $\boldsymbol{x}',\boldsymbol{z},\boldsymbol{z}'\in [-B,B]^{d}$ and $\epsilon_{0}>0$, if $\|\boldsymbol{x}'-\boldsymbol{z}'\|_{2}\leq\epsilon_{0}$, then
\begin{equation*}\label{eq: NTK epsilon close}
|K_{d}(\boldsymbol{z},\boldsymbol{z}')-K_{d}(\boldsymbol{z},\boldsymbol{x}')|\leq C_{5}\max\{\epsilon_{0},\epsilon_{0}^{1/4}\},
\end{equation*}
where $C_{5}$ is a constant depending on $d,B$.
\end{lemma}
\begin{proof}
Recalling the expression of the NTK $K_{d}$, we have
\begin{equation*}
\begin{aligned}
&|K_{d}(\boldsymbol{z},\boldsymbol{z}')-K_{d}(\boldsymbol{z},\boldsymbol{x}')|\\
\leq&\underbrace{\frac{2}{\pi}|(\pi-\psi(\boldsymbol{z},\boldsymbol{z}'))(\langle\boldsymbol{z},\boldsymbol{z}'\rangle+1)-(\pi-\psi(\boldsymbol{z},\boldsymbol{x}'))(\langle\boldsymbol{z},\boldsymbol{x}'\rangle+1)|}_{\text{\uppercase\expandafter{\romannumeral1}}}\\
&+\underbrace{\frac{1}{\pi}\left|\sqrt{\|\boldsymbol{z}-\boldsymbol{z}'\|_{2}^{2}-\|\boldsymbol{z}\|_{2}^{2}\|\boldsymbol{z}'\|_{2}^{2}-\langle\boldsymbol{z},\boldsymbol{z}'\rangle^{2}}-\sqrt{\|\boldsymbol{z}-\boldsymbol{x}'\|_{2}^{2}-\|\boldsymbol{z}\|_{2}^{2}\|\boldsymbol{x}'\|_{2}^{2}-\langle\boldsymbol{z},\boldsymbol{x}'\rangle^{2}}\right|}_{\text{\uppercase\expandafter{\romannumeral2}}}.
\end{aligned}
\end{equation*}
For the first term $\text{\uppercase\expandafter{\romannumeral1}}$, plugging in $(\pi-\psi(\boldsymbol{z},\boldsymbol{z}'))(\langle\boldsymbol{z},\boldsymbol{x}'\rangle+1)$ and using the triangle inequality yields
\begin{equation*}
\begin{aligned}
\text{\uppercase\expandafter{\romannumeral1}}\leq & \frac{2}{\pi}\left((\pi-\psi(\boldsymbol{z},\boldsymbol{z}'))|\langle\boldsymbol{z},\boldsymbol{z}'-\boldsymbol{x}'\rangle|+|\langle \boldsymbol{z},\boldsymbol{x}'\rangle +1||\psi(\boldsymbol{z},\boldsymbol{z}')-\psi(\boldsymbol{z},\boldsymbol{x}')|\right)\\
\leq & \frac{2}{\pi}\left( 2\pi\cdot \sqrt{d}B\epsilon_{0}+(dB^{2}+1)|\psi(\boldsymbol{z},\boldsymbol{z}')-\psi(\boldsymbol{z},\boldsymbol{x}')|\right)\\
\leq&\frac{2}{\pi}\left(2\pi\sqrt{d}B\epsilon_{0}+(dB^{2}+1)C_{6}\max\{\sqrt{\epsilon_{0}},\epsilon_{0}^{1/4}\}\right),
\end{aligned}
\end{equation*}
where the last inequality follows from Lemma \ref{lem: continuity of psi}, with $C_{6}$ a constant depending on $d,B$. For the second term $\text{\uppercase\expandafter{\romannumeral2}}$,
\begin{equation*}
\begin{aligned} \text{\uppercase\expandafter{\romannumeral2}}&\leq\sqrt{|(\|\boldsymbol{z}-\boldsymbol{z}'\|_{2}^{2}-\|\boldsymbol{z}-\boldsymbol{x}'\|_{2}^{2})+(\|\boldsymbol{z}\|_{2}^{2}\|\boldsymbol{z}'\|_{2}^{2}-\|\boldsymbol{z}\|_{2}^{2}\|\boldsymbol{x}'\|_{2}^{2})-(\langle\boldsymbol{z},\boldsymbol{z}'\rangle^{2}-\langle\boldsymbol{z},\boldsymbol{x}'\rangle^{2})|}\\
&=\sqrt{|(2\langle\boldsymbol{z},\boldsymbol{x}'-\boldsymbol{z}'\rangle+\|\boldsymbol{z}'\|_{2}^{2}-\|\boldsymbol{x}'\|_{2}^{2})+\|\boldsymbol{z}\|_{2}^{2}(\|\boldsymbol{z}'\|_{2}^{2}-\|\boldsymbol{x}'\|_{2}^{2})-\langle\boldsymbol{z},\boldsymbol{z}'+\boldsymbol{x}'\rangle\langle\boldsymbol{z},\boldsymbol{z}'-\boldsymbol{x}'\rangle|}\\
&\leq\sqrt{4\sqrt{d}B\epsilon_{0}+2(\sqrt{d}B)^{3}\epsilon_{0}+2(\sqrt{d}B)^{3}\epsilon_{0}}=2\sqrt{(\sqrt{d}B+d^{3/2}B^{3})}\sqrt{\epsilon_{0}},
\end{aligned}
\end{equation*}
where the first inequality holds since $|\sqrt{x}-\sqrt{x'}|\leq\sqrt{|x-x'|}$ for all $x,x'>0$ and the last inequality holds by the Cauchy--Schwarz inequality and the fact that $\boldsymbol{x}',\boldsymbol{z},\boldsymbol{z}'\in [-B,B]^{d}$ and $\|\boldsymbol{x}'-\boldsymbol{z}'\|_{2}\leq\epsilon_{0}$.
\end{proof}
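The two elementary square-root facts invoked in this proof can be spot-checked numerically:

```python
import math

# Spot-check of the two elementary facts used above:
#   |sqrt(x) - sqrt(x')| <= sqrt(|x - x'|)              for x, x' > 0,
#   |sqrt(x^2+1) - sqrt(x'^2+1)| <= sqrt(|x + x'|) * sqrt(|x - x'|).

for i in range(1, 50):
    for j in range(1, 50):
        x, xp = i / 10.0, j / 10.0
        assert abs(math.sqrt(x) - math.sqrt(xp)) <= math.sqrt(abs(x - xp)) + 1e-12
        lhs = abs(math.sqrt(x * x + 1.0) - math.sqrt(xp * xp + 1.0))
        assert lhs <= math.sqrt(abs(x + xp)) * math.sqrt(abs(x - xp)) + 1e-12
```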
\begin{lemma}\label{lem: continuity of psi}
For every $\boldsymbol{x}',\boldsymbol{z},\boldsymbol{z}'\in [-B,B]^{d}$ and $\epsilon_{0} > 0$, if $\|\boldsymbol{x}'-\boldsymbol{z}'\|_{2}\leq\epsilon_{0}$, then
\begin{equation}\label{eq: psi epsilon close}
|\psi(\boldsymbol{z},\boldsymbol{z}')-\psi(\boldsymbol{z},\boldsymbol{x}')|\leq C_{6}\max\{\sqrt{\epsilon_{0}},\epsilon_{0}^{1/4}\},
\end{equation}
where $C_{6}$ is a constant depending on $d,B$.
\end{lemma}
\begin{proof}
Let $\Delta=|\cos(\psi(\boldsymbol{z},\boldsymbol{z}'))-\cos(\psi(\boldsymbol{z},\boldsymbol{x}'))|$. Plugging in $\frac{\langle\boldsymbol{z},\boldsymbol{x}'\rangle+1}{\sqrt{(\|\boldsymbol{z}\|_{2}^{2}+1)(\|\boldsymbol{z}'\|_{2}^{2}+1)}}$ and using the triangle inequality yields
\begin{equation*}
\begin{aligned}
\Delta\leq&\left|\frac{\langle\boldsymbol{z},\boldsymbol{z}'\rangle+1}{\sqrt{(\|\boldsymbol{z}\|_{2}^{2}+1)(\|\boldsymbol{z}'\|_{2}^{2}+1)}}-\frac{\langle\boldsymbol{z},\boldsymbol{x}'\rangle+1}{\sqrt{(\|\boldsymbol{z}\|_{2}^{2}+1)(\|\boldsymbol{z}'\|_{2}^{2}+1)}}\right|\\
&+\left|\frac{\langle\boldsymbol{z},\boldsymbol{x}'\rangle+1}{\sqrt{(\|\boldsymbol{z}\|_{2}^{2}+1)(\|\boldsymbol{z}'\|_{2}^{2}+1)}}-\frac{\langle\boldsymbol{z},\boldsymbol{x}'\rangle+1}{\sqrt{(\|\boldsymbol{z}\|_{2}^{2}+1)(\|\boldsymbol{x}'\|_{2}^{2}+1)}}\right|\\
=&\frac{1}{\sqrt{(\|\boldsymbol{z}\|_{2}^{2}+1)(\|\boldsymbol{z}'\|_{2}^{2}+1)}}|\langle\boldsymbol{z},\boldsymbol{z}'-\boldsymbol{x}'\rangle|\\
&+\frac{1}{\sqrt{(\|\boldsymbol{z}\|_{2}^{2}+1)(\|\boldsymbol{z}'\|_{2}^{2}+1)(\|\boldsymbol{x}'\|_{2}^{2}+1)}}|\langle\boldsymbol{z},\boldsymbol{x}'\rangle+1|\left|\sqrt{\|\boldsymbol{x}'\|_{2}^{2}+1}-\sqrt{\|\boldsymbol{z}'\|_{2}^{2}+1}\right|\\
\leq&\sqrt{d}B\epsilon_{0}+(dB^{2}+1)\sqrt{2\sqrt{d}B}\sqrt{\epsilon_{0}},
\end{aligned}
\end{equation*}
where the last line follows from the fact that $|\sqrt{x^{2}+1}-\sqrt{x'^{2}+1}|\leq\sqrt{|x+x'|}\sqrt{|x-x'|}$ for all $x,x'\in\mathbb{R}$.
Then we have
\begin{equation*}
\begin{aligned}
\abs{\psi(\boldsymbol{z},\boldsymbol{z}')-\psi(\boldsymbol{z},\boldsymbol{x}')}\leq&|\arccos 1-\arccos(1-\Delta)|=\int_{1-\Delta}^{1}\frac{1}{\sqrt{1-x^{2}}}\mathrm{d} x \\
\leq& \int_{1-\Delta}^{1}\frac{1}{\sqrt{1-x}}\mathrm{d} x =2\sqrt{\Delta}\\
\leq&2\sqrt{\sqrt{d}B\epsilon_{0}+(dB^{2}+1)\sqrt{2\sqrt{d}B}\sqrt{\epsilon_{0}}},
\end{aligned}
\end{equation*}
where the second inequality uses $\frac{1}{\sqrt{1-x^{2}}}\leq\frac{1}{\sqrt{1-x}}$ for $x\in[0,1)$; we may assume $\Delta\leq 1$ here, since otherwise $|\psi(\boldsymbol{z},\boldsymbol{z}')-\psi(\boldsymbol{z},\boldsymbol{x}')|\leq\pi\leq\pi\sqrt{\Delta}$ and the claim holds after enlarging $C_{6}$.
\end{proof}
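The final integral bound, $\arccos(1-\Delta)\leq 2\sqrt{\Delta}$ for $\Delta\in(0,1]$, together with the reduction $|\arccos u-\arccos v|\leq\arccos(1-|u-v|)$, can be spot-checked numerically:

```python
import math

# Spot-check of the last step: arccos(1 - Delta) <= 2*sqrt(Delta) for
# Delta in (0, 1] (the range where 1/sqrt(1-x^2) <= 1/sqrt(1-x) applies),
# and |arccos(u) - arccos(v)| <= arccos(1 - |u - v|) for u, v in [-1, 1].

for k in range(1, 101):
    delta = k / 100.0
    assert math.acos(1.0 - delta) <= 2.0 * math.sqrt(delta)

for i in range(-10, 11):
    for j in range(-10, 11):
        u, v = i / 10.0, j / 10.0
        assert abs(math.acos(u) - math.acos(v)) <= math.acos(1.0 - abs(u - v)) + 1e-12
```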
\subsubsection{Proof of Lemma \ref{lem: kernel with lazy params close to initial kernel}}\label{app:subsec:lazy:kernel:close:init}
It is hard to analyze $K_{\boldsymbol{\theta}(t)}^{m}$ directly, so we first show in Lemma \ref{lem: kernel close to init when params close to init} that $K_{\boldsymbol{\theta}}^{m}$ is close to $K_{\boldsymbol{\theta}(0)}^{m}$ whenever $\boldsymbol{\theta}$ is close to $\boldsymbol{\theta}(0)$, and then prove in Proposition \ref{prop: lazy regime} that $\boldsymbol{\theta}(t)$ indeed stays near $\boldsymbol{\theta}(0)$.
\paragraph{Approximation for $K_{\boldsymbol{\theta}}^{m}$ to $K_{\boldsymbol{\theta}(0)}^{m}$}
Denote by
\begin{equation*}
\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R_{0})=\left\{\boldsymbol{\theta}~\middle|~|a_{r}-a_{r}(0)|,|\boldsymbol{w}_{r,(j)}-\boldsymbol{w}_{r,(j)}(0)|,|b_{r}-b_{r}(0)|\leq R_{0}, r\in [2m],j\in[d]\right\}
\end{equation*}
the neighborhood of $\boldsymbol{\theta}(0)$.
\begin{lemma}\label{lem: activation pattern for params close to init}
For all $\boldsymbol{x}\in\mathcal{X}$ and all $\boldsymbol{\theta}\in \boldsymbol{\Theta}(\boldsymbol{\theta}(0),R)$, conditioning on the event $\mathcal{R}$, we have $|I(\boldsymbol{x})|\geq2(m-\lfloor m^{\gamma}\rfloor)$, where $I(\boldsymbol{x})=\{r\mid\boldsymbol{1}_{\boldsymbol{\theta},r}(\boldsymbol{x})=\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})\}$ is the index set of neurons on which the activation pattern for $\boldsymbol{x}$ is the same at $\boldsymbol{\theta}$ and $\boldsymbol{\theta}(0)$.
\end{lemma}
\begin{proof}
Since $\boldsymbol{\theta}\in \boldsymbol{\Theta}(\boldsymbol{\theta}(0),R)$, we have
\begin{equation*}
|h_{\boldsymbol{\theta},r}(\boldsymbol{x})-h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|=|\langle\boldsymbol{w}_{r}-
\boldsymbol{w}_{r}(0),\boldsymbol{x}\rangle+(b_{r}-b_{r}(0))|\leq(dB+1)R.
\end{equation*}
For $r\in I_{\mathrm{in}}(\boldsymbol{x})=\left\{r~\middle|~|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|>(dB+1)R~\right\}$, we have
\begin{equation*}
|h_{\boldsymbol{\theta},r}(\boldsymbol{x})-h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|<|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})|,
\end{equation*}
which implies $\boldsymbol{1}_{\boldsymbol{\theta},r}(\boldsymbol{x})=\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})$, thus $|I(\boldsymbol{x})|\geq|I_{\text{in}}(\boldsymbol{x})|\geq 2(m-\lfloor m^{\gamma}\rfloor)$.
\end{proof}
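The mechanism in the proof above is easy to check numerically: any neuron whose initial pre-activation exceeds the perturbation budget $(dB+1)R$ keeps its activation pattern. A small sketch (with Gaussian stand-ins for the initialization and $B=1$; illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, R = 3, 1000, 0.01
W0 = rng.standard_normal((2 * m, d))
b0 = rng.standard_normal(2 * m)
x = rng.uniform(-1.0, 1.0, d)            # plays the role of x with |x_j| <= B = 1

# perturb every coordinate by at most R, as in Theta(theta(0), R)
W = W0 + rng.uniform(-R, R, (2 * m, d))
b = b0 + rng.uniform(-R, R, 2 * m)

h0, h = W0 @ x + b0, W @ x + b
# |h - h0| <= (dB + 1) R, so a margin of (dB + 1) R preserves the sign
margin = (d * 1.0 + 1.0) * R
safe = np.abs(h0) > margin
assert np.all((h[safe] > 0) == (h0[safe] > 0))
```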
\begin{lemma}\label{lem: kernel close to init when params close to init}
Conditioning on the event $\mathcal{B}\cap\mathcal{R}$, if $m$ is sufficiently large, then
\begin{equation*}
\sup_{\boldsymbol{\theta}\in\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R)}\sup_{\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}}|K_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|\leq C_{7}m^{-(1-\gamma)}\log m,
\end{equation*}
where $C_{7}>0$ is a constant depending on $d,B$.
\end{lemma}
\begin{proof}
For all $\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}$, let $I=I(\boldsymbol{x})\cap I(\boldsymbol{x}')$, then $|I|=|I(\boldsymbol{x})|+|I(\boldsymbol{x}')|-|I(\boldsymbol{x})\cup I(\boldsymbol{x}')|\geq2m-4\lfloor m^{\gamma}\rfloor$ by Lemma \ref{lem: activation pattern for params close to init}.
Hence for all $\boldsymbol{\theta}\in\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R)$,
\begin{equation*}
\begin{aligned}
& \quad |H_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')-H_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|\\
& \leq \frac{(dB^{2}+1)}{m}\sum_{r\in [2m]}|a_{r}^{2}\boldsymbol{1}_{\boldsymbol{\theta},r}(\boldsymbol{x})\boldsymbol{1}_{\boldsymbol{\theta},r}(\boldsymbol{x}')-a_{r}^{2}(0)\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})\boldsymbol{1}_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')|\\
& \leq \frac{(dB^{2}+1)}{m}\left(\sum_{r\in I}\abs{a_{r}^2-a_{r}^{2}(0)}+\sum_{r\in I^{\mathsf{c}}}\max\{a_{r}^2,a_{r}^{2}(0)\}\right)\\
& \leq (dB^{2}+1)\left(\frac{|I|}{m}3RR_{B}+\frac{|I^{\mathsf{c}}|}{m}4R_{B}^{2}\right)\\
& \leq (dB^{2}+1)\left(6RR_{B}+16R_{B}^{2}\frac{\lfloor m^{\gamma}\rfloor}{m}\right),
\end{aligned}
\end{equation*}
where we used $|I|\leq 2m$ and $|I^{\mathsf{c}}|\leq 4\lfloor m^{\gamma}\rfloor$.
Similarly, we have
\begin{equation*}\label{eq: Gm G0m bound}
\begin{aligned}
&\quad |G_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')-G_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|\\
& \leq \frac{1}{m}\sum_{r\in[2m]}|\sigma(h_{\boldsymbol{\theta},r}(\boldsymbol{x}))\sigma(h_{\boldsymbol{\theta},r}(\boldsymbol{x}'))-\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}))\sigma(h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}'))|\\
& \leq \frac{1}{m}\bigg(\sum_{r\in I}\left(|h_{\boldsymbol{\theta},r}(\boldsymbol{x})||h_{\boldsymbol{\theta},r}(\boldsymbol{x}')-h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')|+|h_{\boldsymbol{\theta},r}(\boldsymbol{x})-h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})||h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')|\right)\\
& \quad +\sum_{r\in I^{\mathsf{c}}}\max\{|h_{\boldsymbol{\theta},r}(\boldsymbol{x})||h_{\boldsymbol{\theta},r}(\boldsymbol{x}')|,|h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x})||h_{\boldsymbol{\theta}(0),r}(\boldsymbol{x}')|\}\bigg)\\
& \leq \frac{|I|}{m}4(dB+1)^{2}R_{B}R+\frac{|I^{\mathsf{c}}|}{m}8(dB+1)^{2}R_{B}^{2} \\
& \leq 8(dB+1)^{2}R_{B}R+32(dB+1)^{2}R_{B}^{2}\frac{\lfloor m^{\gamma}\rfloor}{m}.
\end{aligned}
\end{equation*}
Simply by the triangle inequality, we have
\begin{equation*}
\begin{aligned}
& \quad |K_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|\\
& \leq |H_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')-H_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|+|G_{\boldsymbol{\theta}}^{m}(\boldsymbol{x},\boldsymbol{x}')-G_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x},\boldsymbol{x}')|\\
& \leq (dB^{2}+1)\left(6R_{B}R+16R_{B}^{2}\frac{\lfloor m^{\gamma}\rfloor}{m}\right)+ 8(dB+1)^{2}R_{B}R+32(dB+1)^{2}R_{B}^{2}\frac{\lfloor m^{\gamma}\rfloor}{m}\\
& \leq C_{7}m^{-(1-\gamma)}\log m,
\end{aligned}
\end{equation*}
where the last inequality holds when $m$ is sufficiently large.
\end{proof}
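A small simulation (illustration only, with hypothetical Gaussian initialization, symmetric signs $a_{r}=\pm1$, and a deliberately loose constant in the final check) shows the empirical NTK of the two-layer ReLU network moving only by $O(R)$ when every parameter moves by at most $R$:

```python
import numpy as np

def ntk(x, xp, a, W, b, m):
    # empirical NTK of f(x) = m^{-1/2} sum_r a_r relu(<w_r, x> + b_r):
    # G-part from gradients w.r.t. a, H-part from gradients w.r.t. (w, b)
    h, hp = W @ x + b, W @ xp + b
    G = np.sum(np.maximum(h, 0.0) * np.maximum(hp, 0.0))
    H = np.sum(a**2 * (h > 0) * (hp > 0)) * (x @ xp + 1.0)
    return (G + H) / m

rng = np.random.default_rng(0)
d, m = 3, 20_000
a = rng.choice([-1.0, 1.0], 2 * m)
W = rng.standard_normal((2 * m, d))
b = rng.standard_normal(2 * m)
x = rng.standard_normal(d) / np.sqrt(d)
xp = rng.standard_normal(d) / np.sqrt(d)

K0 = ntk(x, xp, a, W, b, m)
R = m ** -0.5                               # lazy-regime radius scale
Kp = ntk(x, xp, a + R, W + R, b - R, m)     # move every coordinate by R

assert abs(Kp - K0) <= 100 * R              # gap is O(R); loose constant
```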
\paragraph{Lazy regime}\label{app:lazy:regime}
\begin{proposition}\label{prop: lazy regime}
Let $R'=\frac{4\sqrt{3}(dB+1)\|\boldsymbol{y}\|_{2}}{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))\sqrt{n}}\sqrt{\frac{\log m}{m}}$. Denote the ``lazy regime'' event by
\begin{equation*}
\mathcal{A}=\mathcal{A}_{\lambda}\cap\mathcal{A}_{\boldsymbol{\theta}}\cap\mathcal{A}_{\boldsymbol{u}},
\end{equation*}
where
\begin{gather*}
\mathcal{A}_{\lambda}=\left\{\omega~\middle|~\lambda_{\min}(K_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{X},\boldsymbol{X}))\geq \frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2},~\forall t\geq 0\right\},\\
\mathcal{A}_{\boldsymbol{\theta}}=\left\{\omega~\middle|~\boldsymbol{\theta}(t)\in\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R'),~\forall t\geq 0\right\},\\
\mathcal{A}_{\boldsymbol{u}}=\left\{\omega~\middle|~\|\boldsymbol{u}^{m}(t)\|_{2}^{2}\leq e^{-\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{n}t}\|\boldsymbol{u}(0)\|_{2}^{2},~\forall t\geq 0\right\}.
\end{gather*}
If we further set $\alpha<1/2$ and $\gamma > 1-\beta/4$, and let $m$ be sufficiently large so that $R'<R$, then we have
\begin{equation*}
\mathcal{A}\supseteq \mathcal{B}\cap\mathcal{R}\cap\mathcal{C}.
\end{equation*}
\end{proposition}
The proof of Proposition \ref{prop: lazy regime} is deferred to the end of Appendix \ref{app:lazy:regime}. To prove it, we need the following three lemmas.
\begin{lemma}\label{lem: smallest eigenvalue leads to fast convergence}
For some $t\geq0$, if there exists some $\lambda_{\min}>0$ such that for all $0\leq s\leq t$,
\begin{equation*}
\lambda_{\min}(K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X}))\geq \lambda_{\min}/2,
\end{equation*}
then
\begin{equation*}
\|\boldsymbol{u}^{m}(t)\|_{2}^{2}\leq e^{-\frac{\lambda_{\min}}{n}t}\| \boldsymbol{u}^{m}(0)\|_{2}^{2}.
\end{equation*}
\end{lemma}
\begin{proof}
Notice that
\begin{equation*}
\begin{aligned}
\frac{\partial\|\boldsymbol{u}^{m}(s)\|_{2}^{2}}{\partial s}=\frac{\partial\|\boldsymbol{u}^{m}(s)\|_{2}^{2}}{\partial\boldsymbol{u}^{m}(s)}\frac{\partial\boldsymbol{u}^{m}(s)}{\partial s}=-\frac{2}{n}\boldsymbol{u}^{m}(s)^{\top} K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X}) \boldsymbol{u}^{m}(s)\leq -\frac{\lambda_{\min}}{n}\|\boldsymbol{u}^{m}(s)\|_{2}^{2}
\end{aligned}
\end{equation*}
leads to
\begin{equation*}
\frac{\partial e^{\frac{\lambda_{\min}}{n}s}\|\boldsymbol{u}^{m}(s)\|_{2}^{2}}{\partial s}=e^{\frac{\lambda_{\min}}{n}s}\left(\frac{\lambda_{\min}}{n}\|\boldsymbol{u}^{m}(s)\|_{2}^{2}+\frac{\partial\|\boldsymbol{u}^{m}(s)\|_{2}^{2}}{\partial s}\right)\leq 0.
\end{equation*}
Thus $e^{\frac{\lambda_{\min}}{n}s}\|\boldsymbol{u}^{m}(s)\|_{2}^{2}$ is non-increasing in $s$, which implies $e^{\frac{\lambda_{\min}}{n}t}\|\boldsymbol{u}^{m}(t)\|_{2}^{2}\leq \|\boldsymbol{u}^{m}(0)\|_{2}^{2}$.
\end{proof}
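A quick numerical sketch of the lemma (with the kernel frozen at a fixed PSD matrix for simplicity, and the flow discretized by small Euler steps; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
K = A @ A.T + np.eye(n)               # fixed PSD surrogate for K_theta(s)(X, X)
lam = np.linalg.eigvalsh(K)[0]        # its smallest eigenvalue

u = rng.standard_normal(n)
u0_sq = u @ u
dt, T = 1e-3, 2.0
for _ in range(int(T / dt)):          # Euler steps of du/ds = -(1/n) K u
    u = u - dt * (K @ u) / n

# the residual decays at rate lambda_min / n
assert u @ u <= np.exp(-lam * T / n) * u0_sq
```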
\begin{lemma}\label{lem: bound smallest eigenvalue}
Conditioning on $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, if we set $\gamma>1-\beta/4$ and $m$ is sufficiently large, then for all $\boldsymbol{\theta}\in\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R)$,
\begin{equation*}
\lambda_{\min}(K_{\boldsymbol{\theta}}^{m}(\boldsymbol{X},\boldsymbol{X}))\geq\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))/2.
\end{equation*}
\end{lemma}
\begin{proof}
Notice that
\begin{equation*}
\begin{aligned}
& \quad \|K_{\boldsymbol{\theta}}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2} \\
& \leq \lVert K_{\boldsymbol{\theta}}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{X},\boldsymbol{X})\rVert_{2}+\lVert K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\rVert_{2}\\
& \leq \|K_{\boldsymbol{\theta}}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{X},\boldsymbol{X})\|_{\mathrm{F}}+\| K_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{\mathrm{F}}\\
& \leq (C_{2}+C_{7})nm^{-(1-\gamma)}\log m,
\end{aligned}
\end{equation*}
where the last inequality follows from Lemma \ref{lem: kernel close to init when params close to init} and Lemma \ref{lem: initial kernel close to fixed}.
If $m$ is large enough that $\|K_{\boldsymbol{\theta}}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}\leq \lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))/2$, then
\begin{equation*}
\begin{aligned}
\lambda_{\min}(K_{\boldsymbol{\theta}}^{m}(\boldsymbol{X},\boldsymbol{X}))&\geq\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))-\|K_{\boldsymbol{\theta}}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}\\
&\geq \lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))/2.
\end{aligned}
\end{equation*}
\end{proof}
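The eigenvalue-perturbation step above is Weyl's inequality; a numerical sanity check on random symmetric matrices (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
K = A @ A.T + 2.0 * np.eye(n)         # surrogate for K_d(X, X), lambda_min >= 2
E = rng.standard_normal((n, n))
E = 0.05 * (E + E.T)                  # small symmetric perturbation
gap = np.linalg.norm(E, 2)

lam = np.linalg.eigvalsh(K)[0]
lam_pert = np.linalg.eigvalsh(K + E)[0]

# Weyl: lambda_min(K + E) >= lambda_min(K) - ||E||_2; once ||E||_2 <= lam/2,
# the perturbed matrix keeps at least half of the smallest eigenvalue
assert lam_pert >= lam - gap - 1e-10
```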
\begin{lemma}\label{lem: smallest eigenvalue lowerbound and lazy training}
For some $t\geq0$, suppose that $\lambda_{\min}(K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X}))\geq \lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))/2$ holds for all $s\in [0,t]$ and we set $\alpha<1/2$, so that $R'<R$ when $m$ is sufficiently large. Then conditioning on the event $\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, we have $\boldsymbol{\theta}(s)\in \boldsymbol{\Theta}(\boldsymbol{\theta}(0),R')$ for all $s\in[0,t]$ when $m$ is sufficiently large.
\end{lemma}
\begin{proof}
We prove the following two statements instead.
\begin{enumerate}
\item If $|\boldsymbol{w}_{r,(j)}(s)-\boldsymbol{w}_{r,(j)}(0)|\leq R$ and $|b_{r}(s)-b_{r}(0)|\leq R$ hold for all $r\in[2m],j\in[d]$ and all $s\in[0,t]$, then $|a_{r}(t)-a_{r}(0)|\leq R'$ holds for all $r\in [2m]$;
\item If $|a_{r}(s)-a_{r}(0)|\leq R$ holds for all $r\in [2m]$ and for all $s\in[0,t]$, then $|\boldsymbol{w}_{r,(j)}(t)-\boldsymbol{w}_{r,(j)}(0)|\leq R',|b_{r}(t)-b_{r}(0)|\leq R'$ hold for all $r\in [2m],j\in[d]$.
\end{enumerate}
We can bound the distance from the initialization by integrating the norm of the gradient, since $\|\boldsymbol{v}(t)-\boldsymbol{v}(0)\|_{2}\leq \int_{0}^{t}\|\dot{\boldsymbol{v}}(s)\|_{2}\mathrm{d} s$ for any differentiable vector-valued function $\boldsymbol{v}(t)$.
The gradient flow of parameters is as follows:
\begin{gather*}
\dot{a}_{r}(s)=-\nabla_{a_{r}}\hat{\mathcal{L}}_{n}(f_{\boldsymbol{\theta}(s)}^{m})=-\frac{1}{n}m^{-1/2}\sum_{i\in[n]}\sigma(h_{\boldsymbol{\theta}(s),r}(\boldsymbol{x}_{i}))\boldsymbol{u}_{i}^{m}(s),\\
\dot{\boldsymbol{w}}_{r,(j)}(s)=-\nabla_{\boldsymbol{w}_{r,(j)}}\hat{\mathcal{L}}_{n}(f_{\boldsymbol{\theta}(s)}^{m})=-\frac{1}{n}m^{-1/2}\sum_{i\in[n]} a_{r}(s)\boldsymbol{x}_{i,(j)}\boldsymbol{1}_{\boldsymbol{\theta}(s),r}(\boldsymbol{x}_{i})\boldsymbol{u}_{i}^{m}(s),\\
\dot{b}_{r}(s)=-\nabla_{b_{r}}\hat{\mathcal{L}}_{n}(f_{\boldsymbol{\theta}(s)}^{m})=-\frac{1}{n}m^{-1/2}\sum_{i\in[n]} a_{r}(s)\boldsymbol{1}_{\boldsymbol{\theta}(s),r}(\boldsymbol{x}_{i})\boldsymbol{u}_{i}^{m}(s).
\end{gather*}
By the Cauchy-Schwarz inequality and Lemma \ref{lem: smallest eigenvalue leads to fast convergence},
\begin{equation*}
\sum_{i\in[n]}|\boldsymbol{u}_{i}^{m}(s)|\leq\sqrt{n}\|\boldsymbol{u}^{m}(s)\|_{2}\leq\sqrt{n}e^{-\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2n}s}\|\boldsymbol{y}\|_{2}.
\end{equation*}
In the following, we suppose $m$ is sufficiently large such that $R\leq R_{B}$.
\noindent 1. Since $|\boldsymbol{w}_{r,(j)}(s)-\boldsymbol{w}_{r,(j)}(0)|\leq R$ and $|b_{r}(s)-b_{r}(0)|\leq R$ hold for all $r\in[2m],j\in[d]$, we have $\sigma(h_{\boldsymbol{\theta}(s),r}(\boldsymbol{x}_{i}))\leq 2(dB+1)R_{B}$. Thus, according to the gradient flow, we have
\begin{equation*}
\begin{aligned}
|\dot{a}_{r}(s)|\leq&\frac{1}{n}m^{-1/2}\max_{i\in[n]}\sigma(h_{\boldsymbol{\theta}(s),r}(\boldsymbol{x}_{i}))\sum_{i\in[n]}|\boldsymbol{u}_{i}^{m}(s)|\\
\leq& \frac{1}{n}m^{-1/2}2(dB+1)R_{B}\sqrt{n}e^{-\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2n}s}\|\boldsymbol{y}\|_{2}
\end{aligned}
\end{equation*}
and
\begin{equation*}
|a_{r}(t)-a_{r}(0)|\leq \int_{0}^{t}|\dot{a}_{r}(s)|\mathrm{d} s\leq\frac{4\sqrt{3}(dB+1)\|\boldsymbol{y}\|_{2}}{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))\sqrt{n}}\sqrt{\frac{\log m}{m}}\leq R'
\end{equation*}
holds for every $r\in [2m]$.
\noindent 2. Since $|a_{r}(s)-a_{r}(0)|\leq R$ holds for all $r\in[2m]$, we have $|a_{r}(s)|\leq2R_{B}$. Thus, we have that
\begin{gather*}
|\dot{\boldsymbol{w}}_{r,(j)}(s)|\leq \frac{1}{n}m^{-1/2}B2R_{B}\sqrt{n}e^{-\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2n}s}\|\boldsymbol{y}\|_{2},\\
|\dot{b}_{r}(s)|\leq \frac{1}{n}m^{-1/2}2R_{B}\sqrt{n}e^{-\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2n}s}\|\boldsymbol{y}\|_{2},
\end{gather*}
and
\begin{gather*}
|\boldsymbol{w}_{r,(j)}(t)-\boldsymbol{w}_{r,(j)}(0)|\leq \int_{0}^{t}|\dot{\boldsymbol{w}}_{r,(j)}(s)|\mathrm{d} s\leq \frac{4\sqrt{3}B\|\boldsymbol{y}\|_{2}}{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))\sqrt{n}}\sqrt{\frac{\log m}{m}}\leq R',\\
|b_{r}(t)-b_{r}(0)|\leq\int_{0}^{t}|\dot{b}_{r}(s)|\mathrm{d} s \leq\frac{4\sqrt{3}\|\boldsymbol{y}\|_{2}}{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))\sqrt{n}}\sqrt{\frac{\log m}{m}}\leq R',
\end{gather*}
hold for all $r\in[2m],j\in[d]$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop: lazy regime}]
For every $\omega\in\mathcal{B}\cap\mathcal{R}\cap\mathcal{C}$, let
$\tau=\min\{\tau_{\lambda},\tau_{\boldsymbol{\theta}},\tau_{\boldsymbol{u}}\}$, where
\begin{gather*}
\tau_{\lambda}=\inf\left\{t~\middle|~\lambda_{\min}(K_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{X},\boldsymbol{X}))<\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))/2\right\},\\
\tau_{\boldsymbol{\theta}}=\inf\left\{t~\middle|~\boldsymbol{\theta}(t)\notin\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R')\right\},\\
\tau_{\boldsymbol{u}}=\inf\left\{t~\middle|~\|\boldsymbol{u}^{m}(t)\|_{2}^{2}>e^{-\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{n}t}\|\boldsymbol{u}(0)\|_{2}^{2}\right\}.
\end{gather*}
We will show that $\tau=\infty$ by contradiction, so that $\omega\in\mathcal{A}$.
Notice that $\tau_{\lambda}\leq\tau_{\boldsymbol{u}}$ and $\tau_{\lambda}\leq\tau_{\boldsymbol{\theta}}$ according to Lemma \ref{lem: smallest eigenvalue leads to fast convergence} and Lemma \ref{lem: smallest eigenvalue lowerbound and lazy training}, respectively, so $\tau=\tau_{\lambda}$. Now suppose $\tau=\tau_{\lambda}<\infty$; then Lemma \ref{lem: bound smallest eigenvalue} implies $\boldsymbol{\theta}(\tau)\notin\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R)$. Since $R'<R$ and $\boldsymbol{\theta}(\cdot)$ is continuous, there must exist some $t_{0}$ with $0\leq t_{0}<\tau$ such that $\boldsymbol{\theta}(t_{0})\notin\boldsymbol{\Theta}(\boldsymbol{\theta}(0),R')$, i.e., $\tau_{\boldsymbol{\theta}}<\tau$, which contradicts the definition of $\tau$.
\end{proof}
\subsection{Proof of Proposition \ref{prop:funct:approx}}
Since we have $f_{\boldsymbol{\theta}(0)}^{m}(\boldsymbol{x})=f_{0}^{\mathtt{NTK}}(\boldsymbol{x})=0$ for every $\boldsymbol{x}\in\mathcal{X}$ under our initialization setting, we can bound the difference between $f_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x})$ and $f_{t}^{\mathtt{NTK}}(\boldsymbol{x})$ by bounding the difference between their time derivatives, i.e.,
\begin{equation*}
|f_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|=\left|\int_{0}^{t}\dot{f}_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{x})-\dot{f}_{s}^{\mathtt{NTK}}(\boldsymbol{x})\mathrm{d} s\right|\leq\int_{0}^{t}|\dot{f}_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{x})-\dot{f}_{s}^{\mathtt{NTK}}(\boldsymbol{x})|\mathrm{d} s.
\end{equation*}
Recall that
\begin{gather*}
\dot{f}_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{x})=-\frac{1}{n}K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{x},\boldsymbol{X})\boldsymbol{u}^{m}(s),\\
\dot{f}_{s}^{\mathtt{NTK}}(\boldsymbol{x})=-\frac{1}{n}K_{d}(\boldsymbol{x},\boldsymbol{X})\boldsymbol{u}^{\mathtt{NTK}}(s),
\end{gather*}
where $\boldsymbol{u}^{m}(s) = f_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X}) - \boldsymbol{y}$ and $\boldsymbol{u}^{\mathtt{NTK}}(s) = f_{s}^{\mathtt{NTK}}(\boldsymbol{X}) - \boldsymbol{y}$. Let
\begin{equation*}
\Delta=\sup_{t\geq 0}\sup_{\boldsymbol{x},\boldsymbol{x}'\in\mathcal{X}}|K_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x},\boldsymbol{x}')-K_{d}(\boldsymbol{x},\boldsymbol{x}')|.
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
& \quad |f_{\boldsymbol{\theta}(t)}^{m}(\boldsymbol{x})-f_{t}^{\mathtt{NTK}}(\boldsymbol{x})|\\
& \leq \frac{1}{n} \int_{0}^{t}|K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{x},\boldsymbol{X})\boldsymbol{u}^{m}(s)-K_{d}(\boldsymbol{x},\boldsymbol{X})\boldsymbol{u}^{\mathtt{NTK}}(s)|\mathrm{d} s \\
& \leq \frac{1}{n}\int_{0}^{t}\left(\| K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{x},\boldsymbol{X})^{\top}-K_{d}(\boldsymbol{x},\boldsymbol{X})^{\top}\|_{2} +\| K_{d}(\boldsymbol{x},\boldsymbol{X})^{\top}\|_{2}\right) \| \boldsymbol{u}^{m}(s)-\boldsymbol{u}^{\mathtt{NTK}}(s)\|_{2} \mathrm{d} s\\
& \quad + \frac{1}{n} \int_{0}^{t}\| K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{x},\boldsymbol{X})^{\top}-K_{d}(\boldsymbol{x},\boldsymbol{X})^{\top}\|_{2}\| \boldsymbol{u}^{\mathtt{NTK}}(s)\|_{2}\mathrm{d} s\\
& \leq \frac{1}{n}\cdot(\sqrt{n}\Delta+\sqrt{n}C) \cdot \int_{0}^{t}\|\boldsymbol{y}\|_{2}\Delta s e^{-\frac{1}{n}(\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))-n\Delta)s} \mathrm{d} s \\
& \quad + \frac{1}{n}\cdot\sqrt{n}\Delta\cdot \int_{0}^{t} e^{-\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))s}\|\boldsymbol{y}\|_{2} \mathrm{d} s\\
& \leq \frac{1}{n}\cdot(\sqrt{n}\Delta+\sqrt{n}C) \cdot\|\boldsymbol{y}\|_{2}\Delta\left(\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2n}\right)^{-2} \\
& \quad + \frac{1}{n}\cdot\sqrt{n}\Delta\cdot\left(\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))\right)^{-1}\|\boldsymbol{y}\|_{2} \\
& \leq \frac{\epsilon}{2}+\frac{\epsilon}{2} = \epsilon,
\end{aligned}
\end{equation*}
where $C>0$ is a constant depending only on $B$, we apply Lemma \ref{lem: u_NTK} and Lemma \ref{lem: u-u_NTK} in the third inequality, the fourth inequality uses $\int_{0}^{\infty}se^{-as}\mathrm{d} s=a^{-2}$ with $a=\frac{1}{n}(\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))-n\Delta)\geq\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2n}$, and the last line follows from Proposition \ref{prop:kernel:approx} that for sufficiently large $m$, we have
\begin{equation*}
\Delta\leq\min\left\{C,\frac{\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2n},\frac{\epsilon(\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X})))^{2}}{16Cn^{3/2}\|\boldsymbol{y}\|_{2}},\frac{\epsilon\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))}{2\sqrt{n}\|\boldsymbol{y}\|_{2}}\right\},
\end{equation*}
with probability at least $1-\delta$ over initialization.
\begin{lemma}\label{lem: u_NTK}
For all $t\geq0$, we have $\|\boldsymbol{u}^{\mathtt{NTK}}(t)\|_{2} \leq e^{-\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))t}\|\boldsymbol{y}\|_{2}$.
\end{lemma}
\begin{proof}
Recall that $\dot{\boldsymbol{u}}^{\mathtt{NTK}}(t)=-\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{u}^{\mathtt{NTK}}(t)$. Notice that $K_{d}(\boldsymbol{X},\boldsymbol{X})$ is fixed, so we can write the explicit form $\boldsymbol{u}^{\mathtt{NTK}}(t)=e^{-\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})t}\boldsymbol{u}^{\mathtt{NTK}}(0)$, where $\boldsymbol{u}^{\mathtt{NTK}}(0)=-\boldsymbol{y}$. Then $\| \boldsymbol{u}^{\mathtt{NTK}}(t)\|_{2}\leq e^{-\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))t}\| \boldsymbol{y}\|_{2}$, since $\|e^{-\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})t}\|_{2}=e^{-\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))t}$.
\end{proof}
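The closed-form solution can be checked numerically; the sketch below (with a random PSD surrogate for $K_{d}(\boldsymbol{X},\boldsymbol{X})$; illustration only) computes $e^{-Kt/n}$ through the eigendecomposition and verifies the norm bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
K = A @ A.T + np.eye(n)                        # surrogate for K_d(X, X)
y = rng.standard_normal(n)
lam = np.linalg.eigvalsh(K)[0]

t = 3.0
w, V = np.linalg.eigh(K)                       # K = V diag(w) V^T
u_t = V @ (np.exp(-w * t / n) * (V.T @ (-y)))  # u(t) = exp(-K t / n) u(0)

assert np.linalg.norm(u_t) <= np.exp(-lam * t / n) * np.linalg.norm(y) + 1e-12
```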
\begin{lemma}\label{lem: u-u_NTK}
For all $t\geq0$, we have
\begin{equation*}
\|\boldsymbol{u}^{m}(t)-\boldsymbol{u}^{\mathtt{NTK}}(t)\|_{2} \leq \|\boldsymbol{y}\|_{2}\Delta t e^{-\frac{1}{n}(\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))-n\Delta)t}.
\end{equation*}
\end{lemma}
\begin{proof}
Recall that for all $s\geq 0$,
\begin{gather*}
\dot{\boldsymbol{u}}^{m}(s)=-\frac{1}{n}K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{u}^{m}(s)\\
\text{~and~}\dot{\boldsymbol{u}}^{\mathtt{NTK}}(s)=-\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{u}^{\mathtt{NTK}}(s).
\end{gather*}
Notice that
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d} s}e^{\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})s}\left(\boldsymbol{u}^{m}(s)-\boldsymbol{u}^{\mathtt{NTK}}(s)\right)=\frac{1}{n}e^{\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})s}(K_{d}(\boldsymbol{X},\boldsymbol{X})-K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X}))\boldsymbol{u}^{m}(s).
\end{equation*}
Since $\boldsymbol{u}^{m}(0)=\boldsymbol{u}^{\mathtt{NTK}}(0)$, integrating gives
\begin{equation*}
e^{\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})t}\left(\boldsymbol{u}^{m}(t)-\boldsymbol{u}^{\mathtt{NTK}}(t)\right)=\frac{1}{n}\int_{0}^{t}e^{\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})s}(K_{d}(\boldsymbol{X},\boldsymbol{X})-K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X}))\boldsymbol{u}^{m}(s)\mathrm{d} s,
\end{equation*}
then
\begin{equation*}
\boldsymbol{u}^{m}(t)-\boldsymbol{u}^{\mathtt{NTK}}(t)=\frac{1}{n}\int_{0}^{t}e^{\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})(s-t)}(K_{d}(\boldsymbol{X},\boldsymbol{X})-K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X}))\boldsymbol{u}^{m}(s)\mathrm{d} s.
\end{equation*}
Bounding the norm implies
\begin{equation*}
\begin{aligned}
&\|\boldsymbol{u}^{m}(t)-\boldsymbol{u}^{\mathtt{NTK}}(t)\|_{2}\\
\leq&\frac{1}{n}\int_{0}^{t}\| e^{\frac{1}{n}K_{d}(\boldsymbol{X},\boldsymbol{X})(s-t)}(K_{d}(\boldsymbol{X},\boldsymbol{X})-K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X}))\boldsymbol{u}^{m}(s)\|_{2}\mathrm{d} s\\
\leq& \frac{1}{n}\int_{0}^{t}e^{\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))(s-t)}\|K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}\| \boldsymbol{u}^{\mathtt{NTK}}(s)\|_{2}\mathrm{d} s\\
&+\frac{1}{n}\int_{0}^{t}e^{\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))(s-t)}\| K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}\| \boldsymbol{u}^{m}(s)-\boldsymbol{u}^{\mathtt{NTK}}(s)\|_{2}\mathrm{d} s.
\end{aligned}
\end{equation*}
For all $s\geq 0$, let
\begin{gather*}
u(s)=e^{\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))s}\| \boldsymbol{u}^{m}(s)-\boldsymbol{u}^{\mathtt{NTK}}(s)\|_{2},\\
\alpha(s)=\frac{1}{n}\int_{0}^{s}e^{\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))s'}\| K_{\boldsymbol{\theta}(s')}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}\| \boldsymbol{u}^{\mathtt{NTK}}(s')\|_{2}\mathrm{d} s',\\
\text{~and~}\beta(s)=\frac{1}{n}\|K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}.
\end{gather*}
Notice that $\beta(\cdot)$ is non-negative, $\alpha(\cdot)$ is non-decreasing, and $u(\cdot)$ satisfies $u(s)\leq \alpha(s)+\int_{0}^{s}\beta(s')u(s')\mathrm{d} s'$ for all $s\geq 0$. Applying Grönwall's inequality \cite{walter1970differential} yields $u(s) \leq \alpha(s)e^{\int_{0}^{s}\beta(s')\mathrm{d} s'}$ for all $s\geq 0$. Then we have
\begin{equation*}
\begin{aligned}
& \quad \|\boldsymbol{u}^{m}(t)-\boldsymbol{u}^{\mathtt{NTK}}(t)\|_{2}\\
& \leq \frac{1}{n}\int_{0}^{t}e^{\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))(s-t)}\|K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}\| \boldsymbol{u}^{\mathtt{NTK}}(s)\|_{2}\mathrm{d} s\\
& \quad \cdot e^{\frac{1}{n}\int_{0}^{t} \| K_{\boldsymbol{\theta}(s)}^{m}(\boldsymbol{X},\boldsymbol{X})-K_{d}(\boldsymbol{X},\boldsymbol{X})\|_{2}\mathrm{d} s} \\
& \leq \frac{1}{n}\cdot n\Delta\cdot e^{-\frac{1}{n}\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))t}\|\boldsymbol{y}\|_{2}\cdot t\cdot e^{\frac{1}{n}\cdot n\Delta\cdot t} = \|\boldsymbol{y}\|_{2}\Delta t e^{-\frac{1}{n}(\lambda_{\min}(K_{d}(\boldsymbol{X},\boldsymbol{X}))-n\Delta)t},
\end{aligned}
\end{equation*}
where Lemma \ref{lem: u_NTK} is applied in the second inequality.
\end{proof}
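Grönwall's inequality in the integral form used here can be illustrated numerically by simulating the equality case $u'=\alpha'+\beta u$ and comparing with the bound $\alpha e^{\int\beta}$ (Euler discretization; illustration only):

```python
import numpy as np

# Integral-form Gronwall: u(s) <= alpha(s) + int_0^s beta u  implies
# u(s) <= alpha(s) exp(int_0^s beta), for non-decreasing alpha and beta >= 0.
ds = 1e-4
s = np.arange(0.0, 2.0, ds)
beta = 0.5 + 0.3 * np.sin(s) ** 2       # non-negative
alpha = 1.0 + s**2                      # non-decreasing

u = np.empty_like(s)
u[0] = alpha[0]
for k in range(len(s) - 1):             # equality case: u' = alpha' + beta u
    u[k + 1] = u[k] + (alpha[k + 1] - alpha[k]) + beta[k] * u[k] * ds

bound = alpha * np.exp(np.cumsum(beta) * ds)
assert np.all(u <= bound + 1e-6)
```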
\section{Proof of Theorem \ref{thm:nn:early:stopping:d=1}}\label{app:proof:nn:es}
We recollect some essentials of spectral algorithms here for the convenience of the reader. For a thorough introduction to spectral algorithms, we refer the interested reader to \cite{lin2020optimal} and the references therein. For simplicity, let $\mathcal{X} \subseteq \mathbb{R}^{d}$ be a compact set and $K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ a continuous and measurable kernel function. Denote the RKHS of $K$ by $\mathcal{H}$. Assume that the kernel $K$ satisfies $\sup_{\boldsymbol{x} \in \mathcal{X}}K(\boldsymbol{x},\boldsymbol{x}) \leq \kappa^{2}$ for some $\kappa<\infty$. Notice that $K_{d}$ satisfies all the assumptions above. Let $K_{\boldsymbol{x}}:\mathbb{R} \to \mathcal{H}$ be the mapping $x \mapsto xK(\cdot,\boldsymbol{x})$, with adjoint $K_{\boldsymbol{x}}^{*}:\mathcal{H} \to \mathbb{R}$ given by $K_{\boldsymbol{x}}^{*}: f \mapsto f(\boldsymbol{x})$. We further introduce $T_{\boldsymbol{x}} = K_{\boldsymbol{x}}K_{\boldsymbol{x}}^{*}: \mathcal{H} \to \mathcal{H}$ and $T_{\boldsymbol{X}}=\frac{1}{n}\sum_{i=1}^{n}T_{\boldsymbol{x}_{i}}$.
\begin{definition}[Filter functions]
Let $\Lambda$ be a subset of $[0,\infty) \cup \{\infty\}$ and define $\infty^{-1}=0$. The functions $\{\mathcal{G}_{\lambda}:[0,\kappa^{2}] \to [0,\infty), \lambda \in \Lambda \}$ are the filter functions with qualification $\tau \geq 1$ if there exists absolute constant $E$ and constant $F_{\tau}$ depending on $\tau$ such that
\begin{equation*}
\sup_{\lambda \in \Lambda}\sup_{\alpha \in [0,1]}\sup_{u \in [0,\kappa^{2}]}u^{\alpha}\mathcal{G}_{\lambda}(u) \lambda^{1-\alpha} \leq E
\end{equation*}
and
\begin{equation*}
\sup_{\lambda \in \Lambda}\sup_{\alpha \in [0,\tau]}\sup_{u \in [0,\kappa^{2}]}u^{\alpha}|1-u\mathcal{G}_{\lambda}(u)| \lambda^{-\alpha} \leq F_{\tau}.
\end{equation*}
\end{definition}
\begin{definition}[Spectral algorithms]
Given the filter functions $\mathcal{G}_{\lambda}(u)$, define
\begin{equation*}
\mathcal{G}_{\lambda}(T_{\boldsymbol{X}})=\sum_{j=1}^{\infty}\mathcal{G}_{\lambda}(\hat{\lambda}_{j})\langle \cdot , \hat{e}_{j}\rangle_{\mathcal{H}}\hat{e}_{j},
\end{equation*}
where $\{\hat{\lambda}_{j},j \geq 1\}$ and $\{\hat{e}_{j},j \geq 1\}$ are eigenvalues and eigenfunctions of $T_{\boldsymbol{X}}$. The estimator $\hat{f}_{\lambda}$ reads as follows,
\begin{equation*}
\hat{f}_{\lambda} = \mathcal{G}_{\lambda}(T_{\boldsymbol{X}})\frac{1}{n}\sum_{i=1}^{n}y_{i}K(\cdot,\boldsymbol{x}_{i}).
\end{equation*}
\end{definition}
\begin{lemma}
The filter function corresponding to gradient flow is
\begin{equation*}
\mathcal{G}_{\lambda}(u) = (1-e^{-u/\lambda})/u,
\end{equation*}
where $\Lambda=(0,\infty)\cup\{\infty\}$, $\tau$ can be any real number greater than or equal to $1$, $E=1$, and $F_{\tau}=(\tau/e)^{\tau}$.
\end{lemma}
\begin{proof}
By the definition of gradient flow,
\begin{equation*}
\dot{f}=-\frac{\partial \hat{\mathcal{L}}_{n}}{\partial f}=-\frac{1}{n}\sum_{i=1}^{n}-K_{\boldsymbol{x}_{i}}(y_{i}-f(\boldsymbol{x}_{i}))=-(T_{\boldsymbol{X}}f-\frac{1}{n}\sum_{i=1}^{n}y_{i}K(\cdot,\boldsymbol{x}_{i})),
\end{equation*}
where
$\hat{\mathcal{L}}_{n}(f)=\frac{1}{2n}\sum_{i=1}^{n}(y_{i}-f(\boldsymbol{x}_{i}))^{2}$ and $f_{0}=0 \in \mathcal{H}$. Hence
\begin{equation*}
f_{t}=T_{\boldsymbol{X}}^{-1}(I-e^{-tT_{\boldsymbol{X}}})\frac{1}{n}\sum_{i=1}^{n}y_{i}K(\cdot,\boldsymbol{x}_{i}),
\end{equation*}
where $T_{\boldsymbol{X}}^{-1}$ is the Moore-Penrose inverse of $T_{\boldsymbol{X}}$. So $\mathcal{G}_{\lambda}(u)=(1-e^{-u/\lambda})/u$, where we parameterize $t=1/\lambda$. One can verify that $\mathcal{G}_{\lambda}(u)=\sum_{j=1}^{\infty}\frac{(-1)^{j-1}(1/\lambda)^{j}}{j!}u^{j-1}$ is continuous for all $u \geq 0$, so $\mathcal{G}_{\lambda}(\cdot)$ can be applied to $T_{\boldsymbol{X}}$ by Theorem 5.1.11 of \cite{simon2015operator}. One can also check that $u^{\alpha}\mathcal{G}_{\lambda}(u)\lambda^{1-\alpha} \leq 1$, using $1-e^{-x} \leq x$ for all $x \geq 0$, and that $u^{\alpha}|1-u\mathcal{G}_{\lambda}(u)|\lambda^{-\alpha} = (u/\lambda)^{\alpha}e^{-u/\lambda}\leq (\alpha/e)^{\alpha}$.
\end{proof}
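The constants $E=1$ and $F_{\tau}=(\tau/e)^{\tau}$ can also be verified numerically for the gradient-flow filter (grid search over $u$; illustration only):

```python
import numpy as np

lam = 0.37                            # stopping time t = 1/lam
u = np.linspace(1e-8, 4.0, 200_001)
G = -np.expm1(-u / lam) / u           # (1 - exp(-u/lam)) / u

# E = 1: sup_u u^a G(u) lam^(1-a) <= 1 for a in [0, 1]
for a in np.linspace(0.0, 1.0, 11):
    assert np.max(u**a * G * lam ** (1 - a)) <= 1 + 1e-9

# F_tau = (tau/e)^tau: sup_u u^a |1 - u G(u)| lam^(-a) <= (a/e)^a, a in [0, tau]
for a in np.linspace(0.0, 3.0, 13):
    resid = np.abs(1.0 - u * G)       # equals exp(-u/lam)
    assert np.max(u**a * resid * lam ** (-a)) <= (a / np.e) ** a + 1e-9
```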
\begin{proposition}[Corollary 4.4 in \cite{lin2020optimal}]\label{prop:early:stopping}
Suppose Assumption \ref{assump:f_star} holds and we observe $n$ i.i.d. samples $\{(\boldsymbol{x}_{i},y_{i}),i\in[n]\}$ from the model \eqref{equation:true_model}. For any given $\delta\in(0,1)$, if the training process is stopped at $t_{\star} \propto n^{2/3}$ for the NTK regression, then for sufficiently large $n$, there exists a constant $C$ independent of $\delta$ and $n$ such that
\begin{equation*}
\mathcal{E}(f_{t_{\star}}^{\mathtt{NTK}})\leq Cn^{-\frac{2}{3}}\log^{2}\frac{6}{\delta}
\end{equation*}
holds with probability at least $1-\delta$ over the training data.
\end{proposition}
Setting $\epsilon=Cn^{-\frac{2}{3}}\log^{2}\frac{6}{\delta}$ in Theorem \ref{thm:risk:approx} yields
\begin{equation*}
\begin{aligned}
\mathcal{E}(f_{\boldsymbol{\theta}(t_{\star})}^{m}) &\leq |\mathcal{E}(f_{\boldsymbol{\theta}(t_{\star})}^{m})-\mathcal{E}(f_{t_{\star}}^{\mathtt{NTK}})|+\mathcal{E}(f_{t_{\star}}^{\mathtt{NTK}}) \\
&\leq 2Cn^{-\frac{2}{3}}\log^{2}\frac{6}{\delta}.
\end{aligned}
\end{equation*}
\section{Proof of Theorem \ref{thm:bad_gen}}
\begin{proof}[Proof of Theorem \ref{thm:bad_gen} $i)$]
Theorem \ref{thm:bad_gen} $i)$ is a direct corollary of the following lemmas and Proposition \ref{prop:funct:approx}.
\begin{lemma}[Overfitted NTK model can be approximated by the linear interpolation]\label{LI}
Suppose that we have observed $n$ data $\{(x_{i},y_{i}), i\in [n]\}$ from the model \eqref{equation:true_model} with $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With probability at least $1-C_1/n$, the overfitted NTK model with zero initialization, $f^{\mathtt{NTK}}_{\infty}(x) =K_1(x,\boldsymbol{X})K_1^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}$, can be approximated by the linear interpolation, i.e.,
\begin{align}
\sup_{x\in [0,1]}|f_{\infty}^{\mathtt{NTK}}(x)-f_{\mathtt{LI}}(x)|\leq C_2\sqrt{\log n} /(n-1)^{2}
\end{align}
for some absolute constants $C_1$, $C_2$.
\end{lemma}
\begin{lemma}\label{lemma:NTK_t_vs_NTK_inf}
Suppose that we have observed $n$ data $\{(x_{i},y_{i}), i\in [n]\}$ from the model \eqref{equation:true_model} and $x_i=\frac{i-1}{n-1}$, $i\in [n]$. If $t\geq C_1 n^2 \log n$, we have
\begin{equation}
\sup_{x\in [0,1]}|f_{t}^{\mathtt{NTK}}(x)-f_{\infty}^{\mathtt{NTK}}(x)|\leq \frac{C_2}{(n-1)^3}
\end{equation}
for some absolute constants $C_1$, $C_2$.
\end{lemma}
\end{proof}
\begin{proof}[Proof of Lemma \ref{LI}]
Throughout this proof, we abbreviate $K=K_{1}(\boldsymbol{X},\boldsymbol{X})$ and $K(x,\boldsymbol{X})=K_{1}(x,\boldsymbol{X})$. Since
\begin{equation*}
y_{i} = K(x_i,\boldsymbol{X})K^{-1}\boldsymbol{y}
\end{equation*}
and
\begin{equation*}
y_{i+1} = K(x_{i+1},\boldsymbol{X})K^{-1}\boldsymbol{y},
\end{equation*}
Taylor expansion and the intermediate value theorem imply that for any $x\in(x_{i},x_{i+1})$, there exist $\xi_{i}$ and $\hat{\xi}_{i}\in (x_{i},x_{i+1})$ such that
\begin{align}\label{equation:taylor_expansion}
K(x,\boldsymbol{X})K^{-1}\boldsymbol{y} -y_{i}&= (x-x_i)K_{+}'(x_i,\boldsymbol{X})K^{-1}\boldsymbol{y} + \frac{(x-x_i)^2}{2}K''(\xi_i,\boldsymbol{X}) K^{-1}\boldsymbol{y},\\
y_{i+1}-y_{i}&=(x_{i+1}-x_i)K_{+}'(x_i,\boldsymbol{X})K^{-1}\boldsymbol{y} + \frac{(x_{i+1}-x_i)^2}{2}K''(\hat{\xi}_i,\boldsymbol{X})K^{-1}\boldsymbol{y}
\end{align}
where
$K'_{+}(x_{i},\boldsymbol{X})=\lim_{x\rightarrow x_{i}^{+}}\frac{\partial K(x,\boldsymbol{X})}{\partial x}$. Thus,
\begin{equation}
\begin{aligned}\label{eq:higher:order}
K(x,\boldsymbol{X})K^{-1}\boldsymbol{y}& - y_i - \frac{(x-x_i)}{x_{i+1}-x_i} (y_{i+1}-y_{i}) \\
&=-\frac{(x-x_i)}{x_{i+1}-x_i}\frac{(x_{i+1}-x_i)^2}{2}K''(\hat{\xi}_i,\boldsymbol{X})K^{-1}\boldsymbol{y} \\
&+\frac{(x-x_i)^2}{2}K''(\xi_i,\boldsymbol{X}) K^{-1}\boldsymbol{y}.
\end{aligned}
\end{equation}
The second-order derivative can be bounded by the following lemma:
\begin{lemma}[Bounded second order derivative of overfitted NTK regression]\label{prop:bound_second_derivative}
Suppose that we have observed $n$ data points $\{(x_{i},y_{i}), i\in [n]\}$ from the model \eqref{equation:true_model} with $x_i=\frac{i-1}{n-1}$, $i\in [n]$. With probability at least $1-\frac{2}{n}$, we have
\begin{align}
\sup_{x\in (x_i,x_{i+1})}|K_1''(x,\boldsymbol{X})K_1^{-1}(\boldsymbol{X},\boldsymbol{X})\boldsymbol{y}|\leq C\sqrt{\log n}
\end{align}
for any $i\in[n-1]$ and some absolute constant $C$.
\end{lemma}
By Lemma \ref{prop:bound_second_derivative}, $|K''(\xi_i,\boldsymbol{X}) K^{-1}\boldsymbol{y}|\leq C\sqrt{\log n}$ for some constant $C$, and the RHS of \eqref{eq:higher:order} is bounded by $ \frac{C\sqrt{\log n} }{(n-1)^{2}}$ for some constant $C$.
\end{proof}
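The Taylor-remainder argument above mirrors the classical piecewise-linear interpolation error bound $\sup_x|f(x)-f_{\mathtt{LI}}(x)|\leq h^{2}\sup|f''|/8$ with grid spacing $h=1/(n-1)$. A quick numerical check, using $f=\sin$ as an arbitrary smooth test function:

```python
import numpy as np

# Piecewise-linear interpolation error bound: sup|f - f_LI| <= h^2 sup|f''|/8,
# checked for f = sin on the uniform grid x_i = (i-1)/(n-1).
n = 51
h = 1.0 / (n - 1)
X = np.linspace(0.0, 1.0, n)
xs = np.linspace(0.0, 1.0, 5001)
f_li = np.interp(xs, X, np.sin(X))     # linear interpolation through (x_i, f(x_i))
err = np.max(np.abs(np.sin(xs) - f_li))
assert err <= h**2 * 1.0 / 8 + 1e-12   # sup|sin''| <= 1 on [0, 1]
```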
\begin{proof}[Proof of Lemma \ref{lemma:NTK_t_vs_NTK_inf}]
$K(x,x')$ is bounded by a constant. By Lemma \ref{lem:bound_y}, the $|y_i|$ are also bounded by $C\sqrt{\log n}$ with probability at least $1-2/n$. By Theorem \ref{thm:spectral:d=1:L=1}, $\lambda_{\min}=C_1/n$ for some constant $C_1$. If $t\geq C_2 n^2 \log(n^6)=C_3n^2\log n$ for some constants $C_2$, $C_3$, we have
\begin{equation*}
\begin{split}
|f_{t}^{\mathtt{NTK}}(x)-f_{\infty}^{\mathtt{NTK}}(x)|&= |K_1(x,\boldsymbol{X})K_1^{-1}(\boldsymbol{X},\boldsymbol{X})e^{-\frac{tK_1(\boldsymbol{X},\boldsymbol{X})}{n}}\boldsymbol{y}|\\
&\leq e^{-\frac{t \lambda_{\min}}{n} } \lambda_{\min}^{-1}n\sqrt{\log n} \\
&\leq \frac{C_4}{(n-1)^3}
\end{split}
\end{equation*}
for some constant $C_4$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:bad_gen} $ii)$]
Theorem \ref{thm:bad_gen} $ii)$ is a direct corollary of the following lemma and Theorem \ref{thm:bad_gen} $i)$.
\begin{lemma}[Linear Interpolation cannot Generalize Well]\label{LI_not_good}
Suppose that we have observed $n$ data points $\{(x_{i},y_{i}), i\in [n]\}$ from the model \eqref{equation:true_model} with $x_i=\frac{i-1}{n-1}$, $i\in [n]$. Let $f_{\mathtt{LI}}$ be the linear interpolation estimator. Then there exists a positive constant $C$ such that
\begin{equation}
\mathcal{E}( f_{\mathtt{LI}}) \geq \frac{1}{3}\sigma^2
\end{equation}
holds with probability at least $1-\frac{C}{n}$.
\end{lemma}
\begin{comment}
By Theorem \ref{thm:bad_gen} $i)$ and Lemma \ref{LI_not_good}, if $n\geq N$ for some large $N$, we have
\begin{align}
\mathcal{E}(f_{\boldsymbol{\theta}(t)}^{m}(x)) \geq \mathcal{E}(f_{\mathtt{LI}}(x)) - \int_0^1 (f^m_{\theta(t)}(x)-f_{LI}(x))^2 \mathrm{d} x \geq \frac{1}{6}\sigma^2
\end{align}
with probability at least $1-\frac{C}{n}$ for some constant $C$.
\end{comment}
\end{proof}
\begin{proof}[Proof of Lemma \ref{LI_not_good}]
For $x\in[x_i,x_{i+1}]$, the linear interpolation takes the form
\[
f_{\mathtt{LI}}(x)= \lambda_i(x)y_{i} + (1-\lambda_i(x))y_{i+1},
\]
where $\lambda_i(x)=\frac{x_{i+1}-x}{x_{i+1}-x_{i}}$.
Denote $b^2(x)=(\mathbf{E}_{\boldsymbol{\varepsilon}}f_{\mathtt{LI}}(x)-f_{\star}(x))^{2}$ and $\sigma^{2}(x)=(f_{\mathtt{LI}}(x)-\mathbf{E}_{\boldsymbol{\varepsilon}}f_{\mathtt{LI}}(x))^2$, the bias and variance terms respectively, where $\mathbf{E}_{\boldsymbol{\varepsilon}}$ denotes expectation with respect to the noise $\varepsilon_{1},\dots,\varepsilon_{n}$. Then the excess risk of the linear interpolation $\mathcal{E}(f_{\mathtt{LI}})$ can be written as
\begin{align}
\mathcal{E}(f_{\mathtt{LI}}) &=\int_{0}^{1} (b(x)+\sigma(x))^2 \mathrm{d} x \\
&= c_1 + \sum_{i=1}^{n-1} \frac{c_{2,i} }{n-1} (\varepsilon_i + \varepsilon_{i+1}) + \frac{1}{3(n-1)}\sum_{i=1}^{n-1} (\varepsilon_{i}^{2} + \varepsilon_{i+1}^{2} + \varepsilon_{i}\varepsilon_{i+1}).
\end{align}
for some positive constant $c_1$ and a uniformly bounded sequence $\{c_{2,i}\}_{i=1}^{n-1}$. The last equality follows from Lemma \ref{lem:risk_LI_fomular}. The expectation of $\mathcal{E}(f_{\mathtt{LI}})$ satisfies
\begin{align}
\mathbf{E}_{\boldsymbol{\varepsilon}}\mathcal{E}(f_{\mathtt{LI}}) =c_1 + \frac{2}{3}\sigma^2 \geq\frac{2}{3}\sigma^2
\end{align}
and the variance of $\mathcal{E}(f_{\mathtt{LI}})$ satisfies
\begin{align}
\operatorname{Var}_{\boldsymbol{\varepsilon}}(\mathcal{E}(f_{\mathtt{LI}})) \leq \frac{c_3}{n}
\end{align}
for some constant $c_3$. By Chebyshev's inequality, we have
\begin{align}
\mathbf{P} (|\mathcal{E}(f_{\mathtt{LI}})-\mathbf{E}_{\boldsymbol{\varepsilon}}(\mathcal{E}(f_{\mathtt{LI}}))|\geq \frac{1}{3}\sigma^2) \leq \frac{\operatorname{Var}_{\boldsymbol{\varepsilon}}(\mathcal{E}(f_{\mathtt{LI}}))}{\frac{1}{9}\sigma^4}.
\end{align}
Thus, we conclude that with probability at least $1-\frac{c_4}{n}$,
\begin{align}
\mathcal{E}(f_{\mathtt{LI}}) \geq \frac{1}{3}\sigma^2
\end{align}
for some constant $c_4$.
\end{proof}
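The $\tfrac{2}{3}\sigma^{2}$ mean of the noise term computed above can be checked by Monte Carlo; the sample size and trial count below are arbitrary:

```python
import numpy as np

# Monte Carlo check of the 2*sigma^2/3 noise floor: the expectation of
# (1/(3(n-1))) sum_i (eps_i^2 + eps_{i+1}^2 + eps_i eps_{i+1}) is 2*sigma^2/3.
rng = np.random.default_rng(1)
n, sigma, trials = 50, 1.0, 20000
e = sigma * rng.standard_normal((trials, n))
vals = (e[:, :-1]**2 + e[:, 1:]**2 + e[:, :-1] * e[:, 1:]).sum(axis=1) / (3 * (n - 1))
assert abs(vals.mean() - 2 * sigma**2 / 3) < 0.02
```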
\begin{lemma}\label{lem:risk_LI_fomular}
Denote $b^2(x)=(\mathbf{E}_{\boldsymbol{\varepsilon}}f_{\mathtt{LI}}(x)-f_{\star}(x))^{2}$ and $\sigma^{2}(x)=(f_{\mathtt{LI}}(x)-\mathbf{E}_{\boldsymbol{\varepsilon}}f_{\mathtt{LI}}(x))^2$. Then $ \mathcal{E}(f_{\mathtt{LI}})= \int_{0}^{1} (b(x)+\sigma(x))^2 \mathrm{d} x$ can be reformulated as
\begin{align}
\mathcal{E}(f_{\mathtt{LI}})= c_1 + \sum_{i=1}^{n-1} \frac{c_{2,i} }{n-1} (\varepsilon_i + \varepsilon_{i+1}) + \frac{1}{3(n-1)}\sum_{i=1}^{n-1} (\varepsilon_{i}^{2} + \varepsilon_{i+1}^{2} + \varepsilon_{i}\varepsilon_{i+1})
\end{align}
for some positive constant $c_1$ and a uniformly bounded sequence $\{c_{2, i}\}_{i=1}^{n-1}$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:risk_LI_fomular}]
Denote
\begin{align}
b_i(x) &= (1-\frac{x-x_i}{x_{i+1}-x_i})(f_{\star}(x_i) - f_{\star}(x)) + \frac{x-x_i}{x_{i+1}-x_i}(f_{\star}(x_{i+1}) - f_{\star}(x))\\
\sigma_i(x) &= (1-\frac{x-x_i}{x_{i+1}-x_i}) \varepsilon_i + \frac{x-x_i}{x_{i+1}-x_i} \varepsilon_{i+1}.
\end{align}
Then
\begin{align}
\mathcal{E}(f_{\mathtt{LI}})&=\int_{0}^{1} \left(b^2(x)+2b(x)\sigma(x) +\sigma^2(x)\right) \mathrm{d} x\\
&=\sum_{i=1}^{n-1}\int_{x_i}^{x_{i+1}} \left(b^{2}_i(x) + 2b_i(x)\sigma_i(x) +\sigma^2_i(x)\right) \mathrm{d} x.
\end{align}
Since $f_{\star}$ is bounded, $b_i(x)\in (C_1, C_2)$, where $C_1$ and $C_2$ depend on $f_{\star}$. Thus, by the mean value theorem, there exist a positive constant $c_1$ and a uniformly bounded sequence $\{c_{2, i}\}_{i=1}^{n-1}$ such that
\begin{align}
\mathcal{E}(f_{\mathtt{LI}})= c_1 + \sum_{i=1}^{n-1} \frac{c_{2,i} }{n-1} (\varepsilon_i + \varepsilon_{i+1}) + \frac{1}{3(n-1)}\sum_{i=1}^{n-1} (\varepsilon_{i}^{2} + \varepsilon_{i+1}^{2} + \varepsilon_{i}\varepsilon_{i+1}).
\end{align}
\end{proof}
\subsection{Technical Lemmas}\label{subsection:tech_lemma}
In the following content, to simplify the notations, denote $K=K(\boldsymbol{X},\boldsymbol{X})$, $G=G_1(\boldsymbol{X},\boldsymbol{X})$ and $\Pi=2\Pi_1(\boldsymbol{X},\boldsymbol{X})$.
\begin{proof}[Proof of Lemma \ref{prop:bound_second_derivative}]
Let $\xi \in (x_i,x_{i+1})$. We only present the proof for $2\leq i \leq n-2$. When $i\in\{1,n-1\}$, one can prove the statement in a similar way.
Let $e_{k}$ be the $k$-th vector in the standard basis of $\mathbb{R}^{n}$. Denote $I_{k}=e_{k}e_{k}^{\top}$. Let us consider the rank one decomposition of $\Pi$:
\begin{equation}\label{eqn:decomposition:reoder}
\begin{aligned}
\Pi&=\sum_{k\not \in \{1,i,i+1,n\}}I_{k}\Pi+I_{i}\Pi+I_{i+1}\Pi+I_{1}\Pi+I_{n}\Pi\\
&=\underbrace{I_{2}\Pi}_{\triangleq \Pi_{1}}+\cdots+\underbrace{I_{i-1}\Pi}_{\triangleq \Pi_{i-2}}+\underbrace{I_{i+2}\Pi}_{\triangleq \Pi_{i-1}}+\cdots+\underbrace{I_{n-1}\Pi}_{\triangleq\Pi_{n-4}}+\underbrace{I_{i}\Pi}_{\triangleq\Pi_{n-3}}+\underbrace{I_{i+1}\Pi}_{\triangleq\Pi_{n-2}}+\underbrace{I_{1}\Pi}_{\triangleq\Pi_{n-1}}+\underbrace{I_{n}\Pi}_{\triangleq\Pi_{n}}.
\end{aligned}
\end{equation}
We denote by $S$ (resp. $S^{-1}$) the index transformation (resp. its inverse) appearing in \eqref{eqn:decomposition:reoder}, i.e., $S(1)=2$, $S(2)=3$, \ldots, $S(i-2)=i-1$, $S(i-1)=i+2$,\ldots, $S(n-4)=n-1$, $S(n-3)=i$, $S(n-2)=i+1$, $S(n-1)=1$, and $S(n)=n$. It is clear that $\Pi_{k}=I_{S(k)}\Pi$.
Let $D_{k}=G+\Pi_{1}+\cdots+\Pi_{k-1}$, $k=1,2,\ldots, n+1$. It is clear that $D_{1}=G$ and $D_{n+1}=G+\Pi=K$.
To proceed with the proof, we need the following lemma:
\begin{lemma} \label{lemma:B_C_k}
Suppose that $n> 22$. Let $\Gamma=\operatorname{diag}\{1,-(n-1),\cdots,-(n-1),1\}$ be an $n\times n$ diagonal matrix. There exists a constant $C$ such that the following statements hold.
\begin{enumerate}
\item For any $p \in [n]$, $D_{p}$ is an invertible matrix and $g_p=(1+\operatorname{tr}(\Pi_{p}D_{p}^{-1}))^{-1}\in(0,C]$.
\item Let $H^{(p)}=\Pi D_{p}^{-1}$ and $C^{(p)}=\Gamma H^{(p)}$. Then $|C^{(p)}_{i,j}|\leq C$ for any $i,j \in [n]$.
\end{enumerate}
\end{lemma}
We note that $\Pi$ is an invertible matrix (see Lemma \ref{lem: Pi_positive_definite}). Thus, if the $D_{k}$'s are invertible, then the $H^{(k)}$'s are invertible. \\
\noindent $\bullet$ $\underline{p=1}$; \quad Since $K''(\xi_{i},\boldsymbol{X}) = \frac{4}{\pi(1+\xi_i^2)^2}(|\xi_i-x_1|,\dots,|\xi_i-x_n|)$, we can easily verify that there exists a constant $C$ such that
\begin{equation}
\left|\left(K''(\xi_{i},\boldsymbol{X})D_{1}^{-1}\right)_{j}\right| =\left|\left(K''(\xi_{i},\boldsymbol{X})G^{-1}\right)_{j}\right| \leq
\begin{cases}
\quad C & j\in\{1,i,i+1,n\},\\[6pt]
\quad 0 & j\not \in \{ 1,i,i+1,n\}.
\end{cases}
\end{equation}
In other words, $K''G^{-1}=K''D_{1}^{-1}$ is a row vector with at most 4 non-zero entries located in $\{1,i,i+1,n\}$.
\vspace{ 3mm }
\noindent $\bullet$ $\underline{p\in \{2,\ldots,n-4\}}$; \quad Thanks to Lemma \ref{lemma:B_C_k}, the $D_{k}$'s are invertible matrices. Thus, the Sherman–Morrison formula gives us that for any $p=1,2,\ldots, n$,
\begin{equation}\label{eqn:ess:recursive}
\begin{aligned}
K''D_{p+1}^{-1}&=K''(D_{p}+\Pi_{p})^{-1}=K''D_{p}^{-1}-g_{p}K''D_{p}^{-1}\Pi_{p}D_{p}^{-1}.
\end{aligned}
\end{equation}
Because $S(p)\not \in \{1,i,i+1,n\}$ for any $p\in \{1,2,\dots,n-4\}$, we know from the definition of $\Pi_{p}$ that $K''D_{p}^{-1}\Pi_{p}=0$ and $K''D_{p+1}^{-1}=K''D_{p}^{-1}$ for such $p$. In particular, we know that
\begin{align}
K''D_{n-3}^{-1}=K''D_{n-4}^{-1}=\cdots=K''D_{1}^{-1}=K''G^{-1}.
\end{align}
In other words, $K''D_{p}^{-1}, p=2,3,\dots,n-4$ are row vectors with at most 4 non-zero entries located in $\{1,i,i+1,n\}$.
\vspace{3mm}
\noindent $\bullet$ \underline{ $p\in \{n-3,n-2,n-1,n\}$;}\quad Since $g_{p}K''D_{p}^{-1}\Pi_{p}D_{p}^{-1}$ is no longer 0 for $p\geq n-3$, we do not have $K''D_{n-2}^{-1}=K''D^{-1}_{n-3}$ anymore. We need to treat them separately. Again, the Sherman–Morrison formula gives us that
\begin{equation}
\begin{aligned}
K''D_{n-2}^{-1}&=K''(D_{n-3}+\Pi_{n-3})^{-1}=K''D_{n-3}^{-1}-g_{n-3}K''D_{n-3}^{-1}\Pi_{n-3}D_{n-3}^{-1}.
\end{aligned}
\end{equation}
Thus, there exists an absolute constant $C$, such that
\begin{align*}
|(K''&(\xi_i,\boldsymbol{X})D^{-1}_{n-2})_{j}| \leq |(K''(\xi_i,\boldsymbol{X})D^{-1}_{n-3})_j| + g_{n-3} |(K''(\xi_i,\boldsymbol{X})D^{-1}_{n-3} \Pi_{n-3} D^{-1}_{n-3})_j|\\
&=|(K''(\xi_i,\boldsymbol{X})D^{-1}_{n-3})_j| +g_{n-3}|(K''(\xi_i,\boldsymbol{X})D^{-1}_{1} \Pi_{n-3} D^{-1}_{n-3})_j| \leq
\begin{cases}
C, & j\in\{1,i,i+1,n\}\\[6pt]
C\frac{1}{n-1}, & j\not\in \{1,i,i+1,n\}
\end{cases}
\end{align*}
where the last inequality follows from
Lemma \ref{lemma:B_C_k} and $\Pi_{n-3}D^{-1}_{n-3}=-\frac{1}{n-1}I_{S(n-3)}C^{(n-3)}$. We can prove the results for $p=n-2,n-1,n$ in a similar way.
In other words, we have shown that there exists an absolute constant $C$ such that
\begin{align}
|(K''(\xi_i,\boldsymbol{X})K^{-1})_{j}| = |(K''(\xi_i,\boldsymbol{X})D^{-1}_{n+1})_{j}|\leq
\begin{cases}
C, & j\in\{1,i,i+1,n\},\\[6pt]
C\frac{1}{n-1}, & j\not\in\{1,i,i+1,n\}
\end{cases}.
\end{align}
Denote $y_{\max} = \max_{i\in[n]} y_i $. Then $|K''(\xi_i,\boldsymbol{X})K^{-1}\boldsymbol{y}| \leq C |y_{\max}|$.
By Lemma \ref{lem:bound_y}, we have $|y_{\max}|\leq C\sqrt{\log n}$ with probability at least $1-\frac{2}{n}$. Thus,
\begin{equation}
|K''(\xi_i,\boldsymbol{X})K^{-1}\boldsymbol{y}|\leq C \sqrt{\log n}
\end{equation}
with probability at least $1-\frac{2}{n}$, for some constant $C$.
\end{proof}
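The repeated rank-one (Sherman–Morrison) updates used in this proof can be verified numerically; the random matrices below are stand-ins of arbitrary size and scale:

```python
import numpy as np

# Rank-one Sherman-Morrison update as used above: with
# Pi_p = I_k Pi = e_k e_k^T Pi (a rank-one matrix),
#   (D + Pi_p)^{-1} = D^{-1} - g D^{-1} Pi_p D^{-1},
#   g = (1 + tr(Pi_p D^{-1}))^{-1}.
rng = np.random.default_rng(2)
n, k = 8, 3
D = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Pi = 0.2 * rng.standard_normal((n, n))
Pi_p = np.outer(np.eye(n)[k], Pi[k])       # e_k e_k^T Pi, rank one
D_inv = np.linalg.inv(D)
g = 1.0 / (1.0 + np.trace(Pi_p @ D_inv))
lhs = np.linalg.inv(D + Pi_p)
rhs = D_inv - g * D_inv @ Pi_p @ D_inv
assert np.allclose(lhs, rhs)
```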
\begin{proof}[Proof of Lemma \ref{lemma:B_C_k}]
We prove the statements through induction on $k$.
\vspace{3mm}
\noindent $\bullet$ \underline{$ k=1$;}\quad It is clear that $D_{1}=G$ is invertible.
The second statement follows from the following lemma:
\begin{lemma}\label{lemma:G_inv_property}
There exists an absolute constant $C$ such that
\begin{align*}
(\Pi G^{-1})_{i,j}\in \begin{cases}
\quad (-0.54 -\frac{1}{n-1}, C] &\quad i\in [n], j=1 \\[4pt]
\quad (0, C] &\quad i\in [n],j=n \\[4pt]
\quad [- 2\frac{|i-j|+1}{(n-1)^2}, 0], &\quad i\in [n], j\neq 1,n.
\end{cases}
\end{align*}
Specifically, $ (\Pi G^{-1})_{1,1} > 1.2-\frac{1}{n-1}$.
\end{lemma}
Moreover, the above lemma also shows that $g_{1}=(1+\operatorname{tr}(\Pi_{1}D_{1}^{-1}))^{-1}$ is positive and bounded. Thus, Lemma \ref{lemma:B_C_k} holds for $k=1$.
\vspace{3mm}
\noindent $\bullet$ \underline{$k>1$;} \quad Suppose that the inductive hypotheses hold for any $1\leq k'\leq k-1$.
\vspace{3mm}
Since $g_{k-1}=(1+\operatorname{tr}(\Pi_{k-1}D_{k-1}^{-1}))^{-1}\in (0,C]$, the Sherman–Morrison formula implies that $D_{k}=D_{k-1}+\Pi_{k-1}$ is invertible.
Thus, we have
\begin{align}
C^{(k)}&=\Gamma\Pi D_{k}^{-1}=C^{(k-1)}-g_{k-1}C^{(k-1)}\Pi_{k-1}D_{k-1}^{-1}\\
&=C^{(k-1)}-g_{k-1}C^{(k-1)}I_{S(k-1)}\Gamma^{-1}C^{(k-1)}.
\end{align}
Since $\Pi$, $\Gamma$, and $D_{k}$ are all invertible matrices, we know that $C^{(k)}$ is invertible, and the Sherman–Morrison formula gives us
\begin{align}
(C^{(k)})^{-1}=(C^{(k-1)})^{-1}-\frac{I_{S(k-1)}}{n-1}.
\end{align}
The desired bound about $C^{(k)}$ is provided by the following lemma:
\begin{lemma}\label{lem:bound:C}
Assume that $k\leq n-1$ and $C^{(j)}, j=1,2,\dots, k$ are invertible matrices. There exists an absolute constant $C$ such that,
\begin{align}
|C^{(k)}_{i,j}|\leq 21, \mbox{~if~} 2\leq i,j\leq n-1 \mbox{~and~} |C^{(k)}_{i,j}|\leq C, \mbox{ ~if~ } i \mbox { or } j \in \{1,n\} .
\end{align}
\end{lemma}
First, Lemma \ref{lem:bound:C} implies that the second statement in Lemma \ref{lemma:B_C_k} holds for $k$. Second,
since $n\geq 23$, Lemma \ref{lem:bound:C} implies that $C^{(k)}_{S(k),S(k)} \leq 21<n-1$. Thus, for any constant $C>22$, we have $g_k=\left(1-\frac{C^{(k)}_{S(k),S(k)}}{n-1}\right)^{-1} \in (0,C]$.
\end{proof}
\vspace{3mm}
\noindent{\bf Lemma \ref{lemma:G_inv_property}} {
There exists an absolute constant $C$ such that
\begin{align*}
(\Pi G^{-1})_{i,j}\in \begin{cases}
\quad (-0.54 -\frac{1}{n-1}, C] &\quad i\in [n], j=1 \\[4pt]
\quad (0, C] &\quad i\in [n],j=n \\[4pt]
\quad [- 2\frac{|i-j|+1}{(n-1)^2}, 0], &\quad i\in [n], j\neq 1,n.
\end{cases}
\end{align*}
Moreover, we can prove that $ (\Pi G^{-1})_{1,1} > 1.2-\frac{1}{n-1}$.
}
\begin{proof}
Since $\Pi(x,y)$ is twice continuously differentiable, Taylor expansion gives us that for any $i$, there exist $\xi_{i}$ and $\xi_{i}'\in [x_{i},x_{i+1}]$ such that
\begin{align}
&\Pi(x,x_{i+1})-\Pi(x,x_{i})=\Pi'(x,\xi_{i})(x_{i+1}-x_{i}), \\
& \Pi(x,x_{i+1})-\Pi(x,x_{i})=\Pi'(x,x_{i})(x_{i+1}-x_{i})+\frac{1}{2}\Pi''(x,\xi_{i}')(x_{i+1}-x_{i})^{2}.
\end{align}
\noindent $\bullet$ \underline{ $j=1$;} For any $i\in [n]$,
\begin{equation}\label{equation:Pi_G_i_1}
\begin{split}
(\Pi G^{-1})_{i,1}
&= \frac{\pi}{2}(n-1)(\Pi(x_i,x_1)-\Pi(x_i,x_2)) + \frac{\pi}{2}\frac{\Pi(x_i,x_1)+\Pi(x_i,x_n)}{2\pi-1} \\
&= -\frac{\pi}{2}\Pi^{'}(x_i,\xi_1) + \frac{\pi}{2}\frac{\Pi(x_i,x_1)+\Pi(x_i,x_n)}{2\pi-1}\\
&= - \xi_1\frac{|x_i-\xi_1|}{1+\xi_1^2} - x_i(\pi -\psi(\xi_1, x_i)) + \frac{\pi}{2}\frac{\Pi(x_i,x_1)+\Pi(x_i,x_n)}{2\pi-1}
\end{split}
\end{equation}
Since $x_i\in[0,1]$, it is clear that there exists a constant $C$ such that $(\Pi G^{-1})_{i,1}\leq C$.
On the other hand,
\begin{align*}
(\Pi G^{-1})_{i,1}&\geq -x_2 - x_i(\pi -\psi(x_1, x_i)) + \frac{\pi}{2}\frac{\Pi(x_i,x_1)+\Pi(x_i,x_n)}{2\pi-1}\\
&= -\frac{1}{n-1} - \frac{i-1}{n-1}(\pi-\psi(x_1,x_i)) + \frac{\pi-\psi(x_1,x_i) + (1+x_i)(\pi-\psi(x_n,x_i))+1}{2\pi-1}\\
&\geq -0.54-\frac{1}{n-1}
\end{align*}
Finally, we have $ (\Pi G^{-1})_{1,1} \geq -\frac{1}{n-1} + \frac{2\pi-\frac{\pi}{4}+1}{2\pi-1} >1.2-\frac{1}{n-1}$.
\vspace{3mm}
\noindent $\bullet$ \underline{ $j=n$;}
\begin{equation}\label{equation:Pi_G_i_n}
\begin{split}
(\Pi G^{-1})_{i,n}
&= \frac{\pi}{2}(n-1)(\Pi(x_i,x_n)-\Pi(x_i,x_{n-1})) + \frac{\pi}{2}\frac{\Pi(x_i,x_1)+\Pi(x_i,x_n)}{2\pi-1} \\
&= \frac{\pi}{2}\Pi^{'}(x_i,\xi_{n-1}) + \frac{\pi}{2}\frac{\Pi(x_i,x_1)+\Pi(x_i,x_n)}{2\pi-1}.
\end{split}
\end{equation}
Since $x_i\in[0,1]$, it is clear there exists a constant $C$ such that $(\Pi G^{-1})_{i,n}\leq C$. On the other hand, we have
\begin{align}
(\Pi G^{-1})_{i,n}&= \xi_{n-1}\frac{|x_i-\xi_{n-1}|}{1+\xi_{n-1}^2} + x_i(\pi -\psi(\xi_{n-1}, x_i)) + \frac{\pi}{2}\frac{\Pi(x_i,x_1)+\Pi(x_i,x_n)}{2\pi-1}>0.
\end{align}
\vspace{3mm}
\noindent $\bullet$ \underline{$2\leq j\leq n-1$;}
\begin{equation}\label{equation:Pi_G_i_j}
\begin{split}
(\Pi G^{-1})_{i,j} &=\frac{\pi}{2} (n-1) \left(2\Pi(x_i,x_j)-\Pi(x_i,x_{j+1})-\Pi(x_i,x_{j-1})\right)\\
&=- \frac{\pi}{2} (n-1) \left(\Pi^{''}(x_i,\xi_{j}')\frac{(x_{j}-x_{j+1})^2}{2} + \Pi^{''}(x_i,\xi_{j-1}')\frac{(x_{j}-x_{j-1})^2}{2}\right)\\
&= -\frac{1}{n-1} \left(\frac{|x_i-\xi_{j}'|}{(1+\xi_{j}'^2)^2} + \frac{|x_i-\xi_{j-1}'|}{(1+\xi_{j-1}'^2)^2} \right)
\end{split}
\end{equation}
It is clear that $(\Pi G^{-1})_{i,j}\leq 0$.
On the other hand,
since $\xi_j'\in(x_j,x_{j+1})$ and $\xi_{j-1}'\in(x_{j-1},x_{j})$, we have
\begin{align}
(\Pi G^{-1})_{i,j}&\geq -\frac{1}{n-1}\frac{|x_i-\xi_{j}'|+|x_i-\xi_{j-1}'|}{(1+x_{j-1}^2)^2}
\geq -2\frac{|x_i-x_j|+\frac{1}{n-1}}{n-1}
= - 2\frac{|i-j|+1}{(n-1)^2}.
\end{align}
\end{proof}
\vspace{5mm}
\noindent{\bf Lemma \ref{lem:bound:C} }{
Assume that $k\leq n-1$ and $C^{(j)}, j=1,2,\dots, k$ are invertible matrices. There exists an absolute constant $C$ such that,
\begin{align}
|C^{(k)}_{i,j}|\leq 21, \mbox{~if~} 2\leq i,j\leq n-1 \mbox{~and~} |C^{(k)}_{i,j}|\leq C, \mbox{ ~if~ } i \mbox { or } j \in \{1,n\} .
\end{align}
}
\begin{proof} We prove this lemma by induction on $k$.
\vspace{3mm}
\noindent $\bullet$ \underline{$ k=1$;}\quad Recall that Lemma \ref{lemma:G_inv_property} implies that
\begin{align}
C^{(1)}_{i,j}=
\begin{cases}
\quad -(n-1)(\Pi G^{-1})_{i,j} \leq 2\frac{|i-j|+1}{(n-1)}\leq 2 & i\in[n], j\neq 1,n\\[8pt]
\quad \quad \quad \quad \quad (\Pi G^{-1})_{i,j}\leq C & i\in[n], j=1,n\\
\end{cases}.
\end{align}
Thus the statements hold for $k=1$.
\vspace{3mm}
\noindent $\bullet$ \underline{$ k>1$;}\quad Suppose that the inductive hypotheses hold for $k$. Then
\begin{align}
(C^{(k+1)})^{-1}=(C^{(k)})^{-1}-\frac{1}{n-1}I_{S(k)}=(C^{(1)})^{-1}-\frac{1}{n-1}\sum_{j=1}^{k}I_{S(j)}.
\end{align}
Denote $\frac{1}{n-1}\sum_{j=1}^{k}I_{S(j)}$ by $T_{k}$. Then, we have
\begin{align*}
C^{(k+1)}= C^{(1)} + C^{(1)}T_{k}C^{(1)} + C^{(1)}\left(T_{k}C^{(1)}\right)^{2}+\cdots=\mathcal{Q} +\mathcal{Q} \left(T_{k}C^{(1)}\right)^{3}+\mathcal{Q}\left(T_{k}C^{(1)}\right)^{6}+\cdots
\end{align*}
where $\mathcal{Q}=C^{(1)}\left(I+T_{k}C^{(1)}+\left(T_{k}C^{(1)}\right)^{2}\right)$.
Simple calculations show that (please see Lemma \ref{lem:simple:computations} ), for any $2\leq i,j\leq n-1$ and $q\in \mathbb{N}$, we have
\begin{equation}\label{eqn:simple:computations}
\begin{aligned}
| C^{(1)}_{i,j}|\leq 2,~
\left| \left(C^{(1)} T_{k}C^{(1)}\right)_{i,j}\right| \leq \frac{4}{3}, ~
\left| \left(C^{(1)} \left(T_{k}C^{(1)}\right)^{2}\right)_{i,j}\right| \leq \frac{4}{5},~
\left|\left( \left(T_{k}C^{(1)}\right)^{3q}\right)_{i,j}\right| \leq \frac{\left(\frac{4}{5}\right)^{q}}{n-1}.
\end{aligned}
\end{equation}
Note that the first row and last row of $T_{k}$ are zero vectors. Thus, for any $2\leq i,j\leq n-1$, we get
\begin{align}
| \mathcal{Q}_{i,j}|= \left|\left(C^{(1)}\left(I+T_{k}C^{(1)}+\left(T_{k}C^{(1)}\right)^{2}\right)\right)_{i,j}\right| \leq 2+\frac{4}{3} + \frac{4}{5}= \frac{62}{15}
\end{align}
and
\begin{align}
|C^{(k+1)}_{i,j}|&\leq \sum_{q\geq 0} \left|\left(\mathcal{Q} \left(T_{k}C^{(1)}\right)^{3q}\right)_{i,j}\right|\leq \sum_{q\geq 0}\sum_{p=1}^{n}|\mathcal{Q}|_{i,p}\left|\left(T_{k}C^{(1)}\right)^{3q}_{p,j}\right|\leq \sum_{q\geq 0}\sum_{p=2}^{n-1} \frac{62}{15} \left(\frac{4}{5}\right)^{q}\frac{1}{(n-1)} < 21.
\end{align}
If $i$ or $j \in \{1,n\}$, we can prove $|C^{(k+1)}_{i,j}| \leq C$ similarly.
Finally, we can
show that $\left(C^{(1)}\left( T_{n-2}C^{(1)}\right)^{p}\right)_{1,1}\geq 0$ for $p=0,1,...$ and
\begin{equation}
C^{(n-1)}_{1,1} = C^{(1)}_{1,1} + \left(C^{(1)} T_{n-2}C^{(1)}\right)_{1,1} +\cdots \geq C^{(1)}_{1,1}\geq 1.2 -\frac{1}{n-1}>0.
\end{equation}
\end{proof}
Since the inputs $x_i$ are bounded, the values $f_{\star}(x_i)$ are bounded. Let $y_{\max} = \max_{i\in[n]} y_i$. The following lemma gives an upper bound on $y_{\max}$:
\begin{lemma}\label{lem:bound_y}
Under the model \eqref{equation:true_model}, with probability at least $1-\frac{2}{n}$, we have
\begin{equation}
|y_{\max}| \leq C\sqrt{\log n}
\end{equation}
for some constant $C$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:bound_y}]
Denote $\varepsilon_{\max} = \max_{i\in[n]}\varepsilon_i$. For any $t>0$, we have
\begin{equation}
\exp(t\mathbf{E}(\varepsilon_{\max}))\leq \mathbf{E}(\exp(t \varepsilon_{\max} )) \leq \sum_{i=1}^n \mathbf{E} ( \exp(t \varepsilon_i )) = n \exp(t^2\sigma^2/2).
\end{equation}
The first inequality is Jensen's inequality, the second bounds the maximum by the sum, and the final equality follows from the Gaussian moment-generating function. Taking the logarithm of both sides, we have
\begin{equation}
\mathbf{E}(\varepsilon_{\max}) \leq \frac{\log n}{t} + \frac{\sigma^2 t}{2}.
\end{equation}
Setting $t=\frac{\sqrt{2\log n}}{\sigma}$, we have
\begin{equation}
\mathbf{E}(\varepsilon_{\max}) \leq \sigma \sqrt{2\log n}.
\end{equation}
By the Borell–TIS inequality, since $\sigma_{\max}^2 = \max_{i\in[n]} \mathbf{E}(\varepsilon_i^2) = \sigma^2$, for $t\geq 0$ we have
\begin{equation}
\mathbf{P}(|\varepsilon_{\max}-\mathbf{E}(\varepsilon_{\max})|\geq t)\leq 2\exp\left(-\frac{t^2}{2\sigma_{\max}^2}\right).
\end{equation}
Setting $t=\sigma \sqrt{2\log n}$, we have
\begin{equation}
\mathbf{P}(|\varepsilon_{\max}|\leq 2\sigma \sqrt{2\log n} )\geq 1- \frac{2}{n}.
\end{equation}
Since $f_{\star}(x)$ is bounded, we have $\max_{x\in[0,1]}|f_{\star}(x)| \leq C \sqrt{\log n}$ for some constant $C$ and any $n\geq 2$. Thus, with probability at least $1- \frac{2}{n}$, we have
\begin{equation}
|y_{\max}|\leq C\sqrt{\log n}
\end{equation}
for some constant $C$.
\end{proof}
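The tail bound just proved can be checked by Monte Carlo; the sample sizes below are arbitrary:

```python
import numpy as np

# Monte Carlo check: for n i.i.d. N(0, sigma^2) noise variables,
# max_i |eps_i| <= 2*sigma*sqrt(2 log n) with probability at least 1 - 2/n.
rng = np.random.default_rng(3)
n, sigma, trials = 200, 1.0, 5000
eps = sigma * rng.standard_normal((trials, n))
bound = 2 * sigma * np.sqrt(2 * np.log(n))
frac = np.mean(np.abs(eps).max(axis=1) <= bound)
assert frac >= 1 - 2 / n
```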
\begin{lemma}\label{lemma:K_derivative}
For any $x,x'\in[0,1]$, the function $\Pi(x^{\prime},x) =2\left(\frac{\pi - \psi(x',x)}{\pi}(1+x'x) + \frac{\lvert x^{\prime}-x\rvert}{\pi}\right)$ is twice continuously differentiable, where $\psi(x',x)=\arccos\frac{1+x'x}{\sqrt{(1+x^{'2})(1+x^{2})}}$. Moreover,
\begin{equation}
\frac{\partial \Pi(x',x)}{\partial x} =\frac{2x}{\pi}\frac{|x-x'|}{1+x^2} +2x' - \frac{2}{\pi}x'\psi \mbox{~~ and~~ } \frac{\partial^2 \Pi(x',x)}{\partial x^2} = \frac{4}{\pi}\frac{|x-x'|}{(1+x^2)^2}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:K_derivative}] Simple calculations give us that
\begin{equation}
\begin{split}
\frac{\partial \psi(x,x')}{\partial x}&=\frac{\operatorname{sgn}(x-x')}{(1+x^2)} \mbox{~ and ~}\frac{\partial^2 \psi(x,x')}{\partial x^2}=-\frac{2x \operatorname{sgn}(x-x')}{(1+x^2)^2}
\end{split}
\end{equation}
which yield the desired results.
\end{proof}
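The closed-form derivatives in the lemma can be cross-checked against finite differences at an arbitrary point $(x',x)$ away from the kink at $x=x'$:

```python
import numpy as np

# Finite-difference check of
#   dPi/dx   = (2x/pi)|x - x'|/(1 + x^2) + 2x' - (2/pi) x' psi(x', x)
#   d2Pi/dx2 = (4/pi)|x - x'|/(1 + x^2)^2
def psi(xp, x):
    return np.arccos((1 + xp * x) / np.sqrt((1 + xp**2) * (1 + x**2)))

def Pi(xp, x):
    return 2 * ((np.pi - psi(xp, x)) / np.pi * (1 + xp * x) + abs(xp - x) / np.pi)

xp, x, h = 0.3, 0.7, 1e-4
d1_num = (Pi(xp, x + h) - Pi(xp, x - h)) / (2 * h)
d1_ana = (2 * x / np.pi) * abs(x - xp) / (1 + x**2) + 2 * xp - (2 / np.pi) * xp * psi(xp, x)
d2_num = (Pi(xp, x + h) - 2 * Pi(xp, x) + Pi(xp, x - h)) / h**2
d2_ana = (4 / np.pi) * abs(x - xp) / (1 + x**2)**2
assert abs(d1_num - d1_ana) < 1e-6
assert abs(d2_num - d2_ana) < 1e-5
```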
\begin{lemma}\label{lem:simple:computations}
The inequalities in equation \eqref{eqn:simple:computations} hold.
\end{lemma}
\begin{proof}
For $k\leq n-2$, we have
\begin{equation}
C^{(1)} = \begin{pmatrix}
C^{(1)}_{1,1} & u & C^{(1)}_{1,n}\\
l & M & r \\
C^{(1)}_{n,1}& d & C^{(1)}_{n,n}
\end{pmatrix}, \quad
T_{k}C^{(1)} = \frac{1}{n-1}\begin{pmatrix}
0 & 0 &0\\
l & M & r \\
0& 0 & 0
\end{pmatrix}
\end{equation}
where $u$ and $d$ are $1\times (n-2)$ vectors, $l$ and $r$ are $(n-2)\times 1$ vectors, and $M$ is a $(n-2)\times (n-2)$ matrix. Simple calculations imply that
\begin{equation}
\left(T_{k}C^{(1)}\right)^p = \left(\frac{1}{n-1}\right)^{p} \begin{pmatrix}
0 & 0 &0\\
M^{p-1}l & M^p & M^{p-1}r \\
0& 0 & 0
\end{pmatrix}.
\end{equation}
For any $2\leq i,j\leq n-1$, we have
\begin{equation}
\begin{aligned}
\left(C^{(1)}\right)_{i,j}&=M_{i,j}=-(n-1)(\Pi G^{-1})_{i,j} \leq 2\frac{|i-j|+1}{n-1},\\
\left(C^{(1)}T_{k}C^{(1)}\right)_{i,j}&= \left(\frac{M^2}{n-1}\right)_{i-1,j-1}
\leq \frac{1}{n-1}\sum_{k=1}^{n-2}\frac{2(|i-1-k|+1)}{n-1}\frac{2(|k-j+1|+1)}{n-1} \\
&\leq \frac{4}{3}\\
\left(C^{(1)} \left( T_{k}C^{(1)}\right)^{2}\right)_{i,j}&=\left(\frac{M^3}{(n-1)^2}\right)_{i-1,j-1}\\
&\leq\frac{1}{(n-1)^5}\sum_{l=1}^{n-2} \sum_{k=1}^{n-2}8(|i-1-k|+1)(|k-l|+1)(|l-j+1|+1)\\
&\leq \frac{1}{(n-1)^5}\sum_{l=1}^{n-2} \sum_{k=1}^{n-2}8(n-1-k)(|k-l|+1)l\\
&=\frac{2}{15}\frac{(n-2)n(2n-1)(3n-7)}{(n-1)^5}\leq \frac{4}{5}.
\end{aligned}
\end{equation}
Finally, we have
\begin{equation}
\left( \left(T_{k}C^{(1)}\right)^{3q}\right)_{i,j}\leq \left(\frac{4}{5}\right)^{q}\frac{1}{(n-1)}.
\end{equation}
\end{proof}
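The zero-row pattern exploited in the block computation above can be checked numerically: $T_{k}$ is diagonal with vanishing first and last entries, so every power $(T_{k}C^{(1)})^{p}$ keeps zero first and last rows. Random matrices of arbitrary size stand in for $T_{k}$ and $C^{(1)}$:

```python
import numpy as np

# If T is diagonal with T[0,0] = T[n-1,n-1] = 0, then (T C)^p has zero first
# and last rows for every p >= 1, confining the nonzero block to the middle.
rng = np.random.default_rng(4)
n = 6
C1 = rng.standard_normal((n, n))
T = np.diag(rng.random(n)) / (n - 1)   # diagonal, like T_k
T[0, 0] = 0.0
T[-1, -1] = 0.0
P = np.linalg.matrix_power(T @ C1, 3)
assert np.allclose(P[0, :], 0.0) and np.allclose(P[-1, :], 0.0)
```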
\begin{comment}
\begin{figure}[htbp]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{synthetic_data_loss_gen_vs_noise_ce.png}
\centerline{(a) Four different epochs}
\end{minipage}%
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth]{synthetic_data_gen_vs_noise_ce.png}
\centerline{(b) Generalization gap with different label corruption ratios}
\end{minipage}
\caption{The generalization gap between the '100\% training acc' epoch and 'best generalization' is increasing when the label corruption ratio is increasing. The classification setup is used in this experiment. The result is similar to the result of the regression setup.}
\label{fig: MLP_diff_noise_ce}
\end{figure}
\end{comment}
\section{Introduction}
\label{sec:intro}
In a modern particle physics experiment, simulation of the detector response is used to estimate efficiencies and resolutions of measured quantities.
These efficiencies and resolutions are necessary in order to fully interpret the data produced by the experiment.
The possible differences between what is simulated and the actual detector response therefore lead to bias on physics measurements. This potential bias is quantified in the form of detector systematic uncertainties.
This paper describes a method in which the response of the MicroBooNE LArTPC detector~\cite{detector} is characterized in data and simulation. The results are used to modify simulated signals to thereby produce samples of modified simulated events.
Comparisons between modified simulations and the nominal simulation can be used to identify measurement biases and to estimate detector systematic uncertainties.
Understanding detector effects and systematic uncertainties is critical for achieving the physics goals of future LArTPC-based experiments, such as SBN~\cite{sbn} and DUNE~\cite{dune}. The detector-related uncertainties must be reduced to the level of a few percent and estimated precisely to reach the design sensitivities.
The principal detector of MicroBooNE is a wire-based liquid argon time projection chamber (TPC) with a single drift region.
The trajectories of charged particles through the liquid argon are detected by drifting ionization electrons in an electric field to three parallel planes of sense wires.
The drifted ionization charge measured at the wire planes is sensitive to a number of known detector effects, such as electron--ion recombination~\cite{icarusrecomb,recomb}, electron diffusion~\cite{bnldiffusion,icarusdiffusion,diffusion}, space charge effects~\cite{laser,sce_cosmic}, and electron attenuation~\cite{elifetime,calib}. It is also subject to effects related to the model that describes the induced signal on the wires due to the drifting electrons and the electronics response~\cite{sp1,sp2}.
These effects are difficult to disentangle.
The method detailed in this paper is used to address systematic uncertainties related to ionization charge in the TPC that can be described by changes in the amplitude and width of signals on the wires.
This method produces a set of simulations where the signals on wires are modified---differences between these varied simulation sets and the nominal simulation are taken as an estimate of the uncertainty on the nominal simulation's modeling of the detector response to ionization.
For the subset of the detector variations where this approach can be used, it has two significant advantages over modeling-based estimates.
First, by working with digitized wire waveforms in both data and simulation, this procedure does not depend explicitly on the modeling used for different components of detector response simulation.
It therefore captures residual effects that are not well-described by existing detector models or that are not fully simulated, providing a more robust, data-driven assessment of uncertainties related to the detector model.
Second, it is relatively computationally efficient.
By directly modifying waveforms, this approach avoids the computationally intensive steps of simulating the drifting of ionization electrons and deconvolving the resulting signals.
As a result, the method outlined is about an order of magnitude faster than running the full simulation each time.
The structure of the paper is as follows: Section~\ref{sec:overview} gives a brief overview of the method, including a description of the relevant detector variables and the parameters that are used to characterize the detector's response.
Section~\ref{sec:samples} defines the event samples in data and simulation.
Section~\ref{sec:ratios} describes the procedure for extracting the data-to-simulation comparisons, which take the form of ratios of waveform properties.
Section~\ref{sec:wiremod} describes the application of these ratios to modifying the wire waveforms.
Section~\ref{sec:impacts} presents the results of applying this method to higher-level reconstructed quantities.
Section~\ref{sec:future} discusses the potential improvements and extensions.
Section~\ref{sec:summary} presents the summary and conclusion.
\section{Overview of Method}
\label{sec:overview}
The MicroBooNE detector is a liquid argon time projection chamber (LArTPC) designed to observe neutrino interactions. It is located on-axis along the Booster Neutrino Beam (BNB)~\cite{bnb} at Fermilab, and is also exposed to an off-axis flux of the Neutrinos from the Main Injector (NuMI) beam~\cite{numi}. Compared to the BNB beam, the NuMI beam is higher in energy and has a larger electron neutrino contribution.
When charged particles traverse the detector, they deposit energy that liberates ionization electrons and also produces prompt vacuum ultraviolet scintillation photons.
The ionization electrons drift in the applied electric field until they reach the three sense wire planes located at the anode, as illustrated in Figure~\ref{fig:detector}.
The electrostatic potentials of the wire planes are set up such that ionization electrons pass through the first two wire planes before ultimately ending their trajectory on a wire in the last plane. The drifting electrons induce signals on the first two planes, referred to as induction planes (planes 0~and~1), and additively constitute the signals in the final plane, referred to as the collection plane or plane~2. The collection plane wires are aligned vertically and the induction plane wires are oriented at $\pm 60\degree$ from the vertical.
The voltage of each wire is digitized by on-detector electronics, and recorded over time to produce raw waveforms.
To process recorded raw waveforms offline, a noise-filtering algorithm is applied~\cite{noise} and then the field responses are removed from the signals via a Gaussian deconvolution process~\cite{sp1,sp2} to produce a waveform that measures the charge that arrived at each wire as a function of time.
Scintillation photons are observed by an array of 32~photo-multiplier tubes (PMTs) located behind the wire planes. The optical information is used for triggering the detector.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{intro/ub-detector.png}
\caption{The MicroBooNE detector and operating principles, adapted from Ref.~\cite{detector}, as described in the text. The green and blue wire planes are the induction planes; the red wire plane is the collection plane. The right-hand portion of the figure shows the wire waveforms before deconvolution.}
\label{fig:detector}
\end{figure}
The detector's response to an ionizing particle depends on the position and the amount of energy deposited, as well as the angular orientation of the particle's trajectory with respect to the wires~\cite{sp1,sp2}.
The MicroBooNE coordinate system is defined such that the $x$ axis points along the drift electric field direction from the anode to the cathode, the $y$ axis points vertically up, and the $z$ axis points along the BNB beam direction to complete a right-handed coordinate system.
As the response depends on the orientation of a particle trajectory, it is useful to define the detector angles $\theta_{XZ}$ and $\theta_{YZ}$ for a displacement vector $\Updelta \vec{r_i}$ with components $(\Updelta x_i, \Updelta y_i, \Updelta z_i)$ as below.
\begin{equation}
\begin{aligned}
\theta_{XZ,i} &= \arctan(\Updelta x_i / \Updelta z_i) \\
\theta_{YZ,i} &= \arctan(\Updelta y_i / \Updelta z_i)
\end{aligned}
\label{eqn:def_thetas}
\end{equation}
In the coordinate system used, the direction of the BNB has both $\theta_{XZ}$ and $\theta_{YZ}$ equal to zero.
The vector, $\Updelta \vec{r_i}$, is taken to be the (true) local direction of travel of the simulated particle that produced a particular wire waveform.
Later, in Section~\ref{sec:ratios}, ``rotated'' angles relevant for the two induction planes are introduced.
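As a minimal illustration, the angle definitions of Equation~\ref{eqn:def_thetas} can be sketched in Python (the function name is illustrative; \texttt{arctan2} is used so that the $\Updelta z = 0$ case remains well-defined, and it reduces to $\arctan(\Updelta x/\Updelta z)$ for $\Updelta z > 0$):

```python
import numpy as np

def detector_angles(dr):
    """Compute (theta_XZ, theta_YZ) in radians for a displacement
    vector dr = (dx, dy, dz), following Eq. (1).

    arctan2 keeps the dz = 0 case well-defined; for dz > 0 it
    coincides with arctan(dx/dz) and arctan(dy/dz)."""
    dx, dy, dz = dr
    theta_xz = np.arctan2(dx, dz)
    theta_yz = np.arctan2(dy, dz)
    return theta_xz, theta_yz
```

For a displacement along the BNB direction, $(0, 0, 1)$, both angles evaluate to zero, consistent with the convention stated above.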
The detector response is characterized as a function of these five variables: $x$, $y$, $z$, $\theta_{XZ}$, and $\theta_{YZ}$.
Much of the variability in the detector's response in $y$ and $z$ is driven by the presence of non-responsive wires in one plane, which can affect the behavior of the signals on nearby wires on the other planes~\cite{sp2}.
The wires of the different planes have different orientations in the $yz$-plane, but the locations of the wire crossings are at fixed points in this 2D plane; for this reason $y$ and $z$ are considered together.
The remaining variables are considered independently.
The effects of each of the variables on the post-deconvolution wire waveforms are described in terms of a Gaussian fit to the waveform, called a \textit{hit}. A hit has an integrated charge $Q$, proportional to the number of ionization electrons that produced the wire signal, and a width $\sigma$, measured in waveform time ticks. A tick corresponds to 0.5~$\mu \rm{s}$ as defined by the 2~MHz sampling rate of the ADCs~\cite{detector}.
To quantify how the wire waveforms differ between data and simulation, the differences are expressed as data-to-simulation ratios.
The hits are used as the basis to apply the modifications to the underlying waveforms.
Digitized waveforms from each wire in each event are divided into wire signal regions separated by signal-free regions, which are zero-suppressed.
Each wire signal region can be described by the sum of one or more Gaussian functions with some peak position, integrated charge, and width.
Each constituent Gaussian function is modified according to the properties of the simulated energy deposits that are matched to it, by applying the data-to-simulation differences provided by the ratio functions for $Q$ and $\sigma$ for each detector variable.
The technical details are described in Sec.~\ref{sec:wiremod}.
The variation as a function of $x$ position captures the dependence of the signal width on, for example, the charge cloud spreading out (diffusion), and of the signal amplitude on electrons being absorbed by impurities (attenuation).
The local variation in $y$ and $z$ can account for the distortion of the signal due to deviations in the electric field between the wire planes resulting from non-responsive and cross-connected wires.
The variations in the angular variables $\theta_{XZ}$ and $\theta_{YZ}$ can describe distortions in the waveforms due to imperfect modeling of the signals that drift charge induces on the wires and of the electronics response. This is particularly relevant for extended charge distributions, because the response can include interference between signals induced by different parts of the charge distribution on the same wire. This interference depends on the angular orientation of the charge distribution relative to the wire planes in a way that is challenging to model precisely~\cite{sp1,sp2}.
All of these waveform-level modifications are agnostic to the downstream reconstruction and analysis chain as well as the upstream detector simulation model.
For evaluating the full range of systematic uncertainties related to the MicroBooNE detector, separate variations are considered for the drift electric field model~\cite{laser,sce_cosmic}, the electron--ion recombination model parameters~\cite{calib}, and the scintillation light model parameters.
\section{Data and Simulation Event Samples}
\label{sec:samples}
To determine the hit properties (integrated charge and width), cosmic ray muon tracks are used. They provide an abundant and well-understood event sample in which each of the five relevant position and angular variables can be reconstructed.
The data tracks are selected from so-called ``beam-off'' events, which are collected using the same optical trigger as the beam-on data but when there is no neutrino beam.
The triggered beam-off data comes from MicroBooNE's Run 1 period, taken between February and October 2016.
It was verified that consistent results were obtained using different run periods, so for simplicity the measurements are made using Run 1 and applied to all other runs.
The simulation tracks are selected from a sample of single muons that are generated using CORSIKA~\cite{corsika}.
The signals from these simulated muons are overlaid on cosmic data that is collected using a random (unbiased) trigger when there is no neutrino beam.
The cosmic data overlay incorporates the detector noise and cosmic muon backgrounds found in data events.
This technique is also applied to the simulated neutrino events discussed in Sec.~\ref{sec:impacts}.
The unbiased cosmic data used in this procedure comes from the run period that matches the data sample to which the simulation is being compared.
For the simulated muon samples used to measure the hit properties, this means unbiased beam-off data from the Run 1 period is used.
The $x$ position of an energy deposit in the MicroBooNE TPC is determined from the drift time of its ionization tracks relative to the trigger time of the event combined with measurements of the local drift velocity~\cite{laser,sce_cosmic}.
To reconstruct the $x$ position of a given particle track, it is therefore necessary to match that track to a flash of scintillation light, whose offset from the trigger time is readily known.
This is achieved by using cosmic tracks that are topologically consistent with having crossed the anode or the cathode in-time with the flash of scintillation light that triggered the beam-off event. In addition, the opposite end of the track is required to have crossed either the opposite face of the detector or the top or bottom. These are called anode/cathode piercing tracks (ACPT) and are illustrated in Figure~\ref{fig:acpt}.
\subsection{Reconstruction}
\label{subsec:reconstruction}
The Pandora multi-algorithm package~\cite{pandora} is used to reconstruct 3D tracks from the ionization charge collected at the wires.
These tracks are then matched to the flash of scintillation light, collected by the PMT system, which triggered the TPC readout~\cite{ccnp}.
If the track has an ACPT topology and is matched to the flash of scintillation light that triggered the detector, it is selected. These types of tracks have little ambiguity in the TPC-to-PMT matching, leading to a sample that is very pure in tracks with the correct $x$ position assigned.
Selected tracks are corrected for spatial distortions due to nonuniform electric fields in the detector~\cite{laser,sce_cosmic}.
Based on simulation studies, more than 95\% of the selected track candidates are true ACPT tracks with correctly determined $x$ positions. Additionally, such through-going cosmic muon tracks generally behave as minimally ionizing particles along their entire length and therefore make a good ``standard candle'' of ionization per unit track length.
Note that the geometrical requirements of this selection combined with the fact that cosmic muons are mostly downward-going mean that ACPT muon trajectories tend to populate the regions near the anode (low $x$ position) and cathode (high $x$ position).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{intro/acpt_v2.png}
\caption{Illustration of two examples of anode/cathode piercing tracks (ACPT), shown in black. The track must cross at least one of the anode or the cathode. The other tracks, shown in gray, are cosmic muons that do not satisfy the ACPT criteria.}
\label{fig:acpt}
\end{figure}
\section{Measuring the Detector Response}
\label{sec:ratios}
Using the cosmic ray muon ACPT samples described in Section~\ref{sec:samples}, the method proceeds by determining the dependence of the two hit properties on each of the five geometric variables defined above.
With a measurement of these dependencies made in both data and simulation in each variable, the ratio of the two is formed and used as a measure of the scale of the discrepancy between them. This section details the determination of these ratios. The ratios will later be used (see Sec.~\ref{sec:wiremod}) to derive modifications to the wire waveforms that capture differences due to detector modeling.
\subsection{Measurements in $x$}
\label{subsec:x_ratio}
First, consider variations in charge response as a function of the $x$ position. This is sensitive to drift-dependent effects, such as electron diffusion and attenuation.
To measure the response, all hits associated with reconstructed ACPT muon tracks are used to form distributions of the hit charge and hit width across bins in $x$ position.
The detector is divided into bins in $x$ using a variable binning scheme to ensure a reasonable number of entries in each of the $x$ bins. ACPT trajectories are concentrated near the anode and the cathode, so the bins are narrower in those regions.
The binning is determined separately for each of the wire planes.
Each bin contains hits from several thousand ACPT muons.
Within each bin, the values of the hit properties have some intrinsic spread due to the different positions and orientations of tracks, as demonstrated in the distribution of hit widths of a typical $x$ bin in Figure~\ref{fig:prop_in_x_bin}.
To facilitate the measurement of the variation that is due to the $x$ position, the peaks of the hit charge and width distributions in each $x$ bin are calculated using an iterative truncated mean algorithm.
The algorithm starts with all the hits in the bin and computes the mean, the median, and the standard deviation. Hits that are more than 2 standard deviations below the median or more than 1.75 standard deviations above it are removed, and all quantities are then recalculated. The boundaries for the truncation reflect the asymmetry of the underlying distributions, and were empirically determined to improve the accuracy and stability of the peak finding algorithm. This step is repeated until the calculated mean meets the convergence criterion of changing by less than $10^{-4}$. The resulting means of hit charge and width from the collection wire plane are shown in Figure~\ref{fig:prop_vs_x_prof}.
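The iterative truncated mean can be sketched in Python as follows. The cut values and tolerance follow the text; the iteration cap and the empty-selection guard are defensive additions for the sketch and are not part of the procedure described above:

```python
import numpy as np

def truncated_mean_peak(values, lo=2.0, hi=1.75, tol=1e-4, max_iter=100):
    """Estimate the peak of an asymmetric distribution.

    Iteratively remove entries more than `lo` standard deviations
    below the median or `hi` standard deviations above it, then
    recompute mean/median/std, until the mean changes by less
    than `tol`."""
    vals = np.asarray(values, dtype=float)
    mean = vals.mean()
    for _ in range(max_iter):
        med, std = np.median(vals), vals.std()
        kept = vals[(vals >= med - lo * std) & (vals <= med + hi * std)]
        if kept.size == 0:  # guard: never happens for reasonable inputs
            break
        new_mean = kept.mean()
        if abs(new_mean - mean) < tol:
            return new_mean
        mean, vals = new_mean, kept
    return mean
```

Applied to a distribution with a dominant peak and a high-side tail (as in Figure~\ref{fig:prop_in_x_bin}), the asymmetric cuts preferentially trim the tail, pulling the estimate toward the true peak.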
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{ratios/x/width_in_x_bin.png}
\caption{Distribution of hit widths on the collection plane for $1.6 < x < 4.3~\text{cm}$ in the cosmic data. The spread in the distribution is driven by other sources of variability, such as the position in the $yz$-plane and the angular orientation of the track. The distribution is asymmetric and is not well described by any simple analytic function. This motivates the specialized algorithm based on the iterative truncated mean that is described in the text. A tick corresponds to 0.5~$\mu \rm{s}$ of time~\cite{detector}.}
\label{fig:prop_in_x_bin}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{ratios/x/charge_vs_x_prof.png}
$\qquad$
\includegraphics[width=0.95\columnwidth]{ratios/x/width_vs_x_prof.png}
\caption{Hit charge and hit width vs.\ $x$ in data and simulation (MC) for the collection plane. The values are computed from histograms similar to the example shown in Figure~\ref{fig:prop_in_x_bin} using the algorithm based on the iterative truncated mean described in the text.}
\label{fig:prop_vs_x_prof}
\end{figure*}
The ratio of the typical hit properties in data to simulation is computed in each bin in $x$ using the peaks found by the truncated mean algorithm.
A spline fit to the measured ratio is performed to obtain a smooth function that describes the data-to-simulation differences, as shown in Figure~\ref{fig:prop_vs_x_ratio}.
This fit provides the simulation modification factor.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{ratios/x/charge_vs_x_splines.png}
$\qquad$
\includegraphics[width=0.95\columnwidth]{ratios/x/width_vs_x_splines.png}
\caption{Ratios (data/simulation) and fitted simulation modification functions for mean hit charge and mean hit width vs.\ $x$ on each of the three wire planes. The solid lines are the bin values, with error bars showing the statistical uncertainties, and the dashed lines are spline fits. The width of each bin is indicated by the solid horizontal bars. The binning is chosen to ensure high statistics in each bin.}
\label{fig:prop_vs_x_ratio}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[height=1.85in]{ratios/yz/width_vs_y_doubleband.png}
\includegraphics[height=1.85in]{ratios/yz/width_vs_y_corrected.png}
\caption{Distribution of hit widths on the collection plane for $-10 < y < 0$\,cm in the cosmic data. On the left, the distribution is shown before any correction for the hit width dependence on $x$. A time tick translates to 0.5 $\mu \rm{s}$~\cite{detector}. The ``double-peak'' structure is evident, where the low-width peak comes from ACPT trajectory points near the anode and the high-width peak comes from points near the cathode (see Figure~\ref{fig:prop_vs_x_prof}). On the right, the $x$-correction has been applied and the double-peak structure is removed. In this case, the hit widths (in ticks) have been divided by the median hit width at the corresponding $x$ position (in ticks), so the resulting quantity is dimensionless.}
\label{fig:double_banding}
\end{figure*}
\subsection{An $x$ Correction for Other Measurements}
\label{subsec:x_corr}
The hit widths (and to a lesser extent the charges) have large variations as a function of $x$, specifically between the cathode and anode. As shown in Figure~\ref{fig:prop_vs_x_prof}, the measured hit widths vary by up to 50\% across the drift direction. As a result, the hit widths have broad distributions when projected onto the other four geometric variables.
For the ACPT muon sample in particular, where the trajectories tend to populate the regions at high and low $x$, this leads to a ``double-peak'' structure in the hit width distributions in both data and simulation. This complicates the measurement of the hit width dependence as a function of these other variables, as the truncated mean is no longer a well-behaved estimate of the peak position.
An example of this double-peak feature for a bin in $y$ is shown in Figure~\ref{fig:double_banding}.
To account for this, the measurements for the other variables are based on hit properties that have been corrected for their known $x$-dependence.
Spline fits to the results in Figure~\ref{fig:prop_vs_x_prof}, for data and simulation and for each wire plane separately, provide expected hit properties for a hit at a given $x$ position, on a given plane, in data or simulation.
Each hit's charge and width is then divided by the relevant expected value to produce ``$x$-corrected'' hit properties.
This process produces distributions of corrected hit properties that have a median value of one, by construction.
The remaining measurements in $(y,z)$ and the angular variables use these $x$-corrected hit properties.
As well as avoiding the difficulties with the double-peak structure, this process removes any global offsets from the remaining measurements, placing all global scalings in the $x$-dependence.
The remaining measurements are shape-only in their respective variables.
These measurements are further described in the sections below.
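The $x$-correction step reduces to a simple division by the expected value at each hit's $x$ position. A Python sketch follows; all numerical values here are hypothetical stand-ins for the per-plane data and simulation profiles of Figure~\ref{fig:prop_vs_x_prof}, and \texttt{np.interp} (linear) stands in for the spline fits actually used:

```python
import numpy as np

# Hypothetical expected hit widths (ticks) vs. x position (cm).
# In the actual procedure there is one spline per plane, separately
# for data and simulation.
x_knots = np.array([2.0, 30.0, 80.0, 150.0, 220.0, 254.0])
expected_width = np.array([4.0, 4.4, 5.0, 5.6, 6.2, 6.4])

def x_corrected_width(width_ticks, x_cm):
    """Divide a hit width by the expected width at its x position,
    giving a dimensionless x-corrected property with median ~ 1
    by construction."""
    return width_ticks / np.interp(x_cm, x_knots, expected_width)
```

A hit whose width equals the expected value at its $x$ position is mapped to exactly one, so residual structure in the other variables appears as a shape-only deviation about unity.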
\subsection{Measurements in $(y, z)$}
\label{subsec:yz_ratio}
Next consider the behavior of hit charge and width in the $yz$-plane.
The detector effects that dominate the behavior in these two variables are TPC channels that are shorted or cross-connected, which distorts the electric field between the wire planes and therefore the wire response~\cite{sp2}. This creates local nonuniformities in the charge response in $(y,z)$.
Note that the detector response in the nominal simulation incorporates a data-driven tuning for this effect.
This section will briefly describe the method for tuning the simulation, followed by the method for extracting the residual difference that will be used to evaluate an uncertainty.
First, the nominal simulation is tuned by scaling the simulated local $(y, z)$ charge response based on measurements of the charge deposited per unit track length, $dQ/dx$.
The median $dQ/dx$ is measured in $5 \times 5$~cm$^2$ bins over the $yz$-plane. This is used to calculate the following quantity in each $(y,z)$ bin for each wire plane in data and simulation:
\begin{equation}
C_{(y_i,z_i)} = \frac{(dQ/dx)_{\text{global}}}{(dQ/dx)_{(y_i,z_i)}},
\end{equation}
where $(dQ/dx)_\text{global}$ is the global median $dQ/dx$ value of the entire $(y,z)$ plane and $(dQ/dx)_{(y_i,z_i)}$ is the local median in $(y,z)$ bin $i$.
The simulated charge response is scaled by the ratio of $C_{(y_i,z_i)}$ measured in data to the one measured in simulation for each wire plane.
The reconstructed $dQ/dx$ quantities are generally corrected for these local nonuniformities using the $C_{(y_i,z_i)}$ values from data as part of the downstream analysis~\cite{calib}.
However, the reconstructed quantities used for the technique described in this paper are Gaussian fits to the deconvolved waveforms, where the $yz$-plane uniformity calibration is not applied.
The method described in this paper is used to measure the residual bias in the model for the nonuniformities in the tuned simulation.
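The $(y,z)$ tuning step above can be sketched in Python as follows. The sketch approximates the global median by the median of the binned local medians, which differs slightly from the median over the entire plane used in the text; the function names are illustrative:

```python
import numpy as np

def yz_correction_factors(local_dqdx_medians):
    """Compute C_(y,z) = global median dQ/dx / local median dQ/dx
    for a 2D array of local median dQ/dx values, one entry per
    (y,z) bin (Eq. 2). Here the global median is approximated by
    the median of the binned local medians."""
    global_median = np.median(local_dqdx_medians)
    return global_median / local_dqdx_medians

def response_scale(c_data, c_sim):
    """Per-bin scale applied to the simulated charge response:
    the ratio of C_(y,z) measured in data to that in simulation."""
    return c_data / c_sim
```

Bins with lower-than-typical local response receive $C > 1$, and the simulation's response in each bin is pulled toward the data by the ratio of the two factors.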
The same sample of ACPT muons and the peak-finding algorithm as described in Section~\ref{subsec:x_ratio} are employed, but with the $x$-correction described in Section~\ref{subsec:x_corr} applied to the hit properties.
The $(y,z)$ bins are optimized in 2D to again ensure reasonable numbers of entries in each. The result is a set of rectangular $(y,z)$ bins that vary in size based on the density of hits on each wire plane (typically about 4--5~$\text{cm}$ on each side) and contain hits from at least a thousand ACPT muons.
\begin{figure*}
\centering
\includegraphics[width=0.4\textwidth]{ratios/yz/charge_vs_yz_ratio_pl0.png}
\includegraphics[width=0.4\textwidth]{ratios/yz/width_vs_yz_ratio_pl0.png}
\includegraphics[width=0.4\textwidth]{ratios/yz/charge_vs_yz_ratio_pl1.png}
\includegraphics[width=0.4\textwidth]{ratios/yz/width_vs_yz_ratio_pl1.png}
\includegraphics[width=0.4\textwidth]{ratios/yz/charge_vs_yz_ratio_pl2.png}
\includegraphics[width=0.4\textwidth]{ratios/yz/width_vs_yz_ratio_pl2.png}
\caption{Ratios (data/simulation) for hit charge and width vs.\ $(y,z)$. The left column shows the hit charge; the right column shows the hit widths. The top row shows the ratios on the first induction plane; the middle row shows the ratios on the second induction plane; and the bottom row shows the ratios on the collection plane. Note the color axis is the same on all six graphs.}
\label{fig:prop_vs_yz_ratio}
\end{figure*}
Figure~\ref{fig:prop_vs_yz_ratio} shows the results of applying the procedure outlined above.
A smooth function of $y$ and $z$ that describes these ratios is obtained by interpolating between points in the 2D space. In the interior of the detector, the points are the centers of the $(y,z)$ bins. For bins where one edge is along the boundary of the detector, an additional point is placed at the midpoint of that edge with the same value as the point at the center of the bin. Additional points are placed in the four corners of the $(y, z)$ plane, with values given by the ratio at the center of the corner bin.
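The point-padding logic for this interpolation can be sketched in Python. Only the construction of the augmented point set is shown; any standard 2D interpolator (e.g.\ Delaunay-based linear interpolation) can then consume it. The function name and edge tolerance are illustrative:

```python
import numpy as np

def pad_ratio_points(bins, y_lim, z_lim, eps=1e-9):
    """Build the point set used to interpolate the (y,z) ratios.

    `bins` holds (y_center, z_center, y_halfwidth, z_halfwidth, ratio)
    tuples for rectangular (y,z) bins. Returns bin centers, plus a
    midpoint on every bin edge lying on the detector boundary (with
    that bin's ratio), plus the four corners (with the ratio of the
    nearest corner bin)."""
    pts = [(y, z, r) for y, z, _, _, r in bins]
    for y, z, hy, hz, r in bins:
        if abs(y - hy - y_lim[0]) < eps: pts.append((y_lim[0], z, r))
        if abs(y + hy - y_lim[1]) < eps: pts.append((y_lim[1], z, r))
        if abs(z - hz - z_lim[0]) < eps: pts.append((y, z_lim[0], r))
        if abs(z + hz - z_lim[1]) < eps: pts.append((y, z_lim[1], r))
    centers = pts[:len(bins)]
    for cy in y_lim:
        for cz in z_lim:
            # corner takes the ratio of the nearest bin center
            _, _, r0 = min(centers,
                           key=lambda p: (p[0] - cy)**2 + (p[1] - cz)**2)
            pts.append((cy, cz, r0))
    return pts
```

The padding ensures the interpolated ratio function is defined all the way to the detector boundary, rather than only within the convex hull of the bin centers.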
\subsection{Measurements in Angular Variables}
\label{subsec:angle_ratios}
In addition to the position of the charge in the detector discussed in the preceding sections, this method also considers the orientation of the particle trajectory in angular variables. This captures effects related to long-range induced charge signals on the wires as well as the signal processing.
The same procedure as in the previous section is applied, including the $x$-correction for the hit properties described in Section~\ref{subsec:x_corr}.
This section details some special considerations related to the choice of basis for the angular variables, and how to handle angles relative to each wire plane where signal processing and hit finding become less reliable.
The two angles most relevant for describing the detector response to a charged particle track are the angle with respect to the drift direction ($x$) and the angle with respect to the wire direction (which is different for each wire plane).
For the collection plane, where the wires are oriented vertically, these are the angles $\theta_{XZ}$ and $\theta_{YZ}$, respectively, as defined in Equation~\ref{eqn:def_thetas}.
For the induction planes, where the wires are oriented at $\pm 60\degree$ from the vertical, analogous angles are defined with respect to a different set of basis vectors, $x'$, $y'$, and $z'$, where $x'$ remains the drift direction, $y'$ is the appropriate wire direction, and $z'$ completes an orthogonal right-handed basis.
Mathematically, this is expressed by the following expressions for the first (upper sign) and second (lower sign) induction planes.
\begin{equation}
\label{eqn:def_rotate}
\begin{aligned}
x' &= x \\
y' &= y\cos(60\degree) \pm z\sin(60\degree) \\
z' &= y\sin(60\degree) \mp z\cos(60\degree) \\
\end{aligned}
\end{equation}
The angles $\theta_{XZ}$ and $\theta_{YZ}$ are used for all wire planes with the understanding that these quantities always refer to the angle definition relevant for the plane in question.
With this choice of angular basis, the variations in hit properties in $\theta_{XZ}$ and $\theta_{YZ}$ can be treated independently.
It was verified that the detector response in both integrated charge and width does not depend on the quadrant for these angles, i.e.\ that the wire response is independent of the particle's direction (up vs.\ down, etc.), as expected.
Because of this, it is possible to ``fold'' all angles into the space between 0 and $\pi/2$.
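The per-plane angle computation, combining the rotation of Equation~\ref{eqn:def_rotate} with the folding into $[0, \pi/2]$, can be sketched in Python (the function name is illustrative):

```python
import numpy as np

def plane_angles(dr, plane):
    """theta_XZ and theta_YZ in the basis of the given wire plane
    (0, 1: induction; 2: collection), per Eqs. (1) and (3), folded
    into [0, pi/2] since the response is quadrant-independent."""
    dx, dy, dz = dr
    if plane == 2:
        yp, zp = dy, dz
    else:
        s = 1.0 if plane == 0 else -1.0  # upper/lower signs of Eq. (3)
        c60, s60 = np.cos(np.radians(60)), np.sin(np.radians(60))
        yp = dy * c60 + s * dz * s60
        zp = dy * s60 - s * dz * c60
    theta_xz = abs(np.arctan2(dx, zp))
    theta_yz = abs(np.arctan2(yp, zp))
    fold = lambda t: np.pi - t if t > np.pi / 2 else t
    return fold(theta_xz), fold(theta_yz)
```

For the BNB direction $(0, 0, 1)$ on the collection plane both angles are zero, while on an induction plane the folded $\theta_{YZ}$ is $\pi/3$, reflecting the $60\degree$ wire orientation.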
Using this angular basis, the variations in the $x$-corrected properties of the hits are measured as a function of angles.
The ACPT muons do not have an isotropic angular distribution, so a variable binning scheme is employed here.
The peak in each angular bin in data and simulation is computed using the same algorithm described in Section~\ref{subsec:x_ratio}.
However, as either $\theta_{XZ}$ or $\theta_{YZ}$ approach $\pi/2$, the corresponding deconvolved waveform is no longer well-described by a single Gaussian function, and is instead an extended charge deposition~\cite{sp1}.
Above 1.4~radians (about $80\degree$), the observed distribution of hit charges and widths cannot be reliably used to characterize the detector's response.
The simulation modification factor in this bin ($R_N$) is instead extrapolated using the maximum absolute difference from $1.0$ over the rest of the angular space ($\Updelta R_\text{max}$) while maintaining the sign of the difference from the adjacent measured bin ($R_{N-1}$):
\begin{equation}
\begin{aligned}
\Updelta R_\text{max} &= \max_{\text{bins $k$}} \left| R_k - 1 \right| \\
R_N &= 1 + \left( \text{sign}(R_{N-1}-1) \cdot \Updelta R_\text{max} \right).
\end{aligned}
\label{eqn:extrapolate}
\end{equation}
It is worth noting that a displacement vector with $\theta_{XZ}=\pi/2$ or $\theta_{YZ}=\pi/2$ also has zero $z$-component. In the MicroBooNE coordinate system, the BNB points along the $z$ direction, so the region in which we use this extrapolation is perpendicular to the neutrino beam.
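The extrapolation of Equation~\ref{eqn:extrapolate} amounts to a few lines of code. A Python sketch, assuming the measured ratios are passed in bin order with the unreliable high-angle bin excluded:

```python
import numpy as np

def extrapolate_last_bin(measured_ratios):
    """Return R_N per Eq. (4): unity shifted by the maximum
    |R_k - 1| over the measured bins, with the sign of the
    deviation taken from the adjacent measured bin R_{N-1}."""
    r = np.asarray(measured_ratios, dtype=float)
    dr_max = np.max(np.abs(r - 1.0))
    return 1.0 + np.sign(r[-1] - 1.0) * dr_max
```

This deliberately assigns the largest observed deviation to the unmeasured bin, making the variation conservative in the region where the hit-based characterization breaks down.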
Figure~\ref{fig:prop_vs_thetaXZ_ratio} shows the ratio of data to simulation for the corrected hit charges and widths as a function of $\theta_{XZ}$, including the extrapolation to the high-angle region.
Figure~\ref{fig:prop_vs_thetaYZ_ratio} shows the ratio for the corrected hit charges as a function of $\theta_{YZ}$.
The hit width is not expected to vary as a function of $\theta_{YZ}$, and we find that this is true in our data to within 2\% (measured in the angular range up to $\theta_{YZ}$ of 1.3~rad, after which saturation effects lead to non-Gaussian waveforms that bias the width estimates).
For this reason we do not extract the ratio of the hit widths as a function of $\theta_{YZ}$ and do not apply this as part of our detector systematic uncertainties.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{ratios/thetaXZ/charge_vs_thetaXZ_splines.png}
\includegraphics[width=0.95\columnwidth]{ratios/thetaXZ/width_vs_thetaXZ_splines.png}
\caption{Ratio functions (data/simulation) for hit charge (top) and hit width (bottom) vs.\ $\theta_{XZ}$. The solid lines are the values of the ratio in each bin, and the dashed lines are the spline fits. The bins at $1.4 < \theta_{XZ} < \pi/2~\text{rad}$ are extrapolated as described in the text.}
\label{fig:prop_vs_thetaXZ_ratio}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{ratios/thetaYZ/charge_vs_thetaYZ_splines.png}
\caption{Ratio functions (data/simulation) for hit charge vs.\ $\theta_{YZ}$. The solid lines are the values of the ratio in each bin, and the dashed lines are the spline fits. The bin at $1.4 < \theta_{YZ} < \pi/2~\text{rad}$ is extrapolated as described in the text.}
\label{fig:prop_vs_thetaYZ_ratio}
\end{figure}
\FloatBarrier
\section{Wire Waveform Modification}
\label{sec:wiremod}
The functions based on the measured data-to-simulation ratios extracted in Section~\ref{sec:ratios} are used to modify the wire waveforms in simulated neutrino interaction events, effectively varying the detector response.
First, the wire signal regions are divided into Gaussian sub-regions in the drift time dimension that can be modified independently, again using the reconstructed hits.
This division is important because a single waveform can include overlapping charge from multiple particles with different kinematics that should be modified in different ways.
Additionally, because the simulated signals are overlaid on unbiased cosmic data as described in Section~\ref{sec:samples}, the algorithm must distinguish the simulation-dominated portions of the waveforms from the data-dominated portions.
Each wire signal region can be described as the sum of one or more Gaussian functions each with three parameters: peak position in time ticks, an integrated charge, and a width in time.
For each simulated energy deposit in the event, the projected position of the corresponding signal on each wire plane is computed after accounting for local nonuniformities in the electric field.
In this way simulated energy deposits are associated with the Gaussian regions that match their projected position.
The scale factors that are applied to the wire waveforms are based on the truth information of the simulated energy deposits matched to that portion of the waveform.
The individual simulated energy deposits each have an associated amount of energy as well as a start and an end position.
The $x$, $y$, and $z$ positions of the energy deposit are calculated as the average of the start and end positions; the angular variables $\theta_{XZ}$ and $\theta_{YZ}$ are computed using the definition in Equation~\ref{eqn:def_thetas}.
The simulation modification functions derived in Section~\ref{sec:ratios} are used to obtain a charge and width scale factor for each energy deposit.
The hit charge and width scale factors for each Gaussian region of the wires are computed as the energy-weighted average of the scale factors over the associated set of energy deposits. For example, the scale factor $R$ for hit widths as a function of $x$ is given by
\begin{equation}
R = \frac{\sum_i E_i \cdot R_\sigma(x_i)}{\sum_i E_i}
\end{equation}
where the sums are over the set of energy deposits contributing to the Gaussian region, $E_i$ is the energy of the $i^\text{th}$ energy deposit, and $R_\sigma(x_i)$ is the spline fit for the hit widths as a function of $x$ from Figure~\ref{fig:prop_vs_x_ratio} evaluated at the $x$ position of the $i^\text{th}$ energy deposit.
The scale factors are set to unity if the Gaussian region has total charge greater than 80 units but less than 0.3~MeV of deposited energy associated with it.
This prevents small amounts of simulated charge from modifying cosmic-dominated regions of the waveforms.
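The energy-weighted averaging and the cosmic-protection thresholds can be sketched together in Python. The thresholds follow the text (``80 units'' of charge is quoted verbatim); the function name and the per-deposit ratio callback are illustrative:

```python
import numpy as np

def region_scale_factor(energies, xs, ratio_fn, total_charge,
                        charge_thresh=80.0, energy_thresh=0.3):
    """Energy-weighted average of per-deposit scale factors (Eq. 5).

    `energies` (MeV) and `xs` describe the simulated energy deposits
    matched to one Gaussian region; `ratio_fn` is a spline-like
    modification function, e.g. R_sigma(x). Returns 1.0 (no
    modification) when the region carries more than `charge_thresh`
    units of charge but less than `energy_thresh` MeV of matched
    deposited energy, i.e. when it is cosmic-dominated."""
    e = np.asarray(energies, dtype=float)
    if total_charge > charge_thresh and e.sum() < energy_thresh:
        return 1.0
    return np.sum(e * ratio_fn(np.asarray(xs, dtype=float))) / e.sum()
```

Deposits carrying more energy thus contribute proportionally more to the scale factor applied to their shared Gaussian region.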
Finally, the above information is used to modify the overall waveform to have the desired integrated charge and width. This is accomplished by modifying the waveform at each time tick using the following procedure.
The original waveform is approximated by adding together the Gaussian functions that describe each region with their original parameters (mean time tick $t_0$, width $\sigma$, and integrated charge $Q$).
Similarly, the desired post-modification waveform is approximated by adding together the Gaussian functions with the same mean time tick but with modified charge $Q'$ and width $\sigma'$ based on their computed scale factors.
At each tick, the waveform is scaled by
\begin{equation}
\text{scale}(t) = \frac{\sum_j \text{Gaus}(t; t_j, Q'_j, \sigma'_j)}{\sum_j \text{Gaus}(t; t_j, Q_j, \sigma_j)}
\end{equation}
where
\begin{equation}
\text{Gaus}(t; t_0, Q, \sigma) = \frac{Q}{\sqrt{2\pi} \, \sigma} \exp \left( - \frac{(t-t_0)^2}{2 \sigma^2} \right)
\end{equation}
with sums over the Gaussian region(s) within the relevant wire signal region.
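The tick-by-tick scaling can be sketched in Python as follows. The small \texttt{eps} guard against division by zero far from the Gaussian peaks is an addition for the sketch and is not part of the procedure described above:

```python
import numpy as np

def gaus(t, t0, q, sigma):
    """Gaussian of Eq. (7): integrated charge q, mean t0, width sigma."""
    return q / (np.sqrt(2 * np.pi) * sigma) * np.exp(-(t - t0)**2 / (2 * sigma**2))

def modify_waveform(wf, ticks, regions, eps=1e-12):
    """Scale a waveform tick-by-tick by the ratio of the modified to
    original Gaussian sums (Eq. 6). `regions` is a list of
    (t0, Q, sigma, Q_mod, sigma_mod) tuples, one per Gaussian region
    in this wire signal region."""
    orig = sum(gaus(ticks, t0, q, s) for t0, q, s, _, _ in regions)
    mod = sum(gaus(ticks, t0, qp, sp) for t0, _, _, qp, sp in regions)
    return wf * mod / (orig + eps)
```

For a single Gaussian region whose charge is doubled at fixed width, the ratio is constant near the peak and the waveform is simply scaled by two there.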
Figure~\ref{fig:wiremod_examples} shows two examples of how this procedure modifies the waveforms.
The final result of running this procedure over an event is a new set of wire waveforms, where signals from simulated charge have been modified but signals from the cosmic data overlay are unchanged.
Waveform modifications are performed separately in each of the geometric variables, all in the manner described above for $x$. This results in one set of modified events for each of $x$, $(y,z)$, $\theta_{XZ}$, and $\theta_{YZ}$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{wiremod/wiremod_oneHit.png}
\includegraphics[width=0.95\columnwidth]{wiremod/wiremod_twoHits.png}
\caption{Examples of modified waveforms. The top graph shows a simple example where the wire signal region is well-described by a single Gaussian function. The bottom graph shows a case where one portion of the waveform is associated to simulated charge while the other is associated with cosmic data charge. Here, the simulation-dominated portion of the waveform is modified but the cosmic-dominated portion is not.}
\label{fig:wiremod_examples}
\end{figure}
In order to validate this method, a closure test was performed using a simulation event sample in which the waveforms were modified in accordance with the ratios extracted above, and in which the hit properties were then re-measured.
The hit properties in the modified simulation can be predicted exactly from the ratios on which the modification was based, and the re-measured properties agree with those predictions to within $\pm 2\%$ in all variables.
An additional validation test was performed to demonstrate that this method can reproduce the behavior of a variation in a known detector model parameter. The variation used in this test case was a 50\% decrease in the longitudinal diffusion constant, consistent with the difference between the value measured in MicroBooNE data compared to the value used in the MicroBooNE simulation~\cite{diffusion}.
A sample of simulated ACPT muons was produced with this decreased diffusion constant, and used in place of detector data to extract a set of ratio functions that encapsulate the difference between the diffusion variation and the nominal simulation.
These ratio functions were then used to modify waveforms, according to the procedure described above, in a sample of simulated neutrino interactions (with nominal diffusion).
Finally, this wire-modified sample was compared to a sample of neutrinos generated with the diffusion constant modified in the initial simulation. We found the wire-modified sample to faithfully reproduce features of the diffusion-modified simulation across a range of low- and high-level reconstructed variables.
\FloatBarrier
\section{Uncertainties on Physics Observables}
\label{sec:impacts}
Post-modification simulated event samples for each of the variables $x$, $(y, z)$, $\theta_{XZ}$, and $\theta_{YZ}$ agree better with the data from the MicroBooNE detector in specific ways related to the wire response as a function of that variable.
This section details how small-statistics samples of simulated events with modified waveforms can be used to quantify any bias due to the detector mis-modeling in the nominal simulation, and how that bias can be included in the quoted systematic uncertainties.
The principle is that the difference between the nominal simulation and the modified simulations for each variable is used as the estimate of the corresponding bias. For most current MicroBooNE analyses, the bias is not corrected and is instead used as the estimated systematic uncertainty.
The wire modifications are determined based only on a sample of cosmic muons. As an example of general applicability, this section discusses applying them for evaluating systematic uncertainties on electromagnetic showers, objects very different from the charged particle tracks from which the wire modifications were derived.
For this study, two event samples are considered.
The first is a sample of single-shower events which are electron neutrino candidates from NuMI beam data~\cite{krishan}.
For these showers, the energy loss per unit length, $dE/dx$, in the initial segment of the shower is measured. Electrons at the relevant energy scale will deposit energy as a minimum ionizing particle (2.1~MeV/cm), whereas photons produce showers primarily by pair production which will deposit twice as much energy per unit length (4.2~MeV/cm).
The measured $dE/dx$ of the trunks of these showers is shown in Figure~\ref{fig:nuMidEdX}, with the expected two contributions from electrons and photons.
The second sample is of events with two reconstructed photons, for which the primary production mechanism in MicroBooNE is neutral pion decay. This sample uses data from the BNB beam.
For each event in this sample, the diphoton invariant mass is calculated, as shown in Figure~\ref{fig:bnbPi0}. The shower energies are not corrected for known energy losses, such as shower clustering inefficiencies, so the invariant mass does not directly measure the neutral pion mass. However, this effect is present in both data and simulation.
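The diphoton invariant mass calculation uses the standard massless-photon formula $M_{\gamma\gamma} = \sqrt{2 E_1 E_2 (1 - \cos\theta)}$; a minimal sketch (the function name is ours):

```python
import math

def diphoton_mass(e1, e2, opening_angle):
    """Invariant mass of two (massless) photon showers with energies
    e1, e2 and opening angle theta: M = sqrt(2*E1*E2*(1 - cos(theta)))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))
```

For example, two back-to-back 67.5~MeV photons reconstruct to the neutral pion mass of 135~MeV/c$^2$.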
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{physics/WireModPaperPlot.pdf}
\caption{Distribution of the shower $dE/dx$ using NuMI beam data (points) and central value (CV) simulation (black line). The distribution for the simulation modified based on the detector response as a function of $x$ (blue line) and $\theta_{YZ}$ (green line) are also shown. The red band indicates the uncertainty from the wire modification alone (with all wire modification uncertainties included). The gray band indicates the full uncertainty, including other detector uncertainties as well as uncertainties on the neutrino flux and the interaction model. The bands represent the uncertainty on the number of events in that bin, calculated using Equation~\ref{eq:unc}, and are symmetric. The error bars on the data are statistical only.}
\label{fig:nuMidEdX}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{physics/WireModPaperPlot_Pi0Mass_run12.pdf}
\caption{Measured diphoton invariant mass distribution using BNB beam data (points) and central value (CV) simulation (black line), prior to additional shower energy corrections. The distribution for the simulation modified based on the detector response as a function of $x$ (blue line) and $\theta_{YZ}$ (green line) are also shown. The red band indicates the uncertainty from the wire modification alone (with all wire modification uncertainties included). The gray band indicates the full uncertainty, including other detector uncertainties, as well as uncertainties on the neutrino flux and the interaction model. The bands represent the uncertainty on the number of events in that bin, calculated using Equation~\ref{eq:unc}, and are symmetric. The error bars on the data are statistical only.}
\label{fig:bnbPi0}
\end{figure}
The overall distributions of the $e/\gamma$ $dE/dx$ and diphoton invariant mass observables are subject to uncertainties from a range of sources. These include uncertainties in the flux and neutrino interaction model, but these uncertainties primarily manifest as normalization changes in the total number of events, or, in the case of the $e/\gamma$ $dE/dx$, relative normalization differences in the low (electron) and high (photon) ionization peaks.
The reconstructed positions and widths of the $dE/dx$ and $M_{\gamma\gamma}$ peaks are primarily driven by the detector response model, which is calibrated via the absolute charge scale measurement~\cite{calib}.
Errors in the response model lead to shifts in these distributions. Changes to the amplitudes and widths of the waveforms will change the measured amount of charge---even leading to charge falling below hit reconstruction thresholds---and so change the measurement of $dE/dx$, or lead to non-linear losses or gains in shower energy reconstruction.
Therefore, this study specifically looks at the peak positions and widths in order to evaluate the impact of the wire waveform modification procedure on these two distributions.
The mean and width (measured using the RMS) of each peak are calculated from unbinned data and simulation. The range used for each is given in Table~\ref{tab:peak_ranges}.
The systematic uncertainty on the simulation is calculated over the variations $s$ as
\begin{equation}
\sigma_p = \sqrt{\sum_s(p_s - p_\text{CV})^2}
\label{eq:unc}
\end{equation}
where $p_s$ and $p_\text{CV}$ are the parameters (either mean or RMS) estimated from each modified sample and the central value simulation, respectively.
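This quadrature sum over variations can be sketched directly (the function name is illustrative):

```python
import numpy as np

def quadrature_systematic(p_cv, p_variations):
    """Systematic uncertainty on a peak parameter (mean or RMS):
    sigma_p = sqrt(sum_s (p_s - p_cv)**2) over the modified samples s."""
    shifts = np.asarray(p_variations, dtype=float) - p_cv
    return float(np.sqrt(np.sum(shifts ** 2)))
```

For instance, shifts of 0.03 and 0.04 in two variations combine to an uncertainty of 0.05.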
The statistical uncertainty on the data is estimated assuming a Gaussian distribution.
The best-fit peak means and widths in the data and simulation and their uncertainties are summarized in Table~\ref{tab:meansAndWidths}. The wire modifications induce changes in the peak means and widths in simulation typically in the range of 1--2\%, though as large as $6\%$ in the case of the diphoton invariant mass width. These variations are consistent with the magnitude of the observed differences between the data and the simulation, suggesting that systematic uncertainties derived from this method are reasonable and not significantly overestimated.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|}
\hline
Value & Peak Range \\
\hline
e$^-$ $dE/dx$ & 1.75--3.0 MeV/cm \\
$\gamma$ $dE/dx$ & 3.5--5 MeV/cm \\
$M_{\gamma\gamma}$ & 20--200 MeV/c$^2$ \\
\hline
\end{tabular}
\caption{Table summarizing the ranges used in calculating the means and widths of the peaks in the $dE/dx$ and diphoton invariant mass distributions.}
\label{tab:peak_ranges}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{ |c|c|c| }
\hline
Value & Data & MC \\
\hline
e$^-$ $dE/dx$ mean [MeV/cm] & $2.17 \pm 0.02$ & $2.15 \pm 0.05$ \\
e$^-$ $dE/dx$ width [MeV/cm] & $0.342 \pm 0.017$ & $0.326 \pm 0.005$ \\
$\gamma$ $dE/dx$ mean [MeV/cm] & $4.10 \pm 0.03$ & $4.08 \pm 0.05$ \\
$\gamma$ $dE/dx$ width [MeV/cm] & $0.425 \pm 0.024$ & $0.423 \pm 0.010$\\
$M_{\gamma\gamma}$ mean [MeV/c$^2$] & $106.5 \pm 0.9$ & $105.8 \pm 2.3$ \\
$M_{\gamma\gamma}$ width [MeV/c$^2$] & $35.4 \pm 0.6$ & $36.6 \pm 2.3$ \\
\hline
\end{tabular}
\caption{Table summarizing the mean and width of each of the peaks in the $dE/dx$ and diphoton invariant mass distributions. The data uncertainties are statistical, and the MC uncertainties are derived from the wire modified samples.}
\label{tab:meansAndWidths}
\end{table}
\section{Future Work}
\label{sec:future}
The methods described in this paper have been used to estimate the impact of detector response uncertainties in MicroBooNE physics analyses.
There are a number of potential improvements and extensions possible.
The method could be expanded to describe the dependence on local ionization density.
This would require a sample of particles with varying energy deposition profiles, such as protons, with well-understood kinematic distributions that are similar between data and simulation.
Additionally, the dependences of the hit properties on the variables shown in this paper were found to be separable from each other, except for the $y$ and $z$ position dependences, which have strong correlations. The remaining correlations are known to be small, but in principle the dependences could be measured simultaneously across more than two variables. Considering correlations in this way could further reduce the uncertainties on physics observables.
Finally, rather than taking the full difference between data and simulation, the methods described here could be used to correct the nominal simulation with the residual uncertainty on that correction being taken as the uncertainty. This was not deemed necessary for recent MicroBooNE physics analyses, but could be employed if detector uncertainties became dominant.
\section{Summary and Conclusions}
\label{sec:summary}
This paper presents a novel method for applying data-driven modifications to simulated LArTPC wire waveforms.
The technique is based on comparisons between the properties of Gaussian hits fitted to the wire waveforms in data and simulation as functions of the relevant variables: $x$, $(y,z)$, $\theta_{XZ}$, and $\theta_{YZ}$. The differences in waveform properties between data and simulation are used to modify simulated events, which are then used to quantify systematic differences in reconstructed variables.
This method is agnostic to the details of the simulation detector model and can capture mismodelling in known effects as well as unknown contributions not included in any model.
Compared to generating modified event samples by repeating the full simulation with modified detector physics models, this method is more robust against underlying model assumptions and significantly more computationally efficient.
This paper has also shown how uncertainties on physics observables can be evaluated with this method using two MicroBooNE analyses as examples.
From this study, it was found that the wire waveform modification method leads to variations in electromagnetic shower-based observables consistent with the small differences between data and simulation, despite having been developed exclusively using cosmic muon tracks.
The method described here is generally applicable to wire-based noble liquid TPC detectors, assuming the presence of a well-understood source of calibration samples with sufficient statistics. Such will be the case in the detectors of the Short Baseline Neutrino program at Fermilab, which, like MicroBooNE, should have plentiful samples of cosmic-ray muon tracks, and in the Deep Underground Neutrino Experiment far detector, where laser or radioactive source calibration samples could be used to perform similar studies. Similar methods may be used for LArTPCs with different signal formation or readout mechanisms, though the applicability to these detector readout designs would have to be studied.
\begin{acknowledgements}
This document was prepared by the MicroBooNE collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S.\ Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No.\ DE-AC02-07CH11359. MicroBooNE is supported by the following: the U.S.\ Department of Energy, Office of Science, Offices of High Energy Physics and Nuclear Physics; the U.S. National Science Foundation; the Swiss National Science Foundation; the Science and Technology Facilities Council (STFC), part of the United Kingdom Research and Innovation; the Royal Society (United Kingdom); and The European Union’s Horizon 2020 Marie Sklodowska-Curie Actions. Additional support for the laser calibration system and cosmic ray tagger was provided by the Albert Einstein Center for Fundamental Physics, Bern, Switzerland. We also acknowledge the contributions of technical and scientific staff to the design, construction, and operation of the MicroBooNE detector as well as the contributions of past collaborators to the development of MicroBooNE analyses, without whom this work would not have been possible.
\end{acknowledgements}
\input{references}
\end{document}
\section{Introduction and Summary}
In condensed matter physics,
we aim to formulate a systematic framework within unified principles to understand
many-body quantum systems and their underlying universal phenomena.
Two strategies are often being used: classification and characterization.
The classification aims to organize the distinct macroscopic states / phases / orders of quantum matter into classes, give these classes proper
mathematical labels,
and find the mathematical relations between distinct classes.
The characterization aims to distinguish different classes of matter in terms of some universal physics probes as
incontrovertible experimental evidence of their existence.
Ginzburg-Landau theory \cite{Landau1937,GL5064,Landau1958} provides a framework to understand the global-symmetry breaking states and
their phase transitions. Ginzburg-Landau theory uses the group theory in mathematics to classify the states of matter through their global symmetry groups.
Following Ginzburg-Landau theory and its refinement to the Wilson's renormalization-group theory \cite{WilsonKogut1974},
it is now well-known that we can characterize symmetry breaking states through
their gapless Nambu-Goldstone modes, the long-range order (see References therein \cite{anderson1984basic}),
and their behaviors through the critical exponents. In this classic paradigm,
physicists focus on looking into the long-range correlation function of \emph{local} operators ${\cal O}(x)$ at a spacetime point $x$, or into a generic $n$-point correlation function:
\begin{eqnarray}
\langle {\cal O}(x_1) {\cal O}(x_2) \rangle, \;\;\;\;\;\; \langle {\cal O}_1(x_1) {\cal O}_2 (x_2) \cdots {\cal O}_n (x_n)\rangle, \;\;\;\;\;\; etc.
\end{eqnarray}
through its long-distance behavior.
However, new paradigms beyond Ginzburg-Landau-Wilson theory have emerged over the last three decades \cite{WenBook, sachdev2011quantum}.
One important theme is the
emergent conformal symmetries and emergent gauge fields at the quantum
critical points of the phase transitions.
This concerns the critical behavior of gapless phases of matter, where the energy gap closes to zero in the infinite system size limit.
Another important theme is the \emph{intrinsic topological order} \cite{Wen:1990tm}.
The topological order cannot be detected through the local operator ${\cal O}(x)$,
nor the Ginzburg-Landau symmetry breaking order parameter,
nor the long-range order.
Topological order is famous for harboring fractionalized anyon excitations that have the fractionalized statistical Berry phase\cite{Wilczek:1990ik}.
Topological order should be characterized and detected through the
\emph{extended} or \emph{non-local} operators.
It should be classified through the quantum pattern of the long-range entanglement (See \cite{Wen1610.03911} for a recent review).
Topological order can occur in both gapless or gapped phases of matter.
In many cases,
when topological orders occur in the gapped phases of condensed matter systems,
they may have low energy effective field theory descriptions by Topological Quantum Field Theories (TQFTs) \cite{Witten:1988hf}.
Our work mainly concerns gapped phases of matter with intrinsic topological order that have TQFT descriptions.
One simplest example of topological order in 2+1 dimensions
(denoted as 2+1D\footnote{
We denote
$n+1$ dimensional spacetime as $n+1$D,
with $n$ dimensional space and 1 dimensional time.
})
is called the ${\mathbb{Z}}_2$ topological order\cite{WenPRBZ2TO44.2664}, equivalently the ${\mathbb{Z}}_2$ spin liquid\cite{ReadSachdevPRL66.1773},
or the ${\mathbb{Z}}_2$ toric code \cite{Kitaev2003}, or the ${\mathbb{Z}}_2$ discrete gauge theory \cite{Wegner:1971jf}.
Indeed the ${\mathbb{Z}}_2$ topological order exists in any dimension,
say in $((d-1)+1)$D with $d\geq 3$. Discrete
${\mathbb{Z}}_N$ gauge theory can be described by an integer level-$N$ $BF$ field theory with an action $\int\,\frac{N}{2\pi}B\wedge dA$,
where $A$ and $B$ are locally 1-form and $(d-2)$-form gauge fields.
The case of $N=2$ and $d=3$ is our example of 2+1D ${\mathbb{Z}}_2$ topological order.
Since $n$-point correlation function of local operators cannot detect the nontrivial ${\mathbb{Z}}_2$ or ${\mathbb{Z}}_N$ topological order,
we shall instead use \emph{extended} operators to detect its nontrivial order.
The \emph{extended} operators are the Wilson and 't Hooft operators:
$W_{A,e_n}({C_n^1}) =\exp[\hspace{1pt}\mathrm{i}\hspace{1pt} e_n \oint_{C_n^1} A ]$
carrying the electric charge $e_n$
along a closed curve ${C_n^1}$,
and $W_{B,q_m}({S_m^{d-2}}) = \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} q_m \oint_{S_m^{d-2}}B ]$
carrying the magnetic charge $q_m$
along a closed surface ${S_m^{d-2}}$.
The path integral (details examined in the warm up exercise done in Section \ref{sec:BdA})
for the correlator of the extended operators results in
\begin{eqnarray}
\langle W_{A,e_n}({C_n^1}) W_{B,q_m}({S_m^{d-2}}) \rangle = \exp[ -\hspace{1pt}\mathrm{i}\hspace{1pt} \frac{2\pi}{N} e_n q_m \text{Lk}(C_n^1, S_m^{d-2})].
\end{eqnarray}
With some suitable
$e_n$ and $q_m$ values, its expectation value is nontrivial (i.e.\ not equal to 1) if and only if
the linking number $\text{Lk}(C_n^1, S_m^{d-2})$ of the line and surface operator is nonzero.
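The correlator phase can be checked numerically; for the ${\mathbb{Z}}_2$ case ($N=2$, $e_n=q_m=1$) a single link gives $-1$ while an unlink gives $+1$ (a sketch, with our own function name):

```python
import cmath

def bf_correlator(N, e_n, q_m, linking_number):
    """Expectation value of a linked Wilson line and 't Hooft surface in
    level-N BF theory: exp(-i * (2*pi/N) * e_n * q_m * Lk)."""
    return cmath.exp(-2j * cmath.pi * e_n * q_m * linking_number / N)
```

The phase depends only on $e_n q_m \,\text{Lk} \bmod N$, as expected for a ${\mathbb{Z}}_N$ theory.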
The closed line operator $W_{A,e_n}({C_n^1})$ can be viewed as creating and then annihilating a pair of $e_n$ particle-antiparticle 0D anyon excitations
along a 1D trajectory in the spacetime.
The closed surface operator $W_{B,q_m}({S_m^{d-2}})$ can be viewed as creating and then annihilating
a pair of $q_m$ fractionalized flux-anti-flux $(d-3)$D excitations
along some trajectory in the spacetime
(Note that the flux excitation is a 0D anyon particle in 2+1D, while it is a 1D anyonic string excitation in 3+1D).
A nontrivial linking implies that there is a nontrivial braiding process between $e_n$ charge and $q_m$ flux excitation in the spacetime\footnote{
Let us elaborate on what exactly is meant by this. Let ${\cal L}$ be a link in a closed spacetime manifold $M$. The link ${\cal L}$ can be decomposed into submanifolds, such as lines or surfaces.
The lines or surfaces become operators in TQFT that create anyonic excitations at their ends. For example, an open line creates the anyonic particle at its two end points. An open surface creates the anyonic string at its boundary components. A closed line thus creates a pair of anyonic particle/anti-particle from the vacuum, and then annihilates them back to the vacuum. The closed surface creates the anyonic strings from the vacuum and then annihilates them back to the vacuum. Therefore the link ${\cal L}$ can be viewed as the time trajectory for the braiding process of those anyonic excitations,
where the braiding process means the time-dependent process (with a local time as a tangent vector in a local patch of the whole manifold) that moves those anyonic excitations around to form a closed trajectory as the link of submanifolds (lines, surfaces) in the spacetime manifold.
The braiding statistics concerns the complex number that arise in the path integral with the configuration described above. The braiding statistics captures the statistical Berry phase of excitations of particle and string.
We remark on the \emph{quantum dimensions} of anyonic particles/strings in Sec.~\ref{sec:conclude}.
}.
The link configurations, shown in terms of spacetime braiding processes, are listed in Table \ref{table:TQFTlink}.
Physically,
we can characterize the topological order through the statistical Berry phase between anyonic excitations, say $\exp[ \hspace{1pt}\mathrm{i}\hspace{1pt} \frac{2\pi}{N} e_n q_m]$, via
the nontrivial link invariant.
Mathematically, the viewpoint is the opposite:
the topological order, or TQFT, here the $BF$ theory, detects the nontrivial link invariant.
It shall be profound to utilize both viewpoints to explore the topological order in condensed matter, TQFT in field theories, and
link invariants in mathematics.
This thinking outlines the deep relations between quantum statistics and spacetime topology\cite{JWangthesis, 1602.05951}.
The goals of our paper are:
(1) Provide concrete examples of topological orders and TQFTs that occur as
emergent low energy phenomena
in some well-defined fully-regularized many-body quantum systems.
(2) Provide explicit exact analytic calculations of the braiding statistics and link invariants for our topological orders and TQFTs.
For the sake of our convenience and for the universality of low energy physics, we shall approach our goal through TQFT, without worrying about a particular lattice-regularization or
the lattice Hamiltonian.
However, we emphasize again that all our TQFTs are low energy physics of some well-motivated lattice quantum Hamiltonian systems,
and we shall either provide or refer to examples of such lattice models and condensed matter systems, case by case.
To summarize, our TQFTs / topological orders shall satisfy the following physics properties:
\begin{enumerate}
\item The system is unitary.
\item Anomaly-free in its own dimensions. Emergent as the infrared low energy physics from
fully-regularized microscopic many-body quantum Hamiltonian systems with an ultraviolet high-energy lattice cutoff. This motivates their practical relevance for condensed matter.
\item The energy spectrum has a finite energy gap $\Delta$ in a closed manifold for the microscopic many-body quantum Hamiltonian systems.
We shall take the large energy gap limit $\Delta \gg 1$ to obtain a valid TQFT description.
The system can have degenerate ground states (also called the zero modes) on a closed spatial manifold $M^{d-1}$.
This can be evaluated
as the path integral on the manifold $M^{d-1} \times S^1$, namely
$Z(M^{d-1} \times S^1)= \dim \cH_{M^{d-1}} \equiv \text{GSD}$
as the dimension of Hilbert space, which counts the ground state degeneracy (\text{GSD}).
On an open manifold, the system has the lower dimensional boundary theory
with anomalies.
The anomalous boundary theory could be gapless.
\item The microscopic
Hamiltonian contains the short-ranged local interactions between the spatial sites or links.
The Hamiltonian operator is Hermitian.
Both the TQFT and the Hamiltonian system are defined within the local Hilbert space.
\item The system has the long-range entanglement, and contains fractionalized anyonic particles, anyonic strings, or other extended objects as excitations.
\end{enumerate}
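As a concrete illustration of the ground state degeneracy above (a standard result for the untwisted theory, quoted here without derivation), the level-$N$ $BF$ theory $\int \frac{N}{2\pi} B \wedge dA$ in $(d-1)+1$D yields
\begin{eqnarray}
Z(T^{d-1} \times S^1) = \text{GSD} = N^{d-1},
\end{eqnarray}
e.g.\ $\text{GSD}=N^2$ for the 2+1D ${\mathbb{Z}}_N$ gauge theory on the spatial torus $T^2$.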
As said, the ${\mathbb{Z}}_2$ topological order / gauge theory
has both TQFT and lattice Hamiltonian descriptions \cite{WenPRBZ2TO44.2664, ReadSachdevPRL66.1773, Kitaev2003, Wegner:1971jf}.
There are further large classes of topological orders, including the ${\mathbb{Z}}_2$ toric code \cite{Kitaev2003},
that can be described by
a local short-range interacting Hamiltonian:
\begin{equation}\label{eq:Hamiltonian}
\hat{H}=-\sum_v \hat{A}_v-\sum_f \hat{B}_f,
\end{equation}
where $\hat{A}_v$ and $\hat{B}_f$ are mutually commuting bosonic lattice operators acting on the vertex $v$ and the face $f$ of a triangulated/regularized space.
With certain appropriate choices of $\hat{A}_v$ and $\hat{B}_f$, we can write down an exact solvable spatial-lattice model
(e.g. see a systematic analysis in \cite{Wan1211.3695, Wan:2014woa}, and also similar models
in \cite{MesarosRan, Jiang:2014ksa, Wang1404.7854}) whose low energy physics
yields the Dijkgraaf-Witten topological gauge theories \cite{Wittencohomology}.
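For the simplest untwisted example, the ${\mathbb{Z}}_2$ toric code, $\hat{A}_v$ is a product of Pauli-$X$ operators on the four edges meeting a vertex and $\hat{B}_f$ is a product of Pauli-$Z$ operators on the four edges bounding a face; their mutual commutation, which makes Eq.~(\ref{eq:Hamiltonian}) exactly solvable, can be verified on a small torus (a self-contained sketch with our own edge-labeling conventions):

```python
import numpy as np

L = 2                        # 2x2 periodic lattice: 2*L*L = 8 edges
n_edges = 2 * L * L

def h(r, c):                 # horizontal edge leaving site (r, c)
    return (r % L) * L + (c % L)

def v(r, c):                 # vertical edge leaving site (r, c)
    return L * L + (r % L) * L + (c % L)

def support(edges):
    """Bit vector marking the edges an operator acts on."""
    s = np.zeros(n_edges, dtype=int)
    for e in edges:
        s[e] ^= 1
    return s

# A_v: Pauli-X on the four edges meeting vertex (r, c).
A_terms = [support([h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)])
           for r in range(L) for c in range(L)]
# B_f: Pauli-Z on the four edges bounding the face with corner (r, c).
B_terms = [support([h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)])
           for r in range(L) for c in range(L)]

def commute(x_supp, z_supp):
    """An X-type and a Z-type Pauli string commute iff they overlap
    on an even number of edges (symplectic criterion)."""
    return int(np.dot(x_supp, z_supp)) % 2 == 0
```

Every $\hat{A}_v$ commutes with every $\hat{B}_f$ because each vertex star shares zero or two edges with each face boundary.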
Dijkgraaf-Witten topological gauge theories in $d$-dimensions are defined in terms of path integral on a spacetime lattice ($d$-dimensional manifold ${M^{d}}$ triangulated with $d$-simplices).
The edges of each simplex are assigned with quantum degrees of freedom of a gauge group $G$ with group elements $g \in G$.
Each simplex then is associated to a complex $U(1)$ phase of $d$-cocycle $\omega_d$ of the cohomology group $H^{d}(G,U(1))$
up to a sign of orientation related to the ordering of vertices (called the branching structure). How do we
relate the spacetime lattice path integral $Z$ to the ground state solution of the Hamiltonian given in Eq.~(\ref{eq:Hamiltonian})?
%
We design the $\hat{B}_f$ term as the zero flux constraint on each face / plaquette.
We design that the $\hat{A}_v$ term acts on the wavefunction of a spatial slice through each vertex $v$ by lifting
the initial state through an imaginary time evolution to a new state with a vertex $v'$ via
$\hat{A}_v=\frac{1}{|G|}\sum_{[vv']=g\in G}\hat{A}_v^g$. Here the edge along the imaginary time is assigned with $[vv']=g$ and
all $g \in G$ are summed over. The precise value of $\hat{A}_v^g$ is determined by filling the imaginary spacetime simplices with cocycles $\omega_d$.
The whole term $\hat{A}_v$ can be viewed as the near neighbor interactions that capture the statistical Berry phases
and the statistical interactions.
%
Such models are also named
the twisted quantum double model \cite{deWildPropitius:1995cf, Wan1211.3695}, or the twisted gauge theories \cite{Wang1404.7854, Wan:2014woa}, due to the fact that
Dijkgraaf-Witten's group cohomology description requires twisted cocycles.
With a well-motivated lattice Hamiltonian, we can ask what its low energy continuum TQFT is.
The Dijkgraaf-Witten model should be described by bosonic TQFT, because its definition does not restrict to a spin manifold.
Another way to understand this bosonic TQFT is the following.
Since $\hat{A}_v$ and $\hat{B}_f$ are bosonic operators in Eq.~(\ref{eq:Hamiltonian}), we shall term such a Hamiltonian as a bosonic system and bosonic quantum matter. TQFTs for bosonic Hamiltonians are bosonic TQFTs that require no spin structure.
We emphasize that bosonic quantum matter and bosonic TQFTs have only \emph{fundamental bosons} (without any fundamental fermions),
although these bosonic systems can allow excitations of \emph{emergent anyons}, including \emph{emergent fermions}.
It has been noticed by \cite{deWildPropitius:1995cf, WangSantosWen1403.5256, Wang1403.7437, Kapustin1404.3230, Wang1404.7854, 1405.7689}
that the cocycle in the cohomology group reveals the continuum field theory action (See, in particular, the Tables in \cite{1405.7689}).
A series of works has developed this direction by formulating a continuum field theory description for
Dijkgraaf-Witten topological gauge theories of discrete gauge groups,
their topological invariants and physical properties
\cite{Kapustin1404.3230, 1405.7689, Gaiotto:2014kfa,
CWangMLevin1412.1781, Gu:2015lfa, Ye1508.05689,
RyuChenTiwari1509.04266,CWangCHLinMLevin1512.09111,RyuTiwariChen1603.08429,1602.05951, He1608.05393}.
We will follow closely to
the set-up of \cite{1405.7689, 1602.05951}. Continuum TQFTs with level-quantizations
are formulated in various dimensions in Tables of \cite{1405.7689}.
Dynamical TQFTs with well-defined exact gauge transformations to all orders
and their physical observables are organized in terms of path integrals with linked line and surface operators in Tables of \cite{1602.05951}.
For example,
we can start by considering the Dijkgraaf-Witten topological gauge theories given by the cohomology group $H^{d}(G,U(1))$,
say of a generic finite Abelian gauge group $G= \prod_I {\mathbb{Z}}_{N_I}$.
Schematically, leaving the details of level-quantizations into our main text,
in 2+1D,
we have field theory actions of
$\int BdA$,
$\int K_{IJ} A_I dA_J$,
$\int B_I dA_I+A_I dA_J$ and
$\int B_I dA_I+A_1 A_2A_3$, etc.
In 3+1D, we have
$\int B_I dA_I+A_J A_K dA_L$,
$\int B_I dA_I+A_1 A_2A_3A_4$.
Here $B$ and $A$ fields are locally 2-form and 1-form gauge fields respectively.
For simplicity, we omit the wedge product ($\wedge$) in the action.
(For example, $A_1 A_2A_3$ is a shorthand notation for $A_1 \wedge A_2 \wedge A_3$.)
The indices of $A_I$ and $B_I$ are associated to the choice of ${\mathbb{Z}}_{N_I}$ subgroup in $G= \prod_I {\mathbb{Z}}_{N_I}$.
The $A$ fields are 1-form U(1) gauge fields, but the $B$ fields can have modified gauge transformations when we turn on the cubic and quartic interactions
in the actions.
We should warn the readers \emph{not} to be confused by the notations:
the TQFT gauge fields $A$ and $B$, and the microscopic Hamiltonian operator $\hat{A}_v$ and $\hat{B}_f$ are totally different subjects.
Although they are mathematically related by the group cohomology cocycles, the precise physical definitions are different.
How do we go beyond the twisted gauge theory description of the Dijkgraaf-Witten model?
Other TQFTs that are beyond the Dijkgraaf-Witten model, such as
$\int B_I dA_I+B_IB_J$ \cite{Ye1410.2594, Gaiotto:2014kfa} and other higher form TQFTs \cite{Kapustin:2013uxa}, may still be captured by the analogous lattice Hamiltonian model
in Eq. (\ref{eq:Hamiltonian}) by modifying the decorated cocycle in $\hat{H}$ to more general cocycles.
Another possible formulation beyond the Dijkgraaf-Witten model is the Walker-Wang model\cite{WalkerWang1104.2632, Williamson:1606.07144}.
The lattice Hamiltonian can still be written in terms of a certain version of Eq. (\ref{eq:Hamiltonian}).
All together,
we organize the list of aforementioned TQFTs,
braiding statistics and link invariants that we compute,
and some representative realizable condensed matter/lattice Hamiltonians, in Table \ref{table:TQFTlink}.
Most TQFTs in Table \ref{table:TQFTlink} are bosonic TQFTs that require no spin manifold/structure.
However, $\int \frac{N_I}{2\pi}{B^I \wedge d A^I} + { \frac{ p_{IJ}}{4 \pi}} A^I \wedge d A^J$ in 2+1D, and $\int \frac{N_I}{2\pi}B^I\wedge dA^I+\frac{p_{IJ}N_IN_J}{4\pi N_{IJ}}\,B^I\wedge B^J$ in 3+1D,
\footnote{
Throughout our article, we denote ${N_{12}} \equiv {\gcd(N_{1}, N_{2})}$, in general ${N_{IJ \dots}} \equiv {\gcd(N_{I}, N_{J}, \dots)}$.
}
are two examples of fermionic TQFTs (or the so-called spin TQFTs) when $p_{II}$ is an odd integer.
A fermionic TQFT can emerge only from a fermionic Hamiltonian that contains fundamental fermionic operators satisfying anti-commutation relations {(see e.g. \cite{Gaiotto1505.05856,Bhardwaj1605.01640})}.
We emphasize that fermionic quantum matter has \emph{fundamental fermions} (and can also have fundamental bosons),
although these fermionic systems can allow excitations of other \emph{emergent anyons}.
Mathematically, TQFTs describing fermionic quantum matter should be identified with spin TQFTs, which require a spin structure
\cite{BelovMoore2005ze, jenquin2006spin} (see the prior observation of the spin TQFT in \cite{Wittencohomology}).
We shall clarify how we go beyond the approach of \cite{1405.7689, 1602.05951}.
Ref.\cite{1405.7689} mostly focuses on formulating the \emph{probe-field} action and path integral, where the field variables are non-dynamical
and do not appear in the path integral measure. Thus Ref.\cite{1405.7689} is suitable for the context of probing global-symmetry protected states,
the so-called Symmetry Protected Topological states \cite{XieSPT4} (SPTs, see \cite{Wen1610.03911, Senthil1405.4015, Chiu1505.03535} for recent reviews).
Ref.\cite{1602.05951} includes \emph{dynamical gauge fields} in the path integral, that is, field variables which are dynamical
and do appear in the path integral measure. This is suitable for the context of topological orders. {
Ref.\cite{1602.05951}
observes the relations between the links of submanifolds (e.g. worldlines and worldsheets whose operators
create anyon excitations of particles and strings) based on the properties of 3-manifolds and
4-manifolds, and then relates the links to the braiding statistics data computed in Dijkgraaf-Witten model \cite{Wang1403.7437, Wang1404.7854,CWangMLevin1412.1781} and in the path integral of TQFTs. }
In this article, we explore the opposite direction, reversing the target.
We start from the TQFTs as an input (the first sub-block in the first column in Table \ref{table:TQFTlink}), and determine the associated
mathematical link invariants independently (the second sub-block in the first column in Table \ref{table:TQFTlink}).
We give examples of nontrivial links in 3-sphere $S^3$ and 4-sphere $S^4$, and their path integral expectation value as statistical Berry phases
(the second column in Table \ref{table:TQFTlink}), and finally associate the related condensed matter models
(the third column in Table \ref{table:TQFTlink}).
In Table \ref{table:TQFTlink}, we systematically survey various link invariants together with the relevant braiding processes (for which the invariant takes a nontrivial value, such as $1$),
treating both those new to the literature and those that have occurred before in a unified manner.
The most familiar braiding is the Hopf link
with two linked worldlines of anyons in 2+1D spacetime \cite{{Witten:1988hf},Wilczek:1990ik}, such that $\text{Lk}(\gamma_I,\gamma_J)=1$.
The more general Aharonov-Bohm braiding \cite{Aharonov1959} or the charge-flux braiding
has the worldline of an electrically charged particle linked with the $(d-2)$-dimensional worldsheet of a magnetic flux, with
linking number $\text{Lk}(S_m^{d-2}, C_n^1)=1$ in $(d-1)$+1D spacetime.
The Borromean rings braiding is useful for detecting certain non-Abelian anyon systems\cite{CWangMLevin1412.1781}.
The link of a pair of surfaces as the loop-loop braiding (or two-string braiding) process is mentioned in
\cite{AlfordPreskill1992,Bodecker2004PRL,Baez0603085}.
The link of three surfaces as the three-loop braiding (or three-string braiding) process was
discovered in
\cite{Wang1403.7437,Jiang:2014ksa} and explored in \cite{Wang1404.7854}.
The link of four 2-surfaces as the four-loop braiding (or four string braiding) process is explored in
\cite{CWangMLevin1412.1781,1602.05951,RyuTiwariChen1603.08429}.
More broadly, below we should make further remarks on
the related work \cite{Kapustin1404.3230, 1405.7689, Ye1410.2594, Gaiotto:2014kfa,
CWangMLevin1412.1781, Gu:2015lfa, Ye1508.05689,
RyuChenTiwari1509.04266,CWangCHLinMLevin1512.09111,RyuTiwariChen1603.08429, He1608.05393, NingLiuPengYe1609.00985, Ye1610.08645}.
This shall connect our work to other condensed matter and field theory literature in a more general context.
While Ref. \cite{Kapustin1404.3230} is motivated by the discrete anomalies (the 't Hooft anomalies for discrete global symmetries),
Ref. \cite{1405.7689} is motivated by utilizing locally flat bulk gauge fields as physical probes to detect Symmetry Protected Topological states (SPTs).
As a side note, SPTs are very different from the intrinsic topological orders and the TQFTs that we mentioned earlier:
\begin{itemize}
\item The SPTs are short-range entangled states protected by nontrivial global symmetries of a symmetry group $G$.
An SPT has path integral $|Z|=1$ on any closed manifold. Famous examples of SPTs include the topological insulators \cite{2010RMP_HasanKane, 2011_RMP_Qi_Zhang} protected by
time-reversal and charge-conjugation symmetries.
{
The gapless boundaries of SPTs are gappable by breaking the symmetry or by introducing strong interactions.
Consequently, taking the 1+1D boundary of a 2+1D SPT as an example, the 1+1D chiral central charge is necessarily (but not sufficiently) $c_-=0$.}
\item The intrinsic topological orders are long-range entangled states robust against local perturbations, even without any global symmetry protection.
However, some topological orders that have a gauge theory description with a gauge group $G$ may be obtained by dynamically gauging the global symmetry $G$ of SPTs
\cite{LG1202.3120, Ye:2013upa}.
{The boundary theory for topological orders/TQFTs obtained from gauging SPTs must be gappable as well.
}
\end{itemize}
{In relation to the lattice Hamiltonian, an SPT has its Hilbert space and group elements associated to
the vertices of a spatial lattice \cite{XieSPT4}, with the corresponding group cohomology implemented through homogeneous cocycles;
the holonomies are trivial for all cycles of a closed manifold, thus $|Z|=1$.
In contrast, Eq. (\ref{eq:Hamiltonian}) is suitable for topological orders, which have their Hilbert space and group elements associated to
the links of a spatial lattice \cite{Wan1211.3695,Wang1404.7854,Wan:2014woa}, with the group cohomology implemented through inhomogeneous cocycles; the holonomies are non-trivial for cycles of a closed manifold, thus $|Z|$ sums over different holonomies. }
In relation to the field theory, we expect that the SPTs are described by invertible TQFTs (such as the level $N=1$ in $BF$ theory), a nearly trivial theory, but implemented with nontrivial global symmetries.
(See \cite{Freed2014} for discussions of invertible TQFTs, and see the general treatment of global symmetries in TQFTs in \cite{Gaiotto:2014kfa}.)
In contrast, we expect that
the intrinsic topological orders are described by generic non-invertible TQFTs (e.g. level $N$ $BF$ theory).
Since Ref.\cite{1405.7689} implements nearly flat probe gauge fields, the formalism there cannot be the complete
story for the intrinsic topological orders and TQFTs of our current interest.
It was later found that one can view the topological actions in terms of dynamical gauge fields instead of probe fields,
by modifying the gauge transformations \cite{Gu:2015lfa, Ye1508.05689}.
Up until now,
there is good evidence that
we can view the discrete spacetime Dijkgraaf-Witten model
in terms of some continuum TQFTs (See Tables in \cite{1602.05951, 1405.7689} and our
Table \ref{table:TQFTlink}).
One of the most important issues for understanding the dynamical TQFT
is to compute precisely the path integral $Z$ and to
find explicitly the physical observables.
To this end, one partial goal of this article is to explicitly compute the path integral and
the braiding statistics /
link invariants for these TQFTs in various dimensions.
We focus mainly on 2+1D and 3+1D for the sake of realistic dimensions in condensed matter physics, but our formalism can be easily applied to any dimension.
Other than TQFTs and discrete gauge theories in Table \ref{table:TQFTlink},
we can obtain even more fermionic spin TQFTs by gauging the global symmetries of fermionic SPTs (fSPTs).
An interesting example is gauging the
fSPTs with
${\mathbb {Z}}_2^f\times ({\mathbb {Z}}_2)^n$ symmetry in various dimensions.
We are able to address one interesting puzzle concerning the ${\mathbb {Z}}_2^f\times {\mathbb {Z}}_2$ fSPTs as
Topological Superconductors
with 8 distinct classes labeled by $\nu \in \mathbbm{Z}_8$
\cgreen{(realized by stacking $\nu$ layers of a pair of chiral and anti-chiral $p$-wave superconductors).}
Although it is known that the $\nu=0,4$ gauged fSPTs are bosonic Abelian Chern-Simons (CS) theories for the bosonic ${\mathbb{Z}}_2$ gauge and
twisted gauge theory (the toric code and double-semion models),
and the $\nu=2,6$ gauged fSPTs are fermionic Abelian spin-CS theories for the fermionic ${\mathbb{Z}}_2$ gauge and twisted gauge theory,
the field theory descriptions for the odd-$\nu$ classes ($\nu=1,3,5,7$) are somewhat mysterious.
In some sense, the odd-$\nu$ classes are fermionic ``${\mathbb {Z}}_2$ gauge spin-TQFTs,'' but their statistics is somehow non-Abelian.
We solve the puzzle by deriving explicit non-Abelian spin TQFTs obtained from gauging fSPTs, and compute physical observables to distinguish $\nu \in \mathbbm{Z}_8$ class in
Sec. \ref{sec:fTQFT}.
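As a quick illustrative aside (relying only on a standard knot-theory fact, not on the derivations of Sec. \ref{sec:fTQFT}), the Arf invariant detected by the odd-$\nu$ gauged theories can be read off from the knot determinant $|\Delta_K(-1)|$ via Murasugi's congruence: $\mathrm{Arf}(K)=0$ if $\Delta_K(-1)\equiv\pm1 \pmod 8$, and $\mathrm{Arf}(K)=1$ if $\Delta_K(-1)\equiv\pm3 \pmod 8$. A minimal Python sketch:

```python
def arf_from_determinant(det):
    """Arf invariant of a knot K from its determinant det = |Delta_K(-1)| (always odd),
    via Murasugi's congruence: Arf = 0 iff Delta_K(-1) = +-1 (mod 8)."""
    if det % 2 == 0:
        raise ValueError("a knot determinant is always odd")
    return 0 if det % 8 in (1, 7) else 1

print(arf_from_determinant(3))  # trefoil (det 3): Arf = 1, detected by odd nu
print(arf_from_determinant(1))  # unknot  (det 1): Arf = 0
print(arf_from_determinant(5))  # figure-eight (det 5): Arf = 1
```

In particular, the trefoil has determinant $3$ and hence Arf invariant $1$, consistent with its appearance in Table \ref{table:TQFTlink} as a knot detected by the odd-$\nu$ classes.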
\subsection{The plan of the article and the convention of notation}
\begin{table}[!h]
\centering
\finline[\fontsize{10}{10}]{Alpine}{
\noindent
\makebox[\textwidth][c]
{
\begin{tabular}{ccc}
\hline
$\begin{matrix}\text{(i). TQFT actions}\\[1mm]
\hline
\hline\\[-4mm]
\text{associated link invariants}
\end{matrix}$ &
$\begin{matrix}
\text{(ii). Spacetime-braiding process,}\\
\text{Path-integral $Z(\text{Link})$/$Z[S^d]$}\\[1mm]
\hline
\hline\\[-4mm]
\text{Quantum statistic braiding data $e^{\hspace{1pt}\mathrm{i}\hspace{1pt} \theta}$}
\end{matrix}$ &
$\begin{matrix}\text{(iii). Comments:}\\
\text{Condensed matter models}
\end{matrix}$ \\
\hline\\[-4mm]
\multicolumn{3}{c}{Any dimensions} \\
\cline{1-3}\\[-2mm]
$
\begin{matrix}
\text{Sec. \ref{sec:BdA}}: \int \frac{ N_I}{2\pi}{B^I d A^I}
\\[2mm]
\hline
\hline\\[-2mm]
\text{(Aharonov-Bohm) linking number}\\[2mm]
\text{Lk}(S_m^{d-2}, C_n^1)
\end{matrix}
$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.4]{Link_Sn_S1_in_Sd_wd.pdf} \end{pmatrix} / Z[S^d]
\\[2mm]
\hline
\hline\\[-2mm]
\exp[ - \frac{2\pi i}{N} q_m e_n \text{Lk}(S_m^{d-2}, C_n^1)]
\end{matrix}$
&
$\begin{matrix}
\text{${\mathbb{Z}}_N$ topological order\cite{WenPRBZ2TO44.2664}}\\
\text{${\mathbb{Z}}_N$ spin liquid\cite{ReadSachdevPRL66.1773},}\\
\text{${\mathbb{Z}}_N$ toric code\cite{Kitaev2003}}\\
\text{${\mathbb{Z}}_N$ gauge theory\cite{Wegner:1971jf}}
\end{matrix}$
\\
\hline\\[-3mm]
\multicolumn{3}{c}{2+1D} \\
\cline{1-3}\\[-3mm]
$\begin{matrix}
\text{Sec. \ref{sec:AdA}}: \int \frac{K_{IJ}}{4 \pi} A^I d A^J
\\[2mm]
\int \frac{N_I}{2\pi}{B^I d A^I} + { \frac{ p_{IJ}}{4 \pi}} A^I d A^J \\[2mm]
\hline
\hline\\[-2mm]
\text{Linking number: } \text{Lk}(\gamma_I,\gamma_J)
\end{matrix}
$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.3]{S3ll12_uncut_l.pdf} \includegraphics[scale=0.27]{S3_top.pdf}\end{pmatrix} / Z[S^3]
\\[2mm]
\hline
\hline\\[-2mm]
\exp [- \pi \hspace{1pt}\mathrm{i}\hspace{1pt} \sum_{I,J} (K^{-1})_{IJ} e_I e_J\text{Lk}(\gamma_I,\gamma_J)]
\end{matrix}$
&
$\begin{matrix}
\text{Fractional quantum Hall states\cite{WenKmatrix},}\\
\text{Halperin states\cite{halperin1983theory}}\\
\text{Twisted quantum double\cite{Wan1211.3695, MesarosRan}},\\
\text{String-net models\cite{Levin0404617},}\\
\text{2+1D anyon systems\cite{Kitaev2006, nayak2008non}}\\
\text{({Spin TQFT} for $K_{II}, p_{II} \in$ odd.)}
\end{matrix}$
\\
\hline\\[-2mm]
$\begin{matrix}
\text{Sec. \ref{sec:aaa-theory}}: \int \frac{ N_I}{2\pi}{B^I d A^I}+{ \frac{N_1 N_2 N_3\;
p_{}}{{(2 \pi)^2 } N_{123}}} A^1 A^2 A^3
\\[2mm]
\hline
\hline\\[-2mm]
\text{Milnor's triple linking number}:\\
\bar\mu(\gamma_1,\gamma_2,\gamma_3)
\end{matrix}$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.35]{2+1D_Borromean_mid_123_l.pdf} \end{pmatrix} / Z[S^3]
\\[2mm]
\hline
\hline\\[-2mm]
\exp (
-\frac{2\pi \hspace{1pt}\mathrm{i}\hspace{1pt}\,p\,q_1q_2 q_3}{N_{123}}\,\bar{\mu}(\gamma_1,\gamma_2,\gamma_3)
)
\end{matrix}$
&
$\begin{matrix}
\text{Gauged SPT lattice model\cite{WangSantosWen1403.5256,Gu:2015lfa, He1608.05393}},\\
\text{Twisted quantum double\cite{Wan1211.3695, MesarosRan}},\\
\text{String-net models\cite{Levin0404617},}\\
\text{$D_4$
discrete gauge theory \cite{deWildPropitius:1995cf,Wang1404.7854, Gu:2015lfa, He1608.05393}}
\end{matrix}$
\\
\hline
$\begin{matrix}
\text{Sec. \ref{sec:fTQFT}: \cred{Gauged} } \frac{\pi}{4}\int a\cup \text{ABK}
\\[2mm]
\hline
\hline\\[-2mm]
\text{Arf invariant }\\
\end{matrix}
$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.35]{trefoil.pdf} \end{pmatrix} / Z[S^3]
\\[2mm]
\hline
\hline\\[-2mm]
\pm 1
\end{matrix}$
&
$\begin{matrix}
\text{Gauged ${\mathbb{Z}}_2^f \times {\mathbb{Z}}_2$-fSPT model.}\\
\text{\cgreen{Odd $\nu \in \mathbbm{Z}_8$ detects knots}}\\
\text{\cgreen{with non-zero Arf invariant (e.g. Trefoil).}}
\end{matrix}$
\\
\hline
$\begin{matrix}
\text{Sec. \ref{sec:fTQFT}: \cred{Gauged} } {\pi}\int a_1\cup a_2 \cup \eta
\\[2mm]
\hline
\hline\\[-2mm]
\text{Sato-Levine invariant }\\
\end{matrix}
$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.35]{whitehead-link.pdf} \end{pmatrix} / Z[S^3]
\\[2mm]
\hline
\hline\\[-2mm]
\pm 1
\end{matrix}$
&
$\begin{matrix}
\text{Gauged ${\mathbb{Z}}_2^f \times ({\mathbb{Z}}_2)^2$-fSPT model.}\\
\text{\cred{Odd $\nu$ in the mixed class detects}}\\
\text{\cgreen{links with non-zero Sato-Levine}}\\
\text{\cgreen{invariant (e.g. Whitehead).}}
\end{matrix}$
\\
\hline
\\[-2mm]
\multicolumn{3}{c}{3+1D} \\
\cline{1-3}\\[-2mm]
$\begin{matrix}
\text{Sec. \ref{sec:AAdA}}: \int \frac{ N_I}{2\pi}{B^I d A^I} {{+}}
\frac{ N_{I'} N_{J'} \; p_{I'J'K'}}{{(2 \pi)^2 } N_{{I'}{J'}}}
A^{I'} A^{J'} d A^{K'} \\[2mm]
\hline
\hline\\[-2mm]
\text{Triple linking number of surfaces:}\\
\text{Tlk}(\Sigma_1,\Sigma_3,\Sigma_2)
\end{matrix}$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.35]{3+1D_Triple_link_mid_SpinHopfLink_Large_S2_123_l.pdf} \end{pmatrix} / Z[S^4]
\\[2mm]
\hline
\hline\\[-2mm]
\exp (
\frac{2\pi \hspace{1pt}\mathrm{i}\hspace{1pt}\,p\,q_1 q_2 q_3}{N_{123}}\,\text{Tlk}(\cred{ \Sigma_1,\Sigma_3,\Sigma_2})
)
\end{matrix}$
&
$\begin{matrix}
\text{Gauged SPT lattice model\cite{Wang1403.7437}},\\
\text{Twisted gauge theory\cite{Wan:2014woa,ZWang1611.09334}},\\
\text{Abelian string model\cite{Wang1403.7437, Jiang:2014ksa, Wang1404.7854}}\\
\text{}
\end{matrix}$
\\
\hline\\[-2mm]
$\begin{matrix}
\text{Sec. \ref{sec:A4-theory}}: \int \frac{ N_I}{2\pi}{B^I d A^I} +
{ \frac{N_1 N_2 N_3 N_4\;
p_{}}{{(2 \pi)^3 } N_{1234}}} A^1 A^2 A^3 A^4
\\[2mm]
\hline
\hline\\[-2mm]
\text{
Quadruple linking number of surfaces:}\\
\text{Qlk}(\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4)
\end{matrix}
$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.35]{3+1D_SpinBorromean_mid_S2_1234.pdf} \end{pmatrix} / Z[S^4]
\\[2mm]
\hline
\hline\\[-2mm]
\exp (
\frac{2\pi \hspace{1pt}\mathrm{i}\hspace{1pt}\,p\,q_1 q_2 q_3 q_4}{N_{1234}}\,\text{Qlk}(\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4)
)
\end{matrix}$ &
$\begin{matrix}
\text{Gauged SPT lattice model\cite{Gu:2015lfa}},\\
\text{Twisted gauge theory\cite{Wan:2014woa}},\\
\text{Non-Abelian string model\cite{Wang1404.7854}}\\
\text{}
\end{matrix}$
\\
\hline
$\begin{matrix}
\text{Sec. \ref{sec:BB-theory}}: \int \frac{N_I}{2\pi}B^I dA^I+\frac{p_{IJ}N_IN_J}{4\pi N_{IJ}}\,B^I B^J
\\[2mm]
\hline
\hline\\[-2mm]
\text{Intersection number of surfaces: }\\
\#(\Sigma_I\cap\Sigma_J)
\end{matrix}
$
&
$\begin{matrix}
Z \begin{pmatrix} \includegraphics[scale=0.5]{Link_BB_in_S4.pdf} \end{pmatrix} / Z[S^4]
\\[2mm]
\hline
\hline\\[-2mm]
\exp (
- \frac{ \pi \hspace{1pt}\mathrm{i}\hspace{1pt} p_{IJ} e_I e_J}{N_{IJ}}\,\#(\Sigma_I\cap\Sigma_J) )
\end{matrix}$
&
$\begin{matrix}
\text{Gauged SPT lattice model\cite{Ye1410.2594}},\\
\text{Walker-Wang like model\cite{WalkerWang1104.2632,KeyserlingkBurnell1405.2988}}.\\
\text{({Spin TQFT} for \cred{$p_{II}, N_{I} \in$ odd}.)}
\end{matrix}$
\\
\hline
\end{tabular}
} \hspace*{35mm}
\caption{TQFT actions, the link invariants computed through the path integral $Z$, and their condensed matter models
are organized in three columns.
For a comparison with earlier developments, see the setup in \cite{1405.7689, 1602.05951}.
Here $p, p_{IJ}, p_{IJK}$ are quantized integer levels.
}
\label{table:TQFTlink}
}
\end{table}
The plan of our article is organized as follows.
In Sec.~\ref{sec:BdA},
we derive the link invariant of $\int BdA$ theory in any dimension as the Aharonov-Bohm's linking number that detects a charge particle
and a flux loop braiding process through the Aharonov-Bohm phase.
In Sec.~\ref{sec:AdA}, we study $\int K_{IJ} A_I dA_J$ and $\int BdA+AdA$ in 2+1D and show that its path integral calculates the linking number.
In Sec.~\ref{sec:aaa-theory}, we study $\int BdA+A^3$ in 2+1D and obtain Milnor's triple linking number from its path integral.
In Sec.~\ref{sec:AAdA}, we study $\int BdA+A^2dA$ in 3+1D and obtain the triple linking number of surfaces.
In Sec.~\ref{sec:A4-theory}, we study $\int BdA+A^4$ in 3+1D and obtain the quadruple linking number of surfaces.
In Sec.~\ref{sec:BB-theory}, we study $\int BdA+BB$ in 3+1D and obtain the intersection number of open surfaces.
In Sec.~\ref{sec:fTQFT}, we construct the explicit fermionic SPT path integrals
with ${\mathbb {Z}}_2^f\times ({\mathbb {Z}}_2)^n$ symmetry, and their gauged versions: fermionic spin TQFTs.
We derive the experimentally measurable physical observables, including the ground state degeneracy (GSD), the braiding statistics
(the modular matrices $\cS^{xy}$ and $\cT^{xy}$), etc.
In addition, we discuss their relation to various invariants including Arf(-Brown-Kervaire), Rokhlin, Sato-Levine invariants and more.
In Sec.~\ref{sec:conclude}, we conclude with additional remarks.
\cred{
We should emphasize that the link invariants we derive are powerful and important in various aspects.
(1) A link invariant can detect various possible links in spacetime, or various possible braiding processes (regardless} \cgreen{of whether} \cred{the braiding process is known or unknown in the literature).
While only a few specific braiding processes have been investigated in the literature (such as the three- or four-string braiding processes),
we can use our link invariants to identify other braiding processes} \cgreen{ that produce nontrivial values of topological invariants and thus have nontrivial statistical Berry phases.
(2) Our method to derive topological invariants is based on field theory description of TQFTs. In particular, our approach is systematic,
using Poincar\'e duality and intersection theory. Our approach is universal, and our result is more general than what appeared in the literature.
}
\noindent
Note: To denote the cyclic group of order $n$,
we use ${\mathbb{Z}}_n$ and $\mathbbm{Z}_n$, which are equivalent mathematically but have different meanings physically.
We use ${\mathbb{Z}}_n$ to denote a symmetry group or a gauge group.
We use the slightly different notation $\mathbbm{Z}_n$ to denote the distinct classes in the classification of SPTs/TQFTs or in the cohomology/bordism group.
Notation ${\mathbb {Z}}_2^f$ stands for the fermion parity symmetry.
We denote ${N_{IJ \dots}} \equiv {\gcd(N_{I}, N_{J}, \dots)}$ and $\mathbbm{Z}_{N_{IJ \dots}} \equiv \mathbbm{Z}_{\gcd(N_{I}, N_{J}, \dots)}$.
\cred{As usual, the notation $M_1 \sqcup M_2$ denotes the disjoint union of two sets or two manifolds
$M_1$ and $M_2$. The notation $M\setminus S$ denotes the relative complement of $S$ in $M$.}
We use $\cup:H^p({M^{d}},{\mathbb{Z}}_N)\otimes H^q({M^{d}},{\mathbb{Z}}_N)\rightarrow H^{p+q}({M^{d}},{\mathbb{Z}}_N)$ to denote cup-product in cohomology ring.
GSD stands for ground state degeneracy.
In Table \ref{table:TQFTlink} and elsewhere,
repeated indices are normally summed over (Einstein summation), except in the
$\frac{ N_{I'} N_{J'} \; p_{I'J'K'}}{{(2 \pi)^2 } N_{{I'}{J'}}}
A^{I'} A^{J'} d A^{K'}$ term, where \emph{the primed indices are fixed} instead of summed over.
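The gcd notation above can be sketched in a few lines (an illustrative Python snippet, not part of the text's formalism):

```python
from math import gcd
from functools import reduce

def N_(*Ns):
    """N_{IJ...} = gcd(N_I, N_J, ...), as used for quantized levels, e.g. p in Z_{N_123}."""
    return reduce(gcd, Ns)

print(N_(4, 6))      # N_{12} for (N_1, N_2) = (4, 6) -> 2
print(N_(4, 6, 10))  # N_{123} for (N_1, N_2, N_3) = (4, 6, 10) -> 2
```

For instance, when $N_{123}=\gcd(N_1,N_2,N_3)=1$, the level $p \in \mathbbm{Z}_{N_{123}}$ of the cubic term is necessarily trivial.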
\section{$\int BdA$ in any dimension and Aharonov-Bohm's linking number} \label{sec:BdA}
Below we warm up by considering the level-$N$ BF theory with an action $\int \frac{N}{2\pi} BdA$ in any dimension, where $N$ is quantized to be an integer.
{{The study of BF theory in physics dates back to the early work of \cite{Horowitz1989, 1991BlauThompson}.}}
Consider the following action on any closed $d$-manifold ${M^{d}}$:
\begin{equation}
S[A,B]=\int_{{M^{d}}}\,\frac{N}{2\pi}B\wedge dA
\end{equation}
where $A$ is a 1-form gauge field on ${M^{d}}$ and $B$ is a $(d-2)$-form gauge field on ${M^{d}}$. The partition function, or path integral without any additional operator insertion,
is
\begin{equation}
Z=\int DA DB \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} S[A,B]] =\int DA DB \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} \int_{{M^{d}}}\,\frac{N}{2\pi}B\wedge dA]
\end{equation}
Locally the gauge transformation is given by:
\begin{eqnarray}
A &\rightarrow& A+dg, \\
B &\rightarrow& B+ d\eta.
\end{eqnarray}
If ${M^{d}}$ has non-trivial topology, globally $g$ and $\eta$ may have discontinuities such that $dg$ and $d\eta$ are continuous forms representing cohomology classes in $2\pi H^1({M^{d}},{\mathbb{Z}})$ and $2\pi H^{d-2}({M^{d}},{\mathbb{Z}})$ respectively.
Now for a path integral with insertions, let $\Phi$ be a gauge invariant functional $\Phi(A,B)$ of the fields $A$ and $B$.
The path integral with insertion $\Phi$ can be formally defined as
\begin{eqnarray} \label{eq:ZBF}
\langle \Phi \rangle =\frac{1}{Z} \int DA \;DB \;\Phi(A,B)\; \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} S[A,B]]=\frac{1}{Z} \int DA \; DB \; \Phi(A,B)\; \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} \int_{{M^{d}}}\,\frac{N}{2\pi}B\wedge dA].
\end{eqnarray}
Let us note that when ${M^{d}}$ has non-trivial topology, the field $B$ can only locally be understood as a $(d-2)$-form. Globally, it can be realized as $B=\tilde{B}+\beta$, where $\tilde{B}$ is a globally defined $(d-2)$-form and $\beta$ is a discontinuous $(d-2)$-form such that $d\beta$ is a continuous form representing a class in $2\pi H^{d-1}({M^{d}},{\mathbb{Z}})$, the flux of the $(d-2)$-form gauge field $B$. So the path integral over $B$ actually means the following
\begin{equation}
\int DB\ldots \equiv\sum_{[d\beta]\in 2\pi H^{d-1}({M^{d}},{\mathbb{Z}})}\int D\tilde B\ldots.
\end{equation}
Below we evaluate $\langle \Phi \rangle$ in various scenarios, starting from the simplest, almost trivial case and gradually increasing the complexity.
\begin{enumerate}
\item If $\Phi (A)$ is independent of the $B$ field, then the integration over $\tilde{B}$ gives the equation of motion as a constraint on $A$, which localizes
$A$ to be a flat $U(1)$ connection. Namely, the curvature is zero: $F_A=dA=0$. Furthermore, from Poincar\'e duality $H^{d-1}({M^{d}},{\mathbb{Z}})\cong H_1({M^{d}},{\mathbb{Z}})$, it follows that the sum over fluxes $\beta$ imposes the following constraints on $A$:
\begin{equation}
\exp ( iN\int_\gamma A) =1,\qquad \forall \gamma
\end{equation}
that is, modulo gauge transformations, connection $A$ belongs to ${\mathbb{Z}}_N$ subset of $U(1)$ flat connections:
\begin{equation}
[A]\in \text{Hom}(H_1({M^{d}}),{\mathbb{Z}}_N) \subset \text{Hom}(H_1({M^{d}}),U(1)).
\end{equation}
Note that from the universal coefficient theorem and the fact that $H_0({M^{d}},{\mathbb{Z}})$ is a free group, it follows that $\text{Hom}(H_1({M^{d}}),{\mathbb{Z}}_N)\cong H^1({M^{d}},{\mathbb{Z}}_N)$. The path integral then reduces to the following finite sum:
\begin{eqnarray} \label{eq:Phi(A)}
\langle \Phi \rangle =\frac{1}{Z}
\sum_{[A]\,\in \text{Hom}(H_1({M^{d}}),{\mathbb{Z}}_N) }
\;\Phi(A).
\end{eqnarray}
The standard normalization for the partition function $Z$ is as follows:
\begin{equation}
Z=\frac{1}{N}\sum_{[A]\in \text{Hom}(H_1({M^{d}}),{\mathbb{Z}}_N) }1
\end{equation}
so that $Z=1$ for ${M^{d}}=S^{d-1}\times S^1$.
\item If $\Phi (A,B)$ depends on the $B$ field as follows:
\begin{eqnarray} \label{eq:Phi(A,B)}
\Phi(A,B)= \prod_{m} \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} q_m \int_{S_m^{d-2}}B ] \cdot \Phi_0(A).
\end{eqnarray}
Here $\{S_m^{d-2}\}_{m=1,2,\dots}$ is a family of $(d-2)$-dimensional hypersurfaces inside the spacetime manifold ${M^{d}}$, and $\Phi_0(A)$ is an insertion that depends only on $A$. Gauge invariance requires $q_m\in {\mathbb{Z}}$.
One can also rewrite (\ref{eq:Phi(A,B)}) as follows:
\begin{equation}
\Phi(A,B)= \prod_{m} \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} q_m \int_{{M^{d}}}B\wedge \delta^{\perp}({S_m^{d-2}})] \cdot \Phi_0(A)
\end{equation}
where $\delta^{\perp}({S_m^{d-2}})$ is the 2-form valued delta function distribution supported on ${S_m^{d-2}}$. That is,
\begin{eqnarray}
\int_{{M^{d}}} \omega_{d-2} \wedge \delta^{\perp}({S_m^{d-2}}) =\int_{S_m^{d-2}} \omega_{d-2}.
\end{eqnarray}
for any $(d-2)$-form $\omega_{d-2}$. After integrating out $B$, the path integral Eq. (\ref{eq:ZBF}) localizes to the solutions of the equations of motion with source:
\begin{eqnarray} \label{eq:FA}
F_A=dA=-\frac{2 \pi}{N} \sum_m q_m \; \delta^{\perp}({S_m^{d-2}}).
\end{eqnarray}
This equation implies that
$F_A/2\pi$ is a differential form which represents the class in $H^2({M^{d}},{\mathbb{R}})$ Poincar\'e dual to the class \cred{$\frac{1}{N} \sum_m q_m \; [{S_m^{d-2}}]$}
in homology $H_{d-2}(M,{\mathbb{R}})$.
Here and below $[S]$ denotes the homology class of the surface $S$.
Since $\frac{F_A}{2 \pi }$ represents the first Chern class $c_1\in H^2({M^{d}},{\mathbb{Z}})$ of the $U(1)$ gauge bundle,
$\frac{1}{N} \sum_m q_m \; [{S_m^{d-2}}]$
must represent an integral homology class.
This gives a constraint on the allowed (magnetic) charges $q_m$
if some of the classes $[{S_m^{d-2}}] \neq 0$ are nontrivial.
\item If $H_1({M^{d}},{\mathbb{Z}})=0$, then there is a unique solution to Eq. (\ref{eq:FA}), modulo the gauge
redundancy. The cohomology \cred{$H^1({M^{d}} \setminus (\sqcup_m {S_m^{d-2}}),{\mathbb{Z}})$}
is then generated by 1-forms $\mu_1, \dots, \mu_m$ such that
\begin{eqnarray}
\int_{C_n^1} \mu_i = \delta_{n,i},
\end{eqnarray}
where $C_n^1$ is a small circle linking ${S_n^{d-2}}$.
Here $M\setminus S$ denotes the relative complement of $S$ in $M$.
The solution of Eq. (\ref{eq:FA}) then becomes:
\begin{eqnarray}
A= -\frac{2 \pi}{N} \sum_m q_m \mu_m.\;
\end{eqnarray}
One possible choice of forms $\mu_m$ is using 1-form valued delta functions supported on ${\cal V}_m^{d-1}$, Seifert hypersurfaces bounded by ${S_m^{d-2}}$ (i.e. such that $\partial{\cal V}_m^{d-1}={S_m^{d-2}}$ and therefore $d\delta^\perp({\cal V}_m^{d-1})=\delta^\perp(S_m^{d-2})$):
\begin{equation}
A=-\frac{2 \pi}{N} \sum_m q_m \delta^\perp({\cal V}_m^{d-1}).
\end{equation}
\item
If $\Phi_0(A)$ in Eq. (\ref{eq:Phi(A,B)}) is a product of Wilson loops supported on one-dimensional loops $\{ \gamma_n^1 \}$ that are separate and disjoint from $\{ {S_m^{d-2}} \}$,
such that
\begin{eqnarray} \label{eq:Phi0(A)}
\Phi_0(A)= \prod_{n} \exp[\hspace{1pt}\mathrm{i}\hspace{1pt} e_n \int_{\gamma_n^1} A ]
\end{eqnarray}
with the electric charge $e_n\in{\mathbb{Z}}$ associated to each loop, then the path integral with $\Phi(A,B)$ insertion can be evaluated as follows:
\begin{multline} \label{eq:ZBFlink}
\langle \Phi \rangle
= \frac{1}{Z} \int DA \;DB \; \exp[i S[A,B]] \; \exp[i \sum_{n} e_n \int_{\gamma_n^1} A ] \exp[i \sum_{m} q_m \int_{S_m^{d-2}}B ]\\
=\exp[ - \frac{2\pi i}{N} \sum_{m,n} q_m e_n \int_{{M^{d}}}\delta^\perp(\gamma_n^1)\wedge \delta^\perp({\cal V}_m^{d-1})]
=\exp[ - \frac{2\pi i}{N} \sum_{m,n} q_m e_n \text{Lk}(S_m^{d-2}, \gamma_n^1)]
\end{multline}
where the $\text{Lk}(S^{d-2}_m, \gamma^1_n)\equiv \#({\cal V}^{d-1}_m \cap \gamma^1_n)$ is the linking integer number between the loop $\gamma^1_n$ and the $(d-2)$-dimensional submanifold $S^{d-2}_m$, which by definition is given by counting intersection points in $({\cal V}^{d-1}_m \cap \gamma^1_n)$ with signs corresponding to orientation.
\end{enumerate}
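As a concrete numerical cross-check of the $d=3$ case of Eq. (\ref{eq:ZBFlink}) (an illustrative sketch only; the discretization scheme, the Hopf-link parametrization, and the sample level and charges are our own choices), one can evaluate the Gauss linking integral $\text{Lk}=\frac{1}{4\pi}\oint\oint \frac{(\vec r-\vec s)\cdot(d\vec r\times d\vec s)}{|\vec r-\vec s|^3}$ for two explicitly parametrized circles and then form the Aharonov-Bohm phase:

```python
import math, cmath

def gauss_linking(c1, c2, n=200):
    """Midpoint-rule Gauss linking integral for closed curves c1, c2 : [0,1) -> R^3."""
    p1 = [c1((k + 0.5) / n) for k in range(n)]
    p2 = [c2((k + 0.5) / n) for k in range(n)]
    d1 = [tuple(p1[(k + 1) % n][i] - p1[k][i] for i in range(3)) for k in range(n)]
    d2 = [tuple(p2[(k + 1) % n][i] - p2[k][i] for i in range(3)) for k in range(n)]
    total = 0.0
    for r, dr in zip(p1, d1):
        for s, ds in zip(p2, d2):
            x = (r[0] - s[0], r[1] - s[1], r[2] - s[2])
            cr = (dr[1]*ds[2] - dr[2]*ds[1],
                  dr[2]*ds[0] - dr[0]*ds[2],
                  dr[0]*ds[1] - dr[1]*ds[0])
            dist3 = (x[0]*x[0] + x[1]*x[1] + x[2]*x[2]) ** 1.5
            total += (x[0]*cr[0] + x[1]*cr[1] + x[2]*cr[2]) / dist3
    return total / (4.0 * math.pi)

# Hopf link: unit circle in the xy-plane, and a unit circle in the xz-plane
# passing through its center (the minimum distance between the curves is 1).
c1 = lambda t: (math.cos(2*math.pi*t), math.sin(2*math.pi*t), 0.0)
c2 = lambda t: (1.0 + math.cos(2*math.pi*t), 0.0, math.sin(2*math.pi*t))

Lk_raw = gauss_linking(c1, c2)   # converges to +-1 (sign set by orientations)
Lk = round(Lk_raw)
N, q, e = 4, 1, 1                # hypothetical level and charges
phase = cmath.exp(-2j * math.pi * q * e * Lk / N)
```

The discretized integral reproduces $|\text{Lk}|=1$ for the Hopf link, and `phase` realizes the factor $\exp[-\frac{2\pi \hspace{1pt}\mathrm{i}\hspace{1pt}}{N} q e\, \text{Lk}]$ of Eq. (\ref{eq:ZBFlink}) for a single charge-flux pair.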
\section{$\int K_{IJ} A_I dA_J$ and $\int BdA+AdA$ in 2+1D and the linking number}
\label{sec:AdA}
In 2+1D spacetime, as another warm-up exercise, consider the action of the $U(1)^s$ Chern-Simons theory with level matrix $K$:
\begin{equation}
S[A]=\int_{{M^{3}}}\sum_{I,J=1}^s \frac{K_{IJ}}{4\pi}A^I\wedge dA^J.
\end{equation}
where $K_{IJ}$ is a symmetric integer-valued matrix.
The above most general Abelian Chern-Simons theory includes a particular case:
\begin{equation}
S[A,B]=\int_{{M^{3}}} \sum_{I} \frac{N_I}{2\pi}\,B^I\wedge dA^I+
\sum_{I,J} \frac{p_{IJ}}{4\pi}A^I\wedge dA^J
\label{BdAAdA-action}
\end{equation}
where $p_{IJ}$ is a symmetric integer-valued matrix.
When some diagonal element $p_{II}$ is an odd integer, we have an Abelian spin Chern-Simons theory (considered in detail in \cite{BelovMoore2005ze}).
When all $p_{II}$ are even integers, we have an Abelian Chern-Simons theory within the
cohomology group $H^3({\mathbb{Z}}_{N_I} \times {\mathbb{Z}}_{N_J} ,U(1))=\mathbbm{Z}_{N_I} \times \mathbbm{Z}_{N_J} \times \mathbbm{Z}_{N_{IJ}}$ of the Dijkgraaf-Witten theory \cite{1405.7689}, with
$p_{II} \in \mathbbm{Z}_{N_I}$, $p_{JJ} \in \mathbbm{Z}_{N_J}$ and $p_{IJ} \in \mathbbm{Z}_{N_{IJ}}$.
Here we denote $\mathbbm{Z}_{N_{IJ \dots}} \equiv \mathbbm{Z}_{\gcd(N_{I}, N_{J}, \dots)}$.
Note that when $K_{II}$ is odd for some $I$, the theory becomes a fermionic spin-TQFT that depends on the choice of spin structure.
A generic collection of line operators supported on $s$ closed disjoint curves $\gamma_I$ embedded in $S^3$ can be realized as follows:
\begin{equation}
W_{e}[\{\gamma_I\}_{I=1}^s]=
\exp ( i \sum_{I=1}^s e_I\int_{\gamma_I} A_I)\equiv
\exp (i \sum_{I=1}^s e_I\int_{{M^{3}}} A_I\wedge \delta^\perp(\gamma_I) )
\end{equation}
for some integers $e_I$. As we will see, the result, up to a $\pm 1$ sign, depends only on the class of the $s$-vector $e$ in the cokernel of the level matrix $K$, that is, effectively $e \in {\mathbb{Z}}^s/K{\mathbb{Z}}^s$. Suppose ${M^{3}}=S^3$. The expectation value of $W_{e}[\{\gamma_I\}]$ is then given by a Gaussian integral which \cred{localizes} on the following equations of motion:
\begin{equation}
\sum_{J}K_{IJ}dA_J=-2\pi e_I\, \delta^\perp(\gamma_I)
\end{equation}
which, up to a gauge transformation, can be solved as follows:
\begin{equation}
A_I=-2\pi \sum_{J}(K^{-1})_{IJ} e_J\delta^\perp(\Sigma_J)
\end{equation}
where $\Sigma_J$ is a Seifert surface bounded by $\gamma_J$, and we used that $d\delta^\perp(\Sigma_J)=\delta^\perp(\partial\Sigma_J)=\delta^\perp(\gamma_J)$. Plugging the solution back into the integrand gives us
\begin{multline} \label{Ab-CS-linking}
\langle W_{e}[\{\gamma_I\}] \rangle
=\exp\left\{ - \pi i \sum_{I,J} (K^{-1})_{IJ}e_I e_J\int_{S^3}\delta^\perp(\Sigma_I) \wedge d\delta^\perp(\Sigma_J)
\right\}=\\
\exp\left\{ - \pi i \sum_{I,J} (K^{-1})_{IJ} e_I e_J\text{Lk}(\gamma_I,\gamma_J)
\right\}
\end{multline}
where $\text{Lk}(\gamma_I,\gamma_J)$ is the linking number between $\gamma_I$ and $\gamma_J$, which is by definition equal to the intersection number \cred{$\#(\Sigma_I\cap \gamma_J)$}.
The physics literature on this invariant dates back to \cite{PhysRevLett.51.2250,Polyakov:1988md}.
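Formula (\ref{Ab-CS-linking}) is straightforward to evaluate numerically once the level matrix and the linking numbers are specified. Below is a small sketch for a $2\times 2$ level matrix, using exact rational arithmetic for $K^{-1}$; the function name and the ${\mathbb{Z}}_N$ BF example are ours, not part of the original derivation:

```python
from fractions import Fraction
import cmath

def abelian_cs_phase(K, e, Lk):
    """exp(-pi*i * sum_{I,J} (K^{-1})_{IJ} e_I e_J Lk(gamma_I, gamma_J))
    for a 2x2 integer level matrix K; K^{-1} is kept exact via the adjugate."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    Kinv = [[Fraction(K[1][1], det), Fraction(-K[0][1], det)],
            [Fraction(-K[1][0], det), Fraction(K[0][0], det)]]
    s = sum(Kinv[I][J] * e[I] * e[J] * Lk[I][J]
            for I in range(2) for J in range(2))
    return cmath.exp(-1j * cmath.pi * float(s))

# Z_N BF theory, K = ((0, N), (N, 0)): two unit charges with Lk = 1
# pick up the familiar braiding phase exp(-2*pi*i/N)
N = 3
phase = abelian_cs_phase([[0, N], [N, 0]], [1, 1], [[0, 1], [1, 0]])
```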
\section{$\int BdA+A^3$ in 2+1D, non-Abelian anyons and Milnor's triple linking number}
\label{sec:aaa-theory}
In the 2+1D spacetime, we can consider the following action on a 3-manifold ${M^{3}}$:
\begin{equation}
S[A,B]=\int_{{M^{3}}}\,\sum_{I=1}^3\frac{N_I}{2\pi}B^I\wedge dA^I+\frac{\bar p}{(2\pi)^2} \,A^1\wedge A^2 \wedge A^3
\end{equation}
where $A^I$ and $B^I$ are 1-form fields.
Here ${\bar p} \equiv { \frac{N_1 N_2 N_3\;
p_{}}{ N_{123}}}$ with $p \in \mathbbm{Z}_{N_{123}}$.
This TQFT realizes the
class $p \in \mathbbm{Z}_{N_{123}}$ in the
cohomology group $H^3({\mathbb{Z}}_{N_1} \times {\mathbb{Z}}_{N_2} \times {\mathbb{Z}}_{N_3} ,U(1))$ for the Dijkgraaf-Witten theory \cite{1405.7689}.
The gauge transformation is:
\begin{equation}
\begin{array}{c}
A^I\rightarrow A^I+dg^I \\
B^I\rightarrow B^I+d\eta^I+\frac{\bar p}{2\pi N_I}\,\sum_{J,K}\epsilon_{IJK}\,\left(A^Jg^K-\frac{1}{2} g^Jdg^K\right).
\end{array}
\end{equation}
Consider the following observable:
\begin{equation}
W_{q,e}[\gamma_1,\gamma_2,\gamma_3]=
\exp\left\{
i\sum_{I=1}^3\oint_{\gamma_I} \left[q_I\left(B^I+\frac{\bar p}{4\pi\,N_I}\,\sum_{J,K}\epsilon_{IJK}A^J(d^{-1}A^K)\right)+\sum_{J} e_{IJ}A^J\right]
\right\}
\label{triple-line-op}
\end{equation}
where $\gamma_I$ are three pairwise unlinked (and with trivial framing) connected components of a link. The functions $(d^{-1}A^K)$ are defined on link components as follows:
\begin{equation}
(d^{-1}A^K)(x) \equiv {\phi^K(x)} \equiv \int\limits_{[x_0,x]_{\gamma_I}}A^K,\qquad x\in\gamma_I
\label{dinvA}
\end{equation}
where $x_0\in \gamma_I$ is a reference point and $[x_0,x]_{\gamma_I}\subset \gamma_I$ denotes a segment of $\gamma_I$. Note that $\phi^{K}(x)$ is a well-defined continuous function on $\gamma_I$ only if $\int_{\gamma_I}A^K=0$, that is, the flux of the $A^K$ gauge field through $\gamma_I$ vanishes. We assume that this is the case. If this condition is not satisfied, $W_{q,e}[\gamma_1,\gamma_2,\gamma_3]$ should be zero instead \cite{He1608.05393}. Later we will generalize this to the case when the charges $q_I$, similarly to $e_{IJ}$, form a \cred{general matrix}.
We are interested in calculating its vacuum expectation value (vev), that is:
\begin{equation}
\langle W_{q,e}[\gamma_1,\gamma_2,\gamma_3]\rangle =
\frac{\int {\cal D} A{\cal D} B \;e^{iS[A,B]} W_{q,e}[\gamma_1,\gamma_2,\gamma_3]}
{\int {\cal D} A{\cal D} B \;e^{iS[A,B]} }.
\label{vev1}
\end{equation}
As before, $\delta^\perp(\gamma)$ denotes the form distribution supported on $\gamma$ such that $\int_{{M^{3}}} \omega \wedge \delta^\perp(\gamma)=\int_\gamma \omega$ for any $\omega$. Then we can write
\begin{eqnarray}
W_{q,e}[\gamma_1,\gamma_2,\gamma_3] &=&
\exp\left\{
i\int_{{M^{3}}} \sum_{I=1}^3\delta^\perp(\gamma_I)\wedge\left[ q_I\left(B^I+\frac{\bar p}{4\pi\,N_I}\sum_{J,K}\epsilon_{IJK}A^J(d^{-1}A^K)\right)+\sum_{J}e_{IJ}A^J\right]
\right\} \nonumber \\
&\equiv&
\exp\left\{
i\int_{{M^{3}}} \sum_{I=1}^3\delta^\perp(\gamma_I)\wedge\left[ q_I\left(B^I+\frac{\bar p}{4\pi\,N_I}\sum_{J,K}\epsilon_{IJK} d \phi^J\phi^K\right)+\sum_{J}e_{IJ}A^J\right]
\right\}.\;\;\;\;\;\;\;\;
\end{eqnarray}
Then integrating out $B^I$ in the path integral (\ref{vev1}) imposes the following conditions on $A^I$:
\begin{equation}
dA^{I}=-\frac{2\pi q_I}{N_I}\,\delta^\perp(\gamma_I)
\end{equation}
On ${M^{3}}=S^3$ it can always be solved as follows (uniquely modulo the gauge group):
\begin{equation}
A^{I}=-\frac{2\pi q_I}{N_I}\,\delta^\perp(\Sigma_I)
\end{equation}
where $\Sigma_I$ is a surface bounded by $\gamma_I$ (i.e. $\partial\Sigma_I=\gamma_I$). Consider then the values of the different terms in the effective action that we obtained after integrating $B^I$ out:
\begin{multline}
\sum_{I,J}e_{IJ}\int \delta^\perp(\gamma_I) \wedge A^J=
\sum_{I,J}\frac{2\pi \,e_{IJ} q_J}{N_J}\int \delta^\perp(\gamma_I) \wedge \delta^\perp(\Sigma_J)=\\
=\sum_{I,J}\frac{2\pi \,e_{IJ} q_J}{N_J}\, \# (\gamma_I \cap \Sigma_J)\equiv
\sum_{I,J}\frac{2\pi \,e_{IJ} q_J}{N_J}\, \text{Lk}(\gamma_I, \gamma_J).
\end{multline}
The assumption that there is no flux of the $A^I$ gauge field through any $\gamma_J$ for any pair $I,J$ implies that, in order to get a non-vanishing expectation value, all pairwise linking numbers should be zero: $\text{Lk}(\gamma_I, \gamma_J)=0$.
\begin{multline}
\int \frac{\bar p}{(2\pi)^2} \,A^1\wedge A^2 \wedge A^3=-\frac{2\pi\,\bar p\, q_1 q_2 q_3}{N_1N_2N_3}\int \delta^\perp(\Sigma_1) \wedge \delta^\perp(\Sigma_2) \wedge \delta^\perp(\Sigma_3)=\\
=-\frac{2\pi\,{\bar p}\, q_1 q_2 q_3}{N_1N_2N_3}\,\# (\Sigma_1 \cap \Sigma_2 \cap \Sigma_3)
\end{multline}
where intersection numbers are, as usual, counted with signs determined by orientation. Denote $(-1)^{\epsilon(a)}$ the sign corresponding to the orientation of the intersection at point $a$. Consider (\ref{dinvA}):
\begin{equation}
-\frac{N_K}{2\pi q_K}\,(\phi^K)(x)=\int\limits_{x_0\rightarrow x\text{ along }\gamma_I}\delta^\perp(\Sigma_K)=\# ([x_0,x]_{\gamma_I} \cap \Sigma_K)\equiv
\sum_{a\in ([x_0,x]_{\gamma_I} \cap \Sigma_K)}(-1)^{\epsilon(a)}
\end{equation}
which is unambiguously defined because $\text{Lk}(\gamma_I,\gamma_K)=0$.
Then
\begin{multline}
\sum_{I,J,K}\frac{{\bar p}\, q_I\,\epsilon_{IJK}}{4\pi\,N_I}\int \delta^\perp(\gamma_I)\wedge A^J(d^{-1}A^K)=
-\sum_{I,J,K}\frac{{\bar p}\, q_I q_J\,\epsilon_{IJK}}{2N_IN_J}\int \delta^\perp(\gamma_I)\wedge \delta^\perp(\Sigma_J)\,(d^{-1}A^K)=\\
\sum_{I,J,K}\frac{\pi\, {\bar p} \,q_I q_J q_K\,\epsilon_{IJK}}{N_IN_JN_K}\sum_{b\;\in\; \gamma_I \cap \Sigma_J}(-1)^{\epsilon(b)}\sum_{a\in ([x_0,x_b]_{\gamma_I} \cap \Sigma_K)}(-1)^{\epsilon(a)}=\\
\frac{\pi\, {\bar p} \, q_1 q_2 q_3}{N_1N_2N_3}\sum_{I,J,K}\epsilon_{IJK}
\sum_{\scriptsize\begin{array}{c}a\in \gamma_{I}\cap \Sigma_K \\ b\in \gamma_{I}\cap \Sigma_J\\
x_b>x_a \end{array}}(-1)^{\epsilon(a)+\epsilon(b)},
\label{int-pair-count}
\end{multline}
where the ordering of the intersection points $a,b\,\in\, \gamma_I$ (that is, the condition $x_b>x_a$) is defined relative to the previously chosen reference point $x_0\in \gamma_I$. Finally we have:
\begin{equation}
\langle W_{q,e}[\gamma_1,\gamma_2,\gamma_3]\rangle
=
\exp\left\{
-\frac{2\pi i\,{\bar p}\, q_1 q_2 q_3}{N_1N_2N_3}\,\bar{\mu}(\gamma_1,\gamma_2,\gamma_3)
\right\}
=
\exp\left\{
-\frac{2\pi i\,p\,q_1q_2 q_3}{N_{123}}\,\bar{\mu}(\gamma_1,\gamma_2,\gamma_3)
\right\}
\label{triple-line-vev-result}
\end{equation}
where
\begin{equation}
\bar{\mu}(\gamma_1,\gamma_2,\gamma_3)=
\# (\Sigma_1 \cap \Sigma_2 \cap \Sigma_3)-\frac{1}{2}\sum_{I,J,K}\epsilon_{IJK}
\sum_{\scriptsize\begin{array}{c}a\in \gamma_{I}\cap \Sigma_K \\ b\in \gamma_{I}\cap \Sigma_J\\
x_a>x_b \end{array}}(-1)^{\epsilon(a)+\epsilon(b)}
\label{triple-linking}
\end{equation}
is exactly the geometric formula for Milnor's $\bar{\mu}$ invariant, or Milnor's triple linking number \cite{geom-triple-linking}. It is easy to evaluate for the Borromean rings. Consider the realization of the Borromean rings shown in Figure \ref{fig:borromean-rings-int}, with the natural choice of Seifert surfaces $\Sigma_I$ lying in three pairwise orthogonal planes. It is easy to see that the first term in (\ref{triple-linking}) is $1$ while all other terms vanish. That is
\begin{figure}[h]
\centering
\includegraphics[scale=1.8]{borromean-rings-int}
\caption{A particular choice of surfaces $\Sigma_I$ for the Borromean rings. The red lines show the pairwise intersections $\Sigma_J\cap \Sigma_K$. The endpoints of the red lines are intersection points whose pairs are counted in (\ref{int-pair-count}).}
\label{fig:borromean-rings-int}
\end{figure}
\begin{equation}
\bar{\mu}(\gamma_1,\gamma_2,\gamma_3)=1.
\end{equation}
As an example, the corresponding link figure shown in Table \ref{table:TQFTlink} represents the braiding process of three particle excitations described in
\cite{CWangMLevin1412.1781,1602.05951,RyuTiwariChen1603.08429}.
When the coefficients $q$ in (\ref{triple-line-op}) form a general
matrix $q_{IJ}$ (similarly to the coefficients $e_{IJ}$), we obtain $\det q$ instead of $q_1 q_2 q_3$ in (\ref{triple-line-vev-result}).\footnote{Assuming trivial framing of link components.} Lastly, we remark that this 2+1D theory with a cubic interaction term can host
non-Abelian anyons \cite{deWildPropitius:1995cf,Wang1404.7854,CWangMLevin1412.1781, He1608.05393} with non-Abelian statistics.
\cred{
The attempt to derive Milnor's $\bar{\mu}$ invariant from a Chern-Simons\cgreen{-like} field theory dates back to \cite{Ferrari0210100, Leal0704.2429} and was recently summarized in \cite{Franco1411.6429}.\footnote{We thank Franco Ferrari for bringing to our attention the earlier work on the $\int BdA+A^3$ theory.}
However, our approach is rather different and is generally based on Poincar\'e duality and intersection theory.}
We note that the theory of
$\int\,\sum_{I=1}^3\frac{N_I}{2\pi}B^I\wedge dA^I+\frac{\bar p}{(2\pi)^2} \,A^1\wedge A^2 \wedge A^3$ is equivalent to the non-Abelian discrete gauge theory of the dihedral group $D_4$ (with the $D_4$ group of order 8) \cite{deWildPropitius:1995cf, Wang1404.7854}.
\section{$\int BdA+A^2dA$ in 3+1D and the triple linking number of 2-surfaces}
\label{sec:AAdA}
In the 3+1D spacetime, consider the following action on a 4-manifold ${M^{4}}$:
\begin{equation}
S[A,B]=\int_{{M^{4}}}\,\sum_{I=1}^3\frac{N_I}{2\pi}B^I\wedge dA^I+\frac{{\bar p}}{(2\pi)^2} \,A^1\wedge A^2 \wedge dA^3
\end{equation}
where $A^I$ and $B^I$ are 1- and 2-form gauge fields respectively.
Here ${\bar p} \equiv { \frac{N_1 N_2 \;
p_{}}{ N_{12}}}$ with $p \in \mathbbm{Z}_{N_{123}}$.
This TQFT realizes the
class $p \in \mathbbm{Z}_{N_{123}}$ in the
cohomology group $H^4({\mathbb{Z}}_{N_1} \times {\mathbb{Z}}_{N_2} \times {\mathbb{Z}}_{N_3} ,U(1))$ for the Dijkgraaf-Witten theory \cite{1405.7689}.
Let us introduce an antisymmetric matrix $\epsilon^{IJ}$ such that $\epsilon^{12}=-\epsilon^{21}=1$ and all other elements are zero. The gauge transformation then reads:
\begin{equation}
\begin{array}{c}
A^I\rightarrow A^I+dg^I, \\
B^I\rightarrow B^I+d\eta^I+\frac{{\bar p}}{2\pi N_I}\,\sum_J\epsilon^{IJ}dg^J\wedge A^3.
\end{array}
\end{equation}
Consider the following gauge invariant observable:
\begin{equation}
W_{q}[\Sigma_1,\Sigma_2,\Sigma_3]=
\exp(
i\sum_{I=1}^3 q_I\left\{\int_{\Sigma_I} B^I+ \sum_{J} \frac{{\bar p}\,\epsilon^{IJ}}{2\pi\,N_I}\int_{{\cal V}_I} A^J\wedge dA^3\right\}).
\label{triple-surface-op}
\end{equation}
where $\Sigma_I$ are three non-intersecting surfaces in ${M^{4}}$ and ${\cal V}_I$ are 3D submanifolds bounded by them, that is, $\partial{\cal V}_I=\Sigma_I$. Such ${\cal V}_I$ are usually called Seifert hypersurfaces. As before, we are interested in calculating its vev, that is:
\begin{equation}
\langle W_{q}[\Sigma_1,\Sigma_2,\Sigma_3]\rangle =
\frac{\int {\cal D} A{\cal D} B \;e^{iS[A,B]} W_{q}[\Sigma_1,\Sigma_2,\Sigma_3]}
{\int {\cal D} A{\cal D} B \;e^{iS[A,B]} }.
\label{vev2}
\end{equation}
Using $\delta$-forms we can write it as follows
\begin{equation}
W_{q}[\Sigma_1,\Sigma_2,\Sigma_3]=
\exp (i\sum_{I=1}^3 q_I\int_{{M^{4}}}\left\{\delta^\perp(\Sigma_I)\wedge B^I+\frac{{\bar p}\,\epsilon^{IJ}}{2\pi\,N_I}\,\delta^\perp({\cal V}_I)\wedge A^J\wedge dA^3\right\})
\end{equation}
Then integrating out $B^I$ in the path integral (\ref{vev2}) imposes the following conditions on $A^I$:
\begin{equation}
dA^{I}=-\frac{2\pi q_I}{N_I}\,\delta^\perp(\Sigma_I)
\end{equation}
On ${M^{4}}=S^4$ it can always be solved as follows (uniquely modulo the gauge group):
\begin{equation}
A^{I}=-\frac{2\pi q_I}{N_I}\,\delta^\perp({\cal V}_I')
\end{equation}
where ${\cal V}_I'$ is any 3D \cred{hypersurface} bounded by $\Sigma_I$. Without loss of generality we can choose ${\cal V}_I'={\cal V}_I$. Consider the values of the different terms in the effective action obtained after integrating $B^I$ out:
\begin{multline}
\int \frac{{\bar p}}{(2\pi)^2} \,A^1\wedge A^2 \wedge dA^3=-\frac{2\pi\,{\bar p}\, q_1 q_2 q_3}{N_1N_2N_3}\int \delta^\perp({\cal V}_1) \wedge \delta^\perp({\cal V}_2) \wedge \delta^\perp(\Sigma_3)=\\
=-\frac{2\pi\, {\bar p} \,q_1 q_2 q_3}{N_1N_2N_3}\,\# ({\cal V}_1 \cap {\cal V}_2 \cap \Sigma_3)
\end{multline}
\begin{multline}
\sum_{I,J} q_I\int_{{M^{4}}}\frac{{\bar p}\,\epsilon^{IJ}}{2\pi\,N_I}\,\delta^\perp({\cal V}_I)\wedge A^J\wedge dA^3=
\sum_{I,J}\frac{2\pi {\bar p}\,\epsilon^{IJ} q_I q_J q_3}{N_IN_JN_3}\int_{{M^{4}}}\,\delta^\perp({\cal V}_I)\wedge \delta^\perp({\cal V}_J)\wedge \delta^\perp(\Sigma_3)=\\
=\frac{4\pi {\bar p}\, q_1 q_2 q_3}{N_1N_2N_3}
\,\# ({\cal V}_1 \cap {\cal V}_2 \cap \Sigma_3).
\end{multline}
Finally we have:
\begin{equation}
\langle W_{q}[\Sigma_1,\Sigma_2,\Sigma_3]\rangle
=
\exp\left\{
\frac{2\pi i\, {\bar p} \, q_1 q_2 q_3}{N_1N_2N_3}\,\text{Tlk}(\Sigma_1,\Sigma_3,\Sigma_2)
\right\}
=
\exp\left\{
\frac{2\pi i\,p\,q_1 q_2 q_3}{N_{123}}\,\text{Tlk}(\Sigma_1,\Sigma_3,\Sigma_2)
\right\}
\label{triple-surface-vev-result}
\end{equation}
where
\begin{equation}
\text{Tlk}(\Sigma_1,\Sigma_3,\Sigma_2)\equiv \# ({\cal V}_1 \cap {\cal V}_2 \cap \Sigma_3)
\label{triple-surface-linking}
\end{equation}
is the triple linking number of surfaces (definition (4) in \cite{surface-triple-linking}).
\begin{figure}[!t]
\centering
\includegraphics[scale=1.2]{Hopf-link-spun-int}
\caption{\cred{An example of configuration with a triple linking number Eq.(\ref{triple-surface-linking}) of three 2-surfaces being $\text{Tlk}(\Sigma_1,\Sigma_3,\Sigma_2)\equiv \# ({\cal V}_1 \cap {\cal V}_2 \cap \Sigma_3)=1$. (The same link figure is shown in Table \ref{table:TQFTlink}.)
Take $\Sigma_2,\Sigma_3$ to be a spun Hopf link with Seifert hypersurfaces ${\cal V}_2,{\cal V}_3$ being \cgreen{spuns} of Seifert surfaces. The surface $\Sigma_1$ is a torus embedded at a fixed value of the spin angle and encircling the Hopf link. Choose the Seifert hypersurface ${\cal V}_1$ to be the interior of the torus $\Sigma_1$.
The intersection $\# ({\cal V}_1 \cap {\cal V}_2 \cap \Sigma_3)=1$ contains one point shown bold in the figure.}
}
\label{fig:Hopf-spun-int}
\end{figure}
Similarly, one can consider the theory
\begin{equation}
S[A,B]=\int_{{M^{4}}}\,\sum_{I=1}^2\frac{N_I}{2\pi}B^I\wedge dA^I+\frac{{\bar p}}{(2\pi)^2} \,A^1\wedge A^2 \wedge dA^2
\end{equation}
which represents a non-trivial element of \cred{$H^4({\mathbb{Z}}_{N_1} \times {\mathbb{Z}}_{N_2} ,U(1))$}. The analogous operator supported on a triple of surfaces $\Sigma_{1},\Sigma_{2},\Sigma_{2}'$ reads:
\begin{equation}
W_{q}[\Sigma_1,\Sigma_2,\Sigma_2']=
\exp(
iq_1\int_{\Sigma_1} B^1+iq_2\int_{\Sigma_2}B^2+iq_2'\int_{\Sigma_2'}B^2 +\ldots)
\end{equation}
where dots denote appropriate gauge invariant \cred{completions} similar to the ones in (\ref{triple-surface-op}). The resulting expectation value is as follows
\begin{equation}
\langle W_{q}[\Sigma_1,\Sigma_2,\Sigma_2']\rangle
=
\exp\left\{
\frac{2\pi i\, {\bar p} \, q_1 q_2 q_2'}{N_1N_2N_2}\,[\text{Tlk}(\Sigma_1,\Sigma_2,\Sigma_2')+\text{Tlk}(\Sigma_1,\Sigma_2',\Sigma_2)]
\right\}
\end{equation}
assuming $\Sigma_2,\Sigma_2'$ have trivial framing.
\cred{As an example, in the corresponding link figure shown in Table \ref{table:TQFTlink} as well \cgreen{in} Fig. \ref{fig:Hopf-spun-int}, we mean the braiding process of three string excitations described in
\cite{Wang1403.7437,Jiang:2014ksa,Wang1404.7854,1602.05951,RyuTiwariChen1603.08429}.
In this configuration \cgreen{shown in} Fig. \ref{fig:Hopf-spun-int}, we have $\text{Tlk}(\Sigma_1,\Sigma_3,\Sigma_2)=1$, $\text{Tlk}(\Sigma_2,\Sigma_3,\Sigma_1)=-1$,
$\text{Tlk}(\Sigma_3,\Sigma_2,\Sigma_1)=-1$, $\text{Tlk}(\Sigma_1,\Sigma_2,\Sigma_3)=1$, and finally
$\text{Tlk}(\Sigma_2,\Sigma_1,\Sigma_3)=\text{Tlk}(\Sigma_3,\Sigma_1,\Sigma_2)=0$.
The TQFTs with the $A^1A^2dA^3$ \cgreen{term} can detect this link, as can
$A^2A^1dA^3,\,A^3A^1dA^2$ and $A^1A^3dA^2$; but neither $A^2A^3dA^1$ nor $A^3A^2dA^1$ can detect this link configuration.
}
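The statement about which cubic terms detect this configuration can be checked directly from the listed Tlk values via the phase formula (\ref{triple-surface-vev-result}): the $A^IA^JdA^K$ theory produces the phase $\exp\{2\pi i\, p\, q_1q_2q_3\,\#({\cal V}_I\cap{\cal V}_J\cap\Sigma_K)/N_{123}\}$ with $\#({\cal V}_I\cap{\cal V}_J\cap\Sigma_K)=\text{Tlk}(\Sigma_I,\Sigma_K,\Sigma_J)$. A sketch (the dictionary encoding and function name are ours):

```python
import cmath

# Tlk values for the spun-Hopf configuration of Fig. fig:Hopf-spun-int,
# indexed as Tlk[(I, K, J)] = Tlk(Sigma_I, Sigma_K, Sigma_J)
#                           = #(V_I ∩ V_J ∩ Sigma_K)
Tlk = {(1, 3, 2): 1, (2, 3, 1): -1, (3, 2, 1): -1,
       (1, 2, 3): 1, (2, 1, 3): 0, (3, 1, 2): 0}

def detects(I, J, K, p=1, q=(1, 1, 1), N123=2):
    """True if the A^I A^J dA^K theory gives a nontrivial vev phase
    exp(2*pi*i*p*q1*q2*q3*Tlk(Sigma_I, Sigma_K, Sigma_J)/N_123)
    on this link configuration."""
    phase = cmath.exp(2j * cmath.pi * p * q[0] * q[1] * q[2]
                      * Tlk[(I, K, J)] / N123)
    return abs(phase - 1) > 1e-9

# A1A2dA3, A2A1dA3, A3A1dA2, A1A3dA2 detect; A2A3dA1, A3A2dA1 do not
```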
\cred{The reader may consult Ref.\cite{RyuTiwariChen1603.08429} for a related study of this theory.
We also note that Ref.\cite{Bi:2014vaa} applies non-linear sigma model \cgreen{descriptions as an alternative,} without using the $\int BdA+A^2dA$ theory, to study the 3-string braiding process,
limited \cgreen{to} the more restricted case $N_1= N_2=N_3= 2$.
Here we have considered more generic \cgreen{levels}.}
\section{$\int BdA+A^4$ in 3+1D, non-Abelian strings and
the quadruple linking number of 2-surfaces}
\label{sec:A4-theory}
In the 3+1D spacetime, we can also consider the following action on a 4-manifold ${M^{4}}$:
\begin{equation}
S[A,B]=\int_{{M^{4}}}\,\sum_{I=1}^4\frac{N_I}{2\pi}B^I\wedge dA^I+\frac{\bar p}{(2\pi)^3} \,A^1\wedge A^2 \wedge A^3 \wedge A^4
\end{equation}
where $A^I$ and $B^I$ are 1- and 2-form gauge fields respectively.
Here ${\bar p} \equiv { \frac{N_1 N_2 N_3 N_4\;
p_{}}{ N_{1234}}} $ with $p \in \mathbbm{Z}_{N_{1234}}$.
We have the TQFT that are within the
class $p \in \mathbbm{Z}_{N_{1234}}$ in the
cohomology group $H^4({\mathbb{Z}}_{N_1} \times {\mathbb{Z}}_{N_2} \times {\mathbb{Z}}_{N_3} \times {\mathbb{Z}}_{N_4} ,U(1))$ for the Dijkgraaf-Witten theory \cite{1405.7689}.
The gauge transformation reads (see the exact transformation to all order in \cite{1602.05951}):
\begin{equation}
\begin{array}{c}
A^I\rightarrow A^I+dg^I \\
B^I\rightarrow B^I+d\eta^I-\sum_{J,K,L}\frac{{\bar p}}{2(2\pi)^2N_I}\,\epsilon^{IJKL}A^J\,A^K\,g^L
\end{array}
\end{equation}
where $\epsilon^{IJKL}$ is an absolutely anti-symmetric tensor. Consider the
surface operators supported on 4 different non-intersecting surfaces $\Sigma_I,\, I=1,\ldots,4$ which we \textit{formally} write as follows:
\begin{equation}
W_{q}[\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4]=
\exp
(i\sum_{I=1}^4 q_I\left\{\int_{\Sigma_I} B^I+\sum_{J,K,L}\frac{{\bar p}\,\epsilon^{IJKL}}{3!\,(2\pi)^2\,N_I}A^J\, A^K\, d^{-1}A^L\right\})
\label{quadruple-surface-op}
\end{equation}
To be more specific, consider the surface operator supported on $\Sigma_1$:
\begin{equation}
\exp
(i q_1\left\{\int_{\Sigma_1} B^1+\sum_{J,K,L\neq 1}\frac{{\bar p}\,\epsilon^{1JKL}}{3!\,(2\pi)^2\,N_1}A^J\, A^K\, d^{-1}A^L\right\}).
\end{equation}
What we mean by this expression is the following. If in the path integral we first integrate out $B^2,B^3,B^4$ (which do not appear in the surface operator supported on $\Sigma_1$), this imposes conditions
\begin{equation}
dA^J=0,\,J=2,3,4.\label{flat-cond}
\end{equation}
If $\Sigma_1$ has a non-zero genus as a Riemann surface, we can always represent it by a polygon $\tilde\Sigma_1$
(which is topologically a disk) with appropriately glued boundary.
Choose a point $x_*^{(1)}\in \tilde\Sigma_1$ and define $d^{-1}A^I|_{\Sigma_1}\equiv \phi^I(x)\equiv\int_{x_*^{(1)}}^x A^I,\,x\in \Sigma_1$ where the integral is taken along a path in $\tilde\Sigma_1$.
It does not depend on the choice of the path in $\tilde\Sigma_1$
due to (\ref{flat-cond}). The choice of simply connected $\tilde\Sigma_1$
representing $\Sigma_1$ is similar to the global choice of the path in $\gamma_I$ for line operators in section \ref{sec:aaa-theory}. The surface operator can then be expressed as
\begin{equation}
\exp (
i q_1\left\{\int_{\Sigma_1} B^1+\sum_{J,K,L\neq 1}\frac{{\bar p}\,\epsilon^{1JKL}}{3!\,(2\pi)^2\,N_1}\,d\phi^J\, d\phi^K\, \phi^L\right\}).
\end{equation}
It is easy to see that it is invariant under the gauge transformations
\begin{equation}
\begin{array}{c}
\phi^I\rightarrow \phi^I+g^I \\
B^1\rightarrow B^1+d\eta^1-\sum_{J,K,L\neq 1}\frac{{\bar p}}{2(2\pi)^2N_1}\,\epsilon^{1JKL}d\phi^J\,d\phi^K\,g^L
\end{array}
\end{equation}
up to boundary terms supported on $\partial \tilde\Sigma_1$.
The presence of such terms and the dependence on the choice of $\tilde\Sigma_1$
in general make such a surface operator ill-defined. However, for field configurations satisfying certain restrictions the boundary terms vanish and the ambiguity goes away. This is similar to the situation in Sec.~\ref{sec:aaa-theory}, where $d^{-1}A^K$ is a well-defined continuous function on $\gamma_I$ only if the pairwise linking numbers vanish. In particular, we will need to require that all triple linking numbers $\text{Tlk}(\Sigma_I,\Sigma_J,\Sigma_K)$ are zero. If the ambiguity is present, the operator should vanish instead, similarly to the case considered in Sec.~\ref{sec:aaa-theory}. In the examples below there will be no such ambiguity.
As before, let ${\cal V}_I$ be Seifert hypersurfaces such that $\partial{\cal V}_I=\Sigma_I$. Then integrating out all the $B^I$ implies:
\begin{equation}
dA^{I}=-\frac{2\pi q_I}{N_I}\,\delta^\perp(\Sigma_I),\qquad A^{I}=-\frac{2\pi q_I}{N_I}\,\delta^\perp({\cal V}_I).
\end{equation}
The effective action is then given by the quadruple intersection number of the Seifert hypersurfaces
\begin{multline}
\int \frac{ {\bar p}}{(2\pi)^3} \,A^1\wedge A^2 \wedge A^3\wedge A^4=\frac{2\pi\,{\bar p}\,q_1 q_2 q_3 q_4}{N_1N_2N_3N_4}\int \delta^\perp({\cal V}_1) \wedge \delta^\perp({\cal V}_2) \wedge \delta^\perp({\cal V}_3)\wedge \delta^\perp({\cal V}_4)=\\
=\frac{2\pi\,{\bar p} \, q_1 q_2 q_3 q_4}{N_1N_2N_3N_4}\,\# ({\cal V}_1 \cap {\cal V}_2 \cap {\cal V}_3\cap {\cal V}_4)
\end{multline}
The contribution of the surface operator supported on $\Sigma_I$ reads
\begin{multline}
-q_I\sum_{J,K,L}\int_{{M^{4}}}\frac{{\bar p} \; q_J \; q_K \; q_L\,\epsilon^{IJKL}}{3!\,(2\pi)^2\,N_IN_JN_KN_L}\,\delta^\perp(\Sigma_I)\wedge\delta^\perp({\cal V}_J)\wedge \delta^\perp({\cal V}_K)\; \left.d^{-1}\delta^\perp({\cal V}_L)\right|_{\Sigma_I}=\\
-\frac{2\pi\,{\bar p} \, q_1 q_2 q_3 q_4}{N_1N_2N_3N_4}
\sum_{J,K,L\neq I} \epsilon^{IJKL}\,\frac{t(\Sigma_I;{\cal V}_J,{\cal V}_K,{\cal V}_L)}{3!}
\end{multline}
where $t(\Sigma_I;{\cal V}_J,{\cal V}_K,{\cal V}_L)$ is defined as follows. Consider $\gamma^{(I)}_{J}\equiv (\Sigma_I\cap {\cal V}_J)$, oriented, not necessarily connected, curves on the surface $\Sigma_I$. Then\footnote{Again, we assume that the configuration of surfaces is such that there is no ambiguity in this expression, i.e. no dependence on the choice of $\tilde{\Sigma}_1$ and $x_*^{(1)}\in \tilde{\Sigma}_1$. Otherwise the result should be zero: $\langle W \rangle=0$.}
\begin{eqnarray}
t(\Sigma_1;{\cal V}_2,{\cal V}_3,{\cal V}_4)\equiv
\sum_{a\,\in \,\gamma^{(1)}_2\cap \gamma^{(1)}_3 }(-1)^{\epsilon(a)}
\,\left[\#\text{ of times one needs to cross $\gamma^{(1)}_4$ to reach $a$ from $x_*^{(1)}$}\right],
\label{t-surface-formula}
\end{eqnarray}
where as before, $(-1)^{\epsilon(a)}=\pm 1$
depends on the orientation of the intersection of $\gamma^{(1)}_2$ with $\gamma^{(1)}_3$. The crossings with $\gamma^{(1)}_4$ are also counted with signs. See Fig. \ref{fig:qlinking-t-counting}.
\begin{figure}[!h]
\centering
\includegraphics[scale=1.4]{qlinking-t-counting}
\caption{An example of computing $t(\Sigma_1;{\cal V}_2,{\cal V}_3,{\cal V}_4)$ by formula (\ref{t-surface-formula}). The red numbers $0,1$ denote the value of the weight with which the intersection points of $\gamma_2^{(1)}$ with $\gamma_3^{(1)}$ are counted in different domains separated by $\gamma_4^{(1)}$. The points $a\in \gamma^{(1)}_2\cap \gamma^{(1)}_3$ that enter into the sum with non-zero weight are shown as bold black points. }
\label{fig:qlinking-t-counting}
\end{figure}
Note that if there is no ambiguity in defining $t(\Sigma_1;{\cal V}_2,{\cal V}_3,{\cal V}_4)$ (i.e. no dependence on the choice of the polygon $ \tilde\Sigma_1$
and the reference point $x_*^{(1)}$), then it is antisymmetric
with respect to the exchange of ${\cal V}_2,{\cal V}_3,{\cal V}_4$.
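Once the curves $\gamma^{(1)}_J$ and the reference point are fixed, formula (\ref{t-surface-formula}) is a simple weighted count. In the sketch below, each intersection point $a\in\gamma^{(1)}_2\cap \gamma^{(1)}_3$ is encoded by hand as a pair (its orientation sign, the signed number of $\gamma^{(1)}_4$ crossings from $x_*^{(1)}$ to $a$); the encoding and function name are ours:

```python
def t_invariant(points):
    """t(Sigma_1; V_2, V_3, V_4) as in (t-surface-formula): each point
    a in gamma2 ∩ gamma3 contributes (-1)^eps(a) times the signed number
    of gamma4 crossings needed to reach a from the reference point x*."""
    return sum(sign * crossings for sign, crossings in points)

# three intersection points: two in a weight-1 domain (positive and
# negative orientation would cancel), one in the weight-0 domain
example = t_invariant([(+1, 1), (+1, 1), (-1, 0)])  # 1 + 1 + 0 = 2
```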
\begin{figure}[!h]
\centering
\includegraphics[scale=1.6]{qlinking-move}
\caption{An illustration of the invariance of (\ref{quadruple-surface-linking}) under deformations of the Seifert hypersurfaces ${\cal V}_I$. A local configuration of $\Sigma_1,{\cal V}_2,{\cal V}_3,{\cal V}_4$ is shown in ${\mathbb{R}}^4\cong {\mathbb{R}}^3 \times {\mathbb{R}}_\text{time}$, where ${\mathbb{R}}_\text{time}$ is not shown in the picture. The hypersurfaces ${\cal V}_2,{\cal V}_3,{\cal V}_4$ are locally represented by planes $\times {\mathbb{R}}_\text{time}$, while $\Sigma_1$ is locally a plane $\times$ point and ${\cal V}_1$ is locally a half ${\mathbb{R}}^3$ bounded by $\Sigma_1$ and spanned in the direction of the reader. The right side shows a local deformation of ${\cal V}_4$ which results in increasing both $\# ({\cal V}_1 \cap {\cal V}_2 \cap {\cal V}_3\cap {\cal V}_4)$ and $t(\Sigma_1;{\cal V}_2,{\cal V}_3,{\cal V}_4)$ by 1 (the contributing intersection points are shown bold and black). The total sum (\ref{quadruple-surface-linking}) stays intact.}
\label{fig:qlinking-qlinking-move}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=1.5]{borromean-rings-spun-int}
\caption{An example of a configuration with quadruple linking number Eq.(\ref{quadruple-surface-linking}) equal to $\text{Qlk}(\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4)=1$. Take the triple $\Sigma_2,\Sigma_3,\Sigma_4$ to be spun Borromean rings with Seifert hypersurfaces ${\cal V}_2,{\cal V}_3,{\cal V}_4$ being the spuns of the Seifert surfaces in Fig. \ref{fig:borromean-rings-int}. The surface $\Sigma_1$ is a torus embedded at a fixed value of the spin angle and encircling the Borromean rings. Choose the Seifert hypersurface ${\cal V}_1$ to be the interior of the torus $\Sigma_1$. It is easy to see that for this choice of Seifert hypersurfaces all $t(\Sigma_I;{\cal V}_J,{\cal V}_K,{\cal V}_L)$ vanish, simply because for each $\Sigma_I$ one of the three curves $\gamma_J^{(I)}\,,J\neq I$ is empty. The quadruple intersection $\# ({\cal V}_1 \cap {\cal V}_2 \cap {\cal V}_3\cap {\cal V}_4)=1$ contains one point, shown bold in the figure.
}
\label{fig:borromean-rings-spun-int}
\end{figure}
Finally we have:
\begin{multline}
\langle W_{q}[\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4]\rangle=
\exp\left\{
\frac{2\pi i\,{\bar p}\,q_1 q_2 q_3q_4}{N_1N_2N_3N_4}\,\text{Qlk}(\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4)
\right\}\\
=\exp \left\{
\frac{2\pi i\,p\,q_1 q_2 q_3 q_4}{N_{1234}}\,\text{Qlk}(\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4)
\right\}
\label{quadruple-surface-vev-result}
\end{multline}
where we define the \textit{quadruple linking number} of four 2-surfaces as follows:
\begin{multline}
\text{Qlk}(\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4)\equiv
\# ({\cal V}_1 \cap {\cal V}_2 \cap {\cal V}_3\cap {\cal V}_4)\\
-t(\Sigma_1;{\cal V}_2,{\cal V}_3,{\cal V}_4)
+t(\Sigma_2;{\cal V}_3,{\cal V}_4,{\cal V}_1)
-t(\Sigma_3;{\cal V}_4,{\cal V}_1,{\cal V}_2)
+t(\Sigma_4;{\cal V}_1,{\cal V}_2,{\cal V}_3).
\label{quadruple-surface-linking}
\end{multline}
It is very similar to the geometric definition of Milnor's triple linking number of a 3-component link in $S^3$ considered in Section \ref{sec:aaa-theory}. Each term in the sum is not a topological invariant (that is, invariant under ambient isotopy) of the embedded quadruple of surfaces $\Sigma_{1,2,3,4}\subset S^4$, since it depends on the choice of Seifert hypersurfaces ${\cal V}_I$; however, their sum is. One can easily check its invariance under basic local deformation moves, see Fig. \ref{fig:qlinking-qlinking-move}. A particular example with quadruple linking number 1 is shown in Fig. \ref{fig:borromean-rings-spun-int}.
Lastly, we remark that this 3+1D theory with a quartic interaction term can host non-Abelian strings \cite{Wang1404.7854, CWangMLevin1412.1781, RyuTiwariChen1603.08429} with non-Abelian statistics.
As an example, the corresponding link figure shown in Table \ref{table:TQFTlink} and Fig. \ref{fig:borromean-rings-spun-int} represents the braiding process of four string excitations described in
\cite{CWangMLevin1412.1781,1602.05951,RyuTiwariChen1603.08429}.
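The final phase only requires the five integer terms entering (\ref{quadruple-surface-linking}). A sketch (function names ours), using the spun-Borromean values from Fig. \ref{fig:borromean-rings-spun-int}:

```python
from math import gcd
from functools import reduce
import cmath

def qlk(quad_int, t1, t2, t3, t4):
    """Qlk(S1,S2,S3,S4) = #(V1∩V2∩V3∩V4)
       - t(S1;V2,V3,V4) + t(S2;V3,V4,V1) - t(S3;V4,V1,V2) + t(S4;V1,V2,V3),
       as in (quadruple-surface-linking)."""
    return quad_int - t1 + t2 - t3 + t4

def quadruple_vev(p, q, N, Q):
    """<W_q> = exp(2*pi*i * p * q1*q2*q3*q4 * Qlk / N_1234)."""
    N1234 = reduce(gcd, N)
    return cmath.exp(2j * cmath.pi * p * q[0] * q[1] * q[2] * q[3] * Q / N1234)

# spun Borromean rings: all t-terms vanish, #(V1∩V2∩V3∩V4) = 1 -> Qlk = 1
v = quadruple_vev(1, (1, 1, 1, 1), (2, 2, 2, 2), qlk(1, 0, 0, 0, 0))
```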
\section{$\int BdA+BB$ in 3+1D and the intersection number of open surfaces}
\label{sec:BB-theory}
In the 3+1D spacetime,
one can consider the following action on a 4-manifold ${M^{4}}$:
\begin{equation}
S[A,B]=\int_{{M^{4}}}\,\sum_{I=1}^s\frac{N_I}{2\pi}B^I\wedge dA^I+\sum_{I,J=1}^s\frac{p_{IJ}N_IN_J}{4\pi N_{IJ}}\,B^I\wedge B^J
\label{BB-action}
\end{equation}
where $A^I$ and $B^I$ are 1- and 2-form fields respectively and $N_{IJ} \equiv \gcd(N_I,N_J)$.
We choose a symmetric integral quadratic form $p_{IJ} \in \mathbbm{Z}$.
This TQFT is beyond the Dijkgraaf-Witten group cohomology theory.
The gauge transformation reads:
\begin{equation}
\begin{array}{c}
A^I\rightarrow A^I+dg^I -\sum_J \frac{p_{IJ}N_J\eta^J}{N_{IJ}}\\
B^I\rightarrow B^I+d\eta^I.
\end{array}
\end{equation}
Note that if the diagonal elements $p_{II}$ \cred{and the integers $N_I$} are odd, $e^{iS}$ is invariant under large gauge transformations only if ${M^{4}}$ has an even intersection form; equivalently, if it is a spin 4-manifold.
Consider the following gauge invariant operator, supported on closed surfaces $\Omega_I$ and on surfaces $\Sigma_I$ with boundaries $\gamma_I$:
\begin{equation}
W_{e,q}[\{\Sigma^I\},\{\Omega^J\}]=
\exp\left[
i\sum_{I}q_I\int_{\Omega_I} B^I+i\sum_{I} e_I
\left\{
\int_{\gamma_I} A^I
+\sum_J\int_{\Sigma_I} \frac{p_{IJ}N_J}{N_{IJ}}\,B^J
\right\}
\right]
\label{B2-surface-op}
\end{equation}
where $q_I, e_I\in{\mathbb{Z}}$ are integral weights (charges); the expectation value, up to a sign, will depend only on their values modulo $N_I$.
{Since the charge curve ${\gamma_I}$ of $A^I$ must bound the surface ${\Sigma_I}$ on which $B^J$ is integrated,
we learn that the $\int BdA+BB$ theory is a higher-form gauge theory where particles must have strings attached.}
Consider the case ${M^{4}}=S^4$. Then integrating out $A^I$ imposes the following condition on $B^I$:
\begin{equation}
dB^{I}=-\frac{2\pi e_I}{N_I}\,\delta^\perp(\gamma_I),\qquad B^{I}=-\frac{2\pi e_I}{N_I}\,\delta^\perp(\Sigma'_I)
\end{equation}
where $\Sigma'_I$ is a Seifert surface of $\gamma_I$, that is $\partial {\Sigma'}_I=\gamma_I$.
The effective action is then given by the intersection number of the Seifert surfaces
\begin{equation}
\int_{S^4}\sum_{IJ}\frac{p_{IJ}N_IN_J}{4\pi N_{IJ}}\,B^I\wedge B^J=
\sum_{IJ}\frac{\pi p_{IJ}e_I e_J}{ N_{IJ}}\,\#(\Sigma'_I\cap \Sigma'_J).
\end{equation}
While the contribution of the surface operators reads
\begin{multline}
\sum_{I}q_I\int_{\Omega_I} B^I+\sum_{I,J}\frac{e_Ip_{IJ}N_J}{N_{IJ}}\int_{\Sigma_I}B^{\cred{J}}=\\
-\sum_I\frac{2\pi e_Iq_I}{N_I}\,\#(\Omega_I \cap \Sigma'_I )
-\sum_{IJ}\frac{2\pi p_{IJ}e_I e_J}{ N_{IJ}}\,\#(\Sigma_I\cap \Sigma'_J)
\end{multline}
Combining all the terms we get
{
\begin{equation}
\langle W_{e,q}[\{\Sigma^I\},\{\Omega^J\}] \rangle=
\prod_{I}\exp\left\{-\frac{2\pi i e_Iq_I}{N_I}\,\text{Lk}(\gamma_I,\Omega_I)\right\}
\prod_{I,J}\exp\left\{-\frac{\pi ip_{IJ} e_I e_J}{N_{IJ}}\,\#(\Sigma_I\cap\Sigma_J)\right\}
\end{equation}}
where we used that, by the definition of the linking number, \cred{$\#(\Sigma'_I \cap \Omega_I)=\text{Lk}(\gamma_I,\Omega_I)$} and that $\#((\Sigma_I-\Sigma'_I)\cap (\Sigma_J-\Sigma'_J) )=0$ because the intersection number of any two closed surfaces in $S^4$ is zero. Note that the result depends not only on $\gamma_I$, but also on the choice of surfaces $\Sigma_I$ that are bounded by them. This is consistent with the fact that if one changes \cred{$\Sigma_I$ to $\Sigma_I+\delta\Sigma_I$, where $\delta\Sigma_I$} is a closed surface, it is equivalent to changing \cred{$q_I \Omega_I\rightarrow q_I\Omega_I+\sum_J \frac{p_{IJ} e_J N_I}{N_{IJ}}\,\delta\Sigma_J$} in (\ref{B2-surface-op}), and $\delta\Sigma_J$ may have non-trivial linking with $\gamma_J$ (see Fig. \ref{fig:BB-int-number}). Also note that in order to calculate the diagonal elements $\#(\Sigma_I\cap \Sigma_I)$ one needs to introduce a framing of $\gamma_I$, that is, a choice of a non-zero normal vector along $\gamma_I$. The trivial choice of a generic constant vector leads to $\#(\Sigma_I\cap \Sigma_I)=0$. An example of a framing choice that gives $\#(\Sigma_I\cap \Sigma_I)=1$ is shown in Fig. \ref{fig:BB-self-link}.
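The full expectation value above is a product of elementary phases determined by integer topological data. A sketch (function name ours) evaluating it for a fermionic example in the spirit of Fig. \ref{fig:BB-self-link}: odd $p_{II}$ and odd $N_I$, framed self-intersection $\#(\Sigma_I\cap\Sigma_I)=1$, charge $e_I=N_I$:

```python
from math import gcd
import cmath

def bb_vev(e, q, N, p, Lk, inter):
    """<W_{e,q}> = prod_I exp(-2*pi*i*e_I*q_I*Lk(gamma_I, Omega_I)/N_I)
       * prod_{I,J} exp(-pi*i*p_IJ*e_I*e_J*#(Sigma_I ∩ Sigma_J)/N_IJ),
    with N_IJ = gcd(N_I, N_J)."""
    s = len(N)
    val = 1.0 + 0.0j
    for I in range(s):
        val *= cmath.exp(-2j * cmath.pi * e[I] * q[I] * Lk[I] / N[I])
        for J in range(s):
            val *= cmath.exp(-1j * cmath.pi * p[I][J] * e[I] * e[J]
                             * inter[I][J] / gcd(N[I], N[J]))
    return val

# single Z_3 factor, p_11 = 1, #(Sigma ∩ Sigma) = 1 with nontrivial framing,
# charge e = N = 3, no particle-string linking: the fermionic sign -1
v = bb_vev(e=[3], q=[0], N=[3], p=[[1]], Lk=[0], inter=[[1]])
```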
\begin{figure}[h]
\centering
\includegraphics[scale=2]{BB-int-number}
\caption{An example of a pair of surfaces $\Sigma_I,\Sigma_J\subset S^4$ bounded by curves $\gamma_{I},\gamma_J$ with intersection number $\#(\Sigma_I\cap \Sigma_J)=1$. The configuration of line/surface operators in $S^4$ is represented as a ``movie,'' where the horizontal direction
is the time axis moving from left to right. The closed curve $\gamma_I$ is represented by a pair of points created in 3-dimensional space and then annihilated. The Seifert surface $\Sigma_I$ is represented by a line in the 3-dimensional space connecting the points of the pair. The Seifert surface $\Sigma_J$ is represented by a disc appearing at a fixed moment of ``time''. The red bold point depicts the point contributing to the intersection number $\#(\Sigma_I\cap \Sigma_J)=1$. The shifted surface $\Sigma_J+\delta\Sigma_J$ does not intersect $\Sigma_I$, which is consistent with the fact that the 2D closed surface $\delta\Sigma_J$ has a non-trivial linking number with the 1D closed curve $\gamma_I$:
\cred{$\text{Lk}(\gamma_I,\delta \Sigma_J)=1$}.
}
\label{fig:BB-int-number}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=1.7]{BB-self-link}
\caption{An example of a framing choice which results in the self-intersection number $\#(\Sigma_I\cap \Sigma_I)=1$. The configuration of line/surface operators in $S^4$ is represented as a ``movie.'' The closed curve $\gamma^I$ and its slightly shifted copy are represented by pairs of points created in 3-dimensional space and then annihilated. The Seifert surfaces are represented by lines in 3-dimensional space connecting the points in the pairs. The red bold point depicts the point contributing to the self-intersection number $\#(\Sigma_I\cap \Sigma_I)=1$. Note that when \cgreen{both $p_{II}$ and $N_I$ are odd,} such a configuration results in \cred{$\langle W_{e,q}[\{\Sigma^I\},\{\Omega^J\}] \rangle=-1$} when
\cred{$e_I=N_I,e_J=0,J\neq I$}, which is an indication of the fermionic nature of the line/surface operator.
\cblue{Here the time evolution of the two pairs of end points (two pairs of black dots) forms a closed (undrawn) ribbon, one side of which is rotated by $2 \pi$ relative to the other (a $2\pi$ framing twist),
as shown in Figure 9 of Ref. \cite{Wang1404.7854}, which indicates the spin-statistics (exchange statistics) relation.}
}
\label{fig:BB-self-link}
\end{figure}
One could also detect the value of $p_{IJ}$ by considering, for example, the partition function of the theory (\ref{BB-action}) on a closed simply-connected spin 4-manifold with second Betti number $b_2$ and intersection form $Q_{\alpha\beta}$ on $H^2({M^{4}},{\mathbb{Z}})$. Integrating out $A^I$ restricts $N_IB^I/2\pi$ to be a representative of an element of $H^2({M^{4}},{\mathbb{Z}})$. Equivalently,
\begin{equation}
B^I=\frac{2\pi}{N_I}\sum_{\alpha= 1}^{b_2}n^{I\alpha}\delta^\perp(\Sigma_\alpha)
\end{equation}
where $\Sigma_\alpha$ are representatives of the basis elements of $H^2({M^{4}},{\mathbb{Z}})$ and $n^{I\alpha}\in {\mathbb{Z}}_{N_I}$, taking into account large gauge transformations. The partition function then reads\footnote{For a generic, not necessarily simply connected, 4-manifold $M^4$ the partition function of a discrete 2-form gauge theory in the canonical normalization has the following form:
\begin{equation}
Z[M^4]=\frac{|H^0(M^4,\prod_i {\mathbb{Z}}_{N_i})|}{|H^1(M^4,\prod_i {\mathbb{Z}}_{N_i})|}
\sum_{b\in H^2(M_4,\prod_i{\mathbb{Z}}_{N_i})}e^{iS[b]}.
\end{equation} Roughly speaking, the denominator of the normalization factor counts discrete gauge group transformations, while the numerator counts ambiguities in the gauge transformations.}
\begin{equation}
Z[{M^{4}}]=\prod_{I=1}^s{N_I}\sum_{n^{I\alpha}\in {\mathbb{Z}}_{N_I}}
\exp\sum_{I,J,\alpha,\beta}\frac{i\pi p_{IJ}}{N_{IJ}}n^{I\alpha}n^{J\beta}Q_{\alpha\beta}.
\label{BB-4man}
\end{equation}
Suppose for simplicity that $N_I=N$ for all $I=1,\ldots, s$. One can rewrite (\ref{BB-4man}) using the Gauss reciprocity formula as follows:
\begin{equation}
Z[{M^{4}}]=\frac{e^{\frac{i \pi \sigma(p)\sigma({M^{4}})}{4}}\,N^{3s/2}}{|\det p|^{1/2}}
\sum_{a\in {\mathbb{Z}}^{s\cdot b_2}/(p\,\otimes\,Q){\mathbb{Z}}^{s\cdot b_2}}
\exp\left\{-i \pi N\, p^{IJ}Q^{\alpha\beta}a_{I\alpha}a_{J\beta}\right\}
\end{equation}
where $p^{IJ}$ and $Q^{\alpha\beta}$ are the inverse matrices of $p_{IJ}$ and $Q_{\alpha\beta}$, respectively. Here
$\sigma(p)$ is the signature of the matrix $p_{IJ}$, that is, the difference between the numbers of its positive and negative eigenvalues. Similarly, $\sigma({M^{4}})$ denotes the signature of ${M^{4}}$, which by definition is the signature of the intersection \cred{matrix} $Q_{\alpha\beta}$.
\section{Fermionic TQFT/ spin TQFT in 2+1D and 3+1D}
\label{sec:fTQFT}
Now we consider spin-TQFTs which arise from gauging unitary global symmetries of fermionic SPTs (fSPTs).
We can obtain fermionic discrete gauge spin TQFTs by gauging the $({\mathbb{Z}}_2)^n$ symmetry of ${\mathbb{Z}}_2^f \times ({\mathbb{Z}}_2)^n$ fSPTs.
For example, it has recently been established that the 2+1D $ {\mathbb{Z}}_2^f \times {\mathbb{Z}}_2$ fSPT, namely the ${\mathbb{Z}}_2$-Ising-symmetric Topological Superconductor,
has $\nu \in \mathbbm{Z}_8$ classes \cite{Qi1202.3983, HongYaoRyu1202.5805, GuLevin1304, Neupert2D1403.0953, Morimoto1505.06341}.
\cred{The $\nu$-class of ${\mathbb{Z}}_2^f \times ({\mathbb{Z}}_2)^n$ fSPT
is realized by stacking \cgreen{$\nu$ layers of pairs} of chiral and anti-chiral $p$-wave superconductors ($p+ip$ and $p-ip$), whose boundary supports non-chiral Majorana-Weyl modes.}
Formally, one may interpret this $\mathbbm{Z}_8$ classification from the extended version of group super-cohomology \cite{Gu1201.2648, MCheng1501.01313, WangLinGu1610.08478}
or the cobordism group \cite{Kapustin1406.7329, Freed2016}.
Yet it remains puzzling what the \emph{continuum field theories} for these fSPTs and their gauged spin TQFTs are, and which physical observables fully characterize them.
Our strategy to tackle this puzzle goes as follows.
In Sec. \ref{sec:Z2fZ2fSPT}, we define fSPT path integrals and their gauged TQFTs for all $\nu \in \mathbbm{Z}_8$ through the cobordism approach in Eq. (\ref{gfSPT-Z2-3D}).
In Sec. \ref{sec:Z2fZ2GSD}, we calculate the GSD on the $T^2$ torus which distinguishes only the odd-$\nu$ from the even-$\nu$ classes.
In Sec. \ref{sec:ZRP3}, we calculate the path integral $Z[{\mathbb{RP}}^3]$, a single datum that distinguishes all $\nu\in\mathbbm{Z}_8$ classes.
In Sec. \ref{sec:ST}, we show the $\cT^{xy}$ matrix for the ${\mathbb{Z}}_2$-gauge flux ('t Hooft line) operator
is another single datum that distinguishes $\nu\in\mathbbm{Z}_8$ classes. By computing the $\cS^{xy}$ and $\cT^{xy}$ matrices,
we propose our continuum field theories for spin TQFTs and identify their underlying fermionic topological orders through \cite{LanKongWen1507.04673},
shown in Table \ref{table:Z8-gauged}. In Sec. \ref{sec:Rokhlin} we propose an expression for the ${\mathbb{Z}}_2^f\times{\mathbb{Z}}_2$ fSPT partition function via the Rokhlin invariant.
In Sec. \ref{sec:more2+1D/3+1DsTQFT}, we study more general fSPTs and the corresponding spin TQFTs in 2+1D and 3+1D, and their link invariants.
\subsection{2+1D $ {\mathbb{Z}}_2^f \times {\mathbb{Z}}_2$ symmetric fermionic SPTs} \label{sec:Z2fZ2fSPT}
Our first non-trivial examples are the spin-TQFTs obtained by gauging the unitary ${\mathbb{Z}}_2$ part of fSPTs with ${\mathbb{Z}}_2^f \times {\mathbb{Z}}_2$ symmetry, where ${\mathbb{Z}}_2^f$ denotes the fermion number parity symmetry. The mathematical classification of such phases uses the spin bordism group:
\begin{equation}
\cred{ \Omega^{3,\text{Spin}}_{\text{tor}}(B{\mathbb{Z}}_2\cgreen{,U(1)}) \equiv \text{Hom}(\Omega^{\text{Spin}}_{3,\text{tor}}(B{\mathbb{Z}}_2),U(1)) \cong
\Omega^\text{Spin}_3(B{\mathbb{Z}}_2) \cong \mathbbm{Z}_8}
\end{equation}
appeared in \cite{Kapustin1406.7329}. \cgreen{Note that the last isomorphism is non-canonical and follows from the fact that $\Omega^\text{Spin}_3(B{\mathbb{Z}}_2)$ contains only torsion elements. }
In particular, for the class $\nu\in \mathbbm{Z}_8$,
the value of the fSPT partition function on a closed 3-manifold ${M^{3}}$ with a spin structure
$s \in \text{Spin}({M^{3}})$
and the background ${\mathbb{Z}}_2$ gauge connection $a\in H^1({M^{3}},{\mathbb{Z}}_2)$ is given by
\begin{equation}
e^{iS[a, s]}=e^{\frac{\pi i\nu}{4}\text{ABK}[\text{PD}(a), \;\; s|_{\text{PD}(a)}]}
\end{equation}
where $\text{PD}$ stands for the Poincar\'e dual.
The $\text{PD}(a)\subset {M^{3}}$ denotes a (possibly unoriented) surface\footnote{For cohomology with ${\mathbb{Z}}_2$ coefficients, it is always possible to find a smooth representative of the Poincar\'e dual.} in ${M^{3}}$ representing a class in $H_2({M^{3}},{\mathbb{Z}}_2)$ Poincar\'e dual to $a\in H^1({M^{3}},{\mathbb{Z}}_2)$.
The $s|_{\text{PD}(a)}$ is the $\text{Pin}^-$ structure on $\text{PD}(a)$ obtained by the restriction of $s$, and $\text{ABK}[\ldots]$ denotes the $\mathbbm{Z}_8$-valued Arf-Brown-Kervaire invariant of the Pin$^-$ 2-manifold $\text{PD}(a)$ (which depends only on its Pin$^-$ bordism class). Although there is no local realization of the Arf-Brown-Kervaire invariant via characteristic classes, \textit{schematically} one can write:
\begin{equation} \label{Z8-action}
S[a,s]=\frac{\pi\nu}{4}\int_{{M^{3}}}\,a\cup \text{ABK}.
\end{equation}
where
\begin{equation}
\int_{\Sigma}\text{ABK} \equiv \text{ABK}[\Sigma]
\end{equation}
for any possibly unoriented surface $\Sigma$ embedded into ${M^{3}}$. The corresponding spin-TQFT partition function reads\footnote{This can be understood as an expression of the type (\ref{eq:Phi(A)}), i.e., with the $B$ fields already integrated out.}
\begin{equation}
\label{gfSPT-Z2-3D}
Z[{M^{3}},s]=\frac{1}{2}\sum_{a\,\in H^1({M^{3}},{\mathbb{Z}}_2)}e^{iS[a,s]}.
\end{equation}
Starting from Eq.(\ref{gfSPT-Z2-3D}) we explicitly check that the resulting TQFTs for various values of $\nu \in \mathbbm{Z}_8$ are as described in Table \ref{table:Z8-gauged}.
\begin{table}
\footnotesize
\hspace{-5.05em}\begin{tabular}{ | c | c | c | c | c | c | c |}
\hline
$\nu$ & TQFT description (Local action) & $\begin{array}{c}\text{Link}\\ \text{inv.}\end{array}$ & $\text{GSD}_{T^2_\text{o}|T^2_\text{e}}$ & $Z[{\mathbb{RP}}^3]$ & $\cS^{xy}$ & $\cT^{xy}$ \\ \hline
$0$ &
$\begin{array}{c}
\text{level $2$ $BF$ theory $\cong$}\\
\text{level $K=\left(
\begin{array}{cc}
0 & 2 \\
2 & 0 \\
\end{array}
\right)$ $U(1)^2$ CS $\cong$}\\
{\mathbb{Z}}_2\text{-toric code}
\end{array}$
& Lk
&
4b $\mid$ 4b
&
1
& $\left(
\begin{array}{cccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\
\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\
\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\
\end{array}
\right)$ & $\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
\end{array}
\right)$
\\ \hline
$1$ & $\begin{array}{c}
\text{Ising}\,\times\,{p-ip}\cong \\
\text{Ising}\,\times\,\overline{\text{spin-Ising}}\cong \\
U(2)_{2,-4}\,\times\,(SO(3)_{-1}\times U(1)_1) \;{\text{CS}}
\end{array}$
& Arf
&
3f $\mid$ 3b
&
$\frac{(1+e^{\pm\frac{\pi i}{4}})}{2}$
& $\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{\sqrt{2}} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\
\end{array}
\right)
$ & $\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & e^{\frac{\pi i}{8}}
\end{array}
\right)$
\\ \hline
$2$ & level $K=\left(
\begin{array}{cc}
0 & 2 \\
2 & -1 \\
\end{array}
\right)$ $U(1)^2$ CS & Lk
&
4b $\mid$ 4b
&
$\frac{(1+e^{\pm\frac{\pi i2}{4}})}{2}$
& $\left(
\begin{array}{cccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\
\frac{1}{2} & -\frac{1}{2} & \frac{i}{2} & -\frac{i}{2} \\
\frac{1}{2} & -\frac{1}{2} & -\frac{i}{2} & \frac{i}{2} \\
\end{array}
\right)$ & $\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & e^{\frac{i \pi }{4}} & 0 \\
0 & 0 & 0 & e^{-\frac{3}{4} i \pi } \\
\end{array}
\right)$
\\ \hline
$3$ & $SU(2)_2\times SO(3)_{-1}$ CS
& Arf
&
3f $\mid$ 3b
&
$\frac{(1+e^{\pm\frac{\pi i 3}{4}})}{2}$
&
$\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{\sqrt{2}} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\
\end{array}
\right)$ & $\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & e^{\frac{3\pi i}{8}}
\end{array}
\right)$
\\ \hline
$4$ &
$\begin{array}{c}
\text{level $K=\left(
\begin{array}{cc}
0 & 2 \\
2 & 2 \\
\end{array}
\right)$ $U(1)^2$ CS $\cong$}\\
{\mathbb{Z}}_2\text{-double semions}
\end{array}$
& Lk
&
4b $\mid$ 4b
&
0
& $\left(
\begin{array}{cccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\
\frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\
\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\
\end{array}
\right)$ & $\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -i & 0 \\
0 & 0 & 0 & i \\
\end{array}
\right)$
\\ \hline
$5$ & $SU(2)_{-2}\times SO(3)_{1}$ CS
& Arf
&
3f $\mid$ 3b
&
$\frac{(1+e^{\pm\frac{\pi i 5}{4}})}{2}$
& $\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{\sqrt{2}} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\
\end{array}
\right)$ & $\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & e^{-\frac{3\pi i}{8}}
\end{array}
\right)$
\\ \hline
$6$ & level $K=\left(
\begin{array}{cc}
0 & 2 \\
2 & 1 \\
\end{array}
\right)$ $U(1)^2$ CS
& Lk
&
4b $\mid$ 4b
&
$\frac{(1+e^{\pm\frac{\pi i 6}{4}})}{2}$
&
$\left(
\begin{array}{cccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\
\frac{1}{2} & -\frac{1}{2} & -\frac{i}{2} & \frac{i}{2} \\
\frac{1}{2} & -\frac{1}{2} & \frac{i}{2} & -\frac{i}{2} \\
\end{array}
\right)$ & $\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & e^{-\frac{i \pi }{4}} & 0 \\
0 & 0 & 0 & e^{\frac{3}{4} i \pi } \\
\end{array}
\right)$
\\ \hline
$7$ & $\begin{array}{c}
\overline{\text{Ising}}\,\times\,{p+ip}\cong \\
\overline{\text{Ising}}\,\times\,{\text{spin-Ising}}\cong \\
U(2)_{-2,4}\,\times\,(SO(3)_{1}\times U(1)_{-1}) \;{\text{CS}}
\end{array}$
& Arf
&
3f $\mid$ 3b
&
$\frac{(1+e^{\pm\frac{\pi i 7}{4}})}{2}$
& $\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{\sqrt{2}} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\
\end{array}
\right)$ & $\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & e^{-\frac{\pi i}{8}}
\end{array}
\right)$
\\ \hline
\end{tabular}
\caption{Table of spin TQFTs
as fermionic ${\mathbb{Z}}_2$ gauge theories
that arise from gauging the ${\mathbb{Z}}_2$ symmetry part of ${\mathbb{Z}}_2^f\times{\mathbb{Z}}_2$-fSPTs, for the different classes $\nu\in\mathbbm{Z}_8$,
with 8 classes in total.
The first column of the table shows the $\nu\in\mathbbm{Z}_8$ class.
The second column shows the continuum TQFTs that we obtain by gauging fSPTs.
We use the description of the Ising TQFT in terms of Chern-Simons theory (CS)
$U(2)_{2,-4}\cong (SU(2)_2\times U(1)_{-4})/{\mathbb{Z}}_2$
from \cite{Seiberg:2016rsg}.
By $SO(3)_1$, we denote the spin-CS theory (see e.g. \cite{jenquin2006spin}) with
the level normalized such that the states on $T^2$ are a subset of the $SU(2)_2$ states corresponding to $SU(2)$ representations of odd dimension (1 and 3).
The third column (``Link inv.'') shows the topological invariant through which the expectation value of the system of line operators supported on a link in $S^3$ can be expressed.
The fourth column shows the GSD on $T^2$, in terms of
the spin 2-tori ${T^2_\text{o}}$ and ${T^2_\text{e}}$ with odd or even parity.
The ``b'' stands for bosons and the ``f'' for fermions.
On one hand, the Ising and the $SU(2)_2$ TQFTs contain 3 \textit{bosonic} (i.e. with $(-1)^F=1$) anyons.
On the other hand, the spin-Ising and the $SO(3)_1$ spin-CS have 1 bosonic state on $T^2$ for any even spin structure, and
have 1 fermionic ($(-1)^F=-1$) state for the odd spin structure.
The theories with $\nu=0\mod 2$ contain 4 bosonic states for any choice of the spin structure.
\cred{The fifth column shows that $Z[{\mathbb{RP}}^3]$ distinguishes all $\nu\in\mathbbm{Z}_8$ classes.}
The last two columns show
the
{reduced modular}
$\cS^{xy}$ and $\cT^{xy}$ matrices {(see main text for details)}.
We can compute $\cS^{xy}$ and $\cT^{xy}$ based on the description Eq. (\ref{gfSPT-Z2-3D}),
and find our data consistent with the spin TQFTs that we associate with them in the second column.
{
Our spin TQFTs can be identified
as fermionic topological orders through \cite{LanKongWen1507.04673},
which are denoted as
$6^F_0$ for odd $\nu$,
and $8^F_0$ or $4^B_0$ for even $\nu$.
We find the correspondence that
$\nu=0$ as $4^{B,a}_0$,
$\nu=1$ as ${\cal F}_0 \boxtimes 3^{B}_{1/2}$,
$\nu=2$ as $4^{B}_{1}$,
$\nu=3$ as ${\cal F}_0 \boxtimes 3^{B}_{3/2}$,
$\nu=4$ as $4^{B,b}_0$,
$\nu=5$ as ${\cal F}_0 \boxtimes 3^{B}_{-3/2}$,
$\nu=6$ as $4^{B}_{-1}$,
and
$\nu=7$ as ${\cal F}_0 \boxtimes 3^{B}_{-1/2}$.
}
}
\label{table:Z8-gauged}
\end{table}
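As a consistency check on the tabulated modular data, one can verify numerically that the reduced $\cS^{xy}$ and $\cT^{xy}$ matrices satisfy the standard relations $\cS^2=C$ (the charge-conjugation matrix) and $(\cS\cT)^3=e^{2\pi i c_-/8}\,\cS^2$. A minimal Python sketch for the $\nu=0$ (toric code) row of Table \ref{table:Z8-gauged}, where $C=\mathbbm{1}$ and the chiral central charge $c_-$ vanishes:

```python
import numpy as np

# Reduced modular data for the nu = 0 row (Z2 toric code) of the table.
S = 0.5 * np.array([[1, 1, 1, 1],
                    [1, 1, -1, -1],
                    [1, -1, 1, -1],
                    [1, -1, -1, 1]], dtype=complex)
T = np.diag([1, 1, 1, -1]).astype(complex)

# S^2 should be the charge-conjugation matrix (the identity here,
# since all toric-code anyons are self-conjugate).
assert np.allclose(S @ S, np.eye(4))

# (ST)^3 = e^{2 pi i c_-/8} S^2 with chiral central charge c_- = 0.
assert np.allclose(np.linalg.matrix_power(S @ T, 3), S @ S)
print("nu=0 modular relations hold")
```

The same check can be repeated for the other rows, with the appropriate charge conjugation and chiral central charge.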
\subsubsection{Ground state degeneracy (GSD): Distinguish the odd-$\nu$ and even-$\nu$ classes} \label{sec:Z2fZ2GSD}
The first step in identifying the TQFT is calculating the ground state degeneracy on $T^2$. Since we deal with a
spin-TQFT, it is necessary to specify the choice of spin structure on $T^2$. There are 4 choices, corresponding to
the periodic (P) or anti-periodic (A) boundary conditions along each of the two cycles: (P,P), (A,P), (P,A), (A,A). As we will see, the Hilbert space \cred{up to an isomorphism} only depends on the parity (the value \cred{of the Arf invariant in $\mathbbm{Z}_2$}), which is odd for (P,P) and even for (A,P), (P,A), (A,A). We will denote the corresponding spin 2-tori as ${T^2_\text{o}}$ and ${T^2_\text{e}}$. The GSD can be counted by considering the partition function on
${M^{3}}=T^3=T^2_{\text{e}}\times S^1$
or
$T^2_{\text{o}}\times S^1$
where we put either periodic or anti-periodic boundary conditions on the time circle $S^1$.
We denote their GSD as $\text{GSD}_{T^2_{\text{e}}}$ and $\text{GSD}_{T^2_{\text{o}}}$ respectively.
We find that
\begin{eqnarray}
&&\text{GSD}_{T^2} (\nu=\text{odd}) =
\left\{\begin{array}{c}
\text{GSD}_{T^2_\text{o}}= 3 \text{ (fermions)}, \\
\text{GSD}_{T^2_\text{e}}= 3 \text{ (bosons)}.
\end{array}\right.\\
&&\text{GSD}_{T^2} (\nu=\text{even}) =
\left\{\begin{array}{c}
\text{GSD}_{T^2_\text{o}}= 4 \text{ (bosons)}, \\
\text{GSD}_{T^2_\text{e}}= 4 \text{ (bosons)}.
\end{array}\right.
\end{eqnarray}
If we account for all possible spin structures, the odd-$\nu$ theories have 3 bosonic and 3 fermionic states (6 states in total), and
the even-$\nu$ theories have 4 bosonic states in total.
{We can define $\hat{{\cal S}}^{xy}$ and $\hat{{\cal T}}^{xy}$ as generators of $SL(2,{\mathbb{Z}})$, the mapping class group of $T^2$, which permute spin structures as follows:
\begin{equation}
\hat{{\cal S}}^{xy}:\begin{array}{ccc}
\text{(P,P)} & \mapsto & \text{(P,P)} \\
\text{(A,P)} & \mapsto & \text{(P,A)} \\
\text{(P,A)} & \mapsto & \text{(A,P)} \\
\text{(A,A)} & \mapsto & \text{(A,A)}
\end{array}
\qquad
\hat{{\cal T}}^{xy}:\begin{array}{ccc}
\text{(P,P)} & \mapsto & \text{(P,P)} \\
\text{(A,P)} & \mapsto & \text{(A,A)} \\
\text{(P,A)} & \mapsto & \text{(P,A)} \\
\text{(A,A)} & \mapsto & \text{(A,P)}.
\end{array}
\end{equation}
So in general the corresponding quantum operators act between Hilbert spaces for different spin 2-tori $T^2$; only the unique odd spin structure, (P,P), is invariant. However, in our case the Hilbert space on $T^2$ with spin structure $s$ has the form ${\cal H}_{T^2_s}=\tilde{{\cal H}}_{T^2}\otimes {{\cal H}}^{\text{1-dim}}_s$. Here $\tilde{{\cal H}}_{T^2}$ is an $s$-independent purely bosonic Hilbert space that is 3 (4)-dimensional for odd (even) $\nu$, and ${{\cal H}}^{\text{1-dim}}_s$ is a \textit{one-dimensional} Hilbert space (of the spin-Ising $\cong p+ip$ superconductor, or the spin-$SO(3)_1$ Chern-Simons theory, or their conjugates). The reduced modular $\cS^{xy}$ and $\cT^{xy}$ matrices in Table \ref{table:Z8-gauged} are representations of the $\hat{{\cal S}}^{xy}$ and $\hat{{\cal T}}^{xy}$ elements acting on the reduced Hilbert space $\tilde{{\cal H}}_{T^2}$.}
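The statement that these permutations preserve the parity of the spin structure, with (P,P) the unique odd fixed point, can be checked mechanically; a small Python sketch encoding the two permutations above:

```python
# Action of S and T on the four spin structures of T^2, read off from the
# permutations above, together with their parity (the Arf invariant).
S_perm = {"PP": "PP", "AP": "PA", "PA": "AP", "AA": "AA"}
T_perm = {"PP": "PP", "AP": "AA", "PA": "PA", "AA": "AP"}
parity = {"PP": "odd", "AP": "even", "PA": "even", "AA": "even"}

for g in (S_perm, T_perm):
    for s, s_image in g.items():
        # The parity of the spin structure is invariant under both generators.
        assert parity[s] == parity[s_image]

# (P,P) is the unique odd spin structure, fixed by both generators;
# the three even spin structures are permuted among themselves.
assert S_perm["PP"] == "PP" and T_perm["PP"] == "PP"
print("parity is preserved; (P,P) is fixed")
```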
\subsubsection{$Z[{\mathbb{RP}}^3]$: Distinguish $\nu\in\mathbbm{Z}_8$ classes} \label{sec:ZRP3}
The easiest way to distinguish different TQFTs with the same number of states (say, the 3+3 states for the odd-$\nu$ classes and the 4 states for the even-$\nu$ classes) is to calculate the partition function on ${\mathbb{RP}}^3$:
\begin{equation}
Z[{\mathbb{RP}}^3]=\frac{1}{2}\sum_{a\in H^1(\cred{{\mathbb{RP}}^3},{\mathbb{Z}}_2)\cong {\mathbb{Z}}_2}e^{\frac{\pi i\nu}{4}\text{ABK}[\text{PD}(a)]}=\frac{1}{2}(1+e^{\frac{\pi i\nu}{4}\text{ABK}[{\mathbb{RP}}^2\subset {\mathbb{RP}}^3]})=\frac{1}{2}(1+e^{\pm\frac{\pi i\nu}{4}})
\end{equation}
where $\pm$ corresponds to the choice of spin structure on ${\mathbb{RP}}^3$.
One compares it with the expression via $\cS^{xy}$ and $\cT^{xy}$ matrices:
\begin{equation}
Z[{\mathbb{RP}}^3]=(\cS^{xy} (\cT^{xy})^2 \cS^{xy})_{0,0}
\end{equation}
based on the $(0,0)$ component of the matrix on the right-hand side.
This gives the precise map between the gauged fSPTs for different values of $\nu \in\mathbbm{Z}_8$ and the known fermionic topological orders \cite{LanKongWen1507.04673}, as listed in our Table \ref{table:Z8-gauged}. To summarize, we show that $Z[{\mathbb{RP}}^3]$ is a single simple datum that distinguishes the $\nu\in\mathbbm{Z}_8$ classes of fSPTs.
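This consistency condition is easy to verify numerically. A minimal Python sketch for the $\nu=1$ row of Table \ref{table:Z8-gauged}, comparing $(\cS^{xy}(\cT^{xy})^2\cS^{xy})_{0,0}$ with $\frac{1}{2}(1+e^{\frac{\pi i\nu}{4}})$ for the ``$+$'' choice of spin structure:

```python
import numpy as np

nu = 1
r2 = 1 / np.sqrt(2)
# Reduced modular data for the nu = 1 row of the table.
S = np.array([[0.5, 0.5, r2],
              [0.5, 0.5, -r2],
              [r2, -r2, 0]], dtype=complex)
T = np.diag([1, -1, np.exp(1j * np.pi / 8)])

lhs = (S @ T @ T @ S)[0, 0]                       # (S T^2 S)_{0,0}
rhs = 0.5 * (1 + np.exp(1j * np.pi * nu / 4))     # Z[RP^3], "+" spin structure
assert np.isclose(lhs, rhs)
print("Z[RP^3] =", lhs)
```

The same comparison goes through for the other rows of the table, with the appropriate $\cS^{xy}$, $\cT^{xy}$, and $\nu$.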
\subsubsection{$\cS^{xy}$ and $\cT^{xy}$: The mutual- and self-exchange braiding statistics for $\nu\in\mathbbm{Z}_8$ classes} \label{sec:ST}
Another way is to directly calculate
the modular data, the $\cS^{xy}$ and $\cT^{xy}$ matrices, starting from the description Eq. (\ref{gfSPT-Z2-3D}) and computing
the partition function with line operators supported on the corresponding links. We recall that:
\begin{itemize}
\item A Hopf link for $\cS^{xy}_{mn}$ between two line operators of anyons ($m,n$) encodes the mutual-braiding statistics data between two anyons.
\cgreen{For Abelian anyons, $\cS^{xy}_{mn} \propto e^{\hspace{1pt}\mathrm{i}\hspace{1pt} \theta_{mn}}$ encodes the Abelian Berry statistical phase $e^{\hspace{1pt}\mathrm{i}\hspace{1pt} \theta_{mn}}$ of anyons ($m,n$), and, up to an overall factor, is
related to the total quantum dimensions of all anyons.}
\item A framed unknot for $\cT^{xy}_{nn}$ of a line operator of an anyon ($n$) encodes the self exchange-statistics or equivalently \cred{the spin statistics (also called the topological spin)} of the anyon.
\end{itemize}
As in the bosonic case,
the possible nontrivial line operators are the Wilson loop $\exp(\pi i\int_{\gamma'} a)$ and the 't Hooft loop, which imposes
the condition $da=\delta^\perp(\gamma)$. \cred{Another possibility is a line defect with a non-trivial spin-structure on its complement}. In particular,
consider the case when we have Wilson loop operators supported on the connected loops $\gamma'_I\subset S^3$, and 't Hooft operators supported on the connected loops
\cred{$\gamma_J\subset S^3$}.
{
The Wilson loop $\gamma'_I$ is the ${\mathbb{Z}}_2$-charge loop, while the 't Hooft loop $\gamma_J$ is the ${\mathbb{Z}}_2$-gauge flux loop (also called the vison loop in condensed matter).} We consider the loop operators:
\begin{equation}
W[\{\gamma'_I\},\{\gamma_J\}]=\prod_{I} e^{\pi i\int_{\gamma'_I} a}\,
\delta(da-\sum_{J}\delta^\perp(\gamma_J)),
\end{equation}
then its expectation value in the path integral gives
\begin{equation}
\langle W[\{\gamma'_I\},\{\gamma_J\}] \rangle \equiv
\sum_{a\,\in H^1({M^{3}},{\mathbb{Z}}_2)}e^{iS[a,s]}W[\{\gamma'_I\},\{\gamma_J\}]=
e^{\sum_{I,J}\pi i\,\text{Lk}(\gamma'_I,\gamma_J)}
e^{\frac{\pi i\nu}{4}\text{ABK}(\Sigma)}.
\end{equation}
Here $\Sigma$ is such that
$\partial \Sigma= \sqcup _J\gamma_J$
and the framing on the link components $\gamma_J$ is induced by $\Sigma$.
$\text{ABK}(\Sigma)$ is the Arf-Brown-Kervaire invariant of the embedded surface $\Sigma\subset S^3$ with boundary \cite{kirby2004local}. Note that it can be expressed via the Arf invariant of the \textit{unframed} link $\{\gamma_J\}$ as follows
\footnote{Note that both the ABK and the Arf invariant are only defined for \textit{proper} links, that is, links such that each component evenly links the rest. They can also be naturally extended to all links, taking values in $\mathbbm{Z}_8^*\equiv \mathbbm{Z}_8 \sqcup \infty$ instead \cite{kirby2004local}, that is, $e^{\frac{\pi i}{4}\text{ABK}}=0$ (equivalently, $(-1)^\text{Arf}=0$) for improper links. This means that $\langle W \rangle=0$ in this case.
}
\footnote{The Arf invariant of a link can be expressed via Arf invariants of individual components\cite{kirby2004local}:
\begin{equation}
\text{Arf}[\{\gamma_J\}]=\sum_I\text{Arf}[\gamma_I]+
\frac{1}{4}\sum_{I<J}\big(\lambda(\gamma_I,\gamma_J)+\text{Lk}(\gamma_I,\gamma_J)\big)+\sum_{I,J,K}\bar{\mu}(\gamma_I,\gamma_J,\gamma_K),
\end{equation}
where $\lambda$ is the Sato-Levine linking invariant and $\bar{\mu}$ is the Milnor triple linking number.
}:
\begin{equation} \label{eq:ABKSigma}
\text{ABK}[\Sigma]=4\text{Arf}[\{\gamma_J\}]+\frac{1}{2}\sum_{I,J}\text{Lk}(\gamma_I,\gamma_J).
\end{equation}
Therefore we have:
\begin{equation}
\langle W[\{\gamma'_I\},\{\gamma_J\}] \rangle
=
(-1)^{\sum_{I,J}\text{Lk}(\gamma'_I,\gamma_J)+\nu\text{Arf}[\{\gamma_J\}]}
\cdot e^{\frac{\pi i\nu}{8}\sum_{I,J}\text{Lk}(\gamma_I,\gamma_J)}.
\label{W-arf}
\end{equation}
Note that when $\nu$ is even, the dependence on the $\text{Arf}$ invariant goes away, and the expectation value becomes as in Eq. (\ref{Ab-CS-linking}) with the level matrix given in Table \ref{table:Z8-gauged}. Note that \cgreen{the} trefoil \cred{knot} provides an example with non-zero Arf invariant; see Fig. \ref{fig:trefoil}.
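To make this concrete, one can evaluate Eq. (\ref{W-arf}) for a single flux line with no Wilson lines and with Seifert framing, so that the self-linking term vanishes and $\langle W\rangle=(-1)^{\nu\,\text{Arf}}$. A small Python sketch, using $\text{Arf}=0$ for the unknot and $\text{Arf}=1$ for the trefoil:

```python
import numpy as np

def W_flux_knot(nu, arf, lk=0):
    """Evaluate Eq. (W-arf) for a single flux line with no Wilson lines:
    <W> = (-1)^(nu*Arf) * exp(i*pi*nu*Lk/8)."""
    return (-1) ** (nu * arf) * np.exp(1j * np.pi * nu * lk / 8)

# With Seifert framing the self-linking Lk vanishes, so <W> = (-1)^(nu*Arf).
for nu in range(8):
    unknot = W_flux_knot(nu, arf=0)   # Arf[unknot] = 0
    trefoil = W_flux_knot(nu, arf=1)  # Arf[trefoil] = 1
    assert np.isclose(unknot, 1)
    assert np.isclose(trefoil, -1 if nu % 2 else 1)
print("trefoil detects the parity of nu")
```

This reproduces the statement in Fig. \ref{fig:trefoil}: a trefoil-shaped flux line yields the phase $(-1)$ for odd $\nu$ and $(+1)$ for even $\nu$.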
\begin{figure}[!h]
\centering
\includegraphics[scale=1]{trefoil}
\caption{Trefoil knot. It has a non-trivial Arf invariant (unlike the unknot) and can be detected by the \cred{2+1D fermionic (but not bosonic)} gauged SPT with ${\mathbb{Z}}_2$ symmetry.
\cred{If the 't Hooft ${\mathbb{Z}}_2$ flux line (\cgreen{or} the vison in condensed matter terminology), as a worldline of the $\sigma$ anyon, forms \cgreen{a} knot,
then the trefoil knot gives a statistical Berry phase $(-1)$ for the odd-$\nu$ \cgreen{classes}, while it gives a trivial $(+1)$ for the even-$\nu$ \cgreen{classes}.
So the trefoil knot \cgreen{distinguishes odd $\nu$ from even $\nu$} for the $\nu \in \mathbbm{Z}_8$ classes of ${\mathbb{Z}}_2^f\times {\mathbb{Z}}_2$ fSPTs.}
}
\label{fig:trefoil}
\end{figure}
Remember that the 't Hooft loop $\gamma_J$ is equivalent to the ${\mathbb{Z}}_2$-gauge flux vison loop,
where we gauge the ${\mathbb{Z}}_2$-symmetry of ${\mathbb{Z}}_2^f \times {\mathbb{Z}}_2$-fSPT.
{
We anticipate such a 't Hooft loop $\gamma_J$ as the ${\mathbb{Z}}_2$-gauge flux may be identified with the sigma anyon $\sigma$ \cred{either in
the Ising TQFT or in the $SU(2)_2$} Chern-Simons (CS) theory.
}
With this in mind, to reproduce the non-trivial element of the $\cT^{xy}$-matrix,
we can take the link to be a framed unknot of the 't Hooft loop $\gamma_J$.
Thus we denote $\{\gamma_J\}=\{\circlearrowleft_p\}$, where $\circlearrowleft_p$ is an oriented unknot with framing given by an arbitrary integer $p$.
The framing $p$ means that the line is $2\pi$-twisted $p$ times, i.e., it is Dehn twisted $p$ times and then glued into a closed line.
We derive that
\begin{equation} \label{eq:WTp}
\cred{
\langle W[\emptyset,\{\circlearrowleft_p\}] \rangle
= e^{\frac{\pi i\nu}{8}p}
}
\end{equation}
since $\text{Arf}[\circlearrowleft]=0$.
{
This reproduces an $e^{\frac{\pi i\nu}{8}}$ element of the $\cT^{xy}$-matrix, with a power $p$.
Remember that the $\cT^{xy}$ matrix represents the \emph{self-statistics} and, equivalently,
the \emph{spin} (\cred{also known} as the \emph{topological spin}) of the quasi-particle.
The result $e^{\frac{\pi i\nu}{8}p}$
confirms post-factum our earlier prediction that the
${\mathbb{Z}}_2$-gauge flux 't Hooft line operator should be identified with the same line operator for the sigma anyon $\sigma$.
Thus, we further establish the correspondence between the gauged fSPT Eq.(\ref{gfSPT-Z2-3D})
and the TQFTs in the second column of Table \ref{table:Z8-gauged} for all $\nu \in \mathbbm{Z}_8$ classes.
}
\cred{When $\nu$ is odd}, one can similarly confirm that \cred{both} the Ising and the $SU(2)_2$ anyons $\psi$ can be realized by Wilson lines, and use the general expression (\ref{W-arf}) to reproduce the other elements\footnote{In order to calculate the elements of the $\cS^{xy}$ matrix it is important to fix the normalization of the Wilson and flux lines. In particular, the flux line should carry an extra $\sqrt{2}$ factor, which is easy to fix by considering a pair of flux lines embedded in the obvious way into $S^2\times S^1$ and requiring that the corresponding path integral is $1$.
}
of $\cT^{xy}$ and $\cS^{xy}$. For example, the fact that the diagonal element of $\cS^{xy}$ for flux lines is zero follows from the fact that the Hopf link is not a proper link, so that $(-1)^\text{Arf}=0$.
Note that the appearance of the Arf invariant for the odd $\nu$ is consistent with the following two facts:
(1) The expectation value of Wilson lines supported on a link ${\cal L}$ in the fundamental representation of the $SU(2)_2$ CS theory
is given by the Jones polynomial of the link, $J[{\cal L}]\in {\mathbb{Z}}[q^{1/2},q^{-1/2}]$, evaluated at $q=e^{\frac{2\pi i}{2+2}}=i$ \cite{Witten:1988hf}\cgreen{,
where the Jones polynomial is an element of ${\mathbb{Z}}[q^{1/2},q^{-1/2}]$, the space of Laurent polynomials in $q^{1/2}$ with integer coefficients.
}
(2) The value of the Jones polynomial (up to a simple normalization related factor) at $q=i$ is given by the Arf invariant $J[{\cal L}]|_{q=i}\propto (-1)^{\text{Arf}[{\cal L}]}$ \cite{jones1987hecke,murakami1986}.
To summarize, we show that the modular data $\cS^{xy}$ and $\cT^{xy}$ computed in our Table \ref{table:Z8-gauged}
also distinguish $\nu\in{\mathbb{Z}}_8$ classes of fSPTs.
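Fact (2) above can be verified directly for the trefoil. With the normalization $J[\text{unknot}]=1$, the Jones polynomial of the (left-handed) trefoil is $J=-q^{-4}+q^{-3}+q^{-1}$, which at $q=i$ evaluates to $-1=(-1)^{\text{Arf}}$, consistent with $\text{Arf}[\text{trefoil}]=1$. A minimal Python sketch:

```python
# Jones polynomial of the (left-handed) trefoil: J(q) = -q^{-4} + q^{-3} + q^{-1},
# normalized so that J[unknot] = 1.
def jones_trefoil(q):
    return -q**-4 + q**-3 + q**-1

q = 1j  # q = i, the SU(2)_2 point
value = jones_trefoil(q)

# Arf[trefoil] = 1, so J|_{q=i} should equal (-1)^Arf = -1.
assert abs(value - (-1)) < 1e-9
print("J[trefoil](i) =", value)
```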
\subsubsection{{Fermionic Topological Superconductor and Rokhlin invariant}} \label{sec:Rokhlin}
We explore further the partition function of ${\mathbb{Z}}_2^f\times {\mathbb{Z}}_2$ fSPTs, namely the ${\mathbb{Z}}_2$-symmetric fermionic Topological Superconductors.
First, let us note that the partition function of the $SU(2)_2$ Chern-Simons theory (CS) on a closed 3-manifold can be expressed via the Rokhlin invariant \cite{kirby19913}:
\begin{equation}
Z_{SU(2)_2}[{M^{3}}]=\frac{1}{2}\sum_{s\in \text{Spin}({M^{3}})}e^{-\frac{3\pi i}{8}\mu({M^{3}},s)}
\end{equation}
where the Rokhlin invariant $\mu ({M^{3}},s)$ of a 3-manifold ${M^{3}}$ equipped with the spin-structure $s\in \text{Spin}({M^{3}})$ is {defined} as:
\begin{equation}
\mu ({M^{3}},s)=\sigma({M^{4}})\mod 16.
\end{equation}
The $\sigma({M^{4}})$ is the signature of any spin 4-manifold bounded by ${M^{3}}$, such that the spin structure on ${M^{3}}$ is induced by the spin structure on ${M^{4}}$. Similarly, for its spin version, that is, the $SO(3)_1$ spin CS:
\begin{equation}
Z_{SO(3)_1}[{M^{3}},s]=e^{-\frac{3\pi i}{8}\mu({M^{3}},s)}.
\end{equation}
Combining those together, we have:
\begin{multline}
Z_{SU(2)_2\times SO(3)_{-1}}[{M^{3}},s]=\frac{1}{2}\sum_{s'\in \text{Spin}({M^{3}})}e^{-\frac{3\pi i}{8}\mu({M^{3}},s')+\frac{3\pi i}{8}\mu({M^{3}},s)}=\\ \frac{1}{2}\sum_{a\in H^1({M^{3}},{\mathbb{Z}}_2)}
e^{-\frac{3\pi i}{8}(\mu({M^{3}},s+a)-\mu({M^{3}},s))}
\end{multline}
where we used the fact that the spin structures form an affine space over $H^1({M^{3}},{\mathbb{Z}}_2)$. Comparing it with (\ref{gfSPT-Z2-3D})
at $\nu=3$ for the ${SU(2)_2\times SO(3)_{-1}}$ Chern-Simons theory,
this suggests that\footnote{This should follow from the following formula \cite{guillou62extension}
\begin{equation}
\mu({M^{3}},s)=\sigma({M^{4}})-(\text{PD}(w_2)\cdot\text{PD}(w_2))+2\text{ABK}[\text{PD}(w_2)]
\end{equation}
where ${M^{4}}$ is \textit{any} (not necessarily spin) 4-manifold bounded by ${M^{3}}$, and $w_2$ is \cgreen{the} \cred{relative} Stiefel-Whitney class.
}
\begin{equation} \label{eq:RokhlinNew}
\mu({M^{3}},s)-\mu({M^{3}},s+a) = 2\text{ABK}[\text{PD}(a),s|_{\text{PD}(a)}] \mod 16
\end{equation}
and that the partition function of fSPT is given by
\begin{equation} \label{eq:RokhlinNew2}
Z_{\text{fSPT}_\nu}[{M^{3}},s,a]=e^{\frac{\pi i\nu}{8}(\mu({M^{3}},s)-\mu({M^{3}},s+a))}.
\end{equation}
The fact that it is a cobordism invariant can be understood as follows.
Let ${M^{3}}$ and ${M^{3}}'$ be 3-manifolds equipped with spin-structures $s,s'$ and ${\mathbb{Z}}_2$ gauge fields $a,a'$ (that is ${\mathbb{Z}}_2$ principal bundles over ${M^{3}}$ and ${M^{3}}'$, or equivalently maps ${M^{3}},{M^{3}}'\rightarrow B{\mathbb{Z}}_2$ ).
Suppose
these spin 3-manifolds with ${\mathbb{Z}}_2$ gauge bundles
represent the same class in $\Omega^\text{Spin}_3(B{\mathbb{Z}}_2)$.
Then there exists a 4-manifold ${M^{4}}$ equipped with spin structure $s_4$ and ${\mathbb{Z}}_2$ gauge field $a_4\in H^1({M^{4}},{\mathbb{Z}}_2)$ such that $\partial {M^{4}}={M^{3}}' \sqcup (-{M^{3}})$ and $s_4|_{{M^{3}}}=s$, $s_4|_{{M^{3}}'}=s'$, $a_4|_{{M^{3}}}=a$, $a_4|_{{M^{3}}'}=a'$. It follows that $(s_4+a_4)|_{{M^{3}}}=s+a$, $(s_4+a_4)|_{{M^{3}}'}=s'+a'$. Therefore, by the definition of Rokhlin invariant
\begin{equation}
\mu({M^{3}},s)-\mu({M^{3}}',s')=\sigma({M^{4}}) \mod 16
\end{equation}
and
\begin{equation}
\mu({M^{3}},s+a)-\mu({M^{3}}',s'+a')=\sigma({M^{4}}) \mod 16,
\end{equation}
and therefore
\begin{equation}
Z_{\text{fSPT}_\nu}[{M^{3}},s,a]=Z_{\text{fSPT}_\nu}[{M^{3}}',s',a'].
\end{equation}
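Explicitly, subtracting the two relations above eliminates the signature of the interpolating 4-manifold,
\begin{equation}
\big(\mu({M^{3}},s)-\mu({M^{3}},s+a)\big)-\big(\mu({M^{3}}',s')-\mu({M^{3}}',s'+a')\big)=0 \mod 16,
\end{equation}
so the exponents in (\ref{eq:RokhlinNew2}) evaluated on the two backgrounds differ by an integer multiple of $16\cdot\frac{\pi i \nu}{8}=2\pi i \nu$, which leaves the exponential unchanged for integer $\nu$.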
\subsection{Other examples of 2+1D/3+1D spin-TQFTs and ${\mathbb{Z}}_2^f\times ({\mathbb{Z}}_2)^n$ fermionic SPTs: \\
Sato-Levine invariant and more} \label{sec:more2+1D/3+1DsTQFT}
In Table \ref{table:fSPT-gauged}, we propose other examples of spin-TQFTs with actions (formally written) similar to (\ref{Z8-action}). The idea is that if we have a collection of ${\mathbb{Z}}_2$ gauge fields $a_i\in H^1({M^{d}},{\mathbb{Z}}_2)$, there is a lower-dimensional fSPT with time-reversal symmetry (with $T^2=1$) living on the intersection of domain walls (Poincar\'e dual to $a_i$, which is, in general, non-orientable). The 1-cocycle $\eta$ is \textit{formally} defined such that $(-1)^{\int_{S^1}\eta}=\pm 1$ depending on the choice of spin structure on $S^1$. It can be interpreted as the action of the non-trivial 0+1D fSPT with no unitary global symmetry, that is the theory of one free fermion (the 0+1D fSPT partition function is $1$ for the choice of \cred{anti-periodic} boundary conditions on the fermion, and $-1$ for periodic boundary conditions).
The corresponding link invariants are similar to those that appeared in bosonic theories, but instead of counting points of intersection between loops/surfaces/Seifert (hyper)surfaces, we count $\Omega^{\text{Pin}^-}_{1,2}(\mathrm{pt})$ bordism classes of the 1- and 2-manifolds that appear in the intersection.
In the mathematical literature, the result of such a counting is sometimes referred to as a ``framed intersection''.
Note that in one dimension,
the $\text{Pin}^-$ bordism group is isomorphic to the stable framed bordism group: $\Omega^{\text{Pin}^-}_1\cong\Omega^{\text{Spin}}_1\cong \Omega^{\text{fr}}_1\cong\pi_1^s\cong \mathbbm{Z}_2$, which, in turn, by the Pontryagin-Thom construction is isomorphic to the stable homotopy group of spheres. The $\text{Pin}^-$ structure on the intersection is induced by the Spin structure on the ambient space together with the framing on its normal bundle given by the vectors tangent to the intersecting surfaces \cite{kirby1990pin}.
\begin{table}[t!]
\footnotesize
\begin{center}
\begin{tabular}{| c | c|c | c | }
\hline
Dim & Symmetry & $\begin{array}{c} \text{Action} \\ (\text{Formal notation}) \end{array}$ & $\begin{array}{c}\text{Link}\\ \text{invariant}\end{array}$ \\ \hline\hline
2+1D & ${\mathbb{Z}}_2^f\times {\mathbb{Z}}_2$ &
$\frac{\pi}{4}\int a\cup \text{ABK}$ & Arf invariant of a knot/link
\\ \hline
2+1D & ${\mathbb{Z}}_2^f\times ({\mathbb{Z}}_2)^2$ &
${\pi}\int a_1\cup a_2 \cup \eta$
& $\begin{array}{c}
\text{Sato-Levine invariant\,\cite{sato1984cobordisms}}:\\
\gamma_1,\gamma_2\mapsto\text{Framed bordism class} \\
\text{[}\Sigma_1 \cap \Sigma_2 \text{]} \in \pi_1^s\cong \mathbbm{Z}_2
\end{array}$
\\ \hline \hline
3+1D & ${\mathbb{Z}}_2^f\times ({\mathbb{Z}}_2)^2$ (?)&
$\frac{\pi}{4}\int a_1\cup a_2 \cup \text{ABK}$
& $\begin{array}{c} \Sigma_1,\Sigma_2 \mapsto \\
\text{ABK}[{\cal V}_1 \cap {\cal V}_2] \;\;\;
(\partial{\cal V}_I=\Sigma_I)
\end{array}$
\\ \hline
3+1D & ${\mathbb{Z}}_2^f\times ({\mathbb{Z}}_2)^2$ (?)&
$\pi\int a_1\cup a_2 \cup a_2\cup \eta$
&
$\begin{array}{c}
\text{Dlk}[\Sigma_1,\Sigma_2]=\\
\text{Framed bordism class} \\
\text{[}\Sigma_1 \cap {\cal V}_2 \text{]} \in \pi_1^s\cong \mathbbm{Z}_2
\;\;\text{\cite{sato1984cobordisms, carter2008link}}
\end{array}$
\\ \hline
3+1D & ${\mathbb{Z}}_2^f\times ({\mathbb{Z}}_2)^3$ &
$\pi\int a_1\cup a_2 \cup a_3\cup \eta$
& $\begin{array}{c}
\Sigma_1,\Sigma_2,\Sigma_3\mapsto\text{Framed bordism class} \\
\text{[}{\cal V}_1 \cap {\cal V}_2 \cap {\cal V}_3\text{]}+\cred{[\ldots]} \\
\in \pi_1^s\cong \mathbbm{Z}_2
\end{array}$
\\ \hline
\end{tabular}
\end{center}
\caption{Table of spin TQFTs and the corresponding link invariants.
{As before, $\text{ABK}(\Sigma)$ is the Arf (Arf-Brown-Kervaire) invariant of a Spin (Pin$^-$) surface $\Sigma$.
The 1-cocycle $\eta$ is \textit{formally} defined by the rule $(-1)^{\int_{S^1}\eta}=\pm 1$ with the dependence on the spin structure choice on $S^1$. }
The notation $\pi_1^s (\cong \Omega_1^{\text{Pin}^-}(\mathrm{pt}))$ stands for the stable framed bordism group of 1-manifolds.
Here $\text{Dlk}[\Sigma_1,\Sigma_2]$ stands for the double linking (Dlk) of two surfaces $\Sigma_1$ and $\Sigma_2$, given
in Refs.~\cite{sato1984cobordisms, carter2008link}.
It can detect, for example, two linked surfaces obtained as a twist-spun Hopf link.
Here
the ${[\ldots]}$ means extra terms similar to the ones in Eq.(\ref{triple-linking}) that ensure invariance under the choice of the three Seifert hypersurfaces.
Note that from the point of view of bordism classification of fSPTs,
it is not surprising that the emerging link invariants are \cred{cobordism invariants} of links.
{Here the formal actions in Table \ref{table:fSPT-gauged} are consistent topological invariants that we merely propose
as candidates for detecting potential SPT states. However,
there are further obstructions \cite{KapustinThorngren1701.08264,WangGu1703.10937} to defining SPTs on arbitrary closed manifolds
through these topological invariants.
Indeed, some of the topological invariants above (e.g. the third and fourth rows, marked with ``?'') do not correspond to any SPTs; we report
the details in a companion work \cite{to-appear}.
} }
\label{table:fSPT-gauged}
\end{table}
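As a concrete handle on the first row of Table \ref{table:fSPT-gauged}, the Arf invariant of a knot can be computed from a Seifert matrix $V$ via the quadratic refinement $q(x)=x^{T}Vx \bmod 2$ of the mod-2 intersection form $V-V^{T}$. Below is a minimal sketch, restricted (as an assumption) to $2\times 2$ Seifert matrices whose standard basis is already symplectic; the trefoil and figure-eight Seifert matrices are the standard genus-1 ones.

```python
import numpy as np

def arf_from_seifert(V):
    """Arf invariant of a knot from a 2g x 2g Seifert matrix V.
    Assumes the basis pairs (e_{2i}, e_{2i+1}) are symplectic for the
    mod-2 intersection form V - V^T (true for the genus-1 examples below).
    q(x) = x^T V x mod 2 is the quadratic refinement;
    Arf = sum_i q(a_i) q(b_i) mod 2."""
    V = np.asarray(V, dtype=int) % 2
    n = V.shape[0]
    q = lambda x: int(x @ V @ x) % 2
    basis = np.eye(n, dtype=int)
    arf = 0
    for i in range(n // 2):
        arf ^= q(basis[2 * i]) & q(basis[2 * i + 1])
    return arf

trefoil = [[-1, 1], [0, -1]]   # standard genus-1 Seifert matrix
figure8 = [[ 1, 1], [0, -1]]
print(arf_from_seifert(trefoil), arf_from_seifert(figure8))  # 1 1
```

Both the trefoil and the figure-eight knot have Arf invariant 1 and are therefore detected by the $\frac{\pi}{4}\int a\cup \text{ABK}$ theory, while a matrix with trivial quadratic form gives 0.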
To clarify, consider for example the case with action $\pi \int a_1\cup a_2\cup \eta$ (second line in Table \ref{table:fSPT-gauged}).
More precisely, the fSPT partition function on a 3-manifold ${M^{3}}$ is given by
\begin{equation}
e^{iS[a_1,a_2]}=(-1)^{\int_{\text{PD}(a_1)\cap\text{PD}(a_2)}\eta}\equiv
\prod_{\text{circles in } \text{PD}(a_1)\cap\text{PD}(a_2)}\left\{\begin{array}{cl}
+1, & \text{anti-periodic},\\
-1, & \text{periodic},
\end{array}
\text{ spin structure on } S^1
\right.
\end{equation}
\cred{The circle with \cgreen{the} anti-periodic boundary condition on fermions is spin-bordant to an empty set,
since it can be realized as a boundary of \cgreen{a} disk. So \cgreen{the partition function of the non-trivial $0+1$D fSPT has value 1 for it}.
The circle with \cgreen{the} periodic boundary condition forms \cgreen{the} generator of the spin-bordism group, and \cgreen{the partition function of the non-trivial fSPT} has value $-1$.}
Note that induced spin-structures on circles embedded into ${M^{3}}$ can be understood as their framings (trivialization of the tangent bundle) modulo two \cite{kirby1990pin}. If one chooses framing on ${M^{3}}$ compatible with spin structure, the framing on the circle that appears at the intersection of two surfaces is then determined by the two normal vectors tangent to the surfaces. Intuitively the framing on $S^1$ is given by how many times the ``cross'' of intersecting surfaces winds when one goes around the loop.
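To make the last statement concrete, here is a small sketch (our own illustration, not taken from a specific reference) that extracts the mod-2 winding of the frame vector along an intersection circle from its sampled angles, and the resulting $(-1)^{\text{framing}}$ factor:

```python
import numpy as np

def frame_winding(angles):
    """Integer winding number of a frame along a closed loop, from the
    angles (radians) of the frame vector at consecutive sample points."""
    diffs = np.diff(np.append(angles, angles[0]))
    diffs = (diffs + np.pi) % (2 * np.pi) - np.pi  # wrap each step to [-pi, pi)
    return int(round(diffs.sum() / (2 * np.pi)))

def spin_factor(angles):
    """(-1)^framing contributed by one circle, with framing taken mod 2."""
    return (-1) ** (frame_winding(angles) % 2)

theta = 2 * np.pi * np.arange(200) / 200
print(spin_factor(theta), spin_factor(0 * theta))  # -1 1 (one full twist vs. none)
```

A frame that winds once around the loop contributes $-1$, while a constant frame contributes $+1$; only the parity of the winding matters, matching the mod-2 nature of the induced spin structure.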
Physically this 2+1D fSPT can be constructed by putting a non-trivial 0+1D fSPT (which are classified by $\mathbbm{Z}_2$) state, that is a state with one fermion, on the intersections of pairs domain walls for discrete gauge fields $a_{1,2}\in H^1({M^{3}},{\mathbb{Z}}_2)$. The partition function of the corresponding spin-TQFT then can be written as follows:
\begin{equation}
Z=\frac{1}{4} \sum_{a_1,a_2\in H^1({M^{3}},{\mathbb{Z}}_2)}e^{iS[a_1,a_2]}
\end{equation}
Now let us consider flux line operators ($W[\gamma_I]\propto \,\delta(da_I-\delta^\perp(\gamma_I))$) in this theory supported on a two-component semi-boundary link $\{\gamma_1,\gamma_2\}$ in $S^3$.
A ``semi-boundary'' link is, by definition, a link for which one can choose Seifert surfaces $\Sigma_I$ (\cred{which satisfy $\partial \Sigma_I =\gamma_I $}) such that $\Sigma_I\cap \gamma_J=0$ for $I\neq J$.
\cgreen{Note that semi-boundary links should be distinguished from boundary links, the links that satisfy a stronger condition $\Sigma_I \cap \Sigma_J = 0$ for $I\neq J$.}
Then\footnote{The extra $\exp(\ldots)$ factors are gauge-invariant completions of line operators, similar to the ones in Sec. \ref{sec:AAdA}.}
\begin{equation}
W[\gamma_1,\gamma_2]=\prod_I \delta(da_I-\gamma_I)
e^{\pi i\sum_J\epsilon^{IJ} \int_{\Sigma_I}a^J\cup \eta }
\end{equation}
\begin{equation}
\langle W[\gamma_1,\gamma_2]\rangle
=e^{\pi i \int_{\Sigma_1\cap \Sigma_2 }\eta}=
\prod_{\text{circles in } \Sigma_1\cap \Sigma_2} (-1)^\text{framing}\equiv (-1)^{\mathrm{SL}(\gamma_1,\gamma_2)}
\end{equation}
where the framing on the connected components of $\Sigma_1\cap \Sigma_2\subset S^3$ is determined by the normal vectors in the direction of $\Sigma_{1,2}$. This invariant
$\mathrm{SL}(\gamma_1,\gamma_2)\in\pi^s_1\cong\mathbbm{Z}_2$
is known as the (stable) Sato-Levine linking invariant of semi-boundary link \cite{sato1984cobordisms}. It can be used to detect some non-trivial links for which the usual linking number is zero.
The simplest example of a 2-component link with $\text{Lk}(\gamma_1,\gamma_2)=0$ but
$\mathrm{SL}(\gamma_1,\gamma_2)=1$ is the Whitehead link (see e.g. \cite{cochran1985geometric}), shown in Fig.~\ref{fig:whitehead}.
\begin{figure}[!h]
\centering
\includegraphics[scale=1]{whitehead-link}
\caption{Whitehead link of two worldlines $\gamma_1$ and $\gamma_2$. It has a trivial linking number, but non-trivial Sato-Levine invariant $\mathrm{SL}(\gamma_1,\gamma_2)=1$.
Therefore it is detected by \cred{the 2+1D fermionic (but not bosonic)} gauged SPT with ${\mathbb{Z}}_2\times {\mathbb{Z}}_2$ symmetry.
The two ${\mathbb{Z}}_2$-flux lines of distinct ${\mathbb{Z}}_2$ form two components of the link.
\cred{Moreover, the classification of ${\mathbb{Z}}_2^f\times ({\mathbb{Z}}_2)^2$ fSPTs shows \cgreen{$(\mathbbm{Z}_8)^2 \times \mathbbm{Z}_4$ distinct} classes \cite{WangLinGu1610.08478}.
In this case, we claim that the Whitehead link can detect \cgreen{odd $\nu\in \mathbbm{Z}_4$ classes} in $\mathbbm{Z}_4$ sub-classes with a Berry phase $(-1)$.}}
\label{fig:whitehead}
\end{figure}
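As a numerical aside, the ordinary linking number $\text{Lk}$ used above can be evaluated directly from the Gauss linking integral $\text{Lk}=\frac{1}{4\pi}\oint\oint \frac{(r_1-r_2)\cdot(dr_1\times dr_2)}{|r_1-r_2|^3}$. A discretized sketch (the Hopf-link parametrization is an illustrative choice):

```python
import numpy as np

def gauss_linking(c1, c2):
    """Gauss linking integral for two closed polygonal curves.
    c1, c2: (N, 3) arrays of sample points; segment i runs from point i to i+1."""
    r1 = 0.5 * (c1 + np.roll(c1, -1, axis=0))  # segment midpoints
    r2 = 0.5 * (c2 + np.roll(c2, -1, axis=0))
    d1 = np.roll(c1, -1, axis=0) - c1          # segment vectors
    d2 = np.roll(c2, -1, axis=0) - c2
    diff = r1[:, None, :] - r2[None, :, :]     # pairwise midpoint differences
    cross = np.cross(d1[:, None, :], d2[None, :, :])
    num = np.einsum('ijk,ijk->ij', diff, cross)
    return (num / np.linalg.norm(diff, axis=2) ** 3).sum() / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
hopf1 = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
hopf2 = np.stack([1 + np.cos(t), 0 * t, np.sin(t)], axis=1)
print(round(abs(gauss_linking(hopf1, hopf2))))  # 1: the Hopf link has |Lk| = 1
```

For the Whitehead link the same integral gives 0, consistent with the text; the Sato-Levine invariant is precisely the finer mod-2 datum that still detects it.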
\section{Conclusion}
\label{sec:conclude}
Some final remarks and promising directions are in order:
\begin{enumerate}
\item
We formulate continuum TQFTs to verify several statistical Berry phases arising from particle and string braiding processes
via the 2+1D and 3+1D spacetime path integral formalism (see \cite{1602.05951} and references therein).
We find agreement with \cite{1602.05951}, which uses a different approach based on surgery theory and quantum mechanics.
To our knowledge, all of the TQFTs discussed in our Tables \ref{table:TQFTlink}, \ref{table:Z8-gauged} and \ref{table:fSPT-gauged}
can be obtained by dynamically gauging the unitary global symmetries of certain SPTs.
We also derive the corresponding new link invariants directly through TQFTs.
\item
{We resolve the puzzle of continuum spin TQFTs that arise from gauging the ${\mathbb{Z}}_2^f\times {\mathbb{Z}}_2$-fSPT
(namely the ${\mathbb{Z}}_2$-symmetric fermionic Topological Superconductor) partition function (\ref{gfSPT-Z2-3D}) and listed in Table \ref{table:Z8-gauged}}.
In addition, we may also understand the spin bordism group $\Omega^\text{Spin}_3(B{\mathbb{Z}}_2)\cong \mathbbm{Z}_8$ classification \cite{Kapustin1406.7329} of these fSPTs
in terms of three layers with
three distinct $\mathbbm{Z}_2$ generators respectively through the extended group super-cohomology approach \cite{Gu1201.2648, MCheng1501.01313, WangLinGu1610.08478}.
\cred{Comparing the cohomology group classification \cite{MCheng1501.01313, WangLinGu1610.08478} \cgreen{with the group $G={\mathbb{Z}}_2$
and} Table \ref{table:Z8-gauged}, we can deduce that
the first $H^3(G,U(1))=\mathbbm{Z}_2$ group generates the bosonic Abelian CS theories for $\nu=0,4$,
the second $H^2(G,{\mathbb{Z}}_2)=\mathbbm{Z}_2$ group generates the fermionic Abelian spin CS theories for $\nu=2, 6$, and
the third $H^1(G,{\mathbb{Z}}_2)=\mathbbm{Z}_2$ group generates fermionic non-Abelian spin TQFTs for $\nu=1,3,5,7$.
}
\item Following the above comment, for the odd-$\nu$ non-Abelian spin TQFTs in Table \ref{table:Z8-gauged},
we see that they are formed by a product of a bosonic and a spin TQFT sector, with the two sectors carrying opposite chiral central charges $c_-$.
The Ising TQFT and the $SU(2)_2$ CS both have anyon contents $\{1, \sigma, \psi\}$, while the $p+ip$ superconductor and the $SO(3)_1$ CS
have contents $\{1, f\}$.
Here we identify the $\sigma$ as the anyon created by the 't Hooft loop of ${\mathbb{Z}}_2$-gauge flux vison through our $\cT^{xy}$ matrix Eq.(\ref{eq:WTp}).
The $\psi$ is the Bogoliubov fermion, and $f$ is a \textit{fundamental} fermion.
\cred{The $\sigma$ is a non-Abelian anyon created from the ${\mathbb{Z}}_2$-gauge flux. \cgreen{We can identify $\sigma$ with a Majorana zero mode} trapped at a half-quantum vortex of a dynamically gauged chiral $p$-wave superconductor \cite{Ivanov20005069}.
}
To recap, we should regard the $\nu=1$ class
${\mathbb{Z}}_2$-symmetric Topological Superconductor
as stacking a $p+ip$ superconductor and a $p-ip$ superconductor together.
More generally, we should regard the $\nu$ class fSPT as stacking $\nu$ copies of $p+ip$ superconductors and $p-ip$ superconductors.
To obtain the spin TQFTs by gauging the fSPTs, naively we dynamically gauge the superconductor vortices in one sector of the theory.
More precisely, $f$ is realized by equipping a non-trivial spin structure on the complement of the loop.
The theory of $\{1, \sigma, \psi\}$ is related to $\{1, f\}$ by dynamically gauging the ${\mathbb{Z}}_2$-flux, thus summing over the spin structures.
The ${\mathbb{Z}}_2$-gauge flux traps the Majorana mode identified as the $\sigma$ anyon.
The pair of chiral central charges $c_-$ for two sectors of odd-$\nu$ theories are
$(\frac{1}{2},-\frac{1}{2})$ for $\nu=1$,
$(\frac{3}{2},-\frac{3}{2})$ for $\nu=3$,
$(-\frac{3}{2},\frac{3}{2})$ for $\nu=5$,
$(-\frac{1}{2},\frac{1}{2})$ for $\nu=7$, which satisfy a relation of $(\frac{\nu}{2} ,-\frac{\nu}{2})$ up to mod 4.
\cred{So the total chiral central charge is $c_-=0$ for each theory, as it is supposed to be \cgreen{for} gauged fSPTs.}
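As a quick arithmetic check (our own bookkeeping) of the stated pattern $(\frac{\nu}{2},-\frac{\nu}{2})$ mod 4, one can reduce $\nu/2$ into the interval $[-2,2)$ and compare with the listed pairs:

```python
def c_minus_pair(nu):
    """Chiral central charges (c, -c) of the two sectors of the nu-th theory,
    with c = nu/2 reduced mod 4 into the interval [-2, 2)."""
    c = ((nu / 2 + 2) % 4) - 2
    return (c, -c)

print([c_minus_pair(nu) for nu in (1, 3, 5, 7)])
# [(0.5, -0.5), (1.5, -1.5), (-1.5, 1.5), (-0.5, 0.5)]
```

The output reproduces the four pairs $(\frac12,-\frac12)$, $(\frac32,-\frac32)$, $(-\frac32,\frac32)$, $(-\frac12,\frac12)$ quoted above, and each pair sums to $c_-=0$.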
\item Comment on \emph{gauging and bosonization}:
\cred{We would like to make a further remark on the relation between
SPTs and TQFTs.}
By \emph{gauging}, one may mean to probe SPTs by coupling the global symmetry to a background classical probe field.
However, here in our context, in order to convert (short-range entangled) SPTs to (long-range entangled) TQFTs,
we should further make the gauge field dynamical. Namely, in the field theory context,
\cgreen{this means} summing over all the classical gauge field configuration to define the path integral.
This step is straightforward for bosonic theories in Sec.~\ref{sec:BdA}-\ref{sec:BB-theory}.
\cred{
By \emph{bosonization}, we mean summing over all the spin structures for a fermionic theory \cgreen{which turns} it into a bosonic theory.}
\cred{
It is worthwhile to clarify the physics of \emph{gauging} procedure \cgreen{and its relation to } \emph{bosonization} \cgreen{of} the fermionic theories in Sec.~\ref{sec:fTQFT}.}
\cgreen{The background ${\mathbb{Z}}_2$ gauge field of fSPTs in Sec.~\ref{sec:fTQFT}'s {Table \ref{table:Z8-gauged}}
can be understood as the difference between spin structures for chiral and anti-chiral factors (i.e. $p+ip$ and $p-ip$ superconductors for $\nu=1$ {of Table \ref{table:Z8-gauged}}). By fixing the spin structure for one of the chiral factors (say, $p-ip$), the summation over the background gauge field becomes equivalent to the summation over the spin structure for the other factor.
In particular, in the $\nu=1$ case, gauging produces $\text{Ising}\,\times\,{p-ip}$ fTQFT or equivalently
$\text{Ising}\,\times\,\overline{\text{spin-Ising}}$ fTQFT. The case of other $\nu\in {\mathbb{Z}}_8$ classes is similar.
}
\item Comment on \emph{the quantum dimension and statistical Berry phase/matrix of anyonic particles/strings}:
\cred{In order to discuss the quantum dimensions $d_J$ of loop/surface
operators, one should properly normalize the line/surface operators.
There are different choices of possible normalizations.
\cgreen{In particular, for condensed matter/quantum information literature,
the common normalization of line operators is such} that
insertion of the pair of operator and anti-operator along a
non-trivial loop in \cgreen{$S^d\times S^1$} in $d+1$D, where $d+1$ is the spacetime dimension,
yields 1. For example, from Sec.~\ref{sec:BdA}-\ref{sec:BB-theory},
it can be shown that quantum dimensions $d_J$ for $\int BdA+A^3$ and
$\int BdA+A^4$ theories are non-Abelian in the sense that $d_J=N$ for some of
the operators that contain $B_J$ fields \cgreen{(where $B_J$ is a 1-form field for $\int BdA+A^3$ theory and
a 2-form field for $\int BdA+A^4$ theory)
in twisted Dijkgraaf-Witten theories with
$G=({\mathbb{Z}}_N)^{d+1}$ with $N \geq 2$}.
The non-Abelian nature can be seen clearly as follows: On a spatial $S^d$ sphere
with a number $\text{N}_{\text{insert}}$ of insertions of the non-Abelian excitations,
the GSD grows as $\text{GSD} \approx (d_J)^{\text{N}_{\text{insert}}}=N^{\text{N}_{\text{insert}}}$.
Thus, the Hilbert space of degenerate ground state sectors has dimension of order \cgreen{$N^{\text{N}_{\text{insert}}}$}}.
\cred{\cgreen{One} can verify that the quantum dimension $d_J=N$ is consistent with an independent derivation in Ref.~\cite{Wang1404.7854} based on quantum algebra (without using field theory).}
{In general,
the spacetime braiding process of anyonic particle/string excitations, on a spatial sphere,
in such a highly degenerate ground state Hilbert subspace (of a dimension of an order
$\text{GSD} \approx (d_J)^{\text{N}_{\text{insert}}}$),
would evolve the original state vector
with an additional statistical unitary Berry matrix (thus, non-Abelian statistics).
}
Normally, the non-Abelian statistics for the non-Abelian anyons and non-Abelian strings
for these theories are more difficult to compute, because the non-Abelian statistics require a matrix to characterize the changes of ground-state sectors.
Yet for $\int BdA+A^3$ of Sec.~\ref{sec:aaa-theory} and $\int BdA+A^4$ theories of Sec.~\ref{sec:A4-theory},
we are able to use Milnor's triple linking and the quadruple-linking numbers of surfaces to characterize their non-Abelian statistics, and thus boil down
the non-Abelian statistics data to a single numeric invariant.\footnote{The physics here is similar to
the observation made in \cite{CWangMLevin1412.1781}, although in our case we have obtained more general link invariants that encode all possible nontrivial braiding processes, instead of particular few braiding processes in \cite{CWangMLevin1412.1781}.}
\cred{We would also like to point out that the main focus
of \cgreen{the current work} is to derive the more subtle statistical Berry
phases of anyonic particles/strings. For this purpose, we choose a
convenient but different normalization of line/surface operators (result shown in Table \ref{table:TQFTlink}). One can easily modify our prescription above to
encode the \emph{quantum dimension} data.}
\item We remark that it was found in \cite{Ferrari0210100, Gu:2015lfa}
that the $\int BdA+A^3$ theory studied in Sec.~\ref{sec:aaa-theory} can be embedded into a non-Abelian Chern-Simons theory
\begin{eqnarray}
\label{eq:L-general}
S={\frac{1}{4\pi} }\int d^3 x \epsilon^{\mu\nu\rho} \mathcal{K}^{G}_{a \alpha'} \Big( \mathcal{A}^a_{\mu}(x) \partial_\nu \mathcal{A}^{\alpha'}_{\rho}(x)
+\frac{1}{3} f_{bc}{}^{a} \mathcal{A}^{\alpha'}_{\mu}(x) \mathcal{A}^b_{\nu}(x) \mathcal{A}^c_{\rho}(x)\Big),
\end{eqnarray}
here $a, b, c, \alpha, \alpha'=1,\dots,6$, with a 6-dimensional Lie algebra.
We write
\begin{eqnarray}
&&\mathcal{A}_\mu^\alpha T^\alpha \equiv A_\mu^I X_I + B_\mu^I H_I^*.\\
&& (\mathcal{A}_\mu^1 T^1,\mathcal{A}_\mu^2 T^2, \mathcal{A}_\mu^3 T^3)=
(A_\mu^1 X_1, A_\mu^2 X_2, A_\mu^3 X_3), \nonumber \\
&& ( \mathcal{A}_\mu^4 T^4, \mathcal{A}_\mu^5 T^5, \mathcal{A}_\mu^6 T^6)=
(B_\mu^1 H_1^*,B_\mu^2 H_2^*, B_\mu^3 H_3^*). \nonumber
\end{eqnarray}
Here $\alpha=1,\dots,6$ and $I=1,\dots,3$.
The generic Lie algebra is called the symmetric self-dual Lie algebra \cite{FigueroaO'Farrill:1995cy},
such that the generators $H_I^*$ and $X_I$ in particular obey
\begin{align} \label{eq:Lie_algebra}
[H_I^*,H_J^*]=[H_I^*,X_J]=0; \quad [X_I,X_J]=C_{IJK}H_K^*,
\end{align}
where $C_{IJK}$ serves as an appropriate structure constant now. More generally, we simply write
the whole 6-dimensional Lie algebra as $[T_a,T_b]= f_{ab}{}^c T_c$.
Even if our Killing form $\kappa_{ab}=\kappa(T_a,T_b)=-{\mathrm{Tr}}(T_a T_b)$ is degenerate,
as long as we can define a symmetric bilinear form $({\mathcal{K}_{}^{G}})_{IJ}$,
the $({\mathcal{K}_{}^{G}})_{IJ}$ can replace the degenerate Killing form
to define the Chern-Simons theory Eq.(\ref{eq:L-general}). See Ref. \cite{Gu:2015lfa}'s Sec.~X and Appendix C for further details.
As an example, when the $\int BdA+A^3$ theory in Sec.~\ref{sec:aaa-theory} has $G=({\mathbb{Z}}_2)^3$,
it can be shown to be equivalent to a non-Abelian $D_4$ (of group order 8)
discrete gauge theory \cite{deWildPropitius:1995cf,Wang1404.7854, Gu:2015lfa, He1608.05393}.
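Since the bracket (\ref{eq:Lie_algebra}) makes the $H_I^*$ central and closes $[X_I,X_J]$ on them, the Jacobi identity holds automatically. A small numerical check, taking $C_{IJK}=\epsilon_{IJK}$ as an illustrative choice of structure constants:

```python
import numpy as np

# Basis: T_0..T_2 = X_1..X_3, T_3..T_5 = H*_1..H*_3, with
# [X_I, X_J] = C_IJK H*_K and all brackets involving H* vanishing.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

f = np.zeros((6, 6, 6))   # structure constants f_{ab}^c
f[:3, :3, 3:] = eps       # illustrative choice C_IJK = epsilon_IJK

# Jacobi identity: f_{ab}^m f_{mc}^e + f_{bc}^m f_{ma}^e + f_{ca}^m f_{mb}^e = 0
jacobi = (np.einsum('abm,mce->abce', f, f)
          + np.einsum('bcm,mae->abce', f, f)
          + np.einsum('cam,mbe->abce', f, f))
print(np.allclose(jacobi, 0))  # True: the 6-dimensional algebra closes consistently
```

The identity holds trivially here because $f_{ab}{}^m$ is nonzero only for central targets $m$, for which $f_{m\cdot}{}^{\cdot}$ vanishes.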
\item
Surprisingly, non-Abelian TQFTs can be obtained by gauging the Abelian global symmetry of some SPTs with finite Abelian unitary symmetry groups,
in the sense that the excitations exhibit non-Abelian braiding statistics.
Non-Abelian statistics means that the braiding process can change the ground state vector in the Hilbert space to a different vector within the degenerate ground-state subspace, even if
the initial and final configurations after the braiding process are the same.
This happens for the $\int BdA+A^3$ theory in 2+1D of Sec.~\ref{sec:aaa-theory}
and the $\int BdA+A^4$ theory in 3+1D of Sec.~\ref{sec:A4-theory}.
Other examples are the 2+1D/3+1D spin-TQFTs obtained by gauging ${\mathbb{Z}}_2^f\times ({\mathbb{Z}}_2)^n$ fSPTs, from which we also obtain some non-Abelian spin TQFTs.
\item \emph{New mathematical invariants}:
The \textit{quadruple linking number} of four surfaces
defined in Eq. (\ref{quadruple-surface-linking}),
$\text{Qlk}(\Sigma_1,\Sigma_2,\Sigma_3,\Sigma_4)$,
seems to be a new link invariant that has not been explored in the mathematics literature.
In Eqs.~(\ref{eq:RokhlinNew})-(\ref{eq:RokhlinNew2}), we propose a novel realization of the ${\mathbb{Z}}_2^f\times {\mathbb{Z}}_2$ fSPT partition function (previously realized via the Arf-Brown-Kervaire invariant in \cite{Kapustin1406.7329}) in terms of the Rokhlin invariant.
\end{enumerate}
\section{Acknowledgements}
JW thanks Zhengcheng Gu, Tian Lan,
Nathan Seiberg, Clifford Taubes and Edward Witten for conversations.
PP gratefully acknowledges the support from Marvin L. Goldberger Fellowship and the DOE Grant DE-SC0009988.
JW gratefully acknowledges the Corning Glass Works Foundation Fellowship and NSF Grant PHY-1606531.
JW's work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293.
This work is supported by the NSF Grant PHY- 1306313, PHY-0937443, DMS-1308244, DMS-0804454, DMS-1159412 and Center for Mathematical Sciences and Applications at Harvard University.
\bibliographystyle{JHEP}
\section{Introduction}
Recently, motion transfer has gained increasing attention from computer vision researchers, due to its numerous potential applications in the fields of video re-enactment~\cite{chan2019everybody}, fashion design~\cite{dong2019fw}, face swapping~\cite{siarohin2020first}, and so on. Given a source image and a driving video of the same object type, the goal of motion transfer is to generate a video that depicts the motion pattern contained in the driving video while preserving the appearance from the source image.
Finding the correspondence between a source image and a driving video is the key to successful motion transfer. Existing motion transfer methods address this issue in two ways. On one hand, model-based methods~\cite{ma2017pose,gu2020flnet} utilize a pre-trained third-party model to extract the structural information of an object (\eg, human bodies, human faces, etc.). However, specific predefined structure priors are required for different objects. On the other hand, model-free methods~\cite{siarohin2019animating,siarohin2020first,wiles2018x2face} treat motion keypoints as unknown variables, then design models to predict them by optimizing the image reconstruction loss. While these approaches do not require a predefined object structure, they often suffer from false correspondences, leading to considerable artifacts emerging in the generated videos (see Fig.~\ref{fig:background prediction} for examples).
To address these issues, in this paper, we propose a novel structure-aware motion transfer approach referred to as the deformable anchor model (DAM). In DAM, we take advantage of both the model-free and model-based methods. On one hand, similar to the model-free methods, we represent motion keypoints (a.k.a., ``anchors") as unknown variables, which enables our model to perform motion transfer on an arbitrary object without knowing its prior structural information. On the other hand, to prevent the false correspondences, we also encode the structural information to constrain those motion anchors. Unlike model-based methods, our approach does not employ any pre-trained third-party model. Instead, as inspired by the well-known deformable part model (DPM)~\cite{DPM}, DAM introduces a latent root anchor to regularize the motion anchors and model the object structure, enabling the correspondence between the source image and driving video to be enforced and thus further improving the performance. Furthermore, by introducing additional latent anchors, DAM can be easily extended to a hierarchical version that can more effectively model complicated object structures.
Note that all latent anchors in our DAM are unknown variables, and that DAM can be learned in an end-to-end manner, similarly to previous model-free methods.
We conduct experiments on four benchmark datasets (\ie, TaiChiHD, FashionVideo, VoxCeleb1 and MGIF) for performance evaluation. The experimental results show that our method not only achieves the best quantitative performance, but also exhibits a strong capacity to capture the motion structure of different objects, such as human bodies, faces, animals, and so on.
\section{Related Work}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.88\linewidth,height=0.6\columnwidth]{fig/background_prediction.pdf}
\caption{Failure cases from the FOMM method~\cite{siarohin2020first}. Inaccurate correspondences between motion points cause parts of the human body to be missed in the generated videos.}
\label{fig:background prediction}
\vspace{-0.6cm}
\end{center}
\end{figure}
\noindent{\bf Video-to-video synthesis:} Motion transfer has been studied to some extent in the area of video-to-video synthesis. Video2video~\cite{wang2018video} proposed to synthesize photo-realistic videos from input video semantic maps. Chan \etal~\cite{chan2019everybody,yang2020transmomo} further extended the generation scheme to synthesize human dance videos conditioned on input video pose sequences and a source identity. These methods are good at utilizing the input source appearance information and can generate realistic videos. However, they are also identity-specific, meaning that they require a large number of source images with diverse views and ranges of motion, and moreover take a long time to train.
\noindent{\bf Motion transfer:} Early methods\cite{ma2017pose,ma2018disentangled,balakrishnan2018synthesizing,siarohin2018deformable,wei2020c2f} mainly focus on pose-guided human image generation. These works use off-the-shelf pose estimators or keypoint detectors to pre-extract pose information, which is then adopted for conditioning the image generation process. A series of works~\cite{zhu2019progressive,chen2019unpaired,li2019dense,ren2020deep,neverova2018dense,liu2019liquid, ren2021flow,zhang2021pise,Sarkar2020,Yoon_2021_CVPR,pumarola2018ganimation} have adopted this approach. In addition, many works have proposed facial animation methods~\cite{wei2020learning,gu2020flnet,tripathy2021facegan,burkov2020neural,chen2020puppeteergan,wang2021one,yao2021one,kim2018deep} which can be seen as a kind of facial motion transfer. Similarly, these methods also employ an off-the-shelf facial landmark detector for expression modeling.
Despite their ability to transfer the pose of a human body or the expression on a human face, these methods heavily rely on third-party models and are object-specific. Inspired by Jakab et al.~\cite{jakab2018unsupervised}, which proved that object landmarks can be learned in an unsupervised way via image reconstruction, Monkey-Net~\cite{siarohin2019animating} was the first to propose a model-free motion transfer method for arbitrary objects, which was achieved by building backward motion flow from aligned keypoints to warp the source image feature to driving pose. This warping-based method can achieve superior motion modeling and transferring performance, but this performance begins to suffer when the motions in question are large and complex. FOMM~\cite{siarohin2020first} enhances the motion model by introducing local affine transformations to motion keypoints. Since no structural information is provided, however, this approach often suffers from unstable correspondence between the source and driving image. RegionMM~\cite{PCAMotion} further extends the FOMM by defining regions that can be used to model parts of an object, although it does not consider the dependent structure between the different regions.
\noindent{\bf Other related work:} Most of the above motion transfer methods rely on keypoint detection for encoding pose information. Generally speaking, model-based methods tend to adopt supervised keypoint detection or pose estimation methods~\cite{cao2017realtime,newell2016stacked,zhang2015learning,yu2016deep}, while model-free methods tend to be unsupervised keypoint detection methods~\cite{zhang2018unsupervised,jakab2018unsupervised}. For supervised cases, keypoints are learned on additional and richly annotated datasets. For unsupervised cases, keypoints are usually learned via an auxiliary image reconstruction task. Specifically, detected keypoints are considered to represent the structural information of an image object; the image should be reconstructed via combining the structural and appearance information.
Our work is partially inspired by DPM~\cite{DPM}, a traditional human object detection approach. It breaks the detection task down into individual part detection tasks across the human body and defines the score of a positive detection at a root location by considering the spatial distance prior between the root location and the part locations. Intuitively, if the current relative distance from a part (\eg the left leg of a human) to the root (\eg the head of a human) is much larger than the prior relative distance, then this root-part pair of locations tends to be assigned a lower positive detection score in DPM. In a similar spirit, we consider the motion prior of a root anchor, which is formulated in a similar way to the spatial distance constraint in DPM.
\section{Structure-Aware Motion Transfer}
In this section, we present our structure-aware motion transfer approach which we name the deformable anchor model (DAM). We develop our approach based on the recent first-order motion model (FOMM~\cite{siarohin2020first}), with the addition of two novel deformable anchor models to encode the motion structure information. Below, we first present a brief review of the FOMM method in Section~\ref{3.1}, and then introduce the basic deformable anchor model in Section~\ref{3.2}. A more effective hierarchical deformable anchor model is presented in Section~\ref{3.3}, followed by a summary of the entire model in Section~\ref{3.4}.
\subsection{Motion Flow Modeling}\label{3.1}
Given a source image and a driving video, FOMM~\cite{siarohin2020first} generates the motion transfer video by warping the source image to mimic the driving video in a frame-by-frame manner. For this purpose, it first estimates the dense motion flow between these two images. It then warps the source image in the feature space and synthesizes the driving frame with an image generator based on the warped source image feature. The entire process is illustrated in the bottom part of Figure~\ref{fig:pipeline}.
Formally, given a source image $S$ and a driving frame $D$, the motion between two images is modeled by the motion flow $\mathcal{T}_{S \leftarrow D} (z)$, where $z$ denotes the coordinates of any pixel in the image. Estimating the dense motion flow is nontrivial. To ease this process, FOMM employs a set of motion anchors; these anchors are intended to represent identical keypoints of the object in the source image and driving frame (for example, the corresponding physical parts of the human body). With the aid of aligned motion anchors the dense motion flow can be derived through affine transformations.
In more detail, let $z^s_k$ and $z^d_k$ denote the $k$-th pair of corresponding anchors in $S$ and $D$ respectively; here, $k=1,\ldots, K$, where $K$ is the number of motion anchors. This yields the following:
\begin{equation}
\begin{split}
z^s_k = \mathcal{T}_{S \leftarrow D} (z^d_k) \label{motion anchor point flow}
\end{split}
\end{equation}
Given a motion anchor, the motion flow for pixels at the local region around the anchor can be approximately modeled with an affine transformation. For convenience, let $\mathcal{T}_k$ denote the dense motion flow derived by the $k$-th motion anchor. The affine transformation can thus be described as follows:
\begin{equation}
\mathcal{T}_k (z) = \mathcal{T}_k (z^d_k) + \theta_k (z-z^d_k) \label{motion anchor flow}
\end{equation}
where $\theta_k$ is the parameter of local affine transformation for the $k$-th anchor.
Intuitively, the dense motion flow of a pixel $z$ can be derived from any nearby motion anchor. Thus a weight parameter $M_k (z)$ is introduced to automatically combine the $\mathcal{T}_k (z)$ from different anchors. The dense motion flow of any pixel $z$ can thus be represented as follows:
\begin{equation}
\mathcal{T}_{S \leftarrow D} (z) = \sum_{k=1}^{K}M_k (z)\cdot\mathcal{T}_k (z), \label{global flow}
\end{equation}
where $\sum_{k=0}^{K}M_k (z) =1, \forall z$, in which $M_0 (z)$ is an additional mask for modeling the background, similar to the approach in~\cite{PCAMotion}.
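The per-anchor affine flows of Eqn.~(\ref{motion anchor flow}) and their mask-weighted combination in Eqn.~(\ref{global flow}) can be sketched in a few lines of numpy. The function and argument names below are illustrative and not from the released code:

```python
import numpy as np

def dense_flow(z, anchors_d, anchors_s, thetas, masks):
    """Combine per-anchor affine flows into one dense flow.

    z         : (N, 2) pixel coordinates
    anchors_d : (K, 2) driving-frame anchors z^d_k
    anchors_s : (K, 2) source-frame anchors z^s_k = T(z^d_k)
    thetas    : (K, 2, 2) local affine parameters theta_k
    masks     : (K, N) combination weights M_k(z), summing to 1 over k
    """
    flows = []
    for k in range(len(anchors_d)):
        # T_k(z) = T_k(z^d_k) + theta_k (z - z^d_k)
        flows.append(anchors_s[k] + (z - anchors_d[k]) @ thetas[k].T)
    flows = np.stack(flows)                      # (K, N, 2)
    # weighted sum over anchors
    return (masks[..., None] * flows).sum(axis=0)
```

With a single anchor, an identity $\theta$, and a unit mask, the result reduces to a pure translation of every pixel by $z^s_0 - z^d_0$, which matches the intuition behind Eqn.~(\ref{motion anchor flow}).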
With $z^d_k$, $z^s_k$, $\theta_k$, and $M_k$, it is possible to obtain a dense motion flow between the source image $S$ and the driving frame $D$, after which the source image can be warped to mimic the driving frame with an image generator. By enforcing an image reconstruction loss on the image generator, a motion estimator can be trained to automatically predict these unknown variables (\ie, $z^d_k$, $z^s_k$, $\theta_k$, and $M_k$) (see Fig.~\ref{fig:pipeline}).
As FOMM has shown, the motion anchors tend to have coarse physical meanings (\eg, a motion anchor may consistently locate at the head region of a human). However, false correspondences may occur in the presence of large motion or background variation, which leads to considerable artifacts in the generated videos,
as shown in Fig.~\ref{fig:background prediction}. We discuss how to address these issues by encoding latent object structure information in the following subsections.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{fig/root_prior_kp2.pdf}
\caption{Illustration of Eqn.~(\ref{root2kp distance}). The colored squares denote prior flow derived from the root anchor, as described in Eqn.~(\ref{root flow at motion anchor point}), while the colored dots denote motion anchors. Euclidean distance is minimized between the pairs.}
\label{fig:root prior}
\vspace{-0.5cm}
\end{center}
\end{figure}
\subsection{Deformable Anchor Model (DAM)} \label{3.2}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig/pipeline.pdf}
\caption{Overview of the proposed method. Anchors of the source image and driving image are respectively predicted through the motion estimator (we draw five motion anchors for clarity). The generated anchors are then fed into a flow mask estimator together with the source image. The motion anchors and the flow masks are subsequently combined to obtain the dense warping flow for image generation. Note that motion anchors are constrained by the root anchor (\eg, the largest dot) and intermediate root anchors (\eg, medium-sized dots).}
\label{fig:pipeline}
\vspace{-0.5cm}
\end{center}
\end{figure*}
As discussed above, artifacts can be observed in the videos generated by FOMM. This is largely because the motion anchors in FOMM are not properly regularized. Although the contributions of different anchors are combined through the $M_k(z)$'s in Eqn.~(\ref{global flow}), we observe that $M_k (z)$ tends to focus on only a local region around the anchor $z_k$ due to the affine transformation assumption. As a result, the $z_k^s$ and $z_k^d$ predicted by the motion estimator may not correspond accurately, leading to errors in the dense motion flow and artifacts in the generated video.
To address this issue, we propose a new deformable anchor model (DAM) to discover the motion structure information of the object, then employ this information to regularize the motion anchors. In more detail, our model is inspired by DPM~\cite{DPM}. We introduce an additional \emph{latent root anchor} to establish communications among motion anchors. In a similar spirit to DPM, by connecting motion anchors with the root anchor, we expect the model to become aware of the motion structure of an object, even if its appearance varies in source images and driving videos.
Intuitively, given a source image and a driving frame, the root anchor represents the global motion between the two objects, which means that the flow of motion anchors should be related to that of the root anchor. Let $z^d_{r}$ denote the latent root anchor of the driving frame. We then model the relation between the motion and root anchors with an affine transformation, as follows:
\begin{equation}
\mathcal{T}_r\left (z^d_{k}\right) = \mathcal{T}_r\left (z^d_{r}\right)+\theta_r \left (z^d_{k}-z^d_{r}\right)
\label{root flow at motion anchor point}
\end{equation}
where $\mathcal{T}_r\left (z^d_{k}\right)$ is the derived flow based on the latent root anchor using the affine transformation model. We then regularize the motion flow of $z^d_k$ to be similar to the derived flow using the following loss:
\begin{equation}
\mathcal{L}_{k\leftarrow r} = \left\|\mathcal{T}_k (z^d_k)-\mathcal{T}_r\left (z^d_{k}\right)\right\|_2 \label{root2kp distance}
\end{equation}
A further explanation of Eqn.~(\ref{root flow at motion anchor point}) and (\ref{root2kp distance}) is provided in Fig.~\ref{fig:root prior}. In a departure from the original FOMM, where the motion anchors are almost independent, we encode a latent object structure to regularize the motion anchors. Eqn.~(\ref{root flow at motion anchor point}) implies that we assume an affine transformation relation between the flow of the root anchor and motion anchors. While this may be stricter than required, divergence from ideal cases is permitted, and we use the derived flow as a prior to regularize the motion anchors with Eqn.~(\ref{root2kp distance}).
On the other hand, through the use of Eqn.~(\ref{root flow at motion anchor point}) and (\ref{root2kp distance}), the motion anchors also guide us to learn a meaningful root anchor. As shown in Fig.~\ref{fig:ablation}, the root anchor is always located at the object centroid to capture the global movement of the object from one image to another.
It should further be noted that, at the training stage, the latent root anchor $z^d_r$ and the affine transformation parameters $\theta_r$ can be obtained
by the motion estimator in a similar way as the motion anchors. At the testing stage, the root anchor is discarded, and we only need to use the predicted motion anchors to generate dense motion flow in the same way as FOMM. The overall architecture of our method is illustrated in Fig.~\ref{fig:pipeline}.
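As a concrete reading of Eqns.~(\ref{root flow at motion anchor point}) and (\ref{root2kp distance}), the root-prior regularizer can be sketched as follows. This is a numpy illustration with assumed array shapes and names, not the actual training code:

```python
import numpy as np

def dam_loss(anchors_d, flows_at_anchors, root_d, root_flow, theta_r):
    """Root-prior regularizer summed over all K motion anchors.

    anchors_d        : (K, 2) driving anchors z^d_k
    flows_at_anchors : (K, 2) T_k(z^d_k), i.e. the source anchors z^s_k
    root_d           : (2,)   latent root anchor z^d_r
    root_flow        : (2,)   T_r(z^d_r)
    theta_r          : (2, 2) root affine parameters
    """
    # Flow derived from the root for each motion anchor:
    # T_r(z^d_k) = T_r(z^d_r) + theta_r (z^d_k - z^d_r)
    derived = root_flow + (anchors_d - root_d) @ theta_r.T
    # L2 distance between actual and derived flows, summed over k
    return np.linalg.norm(flows_at_anchors - derived, axis=1).sum()
```

If the motion anchors move exactly according to the root affine model, the regularizer vanishes; otherwise it penalizes anchors whose flow diverges from the structure prior.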
\subsection{Hierarchical DAM} \label{3.3}
As discussed above, using an affine transformation to model the structure prior might be too restrictive, especially for objects with complicated motion. Taking the human body as an example, a movable part (\eg, left leg) might contain multiple joints, meaning that a single affine transformation can scarcely be expected to describe such a complex structure prior.
This motivates us to construct a hierarchical deformable anchor model to facilitate the modeling of more complicated object structures. In more detail, we additionally introduce a set of latent intermediate anchors into the basic deformable anchor model. Rather than directly regularizing the motion anchors with the latent root anchor, we instead use latent intermediate anchors to regularize motion anchors and the latent root anchor to regularize latent intermediate anchors. Similarly, the affine transformation prior is applied between different types of anchors. Let $z^d_{i}$ denote an intermediate anchor; accordingly, we have:
\begin{eqnarray}
\mathcal{T}_r\left (z^d_{i}\right) &=& \mathcal{T}_r\left (z^d_{r}\right)+\theta_r \left (z^d_{i}-z^d_{r}\right)\\
\mathcal{T}_i\left (z^d_{k}\right) &=& \mathcal{T}_i\left (z^d_{i}\right)+\theta_i \left( z^d_{k}-z^d_{i}\right)
\label{hdam_affine}
\end{eqnarray}
where $\theta_r$ and $\theta_i$ are affine transformation parameters of the root anchor $z^d_{r}$ and intermediate anchor $z^d_{i}$, while $\mathcal{T}_r$ and $\mathcal{T}_i$ are the respective derived flows.
Accordingly, the loss for regularizing the motion flow of motion anchors can be written as follows:
\begin{eqnarray}
\mathcal{L}_{k\leftarrow i} &=& \left\|\mathcal{T}_k (z^d_k)-\mathcal{T}_{i}\left (z^d_{k}\right)\right\|_2 \label{subroot2kp distance}\\
\mathcal{L}_{i\leftarrow r} &=& \left\|\mathcal{T}_{i} (z^d_{i})-\mathcal{T}_{r}\left (z^d_{i}\right)\right\|_2 \label{root2subroot distance}
\end{eqnarray}
Note that although the loss $\mathcal{L}_{k\leftarrow i}$ is defined for every pair of $z^d_{i}$ and $z^d_{k}$, we expect it to take effect only on the several $z^d_{k}$'s near $z^d_{i}$. Therefore, in our implementation, we assign attention weights to all $z^d_k$'s for each $z^d_{i}$, and allow the model to adjust these weights automatically.
With latent intermediate anchors, we can model a three-level hierarchical structure for object motion. To this end, the procedure illustrated in Fig.~\ref{fig:root prior} can be further extended to involve image pixels, above which sit the motion anchors, the intermediate anchors, and the root anchor, respectively. By applying the affine transformation prior between adjacent levels, we are able to model more complex object structures.
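The two-level regularizers of Eqns.~(\ref{subroot2kp distance}) and (\ref{root2subroot distance}) can be sketched together in numpy. The softmax over attention logits and all names are our assumptions for illustration; the paper computes the weights with a fully connected layer:

```python
import numpy as np

def hdam_loss(motion_flows, motion_d, inter_flows, inter_d, thetas_i,
              root_flow, root_d, theta_r, attn_logits):
    """Hierarchical regularizer: intermediates constrain motion anchors,
    and the root constrains the intermediates.

    motion_flows : (K, 2) T_k(z^d_k)     motion_d : (K, 2) z^d_k
    inter_flows  : (I, 2) T_i(z^d_i)     inter_d  : (I, 2) z^d_i
    thetas_i     : (I, 2, 2)             attn_logits : (I, K)
    """
    w = np.exp(attn_logits)
    w /= w.sum(axis=1, keepdims=True)    # attention weights omega_ik
    loss = 0.0
    for i in range(len(inter_d)):
        # motion anchors regularized by intermediate anchor i
        derived_k = inter_flows[i] + (motion_d - inter_d[i]) @ thetas_i[i].T
        loss += (w[i] * np.linalg.norm(motion_flows - derived_k, axis=1)).sum()
        # intermediate anchor i regularized by the root
        derived_i = root_flow + theta_r @ (inter_d[i] - root_d)
        loss += np.linalg.norm(inter_flows[i] - derived_i)
    return loss
```

For a rigid translation of the whole object, every level of the hierarchy agrees and the loss is zero, which is the intended behavior of the prior.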
\subsection{Training DAM and HDAM} \label{3.4}
In both the basic deformable anchor model and hierarchical deformable anchor model, the newly introduced latent root anchors and the latent intermediate anchors can be predicted by the motion estimator network, which can be trained similarly to FOMM, \ie in an end-to-end fashion by optimizing the image reconstruction loss.
More specifically, following FOMM~\cite{siarohin2020first}, we utilize the perceptual loss as our main driving loss, which is usually defined with a pre-trained VGG-19 network~\cite{simonyan2014very}. Given a driving image $D$, the perceptual loss can be expressed as follows:
\begin{equation}
\mathcal{L}_{per}=\frac{1}{C \cdot H \cdot W} \sum_{l}\left\|\phi_{l} (D)-\phi_{l}(\tilde {D})\right\|
\end{equation}
where $\tilde {D}$ is the generated driving image, $\phi_{l}$ denotes the feature extractor using the $l$-th layer of the VGG-19 network, and $C,H,W$ denote the number of channels, feature map height and width respectively.
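In code, the perceptual loss simply averages absolute feature differences within each selected VGG-19 layer and sums over layers. A sketch, where the feature extraction $\phi_l$ is assumed to have been computed elsewhere:

```python
import numpy as np

def perceptual_loss(feats_real, feats_fake):
    """Sum over layers of the mean absolute feature difference.

    feats_real / feats_fake : lists of (C, H, W) arrays,
                              phi_l(D) and phi_l(D_tilde)
    """
    loss = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        # .mean() implements the 1/(C*H*W) normalization
        loss += np.abs(fr - ff).mean()
    return loss
```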
Additionally, similar to recent works~\cite{siarohin2020first,zhang2018unsupervised}, an equivariance loss is adopted to ensure the geometric consistency of the learned anchors. For a known geometric transformation $\mathbf{T}$ and a given image $I$, the loss is defined as follows:
\begin{equation}
\mathcal{L}_{equi}=\sum_{k}\left\|z^I_k-\mathbf{T}^{-1} (z^{\mathbf{T} (I)}_k)\right\|
\label{equi loss}
\end{equation}
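Eqn.~(\ref{equi loss}) compares anchors detected on $I$ with back-transformed anchors detected on $\mathbf{T}(I)$. A sketch where the detector outputs are passed in directly and `T_inv` is a hypothetical callable standing in for $\mathbf{T}^{-1}$:

```python
import numpy as np

def equivariance_loss(anchors_I, anchors_TI, T_inv):
    """Penalize anchors that do not move consistently with a known
    geometric transform T.

    anchors_I  : (K, 2) anchors detected on image I
    anchors_TI : (K, 2) anchors detected on the transformed image T(I)
    T_inv      : callable applying the inverse transform to coordinates
    """
    return np.linalg.norm(anchors_I - T_inv(anchors_TI), axis=1).sum()
```

For a perfectly equivariant detector and a pure translation $\mathbf{T}$, the back-transformed anchors coincide with the originals and the loss is zero.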
\textbf{Training DAM:} For the basic deformable anchor model, we write the loss for regularizing motion anchors as follows:
\begin{equation}
\mathcal{L}_{dam} = \sum_{k=1}^K\mathcal{L}_{k\leftarrow r}
\label{fist order loss}
\end{equation}
where $\mathcal{L}_{k\leftarrow r}$ is defined in Eqn.~(\ref{root2kp distance}).
The total training loss of our DAM model can be defined as:
\begin{equation}
\mathcal{L} = \mathcal{L}_{per} + \mathcal{L}_{equi} + \mathcal{L}_{dam}
\label{dam_loss}
\end{equation}
where we apply equal weights for all losses.
\textbf{Training HDAM:}
For the hierarchical deformable anchor model, assuming a total of $I$ intermediate anchors are used, the loss can be written as follows:
\begin{equation}
\mathcal{L}_{hdam} = \sum_{i=1}^{I}\left (\sum_{k=1}^K\omega_{ik}\mathcal{L}_{k\leftarrow i}+ \mathcal{L}_{i\leftarrow r}\right)
\label{second order loss}
\end{equation}
where $\mathcal{L}_{k\leftarrow i}$ and $\mathcal{L}_{i\leftarrow r}$ are respectively defined in Eqn.~(\ref{subroot2kp distance}) and Eqn.~(\ref{root2subroot distance}); moreover, $\omega_{ik}$ denotes the attention weight between motion anchor $k$ and intermediate anchor $i$, which is computed through a fully connected layer. More detailed information about the attention process is presented in the supplementary material.
The total training loss of our HDAM model can be defined as:
\begin{equation}
\mathcal{L} = \mathcal{L}_{per} + \mathcal{L}_{equi} + \mathcal{L}_{hdam}
\label{hdam_loss}
\end{equation}
where we also apply equal weights for all losses. In practice, when training HDAM, we use a pretrained DAM model as the initial model and then optimize the loss in Eqn.~(\ref{hdam_loss}).
\section{Experiments}
In this section, we evaluate our method on the benchmark datasets, and further provide insightful analysis by means of an ablation study and qualitative results.
\subsection{Experimental Setup}
\noindent\textbf{Datasets:} We follow FOMM~\cite{siarohin2020first} and RegionMM~\cite{PCAMotion} in evaluating our method on four benchmark datasets containing different types of object:
\begin{itemize}[itemsep=0pt]
\item TaiChiHD~\cite{siarohin2020first} contains 2867 training videos and 253 test videos. This dataset contains Tai-chi performers with different identities and various backgrounds, and is considered the most challenging dataset in this area due to its large motions. Two resolution variants of this dataset are evaluated: 1) the basic $256\times 256$ resolution, for which all raw videos are cropped and resized as in FOMM; 2) the $512\times 512$ resolution, a subset that removes any raw videos failing to satisfy the resolution requirement for cropping, and which contains 962 training videos and 112 test videos.
\item FashionVideo~\cite{zablotskaia2019dwnet} contains 500 training videos and 100 test videos. Videos in this dataset depict a single posing model with diverse clothing and textures. All videos are resized to a $256\times 256$ resolution.
\item MGIF, collected in~\cite{siarohin2019animating}, is a cartoon animal dataset containing 900 training videos and 100 test videos. All videos are resized to a $256\times 256$ resolution.
\item VoxCeleb1~\cite{nagrani2017voxceleb} is a talking head dataset, containing 19522 training and 525 test videos. All videos are resized to a $256\times 256$ resolution.
\end{itemize}
\noindent\textbf{Evaluation protocols:} Since ground-truth videos are not available for evaluating generated videos in the motion transfer task, we follow the FOMM~\cite{siarohin2020first} evaluation protocol and take self-reconstruction as a proxy task to quantitatively evaluate the proposed method. More specifically, an input video is reconstructed from the appearance representation of its first frame and the motion flow of the entire video according to Eqn.~(\ref{global flow}). The same four metrics as in~\cite{siarohin2020first} are used for evaluation.
\begin{itemize}[itemsep=0pt]
\item $\mathcal{L}_1$. The average $\mathcal{L}_1$ distance between the pixel values of generated and ground-truth video frames.
\item Average Keypoint Distance (AKD). This metric computes the average keypoint distance between generated and ground-truth video frames. It is designed to evaluate the pose quality of the generated video frames.
\item Missing Keypoint Rate (MKR). For human body datasets, we further report MKR, which represents the percentage of keypoints that are not detected in generated video frames but are localized in the ground truth video frames.
\item Average Euclidean Distance (AED). This metric is designed to assess the identity quality of generated video frames based on specific feature representations;
in the feature space, the average Euclidean distance between generated and ground-truth video frames is computed.
\end{itemize}
\noindent\textbf{Implementation details:} See the supplementary material.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1\linewidth]{fig/animation_cmp.pdf}
\caption{Qualitative comparisons on cross-identity motion transfer. We present four source identities driven by two videos from the TaiChiHD dataset. It can be seen that our method generally synthesizes the most structure-stable results.}
\label{fig:animation compare}
\end{center}
\vspace{-0.5cm}
\end{figure*}
\begin{table*}[]
\begin{center}
\resizebox{\textwidth}{!}
{
\begin{tabular}{c|ccc|ccc|ccc|ccc|c}
\hline
\multicolumn{1}{l|}{} & \multicolumn{3}{c|}{TaiChiHD $256\times256$} & \multicolumn{3}{c|}{TaiChiHD $512\times512$} & \multicolumn{3}{c|}{Fashion} & \multicolumn{3}{c|}{VoxCeleb1} & MGIF \\
\multicolumn{1}{l|}{} & L1 & (AKD, MKR) & AED & L1 & (AKD, MKR) & AED & L1 & (AKD, MKR) & AED & L1 & AKD & AED & L1 \\ \hline
Monkey-Net & 0.077 & (10.798, 0.059) & 0.228 & - & - & - & - & - & - & 0.049 & 1.89 & 0.199 & - \\
FOMM & 0.057 & (6.649, 0.036) & 0.172 & 0.065 & (15.08, 0.061) & 0.202 & 0.013 & (1.142,0.005) & 0.059 & 0.041 & 1.28 & 0.133 & 0.0224\\
RegionMM & 0.048 & (5.246, 0.024) & 0.150 & 0.057 & (11.97, 0.028) & 0.166 & \textbf{0.011} & (1.187,0.005) & 0.056 & 0.040 & 1.28 & 0.133 & 0.0206\\ \hline
Ours DAM & 0.045 & (5.102, 0.024) & 0.150 & 0.054 & (10.83, 0.032) & 0.158 & \textbf{0.011} & (1.116, 0.005) & 0.055 & 0.040 & 1.26 & 0.130 & 0.0207 \\
Ours HDAM & \textbf{0.044} & (\textbf{4.790, 0.021}) & \textbf{0.146} & \textbf{0.053} & (\textbf{10.19, 0.027}) & \textbf{0.156} & \textbf{0.011} & (\textbf{1.041, 0.004}) & \textbf{0.054} & \textbf{0.039} & \textbf{1.24} & \textbf{0.124} &\textbf{0.0201} \\ \hline
\end{tabular}
}
\caption{Quantitative comparisons on the self-reconstruction task. We present results on four benchmarks; here, a lower score is preferred for all metrics. For fair comparison, motion anchors are set to 10 for all methods.}
\label{tab1}
\end{center}
\vspace{-0.7cm}
\end{table*}
\begin{table}[]
\begin{center}
\begin{tabular}{c|ccc}
\hline
\multicolumn{1}{l|}{} & TaiChiHD & Fashion & Voxceleb1 \\ \hline
\multicolumn{1}{c|}{Ours vs FOMM} & 91.6\% & 76.0\% & 54.2\% \\
\multicolumn{1}{c|}{Ours vs RegionMM} & 59.1\% & 66.6\% & 66.0\% \\ \hline
\end{tabular}
\caption{User preferences favoring our approach.}
\label{tab2}
\end{center}
\vspace{-0.65cm}
\end{table}
\subsection{Comparison with Existing Methods}
We compare our method with two recent model-free motion transfer methods: FOMM~\cite{siarohin2020first} and RegionMM~\cite{PCAMotion}.
\noindent\textbf{Quantitative results:} The comparisons are summarized in Table~\ref{tab1}. We can observe that our proposed HDAM approach generally achieves the best performance on all evaluation metrics. In particular, the fact that our $\mathcal{L}_1$ score is the lowest reflects the good quality of the videos generated by our method. Moreover, the improvement to AKD and MKR indicates that our method achieves good motion transfer, while the improved AED also reflects the appearance quality of the videos generated using our method.
In more detail, compared to the FOMM method, we achieve notable improvements on the TaiChiHD, FashionVideo and MGIF datasets, while also gaining better results on the VoxCeleb dataset. This clearly demonstrates the effectiveness of using deformable anchor models to regularize motion anchors. Moreover, the fact that our work outperforms the most recent related work, RegionMM, further proves the advantages of modeling object structure; notably, this superiority also holds in the case of higher-resolution inputs. We further note that the improvements on the VoxCeleb dataset are not as significant as those on the TaiChiHD and MGIF datasets. This is possibly because the structure of the human face is relatively simple, while the human body consists of multiple joints and movable parts, meaning that its motions are usually quite complicated. These results show that the deformable anchor model helps to transfer motion across various objects, especially those with complicated structures, which also validates the motivation of this work.
\noindent\textbf{User study:} We conduct a user study for cross-identity motion transfer. More specifically, we prepare 50 concatenated results consisting of a source frame, driving videos and videos generated by FOMM, RegionMM and our method; the synthesized videos are placed in random order in each of the concatenated videos. Fifty participants are asked to rank the three videos based on the appearance preservation and transferred motion. As Table~\ref{tab2} shows, participants clearly identified our videos as being of higher quality than the synthesized videos produced by existing methods.
\noindent\textbf{Qualitative results:} We additionally present examples of the videos generated by the three methods in Fig.~\ref{fig:animation compare}. Generally speaking, FOMM often synthesizes an abnormal body shape or an incorrect motion from the driving video. Moreover, while RegionMM is able to roughly depict the motion contained in the driving video, it may also fail to capture more detailed structural information, leading to obvious artifacts (\eg, the lost or weirdly warped human arms). By contrast, our method is generally able to capture the motion details well and produces more stable results. More qualitative results are provided in the supplementary material.
\subsection{Ablation Study} \label{4.3}
We next conduct an ablation study to analyze the impact of our proposed components. Specifically, we study two variants of our proposed approach: 1) the basic deformable anchor model in Section~\ref{3.2} (referred to as ``Ours (DAM)"), and 2) the hierarchical deformable anchor model in Section~\ref{3.3} (referred to as ``Ours (HDAM)"). We further employ FOMM in which no deformable anchor model is used, as a baseline for comparison.
As seen in Table~\ref{tab1}, we conduct experiments on the TaiChiHD dataset and analyze the results. We observe that \textit{Ours (DAM)} achieves considerable improvements relative to the baseline FOMM, confirming the validity of exploiting object structures with a deformable anchor model in order to improve motion transfer. Moreover, by introducing the hierarchical deformable anchor model, \textit{Ours (HDAM)} achieves further improvements.
In Fig.~\ref{fig:ablation}, we present qualitative examples from our ablation study to reveal how our method works. We draw predicted anchors on generated frames to facilitate detailed analysis. As can be seen from the figure, FOMM generally fails to capture the local structure of the human body (such as hands and legs) due to incorrectly aligned motion anchors; by contrast, \textit{Ours (DAM)} can synthesize a relatively complete object structure, reflecting the effectiveness of DAM in constraining the object structure. Furthermore, \textit{Ours (HDAM)} generally learns a meaningful structure and synthesizes high-quality results while capturing stable and complete structure information, which further verifies the superiority of modeling the hierarchical object structure.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.88\linewidth]{fig/ablation2.pdf}
\caption{Qualitative ablation study. We visualize the latent root anchor as the largest dot and denote the intermediate anchors using medium-sized dots; correspondingly, the smallest dots represent motion anchors. Adjacent anchors are connected through straight lines according to the max attention weight, which reflects the constraint relation. Note that all learned intermediate anchors are overlapped with a motion anchor in this dataset; further analysis of this is provided in the supplementary material.}
\label{fig:ablation}
\vspace{-0.54cm}
\end{center}
\end{figure}
\subsection{Parameter Analysis} \label{4.4}
To validate the robustness of the proposed method, we study the influence of different hyper-parameter settings in this section. Specifically, we examine two important hyper-parameters in our model, namely the number of motion anchors and the number of intermediate anchors. We conduct experiments on the TaiChiHD and MGIF datasets for this analysis. The quantitative results are obtained with the \textit{Ours (HDAM)} model. As seen in Table~\ref{tab3}, our method continues to improve as more motion anchors are involved, with a large improvement visible when moving from 5 to 10 motion anchors and a relatively small one from 10 to 20. Moreover, our method generally works well with $2\sim6$ intermediate anchors, as can be seen from Table~\ref{tab4}. Our observations further suggest that, when more intermediate anchors are involved, the HDAM model tends to learn that some of them are meaningless or overlapping with other intermediate anchors; we attribute this to overfitting of the object structure. Overall, our method generalizes well across these hyper-parameters.
\subsection{Structure Visualizations} \label{4.5}
To understand the proposed method in more depth, we visualize the predicted hierarchical anchors of video frames on different datasets in Fig.~\ref{fig:introduction} and Fig.~\ref{fig:ablation}; more qualitative results are provided in the supplementary material. As is evident from the results, the learned root anchor is always located at the object centroid regardless of its identity or background; moreover, intermediate root anchors are often located at different local regions of an object, which enables them to capture more detailed motions of the parts in question. Note that in our hierarchical model, as seen in the fourth row of Fig.~\ref{fig:ablation}, when different motions occur, motion anchors can be regularized by different intermediate anchors according to the attention weights in Eqn.~(\ref{second order loss}). This reflects the ability of our HDAM model to flexibly constrain motion structures according to the varying motions in the dataset. In summary, the star-like motion structure learned by our deformable anchor model exhibits a strong ability to model stable motions between images.
\begin{table}[]
\begin{center}
\begin{tabular}{c|c|ccc}
\hline
\multicolumn{1}{l|}{} & MGIF & \multicolumn{3}{c}{TaiChiHD} \\
\multicolumn{1}{l|}{} & L1 & L1 & (AKD, MKR) & AED \\ \hline
5 & 0.0235 & 0.048 & (5.730, 0.028) & 0.159 \\
10 & 0.0201 & 0.044 & (4.790, 0.021) & 0.146 \\
20 & \textbf{0.0185} & \textbf{0.043} & \textbf{(4.615, 0.018)} & \textbf{0.138} \\ \hline
\end{tabular}
\caption{Quantitative performance with different numbers of motion anchors. We assess performance with 5, 10, and 20 motion anchors, respectively. The number of intermediate anchors is fixed at 3.}
\label{tab3}
\end{center}
\vspace{-0.5cm}
\end{table}
\begin{table}[]
\begin{center}
\begin{tabular}{c|c|ccc}
\hline
\multicolumn{1}{l|}{} & MGIF & \multicolumn{3}{c}{TaiChiHD} \\
\multicolumn{1}{l|}{} & L1 & L1 & (AKD, MKR) & AED \\ \hline
2 & 0.0202 & 0.045 & (\textbf{4.763, 0.021}) & \textbf{0.146} \\
3 & 0.0201 & \textbf{0.044} & (4.790, \textbf{0.021}) &\textbf{0.146} \\
4 & 0.0201 & \textbf{0.044} & (4.836, 0.022) & \textbf{0.146} \\
5 & \textbf{0.0199} & 0.045 & (4.926, 0.023) & \textbf{0.146} \\
6 & 0.0200 & \textbf{0.044} & (4.792, 0.022) & \textbf{0.146} \\ \hline
\end{tabular}
\caption{Quantitative performance with different numbers of intermediate anchors. We evaluate $2\sim6$ intermediate anchors, respectively. The number of motion anchors is fixed at 10.}
\label{tab4}
\end{center}
\vspace{-0.6cm}
\end{table}
\section{Conclusion}
This paper proposes a novel structure-aware motion transfer approach with a deformable anchor model. In DAM, a latent root anchor is designed to constrain the motion anchors. We then explore latent intermediate anchors to build the hierarchical DAM, leading to structure-stable motion transfer and yielding the best performance (both qualitatively and quantitatively) on existing benchmarks.
We further interpret our method through an insightful ablation study and validate its robustness to different hyper-parameter settings.
\noindent\textbf{Societal impact and limitations:} Motion transfer techniques could be misused to generate fake videos, which might have a negative societal impact. People should be cautious and obtain authorization when manipulating videos with these techniques. Moreover, while we demonstrate state-of-the-art performance, the results are not perfect. Some artifacts can still be observed in the presence of occlusion, large motion, complex backgrounds, \etc. We will study these issues in future work.
\noindent\textbf{Acknowledgement:} This work is partially supported by Alibaba Group through Alibaba Innovation Research Program.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Graphene~\cite{Novoselov} remains among the most fascinating and
attractive subjects in condensed matter physics today. This is
because of its exotic physical properties and the apparent
similarity of its mathematical model to the one describing
relativistic fermions in two dimensions. As a consequence of this
relativistic-like behavior, particles can tunnel through very high
barriers in contrast to the conventional tunneling of
non-relativistic particles, an effect known in relativistic field
theory as Klein tunneling. This tunneling effect has already been
observed experimentally~\cite{Stander} in graphene systems. There
are various ways for creating barrier structures in
graphene~\cite{Katsnelsonn, Sevinçli}. For instance, it can be done
by applying a gate voltage, cutting the graphene sheet into finite
width to create a nanoribbons, using doping or through the
creation of a magnetic barrier. In the case of graphene, results
of the transmission coefficient and the tunneling conductance were
already reported for the electrostatic barriers~\cite{Sevin,
Masir, DellAnna, Mukhopadhyay}, magnetic barriers~\cite{DellAnna, Choubabi, Mekkaoui},
potential barrier~\cite{Jellal, Alhaidari} and triangular barrier~\cite{HBahlouli}.
We study the transmission probability of Dirac fermions in graphene scattered
by a triangular double barrier
potential in the presence of an inhomogeneous magnetic field $B$.
We emphasize that the $B$-field discussed in
our manuscript is applied externally. It can be created for
instance by depositing a type-I superconducting film on top of the
system, removing a strip $|x|<d_1$ of the superconductor and
applying a perpendicular magnetic field. This patterning technique of
creating the desired magnetic field profile was proposed in
\cite{Matulis}. One of the interesting features of such
inhomogeneous magnetic field profile is that it can bind
electrons, contrary to the usual potential step. Such a step
magnetic field will indeed result in electron states that are
bound to the step $B$-field and that move in one direction along the
step. Thus there is a current along the $y$-direction but it is a
very small effect and is not relevant for our problem (those
electrons have $k_{x} = 0$). Indeed, we consider free electron
states that in general have a nonzero $k_x$, because otherwise they
would not tunnel. A recent work studied double barriers with a
magnetic field in graphene without a mass term \cite{Ramezani}.
The paper is organized as follows. In section 2, we formulate our
model by setting the Hamiltonian system describing particles
scattered by a triangular double barrier whose well potential zone is
subject to a magnetic field with a mass term. In section 3, we
consider the case of static double barriers and
derive the energy spectrum to finally
determine the transmission and reflection probabilities. Their
behaviors are numerically investigated; in particular, resonances are seen in different regions,
as well as the Klein tunneling effect. In section 4, we study the
same system but this time by taking into account the presence of an
inhomogeneous magnetic field.
Using boundary conditions, we split the energy into three domains
and then calculate the transmission probability in each case.
In each situation, we discuss the transmission at resonances that
characterize each region and stress the importance of our
results. We conclude our work in the final section.
\section{ Mathematical model}
We consider a system of massless Dirac fermions incident on a two-dimensional strip of
graphene having energy $E$ and at incidence angle
$\phi_1$ with respect to the $x$-direction. This system
is a flat sheet of graphene subject to a double triangular potential barrier
along the $x$-direction while particles are free in the
$y$-direction. Let us first describe the
geometry of our system, which is made of five regions denoted by
${\sf j}=1, \cdots, 5$. Each region is characterized by its constant
potential and interaction with external sources. All
regions are formally described by a Dirac-like Hamiltonian
\begin{equation}\label{Ham1}
H=v_{F}
{\boldsymbol{\sigma}}\cdot\left(\textbf{p}+\frac{e}{c}\textbf{A}\right)+
V(x){\mathbb
I}_{2}+G_p\Theta\left(d_{1}^{2}-x^{2}\right)\sigma_{z}
\end{equation}
where $v_{F}\approx 10^{6}~\mathrm{m/s}$ is the Fermi
velocity, ${\boldsymbol{\sigma}}=(\sigma_{x},\sigma_{y})$ and
$\sigma_{z}$ are the Pauli matrices in pseudospin space,
$\textbf{p}=-i\hbar(\partial_{x},
\partial_{y})$ is the momentum operator, ${\mathbb I}_{2}$ the $2 \times 2$ unit matrix,
$V(x)=V_{\sf j}$ is the electrostatic potential in the ${\sf j}$-th scattering region
and $\Theta$ is
the Heaviside step function.
The magnetic field $B(x, y)= B(x)$ is defined through the Landau gauge,
which allows the vector potential to be of the form $\textbf{A} =
(0,A_{y}(x))$ with $\partial_{x}A_{y}(x)= B(x)$. The parameter $G_p = m v_{F}^2$ is
the energy gap owing to the sublattice symmetry breaking; it can also be
seen as the energy gap $G_p = G_{p, so}$ originating from the
spin-orbit interaction.
First, let us specify the potential configuration that constitutes our double barrier
\begin{equation}\label{popro}
V(x)=
\left\{%
\begin{array}{ll}
\Lambda ( d_2 + \gamma x ) , & \hbox{$d_{1}\leq |x|\leq d_{2}$} \\
V_{2}, & \hbox{$ |x|\leq d_{1}$} \\
0, & \hbox{otherwise} \\
\end{array}%
\right.
\end{equation}
where $\gamma=\pm1$, with $\gamma=1$ for $x\in [-d_2,
-d_1]$ and $\gamma=-1$ for $x\in [d_1, d_2]$, and the parameter
$\Lambda$ defined by $ \Lambda = \frac{V_1}{d_2-d_1}$ gives the
slope of the triangular potentials. The graphical representation of
this potential is shown in Figure \ref{db.1}.
We define each potential region as follows: ${\sf{j = 1}}$ for $x \leq -d_2 $, ${\sf{j = 2}}$ for $ -d_2 \leq x \leq -d_1 $,
${\sf{j = 3}}$ for $ -d_1 \leq x \leq d_1 $, ${\sf{j = 4}}$ for $ d_1 \leq x \leq d_2 $ and ${\sf{j = 5}}$ for $ x \geq d_2 $. The corresponding
constant potentials are defined by \eqref{popro} and are denoted by $V_j$ in the $j$-th region.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm, height=5cm ]{db.eps}\\
\caption{\sf Schematic diagram for the monolayer graphene double barrier.}\label{db.1}
\end{figure}
\section{Static double barrier}
We consider the Hamiltonian describing Dirac fermions in graphene scattered by an
electrostatic double barrier potential without magnetic field $\textbf{A} = 0$. In this case
\eqref{Ham1} reduces to
\begin{equation} \label{eqhs}
H_{s}=v_{F} {\boldsymbol{\sigma}}\cdot\textbf{p}+V (x){\mathbb
I}_{2}+G_p\Theta\left(d_{1}^{2}-x^{2}\right)\sigma_{z}
\end{equation}
where ${\sf{j}}$ labels the five regions indicated schematically in
Figure \ref{db.1} showing the space configuration of the
potential profile. Due to sublattice symmetry, we need to study our system only near the \textbf{K}
point. The time-independent Dirac equation for the spinor
$\Phi(x,y)=\left(\varphi^{+},\varphi^{-}\right)^{T}$ at energy
$E=v_{F}\epsilon$ then reads, in the unit system $\hbar = m = c= 1$, as
\begin{equation} \label{eqh1}
\left[{\boldsymbol{\sigma}}\cdot\textbf{p}+{v_j}{\mathbb
I}_{2}+\mu\Theta\left(d_{1}^{2}-x^{2}\right)\sigma_{z}\right]\Phi(x,y)=\epsilon
\Phi(x,y)
\end{equation}
where {$V_{\sf j}=v_{F}v_{\sf j}$ and $G_{p}=v_{F}\mu$}. Our
system is supposed to have finite width $W$ with infinite mass
boundary conditions on the wavefunction at the boundaries $y = 0$
and $y = W$ along the $y$-direction \cite{Tworzydlo, Berry}. These
boundary conditions result in a quantization of the transverse
momentum along the $y$-direction as
\begin{equation}
k_{y}=\frac{\pi}{W}\left(n+\frac{1}{2}\right),\qquad n=0,1,2 \cdots.
\end{equation}
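As a purely illustrative aside (not part of the derivation), the allowed transverse momenta implied by the infinite-mass boundary conditions can be tabulated directly; the function name and parameter values below are our own choices.

```python
import math

def transverse_momenta(W, n_max):
    """Allowed transverse momenta k_y = (pi/W)(n + 1/2), n = 0, 1, ...,
    imposed by infinite-mass boundary conditions on a ribbon of width W."""
    return [math.pi / W * (n + 0.5) for n in range(n_max)]
```

For instance, a ribbon of width $W=\pi$ gives half-integer values $k_y = 0.5, 1.5, 2.5, \ldots$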
One can therefore assume a spinor solution of the form
$\Phi_{\sf j}=\left(\varphi_{\sf j}^{+}(x),\varphi_{\sf
j}^{-}(x)\right)^{T}e^{ik_{y}y}$, where the subscript ${\sf
j}= 1, 2, 3, 4, 5$ indicates the space region while the
superscripts indicate the two spinor components. Solving the
eigenvalue equation, we obtain the upper and lower components of
the eigenspinor in the incident and reflection region {\sf 1} ($x
< - d_{2}$)
\begin{eqnarray}\label{eq3}
&& \Phi_{\sf 1}= \left(
\begin{array}{c}
{1} \\
{z_{1}} \\
\end{array}
\right) e^{i(k_{1}x+k_{y}y)} + r_{s,n}\left(
\begin{array}{c}
{1} \\
{-z_{1}^{-1}} \\
\end{array}
\right) e^{i(-k_{1}x+k_{y}y)}\\
&& z_{1} =s_{1}\frac{k_{1}+ik_{y}}{\sqrt{k_{1}^{2}+k_{y}^{2}}}
\end{eqnarray}
where the sign function is defined by $s_{1}={\mbox{sign}}{\left(\epsilon\right)}$.
The corresponding dispersion relation takes the form
\begin{equation}
\epsilon=s_1\sqrt{k_1^2 +k_y^2}.
\end{equation}
In regions {\sf 2} and {\sf 4} ($d_{1}<|x|<d_{2}$), the general
solution can be expressed in terms of the parabolic cylinder
function \cite{Abramowitz, Gonzalez, HBahlouli} as
\begin{equation}\label{hiii1}
\chi_{\gamma}^{+}=c_{n1}
D_{\nu_n-1}\left(Q_{\gamma}\right)+c_{n2}
D_{-\nu_n}\left(-Q^{*}_{\gamma}\right)
\end{equation}
where {$c_{n1}$ and $c_{n2}$ are constants,
$\nu_n=\frac{ik_{y}^{2}}{2\varrho}$ and $
Q_{\gamma}(x)=\sqrt{\frac{2}{\varrho}}e^{i\pi/4}\left(\gamma
\varrho x+\epsilon_{0}\right)$, with
$\epsilon_{0}=\epsilon-v_{1}$, $\Lambda=v_{F}\varrho$,
$V_{1}=v_{F}v_{1}$}. The lower spinor component is given by
\begin{eqnarray}\label{hiii2}
\chi_{\gamma}^{-}=-\frac{c_{n2}}{k_{y}}\left[
2(\epsilon_{0}+\gamma \varrho x)
D_{-\nu_n}\left(-Q^{*}_{\gamma}\right)
+
\sqrt{2\varrho}e^{i\pi/4}D_{-\nu_n+1}\left(-Q^{*}_{\gamma}\right)\right]
-\frac{c_{n1}}{k_{y}}\sqrt{2\varrho}e^{-i\pi/4}
D_{\nu_n-1}\left(Q_{\gamma}\right).
\end{eqnarray}
The components of the spinor solution of the Dirac equation
\eqref{eqh1} in regions {\sf 2} and {\sf 4} can be obtained from
\eqref{hiii1} and \eqref{hiii2} with
$\varphi_{\gamma}^{+}(x)=\chi_{\gamma}^{+}+i\chi_{\gamma}^{-}$ and
$\varphi_{\gamma}^{-}(x)=\chi_{\gamma}^{+}-i\chi_{\gamma}^{-}$. Then, in
regions {\sf 2} and {\sf 4}
we have the eigenspinors
\begin{eqnarray}
\Phi_{\sf j } &=& a_{\sf{j}-1}\left(%
\begin{array}{c}
\eta^{+}_{\gamma}(x) \\
\eta^{-}_{\gamma}(x) \\
\end{array}%
\right)e^{ik_{y}y}+a_{\sf j}\left(%
\begin{array}{c}
\xi^{+}_{\gamma}(x) \\
\xi^{-}_{\gamma}(x)\\
\end{array}%
\right)e^{ik_{y}y}
\end{eqnarray}
where
the functions $ \eta^{\pm}_{\gamma}(x)$ and $\xi^{\pm}_{\gamma}(x)$
are given by
\begin{eqnarray}
&& \eta^{\pm}_{\gamma}(x)=
D_{\nu_{n}-1}\left(Q_{\gamma}\right)\mp
\frac{1}{k_{y}}\sqrt{2\varrho}e^{i\pi/4}D_{\nu_{n}}\left(Q_{\gamma}\right)\\
&& \xi^{\pm}_{\gamma}(x)=
\pm\frac{1}{k_{y}}\sqrt{2\varrho}e^{-i\pi/4}D_{-\nu_{n}+1}\left(-Q_{\gamma}^{*}\right)
\pm
\frac{1}{k_{y}}\left(-2i\epsilon_{0}\pm
k_{y}-\gamma2i \varrho x\right)D_{-\nu_{n}}\left(-Q_{\gamma}^{*}\right).
\end{eqnarray}
More explicitly, in region {\sf 2} this gives
\begin{eqnarray}
\Phi_{\sf 2} &=& a_1\left(%
\begin{array}{c}
\eta^{+}_{1}(x) \\
\eta^{-}_{1}(x) \\
\end{array}%
\right)e^{ik_{y}y}+a_{2}\left(%
\begin{array}{c}
\xi^{+}_{1}(x) \\
\xi^{-}_{1}(x)\\
\end{array}%
\right)e^{ik_{y}y}
\end{eqnarray}
and in region {\sf 4}
\begin{eqnarray}
\Phi_{\sf 4} &=& a_3\left(%
\begin{array}{c}
\eta^{+}_{-1}(x) \\
\eta^{-}_{-1}(x) \\
\end{array}%
\right)e^{ik_{y}y}+a_{4}\left(%
\begin{array}{c}
\xi^{+}_{-1}(x) \\
\xi^{-}_{-1}(x)\\
\end{array}%
\right)e^{ik_{y}y}
\end{eqnarray}
where $\gamma=\pm 1$.
Solving the eigenvalue equation for
the Hamiltonian \eqref{eqh1} in region {\sf 3}, we find the
following eigenspinor
\begin{eqnarray} \label{eq 7}
&& \Phi_{\sf 3}= b_1 \left(
\begin{array}{c}
{\alpha} \\
{\beta z_{3}} \\
\end{array}
\right) e^{i(k_{3}x+k_{y}y)} +b_2 \left(
\begin{array}{c}
{\alpha} \\
{-\beta z_{3}^{-1}} \\
\end{array}
\right) e^{i(-k_{3}x+k_{y}y)}\\
&& z_{3}
=s_{3}\frac{k_{3}+ik_{y}}{\sqrt{k_{3}^{2}+k_{y}^{2}}}
\end{eqnarray}
where the parameters $\alpha$ and $\beta$ are defined by
\begin{equation} \label{eq 18i}
{\alpha=\left({1+\frac{\mu}{ \epsilon-v_{2}}}\right)}^{1/2}, \qquad
{\beta=\left({1-\frac{\mu}{ \epsilon-v_{2}}}\right)}^{1/2}
\end{equation}
with the sign function
$s_{3}=\mbox{sign}(\epsilon-v_{2})$.
The wave vector is given by
\begin{equation}
k_{3}= \sqrt{(\epsilon-v_{2})^{2}-\mu^{2}-{k_{y}}^{2}}.
\end{equation}
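As a numerical aside (a sketch, not part of the derivation), the wave vector $k_3$ and the total-reflection condition $\mu>|\epsilon-v_2|$ discussed later in the text can be evaluated directly; all parameter values and function names below are illustrative.

```python
import cmath

def k3(eps, v2, mu, ky):
    """Wave vector in the gapped well region 3; becomes imaginary
    (evanescent mode) when (eps - v2)^2 < mu^2 + ky^2."""
    return cmath.sqrt((eps - v2)**2 - mu**2 - ky**2)

def fully_reflected(eps, v2, mu):
    """Condition mu > |eps - v2| under which every incoming wave
    is reflected, independently of ky."""
    return mu > abs(eps - v2)
```

For $\epsilon=40$, $v_2=30$, $\mu=4$, $k_y=1$ the mode propagates, while $\epsilon=32$ with the same parameters gives a purely imaginary $k_3$.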
Finally the eigenspinor in region {\sf 5} can be expressed as
\begin{equation}\label{eq6}
\Phi_{\sf 5}= t_{s,n} \left(
\begin{array}{c}
{1} \\
{z_{1}} \\
\end{array}
\right) e^{i(k_{1}x+k_{y}y)}.
\end{equation}
The transmission and reflection coefficients $(r_{s,n},t_{s,n})$
can be determined using the boundary conditions, that is, continuity of the
eigenspinors at each interface. Next we will use the above
solutions to explicitly determine the corresponding coefficients.
Now, requiring the continuity of the spinor wavefunctions at each
junction interface gives rise to the following set of equations
\begin{eqnarray}
\label{eq11} &&\Phi_{\sf 1}(-d_2)= \Phi_{\sf
2}(-d_2)\\
&&\Phi_{\sf 2}(-d_1)= \Phi_{\sf 3}(-d_1)\\
&&\Phi_{\sf 3}(d_1)= \Phi_{\sf 4}(d_1)\\
&&\Phi_{\sf 4}(d_2)= \Phi_{\sf 5}(d_2).
\end{eqnarray}
We prefer to express these relationships in terms of $2\times 2$
transfer matrices between different regions. For this, we write
\begin{equation}
\left(%
\begin{array}{c}
a_{\sf j} \\
b_{\sf j} \\
\end{array}%
\right)=M_{{\sf j}, {\sf j}+1}\left(%
\begin{array}{c}
a_{{\sf j}+1} \\
b_{{\sf j}+1} \\
\end{array}%
\right)
\end{equation}
where
$M_{{\sf j}, {\sf j}+1}$ are the transfer matrices that couple the
wavefunction in the ${\sf j}$-th region to the wavefunction in the
$({\sf j} + 1)$-th region. Finally, we obtain the full transfer matrix over the
whole double barrier which can be written, in an obvious notation,
as follows
\begin{equation}\label{systm1}
\left(%
\begin{array}{c}
1 \\
r_{s,n} \\
\end{array}%
\right)=\prod_{{\sf j}=1}^{4}M_{{\sf j}, {\sf j}+1}\left(%
\begin{array}{c}
t_{s,n} \\
0 \\
\end{array}%
\right)=M\left(%
\begin{array}{c}
t_{s,n} \\
0 \\
\end{array}%
\right)
\end{equation}
where the total transfer matrix $M=M_{12}\cdot M_{2
3}\cdot M_{34}\cdot M_{45}$ is given by
\begin{eqnarray}
&& M=\left(%
\begin{array}{cc}
m_{11} & m_{12} \\
m_{21} & m_{22} \\
\end{array}%
\right)
\\
&& M_{12}=\left(%
\begin{array}{cc}
e^{-\textbf{\emph{i}}k_{1} d_{2}} &e^{\textbf{\emph{i}}k_{1} d_{2}} \\
z_{1}e^{-\textbf{\emph{i}}k_{1} d_{2}} & -z^{\ast}_{1} e^{\textbf{\emph{i}}k_{1} d_{2}} \\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
\eta_{1}^{+}(-d_2) & \xi_{1}^{+}(-d_2)\\
\eta_{1}^{-}(-d_2) & \xi_{1}^{-} (-d_2)\\
\end{array}%
\right)
\\
&& M_{23}=\left(%
\begin{array}{cc}
\eta_{1}^{+}(-d_1) & \xi_{1}^{+}(-d_1)\\
\eta_{1}^{-}(-d_1) & \xi_{1}^{-} (-d_1)\\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
\alpha e^{-\textbf{\emph{i}}k_{3} d_{1}} &\alpha e^{\textbf{\emph{i}}k_{3} d_{1}} \\
\beta z_{3}e^{-\textbf{\emph{i}}k_{3} d_{1}} & -\beta z^{\ast}_{3} e^{\textbf{\emph{i}}k_{3} d_{1}} \\
\end{array}%
\right)
\\
&& M_{34}=\left(%
\begin{array}{cc}
\alpha e^{\textbf{\emph{i}}k_{3} d_{1}} &\alpha e^{-\textbf{\emph{i}}k_{3} d_{1}} \\
\beta z_{3}e^{\textbf{\emph{i}}k_{3} d_{1}} & -\beta z^{\ast}_{3} e^{-\textbf{\emph{i}}k_{3} d_{1}} \\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
\eta_{-1}^{+}(d_1) & \xi_{-1}^{+}(d_1)\\
\eta_{-1}^{-}(d_1) & \xi_{-1}^{-} (d_1)\\
\end{array}%
\right)
\\
&& M_{45}=\left(%
\begin{array}{cc}
\eta_{-1}^{+}(d_2) & \xi_{-1}^{+}(d_2)\\
\eta_{-1}^{-}(d_2) & \xi_{-1}^{-} (d_2)\\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
e^{\textbf{\emph{i}}k_{1} d_{2}} & e^{-\textbf{\emph{i}}k_{1} d_{2}} \\
z_{1} e^{\textbf{\emph{i}}k_{1} d_{2}} & -z_{1}^{\ast} e^{-\textbf{\emph{i}}k_{1} d_{2}} \\
\end{array}%
\right).
\end{eqnarray}
These can be used
to evaluate the reflection and transmission amplitudes
\begin{equation}\label{eq 63}
t_{s,n}=\frac{1}{m_{11}}, \qquad r_{s,n}=\frac{m_{21}}{m_{11}}.
\end{equation}
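The transfer-matrix composition and the extraction $t=1/m_{11}$, $r=m_{21}/m_{11}$ can be sketched numerically. Purely for illustration, the parabolic-cylinder regions are replaced here by a single square barrier of height $v$ and width $L$ (an assumption, not the triangular profile of this paper), for which probability conservation $T+R=1$ can be verified; all names and parameter values are our own.

```python
import numpy as np

def wave_matrix(eps, v, ky, x):
    """Columns are the right- and left-moving Dirac spinors of a region
    with constant potential v, evaluated at position x (gapless case)."""
    s = np.sign(eps - v)
    k = np.sqrt((eps - v)**2 - ky**2 + 0j)   # longitudinal wave vector
    z = s * (k + 1j * ky) / np.sqrt(k**2 + ky**2)
    return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                     [z * np.exp(1j * k * x), -np.conj(z) * np.exp(-1j * k * x)]])

def transmission(eps, v, L, ky):
    """Total transfer matrix across one square barrier; t = 1/m11 and
    r = m21/m11 as in the text."""
    M = (np.linalg.inv(wave_matrix(eps, 0, ky, 0)) @ wave_matrix(eps, v, ky, 0)
         @ np.linalg.inv(wave_matrix(eps, v, ky, L)) @ wave_matrix(eps, 0, ky, L))
    t, r = 1 / M[0, 0], M[1, 0] / M[0, 0]
    return abs(t)**2, abs(r)**2
```

Because the Dirac equation is first order, continuity of the spinor alone fixes the matching, and $T+R=1$ holds to machine precision for propagating modes.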
Some symmetry relationships between the parabolic cylinder functions are worth mentioning. These are given by
\begin{equation}
\eta_{-1}^{\pm}(d_1)=\eta_{1}^{\pm}(-d_1),\qquad
\eta_{-1}^{\pm}(d_2)=\eta_{1}^{\pm}(-d_2)
\end{equation}
\begin{equation}
\xi_{-1}^{\pm}(d_1)=\xi_{1}^{\pm}(-d_1),\qquad
\xi_{-1}^{\pm}(d_2)=\xi_{1}^{\pm}(-d_2).
\end{equation}
We should point out at this stage that we were unfortunately
forced to adopt a somewhat cumbersome notation for our wavefunction
parameters in different potential regions due to the relatively
large number of necessary subscripts and superscripts. Before
writing down the transmission amplitude, let us define the
following shorthand notation
\begin{equation}
\eta_{1}^{\pm}(-d_1)=\eta_{11}^{\pm},\qquad
\eta_{1}^{\pm}(-d_2)=\eta_{12}^{\pm}
\end{equation}
\begin{equation}
\xi_{1}^{\pm}(-d_1)=\xi_{11}^{\pm},\qquad
\xi_{1}^{\pm}(-d_2)=\xi_{12}^{\pm}.
\end{equation}
We are now in a position to determine the transmission amplitude $t_{s,n}$. After
some lengthy algebra, one can solve the linear system given in
\eqref{systm1} to obtain the transmission and reflection
amplitudes in closed form. As far as the transmission is concerned, we find
\begin{equation}
t_{s,n}=\frac{\alpha\beta e^{2i(k_{1}d_{2}+k_{3}d_{1})}
\left(1+z_{1}^{2}\right)\left(1+z_{3}^{2}\right)}{z_{3}\left(e^{4ik_{3}d_{1}}-1\right)\left(
\alpha^{2}\mathcal{Y}_{2}+\beta^{2}\mathcal{Y}_{1}\right)+\alpha\beta
\mathcal{Y}_{3}}\left(\xi_{11}^{+}\eta_{11}^{-}-\xi_{11}^{-}\eta_{11}^{+}\right)
\left(\xi_{12}^{-}\eta_{12}^{+}-\xi_{12}^{+}\eta_{12}^{-}\right)
\end{equation}
where we have defined the following quantities
\begin{eqnarray}
&&\mathcal{Y}_{1}=\left(\xi_{12}^{-}\eta_{11}^{+}-\xi_{11}^{+}\eta_{12}^{-}-
\xi_{12}^{+}\eta_{11}^{+}z_{1}+\xi_{11}^{+}\eta_{12}^{+}z_{1}\right)\left( \xi_{11}^{+}\eta_{12}^{+}+
\xi_{11}^{+}\eta_{12}^{-}z_{1}-\eta_{11}^{+}(\xi_{12}^{+}+\xi_{12}^{-}z_{1})\right)\\
&&
\mathcal{Y}_{2}=\left(\xi_{11}^{-}\eta_{12}^{+}-\xi_{11}^{-}\eta_{12}^{-}z_{1}-\eta_{11}^{-}(
\xi_{12}^{+}+\xi_{12}^{+}z_{1})\right)
\left( -\xi_{12}^{-}\eta_{11}^{-}+
\xi_{12}^{+}\eta_{11}^{-}z_{1}-\xi_{11}^{-}(\eta_{12}^{-}+\eta_{12}^{+}z_{1})\right)\\
&&
\mathcal{Y}_{3}=\Gamma_{0}\left(1+z_{1}^{2}z_{3}^{2}\right)+\Gamma_{1}z_{1}\left(1-z_{3}\right)+\Gamma_{2}\left(z_{1}^{2}+z_{3}^{2}\right)
+e^{4id_{1}k_{3}}\left(\Gamma_{3}+\Gamma_{4}\right)
\end{eqnarray}
as well as
\begin{eqnarray}
\Gamma_{0}&=&-\xi_{12}^{+}\xi_{12}^{-}\eta_{11}^{+}\eta_{11}^{-}
+\xi_{11}^{+}\xi_{12}^{-}\eta_{11}^{-}\eta_{12}^{+}+
\xi_{11}^{-}\xi_{12}^{+}\eta_{11}^{+}\eta_{12}^{-}-
\xi_{11}^{+}\xi_{11}^{-}\eta_{12}^{+}\eta_{12}^{-}\\
\Gamma_{1}&=&\left(\xi_{12}^{+}\right)^{2}\eta_{11}^{+}\eta_{11}^{-}
-\left(\xi_{12}^{-}\right)^{2}\eta_{11}^{+}\eta_{11}^{-}-
\xi_{11}^{-}\xi_{12}^{+}\eta_{11}^{+}\eta_{12}^{+}-
\xi_{11}^{+}\xi_{12}^{+}\eta_{11}^{-}\eta_{12}^{+}\\\nonumber
&&
+\xi_{11}^{+}\xi_{11}^{-}\left(\eta_{12}^{+}\right)^{2}
-\xi_{11}^{+}\xi_{11}^{-}\left(\eta_{12}^{-}\right)^{2}+
\xi_{11}^{-}\xi_{12}^{-}\eta_{11}^{+}\eta_{12}^{-}+
\xi_{11}^{+}\xi_{12}^{-}\eta_{11}^{-}\eta_{12}^{-}\\
\Gamma_{2}&=&\xi_{12}^{+}\xi_{12}^{-}\eta_{11}^{+}\eta_{11}^{-}
-\xi_{11}^{-}\xi_{12}^{-}\eta_{11}^{+}\eta_{12}^{+}-
\xi_{11}^{+}\xi_{12}^{+}\eta_{11}^{-}\eta_{12}^{-}+
\xi_{11}^{+}\xi_{11}^{-}\eta_{12}^{+}\eta_{12}^{-}\\
\Gamma_{3}&=&\left(\xi_{12}^{+}\right)^{2}\eta_{11}^{+}\eta_{11}^{-}\left(z_{3}^{2}-1\right)
-\xi_{11}^{-}\xi_{12}^{-}\eta_{11}^{+}\left[\eta_{12}^{+}\left(1+z_{1}^{2}z_{3}^{2}\right)-\eta_{12}^{-}z_{1}\left(z_{3}^{2}-1\right)\right]\\\nonumber
&&
+\xi_{11}^{-}\xi_{11}^{+}\left[\left(\eta_{12}^{+}\right)^{2}z_{1}
-\left(\eta_{12}^{-}\right)^{2}z_{1}+\eta_{12}^{+}\eta_{12}^{-}\left(z_{1}^{2}-1\right)\left(z_{3}^{2}-1\right)\right]
\\
\Gamma_{4}&=&\xi_{12}^{-}\eta_{11}^{-}\left[-\xi_{12}^{-}\eta_{11}^{+}z_{1}\left(z_{3}^{2}-1\right)+
\xi_{11}^{+}\left(\eta_{12}^{-}z_{1}\left(z_{3}^{2}-1\right)+\eta_{12}^{+}\left(z_{1}^{2}+z_{3}^{2}\right)\right)\right]\\\nonumber
&&\xi_{12}^{+}\xi_{12}^{-}\eta_{11}^{+}\eta_{11}^{-}\left(z_{1}^{2}+1\right)\left(z_{1}^{3}-1\right)-\xi_{12}^{+}
\xi_{11}^{+}\eta_{11}^{-}\left(\eta_{12}^{-}\left(1+z_{1}^{2}z_{3}^{2}\right)+\eta_{12}^{+}z_{1}\left(z_{1}^{3}-1\right)\right)\\\nonumber
&&
+\xi_{12}^{+}\xi_{11}^{-}\eta_{11}^{+}\left[\eta_{12}^{-}\left(z_{1}^{2}+z_{3}^{2}\right)+\eta_{12}^{+}z_{1}\left(1-z_{3}^{2}\right)\right].
\end{eqnarray}
Now we are ready for the computation of the reflection $R_{s,n}$
and transmission $T_{s,n}$ coefficients. For this purpose, we
introduce the associated current density $J$, which defines
$R_{s,n}$ and $T_{s,n}$ as
\begin{equation}
T_{s,n}=\frac{ J_{\sf {tra}}}{ J_{\sf {inc}}},\qquad R_{s,n}=\frac{J_{\sf {ref}}}{ J_{\sf {inc}}}
\end{equation}
where $J_{\sf inc}$, $J_{\sf ref}$ and $J_{\sf tra}$
stand for the incident, reflected and transmitted components of
the current density, respectively. It is easy to show that the
current density $J$ reads as
\begin{equation}
J= e\upsilon_{F}\Phi^{\dagger}\sigma _{x}\Phi
\end{equation}
which gives the following results for the incident, reflected and
transmitted components
\begin{eqnarray}
&& J_{\sf {inc}}= e\upsilon_{F}(\Phi_{\sf 1}^{+})^{\dagger}\sigma
_{x}\Phi_{\sf 1}^{+}
\\
&& J_{\sf {ref}}= e\upsilon_{F} (\Phi_{\sf 1}^{-})^{\dagger}\sigma _{x}\Phi_{\sf 1}^{-}
\\
&& J_{\sf {tra}}= e\upsilon_{F}\Phi_{\sf 5}^{\dagger}\sigma _{x}\Phi_{\sf 5}.
\end{eqnarray}
They allow us to express the transmission and reflection
probabilities as
\begin{equation}
T_{s,n}=|t_{s,n}|^{2},
\qquad
R_{s,n}=|r_{s,n}|^{2}.
\end{equation}
The above results will be investigated numerically for different
potential configurations to enable us to study the most important
features of our system. Obviously, we can
check that the probability conservation condition $T_{s,n}+R_{s,n}=1$ is well
satisfied. Let us consider Figure
\ref{fig1ab}a) where we show the transmission and reflection
probabilities versus the energy $\epsilon$. In the first energy interval $\epsilon \leq k_{y}$ we have no
transmission because it is a forbidden zone.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm, height=5cm]{fig1}\ \ \ \
\includegraphics[width=8cm, height=5cm]{fig8}\\
\caption{\sf{a) Transmission and reflection probabilities $(T_{s,n}, R_{s,n})$ as a function of energy
$\epsilon$ with $d_{1}=0.6$, $d_{2}=2.5$, $\mu=4$,
$k_{y}=2$,
$v_{1}=60$ and $v_{2}=30$. b) Transmission probability $T_{s,n}$ as a function of
energy gap $\mu$ with $d_{1}=0.5$, $d_{2}=1.5$, $\epsilon=\{15, 25, 35\}$,
$k_{y}=1$,
$v_{1}=50$ and $v_{2}=40$.}}\label{fig1ab}
\end{figure}
However, in the second energy intervals $k_{y} \leq \epsilon \leq
v_{2}-k_{y}-\frac{\mu}{2}$ and $v_{2}+k_{y}+\frac{\mu}{2}\leq\epsilon\leq v_{1}$, we observe
resonance oscillations due to the Klein regime. We have no transmission
(a transmission gap) when $v_{2}-k_{y}-\frac{\mu}{2}\leq\epsilon\leq
v_{2}+k_{y}+\frac{\mu}{2}$. Finally, in the interval where
$\epsilon > v_{1}$, there exist the usual high-energy
oscillations, which saturate asymptotically at high energy. Note
that \eqref{eq 18i} implies that for certain energy gap $\mu$,
there is no transmission. In fact, under the condition
\begin{equation}
\mu>|v_{2}-\epsilon|
\end{equation}
every incoming wave is reflected. In Figure \ref{fig1ab}b) we see that the transmission vanishes
once the energy gap exceeds the critical value
$\mu=|v_{2}-\epsilon|$.\\
\begin{figure}[h!]
\centering
\includegraphics[width=8cm, height=5cm]{fig3}\ \ \ \
\includegraphics[width=8cm, height=5cm]{fig2}\\
\caption{\sf{(Color online) Transmission probability for the static barrier $T_{s,n}$ as a function of energy
$\epsilon$ with $d_{1}=0.3$ (red), $d_{1}=1$, $d_{2}=2.5$, $\mu=4$ and $k_{y}=2$. a) $v_{1}=60$, $v_{2}=30$. b) $v_{1}=30$, $v_{2}=60$.}}\label{figgi3}
\end{figure}
Figure \ref{figgi3} presents the transmission $T_{s,n}$ as a
function of the incident electron energy $\epsilon$ for Dirac
fermions scattered by double triangular barriers with
$d_{2}=2.5$, $\mu=4$, $k_{y}=2$ and two values of the
well width, $d_{1}=\{0.3, 1\}$. In Figure
\ref{figgi3}a) we consider the parameters
$v_{1}=2 v_{2}=60$; the results show that as the well width
$d_{1}$ increases, the transmission resonances shift and the width
of the resonances increases in the intervals $k_{y} \leq \epsilon \leq
v_{2}-k_{y}-\frac{\mu}{2}$ and
$v_{2} + k_{y} + \frac{\mu}{2}\leq\epsilon\leq v_{1}$. In Figure
\ref{figgi3}b) we consider the parameters
$v_{1}=\frac{v_{2}}{2}=30$ for the Dirac fermion scattered by a
double barrier triangular potential where we distinguish five different zones.
\begin{itemize}
\item
The first is a forbidden zone where $0 \leq \epsilon \leq k_{y}$.
\item The
second zone $k_{y} \leq \epsilon \leq v_{1}$ is the upper Klein
energy zone with transmission resonances.
\item The third zone contains
oscillations.
\item The fourth one
$v_{2}-k_{y}-\frac{\mu}{2}\leq\epsilon\leq
v_{2}+k_{y}+\frac{\mu}{2}$ is a window where the transmission
vanishes: the wavefunction is damped and the transmission decays
exponentially.
\item The fifth zone $\epsilon\geq
v_{2}+k_{y}+\frac{\mu}{2}$ contains oscillations, the transmission
converges to unity at high energies similarly to the
non-relativistic result.
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=8cm, height=5cm]{fig4}\\
\caption{\sf{(Color online) Transmission probability for the static barrier $T_{s,n}$ as a function of the
potential energy
$v_{2}$ with $d_{1}=0.2$ (red), $d_{1}=0.6$ (green), $d_{1}=1.2$ (blue), $d_{2}=2$, $\mu=3$,
$k_{y}=1$,
$\epsilon=40$ and $v_{1}=60$.}}\label{figg3}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=8cm, height=5cm]{fig6}\\
\caption{\sf{(Color online) Transmission probability for the static barrier $T_{s,n}$ as a function of the
potential energy
$v_{1}$ with $d_{1}=0.7$ (red), $d_{1}=2$ (blue), $d_{1}=0.05$ (green), $d_{2}=2.5$, $\mu=4$, $k_{y}=2$,
$\epsilon=30$ and $v_{2}=60$.}}\label{figg4}
\end{figure}
We show in Figure \ref{figg3} the transmission versus the potential energy
$v_{2}$. It is clear that the transmission curves are
symmetric with respect to the point $v_{2} = \epsilon$,
while an increase in the value of $d_{1}$ widens the transmission dip.
Figure \ref{figg4} presents the transmission probability for a
static barrier $T_{s,n}$ as a function of the
strength of the applied voltage $v_{1}$. The transmission is
observed for small values of $v_{1}$ less than the energy of the
incident fermion. It then decreases sharply for $v_{1} > \epsilon
-(2k_{y}+\mu)$ until it reaches a relative minimum and then begins
to increase in an oscillatory manner.
\section{Magnetic double barrier}
Consider a two-dimensional system of Dirac fermions forming a graphene
sheet. This sheet is subject to a double
barrier potential in addition to a mass term and an externally
applied magnetic field as shown in Figure \ref{fig.1}. Particles
and antiparticles move in the positive and negative
energy regions, respectively; since the potentials depend only on
$x$, the system has translation invariance in the
$y$-direction.
A uniform perpendicular
magnetic field is applied, along the $z$-direction and confined to the well
region between the two barriers. It is defined by
\begin{equation}\label{eq04}
B(x,y)=B\Theta(d_{1}^{2}-x^{2})
\end{equation}
where $B$ is the strength of the magnetic field within the strip
located in the region $|x|< d_{1}$, and $B=0$ otherwise; $\Theta$ is
the Heaviside step function. Choosing the Landau gauge and
imposing continuity of the vector potential at the boundary to
avoid unphysical effects, we end up with the following vector
potential
\begin{equation}
\qquad A_{y}(x)=A_{j}=\frac{c}{e}\times\left\{%
\begin{array}{ll}
-\frac{1}{l_{B}^{2}}d_{1}, & \hbox{$x\leq -d_{1}$} \\
\frac{1}{l_{B}^{2}}x, & \hbox{$\mid x\mid<d_{1}$} \\
\frac{1}{l_{B}^{2}}d_{1}, & \hbox{$x\geq d_{1}$} \\
\end{array}%
\right.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=14cm, height=5cm ]{figmagn}\\
\caption{\sf Schematic diagram for the monolayer graphene double barrier in the presence of a magnetic field.}\label{fig.1}
\end{figure}
where the magnetic length is $l_{B}=\sqrt{1/B}$ in the unit
system ($\hbar=c=e=1$).
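The vector potential above can be checked for continuity numerically; the sketch below (with illustrative names and parameter values, in units $e=c=1$) takes the constant values outside the strip to match $A_y(\pm d_1)$, so the gauge is continuous everywhere.

```python
import numpy as np

def vector_potential(x, d1, lB):
    """Landau-gauge A_y(x) (units e = c = 1) for a magnetic field B
    confined to |x| < d1: linear inside the strip, constant outside,
    continuous at x = +-d1."""
    return np.clip(x, -d1, d1) / lB**2
```

Note that `np.clip` reproduces the three branches of the piecewise definition in a single expression.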
The system contains five regions denoted ${\sf j=1,2,3,4,5}$. The
left region (${\sf j=1}$) describes the incident electron beam
with the energy $E=v_{F}\epsilon$ at an incident angle
$\phi_{1}$ where $v_{F}$ is the Fermi velocity. The extreme right region
(${\sf j=5}$) describes the transmitted electron beam at an
angle $\phi_{5}$. The Hamiltonian for one-pseudospin component
describing our system reads as
\begin{equation} \label{equm1}
H_{m}=v_{F}
{\boldsymbol{\sigma}}\cdot\left(\textbf{p}+\frac{e}{c}\textbf{A}\right)+
V(x){\mathbb I}_{2}+G_p\Theta\left(d_{1}^{2}-x^{2}\right)\sigma_{z}
\end{equation}
{To proceed further,} we need to find the solutions of
the corresponding Dirac equation and their associated energy
spectrum.
\subsection{Energy spectrum solutions}
We are set to determine the eigenvalues and eigenspinors of the Hamiltonian
$H_{m}$. Indeed, the Dirac Hamiltonian describing regions 1 and 5, is
obtained from \eqref{equm1} as
\begin{equation}
H_{m}=\left(%
\begin{array}{cc}
0 & \upsilon_{F}\left(p_{x{\sf j}}-i\left(p_{y}+\frac{e}{c}A_{\sf j}\right)\right) \\
\upsilon_{F}\left(p_{x{\sf j}}+i\left(p_{y}+\frac{e}{c}A_{\sf j}\right)\right) & 0 \\
\end{array}%
\right).
\end{equation}
The corresponding time independent Dirac equation for the spinor
$\psi_{\sf j}(x,y)= (\varphi_{\sf j}^{+}, \varphi_{\sf j}^{-})^{T}$ at energy
$E=\upsilon_{F}\epsilon$ is given by
\begin{equation}
H_{m}\left(%
\begin{array}{c}
\varphi_{\sf j}^{+} \\
\varphi_{\sf j}^{-} \\
\end{array}%
\right)=\epsilon\left(%
\begin{array}{c}
\varphi_{\sf j}^{+} \\
\varphi_{\sf j}^{-} \\
\end{array}%
\right).
\end{equation}
This eigenproblem can be written as two linear differential
equations of the form
\begin{eqnarray}
&&\left(p_{x{\sf j}}-i\left(p_{y}+\frac{e}{c}A_{\sf j}\right)\right)\varphi_{\sf j}^{-}=\epsilon\varphi_{\sf j}^{+} \\
&& \left(p_{x{\sf j}}+i\left(p_{y}+\frac{e}{c}A_{\sf j}\right)\right)\varphi_{\sf j}^{+}=\epsilon\varphi_{\sf j}^{-}
\end{eqnarray}
which gives the energy eigenvalue
\begin{equation}
\epsilon=s_{\sf j} \sqrt{p_{x{\sf j}}^{2}+\left(p_{y}+\frac{e}{c}A_{\sf j}\right)^{2}}
\end{equation}
where $s_{\sf j}=\mbox{sign}(\epsilon)$. This implies
\begin{equation}
p_{x{\sf j}}=\sqrt{\epsilon ^{2}-\left(p_{y}+\frac{e}{c}A_{\sf j}\right)^{2}}
\end{equation}
with incoming momentum ${\boldsymbol{p_{\sf j}}}=(p_{x{\sf j}}, p_{y})$ and
${\boldsymbol{r}}=(x, y)$. The incoming wave function is
\begin{eqnarray}
&& \psi_{in}=\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
1 \\
z_{p_{x{\sf j}}}\end{array}\right)e^{\textbf{\emph{i}}{\boldsymbol{p_{\sf j}}}{\boldsymbol{r}}}\\
&& z_{p_{x{\sf j}}}=z_{\sf j}=s_{\sf j}\frac{p_{x{\sf j}}
+i(p_{y}+\frac{e}{c}A_{\sf j})}{\sqrt{(p_{x{\sf j}})^{2}
+(p_{y}+\frac{e}{c}A_{\sf j})^{2}}}=s_{\sf j}
e^{\textbf{\emph{i}}\phi_{\sf j}}
\end{eqnarray}
where $s_{\sf j}=\mbox{sgn}(\epsilon)$ and $\phi_{\sf j}=\arctan\left(\frac{p_{y}+\frac{e}{c}A_{\sf j}}{p_{x{\sf j}}}\right)$ is the angle that
the electrons make with the {$x$-direction}; $p_{x1}$ and
$p_{y}$ are the $x$- and $y$-components of the incident electron wave
vector, respectively. The eigenspinors are given by
\begin{eqnarray}
&& \psi_{\sf j}^{+}=\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
1 \\
z_{\sf j}\end{array}\right)e^{\textbf{\emph{i}}(p_{x{\sf j}} x +p_{y} y)}\\
&&
\psi_{\sf j}^{-}=\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
1 \\
-z^{*}_{\sf j}\end{array}\right)e^{\textbf{\emph{i}}(-p_{x{\sf j}} x +p_{y} y)}.
\end{eqnarray}
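The longitudinal momentum and propagation angle in each region follow directly from the dispersion relation; the sketch below (illustrative names and values, units $e=c=1$) shows how the vector potential shifts the transverse momentum, so that the emergence angle in region 5 differs from the incidence angle in region 1.

```python
import numpy as np

def p_x(eps, py, A):
    """Longitudinal momentum from eps^2 = p_x^2 + (py + A)^2 (e = c = 1);
    complex result signals an evanescent mode."""
    return np.sqrt(complex(eps**2 - (py + A)**2))

def angle(eps, py, A):
    """Propagation angle phi_j = arctan[(py + A)/p_x] with the x-axis."""
    return np.arctan2(py + A, p_x(eps, py, A).real)
```

With $A_1=-d_1/l_B^2$ and $A_5=+d_1/l_B^2$, an electron entering at $\phi_1=0$ exits at a finite angle $\phi_5$, illustrating magnetic refraction across the strip.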
It is straightforward to solve the tunneling problem for Dirac
fermions. We assume that the incident wave propagates at the angle
$\phi_{1}$ with respect to the {$x$-direction} and write the
components, of the Dirac spinor $\varphi_{\sf j}^{+}$ and
$\varphi_{\sf j}^{-}$, for each
region, in the following form
$\star$ For $x<-d_{2}$ (region 1):
\begin{eqnarray}
&& \epsilon=
\left[p_{x1}^{2}+\left(p_{y}-\frac{1}{l_{B}^{2}}d_{1}\right)^{2}\right]^{\frac{1}{2}}\\
&& \psi_{1}=\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
1 \\
z_{1}\end{array}\right)e^{\textbf{\emph{i}}(p_{x1} x +p_{y} y)}+r_{m}\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
1 \\
-z^{*}_{1}\end{array}\right)e^{\textbf{\emph{i}}(-p_{x1} x +p_{y}
y)}\\
&& z_{1}=s_{1}\frac{p_{x1}
+i\left[p_{y}-\frac{1}{l_{B}^{2}}d_{1}\right]}{\sqrt{p_{x1}^{2}
+\left[p_{y}-\frac{1}{l_{B}^{2}}d_{1}\right]^{2}}}.
\end{eqnarray}
$\star$ For $x
> d_{2}$ (region 5):
\begin{eqnarray}
&& \epsilon=\left[p_{x5}^{2}+\left(p_{y}+\frac{1}{l_{B}^{2}}d_{1}\right)^{2}\right]^{\frac{1}{2}}\\
&& \psi_{5}=\frac{1}{\sqrt{2}}t_{m}\left(
\begin{array}{c}
1 \\
z_{5}\end{array}\right)e^{\textbf{\emph{i}}(p_{x5} x +p_{y} y)}\\
&& z_{5}=s_{5}\frac{p_{x5}
+i\left[p_{y}+\frac{1}{l_{B}^{2}}d_{1}\right]}{\sqrt{p_{x5}^{2}
+\left[p_{y}+\frac{1}{l_{B}^{2}}d_{1}\right]^{2}}}.
\end{eqnarray}
$\star$ In region {\sf 2} and {\sf 4} ($d_{1}<|x|<d_{2}$):
The general solution can be expressed in terms of the parabolic
cylinder function \cite{Abramowitz, Gonzalez, HBahlouli} as
\begin{equation}\label{hii1}
\chi_{\gamma}^{+}=c_{1}
D_{\nu_\gamma-1}\left(Q_{\gamma}\right)+c_{2}
D_{-\nu_\gamma}\left(-Q^{*}_{\gamma}\right)
\end{equation}
where
$\nu_{\gamma}=\frac{i}{2\varrho}\left(k_{y}-\gamma\frac{d_{1}}{l_{B}^{2}}\right)^{2}$,
$\epsilon_{0}=\epsilon-v_{1}$ and $
Q_{\gamma}(x)=\sqrt{\frac{2}{\varrho}}e^{i\pi/4}\left(\gamma
\varrho x+\epsilon_{0}\right) $, while $c_{1}$ and $c_{2}$ are
constants. The other component then reads
\begin{eqnarray}\label{hii2}
\chi_{\gamma}^{-}&=&-c_{2}\frac{1}{k_{y}-\gamma\frac{d_{1}}{l_{B}^{2}}}\left[
2(\epsilon_{0}+\gamma \varrho x)
D_{-\nu_\gamma}\left(-Q^{*}_{\gamma}\right)
+
\sqrt{2\varrho}e^{i\pi/4}D_{-\nu_\gamma+1}\left(-Q^{*}_{\gamma}\right)\right]\nonumber\\
&&
-\frac{c_{1}}{k_{y}-\gamma\frac{d_{1}}{l_{B}^{2}}}\sqrt{2\varrho}e^{-i\pi/4}
D_{\nu_\gamma-1}\left(Q_{\gamma}\right)
\end{eqnarray}
The components of the spinor solution of the Dirac equation
\eqref{eqh1} in regions {\sf 2} and {\sf 4} can be obtained from
\eqref{hii1} and \eqref{hii2} with
$\varphi_{\gamma}^{+}(x)=\chi_{\gamma}^{+}+i\chi_{\gamma}^{-}$ and
$\varphi_{\gamma}^{-}(x)=\chi_{\gamma}^{+}-i\chi_{\gamma}^{-}$. We
have the eigenspinor
\begin{eqnarray}
\psi_{\sf j} &=& a_{{\sf j}-1}\left(%
\begin{array}{c}
u^{+}_{\gamma}(x) \\
u^{-}_{\gamma}(x) \\
\end{array}%
\right)e^{ik_{y}y}+a_{\sf j}\left(%
\begin{array}{c}
v^{+}_{\gamma}(x) \\
v^{-}_{\gamma}(x)\\
\end{array}%
\right)e^{ik_{y}y}
\end{eqnarray}
where ${\sf j=2, 4}$ and $\gamma=\pm 1$; the functions
$u^{\pm}_{\gamma}(x)$ and $v^{\pm}_{\gamma}(x)$ are given by
\begin{eqnarray}
u^{\pm}_{\gamma}(x)&=&
D_{\nu_{\gamma}-1}\left(Q_{\gamma}\right)\mp
\frac{1}{k_{y}-\gamma\frac{d_{1}}{l_{B}^{2}}}\sqrt{2\varrho}e^{i\pi/4}D_{\nu_{\gamma}}\left(Q_{\gamma}\right)
\\
v^{\pm}_{\gamma}(x)&=&
\pm\frac{1}{k_{y}-\gamma\frac{d_{1}}{l_{B}^{2}}}\sqrt{2\varrho}e^{-i\pi/4}D_{-\nu_{\gamma}+1}\left(-Q_{\gamma}^{*}\right)\nonumber\\
&&
\pm
\frac{1}{k_{y}-\gamma\frac{d_{1}}{l_{B}^{2}}}\left(-2i\epsilon_{0}\pm
\left(k_{y}-\gamma\frac{d_{1}}{l_{B}^{2}}\right)-\gamma2i \varrho x\right)D_{-\nu_{\gamma}}\left(-Q_{\gamma}^{*}\right).
\end{eqnarray}
In region {\sf 2}:
\begin{eqnarray}
\psi_{\sf 2} &=& a_1\left(%
\begin{array}{c}
u^{+}_{1}(x) \\
u^{-}_{1}(x) \\
\end{array}%
\right)e^{ik_{y}y}+a_{2}\left(%
\begin{array}{c}
v^{+}_{1}(x) \\
v^{-}_{1}(x)\\
\end{array}%
\right)e^{ik_{y}y}
\end{eqnarray}
In region {\sf 4}:
\begin{eqnarray}
\psi_{\sf 4} &=& a_3\left(%
\begin{array}{c}
u^{+}_{-1}(x) \\
u^{-}_{-1}(x) \\
\end{array}%
\right)e^{ik_{y}y}+a_{4}\left(%
\begin{array}{c}
v^{+}_{-1}(x) \\
v^{-}_{-1}(x)\\
\end{array}%
\right)e^{ik_{y}y}
\end{eqnarray}
$\star$ In the region $|x|\leq d_{1}$:
From the nature of the system under consideration, we write the
Hamiltonian corresponding to region ${\sf 3}$ in matrix form as
\begin{equation}\label{eq 20}
H_m=v_{F}\left(%
\begin{array}{cc}
\frac{V_{2}}{v_{F}}+\frac{G_{p}}{v_{F}} & -i\frac{\sqrt{2}}
{l_{B}}\left(\frac{l_{B}}{\sqrt{2}}\left(\partial_{x}-i\partial_{y}+\frac{e}{c}A_{3}\right)\right)\\
i\frac{\sqrt{2}}{l_{B}}\left(\frac{l_{B}}{\sqrt{2}}\left(-\partial_{x}-i\partial_{y}
+\frac{e}{c}A_{3}\right)\right) & \frac{V_{2}}{v_{F}}-\frac{G_{p}}{v_{F}}\\
\end{array}%
\right)
\end{equation}
Note that the energy gap $G_{p}$ behaves like a mass term in the Dirac equation.
Certainly this will affect the above results and lead to
interesting consequences on the transport properties of our
system. We determine the eigenvalues and eigenspinors of the
Hamiltonian $H_m$ by considering the time-independent equation for the spinor
$\psi_{3}(x, y)=(\psi_{3}^{+}, \psi_{3}^{-})^{T}$. Since the
transverse momentum $p_{y}$ is conserved, we can write the wave
function as
$\psi_{3}(x, y)=e^{ip_{y}y} \varphi_{3}(x)$
with $\varphi_{3}(x)= (\varphi_{3}^+, \varphi_{3}^-)^{T}$. With the energy defined by
$E=\upsilon_{F}\epsilon$, this leads to
\begin{equation}\label{eq 23}
H_{m}\left(%
\begin{array}{c}
\varphi_{3}^+ \\
\varphi_{3}^-\\
\end{array}%
\right)=\epsilon\left(%
\begin{array}{c}
\varphi_{3}^+\\
\varphi_{3}^-\\
\end{array}%
\right)
\end{equation}
At this stage, it is convenient to introduce the annihilation and
creation operators. They can be defined as
\begin{eqnarray}
a=\frac{l_{B}}{\sqrt{2}}\left(\partial_{x}+k_{y}+\frac{e}{c}A_{3}\right),
\qquad
a^{\dagger}=\frac{l_{B}}{\sqrt{2}}\left(-\partial_{x}+k_{y}+\frac{e}{c}A_{3}\right)
\end{eqnarray}
which obey the canonical commutation relation $[a,
a^{\dagger}]={\mathbb I}$. Rescaling our energies $G_{p}=\upsilon_{F}\mu$
and $V_{2}=\upsilon_{F}v_{2}$, \eqref{eq 23} can be
written in terms of $a$ and $a^{\dagger}$ as
\begin{equation}
\left(%
\begin{array}{cc}
v_{2}+\mu & -i\frac{\sqrt{2}}{l_{B}}a \\
+i\frac{\sqrt{2}}{l_{B}}a^{\dagger} & v_{2}-\mu \\
\end{array}%
\right)\left(%
\begin{array}{c}
\varphi_{3}^+ \\
\varphi_{3}^- \\
\end{array}%
\right)=\epsilon\left(%
\begin{array}{c}
\varphi_{3}^+ \\
\varphi_{3}^- \\
\end{array}%
\right)
\end{equation}
which gives
\begin{eqnarray}\label{eq 25}
&&(v_{2}+\mu)\varphi_{3}^{+}-i\frac{\sqrt{2}}{l_{B}}a\varphi_{3}^-=\epsilon\varphi_{3}^+\\
&&\label{eq 26}
i\frac{\sqrt{2}}{l_{B}}a^{\dagger}\varphi_{3}^+ +
(v_{2}-\mu)\varphi_{3}^{-}=\epsilon\varphi_{3}^-.
\end{eqnarray}
Injecting \eqref{eq 26} into \eqref{eq 25}, we obtain a
second-order differential equation for
$\varphi_{3}^{+}$
\begin{equation}
\left[(\epsilon-v_{2})^{2}-\mu^{2}\right]\varphi_{3}^{+}=\frac{2}{l_{B}^{2}}a
a^{\dagger}\varphi_{3}^{+}.
\end{equation}
It is clear that $\varphi_{3}^{+}$ is an eigenstate of the number
operator $\widehat{N}=a^{\dagger}a$ and therefore we identify
$\varphi_{3}^{+}$ with the eigenstates of the harmonic oscillator
$|n-1\rangle$, namely
\begin{equation}
\varphi_{3}^{+} \sim \mid n-1\rangle
\end{equation}
which is equivalent to
\begin{equation}
\left[(\epsilon-v_{2})^{2}-\mu^{2}\right]
\mid n-1\rangle=\frac{2}{l_{B}^{2}}n\mid n-1\rangle
\end{equation}
and the associated energy spectrum is
\begin{equation}
\epsilon-v_{2}=s_{3} \epsilon_{n}=s_{3}\frac{1}{l_{B}}\sqrt{(\mu
l_{B})^{2}+2n}
\end{equation}
where we have set $\epsilon_{n}=s_{3}(\epsilon-v_{2})$ and
$s_{3}=\mbox{sign}(\epsilon-v_{2})$, with $s_{3}=\pm 1$ corresponding to positive and
negative energy solutions. For this reason we write the eigenvalues
as
\begin{equation}
\epsilon=v_{2}+s_{3}\frac{1}{l_{B}}\sqrt{(\mu l_{B})^{2}+2n}
\end{equation}
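The gapped Landau-level spectrum above is simple to evaluate numerically. The following sketch (illustrative only, with hypothetical names; all inputs dimensionless, $\mu l_{B}$ and $v_{2}l_{B}$) returns $\epsilon\, l_{B}$ for a given level index $n$.

```python
import numpy as np

def epsilon_lB(n, mu_lB=0.0, v2_lB=0.0, s3=1):
    # epsilon * l_B = v2 * l_B + s3 * sqrt((mu * l_B)^2 + 2n)
    return v2_lB + s3 * np.sqrt(mu_lB**2 + 2.0 * np.asarray(n, dtype=float))
```

For $\mu=v_{2}=0$ this reduces to the familiar $\epsilon\, l_{B}=\pm\sqrt{2n}$ spectrum of ungapped graphene in a magnetic field.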
The second eigenspinor component then can be obtained from
\begin{equation}
\varphi_{3}^{-}=\frac{i\sqrt{2}a^{\dagger}}{(\epsilon-v_{2})l_{B}+\mu
l_{B}}\mid n-1\rangle=
\frac{i\sqrt{2n}}{(\epsilon-v_{2})l_{B}+\mu l_{B}} \mid n\rangle
\end{equation}
where $ \sqrt{2n}=\sqrt{(\epsilon_n l_{B})^{2}-(\mu
l_{B})^{2}}$. We find
\begin{equation}
\varphi_{3}^{-}=s_{3}i\sqrt{\frac{\epsilon_n l_{B}-s_{3} \mu
l_{B}}{\epsilon_n l_{B}+s_{3} \mu l_{B}}} \mid n\rangle
\end{equation}
After normalization we arrive at the following expression for the positive and negative energy
eigenstates
\begin{equation}
\varphi_{3}=\frac{1}{\sqrt{2}}\left(%
\begin{array}{c}
\sqrt{\frac{\epsilon_n l_{B}+s_{3} \mu l_{B}}{\epsilon_n l_{B}}} \mid n-1\rangle \\
s_{3} i\sqrt{\frac{\epsilon_n l_{B}-s_{3} \mu l_{B}}{\epsilon_n l_{B}}} \mid n\rangle \\
\end{array}%
\right)
\end{equation}
Introducing the parabolic cylinder functions
$D_{n}(x)=2^{-\frac{n}{2}}e^{-\frac{x^{2}}{4}}H_{n}\left(\frac{x}{\sqrt{2}}\right)$, we express
the solution in region {\sf 3} as
\begin{equation} \psi_{\sf
3}(x,y)=b_{1}\psi_{3}^{+}+b_{2}\psi_{3}^{-}
\end{equation}
with the two components
\begin{equation}
\psi_{3}^{\pm}(x, y)=\frac{1}{\sqrt{2}}\left(%
\begin{array}{c}
\sqrt{\frac{\epsilon_n l_{B}+s_{3} \mu l_{B}}{\epsilon_n l_{B}}}
D_{\left(\left(\epsilon_n l_{B}\right)^{2}-(\mu l_{B})^{2} \right)/2-1}
\left(\pm \sqrt{2}\left(\frac{x}{l_{B}}+k_{y}l_{B}\right)\right) \\
\pm i\frac{s_{3}\sqrt{2}}{\sqrt{\epsilon_n l_{B}\left(\epsilon_n l_{B}+s_{3} \mu l_{B}\right)}}
D_{\left(\left(\epsilon_n l_{B}\right)^{2}-\left(\mu l_{B}\right)^{2}\right)/2}
\left(\pm \sqrt{2}\left(\frac{x}{l_{B}}+k_{y}l_{B}\right)\right) \\
\end{array}%
\right)e^{ik_{y}y}
\end{equation}
As usual the coefficients $(a_1,a_2,a_3,a_4,b_1,b_2,r_{m},t_{m})$ can be
determined from the boundary conditions, i.e. the continuity of the
eigenspinors at each interface.
\subsection{Transmission and reflection amplitudes}
We will now study some
interesting features of our system in terms of the
corresponding transmission probability. Before doing so, let us
simplify the notation by introducing the following shorthand
\begin{eqnarray}
&&\vartheta_{\tau1}^{\pm}=D_{\left[(\epsilon_{n}l_{B})^{2}-(\mu
l_{B})^{2}\right]/2-1}
\left[\pm \sqrt{2}\left(\frac{\tau d_{1}}{l_{B}}+k_{y}l_{B}\right)\right]\\
&& \zeta_{\tau1}^{\pm}= D_{\left[(\epsilon_{n}l_{B})^{2}-(\mu
l_{B})^{2}\right]/2}
\left[\pm \sqrt{2}\left(\frac{\tau d_{1}}{l_{B}}+k_{y}l_{B}\right)\right]\\
&& f_{1}^{\pm}=\sqrt{\frac{\epsilon_{n}\pm
\mu}{\epsilon_{n}}}, \qquad
f_{2}^{\pm}=\frac{\sqrt{2/l_{B}^{2}}}{\sqrt{\epsilon_{n}(\epsilon_{n}\pm
\mu)}}\\
&& u^{\pm}_{\gamma}(\tau d_{1})=u^{\pm}_{\gamma, \tau1},\qquad
u^{\pm}_{\gamma}(\tau d_{2})=u^{\pm}_{\gamma, \tau 2}\\
&& v^{\pm}_{\gamma}(\tau d_{1})=v^{\pm}_{\gamma, \tau 1},\qquad
v^{\pm}_{\gamma}(\tau d_{2})=v^{\pm}_{\gamma, \tau 2}
\end{eqnarray}
where $\tau=\pm$. The Dirac equation requires the following set
of continuity equations
\begin{eqnarray} \label{eq11} &&\psi_{\sf 1}(-d_2)= \psi_{\sf
2}(-d_2)\\
&&\psi_{\sf 2}(-d_1)= \psi_{\sf 3}(-d_1)\\
&&\psi_{\sf 3}(d_1)= \psi_{\sf 4}(d_1)\\
&&\psi_{\sf 4}(d_2)= \psi_{\sf 5}(d_2) \end{eqnarray}
That is, requiring the continuity of the spinor wave functions at each
junction interface gives rise to the above set of equations. We prefer to
express these relationships in terms of $2\times 2$ transfer
matrices $\mathcal{M}_{{\sf j},{\sf j}+1}$ between the {\sf j}-th and $({\sf j}+1)$-th regions;
multiplying them, we obtain the full transfer matrix over the whole double barrier,
which can be written, in an obvious notation, as
\begin{equation}\label{syst1}
\left(%
\begin{array}{c}
1 \\
r_{m} \\
\end{array}%
\right)=\prod_{{\sf j}=1}^{4}\mathcal{M}_{{\sf j},{\sf j}+1}\left(%
\begin{array}{c}
t_{m} \\
0 \\
\end{array}%
\right)=\mathcal{M}\left(%
\begin{array}{c}
t_{m} \\
0 \\
\end{array}%
\right)
\end{equation}
where the $\mathcal{M}_{{\sf j},{\sf j}+1}$ are transfer
matrices that couple the wave function in the ${\sf j}$-th region to the
wave function in the $({\sf j}+1)$-th region, and the total transfer matrix is
$\mathcal{M}=\mathcal{M}_{12}\cdot \mathcal{M}_{23}\cdot \mathcal{M}_{34}\cdot\mathcal{M}_{45}$. These are given explicitly by
\begin{eqnarray}
&& \mathcal{M}=\left(%
\begin{array}{cc}
\tilde{m}_{11} & \tilde{m}_{12} \\
\tilde{m}_{21} & \tilde{m}_{22} \\
\end{array}%
\right)\\
&& \mathcal{M}_{12}=\left(%
\begin{array}{cc}
e^{-\textbf{\emph{i}}p_{x1} d_{2}} &e^{\textbf{\emph{i}}p_{x1} d_{2}} \\
z_{1}e^{-\textbf{\emph{i}}p_{x1} d_{2}} & -z^{\ast}_{1} e^{\textbf{\emph{i}}p_{x1} d_{2}} \\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
u_{1,-2}^{+} & v_{1,-2}^{+}\\
u_{1,-2}^{-} & v_{1,-2}^{-}\\
\end{array}%
\right)\\
&& \mathcal{M}_{23}=\left(%
\begin{array}{cc}
u_{1,-1}^{+} & v_{1,-1}^{+}\\
u_{1,-1}^{-}& v_{1,-1}^{-}\\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
\vartheta_{1}^{+} &\vartheta_{1}^{-} \\
\zeta_{1}^{+} &\zeta_{1}^{-} \\
\end{array}%
\right)\\
&& \mathcal{M}_{34}=\left(%
\begin{array}{cc}
\vartheta_{-1}^{+} &\vartheta_{-1}^{-}\\
\zeta_{-1}^{+} & \zeta_{-1}^{-} \\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
u_{-1,1}^{+} & v_{-1,1}^{+}\\
u_{-1,1}^{-} & v_{-1,1}^{-} \\
\end{array}%
\right)\\
&& \mathcal{M}_{45}=\left(%
\begin{array}{cc}
u_{-1,2}^{+} & v_{-1,2}^{+}\\
u_{-1,2}^{-} & v_{-1,2}^{-}\\
\end{array}%
\right)^{-1}\left(%
\begin{array}{cc}
e^{\textbf{\emph{i}}p_{x5} d_{2}} & e^{-\textbf{\emph{i}}p_{x5} d_{2}} \\
z_{5} e^{\textbf{\emph{i}}p_{x5} d_{2}} & -z_{5}^{\ast} e^{-\textbf{\emph{i}}p_{x5} d_{2}} \\
\end{array}%
\right).
\end{eqnarray}
These will enable us to compute the reflection and transmission amplitudes
\begin{equation}\label{eq 63}
t_{m}=\frac{1}{\tilde{m}_{11}}, \qquad r_{m}=\frac{\tilde{m}_{21}}{\tilde{m}_{11}}.
\end{equation}
More explicitly, we have for transmission
\begin{equation}
t_{m}=
\frac{e^{id_{2}\left(p_{x1}+p_{x5}\right)}\left(1+z_{5}^{2}\right)\left(\vartheta_{1}^{-}\zeta_{1}^{+}+\vartheta_{1}^{+}\zeta_{1}^{-}
\right)}{f_{2}^{+}\left(f_{1}^{-}\mathcal{L}_{1}+if_{2}^{-}\mathcal{L}_{2}\right)+f_{1}^{+}\left(f_{2}^{-}\mathcal{L}_{3}+if_{1}^{-}
\mathcal{L}_{4}\right)}\mathcal{D}
\end{equation}
where the quantities $\mathcal{D}$, $\mathcal{L}_{1}$, $\mathcal{L}_{2}$, $\mathcal{L}_{3}$ and $\mathcal{L}_{4}$ are defined by
\begin{eqnarray}
\mathcal{D}&=&
\left(u_{-1,1}^{-}v_{-1,1}^{+}-u_{-1,1}^{+}v_{-1,1}^{-} \right)
\left(u_{1,-2}^{+}v_{1,-2}^{-}-u_{1,-2}^{-}v_{1,-2}^{+}
\right)\\
\mathcal{L}_{1}&=&
\vartheta_{-1}^{-}\zeta_{1}^{+}\mathcal{F}\mathcal{G}
-\vartheta_{1}^{-}\zeta_{-1}^{+}\mathcal{K}\mathcal{J}\\
\mathcal{L}_{2}&=&\left(\zeta_{1}^{+}\zeta_{-1}^{-}-\zeta_{1}^{-}\zeta_{-1}^{+}\right)\mathcal{F}\mathcal{J}\\
\mathcal{L}_{3}&=&
\vartheta_{-1}^{+}\zeta_{1}^{-}\mathcal{F}\mathcal{G}-\vartheta_{1}^{+}\zeta_{-1}^{-}\mathcal{K}\mathcal{J}\\
\mathcal{L}_{4}&=&\left(\vartheta_{1}^{+}\vartheta_{-1}^{-}-\vartheta_{1}^{-}\vartheta_{-1}^{+}\right)\mathcal{K}\mathcal{G}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{F}&=&\left[u_{1,-1}^{+}v_{1,-2}^{-}-u_{1,-2}^{-}v_{1,-1}^{+}-
z_{1}\left(u_{1,-1}^{+}v_{1,-2}^{+}-u_{1,-2}^{+}v_{1,-1}^{+}\right)\right]\\
\mathcal{G}&=&\left[u_{-1,1}^{-}v_{-1,2}^{+}-u_{-1,2}^{+}v_{-1,1}^{-}
+z_{5}\left(u_{-1,1}^{-}v_{-1,2}^{-}-u_{-1,2}^{-}v_{-1,1}^{-}\right)\right]\\
\mathcal{K}&=&\left[u_{1,-1}^{-}v_{1,-2}^{-}-u_{1,-2}^{-}v_{1,-1}^{-}-z_{1}\left(u_{1,-1}^{-}v_{1,-2}^{+}-u_{1,-2}^{+}v_{1,-1}^{-}\right)\right]\\
\mathcal{J}&=&\left[u_{-1,1}^{+}v_{-1,2}^{+}-u_{-1,2}^{+}v_{-1,1}^{+}+z_{5}\left(u_{-1,1}^{+}v_{-1,2}^{-}-u_{-1,2}^{-}v_{-1,1}^{+}\right)\right]
\end{eqnarray}
What we actually need are the
transmission $T_m$ and reflection $R_m$ probabilities. These
can be obtained using the electric current density $J$
corresponding to our system. From our previous Hamiltonian,
we can show that the incident, reflected and transmitted currents take the form
\begin{eqnarray}
&& J_{\sf {inc,m}}= e\upsilon_{F}(\psi_{1}^{+})^{\dagger}\sigma
_{x}\psi_{1}^{+}\\
&& J_{\sf {ref,m}}= e\upsilon_{F} (\psi_{1}^{-})^{\dagger}\sigma _{x}\psi_{1}^{-}\\
&& J_{\sf {tra,m}}= e\upsilon_{F}\psi_{5}^{\dagger}\sigma _{x}\psi_{5}.
\end{eqnarray}
These can be used to write the reflection and transmission probabilities as
\begin{equation}
T_{m}= \frac{p_{x5}}{p_{x1}}|t_{m}|^{2}, \qquad
R_{m}=|r_{m}|^{2}.
\end{equation}
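The transfer-matrix relations above translate directly into a few lines of linear algebra. The sketch below is a generic illustration (hypothetical names, not tied to the explicit interface matrices of this section): it multiplies the interface matrices and extracts $t_{m}$, $r_{m}$, $T_{m}$ and $R_{m}$.

```python
import numpy as np

def scattering(interface_matrices, px1, px5):
    # total transfer matrix M = M_12 M_23 M_34 M_45
    M = interface_matrices[0]
    for Mj in interface_matrices[1:]:
        M = M @ Mj
    t = 1.0 / M[0, 0]        # t_m = 1 / m11
    r = M[1, 0] / M[0, 0]    # r_m = m21 / m11
    T = (px5 / px1) * abs(t)**2
    R = abs(r)**2
    return t, r, T, R
```

With trivial (identity) interfaces and equal asymptotic momenta, this returns full transmission, $T=1$ and $R=0$, as expected.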
The physical outcome of particle scattering through the double
triangular barrier depends on the energy of the incoming particle.
We numerically evaluate the transmission probability $T_{m}$ as a
function of structural parameters of the graphene double triangular barrier
with a perpendicular magnetic field, including the energy
$\epsilon$, the $y$-component of the wave vector $k_{y}$, the magnetic
field $B$, the energy gap $\mu$ and the applied potentials $v_{1}$ and
$v_{2}$. The results are shown in Figures \ref{figm1}, \ref{figm2}
and \ref{figm3}, which in addition display the expected above-barrier full
transmission for some values of $\epsilon l_{B}$ and $v_{2}l_{B}$.\\
\begin{figure}[h!]
\centering
\includegraphics[width=8cm, height=5cm]{fig8m}\ \ \ \
\includegraphics[width=8cm, height=5cm]{fig9m}\\
\caption{\sf{(Color online) Transmission probability $T_{m}$ for the magnetic barrier as a function of energy
$\epsilon l_{B}$ with $\frac{d_{2}}{l_{B}}=1.5$,
$v_{1}l_{B}=60$, $v_{2}l_{B}=0$ and $\mu l_{B}=0$. (a) the parameters: $k_{y}l_{B}=2$ and
$\frac{d_{1}}{l_{B}}=\{0.12, 0.24, 0.6\}$. (b) the parameters:
$\frac{d_{1}}{l_{B}}=0.12$ and $k_{y}l_{B}=\{1, 2, 3,
5\}$.}}\label{figm1}
\end{figure}
We note that in Figure \ref{figm1}a), when the energy is less
than the effective barrier height, $\epsilon
l_{B}<k_{y}l_{B}+\frac{d_{1}}{l_{B}}$, we have zero transmission.
The second interval,
$k_{y}l_{B}+\frac{d_{1}}{l_{B}}\leq\epsilon l_{B}\leq v_{1}l_{B}$,
contains oscillations. Finally, the interval $\epsilon
l_{B}>v_{1}l_{B}$ contains the usual high energy barrier
oscillations, and the transmission asymptotically goes to unity at high energy.
Figure \ref{figm1}b) shows the transmission spectrum for
different wave vectors $k_{y}l_{B}$, with the energy gap $\mu l_{B}$
and $v_{2}l_{B}$ both set to zero. We see that as the wave vector $k_{y}l_{B}$ increases,
the zone of zero transmission widens according to the condition
$\epsilon l_{B}<k_{y}l_{B}+\frac{d_{1}}{l_{B}}$. In the second
interval the transmission oscillates between total transmission
and zero as $k_{y}l_{B}$ increases. Finally, in the interval $\epsilon
l_{B}>v_{1}l_{B}$ the transmission increases. \\
\begin{figure}[h!]
\centering
\includegraphics[width=8cm, height=5cm]{fig3m}\ \ \ \
\includegraphics[width=8cm, height=5cm]{fig2m}\\
\caption{\sf{(Color online) Transmission probability $T_{m}$ for the magnetic barrier as a function of energy
$E$ with $\frac{d_{1}}{l_{B}}=0.1$ (red), $\frac{d_{1}}{l_{B}}=0.5$ (blue), $\frac{d_{2}}{l_{B}}=1.5$, $\mu l_{B}=4$ and $k_{y} l_{B}=2$.
(a) the parameters: $v_{1} l_{B}=30$, $v_{2} l_{B}=60$. (b) the parameters: $v_{1}l_{B}=60$, $v_{2}l_{B}=30$.}}\label{figm2}
\end{figure}
On the other hand, if we keep the same well region and cancel both the
applied magnetic field and the mass term in the well region, the
series of potentials behaves like a simple double barrier with the
same effective mass $k_{y}$. Thus, in this case we
reproduce exactly the transmission obtained in \cite{Alhaidari}
for the massive Dirac equation with $m = k_{y}$. Let us treat the triangular
double barrier case when $v_{2}<v_{1}$ and when
$v_{2}>v_{1}$. In both cases, the transmission is plotted in
Figure \ref{figm2}. In Figure \ref{figm2}a), with $v_{2}>v_{1}$, we
distinguish five different zones characterizing the behavior of
the transmission coefficient:
\begin{itemize}
\item
The first is determined by the greater effective mass,
namely $\epsilon l_{B}<k_{y}l_{B}+\frac{d_{1}}{l_{B}}$.
\item The second identifies with the lower Klein energy zone characterized
by resonances and $k_{y}l_{B}+\frac{d_{1}}{l_{B}}<\epsilon
l_{B}<v_{1} l_{B}$. Here we have full transmission at some
specific energies despite the fact that the particle energy is
less than the height of the barrier. As $d_{1}/l_{B}$ increases,
the oscillations in the Klein zone get reduced. This strong
reduction of the transmission in the Klein zone seems to suggest
a potential suppression of the Klein tunneling as we increase
$d_{1}/l_{B}$.
\item
The third zone $v_{1}l_{B}<\epsilon
l_{B}<v_{2}l_{B}-k_{y}l_{B}-\frac{\mu l_{B}}{2}$ is a window where
the transmission oscillates around the value of the total
transmission.
\item The fourth zone defined by $v_{2}l_{B}-k_{y}l_{B}-\frac{\mu
l_{B}}{2}<\epsilon l_{B}<v_{2}l_{B}+k_{y}l_{B}+\frac{\mu
l_{B}}{2}$ is a window where the transmission is almost zero.
\item
The fifth zone $\epsilon
l_{B}>v_{2}l_{B}+k_{y}l_{B}+\frac{\mu l_{B}}{2}$ contains
oscillations, the transmission converges towards unity.
\end{itemize}
In contrast, for the case $v_{1}>v_{2}$, see Figure \ref{figm2}b), we distinguish four different zones
characterizing the behavior of the transmission coefficient:
\begin{itemize}
\item
The behavior in the first zone is the same as in Figure
\ref{figm2}a).
\item Concerning the zones $k_{y}l_{B}-\frac{d_{1}}{l_{B}}<\epsilon l_{B}<v_{2}l_{B}-k_{y}l_{B}-\frac{\mu l_{B}}{2}$
and $v_{2}l_{B}+k_{y}l_{B}+\frac{\mu l_{B}}{2}<\epsilon
l_{B}<v_{1}l_{B}$ the transmission oscillates similarly to Figure
\ref{figm2}a).
\item In the zone $v_{2}l_{B}-k_{y}l_{B}-\frac{\mu
l_{B}}{2}<\epsilon l_{B}<v_{2}l_{B}+k_{y}l_{B}+\frac{\mu
l_{B}}{2}$, one can see that both curves start from zero
transmission and oscillate while the valley gets wider as
$d_{1}/l_{B}$ decreases.
\item Finally, in the zone $\epsilon l_{B}>v_{1}l_{B}$
the transmission oscillates and approaches total transmission.
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=8cm, height=5cm]{fig4m}\\
\caption{\sf{(Color online) Transmission probability $T_{m}$ for the magnetic barrier as a function of the
potential
$v_{2}l_{B}$ with $\frac{d_{1}}{l_{B}}=0.1$ (green), $\frac{d_{1}}{l_{B}}=0.2$ (red), $\frac{d_{1}}{l_{B}}=0.34$ (blue),
$\frac{d_{2}}{l_{B}}=1.5$, $\mu l_{B}=4$, $k_{y} l_{B}=2$,
$v_{1}l_{B}=60$ and $\epsilon l_{B}=30$.}}\label{figm3}
\end{figure}
It is worthwhile to analyze the transmission versus
the potential $v_{2}l_{B}$. In doing so, we choose fixed values of
$d_{1}/l_{B}$ to produce Figure \ref{figm3}. It is clear that the transmission curves increase
as $d_{1}/l_{B}$ decreases in the intermediate zone.
\section{Conclusion}
We have considered a model to describe over-barrier
electron emission from the edge of monolayer graphene through
triangular electrostatic double barriers in the presence of a magnetic
field.
To underline the behavior of our system, we have considered two parts separately: the first includes a
static barrier and the second deals with a magnetic barrier. In both cases, we have set up the material
needed to analytically determine and numerically analyze the transmission probability. This has
been done by solving
the eigenvalue equation
to obtain the energy spectrum in terms of the different
physical parameters involved in the Hamiltonian of the system.
By using the continuity of the wavefunctions at the interfaces between the different regions
inside and outside the barriers, we have ensured conservation of the local current density and
derived the relevant transport coefficients of the present system. Specifically, using the
transfer matrix method, we have analyzed the corresponding
transmission coefficient and determined how the transmission
probability is affected by various physical parameters. In particular, for the static barrier,
resonances were observed in different regions, as
well as the Klein tunneling effect.
Subsequently, we have analyzed the same system, but this time taking
into account the presence of an inhomogeneous magnetic field. Using the boundary conditions, we have split
the energy into three domains and then calculated the transmission probability in each case. In each
situation, we have discussed the transmission at the resonances that characterize each region and stressed the
importance of our results.
\section*{Acknowledgments}
The generous support provided by the Saudi Center for Theoretical Physics (SCTP)
is highly appreciated by all authors. AJ acknowledges partial support by King Faisal University
while HB acknowledges the support of King Fahd University of Petroleum and Minerals under
research group project RGxxxx.
\section{Introduction}
While recent advances in Natural Language Processing have yielded high-quality language models such as BERT~\cite{devlin-etal-2019-bert}, GPT-3~\cite{brown2020language} and ELECTRA \cite{clark2020electra} which are able to continue sentences, fill in masked words and correctly parse human language, using these models for most use-case scenarios still requires them to be trained on a down-stream task using labeled data. For some tasks, e.g. sentiment analysis of reviews, creating datasets is relatively easy as large databases with annotations already exist (such as the IMDb movie review dataset~\cite{maas-EtAl:2011:ACL-HLT2011}). However, training a model on niche tasks often demands hand-crafting new datasets from spread-out documents. This is usually done by humans who collect, preprocess, and annotate sentences which is a laborious task and can result in biased and/or inhomogeneous labeling, e.g. if annotation instructions were not understood correctly or left room for subjective interpretation. This becomes especially apparent if multiple, non-expert individuals are involved in this process.\\
In requirements engineering, we usually work with large documents written in natural language~\cite{Mich04,Kassab14} which describe the specifications of a software project, usually classified as either functional requirements, specifying what functionality the system should provide, or non-functional requirements, specifying in what way the system should implement those functions. However, these documents are often updated during the life cycle of the project and span up to multiple hundreds of pages, depending on the project size. Keeping track of all the changes and maintaining the software based on the requirement document can soon become a challenge~\cite{fischbach20} which is why an automatic conversion to, e.g., UML diagrams can come in handy. To do so, it is necessary to parse the relations between entities from the written text into a structured format, thus creating a comparable corpus of requirements in natural language and the same relation in a formal language.\\
In this paper, we propose a semi-automatic approach that, given a clean, grammatically correct sentence stating a software requirement, outputs a labeling corresponding to the relation the requirement describes based on a small set of pre-defined rules of word dependency relations. This should reduce human bias manifesting in labels as the annotator does not actively choose the labels for each word anymore but instead defines abstract rules which provide for homogeneous, deterministic labeling and reduce the amount of labor for creating such datasets. This automatically annotated data can then be used for training a more powerful model, as shown by~\citet{schmitt-etal-2020-unsupervised}.\\
We summarize our main contributions as follows:
\begin{itemize}
\item We provide a high-quality, preprocessed dataset of 2,093 requirement sentences together with 1,848 automatically created labels and another 199 manually created labels for a subset of the automatically labeled sentences as a resource for further research projects.
\item We provide a flexible, semi-automatic framework for data annotation of the relation extraction domain based on dependency parsing and pattern matching.
\item We conduct a case study on the said framework on requirement document sentences, showing its annotation results are matching those of humans to a substantial degree.
\end{itemize}
\section{Related Work}
\citet{gamallo-etal-2012-dependency} propose a simple Open Information Extraction system based on dependency parse trees. The algorithm extracts triples with two arguments and a sentence part relating those. However, the patterns are not very sophisticated and put a large part of the sentence into the relation. Hence, this approach is not suitable for our use case as we would eventually like to generate object diagrams from the relations we extracted.
\citet{erkan-etal-2007-semi} use dependency parse trees to extract relations between proteins from sentences. They do so by classifying whether a sentence, given a dependency tree, describes a relation between any pair of proteins occurring in the sentence using semi-supervised harmonic functions and support vector machines. However, their entities (the protein names) are already annotated which is not the case if we only have the raw sentences as in our approach.
\citet{mausam-etal-2012-open} use dependency trees and a labeled bootstrap dataset to automatically generate patterns for information extraction, unlike our approach, which does not require annotating any data manually but instead only requires producing patterns. While this approach might be able to extract simple triples well, one needs either a larger annotated dataset, defeating the purpose of our work, or the patterns might not generalize well, thus being unsuitable for constructing a high-quality annotated corpus.
\citet{reddy-etal-2016-transforming} propose an algorithm to automatically extract logical expressions from dependency parse trees for question answering. These were then converted into a graph indicating the relations between the named entities in the sentence by applying semantic parsing. However, this approach always converts the entire sentence into a graph and may include information that is irrelevant for a dataset that is to be generated.
\citet{inago2019parsing} use a rule-based approach on dependency trees to process natural language car parking instructions with decision trees for automated driving systems. Unlike our data (or most datasets in general), sentences of the application domain are very short and similar in structure. While our approach could be effectively converted into a decision tree, it is easier to construct rules with our pattern engine for more complex data.
\section{Corpus Creation}
\subsection{Dataset} For our dataset, we use 19 publicly available requirement documents in the English language from the PURE dataset~\cite{ferrari2017pure}, with a large topical variety, including governmental institution software in military and scientific fields, inventory management systems and video games. All documents are provided in .PDF, .HTML or .DOC format. From these, we manually extracted 2,104 requirement sentences (1,639 functional, 465 non-functional requirements).
\subsection{Preprocessing} As we want to automatically dependency parse our sentences, we have to ensure that all input to the model is grammatically and orthographically sound. We also have to ensure that any unnecessary information is removed to not confuse the parser. Therefore, we manually applied the following formatting operations to each sentence during data extraction:
\begin{itemize}
\item Splitting of enumerations into multiple sentences, adjusting words if necessary to make the sentence sound (e.g., nounification of verbs); e.g., "The system has to include a) [...] b) [...] c) [...]" becomes 3 sentences, each including exactly one of the requirements
\item Removal of extra inter-punctuation (additional spaces, dots, commas, etc.)
\item Removal of references to sections, tables, figures, or other requirements of the document as they are not relevant for extracting the relation of the sentence itself
\item Removal of abbreviations after written-out expressions (e.g., in "automated teller machine (ATM)", the "(ATM)" is dropped)
\item Removal of requirement reference numbers
\item Correction of spelling mistakes where obvious
\item Adding of dots at the end of each sentence if missing
\item Changing the first letter of a sentence to upper case if it is not already
\item Removal of quotation marks around pseudo-correct terms (e.g., 'the "processor" will [...]' becomes 'the processor will [...]')
\item Removal of explicit explanations of what is included in some term (e.g., "errors of either kind, i.e. hardware and software, [...]")
\item Lower-casing of words if they are not abbreviations (e.g., "NOT" becomes "not")
\item Removal of brackets around an additional plural 's' (e.g., "socket(s)" becomes "sockets")
\item Exchanging "/" with "and" or "or" where applicable and possible given the context (e.g. "The system should support adding/deleting files" becomes "The system should support adding and deleting files")
\item Unification of the possessive 's' preceding symbols ("`" and "´" are changed to "'")
\item Removal of duplicate sentences (11 in total)
\end{itemize}
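Most of these operations were applied manually during extraction, but a few of the purely mechanical ones (collapsing extra whitespace, adding missing final dots, capitalization, dropping duplicate sentences) could in principle be automated. The snippet below is an illustrative sketch of such automation, not the procedure actually used.

```python
import re

def basic_cleanup(sentences):
    # collapse whitespace, add missing final dots, capitalize the first
    # letter, and drop duplicate sentences (keeping the first occurrence)
    seen, out = set(), []
    for s in sentences:
        s = re.sub(r"\s+", " ", s).strip()
        if not s:
            continue
        if s[-1] not in ".!?":
            s += "."
        s = s[0].upper() + s[1:]
        if s not in seen:
            seen.add(s)
            out.append(s)
    return out
```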
After these preprocessing steps, the average sentence length is 19.87 words, the maximum is 69 words and the minimum 4 words.
\subsection{Labeling} These final 2,093 sentences (1,628 functional, 465 non-functional requirements) are parsed to extract dependencies using the Neural Adobe-UCSD Parser~\cite{mrini-etal-2020-rethinking} which achieved state-of-the-art performance on the Penn Treebank dataset \cite{marcus-etal-1993-building}. Based on these dependencies, we handcraft a total of 102 patterns to label 91.03\% of the functional and 78.71\% of the non-functional sentences without any further human interaction. Each pattern is a sequence of triples $(l, dp, c)$ where $l$ is a label, $dp$ a sequence of dependency labels forming a downward path in the dependency tree and $c$ a Boolean value indicating whether all children (direct and indirect) should be left out from labeling or not. Each sequence applies all or a subset of the following entity tags to the sentences:
\begin{itemize}
\item \texttt{ent1}: The main entity of the requirement. Either the acting component or the component on which a constraint is applied (if there is no second entity).
\item \texttt{rel}: The relation/action of the requirement.
\item \texttt{ent2}: The passive entity of the requirement. Either the component on which an action is performed or which is passively involved in the action.
\item \texttt{cond}: Any modifier of the requirement. Can further specify the requirement or put conditions on how or when it applies.
\end{itemize}
An excerpt of automatic annotations can be found in Table~\ref{tab:labeling}.
\begin{table*}[htpb]
\centering
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{| c |}
\hline
\textbf{Sentence} \\
\hline
\\
$\text{\underline{While flying two MAE AVs Beyond Line Of Sight} }_\texttt{cond}\text{, \underline{the TCS} }_{\texttt{ent1}} \text{ shall \underline{provide} }_{\texttt{rel}} \text{ \underline{full control functionality} }_\texttt{ent2}\text{ \underline{of each AV} }_{\texttt{cond}}\text{.}$\\
$\text{\underline{NPAC SMS} }_\texttt{ent1}\text{ shall \underline{default} }_\texttt{rel}\text{ \underline{the EDR Indicator} }_\texttt{ent2}\text{ \underline{to False} }_\texttt{cond} \text{.}$ \\
$\text{\underline{A bulk entry} }_\texttt{ent1}\text{ can be used to \underline{add} }_\texttt{rel}\text{ \underline{many assets} }_\texttt{ent2}\text{.}$\\
$\text{\underline{The HATS-GUI} }_\texttt{ent1}\text{ shall interact with the Host OS to \underline{compare} }_\texttt{rel}\text{ \underline{time stamps} }_\texttt{ent2}\text{ \underline{ for files} }_\texttt{cond}\text{.}$\\
$\text{\underline{The BE} }_\texttt{ent1}\text{ shall be able to \underline{apply} }_\texttt{rel}\text{ \underline{corrections} }_\texttt{ent2}\text{ \underline{based on state count and/or quantizer power measurement data} }_\texttt{cond}\text{.}$\\\\
\hline
\end{tabular}
\end{adjustbox}
\caption{\label{tab:labeling} Examples of Labeling}
\end{table*}
Each pattern is applied using tree traversal: for each label that is to be applied, a sequence of dependency labels (optionally with modifiers) is given, starting at the root. The algorithm checks whether the current nodes have any direct children connected to them with the current dependency label of the sequence. If so, we check whether these children have children connected to them with the next label in the sequence. If not, the pattern fitting is stopped and no labeling is applied to the sentence. If we reach the end of the sequence, the final node is labeled with the given label and, depending on the Boolean parameter $c$, all of its children, too. A simple example can be found in Table \ref{tab:patterns}, row 1.
Dependency labels can include modifiers to allow for more complex patterns:
\begin{itemize}
\item Starting with \texttt{!}, the pattern matching will remove any node that has one or more children with the given dependency label. Thus, no step down the tree is taken
\item Followed by \texttt{=[placeholder]} where \texttt{[placeholder]} is any word, only those nodes are considered where the label is the given label and the actual word of the node is specified by \texttt{[placeholder]}
\item \texttt{..} lets us traverse back to the parent of the current node. This allows us to check nodes for their existence without including them in the actual labeling
\end{itemize}
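The traversal described above can be sketched in a few lines of Python. This is a hypothetical simplification for illustration only (the node class, field names, and the restriction to the \texttt{=} modifier are our own choices; the \texttt{!} and \texttt{..} modifiers are omitted, and the actual implementation is the one in the linked repository):

```python
# Hypothetical, simplified sketch of the pattern engine described above.
# Only the '=' modifier is supported; '!' and '..' are omitted for brevity.

class Node:
    """A dependency-tree node: word, incoming dependency label, children."""
    def __init__(self, word, dep, children=None):
        self.word = word
        self.dep = dep
        self.children = children or []

def collect(node, exclude_children):
    """Words covered by a label: the node alone, or with all descendants."""
    words = [node.word]
    if not exclude_children:
        for child in node.children:
            words.extend(collect(child, False))
    return words

def match(node, dep_path, exclude_children):
    """Follow a sequence of dependency labels down the tree; None on failure."""
    if not dep_path:
        return collect(node, exclude_children)
    dep, _, word = dep_path[0].partition("=")   # 'prep=of' -> ('prep', 'of')
    for child in node.children:
        if child.dep == dep and (not word or child.word == word):
            result = match(child, dep_path[1:], exclude_children)
            if result is not None:
                return result
    return None

def apply_pattern(root, pattern):
    """Apply (label, dep_path, exclude_children) triples; all must fit."""
    labeling = {}
    for label, dep_path, exclude_children in pattern:
        words = match(root, dep_path[1:], exclude_children)  # skip 'root'
        if words is None:
            return None  # pattern fitting stopped, no labeling applied
        labeling.setdefault(label, []).extend(words)
    return labeling

# Toy tree for "NPAC SMS shall default the EDR Indicator" (simplified)
tree = Node("default", "root", [
    Node("SMS", "nsubj", [Node("NPAC", "compound")]),
    Node("Indicator", "dobj", [Node("the", "det"), Node("EDR", "compound")]),
])
pattern = [("rel", ["root"], True),
           ("ent1", ["root", "nsubj"], False),
           ("ent2", ["root", "dobj"], False)]
```

Running `apply_pattern(tree, pattern)` labels the root word as the relation and the full subject and object subtrees as the two entities; if any triple does not fit (e.g., the sentence has no direct object), the whole sentence is left unlabeled, matching the behavior described above.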
A selection of the patterns used can be found in Table~\ref{tab:patterns}. In our setting, one sentence usually holds one relation; however, this is not the case for conjunctions of multiple main clauses or instructions. Due to current limitations of our engine (see Section \ref{section:conclusion_outlook}), the relation of the first main clause is always chosen; this, however, depends on the pattern design. Even though we only use requirements written in English, a large portion of the rules could be applied to data in different languages, as the Universal Dependencies~\cite{schuster-manning-2016-enhanced} rely on the concept of primacy of content, allowing for very similar dependency trees. However, patterns explicitly using keywords may not generalize well to other languages. The code for the labeling task as well as the labeled data can be found on GitHub\footnote{\url{https://github.com/JeremiasBohn/RequirementRelationExtractor}}.
\begin{table*}[htpb]
\centering
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{| c | c |}
\hline
\textbf{Pattern} & \textbf{Description} \\
\hline
\specialcell{
('rel', ['root'], True)\\
('ent1', ['root', 'nsubj'], False)\\
('ent2', ['root', 'dobj'], False)\\
('cond', ['root', 'advcl'], False)} & \specialcell{Simple pattern, sets the root of the sentence as\\
the relation (only this single word), the entire nominal subject\\as the acting entity, the entire direct object as\\ the passive entity. An adverbial clause is treated as a\\ relation modifier.} \\
\hline
\specialcell{('rel', ['root=capable', 'prep=of', 'pcomp'], True)\\
('ent1', ['root', 'nsubj'], False)\\
('ent2', ['root', 'prep=of', 'pcomp', 'prep=in', 'pobj'], False)\\
('cond', ['root', 'advcl'], False)} & \specialcell{Catches phrases like "The system should be capable of [...]"\\and searches for the passive entity in the prepositional object of\\the prepositional clause starting with "in".} \\
\hline
\specialcell{('rel', ['root', '!dobj'], True)\\
('ent1', ['root', 'nsubjpass'], False)\\
('cond', ['root', 'prep=in', 'pobj=case', '..'], False)\\
('cond', ['root', 'advmod'], False)} & \specialcell{Pattern is only applied if the sentence has\\no direct object (which could serve as the passive entity).\\Prepositional sentences starting with "in case" are\\labeled as requirement modifier (we have to traverse\\the tree upwards again to include the 'in' as well).}\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Examples of Patterns}
\label{tab:patterns}
\end{table*}
\section{Evaluation}
Given our automatically labeled data, we evaluate the quality of the labels by comparing the algorithm's output to human annotations. To do so, we randomly sample 199 sentences (10.77\%) from the 1,848 sentences which were automatically labeled. Two of the authors then annotated these sentences manually. The annotators were given the descriptions of each label type, but had no access to the actual labeling from the algorithm. Annotators collaboratively labeled the data, discussing the labeling for each sentence and agreeing upon a single valid labeling. We then calculate inter-rater reliability with Cohen's $\kappa$ between the human annotators and the automatic annotator, once over all labels and once as the average inter-reliability per sentence (i.e., we calculate one Cohen's $\kappa$ score per sentence and average over all sentences \textendash\ this considers each sentence equally while the overall score puts more weight on longer sentences). The results can be found in Table~\ref{tab:eval}.
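Both aggregates can be computed with plain Python, as in the following sketch; the token-level label sequences shown are made-up stand-ins for the real annotations:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two aligned label sequences."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] / n ** 2 for l in ca)      # chance agreement
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)

def sentence_average_kappa(pairs):
    """One kappa per sentence, averaged: every sentence counts equally."""
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)

def overall_kappa(pairs):
    """Kappa over all tokens pooled: longer sentences weigh more."""
    auto = [l for a, _ in pairs for l in a]
    human = [l for _, h in pairs for l in h]
    return cohens_kappa(auto, human)

# Two toy "sentences": (automatic labels, human labels), one label per token
pairs = [
    (["ent1", "rel", "ent2", "cond"], ["ent1", "rel", "ent2", "ent2"]),
    (["ent1", "rel", "cond"], ["ent1", "rel", "cond"]),
]
```

On this toy input the per-sentence average is $(2/3 + 1)/2 = 5/6$, while pooling all seven tokens gives $30/37 \approx 0.81$, illustrating how the two aggregates can differ.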
\begin{table}[htpb]
\centering
\begin{tabular}{| c | c | c |}
\hline
\textbf{Labels considered} & \textbf{Sentence Avg.} & \textbf{Overall} \\
\hline
All labels &0.632& 0.576 \\
\texttt{rel} only &0.790& 0.720\\
\texttt{ent1} only &0.855& 0.822\\
\texttt{ent2} only &0.619& 0.561\\
\texttt{cond} only &0.532& 0.543\\
\hline
\end{tabular}
\caption{Cohen's Kappa Results}
\label{tab:eval}
\end{table}
While the overall score puts more weight on long sentences, the sentence average provides us with an approximation of the reliability of our automatic annotator for any sentence. According to the taxonomy of Landis and Koch~\cite{landis1977measurement}, the per-sentence average $\kappa$ value indicates substantial inter-annotator agreement, the overall $\kappa$ a moderate agreement. While the main acting entity is extracted very well, with almost perfect agreement according to Landis and Koch, extracting relational modifiers proves to be the hardest, with only moderate agreement between our automatic approach and the human annotators. This is mostly due to the nature of the label itself, spanning a large variety of modifiers from conditions to entities not involved in the relation itself. While one could split the \texttt{cond} label into multiple different labels, this would greatly increase the number of patterns required. Alternatively, one might reduce the coverage of the labeling in general, but we focused on including as much information as possible. The relatively low score for \texttt{ent2} mainly arises from sentences containing multiple relations, where many words describe a passive entity for relations other than that of the main sentence. Our approach is currently not able to extract multiple relations from a single sentence. This is also the reason why the score for \texttt{rel} is lower than the one for \texttt{ent1}.
\section{Limitations}
While our approach works well for requirements documents \textendash\ after all, relations between software entities and modifications of these relations can be extracted well by syntactically parsing the sentence structure \textendash\ this does not apply to word labels which require a semantic understanding of the input. For example, if we were to create labels for Named Entity Recognition, our algorithm would fail, as it is not possible to find syntactic rules to distinguish between, e.g., an organization and a person. The algorithm also fails in some cases if either the rules are not specific enough or the dependency parser mistakenly adds dependencies between sentence parts where there are none. The latter occurs especially frequently if the sentences were not preprocessed well, which is why our algorithm is not suitable as a classifier in general (if we, on the other hand, use our data as training input for a Transformer model~\cite{vaswani2017attention}, it may overcome these strict syntactic requirements and generalize better on real-world data).
\section{Conclusion \& Outlook}\label{section:conclusion_outlook}
In this paper, we present a novel approach for data labeling which allows users to annotate sentences for relation extraction within a shorter time period compared to manual annotation while at the same time having a consistent labeling scheme for the entire dataset. Our approach exploits syntactic features which are the integral foundation of most relation extraction tasks.\\
For the future, it would be helpful to implement an automatic extraction of requirement sentences by, e.g., training a classifier to identify relevant sentences in plain text or PDF documents, as well as a semi-automatic approach with human validation for preprocessing sentences into grammatically and orthographically sound ones. We plan on extending the pattern engine our algorithm relies on, e.g., allowing for recursive patterns to parse nested sentences and to extract multiple relations from one sentence, as well as optional pattern parts to reduce redundancy. For example, a sentence where the active entity is the nominal subject, the relation the dependency tree root and the passive entity the direct object may have a relation modifier in an adverbial clause; currently, this requires two patterns (with the number of patterns growing exponentially in the number of optional dependencies), while with a pattern in which this adverbial clause is considered optional, we only need a single pattern.
\bibliographystyle{acl_natbib}
\section{Introduction}
Recent years have witnessed a tremendous progress in studies of quantum information theoretic
measures \cite{nielsen00,vedral07} close to a quantum critical point (QCP) \cite{sachdev99,chakrabarti96,continentino01}.
Quantities like concurrence \cite{osterloh02,osborne02,amico08}, negativity \cite{peres96,vidal02},
quantum fidelity \cite{zanardi06,zhou08,gritsev10,gu10,rams10}, quantum discord \cite{sarandy09} etc., have been found to capture
the ground state singularities associated with a quantum phase transition (QPT); for recent reviews see \cite{dutta10,polkovnikov11}.
On the other hand, the studies of decoherence namely, the quantum-classical transition by a reduction
from a pure state to a mixed state have also attracted the attention of physicists in recent
years \cite{haroche98,joos03,zurek91,zurek03}. In this connection, the concept of Loschmidt echo (LE) has been proposed
to describe the hypersensitivity of the time evolution of the system to the perturbation
experienced by the environment to which it is coupled \cite{zurek94,peres95,jalabert01,karkuszewski02,cucchietti03}. The measure of the LE is
the modulus of the overlap between two states that evolve from the same initial state $|\psi_o \rangle$
under the influence of two Hamiltonians $H_0$ and $H_0 + \delta$, where $\delta$ is a small perturbation, given by
$$ L(t) = |\langle \psi_0| e^{i(H_0 + \delta)t} e^{-iH_0t}|\psi_0\rangle|^2. $$
In some of the recent works, attempt has been made to connect these two fields by studying the behavior of the LE close to a QCP as a probe
to detect the quantum criticality.
Quan \textit{et al.} studied the decay of LE using the central spin model where a central spin-$1/2$ (qubit) is coupled to the environment which is
chosen to be a transverse Ising chain of $N$ spins in such a way that it is globally coupled to all the spins of the spin chain through the transverse
field term \cite{quan06}. The coupling to the qubit leads to the perturbation term $\delta$ defined above and consequently, the time evolution of the spin chain
initially prepared in its
ground state, gets split in two branches both evolving with the transverse Ising Hamiltonian
but with different values of the transverse field. This results in a decay of the LE. It has been observed that the LE shows a sharp decay in the vicinity of the quantum critical point of the environmental spin chain; at the same
time, at the QCP, the LE shows collapse and revival as a function of time, with the quasiperiod of revival of the LE being proportional to the size
of the environment. This study has been generalized to the case where the environment is chosen to be a transverse XY spin chain and the behavior of the
LE has been studied close to the Ising critical point driven by the transverse field \cite{yuan07,ou07}.
Rossini \textit{et al.} \cite{rossini07} studied a generalized central spin
model in which the qubit interacts with a single spin of the environmental transverse Ising spin chain and
it has been shown that the decay of the LE at short time is given by the Gaussian form
$\exp(-\Gamma t^2)$ where the decay rate $\Gamma$ depends
on the symmetries of the phases around the critical point and the critical exponents.
For instance, for such systems with local coupling, it has also been reported that $\Gamma$ has a singularity
in its first derivative as a function of the transverse field at the QCP \cite{rossini07}.
In a subsequent work \cite{zhang09}, the LE has been used as a probe to
detect QPTs experimentally; at the same time, using a perturbative study in the short-time
limit, the scaling relation $\Gamma \sim \lambda^{-2 z \nu}$ valid close to a QCP (at $\lambda=0$) has
been proposed. Here, $\nu$ and $z$ are the associated correlation length and dynamical
exponents, respectively \cite{sachdev99}. In contrast to these studies where the coupling
between the qubit and the environment is chosen to be weak, it has been shown that in
the limit of strong coupling the envelope of the echo becomes independent of the coupling strength, which may arise due to a quantum phase transition in
the environment \cite{cucchietti07,cormick08}. Moreover, the LE and the decoherence of the central spin have been studied when the environmental
transverse Ising spin chain is quenched across the QCP by varying the transverse field linearly in time \cite{damski11}.
The central spin model we consider here consists of a two-level central spin $S$ coupled to an environment
$E$ which is chosen to be a spin-$1/2$ $XY$ spin chain with anisotropic
interactions and subjected to a transverse field, described by the Hamiltonian
\begin{equation}
H_{E} =- \sum_{i=1}^N[J_{x} \sigma^x_i \sigma^x_{i+1} + J_{y} \sigma^y_i \sigma^y_{i+1} + h\sigma^z_i], \label{hjx1}
\end{equation}
where $\sigma_{i}^\alpha (\alpha = x, y, z) $ are Pauli spin matrices,
$h$ is the transverse field and $N$ is the total number of spins in $E$.
The spin chain (\ref{hjx1}) is exactly solvable using Jordan-Wigner mapping from spins to fermions \cite{lieb61,barouch70,kogut79,bunder99}.
The phase diagram is shown in Fig.~(\ref{Fig:XYphase}); the transition from the ferromagnetically ordered phase to the paramagnetic phase driven by the
transverse field $h$ is called the Ising transition, and the transition between the two ferromagnetically ordered phases, with
magnetic ordering in the $x$ direction ($FM_x$) and the $y$ direction ($FM_y$), respectively, driven by the anisotropy parameter $\gamma=J_x -J_y$, is called
the anisotropic
transition. The anisotropic transition lines extend from $h=-(J_x + J_y)$ to $h=J_x+J_y$ along the $\gamma=0$ axis. Both Ising and anisotropic critical lines meet
at the multicritical points as shown in the figure. We shall exploit the exact solvability of the $XY$ spin chain to calculate the LE close to
these critical points. We also
note at the outset that the spin chain is to be studied under periodic boundary conditions,
that the wave vector $k$ takes discrete values $k=2\pi m/N$ with $m=1, 2, \ldots, N/2$,
and that the lattice spacing is set equal to unity.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.5cm,height=6.0cm]{sharma1.eps}
\end{center}
\caption {The phase diagram of the anisotropic XY model in a
transverse field with Hamiltonian given by (\ref{hjx1}) in the $h/(J_x + J_y) - \gamma/(J_x+J_y)$ plane, where $\gamma=J_x -J_y$. The vertical bold lines denote Ising transitions from the ferromagnetic
phase to the paramagnetic phase ($\rm{PM}$), whereas the horizontal bold line stands for the anisotropic phase transition between
two ferromagnetic phases $FM_x$ and $FM_y$. The multicritical points at $J_x = J_y$ and $h = \pm1$ are denoted by $\rm{MC_1}$ and $\rm{MC_2}$, respectively.}
\label{Fig:XYphase}
\end{figure}
Earlier studies focussed on the case when the coupling of the central spin to the environment is through the transverse field $h$ \cite{quan06,yuan07,ou07}
and explored the behavior of LE close to the Ising critical point. Motivation behind the
present work is to explore the short-time behavior, collapse and revival of the LE
around the anisotropic critical point (ACP) and especially the multicritical point (MCP) $\rm{MC_1}$ of the phase
diagram. To achieve this we evaluate the LE by coupling the qubit to the anisotropy
term and also one of the interactions of the spin chain.
Finally, we conjecture a generic scaling form that should be valid close to
a QCP at least in the short-time limit.
In the next section (Sec. II), we consider the case when qubit is coupled to
the anisotropy term and the behavior of the LE is explored; in Sec. III, the qubit is coupled to one of the interaction terms ($J_x$). Finally in the concluding section, we
discuss our results and conjecture some generic scaling relations.
\section{Qubit coupled to the anisotropy term of environment Hamiltonian}
For the discussion in this section, it is useful to rewrite the Hamiltonian as
\begin{eqnarray}
H_{E} = - \frac{1}{2}\sum_{i=1}^N[(1+\gamma) \sigma^x_i \sigma^x_{i+1} + (1- \gamma) \sigma^y_i \sigma^y_{i+1}
+ 2h\sigma^z_i],
\label{hE}
\end{eqnarray}
with the choice $J_{x}+J_{y} = 1$, and the anisotropy parameter $\gamma=J_{x}-J_{y}$.
Denoting the ground and excited states of the central spin by $\ket{g}$ and $\ket{e}$, respectively,
the coupling of the system $S$ to the environment $E$ can be chosen as
\begin{eqnarray}
H_{SE} &=& - \frac{\delta}{2} \ket{e} \bra{e} \sum_{i=1}^N[ \sigma^x_i \sigma^x_{i+1} - \sigma^y_i \sigma^y_{i+1} ], \label{hint}
\end{eqnarray}
where one assumes that the excited state of the qubit couples to all the spins of the environmental spin chain.
The Hamiltonian of the composite system ($S+ E$) is then given by
\begin{eqnarray}
H_{e} &=& H_{E}+H_{SE}. \label{hT}
\end{eqnarray}
The form of the interaction Hamiltonian chosen in Eq.~(\ref{hint}), enables
one to analytically calculate the behavior of the LE close to the anisotropic transition line and also the MCP.
Let us assume the spin $S$ to be initially in a pure state, $\ket{\phi(0)}_{S}=c_g\ket{g}+c_e\ket{e}$, (with coefficients satisfying $|c_g|^2+|c_e|^2=1$) and
the environment $E$ be in the ground state denoted by $\ket{\varphi (0, \gamma)}_{E}$;
the total wave function of the composite system at time $t=0$ can then be written in the direct product form
\begin{eqnarray}
\ket{\Psi(0)}=\ket{\phi(0)}_S \otimes\ket{\varphi(0, \gamma)}_E \label{hwf1}
\end{eqnarray}
One finds that the evolution of the $XY$ spin chain splits into two branches (i) $\ket{
\varphi (t, \gamma) }=\exp(-iH (\gamma) t)\ket{\varphi (0, \gamma)}$ and
(ii)$\ket{\varphi (t,\gamma+\delta) }=\exp(-iH (\gamma+ \delta) t)\ket{\varphi (0, \gamma)} $; this implies that $\ket{\varphi(t, \gamma) }$ evolves with the Hamiltonian
(\ref{hE}) with the anisotropy parameter $\gamma$ and $\ket{\varphi(t,\gamma+\delta)}$ evolves with the same Hamiltonian but the anisotropy parameter modified
to $\gamma + \delta$. The total wave function at an instant $t$ is then given by
\begin{eqnarray}
\ket{\Psi(t)}&=&c_g\ket{g}\otimes\ket{\varphi(t, \gamma)}+c_e\ket{e}\otimes\ket{\varphi (t, \gamma+\delta)}. \label{hwf2}
\end{eqnarray}
One therefore finds the decay of the LE given by \cite{quan06}
\begin{eqnarray}
&L&(\gamma, t)= |\langle{\varphi(t, \gamma)}\ket{\varphi(t,\gamma+\delta)}|^2 \nonumber\\
&=& |\langle \varphi(0, \gamma)|\exp(iH(\gamma)t) \exp(-iH(\gamma+\delta)t) | \varphi (0, \gamma) \rangle |^2\nonumber\\
&=& |\langle \varphi(0, \gamma)| \exp(-iH(\gamma+\delta)t) | \varphi (0, \gamma) \rangle|^2,
\label{hLE}
\end{eqnarray}
where we have used the fact that $|\varphi (0,\gamma) \rangle$ is an eigenstate of the
Hamiltonian (\ref{hE}) with anisotropy parameter $\gamma$.
The Hamiltonian (\ref{hE}) can be exactly solved by Jordan-Wigner (JW) transformation followed by Bogoliubov transformations \cite{lieb61,barouch70,kogut79,bunder99} and can be written
in the form $H (\gamma + \delta) = \sum_k \varepsilon_k(\gamma+\delta) (A_k^{\dagger} A_k -1/2)$ and $H (\gamma) = \sum_k \varepsilon_k(\gamma) (B_k^{\dagger} B_k -1/2)$
where $A_k$ and $B_k$ are Bogoliubov fermionic operators and
\begin{equation}
\varepsilon_k (\gamma+\delta) = \sqrt{(h+\cos k)^2 + \{(\gamma +\delta) \sin k\}^2} \label{hL3};
\end{equation}
clearly, $\varepsilon_k(\gamma) =\varepsilon_k (\gamma+\delta)$ with $\delta=0$.
As noted earlier, we consider periodic boundary conditions; the wave vector $k$ takes discrete values $k=2\pi m/N$ with $m=1, 2, \ldots, N/2$ ($N$ is assumed to be even and the lattice
spacing is set equal to one).
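As a quick numerical check of the spectrum (\ref{hL3}) (a sketch with arbitrarily chosen parameter values, writing $\gamma$ for $\gamma+\delta$), the gap closes at $k_c=\cos^{-1}(-h)$ on the anisotropic line $\gamma=0$, $|h|<1$, and at $k=\pi$ on the Ising line $h=1$, while it stays finite away from criticality:

```python
import math

def eps(k, h, gamma):
    # Dispersion of Eq. (hL3), with gamma standing for gamma + delta
    return math.sqrt((h + math.cos(k)) ** 2 + (gamma * math.sin(k)) ** 2)

# Anisotropic line: gamma = 0, |h| < 1 -> gap closes at k_c = arccos(-h)
h = 0.5
assert eps(math.acos(-h), h, 0.0) < 1e-12

# Ising line: h = 1 -> gap closes at k = pi for any gamma
assert eps(math.pi, 1.0, 0.3) < 1e-12

# Away from criticality the spectrum is gapped for all allowed modes
gap = min(eps(2 * math.pi * m / 400, 0.5, 0.2) for m in range(1, 201))
assert gap > 0.05
```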
In fact, under the JW transformation the Hamiltonian (\ref{hE}) gets reduced to a direct product of decoupled $2 \times 2$ Hamiltonians for each momentum $k$, which in
the basis $|0\rangle$ (vacuum state) and $|k, -k \rangle$ (two JW fermion state) can
be written as
\begin{equation}
H_k (\gamma)=
\left(
\begin{array}{cc}
h + \cos k & i\gamma \sin k \\
-i\gamma \sin k& -(h + \cos k) \\
\end{array}
\right).
\label{rdm}
\end{equation}
In the current problem, in which the LE is calculated as a function of $\gamma$, one
resorts to a basis transformation to the states $|\tilde 0\rangle$ and $|1\rangle$, such that
$$ |\tilde 0\rangle= \frac 1 {\sqrt 2} (|0\rangle + i |k, -k \rangle) $$
$$ |1\rangle= \frac 1 {\sqrt 2} (|0\rangle - i |k, -k \rangle), $$
so that the reduced Hamiltonian (\ref{rdm}) gets modified to
\begin{equation}
H_k (\gamma)=
\left(
\begin{array}{cc}
\gamma \sin k & h + \cos k \\
h + \cos k & -\gamma \sin k \\
\end{array}
\right).
\label{rdm1}
\end{equation}
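The rotation from (\ref{rdm}) to (\ref{rdm1}) can be verified numerically, as in the following sketch (with arbitrary values for $a=h+\cos k$ and $b=\gamma\sin k$); note that which of the two rotated states carries the $+\gamma\sin k$ diagonal entry depends on the phase convention chosen for $|\tilde 0\rangle$ and $|1\rangle$, so we only check the matrix up to that ordering:

```python
import numpy as np

a, b = 0.7, 0.4          # a = h + cos k, b = gamma * sin k (arbitrary values)
H = np.array([[a, 1j * b],
              [-1j * b, -a]])          # Eq. (rdm) in the (|0>, |k,-k>) basis

# Rows of U are the bras of the rotated basis states defined in the text
U = np.array([[1, -1j],
              [1,  1j]]) / np.sqrt(2)
H_rot = U @ H @ U.conj().T

# The rotated Hamiltonian is real ...
assert np.allclose(H_rot.imag, 0)
H_rot = H_rot.real
# ... with off-diagonal element h + cos k and diagonal +/- gamma*sin k,
# i.e. Eq. (rdm1) up to the ordering of the rotated basis states.
assert np.allclose(H_rot[0, 1], a) and np.allclose(H_rot[1, 0], a)
assert np.allclose(sorted(np.diag(H_rot)), sorted([b, -b]))
# The spectrum +/- sqrt(a^2 + b^2) is unchanged by the rotation
assert np.allclose(np.linalg.eigvalsh(H_rot), np.linalg.eigvalsh(H))
```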
The ground states of $H(\gamma)$ and $H(\gamma + \delta)$ can be written in the form
$$|\varphi(0,\gamma)\rangle = \cos \frac {\theta_k(\gamma)}{2} |\tilde 0\rangle -i\sin \frac {\theta_k(\gamma)}{2} |1\rangle$$
$$|\varphi(0,\gamma+\delta)\rangle = \cos \frac {\theta_k(\gamma+\delta)}{2} |\tilde0\rangle -i\sin \frac {\theta_k(\gamma+\delta)}{2} |1\rangle,$$
where $\tan \theta_k (\gamma+\delta)= {(h+ \cos k)}/{\{(\gamma +\delta)\sin k\}}$ and $\tan \theta_k (\gamma)
= \tan \theta_k (\gamma+\delta)|_{\delta=0}$.
The Bogoliubov operators are related to the JW operators through the relation \cite{lieb61}
\begin{equation}
A_{k} = \cos\frac{\theta_k (\gamma+\delta)}{2} a_{k} -i\sin \frac{\theta_k (\gamma+\delta)}{2} a_{-k} ^\dagger, \label{eq_btojw}
\end{equation}
where the $a_k$'s are the Fourier transforms of the JW fermion operators obtained from the
JW transformation of the spins.
Using Eq.~(\ref{eq_btojw}), one can further arrive at a relation connecting the Bogoliubov operators
\begin{equation}
B_{k}= \cos(\alpha_{k}) A_{k} -i \sin(\alpha_{k}) A_{-k} ^\dagger
\label{eq_AtoB}
\end{equation}
where $\alpha_{k}=[\theta_k(\gamma) - \theta_k(\gamma+\delta) ]/2$.
Noting the fact that $A_k |\varphi(0,\gamma+\delta)\rangle=0$ and $B_k |\varphi(0,\gamma)\rangle=0$ for all $k$, one can use Eq.~(\ref{eq_AtoB}) to establish a connection between the ground states $ |\varphi(0,\gamma+\delta)\rangle$ and $ |\varphi(0,\gamma)\rangle$ given by
\begin{equation}
\ket{\varphi (0,\gamma)}= \prod_{k>0}[\cos(\alpha_{k})+i\sin(\alpha_{k})A_{k}^\dagger A_{-k}^\dagger]\ket{\varphi (0,\gamma+\delta)} . \label{hGS}
\end{equation}
Substituting Eq.~(\ref{hGS}) into Eq.~(\ref{hLE}), we find the expression for the LE given by
\begin{equation}
L(\gamma, t)=\prod _{k>0} L_{k} =\prod_{k>0}[1-\sin^2(2\alpha_{k})\sin^2(\varepsilon_ k(\gamma+\delta) t)] \label{hl1}
\end{equation}
We shall use Eq.~(\ref{hl1}) to calculate the
LE as a function of the parameters $\gamma$ and $h$, especially close to the quantum critical points.
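Equation (\ref{hl1}) is straightforward to evaluate numerically. The following sketch (our own helper, obtaining $\theta_k$ from $\tan\theta_k(\gamma)=(h+\cos k)/(\gamma\sin k)$ via the two-argument arctangent) reproduces the expected checks: $L=1$ exactly for $\delta=0$ or $t=0$, and $L<1$ otherwise:

```python
import numpy as np

def loschmidt_echo(gamma, delta, h, N, t):
    """Evaluate Eq. (hl1) for a chain of N spins (N even), k = 2*pi*m/N."""
    k = 2 * np.pi * np.arange(1, N // 2 + 1) / N          # modes with k > 0
    theta = np.arctan2(h + np.cos(k), gamma * np.sin(k))
    theta_d = np.arctan2(h + np.cos(k), (gamma + delta) * np.sin(k))
    alpha = (theta - theta_d) / 2
    eps_d = np.sqrt((h + np.cos(k)) ** 2 + ((gamma + delta) * np.sin(k)) ** 2)
    L_k = 1 - np.sin(2 * alpha) ** 2 * np.sin(eps_d * t) ** 2
    return float(np.prod(L_k))

# No perturbation (delta = 0) or no evolution (t = 0): the echo stays at 1
assert loschmidt_echo(0.2, 0.01, 0.5, 100, 0.0) == 1.0
assert abs(loschmidt_echo(0.5, 0.0, 0.5, 100, 7.3) - 1.0) < 1e-12

# With gamma = -delta the perturbed chain sits at the critical line and
# the echo decays below 1 at a generic time
L = loschmidt_echo(-0.01, 0.01, 0.5, 200, 50.0)
assert 0.0 <= L < 1.0
```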
As shown in Fig.~(\ref{Fig:lemc}),
the LE as a function of $\gamma$ (with the transverse field $h <1$) exhibits a sharp dip
near the anisotropic critical line ($\gamma=0$). Similarly, when $h=1$ and $\gamma$ is changed,
we once again observe a sharp dip near $\gamma=0$, which
in this case happens to be the MCP $\rm{MC_1}$ ($\gamma=0,h=1$) shown in the phase diagram of Fig.~(\ref{Fig:XYphase});
changing $\gamma$ with $h=1$ implies that we are in fact probing the behavior of LE along the gapless
Ising critical line \cite{divakaran08}. It should also be emphasized here that although the spin chain
lies entirely on the critical line when the MCP is approached, we observe a substantial
dip only at the MCP which suggests that the MCP is apparently playing the role of a
dominant critical point in determining the temporal behavior of the LE \cite{deng09}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.5cm,height=7.0cm]{sharma2.eps}
\end{center}
\caption {The LE as a function of $\gamma$ for $h=1$; one observes a sharp dip
around the MCP ($\gamma=-\delta$). The inset shows a similar dip around the anisotropic critical
line (with $h=0.5$).}
\label{Fig:lemc}
\end{figure}
\subsection{Anisotropic Critical Point (ACP)}
The correlation length exponent $\nu$ and the dynamical exponent $z$ associated
with the ACP are the same as those of the Ising transition, i.e., $\nu=z=1$, and one
therefore expects the behavior of the LE to be similar to that close to the Ising
transition \cite{quan06}. However, one needs to consider the fact that at the ACP the energy gap vanishes at $\gamma=0$ for
a critical mode $k_{c} = \cos ^{-1} (-h)$.
As mentioned, the decay of the LE at short times is characterized by the critical exponents of the associated QCP. To calculate the
short-time behavior close to an ACP, we define a cutoff $K_{c}$ such that only modes up to this cutoff are incorporated
in calculating the LE \cite{quan06}, which is then given by
$
L_c(\gamma, t)= \prod _{k>0} ^{K_{c}} L_{k}, \label{stm1}
$
and one defines \begin{equation} S(\gamma, t)=\ln L_{c} \equiv -\sum _{k>0} ^ {K_{c}} |\ln L_{k}|\end{equation}
\noindent Expanding around the critical mode $k_{c}$, we find
$\sin ^2 \varepsilon _{e} ^k t \approx (\gamma +\delta)^2 k^2 t^2 $ and $\sin ^2 (2\alpha _{k}) \approx {k^2 \delta ^2}/{\{\gamma ^2 (\gamma + \delta)^2\}}$, where we have
relabeled $k-k_c$ as $k$; these lead to $ S(\gamma, t) \approx - \sum _{k>0} ^{K_c} {(k \delta t) ^2}/{\gamma ^2} $. We therefore arrive at a Gaussian decay of
the LE in the short-time limit given by
\begin{equation} L_{c} (\gamma, t) \approx \exp(-\Gamma t^2), \label{sta2} \end{equation}
where
$ \Gamma = {\delta ^2 E(K_{c})}/{ \gamma ^2} $ and
$ E(K_{c}) = {\{4\pi ^2 N_{c}(N_{c}+1)(2N_{c}+1)\}}/{6N^2}$
(where $N_{c}$ is the integer nearest to $NK_{c}/2 \pi$). From Eq.~(\ref{sta2}) it is clear that in this case $L_{c}$ remains invariant
under the transformation
$ N \rightarrow N \alpha$, $\delta \rightarrow \delta /\alpha$ and $t \rightarrow t \alpha$, with $\alpha$ being some integer. We now proceed to study the time evolution of the LE with $h=0.5$ and $\gamma=-\delta$, so that the Hamiltonian $H(\gamma+\delta=0)$ is critical. We observe the collapse and revival of the LE with time, an indicator of quantum criticality, as shown in Fig.~(\ref{Fig:anisots2}). It should be emphasized that when the size of the spin chain ($E$) is doubled keeping $\delta$ fixed,
the time period of collapse and
revival also gets doubled; this confirms the scaling behavior mentioned above which
is also observed at the Ising critical point \cite{quan06}.
{\bf The quasiperiod of the oscillations can also be calculated in the following way. From Eq.~(\ref{hl1}), we find that the mode $k=k_c+2\pi/N$ gives the dominant
contribution for $t\rightarrow \infty$, so that in the large-$N$ limit one can expand $\varepsilon_{e}^k$ in the
form $\varepsilon_{e} ^k =|h+\cos{k}|
\approx \sqrt{1-h^2}\, 2\pi/N$.
We have also chosen $\gamma = -\delta$ such that $\theta_k (\gamma+\delta)=\pi/2$ which makes
\begin{equation} \sin^2{2\alpha_k} = \frac{\gamma^2\sin^2{k}}{\gamma^2\sin^2{k}+(h+\cos{k})^2} \approx 1. \label{sta3}\end{equation}
Therefore, from Eqs.~(\ref{hl1}) and (\ref{sta3}), it is clear that the oscillations in $L(\gamma,t)$ arise due to the $\sin ^2 \varepsilon _{e} ^k t$ term, providing the time period
\begin{equation} T=\frac{N}{2\sqrt{1-h^2}}. \label{sta4}\end{equation}
This again shows that the time period of oscillation of the LE is proportional to the size $N$ of the environmental spin chain as shown in the Fig.~(\ref{Fig:anisots2}).
Eq.~(\ref{sta4}) also shows that the time period diverges as $h\rightarrow 1$. This originates from the fact that for $h=1$, the spin chain lies on the gapless
Ising critical line, a situation which we are going to discuss in the next subsection.}
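The quasiperiod (\ref{sta4}) can be cross-checked numerically against the gap of the dominant mode $k=k_c+2\pi/N$ (a sketch; one full period of $\sin^2 \varepsilon_{e}^k t$ requires $\varepsilon_{e}^k T = \pi$, which becomes exact as $N\to\infty$):

```python
import math

h = 0.5
for N in (400, 800, 1600):
    k = math.acos(-h) + 2 * math.pi / N   # dominant mode next to k_c
    eps = abs(h + math.cos(k))            # gap of that mode at gamma+delta = 0
    T = N / (2 * math.sqrt(1 - h * h))    # quasiperiod of Eq. (sta4)
    # sin^2(eps * t) completes one full period when eps * T = pi;
    # T doubles with N, as stated in the text.
    assert abs(eps * T - math.pi) < 0.05
```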
We note that the decay rate $\Gamma$ scales as $\gamma^{-2}$ which is consistent with the scaling given in \cite{zhang09}
since $z\nu=1$ for the transition across the anisotropic transition line also \cite{bunder99}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.9cm]{sharma3.eps}
\end{center}
\caption{ The variation of the LE as a function of time at the ACP for $\gamma = -\delta$, $h=0.5$ and $\delta=0.01$. The collapse and revival of the LE is an indicator of a QPT, and the
quasiperiod of the LE is proportional to the size $N$ of the environmental spin chain. }
\label{Fig:anisots2}
\end{figure}
\subsection{Multicritical Point (MCP)}
As mentioned already, we set $h=1$ and approach the MCP by changing $\gamma$ along the Ising critical line. Expanding $\sin \varepsilon _{e} ^k t$ and $\sin (2\alpha _{k})$ near the MCP around $k = \pi$ (relabeling $k-\pi$ as $k$), we find
$\sin ^2 \varepsilon _{e} ^k t \approx (\gamma +\delta)^2 k^2 t^2 $ and $\sin ^2 (2\alpha _{k}) \approx {k^2 \delta ^2}/{\{4\gamma ^2 (\gamma + \delta)^2\}}$; one then finds
the short-time decay of the LE given by
\begin{equation} L_{c} (\gamma, t) \approx \exp(-\Gamma t^2) \label{stm2} \end{equation}
where, $ \Gamma = {\delta ^2 E(K_{c})}/{4 \gamma ^2} $ and,
$E(K_{c}) ={ \{(1/5)N_{c} ^ 5 + (1/2)N_{c} ^4 + (1/3)N_{c} ^3 - (1/30)N_{c}\}}/{N^4} $
(where $N_{c}$ is the integer nearest to $NK_{c}/2 \pi$). Equation~(\ref{stm2}) provides an analytical scaling for the LE: $L_{c}$ is invariant
under the transformation
$ N \rightarrow N\alpha$, $\delta \rightarrow \delta /\alpha^2$ and $t \rightarrow t \alpha^2$.
This
scaling has to be contrasted with the scaling of $L_c$ close to the ACP presented in
the previous section.
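Incidentally, the polynomial appearing in $E(K_{c})$ is just Faulhaber's closed form for the power sum $\sum_{n=1}^{N_c} n^4$, which arises from summing over the contributing modes. A quick numerical check (the function names are ours):

```python
def E_closed(Nc, N):
    """E(K_c) as quoted in the text: the Faulhaber polynomial divided by N^4."""
    return ((1/5)*Nc**5 + (1/2)*Nc**4 + (1/3)*Nc**3 - (1/30)*Nc)/N**4

def E_sum(Nc, N):
    """The same quantity written as an explicit power sum of n^4."""
    return sum(n**4 for n in range(1, Nc + 1))/N**4
```

The two expressions agree for every integer $N_c$, confirming the coefficients quoted below Eq.~(\ref{stm2}).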
We note that at the MCP, the minimum energy gap scales as
$(k-\pi)^2$, so that $z=2$, whereas near an ACP it scales linearly as $(k-k_c)$ with
$z=1$. This difference in the dynamical exponent is the reason behind the different scaling
observed in the short-time limit. The collapse and revival of the LE as a function of time
is shown in Fig.~(\ref{Fig:mcp1}) for different system sizes and fixed $\delta$; this confirms
the scaling observed in the short-time limit. At the
same time, we note that $\Gamma \sim \gamma^{-2}$ as the exponent $z\nu=1$, even for
transition across the MCP.
{\bf To calculate the time period of oscillation, we again proceed along the same line of arguments as given
in subsection~A. At the MCP ($h=1$),
$ \varepsilon_{e} ^k=h+\cos{k}
\approx {2\pi^2}/{N^2}$
and similarly for $\gamma=-\delta$, it can be shown that
$ \sin^2{2\alpha_k} \approx 1$.
The time period of oscillations in $L(\gamma,t)$ is therefore given by
$ T \approx N^2/2\pi $,
which confirms that the LE oscillates with a period proportional to $N^2$ at the MCP (see Fig.~(\ref{Fig:mcp1})).
}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7.9cm]{sharma4.eps}
\end{center}
\caption { The LE is shown as a function of time at the MCP ($\gamma = -\delta$, $h=1$ and
$\delta=0.01$). The quasiperiod of the collapse and revival is proportional to $N^2$, in accordance with the
scaling relation $ N \rightarrow N\alpha$, $\delta \rightarrow \delta /\alpha^2$ and $t \rightarrow t \alpha^2$ discussed in the text. We note that the collapse and revival close to the MCP is
not a smooth function of time. }
\label{Fig:mcp1}
\end{figure}
\section{Qubit coupled to the interaction term of the Environment Hamiltonian}
In this section we choose the form of the $XY$ Hamiltonian given in Eq.~(\ref{hjx1});
transforming to a new set of basis vectors defined by \cite{mukherjee07}
$$ |e_{1k}\rangle = \sin (k/2) |0\rangle +i \cos (k/2) |k,-k \rangle$$
$$|e_{2k}\rangle = \cos (k/2) |0\rangle -i \sin (k/2) |k,-k \rangle, $$
one can rewrite the reduced $2 \times 2$ Hamiltonian (\ref{rdm}) $H_k$ in the
form
\begin{equation}
\left(
\begin{array}{cc}
J_x+J_y\cos 2k+h \cos k & J_y \sin 2k + h \sin k\\
J_y \sin 2k + h \sin k& -(J_x+J_y\cos 2k+h \cos k)\\
\end{array} \right).
\nonumber
\label{rdm2} \end{equation}
We choose the coupling term given by
\begin{equation}
H_{SE} = - \delta \ket{e} \bra{e} \sum_{i=1}^N[ \sigma^x_i \sigma^x_{i+1}]. \label{jx2}
\end{equation}
The total Hamiltonian (spin plus environment) therefore becomes
\begin{equation} H_e= - \sum_{i=1}^N[(J_{x}+\delta \ket{e} \bra{e}) \sigma^x_i \sigma^x_{i+1} + J_{y} \sigma^y_i \sigma^y_{i+1} + h\sigma^z_i], \label{jx3} \end{equation}
The advantage of selecting such a coupling is that it enables us to explore the MCP and the ACP via different paths and to compare the results with the previous case; e.g.,
if one chooses $h = 2J_y$, the MCP is approached along a linear path when $J_x$ is changed, unlike the previous case where it is approached along the Ising critical line.
Following mathematical steps identical to those described in the previous section, one finds
that the LE is given by
\begin{equation}
L(J_{x}, t)=\prod _{k>0} L_{k} =\prod_{k>0}[1-\sin^2(2\alpha_{k})\sin^2(\varepsilon_ k(J_x+\delta) t)] \label{jx4}
\end{equation}
where
$ \alpha_{k}=[\theta_k(J_x) - \theta_k(J_x+\delta) ]/2 $ and
\begin{equation}
\theta_{k} (J_x+\delta)= \arctan \left[\frac{J_{y} \sin(2k) + h \sin(k)}{J_{x}+\delta + J_{y} \cos(2k) +h \cos(k)}\right]. \label{hjx6}
\end{equation}
The energy spectrum given in Eq.~(\ref{hL3}) can be rewritten as
\begin{eqnarray}
\varepsilon_k (J_x+\delta) &=& [(J_{y} \sin 2k+h\sin k)^2 \nonumber\\
&+ &(J_{x}+\delta+J_{y}\cos 2k+h\cos k)^2]^{1/2} .\label{hjx8}
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[height=6.6cm,width=8.8cm]{sharma5.eps}
\end{center}
\caption{ The LE plotted as a function of $J_{x}$ shows dips around the Ising critical points ($J_{x}= -0.2, -1.8$) as well as the ACP ($J_{x}=1$) with $J_{y}=1$ and $h=0.8~(<2J_y)$. The inset shows that for $h= 2.2~(>2J_y)$, there are dips only at the Ising critical points as
the variation of $J_x$ does not take the spin chain across the anisotropic critical line.
}
\label{Fig:hless}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=6.4cm]{sharma6.eps}
\end{center}
\caption{The interaction $J_x$ is varied with $h$ fixed at $h=2J_y=2$. The LE shows a dip at the Ising critical point ($J_{x}= -3$) and also at the MCP ($J_{x}=1$). }
\label{Fig:heq}
\end{figure}
Let us first explore the LE close to the different critical points; referring to Figs.~(\ref{Fig:hless}) and (\ref{Fig:heq}), we find that there is a sharp dip in the LE wherever the
parameter values are such that the system is close to a critical point. For example,
in Fig.~(\ref{Fig:hless}), we have varied $J_x$ keeping $h$ and $J_y$ fixed and $h<2J_y(=2)$ such that we observe dips at two Ising critical points and also at the anisotropic critical point; for $h>2J_y$, in contrast, one observes dips only at the Ising
critical points as the anisotropic transition point is not crossed in the process of changing
$J_x$. For $h=2J_y$, one observes dips at the Ising critical point and the MCP as
$J_x$ is varied (Fig.~(\ref{Fig:heq})). Equipped with these observations, we now proceed
to study the short time decay of LE close to these critical points.
\subsection{Short time behavior}
Near the critical points and the MCP, we find short-time behavior and scaling
with respect to $N$, $\delta$ and $t$ identical to those reported in the previous section. These are corroborated by numerical estimates
of collapse and revival close to the critical and the multicritical point as shown in
Figs.~(\ref{Fig:anisola}) and (\ref{Fig:anisola1}). This confirms that the scaling of the
LE does not depend on how the central spin is coupled to the environment; rather, it is
hypersensitive to the proximity of the environment to a critical point.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=6.0cm,width=8.6cm]{sharma7.eps}
\end{center}
\caption{ The collapse and revival of the LE at the ACP ($ J_{x} = 1-\delta$, $h=0.8$, $J_y=1$). The inset shows the same behaviour at the Ising critical point ($h=2.2$, $J_x=1-\delta$ and
$J_y=1$). In both cases $\delta=0.01$.}
\label{Fig:anisola}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7.9cm]{sharma8.eps}
\end{center}
\caption{The time variation of the LE at the MCP ($J_x=1-\delta$, $h=2=2J_y$) is shown. The quasiperiod of the LE is again proportional to $N^2$, as reported in Sec.~II. }
\label{Fig:anisola1}
\end{figure}
\subsection{Close to the MCP}
It is well known that for a finite $XY$ spin chain, there exist quasicritical points on the
ferromagnetic side close to the MCP; the energy gap is locally minimum at these quasicritical points and scales as $k^3$,
in contrast to the $k^2$ scaling at the MCP. In the limit $N \to \infty$, all these
quasicritical points approach the MCP. These
quasicritical points and exponents associated with them have been found to dictate the
scaling of the defect density following a slow quench across the MCP
\cite{deng09,mukherjee10} and also the scaling
of fidelity susceptibility close to it \cite{mukherjee11}. We shall
now explore the collapse and revival of the LE fixing the parameters such that
$J_x + \delta$ is right at a quasicritical point. For modes $k \approx \pi$, one can
use the simplification,
$
\sin ^2 \varepsilon _{e} ^k t \approx (J_{x} + \delta - J_{y})^2 t^2 $ and, $\sin ^2 (2\alpha _{k}) \approx {4J_{y} ^2 k^6 \delta ^2}/{\{(J_{x}-J_{y})^2 (J_{x} + \delta -J_{y})^2\}}$.
We therefore get a similar exponential decay of the LE
$ L_{c} (J_{x}, t) \approx \exp(-\Gamma t^2) $ with
\begin{equation}
\Gamma = \frac{4 J_{y} ^2 \delta ^2 E(K_{c})}{(J_{x}-J_{y})^2} \label{jx10} \quad \text{and} \quad
E(K_{c}) =\frac { A(N_c)}{N^6},
\label{jx11} \end{equation}
where
$A(N_c) =(1/7)N_{c} ^7 + (1/2)N_{c} ^6 + (1/2)N_{c} ^5 - (1/6)N_{c} ^3 + (1/42)N_{c}$, and as defined previously,
$N_{c}$ is the integer nearest to $NK_{c}/2 \pi$. Equation (\ref{jx11}) shows a very interesting scaling behavior of the LE: it is invariant under
$N \rightarrow N\alpha$, $\delta \rightarrow \delta /\alpha^3 $ and $t \rightarrow t\alpha^3$, which
is different from the scaling observed at the MCP.
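As with $E(K_c)$ in Sec.~II, the polynomial $A(N_c)$ is Faulhaber's closed form, here for the power sum $\sum_{n=1}^{N_c} n^6$ (the two extra powers of $n$ reflect the $k^6$ dependence of $\sin^2(2\alpha_k)$ at the quasicritical point). A quick check (the function names are ours):

```python
def A_closed(Nc):
    """A(N_c) as quoted in the text (Faulhaber closed form for the sum of n^6)."""
    return (1/7)*Nc**7 + (1/2)*Nc**6 + (1/2)*Nc**5 - (1/6)*Nc**3 + (1/42)*Nc

def A_sum(Nc):
    """The same quantity written as an explicit power sum of n^6."""
    return float(sum(n**6 for n in range(1, Nc + 1)))
```

The agreement for all integer $N_c$ confirms the coefficients quoted in Eq.~(\ref{jx11}).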
{\bf Similarly to the previous cases, at the quasicritical point
$ \varepsilon_{e} ^k=h+\cos{k}
\approx {16\pi^3}/{3N^3}$ with $h=2J_y=2$;
for $J_x=1-\delta+4\pi^2/N^2$ and large $N$, it can be shown that
$ \sin^2{2\alpha_k} \approx 1$.
The time period of oscillations in $L(J_x,t)$ is therefore given by
$ T \approx {N^3}/{16\pi^2} $, which
verifies that the LE oscillates with a period proportional to $N^3$ at the quasicritical point (as shown in Fig.~(\ref{Fig:jxMCPt})).
}
The collapse and revival of the LE
as a function of time supports the scaling behavior analytically obtained in the short
time limit (see Fig.~(\ref{Fig:jxMCPt})). Comparing Eq.~(\ref{jx11}) with the form of decay rate $\Gamma$ given in Eq.~(\ref{sta2}), we find that
in both cases $\Gamma \sim 1/\gamma^2$; this is because at a quasicritical
point one can define an effective dynamical exponent $z_{qc}=3$ and $\nu_{qc}=1/3$ such
that $\nu_{qc}z_{qc}=1$ \cite{deng09,mukherjee10}. Moreover, we
find that the quasiperiod scales as $N^{z_{qc}}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7.9cm]{sharma9.eps}
\end{center}
\caption{ Collapse and revival of the LE at a quasicritical point ($ J_{x} = 1-\delta+4\pi ^2/ N^2 $ , $ h=2 $ and $ J_{y}=1 $) as defined in the text. The quasiperiod scales with
$N^3$ in contrast to $\sim N^2$ at the MCP. This is consistent with the scaling
$N \rightarrow N\alpha$, $\delta \rightarrow \delta /\alpha^3 $ and $t \rightarrow t\alpha^3$.}
\label{Fig:jxMCPt}
\end{figure}
\section{Conclusion}
In this paper, a spin-1/2 particle (qubit) is coupled to an environment chosen to
be a spin-1/2 $XY$ spin chain, and the temporal behavior of the LE is studied. The coupling
is chosen in such a way that it enables us to study the LE close to the ACP
as well as the MCP of the phase diagram, and these points are approached in different
fashions; e.g., the MCP is approached along the Ising critical line in Sec.~II, while in Sec.~III
it is approached along a linear path. We find that close to the ACP, the evolution
of the LE is identical to that reported in Ref.~\cite{quan06}. However, around the MCP,
we observe that the quasiperiod of the collapse and revival of the LE
as a function of time scales as $N^2$ where $N$ is the size of the environmental
spin chain. We attribute this to the fact that the dynamical exponent $z$ associated
with the MCP is two. To justify this conjecture, we have estimated the scaling of the decay
rate $\Gamma$ and also the period of the collapse and revival of the LE at a quasi-critical
point on the ferromagnetic side of the MCP. We find that the quasiperiod scales as $N^3$.
It should be noted here that at the quasicritical point, the minimum gap scales with
the system size as $N^{-3}$, and hence one can define an equivalent dynamical
exponent $z_{qc}=3$ \cite{mukherjee10}. In Sec.~II, even though the MCP is approached along
a gapless critical line, a sharp dip in the LE is observed only around the MCP where the decay of
energy gap with the system size is faster ($\sim 1/N^2$) with respect to that near the
Ising or anisotropic critical point. We observe that the collapse and revival of the LE at the MCP is not a smooth function of time, which is attributed to the fact that in Sec.~II the spin chain is
always close to the Ising critical line, whereas in Sec.~III quasicritical points are likely
to influence the temporal evolution of the LE. These quasicritical points, on the other hand, are expected to be related to the proximity of the critical line of the finite-momentum anisotropic transition.
Although we have studied an integrable spin chain reducible to a direct product of two-level systems, our
studies indicate the possibility of some interesting scaling behavior. We see that in
all the cases studied here, the LE decays exponentially close to the critical point in the
short time limit
with the decay rate $\Gamma$ scaling as $\Gamma \sim \lambda^{-2z\nu}$ i.e., our studies
support the scaling proposed in \cite{zhang09} based on perturbative calculations and
a Landau-Zener argument. Moreover, we find the quasiperiod of the collapse and revival
of the LE at the critical point scales as $N^z$; we note that the dynamical exponent $z$
determines how the minimum energy gap vanishes with increasing system size
($\sim N^{-z}$) at the QCP. At a quasicritical point
the effective dynamical exponent $z_{qc}=3$ is found to determine
the scaling of the quasiperiod of collapse and revival with the system size.
Finally, we comment on the decoherence of the central spin during time evolution which is calculated using its
reduced density matrix \cite{quan06}. The off-diagonal terms of the reduced density matrix are given by
$c_g^{*}c_e d(t)$ and its Hermitian conjugate, where the decoherence factor $d(t)$ is connected to the LE through the
relation $L(t)=|d(t)|^2$ \cite{damski11}. The vanishing of the LE around the QCP therefore implies
a complete loss of coherence, and the qubit thus makes a transition to a mixed state even though the initial state
is chosen to be pure. On the other hand, away from the QCP the LE stays close to unity, and thus the purity of the qubit
state is retained. Our studies reveal that close to the MCP, $\Gamma \sim 1/N^2$ in the short-time limit, implying a faster loss of
coherence with increasing system size when the environment $E$ is close to a MCP than when it is close to a QCP. The loss
is even faster when the spin chain sits at a quasicritical point close to the MCP. This faster
loss of coherence with the system size is, we believe, a noteworthy observation.
\begin{center}
{\bf Acknowledgements}
\end{center}
AD acknowledges CSIR, New Delhi, India, for financial support through a research project, and SS acknowledges CSIR, New Delhi,
for a junior research fellowship.
\section{Introduction}\label{Introduction}
First-order iterative optimization methods have been widely applied in data science and machine learning~\cite{bubeck2015,shalev2014}. These methods only require access to first-order derivative information, and iterate on the data until satisfactory convergence is achieved. For example, the gradient method is
\begin{equation}\label{eq:FG}
x^{k+1} = x^k - \alpha \grad f(x^k).
\end{equation}
Such simple methods are often favored over higher order methods such as Newton's method when the dimension of the underlying space is large and computing Hessians is prohibitively expensive.
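As a concrete sketch, the iteration~\eqref{eq:FG} can be run on a simple quadratic; the matrix, step size $\alpha = 2/(L+m)$, and iteration count below are our illustrative choices.

```python
import numpy as np

def gradient_descent(grad_f, x0, alpha, iters):
    """Iterate x^{k+1} = x^k - alpha * grad f(x^k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - alpha*grad_f(x)
    return x

# quadratic f(x) = 0.5 x^T Q x with curvatures in [m, L] = [1, 10]; minimizer x* = 0
Q = np.diag([1.0, 10.0])
x_final = gradient_descent(lambda x: Q @ x, [1.0, 1.0], alpha=2/11, iters=200)
```

With this step size each eigenmode contracts by $|1-\alpha\lambda| \le (L-m)/(L+m)$ per step, so the iterates converge linearly to the origin.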
There has been significant recent interest in finding ways to accelerate the convergence of the gradient method while maintaining low iteration costs.
For example, the \emph{Heavy-ball} method includes an additional momentum term
\begin{equation}\label{eq:heavy}
x^{k+1} = x^k - \alpha \grad f(x^k) + \beta(x^k-x^{k-1}).
\end{equation}
This slight modification can yield a dramatic improvement in worst-case convergence rate if $f$ is quadratic. A similar acceleration scheme, \emph{Nesterov's accelerated method}, can improve the convergence rate for strongly convex $f$ with smooth gradients~\cite{Lessard2014,YEN03a}. These convergence results are derived on a case-by-case basis, and the intuition behind the acceleration is still not fully understood.
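For a quadratic, the effect of the momentum term in \eqref{eq:heavy} is easy to demonstrate. The sketch below uses the standard Polyak-style tuning $\alpha = 4/(\sqrt{L}+\sqrt{m})^2$ and $\beta = ((\sqrt{L}-\sqrt{m})/(\sqrt{L}+\sqrt{m}))^2$; the test problem and iteration counts are our choices.

```python
import numpy as np

def heavy_ball(grad_f, x0, alpha, beta, iters):
    """x^{k+1} = x^k - alpha*grad f(x^k) + beta*(x^k - x^{k-1})."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x, x_prev = x - alpha*grad_f(x) + beta*(x - x_prev), x
    return x

m, L = 1.0, 100.0
alpha = 4.0/(np.sqrt(L) + np.sqrt(m))**2
beta = ((np.sqrt(L) - np.sqrt(m))/(np.sqrt(L) + np.sqrt(m)))**2
Q = np.diag([m, L])                          # quadratic objective 0.5 x^T Q x
x_hb = heavy_ball(lambda x: Q @ x, [1.0, 1.0], alpha, beta, iters=400)
x_gd = heavy_ball(lambda x: Q @ x, [1.0, 1.0], 2/(m + L), 0.0, iters=400)  # beta=0: plain GD
```

After the same number of iterations, the momentum iterates land orders of magnitude closer to the minimizer than plain gradient descent, consistent with the improved worst-case rate on quadratics.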
Recent efforts have adopted a dynamical system (or differential equation) perspective in analyzing acceleration for convex objectives~\cite{Su2014NIPS,wibisono2016}, though a more general understanding of acceleration is still lacking (non-convex objectives, inexact computations, etc).
This paper aims to bring new insights on how to accelerate first-order optimization methods for objective functions which are not convex in general. Our main contributions are as follows:
\begin{enumerate}
\item We pose the iterative optimization paradigm as an output regulation problem, which lends itself to a loop-shaping interpretation. In particular, we show that several popular optimization algorithms may be viewed as controllers which are composed of basic PID or lag compensation elements. We also demonstrate that existing parameter tuning guidelines for these optimization methods are consistent with the loop-shaping design guidelines in control theory.
\item Using the small gain theorem~\cite{zames1966}, we draw a connection between the convergence rate analysis of optimization methods (under sector-bounded assumptions) and the input-output gain computation for a particular complementary sensitivity function. It follows that the \emph{design} of optimization algorithms for sector-bounded functions can be interpreted as $\mathcal{H}_\infty$ state feedback synthesis. This explains why acceleration typically requires stronger function assumptions (not necessarily convexity) beyond just sector-bounded gradients.
\end{enumerate}
A related line of research has emerged in the distributed optimization literature~\cite{jakovetic2015,kia2015,wang2010control,wang2011control}. In \cite{wang2010control,wang2011control}, a continuous-time differential equation was used to describe the dynamics of distributed optimization, leading to a natural iterative algorithm which may be interpreted as a PI controller. In \cite{kia2015}, event-triggered control methods were tailored for distributed optimization over networks. In contrast with the work on distributed optimization, the present work is concerned with control-theoretic properties and interpretations of a broad class of first-order optimization
methods.
A second related line of research is the unified integral quadratic constraint framework in~\cite{Lessard2014}, which provides a numerical tool based on semidefinite programming for use in analyzing optimization algorithms. In contrast with this work, the present work uses a small gain approach with a simple interpretation that interfaces with existing results on complementary sensitivity integrals.
The paper is organized as follows. Section~\ref{sec:PF} explains notation and problem formulation, Section~\ref{sec:loopshape} describes our loop-shaping interpretation for first-order methods, and Section~\ref{sec:main} presents our main results involving the small gain theorem and connections to complementary sensitivity.
\section{Preliminaries}
\label{sec:PF}
\subsection{Spaces and operators}
Let $\ell_{2e}^p$ denote the set of sequences $x\defeq (x^0,x^1,\dots)$ with $x^k \in \field{R}^p$, and let $\ell_2^p \subseteq \ell_{2e}^p$ be the set of square-summable sequences, so if $x \in \ell_2^p$, then
$\sum_{k=0}^\infty \| x^k\|^2<\infty$ where
$\|x^k\|^2\defeq (x^k)^\tp x^k$ denotes the
standard Euclidean norm.
We will omit the superscript $p$ when it is implied by context.
The gain of a causal operator $K: \ell_{2e}\to \ell_{2e}$ is defined as
\begin{align} \label{eq: Lipschitz}
\|K\| \defeq \sup_{x\in \ell_2; x \neq 0} \frac{\|K x\|}{\|x\|}
\end{align}
In addition, $K$ is said to be bounded if it has a finite gain.
Notice that this gain is induced by $\ell_2$ signals while the operator $K$ itself is defined on $\ell_{2e}$. This definition makes sense since any bounded operator from $\ell_2$ to itself has a natural causal extension to an operator from $\ell_{2e}$ to $\ell_{2e}$. Clearly, every bounded operator must map zero inputs to zero outputs.
\if\MODE3
\subsection{Various objective functions in optimization}
\else
\subsection{Various classes of objective functions in optimization}
\fi
Consider the unconstrained optimization problem
\begin{align}
\label{eq:minP}
\min_{x\in \field{R}^p} \,\, f(x)
\end{align}
where it is assumed that there exists a unique $x^\star\in \field{R}^p$ satisfying $\grad f(x^\star)=0$.
How to solve \eqref{eq:minP} and find $x^\star$ heavily depends on the assumptions about $f$. A simple assumption is that $f$ is quadratic. Two other common assumptions are $L$-\emph{smoothness} and \emph{strong convexity}. A continuously differentiable function $f:\field{R}^p\to \field{R}$ is $L$-smooth if the following inequality holds for all $x, y\in \field{R}^p$
\begin{align}
\|\grad f(x)-\grad f(y)\|\le L \|x-y\|.
\end{align}
We define $\mathcal{L}(L)$ to be the set of $L$-smooth functions.
The continuously differentiable function $f$ is $m$-strongly convex if the following inequality holds for all $x,y \in \field{R}^p$
\begin{align}
\label{eq:strongC}
f(x)\ge f(y)+\grad f(y)^\tp (x-y)+\frac{m}{2} \|x-y\|^2.
\end{align}
Note that we recover ordinary convexity in~\eqref{eq:strongC} if $m=0$. We define $\mathcal{F}(m,L)$ to be the set of functions that are both $L$-smooth and $m$-strongly convex.
The class $\mathcal{F}$ covers a large family of objective functions in machine learning, including $\ell_2$-regularized logistic regression~\cite{teo2007}, smooth support vector machines~\cite{lee2001ssvm}, etc. Clearly, $\mathcal{F}(m,L)\subset\mathcal{L}(L)$.
For all $f \in \mathcal{F}(m,L)$, the following inequality holds for all $x\in \field{R}^p$
\begin{align}
\label{eq:gradient3}
\begin{bmatrix} x-x^\star \\ \grad f(x)\end{bmatrix}^\tp \begin{bmatrix} -2mL I_p & (L+m) I_p \\ (L+m) I_p & -2 I_p\end{bmatrix}
\begin{bmatrix} x-x^\star \\ \grad f(x)\end{bmatrix}\ge 0
\end{align}
where $I_p$ denotes the $p\times p$ identity matrix \cite[Lemma 6]{Lessard2014}.
On the other hand, a function satisfying the above inequality may not belong to $\mathcal{F}(m,L)$, and may not even be convex. The set of continuously differentiable functions satisfying \eqref{eq:gradient3} is denoted as $\mathcal{S}(m,L)$. This class of functions has sector-bounded gradients, and includes $\mathcal{F}(m,L)$ as a subset.
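For a strongly convex quadratic $f(x)=\tfrac12 x^\tp Qx$ with the eigenvalues of $Q$ in $[m,L]$ (so $x^\star=0$), the quadratic form in \eqref{eq:gradient3} reduces per eigenmode to $2(\lambda_i-m)(L-\lambda_i)x_i^2\ge 0$, so the sector condition holds. A numerical spot check (the random test data are our choice):

```python
import numpy as np

rng = np.random.default_rng(0)
m, L, p = 1.0, 10.0, 5
lams = rng.uniform(m, L, size=p)
Q = np.diag(lams)                 # f(x) = 0.5 x^T Q x, grad f(x) = Q x, x* = 0
M = np.block([[-2*m*L*np.eye(p), (L + m)*np.eye(p)],
              [(L + m)*np.eye(p), -2*np.eye(p)]])

def sector_form(x):
    """Left-hand side of the sector inequality (6) for this quadratic f."""
    w = np.concatenate([x, Q @ x])
    return w @ M @ w
```

Every evaluation of `sector_form` is nonnegative, as the per-mode factorization predicts.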
\subsection{Review of first-order optimization methods}
A classical way to solve~\eqref{eq:minP} is the gradient descent method, which uses the iteration~\eqref{eq:FG} to gradually converge to $x^\star$.
The intuition behind the gradient descent method is as follows. At each step $k$, we find a quadratic approximation of $f$ about $x^k$, which hopefully captures the local structure of $f$, and we solve the quadratic minimization problem
\begin{align}
\min_{x\in \field{R}^p}\left(f(x^k)+\grad f(x^k)^\tp (x-x^k)+\frac{1}{2\alpha}\|x^k-x\|^2\right).
\end{align}
When $f\in \mathcal{S}(m,L)$, if $\alpha$ is chosen well, then there exists a constant $\rho\in (0,1)$ and a constant $c\ge 1$ such that
\begin{align}
\|x^k-x^\star\|\le c \rho^{k} \|x^0-x^\star\|.
\end{align}
Thus the iterates $\{x^k\}$ converge exponentially to $x^\star$. By convention, this is known as \emph{linear convergence} in the optimization literature.
For example, we can choose $\alpha=\frac{2}{L+m}$ and obtain $\rho=\frac{L-m}{L+m}$ and $c=1$. Another popular choice is $\alpha=\frac{1}{L}$, which leads to $\rho=1-\frac{m}{L}$ and $c=1$. These results are formally documented in \cite[Section 4.4]{Lessard2014}. It is emphasized that the proofs of these results only require \eqref{eq:gradient3}.
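For quadratics these rates follow from a worst-case sweep of the per-mode contraction factor $|1-\alpha\lambda|$ over curvatures $\lambda\in[m,L]$; a quick check (the grid is our choice):

```python
import numpy as np

m, L = 1.0, 10.0
lams = np.linspace(m, L, 100001)               # includes the endpoints, where the max occurs

rho_a = np.max(np.abs(1 - (2/(L + m))*lams))   # alpha = 2/(L+m)  ->  rho = (L-m)/(L+m)
rho_b = np.max(np.abs(1 - (1/L)*lams))         # alpha = 1/L      ->  rho = 1 - m/L
```

The maxima are attained at the interval endpoints, recovering both of the quoted rates.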
When $f\in \mathcal{F}(m, L)$, one can achieve a better convergence rate $\rho=\sqrt{1-\sqrt{\frac{m}{L}}}$ using Nesterov's accelerated method:
\begin{align}\label{eq:NFG}
\begin{aligned}
x^{k+1}&=y^k-\alpha \grad f(y^k)\\
y^k&=(1+\beta) x^k-\beta x^{k-1}
\end{aligned}
\end{align}
where $\alpha=\frac{1}{L}$ and $\beta=\frac{\sqrt{L}-\sqrt{m}}{\sqrt{L}+\sqrt{m}}$. When $L/m$ is large, Nesterov's accelerated method guarantees a much faster convergence rate compared to the gradient descent method. This fact was stated in \cite[Theorem 2.2.3]{YEN03a}.
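A direct implementation of \eqref{eq:NFG} on a quadratic test problem (the problem data and iteration count are our illustrative choices):

```python
import numpy as np

def nesterov(grad_f, x0, alpha, beta, iters):
    """x^{k+1} = y^k - alpha*grad f(y^k),  y^k = (1+beta) x^k - beta x^{k-1}."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = (1 + beta)*x - beta*x_prev
        x, x_prev = y - alpha*grad_f(y), x
    return x

m, L = 1.0, 100.0
alpha, beta = 1/L, (np.sqrt(L) - np.sqrt(m))/(np.sqrt(L) + np.sqrt(m))
Q = np.diag([m, L])                      # strongly convex quadratic test problem
x_nag = nesterov(lambda x: Q @ x, [1.0, 1.0], alpha, beta, iters=500)
```

With these standard parameter values the iterates converge linearly to the minimizer at the origin.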
When $f$ is a quadratic function, one can accelerate the gradient descent method by incorporating a momentum term into the iteration, such as the Heavy-ball method~\eqref{eq:heavy}. Although the Heavy-ball method works extremely well for quadratic objective functions, it can fail to converge for other functions in $\mathcal{F}(m,L)$; see \cite[Section 4.6]{Lessard2014}.
The intuitions behind Nesterov's accelerated method and the Heavy-ball method are still not fully understood. Hence, there is no intuitive way to modify these methods to accelerate the convergence when optimizing more general functions, i.e. $f\in\mathcal{S}(m,L)$ or $f\in\mathcal{S}(m,L)\cap\mathcal{L}(L)$. Gaining intuition for these methods can be beneficial for designing accelerated schemes for more general classes of objective functions.
Finally, it is worth mentioning that every optimization method mentioned in this section can be cast as the feedback interconnection $F_u(P, K)$ as shown in Fig.~\ref{fig:fdbd}. Here, $P \defeq \grad f$ is a static nonlinearity and $K$ is a linear time-invariant (LTI) system (the algorithm).
\begin{figure}[h]
\centering
\scalebox{0.9}{
\begin{picture}(172,80)(23,25)
\thicklines
\put(80,25){\framebox(30,30){$K$}}
\put(80,75){\framebox(30,30){$\grad f$}}
\put(42,64){$u$}
\put(55,40){\line(1,0){25}}
\put(55,40){\line(0,1){50}}
\put(55,90){\vector(1,0){25}}
\put(143,64){$v$}
\put(135,90){\line(-1,0){25}}
\put(135,40){\line(0,1){50}}
\put(135,40){\vector(-1,0){25}}
\end{picture}
}
\caption{Feedback representation for the optimization method $F_u(\grad f, K)$.}
\label{fig:fdbd}
\end{figure}
Feedback representations for optimization methods are discussed in \cite[Section 2]{Lessard2014}. In \cite{Lessard2014}, $P$ is a static nonlinear operator that maps $u$ to $v=Pu$ as $v^k=\grad f(u^k)$. However, this is not a bounded operator since it does not map zero inputs to zero outputs. For the convenience of our discussion, we will choose $P$ to be the operator which maps $u$ to $v=Pu$ as $v^k=\grad f(u^k+x^\star)$ where $x^\star$ is the unique point satisfying $\grad f(x^\star)=0$. Then this choice of $P$ leads to a bounded operator. One can apply a state-shifting argument to the feedback representations in \cite{Lessard2014} and cast all the mentioned optimization methods as
\begin{align}
\label{eq:interSS}
\begin{split}
\xi^{k+1}&=A \xi^k +B v^k\\
u^k&=C \xi^k\\
v^k&=\grad f(u^k+x^\star)
\end{split}
\end{align}
where $(A, B, C)$ are the state matrices of $K$.
For example, to rewrite the gradient descent method \eqref{eq:FG}, one can set $\xi^k=u^k=x^k-x^\star$ and $v^k=\grad f(x^k)=\grad f(u^k+x^\star)$. Then \eqref{eq:FG} can be cast as \eqref{eq:interSS} with $(A, B, C)=(I_p, -\alpha I_p, I_p)$.
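This equivalence can be verified directly: iterating \eqref{eq:interSS} with $(A, B, C)=(I_p, -\alpha I_p, I_p)$ from $\xi^0=x^0-x^\star$ reproduces the gradient iterates shifted by $x^\star$ (the quadratic test function below is our choice):

```python
import numpy as np

def state_space_run(A, B, C, grad_f, x_star, xi0, iters):
    """Iterate xi^{k+1} = A xi^k + B v^k, with u^k = C xi^k and v^k = grad f(u^k + x*)."""
    xi = np.asarray(xi0, dtype=float)
    for _ in range(iters):
        v = grad_f(C @ xi + x_star)
        xi = A @ xi + B @ v
    return xi

p, alpha = 3, 0.1
x_star = np.array([1.0, -2.0, 0.5])
grad_f = lambda x: np.diag([1.0, 2.0, 3.0]) @ (x - x_star)   # quadratic f, minimum at x*

# plain gradient descent from x0 = 0
x = np.zeros(p)
for _ in range(50):
    x = x - alpha*grad_f(x)

xi = state_space_run(np.eye(p), -alpha*np.eye(p), np.eye(p),
                     grad_f, x_star, np.zeros(p) - x_star, iters=50)
```

The state $\xi^k$ of the feedback model tracks $x^k - x^\star$ exactly, so internal stability of the model is the same as convergence of the algorithm.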
Notice here $\xi^k=x^k-x^\star$ and hence the convergence rate of the optimization algorithm is equivalent to the rate at which $\xi^k$ goes to $0$. The optimization method converges to the optimum $x^\star$ at a rate $\rho$ if and only if the model \eqref{eq:interSS} drives $\xi^k$ to $0$ from any initial conditions at the same rate $\rho$.
Using similar arguments, Nesterov's accelerated method and the Heavy-ball method can be written as \eqref{eq:interSS}. In these two cases, the associated state matrices for $K$ are the same as (2.5) and (2.7) in \cite{Lessard2014}, although the states have been shifted by $x^\star$. When $f\in \mathcal{S}(m, L)$, the inequality \eqref{eq:gradient3} imposes a sector bound on the input/output pair of $P$. Let $v=Pu$. Then the following inequality holds for all $k$
\begin{align}
\label{eq:sectorB}
\begin{bmatrix} u^k \\ v^k\end{bmatrix}^\tp \begin{bmatrix} -2mL I_p & (L+m) I_p \\ (L+m) I_p & -2 I_p\end{bmatrix}
\begin{bmatrix} u^k \\ v^k\end{bmatrix}\ge 0.
\end{align}
The above inequality is important for further analysis of optimization methods.
\subsection{Input-output stability and small gain theorem}
The key analysis tool in this paper is the small gain theorem, which is now briefly reviewed.
Suppose two causal operators $P : \ell_{2e}\to \ell_{2e}$ and $K : \ell_{2e} \to \ell_{2e}$ both map zero
input to zero output. Let $[P, K]$ denote the feedback interconnection of $P$ and $K$ illustrated in Fig.~\ref{fig:feedback}:
\begin{align} \label{eq: FB}
\left\{\begin{array}{l}
v = P u + e \\
u= Kv+ r.
\end{array}\right.
\end{align}
\begin{figure}[h]
\centering
\scalebox{1}{
\begin{picture}(162,66)(10,37)
\thicklines
\put(17,90){$r$}
\put(10,85){\vector(1,0){30}}
\put(43,85){\circle{5}}
\put(55,90){$u$}
\put(45,85){\vector(1,0){33}}
\put(78,72){\framebox(26,26){$P$}}
\put(104,85){\line(1,0){35}}
\put(139,85){\vector(0,-1){37}}
\put(165,50){$e$}
\put(172,45){\vector(-1,0){31}}
\put(139,45){\circle{5}}
\put(121,50){$v$}
\put(136,45){\vector(-1,0){32}}
\put(78,32){\framebox(26,26){$K$}}
\put(78,45){\line(-1,0){35}}
\put(43,45){\vector(0,1){38}}
\end{picture}
}
\caption{Feedback interconnection with exogenous inputs}
\label{fig:feedback}
\end{figure}
The interconnection $[P, K]$ is said to be \emph{well-posed} if the map $(u, v) \mapsto (r, e)$ defined by \eqref{eq: FB} has a causal inverse on $\ell_{2e}$. It is (input-output) \emph{stable} if it is well-posed and this inverse causal map from $(r, e)$ to $(u, v)$ is bounded.
Clearly, $u, v \in \ell_{2}$ for all $r, e\in \ell_2$ if $[P, K]$ is stable. Well-posedness holds only if the solutions to~\eqref{eq: FB} have no finite escape time. The small gain theorem states the following~\cite{khalil01,zames1966}.
\begin{theorem}[small gain theorem]
Suppose $P$ and $K$ are bounded causal operators and $[P, K]$ is well-posed. If $\|P\| \|K\|<1$, then $[P, K]$ is input-output stable.
\end{theorem}
The small gain theorem can be used to check the input-output stability of $[P, K]$ when the gains of $P$ and $K$ are both known.
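A scalar toy example (the numbers are ours): for $K(z)=c/(z-a)$ with $0<a<1$ the induced $\ell_2$ gain is $c/(1-a)$, attained at $z=1$; with a static plant $P=p$, the small gain condition $p\,c/(1-a)<1$ is sufficient for stability, and for this simple loop the closed-loop pole $a+cp$ can also be checked directly.

```python
import numpy as np

a, c, p = 0.5, 0.4, 1.0                  # K(z) = c/(z - a), static plant P = p

# l2 gain of K = peak of |K(e^{jw})| over frequency (attained at w = 0 here)
w = np.linspace(0.0, np.pi, 200001)
gain_K = np.max(np.abs(c/(np.exp(1j*w) - a)))

small_gain_condition = p*gain_K < 1      # sufficient condition for stability
closed_loop_pole = a + c*p               # exact pole of this scalar loop
```

Here the small gain test is conservative but correct: the certified loop indeed has its pole strictly inside the unit circle.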
Note that
there are exogenous signals $r$, $e$ in the setup of $[P, K]$ and zero initial conditions on $K$ to ensure that $K$ maps the zero input to a zero output. In contrast, $F_u(P, K)$ allows any initial condition for $K$, so the optimization method $F_u(P, K)$ can be initialized at any initial condition $\xi^0\in \field{R}^n$. For any optimization method $F_u(P, K)$ described by \eqref{eq:interSS}, one can form an associated interconnection $[P, K]$ by adding the signals $(r, e)$ and fixing the initial condition of $K$ to be zero, since the nonlinear static map $P$ is set up so as to map zero inputs to zero outputs.
An important connection between the internal stability of $F_u(P, K)$ and the input-output stability of $[P, K]$ has been stated in \cite[Proposition 5]{Lessard2015}. Consequently, one may apply the small gain theorem for the convergence rate analysis of optimization methods.
\section{Loop-shaping interpretations for\\ optimization methods}
\label{sec:loopshape}
This section presents basic control interpretations for the gradient descent method, Nesterov's accelerated method, and the Heavy-ball method with the hope of shedding light on the general principles underlying the design of first-order methods. The goal of the optimization method is to find $x^\star$ satisfying $\grad f(x^\star)=0$. Hence the optimization method may be viewed as a controller that regulates the plant ``$\grad f$" to zero. When viewing $\grad f$ as the plant one wants to control, the unconstrained optimization problem \eqref{eq:minP} is an output regulation problem, and the LTI part $K$ in the first-order optimization method can be viewed as a controller. The key issue for this output regulation problem is that the equilibrium point $x^\star$ is unknown.
Transfer functions for the controller $K$ are listed in Table~\ref{tab:TransK}. We use the symbol $\otimes$ to denote the Kronecker product. These products appear because the controllers corresponding to our algorithms of interest are repetitions of a single-input-single-output (SISO) system.
\begin{table}[!h]
\begin{center}
\caption{Transfer function $K(z)$ for first-order methods}
\label{tab:TransK}
\begin{tabular}{l|l}\hline\rule{0pt}{2.6ex}%
Optimization Method & Controller $K$ \\\hline\rule{0pt}{5.2mm}%
Gradient Descent & $I_p\otimes \frac{-\alpha}{z-1}$ \\[2mm]
Heavy-Ball & $I_p\otimes \frac{-\alpha z}{z^2-(1+\beta)z +\beta}$\\[2mm]
Nesterov's Method & $I_p \otimes \frac{-\alpha (1+\beta) z+\alpha \beta}{z^2-(1+\beta)z +\beta}$ \\[2mm]
\hline
\end{tabular}
\end{center}
\end{table}
For the gradient descent method, $K$ is a pure integrator. Hence, gradient descent regulates the nonlinear plant $P$ via pure integral control. Integral action is necessary because the algorithm must converge to $x^\star$, which amounts to having zero steady-state error when $K$ tracks a step input.
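This equivalence is easy to check numerically. The sketch below (not from the paper; the quadratic objective, step size, and initial condition are illustrative choices) simulates the integrator $K$ in feedback with the plant $\grad f$ and confirms that the closed loop reproduces plain gradient descent:

```python
import numpy as np

# Illustrative strongly convex quadratic: f(x) = 0.5 * x^T Q x, so grad f(x) = Q x.
Q = np.diag([1.0, 10.0])
grad_f = lambda x: Q @ x

alpha = 0.1           # step size (illustrative)
x_star = np.zeros(2)  # the point where grad f vanishes

# Integral controller K = -alpha/(z-1): its state xi accumulates -alpha times
# the plant output, which is exactly the gradient-descent update.
xi = np.array([4.0, -3.0])   # arbitrary initial condition
x_gd = xi.copy()             # plain gradient descent started from the same point
for _ in range(200):
    u = xi                   # controller output = current iterate
    y = grad_f(u)            # plant output
    xi = xi - alpha * y      # integrator update == gradient step
    x_gd = x_gd - alpha * grad_f(x_gd)

feedback_matches_gd = np.allclose(xi, x_gd)
converged = np.linalg.norm(xi - x_star) < 1e-6
```

Because the integrator state is the iterate itself, the feedback loop and the textbook update are the same recursion.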
The Heavy-ball method~\eqref{eq:heavy} differs from gradient descent in the inclusion of an additional momentum term. This momentum term may be viewed as a lag compensator.
The Heavy-ball method corresponds to the following controller:
\begin{align}
K=I_p\otimes \left(\frac{-\alpha}{z-1}\right) \left(\frac{z}{z-\beta}\right)
\end{align}
The first term provides integral action to ensure zero steady-state error as with the gradient method, while the second term
is a discrete-time lag compensator. The lag compensation has the net effect of 1)
boosting low-frequency response by a factor of roughly $\frac{1}{1-\beta}$, which improves the tracking speed of the controller and hence the convergence of the algorithm and 2) attenuating high-frequency response by a factor of roughly $\frac{1}{1+\beta}$.
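These two factors can be read off directly from the lag term $\frac{z}{z-\beta}$ by evaluating it at $z=1$ (DC) and $z=-1$ (the Nyquist frequency). A small check, with an illustrative value of $\beta$:

```python
beta = 0.8  # illustrative momentum parameter

lag = lambda z: z / (z - beta)   # SISO lag term of the Heavy-ball controller

dc_gain = abs(lag(1.0))          # z = e^{j*0} = 1: low-frequency boost
nyquist_gain = abs(lag(-1.0))    # z = e^{j*pi} = -1: high-frequency attenuation

boost_ok = abs(dc_gain - 1.0 / (1.0 - beta)) < 1e-12
atten_ok = abs(nyquist_gain - 1.0 / (1.0 + beta)) < 1e-12
```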
It intuitively makes sense that the Heavy-ball method can accelerate convergence for quadratic objectives, since the plant $P$ becomes a linear operator in this case. However, the lag compensator increases the slope of the loop gain near the crossover frequency, which may have a detrimental effect on the robustness of the closed loop.
This qualitative observation is confirmed by the fact that the Heavy-ball method may fail to converge at all if the objective function is relaxed to include more general strongly convex functions~\cite{Lessard2014}.
Unlike the Heavy-ball method, Nesterov's accelerated method performs well when applied to strongly-convex objective functions. A control interpretation is that Nesterov's accelerated method includes derivative control to decrease the slope of the loop gain near the crossover frequency, which significantly improves the robustness of the algorithm for certain classes of nonlinearities. To see the derivative controller in Nesterov's accelerated method, rewrite \eqref{eq:NFG} as
\begin{multline}
y^{k+1}=y^k+\beta(y^k-y^{k-1})-\alpha \grad f(y^k)\\
-\alpha \beta (\grad f(y^k)-\grad f(y^{k-1}))
\end{multline}
The last term is a difference of the plant output $\grad f$, and can be viewed as a derivative control.
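Assuming \eqref{eq:NFG} is the standard two-sequence form $x^{k+1}=y^k-\alpha\grad f(y^k)$, $y^{k+1}=x^{k+1}+\beta(x^{k+1}-x^k)$, its equivalence with the single recursion above can be verified numerically (the quadratic and parameter values below are illustrative):

```python
import numpy as np

# Two-sequence form (assumed standard form of eq. (NFG)):
#   x^{k+1} = y^k - alpha * grad f(y^k)
#   y^{k+1} = x^{k+1} + beta * (x^{k+1} - x^k)
# versus the single recursion in y derived in the text.
Q = np.diag([0.01, 1.0])          # grad f(y) = Q y for a quadratic (illustrative)
grad_f = lambda y: Q @ y
alpha, beta = 1.0, 0.8

y0 = np.array([1.0, -2.0])
# Two-sequence form:
x_prev, y = y0.copy(), y0.copy()
ys_two = [y.copy()]
for _ in range(50):
    x_new = y - alpha * grad_f(y)
    y = x_new + beta * (x_new - x_prev)
    x_prev = x_new
    ys_two.append(y.copy())

# Single recursion: y^{k+1} = y^k + beta*(y^k - y^{k-1})
#   - alpha*grad f(y^k) - alpha*beta*(grad f(y^k) - grad f(y^{k-1}))
y_prev, y_cur = y0.copy(), ys_two[1].copy()   # seed with the same first step
ys_one = [y_prev.copy(), y_cur.copy()]
for _ in range(49):
    y_next = (y_cur + beta * (y_cur - y_prev)
              - alpha * grad_f(y_cur)
              - alpha * beta * (grad_f(y_cur) - grad_f(y_prev)))
    y_prev, y_cur = y_cur, y_next
    ys_one.append(y_cur.copy())

recursions_match = all(np.allclose(a, b) for a, b in zip(ys_two, ys_one))
```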
Nesterov's method may also be interpreted as lag compensation together with integral action, as in the Heavy-ball case. The corresponding controller is
\begin{align}
K=I_p\otimes\left(\frac{-\alpha}{z-1}\right) \left(\frac{(1+\beta)z -\beta}{z-\beta}\right),
\end{align}
which has a zero at $z=\frac{\beta}{1+\beta}$, and this helps flatten the slope of the Bode plot near the crossover frequency.
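A quick numerical check (with illustrative parameter values) confirms that this factorization agrees with the Nesterov entry of Table~\ref{tab:TransK}, and that the zero sits at $z=\beta/(1+\beta)$:

```python
# Compare the integrator-times-lag factorization against the Nesterov
# controller entry of Table 1 at a few test points away from the poles.
alpha, beta = 0.1, 0.6   # illustrative values

factored = lambda z: (-alpha / (z - 1)) * (((1 + beta) * z - beta) / (z - beta))
table = lambda z: (-alpha * (1 + beta) * z + alpha * beta) / (z**2 - (1 + beta) * z + beta)

test_points = [0.3 + 0.4j, -1.2, 2.0 + 1.0j, 0.9j]
factorization_ok = all(abs(factored(z) - table(z)) < 1e-12 for z in test_points)

# The zero of the controller:
z_zero = beta / (1 + beta)
zero_ok = abs(factored(z_zero)) < 1e-12
```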
The control interpretations for different optimization methods are summarized in Table \ref{tab:GI1}.
\begin{table}[!h]
\begin{center}
\caption{Control interpretations for first-order methods}
\label{tab:GI1}
\begin{tabular}{l|l}\hline\rule{0pt}{2.6ex}%
Optimization Methods & Control Structure \\\hline\rule{0pt}{5.2mm}%
Gradient Descent & Integral Control \\[1mm]
Heavy-Ball & Lag $+$ Integral Control\\[1mm]
Nesterov's Method & Lag $+$ PID Control \\[1mm]
\hline
\end{tabular}
\end{center}
\vspace{-2mm}
\end{table}
We now demonstrate that the design of state-of-the-art optimization methods is actually consistent with general loop-shaping principles from control theory.
The loop-shaping principle states that the low-frequency loop gain should be sufficiently large to ensure good tracking performance while the high-frequency loop gain should be small enough for the purpose of noise rejection. In addition, the slope of the loop gain near the crossover frequency should be flat (typically around $-20$ dB/decade) to assure a proper phase margin and good robustness. A thorough discussion on loop-shaping can be found in standard references \cite{franklin2010, nise2007, ogata2010}.
Given a function $f\in \mathcal{F}(m,L)$, the standard gradient descent stepsize is $\alpha=\frac{1}{L}$, and the standard parameter choice for Nesterov's accelerated method is $\alpha=\frac{1}{L}$ and $\beta=\frac{\sqrt{L}-\sqrt{m}}{\sqrt{L}+\sqrt{m}}$. Other parameter choices, when $f$ is quadratic for example, are documented in \cite[Proposition 1]{Lessard2014}. Fig.~\ref{fig:bodeplot} shows the Bode plots of the resultant controllers $K$ (only the SISO part) for all these parameter choices under the assumption that $m=0.01$ and $L=1$.
The Bode plots are consistent with the properties of these optimization methods when a loop-shaping intuition is adopted. First, the gradient descent method with the standard stepsize $\alpha=\frac{1}{L}$ is known to be slower than other first-order methods when $f$ is quadratic. This is reflected in the Bode plot, which shows that the gradient method has a relatively low gain particularly in the low-frequency region. Using the optimal tuning of $\alpha=\frac{2}{L+m}$ improves the gain slightly.
Second, the optimal quadratic tuning for all three methods leads to controllers whose crossover frequencies are roughly at $0.5$\,Hz. Intuitively, such tuning places excessive weight on tracking performance and is very fragile to noise at the output of the plant $P$. This is consistent with the known robustness properties of these methods as well. For example, the gradient method with $\alpha=\frac{1}{L}$ is known to be very robust to the noise in the gradient computation while the gradient method with $\alpha=\frac{2}{m+L}$ is known to be fragile to such noise \cite[Section 5.2]{Lessard2014}. Comparing the high frequency responses of these two cases immediately leads to the same conclusion. Finally, the slope of Bode plot at the crossover frequency supports the fact that Nesterov's method works for a larger class of functions than the Heavy-ball method.
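The high-frequency comparison can be made concrete with a short computation (a sketch using the same $m=0.01$, $L=1$): evaluating the gradient-descent controller on the unit circle shows that the tuning $\alpha=\frac{2}{m+L}$ carries roughly twice the gain of $\alpha=\frac{1}{L}$ at the Nyquist frequency, and hence passes roughly twice as much gradient noise:

```python
import numpy as np

m, L = 0.01, 1.0
K = lambda z, a: -a / (z - 1)     # SISO gradient-descent controller

omega_high = np.pi                 # Nyquist frequency, z = -1
gain_high_std = abs(K(np.exp(1j * omega_high), 1.0 / L))        # alpha = 1/L
gain_high_opt = abs(K(np.exp(1j * omega_high), 2.0 / (m + L)))  # alpha = 2/(m+L)

omega_low = 1e-3                   # near-DC frequency
gain_low_std = abs(K(np.exp(1j * omega_low), 1.0 / L))
gain_low_opt = abs(K(np.exp(1j * omega_low), 2.0 / (m + L)))

# The "optimal quadratic" tuning has larger gain everywhere: faster tracking
# at low frequency, but roughly double the high-frequency gain, i.e., less
# attenuation of gradient noise.
opt_gain_ratio = gain_high_opt / gain_high_std
```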
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{bode_final2}
\caption{Bode plots of $K$ for various first-order methods with commonly-used parameter tunings.}\label{fig:bodeplot}
\end{figure}
In summary, the intuition brought by the traditional loop-shaping theory is consistent with the known properties of the existing first-order methods. This suggests that loop-shaping intuition may be used as a general high-level guideline in the design of optimization methods. The control interpretations above also indicate that classical PID tuning~\cite{ang2005pid} can be used for optimization algorithm design. For example, one could drop the lag compensation and simply use the PID controller:
\begin{align}
K(z)=I_p\otimes \frac{-\alpha(1+\beta)z+\alpha \beta}{z(z-1)}
\end{align}
It remains an open question as to how to choose an appropriate $K(z)$ subject to different assumptions on the objective function. Since acceleration schemes for the optimization of quadratic functions or functions in $\mathcal{F}(m,L)$ already exist,
we will focus on the case $f\in \mathcal{S}(m,L)$.
We will derive one connection between such an optimization design problem and classical control synthesis theory.
\section{Analysis and design of optimization methods using the small gain theorem}
\label{sec:main}
In this section, it is assumed that $f\in \mathcal{S}(m,L)$ and $x^\star$ is the unique point satisfying $\grad f(x^\star)=0$.
\subsection{New feedback representations for first-order methods}
The feedback representation \eqref{eq:interSS} for first-order methods involves a nonlinear operator $P$ which belongs to the sector $(m,L)$ \eqref{eq:sectorB}. This does not coincide perfectly with a gain bound, since the upper and lower bounds do not match. We will therefore use a loop-shifted representation
$F_u(P', K')$ that ensures the small gain condition on $P'$ captures the full sector. Choose $P'$ to map $u$ to $v=P'u$ via $v^k=u^k-\frac{2}{m+L}\grad f(u^k+x^\star)$. Then, substituting $\xi^k=u^k=x^k-x^\star$ into the gradient descent method \eqref{eq:FG} yields the alternative feedback interconnection
\begin{align}
\label{eq:newInter}
\begin{split}
\xi^{k+1}&=\left(1-\tfrac{(m+L)\alpha}{2}\right)\xi^k+\tfrac{(m+L)\alpha}{2}v^k\\
u^k&=\xi^k\\
v^k&=u^k-\tfrac{2}{L+m}\grad f(u^k+x^\star)
\end{split}
\end{align}
Direct manipulation of \eqref{eq:gradient3} shows that $P'$ lies in the sector $(-\frac{L-m}{L+m},\frac{L-m}{L+m})$, which leads to the gain bound $\|P'\|\le \frac{L-m}{L+m}$.
Suppose $K=I_p\otimes \bar{K}$ where $\bar{K}$ is a SISO LTI system.
In general, a loop transformation argument can be used to show that any optimization method $F_u(P, K)$ can also be represented as $F_u(P', K')$ where $K'=I_p\otimes \bar{K}'$ and $\bar{K}'=\bar{K}/(\bar{K}-\frac{2}{m+L})$.
Consequently, the feedback interconnection \eqref{eq:newInter} provides another way to model first-order methods.
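As a sanity check on \eqref{eq:newInter} (a sketch; the quadratic and parameter values are illustrative), one can simulate the loop-shifted interconnection alongside plain gradient descent and confirm that the iterates coincide:

```python
import numpy as np

# Simulate the loop-shifted interconnection and check that it reproduces
# plain gradient descent x^{k+1} = x^k - alpha * grad f(x^k).
# Illustrative quadratic with m = 0.1, L = 1: grad f(x) = Q (x - x_star).
m, L = 0.1, 1.0
Q = np.diag([m, L])
x_star = np.array([2.0, -1.0])
grad_f = lambda x: Q @ (x - x_star)

alpha = 1.0 / L
a = (m + L) * alpha / 2.0

xi = np.array([5.0, 5.0]) - x_star   # xi^0 = x^0 - x_star
x = np.array([5.0, 5.0])             # plain gradient descent iterate
for _ in range(100):
    u = xi
    v = u - (2.0 / (L + m)) * grad_f(u + x_star)   # shifted plant P'
    xi = (1.0 - a) * xi + a * v                    # shifted LTI part
    x = x - alpha * grad_f(x)

shifted_matches_gd = np.allclose(xi + x_star, x)
```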
\subsection{Main theorem}
Our main approach is inspired by the loop transformation used in \cite{Lessard2015, BinHu2015a}.
For any $\rho\in(0,1)$, the operators $\rho^+$ and $\rho^-$ are defined as the
time-domain, time-dependent multipliers $\rho^k$, $\rho^{-k}$, respectively. Here, superscripts indicate $k$-th power.
Define $K_\rho'\defeq\rho^{-} \circ K' \circ \rho^{+}$ and $P_\rho'\defeq\rho^{-} \circ P' \circ \rho^{+}$. From \mbox{\cite[Section 3]{Lessard2015}}, one can conclude that $F_u(P', K')$ converges at rate $\rho$ if $[P_\rho', K_\rho']$ is input-output stable, and that $K_\rho'(z)=K'(\rho z)$. Similarly, define $\bar{K}_\rho'\defeq\rho^{-} \circ \bar{K}' \circ \rho^{+}$, so that $\bar{K}_\rho'(z)=\bar{K}'(\rho z)$. The main result of this paper is stated below.
\smallskip
\begin{theorem}
\label{thm:main}
Let $\bar{K}'=\bar{K}/(\bar{K}-\frac{2}{m+L})$, and $K'=I_p\otimes \bar{K}'$.
If $\|\bar{K}_\rho'\|<\frac{L+m}{L-m}$, then the optimization method $F_u(P', K')$ has a linear convergence rate $\rho$.
\end{theorem}
\begin{proof}
Notice $P'$ is a pointwise nonlinearity, and hence one can use the small gain condition on $P'$ to show $\|P_\rho'\|\le \frac{L-m}{L+m}$ (see Section 5.1 in \cite{Lessard2015} or Section IV.C in \cite{BinHu2015a} for detailed arguments). In addition, $\|K_\rho'\|=\|I_p\otimes \bar{K}_\rho'\|=\|\bar{K}_\rho'\|<\frac{L+m}{L-m}$, and $\|K_\rho'\|\|P_\rho'\|<1$.
By the small gain theorem, $[P_\rho', K_\rho']$ is input-output stable. By \mbox{\cite[Proposition 5]{Lessard2015}}, $F_u(P', K')$ converges at rate $\rho$.
\end{proof}
\vspace{0.1in}
The power of Theorem~\ref{thm:main} is that it connects the convergence rate analysis of the optimization method to an input-output gain computation on a SISO system $\bar{K}_\rho'$. Note that the $\ell_2$-induced norm~\eqref{eq: Lipschitz} of a stable LTI system is equal to its $\mathcal{H}_\infty$-norm \cite{zhou96}. In addition, we have $\bar{K}_\rho'(z)=\bar{K}'(\rho z)$. Hence we only need to verify that $\bar{K}'(\rho z)$ is stable and then compare the $\mathcal{H}_\infty$-norm of $\bar{K}'(\rho z)$ to $\frac{L+m}{L-m}$.
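In practice this gain can be estimated by gridding the unit circle. The helper below is a crude approximation (a proper computation would use a bisection or Hamiltonian-based $\mathcal{H}_\infty$ routine); it is checked against the case $\bar{K}'(\rho z)=\frac{1}{\rho z}$ worked out in the next subsection, whose norm is exactly $\rho^{-1}$:

```python
import numpy as np

def hinf_norm(tf, n_grid=20000):
    """Approximate the peak magnitude of a stable SISO discrete-time transfer
    function over the unit circle by dense frequency gridding."""
    w = np.linspace(0.0, np.pi, n_grid)
    return float(np.max(np.abs(tf(np.exp(1j * w)))))

# Sanity check on K_bar'(rho z) = 1/(rho z), from gradient descent with
# alpha = 2/(m+L): its H-infinity norm is exactly 1/rho.
rho = 0.9
norm_est = hinf_norm(lambda z: 1.0 / (rho * z))
norm_matches = abs(norm_est - 1.0 / rho) < 1e-9
```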
\subsection{Recovery of rate results for gradient descent}
As a sanity check, we apply Theorem~\ref{thm:main} to recover convergence rate results for the gradient descent method applied to functions in $\mathcal{S}(m,L)$.
Since $\bar{K}(z)=\frac{-\alpha}{z-1}$, we have
\begin{align}
\bar{K}'(z)=\frac{\alpha (m+L)}{2z-2+\alpha (m+L)}
\end{align}
If $\alpha=\frac{2}{m+L}$, it is straightforward to obtain
\begin{align}
\bar{K}'(z)=\frac{1}{z}\,,\quad\bar{K}'(\rho z)=\frac{1}{\rho z}
\end{align}
Clearly, $\bar{K}'(\rho z)$ is stable for any $\rho>0$. Moreover, $\|\bar{K}'(\rho z)\| = \rho^{-1}$.
By Theorem~\ref{thm:main}, the gradient descent method converges for any $\rho>\frac{L-m}{L+m}$. This recovers the existing rate result for the gradient method with $\alpha=\frac{2}{L+m}$.
Another popular choice is $\alpha=\frac{1}{L}$. In this case, the shifted controller is given by
\begin{align}
\bar{K}'(z)=\frac{1+\kappa}{2\kappa z-\kappa+1}
\end{align}
where $\kappa\defeq \frac{L}{m}$ is the condition number. Hence, one has
\begin{align}
\label{eq:case2}
\bar{K}'(\rho z)=\frac{1+\kappa}{2\rho\kappa z-\kappa+1}
\end{align}
When $\rho>\frac{1}{2}(1-\frac{1}{\kappa})$, $\bar{K}'(\rho z)$ is stable. In addition, one can substitute $z=1$ into \eqref{eq:case2} to obtain the peak frequency response (the $\mathcal{H}_\infty$ norm) of $\bar{K}'(\rho z)$. To ensure this norm is smaller than $\frac{L+m}{L-m}$, one has the condition
\begin{align}\label{twentythree}
\frac{1+\kappa}{2\rho \kappa -\kappa +1}<\frac{\kappa+1}{\kappa-1}.
\end{align}
Upon simplifying~\eqref{twentythree} together with $\rho>\frac{1}{2}(1-\frac{1}{\kappa})$, we finally obtain $\rho>1-\frac{1}{\kappa}$, which is the linear convergence rate for the gradient descent method when $\alpha=\frac{1}{L}$.
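The boundary case can be checked numerically (with an illustrative $\kappa=10$): the peak gain of \eqref{eq:case2}, attained at $z=1$, crosses the small gain threshold $\frac{\kappa+1}{\kappa-1}$ exactly at $\rho=1-\frac{1}{\kappa}$:

```python
# Numerical check of the rate condition for alpha = 1/L: with condition
# number kappa, K_bar'(rho z) = (1+kappa)/(2*rho*kappa*z - kappa + 1) has its
# peak gain at z = 1, and the small gain bound (kappa+1)/(kappa-1) is met
# exactly when rho > 1 - 1/kappa.
kappa = 10.0
bound = (kappa + 1.0) / (kappa - 1.0)

def peak_gain(rho):
    # peak of the frequency response, attained at z = 1
    return (1.0 + kappa) / (2.0 * rho * kappa - kappa + 1.0)

rho_crit = 1.0 - 1.0 / kappa
eps = 1e-3
above_ok = peak_gain(rho_crit + eps) < bound   # condition satisfied
below_ok = peak_gain(rho_crit - eps) > bound   # condition violated
```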
\begin{remark}
A small technical issue in the above analysis is that the rate result proved by the small gain theorem is a strict inequality. This is due to the fact that for LTI systems, input-output stability is slightly stronger than the global uniform stability. See \cite[Remark 1]{BinHu2015a} for a detailed explanation. This issue is negligible from a practical standpoint.
\end{remark}
\subsection{Connections to complementary sensitivity}
Since we have $\bar{K}'(\rho z)=\bar{K}(\rho z)/(\bar{K}(\rho z)-\frac{2}{m+L})$, $\bar{K}'(\rho z)$ is the complementary sensitivity function of the closed-loop system $F_u(\bar{K}(\rho z), -\frac{m+L}{2})$.
Accelerating optimization for $f\in \mathcal{S}(m,L)$ requires finding the smallest $\rho$ for which there exists a controller $\bar{K}$ such that $\bar{K}(\rho z)$ stabilizes $\bar{K}'(\rho z)$ while ensuring that $\|\bar{K}'(\rho z)\| < \frac{L+m}{L-m}$. One possible method would be to perform a bisection search on $\rho$.
For each $\rho$, we can try to design $\bar{K}$ to stabilize $\bar{K}'(\rho z)$ and minimize the $\mathcal{H}_\infty$-norm of $\bar{K}'(\rho z)$ at the same time. This subproblem may be reformulated as an $\mathcal{H}_\infty$ state feedback synthesis problem since $\bar{K}$ always contains a pure integrator
and the state of the scaled dynamics $\frac{1}{\rho z-1}$ is accessible at every timestep. Consequently, the only design variable is the state feedback gain, which happens to be the stepsize of the gradient method. This explains why acceleration in this case is difficult even given memory of past iterates.
It is worth noting that $\bar{K}(\rho z)$ always has an unstable pole at $z = \rho^{-1}$.
There is a large body of discrete-time complementary sensitivity integral results \cite{middleton1991, mohtadi1990bode, sung1988a, sung1989b} which could potentially be used in studying the design limits of $\bar{K}$ under the analytic constraints posed by the unstable pole at $\rho^{-1}$.
From the above connection, we can see that acceleration typically requires additional function properties that can be encoded as constraints involving dynamics. The condition \eqref{eq:gradient3} is a static constraint, and it is hard to design accelerated schemes with this single constraint. However, it is still possible to accelerate non-convex optimization when other function properties are available, e.g., $f\in \mathcal{L}(L)$.
\section{Conclusion}
\label{sec:con}
This paper discussed connections between the analysis of optimization algorithms and classical control-theoretic concepts. Specifically, the gradient method, the Heavy-ball method, and Nesterov's accelerated method were interpreted as combinations of PID and lag compensators. A loop-shaping interpretation was also used to explain several well-known robustness properties of these algorithms.
We invoked the small gain theorem to show that finding worst-case convergence rates for algorithms amounts to computing the gain of a complementary sensitivity function. In addition, we demonstrated a connection between $\mathcal{H}_\infty$ state feedback synthesis and stepsize selections of the gradient method.
These observations are an encouraging first step toward leveraging tools from control theory for the analysis and eventual synthesis of robust optimization algorithms.
\section{Introduction}
\label{intro}
\chapquote{``For me, both philosophy and science lose all their attraction when they give up that pursuit [of knowledge and understanding of the world] -- when they become specialisms and cease to see, and to wonder at, the riddles of our world. Specialization maybe a great temptation for the scientist. For the philosopher it is the mortal sin."}{Karl Popper}{\textit{The World of Parmenides}}
While the atomic theory of materials, proposed in the $5^{th}$ century BC by Epicurus, Leucippus and Democritus \cite{Stallings}, has been embraced and experimentally verified, the question of the atomization (i.e., discretization) of space-time (S-T) has received far less attention \cite{maimonides}. Even though the number of people who have studied S-T discretization is smaller, the group includes eminent philosopher-scientists such as Ren\'e Descartes, Isaac Newton, Gottfried Leibniz, George Berkeley, David Hilbert, Werner Heisenberg, John A. Wheeler and others - as excellently documented by Amit Hagar in \cite{Hag14a}. According to its proponents, a discrete S-T would entail space and time being built up from indivisible units, with space arrayed in a grid of unit cells of equal size (or arrayed in an aperiodic tiling, such as Penrose tiling \cite{Penrose}), and time progressing in indivisible equal steps, as shown in Fig. \ref{fig:Fig1}.
\begin{figure*}[h]
\centering\includegraphics[width=0.75\textwidth]{Fig1_11_11_2016.pdf}
\caption{A schematic of an elementary particle (shaded in gray) moving in discrete space (showing only two of the three spatial dimensions and time). For ease of visualization, time is shown increasing along the horizontal axis, but these are really different ``snap-shots" in time. Both the spatial size of the unit cells and the time duration between snap-shots are the smallest that nature allows \textemdash the quantums of space ($\chi$) and time ($\beta$). Shown in this figure is the common view of discrete space (i.e., the problematic Weyl-tile picture described later in this paper). Motion in this space involves a particle either remaining at a lattice site or moving to an adjacent lattice site during a single $\beta$ of temporal duration. The particle shown in this figure is making one $\chi$ spatial translation for every $\beta$ temporal duration, thereby moving as fast as nature allows, namely the speed of light ($c=\chi / \beta$).}
\label{fig:Fig1}
\end{figure*}
It is widely believed however, that discrete S-T suffers from the following problems \cite{Hag14a}:
\begin{enumerate}
\item\label{Problem1} Lorentz Contraction: The ostensibly smallest possible unit of length (i.e., $\chi$) in one inertial reference frame is Lorentz contracted to yet smaller lengths in moving reference frames.
\item\label{Problem2} Isotropy: Discrete space would introduce preferred directions in space; the motion of particles would be dependent on the direction of travel, even in matter-free space.
\item\label{Problem3} Causality: Forces and acceleration on both sides of the incompressible fundamental spatial unit cell are experienced perfectly simultaneously (i.e., no time delay). This seemingly violates the postulates of special relativity (SR) and causality.
\item\label{Problem4} Nonconservation of Energy and Momentum: Particles would be able to gain or lose momentum in units of $2 \hbar \pi / \chi$.
\item\label{Problem5} Weyl Tiles: In 1949, Hermann Weyl claimed that if space is discrete, the length of the side of \textit{any size} square must be equal to its diagonal. This non-adherence to Pythagoras's theorem is not observed, hence space must not be discrete \cite{Weyl1949}.
\item\label{Problem6} ``Jerkiness" of motion \cite{Stanford}: Motion of a particle in discrete S-T occurs through discrete jumps from one grid point to the next - something thought to be unphysical \cite{Stanford}. Each single spatial jump (of length $\chi$) occurs over one fundamental duration of time ($\beta$). This jump is followed by several $\beta$s of duration where no jump occurs, and then followed by the next jump. During the jump, the particle is presumably traveling at the speed of light $c=\chi / \beta$.
\end{enumerate}
Besides these problems, a discretized S-T must not predict any behavior that is at odds with what has already been physically measured. All current physical theories are built upon the continuous-space model and have been fantastically successful in describing practically all observed physical phenomena, from the structure of the atom to the motion of planets and stars. However, there are a few physical phenomena that are unexplained by conventional continuous-space-based physics, as well as some lingering philosophical issues:
\begin{enumerate}[resume]
\item\label{Problem7} Inertial anomalies (i.e., dark matter and dark energy): Can a correct fusion of quantum mechanics and gravity predict the motion of planets, stars and galaxies without resorting to any modern day ``ether"?
\item\label{Problem8} Selective quantization of observables: Why are some ``measurables" quantized and not others?
\item\label{Problem9} Constancy of the speed of light: Velocity is determined by measurements of position and temporal duration. Why is it then, that a particular velocity, namely $c$ ($=3 \times 10^8 $ m/s), is elevated to the status of a sacrosanct constant of nature and not its foundational quantities of spatial and temporal intervals?
\item\label{Problem10} Absolute versus nonabsolute space: Does any aspect of space exist independent of matter?
\item\label{Problem11} Time: Is the time used by scientists (namely, ``measured time") the only time, or does a separate ``Time" exist, namely Henri Bergson's ``real" or ``psychological" time \cite{Bergson}?
\end{enumerate}
A few preliminary notes on Problems \ref{Problem7}-\ref{Problem11} are warranted at this point. Concerning Problem \ref{Problem7}, despite many decades of work, the unification of quantum mechanics (QM) and gravity remains unresolved, as do explanations of the inertial anomalies of some astronomical bodies. All approaches being studied (e.g., string theory, m-brane theory) use continuous S-T and absolute or pseudo-absolute S-T. This is seen by QM's use of space and time differentials, general relativity's (GR) assumption of the existence of S-T even in the absence of all particles \cite{Einstein1955}, and the fact that GR allows absolute rotations (of the whole universe) \cite{Godel1949}. We do not presume to unify QM with GR in this work, but we do show that there may be straightforward explanations for the inertial and gravitational anomalies of astronomical bodies (see Section \ref{sec:LeopoldCrystal}). Problems \ref{Problem8}-\ref{Problem11} pertain to space, time, measurement, and the quantization of measurables. Concerning space, time and other measurables, most scientists and philosophers believe that the infinitesimally small and the infinitesimally large do not exist in nature \cite{Hilbert1925}, whether it be of energy, momentum, spin or any other measurable quantity. However, beliefs and opinions have no place in science. Thus the discretization of S-T needs either to be demonstrated using existing postulates and theories, or to form a new, more fundamental set of postulates from which the existing accepted postulates of physics can be derived, or of which they are mere consequences. In this work we do both.
In the seven main sections of this paper, we take Popper's advice to heart; we do not specialize and use tools from just one narrow philosophical or scientific field to address the nature of discrete S-T, but draw from humanity's rich tool-chest of knowledge that includes epistemology, quantum mechanics, special and general relativity, solid-state physics and mathematics. Section \ref{sec:Mach} provides an outline of a deductive proof of S-T discretization, starting from existing and widely accepted scientific and philosophical theories. One reason for only an outline of a proof is that exact values of the space and time quantums are not necessary in addressing the main focuses of this paper: Pythagoras's theorem, Lorentz transformations and motion in discrete S-T, philosophical implications concerning time and duration, and the inertial anomalies of massive particles. Goethe provides inspiration for the second reason when he stated ``the greatest art in theoretical and practical life consists in changing the problem into a postulate; that way one succeeds" \cite{Canales137}. Thus, after developing the proof, we elevate S-T discretization to the status of a postulate. Along with two postulates used in the proof, the complete set of postulates are \footnote{Again, these are not new postulates but have been proposed by a number of eminent philosopher-physicists. What we do in this work is to strictly adhere to the postulates and let them lead us where they may -- to their ultimate conclusions.}:
\begin{enumerate}
\item\label{Postulate1} Logical Positivism (LP) is the correct philosophy pertaining to the nature of existence and reality, and physical probes are required to perform a measurement (championed by David Hume, George Berkeley, Albert Einstein and others \cite{Hag14a}) \footnote{If one is concerned that LP involves a logically unjustified negation, then this postulate can be dropped if Postulate 2 and 3 are accepted and this paper's conclusions remain unchanged.}.
\item\label{Postulate2} Space and time are non-absolute (the foundation of Mach's Principle \cite{Ghosh2002,Schommers232}).
\item\label{Postulate3} Space and time are quantized, or discretized (first considered by Parmenides and Zeno \cite{Hag14a} and later more explicitly by Maimonides \cite{maimonides}).
\end{enumerate}
The proof in Section \ref{sec:Mach} makes the more conservative assumptions of Postulate \ref{Postulate1} and \ref{Postulate2} only, and then uses these to demonstrate the validity of Postulate \ref{Postulate3}. We do so because it is enlightening and useful (for later parts of this paper) to work through this demonstration that describes concepts of measurement, distance, the nature of reality, and the consistency of S-T quantization with, and its naissance from QM, GR and LP. In Section \ref{sec:pointmap}, a description is given of how continuous space is an artificial mathematical construct in which (or with which) to more easily obtain solutions to physical problems, and how to map coordinates of continuous space to coordinates of discrete space. Then in Section \ref{sec:Weyl}, a new solution is developed to the Weyl-tile argument that historically has been the unassailable argument against discrete space. In Section \ref{sec:Potpourri}, resolutions are given for Problems \ref{Problem2}, \ref{Problem3}, \ref{Problem6}, \ref{Problem8}, \ref{Problem9}. Section \ref{sec:lorentz} contains a discussion on how to calculate $\gamma$ assuming discretized S-T, and this model's impact on time dilation and length contraction. Also in this section is a discussion of how discrete S-T allows for particles with nonzero rest mass to temporarily travel at the speed of light. Section \ref{sec:time} contains a discussion of light-clocks with the conclusion that light-clocks with different ``tick-tock" rates experience different amounts of time dilation and other irregularities, but in ways that ensure agreement amongst sets of light-clocks. Thus, as Henri Bergson suspected, Einstein's ideal light-clock is not as ideal as once thought \cite{Bergson}. 
While Bergson may have been right in that regard, it is ironic that we find that the scientists' measured time is more in line with what Bergson had in mind for his ``psychological" or real ``Time", and that the only candidate for Bergson's immutable Time is the atom of time itself (a duration on the order of $10^{-44}$ s) that has little or no impact on the daily lives or perceptions of humans. In Section \ref{sec:LeopoldCrystal}, descriptions are given of physical effects that can be measured and used to verify and quantify properties of discrete S-T. These effects include the possibility of an inertial mass $m_{inertial}$ of an object (how a particle accelerates due to an applied force) that is different from the gravitational mass $m_g$ (how much gravity a particle produces), with $m_{inertial}$ even being \textit{negative} under certain circumstances.
\section{Logical Positivism and the Quantums of Space and Time}
\label{sec:Mach}
\chapquote{\\ One thing I am concerned about: you might, as you commence \\ philosophy, decide you see impiety therein, \\ and that the path you enter is the avenue to sin.}{Titus Lucretius Carus}{\textit{The Nature of Things}}
\chapquote{``In this way, that [the question of being] which the ancient philosophers found continually disturbing as obscure and hidden has taken on a clarity and self-evidence that if anyone continues to ask about it he is charged with an error of method."}{Martin Heidegger}{\textit{Being and Time}}
Philosophers have debated for thousands of years about the acquisition of knowledge, measurement and being, and the nature and proof of existence. Throughout this time, philosophers have frequently been either ignored or held in contempt. In the quotes above, both Lucretius and Heidegger were lamenting the contempt for philosophy. For Lucretius, the thought-police were the priests, for more modern-day philosophers such as Heidegger, the authority figures are the physicists. And even though Heidegger was discussing the question of being from an ontological perspective in his statement, as opposed to the epistemological perspective taken in this work, we both agree that contemporary scientists should not eschew philosophy, but embrace it and integrate its concepts within their scientific works and theories. As we shall see in this work, these philosophical issues are not just abstract concepts devoid of any physical consequences, but play a central role in our argument for a discretized S-T, with this discretization leading to measurable effects on the motion of massive particles.
The philosophical school of thought called empiricism, and its off-shoot logical positivism (LP), connect measurement to existence (``What cannot be measured, \textit{ipso facto}, does not exist" \cite{Craig}). This measurement-being definition can be misinterpreted to mean that whatever cannot be measured with existing technology does not exist \footnote{For example, it has been stated that Ernst Mach doubted the existence of atoms because they could not be directly and individually measured at the time \cite{Schommers232}. This is nonsense of course.}. However, the proper interpretation of LP is: whatever can \textit{never} be measured by \textit{any} observer, regardless of how advanced one's technology, does not exist. This is the view taken in this paper. Note that the results of this section are themselves not entirely new; it is generally known that the quantums of space and time are on the order of the Planck length ($l_p=1.62 \times 10^{-35}$ m) and Planck time ($\tau_p = 5.39 \times 10^{-44}$ s) respectively. However, a clear step-by-step demonstration of this seems to be lacking in the literature.
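The quoted Planck scales follow from $l_p=\sqrt{\hbar G/c^3}$ and $\tau_p=l_p/c$; a short numerical check (using CODATA-style constants, which are not taken from the text):

```python
# Numerical check of the quoted Planck scales:
#   l_p = sqrt(hbar * G / c^3),  tau_p = l_p / c
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m / s

l_p = (hbar * G / c**3) ** 0.5
tau_p = l_p / c

# Agreement with the quoted values to within 1%:
length_ok = abs(l_p - 1.62e-35) / 1.62e-35 < 0.01
time_ok = abs(tau_p - 5.39e-44) / 5.39e-44 < 0.01
```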
To calculate the shortest distance of space, we first have to review how one measures distances. Einstein instructed us that two probe-particles and a light-pulse (that serves as a signal-particle) are needed when measuring a distance in space (Fig. \ref{fig:Fig2}) \cite{Helliwell2010}. Because any larger distance will be built up from multiples of the fundamental length (i.e., the quantum of spatial distance ($\chi$)), it suffices to focus on the \textit{minimum} measurable separation of these probe-particles, as shown in Fig. \ref{fig:Fig3}. Far into the future, technology may have advanced to such a degree that probe-particles can be made to be as precise and compact as nature allows. Such probes may be elementary particles with the same mass $m_{probe}$ that perform different functions. One probe ($P_A$) emits and receives a signal-particle ($P_S$) (e.g., a photon), with $P_S$ also being as spatially compact as possible. A separate and distinct probe ($P_B$) reflects $P_S$. The questions then are: how small can one make the probes (and therefore how precisely can their positions be determined), and what is the minimum separation between their two centers? This minimum separation is $\chi$ - a quantity called a ``hodon" by Silberstein \cite{Silberstein} \footnote{There exist other arguments aiming to prove or demonstrate the discrete nature of space. For example, Dowker \cite{Dowker} states that ``the most convincing argument that the scale at which the continuum breaks down is the Planck scale is the finite value of the entropy of a black hole \cite{Sorkin}". Several of these methods have merit, but the method described in this work to demonstrate the discrete nature of space also provides an ideal foundation on which to discuss all the main focuses of this paper.}.
\begin{figure*}
\centering\includegraphics[width=0.75\textwidth]{Fig2_11_11_2016.pdf}
\caption{To measure distances, the most accurate ``ruler" consists of two probe-particles $P_A$ and $P_B$ between which the spatial interval is to be measured. $P_A$ emits and receives the signal particle $P_S$, and $P_B$ instantaneously reflects $P_S$.}
\label{fig:Fig2}
\end{figure*}
\begin{figure*}
\centering\includegraphics[width=0.75\textwidth]{Fig3_11_11_2016.pdf}
\caption{\textbf{Top:} A system to measure the smallest spatial separation between two distinct probe-particles (in darker gray). Note that $P_S$ is not shown in this figure. The probe-particles need to remain entirely distinct, as spatially compact as possible, and be able to emit, receive or reflect a signal particle. These conditions set the radii of $P_A$ and $P_B$ to $2l_p$ and their masses to $m_p / 2$. \textbf{Bottom:} The set of all continuous space x-values (denoted as $\Delta x_n$) within each sphere that is mapped to a single x-value in discrete space, namely $\tilde{x}_n$. This mapping is done sequentially, starting with $\Delta x_0$ $\rightarrow$ $\tilde{x}_0$ and $\Delta x_1$ $\rightarrow$ $\tilde{x}_1$, and then to the lighter gray circles in the figure that represent the subsequent positions in discrete space, i.e., the mappings $\Delta x_2$ $\rightarrow$ $\tilde{x}_2$ and $\Delta x_3$ $\rightarrow$ $\tilde{x}_3$.}
\label{fig:Fig3}
\end{figure*}
To estimate this minimum separation, one requires that the probe-particles remain distinct (i.e., non-overlapping). If both probe-particles have the same mass $m_{probe}$, then the minimum required separation is given by the Compton wavelength ($\lambda_c=\hbar / m_{probe}c$) of the particles, as shown in Fig. \ref{fig:Fig3}. As for how small the probes can be: they must be of a size such that they are able to perform their assigned roles. Hence, $P_A$ needs to be able to emit some type of signal-particle; therefore it must have a radius at least as large as the Schwarzschild radius $R_s=2Gm_{probe} / c^2$, otherwise the particle would be a black hole from which no particle can escape. This emitted signal-particle must be reflected by $P_B$; this sets the minimum radius of $P_B$ to $R_s$ as well. Equating $\lambda_c$ and $2 R_s$ yields the mass of the probes and the minimum separation of the probes:
\begin{align}\label{probeproperties}
m_{probe} = \frac{1}{2} \sqrt{\frac{\hbar c}{G}}=\frac{1}{2} m_p=1.09 \times 10^{-8}\ \mathrm{kg} && \chi = 2 \sqrt{\frac{\hbar G}{c^3}}=2 l_p = 3.24 \times 10^{-35}\ \mathrm{m}
\end{align}
\noindent with $m_p=\sqrt{\hbar c / G}=2.18 \times 10^{-8}$ kg and $l_p=\sqrt{\hbar G / c^3}=1.62 \times 10^{-35}$ m as the Planck mass and Planck length respectively \cite{Planck}.
To determine the value of the quantum of time ($\beta$), usually called a chronon, assume that we have an elementary particle (e.g., $P_A$ of Fig. \ref{fig:Fig2} or \ref{fig:Fig3}) serving as part of a perfect ``clock", with this particle emitting photons at perfectly regular intervals. These photons are received by another receiver-particle ($P_B$) and analyzed \footnote{Note the different role of $P_B$ here; it is a receiver of $P_S$ as opposed to being a reflector of $P_S$.}. Assume that the position of $P_A$ is known as precisely as QM allows, namely within a spherical volume of diameter $\lambda_c$. We use the time intervals between successive photons incident upon $P_B$ to define a unit of time (i.e., $t_{interval}$). However, due to the uncertainty in where within $P_A$ the photon is emitted (namely anywhere within the clock's spherical volume), each ``tick" at the receiver can arrive either sooner or later than it ideally should (in comparison to the case where $P_A$ is infinitely small), by an amount of time $t_q=\lambda_c / c$, where $c$ is the speed of light. To minimize this uncertainty, one would choose an emitter-particle with as small a $\lambda_c$ as possible while having the photons still able to escape the clock. This leads to $P_A$ having a radius of $\lambda_c = 2 l_p$, and the smallest measurable duration in time being:
\begin{align}\label{bergson}
\beta =\frac{2 l_p}{c}=2\tau_p=1.08 \times 10^{-43}\ \mathrm{s}
\end{align}
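As a numerical check of Eqs. \eqref{probeproperties} and \eqref{bergson}, the following minimal Python sketch (an illustration for this paper, assuming standard CODATA-style values of $\hbar$, $G$ and $c$) reproduces the quoted values of $m_{probe}$, $\chi$ and $\beta$, and confirms that $\chi / \beta = c$:

```python
import math

# Standard CODATA-style constants (SI units) -- an assumption of this sketch
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

m_p = math.sqrt(hbar * c / G)      # Planck mass
l_p = math.sqrt(hbar * G / c**3)   # Planck length

m_probe = m_p / 2   # probe mass, Eq. (probeproperties)
chi = 2 * l_p       # quantum of length (hodon), Eq. (probeproperties)
beta = chi / c      # quantum of duration (chronon), Eq. (bergson)

print(f"m_probe = {m_probe:.3g} kg")  # ~1.09e-8 kg
print(f"chi     = {chi:.3g} m")       # ~3.2e-35 m
print(f"beta    = {beta:.3g} s")      # ~1.08e-43 s
print(f"chi/beta = c? {math.isclose(chi / beta, c)}")
```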
One can argue that the values of the quanta of length and time may be somewhat different upon a more careful analysis of the processes of signal-particle emission, reflection and detection. For example, one may say that conservation of momentum would require $P_A$ and $P_B$ to be more massive than $P_S$, such that $P_S$ can catch up to $P_A$ ($P_A$ may experience recoil upon emission of $P_S$) on its return trip. Also, the act of reflection and detection of $P_S$ may not be instantaneous with the arrival of $P_S$, but may each require an additional $2\tau_p$ of duration; this would make $\beta$ equal to $6\tau_p$ instead of $2\tau_p$. However, whatever the value of $\beta$ is ultimately determined to be, the ratio $\chi / \beta$ must be $c$. And, as was said before, exact values of the quanta of space and time are not needed for the main focuses of this paper; what is needed is only a demonstration that minima exist for these entities.
As was said in the introduction, now that a deductive argument has been developed to justify discrete S-T, we make the significant step of elevating $\chi$ and $\beta$ to the status of fundamental constants of nature, rather than derived quantities.
\section{Mapping Points from Continuous Space to Discrete Space}
\label{sec:pointmap}
For the purposes of this paper, we only need to know how to treat space between pairs of particles, and this is clearly described in Fig. \ref{fig:Fig3}. We then use this information to counter the Weyl-tile argument against discrete space, derive a modified Pythagoras theorem, and discuss all other matters in this paper. However, before moving on to these issues, a short discussion on the issue of mapping points from continuous space to discrete space is interesting, even if it is not entirely necessary for the rest of this work.
After showing that points in space have a nonzero size, one can follow two different paths. One path involves divorcing ourselves completely from the current prevailing view of space and time as the ``arena" in which particles interact: ``space is not a stage, which might be either empty or full, onto which things come and go" \cite{Smolin2002}. Instead, one should view S-T as nothing more than a collection of spatial (temporal) separations between particles (events), which themselves are just more quantum mechanical parameters within a system's wavefunction $\Psi$. And just like other quantum mechanical parameters (e.g., energy, spin, momentum), a system makes spatial transitions by a raising or lowering of this quantum parameter by a hodon. Similarly for time, but presumably a system can only experience a raising (not a lowering) of the time parameter by a chronon. This viewpoint is gaining increased interest from the research community due to its elegance and simplicity \cite{Smolin2002}; however, this concept involves an extreme reevaluation of space and time, and many problems with it remain unresolved.
Another, more conservative path is to view continuous space as a ``dual" space of discrete space, with continuous space being a convenient but artificial space in which to solve problems. Once solutions are obtained in this dual space, one must map these solutions to real space (i.e., discrete space). For example, in the field of crystallography, one typically performs calculations in reciprocal space (the dual space of coordinate space) to determine the allowed directions of x-ray scattering by a crystal \cite{Ashcroft}. Even though this viewpoint has some attractive aspects, it also has some problems. One of them is that there is not a \textit{single} unambiguous mapping from continuous space to discrete space. There are different mappings for each pair of particles; there are as many continuous-to-discrete space mappings as there are particle-pairs in the universe. But once again, we do not have to subscribe to either of these viewpoints; only the information in Fig. \ref{fig:Fig3} is necessary.
\section{Weyl Tiles and Leopold's Theorem}
\label{sec:Weyl}
In 1949 Hermann Weyl introduced what has come to be viewed as the definitive argument against discrete S-T \cite{Weyl1949}. Weyl's argument uses a construction (shown in Fig. \ref{fig:Fig4}) in which space is defined \textit{a priori} -- namely, an absolute space laid out in a grid for all particles to reside within, and on which all measurements are made. Distances in this model are required to be some integer multiple of $\chi$; \textit{no fractional values of $\chi$ are allowed}. Weyl maintained that the distance ($\overline{AB}$) from the center of one tile (Tile A in Fig. \ref{fig:Fig4}) to the center of the next tile (Tile B) along the triangle's side, and the distance ($\overline{AC}$) from the center of Tile A to the center of Tile C along the triangle's diagonal are equal: $\overline{AB}=\overline{AC}=\chi$. If the concept of absolute space and Weyl's conception of distances in discrete space are accepted, then the length of the side is equal to the length of the diagonal for a square of \textbf{any} size. Weyl then states that because all measurements to date have yielded a length for the diagonal of a square that is a factor of $\sqrt{2}$ longer than the length of the square's side, space cannot be discrete. Weyl's argument is unassailable, provided that his assumption of absolute space is valid. However, such an assumption is wrong, and Mach's concept of non-absolute (NA) space provides a refutation of Weyl's argument and a path forward toward a modified Pythagoras's theorem.
\begin{figure}
\centering\includegraphics{Fig4_11_11_2016.pdf}
\caption{The Weyl construction that shows an \textit{a priori} defined lattice. All distances, from the center of one tile to the center of any neighboring tile have to be separated by integer multiples of $\chi$; \textit{no fractional values of $\chi$ are allowed}. Thus the length of the diagonal is equal to the length of the side of the square, regardless of the size of the square.}
\label{fig:Fig4}
\end{figure}
\begin{figure}
\centering\includegraphics{Fig5_11_11_2016.pdf}
\caption{The modified Weyl construction that does not assume absolute space. Distances are measured along each path according to the rules described in the text. For this particular triangle formed by points $A$, $B$ and $C$, the distance along the diagonal is equal to the lengths of the triangle's base and height. This is because only one jump of $\chi$ is needed along the diagonal for the sphere defining $P_S$ to partially overlap the sphere centered about $C$, and therefore be at the same position in discrete space.}
\label{fig:Fig5}
\end{figure}
\begin{figure}
\centering\includegraphics{Fig6_11_11_2016.pdf}
\caption{For a larger triangle with the lengths of its sides equal to $3\chi$, $P_S$ needs 4 jumps of $\chi$ along the diagonal such that its defining sphere overlaps the sphere defining point C. Thus the length of the hypotenuse relative to the length of the sides is $4/3=1.333$.}
\label{fig:Fig6}
\end{figure}
Our counter-argument starts by rejecting the first step in Weyl's construction where he assumed absolute space; we instead assume NA-space. In NA-space, a particle can jump in \textit{any} direction as long as the magnitude of the jump is $\chi$. We then construct a system to measure the distances of a square's side and diagonal. The system is composed of three particles $P_A$, $P_B$, $P_C$ at positions $A$, $B$, and $C$ respectively, with $P_A$, $P_B$ and $P_C$ able to emit, reflect or receive a signal-particle $P_S$ (Fig. \ref{fig:Fig5}). The particles $P_A$, $P_B$, $P_C$, and $P_S$ all have diameters equal to $\chi$. We first construct the smaller right triangle shown in Fig. \ref{fig:Fig5} such that the time (duration) between emission (at $A$) $\rightarrow$ reflection (at $B$) $\rightarrow$ reception (at $A$) of $P_S$ is $2\beta$.\footnote{It is assumed that reflection of $P_S$ by $P_B$ is instantaneous.} This time duration corresponds to a length for the path $\overline{AB}$ of $\chi$, the smallest length that is possible to measure. (Note that $P_S$ is only shown along the segment $A \rightarrow C$, but signal-particles also traverse the segments $A \rightarrow B$ and $B \rightarrow C$ when the measurements of these segments are performed.) Additionally, the system is constructed such that a similar measurement yields a length of $\chi$ for the path $\overline{BC}$. Thus the system is an isosceles right triangle with $\overline{AB}=\overline{BC}=\chi$. However the length of the diagonal $\overline{AC}$ is \textit{not} $\sqrt{2} \chi$. A signal-particle emitted by $P_A$ (centered about \textit{A}) towards $P_C$ (centered about \textit{C}) makes its first discrete jump of $\chi$ and \textit{already}, the sphere that specifies the position of $P_S$ overlaps with the sphere defining the position $C$.
Hence, $P_S$ has \textit{arrived} at $C$, will be instantaneously reflected by $P_C$, and will propagate back to $P_A$; a process that takes the same duration $2\beta$ as compared with $P_S$ traveling the path $P_A$ $\rightarrow$ $P_B$ $\rightarrow$ $P_A$. Thus, the length of the hypotenuse is equal to the lengths of the sides, all being $\chi$, and thus Pythagoras's theorem is violated.
Looking at a larger triangle in Fig. \ref{fig:Fig6}, the lengths of the sides are $\overline{AB}=\overline{BC}=3\chi$. When determining the distance $\overline{AC}$, one can focus on points (from the perspective of continuous space) on $P_S$'s and $P_C$'s boundaries that are the closest to each other -- these points are denoted as $\alpha_n$ and $\theta$ in Fig. \ref{fig:Fig6}, with $n$ as an integer denoting the jump number. When $\alpha_n$ is within the sphere that defines the point C, then $P_S$ has arrived at position $C$ and interacted with $P_{C}$. For the larger triangle with sides of length $3\chi$, $n=4$ jumps of $P_S$ are necessary for this to occur. Thus the measured length of the side is $3\chi$ and the length of the diagonal is $4\chi$, with a ratio of $4/3=1.33$. This value is closer to $\sqrt{2}=1.41$ than what was obtained for the smaller triangle, but it still disagrees markedly with Pythagoras's Theorem.
Consider an arbitrarily large isosceles right triangle with the lengths of the two sides as $m\chi$. It is easy to derive an equation for the distances from $A$ to $\alpha_n$ and from $A$ to $\theta$ (these quantities are shown in Fig. \ref{fig:Fig6}). Then using these equations, one can determine the lowest number of jumps ($n$) necessary for $P_S$ to arrive at point $C$:
\begin{equation}\label{LargeTriangle}
n \ge \sqrt{2}m-1
\end{equation}
\noindent where again, $m$ is an integer, $m\chi$ is the length of each side, and $n$ is the smallest positive integer that satisfies Eq. \eqref{LargeTriangle}. Figure \ref{fig:Fig7} shows a plot of the lengths (relative to $\chi$) of hypotenuses versus the lengths of the sides of isosceles right triangles. It is seen that for $m=1$ and $m=2$, the length of the hypotenuse is equal to the lengths of the sides. However, as the sides of the triangle become larger, the hypotenuse converges to $\sqrt{2}$ times the length of the side and Pythagoras's Theorem is restored.
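The convergence described above is easy to verify numerically. The following minimal Python sketch (the function name \texttt{hypotenuse\_jumps} is ours, introduced only for illustration) computes the smallest $n$ satisfying Eq. \eqref{LargeTriangle} and the ratio of hypotenuse to side:

```python
import math

def hypotenuse_jumps(m):
    """Smallest positive integer n satisfying n >= sqrt(2)*m - 1
    (Eq. (LargeTriangle)); the hypotenuse length is n*chi."""
    return max(1, math.ceil(math.sqrt(2) * m - 1))

# The ratio n/m approaches sqrt(2) = 1.414... as the triangle grows
for m in (1, 2, 3, 10, 100, 1000):
    n = hypotenuse_jumps(m)
    print(f"side = {m} chi, hypotenuse = {n} chi, ratio = {n / m:.3f}")
```

For $m=1$ and $m=2$ the ratio is exactly $1$, for $m=3$ it is $4/3$, and for large $m$ it converges to $\sqrt{2}$, as stated in the text.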
\begin{figure}
\centering\includegraphics{Fig7_11_11_2016.pdf}
\caption{A plot of the length of the hypotenuse of an isosceles right triangle versus the length of the sides, both for continuous space (solid line) and discrete space (points). Note that the lengths are normalized relative to $\chi$. The length of the hypotenuse is always less than $\sqrt{2}a$ (with $a$ being the length of the side), but never by more than $\chi$.}
\label{fig:Fig7}
\end{figure}
If the sides of the right triangle are not equal, say being $m\chi$ and $p\chi$ with $m$ and $p$ possibly being different integers, then Eq. \eqref{LargeTriangle} can be generalized to yield a new, more accurate form of Pythagoras's Theorem called Leopold's Theorem:
\begin{equation}\label{Leopold}
n \ge \sqrt{m^2+p^2}-1
\end{equation}
\noindent with $n$ being the smallest positive integer that satisfies Eq. \eqref{Leopold}.
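Taking Leopold's Theorem as the condition $n \ge \sqrt{m^2+p^2}-1$ (which reduces to Eq. \eqref{LargeTriangle} when $p=m$), the distance rule can be sketched in a few lines of Python (an illustrative sketch; the function name \texttt{leopold} is ours):

```python
import math

def leopold(m, p):
    """Discrete-space hypotenuse (in units of chi) for a right triangle
    with legs m*chi and p*chi: the smallest positive integer n
    satisfying n >= sqrt(m**2 + p**2) - 1 (Leopold's Theorem)."""
    return max(1, math.ceil(math.sqrt(m * m + p * p) - 1))

print(leopold(1, 1))  # 1: the hypotenuse equals the sides (Fig. 5)
print(leopold(3, 3))  # 4: the 3-chi triangle of Fig. 6
print(leopold(3, 4))  # 4: one hodon short of the continuum value 5
```

Note that the discrete hypotenuse is never shorter than the continuum value by more than one hodon.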
\vspace{10mm}
\noindent Already, we see that this model contains several attractive properties:
\begin{enumerate}
\item\label{Advantage 1} It fully embraces the concept of NA-space, and maintains isotropy.
\item\label{Advantage 2} Measurements of lengths are performed in ways accepted by science and adhering to the tenets of LP.
\item\label{Advantage 3} A single equation applies to all size scales and accommodates both Pythagoras's Theorem for any practical distance, and the requirement of discretized space (i.e., distances as integer multiples of $\chi$).
\end{enumerate}
This result is quite simple, and in hindsight obvious. However, the results were obtained in a logical, step-by-step way that provides a strong foundation on which to base discussions of: motion in discrete space, modification of the Lorentz transformations, time versus duration, and inertial anomalies of massive particles. Note that alternative distance formulas (i.e., metrics) have been proposed in the past. Hermann Minkowski proposed and studied several different metrics in the early $20^{th}$ century, the best known of which is the taxicab (or Manhattan) geometry \cite{Minkowski}. In fact, Hermann Weyl's tile argument is an example of the use of the Chebyshev distance formula \cite{Cantrell}.
It is important to note that Bendegem proposed a solution to the Weyl tile argument that has similarities to the solution described in this paper \cite{Bendegem1987}. In his work, he assumed that points and lines have finite extensions. He then postulated that the lengths of the triangle's sides and hypotenuse are given by the number of squares contained in the rectangle formed by the length to be measured and the finite ``width of the line segment" \cite{Bendegem1987}. He certainly was on a similar track to that followed in this paper, but -- and this is not to imply that there is any deficiency in his excellent work -- he did not couch the argument in the language of QM, GR and LP.
\section{Motion, Isotropy of Space, Constancy of the Speed of Light and Causality}
\label{sec:Potpourri}
In QM, one can construct a system such that it has one value (from within a set of $N$ discrete or continuous values) for a particular ``measurable" (e.g., energy, angular momentum, linear momentum, and position). Such states are called eigenstates of that measurable, and can be expressed by the wave function $\Psi_n$ (i.e., the eigenfunction), with $n=1,2, \dots N$. When operating on $\Psi_n$ with the quantum mechanical operator associated with the measurable, the result is the product of a value of the measurable (i.e., the eigenvalue) and $\Psi_n$. A system can undergo a transition from one eigenstate to another, either spontaneously or in response to external stimuli. For example, Niels Bohr accurately described the spectrum of light emitted from a hydrogen atom by assuming that angular momentum ($AM$) is quantized in multiples of $\hbar$, such that an electron in a hydrogen atom can only make transitions from an initial eigenstate with $AM_{initial}$ to a final eigenstate with $AM_{final}=AM_{initial} + m \hbar$, where $m$ is a positive or negative integer. Standard techniques have been developed that provide the probability that such quantum mechanical transitions occur over a certain period of time \cite{Sakurai2011}. These concepts can be applied to a particle's position, as discussed below.
In continuous space, it is assumed that the position operator $\hat{x}$ is equal to the variable $x$ and the eigenfunctions are $\delta (x-x_n)$, with an infinite set of continuous eigenvalues $x_n$. For discrete space, the eigenvalues form an infinite but discrete set $x_n= \{ \dots,-2\chi,-\chi,0,\chi,2\chi, \dots \}$ \footnote{The eigenvalues really denote the separation between two particles and not an absolute position.} and the eigenfunctions (expressed using continuous-space coordinates and expressing only their $x$ dependence) are:
\begin{equation}\label{eigenfunction}
\Psi_n = \frac{1}{\sqrt{\chi}} \left [ H(x-x_n+\chi / 2)-H(x-x_n-\chi / 2) \right ]
\end{equation}
\noindent where $H(x)$ is the Heaviside function.
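A quick numerical check (a Python sketch, working in units where $\chi = 1$) confirms that the box eigenfunctions of Eq. \eqref{eigenfunction} are normalized and that neighboring position eigenstates are orthogonal:

```python
import numpy as np

chi = 1.0  # work in units where the hodon chi = 1
x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]

def psi(x, x_n):
    """Position eigenfunction of Eq. (eigenfunction): a box of height
    1/sqrt(chi) on [x_n - chi/2, x_n + chi/2], zero elsewhere."""
    return (np.heaviside(x - x_n + chi / 2, 0.5)
            - np.heaviside(x - x_n - chi / 2, 0.5)) / np.sqrt(chi)

norm = np.sum(psi(x, 0.0) ** 2) * dx              # integrates to 1
overlap = np.sum(psi(x, 0.0) * psi(x, chi)) * dx  # neighbors: integrates to 0
print(round(norm, 3), round(overlap, 3))  # 1.0 0.0
```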
During a single chronon of duration, a particle in a particular position eigenstate $\Psi_n$ can either remain in that eigenstate (i.e., stay at the position $x_n$) or make a transition to another position eigenstate $\Psi_m$ (i.e., move to a position $x_m=x_n \pm \chi$). In this model, when a particle is said to be ``moving" at a velocity $v$, one really means that the particle is undergoing $M$ spatial translations (of magnitude $\chi$) during $N$ temporal durations (of magnitude $\beta$) with $M$ and $N$ being large integers, such that $v \simeq M\chi / N\beta = \left( M / N \right) c$. But when assessing the system at the finest possible temporal resolution (i.e., every $\beta$ in time), one sees that the instantaneous velocity of a particle is either zero or $c$ -- the particle either makes a $\chi$ spatial translation or it does not over this temporal duration. However, a common question is whether such staccato movement is physical \cite{Stanford}. Is not the particle traveling at a velocity $c$ over this time duration ($\beta$), and therefore does not SR predict that time is maximally dilated, length maximally contracted and mass maximally burgeoned, thereby invalidating this model of motion? In the next section we discuss one of the most important conclusions of this work that shows that this is not the case.
A couple of other important things concerning motion should be noted. First, for each time step of $\beta$, a particle can only make spatial translations of magnitude $\chi$, but this jump can be in any direction, as determined either by an external stimulus or due to a spontaneous translation. This rule then provides the important property of isotropy. Second, because any particle can translate only one hodon per chronon duration, a maximum velocity of $c=\chi / \beta$ is established -- a speed limit that cannot be exceeded by any particle. To justify this statement, suppose a particle has transitioned from $x_n$ to $x_n+2\chi$ over a duration of $\beta$; one would naturally ask: at what time was the particle at the position $x_n+\chi$? Presumably the answer is: after a duration less than $\beta$, perhaps $\beta / 2$. But since $\beta$ is the smallest possible duration, this is not possible. Also, there is no ``skipping" of position states; QM tunneling or any other transition that would allow a particle to skip one or more position states is prohibited \footnote{This is different from other measurables in quantum mechanics, such as energy or angular momentum, in which a system can skip intermediate states.}. In its strict adherence to the principle of NA-space and its postulate of discretized S-T, this theory clearly predicts a constant speed of light that is independent of the velocity of a nonaccelerating reference frame. This is an important result because it shows that the constancy of $c$ is not fundamental, but rather a consequence of the more fundamental principles of S-T discretization and NA-space.
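The staccato picture of motion described above can be illustrated with a toy simulation (a Python sketch; the fixed jump probability per chronon is an illustrative assumption, not part of the model): each chronon the particle either jumps one hodon or stays put, so the instantaneous speed is always $0$ or $c$, while the average velocity over many chronons is $(M/N)c$:

```python
import random

random.seed(1)
N = 100000     # number of chronons observed
p_jump = 0.25  # illustrative probability of a jump per chronon (an assumption)

# Each chronon the particle jumps one hodon (1) or stays put (0), so the
# instantaneous speed is always 0 or c -- never anything in between.
jumps = [1 if random.random() < p_jump else 0 for _ in range(N)]
M = sum(jumps)

print("instantaneous speeds observed (in units of c):", sorted(set(jumps)))
print(f"average velocity over {N} chronons: {M / N:.3f} c")
```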
Continual consternation concerning causality can cease \cite{Hag14a,Dirac38}. Inquiring about displacements, positions, mechanics, kinetics or anything else within any one discrete point (i.e., within a sphere of diameter $\chi$) is meaningless. Moot, then, is the debate as to how a force is instantaneously transmitted across one Weyl tile (or across a sphere of diameter $\chi$) such that both sides accelerate identically and synchronously in response to a force \cite{Hag14a}. The important point that has been missed in this debate is that either side of the sphere (in fact the entire sphere) is the \textit{same point in real space (i.e., discrete space)} -- one only encounters apparent causality problems when incorrectly viewing the situation from the artificial perspective of continuous space.
\section{Modified Time Dilation and Travel at the Speed of Light}
\label{sec:lorentz}
In SR, the typical calculation of time-dilation and length contraction starts with the consideration of a clock by two observers in different reference frames (RFs), as shown in Fig. \ref{fig:Fig8}. One observer (O1) is at rest in the train station, and the other observer (O2) is on the train traveling at a speed $v$. Einstein envisioned the use of a ``light-clock" as an ideal clock with which to measure the passage of time from the two observers' perspectives \cite{Helliwell2010}. This light-clock is on the train and is composed of an emitter/receiver (E/R) of a photon at position $P$ and a mirror placed at a position $M$ that is a distance $h$ vertically (in O2's perspective) above $P$. Now consider a photon emitted vertically (again in O2's RF) from the E/R at $P$ towards $M$. After emission, it propagates to the mirror at $M$ where it is reflected, and then propagates back to $P$. The duration of this process is $\Delta t'=2h / c$. (Note that primed (unprimed) coordinates correspond to O2's (O1's) RF.) Changing perspectives to that of O1's, the photon's trajectory is not vertical but maps out two back-to-back right triangles (Fig. \ref{fig:Fig8}). Since the speed of the photon is $c$ in both RFs, the total duration of the process $P \rightarrow M \rightarrow P$ is $\Delta t=2d/c$. Also, the distance traveled by the E/R is $v \Delta t$. So far, this calculation has been set up in the conventional way that can be found in any textbook on relativity \cite{Helliwell2010}, but this is the juncture at which a continuous S-T model and a discrete S-T model diverge.
Conventional SR uses Pythagoras's theorem that gives the hypotenuse $d$ as equal to $\sqrt{x^2+h^2}$. With $h=c \Delta t' /2$ and $x=v \Delta t / 2$, one obtains the well known formula:
\begin{equation}\label{StandardTimeDilation}
\Delta t = \Delta t_{Einstein}=\frac{\Delta t'}{\sqrt{1-v^2 / c^2}}=\gamma_{Einstein}(v) \Delta t'
\end{equation}
\noindent This equation predicts a shorter duration measured in a moving RF compared to the corresponding duration measured in a RF at rest. Such time dilations have been experimentally verified numerous times by studying the lifetimes of muons created by cosmic rays bombarding the atmosphere \cite{Rossi} and the time dilation experienced by atomic clocks on airplanes \cite{Hafele}. However the durations involved in these two cases, as well as all other experiments done to date, are much larger than $\beta$.
\begin{figure*}
\centering\includegraphics[width=0.75\textwidth]{Fig8_11_11_2016.pdf}
\caption{(a) An array of light clocks on a train traveling at a speed $v$. The clocks have values of $h$ as integer multiples of $\chi$. (b) One of the clocks in O2's RF on the train, and (c) the same clock in O1's RF at the station.}
\label{fig:Fig8}
\end{figure*}
Upon analyzing time dilation in discrete space, we will find that $\gamma$ depends not only on the velocity of the object (as conventional SR predicts), but also on the time between two measurements of some property of the object, i.e., the \textit{duration} of the measurement. We therefore express $\gamma$ as $\gamma(v,\Delta t')$ in this paper when the need arises to be explicit about $\gamma$'s dependencies. We thus construct the system shown in Fig. \ref{fig:Fig8}, composed of an array of clocks, all at rest in O2's RF, but with different values of $h_n$: Clock $1$ has $h_1=1 \chi$, Clock $2$ has $h_2=2 \chi$, and so on until Clock $N$ with $h_N=N \chi$. \footnote{It is more accurate to have all the clocks oriented along the width of the train car, namely aligned perpendicular to the direction of motion.}
The general procedure to calculate $\gamma$ for these clocks involves solving Eq. \eqref{Leopold} as one scans through two sets of parameters in a nested fashion (Fig. \ref{fig:Fig9}). (Matlab codes that implement the algorithms described in this paper are available as supplemental material posted online, as well as at the website given in \cite{website}.) The outer loop scans through values of $p$ and provides the dependence of $\gamma$ on the duration of the measurement. Thus, a particular clock, namely Clock $p$, is first chosen, corresponding to a height $h=p\chi=c \Delta t' /2$ with $p\in\{1,2,3\dots\}$. Once $p$ is set, an inner loop scans through the $x$ values, starting with $x=1\chi$, then $x=2\chi$ and so on to $x=m_{max}\chi$; in general $x=m\chi$ with $m \in \{0,1,2,\dots m_{max} \}$. With each $x$ value, one calculates the hypotenuse $d$ using Eq. \eqref{Leopold} and determines how many integer multiples of $\chi$ lie along the hypotenuse, say $n$. And because it is a light pulse that travels along the hypotenuse, the length of the hypotenuse is $d=n\chi=c\Delta t / 2$. Because no particle can travel faster than $c$, $n$ sets the upper limit of $m$ of the $x$ scan, namely $m_{max}=n$. One then simply collects the results of these calculations for $n$ and uses the following equations to calculate the velocity $v$ and $\gamma$:
\begin{subequations}
\begin{align}
v &= \frac{m}{n}c \hspace{10mm} m \in \{0,1,2,\dots,n\}\label{velocity}\\
\gamma &= \frac{n}{p}\label{gamma}
\end{align}
\end{subequations}
\begin{figure}
\centering\includegraphics[width=0.50\textwidth]{Fig9_11_11_2016.pdf}
\caption{The general procedure to calculate $\gamma$ as a function of duration and velocity using light-clocks. The algorithm \cite{website} scans through integer values of $p$ starting at $p=1$. For each value of $p$, $m$ is scanned from 1 to a maximum value at which the condition $n=m$ is satisfied, with $n$ being obtained from Eq. \eqref{Leopold}. When this occurs, the maximum velocity has been reached, namely $c$. In this figure, $p=7$, $m=4$, and $n=8$, corresponding to a duration of $\Delta t'=7 \times 2\beta$, a velocity of $v=(m/n)c=(4/8)c=0.5c$, and $\gamma=n/p=8/7=1.14$.}
\label{fig:Fig9}
\end{figure}
\noindent Equation \eqref{gamma} contains $\gamma$'s dependencies on velocity \textit{and} duration. Note that in this model, the velocity of the system is one value within a finite set of discrete values, rather than an infinite set of continuous values from $0$ to $c$. Figure \ref{fig:Fig10} shows $\gamma$ for durations corresponding to the first 15 clocks, namely durations from $\Delta t'=2\beta$ to $\Delta t'=30\beta$. In Fig. \ref{fig:Fig10}, the discrete set of $\gamma$ values (at the allowed velocities) are indicated with the ``$\circ$" markers. The solid black lines show $\gamma_{Einstein}$, which assumes continuous S-T and a continuum of possible velocities from $0 \rightarrow c$ (of course the black curves are the same for each duration).
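The nested-loop procedure of Fig. \ref{fig:Fig9} can be sketched in a few lines of Python (an illustrative reimplementation, not the supplemental Matlab codes \cite{website}). Here Leopold's Theorem is applied as $n \ge \sqrt{m^2+p^2}-1$, with the vertical path ($m=0$) taking $n=p$ jumps:

```python
import math

def leopold_n(m, p):
    """Smallest positive integer n with n >= sqrt(m**2 + p**2) - 1."""
    return max(1, math.ceil(math.sqrt(m * m + p * p) - 1))

def gamma_table(p):
    """Allowed (v/c, gamma) pairs for Clock p (h = p*chi, dt' = 2*p*beta),
    per Eqs. (velocity) and (gamma)."""
    rows = []
    m = 0
    while True:
        n = leopold_n(m, p) if m > 0 else p  # m = 0: light path is straight up
        rows.append((m / n, n / p))
        if m >= n:   # v = c reached; m cannot exceed n
            break
        m += 1
    return rows

# The worked example of Fig. 9: p = 7, m = 4 gives n = 8
v, g = gamma_table(7)[4]
print(f"v = {v:.2f}c, gamma = {g:.3f}")  # v = 0.50c, gamma = 1.143
```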
The dashed line denotes a very interesting and important phenomenon, namely a finite value of $\gamma$ for a speed $c$. If a particle has this value of $\gamma$ (denoted as $\gamma_{critical}$), then it will be measured as traveling at a velocity $c$ over a certain duration $\Delta t$. This result arises from the fact that in discrete space, the hypotenuse and one side of a right triangle can have equal lengths, something that is not possible in continuous space and in the conventional special theory of relativity. To obtain these important values of $\gamma_{critical}$ (which are dependent on $\Delta t'$), assume that an object is moving at the speed of light, hence $m=n$. Using Eq. \eqref{Leopold} with $m=n$ in Fig. \ref{fig:Fig9}, one can derive the equation given below:
\begin{equation}\label{ncritical}
n=n_{critical} \ge \frac{1}{2} \left( p^2-1 \right)
\end{equation}
\noindent where $n_{critical}$ is the smallest positive integer that satisfies Eq. \eqref{ncritical}. For large $p$, Eq. \eqref{ncritical} yields $p=\sqrt{2n_{critical}}$; this, in conjunction with $\gamma_{critical}=n_{critical}/p$, $\Delta t'=2p\beta$, and $\Delta t= \gamma_{critical}\Delta t'$ yields:
\begin{equation}\label{gammacritical}
\gamma_{critical}=\frac{1}{2}\sqrt{\frac{\Delta t}{\beta}}
\end{equation}
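As a numerical check of Eqs. \eqref{ncritical} and \eqref{gammacritical}, the short sketch below (in Python for illustration; the supplemental code referenced later in this paper is Matlab, and all names here are illustrative) computes $n_{critical}$ as the smallest positive integer satisfying Eq. \eqref{ncritical}, forms $\gamma_{critical}=n_{critical}/p$, and compares it with the large-$p$ limit $\frac{1}{2}\sqrt{\Delta t/\beta}$, using $\Delta t = \gamma_{critical} \times 2p\beta$:

```python
import math

def n_critical(p):
    """Smallest positive integer n with n >= (p**2 - 1)/2, per Eq. (ncritical)."""
    return max(1, math.ceil((p * p - 1) / 2))

def gamma_critical(p):
    """gamma_critical = n_critical / p (heights and hypotenuses counted in hodons)."""
    return n_critical(p) / p

for p in (7, 100, 10001):
    g = gamma_critical(p)
    dt_over_beta = g * 2 * p              # dt = gamma_critical * dt', with dt' = 2*p*beta
    print(p, g, 0.5 * math.sqrt(dt_over_beta))   # last two columns converge for large p
```

For $p=7$ this gives $\gamma_{critical}=24/7\approx 3.43$, consistent with the $v=c$ column of Table \ref{table:Table1}.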
\noindent Since the kinetic energy ($KE$) of a particle is related to $\gamma$ according to $KE=\left( \gamma - 1 \right) mc^2$, the energy that must be provided to a particle for it to be measured as traveling at a speed $c$ over a particular duration $\Delta t$ is:\footnotemark
\begin{equation}\label{KEcritical}
KE_{critical}=\left( \gamma_{critical}-1 \right) mc^2
\end{equation}
\noindent For example, consider an experiment where two probes are separated by a distance $\Delta d = 300$ nm. The probes precisely measure the time at which an electron (of mass $9.11 \times 10^{-31}$ kg) passes them. If the electron is traveling at a speed $c$, then $\Delta t = \Delta d/c = 1 \times 10^{-15}$ s $=1$ fs. Equation \eqref{KEcritical} yields a value of $2.5 \times 10^{19}$ eV, or 25,000,000 TeV. This value exceeds what is possible with the most powerful existing particle accelerators by a factor of $10^6$, but may not be impossible to reach with future accelerators. Note that values of $\gamma$ larger than $\gamma_{critical}$ are possible for any measurement duration, but the speed of the particle will be measured as $c$ for any $\gamma \ge \gamma_{critical}$ (see Fig. \ref{fig:Fig10}). Also, the longer the duration under which the measurement of the speed is performed \footnotemark[\value{footnote}]\footnotetext{Note that two position measurements at times $t_1$ and $t_2$ with $\Delta t=t_2-t_1$ are necessary to determine the speed of a particle.}, the greater $\gamma$ needs to be (meaning that more energy needs to be delivered to the system) in order for the measured speed of the object to remain as $c$. Of course, no object can travel faster than $c$, and no causality issues arise with this result since $\gamma$ always remains bounded and any particle's velocity must be less than or equal to $c$. Also, these results do not conflict with results from any experiments performed to date -- any realistic measurement duration has been at least twenty-five orders of magnitude greater than $\beta$ \cite{Sayed}.
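The electron example above can be reproduced with the following sketch. The SI constants, and in particular the assumption $\beta = \chi/c$ with $\chi = 2l_p$ and $l_p = 1.62\times10^{-35}$ m, are inserted by hand here and should be checked against the definitions earlier in the paper:

```python
import math

# Assumed SI constants; beta = chi/c with chi = 2*l_p is an assumption
# consistent with the value of l_p quoted in the text.
l_p  = 1.62e-35           # Planck length, m
chi  = 2 * l_p            # hodon, m
c    = 2.998e8            # speed of light, m/s
beta = chi / c            # chronon, s (~1.1e-43 s)
m_e  = 9.11e-31           # electron mass, kg
eV   = 1.602e-19          # J per eV

dt = 300e-9 / c                               # flight time over 300 nm at speed c (~1 fs)
gamma_crit = 0.5 * math.sqrt(dt / beta)       # Eq. (gammacritical)
KE_eV = (gamma_crit - 1) * m_e * c**2 / eV    # Eq. (KEcritical)
print(f"dt = {dt:.3g} s, gamma_critical = {gamma_crit:.3g}, KE = {KE_eV:.2g} eV")
```

The printed kinetic energy is approximately $2.5\times10^{19}$ eV, in line with the value quoted above.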
\begin{figure*}
\centering\includegraphics[width=0.75\textwidth]{Fig10_11_11_2016.pdf}
\caption{The calculated values for $\gamma$ versus duration and velocity (colored solid lines) with allowable values indicated by the point markers ``$\circ$", and $\gamma_{critical}$ is shown as the dotted black line. Also shown is $\gamma_{Einstein}$ versus $v$ (solid black). Speed is given relative to $c$, and duration is relative to $2\beta$. Note that a speed of $c$ is allowed for any duration, even for a particle with nonzero rest mass.}
\label{fig:Fig10}
\end{figure*}
\section{Time versus Temporal Duration and Inherent Problems with Ideal Light Clocks}
\label{sec:time}
This section discusses the important implications of a duration-dependent $\gamma$ on concepts of time, measured duration, and clocks. It was assumed by scientists that the light-clock was the ideal clock with which to study, nay define, time \cite{Canales2015}. Using these light-clocks, Einstein derived a $\gamma_{Einstein}$ factor (describing time dilation and Lorentz contraction) that appeared to be independent of the clock's tick-tock rates. He then made the significant conceptual leap of stating that the dilation of the physical clocks' tick rates represented the dilation of the flow of time itself (let us call this ``Time"). Thus he said that duration and Time are one and the same, that there is only one time - the time measured by the scientists' ideal light-clocks. In a famous 1924 debate with the leading thinker of the day on the concept of time, namely the philosopher Henri Bergson - and in a room full of philosophers - Einstein made the provocative statement: ``The time of the philosophers does not exist" \cite{Canales2015}. Bergson vehemently objected to the conflation of the scientists' ``measured time" and what he called ``real time", ``psychological time", or simply ``Time". Bergson thought that Time was connected to the flow of consciousness, and was an intuitive concept different from scientists' measured time. He also objected to the perceived ideal nature of the light-clock and the use of clocks at all to measure time \cite{Canales2015}. To address this debate and reach some conclusions, let us consider two of the clocks shown in Fig. \ref{fig:Fig8}, Clock $10^{24}$ with $h=10^{24}\chi$, and Clock $1$ with $h=1\chi$.
\vspace{5mm}
\noindent \textbf{Einstein's Clock(s): }
Consider any ``enormous" clock with $h$ much larger than $\chi$, say Clock $10^{24}$, such that $h(=10^{24}\chi)$ is approximately the size of the hydrogen atom. In this case, the hypotenuse agrees with Pythagoras's theorem to within $10^{-9}\%$. Thus for all practical purposes, time is dilated in the moving RF by the amount predicted by standard SR, given by Eq. \eqref{StandardTimeDilation}, and this clock's tick rate largely agrees with Einstein's predictions. This applies to any clock with larger values of $h$, i.e., slower tick rates.
\vspace{5mm}
\noindent \textbf{Bergson's Clock: }
Next, consider Clock 1 in Fig. \ref{fig:Fig8} -- a clock we shall call a Bergson clock (or B-clock) in the hopes that it may be able to measure Bergson's immutable ``Time". This clock has the E/R and mirror separated by only a single hodon. From Fig. \ref{fig:Fig10}, we see that $\gamma=1$ for a measurement duration of $2\beta$, regardless of the velocity of the system. This means that no time dilation occurs over this temporal duration, therefore O1 measures the same duration for one tick of this clock as that measured by O2. Perhaps this clock is the ideal tool with which to probe, to mirror, to represent Bergson's diaphanous ``Time"? \dots Alas, this is not the case. To see why, we need to consider simultaneous events in O1's and O2's RFs. But prior to this, we have to more clearly describe a couple of key aspects of the clocks on the train.
\vspace{5mm}
Let us assume that for each clock (Clocks 1 through $N$ in Fig. \ref{fig:Fig8}), whenever a photon completes a round trip, its detection and the emission of a subsequent photon is immediate, namely there is no temporal duration between these events \footnote{Even if this assumption needs to be modified, say by there being at least one chronon between reception of the photon and emission of a subsequent photon, this will not affect the qualitative conclusions of this work but only modify quantitative results in a straightforward fashion}. Additionally, we line up the clocks along the \textit{width} of the train so that they are aligned in a perpendicular direction (i.e. $\hat{y}$) relative to the direction of motion (i.e., $\hat{x}$). This eliminates any factors (e.g., length contraction, differences in positions) that may bring the results of this analysis into question. Also, assume that O2 starts all clocks at the same time $t=t'=0$, such that the first $N$ photons are simultaneously emitted by the $N$ clocks at exactly $t=t'=0$. Also, each clock instantly emits a signal-particle every time a photon is received by the E/R (see Fig. \ref{fig:Fig8}); this signal-particle propagates to, and is detected by, O1 such that the durations between ticks of each clock can be assessed by O1 \footnote{These signal-particles are photons that are as spatially localized as possible, namely Planck particles}. And finally, we note that because the train is traveling at a velocity $v$, the tick rate (as measured by the arrival of the signal-particles at O1) experiences the Doppler effect. In this paper, Doppler effects will be factored out of the results so that we can focus on the changes to the clocks' tick rates due only to relativistic effects and to the effects introduced by the discrete nature of S-T. We now study the simultaneity of photon detection by O2's B-clock (i.e.
Clock 1) and O2's other clocks $2 \rightarrow N$, with the ultimate aim of producing a correspondence between the ticks of the B-clock in O2's RF and the ticks of a second B-clock in O1's RF.
O2's B-clock's first emitted photon returns to the clock's E/R after a duration $\Delta t'_1=2\beta$ and instantly emits a signal-particle to O1. O1 receives the signal-particle, assesses the duration $\Delta t_1$, and concludes that $\Delta t_1 = \gamma (v,2\beta) \Delta t'_1$, in accordance with $\gamma(v,\Delta t')$ for the clock's velocity and the duration. (Note that in this section, we will express $\gamma$ as $\gamma(v,\Delta t')$ so as to indicate what velocity and duration are used in the calculation; also, the subscripts of $\Delta t_n$ and $\Delta t_n'$ represent the $n^{th}$ tick number of O2's B-clock.) O2's B-clock's second photon is detected by the clock after a duration (relative to $t'=0$) of $\Delta t_2'=2 \times 2\beta=4\beta$; this is the same duration required for the first photon of O2's Clock 2 to make a round trip and be detected by Clock 2. Thus, these two events are simultaneous (in both O2's and O1's RFs), namely the reception of O2's B-clock's second photon and the reception of O2's Clock 2's first photon. Subsequently, two signal-particles are emitted simultaneously, one from each clock, and later received (simultaneously) by O1. The question then arises as to what duration (relative to $t'=0$) O1 measures that corresponds to two ticks of Clock 1 and one tick of Clock 2. Is it $\Delta t_2=2 \times \gamma(v,2\beta) \Delta t_1'$? This would be the case in the standard treatment of SR. The answer is no -- in order for simultaneity to be preserved in measurement as it is in fact, the duration measured by O1 must be $\Delta t_2 = \gamma(v,4\beta) \Delta t_2'$. We continue this process with O1 assessing the duration (relative to $t'=0$) of each tick of O2's Clock 1, finding that the $M^{th}$ tick of Clock 1 is received at $\Delta t_M = \gamma(v,M \times 2\beta) \Delta t'_M = M \times \gamma(v,M \times 2\beta) \Delta t'_1$.
In Fig. \ref{fig:Fig11} we plot the correspondence between each tick of O2's B-clock and a tick of a B-clock in O1's RF for two cases: Case 1, where the train is traveling at approximately $v=0.5c$, and Case 2, where $v=c$. In Case 1, the velocity cannot always be exactly $v=0.5c$ because it is calculated by the ratio of hodons along the hypotenuse and base of the triangle in Fig. \ref{fig:Fig9}, as per Eq. \eqref{velocity}. Table \ref{table:Table1} provides (for the first 10 clocks only) the number of hodons along the height, base and hypotenuse of the triangle traced out by the photon; the corresponding speed of the train is also provided. It is seen in Fig. \ref{fig:Fig11} that O1 observes something rather odd with O2's B-clock, namely that its tick rate is very irregular. As measured by O1, the first six ticks of O2's B-clock (with $v=0.5c$) occur at the same time as the first six ticks of O1's B-clock, even though for ticks 3 and 5, the velocity of the train cannot be exactly $0.5c$. But then things go awry with this ``perfect" clock. The $7^{th}$ tick of O2's B-clock does not arrive until the $8^{th}$ tick of O1's B-clock. Also, both the $7^{th}$ and $8^{th}$ ticks of O2's B-clock arrive at the $8^{th}$ tick of O1's B-clock. Then, the $9^{th}$ tick of O2's B-clock corresponds to the $10^{th}$ tick of O1's B-clock. These irregular receptions of signal-particles from O2's B-clock continue, sometimes skipping and sometimes doubling-up on a tick of O1's B-clock. As the tick number $N$ becomes large, $\gamma(v,N \times 2\beta) \rightarrow \gamma_{Einstein}(v)$, thereby agreeing with standard SR.
Also included in Table \ref{table:Table1} and Fig. \ref{fig:Fig11} is the correspondence between O1's and O2's B-clocks for a train traveling at $c$ (and with $\gamma=\gamma_{critical}$) -- something not possible within the framework of standard SR, but allowed in discrete S-T. Upon commencing the measurements at $t=t'=0$, it is seen that subsequent ticks of O2's B-clock arrive at O1 after durations that grow rapidly (again, Doppler effects are excluded). In general, the $p^{th}$ tick of O2's B-clock arrives at the $n^{th}$ tick of O1's B-clock, with $p$ and $n$ related according to Eq. \eqref{ncritical}.
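This $v=c$ correspondence can be sketched in a few lines: per Eq. \eqref{ncritical}, the $p^{th}$ tick of O2's B-clock arrives at the $n^{th}$ tick of O1's B-clock, with $n$ the smallest positive integer satisfying the inequality. The snippet below (Python, illustrative names) reproduces the right-hand columns of Table \ref{table:Table1}:

```python
import math

def o1_tick(p):
    """O1 B-clock tick receiving the p-th tick of O2's B-clock at v = c,
    i.e. the smallest positive integer n with n >= (p**2 - 1)/2 (Eq. (ncritical))."""
    return max(1, math.ceil((p * p - 1) / 2))

print([o1_tick(p) for p in range(1, 11)])
# -> [1, 2, 4, 8, 12, 18, 24, 32, 40, 50], matching Table 1
```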
Concluding the discussion of these B-clocks, the important point is that they are not ideal, in the sense that time dilation still occurs with these clocks and that their tick rate is irregular. Thus, they cannot represent the immutable ``Time" that Bergson envisioned. \footnote{Potentially, one could slightly improve upon these clocks by replacing the mirror above each E/R with a receiver. Thus, temporal resolution could potentially be increased from $2\beta$ to $\beta$ -- however this will not eliminate the time dilation that occurs with these clocks.} However, these clocks are the best that nature allows. Thus, there is no clock that can measure Bergson's immutable ``Time" -- and if this Time cannot be measured, \textit{ipso facto}, it does not exist. Also, all physical processes occur on time scales much larger than $\beta$ - the fastest measured chemical reactions in the human body occur in the femtosecond time frame, over $10^{30}$ times larger than $\beta$ \cite{Sayed}. Thus, for all practical purposes, chemicals react at rates, objects age, and time can be said to ``flow" according to the time described by the standard rules of SR. Thus Bergson was correct in only two regards: there does exist an immutable component of time, namely the chronon; and Einstein's light clocks are not quite as ideal as most scientists believe. Concerning the former, the chronon is far from the concept of human-lived time Bergson envisioned; concerning the latter, the tick rate of light clocks can indeed be irregular and dilated, but in a predictable way as described in this work. Thus, Einstein's conflation of measured time with Time is valid -- and ironically, rather than the ``Time" that Bergson championed, Einstein's measured time is the psychological or lived time.
\begin{table}
\captionsetup{justification=centering}
\caption{The integer multiples of $\chi$ for the triangles traced out by the photons of O2's B-clock in O1's RF (see Figs. \ref{fig:Fig8}-\ref{fig:Fig9}). The integers $p$ and $n$ provide the tick correspondence between O2's B-clock and O1's B-clock (after Doppler effects are subtracted out). The height, base and hypotenuse are relative to $\chi$ and the speed is relative to $c$.}
\begin{tabular}{|c|c|c|c||c|c|}
\hline
\multicolumn{4}{|c||}{Velocity $\approx 0.5c$} & \multicolumn{2}{c|}{Velocity$=c$ and} \\ \multicolumn{4}{|c||}{ } & \multicolumn{2}{c|}{$\gamma=\gamma_{critical}$} \\
\hline
Height ($p$) & Base ($m$) & Hypotenuse ($n$) & Speed & Height & Base and \\
or & & or & & or & Hypotenuse or \\
O2's Tick & & O1's Tick & & O2's Tick & O1's Tick \\
\hline
1 & 0(1) & 1(1) & 0(1) & 1 & 1 \\
\hline
2 & 1 & 2 & 0.5 & 2 & 2 \\
\hline
3 & 1(2) & 3(3) & 0.33(0.66) & 3 & 4 \\
\hline
4 & 2 & 4 & 0.5 & 4 & 8 \\
\hline
5 & 2(3) & 5(5) & 0.4(0.6) & 5 & 12 \\
\hline
6 & 3 & 6 & 0.5 & 6 & 18 \\
\hline
7 & 4 & 8 & 0.5 & 7 & 24 \\
\hline
8 & 4 & 8 & 0.5 & 8 & 32 \\
\hline
9 & 5 & 10 & 0.5 & 9 & 40 \\
\hline
10 & 5(6) & 11(11) & 0.45(0.55) & 10 & 50 \\
\hline
\end{tabular}
\medskip
\parbox{\linewidth}{\scriptsize%
\textsc{Note}: If the desired velocity is not possible, results for the two speeds neighboring the desired speed are given. For example, for $p=1$, only the speeds of $0$ and $1c$ are possible, corresponding respectively to $m=0$ and $m=1$ for the base and with $n=1$ in both cases for the hypotenuse -- this is recorded in the table under the column labeled ``Speed" as 0(1), ``Base" as 0(1), and for ``Hypotenuse" as 1(1).}
\label{table:Table1}
\end{table}
\begin{figure*}
\centering\includegraphics[width=0.75\textwidth]{Fig11_11_11_2016.pdf}
\caption{The red squares, black circles and blue diamonds represent the corresponding ticks of O1's B-clock for each tick of O2's B-clock. O2's RF (and B-clock) has a velocity of approximately $0.5c$. Red squares indicate times at which the velocity $0.5c$ is not strictly possible (because of Eq. \eqref{velocity}), black circles indicate times that allow $v=0.5c$, and blue diamonds indicate two sequential ticks of O2's B-clock that arrive at the same tick of O1's B-clock. Purple triangles represent the correspondence of the ticks of O2's and O1's B-clocks in the case where O2's RF is traveling at $v=c$ relative to O1's RF. This shows that O2's time is not entirely ``frozen" from O1's perspective, as would be the case with standard SR.}
\label{fig:Fig11}
\end{figure*}
\section{\label{sec:LeopoldCrystal}Ordering John A. Wheeler's Quantum Foam: Measurable Consequences of Discrete Space}
\bigskip
\includegraphics[width=0.5\textwidth, center]{goya_when_reason_sleeps.pdf}
\medskip
Goya's etching was a critique of pre-Enlightenment practices of Spanish society in his time and the ``vulgar prejudices and lies authorized by custom, ignorance or interest, those that he has thought most suitable matter for ridicule". However, a new generation of philosopher-physicists has striven to awaken Reason from its recent slumber \cite{Huggett,Schommers232}. In a caption to the work, Goya makes clear that he is not advocating for society to use reason alone, but rather a synthesis of reason and imagination: ``Imagination abandoned by reason produces impossible monsters; united with her, she is the mother of the arts and source of their wonders". Similarly, Popper laments the fact that no contemporary scientist would dare propose such a bold concept as Anaximander did, devoid of observational evidence, that the Earth is freely poised and stationary in mid-space and has the shape of a drum - a thesis that Popper believes is the naissance of modern science \cite{Popper}. In this section, we use our imagination and propose possible mechanisms to account for some (but not all) of the observed effects attributed to dark matter and dark energy.
\vspace{5mm}
In 1957 John A. Wheeler sought to show that all of classical physics, particle physics included, is ``purely geometrical and based throughout on the most firmly established principles of electromagnetism and general relativity" \cite{Wheeler1957}. Wheeler made use of well developed concepts in quantum electrodynamics, and showed that the fine structure of space is composed of a random array of quantum ``wormholes", with each wormhole having a pair of charges, $q=\pm\sqrt{4 \pi \epsilon_o \hbar c}$, and each charge having a mass $m=m_p=E / c^2 = \sqrt{\hbar c/G}=2.18 \times 10^{-8}$ kg. He stated that these charges (i.e., Planck particles) have an average spacing of $l_p=1.62 \times 10^{-35}$ m, or half of $\chi$. Otherwise the particles are randomly distributed; hence he called this ``quantum foam".
However, if space is discretized, a random distribution of Planck particles involving fractional distances of $\chi$ is not allowed. Order must be imposed on the structure, changing the foam into a crystal (Fig. \ref{fig:Fig12}). The constituent particles of this crystal all have a very large mass ($m_p$) relative to their charge ($q_p$) when compared to any other naturally occurring elementary particle. This structure then forms a gravity crystal (GC) that is described in detail in \cite{Crouse2016}.
\begin{figure}
\centering\includegraphics{Fig12_11_11_2016.pdf}
\caption{The universe-wide gravity crystal that has a cubic lattice, a lattice constant of $\chi$, and with a basis of one particle of mass $m_c$.}
\label{fig:Fig12}
\end{figure}
Since both the gravitational and electromagnetic forces have a $1 / r^2$ dependence, one can use techniques within the field of solid-state physics \cite{Ashcroft} to calculate the behavior of particles traveling within the GC. Let the GC be composed of an array of particles, all with identical mass $m_c$, and with one particle at each position given by $\vec{R} = n_x \chi \hat{x} + n_y \chi \hat{y} +n_z \chi \hat{z}$ with $n_x$, $n_y$, and $n_z$ being integers. The GC creates the following potential energy profile (produced by \textit{all} the GC's constituent particles) for a particle (electrically neutral and mass $m_{particle}$) traveling within it:
\begin{equation}\label{Coulombic}
V(\vec{r})=-G m_{particle} m_c \sum_{\vec{R}} \frac{1}{\left\vert \vec{r}-\vec{R} \right\vert}
\end{equation}
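As an illustration of Eq. \eqref{Coulombic}: the sketch below truncates the sum to sites with $|n_{x,y,z}| \le N$ and uses scaled units $G = m_{particle} = m_c = \chi = 1$. This cutoff is purely for illustration -- in the universe-wide crystal every term is attractive, so the full untruncated value deepens without bound as $N$ grows:

```python
import itertools

def potential(r, N=8, G=1.0, m_particle=1.0, m_c=1.0, chi=1.0):
    """Truncated Eq. (Coulombic): V(r) = -G*m_particle*m_c * sum_R 1/|r - R|
    over lattice sites R = (nx, ny, nz)*chi with |n| <= N."""
    x, y, z = r
    V = 0.0
    for nx, ny, nz in itertools.product(range(-N, N + 1), repeat=3):
        d = ((x - nx * chi) ** 2 + (y - ny * chi) ** 2 + (z - nz * chi) ** 2) ** 0.5
        if d > 0:                      # skip a site coinciding exactly with r
            V -= G * m_particle * m_c / d
    return V

# Periodic corrugation: the potential is deeper (more negative) close to a
# lattice site than at the body center of a unit cell.
print(potential((0.1, 0.0, 0.0)), potential((0.5, 0.5, 0.5)))
```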
An important quantity to calculate is the particle's dispersion curve ($\omega$ - $k$ curve) that plots the energy ($\mathcal{E}=\hbar \omega$) of the particle versus its wave vector $k$, with $\mathcal{E}$ and $\omega$ both being functions of $k$. To calculate this dispersion curve, one can use the tight-binding method (TBM), a nearly free particle method, an empirical pseudo-potential method (EPM), or several other methods \cite{Ashcroft,Vas,Crouse2016,Strange} to solve the full Schr\"{o}dinger's equation for this system:
\begin{equation}\label{Schrodinger}
-\frac{\hbar^2}{2 m_{particle}}\nabla^2\psi+V(\vec{r})\psi=\mathcal{E}\psi
\end{equation}
\noindent Matlab code to implement the EPM and TBM is available as supplemental material posted online, as well as at the website given in \cite{website}. Note that in the calculations of this section, we have used the result of discretization of space in units of $\chi$, but have used the conventional Pythagoras's theorem. Future work is needed to implement Leopold's theorem into EPM, TBM and other band diagram algorithms; it is reasonable to suspect that the use of Leopold's theorem will significantly reduce the anisotropy of the band diagram shown later in this section.
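The supplemental EPM/TBM code is Matlab; as a self-contained illustration of what such a band calculation does, the Python sketch below solves the textbook 1-D, two-plane-wave (nearly free particle) model in units $\hbar = m_{particle} = a = 1$. It is not the paper's 3-D GC calculation; it only demonstrates how a single Fourier component $V_1$ of a periodic potential opens a gap at the BZ boundary:

```python
import math

def bands_2wave(k, V1=0.3, a=1.0):
    """Two-plane-wave model: basis {e^{ikx}, e^{i(k - 2pi/a)x}} coupled by the
    lattice potential's Fourier component V1. Returns (lower, upper) band energies."""
    G = 2 * math.pi / a
    T1, T2 = 0.5 * k * k, 0.5 * (k - G) ** 2   # free kinetic energies, hbar = m = 1
    mid, half = 0.5 * (T1 + T2), 0.5 * (T1 - T2)
    s = math.sqrt(half * half + V1 * V1)       # 2x2 eigenvalues in closed form
    return mid - s, mid + s

# Gap opens at the Brillouin-zone boundary k = pi/a; its size is 2*|V1|:
lo, up = bands_2wave(math.pi)
print(up - lo)   # -> 0.6
```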
Many interesting properties can be gleaned from a particle's dispersion curve, including bandgaps, Brillouin zones (BZs) and their boundaries, and effective inertial mass. Each of these provides important information on how particles behave in crystals, sometimes predicting seemingly bizarre behavior. For example, energy bandgaps indicate forbidden energy ranges for particles, but particles can ``jump" this gap by acquiring the necessary energy from another particle. BZs provide information about the range of momentum that a particle can have, including an effective maximum momentum. Finally, one can calculate an effective inertial mass ($m_{inertial}$) of a particle traveling within a crystal. This parameter is usually written as the inverse of $m_{inertial}$, and also as a tensor $\left(1 / m_{inertial}\right)_{i,j}$, with $i,j \in \{x,y,z\}$. This then allows an applied external force $F_j$ in one direction (with $j \in \{x,y,z\}$) to produce an acceleration $a_i$ in the same or a different direction (with $i \in \{x,y,z\}$).
The effective mass method \cite{Ashcroft} allows one to lump all the effects of the crystal particles into this one parameter $1 / m_{inertial}$ and then use this term in a simplified Schr\"{o}dinger's equation:
\begin{equation}\label{SchrodingerEffectiveMass}
-\frac{\hbar^2}{2 m_{inertial}}\nabla^2\psi+V_{external}(\vec{r})\psi=E\psi
\end{equation}
\noindent where $V_{external}$ is the potential energy profile produced only by non-crystal sources, e.g., galaxies, stars, planets, dust. Once $\mathcal{E}(\vec{k})$ of Eq. \eqref{Schrodinger} has been calculated, $m_{inertial}$ can be calculated using the following equation \cite{Ashcroft}:
\begin{equation}\label{effectivemass}
\left(\frac{1}{m_{inertial}}\right)_{i,j} = \frac{1}{\hbar^2} \frac{\partial^2 \mathcal{E}(\vec{k})}{\partial k_i \partial k_j}
\end{equation}
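A minimal numerical version of Eq. \eqref{effectivemass} in 1-D can be sketched as follows, using a generic tight-binding band $\mathcal{E}(k) = -2t\cos(ka)$ rather than the GC's actual dispersion (so the numbers are illustrative only). It exhibits the sign change of $m_{inertial}$ between the $\Gamma$-point and the BZ boundary:

```python
import math

def d2E_dk2(E, k, h=1e-4):
    """Central finite difference for the band curvature d^2E/dk^2."""
    return (E(k + h) - 2 * E(k) + E(k - h)) / (h * h)

def m_inertial(E, k, hbar=1.0):
    """1-D form of Eq. (effectivemass): m_inertial = hbar^2 / (d^2E/dk^2)."""
    return hbar**2 / d2E_dk2(E, k)

a, t = 1.0, 0.5
E = lambda k: -2 * t * math.cos(k * a)    # generic tight-binding band (illustrative)

print(m_inertial(E, 0.0))          # positive near the Gamma point
print(m_inertial(E, math.pi / a))  # negative at the BZ boundary
```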
In \cite{Wheeler1957}, Wheeler showed that if the constituent crystal particles (again, of charge $q_p=\pm\sqrt{4\pi \epsilon_o \hbar c}$) are separated from each other by an average distance of $l_p$, then the positive mass produced by electromagnetic energy (via $E=mc^2$) is totally compensated by negative mass produced by gravitational energy, such that ``to the extent this compensation holds locally, nearby wormholes exert no gravitational attraction on remote concentrations of mass-energy". In \cite{Crouse2016}, Crouse considered a GC where this compensation does not happen, and the particles that compose the crystal all had mass $m_c=m_p$. It was seen in \cite{Crouse2016} that a particle traveling within this crystal can exhibit negative and near-zero values for $m_{inertial}$. However, no justification was given in \cite{Crouse2016} as to why no compensation (of the positive electromagnetic mass and negative gravitational mass) occurs. In this work however, we have shown that a spacing of $l_p$ is not possible, because $l_p$ is less than the fundamental length $\chi(=2l_p)$. If $\chi$ is the lattice constant of the GC, then it is easy to show (using Wheeler's methods described in \cite{Wheeler1957}) that there is an uncompensated mass of $3m_p/8$ for each crystal particle; thus $m_c=3m_p/8$. Similar phenomena (e.g., negative effective mass $\cdots$) occur for a crystal with constituent particles all with this mass, compared to the case when $m_c=m_p$; therefore, we refer the reader to \cite{Crouse2016} for a more detailed discussion of a GC of this type.
Instead, let us consider an interesting case where the constituent crystal particles are Higgs bosons ($m_{Higgs}=2.25 \times 10^{-25}$ kg), configured in a cubic lattice with lattice constant $a_o=2l_p=\chi$. First, let us estimate the mass of elementary particles that would ``feel" the effects of the crystal as they travel within it. To do so, one would equate the kinetic energy term and the dominant potential energy term in Eq. \eqref{Schrodinger}:
\begin{equation}\label{massestimateformula}
\frac{\hbar^2k^2}{2m_{particle}} = \frac{Gm_{particle}m_c}{a_o}
\end{equation}
\noindent The effects of the crystal most often manifest themselves at the BZ boundary at $k=\pi/a_o$. Using this value of $k$ in Eq. \eqref{massestimateformula}, one arrives at the following approximation for the mass ($m_{particle}$) of a particle that will interact strongly with the GC:
\begin{equation}\label{massestimate}
m_{particle}=\frac{\pi\hbar}{\sqrt{2Gm_c a_o}}
\end{equation}
\noindent With $a_o=\chi$ and $m_c=m_{Higgs}$, Eq. \eqref{massestimate} yields a value of 10.62 kg. \footnote{It is certainly debatable whether such a massive \textit{elementary} particle (relative to $m_{Higgs}$) will obliterate the crystal around and along the particle's trajectory.} The band diagram and $m_{inertial}$ as functions of $k$ are shown in Figs. \ref{fig:Fig13} and \ref{fig:Fig14}, respectively. It is seen that $m_{inertial}$ can be significantly different from the gravitational mass $m_{particle}$, with it even being negative for various ranges of momenta. A negative $m_{inertial}$ would predict that a particle would accelerate in the opposite direction of the external force \footnote{Energy and momentum are still conserved since the system includes not only the particle, but also the entire crystal. The universe-wide crystal can act as an infinite reservoir for energy and momentum.}. In the case of the universe, the cumulative gravitational force (due to all planets, stars and galaxies) is in a direction towards the ``center" of the universe. Particles with a negative value of $m_{inertial}$ will be observed to be accelerating in the opposite direction, that is, away from the center of the universe -- these particles will be ``pushed" by the ``pull" of gravity. This effect can be easily detected and measured using the latest telescopes.
Instead of Higgs bosons composing the crystal, we can investigate other candidate particles. For example, we could use either of the two commonly stated values for the vacuum energy density, namely $\xi_1=10^{-9}$ J/$m^3$ \cite{Carroll2006} and $\xi_2=10^{113}$ J/$m^3$ \cite{Milonni2013}, and calculate the corresponding mass $m_c$ (via $m=E/c^2$ with $E=\xi \chi^3$) of each constituent crystal particle assuming a lattice constant of $\chi$. For $\xi_1$, we obtain $m_c=3.78 \times 10^{-130}$ kg, and using this value in Eq. \eqref{massestimate} we obtain $m_{particle}=2.59 \times 10^{53}$ kg. In this case, $m_{particle}$ is approximately the mass of the entire universe (as stated in \cite{Davies2006}), thus no particle with a realistic mass would ever feel the effects of this crystal. For $\xi_2=10^{113}$ J/$m^3$, we obtain $m_c=3.78 \times 10^{-8}$ kg and $m_{particle}=2.59 \times 10^{-8}$ kg; particles of this mass scale are realistic, are described in \cite{Crouse2016}, and can produce measurable physical effects.
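The three mass estimates above (the Higgs-boson crystal and the two vacuum-energy-density candidates) can be checked directly from Eq. \eqref{massestimate} and $m_c = \xi\chi^3/c^2$. The SI constants below are assumed values (with $\chi = 2l_p$ and $l_p = 1.62\times10^{-35}$ m, as in the text), and the function names are illustrative:

```python
import math

# Assumed SI constants
G_N  = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34         # reduced Planck constant, J s
c    = 2.998e8            # speed of light, m/s
chi  = 2 * 1.62e-35       # hodon, m

def m_particle(m_c, a_o=chi):
    """Eq. (massestimate): mass of a particle interacting strongly with the GC."""
    return math.pi * hbar / math.sqrt(2 * G_N * m_c * a_o)

def m_c_from_density(xi):
    """Constituent mass from a vacuum energy density xi (J/m^3): m = xi*chi^3/c^2."""
    return xi * chi**3 / c**2

print(m_particle(2.25e-25))                                          # Higgs crystal: ~10.6 kg
print(m_c_from_density(1e-9), m_particle(m_c_from_density(1e-9)))    # ~3.8e-130 kg, ~2.6e53 kg
print(m_c_from_density(1e113), m_particle(m_c_from_density(1e113)))  # ~3.8e-8 kg, ~2.6e-8 kg
```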
Another interesting result of these calculations is that it predicts that black holes (BHs) are more complicated than widely believed. Any current textbook on GR or astronomy states that BHs have only three properties: total mass, spin, and electric charge. However, the results in this paper predict that the distribution of the mass within the event horizon is very important in determining the BH's motion in response to external gravitational forces. Consider two cases: Case 1 with a 10.62 kg BH where all this mass is concentrated into the ``singularity" (of volume $V_{BH}=\chi^3$); and Case 2 where the 10.62 kg mass is composed of a uniform distribution of particles over the BH's volume of $V_{BH}=\left(4 \pi/3 \right) R_s^3$, with $R_s=2Gm/c^2=1.57 \times 10^{-26}$ m being nine orders of magnitude greater than $\chi$. For Case 1, $m_{inertial}$ may vary dramatically, as already described (see Fig. \ref{fig:Fig14}). For Case 2 however, $m_{inertial}$ will always be equal to the sum of the constituent particles' gravitational masses because each particle that composes the BH has too little mass to ``feel" the effects of the GC. Thus, by studying the inertial properties of a BH (i.e., how it responds to an external force), one can glean some knowledge of its internal structure.
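The Case 2 numbers can be verified in a couple of lines (assumed SI constants; the ratio $R_s/\chi$ shows the large separation of scales between the Schwarzschild radius and the hodon quoted above):

```python
G_N = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c   = 2.998e8            # speed of light, m/s
chi = 2 * 1.62e-35       # hodon, m

m_bh = 10.62                         # BH mass from the text, kg
R_s  = 2 * G_N * m_bh / c**2         # Schwarzschild radius, m
print(R_s, R_s / chi)                # ~1.6e-26 m, several 1e8 hodons across
```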
\begin{figure}
\centering\includegraphics{Fig13_11_11_2016.pdf}
\caption{The dispersion curve (calculated using EPM \cite{Crouse2016,Vas}) for a particle with $m_{particle}=10.62$ kg traveling within a cubic GC composed of Higgs bosons (one per unit cell) with lattice constant $\chi=2l_p$. Nonparabolicity of the bands occur, which is indicative of a $m_{inertial}$ that varies with momentum $k$, as shown in Fig. \ref{fig:Fig14}. \textbf{Inset:} One unit cell of reciprocal space showing the crystal directions.}
\label{fig:Fig13}
\end{figure}
\begin{figure}
\centering\includegraphics{Fig14_11_11_2016.pdf}
\caption{The inertial mass $m_{inertial}$ (blue line) as a function of momentum $k$ of a particle (with gravitational mass $m_{particle}=10.62$ kg) traveling within the GC of Fig. \ref{fig:Fig12}. The red dotted line is $m_{particle}$ and the vertical dotted lines are BZ boundaries. It is seen that away from the $\Gamma$-point, $m_{inertial}$ differs significantly from $m_{particle}$, with $m_{inertial}$ being much greater than $m_{particle}$, near zero, or even negative.}
\label{fig:Fig14}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this work, the discretization of space and time was studied, along with its effects on mathematical and physical theories. First, using well accepted concepts within quantum mechanics and general relativity, and adhering to the tenets of logical positivism, the quanta of space and time were derived. Next, using Mach's principle of strict non-absolute space, a method was developed to map points from continuous space to discrete space. Using these results, and again using concepts from quantum mechanics and Mach's principle, a modified Pythagoras's theorem (Leopold's theorem) was deduced. Leopold's theorem agrees with Pythagoras's theorem at large size scales but differs from it at the Planck scale, in such a way that it preserves the concept and requirements of discrete space-time. Particle motion in discrete space-time was then described, and it was shown that the constancy of the speed of light is a consequence of the immutability of the quanta of space and time, and non-absolute space. These results were then used in the theory of special relativity to calculate a modified $\gamma$ function that is a function of both speed and measurement duration. Such results significantly alter time dilation and length contraction, as well as allowing particles (even with nonzero rest mass) to temporarily travel at the speed of light. The issue of the existence and nature of a ``real" immutable Time was studied in the context of both logical positivism and the modified laws of special relativity developed in this work. It was seen that the only candidate for an immutable Time was a single quantum of time, the chronon, and a single tick of a Bergson clock. In contrast, scientists' measured time preserves and contains within it the chronon, but it also is the ``lived", the experienced, the psychological time important for practically all particle interactions and human experiences.
Finally, it was shown how the discretization of space imposes order on Wheeler's quantum foam, turning it into a universe-wide gravity crystal that can affect the inertial properties of dense, high momentum particles such as black holes. By studying the inertial properties of black holes, some aspects of the mass-distribution within their event horizons can be gleaned.
\section{Acknowledgements}
This work was supported by the National Science Foundation Industry/University Cooperative Research Center for Metamaterials (CfM) (IIP-1068028).
\section{Introduction} \label{s-intro}
The nature of dark energy is one of the key cosmological questions of our time. A basic
component of the question is whether dark energy is static as predicted by the cosmological
constant $\Lambda$ or dynamical as predicted by rolling scalar field cosmologies. The
proper test is to determine which theory best fits the observations. The predictions of the
cosmological constant are well known and appear to be consistent with current observations.
Ideally the predictions of scalar field cosmologies should start with the action of the
cosmology which can accommodate various physically motivated model dark energy potentials
$V(\phi)$ where $\phi$ is the scalar field. Unfortunately it is often mathematically difficult
or impossible to make calculations based on the resulting action even for simple dark
energy models such as power law potentials \citep{nar17}. This work investigates the
use of the beta formalism to provide accurate analytic equations for the evolution of
cosmological parameters as a function of the observable scale factor $a$ as opposed
to the generally unobservable scalar $\phi$.
The beta function is defined as the derivative of the scalar with respect to the natural log
of the scale factor
\begin{equation} \label{eq-beta}
\beta(\phi) \equiv \frac{d \phi}{d \ln(a)} =\phi'
\end{equation}
where the second equality notes the common cosmological practice of denoting the
derivative with respect to $\ln(a)$ with a prime. As described in section~\ref{s-beta}
the beta function is chosen so that the resultant ``beta potential'' is an accurate
representation of the model dark energy potential in the model action. For most cases
the action with the beta potential is so similar to the action with the model potential that
solutions using the beta action are accurate representations of solutions using the model action.
Once the form of the beta function is defined analytic solutions of the evolution
of the cosmological parameters can be found as a function of the scalar $\phi$. The beta
function also provides the means to express the solutions in terms of the scale factor
$a$ rather than the scalar $\phi$. This investigation explores the bounds of the
parameter space where the beta
function formalism produces solutions that deviate from the exact solution by only on
the order of $1\%$ or less. The primary purpose of the investigation is the
provision of accurate, analytic functions of the evolution of the cosmological parameters
to determine which cosmologies and potentials are consistent with the observed
universe and which must be discarded as untenable in the face of the data. The
functions also serve as excellent starting points for more exact numerical calculations.
The beta function formalism has its roots in a perceived correspondence between
cosmological inflation and the Quantum Field Theory renormalization group flow
equation \citep{bin15, cic17, koh17}. In that context it is valid as the solution
for the slow evolution of a system approaching or leaving a critical (fixed) point \citep{bin15}.
Both \citet{bin15} and \citet{cic17} have considered the formalism for the late time
dark energy inflation where the critical point is in the infinite future. The descriptions
here follow these references with particular dependence on \citet{cic17} who have
incorporated matter as well as dark energy in order to describe a real universe.
The beta function formalism is often associated with the term universality \citep{bin15,
cic17, koh17} referring to a commonality among seemingly disparate cosmologies revealed by the
beta function formalism. The example used in this work is too limited to fully show this,
but section~\ref{ss-ha} hints at it: a common analytic function is found for the Hubble
parameter $H=\frac{\dot{a}}{a}$ that is shared with $\Lambda$CDM.
This work concentrates on the ``late time'' evolution of the universe which is taken to be
the time between a scale factor of 0.1 and 1.0 corresponding to redshifts between zero
and nine. As a demonstration of the method a quintessence cosmology is considered
with power and inverse power law dark energy potentials. Natural units with $\frac{8\pi G}{3}$
and the Planck mass equal to one are used. A flat universe is assumed with $H_0 = 70$
km/sec per megaparsec. The current ratio of the dark energy density to the critical density
$\Omega_{\phi_0}$ is set to 0.7 where $\phi_0$ is the current value of the scalar $\phi$.
The analytic functions have $H_0$ and $\Omega_{\phi_0}$
as parameters therefore results for other choices are easily obtained. Integer powers of
$\phi$ are taken to be $\pm(1, 2, 3, 4, 5)$ as examples but the derived functions are
valid for fractional powers as well. The current values of the dark energy equation of
state $w=\frac{p_{\phi}}{\rho_{\phi}}$ are taken to be $w_0=(-0.98, -0.96, -0.94, -0.92,
-0.90)$ where $p_{\phi}$ is the dark energy pressure and $\rho_{\phi}$ is the dark energy
density. The last two values of $w_0$ are unlikely but are included to determine the
limits of the formalism.
\section{Quintessence} \label{s-q}
Quintessence is one of the most studied rolling scalar field cosmologies still standing after
the observation of gravitational waves from merging neutron stars \citep{ezq17,dur18}. It is
characterized by an action of the form
\begin{equation} \label{eq-act}
S=\int d^4x \sqrt{-g}[\frac{R}{2}-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi -V(\phi)]
+S_m
\end{equation}
where $R$ is the Ricci scalar, $g$ is the determinant of the metric $g^{\mu\nu}$, $V(\phi)$
is the dark energy potential, and, $S_m$ is the action of the matter fluid. Different types
of quintessence are defined by different forms of the dark energy potential.
The dark energy density, $\rho_{\phi}$, and pressure, $p_{\phi}$, are derived from the
energy momentum tensor which again involves $V(\phi)$.
\begin{equation} \label{eq-rhop}
\rho_{\phi} \equiv \frac{\dot{\phi}^2}{2}+V(\phi), \hspace{1cm} p_{\phi} \equiv \frac{\dot{\phi}^2}{2}-V(\phi)
\end{equation}
An essential observable cosmological parameter is the dark energy equation of state
$w=\frac{p_{\phi}}{\rho_{\phi}}$. Note that if $\dot{\phi}$ is zero then $w=-1$ for
all time as in $\Lambda$CDM. For a quintessence cosmology \citet{nun04} give
the dark energy equation of state as
\begin{equation} \label{eq-nun4}
w+1 =\frac{\phi'^2}{3 \Omega_{\phi}} =\frac{\beta^2(\phi)}{3 \Omega_{\phi}}
\end{equation}
where $\Omega_{\phi}$ is the ratio of the dark energy density to the critical density.
The factor $\Omega_{\phi}$ recognizes that there can be matter as well as dark energy
in the universe so that for a flat universe with matter $\Omega_{\phi}$ is not 1 but rather
$1-\Omega_m$ where $\Omega_m$ is the ratio of the matter density to the critical density.
The current value of the equation of state $w_0$ is therefore a possible boundary condition
in the solution for the scalar $\phi$.
\section{The Beta Function} \label{s-beta}
The beta function is defined in eqn.~\ref{eq-beta} as the derivative of the scalar with respect to
the natural log of the scale factor. Analytic solutions for the cosmological parameters are possible
because the beta function provides an additional equation that determines the evolution of the
scalar $\phi$ as a function of the scale factor. The beta function is not an arbitrarily chosen
relation of $\phi$. It is directly tied to the physically relevant model dark energy potential $V(\phi)$
in the action.
For a given model potential $V(\phi)$, the beta function $\beta(\phi)$ is chosen so that
\begin{equation} \label{eq-bv}
V_m(\phi)=\exp\{-\int\beta(\phi)d\phi\}
\end{equation}
where $V_m(\phi)$ is the model potential rather than the full potential given
in eqn.~\ref{eq-v}.
With the proper choice of $\beta(\phi)$ any function for $V(\phi)$ can be represented, not just
the functions considered in this investigation. From eqn.~\ref{eq-bv}, $\beta(\phi)$ is the
negative of the logarithmic derivative of the model potential,
$\beta(\phi)=-\frac{d\ln V_m(\phi)}{d\phi}$. The power and
inverse power law potential beta functions are then
\begin{equation} \label{eq-betap}
\beta(\phi) = \frac{-\beta_p}{\phi}, \hspace{1cm} \beta(\phi) = \frac{\beta_i}{\phi}.
\end{equation}
where $\beta_{p,i}$ are positive numbers equal to the power. The subscripts $p$ and $i$
are used to denote power law and inverse power law respectively. The
scalar $\phi$ is positive for both the power and inverse power law cases.
Figure~\ref{fig-betaw} shows the evolution of the beta functions for $\beta_{p,i}$ held
constant at 3.0 for the five different values of $w_0$. Except where otherwise noted in
subsequent figures power law functions are denoted with a solid line and inverse power
law functions with a dashed line.
\begin{figure}
\scalebox{.6}{\includegraphics{fig1.eps}}
\caption{The evolution of the beta function $\beta(\phi)$ as a function of the scale factor $a$
with $\beta_{p,i}=3$ and the five different values of $w_0$. The power law $\beta(\phi)$
(solid line) is negative and the inverse power law $\beta(\phi)$ (dashed line) is positive.}
\label{fig-betaw}
\end{figure}
In figure~\ref{fig-betab} $w_0=-0.94$ for all five $\beta_{p,i}$ values for the power and
inverse power law potentials.
\begin{figure}
\scalebox{.6}{\includegraphics{fig2.eps}}
\caption{The evolution of the beta function $\beta(\phi)$ as a function of the scale factor $a$
with $w_0=-0.94$ for the five values of $\beta_{p,i}$.}
\label{fig-betab}
\end{figure}
Note that the values of $\beta(\phi)$ for a given value of $\beta_{p,i}$ are sensitive to
the value of $w_0$ but for a given value of $w_0$ the values are relatively insensitive to
$\beta_{p,i}$. This is a pattern that occurs for many of the functions and parameters
considered here.
\section{Evolution of the Scalar} \label{s-es}
From the definition of the beta function a simple integration of eqns.~\ref{eq-betap} gives
\begin{equation} \label{eq-phi}
\phi_p(a) = \sqrt{-2 \beta_p \ln(a) + \phi_0^2}, \hspace{0.5cm} \phi_i(a) = \sqrt{2 \beta_i \ln(a) + \phi_0^2}
\end{equation}
where $\phi_0$ is the present day value of $\phi$. As is evident when
$\beta(\phi)$ is used in eqn.~\ref{eq-nun4} to replace $\phi'$ the value of $\phi_0$ is related
to the current dark energy equation of state $w_0$ by
\begin{equation} \label{eq-phio}
\phi_0 =\frac{\beta_{p,i}}{\sqrt{3 \Omega_{\phi_0}(1+w_0)}}
\end{equation}
for a quintessence cosmology where $\Omega_{\phi_0}$ is the current value
of $\Omega_{\phi}$. Note that $\phi_0$ is the same for both the
power and inverse power law beta functions with the same values of $\beta_{p,i}$.
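As a concrete check, eqns.~\ref{eq-phi} and~\ref{eq-phio} are simple enough to evaluate directly. The sketch below (the function names are my own, and Python is used purely for illustration) assumes the fiducial $\Omega_{\phi_0}=0.7$ adopted in this work:

```python
import math

def phi0(beta, w0, omega_phi0=0.7):
    # Eqn (eq-phio): current scalar value fixed by the boundary condition on w0.
    # The same phi0 serves both the power and inverse power law cases.
    return beta / math.sqrt(3.0 * omega_phi0 * (1.0 + w0))

def phi_power(a, beta_p, w0, omega_phi0=0.7):
    # Power law scalar from eqn (eq-phi): phi decreases as a grows.
    return math.sqrt(-2.0 * beta_p * math.log(a) + phi0(beta_p, w0, omega_phi0) ** 2)

def phi_inverse(a, beta_i, w0, omega_phi0=0.7):
    # Inverse power law scalar from eqn (eq-phi): phi increases with a.
    return math.sqrt(2.0 * beta_i * math.log(a) + phi0(beta_i, w0, omega_phi0) ** 2)
```

Both branches meet at $\phi_0$ when $a=1$; for $\beta_{p,i}=3$ and $w_0=-0.94$ one gets $\phi_0\approx 8.45$ in reduced Planck units.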
\subsection{Limitations on the Inverse Power Law Beta Function} \label{ss-bl}
At all times in the past the value of $\ln(a)$ is negative, therefore, the term in the square root
for the inverse power law in eqn.~\ref{eq-phi} becomes negative at some time in the past
limiting the range of the scale factor. This is one justification
for considering the power law and inverse power law beta functions as two separate cases.
For the inverse power law case $\phi_0^2$ must be larger than $|2\beta_i \ln(a)|$ to avoid a
negative argument. Using eqn.~\ref{eq-phio} this sets the requirement that
\begin{equation} \label{eq-brec}
2\ln(a) +\frac{\beta_i}{3\Omega_{\phi_0}(w_0+1)} >0
\end{equation}
to ensure that $\phi$ is a real number. For the scale factors between 0.1 and 1 considered in
this work the constraint in eqn.~\ref{eq-brec} is satisfied for all values of $\beta_i$ and
$w_0$ utilized in the investigation. For $\beta_i=1$ and $w_0=-0.9$, however, it is not satisfied
at scale factors less than 0.0925, very close to the smallest scale factor of 0.1. As $2\beta_i\ln(a)$
approaches $-\phi_0^2$ the beta function evolves rapidly to large numbers making the solutions
in this region unreliable. The increased deviation of the $\beta_i=1$ track in fig.~\ref{fig-betab} is
an indicator of the problem. One could adopt a restriction that only scale factors some fixed
amount larger than the scale factor where the argument of eqn.~\ref{eq-brec} becomes zero
are considered reliable. Instead, in section~\ref{ss-fit} more physically motivated
restrictions are imposed on the scale factors, based on the accuracy of the beta potential's
match to the model potential. These restrictions are applied to both the power law and inverse
power law potentials.
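The reality bound can also be made explicit: setting the argument of the square root in eqn.~\ref{eq-phi} to zero and using eqn.~\ref{eq-phio} gives $a_{min}=\exp(-\phi_0^2/2\beta_i)$. A minimal sketch (my own notation, assuming $\Omega_{\phi_0}=0.7$) reproduces the value of 0.0925 quoted above for $\beta_i=1$ and $w_0=-0.9$:

```python
import math

def a_min_inverse(beta_i, w0, omega_phi0=0.7):
    # Smallest scale factor keeping the square root argument in phi_i(a)
    # non-negative: 2*beta_i*ln(a) + phi0^2 >= 0  =>  a >= exp(-phi0^2/(2*beta_i))
    phi0_sq = beta_i ** 2 / (3.0 * omega_phi0 * (1.0 + w0))
    return math.exp(-phi0_sq / (2.0 * beta_i))
```

For $w_0$ closer to minus one, $\phi_0^2$ grows rapidly and $a_{min}$ falls far below the lower limit of 0.1 used in this work.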
\subsection{The Scalar as a Function of the Scale Factor}
Figure~\ref{fig-phip} shows an example of the evolution of $\phi$ for both the power and inverse
power law cases for $w_0=-0.94$.
\begin{figure}
\scalebox{.6}{\includegraphics{fig3.eps}}
\caption{The evolution of the scalar field $\phi$ as a function of the scale factor $a$ for the power
and inverse power law beta functions with $w_0 = -0.94$ for the five values of $\beta_{p,i}$. }
\label{fig-phip}
\end{figure}
The power law scalar decreases as the scale factor increases while the inverse power law scalar
increases with increasing $a$. Both converge to the same value ($\phi_0$) at $a=1$. Even though
$\phi_0$ changes significantly with the value of $\beta_{p,i}$, the scalar $\phi$ evolves relatively
little over $a$ between 0.1 and 1. Figure~\ref{fig-phii} shows the evolution of the scalar with
$\beta_{p,i} = 3.0$ and the five different values of $w_0$.
\begin{figure}
\scalebox{.6}{\includegraphics{fig4.eps}}
\caption{The evolution of the scalar field $\phi$ as a function of the scale factor $a$ for the power
and inverse power law beta functions with $\beta_{p,i} =3.0$ and the five values of $w_0$. The power
law scalar (solid line) decreases to $\phi_0$ and the inverse power law scalar (dashed line)
increases to $\phi_0$.}
\label{fig-phii}
\end{figure}
Figures~\ref{fig-rb} and~\ref{fig-rw} quantify the small variation of $\phi$ by plotting
the ratio of $\phi$ to $\phi_0$ with $w_0 = -0.94$ in fig.~\ref{fig-rb} and for the five values
of $w_0$ with $\beta_{p,i} = 3$ in fig.~\ref{fig-rw}. The figures show that the scalar
varies by relatively little over the look back time of 13 gigayears considered in this study.
They also show that smaller values of $\beta_{p,i}$ and larger deviations of $w_0$ from
minus one result in larger changes in $\phi/\phi_0$.
\begin{figure}
\scalebox{.6}{\includegraphics{fig5.eps}}
\caption{The evolution of the ratio of $\phi$ to $\phi_0$ with $w_0=-0.94$ for the five
different values of $\beta_{p,i}$.}
\label{fig-rb}
\end{figure}
\begin{figure}
\scalebox{.6}{\includegraphics{fig6.eps}}
\caption{The evolution of the ratio of $\phi$ to $\phi_0$ with $\beta_{p,i}=3$ for the five
different values of $w_0$.}
\label{fig-rw}
\end{figure}
In some cases the evolution of a parameter depends on the absolute change in the scalar
$\Delta \phi = \phi - \phi_0$ rather than the relative change in $\phi$. Figure~\ref{fig-delphi}
shows the values of $\Delta \phi$ for the five values of $\beta_{p,i}$ for $w_0 = -0.94$. The
value of $\Delta \phi$ is essentially independent of the value of $\beta_{p,i}$ for a given value
of $w_0$. This is a primary factor in the later conclusions that several parameters appear
insensitive to the power, $\beta_{p,i}$, of the power laws considered in this work.
\begin{figure}
\scalebox{.6}{\includegraphics{fig7.eps}}
\caption{The evolution of $\Delta \phi = \phi - \phi_0$ with $w_0= -0.94$ for the five
different values of $\beta_{p,i}$.}
\label{fig-delphi}
\end{figure}
\section{The Potentials} \label{s-pot}
In the beta function formalism two potentials play a prominent role. The first is the dark
energy potential in the action $V(\phi)$ that does not depend on matter. The second, in
analogy with particle physics, is termed the super potential $W$ given by
\begin{equation} \label{eq-W}
W(\phi) = -2H(\phi) = -2\frac{\dot{a}}{a}
\end{equation}
Even though the Hubble parameter $H$ is the parameter of interest the development
of the method utilizes $W$ to be consistent with the literature on beta functions.
Both the potential $V(\phi)$ and the super potential $W(\phi)$ can be expressed in terms
of $\beta(\phi)$ \citep{cic17} by
\begin{equation} \label{eq-wphi}
W(\phi) = W_0 \exp\{-\frac{1}{2}\int_{\phi_0}^{\phi}\beta(x)dx\}
\end{equation}
and
\begin{equation} \label{eq-v}
V(\phi) = \frac{3}{4} W_0^2 \exp\{-\int_{\phi_0}^{\phi}\beta(x)dx\}(1-\frac{\beta^2(\phi)}{6})
\end{equation}
where $W_0$ is the current value of $W$ equal to $-2H_0$. Note that the super potential is
always denoted as a capital $W$ and the dark energy equation of state by a lower case $w$.
The power law beta function results in simple forms of the two potentials
\begin{equation} \label{eq-wpphi}
W(\phi) = W_0(\frac{\phi}{\phi_0})^{\frac{\beta_p}{2}}
\end{equation}
and
\begin{equation} \label{eq-vpphi}
V(\phi) = \frac{3}{4} W_0^2 (\frac{\phi}{\phi_0})^{\beta_p}(1-\frac{\beta_p^2}{6 \phi^2})
\end{equation}
The inverse power law also has simple forms for the potentials.
\begin{equation} \label{eq-wiphi}
W(\phi) = W_0(\frac{\phi}{\phi_0})^{-\frac{\beta_i}{2}}
\end{equation}
and
\begin{equation} \label{eq-viphi}
V(\phi) = \frac{3}{4} W_0^2 (\frac{\phi}{\phi_0})^{-\beta_i}(1-\frac{\beta_i^2}{6 \phi^2})
\end{equation}
\subsection{Normalization} \label{ss-norm}
It is clear that the beta dark energy potentials have the desired power and inverse power law
potentials multiplied by $(1-\frac{\beta_{p,i}^2}{6 \phi^2})$ which produces both an offset
and a deviation from the model potentials. The deviation is expected to be
small since $\frac{\beta_{p,i}^2}{6\phi_{p,i}^2}$ is much less than one in most cases. The
offset can be corrected by a simple normalization $(1-\frac{
\beta_{p,i}^2}{6\phi_{p,i}^2(a_n)})^{-1}$ where $a_n$ is the scale factor where the normalization
occurs. The average deviation can be minimized by choosing a midway point such as $a_n=0.5$,
however, in this work the normalization point is $a_n=1$, the current epoch since that is where
the boundary condition is set such that $H(a=1)=H_0$. Numerical accuracy could be increased
by normalizing piecewise at several scale factors. A goal of this work is to create analytic
solutions rather than numerical tables; therefore, only one normalization point is utilized.
\subsection{Accuracy of Fit} \label{ss-fit}
The cosmological parameters derived by the beta function formalism are only useful if the
beta potentials accurately represent the model potentials. Figures~\ref{fig-pdev} and~\ref{fig-idev}
show the evolution of the power and inverse power law potentials respectively. In contrast to
previous figures the solid lines are the model potentials and the dashed lines are the beta
potentials. The value of $\beta_{p,i}$ is set to 3.0. The beta potentials are an excellent match
to the model potentials for the parameters in the figure. The matches improve as $w_0$
approaches minus one. For a given value of $\beta_{p,i}$ the inverse power law potentials
have about $10\%$ more evolution than the power law potentials.
\begin{figure}
\scalebox{.6}{\includegraphics{fig8.eps}}
\caption{The solid lines show the model power law potentials with $\beta_p =3.0$ for the five
different values of $w_0$. The dashed lines are the beta potentials for comparison. The quality
of the fits makes it difficult to resolve the solid lines from the dashed. The beta potentials are
normalized to match the model potentials at $a=1$.}
\label{fig-pdev}
\end{figure}
\begin{figure}
\scalebox{.6}{\includegraphics{fig9.eps}}
\caption{The same as for fig.~\ref{fig-pdev} except for the beta and model inverse power law
potentials.}
\label{fig-idev}
\end{figure}
Figures~\ref{fig-apfd} and~\ref{fig-aifd} quantify the deviations of the beta potentials
from the model potentials by plotting their fractional difference. The $\beta_{p,i}$ values
of 1, 3 and 5 and $w_0$ values of -0.98, -0.94 and -0.90 are chosen to show the extremes
without excessive overlap of the tracks in the figures.
\begin{figure}
\scalebox{.6}{\includegraphics{fig10.eps}}
\caption{The fractional deviation of the beta power law potentials from the model potentials with
$\beta_p =1.0$, dashed lines, $\beta_p =3.0$, solid lines, and $\beta_p =5.0$, dot
dashed lines. For each $\beta_p$ the tracks with the minimum deviation are for $w_0=-0.98$
and the tracks with the maximum deviation are for $w_0=-0.90$}
\label{fig-apfd}
\end{figure}
\begin{figure}
\scalebox{.6}{\includegraphics{fig11.eps}}
\caption{The same as for fig.~\ref{fig-apfd} except for the inverse power law potentials.
For $\beta_i=1$ only the $w_0=-0.98$ track is shown.}
\label{fig-aifd}
\end{figure}
The power law beta potentials are quantitatively good matches to model potentials with
the fit improving as $\beta_p$ increases and as $w_0$ decreases toward minus one. Only
the $\beta_p = 1$ with $w_0=-0.90$ case exceeds a fractional deviation of $1\%$ and then
only at scale factors less than 0.4. The inverse power law beta potentials show the same trends
but are less well behaved. It is clear that for low $\beta_i$ values and large deviations of
$w_0$ from minus one some of the beta potentials deviate from the model potentials by much
more than $1\%$.
In this investigation the conservative limit of no more than $1\%$ deviation of the beta potential
from the model potential is adopted. Tables~\ref{tab-pval} and~\ref{tab-ival} indicate the
minimum value of the scale factor for a given pair $\beta_{p,i}$ and $w_0$ where the beta
potential is within $1\%$ of the model potential. Entries with a v indicate the $1\%$ limitation
is satisfied for all scale factors between 0.1 and 1.0. Subsequent figures adhere to this limitation.
\begin{table}
\begin{tabular}{cccccc}
\hline
& & & $w_0$ & & \\
\hline
$\beta_p$ & $-0.98$ & $-0.96$ & $-0.94$ & $-0.92$ & $-0.90$\\
\hline
1 & v & v & v & 0.2 & 0.4\\
2 & v & v & v & v & 0.15\\
3 & v & v & v & v & v \\
4& v & v & v & v & v \\
5& v & v & v & v & v \\
\hline
\end{tabular}
\caption{Valid values of the scale factor for the power law beta potentials. The scale
factor must be greater than the entered value for the given value of $\beta_p$ and
$w_0$. An entry of v indicates that all scale factors between 0.1 and 1 are valid.} \label{tab-pval}
\end{table}
\begin{table}
\begin{tabular}{cccccc}
\hline
& & & $w_0$ & & \\
\hline
$\beta_i$ & $-0.98$ & $-0.96$ & $-0.94$ & $-0.92$ & $-0.90$\\
\hline
1 & v & v & 0.28 & 0.47 & 0.6\\
2 & v & v & v & 0.21 & 0.37\\
3 & v & v & v & v & 0.20\\
4& v & v & v & v & 0.12\\
5& v & v & v & v & v \\
\hline
\end{tabular}
\caption{The same as table~\ref{tab-pval} for the inverse power law
potentials.} \label{tab-ival}
\end{table}
The range of $w_0$ values for this investigation was extended past -0.94 to test
the limits of the validity of the method. For the power law beta functions the only
cases that are not valid over all scale factors are for $w_0$ values of $-0.92$ and $-0.90$
with $\beta_p$ values of 1 and 2. The minimum valid $a$ value for $\beta_p=2$ and
$w_0=-0.9$ is 0.15, therefore, most of the range of the scale factor is valid. For the
inverse power law case only the $\beta_i=1$ with $w_0=-0.94$ has a limitation on
the scale factor for the three values of $w_0$ nearest minus one. This leads to the
conclusion the beta function formalism is a useful method for power and inverse
power law dark energy potentials within the expected values of $w_0$. Caution,
however, must be exercised for $w_0$ values further from minus one than -0.94
as is shown in the tables. It is clear from tables~\ref{tab-pval} and~\ref{tab-ival}
that as the value of $\beta_{p,i}$ approaches one the solutions for the beta potentials
deviate from the model potentials by more than $1\%$ over a larger fraction of the
scale factors under consideration. Except for the special case of $\beta_{p,i}=0$,
which is $\Lambda$CDM, values of $\beta_{p,i}<1$ are considered unreliable and are not
utilized in the investigation.
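The entries in tables~\ref{tab-pval} and~\ref{tab-ival} can be checked directly: after normalization the $(\phi/\phi_0)^{\pm\beta_{p,i}}$ parts of the beta and model potentials cancel, so the fractional deviation reduces to the ratio of the correction factors $(1-\beta_{p,i}^2/6\phi^2)$ at $a$ and at $a=1$. A sketch of that check (my own function names, assuming $\Omega_{\phi_0}=0.7$):

```python
import math

def potential_deviation(a, beta, w0, inverse=False, omega_phi0=0.7):
    # Fractional deviation of the normalized beta potential from the model
    # power law.  The (phi/phi0)^(+-beta) factors cancel after normalization
    # at a=1, leaving the ratio of the (1 - beta^2/(6 phi^2)) corrections.
    phi0_sq = beta ** 2 / (3.0 * omega_phi0 * (1.0 + w0))
    sign = 1.0 if inverse else -1.0          # sign of 2*beta*ln(a) in phi^2
    phi_sq = sign * 2.0 * beta * math.log(a) + phi0_sq
    factor_a = 1.0 - beta ** 2 / (6.0 * phi_sq)
    factor_0 = 1.0 - beta ** 2 / (6.0 * phi0_sq)
    return factor_a / factor_0 - 1.0
```

For example, $\beta_p=3$ with $w_0=-0.90$ stays under the $1\%$ limit all the way down to $a=0.1$, while $\beta_p=1$ with $w_0=-0.90$ does not, in line with table~\ref{tab-pval}.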
\section{Adding Matter to the Universe} \label{s-mat}
A real universe includes matter as well as dark energy. The explicit inclusion of matter is
discussed in \cite{cic17} and is the basis for this work. As before there is no attempt to
rederive the work presented there except where it is useful for clarity. The purpose of this
work is the provision of useful analytic models for comparison with observation rather than a
theoretical extension of previous work. Matter is represented by the $S_m$ term in the
action, eqn.~\ref{eq-act}.
\subsection{The Matter Density} \label{ss-rhom}
The matter density $\rho_m$ follows the mass continuity equation
\begin{equation} \label{eq-mc}
\dot{\rho_m}=\rho_{m,\phi}\dot{\phi}=-3H \rho_m
\end{equation}
In keeping with the notation of \citet{bin15} and \citet{cic17}, the subscript $,\phi$ indicates
the derivative with respect to $\phi$. This leads to the relation
\begin{equation} \label{eq-bm}
\frac{\rho_{m,\phi}}{\rho_m} = -3\frac{H}{\dot{\phi}}=-\frac{3}{\beta(\phi)}
\end{equation}
Integrating the logarithmic derivative in eqn.~\ref{eq-bm} yields the equation for $\rho_m(\phi)$
\begin{equation} \label{eq-rhomphi}
\rho_m(\phi)=\rho_{m0}\exp(-3\int_{\phi_0}^{\phi} \frac{d \phi}{\beta(\phi)})
\end{equation}
Different beta functions produce different functions for $\rho_m$ as a function of $\phi$.
The emphasis in this work, however, is expressing the cosmological parameters as a function of the
observable scale factor $a$ rather than the unobservable scalar $\phi$. From the definition
of $\beta(\phi)$ in eqn.~\ref{eq-beta} eqn.~\ref{eq-rhomphi} becomes
\begin{equation} \label{eq-rhoma}
\rho_m(a)=\rho_{m0}\exp(-3\int_1^a d\ln(a) )= \rho_{m0}a^{-3}
\end{equation}
as expected, independent of $\beta(\phi)$.
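This cancellation can be verified numerically: for the power law beta function $1/\beta(\phi)=-\phi/\beta_p$, and the integral in eqn.~\ref{eq-rhomphi} evaluates to $\ln(a)$, recovering $\rho_{m0}a^{-3}$. A sketch of the check (my own names, assuming $\Omega_{\phi_0}=0.7$):

```python
import math

def rho_m_from_phi(a, beta_p, w0, rho_m0=1.0, omega_phi0=0.7, n=2000):
    # Evaluate eqn (eq-rhomphi) for the power law beta function,
    #   rho_m = rho_m0 * exp(-3 * integral_{phi0}^{phi} dphi / beta(phi)),
    # with 1/beta(phi) = -phi/beta_p, using a midpoint rule.
    phi0_sq = beta_p ** 2 / (3.0 * omega_phi0 * (1.0 + w0))
    phi0 = math.sqrt(phi0_sq)
    phi_a = math.sqrt(-2.0 * beta_p * math.log(a) + phi0_sq)
    h = (phi_a - phi0) / n
    integral = 0.0
    for k in range(n):
        x = phi0 + (k + 0.5) * h          # midpoint of each sub-interval
        integral += (x / (-beta_p)) * h   # integrand 1/beta(phi) = -phi/beta_p
    return rho_m0 * math.exp(-3.0 * integral)
```

The midpoint rule is exact here because the integrand is linear in $\phi$, so the result matches $\rho_{m0}a^{-3}$ to floating point accuracy, independent of $\beta_p$ and $w_0$.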
\subsection{The Super Potential $W$ with Mass} \label{s-spm}
The Einstein equations with mass become
\begin{equation} \label{eq-em1}
H^2 = \frac{\rho_m + \rho_{\phi}}{3}
\end{equation}
\begin{equation} \label{eq-em2}
-2\dot{H} = \rho_m + \rho_{\phi} + p_{\phi}
\end{equation}
\citet{cic17} show that the inclusion of matter results in a differential equation for $W$ of the form
\begin{equation} \label{eq-difw}
WW_{,\phi} + \frac{1}{2} \beta W^2 = -2\frac{\rho_m}{\beta}
\end{equation}
For the power law beta function $\beta(\phi) = -\frac{\beta_p}{\phi}$ eqn.~\ref{eq-difw} becomes
\begin{equation} \label{eq-bcw}
WW_{,\phi} - \frac{1}{2} \frac{\beta_p}{\phi} W^2 = 2\rho_m \frac{\phi}{\beta_p}
\end{equation}
Equation~\ref{eq-bcw} is solved by multiplying it by an integrating factor that makes the left hand side an
exact differential and the right hand side an integral that can be solved preferably analytically or by numerical
integration. The integrating factor for the power law beta function is $\phi^{-\beta_p}$. The equation then reads
\begin{equation} \label{eq-if}
\frac{d}{d \phi}(\frac{1}{2}W^2 \phi^{-\beta_p})=2\rho_m(\phi) \frac{\phi^{1-\beta_p}}{\beta_p}
\end{equation}
which is a general equation for any positive value of $\beta_p$.
The derivation of the super potential deviates from the discussion of \citet{cic17} at this point
to derive $W(a)$ rather than $W(\phi)$ since the goal is observable predictions. Substituting
eqn.~\ref{eq-rhoma} into eqn.~\ref{eq-if} results in
\begin{equation} \label{eq-wa}
\mid_{\phi_0}^{\phi}W^2 \phi^{-\beta_p} = 4 \frac{\rho_{m_0}}{\beta_p}\int_{\phi_0}^{\phi}\phi^{1-\beta_p}a^{-3}d\phi
\end{equation}
Using $d\phi=-\beta_p(-2\beta_p\ln(a)+\phi_0^2)^{-1/2}\frac{da}{a}$ gives
\begin{equation} \label{eq-iwa}
\mid_{\phi_0}^{\phi}W^2 \phi^{-\beta_p} = -4\rho_{m_0}\int_{1}^{a}x^{-4}(-2\beta_p\ln(x)+\phi_0^2)^{-\frac{\beta_p}{2}}dx
\end{equation}
Equation~\ref{eq-iwa} can also be written as
\begin{equation} \label{eq-phiwa}
\mid_{\phi_0}^{\phi}W^2 = -4\rho_{m_0}\phi^{\beta_p}\int_{1}^{a}x^{-4}\phi^{-\beta_p}dx
\end{equation}
Since $\phi(a) =(-2\beta_p\ln(a)+\phi_0^2)^{1/2}$ the super potential as a function of $a$ is
\begin{align} \label{eq-war}
W(a) = \{-4\rho_{m_0}(-2\beta_p\ln(a)+\phi_0^2)^{\frac{\beta_p}{2}} \nonumber \\
\int_{1}^{a}x^{-4}(-2\beta_p\ln(x)+\phi_0^2)^{\frac{-\beta_p}{2}}dx+W_0^2(\frac{\phi(a)}{\phi_0})^{\beta_p}\}^{1/2}
\end{align}
The integral in eqn.~\ref{eq-war} is solved by two changes of variable. The first change is to
let $z=(-2\beta_p \ln(a) + \phi_0^2)$ which yields
\begin{equation} \label{eq-war1}
-(\frac{1}{2\beta_p}) \exp(-\frac{3\phi_0^2}{2\beta_p})\int z^{-\frac{\beta_p}{2}} \exp(\frac{3z}{2\beta_p})dz
\end{equation}
The second change of variable is $y=-\frac{3z}{2\beta_p}$ which produces the integral
\begin{equation} \label{eq-war2}
-\frac{1}{3}(-\frac{2\beta_p}{3})^{-\frac{\beta_p}{2}}\exp(-\frac{3\phi_0^2}{2\beta_p})\int y^{-\frac{\beta_p}{2}}\exp(-y)dy
\end{equation}
The integral in $y$ in eqn.~\ref{eq-war2} is the incomplete Gamma function $\Gamma(1-\frac{\beta_p}{2},3 \ln(a)-\frac{3\phi_0^2}{2\beta_p})$. The formal solution for the super
potential in terms of the scale factor is
\begin{align} \label{eq-awa}
W_p(a)=-\{-\frac{4\rho_{m_0}}{3}(-\frac{2\beta_p}{3})^{-\frac{\beta_p}{2}}\exp(-\frac{3\phi_0^2}{2\beta_p})(\phi_p(a))^{\beta_p}\nonumber \\
\{ \Gamma(1-\frac{\beta_p}{2},3 \ln(a)-\frac{3\phi_0^2}{2\beta_p}) - \Gamma(1-\frac{\beta_p}{2},-\frac{3\phi_0^2}{2\beta_p}) \}\nonumber \\
+W_0^2(\frac{\phi_p(a)}{\phi_0})^{\beta_p}\}^{1/2}
\end{align}
The negative square root is chosen since $W(a)$ is a negative quantity.
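Because the incomplete Gamma functions in eqn.~\ref{eq-awa} are evaluated at negative arguments, a direct quadrature of eqn.~\ref{eq-war} is often more convenient numerically. The sketch below is my own code, not part of the formalism; it uses units with $H_0=1$, takes $\rho_{m_0}=3\,\Omega_m H_0^2$ from eqn.~\ref{eq-em1}, and assumes $\Omega_{\phi_0}=0.7$. It recovers the boundary condition $W(1)=-2H_0$ and shows how close $H=-W/2$ lies to the flat $\Lambda$CDM value:

```python
import math

def W_power(a, beta_p, w0, h0=1.0, omega_phi0=0.7, n=4000):
    # Direct quadrature of eqn (eq-war) for the power law beta function.
    rho_m0 = 3.0 * (1.0 - omega_phi0) * h0 ** 2   # from eqn (eq-em1) at a=1
    w0_sq = (2.0 * h0) ** 2                        # W0^2 = 4 H0^2
    phi0_sq = beta_p ** 2 / (3.0 * omega_phi0 * (1.0 + w0))

    def phi(x):
        return math.sqrt(-2.0 * beta_p * math.log(x) + phi0_sq)

    # integral_1^a x^-4 phi(x)^(-beta_p) dx by the midpoint rule
    h = (a - 1.0) / n
    integral = 0.0
    for k in range(n):
        x = 1.0 + (k + 0.5) * h
        integral += x ** -4 * phi(x) ** -beta_p * h

    w_sq = (-4.0 * rho_m0 * phi(a) ** beta_p * integral
            + w0_sq * (phi(a) / math.sqrt(phi0_sq)) ** beta_p)
    return -math.sqrt(w_sq)   # negative root: W(a) is a negative quantity

def H_lcdm(a, h0=1.0, omega_m=0.3):
    # Flat LCDM Hubble parameter for comparison
    return h0 * math.sqrt(omega_m * a ** -3 + (1.0 - omega_m))
```

With $\beta_p=3$ and $w_0=-0.94$ the quintessence $H(a)$ lies a few percent above $\Lambda$CDM at $a=0.5$, as expected since $w>-1$ implies a larger dark energy density in the past.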
The solution for $W(a)$ in the inverse power law case is very similar to the power law. The
integrating factor is $\phi^{\beta_i}$ rather than $\phi^{-\beta_p}$. The equivalent to
eqn.~\ref{eq-phiwa} is
\begin{equation} \label{eq-phiwai}
\mid_{\phi_0}^{\phi}W^2 = -4\rho_{m_0}\phi^{-\beta_i}\int_{1}^{a}x^{-4}\phi^{\beta_i}dx
\end{equation}
and the formal solution for $W(a)$ for the inverse power law case is
\begin{align} \label{eq-awai}
W_i(a)=-\{-\frac{4\rho_{m_0}}{3}(\frac{2\beta_i}{3})^{\frac{\beta_i}{2}}\exp(\frac{3\phi_0^2}{2\beta_i})(\phi_i(a))^{-\beta_i}\nonumber \\
\{ \Gamma(1+\frac{\beta_i}{2},3 \ln(a)+\frac{3\phi_0^2}{2\beta_i}) - \Gamma(1+\frac{\beta_i}{2},\frac{3\phi_0^2}{2\beta_i}) \}\nonumber \\
+W_0^2(\frac{\phi_0}{\phi_i(a)})^{\beta_i}\}^{1/2}
\end{align}
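The inverse power law super potential can likewise be evaluated by direct quadrature of its integral form rather than through the incomplete Gamma functions. A minimal sketch (my own code; units with $H_0=1$, $\rho_{m_0}=3\,\Omega_m H_0^2$, and $\Omega_{\phi_0}=0.7$):

```python
import math

def W_inverse(a, beta_i, w0, h0=1.0, omega_phi0=0.7, n=4000):
    # Quadrature form of the inverse power law super potential:
    #   W^2(a) = -4 rho_m0 phi(a)^(-beta_i) * int_1^a x^-4 phi(x)^beta_i dx
    #            + W0^2 (phi0/phi(a))^beta_i
    rho_m0 = 3.0 * (1.0 - omega_phi0) * h0 ** 2
    w0_sq = (2.0 * h0) ** 2
    phi0_sq = beta_i ** 2 / (3.0 * omega_phi0 * (1.0 + w0))

    def phi(x):
        return math.sqrt(2.0 * beta_i * math.log(x) + phi0_sq)

    h = (a - 1.0) / n
    integral = 0.0
    for k in range(n):
        x = 1.0 + (k + 0.5) * h
        integral += x ** -4 * phi(x) ** beta_i * h

    w_sq = (-4.0 * rho_m0 * phi(a) ** -beta_i * integral
            + w0_sq * (math.sqrt(phi0_sq) / phi(a)) ** beta_i)
    return -math.sqrt(w_sq)   # negative root, as for the power law case
```

At $a=1$ the integral vanishes and $W_i=-2H_0$; in the past $W_i$ is more negative, i.e.\ $H$ was larger, and it stays within a few percent of the flat $\Lambda$CDM value, consistent with the near-degeneracy discussed in section~\ref{ss-ha}.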
\section{The Evolution of Cosmological Parameters} \label{s-ecp}
Establishing the analytic functions for the super potential $W$ as a function of the scale
factor $a$ provides the means for calculating the evolution of cosmological parameters.
It is obvious from its definition (eqn.~\ref{eq-W}) that the super potential determines the Hubble
parameter. Normally the discussion of the cosmological parameters would center on the Hubble
parameter but the super potential is again used here to be consistent with existing literature on
the beta function formalism.
In the following the parameters are presented as a function of $\phi_{p,i}(a)$ with
eqn.~\ref{eq-phi} providing the proper equations for the scalar $\phi$ as a function
of the scale factor $a$. This convention is adopted to preserve the dependence of
the parameters on the scalar $\phi$ while providing the means to calculate the parameters
as a function of the scale factor $a$. An exception to this convention is the matter density
where eqn.~\ref{eq-rhoma} explicitly shows that the density varies as $a^{-3}$. Although
it should be obvious from the context the scalar will be written as $\phi_p(a)$ for the
power law and $\phi_i(a)$ for the inverse power law but the current value of $\phi$ will
still be written as $\phi_0$ since it is the same for both cases.
\subsection{The Evolution of the Hubble Factor and the Onset of Acceleration} \label{ss-ha}
Two observable quantities are the evolution of the Hubble factor $H(a)$ and the onset
of the acceleration of the expansion of the universe. Since $H(a) = -\frac{W(a)}{2}$
eqns.~\ref{eq-awa} and~\ref{eq-awai} specify the evolution of the Hubble factor for the
power and inverse power law potentials. Figure~\ref{fig-h} shows the evolution of $H(a)$
for all of the cases considered in this study \emph{including $\Lambda$CDM}. All of the
solutions plotted in fig.~\ref{fig-h} conform to the limits on $a$ in tables~\ref{tab-pval}
and~\ref{tab-ival}.
Remarkably, the solutions for both the power and inverse power law potentials as well as $\Lambda$CDM
all overlap at the resolution of fig.~\ref{fig-h}, indicating that $H(a)$ is insensitive to
both the power of the potential and the current value of the dark energy equation of state,
including $w_0=-1$, for the cases considered here.
\begin{figure}
\scalebox{.6}{\includegraphics{fig12.eps}}
\caption{The evolution of $H(a)$ for the power (solid line) and inverse (dashed line) power law
potentials as well as $\Lambda$CDM. The inverse power law Hubble function lies very slightly
above the power law Hubble function but the difference is not resolvable at the resolution of
the figure. The difference between $\Lambda$CDM and the quintessence plots
is also not resolvable.}
\label{fig-h}
\end{figure}
\subsubsection{The insensitivity of $H(a)$ to the Potential and $w_0$} \label{sss-hins}
At first glance the insensitivity of the Hubble value $H(a)$ to the power of the potential
and the value of $w_0$ seems remarkable but further
examination shows that it is due to a combination of factors. The first is that all solutions
must have the same initial value $H_0$ which is set by observation, independent of
$w_0$. A second factor is that at early times when the evolution is matter
dominated the common $\rho_m = \rho_{m_0}a^{-3}$ term makes the evolution the same
for all cases. Thirdly, the last term in both eqns.~\ref{eq-awa} and~\ref{eq-awai} is
proportional to a power of either $\frac{\phi}{\phi_0}$ for the power law or $\frac{\phi_0}{\phi}$
for the inverse power law. Examination of fig.~\ref{fig-phip} shows that the power and
inverse power law scalars are decreasing and increasing with $a$, respectively, so that both
late-time terms decrease with increasing $a$. Finally, examination of eqns.~\ref{eq-phiwa}
and~\ref{eq-phiwai} reveals that the integrals are multiplied by opposing positive and negative
powers of $\phi$ inside and outside of the integral. Since the change in $\phi$
is small, the positive and negative powers of $\phi$ effectively cancel each other.
\subsubsection{A simple common equation for $H(a)$} \label{sss-hcom}
Equations~\ref{eq-phiwa} and~\ref{eq-phiwai} suggest that the integral over $x$ with
the $\phi$ term held constant may be an excellent approximation for describing $H(a)$.
That approximation is given by
\begin{equation} \label{eq-hcomp}
H(a)=-\frac{1}{2}\sqrt{\frac{4}{3}\rho_{m_0}(a^{-3}-1)+W_0^2(\frac{\phi(a)}{\phi_0})^{\beta_p}}
\end{equation}
for the power law case and
\begin{equation} \label{eq-hcomi}
H(a)=-\frac{1}{2}\sqrt{\frac{4}{3}\rho_{m_0}(a^{-3}-1)+W_0^2(\frac{\phi_0}{\phi(a)})^{\beta_i}}
\end{equation}
for the inverse power law case. Equation~\ref{eq-phi} provides the appropriate $\phi(a)$.
Equations~\ref{eq-hcomp} and~\ref{eq-hcomi} give $H(a)$ solutions that are indistinguishable
from the suite of solutions shown in figure~\ref{fig-h} at the resolution of the plot. It is interesting
to note that $\Lambda$CDM is the $\beta_{p,i}=0$ case for either equation making $\Lambda$CDM
a member of the family of solutions. This is an indication of the universality of the formalism.
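As a concrete illustration of eqns.~\ref{eq-hcomp} and~\ref{eq-hcomi}, the following sketch evaluates the approximate $H(a)$ for both potentials. The parameter values (units with $H_0=1$, $\Omega_m=0.3$, $\beta_{p,i}=3$ and a hypothetical $\phi_0$) are illustrative assumptions for the sketch, not values taken from the paper's tables:

```python
import numpy as np

# Illustrative parameters (units where H0 = 1); these numbers are
# assumptions for the sketch, not values from the paper's tables.
Omega_m0 = 0.3
rho_m0 = 3.0 * Omega_m0      # from 3H^2 = rho_m + rho_phi at a = 1
W0 = -2.0                    # super potential boundary value, W = -2H
beta = 3.0                   # beta_{p,i}, the power of the potential
phi0 = 30.0                  # current scalar value (hypothetical)

def phi_p(a):
    """Power-law scalar, phi_p(a) = sqrt(phi0^2 - 2 beta_p ln a)."""
    return np.sqrt(phi0**2 - 2.0 * beta * np.log(a))

def phi_i(a):
    """Inverse-power-law scalar, phi_i(a) = sqrt(phi0^2 + 2 beta_i ln a)."""
    return np.sqrt(phi0**2 + 2.0 * beta * np.log(a))

def H_approx(a, inverse=False):
    """Approximate H(a) of eqns (eq-hcomp)/(eq-hcomi)."""
    ratio = (phi0 / phi_i(a))**beta if inverse else (phi_p(a) / phi0)**beta
    return -0.5 * np.sqrt((4.0/3.0) * rho_m0 * (a**-3 - 1.0) + W0**2 * ratio)
```

Setting `beta = 0` reduces either branch to the same expression, which is the $\Lambda$CDM member of the family of solutions.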
\subsubsection{The onset of acceleration} \label{sss-ac}
In a universe with mass the onset of the acceleration of the expansion is delayed until
the matter density is low enough that dark energy begins to dominate. The onset of
acceleration is marked by an increase in the expansion rate $\dot{a} = aH(a)$.
Figure~\ref{fig-hac} shows the track of $\dot{a}$ versus $a$.
\begin{figure}
\scalebox{.6}{\includegraphics{fig13.eps}}
\caption{The figure shows the evolution of $\dot{a} = aH(a)$ versus the scale factor
$a$ for both the power and inverse power law potentials. The power law potential
track is slightly below the inverse power law track but the difference is indistinguishable
at the scale of the figure.}
\label{fig-hac}
\end{figure}
The acceleration begins at a scale factor of $\approx0.6$ $(z\approx0.7)$, which is consistent
with observations, e.g.,~\citep{avs14, avs17}. Given the insensitivity of $H(a)$ to $\beta_{p,i}$
and $w_0$, the only adjustable parameters are $H_0$ and $\rho_{m_0}$, which are set by
observation.
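The onset of acceleration can be located numerically as the minimum of $|\dot{a}| = a|H(a)|$. The sketch below does this for the $\beta_{p,i}=0$ ($\Lambda$CDM) member of the family with illustrative values $\Omega_m=0.3$ and $H_0=1$ (assumptions, not the paper's fitted numbers), recovering an onset near $a\approx0.6$:

```python
import numpy as np

# Locate the onset of acceleration as the minimum of |a H(a)|.
# Parameters are illustrative (units H0 = 1, Omega_m = 0.3); beta = 0
# reduces eq-hcomp to the LambdaCDM member of the family.
rho_m0 = 0.9
W0sq = 4.0

def adot_mag(a):
    # |da/dt| = a |H(a)| with the beta = 0 form of eq-hcomp
    return a * 0.5 * np.sqrt((4.0/3.0) * rho_m0 * (a**-3 - 1.0) + W0sq)

a = np.linspace(0.2, 1.0, 100001)
a_onset = a[np.argmin(adot_mag(a))]   # expansion rate decreases before,
                                      # increases after this scale factor
```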
\subsubsection{Comparison with Observations} \label{sss-hob}
We have shown that the Hubble factor $H(a)$ is remarkably insensitive to either the
power of the potential, $\beta_{p,i}$, or $w_0$ and is essentially identical to the $\Lambda$CDM
$H(a)$ solution. This
makes the Hubble factor a poor parameter for discriminating between static and dynamical
dark energy. It, however, offers an excellent opportunity for determining $H_0$ for both
cosmologies. The recently compiled $H(a)$ observations by \citet{jes17} provide an
example of such a measurement. Using eqn.~\ref{eq-hcomp} as the model with
$H_0$ as the only variable, a chi-squared analysis determined that the most likely value
of $H_0$ for the example data set is $H_0 = 66.5$ (km/sec)/Mpc. Figure~\ref{fig-hchi} shows
the run of $\chi^2$ versus $H_0$. This is not a definitive result, just an example for this particular data set.
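The chi-squared scan can be sketched as follows. To keep the example self-contained, the data points here are synthetic, generated from the model itself with a chosen $H_0$ (they are \emph{not} the \citet{jes17} compilation), so the scan simply illustrates the machinery by recovering the input value:

```python
import numpy as np

# Chi-squared scan for H0, mimicking the fit described in the text.
# The "data" are synthetic points drawn from the model itself (an
# illustrative stand-in for the observed compilation).
Omega_m0 = 0.3

def H_model(a, H0):
    # LambdaCDM member (beta = 0) of eq-hcomp, written with H > 0
    return H0 * np.sqrt(Omega_m0 * a**-3 + (1.0 - Omega_m0))

rng = np.random.default_rng(0)
H0_true, sigma = 66.5, 3.0                        # (km/sec)/Mpc, assumed
a_obs = 1.0 / (1.0 + np.linspace(0.1, 2.0, 25))   # hypothetical redshifts
H_obs = H_model(a_obs, H0_true) + rng.normal(0.0, sigma, a_obs.size)

H0_grid = np.linspace(60.0, 75.0, 301)
chi2 = np.array([np.sum(((H_obs - H_model(a_obs, h)) / sigma)**2)
                 for h in H0_grid])
H0_best = H0_grid[np.argmin(chi2)]
```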
\begin{figure}
\includegraphics[width=7.6cm]{fig14.eps}
\caption{The values of $\chi^2$ versus $H_0$ for the \citet{jes17} data set, showing
a best fit of $H_0=66.5$ (km/sec)/Mpc for this particular data set.}
\label{fig-hchi}
\end{figure}
Figure~\ref{fig-hfit} shows the example $H_0=70$ and the best fit $H_0=66.5$
$H(a)$ evolution superimposed on the \citet{jes17} data set. The dashed curve for the
$H_0=70$ case is just barely resolved above the solid line. The minimum chi-squared of
about 5.6 is not a high quality fit but is probably consistent with the scatter in
the data set, providing evidence that the beta function calculations have more than sufficient
accuracy for comparison with observations.
\begin{figure}
\includegraphics[width=7.6cm]{fig15.eps}
\caption{The $H_0 = 66.5$ best fit and the $H_0=70$ proposal example fit to the
$H(z)$ data set.}
\label{fig-hfit}
\end{figure}
\subsection{The Dark Energy Equation of State} \label{ss-deos}
One of the most important observable cosmological parameters is the dark energy equation of
state $w$. The static $\Lambda$CDM cosmology predicts that $w$ equals minus one for all time,
whereas dynamical cosmologies predict values that deviate from minus one. It should be noted that
$w$ need not vary to produce dynamical cosmological parameters; it just needs to differ
from minus one. Section~\ref{s-fc} on fundamental constants is an example of such a case.
From \citet{cic17} the dark energy equation of state is given by
\begin{equation} \label{eq-wden}
1+w(\phi) = \frac{\beta^2}{3}(1-\frac{4\rho_{m_0}a^{-3}}{3W^2})^{-1}
=\frac{\beta^2}{3}(1-\Omega_m)^{-1}=\frac{\beta^2(\phi)}{3\Omega_{\phi}}
\end{equation}
for a flat universe where the terms after the first equality are provided by the author for
clarity. The second equality shows that $1+w(\phi)$ is proportional to $(1-\Omega_m)^{-1}$.
In the matter dominated era $\Omega_m$ approaches one making $(1-\Omega_m)^{-1}$
very susceptible to small errors in $\Omega_m$. For this reason the analytic solutions for
$(1+w)$ employ eqns.~\ref{eq-awa} and~\ref{eq-awai} for $W(a)$ rather than the
approximations for $W(a)$ and $H(a)$ in eqns.~\ref{eq-hcomp} and~\ref{eq-hcomi}.
In terms of the scale factor $a$ the dark energy equation of state $w(a)$ is given by
\begin{equation} \label{eq-wdea}
1+w(a) = \frac{\beta_{p,i}^2}{3 \phi^2_{p,i}(a)}(1-\frac{4\rho_{m_0}}{3a^3W^2_{p,i}(a)})^{-1}
\end{equation}
In eqn.~\ref{eq-wdea} $\phi_{p,i}(a)$ is given by eqn.~\ref{eq-phi} and $W_{p,i}(a)$
by eqns.~\ref{eq-awa} and~\ref{eq-awai}.
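A numerical sketch of eqn.~\ref{eq-wdea} for the power law case is given below. For simplicity it uses the approximate $W(a)$ of eqn.~\ref{eq-hcomp} rather than the Gamma function form preferred in the text, and fixes $\phi_0$ from the boundary condition $1+w_0=\beta_p^2/(3\phi_0^2\Omega_{\phi_0})$ implied by eqn.~\ref{eq-wden} at $a=1$; all parameter values are illustrative assumptions:

```python
import numpy as np

# 1 + w(a) of eq-wdea, power-law case, with the approximate W(a) of
# eq-hcomp.  Units H0 = 1; parameter values are illustrative only.
Omega_m0, w0, beta_p = 0.3, -0.94, 3.0
rho_m0 = 3.0 * Omega_m0
Omega_phi0 = 1.0 - Omega_m0
# boundary condition 1 + w0 = beta_p^2 / (3 phi0^2 Omega_phi0) at a = 1
phi0 = beta_p / np.sqrt(3.0 * Omega_phi0 * (1.0 + w0))

def phi_p(a):
    return np.sqrt(phi0**2 - 2.0 * beta_p * np.log(a))

def W2(a):
    # W^2(a) from eq-hcomp, with W0^2 = 4 H0^2 = 4
    return (4.0/3.0) * rho_m0 * (a**-3 - 1.0) + 4.0 * (phi_p(a)/phi0)**beta_p

def one_plus_w(a):
    return (beta_p / phi_p(a))**2 / 3.0 / (1.0 - 4.0*rho_m0/(3.0*a**3*W2(a)))
```

By construction `one_plus_w(1.0)` returns $1+w_0$, and the value grows toward smaller $a$, the freezing behavior seen in fig.~\ref{fig-ww}.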
Figure~\ref{fig-ww} shows the evolution of $1+w(a)$ for $\beta_{p,i}=3.0$ with the
five values of $w_0$.
\begin{figure}
\scalebox{.6}{\includegraphics{fig16.eps}}
\caption{The dark energy equation of state plus one with $\beta_{p,i}$ held constant at 3.0
for the five different values of $w_0$. The power law cases are plotted with a solid line and
the inverse power law cases with a dashed line.}
\label{fig-ww}
\end{figure}
As expected, the evolution of $(1+w(a))$ slowly freezes toward $w_0$ for scale factors
larger than 0.5, while there is significant evolution for scale factors smaller than 0.5. As the
deviation of $w_0$ from minus one increases, the inverse power law cases increasingly deviate from the
power law cases.
Figure~\ref{fig-wb} shows the evolution of $1+w(a)$ with $w_0=-0.94$ for the five different
values of $\beta_{p,i}$.
\begin{figure}
\scalebox{.6}{\includegraphics{fig17.eps}}
\caption{The dark energy equation of state plus one with $w_0 = -0.94$ for the five different
values of $\beta_{p,i}$. For both the power law (solid) and the inverse power law (dashed) cases
the degree of evolution decreases slightly with increasing $\beta_{p,i}$. All of the inverse
power law tracks lie above the power law tracks.}
\label{fig-wb}
\end{figure}
As with the other cosmological parameters the power $\beta_{p,i}$ of the power and inverse
power law potentials has only a small effect on the evolution of $w(a)$. Since the value of
$w_0$ was held constant all of the cases have the same present day value of $w(a)$.
\subsection{The Fundamental Constants} \label{s-fc}
Beyond the cosmological parameters the fundamental constants provide important and
generally underutilized information for discriminating between static and dynamic dark
energy. Fundamental constants are pure numbers with no dimensions and are therefore
invariant to the system of units. The constants considered here are the proton to electron
mass ratio, $\mu$, and the fine structure constant, $\alpha$. The standard model, with
the cosmological constant as dark energy, predicts that the fundamental constants are
temporally and spatially invariant. Quintessence and most other rolling scalar field
cosmologies predict a temporal variation of the constants that is proportional to the
deviation of $w$ from minus one \citep{cal11,thm12}. This connection occurs because
the scalar $\phi$ that provides dark energy also interacts with other sectors beyond
the gravitational sector.
In the absence of special and finely tuned symmetries it is very difficult to restrict
a scalar field that interacts with gravity from also interacting with the weak, electromagnetic and
strong sectors, e.g.,~\citep{car98, ave06}. In this scenario the same field $\phi$ that
serves as dark energy also produces changes in the fundamental constants and particle
physics parameters through interactions in sectors other than gravity. The values
of the fundamental constants such as the proton to electron mass ratio $\mu$ and the
fine structure constant $\alpha$ are set by the values of the particle physics parameters
such as the Quantum Chromodynamic Scale, $\Lambda_{QCD}$, the Higgs Vacuum Expectation
Value, $\nu$, and the Yukawa couplings, $h$. The scalar interacting with these particle
physics parameters produces changes in the fundamental constants \citep{coc07,thm17}.
The coupling of the scalar field to $\alpha$ and $\mu$ is given by the simple relation
\citep{nun04}
\begin{equation} \label{eq-dx}
\frac{\Delta c}{c} = \zeta_c(\phi - \phi_0), \hspace{0.5cm} c=\alpha,\mu
\end{equation}
The coupling to the constant $c$ is $\zeta_c$, which may be either positive or negative. The
linear dependence of the variance of the constants on $\phi$ can be thought of as the first
term in a Taylor series expansion of a more complicated coupling. Since the limits on
observed changes in the constants are on the order of $10^{-6}$ or less, the linear dependence
is a good approximation. Although $\zeta_{\mu}$ is written as a single term
it is actually a combination of the individual couplings to the QCD scale, the Higgs VeV and
the Yukawa couplings as discussed above, in \citet{thm17}, and at the end of
Section~\ref{ss-muob}. It is clear from eqn.~\ref{eq-dx} that once the beta function
is defined and the boundary condition selected the evolution of the fundamental
constants is completely defined. This is one of the significant advantages of the beta
function formalism. Using the connection between $w$ and $\phi$ given in
eqn.~\ref{eq-nun4} the evolution of the fundamental constants can also be written as
\begin{equation} \label{eq-wfc}
\frac{\Delta c}{c} = \zeta_c \int_1^a \sqrt{3 \Omega_{\phi}(w+1)}x^{-1}dx
\end{equation}
\citep{cal11,thm12} which shows that whenever $w$ differs from minus one the fundamental
constants are expected to vary, making $\mu$ and $\alpha$ ``$w$ meters'' in the universe and
excellent discriminators between static and dynamic dark energy. The beta function, however,
provides a much simpler method for predicting the evolution of the constants as a function of
the cosmology and the dark energy potential.
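The equivalence of the two routes can be checked numerically: since $3\Omega_{\phi}(w+1)=\beta^2(\phi)$ by eqn.~\ref{eq-wden}, the integrand of eqn.~\ref{eq-wfc} is $|\beta(\phi(x))|/x$ and the integral reduces to $\zeta_c\,\Delta\phi$, up to the sign lost in the square root. A sketch for the power law case with illustrative values ($\phi_0$ hypothetical):

```python
import numpy as np

# Check that the integral form (eq-wfc) reproduces the beta-function
# result Delta c / c = zeta (phi(a) - phi0), using
# sqrt(3 Omega_phi (w+1)) = |beta(phi)| = beta_p / phi_p for the power law.
beta_p, phi0, zeta = 3.0, 30.0, 1.0e-6   # illustrative values

def phi_p(a):
    return np.sqrt(phi0**2 - 2.0 * beta_p * np.log(a))

def dc_integral(a, n=200001):
    # trapezoid rule for zeta * int_a^1 (beta_p / phi_p(x)) dx / x,
    # which equals zeta * (phi_p(a) - phi0) analytically
    x = np.linspace(a, 1.0, n)
    f = (beta_p / phi_p(x)) / x
    h = x[1] - x[0]
    return zeta * h * (f.sum() - 0.5 * (f[0] + f[-1]))

dc_closed = lambda a: zeta * (phi_p(a) - phi0)
```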
Since the proton to electron mass ratio $\mu$ has the most reliable and tightest restriction on its
temporal variance, it is used as the example in this discussion. The discussion for the fine
structure constant $\alpha$ is for the most part exactly the same except for the substitution of
$\zeta_{\alpha}$ for $\zeta_{\mu}$ in eqn.~\ref{eq-dx} or~\ref{eq-wfc}. In both cases
the coupling $\zeta_{\mu,\alpha}$ is considered a constant. The evolution of $\mu$ as a function
of the scale factor is simply
\begin{equation} \label{eq-dmu}
\frac{\Delta \mu}{\mu} =\zeta_{\mu} (\sqrt{-2 \beta_p \ln(a) + \phi_0^2}-\phi_0), \hspace{0.5cm} \zeta_{\mu}( \sqrt{2 \beta_i \ln(a) + \phi_0^2}-\phi_0)
\end{equation}
for the power law, $\beta_p$ or the inverse power law, $\beta_i$ dark energy potentials.
As an example $\zeta_{\mu}$ is set to $10^{-6}$ and $\beta_{p,i}$ to 3.0. Figure~\ref{fig-dmu}
shows the evolution for both the power law (solid lines) and the inverse power law (dashed)
line cases. Since the coupling constant can be either positive or negative the sign of
$\frac{\Delta \mu}{\mu}$ is not a discriminator between the power and inverse power law
potentials unless the sign of the coupling is somehow determined.
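Equation~\ref{eq-dmu} is straightforward to evaluate. The sketch below uses the text's illustrative $\zeta_{\mu}=10^{-6}$ and $\beta_{p,i}=3$ together with a hypothetical $\phi_0$, and exhibits the opposite signs of the two cases for a positive coupling:

```python
import numpy as np

# Delta mu / mu from eq-dmu for both potentials.  zeta_mu and beta are
# the illustrative values used in the text; phi0 is hypothetical.
zeta_mu, beta, phi0 = 1.0e-6, 3.0, 30.0

def dmu_power(a):
    return zeta_mu * (np.sqrt(-2.0 * beta * np.log(a) + phi0**2) - phi0)

def dmu_inverse(a):
    return zeta_mu * (np.sqrt(2.0 * beta * np.log(a) + phi0**2) - phi0)
```

Both variations vanish at $a=1$ by construction, and for $a<1$ the power-law and inverse-power-law tracks have opposite signs, mirroring fig.~\ref{fig-dmu}.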
\begin{figure}
\scalebox{.6}{\includegraphics{fig18.eps}}
\caption{The evolution of $\frac{\Delta \mu}{\mu}$ for the five different values of $w_0$ with
$\beta_{p,i}$ set to 3.}
\label{fig-dmu}
\end{figure}
The sensitivity to $w_0$ is evident in the figure. Figure~\ref{fig-dmub} shows the evolution
of $\frac{\Delta \mu}{\mu}$ with $w_0$ held constant at -0.94 with the five different values of
$\beta_{p,i}$.
\begin{figure}
\scalebox{.6}{\includegraphics{fig19.eps}}
\caption{The evolution of $\frac{\Delta \mu}{\mu}$ for the five different values of $\beta_{p,i}$
with $w_0$ set to -0.94}
\label{fig-dmub}
\end{figure}
As expected the evolution of $\frac{\Delta \mu}{\mu}$ is largely insensitive to $\beta_{p,i}$
since it is proportional to $\Delta \phi$ which is also largely independent of the power of the
power laws as shown in fig.~\ref{fig-delphi}.
\subsubsection{Observational Constraints on $\frac{\Delta \mu}{\mu}$} \label{ss-muob}
Observations of molecular absorption lines from cold gas along the line of sight to distant
quasars provide the constraints on $\frac{\Delta \mu}{\mu}$. Changes in $\mu$ alter the
energy levels of molecules according to the quantum numbers of the upper and lower states
of the transition \citep{thm75}, changing the wavelengths of the transitions in a manner that
cannot be mimicked by a redshift. The majority of constraints arise from the observation
of molecular hydrogen absorption lines of the Lyman and Werner bands at redshifts greater
than 2. More recently radio observations of methanol and ammonia absorption lines at redshifts
less than one have provided more stringent constraints. The tightest constraints come from
methanol lines in the spectrum of PKS1830-211 at a redshift of 0.88582 by \citet{bag13}
and \citet{kan15} finding $\frac{\Delta \mu}{\mu} = (-2.9 \pm 5.7)\times 10^{-8}$.
Concerns about common lines of sight have raised the $1\sigma$ error to $\pm10^{-7}$,
which will be used here. Figure~\ref{fig-alle} shows all of the measurements to date.
\begin{figure}
\scalebox{.6}{\includegraphics{fig20.eps}}
\caption{All of the observational constraints on $\frac{\Delta \mu}{\mu}$ with the evolution
of $\frac{\Delta \mu}{\mu}$ from fig.~\ref{fig-dmu} superimposed. The radio constraints
at redshifts less than one are not visible at the resolution of this figure.}
\label{fig-alle}
\end{figure}
All of the measurements at redshifts greater than one are optical observations of molecular
hydrogen redshifted into the visible region. The radio constraints at redshifts less than one
are not visible at the scale of this plot.
The PKS1830-211 constraint is shown in fig.~\ref{fig-rad} at expanded scale to make the
constraint visible.
\begin{figure}
\scalebox{.6}{\includegraphics{fig21.eps}}
\caption{The radio observational constraints on $\frac{\Delta \mu}{\mu}$ with the evolutionary
tracks from fig.~\ref{fig-dmu}. For $\beta_{p,i} = 3.0$ only the track with $w_0 = -0.98$
satisfies the constraint at the $1\sigma$ level with $\zeta_{\mu} = 10^{-6}$.}
\label{fig-rad}
\end{figure}
For $\beta_{p,i}=3.0$ and $\zeta_{\mu}=10^{-6}$ the constraint requires $(w_0+1)$ to be 0.02 or
less and is of course consistent with the $\Lambda$CDM value of zero. If the error bar
were centered on zero then $(w_0+1)$ would have to be less than 0.02.
Any constraint on $\frac{\Delta \mu}{\mu}$ or $\frac{\Delta \alpha}{\alpha}$ can be met
by either adjusting a cosmological parameter $(w_0+1)$ or a particle physics parameter
$\zeta_{\mu,\alpha}$, therefore, the observations constrain a two dimensional space. The
observations define allowed and forbidden areas in the $\zeta_{\mu,\alpha}$, $(w_0+1)$
parameter space. Figure~\ref{fig-fa} shows the allowed and forbidden areas defined by
the $\frac{\Delta \mu}{\mu}$ constraint.
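The boundary of the allowed region can be sketched as the largest coupling compatible with the PKS1830-211 bound at each value of $(w_0+1)$. Here $\phi_0$ is fixed from the boundary condition $1+w_0=\beta_p^2/(3\phi_0^2\Omega_{\phi_0})$, and the numbers ($\beta_p=3$, $\Omega_{\phi_0}=0.7$, a $10^{-7}$ bound at $z=0.88582$) are the illustrative ones used above:

```python
import numpy as np

# Boundary of the allowed region in the (w0 + 1, zeta_mu) plane from the
# bound |Delta mu / mu| <= 1e-7 at z = 0.88582 (power-law case).
# Illustrative sketch; phi0 follows from the boundary condition.
beta_p, Omega_phi0, bound = 3.0, 0.7, 1.0e-7
a_obs = 1.0 / (1.0 + 0.88582)

def zeta_max(w0_plus_1):
    phi0 = beta_p / np.sqrt(3.0 * Omega_phi0 * w0_plus_1)
    dphi = np.sqrt(phi0**2 - 2.0 * beta_p * np.log(a_obs)) - phi0
    return bound / dphi   # largest |zeta_mu| still allowed
```

A larger deviation of $w_0$ from minus one forces a tighter limit on the coupling, which is the trade-off that shapes the allowed region in fig.~\ref{fig-fa}.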
\begin{figure}
\scalebox{.6}{\includegraphics{fig22.eps}}
\caption{The allowed and forbidden regions in the $\zeta_{\mu}$ vs $(w_0+1)$ plane
imposed by the limit on $\frac{\Delta \mu}{\mu}$ shown in fig.~\ref{fig-rad}.}
\label{fig-fa}
\end{figure}
More stringent observational bounds on $(w_0+1)$ could place currently allowed regions
into the forbidden region. The only point on the diagram consistent with $\Lambda$CDM
and the standard model is the origin.
The coupling constants $\zeta_{\mu,\alpha}$ are really couplings to the particle physics
parameters, the Quantum Chromodynamic scale, $\Lambda_{QCD}$, the Higgs Vacuum
Expectation Value, $\nu$, and the Yukawa couplings, $h$ \citep{coc07,thm17}. The fractional
variations, $\frac{\Delta \mu}{\mu}$ and $\frac{\Delta \alpha}{\alpha}$ are two different
functions of the fractional variations of $\Lambda_{QCD}$, $\nu$ and $h$. The combined
limits on the fractional variation of $\mu$ and $\alpha$ then place limits on
$\frac{\Delta \Lambda_{QCD}}{\Lambda_{QCD}}<7.9\times 10^{-5}$ and the sum
$(\frac{\Delta \nu}{\nu} +\frac{\Delta h}{h})<8.0\times 10^{-5}$ that cannot be duplicated
by laboratory measurements \citep{thm17}.
\section{Relevant but not Directly Observable Parameters} \label{s-rp}
There are several cosmological parameters that are relevant but not directly observable.
Here three parameters, the time derivative of the scalar field, the dark energy density,
and the dark energy pressure, are calculated as functions of the scale factor $a$.
\subsection{The Evolution of the Time Derivative of the Scalar} \label{ss-phidot}
The time derivative of the scalar $\phi$ is an important cosmological parameter that
appears in both the dark energy pressure and density equations. Since the beta function
is the derivative of the scalar with respect to the natural log of the scale factor the time
derivative of the scalar is simply the Hubble parameter times the beta function.
\begin{equation} \label{eq-pdot}
\dot{\phi}= a\frac{d \phi}{da}\frac{\dot{a}}{a}=\beta H =-\frac{1}{2}\beta W
\end{equation}
Figure~\ref{fig-pdot}
shows the evolution of $\dot{\phi}$ with respect to the scale factor $a$. Since $H(a)$ is
essentially invariant to either the power of the dark energy potential or the value of $w_0$,
the dependence on $w_0$ is entirely due to the beta function's dependence on $\beta_{p,i}$
and $w_0$. Figures~\ref{fig-betaw} and~\ref{fig-betab} show that the main dependence
is on $w_0$ as opposed to $\beta_{p,i}$.
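A numerical sketch of $\dot{\phi}$ from the relations of eqns.~\ref{eq-pdp} and~\ref{eq-pdi}, using the approximate $W(a)$ (taken as the negative root, so that $H=-W/2>0$) and illustrative parameter values:

```python
import numpy as np

# phi_dot = beta(phi) H = -(1/2) beta(phi) W, with the approximate W(a).
# Units H0 = 1; rho_m0, beta and phi0 are illustrative assumptions.
rho_m0, beta, phi0 = 0.9, 3.0, 30.0

def phi_pa(a):
    return np.sqrt(phi0**2 - 2.0 * beta * np.log(a))

def phi_ia(a):
    return np.sqrt(phi0**2 + 2.0 * beta * np.log(a))

def W_approx(a, inverse=False):
    ratio = (phi0/phi_ia(a))**beta if inverse else (phi_pa(a)/phi0)**beta
    return -np.sqrt((4.0/3.0) * rho_m0 * (a**-3 - 1.0) + 4.0 * ratio)

def phi_dot(a, inverse=False):
    if inverse:
        return -(beta / phi_ia(a)) * W_approx(a, True) / 2.0   # eq-pdi
    return (beta / phi_pa(a)) * W_approx(a) / 2.0              # eq-pdp
```

With $W<0$ the power-law $\dot{\phi}$ comes out negative and the inverse-power-law $\dot{\phi}$ positive, matching the signs discussed below, and both magnitudes shrink as $a\to1$.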
\begin{figure}
\scalebox{.6}{\includegraphics{fig23.eps}}
\caption{The time derivative of the scalar. The $(w_0+1)=0.1$ case has the most evolution
and $(w_0+1)=0.02$ has the least evolution for both the power (solid lines) and inverse power
(dashed lines) law potentials.}
\label{fig-pdot}
\end{figure}
An analytic expression for $\dot{\phi}$ is obtained by multiplying either eqn.~\ref{eq-awa}
or~\ref{eq-awai} by $(-\frac{1}{2})\beta(\phi)$ with $\beta(\phi)$ given by the appropriate
functions in eqns.~\ref{eq-betap} and~\ref{eq-phi}. An alternative is to use the functions
in eqns.~\ref{eq-hcomp} and~\ref{eq-hcomi} for $H(a)$ resulting in
\begin{equation} \label{eq-pdp}
\dot{\phi}_p(a)=\frac{\beta_p}{2 \phi_p(a)}W_p(a)
\end{equation}
for the power law potential and
\begin{equation} \label{eq-pdi}
\dot{\phi}_i(a)= \frac{-\beta_i}{2\phi_i(a)}W_i(a)
\end{equation}
for the inverse power law. In eqns.~\ref{eq-pdp} and~\ref{eq-pdi} the approximate forms
of $H(a)$ can also be used; using the Gamma function forms is slightly more accurate.
It is obvious from fig.~\ref{fig-pdot} that although there is significant early time evolution of
$\dot{\phi}$ the late time evolution is a slow approach to zero. This indicates that power and
inverse power law quintessence predicts very small time variations of the fundamental constants
at the present time. This is a general characteristic of most freezing cosmologies where $w$ is
initially different from minus one and evolves toward minus one with time. The power law
values of $\dot{\phi}$ are negative since the scalar is decreasing while the inverse power law
values are positive since the scalar is increasing with time for this case.
\subsection{The Evolution of the Dark Energy Density and Pressure} \label{ss-dedp}
From eqn.~\ref{eq-em1} it is clear that
\begin{equation} \label{eq-rphi}
\rho_{\phi} = 3 H^2 - \rho_m = 3H^2(a) -\frac{\rho_{m_0}}{a^3}
\end{equation}
which is consistent with eqn.~3.8 from \citet{cic17}, which gives the total potential with mass
as
\begin{equation} \label{eq-vm}
V=\rho_{\phi} - \frac{1}{2}\dot{\phi}^2=3H^2-\rho_m - \frac{1}{2}\dot{\phi}^2
\end{equation}
Figure~\ref{fig-aden} shows the evolution of the densities using the Gamma function
equations~\ref{eq-awa} and~\ref{eq-awai} to compute $H(a)$. The matter
density, shown by the dash dot line, is also plotted to indicate the crossover from matter
to dark energy dominated evolution.
\begin{figure}
\scalebox{.6}{\includegraphics{fig24.eps}}
\caption{The power law potential (solid line) and inverse power law (dashed line) dark
energy density values as a function of the scale factor. In both cases the $w_0 = -0.9$
tracks have the highest values and the $w_0=-0.98$ tracks have the lowest values. The
dash-dot line is the matter density, which decreases below the dark energy density near
a scale factor of 0.75.}
\label{fig-aden}
\end{figure}
For values of $(w_0+1)$ close to zero the power and inverse power law plots nearly
overlap but as $(w_0+1)$ diverges from zero the inverse power law cases have slightly
higher densities at scale factors less than 0.5. All cases converge to the boundary condition
on the density at a scale factor of one. Most of the evolution of $\rho_{\phi}$ occurs at
scale factors less than 0.3, consistent with the previous plots of the evolution of $(w+1)$ in
fig.~\ref{fig-ww} and $\dot{\phi}^2$ in fig.~\ref{fig-pdot}.
The dark energy pressure is also given in eqns.~\ref{eq-rhop},
\begin{equation} \label{eq-dep}
p_{\phi}(a) = \dot{\phi}^2 -3H^2(a)+\frac{\rho_{m_0}}{a^3}
\end{equation}
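The density and pressure can be checked against the boundary condition $w(a=1)=w_0$, since $p_{\phi}=\dot{\phi}^2-\rho_{\phi}$ follows from the kinetic/potential split $\rho_{\phi}=\frac{1}{2}\dot{\phi}^2+V$, $p_{\phi}=\frac{1}{2}\dot{\phi}^2-V$. A sketch for the power law case with illustrative values:

```python
import numpy as np

# rho_phi = 3H^2 - rho_m and p_phi = phi_dot^2 - rho_phi, checked at
# a = 1 against w0.  Units H0 = 1; values illustrative, power-law case.
Omega_m0, w0, beta_p = 0.3, -0.94, 3.0
rho_m0 = 3.0 * Omega_m0
phi0 = beta_p / np.sqrt(3.0 * (1.0 - Omega_m0) * (1.0 + w0))

def phi_p(a):
    return np.sqrt(phi0**2 - 2.0 * beta_p * np.log(a))

def H2(a):
    # H^2(a) from the approximate form of eq-hcomp, with W0^2 = 4
    return 0.25 * ((4.0/3.0) * rho_m0 * (a**-3 - 1.0)
                   + 4.0 * (phi_p(a)/phi0)**beta_p)

def rho_phi(a):
    return 3.0 * H2(a) - rho_m0 / a**3

def p_phi(a):
    phid = (beta_p / phi_p(a)) * np.sqrt(H2(a))   # |phi_dot| = |beta| |H|
    return phid**2 - rho_phi(a)
```

The ratio `p_phi(1.0) / rho_phi(1.0)` recovers $w_0$, confirming that the boundary condition propagates consistently through the density and pressure expressions.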
Figure~\ref{fig-allp} shows the evolution of the dark energy pressure.
\begin{figure}
\scalebox{.6}{\includegraphics{fig25.eps}}
\caption{The dark energy pressure. The $(w_0+1)=0.1$ case has the most evolution
and $(w_0+1)=0.02$ has the least evolution for both the power (solid lines) and inverse power
(dashed lines) law potentials.}
\label{fig-allp}
\end{figure}
As was the case with the dark energy density the inverse power law dark energy potential case
has more evolution than the power law case, particularly for $w_0$ values further from minus
one. Neither the dark energy density nor the dark energy pressure is significantly dependent
on the power $\beta_{p,i}$ for a given value of $w_0$.
\section{Summary} \label{s-sum}
The beta function formalism is demonstrated using the example of the quintessence
cosmology with power and inverse power law dark energy potentials. Simple beta
functions were found, $\beta(\phi) = \pm\frac{\beta_{p,i}}{\phi}$ where
$\beta_{p,i}$ is a constant equal to the power. The minus sign
applies to the power law, $p$, and the positive sign to the inverse power law, $i$. From the
beta functions the scalar $\phi$, as a function of the scale factor $a$ is calculated with a
boundary condition supplied by the current value of the dark energy equation of state $w$. This
provides an easy transition from functions of the generally unobservable scalar $\phi$ to functions
of the easily observable scale factor $a=\frac{1}{1+z}$. Beta potentials are produced that
reproduce the model dark energy potentials to better than one percent. These potentials
produce actions that accurately represent the actions with the model potentials. The extra
beta function combined with the quintessence equations for the dark energy pressure and
density plus the usual cosmological equations provide the means to calculate an analytic
function for the super potential $W$, with $H=-\frac{1}{2}W$, where $H$ is the Hubble parameter.
The super potential automatically provides the Hubble parameter as a function of the scale
factor. It is found that the Hubble parameter is essentially insensitive to the power
of the potential or $w_0$ and includes the $\beta_{p,i} =0$ case which corresponds to the
$\Lambda$CDM cosmology. This demonstrates that the Hubble parameter is not a
good indicator to discriminate between static and dynamical dark energy. It is confirmed
that the transition from matter dominated to dark energy dominated epochs occurs
at the proper time and that the evolution of the Hubble parameter matches a randomly
selected current list of $H(z)$ measurements. The measurements also provide a
best fit value of $H_0$ for the selected data set.
Additional observable parameters, the dark energy equation of state and the variation
of the fundamental constants $\mu$ and $\alpha$ in a rolling scalar field cosmology, are calculated.
The limits on the variation of the constants impose allowed and forbidden regions in the
two dimensional $(w_0+1)$, $\zeta_{\mu}$ plane in a balance between cosmological and
elementary particle physics parameters. Analytic expressions for three not directly
observable parameters, $\dot{\phi}$, $\rho_{\phi}$ and $p_{\phi}$ are also calculated.
It is generally noted that the parameter evolution is more sensitive to the current value
of the dark energy equation of state $w_0$ than the power of the potentials $\beta_{p,i}$.
This work demonstrates the power of the beta function formalism to produce accurate
predictions for comparison with observation. The formalism is extendable to other forms
of the dark energy potential and other cosmologies, which will be the subject of future work.
\section{Introduction}
\label{sec:intro}
We consider the problem of learning a symmetric positive definite (SPD) matrix in a nonlinear regression setting, ${ {\vc Y} = f(\vc X)}$. The input ${\vc X}$ is a matrix of arbitrary size which can be represented as a column vector without loss of generality. The output ${\vc Y}$, in contrast, is a structured matrix in the form of an SPD matrix. Let the training set ${\mc D_{\mr{train}}=\{\vc Y_i, \vc X_i\}_{i=1}^n}$ include $n$ instances of such inputs and outputs. The task is to learn the function $f$ such that, given an unseen test input ${{\vc X_{\star}}\in \mc D_{\mr{test}}}$, it produces the prediction of the corresponding output ${\widehat {\vc Y}}_{\star}$ under the constraint that $\widehat {\vc Y}_{\star}$ has to be an SPD matrix.
Consider solving the problem using a multilayer perceptron (MLP) neural network \citep[e.g.,][]{Goodfellow2016}. The standard MLP cannot straightforwardly be used, since as in most other neural network architectures, an MLP is designed to learn a vector of parameters from data without the consideration of any constraints.
The objective is to design a nonlinear architecture which can learn the target outputs ${\widehat {\vc Y}}$ while satisfying the SPD constraint across all layers.
Our \textit{main contribution} is to show how to alter the architecture of the MLP in such a way that it not only respects the constraints, but also makes explicit use of them.
We will achieve this by:
1) Explicitly taking the non-Euclidean geometry of the underlying SPD manifolds \citep[e.g.,][]{Pennec2005} into account by designing a new loss function, and 2) by deriving a new backpropagation algorithm \citep{Rumelhart1986} that respects the SPD nature of the matrices.
This new model will be referred to as the \emph{matrix multilayer perceptron (mMLP)}.
The mMLP makes use of positive-definite kernels to satisfy the SPD requirement across all layers. Hence, it provides a natural way of enabling deep SPD matrix learning.
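As a minimal illustration of how a kernel can enforce the SPD constraint at a layer's output, the sketch below maps an unconstrained feature matrix to an SPD matrix through an RBF Gram matrix plus a small ridge term. This is an assumed, simplified construction for intuition only, not the exact mMLP architecture:

```python
import numpy as np

# Sketch: map an unconstrained hidden representation B to an SPD matrix
# via an RBF kernel Gram matrix plus a small ridge for strict positive
# definiteness.  Illustrative only; not the authors' exact model.
def spd_from_features(B, gamma=1.0, eps=1e-6):
    # B: (d, m) matrix, one row of latent features per output dimension
    sq = np.sum(B**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * B @ B.T   # pairwise sq. distances
    K = np.exp(-gamma * np.clip(d2, 0.0, None))      # RBF Gram matrix (PSD)
    return K + eps * np.eye(B.shape[0])              # strictly PD output

rng = np.random.default_rng(1)
Y = spd_from_features(rng.normal(size=(5, 3)))
```

Because a Gram matrix of a positive-definite kernel is positive semidefinite for any input features, the output satisfies the SPD constraint regardless of the unconstrained parameters that produced `B`.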
We take a step-by-step approach in the development of the model. We first develop a simplified version of the resulting model that is designed for learning SPD matrices. We then extend this model into its most general form and show how it can be applied in connection to the VAE \citep{Kingma2014,Rezende2014}. More specifically, we replace the MLP in the vanilla VAE with the mMLP. This will crucially allow us to consider more general parametric families of distributions, in particular, those with dense covariance (dispersion) matrices.
Two concrete examples are considered: the \emph{dense covariance} multivariate Gaussian distribution and the multivariate power exponential (mPE) distribution \citep[e.g.,][]{Gomez1998}.
Based on the parametric choice of the distributions we examine the effect of increasing model flexibility not only on the VAE's recognition network but also its generative network. This is achieved by relaxing the diagonality assumption on the covariance matrices and the Gaussian assumption.
\section{Related Work}
\label{sec:related}
\paragraph*{SPD manifold metric.}
Earlier approaches for analyzing SPD matrices relied on the Euclidean space. Several recent studies suggest that non-Euclidean geometries such as the Riemannian structure may be better suited \citep[e.g.,][]{Arsigny2006,Pennec2005}. In this work, we consider the von Neumann divergence \citep[e.g.,][]{Nielsen2000} as our choice of the SPD manifold metric, which is related to the Riemannian geometry. Previously, \citet{Tsuda2005} used this divergence in the derivation of matrix exponentiated gradients. Their work suggests its effectiveness for measuring dissimilarities between positive definite (PD) matrices.
\vspace{-2ex}
\paragraph*{SPD manifold learning.}
There are multiple approaches towards the SPD matrix learning, via flattening SPD manifolds through tangent space approximations \citep[e.g.,][]{Tuzel2008,Fathy16},
mapping them into reproducing kernel Hilbert spaces \citep{Harandi2012b,Minh2014},
or geometry-aware SPD matrix learning \citep{Harandi2014}.
While these methods typically follow shallow learning, the more recent line of research aims to design a deep architecture to nonlinearly learn target SPD matrices \citep{Ionescu2015,Huang2017ARN,Masci2015GeodesicCN,Huang2018}. Our method falls in this category but differs in the problem formulation. While the previous methods address the problem where the input is an SPD matrix and the output is a vector, we consider the reverse problem where the input is a matrix with an arbitrary size and the output is an SPD matrix.
\vspace{-2ex}
\paragraph*{Backpropagation.} Our extension of the matrix backpropagation differs from the one introduced by \citet{Ionescu2015}. In their work, the necessary partial derivatives are computed using a two-step procedure consisting of first computing the functional that describes the variations of the upper layer variables with respect to the variations of the lower layer variables, and then computing the partial derivatives with respect to the lower layer variables using properties of the matrix inner product. In contrast, we make use of the concept of $\alpha$-derivatives \citep{Magnus2010} and its favorable generalization properties to derive a routine which \emph{closely} mimics the standard backpropagation.
\vspace{-2ex}
\paragraph*{Flexible variational posterior in the VAE.}
An active line of research in the VAE is related to designing flexible variational posterior distributions that preserve dependencies between latent variables.
In this regard, the early work of \citet{Rezende2014} proposed the use of a rank-1 covariance matrix with a diagonal correction. Although computationally attractive, this makes for a poor approximation of the desired dense covariance matrix. Another approach is to induce dependencies between latent variables by conditioning on auxiliary variables \citep{Maale2016} or assuming hierarchical structures \citep{Ranganath2016,Tran2016}. Finally, an alternative approach toward achieving flexible variational posteriors is based on the idea of normalizing flows \citep{Rezende2015,Kingma2016}. In this work, we take a different approach toward increasing the posterior flexibility: learning dense covariance matrices via the mMLP model and relaxing the Gaussian assumption via the mPE distribution. While the approach taken by \citet{Kingma2016} toward learning dense covariance matrices solves an overparameterized problem, our formulation makes explicit use of kernels, within the mMLP architecture, to learn the dense covariance matrices.
\vspace{-3ex}
\section{Preliminaries}
\label{sec:preli}
\paragraph{Matrix $\alpha$-derivative.}
Throughout this work we adopt the \emph{narrow} definition of matrix derivatives, known as the $\alpha$-derivative \citep{Magnus2010}, in favor of the broad definition, the $\omega$-derivative. The reason is that the $\alpha$-derivative has better generalization properties. This choice turned out to be crucial in deriving the mMLP's backpropagation routine, which involves derivatives of matrix functions w.r.t. the matrix of variables. The $\alpha$-derivative and some of its properties are introduced in Appendix~\ref{app:alpha}.
\vspace*{-2ex}
\paragraph{Bregman matrix divergences.}
Let us restrict ourselves to the domain of SPD matrices. Furthermore, let ${\mc F(\vc X):\mbb R\p{d\times d}\rightarrow \mbb R}$ be a real-valued strictly convex differentiable function of the parameter domain and ${\mathscr {F}(\vc X)=\vc \nabla_{\vc X} \mc F(\vc X)}$, where $\vc \nabla_{\vc X} \mc F(\vc X)$ denotes the gradient w.r.t. the matrix. Then the \emph{Bregman divergence} between $\vc X$ and $\widetilde{\vc X} $ is defined as \citep[e.g.,][]{Kulis2009}
\begin{align}
\label{eq:bergman}
\!{\Delta_{\mc F} (\widetilde{\vc X} || \vc X)\! :=\! \mc F (\widetilde{\vc X}) \!-\! \mc F({\vc X}) - \msf {tr} ((\widetilde{\vc X}\! - \!\vc X) \mathscr F({\vc X})^\top)}.
\end{align}
Bregman divergences are non-negative, definite, and in general asymmetric. There are several choices for the function $\mc F$ \citep[e.g.,][]{Sra2016}. The most common choice is probably ${\mc {F}(\vc X)=-\log \mr{det}}(\vc X)$, which leads to the Stein divergence \citep{Stein1956}, also commonly known as the LogDet divergence (refer to Appendix~\ref{app:Stein} for details). However, in this work, we argue in favor of the \emph{von Neumann entropy}, also known as the \emph{quantum relative entropy (QRE)} \citep[e.g.,][]{Nielsen2000}, as the choice of function. A numerical example is discussed in Section~\ref{sec:example1} which highlights the advantage of the von Neumann divergence over the Stein divergence as the choice of the SPD manifold metric within the mMLP architecture.
Using the von Neumann entropy as our choice of function in \eqref{eq:bergman}, we arrive at:
${\mc F (\vc X) = \msf{tr}(\vc X\mathfrak{log} \vc X - \vc X)}$, where $\mathfrak{log} $ denotes the matrix logarithm---for an SPD matrix $\vc A$, it is computed using ${\mathfrak{log}\vc A= \vc V \mr{diag}(\log \bc\lambda)\vc V^\top}$, where $\vc V $ and $\bc\lambda$ are the matrix of eigenvectors and the vector of eigenvalues from the eigendecomposition of $\vc A$. The Bregman divergence corresponding to this choice of function is known as the von Neumann divergence, ${\Delta_{\mc F} (\widetilde{\vc X} || \vc X) = \msf {tr}(\widetilde{\vc {X}} \mathfrak{log} \widetilde{\vc X} - \widetilde{\vc X} \mathfrak{log}{\vc X} - \widetilde{\vc X} + {\vc X})}$.
Throughout, we consider the cases where the parameters are normalized so that: ${\msf {tr}(\vc X) = \msf{tr}(\widetilde{\vc X})=1}$. The normalized von Neumann divergence is given by
\begin{align}
\label{eq:qre}
\Delta_{\mr {QRE}} (\widetilde{\vc X} || \vc X) = \msf {tr}(\widetilde{\vc {X}} \mathfrak{log} \widetilde{\vc X} - \widetilde{\vc {X}} \mathfrak{log}{\vc {X}}).
\end{align}
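For concreteness, the normalized divergence in \eqref{eq:qre} is straightforward to evaluate numerically through the eigendecomposition-based matrix logarithm. The following minimal NumPy sketch (illustrative only, not part of the paper's implementation) checks its non-negativity and definiteness on random trace-one SPD matrices:

```python
import numpy as np

def matrix_log(A):
    # Matrix logarithm of an SPD matrix: log A = V diag(log lambda) V^T.
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(lam)) @ V.T

def vn_divergence(Xt, X):
    # Normalized von Neumann divergence, Eq. (qre):
    # tr(Xt log Xt - Xt log X), assuming tr(Xt) = tr(X) = 1.
    return np.trace(Xt @ matrix_log(Xt) - Xt @ matrix_log(X))

rng = np.random.default_rng(0)
def rand_spd_tr1(d):
    # Random well-conditioned SPD matrix, normalized to trace one.
    A = rng.standard_normal((d, d))
    S = A @ A.T + d * np.eye(d)
    return S / np.trace(S)

X, Y = rand_spd_tr1(4), rand_spd_tr1(4)
print(vn_divergence(X, Y) >= 0)              # non-negative
print(np.isclose(vn_divergence(X, X), 0.0))  # definite: zero when arguments agree
```

Note that the divergence is asymmetric, i.e., `vn_divergence(X, Y)` and `vn_divergence(Y, X)` generally differ, which motivates the symmetrized loss used later.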
\vspace*{-4ex}
\section{Matrix Multilayer Perceptron}
\label{sec:mmlp}
We first construct the basic form of the mMLP suitable for learning SPD matrices. Next, we
construct its general form which will be applied in connection to the VAE.
\vspace*{-1ex}
\subsection{The Basic Form of the mMLP}
\label{sec:mmlp1}
\vspace*{-1ex}
\paragraph{Activation matrix function.}
Let ${\vc Z=(\vc z_1, \ldots, \vc z_{d})}$ denote a matrix of variables ${\vc z_i\in \mbb R\p {d}}$. The activation function $\mc K(\vc Z)$ defines a matrix function in the form of ${[\mc K(\vc Z)]_{i,j} = \kappa(\vc z_{i}, \vc z_{j}) }$, ${\forall i, j \in\{1, \ldots, d\}}$,
where $\kappa$ is some differentiable activation function outputting scalar values. In the following, we restrict ourselves to the kernel functions which form PD activation matrix functions. Irrespective of the functional form of $\kappa$, we will---mostly for numerical reasons and partly for the fact that our loss function in \eqref{eq:loss_qre} will make use of the normalized von Neumann divergence---need to normalize the resulting kernel matrix.
This can be achieved by enforcing the trace-one constraint,
\begin{align}
{\mc H(\vc Z) = {\mc K(\vc Z)}/{\msf{tr}(\mc K(\vc Z))}},
\end{align}
where $\mc H$ denotes a differentiable PD activation matrix function of trace one.
Without loss of generality, throughout this work, we use the Mercer sigmoid kernel \citep{Carrington2014} defined as
\begin{align}
\label{eq:msk}
{\kappa({\vc z_i, \vc z_j) = {\mr{tanh} (\alpha \vc z_i + \beta)} \odot \mr{tanh} (\alpha \vc z_j + \beta)} },
\end{align}
where $\alpha$ and $\beta$ denote the slope and the intercept, respectively. Furthermore, $\odot$ denotes the dot product. In all experiments, we use default values of ${\alpha=1}$ and ${\beta=0}$.
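As an illustration, the trace-one activation matrix $\mc H(\vc Z)$ under the Mercer sigmoid kernel \eqref{eq:msk} can be formed as a normalized Gram matrix, since $\kappa(\vc z_i, \vc z_j)$ is the dot product of $\mr{tanh}(\alpha \vc z_i + \beta)$ and $\mr{tanh}(\alpha \vc z_j + \beta)$. A minimal NumPy sketch (illustrative, with the default $\alpha=1$, $\beta=0$):

```python
import numpy as np

def activation_matrix(Z, alpha=1.0, beta=0.0):
    # Mercer sigmoid kernel, Eq. (msk): K_ij = tanh(a z_i + b) . tanh(a z_j + b)
    # over the columns z_i of Z, followed by trace-one normalization H = K / tr(K).
    T = np.tanh(alpha * Z + beta)   # columns are tanh(alpha z_i + beta)
    K = T.T @ T                     # Gram matrix, hence positive semidefinite
    return K / np.trace(K)

rng = np.random.default_rng(1)
Z = rng.standard_normal((5, 5))
H = activation_matrix(Z)
print(float(np.trace(H)))           # trace one
print(bool(np.allclose(H, H.T)))    # symmetric
```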
\vspace{-2ex}
\paragraph{Model construction.}
Let ${\vc X\in \mbb{R}^{p_1\times p_2}}$ denote the input matrix and ${\vc Y\in \mbb{R}^{d_0\times d_{0}}}$ the corresponding output matrix, an SPD matrix of trace one. The mMLP with $j$ hidden layers is denoted by $\msf{mMLP\!:\!\vc X\! \rightarrow\! \widehat{\vc Y}}$ and constructed as
\begin{align}
\label{eq:mlpa}
\begin{split}
&\begin{cases}
\widehat{\vc Y} = \mc H(\vc Z_0),\\
\vc Z_0 = \vc W_0 \vc H_1 \vc W_0^\top + \vc B_0,
\end{cases}
\\
&\begin{cases}
\vc H_l = \mc H(\vc Z_l), \\
\vc Z_l = \vc W_l \vc H_{l+1} \vc W_l^\top+ \vc B_l,
\end{cases}
\\
&
\begin{cases}
\vc H_{j+1} = \mc H(\vc Z_{j+1}), \\
\vc Z_{j+1} = \vc W_{j+1} \mr{vec}\vc X (\vc W_{j+1}\vc 1_{p_1p_2})^\top + \vc B_{j+1},
\end{cases}
\end{split}
\end{align}
where the pair of ${\vc W_l\!\in\! \mbb R^{d_l\times d_{l+1}}, \forall {0\leq\! l\!\leq j},}$ and ${\vc W_{j+1} \!\in\! \mbb R^{d_{j+1}\times p_1p_2}}$ are the weight matrices, ${\vc B_l\!\in\! \mbb R^{d_l\times d_l}, \forall {0\!\leq \!l\leq j\!+\!1},}$ are the bias matrices, ${\vc Z_l\in \mbb R^{d_l\times d_l}, \forall {0\leq \!l\!\leq j\!+\!1}}$, are the latent input matrices, and ${\vc H_l\in \mbb R^{d_l\times d_l}, \forall {1\leq l\leq j+1},}$ are latent output SPD matrices of trace one.
In the construction of \eqref{eq:mlpa}, we have ensured that the $\vc H_l$ are SPD matrices of trace one \emph{across all layers}, as opposed to only at the output layer. The idea is to propagate the nonlinearities introduced via the SPD activation matrix functions through all layers. This design choice turned out to be more effective than the alternative, arguably simpler, design in which the SPD requirement is met only at the output layer. We will discuss this further in Section~\ref{sec:shvsde}, where we also present an illustrative numerical example.
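The feed-forward computation in \eqref{eq:mlpa} can be sketched compactly. The following NumPy code is an illustrative sketch (layer sizes and the small random initialization are hypothetical, and the trace-one Mercer-sigmoid activation is used throughout), verifying that the output is a trace-one SPD matrix:

```python
import numpy as np

def act(Z, alpha=1.0, beta=0.0):
    # Trace-one activation matrix: normalized Mercer-sigmoid Gram matrix.
    T = np.tanh(alpha * Z + beta)
    K = T.T @ T
    return K / np.trace(K)

def mmlp_forward(X, Ws, Bs):
    # Forward pass of the basic mMLP, Eq. (mlpa). Ws/Bs are ordered from the
    # input layer (index j+1) down to the output layer (index 0); this is an
    # illustrative sketch, not the authors' reference implementation.
    x = X.reshape(-1, 1)                        # vec(X)
    ones = np.ones((x.size, 1))
    Z = (Ws[0] @ x) @ (Ws[0] @ ones).T + Bs[0]  # Z_{j+1}
    H = act(Z)                                  # H_{j+1}
    for W, B in zip(Ws[1:], Bs[1:]):            # layers j, ..., 1, 0
        Z = W @ H @ W.T + B                     # Z_l
        H = act(Z)                              # H_l (SPD, trace one)
    return H                                    # Y_hat

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 3))                 # p1 x p2 input
Ws = [0.1 * rng.standard_normal((6, X.size)),   # input layer
      0.1 * rng.standard_normal((5, 6)),        # hidden layer
      0.1 * rng.standard_normal((4, 5))]        # output layer (d_0 = 4)
Bs = [np.zeros((6, 6)), np.zeros((5, 5)), np.zeros((4, 4))]
Y_hat = mmlp_forward(X, Ws, Bs)
print(Y_hat.shape, float(np.trace(Y_hat)))
```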
\vspace{-2ex}
\paragraph{Loss function.}
We consider the normalized von Neumann divergence \eqref{eq:qre} as the basis for the loss function. The von Neumann divergence is asymmetric. However, it can be symmetrized using the fact that, for trace-one matrices, the von Neumann divergence belongs to the class of generalized quadratic distances \citep{Nielsen2007}. Hence, we define the loss function as
\begin{multline}
\label{eq:loss_qre}
\!\!\!\ell_{\mr{QRE}}(\widehat{\vc Y}, \vc Y)
= \frac{1}{2}(\Delta_{\mr {QRE}} (\widehat{\vc Y} || \vc Y) + \Delta_{\mr {QRE}} (\vc Y ||\widehat{\vc Y})),
\end{multline}
where $\Delta_{\mr {QRE}}$ is given by \eqref{eq:qre}. The $\alpha$-derivative of $\ell_{\mr{QRE}}$ involves taking partial derivatives through the eigendecomposition. In Appendix~\ref{sec:der_loss}, we derive a method for analytically computing the derivative of $\ell_{\mr{QRE}}$.
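As a quick numerical sanity check, the symmetrized loss \eqref{eq:loss_qre} is non-negative, symmetric in its arguments, and vanishes when $\widehat{\vc Y} = \vc Y$. A self-contained NumPy sketch (illustrative only):

```python
import numpy as np

def matrix_log(A):
    # Matrix logarithm of an SPD matrix via eigendecomposition.
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(lam)) @ V.T

def vn_div(Xt, X):
    # Normalized von Neumann divergence, Eq. (qre).
    return np.trace(Xt @ matrix_log(Xt) - Xt @ matrix_log(X))

def loss_qre(Y_hat, Y):
    # Symmetrized loss, Eq. (loss_qre).
    return 0.5 * (vn_div(Y_hat, Y) + vn_div(Y, Y_hat))

rng = np.random.default_rng(7)
def rand_spd_tr1(d):
    A = rng.standard_normal((d, d))
    S = A @ A.T + d * np.eye(d)
    return S / np.trace(S)

Y1, Y2 = rand_spd_tr1(5), rand_spd_tr1(5)
print(loss_qre(Y1, Y2) >= 0)                            # non-negative
print(np.isclose(loss_qre(Y1, Y2), loss_qre(Y2, Y1)))   # symmetric
print(np.isclose(loss_qre(Y1, Y1), 0.0))                # zero at the optimum
```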
\vspace{-2ex}
\paragraph*{Optimization.} The remaining steps are feed-forward computation, backpropagation, and learning, as in the standard MLP. However, here, the backpropagation requires taking derivatives with respect to the matrix functions. These steps are described in Appendix~\ref{app:basic_mlp}.
\vspace{-1ex}
\subsection{The General Form of the mMLP}
\label{sec:mmlp2}
We now discuss a general version of the mMLP which produces both a vector and an SPD matrix as outputs. One possible application of this model is heteroscedastic multivariate regression (not pursued here). Another application is within the VAE formulation, as we discuss in more detail in Section~\ref{sec:vae}.
\vspace{-2ex}
\paragraph{Model construction.}
Let ${\vc X\in \mbb{R}^{p_1\times p_2}}$ denote the input matrix. The corresponding outputs in this case are: ${\vc Y\in \mbb{R}^{d_0\times d_{0}}}$ which is an SPD matrix of trace one, and ${\vc y\in \mbb{R}^{r_0}}$.
The mMLP with $j$ hidden layers is denoted by $\msf{mMLP\!:\!\vc X\! \rightarrow\! \{ \widehat{\vc y}, \widehat{\vc Y}\}}$ and constructed as:
\begin{align}
\label{eq:mmlpb}
\begin{split}
&\begin{cases}
\widehat{\vc y} = \mathfrak{h} (\vc z_0),\\
\vc z_0 = \vc C_0\widehat{\vc Y} \vc A_0 \vc h_1 + \vc b_0,\\
\widehat{\vc Y} = \mc H(\vc Z_0), \\
\vc Z_0 = \vc W_0 \vc H_1 \vc W_0^\top + \vc B_0,
\end{cases}
\\
&\begin{cases}
\vc h_l = \mathfrak{h} (\vc z_l),\\
\vc z_l = \vc C_l\vc H_{l}\vc A_l \vc h_{l+1} + \vc b_l, \\
\vc H_l = \mc H(\vc Z_l), \\
\vc Z_l = \vc W_l \vc H_{l+1} \vc W_l^\top+ \vc B_l,
\end{cases}
\\
&
\begin{cases}
\vc h_{j+1} = \mathfrak{h} (\vc z_{j+1}),\\
\vc z_{j+1} = \vc C_{j+1}\vc H_{j+1} \vc A_{j+1} \bc 1 + \vc b_{j+1}, \\
\vc H_{j+1} = \mc H(\vc Z_{j+1}), \\
\vc Z_{j+1} = \vc W_{j+1} \mr{vec}\vc X (\vc W_{j+1}\vc 1_{p_1p_2})^\top + \vc B_{j+1},
\end{cases}
\end{split}
\end{align}
where
${{\vc h_{l} \!\in\!{\mbb R^{r_{l}}}}, {\vc H_l\!\in\! \mbb R^{d_l\times d_l}}, \forall {1\!\leq \!l\leq \!j\!\!+\!\!1\!}}$, ${\vc z_l, \vc b_l \!\in \!{\mbb R^{r_l}}}$, ${\vc Z_l,\! \bc B_l \! \in\! \mbb R^{d_l\times d_l}}$, ${\vc C_l\!\in\!{\mbb R^{r_{l} \times d_l }}},\! \forall {0\!\leq l \!\leq\! j\!+\!1}$, ${\vc A_l\!\in\!{\mbb R^{d_l\times r_{l+1}}}\!}$, $ {\vc W_l \!\in \!\mbb R^{d_l\times d_{l+1}}}, \ \forall {0\leq \!l\!\leq j}$.
Just as in the standard MLP, $\mathfrak{h}$ is an activation function of choice, e.g., the hyperbolic tangent function.
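The feed-forward pass of \eqref{eq:mmlpb} interleaves a matrix path $(\vc Z_l, \vc H_l)$ and a vector path $(\vc z_l, \vc h_l)$ at every layer. The following NumPy sketch (layer sizes and initialization are hypothetical, with $\mathfrak{h} = \mr{tanh}$) illustrates that the model emits both a vector $\widehat{\vc y}$ and a trace-one SPD matrix $\widehat{\vc Y}$:

```python
import numpy as np

def act_mat(Z):
    # Trace-one Mercer-sigmoid activation matrix (default alpha=1, beta=0).
    T = np.tanh(Z)
    K = T.T @ T
    return K / np.trace(K)

def mmlp_general_forward(X, params):
    # Forward pass of the general mMLP, Eq. (mmlpb). `params` lists per-layer
    # tuples (W, B, C, A, b) ordered from the input layer (index j+1) down to
    # the output layer (index 0); a sketch, not the authors' implementation.
    x = X.reshape(-1, 1)                                    # vec(X)
    W, B, C, A, b = params[0]
    ones_x = np.ones((x.size, 1))
    Z = (W @ x) @ (W @ ones_x).T + B                        # Z_{j+1}
    H = act_mat(Z)
    h = np.tanh(C @ H @ A @ np.ones((A.shape[1], 1)) + b)   # h_{j+1}
    for W, B, C, A, b in params[1:]:                        # layers j, ..., 1, 0
        Z = W @ H @ W.T + B
        H = act_mat(Z)
        h = np.tanh(C @ H @ A @ h + b)
    return h.ravel(), H                                     # (y_hat, Y_hat)

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3))                 # p1 x p2 input
d, r = 4, 2                                     # matrix size d_l, vector size r_l
mk = lambda *s: 0.1 * rng.standard_normal(s)
params = [(mk(d, X.size), np.zeros((d, d)), mk(r, d), mk(d, r), np.zeros((r, 1)))]
params += [(mk(d, d), np.zeros((d, d)), mk(r, d), mk(d, r), np.zeros((r, 1)))
           for _ in range(2)]
y_hat, Y_hat = mmlp_general_forward(X, params)
print(y_hat.shape, Y_hat.shape, float(np.trace(Y_hat)))
```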
\vspace{-2ex}
\paragraph*{Loss function.}
The loss function ${\ell(\widehat{\vc Y}, \widehat{\vc y}, \vc Y, \vc y)} $ needs to be designed with the specific application in mind.
In the context of the VAEs, the default loss function is provided by the lower bound on the marginal likelihood. We will explore this choice in Section~\ref{sec:vae}.
\vspace{-2ex}
\paragraph*{Optimization.}
The remaining steps of feed-forward computation, backpropagation, and learning, are all described in Appendix~\ref{app:gen_mlp}.
\vspace*{-2ex}
\section{Exploiting the mMLP within the VAE}
\label{sec:vae}
\subsection{Background and Problem Formulation}
\label{sec:back_vae}
Let ${\{\vc x\h i\}_{i=1}^n}$ denote the set of i.i.d. observations on the real space ${\vc x\h i\!\in\!\mbb R^{d}}$. It is assumed that the data are generated by some random process involving a continuous latent variable ${\vc s\in \mbb R^{k}}$ admitting a joint distribution ${p_{\theta, \pi} (\vc x, \vc s)= p_{\theta}(\vc x \given \vc s) p_{\pi}(\vc s)}$ parametrized by $\pi$ and $\theta$. Here, ${p_{\theta}(\vc x \given \vc s)}$ is the generative model, also known as the decoder, and ${p_{\pi}(\vc s)}$ is the prior.
Let ${q_{\phi}(\vc s\given \vc x)}$ denote the recognition model, also known as the encoder, parametrized by $\phi$. The distribution ${q_{\phi}(\vc s\given \vc x)}$ approximates the intractable true posterior $p_{\theta, \pi}(\vc s\given \vc x)$. \citet{Kingma2014} introduced a method based on variational inference \citep[refer to][for a recent review] {Blei2017} for learning the recognition model parameters $\phi$ jointly with the generative model parameters $\theta$ and $\pi$.
This is done by maximizing a lower bound $\mc L (\theta, \pi, \phi)$ on the marginal log-likelihood of the data, also known as the evidence lower bound (ELBO),
\begin{multline}
\label{eq:ll}
\mc L = \mbb E_{q_{\phi}(\vc s\mid \vc x)}[\log~{p_{\theta}(\vc x \given \vc s)}] - \Delta_{\mr{KL}}(q_{\phi}(\vc s\given \vc x) || p_{\pi}(\vc s)),
\end{multline}
where $\Delta_{\mr{KL}}(q||p)$ is the Kullback-Leibler divergence (KLD) between $q$ and $p$.
\vspace{-2ex}
\paragraph*{The vanilla VAE.} The parametric form of the recognition model is assumed to be a diagonal Gaussian distribution, ${q_{\phi}(\vc s\mid \vc x) = \mc N(\bc \mu_q, \mr{diag}(\bc\sigma_q^2))
}$ with parameters $\bc \mu_q$ and $\bc\sigma_q$ which are outputs of an MLP as a function of $\vc x$.
Similarly for the observations on the continuous real space, the parametric form of the generative model is assumed to take on a diagonal Gaussian distribution, ${p_{\theta}(\vc x\given \vc s) = \mc N(\bc \mu_p, \mr{diag}(\bc\sigma_p^2))
}$ with parameters $\bc \mu_p$ and $\bc\sigma_p$ which are outputs of an MLP as a function of $\vc s$.
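The diagonal Gaussian posterior above is sampled via the standard reparameterization ${\vc s = \bc\mu_q + \bc\sigma_q \odot \bc\epsilon}$ with ${\bc\epsilon \sim \mc N(\vc 0, \vc I)}$. A minimal sketch with hypothetical encoder outputs, checking that the empirical moments recover the parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
mu_q = np.array([0.5, -1.0])      # hypothetical encoder mean
sigma_q = np.array([0.2, 0.8])    # hypothetical encoder std

# Reparameterized samples from N(mu_q, diag(sigma_q^2)); the randomness
# lives in eps, so gradients can flow through mu_q and sigma_q.
eps = rng.standard_normal((100_000, 2))
s = mu_q + sigma_q * eps

print(s.mean(axis=0), s.std(axis=0))
```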
\vspace{-2ex}
\paragraph*{The diagonality assumption.} The VAE formulation is general and, as pointed out by \citet{Kingma2014}, the reason they insisted on using a diagonal covariance matrix for the Gaussian is to simplify the resulting inference. However, this choice comes at the cost of losing the dependencies between latent variables. It is fair to say that most extensions of the vanilla VAE make the same choice, the key reason being the computational complexity associated with maintaining a dense matrix.
Irrespective of the computational complexity,
the question we are interested in is whether there is any significant gain in relaxing this assumption.
When it comes to the recognition model, additional flexibility cannot cause overfitting. Thus, it can only be advantageous to increase the expressiveness of the recognition network by expressing the posterior via a more flexible family of parametric distributions, e.g., dense-covariance Gaussian distributions. There is in fact an active line of research focused on recapturing some of the lost dependencies between the latent variables (see Section~\ref{sec:related} for references).
On the other hand, when it comes to the generative network, there is the possibility of overfitting: if the generative network is flexible enough, the VAE may simply choose to ignore the latent variables, forcing the posterior to approach the prior. However, how realistic is this scenario? Perhaps more importantly, should it be a reason to instead force the generative network to take on a simple parametric form, e.g., a diagonal Gaussian distribution?
To gain insight into these questions, we will consider several model alternatives with various degrees of flexibility in both the recognition model and the generative model. For this purpose, in the subsequent section we will replace the MLP with the mMLP, which allows us to construct models with dense covariance matrices. Two parametric families of distributions are considered: the multivariate Gaussian distribution and the mPE distribution. The latter drops the Gaussian assumption and provides a simple way to construct models with various degrees of flexibility.
\subsection{VAE via mMLP}
\label{sec:vae_mmlp}
We first introduce the parametric distributions we have chosen to work with (Section~\ref{sec:ParFamDist}) and then we proceed with the model constructions (Section~\ref{sec:vae_model_const}), and finally we derive the estimators (Section~\ref{sec:Estimators}).
\vspace*{-1ex}
\subsubsection{Parametric Families of Distributions}
\label{sec:ParFamDist}
\paragraph*{Trace-one Gaussian distribution.}
The output layer of the mMLP model in \eqref{eq:mmlpb} operates under the trace-one constraint. It is of course possible to drop the trace-one constraint from the output layer, but we would then also be forced to use a different kernel function in that layer. Instead, we find it easier to work with a reparameterized Gaussian distribution with a trace-one covariance matrix. This allows us to use the same choice of kernel function~\eqref{eq:msk} across all layers.
For a random variable ${\bc \vartheta\in \mbb R^{d}}$, the trace-one Gaussian distribution is denoted by ${\mc N_{\mr{tr1}}\!(\bc \vartheta; \bc \mu, \!\bc \Omega, \!\eta)}$, where ${\bc \mu\in \mbb R^{d}}$ is the mean, ${\eta\in \mbb R^+}$ is the scale, and ${\bc \Omega\in\mbb{R}^{d\times d}}$ is the trace-one covariance matrix, ${\mr {tr}(\bc \Omega)=1}$.
Refer to Appendix~\ref{App:t1gauss} for the exact functional form of the distribution and its stochastic representation.
\vspace*{-2ex}
\paragraph{Trace-one power exponential distribution.}
\label{sec: MGGD}
A generalized variant of the multivariate Gaussian distribution is provided by the mPE distribution \citep[e.g.,][]{Gomez1998}. As in the case of the Gaussian distribution, we find it easier to work with a reparameterized representation where the dispersion matrix is of trace one. Imposing this trace-one constraint, for a random variable ${\bc \vartheta\in\mbb R^{d}}$, the trace-one mPE distribution can be expressed as
\begin{align}
\vspace*{-2ex}
&\!\mc{E}_{\mr{tr1}}(\bc \vartheta; \bc\mu, \vc \Omega, \eta, \alpha, \beta)\!\!=\!\!\frac{c({\alpha, \beta})}{(\mathrm{det}(\eta\bc\Omega))^{\frac{1}{2}}}\! \exp\!\left\{\!\! -\!\frac{1}{2}\!\!\left( \!\frac{t(\bc \vartheta;\bc \mu, \!\vc \Omega)}{\alpha\eta} \!\right)^{\!\beta}\!\right\}, \notag \\
& c({\alpha, \beta}) = \frac{\beta \Gamma(\frac{d}{2})} {\pi^{\frac{d}{2}} \Gamma(\frac{d}{2\beta})2\p{\frac{d}{2\beta}} \alpha\p{\frac{d}{2}}},
\\
& t(\bc\vartheta;\bc \mu, \vc \Omega):=(\bc \vartheta-\bc \mu)^\top \vc \Omega^{-1} (\bc \vartheta-\bc \mu), \quad \mr{tr}(\bc \Omega)=1. \notag
\end{align}
The pair of ${\alpha\in\mbb R^{+}}$ and ${\beta\in\mbb R^{+}}$ are the scale and shape parameters of the density, $\bc \mu$ is the mean vector, and ${\vc \Omega}$ is a ${d\times d}$ symmetric real dispersion matrix where $\mr{tr}(\bc \Omega)=1$. The parameter $\eta$ has the same role as in the trace-one Gaussian distribution.
Figure~\ref{fig:pdf} shows the probability density function of the distribution for various values of $\alpha$ and $\beta$ (for the case of ${d=2}$). As a special case, the mPE includes the Gaussian distribution: For ${\alpha=1}$ and ${\beta=1}$, the trace-one mPE distribution corresponds to the trace-one multivariate Gaussian distribution.
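As a numerical check of this special case, the following sketch (with hypothetical parameter values) evaluates the trace-one mPE density and confirms that, for ${\alpha=\beta=1}$, it coincides with the Gaussian density $\mc N(\bc\mu, \eta\bc\Omega)$:

```python
import math
import numpy as np

def mpe_pdf(theta, mu, Omega, eta=1.0, alpha=1.0, beta=1.0):
    # Trace-one multivariate power exponential density (Sec. on the mPE).
    d = len(mu)
    c = (beta * math.gamma(d / 2)
         / (math.pi ** (d / 2) * math.gamma(d / (2 * beta))
            * 2 ** (d / (2 * beta)) * alpha ** (d / 2)))
    diff = theta - mu
    t = diff @ np.linalg.inv(Omega) @ diff
    return (c / math.sqrt(np.linalg.det(eta * Omega))
            * math.exp(-0.5 * (t / (alpha * eta)) ** beta))

# Hypothetical parameters with tr(Omega) = 1, d = 2.
mu = np.array([0.2, -0.1])
Omega = np.array([[0.6, 0.1], [0.1, 0.4]])
eta = 2.0
theta = np.array([0.5, 0.3])

# Gaussian N(mu, eta * Omega) evaluated at the same point.
Sigma = eta * Omega
gauss = (math.exp(-0.5 * (theta - mu) @ np.linalg.inv(Sigma) @ (theta - mu))
         / (2 * math.pi * math.sqrt(np.linalg.det(Sigma))))
print(np.isclose(mpe_pdf(theta, mu, Omega, eta), gauss))
```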
\emph{\textbf{Stochastic representation (reparametrization trick).}}
Let ${\bc\vartheta\in\mbb R^{d}}$, ${\bc\vartheta\sim {\mc {E}_{\mr{tr1}}}(\bc\mu, \vc\Omega, \eta, \alpha, \beta)}$, and ${\phi=\{\bc\mu, \vc\Omega, \eta, \alpha, \beta\}}$. The density admits a known stochastic representation
\begin{align}
\label{eq:rt_mpe}
\vspace*{-2ex}
\! \! \!\bc\vartheta \stackrel{\mathsf d}{=} \mc T_{\mc E_{\mr{tr1}}}(\bc\nu, \varsigma, \bc\vartheta ; \phi)= \bc \mu + \varsigma \vc \Phi \bc\nu, \ \alpha\eta\vc \Omega = \vc \Phi \vc \Phi^\top,
\end{align}
where $\stackrel{\mathsf d}{=}$ denotes the equality in distribution, $\bc\nu$ is a random vector uniformly distributed on the unit-sphere of $\mbb R^{d}$ such that ${\bc\nu^\top \bc\nu=1}$, and $\varsigma$ is a positive scalar random variable distributed according to a known gamma distribution,
\begin{align}
\label{eq:gamma_app}
\vspace{-2ex}
{\varsigma^{2\beta}\sim \mc G({d}/{2\beta}, 2)}.
\end{align} The shape of the gamma distribution unfortunately depends on $\beta$. As the gamma distribution admits no simple stochastic representation, this ultimately causes difficulties when we need to take the derivative of the random sample. There are techniques which can be used for this purpose \citep{Ruiz2016,Naesseth2017}. However, in our case, $\beta$ is a ``second-level'' parameter in the sense that it appears in \eqref{eq:gamma_app} and not directly in \eqref{eq:rt_mpe}. For our specific case, we found that approximating the gamma with a normal distribution, as the limiting distribution of the gamma, is sufficiently accurate. More specifically,
\begin{align}
\label{eq:gauss_app}
{\varsigma^{2\beta}\sim \mc N({d}/{\beta}, {2d}/{\beta})} \ \Rightarrow \ \varsigma^{2\beta} \stackrel{\mathsf d}{=} {d}/{\beta} + \epsilon\sqrt{{2d}/{\beta}}, \
\end{align}
where ${\epsilon\sim\mc N(0, 1)}$. The error in the approximation \eqref{eq:gauss_app} is expected to decrease as the shape parameter of the gamma distribution increases. A pessimistic upper bound is ${\left({d}/{2\beta}\right)^{-{1}/{2}}}$, which follows from the Berry-Ess\`{e}en theorem. In our numerical evaluations, we found that using \eqref{eq:gauss_app} instead of \eqref{eq:gamma_app} in \eqref{eq:rt_mpe} works quite well, surprisingly even in extreme cases where ${d=2}$ and $\beta$ is large (see the numerical comparisons in Appendix~\ref{app:gamma_vs_gauss}).
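The quality of the limiting-normal approximation can be checked by direct sampling. The sketch below draws ${\varsigma^{2\beta}}$ from the exact gamma in \eqref{eq:gamma_app} and from the normal in \eqref{eq:gauss_app} and compares their first two moments (which match by construction; only the shape of the distribution differs, with error shrinking as $d/2\beta$ grows):

```python
import numpy as np

rng = np.random.default_rng(5)
d, beta = 20, 1.0        # hypothetical dimension and shape parameter
n = 200_000

# Exact: varsigma^(2 beta) ~ Gamma(shape = d / (2 beta), scale = 2), Eq. (gamma_app).
g = rng.gamma(shape=d / (2 * beta), scale=2.0, size=n)

# Limiting-normal approximation, Eq. (gauss_app):
# varsigma^(2 beta) = d/beta + eps * sqrt(2 d / beta), eps ~ N(0, 1).
eps = rng.standard_normal(n)
a = d / beta + eps * np.sqrt(2 * d / beta)

print(g.mean(), a.mean())   # both approximately d / beta
print(g.std(), a.std())     # both approximately sqrt(2 d / beta)
```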
\emph{\textbf{Practical considerations.}}
The pair of $\alpha$ and $\beta$ control the tail and the shape of the distribution (see Figure~\ref{fig:pdf}). Very large and very small values of $\alpha$ and $\beta$ might be undesirable---for one, they pose numerical challenges. In practice, these parameters can be restricted within a range by choosing suitable output activation functions. In the experiments in Section~\ref{sec:exp_vae}, we choose to bound them conservatively as: ${{0.5}\leq \alpha, \beta \leq 1.5}$, using the sigmoid function.
\subsubsection{Constructing Two Models}
\label{sec:vae_model_const}
\paragraph*{Gaussian model.}
The parametric form of the recognition model is assumed to take on a trace-one Gaussian distribution, ${q_{\phi}(\vc s \given \vc x) = \mc N_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q, \eta_q)}$ with parameters as the output of an mMLP as a function of $\vc x$,
${\msf{mMLP}\!:\!\vc x\! \rightarrow\! \{ ({\bc \mu_q}, \log \eta_q), \bc\Omega_q\}}$.
Similarly, the generative model is assumed to take on a trace-one Gaussian distribution in the form of ${p_{\theta}(\vc x\given \vc s) = \mc N_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p, \eta_p)}$ with parameters as the output of an mMLP as a function of $\vc s$, ${\msf{mMLP}\!:\!\vc s\! \rightarrow\! \{ ({\bc \mu}_p, \log \eta_p), \bc\Omega_p\}}$.
\vspace*{-2ex}
\paragraph*{Power exponential model.}
We can improve the flexibility of the recognition and the generative models by generalizing the Gaussian model to the mPE model. Here, it is assumed that the generative model takes on a trace-one mPE distribution in the form of ${p_{\theta}(\vc x\given \vc s) = \mc E_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p, \eta_p, \alpha_p, \beta_p)}$ with parameters as the output of an mMLP as a function of $\vc s$, ${\msf{mMLP}\!:\!\vc s\! \rightarrow\! \{ ({\bc \mu_p}, \log \eta_p, \alpha_p, \beta_p), \bc\Omega_p\}}$. There are two alternatives when it comes to the recognition model:
\begin{itemize}
\vspace{-2ex}
\item[1)] To assume the same parametric choice for the recognition model as for the generative model and define ${q_{\phi}(\vc s \given \vc x) \!=\! \mc E_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q, \eta_q, \alpha_q, \beta_q)}$,~with parameters as the output of an mMLP as a function of $\vc x$, that is ${\msf{mMLP}\!:\!\vc x\! \rightarrow\! \{ ({\bc \mu_q}, \log \eta_q, \alpha_q, \beta_q), \bc\Omega_q\}}$. The disadvantage of this choice is that the KLD between the posterior and the prior becomes analytically intractable. Thus, the estimator needs to rely on Monte Carlo sampling for the evaluation of the KLD term, which could increase its variance. The advantage of this choice is that it gives additional flexibility to the recognition model, which is desirable.
\item[2)] \vspace{-1.ex} To restrict the recognition model to the dense Gaussian model and define: ${q_{\phi}(\vc s \given \vc x) = \mc N_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q, \eta_q)}$ with parameters as the output of an mMLP as a function of $\vc x$,
${\msf{mMLP}\!:\!\vc x\! \rightarrow\! \{ ({\bc \mu_q}, \log \eta_q), \bc\Omega_q\}}$. This choice leads to an analytically tractable KLD computation.
\vspace{-2ex}
\end{itemize}
We will consider both cases in our evaluation. Based on the choice of the generative and the recognition networks, various models can be constructed. Table~\ref{tb:abb} summarizes the list of model variants considered in this work. For all these models, the prior is assumed to follow a standard Gaussian distribution as ${p_{\pi}(\vc s) = \mc N(\bc 0, \bc I)}$.
\subsubsection{Estimators}
\label{sec:Estimators}
Given ${\{\vc x\h i\}_{i=1}^n}$, an estimator of the lower bound of the full dataset can be constructed based on mini-batches of size $m$ as ${\widetilde{\mc L}_m\approx\frac{n}{m} \sum_{i=1}^m \widetilde{\mc L}(\theta, \pi, \phi;\vc x\h i)}$, where ${\widetilde{\mc L}(\theta, \pi, \phi;\vc x\h i)}$ is the estimate of the lower bound \eqref{eq:ll} using $r$ Monte Carlo samples. Based on the choice of the recognition model, the estimator is different. In the case of the Gaussian recognition model, $\widetilde{\mc L}$ is computed from
\begin{multline}
\label{eq:est_n}
\vspace{-3ex}
\widetilde{\mc L}(\theta, \pi, \phi;\vc x\h i) = \frac{1}{r}\sum_{l=1}^r\log~{p_{\theta}(\vc x\h i \given \vc s\h {i,l})} \\ - \Delta_{\mr{KL}}(q_{\phi}(\vc s\mid \vc x\h i) || p_{\pi}(\vc s)),
\end{multline}
where ${\vc s\h {i,l} = \mc T_{\mc N_{\mr{tr1}}}(\bc \epsilon\h {i, l}, \vc x\h i ; \phi)}$ is given by \eqref{eq:rep}. In the case of the mPE recognition model,
\begin{multline}
\label{eq:est_mpe}
\vspace{-3ex}
\!\!\!\!\!\widetilde{\mc L}(\theta,\! \pi, \!\phi;\vc x\h i) \!= \!\frac{1}{r}\sum_{l=1}^r\!\log~{p_{\theta}(\vc x\h i \given \vc s\h {i,l})} \\ + \log p_{\pi}(\vc s\h {i,l}) - \log q_{\phi}(\vc s\h {i,l}\given \vc x\h i),\vspace{-3ex}
\end{multline}
where ${\vc s\h {i,l}\!\!= \!\!\mc T_{\mc E_{\mr{tr1}}}\!(\varsigma \h {i, l}, \!\bc \nu\h {i, l}, \!\vc x\h i; \phi )}$ is given by \eqref{eq:rt_mpe}.
Next, we need to take the $\alpha$-derivatives of the estimators \eqref{eq:est_n} and \eqref{eq:est_mpe} which are given in Appendix~\ref{app:estimator}.
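To make the structure of \eqref{eq:est_n} concrete, the sketch below computes a Monte Carlo estimate of the lower bound for a single datum, using a simplified diagonal Gaussian posterior with an analytic KLD against the standard-normal prior. The decoder likelihood and all parameter values are hypothetical stand-ins for the mMLP outputs:

```python
import numpy as np

rng = np.random.default_rng(6)
k, r = 2, 64                                  # latent dim, MC samples

# Hypothetical encoder outputs for one datum x (diagonal Gaussian posterior,
# a simplified stand-in for the trace-one parameterization in the paper).
mu_q, sigma_q = np.array([0.3, -0.2]), np.array([0.5, 0.9])

# Analytic KL(q || p) against the standard-normal prior p(s) = N(0, I).
kl = 0.5 * np.sum(sigma_q**2 + mu_q**2 - 1.0 - 2.0 * np.log(sigma_q))

# Monte Carlo term: average of log p(x | s^(l)) over r reparameterized samples.
# Here log p(x | s) is a hypothetical decoder log-likelihood for illustration.
x = np.array([1.0, 0.0])
def log_lik(x, s):                            # log N(x; s, I)
    return -0.5 * np.sum((x - s) ** 2) - 0.5 * k * np.log(2 * np.pi)

s = mu_q + sigma_q * rng.standard_normal((r, k))
elbo = np.mean([log_lik(x, s_l) for s_l in s]) - kl
print(float(elbo), float(kl))
```

In the mPE case \eqref{eq:est_mpe}, the analytic KLD term is replaced by Monte Carlo estimates of $\log p_{\pi}(\vc s)$ and $\log q_{\phi}(\vc s \given \vc x)$ at the sampled latents.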
\begin{table}[t!]
\vspace{-3ex}
\caption{Model specifications and abbreviations.}
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.5}
\tiny
\begin{tabular}{l*{3}{l}}
Model & ${p_{\theta}(\vc x\given \vc s)}$ & ${q_{\phi}(\vc s \given \vc x)}$ & Estimator \\
\hline
${\mc N_\mr{d}\mc N_\mr{d}}$ & $\mc N_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p^{\mr{diag}}, \eta_p)$ & $\mc N_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q^{\mr{diag}}, \eta_q)$ & \eqref{eq:est_n} \\
${\mc N_\mr{d}\mc N_\mr{f}}$ & $\mc N_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p^{\mr{diag}}, \eta_p)$ & $\mc N_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q^{\mr{full}}, \eta_q)$& \eqref{eq:est_n} \\
${\mc N_\mr{f}\mc N_\mr{d}}$ & $\mc N_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p^{\mr{full}}, \eta_p)$ & $\mc N_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q^{\mr{diag}}, \eta_q)$& \eqref{eq:est_n} \\
${\mc N_\mr{f}\mc N_\mr{f}}$ & $\mc N_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p^{\mr{full}}, \eta_p)$ & $\mc N_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q^{\mr{full}}, \eta_q)$& \eqref{eq:est_n} \\
${\mc E_\mr{f}\mc N_\mr{f}}$ & $\mc E_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p^{\mr{full}}, \eta_p, \alpha_p, \beta_p)$ & $\mc N_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q^{\mr{full}}, \eta_q)$& \eqref{eq:est_n}\\
${\mc E_\mr{f}\mc E_\mr{f}}$ & $\mc E_{\mr{tr1}}(\bc \mu_p, \bc \Omega_p^{\mr{full}}, \eta_p, \alpha_p, \beta_p)$ & $\mc E_{\mr{tr1}}(\bc \mu_q, \bc \Omega_q^{\mr{full}}, \eta_q, \alpha_q, \beta_q)$ & \eqref{eq:est_mpe}
\end{tabular}
\vspace{-3ex}
\label{tb:abb}
\end{table}
\section{Experiments}
Our experiments are divided into two parts. The first part empirically validates the mMLP model on a supervised task of learning SPD matrices using synthetic data. The second part evaluates the VAE models on an unsupervised task using real data.
\subsection{SPD Matrix Learning}
\label{sec:spd_mtx}
\subsubsection{The Choice of Loss Function}
\label{sec:example1}
Consider the problem of learning SPD matrices on synthetic data using the mMLP model of \eqref{eq:mlpa}. The objectives are to validate the model and to evaluate the effect of the choice of the loss function on the performance, in particular, how the choice of the SPD manifold metric under the Bregman matrix divergence affects the learning.
For this purpose, here, we consider two candidate loss functions in the family of Bregman matrix divergences. The first candidate is the loss function based on the normalized von Neumann divergence, ${\ell_{\mr {QRE}}(\widehat{\vc Y}, \vc Y)}$, given by \eqref{eq:loss_qre}. The second candidate is based on the symmetrized Stein divergence, ${\ell_{\mr{Stein}}(\widehat{\vc Y}, \vc Y)}$, given by
Eq.~\eqref{eq:loss_stein}. These two candidates are related to the Riemannian geometry.
For the sake of comparison, we also consider the quadratic loss ${\ell_{\mr{quad}}(\widehat{\vc Y}, \vc Y)=\mr{tr}((\widehat{\vc Y} - \vc Y)(\widehat{\vc Y} - \vc Y)^{\top})}$, which is related to the Euclidean geometry.
\vspace{-2ex}
\paragraph{Example~1.}
Consider the set ${\mc D_{\mr{train}}\!\!=\! \{\vc X_i,\! \vc Y_i\}_{i=1}^{n_{\mr{train}}}}$ of inputs ${\vc X_i\in \mbb R^{20\times 1}}$ and corresponding SPD matrix outputs ${\vc Y_i\in \mbb{R}^{d_0\times d_0}}$ which are in this case dense PD covariance matrices (refer to Appendix~\ref{app:example1} for details on the data generation). The goal is to estimate the covariance matrices ${\widehat{\vc Y}}$ associated to the input vectors from the unseen test set, ${{\vc X}_i \!\in \!\mc D_{\mr{test}}}$. The training size is varied between ${n_{\mr{train}}\!=\!\{20, 100\}}$ samples. The analysis is carried out for ${d_0\!=\!\{10, 20\}}$.
Two examples of the test outputs $\vc Y_i$ for ${d_0\!=\!20}$ are shown in Figure~\ref{fig:example1}-A.
The mMLP models \eqref{eq:mlpa} are trained using 3 layers (20 units per layer) under three choices of loss functions: $\ell_{\mr{QRE}}$, $\ell_{\mr{Stein}}$, and $\ell_{\mr{quad}}$. The \emph{only} difference here is the loss function.
Refer to Appendix~\ref{app:example1} for additional details on the mMLP initialization.
The performance is evaluated on the test set, ${n_{\mr{test}}=10^3}$, in terms of all three losses used as error measures, denoted $E_{\mr{QRE}}$, $E_{\mr{Stein}}$, and $E_{\mr{quad}}$.
Table~\ref{tb:ex1} summarizes the results of the evaluation. The first observation is that the quality of estimates differs considerably depending on the choice of the loss function.
The loss function ${\ell_{\mr {QRE}}}$ that takes into account the geometry of the SPD matrices outperforms the one based on the Euclidean geometry, ${\ell_{\mr {quad}}}$. Between the two choices of Bregman divergences (${\ell_{\mr {QRE}}}$ and ${\ell_{\mr {Stein}}}$), ${\ell_{\mr {QRE}}}$ is clearly the best performer: It performs consistently well in terms of the various error measures, and shows robustness even in cases where the training data are limited.
Figures~\ref{fig:example1}-B,C,D visualize the predicted covariance matrices for the case of ${d_0=20}$ and ${n_{\mr{train}}=20}$ for two test samples.
For the sake of comparison, we also solved the same problem using the standard MLP as a regressor with the quadratic loss. To meet the SPD requirement, we simply used the Cholesky decomposition. In general, the performance was quite poor in comparison to the mMLP model (refer to Appendix~\ref{app:ex1mlp} for additional details).
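The Cholesky-based baseline above can be sketched as follows: an unconstrained output vector of length $d_0(d_0+1)/2$ is mapped to an SPD matrix through a lower-triangular factor. The softplus transform on the diagonal and the small jitter term are illustrative assumptions, not necessarily the exact parameterization used:

```python
import numpy as np

def vector_to_spd(theta, d, eps=1e-6):
    # Map an unconstrained vector of length d*(d+1)//2 to an SPD matrix via a
    # Cholesky factor: softplus keeps the diagonal of L strictly positive, so
    # L @ L.T is positive definite; eps adds numerical jitter.
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    L[np.diag_indices(d)] = np.log1p(np.exp(np.diagonal(L)))  # softplus
    return L @ L.T + eps * np.eye(d)
```

This guarantees symmetry and positive definiteness of the output, but, unlike the mMLP, it imposes no SPD structure on the hidden layers.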
\begin{table}[t!]
\vspace{-3ex}
\tiny
\caption{SPD matrix learning using the mMLP (Example~1).}
\vspace{-3ex}
\begin{center}
\begin{tabular}{lccc c ccc}
\hline
& \multicolumn{3}{c}{$d_0=10, n_{\mr{train}}=20$} & & \multicolumn{3}{c}{$d_0=20, n_{\mr{train}}=20$} \\
\cline{2-4} \cline{6-8}
Loss &\hspace{-3ex}$E_{\mr{quad}}$ &\hspace{-3ex} $E_{\mr{QRE}}$ &\hspace{-3ex} $E_{\mr{Stein}}$ & \hspace{-3ex}~ &\hspace{-3ex} $E_{\mr{quad}}$ &\hspace{-3ex} $E_{\mr{QRE}}$ &\hspace{-3ex} $E_{\mr{Stein}}$\\
\hline
$\ell_{\mr{QRE}}$ &\hspace{-3ex} $\vc{9.7\!\times\! 10^{-5}}$ &\hspace{-3ex} $\vc{7\!\times\!10^{-4}}$ &\hspace{-3ex} $\vc{5.64}$ &\hspace{-3ex} &\hspace{-3ex}$\vc{1.1\!\times\! 10^{-4}}$ &\hspace{-3ex}$\vc{1.3\!\times \! 10^{-3}}$ &\hspace{-3ex} $\vc{28.86}$\\
$\ell_{\mr{Stein}}$ &\hspace{-3ex} $0.033$ &\hspace{-3ex} $0.28$ &\hspace{-3ex} $17.75$&\hspace{-3ex} &$1.07$ &\hspace{-3ex} $8.18$ &\hspace{-3ex} $96.34$\\
$\ell_{\mr{quad}}$ &\hspace{-3ex} $0.043$ &\hspace{-3ex} $0.72$ & \hspace{-3ex} $33.43$ &\hspace{-3ex} &\hspace{-3ex}$0.061$&\hspace{-3ex}$1.14$ &\hspace{-3ex}$83.66$\\
\hline
\end{tabular}
\vspace{-0ex}
\\
\begin{tabular}{lccc c ccc}
\hline
& \multicolumn{3}{c}{$d_0=10, n_{\mr{train}}=100$} & & \multicolumn{3}{c}{$d_0=20, n_{\mr{train}}=100$} \\
\cline{2-4} \cline{6-8}
Loss &\hspace{-3.5ex}$E_{\mr{quad}}$ &\hspace{-3.5ex} $E_{\mr{QRE}}$ &\hspace{-3.5ex} $E_{\mr{Stein}}$ & \hspace{-3.5ex}~ &\hspace{-3.5ex} $E_{\mr{quad}}$ &\hspace{-3.5ex} $E_{\mr{QRE}}$ &\hspace{-3ex} $E_{\mr{Stein}}$\\
\hline
$\ell_{\mr{QRE}}$ &\hspace{-3.5ex} $\vc{3.3\!\times\! 10^{-8}}$ &\hspace{-3.5ex} $\vc{5.8\!\times\!10^{-6}}$ &\hspace{-3.5ex} $\vc{0.31}$ &\hspace{-3.5ex} &\hspace{-3.5ex}$\vc{6.3\!\times\! 10^{-6}}$ &\hspace{-3.5ex}$\vc{1.1\!\times \! 10^{-4}}$ &\hspace{-3.5ex} $\vc{19.24}$\\
$\ell_{\mr{Stein}}$ &\hspace{-3.5ex} $8.9\!\times\!10^{-4}$ &\hspace{-3.5ex} $0.016$ &\hspace{-3.5ex} $11.18$&\hspace{-3.5ex} &$1.2$ &\hspace{-3.5ex} $8.12$ &\hspace{-3.5ex} $85.73$\\
$\ell_{\mr{quad}}$ &\hspace{-3.5ex} $0.037$ &\hspace{-3.5ex} $0.669$ & \hspace{-3.5ex} $38.7$ &\hspace{-3.5ex} &\hspace{-3.5ex}$0.060$&\hspace{-3.5ex}$1.15$ &\hspace{-3.5ex}$87.62$\\
\hline
\end{tabular}
\vspace{-4.5ex}
\end{center}
\label{tb:ex1}
\end{table}
\vspace*{-1ex}
\subsubsection{Shallow vs Deep SPD Matrix Learning}
\label{sec:shvsde}
The design of the mMLP model in \eqref{eq:mlpa} enables a mechanism for deep SPD matrix learning by satisfying the SPD constraint across all input, hidden, and output layers. A simpler approach would be to use the standard MLP architecture across the input and hidden layers and apply the activation matrix function only at the output layer to meet the SPD requirement:
\begin{align}
\label{eq:mlps}
\begin{split}
&
\widehat{\vc Y} = \mc H(\vc Z_0), \quad
\vc Z_0 = \vc W_0 \vc h_1 (\vc W_0 \bc 1)^\top + \vc B_0, \\
&
\vc h_l = \mathfrak h(\vc z_l), \quad
\vc z_l = \vc W_l \vc h_{l+1} + \vc b_l,
\\
&
\vc h_{j+1} = \mathfrak h(\vc z_{j+1}), \quad
\vc z_{j+1} = \vc w_{j+1} \mr{vec}(\vc X) + \vc b_{j+1}.
\end{split}
\end{align}
This amounts to a shallow design in the sense that it does not enable a mechanism for preserving the SPD constraint across all layers during the learning.
The design in \eqref{eq:mlpa} allows nonlinearities to pass through layers via activation function matrices which impose the SPD constraint, whereas in the shallow design, nonlinearities are propagated across layers via activation functions without imposing any constraints.
Our hypothesis is that the former has an advantage over the latter in that it captures complex dependencies that are important for SPD matrix learning. Below, we present a numerical example that highlights the importance of preserving the SPD constraint across all layers when learning SPD matrices.
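To illustrate how an activation matrix function of the form $\mc H(\vc Z) = \mc K(\vc Z)/\mr{tr}(\mc K(\vc Z))$ preserves positive definiteness, here is a minimal sketch. A Gaussian RBF kernel is used as a stand-in for the Mercer sigmoid kernel of \eqref{eq:msk}; the kernel choice is an assumption for illustration only:

```python
import numpy as np

def activation_matrix(Z, gamma=1.0):
    # Trace-normalized Gram matrix H(Z) = K(Z)/tr(K(Z)): the Gram matrix of a
    # PD kernel evaluated on the rows of Z is positive (semi-)definite, and the
    # trace normalization fixes the overall scale.
    sq = np.sum(Z**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T))
    return K / np.trace(K)
```

Because each layer's output is such a normalized Gram matrix, the SPD constraint is maintained no matter what the unconstrained pre-activations $\vc Z$ are.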
\vspace{-2ex}
\paragraph{Example~2.}
Consider a similar experiment as in Example~1 for the case of ${n_{\mr{train}}=20}$ and output dimensions ${d_0\in\{10, 20\}}$ (using a different random seed from Example~1). We directly compare the performance of \eqref{eq:mlpa} against \eqref{eq:mlps} for different numbers of hidden layers, ${j\in\{2, 4, 6\}}$ (20 units per layer). The shallow design \eqref{eq:mlps} uses the hyperbolic tangent as the activation function $\mathfrak h(\cdot)$. The same choice of the activation matrix function $\mc H(\cdot)$, given by \eqref{eq:msk}, is used for both models. We use $\ell_{\mr{QRE}}$ as the loss function (refer to Appendix~\ref{app:example2} for further details).
The performance is evaluated in terms of $E_{\mr{QRE}}$.
Table~\ref{tb:ex2} summarizes the results of the evaluation. Although the shallow design \eqref{eq:mlps} performs relatively well, it underperforms in comparison to \eqref{eq:mlpa}. Given the limited number of training samples, arbitrarily increasing the number of layers is not necessarily advantageous for either model. However, in this regard, the design in \eqref{eq:mlps} is noticeably more sensitive.
\vspace*{-1ex}
\subsection{Experiments using VAE Models}
\label{sec:exp_vae}
\begin{table}[t]
\vspace{-3ex}
\tiny
\caption{SPD matrix learning using the mMLP (Example~2).}
\vspace{-2ex}
\begin{center}
\begin{tabular}{lccc c ccc}
\hline
& \multicolumn{3}{c}{$d_0=10$} & & \multicolumn{3}{c}{$d_0=20$} \\
\cline{2-4} \cline{6-8}
Model &\hspace{-3.5ex}${j=2}$ &\hspace{-3.5ex} ${j=4}$ &\hspace{-3.5ex} ${j=6}$ & \hspace{-3.5ex}~ &\hspace{-3.5ex} ${j=2}$ &\hspace{-3.5ex} ${j=4}$ &\hspace{-3ex} ${j=6}$\\
\hline
\eqref{eq:mlpa} &\hspace{-3.5ex} $\vc {2\!\times\! 10^{-4}}$ &\hspace{-3.5ex} $\vc{1\!\times\!10^{-5}}$ &\hspace{-3.5ex} $\vc{4\!\times\! 10^{-5}}$ &\hspace{-3.5ex} &\hspace{-3.5ex}$\vc{5\!\times\! 10^{-3}}$ &\hspace{-3.5ex}$\vc{3\!\times \! 10^{-4}}$ &\hspace{-3.5ex} $\vc{6\times 10^{-4}}$\\
\eqref{eq:mlps} &\hspace{-3.5ex} $4\!\times\!10^{-3}$ &\hspace{-3.5ex} $5\!\times\!10^{-3}$ &\hspace{-3.5ex} $4\!\times\!10^{-2}$&\hspace{-3.5ex} &\hspace{-3.5ex}$8\!\times\!10^{-2}$ &\hspace{-3.5ex} $6\!\times\!10^{-2}$ &\hspace{-3.5ex} $7\!\times\!10^{-1}$\\
\hline
\end{tabular}
\vspace{-5ex}
\end{center}
\label{tb:ex2}
\end{table}
\begin{table*}[h]
\vspace*{-1ex}
\begin{center}
\vspace{-2ex}
\caption{Output activation matrix functions and activation functions for the models in Table~\ref{tb:abb}}
\tiny
\begin{tabular}{lcccccc}
\hline
\multicolumn{5}{c}{$\qquad \qquad $ Generative Network} \\
\cline{2-6}
Model & $\bc \mu_p$ & $\log \eta_p$ & $\alpha_p$ & $\beta_p$ & $\bc \Omega_p$ ~ \\
\hline
${\mc N_\mr{d}\mc N_\mr{d}}$ & $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & * ~ \\
${\mc N_\mr{d}\mc N_\mr{f}}$ & $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & * ~ \\
${\mc N_\mr{f}\mc N_\mr{d}}$ & $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & ** ~ \\
${\mc N_\mr{f}\mc N_\mr{f}}$ & $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & ** ~ \\
${\mc E_\mr{f}\mc N_\mr{f}}$ & $\mr{linear}()$ & $\mr{linear}()$ & 0.5+$\mr{sigmoid}()$ & 0.5+$\mr{sigmoid}()$ & ** ~ \\
${\mc E_\mr{f}\mc E_\mr{f}}$ & $\mr{linear}()$ & $\mr{linear}()$ & 0.5+$\mr{sigmoid}()$ & 0.5+$\mr{sigmoid}()$ & ** ~ \\
\hline
\end{tabular}
\hspace*{-1ex}
\begin{tabular}{lcccccc}
\hline
\multicolumn{5}{c}{$\qquad \qquad $ Recognition Network} \\
\cline{2-6}
~ & $\bc \mu_q$ & $\log \eta_q$ & $\alpha_q$ & $\beta_q$ & $\bc \Omega_q$ ~ \\
\hline
~ & $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & * ~ \\
& $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & ** ~ \\
& $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & * ~ \\
& $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & ** ~ \\
& $\mr{linear}()$ & $\mr{linear}()$ & -- & -- & ** ~ \\
& $\mr{linear}()$ & $\mr{linear}()$ & 0.5+$\mr{sigmoid}()$ & 0.5+$\mr{sigmoid}()$ & ** ~ \\
\hline
\end{tabular}\\
\vspace*{-3ex}
\begin{itemize}
\item[*] $\mc H(\vc Z) = \frac{\mc K(\vc Z)}{\mr{tr}(\mc K(\vc Z))}$, where $[\mc K(\vc Z)]_{i,i}=\kappa(\vc z_i, \vc z_i)$ and $[\mc K(\vc Z)]_{i,j}=0, \ \forall i\neq j$. The kernel function $\kappa(\cdot, \cdot)$ is given by \eqref{eq:msk}.\\ \vspace*{-2ex}
\item[**] $\mc H(\vc Z) = \frac{\mc K(\vc Z)}{\mr{tr}(\mc K(\vc Z))}$, where $[\mc K(\vc Z)]_{i,j}=\kappa(\vc z_i, \vc z_j), \ \forall i, j$. The kernel function $\kappa(\cdot, \cdot)$ is given by \eqref{eq:msk}.
\end{itemize}
\vspace*{-5ex}
\label{tb:model_details}
\end{center}
\end{table*}
\begin{table*}[t!]
\vspace{-3ex}
\begin{center}
\caption{Performance evaluation on the Frey Face dataset.}
\setlength{\tabcolsep}{6pt}
\renewcommand{\arraystretch}{1.2}
\tiny
\begin{tabular}{l ccc}
\hline
\multicolumn{2}{c}{${\mc N_\mr{d}\mc N_\mr{d}}$} \hspace{-22ex} \\
\cline{2-3}
$k\quad$ & LL & KLD \\
\hline
$5$ & $-66.3$ & $9\times 10^{-6}$ \\
$8$ & $-64.5$ & $2\times 10^{-5}$ \\
\end{tabular}
\hspace{-1.5ex}
\vline
\hspace{-1.5ex}
\begin{tabular}{ccc}
\hline
\multicolumn{2}{c}{${\mc N_\mr{d}\mc N_\mr{f}}$} \hspace{0ex} \\
\cline{1-2}
LL & KLD \\
\hline
$-66.4$ & $1\times 10^{-6}$ \\
$-64.2$ & $1\times 10^{-4}$ \\
\end{tabular}
\hspace{-1.5ex}
\vline
\hspace{-1.5ex}
\begin{tabular}{ccc}
\hline
\multicolumn{2}{c}{${\mc N_\mr{f}\mc N_\mr{d}}$} \hspace{0ex} \\
\cline{1-2}
LL & KLD \\
\hline
$-65.3$ & $3\times 10^{-3}$ \\
$-63.9$ & $5\times 10^{-3}$ \\
\end{tabular}
\hspace{-1.5ex}
\vline
\begin{tabular}{ccc}
\hline
\multicolumn{2}{c}{${\mc N_\mr{f}\mc N_\mr{f}}$} \hspace{0ex} \\
\cline{1-2}
LL & KLD \\
\hline
$-64.4$ & $0.65$ \\
$-63.1$ & $0.46$ \\
\end{tabular}
\hspace{-1.5ex}
\vline
\hspace{-1.5ex}
\begin{tabular}{ccc}
\hline
\multicolumn{2}{c}{${\mc E_\mr{f}\mc N_\mr{f}}$} \hspace{0ex} \\
\cline{1-2}
LL & KLD \\
\hline
$-63.5$ & $1.04$ \\
$-62.3$ & $0.71$ \\
\end{tabular}
\hspace{-1.5ex}
\vline
\hspace{-1.5ex}
\begin{tabular}{ccc}
\hline
\multicolumn{2}{c}{${\mc E_\mr{f}\mc E_\mr{f}}$} \hspace{0ex} \\
\cline{1-2}
LL & KLD \\
\hline
$-64.0$ & $56.2$ \\
$-62.8$ & $40.6$ \\
\end{tabular}
\vspace*{-5ex}
\label{tb:frey_vae}
\end{center}
\end{table*}
We train generative models of images from the Frey Face dataset\footnote{Available at: https://cs.nyu.edu/~roweis/data.html}. The dataset consists of ${20\times 28}$ images taken from sequential frames of a video, with pixel intensities that can be treated as continuous real values. The image size poses a computational challenge for the model variants with dense dispersion matrices in their generative networks, since we would need to learn SPD matrices of size ${560\times 560}$. As the primary goal of this experiment is to provide insights into the questions raised in Section~\ref{sec:back_vae}, for computational reasons (see Section~\ref{sec:diss}) we extract the first $10$ principal components of the input images and carry out the analysis on the resulting principal components. This allows us to evaluate all model variants, listed in Table~\ref{tb:abb}, within the same pipeline. The only difference is the parametric choice for the generative and the recognition networks. In terms of the degree of flexibility, the following holds:
\begin{align}
\label{eq:model_flex}
{\mc N_\mr{d}\mc N_\mr{d}} < {\mc N_\mr{d}\mc N_\mr{f}} \approx {\mc N_\mr{f}\mc N_\mr{d}} < {\mc N_\mr{f}\mc N_\mr{f}} < {\mc E_\mr{f}\mc N_\mr{f}} < {\mc E_\mr{f}\mc E_\mr{f}},
\end{align}
where ${{\mc N_\mr{d}\mc N_\mr{f}} \approx {\mc N_\mr{f}\mc N_\mr{d}} }$ since, at this point, it is not obvious which model is the more flexible one.
All models use mMLPs with 3 layers and $30$ units per layer. The same choices of activation matrix functions, via the Mercer sigmoid kernel function \eqref{eq:msk}, and activation functions, via the hyperbolic tangent function, are used across all layers for all models. However, the output activation matrix functions and the output activation functions are model dependent. These are summarized in Table~\ref{tb:model_details}.
Data are divided into a training set (1000 samples) and a test set (965 samples). The VAE models are trained on the training set and evaluated on the test set in terms of the log-likelihood (LL) and the KLD between the posterior and the prior.
The experiment is repeated for different latent variable dimensions, ${k=\{5, 8\}}$.
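The dimensionality-reduction step described above (projecting the flattened ${20\times 28}$ images onto their first $10$ principal components) can be sketched as a generic PCA via the SVD; this is an illustrative stand-in, not necessarily the exact preprocessing code used:

```python
import numpy as np

def pca_reduce(X, n_components=10):
    # X: (n_samples, n_pixels) matrix of flattened images. Center the data,
    # take the SVD, and project onto the top right-singular vectors.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The resulting 10-dimensional representations are what all model variants are trained and evaluated on, which keeps the comparison independent of the $560$-dimensional pixel space.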
Table~\ref{tb:frey_vae} summarizes the results of the evaluation.
The first observation is that the model variants with full covariance matrices at both their generative and recognition networks (${\mc N_\mr f \mc N_{\mr f}}, {\mc E_{\mr f} \mc N_{\mr f}}, {\mc E_{\mr f} \mc E_{\mr f}}$) outperform the ones that impose the diagonality constraint on either the generative network or the recognition network, in terms of the log-likelihood scores. Both ${\mc E_{\mr f} \mc N_{\mr f}}$ and ${\mc E_{\mr f} \mc E_{\mr f}}$ consistently perform better than ${\mc N_\mr f \mc N_{\mr f}}$ which may further suggest that increasing the flexibility of the generative network, in this case by relaxing the Gaussian assumption, might be advantageous. Between the two model variants that use the mPE distribution in their generative networks, namely ${\mc E_{\mr f} \mc N_{\mr f}}$ and ${\mc E_{\mr f} \mc E_{\mr f}}$, the model variant ${\mc E_{\mr f} \mc N_{\mr f}}$ achieved the highest log-likelihood score. This might seem counterintuitive since, as shown in \eqref{eq:model_flex}, ${\mc E_{\mr f} \mc E_{\mr f}}$ allows higher flexibility on the recognition model in comparison to ${\mc E_{\mr f} \mc N_{\mr f}}$---in other words, one might expect this additional flexibility to be translated directly into higher log-likelihood scores. However, this could be explained by the fact that ${\mc E_{\mr f} \mc E_{\mr f}}$ uses the estimator \eqref{eq:est_mpe} which has potentially higher variance than the estimator \eqref{eq:est_n} used by ${\mc E_{\mr f} \mc N_{\mr f}}$ (recall the discussion in Section~\ref{sec:vae_model_const}).
The next observation is that, in terms of the KLD between the posterior and the prior, the more flexible model variants score higher. If the KLD is close to zero, one might argue that the VAE model has partly failed to code any information into the latent variables. Thus, KLD scores greater than zero might be indeed desirable. The model variant ${\mc E_{\mr f} \mc E_{\mr f}}$ that allows the highest degree of flexibility on the recognition network has the highest KLD.
Finally, we generated random samples from the generative network of each model which are shown in Figures~\ref{fig:5} and \ref{fig:8}, available in the supplemental material.
\vspace*{-1ex}
\section{Limitations and Future Work}
\label{sec:diss}
\vspace*{-1ex}
The main limitation of the mMLP has to do with scalability to higher dimensions. The complexity associated with computing the $\alpha$-derivative of the von Neumann loss function~\eqref{eq:loss_qre} at the output layer is $\mc O(d_0^3)$. Taking the symmetric nature of the SPD matrices into account, the computational complexity at the hidden layer $l$ reduces to $\mc O(d_l^2)$.
The current implementation of the matrix backpropagation involves multiple applications of the Kronecker product. Although this facilitates the implementation, it requires access to the full Jacobian matrices (${d_l^2\times d_l^2}$). However, these matrices are in fact sparse block matrices, which means that it is possible to implement a memory-efficient computation of the tensor products without ever forming the full matrices. Future work is needed in this direction.
Within the mMLP, the choice of loss function can be consequential (recall Example~1). In this regard, we showed the effectiveness of the von Neumann loss function \eqref{eq:loss_qre}. However, in connection to the VAE, we used the ELBO as the objective function which includes a KLD (relative entropy) term. It would be interesting to alter the VAE objective function such that it takes advantage of the von Neumann entropy (the quantum relative entropy) in its formulation.
Another interesting direction for future work is to investigate other applications of the proposed mMLP model. One possibility is in the context of the heteroscedastic multivariate regression, and we believe that there are many other cases in which the mMLP can prove to be useful.
\vspace{-1ex}
\section{Discussion}
\vspace{-1ex}
We introduced a tool to learn SPD matrices, referred to as the matrix multilayer perceptron (mMLP). The mMLP takes the non-Euclidean geometry of the underlying SPD manifolds into account by making use of the von Neumann divergence as the choice of the SPD manifold metric. One key aspect of the mMLP is that it preserves the SPD constraint across all layers by exploiting PD kernel functions and a backpropagation algorithm that respects the inherent SPD nature of the matrices.
We presented an application of the mMLP in connection to the VAE.
Integrating the mMLP in the VAE allowed us to consider parametric families of distributions with dense covariance matrices. Two candidates were discussed: the Gaussian distribution with a dense covariance matrix and its generalization to the mPE distribution.
Based on these choices, we constructed six model alternatives with various degrees of flexibility.
Our results support the importance of increasing the flexibility of the VAE's recognition network, which is in line with the current understanding of VAEs. However, we also found that it is \textit{just as important} to increase the flexibility of the generative network. Importantly, we found no signs of overfitting in doing so: The two model variants that achieved the highest likelihood and the largest KLD scores were indeed the two most flexible ones.
\section*{Acknowledgements}
This research is financially supported by The Knut and Alice Wallenberg Foundation (J. Taghia, contract number: KAW2014.0392), by the project \emph{Learning flexible models for nonlinear dynamics} (T.~B.~Sch{\"o}n, contract number: 2017-03807), funded by the Swedish Research Council and by the Swedish Foundation for Strategic Research (SSF) via the project \emph{ASSEMBLE} (T.~B.~Sch{\"o}n, contract number: RIT15-0012), by the project
\emph{Learning of Large-Scale Probabilistic Dynamical Models} (F. Lindsten, contract
number: 2016-04278) funded by the Swedish Research Council
and by the Swedish Foundation for Strategic Research via the project
\emph{Probabilistic Modeling and Inference for Machine Learning}
(F. Lindsten, contract number: ICA16-0015). We thank Carl Andersson for useful discussions.
\section{\label{sec:level1}Introduction}
The heavy fermion material URu$_2$Si$_2$ has been a subject of long-standing
interest since the discovery of a phase transition at T$_0$~=~17.5~K, thirty
years ago~\cite{Palstra_85}. Initially thought to be an antiferromagnetic
transition, the small antiferromagnetic moment of 0.03~$\mu_B$ that arises in
this material is far too small to account for the large specific heat jump at
T$_0$~\cite{Broholm_87,Broholm_91}. Three decades of research have produced a
number of conclusions regarding the nature of this
phase~\cite{Mydosh_11,Mydosh_14}, but have failed to determine the order
parameter, leading to this phase being dubbed the `hidden order' phase.
To study the behavior of the hidden order phase, a large number
of perturbations have been applied to the system in the form of applied field,
hydrostatic pressure and chemical substitution. In all cases, the hidden
order phase is destroyed with relatively small perturbations: applied fields
of $>$35~T~\cite{Jo_08}, hydrostatic pressure $>$0.8~GPa~\cite{Butch_10} and
chemical substitution of typically greater than 5~\% on any of the atomic
sites~\cite{Amitsuka_94,Endstra_93,Park_93}. In nearly every case, the hidden
order state is suppressed continuously, and a ferro- or antiferromagnetic
state emerges.
Neutron scattering has played an important role in determining the properties
of the hidden order phase. For example, while careful study has shown that
the small antiferromagnetic moment is present even in ultra-clean
samples~\cite{Bourdarot_14}, it is likely caused by inhomogeneous
strain~\cite{Amitsuka_07}. Within the paramagnetic phase above T$_0$,
inelastic neutron scattering measurements observed gapless, weakly dispersing
features at the $\Sigma$ point on the Brillouin Zone (BZ) edge with
$\vec{Q}_{inc}$~=~(1$\pm \delta$~0~0) ($\delta$~=~0.407), while below T$_0$,
these excitations became gapped
($\Delta_{inc}$~=~4.5-4.8~meV~\cite{Bourdarot_14,Butch_15}) and more
intense~\cite{Broholm_91,Wiebe_07}. It was determined that the gapping of
these excitations results in an entropy change of sufficient size to account
for the specific heat jump at T$_0$~\cite{Wiebe_07}. Below T$_0$ additional,
commensurate excitations appear at the $Z$ point of the BZ,
$\vec{Q}_{com}~=~(1~0~0)$, with a gap of
$\Delta_{com}$~=~1.7-1.8~meV~\cite{Bourdarot_14,Butch_15}. This wavevector is
the ordering wavevector for the antiferromagnetic moment in both the hidden
order and more conventional magnetically-ordered phases. Since the transition
at T$_0$ is related to the gapping of the incommensurate excitations and the
emergence of the commensurate ones, these have both been cited as possible
`signatures' of the hidden order state in neutron scattering
experiments~\cite{Mydosh_14,Bourdarot_14}.
The first instance in which perturbations were found to enhance the hidden
order state was through the use of applied pressure. Application of pressure
increased T$_0$ slightly, reaching 18.5~K at a pressure of
0.5~GPa~\cite{Butch_10}. However, at higher pressures, this system still
transitions to an antiferromagnetic state; at T~=~0 this occurs at
approximately 0.8~GPa. Pressures between 0.8 and 1.4~GPa have both a hidden
order and a N\'{e}el transition, while above 1.4~GPa the transition is
directly from paramagnetic to antiferromagnetic at
T$_N$~=~19.5~K~\cite{Butch_10}. Due to this interplay of hidden order and
antiferromagnetism, studying the behavior under applied pressure has
become of particular interest in trying to determine the nature of the unknown
order parameter. Likewise, the chemical substituents that enhance T$_0$ have
also become an interesting avenue of research for determining the order
parameter of the hidden order state. Of the dozens of chemical dopings that
have been applied to URu$_2$Si$_2$ only two dopings, both on the Ru site, have
been shown to increase the value of T$_0$: Fe~\cite{Kanchanavatee_11} and
Os~\cite{Kanchanavatee_14}. In both of these cases, the transition
temperature continues to increase as a function of doping, over a large range,
before dropping abruptly. Interestingly, of all of the pure compounds of the
family U$T_2$Si$_2$, $T$~=~Fe and Os are the only two that are
non-magnetic~\cite{Endstra_93,Sandratskii_94}. Furthering the analogy between
hydrostatic pressure and Fe/Os-doping, the doped systems are also observed to
become more conventionally antiferromagnetic with increasing chemical
pressure; however, no signatures of multiple transitions have been observed in
transport measurements~\cite{Kanchanavatee_11,Kanchanavatee_14}. It was
speculated that these systems experience only a gradual crossover between the
hidden order and antiferromagnetic states, although this remains an open
question.
In this work, we use elastic and inelastic neutron scattering to measure the
magnetic structure and excitations of various doping concentrations within the
U(Ru$_{1-x}$Fe$_x$)$_2$Si$_2$ series, in an attempt to determine the nature of
the hidden order-to-antiferromagnetic crossover, as well as whether the doped
compounds contain inelastic signatures of the hidden order state and/or
signatures of a conventional antiferromagnetic state (spin waves). Recently,
neutron diffraction measurements have been carried out on a number of dopings
in this series~\cite{Das_15}, which found that the magnetic moment grows
continuously from $x$~=~0 to $x$~=~0.05 and that at dopings above 5\% the
magnetic moment remains relatively constant at 0.8~$\mu_B$. This leads the
authors to suggest that 5\% doping marks the hidden order-to-antiferromagnetic
phase transition, analogous to the transition at 0.8~GPa in the parent
compound under pressure~\cite{Das_15}. This suggests that in order to study
the nature of the excitations through the transition, it is important to
measure dopings both above and below $x$~=~0.05.
\section{\label{sec:level2}Experiment}
Single crystals of U(Ru$_{1-x}$Fe$_x$)$_2$Si$_2$ with $x$~=~0.01, 0.025, 0.05,
0.10 and 0.15 were grown at McMaster University. Stoichiometric amounts of
unpurified depleted Uranium, Ru (99.95\%), Fe (99.99\%) and Si (99.9999\%)
were arc-melted on a water-cooled copper hearth in a mono-arc furnace under an
inert Ar atmosphere. The largest impurity in the Uranium precursor is
elemental Fe at a level of $\approx$50 ppm, which is small ($<$0.01\%) when
compared to the nominal doping concentrations. The resulting polycrystalline
boule was then used to grow the single crystals using the Czochralski method.
This was performed in a tri-arc furnace using a water-cooled copper hearth
under a continuously-gettered Ar atmosphere at 900~$^{\circ}$C. After the
growths, the single-crystalline nature and sample alignments were confirmed
with Laue x-ray diffraction.
These samples were studied using elastic and inelastic neutron scattering at
the High-Flux Isotope Reactor (HFIR) and the Spallation Neutron Source (SNS)
of Oak Ridge National Laboratory (ORNL). The diffraction measurements were
performed on all of the samples using the HB-1A spectrometer at HFIR, while
inelastic measurements were done on the HB-1 (for $x$~=~0.01 and 0.05) and
HB-3 (for $x$~=0.025, 0.10 and 0.15) triple-axis instruments at HFIR, as well
as the SEQUOIA time-of-flight spectrometer at the SNS (for $x$=0.05 and
0.15). For comparison, data on the parent compound has been included where
appropriate; this data was measured on the Multi-Axis Crystal Spectrometer
(MACS) at the NIST Center for Neutron Research and was published
previously~\cite{Williams_16}. The neutron measurements described in this
work were performed using one single crystal of each doping: the $x$~=~0.01
sample had a mass of 5.65(2)~g and a mosaic of 4.5$^{\circ}$; the $x$~=~0.025
sample had a mass of 1.99(1)~g and a mosaic of 1.3$^{\circ}$; the $x$~=~0.05
sample had a mass of 2.98(1)~g and a mosaic of 10$^{\circ}$; the $x$~=~0.10
sample had a mass of 1.85(1)~g and a mosaic of 3.0$^{\circ}$; and the
$x$~=~0.15 sample had a mass of 1.74(1)~g and a mosaic of 4.0$^{\circ}$. All
of these samples were aligned in the [H~0~L] scattering plane for each of the
neutron scattering experiments.
The HB-1A measurements were performed in a closed-cycle refrigerator with a
base temperature of 4.0~K using a fixed incident energy of 14.7~meV. PG (002)
monochromator and analyzer crystals were used with PG filters, and the
collimation was 40'-40'-40'-80'. The HB-1 and HB-3 measurements were
performed in closed-cycle refrigerators with a base temperature of 4.0~K using
a fixed final energy of 14.7~meV. PG (002) monochromator and analyzer
crystals were used with PG filters, and the collimation was 48'-40'-40'-120'.
The SEQUOIA measurements were also performed in a closed-cycle refrigerator
with a base temperature of 5~K, using a fixed incident energy of 30~meV. The
crystals were rotated in the [H~0~L] plane in 1$^{\circ}$ steps over a
190$^{\circ}$ range.
\section{\label{sec:level3}Magnetic Structure Determination}
The neutron diffraction involved measurements of all of the Bragg peaks for
which $|\vec{Q}|$~$<$~4.7~\AA$^{-1}$, at 4~K and 30~K, as well as the
temperature dependence of the (1~0~0) and (0~0~1) magnetic Bragg peaks. While
the (0~0~1) peak was found to have a weak magnetic signal, the $c$-axis
magnetic contribution was found to be consistent with what would be expected
due to multiple scattering for E$_i$~=~14.7~meV, suggesting that the magnetic
moments point along the $\hat{c}$-direction. Multiple scattering was also
encountered in the parent material, where the same magnetic structure was
refined for the small, intrinsic moments~\cite{Ross_14}.
Fig.~\ref{hb1a} shows the (1~0~0) magnetic Bragg peak at 4~K in the various
Fe-doped samples (panel (a)) and their temperature dependence (panel (b)).
This is a disallowed nuclear peak so there is no scattering from the sample
above T$_0$, as seen in the temperature-dependence. We observe the onset of
magnetic scattering, and the transition appears to be second order in nature.
The temperature dependence of the lowest two dopings, 1\% and 2.5\% do not
show the same temperature dependence. Previous work using $\mu$SR has shown
that at these dopings, there is considerable phase separation between magnetic
and non-magnetic regions, likely as a result of the random dopant distribution
in these samples~\cite{Wilson_16}. This is a likely origin of the observed
temperature dependence of the magnetic Bragg peak. However, the peaks are
resolution-limited at all dopings, suggesting that the magnetic order is
sufficiently long-ranged. Using the 7 structural and 9 magnetic peaks
collected on each sample, the magnetic structure and moment can be determined.
In agreement with the parent material at ambient pressure and in the
pressure-induced antiferromagnetic state, this magnetic structure has magnetic
moments aligned along the $c$-axis, with the body-centered moment antiparallel
to the moments in the neighboring $ab$-planes~\cite{Ross_14}.
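As an illustration of how a transition temperature can be read off from Bragg-peak intensity curves like those in Fig.~\ref{hb1a}(b), a common approach (a sketch only, not necessarily the analysis performed here) fits a power-law order parameter, $I(T) = I_0\,(1-T/T_N)^{2\beta} + \mathrm{bg}$, since the magnetic intensity scales with the squared ordered moment:

```python
import numpy as np
from scipy.optimize import curve_fit

def order_parameter(T, I0, TN, beta, bg):
    # Magnetic Bragg intensity ~ squared ordered moment: I0*(1 - T/TN)^(2*beta)
    # below TN, flat background above (the clip zeroes the power law for T > TN).
    t = np.clip(1.0 - T / TN, 0.0, None)
    return I0 * t**(2.0 * beta) + bg

# Hypothetical intensity data in arbitrary units (not the measured curves).
T = np.linspace(4.0, 25.0, 40)
I_true = order_parameter(T, 100.0, 20.0, 0.35, 2.0)
I_obs = I_true + np.random.default_rng(0).normal(0.0, 1.0, T.size)
popt, _ = curve_fit(order_parameter, T, I_obs, p0=[90.0, 19.0, 0.3, 0.0])
# popt[1] recovers the Neel temperature T_N of the synthetic data.
```

Such fits give both $T_N$ and the critical exponent $\beta$, though near-transition rounding from inhomogeneity (as in the low-doping samples) can bias the extracted values.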
\begin{figure}[tbh]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{hb1a.pdf}
\caption{\label{hb1a} (color online) (a) Radial scans through the (1~0~0)
magnetic Bragg peaks at T~=~4~K in the various samples of
U(Ru$_{1-x}$Fe$_x$)$_2$Si$_2$. All of the peaks appear resolution-limited,
indicating long-range magnetic order. This is a disallowed nuclear peak, and
so there is no scattering from the sample above T$_0$. (b) The
temperature-dependence of the (1~0~0) magnetic Bragg peak intensity in the
various samples. This shows the second-order transition from the paramagnetic
state to the antiferromagnetic state at T$_N$. The lack of saturation of the
moment in the 1\% (yellow) and 2.5\% (black) samples may be due to phase
separation (see text). In both plots, the error bars lie within the symbols.}
\end{center}
\end{figure}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|l|l|l|l|}
\hline
Doping (\%) & ~T$_N$ (K)~ & Moment ($\mu_B$) & ~T$_0$ (K)~\cite{Wilson_16}~ & ~T$_N$ (K)~\cite{Wilson_16}~
\\
\hline
~1.0~\% & ~15.0(5) & ~~~0.048(5) & ~~~17.5 & ~~~16.0 \\
~2.5~\% & ~15.0(5) & ~~~0.51(1) & ~~~ & ~~~ \\
~5.0~\% & ~20.0(5) & ~~~0.59(1) & ~~~21.0 & ~~~21.0 \\
10.0~\% & ~21.0(5) & ~~~0.59(2) & ~~~21.5 & ~~~21.0 \\
15.0~\% & ~22.5(5) & ~~~0.66(2) & ~~~25.5 & ~~~25.0 \\
\hline
\end{tabular}
\end{center}
\caption[]{The transition temperatures and extracted moment sizes in the
various dopings of U(Ru$_{1-x}$Fe$_x$)$_2$Si$_2$ measured in this work. The
value of T$_N$ is the transition temperature seen in the measurement of the
(1~0~0) Bragg peak (Fig.~\ref{hb1a}(b)). Also listed are the values of T$_0$
and T$_N$ as determined from the same crystals that were used in the current
studies. These values were obtained from susceptibility and $\mu$SR
measurements as reported in Ref.~\citen{Wilson_16}.}
\label{moment_tbl}
\end{table}
The magnetic moment as a function of doping at T~=~4~K was extracted from the
integrated intensity of the (1~0~0) magnetic peak normalized by the integrated
intensity of the (1~0~1) structural peak, with the proper Lorentz factors
taken into account for both Bragg peaks. The (1~0~1) structural peak was
chosen for the normalization to minimize the difference in instrumental
Q-resolution at the two peak positions, since resolution effects were not
incorporated in these calculations. This approach is in contrast to the method
employed by Das {\em et al.}~\cite{Das_15}, who chose the higher-order Bragg
peak (6~0~0) for the normalization to avoid extinction effects. Neither
normalization method accounts for the effect of multiple-scattering, which has
been noted as significant in URu$_2$Si$_2$, but that is difficult to calculate
directly~\cite{Bourdarot_14,Ross_14}. This may produce differences in the size
of the magnetic moments determined.
The moments that were extracted from the neutron diffraction measurements are
shown in Table~\ref{moment_tbl}, along with the values of T$_N$ and T$_0$ from
$\mu$SR in a previous work~\cite{Wilson_16}. The values of T$_N$ from the
measurement of the (1~0~0) magnetic Bragg peak are lower than those found by
$\mu$SR, likely due to the local probe nature of the $\mu$SR measurements.
The sizes of the moments agree well with the values determined from the
internal field measurements based on the muon precession frequency, suggesting
they are sensitive to the same magnetic ordering. The size of the moment in
the Fe-doped samples is comparable to what is seen in the pressure-induced
antiferromagnetic state of the parent compound~\cite{Amitsuka_99}, except for
the lowest doping (1\%). In the lowest-doped sample, the size of the internal
field determined by $\mu$SR would suggest a moment size of
$\sim$0.45~$\mu_B$; however, this was associated with a reduced volume fraction of $\sim$0.6 at
T=~5~K~\cite{Wilson_16}. The decreased moment seen by the neutron
measurements is likely due to the phase separation between antiferromagnetism
and the hidden order phase observed by the $\mu$SR measurements. This would
indicate that the transition from hidden order to antiferromagnetism occurs at
a doping between 1\% and 2.5\%, lower than that suggested by Das {\em et
al.}~\cite{Das_15}. While we speculate that the difference in the moments may
result from a different normalization method, the difference in the doping
dependence may also be a result of differences in nominal and actual doping
concentrations.
\section{\label{sec:level4}Inelastic Measurements}
Fig.~\ref{seq} shows the inelastic time-of-flight measurements of the 5\%
sample at 30~K (panel (a)) and at 5~K (panel (b)), as well as the 15\% sample
at 5~K (panel (c)). Fig.~\ref{seq}(a) shows measurements in the paramagnetic
state. The inelastic spectrum seen here in the 5\%-doped sample is identical
to what is seen in the parent material above T$_0$: gapless excitations
emanating from $\vec{Q}_{inc}$~=~(0.6~0~0), and no excitation at
$\vec{Q}_{com}$~=~(1~0~0). Panel (d) illustrates what happens in the hidden
order state of the parent material (this data is adapted from
Ref.~\citen{Williams_16}). The excitation at $\vec{Q}_{inc}$ becomes gapped,
resulting in the entropy change seen by specific heat. Additionally, gapped
excitations also appear at $\vec{Q}_{com}$, albeit with a smaller gap and less
intensity. Fig.~\ref{seq}(b) shows the excitation spectrum below the
transition in the 5.0\% Fe-doped sample. Relative to the parent material, we
see that the incommensurate excitation is qualitatively unchanged. The gap
appears to be larger, but with little change in the spin wave velocity,
similar to what is observed under hydrostatic pressure~\cite{Williams_16}.
The commensurate excitation, however, shows a large change when compared to
the pure material in the hidden order state. It is significantly weaker
relative to the incommensurate excitation. Furthermore, the scattering that
is present at the commensurate point in the 5\% doping is only present at much
higher energies.
\begin{figure*}[tbh]
\begin{center}
\includegraphics[angle=0,width=\textwidth]{seq.pdf}
\caption{\label{seq} (color online) Time-of-flight neutron measurements of
various U(Ru$_{1-x}$Fe$_x$)$_2$Si$_2$ samples. (a) $x$~=~0.05, measured at
30~K in the paramagnetic phase. As is seen in the paramagnetic state of the
parent ($x$~=~0) compound, there are gapless excitations at the incommensurate
wavevector $\vec{Q}_{inc}$~=~(1.4~0~0). (b) Below T$_0$, these excitations
become gapped and their spectral weight increases. (c) At higher Fe dopings
($x~=~0.15$ is shown here), the gap can be seen to increase and broaden in
$\hbar \omega$ and $\vec{Q}$. (d) Data from the parent compound (taken
from Ref.~\citen{Williams_16}) below T$_0$ shows similar excitations at
$\vec{Q}_{inc}$, however the excitations in the parent material are more
well-defined. Additionally, the commensurate excitations at
$\vec{Q}_{com}$~=~(1~0~0), which are clearly present in the parent material,
are not as obvious in the Fe-doped samples. Cuts through $\vec{Q}_{com}$ show
these excitations to be substantially weakened, and appear at higher energy
than in the parent. (inset) The phase diagram of
U(Ru$_{1-x}$Fe$_x$)$_2$Si$_2$ showing the locations of the measurements for
panels (a) to (d).}
\end{center}
\end{figure*}
Moving to higher Fe-doping (15.0\% in Fig.~\ref{seq}(c)), the weakening of
these excitations seems to continue at both the commensurate and
incommensurate points. Additionally, we observe that the gap at
$\vec{Q}_{inc}$ is larger than at $x$~=~0.05 or in the parent. This type of
trend has been observed under pressure, where an increase in the transition
temperature seems to correlate with an increase in the incommensurate gap,
though the magnitude of the gap change in this system is much larger than what
has been observed under pressure for the same change in the transition
temperature~\cite{Williams_16,Bourdarot_10}. The excitations also appear
broadened, both in $|Q|$ and $\hbar \omega$. This would suggest that Fe
doping distorts the Fermi surface, weakening the nesting that gives rise to
the excitations~\cite{Butch_15}. Furthermore, no additional excitations
appear with Fe doping, including any conventional spin waves centered on the
(1~0~0) magnetic Bragg peak. To more carefully investigate the changes in the
excitations, inelastic triple axis neutron scattering measurements were
performed at both $\vec{Q}_{com}$ and $\vec{Q}_{inc}$, above and below T$_0$.
\begin{figure*}[tbh]
\begin{center}
\includegraphics[angle=0,width=\textwidth]{q_com.pdf}
\caption{\label{q_com} Commensurate excitation as a function of
doping at T~=~4~K (filled circles) and T~=~30~K (open circles). The solid
line is a fit to the low temperature data as described in the text. The data
for the parent compound is adapted from Ref.~\citen{Williams_16}.}
\end{center}
\end{figure*}
The inelastic triple-axis measurements at $\vec{Q}_{com}$~=~(1~0~0) are
shown in Fig.~\ref{q_com}, at 30~K, above the transition (open circles), and
at 4~K, below the transition (filled circles) for each of the measured
dopings. The data for the 1\% (panel (b)) and 5\% (panel (d)) samples were
taken on the HB-1 spectrometer, which had a lower background than the same
measurements on the HB-3 spectrometer for the other Fe-doped samples.
However, all samples clearly show the opening of the gap in the excitation
spectrum below the transition. The same excitation in the parent compound is
shown in Fig.~\ref{q_com}(a) for comparison (data adapted from
Ref.~\citen{Williams_16}). The solid line is a fit to the data, following the
analysis of Ref.~\citen{Broholm_91} and Ref.~\citen{Williams_16}, given by:
\begin{eqnarray}
\tilde{I}({\bf Q},\omega)=
I & \left[ \frac{\hbar\gamma/\pi}{(\hbar\omega-\epsilon({\bf Q}))^2
+(\hbar\gamma)^2} \right. \nonumber \\
& \left. -\frac{\hbar\gamma/\pi}{(\hbar\omega+\epsilon({\bf Q}))^2
+(\hbar\gamma)^2} \right]
\label{eq1}
\end{eqnarray}
\noindent where $I$ is an overall scale factor for the intensity and
$\hbar\gamma$ is the Half Width at Half Maximum (HWHM) for the Lorentzian
functions. With an energy gap $\Delta$, the dispersion relation reads:
\begin{equation}
\epsilon({\bf Q})=
\sqrt{\Delta^2+\hbar^2(\delta Q_{\perp}^2v_{\perp}^2+\delta
Q_{\parallel}^2v_{\parallel}^2)}
\label{eq2}
\end{equation}
\noindent where $\delta Q_{\perp,\parallel}=|({\bf Q}-{\bf
Q}_0)_{\perp,\parallel}|$ is the projection of the difference of the wave
vector transfer ${\bf Q}$ from the critical wave vector ${\bf Q}_0$
perpendicular and parallel, respectively, to the
$\mathbf{\hat{c}}$-direction. The velocities used were those of the parent
compounds, where $v_H$~=~$v_K$~=~$v_{\perp}$~=~23.7(5)~meV$\cdot$\AA~and
$v_L$~=~$v_{\parallel}$~=~32.5(7)~meV$\cdot$\AA~\cite{Williams_16}.
Eq.~\ref{eq1} was multiplied by a Bose factor and convoluted with the 4D
experimental resolution function using \textsc{Reslib}~\cite{Zheludev_07}.
This under-estimates the elastic peak at (1~0~0) in Fig.~\ref{q_com} due to
the elastic magnetic Bragg peak at this $\vec{Q}$, but more reliably
reproduces the quasi-elastic signal at the incommensurate (1.4~0~0) in
Fig.~\ref{q_inc}. Since these measurements were most concerned with
extracting the parameters of the inelastic excitation, no additional terms
were included to model the elastic peak. The values obtained from these fits
are given in Table~\ref{fit_tbl}, below.
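The fit model of Eqs.~\ref{eq1} and \ref{eq2} can be sketched numerically as follows. This is an illustration only: the Bose factor and the 4D resolution convolution performed with \textsc{Reslib} are omitted, $\hbar\gamma$ is written simply as \texttt{gamma} in meV, and the parameter values shown are merely representative of those in Table~\ref{fit_tbl}.

```python
import numpy as np

# Spin-wave velocities of the parent compound (meV*Angstrom), as quoted above.
V_PERP = 23.7   # v_H = v_K, in the ab-plane
V_PARA = 32.5   # v_L, along c

def epsilon(dq_perp, dq_para, gap):
    """Dispersion of Eq. (2): sqrt(Delta^2 + (v_perp dQ_perp)^2 + (v_para dQ_para)^2).
    dq_perp, dq_para are in inverse Angstroms; gap and the result are in meV."""
    return np.sqrt(gap**2 + (V_PERP * dq_perp)**2 + (V_PARA * dq_para)**2)

def intensity(hw, dq_perp, dq_para, scale, gap, gamma):
    """Eq. (1): difference of Lorentzians centered at +/- epsilon(Q).
    hw is the energy transfer (meV); gamma is the Lorentzian HWHM (meV)."""
    eps = epsilon(dq_perp, dq_para, gap)
    lorentzian = lambda e0: (gamma / np.pi) / ((hw - e0)**2 + gamma**2)
    return scale * (lorentzian(eps) - lorentzian(-eps))

# Example: an energy scan at the commensurate point (dQ = 0), using the Delta
# and gamma fitted for the 5% sample as a rough guide to the parameter scale.
hw = np.linspace(0.5, 15.0, 30)
print(intensity(hw, 0.0, 0.0, scale=10.3, gap=6.8, gamma=7.7))
```

Note that this antisymmetrized form vanishes at $\hbar\omega = 0$ and changes sign under $\hbar\omega \to -\hbar\omega$, as required for a response function.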
We see that in the 1\% doping, the commensurate excitation is nearly unchanged
from the parent material; the gap and width are unchanged within error.
However, we notice a dramatic change in the 2.5\% doped sample, where the
excitation is substantially broadened in energy and is peaked at much higher
energies. The excitation is essentially unchanged with further increases in
doping, with the gap energy and the width much larger than in the parent
compound. This trend is shown in Fig.~\ref{x_dep} where we can see the very
abrupt changes in the gap (panel (a)) and the FWHM (panel (c)), which are
relatively constant above 1\% doping. It is also notable that the
commensurate excitation is qualitatively unchanged across the phase
transition, despite the emergence of the magnetic Bragg peaks at (1~0~0). In
agreement with the time of flight measurements, no other excitations are
present in any of the samples.
\begin{figure*}[tbh]
\begin{center}
\includegraphics[angle=0,width=\textwidth]{q_inc.pdf}
\caption{\label{q_inc} Incommensurate excitation as a function
of doping at T~=~4~K (filled circles) and T~=~30~K (open circles). The solid
line is a fit to the low temperature data as described in the text. The data
for the parent compound is adapted from Ref.~\citen{Williams_16}.}
\end{center}
\end{figure*}
Fig.~\ref{q_inc} shows the excitation that is present below T$_0$ at
$\vec{Q}_{inc}$~=~(1.4~0~0) as a function of doping. This excitation was fit
in the same manner as the commensurate excitation, shown by the solid lines in
Fig.~\ref{q_inc}. The values obtained from this fitting are given in
Table~\ref{fit_tbl}. As with the commensurate excitation, the incommensurate
excitation shows very little change at 1\% doping relative to the parent
compound. However, above 1\% doping, rather than a discontinuous change, the
incommensurate excitation exhibits a continuous broadening and upward shift in
energy. As with the doping dependence of the magnetic moment, the
incommensurate excitation shows a discontinuous change from the hidden order
to antiferromagnetic phases, as well as a continued evolution over the entire
range of Fe doping. This is apparent from looking at Fig.~\ref{x_dep}(a) and
(c), where the gap and FWHM, respectively, show an increase over the full
range of dopings measured. The excitation appears to weaken continuously with
increasing Fe doping, but is present in all dopings measured with no
additional excitations present.
\begin{table}[htbp]
\resizebox{\columnwidth}{!}{
\begin{tabular}{|d|r|r|d|d|}
\hline
\multicolumn{1}{|c|}{Doping~} & \multicolumn{1}{c|}{~Wavevector~} &
\multicolumn{1}{c|}{I (arb. units)} & \multicolumn{1}{c|}{$\Delta$ (meV)} &
\multicolumn{1}{c|}{$\gamma$ (meV)}
\\
\hline
& & & & \\
0.0\%~\cite{Williams_16} & (1~0~0)~~ & \multicolumn{1}{c|}{--} & 2.3(4) & 0.9(1) \\
1.0\%~ & (1~0~0)~~ & 1.55(3.77)~~ & 2.3(1) & 1.2(2) \\
2.5\%~ & (1~0~0)~~ & 6.99(2.13)~~ & 6.7(1) & 8.0(6) \\
5.0\%~ & (1~0~0)~~ & 10.28(3.08)~~ & 6.8(1) & 7.7(6) \\
10.0\%~ & (1~0~0)~~ & 7.01(1.95)~~ & 6.6(1) & 6.9(5) \\
15.0\%~ & (1~0~0)~~ & 6.04(1.34)~~ & 7.5(1) & 6.7(6) \\
& & & & \\
0.0\%~\cite{Williams_16} & (1.4~0~0)~ & \multicolumn{1}{c|}{--} & 4.2(2) & 0.7(1) \\
1.0\%~ & (1.4~0~0)~ & 5.12(3.08)~~ & 4.18(4) & 0.48(9) \\
2.5\%~ & (1.4~0~0)~ & 5.26(2.26)~~ & 3.5(1) & 2.7(3) \\
5.0\%~ & (1.4~0~0)~ & 2.48(78)~~~ & 5.21(6) & 3.4(3) \\
10.0\%~ & (1.4~0~0)~ & 0.59(26)~~~ & 5.9(1) & 6.1(7) \\
15.0\%~ & (1.4~0~0)~ & 0.25(25)~~~ & 7.1(3) & 6.4(1.6) \\
\hline
\end{tabular}
}
\caption[]{Results of fitting the data in Figs.~\ref{q_com} and \ref{q_inc} to
Eq.~\ref{eq1}, as described in the text. Data for the parent compound
($x~=~0.0$) is taken from Ref.~\citen{Williams_16}.}
\label{fit_tbl}
\end{table}
\begin{figure*}[tbh]
\begin{center}
\includegraphics[angle=0,width=\textwidth]{x_dep.pdf}
\caption{\label{x_dep} (color online) (a) The gap at $\vec{Q}_{com}$ (filled
circles) and $\vec{Q}_{inc}$ (open circles) as a function of Fe doping
measured at T~=~4~K. The values of the gap at 1\% doping are nearly unchanged
from the parent compound. Above 1\% doping, the gap at the commensurate
wavevector increases dramatically, while the incommensurate gap increases
continuously with Fe doping. (b) The value of the gap at $\vec{Q}_{com}$ (red
circles), $\vec{Q}_{inc}$ (blue circles) and the gap measured by transport
(black triangles) as a function of pressure. Figure reproduced with permission
from Ref.~\citen{Bourdarot_10}, copyright American Physical Society. (c) The
full width at half maximum (FWHM) of the excitations as a function of doping
at T~=~4~K. Similarly to the behavior of the gaps, the width of the
excitations is nearly unchanged at 1\% doping. Above 1\%, the width of the
commensurate excitation is greatly increased, while the incommensurate
excitation gradually broadens with increasing Fe doping.}
\end{center}
\end{figure*}
Comparing these results to the gap measured by inelastic neutron scattering
under pressure (shown in Fig.~\ref{x_dep}(b)), we see that there is a
similarity when considering the incommensurate excitation (blue circles). The
application of pressure also increases the gap, though it is assumed that
under pressure the gap jumps discontinuously at P$_0$~=~0.5~GPa and is
constant above. However, there may not be enough data points to be
certain~\cite{Bourdarot_10,Williams_16,Villaume_08}.
\begin{figure}[tbh]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{op.pdf}
\caption{\label{op} (color online) Plots of the order parameters for hidden
order and antiferromagnetic phases for the 1\% Fe-doped sample. The elastic
magnetic Bragg peak (black squares) shows an onset around 15~K, coincident
with the transition in the $\mu$SR measurements, while the opening of the gap
at $\vec{Q}_{inc}$ (blue circles) onsets at 17.5~K, the same as for the parent
compound and where the transition is seen by susceptibility~\cite{Wilson_16}.}
\end{center}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{1pc_tdep.pdf}
\caption{\label{1pc} (color online) (a) The (1~0~0) magnetic Bragg peak
shown at 4~K, 14~K and 16~K, subtracting the same data at 30~K. Here we see
the disappearance of the magnetic Bragg peak at a temperature below the hidden
order transition at T$_0$~=~17.5~K. (b) Energy scan at (1.4~0~0) at the same
temperatures as in (a), showing the temperature evolution of the gap. The gap
is present at all temperatures, though the weak signal and small gap (within
the experimental resolution) at 16~K make this less clear than the measurement
shown in Fig.~\ref{op}.}
\end{center}
\end{figure}
Lastly, to more directly probe the relationship between the hidden order and
the antiferromagnetic order, we measured the order parameters for both types
of ordering simultaneously in the 1\% Fe doped sample, shown in
Fig.~\ref{op}. The black squares denote the peak intensity of the (1~0~0)
elastic magnetic Bragg peak, while the blue circles are the scattering
intensity at (1.4~0~0) and an energy transfer of 2~meV. This shows the
strength of the scattering at a point within the incommensurate gap, a
measurement that was shown to determine the opening of the gap at T$_0$ in the
parent compound~\cite{Wiebe_04}. In agreement with the quantitative
similarities of the excitations in the 1\% sample and the parent compound, as
well as the bulk thermodynamic data~\cite{Kanchanavatee_11,Wilson_16}, we see
the opening of the incommensurate gap at T$_0$~=~17.5~K. However, in
agreement with the $\mu$SR measurements~\cite{Wilson_16}, the onset of the
antiferromagnetic order occurs at a slightly lower temperature, T$_N$~=~15~K.
Despite the apparent variation in the transition temperatures, specific heat
measurements see no entropy change between the hidden order and
antiferromagnetic phases, emphasizing that the Fermi surface reconstruction
happens at the upper transition~\cite{Kanchanavatee_11}. Recent magnetization
and thermal expansion measurements also see evidence for the possibility of
two transitions, though they suggest that this is also present at higher
dopings ($\sim$5\%)~\cite{Ran_16}. This may be due to variations in doping
concentrations or a difference in sensitivity of the measurement techniques.
To verify the presence of two transitions, constant $\vec{Q}$ measurements
were performed at 4~K, 14~K, 16~K, 18~K and 30~K to measure both the (1~0~0)
magnetic Bragg peak and the opening of the gap at (1.4~0~0), shown in
Fig.~\ref{1pc}. It can be seen that the 14~K data shows a gap in the
(1.4~0~0) excitation spectrum, and there is appreciable scattering at the
(1~0~0) magnetic Bragg peak. At 16~K, the magnetic Bragg peak is absent
within error, while the gap in the (1.4~0~0) constant-$\vec{Q}$ measurement,
though reduced, is still present. Both measurements at 18~K are
identical within error to the 30~K data. This is consistent with the
separation in temperature of the hidden order and magnetic transitions.
\section{\label{sec:level5}Discussion \& Conclusions}
We have presented a comprehensive set of elastic and inelastic neutron
scattering measurements on a range of Fe-doped samples of
U(Ru$_{1-x}$Fe$_x$)$_2$Si$_2$ with 0.01~$\le~x~\le$~0.15. We have found that
the onset of the antiferromagnetic phase occurs at very low doping, with the
2.5\% doped sample showing an ordered moment of 0.51~$\mu_B$. However, the
1\% sample seems to show excitations that are nearly identical to the parent
compound, but onsetting at a higher temperature than the antiferromagnetic
moment. Combined with previous susceptibility and $\mu$SR
measurements on these samples~\cite{Wilson_16}, there is strong evidence of
different transition temperatures for the antiferromagnetic and hidden orders,
in agreement with other techniques on different Fe-doped
samples~\cite{Ran_16}. Resistivity and specific heat measurements do not see
any signatures of an abrupt phase transition between the hidden order and
antiferromagnetic state~\cite{Kanchanavatee_11,Ran_16}. This is consistent
with no observed change in $\vec{Q}$ for the incommensurate excitation, which
remains at the $\Sigma$-point of the hidden order phase, suggesting no change
in the BZ between the antiferromagnetic and hidden order phases.
Additionally, the $\mu$SR measurements see evidence for phase separation at
low dopings, likely a result of the statistically-random distribution of Fe
dopants~\cite{Wilson_16}. These dopings are also where the (1~0~0) magnetic
Bragg peak does not show a rapid onset, seen in Fig.~\ref{hb1a}(b), which
would be expected in samples with low doping concentrations.
All of the dopings that were measured show evidence for long-ranged magnetic
order, with the moment size increasing as a function of doping. This suggests
that even far from the parent compound, there is still an evolution away from
hidden order. This increase in the magnetic moment is accompanied by a
continuous increase in T$_N$, which peaks at $\sim$40\% doping, above the
range studied here, before being suppressed to a paramagnetic state above
$\sim$70\% doping. Synthesis of large single crystals becomes difficult above
15\% Fe doping~\cite{Kanchanavatee_11}, but $\mu$SR measurements up to 50\% Fe
doping show that the magnetic moment decreases above 15\% Fe
substitution~\cite{Wilson_16}.
The inelastic time-of-flight and triple-axis measurements show that both sets
of excitations observed in the parent compound are present at all dopings
measured. However, while the excitations are qualitatively unchanged, there
are dramatic changes in the quantitative properties above 1\% doping, most
noticeably in the reduction of the intensity of the commensurate excitation.
The increase in the gap and energy-broadening of the excitations at both the
commensurate and incommensurate point occurs noticeably in the 2.5\% doped
sample. Both the magnitude of the gap ($\Delta$) and the width ($\gamma$)
evolve continuously with doping, which is most apparent at the incommensurate
point. As observed with measurements of the parent compound under pressure,
the increase in the gap at $\vec{Q}_{inc}$ coincides with an increase in
T$_0$. This also follows the monotonic increase in the magnetic moment with
doping, suggesting that the critical doping is between 1\% and 2.5\%, but that
the magnetic moment and the excitations change continuously at higher dopings.
The pressure results have been somewhat unclear about the existence and
properties of the commensurate excitation, with work performed at 0.62~GPa
reporting its absence~\cite{Bourdarot_10,Villaume_08,Hassinger_10a}, while
other work reports a gap of $<$1~meV at 0.72~GPa~\cite{Aoki_09} and a gap of
1.8~meV at 1.02~GPa~\cite{Williams_16}. This has been interpreted as mode
softening at the critical pressure, P$_C$~=~0.6~GPa, which may explain the
changing value of the gaps as seen in the present case of Fe-doping. However,
the much larger gap and width in the Fe-doped samples clearly demonstrate that
the behavior of the commensurate excitation under Fe doping is not the same as
under applied pressure, which may suggest that the effect of Fe doping on the
$Z$ point Fermi surface pocket is not strictly analogous to the changes that
occur under hydrostatic pressure. Furthermore, the change in the excitations
point to evolutions in the Fermi surface with increasing Fe doping; this
serves to increase the gap, suggesting that the Fermi surface pockets at the
$\Sigma$, $Z$ and/or $\Gamma$ points distort slightly to change the optimal
energy for the nesting. This must occur without any Fermi surface
reconstruction, as there is no entropy change across the HO-AF
transition~\cite{Kanchanavatee_11}, nor do we see any change in the location
of the incommensurate excitation ($\Sigma$), suggesting that the Fermi surface
is not distorted in the antiferromagnetic state. Drawing the analogy to the
antiferromagnetic state induced by applied pressure, that transition similarly
shows no Fermi surface reconstruction by quantum oscillation
measurements~\cite{Hassinger_10b}. We can make further comparison to the
pressure-induced AF state by looking at the excitations seen by neutron
scattering. Under pressure, the gap at the incommensurate point similarly
shows a slight increase, while the intensity of the excitations also
increases~\cite{Williams_16}. The intensity of the excitations does not
increase with Fe doping, but this may be a result of impurities distorting the
Fermi surface, serving to weaken the nesting that is undistorted in the case
of applied pressure. This can also be seen by comparing the width of the
excitations, which are unchanged under pressure~\cite{Williams_16}, but
dramatically broadened in the case of Fe doping.
This study serves to illustrate that URu$_2$Si$_2$ is ideally placed on the
precipice of magnetic states: antiferromagnetism under pressure or Fe-doping,
and even ferromagnetism under Re-doping~\cite{Butch_11}. In all cases, we see
that the excitation spectrum changes quantitatively, but not qualitatively,
and is not destroyed by the emergence of the magnetically-ordered
state~\cite{Williams_16,Williams_12}. Thus this work demonstrates that in the
Fe-doped compounds studied here, as with other perturbations, the hidden order
state is not incompatible with magnetic order but rather that the electronic
correlations are intimately related to magnetism.
\section{\label{sec:level6}Acknowledgments}
The authors would like to thank C.R.~Wiebe for helpful discussions as well as
C.~Broholm for his input and collaboration on the parent
compound~\cite{Williams_16}. We also note that inelastic neutron scattering
work on these compounds was submitted recently during the preparation of this
manuscript~\cite{Butch_16} and we thank N.P.~Butch for sharing that work,
whose results are consistent with the present study.
We acknowledge instrument support from S.~Chi, M.~Matsuda, L.M.~Debeer-Schmitt
and D.~Pajerowski. This research at ORNL's High Flux Isotope Reactor
and Spallation Neutron Source was sponsored by the Scientific User Facilities
Division, Office of Basic Energy Sciences, US Department of Energy. Work at
McMaster University was supported by the Natural Sciences and Engineering
Research Council of Canada and the Canadian Foundation for Innovation. T.J.W.
acknowledges support from the Wigner Fellowship program at Oak Ridge National
Laboratory. M.N.W. acknowledges support from the Alexander Graham Bell Canada
Graduate Scholarship program.
\section{Approach}\label{approach}
Simulated annealing is an approach inspired by statistical mechanics that, by analogy, treats the variables of a multivariable numeric problem as the physical states of particles. It is very useful in practice for problems such as the traveling salesperson problem and other so-called NP-complete problems, for which no polynomial-time solutions are known.
Let $N$ be the given number to be factored, written in base 2 with $n$ digits. We seek numbers $A$ and $B$ (written in binary with respectively $a$ and $b$ digits) such that $A * B = N$, with $A$ the larger number (so $a \ge b$). Now, $a + b = n$ or $n+1$. For a given $N$ there are at most $n-2$ possibilities for $\{a,b\}$. (Remember that $a \ge b$, and the leftmost digit of both $A$ and $B$ must be a 1.) Factor $A$, with $a$ binary digits, can have $1, 2, 3, \ldots, a$ 1s in its binary representation. Factor $B$, with $b$ binary digits, can have $1, 2, 3, \ldots, b$ 1s in its binary representation.
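As a concrete check of this digit-count bookkeeping, the admissible $(a,b)$ splits can be enumerated directly. This is an illustrative sketch; the function name \texttt{digit\_splits} is ours, not part of the program described below.

```python
def digit_splits(n):
    """Possible (a, b) bit-length pairs for factors with A * B = N, where N has
    n bits, A has a bits, B has b bits, a >= b >= 2, and a + b is n or n + 1."""
    splits = []
    for a in range(2, n):
        for b in (n - a, n - a + 1):
            if 2 <= b <= a:
                splits.append((a, b))
    return splits

# Example: 77 = 0b1001101 has n = 7 bits; its factors 11 (0b1011, 4 bits)
# and 7 (0b111, 3 bits) give the split (a, b) = (4, 3), with a + b = n.
print(digit_splits(7))
```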
So, we formulate the factoring problem as follows: Given a binary number $N$ with $n$ digits, find binary numbers $A$ and $B$ with, respectively, $a$ and $b$ digits, of which $a^\prime$ and $b^\prime$ are 1, such that $A * B = N$.
The simulated annealing approach starts with a configuration of particles---here the binary digits of trial factors $A$ and $B$---and an energy $E$, here defined as:
\begin{equation}\label{energyDefinition}E = \sum_{i=1}^{n}\left\{ \begin{array}{ll}
f(i) & \mbox{if $\{AB\}_i = N_i$} \\
0 & \mbox{if $\{AB\}_i \ne N_i$}
\end{array}
\right. \end{equation}
where $i$ indexes the digits in the binary representations and $f(i)$ is a function that increases monotonically with $i$. We have tested linear ($f(i)=i$) and quadratic ($f(i)=i^2$). This kind of function favors having as many of the binary digits of the product $AB$ match the corresponding binary digits of $N$ as possible, with weighting increasing for the higher bits. If all digits match exactly, $A$ and $B$ are factors of $N$, and the value of $E$ is maximal. If not, try a different configuration---here new possible factors $A^\prime$ and $B^\prime$---and look at the energy $E^\prime$ in this configuration. If the energy is increased then $A^\prime$ and $B^\prime$ are accepted as the new factors. Using the Metropolis algorithm,\cite{:/content/aip/journal/jcp/21/6/10.1063/1.1699114} occasionally $A^\prime$ and $B^\prime$ are accepted as the new trial factors even if $E^\prime<E$. The chance of accepting the lower energy state is reduced as time goes on; by analogy, the problem cools according to an annealing schedule. From an initial ``temperature'' value $T_0$, given a cooling factor $F_c$, we iterate through $N_a$ annealing steps, each time reducing the temperature by the cooling factor. That is, moving from step $i$ to step $i+1$,
\begin{equation*}
T_{i + 1} = T_i * F_c
\end{equation*}
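The energy of Eq.~\ref{energyDefinition} can be sketched as follows. The linear weighting $f(i)=i$ and the indexing convention (bit $i=1$ at the least significant position, $i=n$ at the most significant, so that higher bits carry more weight) are assumptions of this illustration.

```python
def energy(A, B, N, n, f=lambda i: i):
    """Energy of the definition above: sum of f(i) over the bit positions i
    (i = 1 at the least significant bit, up to n) where A*B and N agree."""
    prod = A * B
    return sum(f(i) for i in range(1, n + 1)
               if ((prod >> (i - 1)) & 1) == ((N >> (i - 1)) & 1))

# A true factorization matches every bit, so E reaches its maximum:
# 11 * 7 = 77 = 0b1001101 (n = 7 bits), giving E = 1 + 2 + ... + 7 = 28.
print(energy(11, 7, 77, 7))
```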
At each annealing step, try some number of configurations---rearrangements of the bits of binary representations of $A$ and $B$. We allow a number of different mechanisms to generate configurations. For one of the factors, choose from one of the following moves:
\begin{description}
\item[swap] Choose a pair of distinct bits (a 1 and a 0) at random, and swap them
\item[slide] Choose a random contiguous sub-sequence of the bits. Remove the rightmost bit, slide the remaining bits one to the right, then put the removed bit in the hole left behind.
\item[reverse] Choose a random contiguous sub-sequence of the bits and reverse its order.
\item[random] Randomly permute a random selection of the bits (generally a sparse selection, not a contiguous sequence).
\end{description}
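The four moves might be sketched as follows on a factor stored as a list of bits, most significant first. This is an illustration, not the paper's code: the leading bit is pinned to 1, and every move merely permutes bit values, so the number of 1s is preserved as the formulation requires.

```python
import random

def swap_move(bits):
    """Swap a randomly chosen 1 with a randomly chosen 0 (leading bit excluded)."""
    ones  = [i for i in range(1, len(bits)) if bits[i] == 1]
    zeros = [i for i in range(1, len(bits)) if bits[i] == 0]
    if ones and zeros:
        i, j = random.choice(ones), random.choice(zeros)
        bits[i], bits[j] = bits[j], bits[i]
    return bits

def slide_move(bits):
    """Rotate a random contiguous sub-sequence one place to the right:
    the rightmost bit of the sub-sequence fills the hole at its left end."""
    lo = random.randrange(1, len(bits))
    hi = random.randrange(lo, len(bits))
    bits[lo:hi + 1] = bits[hi:hi + 1] + bits[lo:hi]
    return bits

def reverse_move(bits):
    """Reverse a random contiguous sub-sequence of the bits."""
    lo = random.randrange(1, len(bits))
    hi = random.randrange(lo, len(bits))
    bits[lo:hi + 1] = bits[lo:hi + 1][::-1]
    return bits

def random_move(bits):
    """Randomly permute a sparse random selection of bit positions."""
    k = random.randint(2, max(2, len(bits) // 3))
    pos = random.sample(range(1, len(bits)), min(k, len(bits) - 1))
    vals = [bits[p] for p in pos]
    random.shuffle(vals)
    for p, v in zip(pos, vals):
        bits[p] = v
    return bits
```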
For each configuration, test whether $A^\prime$ and $B^\prime$ multiply to $N$ (meaning they are factors of $N$). If so, repeat the whole annealing algorithm recursively on $A^\prime$ and $B^\prime$, finding successive sub-factors, until all endpoint numbers are either prime factors (success) or the algorithm fails to factor one of them.
If $A^\prime B^\prime \neq N$, test whether the new energy of the system as defined in Eq.~\ref{energyDefinition}, $E^\prime$, is greater than the energy of the previous configuration ($E$). If so, accept the new configuration as current and discard the previous one. If $E^\prime < E$, select a uniform random number between 0 and 1, $r$, and accept the new configuration if
\begin{equation*}
r < \exp\left(-\frac{E - E^\prime}{kT_i}\right)
\end{equation*}
where $T_i$ is the temperature at annealing step $i$ and $k$ is a constant analogous to the Boltzmann constant in physics.
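A sketch of the acceptance rule together with the geometric cooling schedule follows. This is illustrative only; note that because $E$ is being maximized, a downhill move has $E^\prime < E$ and is accepted with probability $\exp\!\left(-(E - E^\prime)/kT_i\right)$, which shrinks as the temperature falls.

```python
import math
import random

def accept(E_new, E_old, T, k=1.0):
    """Metropolis criterion for a maximization problem: uphill moves are always
    accepted; downhill moves with probability exp(-(E_old - E_new)/(k*T))."""
    if E_new >= E_old:
        return True
    return random.random() < math.exp(-(E_old - E_new) / (k * T))

def anneal_temperatures(T0, cooling_factor, n_steps):
    """Geometric annealing schedule: T_{i+1} = T_i * F_c."""
    T = T0
    for _ in range(n_steps):
        yield T
        T *= cooling_factor

# As T falls, a downhill move of fixed size 2 is accepted ever more rarely:
for T in anneal_temperatures(8.0, 0.5, 4):
    print(T, math.exp(-2.0 / T))
```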
This approach to factoring $N$ is nondeterministic, meaning that there is no guarantee of successfully finding all (or any) of the prime factors. However, the algorithm executes in polynomial time. If we know in advance that $N$ is {\em semiprime} (the product of two and only two prime factors), we stop the calculation once factors $A$ and $B$ are found such that $AB=N$.
The number of configurations tested per annealing step is a tunable parameter, as are the cooling factor, the number of annealing steps, and $k$. Note that our definition of $E$ implies that the optimum value is the largest $E$, not the smallest. The ``cooling'' actually leads to higher and higher values of $E$.
At this point, the reader may ask how we determined the number of bits in $A$ and $B$. If the number to be factored, $N$, has $n$ bits in its binary representation, its factors could have any number of bits from 2 to $n-1$. For a given $A$, having $a$ bits, we can compute the possible numbers of bits the other factor $B$ could have, given that $a + b = n$ or $n+1$. Our actual program explicitly factors out any prime factors up to (decimal) 1000; so if $N$ has $n$ bits, $A$ could have anywhere from 10 to $n-9$ bits. The corresponding $B$ could have either $b=n-a$ or $b=n-a+1$ bits.
Now the reader may ask how we set the number of 1s in each binary number $A$ and $B$. We specify that the leftmost bit in both is 1 (that is, no leading 0 bits are allowed). With $a_1$ denoting the number of 1s in $A$, and $b_1$ denoting the number of 1s in $B$, we know that their range of values is
\begin{equation*}
\begin{array}{ll}
1\leq a_1 \leq a \\
1\leq b_1 \leq b
\end{array}
\end{equation*}
Our algorithm must try all $ab$ combinations. This scales roughly as $n^2$.
The whole algorithm then has a deep loop nest:
\begin{itemize}
\item{loop over the possible values of $a$}
\item{loop over the corresponding possible values of $b$}
\item{loop over the possible values of $a_1$}
\item{loop over the possible values of $b_1$}
\item{loop over all annealing steps (temperatures)}
\item{loop over configurations}
\end{itemize}
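The size of the four outer loops (bit lengths and 1-counts) can be counted directly, under the text's assumption that small primes below 1000 have been divided out so each factor has between 10 and $n-9$ bits:

```python
def outer_loop_count(n):
    """Iterations of the four outer loops for an n-bit N:
    a ranges over 10..n-9, b over the two lengths n-a and n-a+1,
    and for each (a, b) all a*b combinations of 1-counts are tried."""
    total = 0
    for a in range(10, n - 8):          # a = 10 .. n-9
        for b in (n - a, n - a + 1):    # two candidate lengths for B
            total += a * b              # all (a_1, b_1) combinations
    return total
```

For the largest test case below ($n=67$) this gives 92,022 outer iterates, well inside the $n^4 \approx 2\times 10^7$ bound.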
The scaling of the algorithm is then upper-bounded by
\begin{equation*}
n \cdot (n-1) \cdot n \cdot (n-1) \cdot N_a \cdot N_c \approx n^4 \, N_a N_c
\end{equation*}
where $N_a$ is the number of annealing steps and $N_c$ is the number of configurations. Here we ignored the optimization of factoring out all small prime factors less than 1000, and approximated the maximum number of digits in $A$ and $B$ as $n$; their exact limits are lower, as detailed earlier. With $N_a$ and $N_c$ being constants, this is fourth order in $n$, the number of digits in the binary representation of the number we are factoring, $N$.
\section{Tests}
We tested our algorithm on numbers with up to 31 decimal digits (67 binary digits). Typically, the number of configurations per annealing step was set to
\begin{equation*}
M \cdot \max(a,b)
\end{equation*}
where $a$ and $b$ are the number of binary digits in the current configuration of factors $A$ and $B$, and $M$ is an input parameter.
We constrain the acceptable configurations to only those for which the number of 1s in the binary representation of the product $AB$ is equal to the (known) number of 1s in the binary representation of $N$. Optionally, we constrain allowed ``bad'' moves to restrict the decrease in energy to be within a specified fraction of the current energy; this avoids ``really bad'' moves. Optionally, we can retain the previous, higher energy configuration prior to an allowed ``bad'' move, run further configurations for a specified number of tries, then revert to the saved prior configuration if the energy has not evolved to exceed the prior energy. Table \ref{testCaseTable} and Table \ref{resultsTable} show results for a few test cases we successfully factored, and parameter values used.
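The 1-count constraint on candidate products is straightforward with Python's arbitrary-precision integers (a sketch; the actual implementation uses the {\tt bitarray} module):

```python
def popcount_ok(A, B, N):
    """Keep a configuration only if the product A*B has the same number
    of 1 bits as N in binary -- a necessary condition for A*B == N."""
    return bin(A * B).count("1") == bin(N).count("1")
```

This is a cheap necessary condition that prunes configurations before any energy evaluation.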
\begin{table}
\centering
{
\small
\begin{tabular}{clll}
\hline
{\bf Case} & {\bf $N$} & {\bf $n$} & {\bf Factors} \\
\hline
A & 99999989237606677 & 57 & $316227731*316227767$ \\
B & 999999866000004473 & 60 & $999999929*999999937$ \\
C & 9999999942014077477 & 64 & $3162277633*3162277669$ \\
D & 99999980360000964323 & 67 & $9999999017*9999999019$ \\
\hline
\end{tabular}
\caption{{\small A few cases we tested. $N$ is the (semiprime) number to be factored. $n$ is the number of digits in the base-two representation of $N$.}}
\label{testCaseTable}
}
\end{table}
\begin{table}
\centering
{
\small
\begin{tabular}{ccrrrc}
\hline
{\bf Case} & {\bf $N^*_a$} & $F_c$ &$k$ & {\bf $N_c$} & {\bf Time} \\
\hline
A & $41 \pm 62$ &0.997 & 63365 & 1,450,000 & $17 \pm 16$ \\
B & $13 \pm 8$ &0.997 & 73810 & 1,500,000 & $12 \pm 8$ \\
C & $144 \pm 161$ &0.997 & 89440 & 1,600,000 & $134 \pm 149$ \\
D & $89 \pm 40$ &0.997 &102510 & 1,700,000 & $96 \pm 45$ \\
\hline
\end{tabular}
\caption{{\small Results for a few cases we tested. $N^*_a$ is the number of annealing steps before finding the factors. $F_c$ is the temperature factor ($T$ is multiplied by this each annealing step; we always started with initial $T_0=F_c$). $N_c$ is the maximum number of configurations tried each annealing step. Time is the runtime of the algorithm in minutes. In all cases, $N^*_a$ and Time are averaged over 5 runs, with the standard deviation indicated as error amounts. We used cost function $f(i)=i^2$ (see Eq. \ref{energyDefinition}).}}
\label{resultsTable}
}
\end{table}
The current implementation is in Python, using the {\tt bitarray} module for manipulating the binary representation. Python automatically handles arbitrary-precision integers correctly. We ran all the test cases on a laptop.
Since we knew the factors in advance for these test cases, we reduced the search space over numbers of bits and the number of 1s in the factors to be consistent with the known factors. This is only an expedient, to be able to demonstrate the algorithm running on a single workstation. To run full tests for these and even larger numbers, with unknown factors, we are implementing a parallel version of the algorithm.
\section{Parallelism}
In the deep loop nest in Section \ref{approach}, there is ample parallelism to be exploited. All iterates of the outer four loops over numbers of digits and numbers of 1s in the factors can be computed independently, in parallel. This work scales as $n^4$, and is appropriate for distributed-memory parallelism with message passing. Memory requirements are very low, so replication of data is not a problem. The parallel algorithm needs periodic, but infrequent, synchronization to test for success and to proceed with factorizing the factors, down to the final level of prime factors only. The loop over annealing steps is sequential, but the innermost loop over configurations can be parallelized; this is a candidate for thread parallelization and shared memory. Our initial target architecture will be an IBM Blue Gene/Q system, on which we have Python for the compute nodes.
\section{Conclusions}
We have shown the feasibility of using a nondeterministic optimization approach such as simulated annealing to work in a controlled and constrained manner toward prime factorization of large integers. Using a binary (base 2) representation allows for simple configuration-changing rules, and a simple energy cost function whose value is optimized. The algorithm has polynomial scaling in $n$, the number of bits in the binary representation of the number to be factored. It is not guaranteed to find an exact solution, which is the only solution of interest in number factorization, but in practice we have found solutions for a wide range of numbers. It seems that the closer (further) the ratio of 1s to 0s in the binary representation of a factor of a semiprime number is to one, the harder (easier) it will be for a simulated annealing based method to factor the number. Thus, in picking semiprime numbers to use for encryption, a consideration should be the ratio of 1s to 0s in the two factors.
\bibliographystyle{unsrt}
\section{Introduction}
The fundamental parameter of the strong interactions,
the coupling $\alpha_\msbar(\mu)=\gbar^2_\msbar(\mu)/(4\pi)$, is an essential input parameter for theory predictions
of high energy processes, in particular
the physics at the LHC \cite{Dittmaier:2012vm,Heinemeyer:2013tqa,Accardi:2016ndt}.
Conventionally the running
$\alpha_\msbar(\mu)$ is quoted at
the electroweak scale, $\mu=m^{}_\mathrm{Z}$. There
the coupling is weak, $\alpha=\rmO(1/10)$, and perturbation theory
(PT) is usually accurate.
In particular $\alpha_\msbar(m^{}_\mathrm{Z})$ is essentially
equivalent
to the renormalization group invariant $\Lambda$-parameter
\begin{eqnarray}
\Lambda^{}_\msbar &=& \varphi^{}_\msbar(\gbar^{}_\msbar(\mu))\, \times\, \mu\,,
\end{eqnarray}
because the function
\begin{eqnarray}
\varphi_s(\gbar_s) &=& ( b_0 \gbar_s^2 )^{-b_1/(2b_0^2)}
\rme^{-1/(2b_0 \gbar_s^2)} \label{e:phig} \\
&& \times \exp\left\{-\int\limits_0^{\gbar_s} \rmd x\
\left[\frac{1}{\beta_s(x)}
+\frac{1}{b_0x^3} - \frac{b_1}{b_0^2x} \right] \right\} \,
\nonumber
\end{eqnarray}
is known precisely by replacing the renormalization group $\beta$-function by its perturbative expansion
$\betapert_s(g)=-g^3 \sum_{n=0}^{\lb-1} b_{n,s} g^{2n}$;
in the $\msbar$-scheme $\betapert_\msbar(g)$
is known up to $\lb=4$ loops \cite{MS:4loop1,Czakon:2004bu}.
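The function $\varphi_s$ can be evaluated numerically for any truncated $\beta$-function. The sketch below (function name, quadrature choice and step count are illustrative) uses the two-loop truncation $\beta(x)=-x^3(b_0+b_1x^2)$, for which the subtracted integrand is finite at $x=0$:

```python
import math

def phi_two_loop(g, b0, b1, steps=4000):
    """Evaluate phi_s(g) (so that Lambda = phi_s(g(mu)) * mu) with the
    two-loop beta-function beta(x) = -x^3*(b0 + b1*x^2), using composite
    Simpson quadrature for the subtracted, finite-at-zero integrand."""
    def integrand(x):
        if x == 0.0:
            return 0.0  # the subtracted integrand vanishes at x = 0
        return (1.0 / (-x**3 * (b0 + b1 * x**2))
                + 1.0 / (b0 * x**3) - b1 / (b0**2 * x))
    h = g / steps  # steps must be even for Simpson's rule
    s = integrand(0.0) + integrand(g)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * integrand(i * h)
    integral = s * h / 3.0
    prefactor = ((b0 * g**2) ** (-b1 / (2 * b0**2))
                 * math.exp(-1.0 / (2 * b0 * g**2)))
    return prefactor * math.exp(-integral)
```

At exactly two loops the integral is known in closed form, $\exp\{-\int\ldots\} = (1+(b_1/b_0)g^2)^{b_1/(2b_0^2)}$, which gives a direct cross-check of the quadrature.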
At lower energies, $\mu \ll m^{}_\mathrm{Z}$, the
perturbative uncertainty in approximating
$\beta_s\approx\betapert_s$
in \eq{e:phig} is generally not negligible. It is
$\Delta \Lambda_s/\Lambda_s = \Delta \varphi_s/\varphi_s =
c_\lb\alpha^{\lb-1} + \ldots$ with coefficients
$c_\lb$, which are, for
$\lb\leq 4$, of order one in the $\msbar$ scheme and expected to be
so in ``good'' schemes in general.
While the $\msbar$ scheme makes sense only perturbatively,
physical schemes defined beyond the perturbative expansion
are easily derived from short-distance QCD
observables $\obs_s(\mu) = c_1^s \gbar^2_\msbar(\mu) + \rmO(\gbar^4_\msbar(\mu))$ via
\begin{eqnarray}
\gbar^2_s(\mu) \equiv \obs_s(\mu) / c_1^s =
\gbar^2_\msbar(\mu) + \rmO(\gbar^4_\msbar(\mu))\,.
\label{e:gengdef}
\end{eqnarray}
It is clear that high energies $\mu$ (small $\alpha_s$)
and at least $\lb=3$ are needed
if one aims for a precision determination of
$\alpha_\msbar(m^{}_\mathrm{Z})$. Replacing high energy by just
a larger $\lb$ is dangerous because the perturbative expansion
is only asymptotic, not convergent, and non-perturbative ``corrections'' can be large. In particular, whether one has lost control is difficult to detect because our knowledge of non-perturbative physics
is very incomplete.
Thus it is a challenge to
reach an accuracy of a few percent in $\Lambda_\msbar$ equivalent to
sub-percent accuracy in $\alpha_\msbar(m^{}_\mathrm{Z})$.
Unfortunately, the determinations which quote the smallest
uncertainties typically do not come from observables at large
$\mu$ and uncertainties are dominated by systematics such
as unknown higher order perturbative and non-perturbative
terms.
Both the Particle Data Group \cite{Agashe:2014kda}
and the Flavour Lattice Averaging Group \cite{Aoki:2013ldr}
are therefore not just taking weighted averages of the individual
determinations to arrive at their world averages.
Here we consider a family of observables (schemes)
where lattice simulations allow one {\em simultaneously to reach
high precision and high energy before using} PT.
Then PT at $\mu=\rmO(m^{}_\mathrm{Z})$ can be employed with
confidence. In addition one can check its applicability
at lower scales. { The crucial feature enabling
the study of PT at high energy with continuum extrapolated
non-perturbative lattice results is that we use a finite volume renormalization scheme \cite{Luscher:1991wu,Luscher:1993gh}.
QCD is considered inside a small volume of linear extent $L$
with boundary conditions and observables which do not contain
any other scale.
Details will be presented below.
The renormalization scale then is
\begin{eqnarray}
\mu =1/L\,,
\end{eqnarray}
and the continuum limit of lattice simulation results is easily
taken for $L/a \gg 1$, with modestly sized lattices.
}
This is the strategy of the ALPHA collaboration
but so far it was mostly restricted to unphysical models
with an insufficient number of quark flavors~\cite{Luscher:1993gh,Capitani:1998mq,DellaMorte:2004bc}.
For the interesting case of $\nf=3$ QCD,
the strategy was applied by the CP-PACS collaboration
\cite{Aoki:2009tf}.
We now have very precise results for $\nf=3$ which allow us
to see important details previously hidden by uncertainties
(see also \cite{Tekin:2010mm}).
In this letter
we discuss the most essential step: the accuracy of PT for couplings $\alpha \lesssim 0.2$
and our resulting precision for $\Lambda$. We will see that it is crucial to non-perturbatively reach $\alpha \approx 0.1$
to have confidence
in PT at the 3-4 percent level in $\Lambda$.
{
On the other hand, at $\alpha \geq 0.15$ and using the three-loop
beta-function, one of our schemes
($\nu=-1/2$) shows a 10\% systematic error in $\Lambda$.
This is not a statistical fluctuation as we will
demonstrate by \eq{e:vbar3eff}.
Given that a priori our scheme has favorable properties
for PT and that
other tests of perturbation theory with similar precision
and similarly small $\alpha$ are presently not available,
our result gives reason for concern
in determinations of $\alpha_\msbar(m^{}_\mathrm{Z})$ from
$\mu$ values of a few GeV. This kind of lack of accuracy of PT
may be one of the sources of the spread of results
reviewed in \cite{Agashe:2014kda}.
}
\section{The SF scheme}
{ Our scheme is based on the so-called
Schr\"odinger functional (SF) \cite{Luscher:1992an}.
There are several introductory texts on
the topic with emphasis on different aspects, from
the general field theoretic concept \cite{Luscher:1998pe}
to detailed descriptions \cite{Sommer:1997xw,Sommer:2006sj}
and a review of concepts, history and recent results \cite{Sommer:2015kza}.
Here we just summarise those aspects which
are needed to judge our findings below.
Dirichlet boundary conditions are imposed in Euclidean time,
\begin{eqnarray}
A_k(x)|_{x_0=0} = C_k\,,
\quad
A_k(x)|_{x_0=L} = C_k'\,,
\end{eqnarray}
for $k=1,2,3$.
The gauge potentials $A_\mu$ are taken periodic in space with period $L$.\footnote{
Quark fields are included as described in \cite{Sint:1993un}.
Their periodicity angle, $\theta$, introduced in
\cite{Sint:1995ch}, is set to $\theta=\pi/5$.}
The six dimensionless matrices
\begin{align}
LC_k &= i \,{\rm diag}\big( \eta-\tfrac{\pi}{3}, \eta(\nu-\tfrac{1}{2}), -\eta(\nu+\tfrac{1}{2}) +\tfrac{\pi}{3} \big) \,,
\nonumber \\
LC^\prime_k &= i \,{\rm diag}\big( -(\eta+\pi), \eta(\nu+\tfrac{1}{2}) +\tfrac{\pi}{3},-\eta(\nu-\tfrac{1}{2})+\tfrac{2\pi}{3} \big)\,,
\nonumber
\end{align}
just depend on the two real parameters $\eta,\nu$, which multiply the Abelian generators of SU(3).
With these boundary conditions the field which minimizes the action is
unique up to gauge equivalence~\cite{Luscher:1993gh} and denoted by $A_\mu = B_\mu^{\rm class}$. In the temporal gauge, $B_0=0$, it is given by
$B_k^\mathrm{class}(x) = C_k + (C_k'-C_k)x_0/L $ and corresponds to a constant color electric field.
A family of couplings~\cite{Sint:2012ae}, $\bar{g}_\nu$, is then obtained by
taking $1/\obs_\nu$ in \eq{e:gengdef} to be the $\eta$-derivative of the effective action.}
This yields a simple path integral expectation value,
\begin{eqnarray}
\langle \partial_\eta S|_{\eta=0} \rangle = \frac{12\pi}{\gbar^2_\nu}\,,
\end{eqnarray}
which is well suited for a Monte Carlo evaluation in the
latticised theory. Small fluctuations around the background field
generate the non-trivial orders in PT.
It is worth pointing out that the whole one-parameter family of couplings
can be obtained from numerical simulations at $\nu=0$,
as the $\nu$-dependence is analytically known,
\begin{equation}
\label{eq:gbarnu}
\frac{1}{\gbar^2_\nu} = \frac{1}{\gbar^2} - \nu \vbar\,,
\end{equation}
in terms of the $\nu=0$ observables $\gbar^2 \equiv \gbar^2_{\nu=0}$ and $\vbar$.
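Since the $\nu$-dependence is exact, the whole family of couplings follows from the $\nu=0$ data by a one-line transformation; as a sketch:

```python
def gbar_nu_sq(gbar_sq, vbar, nu):
    """1/g_nu^2 = 1/g^2 - nu*vbar: the whole one-parameter family of
    couplings in terms of the nu = 0 observables g^2 and vbar."""
    return 1.0 / (1.0 / gbar_sq - nu * vbar)
```

Equivalently, $\gbarnu^2 = \gbar^2/(1-\nu\,\vbar\,\gbar^2)$, which is the form used later for $\gbarnu^2(1/\Lswi)$.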
Advantageous properties of these couplings are:
\begin{enumerate}
\setlength\itemsep{-0.2em}
\item $\Delta_\mathrm{stat} \gbarnu^2 = s(a/L) \gbarnu^4 +\rmO(\gbarnu^6)$,
for $\Delta_\mathrm{stat}$ the statistical error at a given
length of the Monte Carlo sample.
This property makes it possible to maintain high precision at high energy.
\item The typical $\sim \mu^{-1},\mu^{-2}$ renormalon contributions~\cite{Beneke:1998ui} are absent since the finite volume
provides an infrared momentum cutoff.
Instead, the leading known non-perturbative contribution is due to a
secondary stationary point of the action \cite{SFcoupinpreparation} at
$g_0^2\,[S(B^\mathrm{sec}) - S(B^\mathrm{class})] = 32.9$.
It generates corrections to PT of order
\begin{eqnarray}
\exp(-{2.62}/\alpha)
\sim (\Lambda/\mu)^{3.8}\,,
\end{eqnarray}
which evaluates to $\rmO(10^{-6})$ for $\alpha=0.2$.
At such values of $\alpha$, fields with non-zero
topology are {\em even further} suppressed given that
$g_0^2\,[S_{|Q|\geq 1} - S(B^\mathrm{class})] \geq 6\pi^2$~\cite{Luscher:1992an,Luscher:1993gh}.
\cbla
\item The $\beta$-function is known including its three-loop term,
\begin{eqnarray}
(4\pi)^3 \times b_{2,\nu}= -0.06(3) - \nu \times 1.26\,,
\;(\nf=3)
\end{eqnarray}
and for reasonable values of
$\nu$
the three-loop term is of order one as it is in the $\msbar$
scheme.
\item As we will see discretisation effects are very small;
at tree-level of perturbation theory they are $\rmO((a/L)^4)$. They are known to two-loop order in PT~\cite{Bode:1999sm} and we
can subtract those pieces~\cite{deDivitiis:1994yz}.
\end{enumerate}
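The numerical size of the suppression quoted in item 2 above is easy to verify:

```python
import math

def suppression(alpha):
    """exp(-2.62/alpha) ~ (Lambda/mu)^3.8: the leading known
    non-perturbative correction from the secondary stationary point."""
    return math.exp(-2.62 / alpha)
```

At $\alpha=0.2$ this is about $2\times10^{-6}$, i.e. $\rmO(10^{-6})$ as stated.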
The downside of the SF scheme is that the coefficient
$s(a/L)$ diverges like $(L/a)^{1/2+z}$ for large $L/a$ and
is not that small in general. Here $z$ is the dynamical critical
exponent of the algorithm while the $1/2$ in the exponent is
due to the variance of the observable \cite{deDivitiis:1994yz}.
High statistics is needed
and our computation is limited to $L/a\leq 24$.
A second issue is
the acceleration of the approach to the continuum limit through
Symanzik improvement.
With our Dirichlet boundary conditions
the Symanzik effective Lagrangian contains
terms located at the time-boundaries. These
are responsible for $\Oa$ effects. We cancel them by corresponding
improvement terms with coefficients $\ct$ and $\cttilde$ known only
in PT, see below.
\section{Step scaling functions and $\Lambda$-parameter}
The non-perturbative energy dependence of finite
volume couplings is constructed from the step scaling function \cite{Luscher:1991wu}
\begin{eqnarray}
\sigma_\nu(u) =
\left. \gbarnu^2(1/(2L))
\right|_{\gbarnu^2(1/L)=u,m=0} \,,
\end{eqnarray}
where $m=0$ ensures the quark mass independence of the scheme~\cite{Weinberg:1951ss}.
The \SSF\ corresponds to a discrete version of the $\beta$-function and is computed as the continuum limit $a/L\to 0$ of its
lattice approximants $\Sigma_\nu(u,a/L)$. The conditions $\gbarnu^2(1/L)=u$ and $m=0$ then refer
to a $(L/a)^4$ lattice and fix the bare coupling and bare quark mass of the theory. $\gbarnu^2(1/(2L))$ is to be
evaluated for the same bare parameters on a $(2L/a)^4$ lattice.
We will use the $\nu=0$ scheme as a reference, dropping the index
$\nu$. The scale $\Lswi$ is defined by a value $u_0$ and the
condition
\begin{equation}
\label{eq:gL0}
\gbar^2(1/\Lswi)=u_0\,.
\end{equation}
The solution of the implicit equation
\begin{eqnarray}
u_k = \sigma(u_{k+1}),
\end{eqnarray}
for $u_{k+1}$, $k=0,1,\ldots$ gives a series of couplings
$u_k=\gbar^2(2^k/L_0)$.
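Solving the implicit relation for $u_{k+1}$ only requires a monotonic root find; a minimal sketch with a toy quadratic $\sigma$ (the real $\sigma$ is the fitted non-perturbative function):

```python
def solve_step(sigma, u_k, lo=1e-6, hi=10.0, tol=1e-12):
    """Solve sigma(u_{k+1}) = u_k for u_{k+1} by bisection;
    sigma must be increasing on [lo, hi] and bracket u_k."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sigma(mid) < u_k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Iterating from $u_0$ yields the sequence $u_1, u_2, \ldots$ of couplings at successively doubled energies ($u_{k+1} < u_k$).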
With a few steps one reaches $\mu =1/L_n= 2^n/L_0 = \rmO(m^{}_\mathrm{Z})$
and the perturbative $\varphi$ at this high scale
will give a good approximation to $L_0\Lambda$
\begin{equation}
L_0\Lambda = 2^n \varphi(\sqrt{u_n})\,. \label{e:LLpert}
\end{equation}
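As a toy illustration (not the paper's analysis) of why the result becomes independent of $n$: with exact one-loop running, $1/u_n = 1/u_0 + 2b_0 n\ln 2$ and $\varphi(g)=\exp(-1/(2b_0g^2))$, the combination $2^n\varphi(\sqrt{u_n})$ is algebraically $n$-independent:

```python
import math

def L0_Lambda_one_loop(u0, n, b0):
    """Toy one-loop evaluation of L0*Lambda = 2^n * phi(sqrt(u_n)):
    exact running 1/u_n = 1/u_0 + 2*b0*n*ln(2) combined with the
    one-loop phi(g) = exp(-1/(2*b0*g^2)). Independent of n."""
    inv_un = 1.0 / u0 + 2.0 * b0 * n * math.log(2.0)
    return 2.0 ** n * math.exp(-inv_un / (2.0 * b0))
```

In the actual analysis the running is non-perturbative and $\varphi$ is truncated, so the residual $n$-dependence of \eq{e:LLpert} is itself a diagnostic of PT.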
Note that thanks to \eq{eq:gbarnu} and the exact relation between
$\Lambda$-parameters~\cite{Luscher:1993gh,Sint:1995ch}
\begin{eqnarray}
r_\nu = \Lambda/\Lambda_\nu = \rme^{-\nu \times 1.25516}\,,
\end{eqnarray}
the same combination $L_0\Lambda$ can be obtained in any scheme with
$\nu\ne0$. Whether different values of $\nu$, number of steps ($n$)
and perturbative orders ($\lb$) give consistent results is an
excellent way to test the reliability of perturbation theory.
\section{Simulations}
We used the standard Wilson plaquette action and three massless $\Oa$-improved \cite{Luscher:1996sc,Yamada:2004ja}
quarks simulated by a variant of the
\texttt{openQCD} code \cite{Luscher:2012av,openqcd:2013}.
At eight couplings $\gbar^2(1/L)$ in the range
$1.11 - 2.02$, we simulated pairs of lattices
$L/a,2L/a$ with $L/a=4,6,8$ and at three couplings we
also included $L/a=12$.
Between 80k and 300k
independent Monte Carlo measurements were taken on each lattice.
As we have already noted, non-trivial topology is very suppressed
in these small volumes \cite{Luscher:1982wf}. Therefore topology freezing \cite{DelDebbio:2002xa,Schaefer:2010hu} is irrelevant here.
A critical issue for any lattice computation is the
removal of discretization effects. In preparation
of our continuum extrapolations we apply
both Symanzik improvement of the action and
perturbative improvement of the \SSF~\cite{deDivitiis:1994yz}.
In comparison to earlier work, we here propagate the
estimated uncertainty of those $\Oa$ improvement coefficients which
are only known perturbatively into the errors of the step scaling
functions. They can then be assumed to be free of linear $a$-effects within
their errors. Details are found in the supplementary material attached
at the end of the paper.
\section{Continuum extrapolations and results}
\begin{figure}[t]
\includegraphics*[width=\linewidth]{cont_lim.pdf}
\caption{\label{f:cont}Continuum limit of the step scaling function
$\Sigma^{(i)}(u,a/L)/u$ with $i=2$ loop improvement. As an
illustration a constant ($n_\rho =0$, dashed, fit G) and a linear
($n_\rho =2$, fit C) continuum extrapolation is shown.
Continuum extrapolated results include the errors due to $\ct$ and $\cttilde$ (cf.~text).
The $\star$-symbols
show the perturbative $\sigma$ computed from the three-loop
$\betapert$.}
\end{figure}
As the residual linear $a$ effects are treated as an uncertainty, we can proceed
with continuum extrapolations linear in $a^2$. First we look at the data in \fig{f:cont}.
They are statistically compatible with having no
$a$-effects for $L/a \geq 6$; for $\nf=0$ this was found
with similar precision for $L/a\geq 5$ (see Fig.~3
of \cite{Bode:2001jv}).
Both the continuum limit of the \SSF\
and its cutoff effects are smooth functions of the coupling.
This motivates global fits of the form
\begin{eqnarray}
\Sigma_\nu^{(i)}(u,a/L) &=& \sigma_\nu(u) + \rho_\nu^{(i)}(u) \, (a/L)^2 \, ,
\end{eqnarray}
where $i$ is the order of PT to which cutoff effects are
removed in \eq{e:sigimpr}.
We performed various such fits in order to assess the systematic
errors which result from the assumptions made in the fit
functions.
We parameterize the cutoff effects by
a polynomial in $u$, with the correct asymptotics for small
$u$,
\begin{eqnarray}
\rho_\nu^{(i)}(u) &=& \sum_{k=1}^{n_\rho^{(i)}} \rho^{(i)}_{\nu,k} u^{i+1+k}\, ,
\label{e:rho}
\end{eqnarray}
where the case of neglecting cutoff effects is covered
by ${n_\rho^{(i)}}=0$.
The continuum step scaling function is naturally
parameterized by a polynomial in $u$,
\begin{eqnarray}
\sigma_\nu(u)= u + u^2\sum_{k=0}^3 s_k u^{k}\,.
\end{eqnarray}
Lower order coefficients are fixed to their
known perturbative values while $s_3$ (``$n_c=1$'') or
$s_2,\,s_3$ (``$n_c=2$'') are
fit parameters.
A selection of such fits are illustrated in \tab{t:Sigfits}.
Instead of the parameters of the
continuum \SSF\ the table shows directly the extracted $\Lswi
\Lambda$, where $\Lswi$ is defined through \eq{eq:gL0} and the
value $u_0=2.012$. Recalling \eq{eq:gbarnu} and using $\bar v =
0.1199(10)$ (see next section) we have
\begin{eqnarray}
\gbarnu^2(1/\Lswi) = 2.012\, (1 - 0.1199(10) \times 2.012\,\nu)^{-1}\,.
\label{e:gnuL0}
\end{eqnarray}
Apart from the form of the fit, $\Lswi \Lambda$ depends on
the value of $n$ where
\eq{e:LLpert} with $\beta_\nu=\betapert_\nu$ is used.
Since we insert $\betapert_\nu$ at three-loop, the residual dependence
on the coupling is $\rmO(\alpha^2(1/L_n))$.
\begin{figure}[t]
\includegraphics*[width=0.9\linewidth]{Lambda-vs-alphasq-v6}
\caption{\label{f:LLmax-extrap}
The dependence of the $\Lambda$-parameter on
the coupling, $\alpha$.
From right to left, $n=0,1, \ldots,5$ steps of non-perturbative step-scaling are
performed to arrive at $\alpha(\mu)$ at $\mu=1/L_n$, before
using perturbative running. From top to bottom the different symbols
correspond to $\nu=-0.5,0,0.3$. We use
$i=1$ loop improved data and fit B, for $\nu=0$ we also show
$i=2$, fit C. Dotted lines show linear dependence in $\alpha^2$ to
guide the eye.
\label{f:llplot}}
\end{figure}
The observed behavior, \fig{f:LLmax-extrap}, is consistent
with a predominantly linear dependence
of $\Lswi \Lambda$ on $\alpha^2(1/L_n)$. For $\nu=0$ the
slope is not very significant and for $\nu=0.3$ it essentially disappears, but for $\nu=-0.5$ it
is quite large and outside errors.
This suggests performing alternative fits, where the continuum step scaling function is parameterized by an effective
four-loop $\beta$-function, adding a term
$b_3^\mathrm{eff}g^9$ to the known ones.
The determined $\Lswi \Lambda$
are then automatically independent of $n$ and
we include $b_3^\mathrm{eff}$ instead
of $u_{n=4}$ in the table. For $\nu=-0.5$ the effective
fit value is
larger than it should be in a well-behaved perturbative
expansion.
\begin{table}[h!]
\footnotesize
\centering
\begin{ruledtabular}
\begin{tabular}{ccccccccccc}
fit & $u_4$ & $i$ & $\left. \frac{L}{a}\right|_\mathrm{min}$ & $n_\rho^{(i)}$ & $n_c$ &
$\Lswi \Lambda$ & $b_3^\mathrm{eff}$ & $\chi^2$ & d.o.f.
\\[-0.5ex]
& & & & & & $\times 100$ & $\times (4\pi)^4$ & & \\
\hline\\[-1ex]
A & 1.193(4) & 0 & 6 & 2 & 1 & 3.04(\phantom{1}8) & & 14.7 & 16 \\
B & 1.194(4) & 1 & 6 & 2 & 1 & 3.07(\phantom{1}8) & & 14.2 & 16 \\
C & 1.193(5) & 2 & 6 & 2 & 1 & 3.03(\phantom{1}8) & & 14.5 & 16 \\
D & 1.192(7) & 2 & 6 & 2 & 2 & 3.03(13) & & 14.5 & 15 \\
E & & 2 & 6 & 2 & 1 & 3.00(11) & 4(3) & 14.6 & 16 \\
F & & 2 & 8 & 1 & 1 & 3.01(11) & 4(3) & 12.7 & \phantom{1}9 \\
G & 1.191(11) & 2 & 8 & 0 & 2 & 3.02(20) & & 13.0 & \phantom{1}9 \\
H & & 1 & 6 & 2 & 1 & 3.04(10) & 3(3) & 14.1 & 16 \\
\\[1ex]
fit & $\nu$ & $i$ & $\left. \frac{L}{a}\right|_\mathrm{min}$ & $n_\rho^{(i)}$ & $n_c$ &
$\Lswi \Lambda$ & $b_{3,\nu}^\mathrm{eff}$ & $\chi^2$ & d.o.f
\\[-0.5ex]
& & & & & & $\times 100$ & $\times (4\pi)^4$ & & \\
\hline\\[-1ex]
H & $-$0.5 & 1 & 6 & 2 & 1 & 3.03(15) & 11(5) & 10.4 & 16 \\
H & \phantom{$-$}0.3 & 1 & 6 & 2 & 1 & 3.04(10) & \phantom{1}0(3) & 20.0 & 16 \\
\end{tabular}
\end{ruledtabular}
\caption{Results of global fits: $\nu=0$ in the upper part, $\nu=-0.5$ and $\nu=0.3$ in the lower part.
}
\label{t:Sigfits}
\end{table}
We will come back to this issue shortly, but first we
give our result for $\Lswi \Lambda$. We take the standard
polynomial fit to $\sigma$ (for $\nu=0$) with
$\alpha_n \approx 0.1$ ($u_n\approx 1.2$). A typical perturbative
error of size $\Delta(\Lambda L_n) = \alpha_n^2\,\Lambda L_n $
is then a factor 3 or
more below our statistical errors. We quote (with $\gbar^2(1/\Lswi) = 2.012$)
\begin{eqnarray}
\Lswi \Lambda =0.0303(8) \; \, \to \;
\Lswi \Lambda_\msbar^{(3)} = 0.0791(21)\,,
\label{e:llresult2}
\end{eqnarray}
with the known $\Lambda_\msbar/\Lambda$~\cite{Luscher:1993gh,Sint:1995ch}.
This is the result of fit~C. It is in perfect agreement with all variations of the
global fit, even with fit~G, which neglects all cutoff effects but
uses only data with $L/a\geq 8$. It has a rather conservative error. If an even more conservative result is desired,
one may take that of fit~D, $\Lswi \Lambda =0.0303(13)$.
\section{Accuracy of perturbation theory}
While $b_{3,\nu}^\mathrm{eff}$ is large for $\nu=-0.5$, it does
have an error of around 50\%. A much better precision
can be achieved by considering directly the observable
\begin{eqnarray}
\omega(u) = \left.\vbar \right|_{\gbar^2(1/L)=u,m=0}
= v_1 + v_2 u + \rmO(u^2) \,,
\label{e:omega}
\end{eqnarray}
with coefficients
{ $v_1=0.14307,\,v_2=-0.004693$ \cite{DellaMorte:2004bc}}.
In contrast to the \SSF\, $\omega(u)$ does not require pairs of lattices,
so that the continuum extrapolation can be performed using
data for the entire range of lattice sizes $L/a=6,8,10,12,16,24$.
Improvement and fits for obtaining the continuum limit are carried out
in analogy to those of $\Sigma_\nu$.
\Fig{f:omega} shows the result of two different fits
with
fit parameters $d_k$ in $\omega(u)=v_1+v_2u+d_1u^2+d_2u^3+d_3u^4$
and in $\omega(u)=v_1+d_1u+d_2u^2+d_3u^3+d_4u^4$.
The overall band of the two fits may be taken as
a safe estimate of the continuum limit.
As an example we find $\omega(2.012)=0.1199(10)$ for both
fits, leading to \eq{e:gnuL0}.
In the above analysis we did not use
data with $L/a=6$. Including them yields only tiny changes
and excellent $\chi^2$ values.
\begin{figure}[t]
\includegraphics*[width=0.9\linewidth]{omega-vs-alpha}
\caption{\label{f:omega} The function $\omega(\gbar^2)$ after continuum
extrapolation, covering the $\pm1\sigma$ band of two fits described
in the text.
}
\end{figure}
A good measure of the deviation from two-loop perturbation
theory is
\begin{eqnarray}
(\omega(\gbar^2) - v_1 - v_2 \gbar^2 )/v_1 = -3.7(2) \,\alpha^2
\label{e:vbar3eff}
\end{eqnarray}
at $ \alpha=0.19$.
It is quite large and statistically significant
beyond any doubt. If one
attempts to describe this by perturbation theory,
the
three-loop coefficient $v_3$ has to be too
large for perturbation theory to be trustworthy
at $\alpha=0.2$.
{ Again, we} come to the conclusion that
$\alpha\approx 0.1$
needs to be reached non-perturbatively before perturbation theory becomes
accurate.
\section{Summary and Conclusions}
Our chosen definition of $\alpha_s(\mu)$ allows us to
compute it with very good precision
through lattice Monte Carlo simulations.
In particular we have
controlled the errors due to the discretisation of the theory
also at large $\mu$. Known non-perturbative corrections are parametrically
very small: $\rmO(\rme^{-2.6/\alpha})$. In other words we have an excellent scheme to test the accuracy of PT in a given
region of $\alpha$.
In fact, we have a family of schemes, depending on $\nu$.
For small positive $\nu$, the couplings follow perturbation theory very closely in the full investigated range
$0.1 \leq \alpha \leq 0.2$ as illustrated by the flatness
of $\Lambda$ in \fig{f:llplot} extracted from \eq{e:LLpert}
with the three-loop $\beta$-function.
However, for negative $\nu$, e.g. $\nu=-0.5$, values of $\alpha$
just below 0.2 are not small enough to confirm
perturbative behaviour. The observable $\vbar$, \fig{f:omega}, shows
that the $\alpha$-dependence seen in \fig{f:llplot} is not just a statistical fluctuation. We could
take the continuum limit of $\vbar$ with very high precision and \eq{e:vbar3eff}
shows a clear deviation from the known perturbative
terms, corresponding to $\lb=3$, at $\alpha=0.19$.
We conclude that it is essential to reach
$\alpha=0.1$ in order to be able to achieve a precision
around $3\%$ for the $\Lambda$-parameter. Fortunately
we have access to that region and can quote such an accuracy in
\eq{e:llresult2}.
While of course schemes exist where three-loop
running holds accurately down to smaller energies -- for
example $\nu=0.3$
produces flatness in \fig{f:llplot}
as far as we can tell --
to know whether a chosen scheme possesses this property
is difficult unless one has
control also over the $\alpha\approx 0.1$ region.
Once that is achieved, larger $\alpha$ values
are not really needed any more.
What we reported in this letter is part of our
determination of a precise value for $\Lambda_\msbar$.
As our next step, we will soon
connect $\Lswi$ to the decay constants of pion and kaon,
as explained above and in \cite{Brida:2015gqj}.
\vspace{8mm}
\begin{acknowledgments}
We thank our colleagues in the ALPHA collaboration,
in particular M.~Bruno, C.~Pena, S.~Schaefer, H.~Simma and U.~Wolff for many useful discussions.
We thank U.~Wolff for a critical reading of the manuscript.
We would also like to show our gratitude to
S. Schaefer and H. Simma for their invaluable contribution in adapting the \texttt{openQCD} code.
We thank the
computer centres at HLRN (bep00040) and NIC at DESY, Zeuthen for providing computing resources and support. S.S.
acknowledges support by SFI under grant 11/RFP/PHY3218.
P.F. acknowledges financial support from the Spanish MINECO’s “Centro de
Excelencia Severo Ochoa” Programme under grant SEV-2012-0249, as well as from
the MCINN grant FPA2012-31686.
This work is based on previous work \cite{Sommer:2015kza} supported strongly by the Deutsche Forschungsgemeinschaft in the SFB/TR~09.
\end{acknowledgments}
\vspace{15mm}
\section{Supplementary material}
Here we add some details on the Symanzik improvement of the action and
perturbative improvement of the \SSF~\cite{deDivitiis:1994yz}. In particular
we discuss how the uncertainties in improvement coefficients are handled.
\subsection{Improvement of the action}
Apart from the bulk $\Oa$ improvement term,
the complete removal of linear (in $a$) discretization
errors requires a boundary improvement coefficient $\ct$
in the gluon action \cite{Luscher:1992an}
and a coefficient $\cttil$ in the fermion action \cite{Luscher:1996sc}.
Regarding the former, the known two-loop expression
\cite{Bode:1999sm},
\begin{eqnarray}
\ct &=& 1 + g_0^2 (-0.0890 + 0.019141 \nf) \nonumber \\ &&
+\, g_0^4 (-0.0294 + 0.002 \nf + 0.000(1)\nf^2)\,,
\end{eqnarray}
is expected to be sufficient~\cite{Bode:2001jv} since we
are in the weak bare coupling region.
Still, we propagate the small deficit of $\Oa$-improvement into our errors. As an
uncertainty in $\ct$ we use the full two-loop term,
adding $\Delta^{\ct} \Sigma = 0.0234g_0^4|\partial_{\ct}\Sigma|$ in quadrature to the statistical errors.
The derivative $\partial_{\ct}\Sigma$ was
obtained from a numerical estimate of
$\delta_b(\gbar^2) \equiv \frac{L}{2a} \frac{1}{\gbar^2}\partial_{\ct} \gbar^2$, namely
$\delta_b(2.02)=-2.15(5) $
with negligible dependence on $L/a$ and $\nu$.
For this purpose
we performed simulations on $L/a=6,8$ lattices
with various values of $\ct$.
Combined with $\delta_b(0)=-1$ we then use the interpolation
$\delta_b(u) = - (1 + 0.57(3)\,u)$
and set $\partial_{\ct} \Sigma(u,a/L) = -(a/L) \,u\, \delta_b(u)$.
Similarly, for the coefficient
$\cttil(g_0) = 1 -0.01795 g_0^2 + {\rm O}(g_0^4)$ \cite{Luscher:1996vw} we use
the full one-loop term as an error estimate.
Its effect is much smaller than the
one of the uncertainty in $\ct$, since it
contributes only through quark loops.
These errors are responsible for around
30\% of the uncertainties of some of our central results in Table~I.
Throughout, error propagation is carried out
as detailed in \cite{Tekin:2010mm}.
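The error budget just described is a short arithmetic chain; a minimal sketch of it follows (illustrative only: the function names are ours, and the inputs $u$, $a/L$ and $g_0^2$ below are example values, not those of any particular simulation):

```python
def delta_b(u):
    # Interpolation delta_b(u) = -(1 + 0.57 u), fixed by delta_b(0) = -1
    # and consistent with the measured delta_b(2.02) = -2.15(5).
    return -(1.0 + 0.57 * u)

def dSigma_dct(u, a_over_L):
    # partial_{c_t} Sigma(u, a/L) = -(a/L) * u * delta_b(u)
    return -a_over_L * u * delta_b(u)

def ct_uncertainty(u, a_over_L, g0_sq):
    # Delta^{c_t} Sigma = 0.0234 g_0^4 |partial_{c_t} Sigma|, to be added
    # in quadrature to the statistical error of Sigma.
    return 0.0234 * g0_sq ** 2 * abs(dSigma_dct(u, a_over_L))

print(round(delta_b(2.02), 2))   # -2.15, matching the quoted -2.15(5)
```
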
\subsection{Perturbative improvement of the step scaling functions}
In addition,
we can improve the
observables to a given order $i$ in perturbation theory but to all orders
in $a$ via
\begin{equation}
\Sigma^{(i)}(u, a/L) = \frac{\Sigma(u, a/L)}{1 + \sum_{k=1}^i\delta_k(a/L)\,u^k}\,,
\label{e:sigimpr}
\end{equation}
with $\delta_1,\delta_2$ known~\cite{Bode:1999sm}.
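Numerically, \eq{e:sigimpr} amounts to the following one-line correction (a sketch; the $\delta_k$ value used in the example is a placeholder, not one of the actual perturbative coefficients):

```python
def sigma_improved(sigma, u, deltas):
    # Sigma^{(i)}(u, a/L) = Sigma(u, a/L) / (1 + sum_{k=1}^{i} delta_k(a/L) u^k),
    # with deltas = [delta_1, ..., delta_i] evaluated at the given a/L.
    return sigma / (1.0 + sum(d * u ** (k + 1) for k, d in enumerate(deltas)))

# Placeholder coefficient (NOT an actual value of delta_1): an order-u
# cutoff effect of relative size delta_1*u is divided out exactly.
print(sigma_improved(1.02, 2.0, [0.01]))   # 1.0
```
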
\bibliographystyle{apsrev4-1}
\section{Introduction}
\label{sec:intro}
Many complex systems are organized as networks, including social networks, gene interaction networks, the world wide web, and beyond \cite{BoLaMoCh06,BrEr05}. The majority of complex networks from real-world systems have statistical properties that separate them from the random graphs studied more classically. These properties include a skewed or power-law degree distribution \cite{BaAl99,CaRoNe09}, a high clustering coefficient \cite{BaWe99,LuPe49}, and a low path length, creating a small world of connectivity \cite{WattsStrogatz1998}.
A network's clustering coefficient is a measure of how often two connected nodes will have neighbors in common. More formally, the clustering coefficient relates a network's number of triangles (3-cycles) to the possible number of triangles (given by the total number of wedges in the network). There are two common ways to define a clustering coefficient, which are related but different. The first, often called the global clustering coefficient, involves a global count of the number of triangles and possible triangles in a network. The second involves an average of local counts. Formal definitions can be found in Section~\ref{sec:def}. Real-world networks tend to have (relatively) high clustering coefficients under both definitions due to the presence of communities: if two nodes are in the same community, it not only increases the likelihood that there is an edge between them, but also that they have a common neighbor. Due to this, the clustering coefficient is a good measure of the well-connectedness of a network and can indicate the presence or lack of strong community structure \cite{LaFo09,RaCaCeLoPa04}.
Closely related to the concept of the clustering coefficient of a network is the idea of the {\em small world} property found in many real world networks. Informally, the small world property states that the average distance between any two nodes in a given network is relatively small and, in the case of an evolving network, the distance grows more slowly than the network grows. In networks that are not fully connected, having a short average distance is in tension with having a high clustering coefficient. This is due to the fact that, in order for nodes in distant parts of the network to have short paths between each other, there must exist some long range edges between communities. However, adding long range edges increases the number of possible triangles in a network without creating any new triangles, since nodes in different communities are unlikely to have neighbors in common, thus lowering the clustering coefficient. There has been research focusing on how to jointly maximize these two properties under various constraints \cite{barmpoutis2010networks,ZhYaWa05} as well as on understanding processes which may lead to the dual development of both properties in both real world and generated networks \cite{ClMo03,HoKi02,LeGoKaKi04,newman2009random,PeBiTaCh09}.
Most research on generating networks with high clustering or low path length has focused on building these networks from scratch. However, in many real systems a network is already in existence. One may want to minimally rearrange existing edges in order to enhance or reduce certain network properties. The goals of these rewirings include increasing network robustness \cite{begrliri05,JiLiAn14,JiLiGu13,KoPuGa13,LoDaHeTo13,wuellner2010resilience,zhou2014memetic}, communicability \cite{ArBE15}, synchronizability \cite{LiQiZhYu10} or algebraic connectivity \cite{SyScGr13}. Similar work has been done on the careful addition of edges to improve network features, including conductance \cite{zhou2016faster, papagelis2015refining}, closeness centrality \cite{parotsidis2016centrality, crescenzi2016greedily}, and path lengths \cite{parotsidis2016centrality, papagelis2015refining}. A common feature of many of these rewirings or edge additions is the focus on local properties of the network, such as an individual node and its neighborhood, in order to ultimately impact global properties of a network, such as conductance, robustness, and connectivity. Various algorithms have been proposed in the aforementioned papers to make the best use of such rewirings or edge additions.
Here we present edge rewiring algorithms which increase the clustering coefficient of a given network while minimally impacting other network properties, specifically degree distribution and average path length. We provide proofs that these algorithms monotonically increase the global clustering coefficient. We present basic notation and definitions in Section~\ref{sec:def}. We then describe the algorithms in Section~\ref{sec:algorithms}, along with theoretical results about the effect of these algorithms on clustering. We numerically simulate the algorithms on model networks in Section~\ref{sec:experiments} and present results on real-world networks in Section~\ref{sec:real_experiments}. We discuss the implications of these algorithms in Section~\ref{sec:conclusions}.
\section{Notation and definitions}
\label{sec:def}
Let $G=(V, E)$ be a graph with a set of vertices (also referred to as nodes) $V$, $|V| = n$, and edge set $E = \{u\sim v\ | \ u,v \in V $ are connected by an edge$\}$, and write $\bar{E}$ for the complement of $E$ in ${V\choose 2}$. For a vertex $v\in V(G)$, $d_v$ denotes the {\em degree} of vertex $v$. We define the {\it neighborhood} of $v$ in $G$ to be $N(v) = \{u\in V(G)\ | \ u\sim v\}$. To simplify notation, we shall denote by $\overline N(v)$ the complement of $N(v)$ in $V(G)\backslash\{v\}$; that is, $\overline N(v)$ is the set of vertices in $G$ other than $v$ itself to which $v$ is not adjacent. Given two vertices $x, y$, define their {\it common neighborhood} to be $N(x, y)=\{u\in V(G)\ | \ u\sim x\hbox{ and }u\sim y\}$. For a set $S\subset V$, define $e(S)$ to be the number of edges in $G$ having both endpoints in $S$.
Define $N_p(G)$ and $N_t(G)$ to be the number of length-two paths and number of triangles in the graph, respectively. Note that \[N_p(G)=\sum_{x, y\in V(G)} |N(x, y)|,\] and \[3N_t(G)=\sum_{\{x, y\}\in E(G)} |N(x, y)|.\] We define the clustering coefficient of $G$ to be $$C(G)=3N_t(G)/N_p(G).$$
The above version of the clustering coefficient is sometimes referred to as the {\em global clustering coefficient} and we shall use that language at times in this work. Another measure of clustering, known as the {\em local average clustering coefficient}, is also in common usage. For a vertex $v\in V(G)$, let
\[c_v = \frac{2e(N(v))}{d_v(d_v-1)}.\]
Here, we have that ${d_v\choose 2}$ is the number of length-two paths having $v$ at the center; that is, it is a measure of the number of triangles in which $v$ could be involved. Thus, $c_v$ can be seen as the proportion of these triangles that actually exist in the graph, compared to how many triangles are possible involving $v$. We then define the local average clustering coefficient to be $$c(G) = \frac{1}{n}\sum_{v\in V}c_v.$$
Although the global clustering coefficient and the local average clustering coefficient often produce similar results regarding clustering in the graph, they are not equivalent measures \cite[p.~83]{Es12}. The primary difference between the two is the amount of emphasis placed on lower degree vertices. In the local average clustering coefficient, each vertex is treated with equal weight and, hence, vertices of very low degree, for which $c_v$ might be unusually high or low, have a much more substantial impact on the constant than those of high degree. In the global version, this impact is avoided by considering the graph as a whole, rather than any specific local structure.
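Both definitions can be computed by direct enumeration. The following sketch (our own illustrative code, not part of any package) evaluates them on a small example where they disagree:

```python
from itertools import combinations

def global_clustering(adj):
    # C(G) = 3 N_t(G) / N_p(G): three times the triangle count over wedges.
    n_t = sum(1 for x, y, z in combinations(adj, 3)
              if y in adj[x] and z in adj[x] and z in adj[y])
    n_p = sum(len(nbrs) * (len(nbrs) - 1) // 2 for nbrs in adj.values())
    return 3 * n_t / n_p

def local_average_clustering(adj):
    # c(G) = (1/n) sum_v c_v, with c_v = 2 e(N(v)) / (d_v (d_v - 1)).
    total = 0.0
    for v, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            continue  # convention: c_v = 0 for vertices of degree < 2
        e = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
        total += 2 * e / (d * (d - 1))
    return total / len(adj)

# A 4-cycle with one chord: two triangles sharing the edge {0, 2}.
square_with_chord = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(global_clustering(square_with_chord))         # 0.75
print(local_average_clustering(square_with_chord))  # 0.8333... = 5/6
```

Here the degree-two vertices have $c_v=1$, pulling the local average above the global value, which illustrates the extra weight the local measure places on low degree vertices.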
The {\it adjacency matrix} associated with a graph $G$ is given by $A = a(u,v)$ with
$$a(u,v) = \left\{\begin{array}{ll}
1,& \textnormal{ if } u \sim v,\\
0, & \textnormal{ else. }
\end{array}\right .
$$
Given two sets $A, B\subset X$, we use the notation $A\triangle B$ to denote the symmetric difference between $A$ and $B$; that is,
\[A\triangle B = (A\backslash B)\cup (B\backslash A).\]
\section{Algorithms}
\label{sec:algorithms}
In this section, we fully describe the algorithms used to rewire a graph to increase its clustering coefficient and prove that these algorithms are monotonic with respect to the global clustering coefficient. Further, we examine some of the local optima of these algorithms and consider the impact of the rewiring procedure on the local average clustering coefficient. Finally, we briefly discuss their computational complexity.
\subsection{Algorithm Description}
To begin, let us examine the algorithms in question. The fundamental idea of the algorithms we employ here is as follows. We wish to identify a set of vertices $\{x, y, v\}$ such that $\{x, y\}$ are not adjacent, $\{x, v\}$ are adjacent, and further, that if we rewired the edge $\{x, v\}$ to $\{x, y\}$, the clustering coefficient of the graph will improve. Fundamentally, if this move is considered from the perspective of the vertex $x$, the goal is for $x$ to replace less valuable edges (with respect to clustering) with more valuable edges. This process is then iterated so that the clustering coefficient increases upon each edge movement.
We shall consider two different, but similar algorithms. Both of these are based on the iterated protocol described above. The first version of the protocol is detailed in Rewiring Algorithm \ref{Algone} below. Essentially, the protocol is as follows. We first choose a nonedge $\{x, y\}$ whose addition to the graph $G$ would increase the number of triangles in $G$ as much as possible. We then look among the existing incident edges for an edge whose removal would decrease the number of triangles in $G$ as little as possible. If certain degree considerations are met, we then rewire this edge to $\{x, y\}$ to form a new graph $G'$. More details about the degree requirements can be found in Thm.~\ref{thm:rewiring} and its proof. The edge movement is akin to swinging a door from one doorway to another. The algorithm's process can be described as finding the best open doorway and swinging a door toward it. This is illustrated in Figure~\ref{Fig:Firstalg}.
\begin{algorithm}[H]
\floatname{algorithm}{``Swing Toward Best" Rewiring Algorithm}
\caption{}
\label{Algone}
\begin{algorithmic}[1]
\STATE Find a pair $\{x, y\}\in \bar{E}$ with the maximum value of $|N(x, y)|$
\STATE For $v\in N(x)$, define $f_v= |N(x, v)|$, and for $v\in N(y)$, define $f_v=|N(y, v)|$. Choose the vertex $v\in N(x)\triangle N(y)$ with minimum $f_v$; given a tie, choose $v$ to have the highest possible degree. WLOG, suppose $v\sim x$.
\STATE If $|N(x, v)|\geq |N(x, y)|$, return to step 1, and eliminate the edge $\{x, y\}$ from consideration.
\STATE If $d_v>d_y$, rewire the edge $vx$ to the edge $yx$ to form $G'$.
\STATE If $d_v\leq d_y$, return to step 2, and eliminate the vertex $v$ from consideration.
\STATE If no neighbor in $N(x)\triangle N(y)$ satisfies the requirements, return to step 1 and choose a different pair of nonadjacent vertices.
\end{algorithmic}
\end{algorithm}
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (pic) at (0,0) {\includegraphics[width = 2.5 in]{ex4.pdf} \includegraphics[width = 2.5 in]{ex5.pdf}};
\node at (-4,-0.925) {\textcolor{blue}{$x$}};
\node at (2.5,-0.925) {\textcolor{blue}{$x$}};
\node at (-3,-1.4) {\textcolor{blue}{$y$}};
\node at (3.525,-1.4) {\textcolor{blue}{$y$}};
\node at (-1.5,-0.35) {\textcolor{red}{$v$}};
\node at (4.9,-0.35) {$v$};
\end{tikzpicture}
\caption{A graph $G$ being rewired using Rewiring Algorithm \ref{Algone}. Here, the two blue vertices are $x$ and $y$. Note that in the original graph (left), $x$ and $y$ have the largest number of common neighbors among nonedges of the graph. We then choose $v$ as the red vertex, as its rewiring would destroy the fewest number of triangles (open the fewest closed doors), and it has maximum degree among such vertices. After verifying that all the degree considerations are met, we rewire the edge $xv$ to form the new graph (right).\\}
\label{Fig:Firstalg}
\end{figure}
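One pass of the ``Swing Toward Best'' protocol can be sketched as follows (an illustrative sketch only, not the implementation used for the experiments below; graphs are stored as adjacency dictionaries of sets and all names are ours):

```python
from itertools import combinations

def swing_toward_best(adj):
    # Perform one rewiring of the "Swing Toward Best" protocol, or return
    # None if no legal move exists.  adj maps each vertex to its neighbor set.
    nonedges = [(x, y) for x, y in combinations(sorted(adj), 2)
                if y not in adj[x]]
    # Step 1: best open doorway first (most common neighbors).
    for x, y in sorted(nonedges, key=lambda p: -len(adj[p[0]] & adj[p[1]])):
        gain = len(adj[x] & adj[y])
        # Step 2: candidate doors v in N(x) symmetric-difference N(y);
        # fewest triangles destroyed first, ties broken by highest degree.
        cands = []
        for v in adj[x] ^ adj[y]:
            anchor = x if v in adj[x] else y           # WLOG v ~ anchor
            other = y if anchor == x else x
            cands.append((len(adj[anchor] & adj[v]), -len(adj[v]),
                          v, anchor, other))
        for loss, _negdeg, v, anchor, other in sorted(cands):
            if loss >= gain:
                break                    # step 3: even the cheapest door fails
            if len(adj[v]) > len(adj[other]):          # step 4: d_v > d_y
                adj[anchor].discard(v); adj[v].discard(anchor)
                adj[anchor].add(other); adj[other].add(anchor)
                return (anchor, v), (anchor, other)
    return None

# Example: {0, 1} is the best doorway (two common neighbors), and the edge
# {0, 4} destroys no triangles, so it is swung onto {0, 1}.
adj = {0: {2, 3, 4}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1},
       4: {0, 5, 6}, 5: {4}, 6: {4}}
move = swing_toward_best(adj)
print(move)   # ((0, 4), (0, 1))
```
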
For the second version of the protocol, described below in Rewiring Algorithm \ref{Algtwo}, we take a slightly different approach to choosing an edge to rewire. In the first version, we first seek an open doorway, having as many incomplete triangles as possible. For the second version, we take the opposite approach, and choose an edge whose removal would destroy the fewest number of triangles in $G$ as possible; that is to say, the algorithm finds the closed door that is least beneficial to clustering and swings the door away to a more useful doorway. This approach can be much faster when the graphs in question are sparse; the remainder of the procedure is essentially the same as the first version of the algorithm.
\begin{algorithm}[H]
\floatname{algorithm}{``Swing Away from Worst" Rewiring Algorithm}
\caption{}
\label{Algtwo}
\begin{algorithmic}[1]
\STATE Find a pair $\{x, v\}\in E$ with the minimum value of $|N(x, v)|$
\STATE For $y\in \overline{N}(x)$, define $f_y= |N(x, y)|$, and for $y\in \overline N(v)$, define $f_y=|N(y, v)|$. Choose a vertex $y\in \overline N(x)\triangle \overline N(v)$ with maximum $f_y$; given a tie, choose $y$ to have the lowest possible degree. WLOG, suppose $y\in\overline N(x)$, i.e. $y\sim v$.
\STATE If $|N(x, y)|< |N(x, v)|$, return to step 1, and eliminate the edge $\{x, v\}$ from consideration.
\STATE \label{Deg1}If $d_y<d_v$, rewire the edge $vx$ to the edge $yx$ to form $G'$.
\STATE \label{Deg2} If $d_y\geq d_v$, return to step 2, and eliminate the vertex $y$ from consideration.
\STATE If no vertex in $\overline N(x)\triangle \overline N(v)$ satisfies the requirements, return to step 1 and choose a different pair of adjacent vertices.
\end{algorithmic}
\end{algorithm}
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (pic) at (0,0) {\includegraphics[width = 2 in]{p21.pdf} \includegraphics[width = 2 in]{p22.pdf}};
\node at (-4.55,-1.6) {\textcolor{blue}{$x$}};
\node at (0.625,-1.6) {\textcolor{blue}{$x$}};
\node at (-2.6,-1.6) {\textcolor{blue}{$v$}};
\node at (2.575,-1.6) {\textcolor{blue}{$v$}};
\node at (-0.55,1.2) {\textcolor{red}{$y$}};
\node at (4.6,1.2) {\textcolor{red}{$y$}};
\end{tikzpicture}
\caption{A graph $G$ being rewired using Rewiring Algorithm \ref{Algtwo}. Here, the two blue vertices are $x$ and $v$. Note that in the original graph (left), $x$ and $v$ have the fewest number of common neighbors among edges of the graph; namely, $x$ and $v$ are not involved in any common triangles. We then choose $y$ as the red vertex, noting that rewiring $xv$ to $vy$ will add two triangles at the green vertices. After verifying that all the degree considerations are met, we rewire the edge to $vy$ to form the new graph (right).\\}\label{Fig:Secondalg}
\end{figure}
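One pass of the ``Swing Away from Worst'' protocol can be sketched in the same style (again an illustrative sketch with our own names, not the experimental implementation): pick the edge in the fewest triangles, then move one of its endpoints onto the non-neighbor that closes the most wedges.

```python
from itertools import combinations

def swing_away_from_worst(adj):
    # Perform one rewiring of the "Swing Away from Worst" protocol, or
    # return None if no legal move exists.
    edges = [(x, v) for x, v in combinations(sorted(adj), 2) if v in adj[x]]
    # Step 1: least valuable closed door first (fewest triangles).
    for x, v in sorted(edges, key=lambda e: len(adj[e[0]] & adj[e[1]])):
        loss = len(adj[x] & adj[v])
        cands = []
        for keep, drop in ((x, v), (v, x)):
            # Rewiring {keep, drop} to {keep, y} creates |N(keep, y)|
            # triangles; prefer large gain, then low degree d_y.
            for y in adj[drop] - adj[keep] - {keep}:
                cands.append((-len(adj[keep] & adj[y]), len(adj[y]),
                              y, keep, drop))
        for neg_gain, _deg, y, keep, drop in sorted(cands):
            if -neg_gain < loss:
                break                    # no doorway beats this closed door
            if len(adj[y]) < len(adj[drop]):           # degree condition
                adj[keep].discard(drop); adj[drop].discard(keep)
                adj[keep].add(y); adj[y].add(keep)
                return (keep, drop), (keep, y)
    return None

# Example: the triangle-free graph below gains its first triangle.
adj = {0: {2, 3, 4}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1},
       4: {0, 5, 6}, 5: {4}, 6: {4}}
move = swing_away_from_worst(adj)
print(move)
```
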
First, let us verify that the action of ``swinging a door,'' regardless of the algorithm itself, is monotonic with respect to clustering coefficient.
\begin{theorem}
\label{thm:rewiring}
Any individual rewiring performed satisfying the conditions of the algorithms above will strictly increase the clustering coefficient.
\end{theorem}
\begin{proof}
Suppose we have three vertices $x, y, v$ such that
\begin{itemize}
\item $\{x, y\}\in \bar{E}$, $\{x, v\}\in E$
\item $|N(x, y)|> |N(x, v)|$
\item $d_v>d_y$.
\end{itemize}
Then the rewiring of edge $xv$ to edge $xy$ is permitted according to either of the algorithms described above. We note that this situation accommodates both versions of the protocol, although the vertex labels are changed for the second version. Let $G'=(G\backslash\{xv\})\cup \{xy\}$. Let us consider how this rewiring affects the clustering coefficient. Specifically, we need only consider the total number of triangles and the total number of length 2 paths in the new graph $G'$.
First, we consider triangles. We note that the only triangles that will be present in $G$ but not in $G'$ are those that have $xv$ as an edge. On the other hand, the only triangles that will be present in $G'$ but not in $G$ are those that have $xy$ as an edge. Hence, we have $N_t(G')=N_t(G)-|N(x, v)|+|N(x, y)|> N_t(G)$.
Likewise, the only length-two paths that appear in $G$ but not $G'$ are those involving the edge $xv$; we note that there are $(d_v-1)+(d_x-1)$ such paths. Similarly, the only length-two paths that appear in $G'$ but not in $G$ are those involving the edge $xy$, of which there are $(d_x-1)+d_y$. Hence, we have $N_p(G')=N_p(G)-(d_v-1)-(d_x-1)+(d_x-1)+d_y = N_p(G)+d_y-d_v+1\leq N_p(G)$.
Therefore, we have that $C(G')=3N_t(G')/N_p(G')>3N_t(G)/N_p(G)=C(G)$.
\end{proof}
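The accounting in this proof is easy to verify directly on a small example (a sketch with our own helper names and an example graph of our own choosing):

```python
from itertools import combinations

def triangle_count(adj):
    return sum(1 for x, y, z in combinations(adj, 3)
               if y in adj[x] and z in adj[x] and z in adj[y])

def wedge_count(adj):
    return sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())

# Two triangles {0,1,2}, {1,2,3}; vertex 4 of degree 3 attached to 0.
adj = {0: {1, 2, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2},
       4: {0, 5, 6}, 5: {4}, 6: {4}}
x, v, y = 0, 4, 3        # |N(x,y)| = 2 > 0 = |N(x,v)|, and d_v = 3 > 2 = d_y
n_t0, n_p0 = triangle_count(adj), wedge_count(adj)
d_v, d_y = len(adj[v]), len(adj[y])

adj[x].discard(v); adj[v].discard(x)     # rewire {x, v} ...
adj[x].add(y); adj[y].add(x)             # ... to {x, y}

assert triangle_count(adj) == n_t0 - 0 + 2       # N_t' = N_t - |N(x,v)| + |N(x,y)|
assert wedge_count(adj) == n_p0 + d_y - d_v + 1  # N_p' = N_p + d_y - d_v + 1
```

Here the clustering coefficient jumps from $6/13$ to $12/13$ in a single move.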
We note here that there are many possible variants on the choice of doorway; the fundamental piece of the algorithm is the swing itself. Indeed, in Section \ref{sec:experiments}, we shall also examine the algorithm in a regime in which doorways are chosen randomly, rather than greedily as described in the above algorithms.
\subsection{Local optima}
We now turn to a consideration of local optima under this algorithm. As the algorithm is strictly monotone with respect to clustering, one would expect that any such optima will have a high clustering coefficient. Indeed, as the theorem below shows, this will be the case for our technique.
\begin{theorem}\label{localopt}
Let $G$ be a graph, such that the edges of $G$ can be partitioned into cliques of size at least 3, say $C_1, C_2, \dots, C_k$. Then $G$ is a local optimum with respect to the above algorithm.
\end{theorem}
\begin{proof}
We need only show that there are no legal rewires to be performed under the algorithm. We recall that in order to rewire $vx$ to $yx$, we must have that $|N(x, v)|<|N(x, y)|$, and hence if $|N(x, y)|=0$, there are no edges that can be rewired to $xy$. Moreover, note that any two cliques can share at most one vertex, as the cliques partition the edges of $G$.
Now, suppose that $\{x, y\}\in \bar{E}$. If $d_x=0$ or $d_y=0$, then $|N(x, y)|=0$ and there will be no legal rewires.
If not, both $x$ and $y$ appear as members of at least one clique. Note that they are not in the same clique, as if they were, we would have $x\sim y$. If the cliques including $x$ and the cliques including $y$ share no vertices, then $|N(x, y)|=0$, and there will be no edge to rewire to $xy$.
Hence, we may assume that $x\in C_i$, $y\in C_j$, where $|V(C_i)\cap V(C_j)|=1$. Thus, $x$ and $y$ share as a neighbor the vertex at which the two cliques intersect, and hence $|N(x, y)|=1$. Now, note that if $v$ is a neighbor of $x$, then $v$ and $x$ appear in some clique of size at least 3 together, and hence $|N(x, v)|\geq 1$. But then $xv$ cannot be rewired to $xy$. As the same will be true of neighbors of $y$, there is no edge that satisfies the requirements of the algorithm.
Therefore, $G$ is locally optimal with respect to the algorithm.
\end{proof}
We note that the clustering coefficient of these graphs will be quite close to 1. Taking $G$ to be as described in Theorem \ref{localopt}, write $n_i=|V(C_i)|$ and define $H$ to be the graph on $[k]$, wherein $i\sim j$ if and only if $C_i$ and $C_j$ share a vertex. We then have
\begin{eqnarray*}
C(G) & = & \frac{3N_t(G)}{N_p(G)}\\
& = & \frac{ 3\displaystyle\sum_{i=1}^{k}{n_i\choose 3}}{\displaystyle\sum_{i=1}^{k}n_i{n_i-1\choose 2} + \displaystyle\sum_{i\sim_Hj}(n_i-1)(n_j-1)}\\
& \geq & \frac{\displaystyle\sum_{i=1}^{k}n_i{n_i-1\choose 2} }{\displaystyle\sum_{i=1}^{k}n_i{n_i-1\choose 2} +\displaystyle\sum_{i, j}n_in_j}.
\end{eqnarray*}
Clearly, the fewer common vertices we have among cliques, the higher the clustering coefficient will be. Moreover, if we imagine that $k$ is fixed, but the number of vertices in the graph is tending to $\infty$, then $C(G)$ tends to $1$ asymptotically.
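For instance, taking $k=2$ cliques of equal size $m$ sharing a single vertex, direct enumeration shows $C(G)$ climbing toward 1 as $m$ grows (an illustrative sketch; the construction and names are ours):

```python
from itertools import combinations

def global_clustering(adj):
    n_t = sum(1 for x, y, z in combinations(adj, 3)
              if y in adj[x] and z in adj[x] and z in adj[y])
    n_p = sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())
    return 3 * n_t / n_p

def two_cliques_sharing_a_vertex(m):
    # Vertices 0..m-1 form one clique; vertex 0 and m..2m-2 form the other.
    first = list(range(m))
    second = [0] + list(range(m, 2 * m - 1))
    adj = {v: set() for v in range(2 * m - 1)}
    for clique in (first, second):
        for a, b in combinations(clique, 2):
            adj[a].add(b); adj[b].add(a)
    return adj

values = [global_clustering(two_cliques_sharing_a_vertex(m))
          for m in (4, 8, 16, 32)]
print(values)   # strictly increasing toward 1
```
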
To the best of the authors' knowledge, the optimum connected graph on $n$ vertices with a fixed number of edges $m$ with respect to the global clustering coefficient $C(G)$ is unknown. We note that these algorithms as written do not necessarily require that the edge rewiring preserves connectivity or components in the graph $G$, although it is clear based on the structure that this algorithm cannot combine two components into one. However, it is straightforward to construct examples in which the algorithms will rewire a bridge, thus disconnecting a formerly connected graph (one such example is shown in Figure~\ref{Fig:bridge}). Moreover, we note that there are local optima that do not take the form described above; for example, a barbell graph in which two complete graphs are connected by exactly one edge cannot be partitioned into cliques of size at least 3, but it is still optimal with respect to this rewiring algorithm.
\begin{figure}[H]
\centering
\includegraphics[width = .7\textwidth]{bridge.pdf}
\caption{A graph $G$ in which the only legal rewiring will disconnect the graph.\\}\label{Fig:bridge}
\end{figure}
Finally, we consider the case of a ring lattice with respect to this algorithm. Define the ring lattice $L(n, k)$ to be the graph with $n$ vertices, labeled as $v_1, v_2, \dots, v_n$, and having $v_i$ adjacent to $v_j$ if and only if the cyclic distance $\min(|i-j|, n-|i-j|)$ is at most $k$. We note that in the Watts-Strogatz experiment described in \cite{WattsStrogatz1998}, it is this lattice that is rewired randomly to produce a random graph. As our algorithm increases clustering, essentially reversing the process of the experiment performed by Watts and Strogatz, one might expect that the algorithm cannot improve upon the ring lattice; that is indeed the case.
\begin{theorem}
Let $2\leq k < \frac{n}{4}$. Then $L(n, k)$ is locally optimal with respect to the above algorithms.
\end{theorem}
\begin{proof}
First observe that $L(n, k)$ is $2k$-regular, so the strict degree requirement $d_v>d_y$ in the algorithms can never be satisfied, and hence no rewiring is legal. It is nonetheless instructive to examine directly how a single edge rewiring affects the triangle and wedge counts. Suppose that we have an edge $v_iv_j$ that is rewired to the edge $v_iv_\ell$; without loss of generality, suppose that $i=1$ and $1<j\leq k+1$. Note that the edge $v_1v_j$ in $L(n, k)$ participates in $2k-j$ distinct triangles, namely those triangles with vertices $(v_1, v_j, v_t)$, where $n+(j-k)\leq t\leq n$ or $2\leq t\leq k+1$, provided $t\neq j$. Note moreover that $2k-j\geq k-1$.
Let $L'$ be the graph obtained by rewiring $v_1v_j$ to $v_1v_\ell$. Note that if $n/2<\ell<n-k+1$, then the only triangles that the edge $v_1v_\ell$ participates in are those for which the third vertex, $v_t$, has $\ell<t\leq n$ and $t-\ell<k, n+1-t<k$. Note that there are at most $k-1$ such triangles, when $\ell=n-k$. Hence, this cannot increase the number of triangles.
Likewise, if $v_1v_j$ is rewired to an edge $v_1v_\ell$ having $k+1<\ell\leq\frac{n}{2}$, we create at most $k-1$ triangles in $L'$.
Hence, we cannot increase the number of triangles that appear in $L'$ by a single edge rewiring. Moreover, if we consider the number of wedges in $L$ and $L'$, we obtain
\begin{eqnarray*}
N_p(L(n, k)) &=& \sum_{v\in V} {\deg(v)\choose 2}\\
& = & n {2k\choose 2},
\end{eqnarray*}
and
\begin{eqnarray*}
N_p(L'(n, k)) & = & \sum_{v\in V}{\deg(v)\choose 2}\\
& = & (n-2){2k\choose 2} + {2k-1\choose 2} + {2k+1\choose 2}
\end{eqnarray*}
It is trivial to compute that $N_p(L')>N_p(L)$. Hence, a rewiring cannot increase the number of triangles, but always increases the number of paths of length 2. Therefore, no rewiring can increase the clustering coefficient, and $L(n, k)$ is locally optimal with respect to the described algorithms.
\end{proof}
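The counts used in this proof are easy to confirm by direct enumeration; for example, $N_p(L(n, k)) = n{2k\choose 2}$ and, for $n=20$ and $k=2$, $C(L(n, k))=1/2$ (a sketch with our own helper names):

```python
from itertools import combinations

def ring_lattice(n, k):
    # L(n, k): v_i ~ v_j iff their cyclic distance is at most k.
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if min(j - i, n - (j - i)) <= k:
            adj[i].add(j); adj[j].add(i)
    return adj

def triangle_count(adj):
    return sum(1 for x, y, z in combinations(adj, 3)
               if y in adj[x] and z in adj[x] and z in adj[y])

def wedge_count(adj):
    return sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())

n, k = 20, 2
L = ring_lattice(n, k)
assert wedge_count(L) == n * (2 * k) * (2 * k - 1) // 2   # N_p = n * C(2k, 2)
print(3 * triangle_count(L) / wedge_count(L))             # 0.5
```
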
\subsection{Degree sequences}
It is clear from the definition of the algorithms above that the degree sequence in $G$ will not be preserved under these rewirings. Indeed, by examining steps \ref{Deg1} and \ref{Deg2} in the algorithm statements, it can be seen that this algorithm always rewires edges from higher degree nodes to lower degree nodes. Although it might seem, based on this feature, that the rewired graph would tend toward regularity, it can be seen empirically that this is not the case. If we expect that high degree nodes will control large clusters, the edges that are rewired are those that are not actually involved in many triangles with that high degree node.
This being said, the algorithm can be modified to allow for a degree-preserving version. In this version, rather than rewiring one edge, we must rewire two edges at a time, as follows.
\begin{algorithm}[H]
\floatname{algorithm}{Degree Sequence Preserving Rewiring Algorithm}
\caption{}
\label{Algthree}
\begin{algorithmic}[1]
\STATE Find a pair $\{x, y\}\in \bar{E}$ with a maximum value of $|N(x, y)|$
\STATE Let $F\subset \bar{E}$ be the set of nonedges in $G$ such that for each $\{u, v\}\in F$, the pair $\{u,v\}$ is disjoint from $\{x, y\}$, with $u\sim x$ and $v\sim y$. Among these, choose a nonedge $\{u, v\}$ with the maximum value of $|N(u, v)|$.
\STATE \label{oneway} If $|N(u, v)|+|N(x, y)|>|N(x, u)|+|N(v, y)|$ and $|N(u, v)|+|N(x, y)|>|N(u, y)|+|N(x, v)|$, delete the edges $\{x, u\}$ and $\{y, v\}$, and replace them with the edges $\{x, y\}$ and $\{u, v\}$.
\STATE \label{otherway} Otherwise, if $|N(u, y)|+|N(v, x)|>|N(x, u)|+|N(v, y)|$, delete the edges $\{x, u\}$ and $\{y, v\}$, and replace them with the edges $\{x, v\}$ and $\{u, y\}$.
\STATE If neither of the conditions of steps \ref{oneway} and \ref{otherway} is met, return to step 2 and remove the nonedge $\{u, v\}$ from consideration.
\STATE If no pair in $F$ satisfies the requirements, return to step 1 and choose a different nonedge of $G$.
\end{algorithmic}
\end{algorithm}
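The double swap at the heart of this procedure can be sketched directly; every vertex involved loses one edge and gains one, so the degree sequence is untouched (an illustrative sketch; the example graph and names are ours):

```python
from itertools import combinations

def triangle_count(adj):
    return sum(1 for x, y, z in combinations(adj, 3)
               if y in adj[x] and z in adj[x] and z in adj[y])

def double_swap(adj, x, u, y, v):
    # Replace edges {x, u} and {y, v} with {x, y} and {u, v}; each of the
    # four vertices loses one edge and gains one, preserving all degrees.
    adj[x].discard(u); adj[u].discard(x)
    adj[y].discard(v); adj[v].discard(y)
    adj[x].add(y); adj[y].add(x)
    adj[u].add(v); adj[v].add(u)

# Nonedge {0, 1} has common neighbors {2, 3}; nonedge {4, 5} has {6}.
adj = {0: {2, 3, 4}, 1: {2, 3, 5}, 2: {0, 1}, 3: {0, 1},
       4: {0, 6}, 5: {1, 6}, 6: {4, 5}}
degrees = sorted(len(nb) for nb in adj.values())
before = triangle_count(adj)

double_swap(adj, 0, 4, 1, 5)   # remove {0,4}, {1,5}; add {0,1}, {4,5}

assert sorted(len(nb) for nb in adj.values()) == degrees
assert triangle_count(adj) == before + 2 + 1   # |N(0,1)| + |N(4,5)| new triangles
```
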
We note that this algorithm can be seen as a revision of Algorithm \ref{Algone}; a similar revision can be made for Algorithm \ref{Algtwo}. We further note that the benefit of degree sequence preservation here may be outweighed by the expense of such an algorithm; the computation time is substantially higher, since in step 2, we must consider substantially more edges in $G$ than in the previous versions. As with Algorithms \ref{Algone} and \ref{Algtwo}, it is straightforward to show that this algorithm is monotone with respect to the global clustering coefficient; indeed, the number of length-two paths here is constant, and hence the only change is that we are increasing the total number of triangles in $G$.
\subsection{Complexity}
Computationally, the most expensive step of both Algorithms \ref{Algone} and \ref{Algtwo} is the first, the calculation of the pair $\{x,y\}$ with the maximum value of $|N(x,y)|$ over nonedges (for Algorithm \ref{Algone}) or the minimum value over edges (for Algorithm \ref{Algtwo}), which corresponds to finding the maximum (minimum) value of a subset of the elements of $A^2$. In its most straightforward implementation, $A^2$ can be calculated in $\mathcal{O}(n^3)$ time, where $n$ is the number of vertices in $G$. However, for many large, real-world networks this computational time can be prohibitive. Algorithms \ref{Algone} and \ref{Algtwo} do not require the full calculation of $A^2$, only lists of the number of triangles and wedges in which each node participates. The number of wedges can be calculated in $\mathcal{O}(n)$ time using the degree of each vertex. In \cite{Co09}, an elegant algorithm for efficiently enumerating all triangles in a network using MapReduce was introduced and, in \cite{NoWiPhBe10}, it was shown that this enumeration can be done in $\mathcal{O}(n)$ for power law graphs with an exponent of more than $\frac{7}{3}$ and a maximum vertex degree bounded by $\sqrt{n}$. Additionally, the authors in \cite{NoWiPhBe10} provide experimental results which indicate that the calculation remains $\mathcal{O}(n)$ even when the maximum degree is bounded by $n-1$.
Isolating the appropriate subset and finding its maximum (minimum) can be done in $\mathcal{O}(n)$ time. Each iteration of the algorithm will change the number of triangles and wedges in the network. However, it will only change the values of vertices $x, y, u,$ and $v$ and those of nodes in their neighborhoods. These effects can be calculated {\it a priori} and the changes can be implemented through neighborhood-centric updates which involve summing (or subtracting) a few appropriate values, which takes $\mathcal{O}(n)$ time. Thus, the full enumeration of the number of triangles in the network only needs to be done once.
The rest of the algorithms consist of finding the vertex $v \in N(x)\triangle N(y)$ with minimum $f_v$, checking that it meets the degree requirements, moving the edge, and updating the lists of the numbers of wedges and triangles in which each vertex participates. These can all be done in $\mathcal{O}(n)$ time for each iteration. Thus, for the majority of real-world networks, the total cost of the algorithm is $\mathcal{O}(kn)$, where $n$ is the number of vertices in the graph and $k$ is the number of edges to be rewired.
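The neighborhood-centric bookkeeping can be sketched as follows (our own sketch, with a brute-force recount used only to check it): after rewiring $\{x, v\}$ to $\{x, y\}$, only the vertices in $N(x, v)$ and $N(x, y)$, together with $x$, $v$, and $y$ themselves, need their triangle counts adjusted.

```python
from itertools import combinations

def per_vertex_triangles(adj):
    # Full O(n^3) recount, used here only to validate the incremental update.
    tri = {w: 0 for w in adj}
    for x, y, z in combinations(adj, 3):
        if y in adj[x] and z in adj[x] and z in adj[y]:
            tri[x] += 1; tri[y] += 1; tri[z] += 1
    return tri

def apply_rewire(adj, tri, x, v, y):
    # Rewire edge {x, v} to {x, y}, updating tri in place; only vertices in
    # N(x, v) and N(x, y) (plus x, v, y themselves) are touched.
    for w in adj[x] & adj[v]:          # triangles destroyed with {x, v}
        tri[w] -= 1; tri[x] -= 1; tri[v] -= 1
    adj[x].discard(v); adj[v].discard(x)
    for w in adj[x] & adj[y]:          # triangles created with {x, y}
        tri[w] += 1; tri[x] += 1; tri[y] += 1
    adj[x].add(y); adj[y].add(x)

adj = {0: {1, 2, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2},
       4: {0, 5, 6}, 5: {4}, 6: {4}}
tri = per_vertex_triangles(adj)
apply_rewire(adj, tri, 0, 4, 3)               # move {0, 4} onto {0, 3}
assert tri == per_vertex_triangles(adj)       # incremental == full recount
```
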
\section{Results on Generated Networks}
\label{sec:experiments}
We illustrate the effects of the rewiring algorithms by running them on simulated networks. We create synthetic networks of various types and then iteratively rewire them using one of the rewiring procedures until there are no valid rewiring moves left. These experiments are run on Erd\"os-R\'enyi ($G_{n,p}$) networks \cite{ErRe59} and Barab\'asi-Albert preferential attachment networks \cite{BaAl99}.
Both rewiring algorithms
increased the global clustering coefficient of the network monotonically.
The ``Swing Toward Best" algorithm (Algorithm \ref{Algone}) generally produced greater clustering gains than the ``Swing Away from Worst" algorithm (Algorithm \ref{Algtwo}), as can be seen in Figure~\ref{Fig:algorithm_comparison} on Erd\"os-R\'enyi $G_{n,p}$ networks with $n=100$ and $p=0.07$. This is expected, as the ``Swing Toward Best"
algorithm is a global optimization procedure while the
``Swing Away from Worst" algorithm is a local optimization procedure. Results from choosing edges to rewire randomly or probabilistically based on the two procedures are also plotted.
It is easy to see how employing different methods for selecting the candidate edges leads to clustering gains of different sizes even within the ``Swing Toward Best'' scheme. The greedy versions of the algorithms raised the clustering the most (see the blue line on the left of Figure~\ref{Fig:algorithm_comparison}). Randomly selecting a candidate edge still increased clustering, but not as rapidly (see the red line on the left of Figure~\ref{Fig:algorithm_comparison}). In between was selecting candidate edges probabilistically (see the green line on the left of Figure~\ref{Fig:algorithm_comparison}). Similar results can be seen within the ``Swing Away from Worst'' scheme, although random and probabilistic edge selection perform much more similarly to the optimum in this case (see the right side of Figure~\ref{Fig:algorithm_comparison}).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{Algorithm_Comparison.pdf}
\caption{\textbf{Rewiring a network's edges using either algorithm \ref{Algone} or \ref{Algtwo} raised its clustering coefficient.}
Simulated networks: 100 replications of Erd\"os-R\'enyi ($G_{n,p}$) networks ($n=100, p=0.07$). (Unless otherwise stated, for all figures, lines show the mean over ten runs of the algorithm; shading shows the standard deviation; the first dot is the point at which the first network stopped rewiring; the second dot is the point at which the last network stopped rewiring.)
Different line colors/line styles indicate different methods for finding an edge to rewire.
This figure shows the outcomes of the ``Swing Toward" rewiring algorithm \ref{Algone} (left panel) and the ``Swing Away" rewiring algorithm \ref{Algtwo} (right panel).
}\label{Fig:algorithm_comparison}
\end{figure}
For the rest of the paper, all results are generated using the optimum version of the ``Swing Toward" algorithm, Algorithm~\ref{Algone}, unless otherwise specified.
In Figure~\ref{Fig:algorithm_comparison}, it can be seen that in all 100 runs, over 60\% of the edges in the Erd\"os-R\'enyi networks were rewired before the algorithm stopped and, in some cases, nearly 100\% were. It is unclear {\em a priori} whether algorithmic restrictions to preserve the degree distribution will also limit the number of valid rewirings. In Figure~\ref{Fig:degree_preservation}, the degree-preserving version of the optimum ``Swing Toward Best'' algorithm is run on 100 instances of Erd\"os-R\'enyi $G_{n,p}$ networks with $n=100$ and $p=0.07$. It can be seen that, in every instance, over 60\% of the edges were rewired before the algorithm completed, even with the modification to preserve degrees. While this procedure put greater requirements on what constituted a valid rewiring move, it did not seem to limit the number of edges that were rewired. However, it did not produce as large an increase in clustering, which was expected due to the more limited options of where edges could be moved.
\begin{figure}[h]
\centering
\includegraphics[]{Unpreserved_vs_Preserved.pdf}
\caption{\textbf{Rewiring a network while preserving the degree sequence also increased clustering, though not as quickly.}
Simulated networks: 100 replications of Erd\"os-R\'enyi ($G_{n,p}$) networks ($n=100, p=0.07$).
The green solid line shows rewiring with the ``Swing Toward" Algorithm \ref{Algone}, which does not preserve the degree sequence.
The blue dashed line shows rewiring with Algorithm \ref{Algthree}, which does preserve the degree sequence.
Although both algorithms increased the clustering coefficient of the network, Rewiring Algorithm \ref{Algone}, which allowed for changes in the degree sequence of the network, increased the clustering more than Algorithm \ref{Algthree}. \\}\label{Fig:degree_preservation}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[]{Barabasi_Albert.pdf}
\caption{\textbf{Rewiring raised the clustering of scale-free networks.}
Simulated networks: 100 replications of 100 node Barab\'asi-Albert networks.
Different line colors/styles correspond to different values of $m$, the number of nodes that a new node attaches to during the initial network generation.
}\label{Fig:Barabasi_Albert}
\end{figure}
We also ran the optimal version of the ``Swing Toward'' algorithm on Barab\'asi-Albert preferential attachment networks with a variety of parameters. These results can be seen in Figure~\ref{Fig:Barabasi_Albert}. Algorithm \ref{Algone} was run on 100 instances each of preferential attachment networks with $m=3,5,$ and $7$, where $m$ is the number of edges each newly added node begins with. Although the initial clustering coefficient was different for each of these three parameters, with a lower $m$ corresponding to a lower initial clustering, after the rewiring the clusterings converge to values that are much closer. That is, the networks with a lower $m$ show greater gains in clustering coefficient than those with a higher $m$. Again, we see that, in every instance, over 60\% of the edges are rewired before the algorithm halts. We also see that the final clustering coefficient of the rewired networks hovers around 0.15-0.2, which is the same range as the final clustering coefficient in the rewired Erd\"os-R\'enyi networks.
\subsection{Different Kinds of Clustering}
In Section \ref{sec:algorithms}, we prove that these rewiring algorithms are guaranteed to increase a network's global clustering coefficient, but there is another common measure of a network's clustering: the average local clustering coefficient. The rewiring algorithms also typically increase the average local clustering, as can be seen in Figure \ref{Fig:average_local_clustering} (100 runs of rewiring Erd\"os-R\'enyi $G_{n,p}$ networks with $n=100$ and $p=0.07$). However, it is possible for the local clustering to stall or decrease during portions of the rewiring process, whereas the global clustering coefficient cannot decrease.
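The distinction between the two measures can be made concrete with a short sketch. The function below computes the global clustering coefficient ($3 \times$ triangles divided by wedges) and the average local clustering coefficient for a graph given as a dict of adjacency sets; the representation and the convention that isolated or degree-one vertices contribute a local value of zero are illustrative assumptions.

```python
from itertools import combinations

def clustering_coefficients(adj):
    """Global clustering (3 * triangles / wedges) and average local
    clustering for a graph given as a dict of adjacency sets."""
    triangles = wedges = 0
    local = []
    for v, nbrs in adj.items():
        d = len(nbrs)
        w = d * (d - 1) // 2            # wedges centred at v
        # closed wedges centred at v, i.e. triangles through v
        t = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        wedges += w
        triangles += t                  # each triangle is counted at 3 centres
        local.append(t / w if w > 0 else 0.0)   # convention: 0 for degree < 2
    global_cc = triangles / wedges if wedges else 0.0
    avg_local_cc = sum(local) / len(local)
    return global_cc, avg_local_cc
```

On a triangle both measures equal 1; on a path of length two both equal 0, since the single wedge is open.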
\begin{figure}[h]
\centering
\includegraphics[width = 0.5\textwidth]{Total_vs_Local_Triangle_Density.pdf}
\caption{\textbf{Rewiring networks increased different kinds of clustering.} The change in average local clustering over 100 runs on an Erd\"os-R\'enyi $G_{n,p}$ network ($n=100, p=.07$).
The blue solid line shows the total triangle density (global clustering coefficient).
The green dashed line shows the average local triangle density (average local clustering coefficient).
}\label{Fig:average_local_clustering}
\end{figure}
\subsection{Small World Creation}
In addition to clustering, the rewiring algorithms introduced here also alter other properties of the network structure. One such property is the average path length of the network that is undergoing rewiring. We examined this in more detail and found that the average path length increased during rewiring; as clusters formed, it became harder to quickly navigate between them. This can be seen in the left panel of Figure~\ref{Fig:path_length_small_world}, which shows these changes on 100 runs of the optimal ``Swing Toward'' rewiring algorithm on Erd\"os-R\'enyi $G_{n,p}$ networks with $n=100$ and $p=0.07$. However, the trade-off between clustering and path length was not constant. Clustering increased faster than path length during the majority of the edge rewires but, at the end of the rewiring process, the path length increased more quickly and the clustering coefficient stabilized. A network with high clustering and low path length is commonly known as a small-world network \cite{WattsStrogatz1998}, and the {\em small-world index} summarizes this relationship through the ratio of the clustering coefficient and the path length. A high small-world index indicates that a network has a particularly complex structure. We plot the effect of optimal ``Swing Toward'' rewirings on the right of Figure~\ref{Fig:path_length_small_world}. The small-world index increased during the rewiring, as the clustering grew faster than the path length, but then plateaued and slightly decreased before the rewiring algorithm terminated.
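The small-world index used here, the ratio of the global clustering coefficient to the mean shortest-path length, can be sketched directly. The breadth-first-search approach and the dict-of-sets representation are illustrative choices; the sketch assumes a connected graph.

```python
from collections import deque

def small_world_index(adj):
    """Ratio of global clustering to mean shortest-path length for a
    connected graph given as a dict of adjacency sets (illustrative)."""
    def bfs_dists(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist
    n = len(adj)
    total = sum(d for s in adj for d in bfs_dists(s).values())
    avg_path = total / (n * (n - 1))            # mean over ordered pairs
    # global clustering coefficient: 3 * triangles / wedges
    tri = wed = 0
    for v, nbrs in adj.items():
        wed += len(nbrs) * (len(nbrs) - 1) // 2
        tri += sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    cc = tri / wed if wed else 0.0
    return cc / avg_path
```

For a triangle, both the clustering coefficient and the mean path length are 1, so the index is 1.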
\begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Small_World.pdf}
\caption{\textbf{The rewiring algorithm increases clustering faster than it increases path length, creating a small-world network.} We show the effects of optimal ``Swing Toward'' rewiring on 100 replications of Erd\"os-R\'enyi $G_{n,p}$ networks with $n=100$ and $p=.07$.
On the left, the change in both the global clustering coefficient (blue solid line) and the average path length (green dotted line) are plotted.
On the right, the change in the small world index, the ratio of the clustering coefficient and the average path length, is plotted.
Initially, average path length increases along with the clustering coefficient. However, because clustering increases faster than path length, the rewiring process increases the small-world index by several multiples relative to the initial value.
}\label{Fig:path_length_small_world}
\end{figure}
This rewiring procedure can be thought of as the reverse of the classic algorithm for creating a small world network, introduced by Watts and Strogatz in \cite{WattsStrogatz1998}. In that process, the network starts as a lattice and is rewired randomly. Randomizing every edge turns the network into an Erd\"os-R\'enyi network, but randomizing only a small percentage of the edges greatly decreases the network's mean path length while only slightly decreasing the clustering, yielding a small world structure. Figure \ref{Fig:Watts_Strogatz_Inverse:a} shows this behavior on a ring-lattice with 100 nodes and degree of 6. As more edges are randomized, the network's average local clustering and path length both decrease, although at different rates, eventually terminating in a random graph.
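The ring-lattice construction and the random rewiring just described can be sketched as follows. The function names, the adjacency-set representation, and the policy of simply skipping a rewire that would create a self-loop or duplicate edge are assumptions for illustration, not the original Watts-Strogatz implementation.

```python
import random

def ring_lattice(n, k):
    """Ring lattice on n nodes where each node connects to its k nearest
    neighbours (k even), as in the Watts-Strogatz construction."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k // 2 + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    return adj

def randomize_edges(adj, frac, seed=0):
    """Randomly rewire roughly a fraction `frac` of the edges, skipping
    moves that would create self-loops or duplicate edges (sketch)."""
    rng = random.Random(seed)
    nodes = list(adj)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for (u, v) in rng.sample(edges, int(frac * len(edges))):
        w = rng.choice(nodes)
        if w != u and w not in adj[u]:
            adj[u].remove(v); adj[v].remove(u)   # drop old endpoint
            adj[u].add(w);   adj[w].add(u)       # attach to random node
    return adj
```

Each performed rewire preserves the total number of edges, so the randomization, like the rewiring algorithms of this paper, keeps the edge count fixed.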
\begin{figure}[!h]
\centering
\begin{subfigure}[h]{\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{Watts_Strogatz_resize.png}
\caption{Changes in average local clustering (red solid) and path length (blue dashed) relative to the original ring lattice as a greater percentage of the edges of a ring lattice ($n=100$, $k=6$) are randomly rewired.}
\label{Fig:Watts_Strogatz_Inverse:a}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{Watts_Strogatz_Inverse_Clustering_resize.png}
\caption{Changes in average local clustering relative to the original ring lattice as edges are rewired randomly (black dotted) or rewired using Algorithm 1 after various levels of edge randomization: 300 edges (purple solid), 150 (red solid), 75 (green solid), and 50 (blue solid).}
\label{Fig:Watts_Strogatz_Inverse:b}
\end{subfigure}
\begin{subfigure}[h]{\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{Watts_Strogatz_Inverse_Path_Length_resize.png}
\caption{Changes in average path length relative to the original ring lattice as edges are rewired randomly (black dotted) or rewired using Algorithm 1 after various levels of edge randomization: 300 edges (purple solid), 150 (red solid), 75 (green solid), and 50 (blue solid).}
\label{Fig:Watts_Strogatz_Inverse:c}
\end{subfigure}
\caption{\textbf{Rewiring reversed the clustering loss of the Watts-Strogatz algorithm.}
}
\label{Fig:Watts_Strogatz_Inverse}
\end{figure}
The rewiring algorithms introduced here can be thought of as inverting this randomization, creating clustering instead of destroying it. After randomizing different portions of the original lattice, the rewiring algorithms presented above are employed to recreate clustering in the network, moving it away from a random graph, as seen in Figure \ref{Fig:Watts_Strogatz_Inverse:b}. Here, each colored line with arrows pointing left represents starting the rewiring at a different level of randomization. The clustering can end up higher than that of the original lattice, both due to the randomization and because the rewiring altered the network's degree distribution. Because of this, the rewiring algorithm is not a true inverse of the randomization, but it is indeed a reversal of the clustering loss.
As discussed earlier, the rewiring process increases the network's average path length, but when it terminates, the average path length is still lower than that of the original lattice. This can be seen in Figure \ref{Fig:Watts_Strogatz_Inverse:c}. Thus, we see that the rewiring algorithm creates a small world network, with high clustering and low path length relative to the lattice.
\section{Results on Real-World Networks}
\label{sec:real_experiments}
We further applied our algorithm to several real-world networks. These networks were formed from data collected by Traud et al.~and analyzed in \cite{facebookold} and \cite{facebooklong}. They present a snapshot of the Facebook network on a single day in September of 2005. At that time, Facebook was a fairly new entity, initially established as a Harvard-exclusive site called ``The Facebook" in February of 2004 and growing to include many colleges by September of 2005 under its current title of Facebook. At the time the data was collected, Facebook members needed to have a \verb|.edu| email address and thus only students and other members of the college's community, including faculty, were able to join \cite{Boyd_FB,Mayer2008329,Lewis2008330}. We have considered the Facebook networks of the California Institute of Technology (Caltech) and Reed College, two of the 100 American colleges and universities on Facebook at the time. While Facebook allowed for cross-college connections between individuals in 2005, the networks to which we apply our algorithm are the largest connected components of the Caltech and Reed networks excluding cross-college connections.
The Caltech Facebook network, which was the smallest Facebook network in the snapshot, has 762 members (nodes) with 16651 connections (edges) between them and a global clustering coefficient of approximately 0.0971.
As in Section \ref{sec:experiments}, the edges in this network were rewired using Algorithm~\ref{Algone}. Figure~\ref{Fig:clustering_caltech} shows the monotonic increase in the average local clustering coefficient of the Caltech network from the application of this algorithm. A total of 10466 rewires were completed before the algorithm could no longer find an edge to move, resulting in a network that was approximately 62.8\% rewired with a clustering coefficient of 0.230.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.5\textwidth]{clustering_caltech.png}
\caption{\textbf{Rewiring the Caltech Facebook network increases clustering.}}
\label{Fig:clustering_caltech}
\end{figure}
Along with altering the clustering, the rewirings performed in Algorithm~\ref{Algone} affect other network properties, such as the average shortest path-length. The change in average shortest path-length as edges are rewired in the Caltech Network compared to the change in clustering is shown in Figure~\ref{Fig:path_caltech_every100}. Measuring the average shortest path-length across the Caltech network throughout the rewiring process is computationally slow, so network properties were calculated every 100 rewiring steps with a total of 10400 rewiring steps taken before the graph became disconnected. As was seen in the generated examples in Section \ref{sec:experiments}, although the average shortest path-length increases, in the initial rewiring steps it does so much more slowly than the average local clustering coefficient. Similarly to earlier, when Algorithm~\ref{Algone} is used to rewire a small percentage of the edges in the network, the average local clustering coefficient increases significantly while other network properties are minimally changed.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.5\textwidth]{pathlength_caltech_every100.png}
\caption{\textbf{Clustering (green) and path length (blue) of Caltech Facebook network with network properties being calculated every 100 rewiring steps.} }
\label{Fig:path_caltech_every100}
\end{figure}
A second social network examined was the Reed College Facebook network. This network has 962 members with 18812 connections between them and a global clustering coefficient of approximately 0.0736, slightly lower than that of the Caltech network.
Figure~\ref{Fig:clustering_reed} shows how the clustering of the Reed College network changed when Algorithm~\ref{Algone} was applied and an increasing number of edges were rewired. A total of 14762 rewires were completed before the algorithm could no longer find an edge to move, resulting in a network that was approximately 78.5\% rewired with a clustering coefficient of 0.197.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.5\textwidth]{clustering_reed.png}
\caption{\textbf{Rewiring the Reed College Facebook network increases clustering.}}
\label{Fig:clustering_reed}
\end{figure}
The same treatment given to the Caltech network can be applied to the Reed network, computing the average shortest path-length every 500 rewires due to time considerations, until a rewire can no longer be completed. These results are shown in Figure~\ref{Fig:path_reed_every500}. As seen in Figure~\ref{Fig:path_caltech_every100}, in the early stages of rewiring the average local clustering increases much more quickly than the average shortest path-length, again increasing the ``small-worldness'' of the network.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.5\textwidth]{pathlength_clustering_reed_every500.png}
\caption{\textbf{Clustering (green) and path length (blue) of Reed Facebook network with network properties being calculated every 500 rewiring steps.} }
\label{Fig:path_reed_every500}
\end{figure}
It may seem surprising that both of these networks were able to rewire more than half (and in the case of the Reed network, almost 80\%) of their edges. Yet, it is important to note that these graphs likely reflect incomplete social networks due to the novelty of Facebook at the time \cite{facebooklong,Boyd_FB}. This would leave many missing connections between close friends, allowing the algorithm to exploit the missing links, rearrange edges, and increase clustering.
\section{Concluding Remarks}
\label{sec:conclusions}
We have introduced a new set of algorithms for rewiring edges in a network with the goal of maximally increasing the global clustering coefficient while minimally impacting other network properties, with a focus on preserving the degree distribution and the average degree. Several variations of this algorithm were implemented, including one that preserves the degree sequence of the original network. We proved that the algorithms strictly increase the global clustering coefficient of a network, provided examples of local optima under Algorithm \ref{Algone}, and discussed the time complexity of running the algorithms.
We ran the algorithms on a number of Erd\"os-R\'enyi ($G_{n,p}$) networks to examine how the clustering coefficient increased both as a function of the number of nodes in the network (holding the edge density constant) and as a function of the number of edges rewired. We also examined how these were affected by the exact variation of the algorithm used. Other experiments examined how the increase in the average path length or the average local clustering coefficient compared to that of the global clustering coefficient. Our experiments corroborate the theoretical findings that the algorithms presented strictly increase the global clustering coefficient. They also show the average local clustering coefficient increases, though not necessarily monotonically.
Additionally, the simulation experiments demonstrate that the rewiring procedures create a small world. This is essentially the reverse of the process that Watts and Strogatz used when they introduced the small world concept \cite{WattsStrogatz1998}. They started with a regular lattice and rewired edges randomly until the network was fully random. Along the way the path length decreased faster than the clustering decreased, and so there was a period during the rewiring in which clustering was high and path length was low: a small world. Here, our rewiring procedures are doing the reverse by starting with a randomized network and deliberately rewiring edges to create a more regular network. And again, along the way a small world is created. Thus, the rewiring algorithms introduced here give a reciprocal story to how to build a small world network, given a fixed number of nodes and edges. These rewiring algorithms may be useful to those seeking to grow small world networks in engineered physical systems, or to explain how naturally-occurring physical systems construct themselves into small worlds.
We also ran Algorithm \ref{Algone} on several networks from a single-time snapshot of Facebook data \cite{facebookold,facebooklong}. The results from these experiments indicate that the real-world networks behave similarly to the Erd\"os-R\'enyi networks in that, while both clustering and average shortest path-length increase as the number of rewirings increases, when only a small number of edges are rewired the average local clustering increases significantly more quickly than the average shortest path-length.
Future work includes determining whether using approximations of the initial values of $A^2$ and $N(x,y)$ (the most computationally expensive element of the algorithm) affect the performance of the algorithm. One aspect of these algorithms which, for the sake of brevity, was not discussed in this paper is that of applications where these algorithms may be deployed. These include areas such as community detection and graph partitioning. Future work concerns research into these applications. Another important aspect of future work is determining, both experimentally and theoretically, the ideal number of edges which should be rewired for various applications in which modifying a network to increase its clustering coefficient is desirable.
\section*{Acknowledgments}
J.A. was supported by the SUTD-MIT Postdoctoral Programme. The work of C.K. was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
\bibliographystyle{siam}
\section{Introduction}
Neural networks, especially deep networks \cite{HintonOsinderoTeh2006,LeCunBengioHinton2015}, have attracted a lot of attention recently due to their superior performance in several machine learning tasks such as face recognition, image understanding and language interpretation. The applications of neural networks go far beyond the artificial intelligence domain, stretching to autonomous driving systems \cite{AngelovaKrizhevskyVanhoucke2015,KornhauserXiao2015}, pharmaceutical research \cite{UnterthinerMayrKlambauerSteijaertWegnerCeulemansHochreiter2015a,UnterthinerMayrKlambauerSteijaertWegnerCeulemansHochreiter2015}, and neuroscience \cite{BernikerKording2015,DoshiKiraWagner2015,SukShenInitiativeothers2015,WangMalaveCipollini2015,YaminsCohenHongKanwisherDiCarlo2015}, among others. Because of their usefulness and tremendous application potential, open source software packages have been made available for research, such as Caffe \cite{JiaShelhamerDonahueKarayevLongGirshickGuadarramaDarrell2014,TurchenkoLuczak2015} and Theano \cite{BergstraBreuleuxBastienLamblinPascanuDesjardinsTurianWarde-FarleyBengio2010}. Furthermore, there are even efforts to build integrated circuits for neural networks \cite{HammerstromNarayanan2015,QiaoMostafaCorradiOsswaldStefaniniSumislawskaIndiveri2015,Service2014}.
Evolving from the simplest perceptron \cite{Rosenblatt1957} to the most sophisticated deep learning neural networks \cite{LeCunBengioHinton2015}, the basic structure of the most widely used neural networks remains almost the same, i.e. hierarchical layers of computing units (called neurons) with feed-forward information flow from one layer to the next \cite{Bishop1995}. Although there is no restriction on how the neurons should be arranged spatially, traditionally they all line up in a row or a column, just like elements in a vector. The benefits of this are apparently the ease of visualisation of networks as well as the convenience of deriving the mathematical formulation of information flow. As a consequence, vectors are naturally the inputs for neural networks. This special structure requires non-vectorial inputs, especially matrices (e.g. images), to be converted into vectors. The usual way of vectorising a matrix or multi-mode tensor is simply concatenating rows or columns into a long vector in the matrix case, or flattening everything to one dimension in the tensor case. We are mostly interested in matrices and therefore restrict our discussion to matrices from now on. Unfortunately this process can be problematic. Firstly, the spatial information among elements of the data may be lost during vectorisation. Images, especially natural images, have very strong spatial correlations among pixels, and any sort of vectorisation will certainly result in the loss of such correlation. Moreover, the interpretability is heavily compromised. This renders the neural networks as ``black boxes'': what is going on inside the network is not interpretable by a human operator, as the information encoded in the parameters or neurons deviates from the very beginning from the form we would normally perceive, if we take images as an example. Secondly, the solution space becomes very large, which demands very special treatment of the network parameters. There are many adverse effects.
First, the chance of reaching a meaningful local minimum is reduced due to the large domain of sub-optimal solutions. Second, the success of training relies heavily on human intervention: pretraining, special initialisation, juggling the parameters of optimisation algorithms and so on. This situation becomes even worse as the depth of the network grows. This is the well known model complexity versus learning capacity dilemma \cite{Vapnik1995}. Third, if the spatial information among elements in matrices is to be utilised by the network, one has to resort either to a specially designed connection configuration among neurons, where possible, or to priors on the network parameters as regularisation, which may cripple back propagation based optimisation because spatial connection means coupling. For large scale problems, e.g. big data, this may not be viable at all. Fourth, the computational cost is very high, which requires massive computation platforms.
To address the issues discussed above, we propose matrix neural networks, or MatNet for short, which take matrices directly as inputs. Therefore the input layer neurons form a matrix; for example, each neuron corresponds to a pixel in a grey scale image. The upper layers are also, but not limited to, matrices. This is an analogy to the neurons in the retina sensing visual signals, which are organised in layers of matrix-like formation \cite{Rodieck1973}. It is worth pointing out that the convolutional neural network (ConvNet) \cite{LeDenkerHendersonHowardHubbardJackel1990,LeCunBottouBengioHaffner1998} also works on images (matrices) directly. However, the major difference between ConvNet and MatNet is that ConvNet's input layers are feature extraction layers consisting of filtering and pooling, and its core is still the traditional vector based neural network, while in MatNet matrices pass through each layer without any vectorisation at all. To achieve this, each neuron in MatNet senses summarised information through a bilinear mapping of the immediately previous layer's outputs plus an offset term. The neuron then activates according to the pre-specified activation function, e.g. sigmoid, tanh, or rectified linear unit (ReLU) \cite{NairHinton2010}, to generate its output for the next layer. It is exactly the same as in the classic feed forward neural networks. Obviously the bilinear mapping is the key to preserving the matrix structure. It is also the key to the applicability of simple back propagation to train the network. This will become very clear after we formulate the MatNet model in the next section. In order not to disturb the flow, we leave the derivation of the gradients to the appendix, where interested readers can find the details.
To demonstrate the usefulness of the proposed MatNet, we test it on two image processing tasks: the well-known MNIST handwritten digit classification and image super resolution. For digit classification, it is just a direct application of MatNet to normalised images with given class labels, where MatNet acts as a classifier. However, for image super resolution, MatNet needs some adaptation, i.e. an ``add-on'' to accommodate multimodal inputs. As we will show in Section \ref{sec:mmMatNet}, this process is straightforward, with great potential to embrace other modalities such as natural language for image understanding \cite{YangLiFermullerAloimonos2015} and automated caption generation \cite{UshikuYamaguchiMukutaHarada2015}. As shown in Section \ref{sec:exp}, MatNet can achieve a classification rate comparable to those of sophisticated deep learning neural networks. We need to point out that MatNet is not optimised for this task and the choices of the key network parameters, such as the number of layers and neurons, are somewhat arbitrary. Surprisingly, for the super resolution task, MatNet already achieves superior results in terms of peak signal to noise ratio (PSNR) compared to state-of-the-art methods such as sparse representation (SR) \cite{YangWrightHuangM2010}. Once again, this result can be further optimised, and we will discuss some further developments to be carried out in the near future in Section \ref{sec:discussion}.
\section{Matrix Neural Network Model}\label{sec:mnnetmodel}
The basic model of a layer of MatNet is the following bilinear mapping
\begin{align}
Y = \sigma (UXV^T+B) + E, \label{eq:mnnetmodelmat}
\end{align}
where $U$, $V$, $B$ and $E$ are matrices with compatible dimensions, $U$ and $V$ are connection weights, $B$ is the offset of current layer, $\sigma(\cdot)$ is the activation function acting on each element of matrix and $E$ is the error.
\subsection{Network Structure}
The MatNet consists of multiple layers of neurons in the form of \eqref{eq:mnnetmodelmat}. Let $X^{(l)}\in\mathbb{R}^{I_l\times J_l}$ be the matrix variable at layer $l$, where $l = 1, 2, \ldots, L, L+1$. Layer 1 is the input layer that takes matrix input directly and Layer $L+1$ is the output layer. All the other layers are hidden layers. Layer $l$ is connected to Layer $l+1$ by
\begin{align}
X^{(l+1)} = \sigma ( U^{(l)} X^{(l)} V^{(l)T} + B^{(l)}). \label{eq:MatNet-2}
\end{align}
where $B^{(l)}\in\mathbb{R}^{I_{l+1}\times J_{l+1}}$, $U^{(l)}\in\mathbb{R}^{I_{l+1}\times I_l}$ and $V^{(l)}\in\mathbb{R}^{J_{l+1}\times J_l}$, for $l = 1, 2, ..., L-1$. For the convenience of explanation, we define
\begin{align}
N^{(l)} = U^{(l)} X^{(l)} V^{(l)T} + B^{(l)} \label{eq:MatNet-3}
\end{align}
for $l=1, 2, ..., L$. Hence \[ X^{(l+1)} = \sigma (N^{(l)}).
\]
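The hidden-layer propagation above is a single bilinear map followed by an elementwise nonlinearity, which can be sketched in a few lines of numpy. The sigmoid activation is chosen here purely for illustration; any of the activations mentioned earlier could be substituted.

```python
import numpy as np

def matnet_layer(X, U, V, B):
    """One MatNet hidden layer: X^{(l+1)} = sigma(U X V^T + B).
    X: (I_l, J_l), U: (I_{l+1}, I_l), V: (J_{l+1}, J_l), B: (I_{l+1}, J_{l+1}).
    Uses a sigmoid activation for illustration."""
    N = U @ X @ V.T + B                 # the pre-activation N^{(l)}
    return 1.0 / (1.0 + np.exp(-N))     # elementwise sigma
```

Note that the output shape $(I_{l+1}, J_{l+1})$ is set entirely by the shapes of $U$ and $V$, which is how MatNet reshapes matrices between layers.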
The shape of the output layer is determined by the functionality of the network, i.e. regression or classification, which in turn determines the connections from Layer $L$. We discuss the following three cases.
\begin{itemize}
\item Case 1: Normal regression network. The output layer is actually a matrix variable as $O = X^{(L+1)}$. The connection between layer $L$ and the output layer is defined as \eqref{eq:MatNet-2} with $l = L$.
\item Case 2: Classification network I. The output layer is a multiple-label (0-1) vector $\mathbf o = (o_1, ..., o_K)$ where $K$ is the number of classes. In $\mathbf o$, all elements are 0 except for a single 1. The final connection is then defined by
\begin{align}
o_k = \frac{\exp(\mathbf u_k X^{(L)} \mathbf v^T_k + tb_k)}{\sum^K_{k'=1} \exp(\mathbf u_{k'} X^{(L)} \mathbf v^T_{k'} + tb_{k'})}, \label{eq:MatNet-4}
\end{align}
where $k=1,2,...,K$, $\overline{U} = [\mathbf u^T_1, ...., \mathbf u^T_K]^T \in\mathbb{R}^{K\times I_L}$ and $\overline{V} = [\mathbf v^T_1, ...., \mathbf v^T_K]^T\in\mathbb{R}^{K\times J_L}$. That is both $\mathbf u_k$ and $\mathbf v_k$ are rows of matrices $\overline{U}$ and $\overline{V}$, respectively. Similar to \eqref{eq:MatNet-3}, we denote
\begin{align}
n_k = \mathbf u_k X^{(L)} \mathbf v^T_k + tb_k.\label{eq:MatNet-5}
\end{align}
Eq.~\eqref{eq:MatNet-4} is the softmax function frequently used in logistic regression \cite{HosmerLemeshowSturdivant2013}. Note that in \eqref{eq:MatNet-4}, the matrix form is maintained. However, one can flatten the matrix for the output layer, leading to the third case.
\item Case 3: Classification network II. The connection of Layer $L$ to the output layer can be defined as follows
\begin{align}
&N^{(L)}_{k} = \text{vec}(X^{(L)})^T \overline{\mathbf u}_k + tb_k \label{eq:MatNet-6}\\
&o_{k} = \frac{\exp(N^{(L)}_{k})}{\sum^K_{k'=1}\exp(N^{(L)}_{k'})} \label{eq:MatNet-7}
\end{align}
where vec$()$ is the vectorisation operation on a matrix and $\overline{\mathbf u}_k$ is a column vector of compatible length. This makes Case 2 a special case of Case 3.
\end{itemize}
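A minimal sketch of the Case 2 output \eqref{eq:MatNet-4}, with a max-shift added for numerical stability (an implementation detail not discussed above); all shapes are illustrative.

```python
import numpy as np

def matnet_softmax(X, U_bar, V_bar, tb):
    """Case 2 output: o_k proportional to exp(u_k X v_k^T + tb_k), where
    u_k and v_k are the k-th rows of U_bar (K x I_L) and V_bar (K x J_L)."""
    n = np.einsum('ki,ij,kj->k', U_bar, X, V_bar) + tb  # all n_k at once
    n = n - n.max()                                     # numerical stability
    e = np.exp(n)
    return e / e.sum()

rng = np.random.default_rng(1)
K, I, J = 10, 16, 16
o = matnet_softmax(rng.standard_normal((I, J)),
                   rng.standard_normal((K, I)),
                   rng.standard_normal((K, J)),
                   np.zeros(K))
print(o.shape, np.isclose(o.sum(), 1.0))  # (10,) True
```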
Assume that we are given a training dataset $\mathcal{D} = \{(X_n, Y_n)\}^N_{n=1}$ for regression, or $\mathcal{D} = \{(X_n, \mathbf t_n)\}^N_{n=1}$ for classification. We then define the following loss functions.
\begin{itemize}
\item Case 1: The regression loss function is defined as
\begin{align}
L = \frac1N\sum^N_{n=1}\frac12 \| Y_n - X^{(L+1)}_n\|^2_F. \label{eq:MatNet-8}
\end{align}
\item Cases 2\&3: The classification cross-entropy loss function is defined as
\begin{align}
L = -\frac1N\sum^N_{n=1} \sum^K_{k=1} t_{nk} \log (o_{nk}). \label{eq:MatNet-9}
\end{align}
\end{itemize}
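Both loss functions are straightforward to express; the following sketch (with hypothetical shapes) assumes the network outputs and targets are stacked along a leading sample axis.

```python
import numpy as np

def regression_loss(Y, X_out):
    """Mean over samples of (1/2)||Y_n - X_n^{(L+1)}||_F^2.
    Y, X_out have shape (N, I, J)."""
    return 0.5 * np.mean(np.sum((Y - X_out) ** 2, axis=(1, 2)))

def cross_entropy_loss(T, O, eps=1e-12):
    """-(1/N) sum_n sum_k t_nk log(o_nk); T, O have shape (N, K),
    rows of T are one-hot."""
    return -np.mean(np.sum(T * np.log(O + eps), axis=1))

# Perfect predictions give (near-)zero loss.
Y = np.ones((4, 3, 3))
print(regression_loss(Y, Y))                 # 0.0
T = np.eye(4)
print(abs(cross_entropy_loss(T, T)) < 1e-9)  # True
```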
Note that the choice of cost function is made mainly for convenience of implementation. In fact, MatNet is open to any other cost function as long as the gradient with respect to the unknown variables can be easily obtained.
From Eq.~\eqref{eq:MatNet-2} we can see that the matrix form is preserved in the information passing right from the input layer. By choosing the shapes of $U^{(l)}$, $V^{(l)}$ and $B^{(l)}$ accordingly, one can reshape the matrices in the hidden layers. In traditional neural networks with vector input, Eq.~\eqref{eq:MatNet-2} becomes
\begin{equation}\label{eq:vnnet1}
\vc x^{(2)} = \sigma(W^{(1)}\text{vec}(X^{(1)}) + \vc b^{(1)})
\end{equation}
where $\vc x^{(2)}$ and $\vc b^{(1)}$ are column vectors of compatible lengths. If we vectorise the first hidden layer of MatNet, we obtain
\begin{equation}\label{eq:vectorisedMatNet}
\text{vec}(X^{(2)}) = \sigma(({V^{(1)}}^\top\otimes U^{(1)})\text{vec}(X^{(1)}) + \text{vec}(B^{(1)})),
\end{equation}
where $A\otimes B$ is the Kronecker product between matrices $A$ and $B$, and we have used the identity
\[
\text{vec}(A X B) = (B^T\otimes A)\text{vec}(X).
\]
It is clear that by choosing $W^{(1)}$ in a traditional neural network such that $W^{(1)} = {V^{(1)}}^\top\otimes U^{(1)}$, one can mimic MatNet, and the same holds for the other layers. Therefore, MatNet is a special case of traditional neural networks. However, ${V^{(l)}}^\top\otimes U^{(l)}$ has significantly fewer degrees of freedom than $W^{(l)}$, i.e. $I_{l+1}I_l + J_{l+1}J_l$ vs. $I_{l+1}I_lJ_{l+1}J_l$. The reduction of the solution space brought about by the bilinear mapping in Eq.~\eqref{eq:MatNet-2} is apparent. The resulting advantages include a less costly training process, fewer local minima, easier handling and, most of all, a direct and intuitive interpretation. The first three follow immediately from the shrunk solution space. The improved interpretability comes from the fact that $U^{(l)}$ and $V^{(l)}$ work directly on the matrices, which normally correspond to input images. Therefore, the roles of $U^{(l)}$ and $V^{(l)}$ become clearer, namely linear transformations applied to matrices. This connects MatNet to matrix and tensor factorisation algorithms such as principal component analysis \cite{HopkinsShiSteurer2015,PaateroTapper1994,ZhouCichockiXie2012}, broadening the understanding of MatNet.
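The identity, and the resulting gap in degrees of freedom, are easy to verify numerically (illustrative shapes; vec denotes column stacking).

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 2))
# vec() below is the usual column-stacking vectorisation (Fortran order).
lhs = (A @ X @ B).reshape(-1, order='F')           # vec(A X B)
rhs = np.kron(B.T, A) @ X.reshape(-1, order='F')   # (B^T kron A) vec(X)
print(np.allclose(lhs, rhs))  # True

# Degrees of freedom of the factored map vs. a dense W for one layer,
# e.g. mapping a 28x28 input to a 20x20 hidden matrix:
I1, J1, I2, J2 = 28, 28, 20, 20
print(I2 * I1 + J2 * J1, 'vs', I2 * I1 * J2 * J1)  # 1120 vs 313600
```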
\subsection{Optimisation}
We first collect all the unknown variables, i.e. the network parameters of each layer: $U^{(l)}$, $V^{(l)}$, $B^{(l)}$ for $l=1,\ldots,L$, and $\overline{\mathbf u}_k$ and $tb_k$ for the output layer. Write the parameters of layer $l$ as $\Theta^{(l)}$. From Eq.~\eqref{eq:MatNet-2} one can easily see that information is passed in exactly the same way as in traditional feed-forward neural networks. The underlying mechanism is the bilinear mapping in \eqref{eq:MatNet-3}, which preserves the matrix form throughout the network. This suggests that the optimisation used in traditional neural networks, i.e. the combination of back propagation (BP) and gradient descent, can be used for MatNet. All we need is the derivative of the cost function w.r.t.\ $\Theta^{(l)}$, which can be passed backwards through the network.
Since we have proposed both regression and classification network models, the derivatives differ slightly in the two cases due to the different cost functions, while the back propagation is exactly the same. The details of the gradients and back propagation are given in the appendix for better flow of the paper. Once the gradients are computed, any gradient descent algorithm, such as the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method \cite{GaoReynoldsothers2004}, can readily be used to find a sub-optimum given an initialisation. Normally, the network is initialised with random numbers to break symmetry. When a MatNet has 3 layers, this strategy is good enough. However, if MatNet contains many layers, i.e. forms a deep network, the complexity of the model increases drastically. It then requires more training samples, and some constraints will be helpful for faster convergence or a better solution.
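The following toy sketch illustrates the training idea in miniature: it fits a single bilinear map with SciPy's L-BFGS implementation, using finite-difference gradients in place of the back-propagated gradients derived in the appendix; the problem instance is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Recover U in Y = U X V^T (with X, V known) by minimising a Frobenius-norm
# loss with L-BFGS; scipy approximates the gradient by finite differences
# here, whereas MatNet proper supplies gradients via back propagation.
rng = np.random.default_rng(2)
X = rng.standard_normal((5, 4))
V = rng.standard_normal((2, 4))
U_true = rng.standard_normal((3, 5))
Y = U_true @ X @ V.T

def loss(u_flat):
    U = u_flat.reshape(3, 5)
    return 0.5 * np.sum((Y - U @ X @ V.T) ** 2)

res = minimize(loss, np.zeros(15), method='L-BFGS-B')
print(res.fun < 1e-4)  # True: the convex subproblem is solved to tolerance
```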
\subsection{Regularisation}
Although MatNet already reduces the solution space heavily by using the bilinear mapping in \eqref{eq:MatNet-3}, some techniques routinely used in traditional neural networks can still be applied to further constrain the solution towards a desired pattern. The first is weight decay, i.e. clamping the size of the weights on the connections, mainly $U^{(l)}$ and $V^{(l)}$. Normally we use the Frobenius norm of a matrix for this purpose, that is, we incorporate the term
\[
\lambda \sum_l(\|U^{(l)}\|_F^2+\|V^{(l)}\|_F^2),
\]
where $\lambda$ is a nonnegative regularisation parameter and the summation of Frobenius norms includes the output layer as well.
One may immediately think of a sparsity constraint on the weights to cut off some connections between layers, similar to DropConnect \cite{WanZeilerZhangCunFergus2013}. It turns out that it is not trivial to incorporate sparsity constraints expressed by sparsity-encouraging norms such as the $\ell_1$ norm favoured in sparse regression \cite{Tibshirani1996}. The dropping in \cite{WanZeilerZhangCunFergus2013} is implemented by a 0/1 mask sampled from a Bernoulli distribution. Here we discuss another type of sparsity which is much easier to incorporate into MatNet. This is the situation where we have an oversupply of neurons in the hidden layers. In this case, the neural network may be able to discover interesting structure in the data with fewer neurons.
Recall that $X^{(l)}_n$ in \eqref{eq:MatNet-2} denotes the activation at hidden layer $l$ of the network for sample $n$. Let
\begin{align}
\overline{\rho}^{(l)} = \frac1N\sum^N_{n=1} X^{(l)}_n
\end{align}
be the average activation of hidden layer $l$ (averaged over the training set).
By (approximately) enforcing the elementwise constraint
\[
\overline{\rho}^{(l)}_{ij} = \rho,
\]
one can achieve sparsity, reducing the number of active neurons \cite{ShuFyshe2013}. Therefore, $\rho$ is called a sparsity parameter, typically a small value close to zero, e.g. $\rho = 0.05$. In words, the constraint requires the average activation of each hidden neuron to be close to a small given value. To satisfy this constraint, some hidden units' activations must be close to 0.
To implement the above equality constraint, we need a penalty term penalising elements of $\overline{\rho}^{(l)}$ that deviate significantly from $\rho$. The deviation is quantified as follows, akin to the Kullback-Leibler divergence or entropy \cite{CoverThomas2006}:
\begin{align}
R_l = \text{sum}\left( \rho \log\frac{\rho}{\overline{\rho}^{(l)}} + (1-\rho)\log\frac{1-\rho}{1-\overline{\rho}^{(l)}}\right) \label{eq:rl}
\end{align}
where $\text{sum}(M)$ sums over all the elements of matrix $M$, and $\log$ and $/$ are applied to matrices elementwise.
To screen out unnecessary neurons, we add the following extra term to the cost function of MatNet
\[
\beta\sum^{L}_{l=2} R_l.
\]
The gradient of this term is detailed in the appendix.
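The penalty $R_l$ in \eqref{eq:rl} can be sketched directly (the target value and shapes below are illustrative).

```python
import numpy as np

def kl_sparsity_penalty(rho_bar, rho=0.05):
    """Elementwise KL-type deviation of the average activations rho_bar
    from the target rho, summed over all hidden units (the term R_l)."""
    return np.sum(rho * np.log(rho / rho_bar)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_bar)))

# The penalty vanishes when every unit's average activation hits the target,
# and grows as units become more active than desired.
print(kl_sparsity_penalty(np.full((4, 4), 0.05)))     # 0.0
print(kl_sparsity_penalty(np.full((4, 4), 0.5)) > 0)  # True
```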
\begin{comment}
Other regularisations can be considered in MatNet. In particular we want to mention the manifold constraint on connection weights proposed in \cite{BadrinarayananMishraCipolla2015} proven to be useful when extending MatNet to deep architectures, i.e. with many hidden layers which may include max-pooling and sub-sampling layers. The idea is to restrict the connection weights, i.e. $U^{(l)}$ and $V^{(l)}$ in MatNet, to be always on unit-norm manifold. This constraint helps improve accuracy of the network effectively.
\end{comment}
\section{Multimodal Matrix Neural Networks}\label{sec:mmMatNet}
Having established the basics of MatNet in the above discussion, we now proceed to extend MatNet to the multimodal case for the image super resolution application. The extension is as straightforward as including more than one input matrix at the input layer at the same time. Conceptually, we have more than one input layer standing side by side for the different modalities, and they all send information to shared hidden layers through separate connections \cite{NgiamKhoslaKimNamLeeNg2011}. It turns out that for super resolution a three-layer MatNet is sufficient, i.e. input layer, hidden layer and output layer, and it works in autoencoder \cite{HintonSalakhutdinov2006} mode, meaning a regression MatNet reproducing the input at the output layer. This requires that the output layer has the same number of modalities as the input layer. Although we showcase only a three-layer regression multimodal MatNet, it is not difficult to extend to other types of multimodal MatNet with multiple hidden layers using the same methodology.
Assume $D$ modalities, given as the matrices $X^j \in \mathbb{R}^{K_{j1}\times K_{j2}}$ ($j=1,2,\ldots,D$). Similarly, there are $D$ output matrix variables of the same sizes. Denote $\mathcal{X} = (X^1, \ldots, X^D)$. In the hidden layer, we have only one matrix variable $H \in \mathbb{R}^{K_1\times K_2}$. The transformation from input layer to hidden layer is defined by the following multiple bilinear mapping with activation function $\sigma$ (sigmoid or any other activation function)
\begin{align}
H = \sigma(\sum^D_{j=1} U_j X^j V^T_j + B) \label{eq:MatNetmm-1}
\end{align}
and from hidden layer to output layer by
\begin{align}
\widehat{X}^j = \sigma(R_j H S^T_j + C_j), \;\; j = 1, 2, ..., D. \label{eq:MatNetmm-2}
\end{align}
We call $H$ the encoder for the data $\mathcal{X}$. For a given set of training data $\mathcal{D} = \{\mathcal{X}_i\}^N_{i=1}$ with $\mathcal{X}_i = (X^1_i, \ldots, X^D_i)$, the corresponding hidden variable is denoted by $H_i$. The objective function to be minimised for training a MatNet autoencoder is defined by
\begin{align}
L = \frac1{2N}\sum^N_{i=1} \sum^D_{j=1}\| \widehat{X}^j_i - X^j_i\|^2_F. \label{eq:MatNetmm-3}
\end{align}
$L$ is a function of all the parameters $W = \{U_j, V_j, R_j, S_j, C_j, B\}^D_{j=1}$.
We leave the derivation of the gradients of the multimodal MatNet autoencoder to the appendix. It is very similar to that of the original MatNet, and therefore the same BP scheme can be utilised for optimisation.
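A minimal sketch of the multimodal forward pass \eqref{eq:MatNetmm-1}--\eqref{eq:MatNetmm-2} and the loss \eqref{eq:MatNetmm-3} for a single example (all shapes hypothetical).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mm_autoencode(Xs, Us, Vs, B, Rs, Ss, Cs):
    """Encode D modality matrices into one hidden matrix H, then decode
    one reconstruction per modality."""
    H = sigmoid(sum(U @ X @ V.T for U, X, V in zip(Us, Xs, Vs)) + B)
    return H, [sigmoid(R @ H @ S.T + C) for R, S, C in zip(Rs, Ss, Cs)]

def mm_loss(Xs, X_hats):
    """Squared Frobenius reconstruction error for a single example."""
    return 0.5 * sum(np.sum((Xh - X) ** 2) for X, Xh in zip(Xs, X_hats))

rng = np.random.default_rng(4)
D, K1, K2 = 2, 3, 3                       # two 4x4 modalities, 3x3 hidden
Xs = [rng.random((4, 4)) for _ in range(D)]
Us = [rng.standard_normal((K1, 4)) for _ in range(D)]
Vs = [rng.standard_normal((K2, 4)) for _ in range(D)]
Rs = [rng.standard_normal((4, K1)) for _ in range(D)]
Ss = [rng.standard_normal((4, K2)) for _ in range(D)]
Cs = [np.zeros((4, 4)) for _ in range(D)]
H, X_hats = mm_autoencode(Xs, Us, Vs, np.zeros((K1, K2)), Rs, Ss, Cs)
print(H.shape, len(X_hats), X_hats[0].shape)  # (3, 3) 2 (4, 4)
```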
\section{Experimental Evaluation}\label{sec:exp}
In this section, we apply MatNet to MNIST handwritten digit classification and image super resolution. The network settings are somewhat arbitrary; in other words, we did not optimise the number of layers or the number of neurons in each layer in these tests. For handwritten digit recognition, MatNet was configured as a classification network, i.e. the output layer was a vector of softmax functions as in Eqs.~\eqref{eq:MatNet-6} and \eqref{eq:MatNet-7} of length 10 (for 10 digits). For illustration purposes, we selected a simple MatNet. It contained 2 hidden layers, with $20\times20$ and $16\times16$ neurons respectively. As the numbers of layers and neurons were very conservative, we turned off the sparsity constraint as well as weight decay. For the super resolution task, the only hidden layer was of size $10\times10$, resulting in a 3-layer MatNet. The activation function in both networks was the sigmoid.
\subsection{MNIST Handwritten Digits Classification}
The MNIST handwritten digits database is available at \url{http://yann.lecun.com/exdb/mnist/}. The entire database contains 60,000 training samples and 10,000 testing samples; each digit is a $28\times 28$ gray scale image. We used all training samples for modeling and tested on all testing samples. Figure \ref{fig:weightsmnist} shows the weights $U^{(l)}$ and $V^{(l)}$ and the bias $B^{(l)}$ in the hidden layers.
Figure \ref{fig:MatNetforMNISTHLoutput} shows the first 100 test digits and the hidden layer outputs. Checkerboard effects can be seen in the hidden layer output in Figure \ref{fig:MatNetforMNISTHLoutput}(b). The final test accuracy is 97.3\%, i.e. an error rate of 2.7\%, which is inferior to the best MNIST performance by DropConnect with an error rate of 0.21\%.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{weightsmnist.pdf}\\
\caption{Weights and bias learnt by MatNet classifier.}\label{fig:weightsmnist}
\end{figure}
\begin{figure}
\centering
\subfloat[First 100 test digits.]{\includegraphics[width=0.3\linewidth]{mnist100testdigits}}\label{fig:mnist100testdigits}
\subfloat[Hidden layer 1 output.]{\includegraphics[width=0.3\linewidth]{hl1mnist100testdigits}}\label{fig:hl1mnist100testdigits}
\subfloat[Hidden layer 2 output.]{\includegraphics[width=0.3\linewidth]{hl2mnist100testdigits}}\label{fig:hl2mnist100testdigits}
\caption{Hidden layer output of MatNet for MNIST dataset. }\label{fig:MatNetforMNISTHLoutput}
\end{figure}
However, as we stated earlier, MatNet has much lower computational complexity. To see this clearly, we carried out a comparison between MatNet and a ``plain'' convolutional neural network (CNN), i.e. a CNN without all sorts of ``add-ons''. The CNN consisted of two convolutional layers of size $20\times1\times5\times5$ and $50\times20\times5\times5$, one of which is followed by a $2\times2$ max pooling, and then a fully connected hidden layer of 500 neurons and an output layer of 10. This is the structure used in the Theano \cite{Al-RfouAlainAlmahairiEtAl2016} demo. The total number of parameters to optimise is 430,500, while the total number of parameters in MatNet is 5,536. The server ran a 6-core i7 3.3GHz CPU with 64GB memory and an NVIDIA Tesla K40 GPU card with 12GB memory. We used Theano for the CNN, which fully utilised the GPU. In contrast, MatNet is implemented in Matlab without any parallel computing techniques. The difference in training time is astounding. It took the server more than 20 hours to train the CNN, with a final test accuracy of 99.07\%, versus less than 2 hours for MatNet, with a test accuracy of 97.3\%, i.e. 1.77\% worse. In order to see whether MatNet can approach this CNN's performance in terms of accuracy, we varied the structure of MatNet both in the number of neurons in each layer and in the number of layers (depth). However, we limited the depth to a maximum of 6, as we did not consider deep structures for the time being. Due to the randomness of the stochastic gradient descent employed in MatNet, we ran each structure multiple times and collected the test accuracies. Fig.~\ref{fig:matnetvscnn} shows the performance of the different MatNets compared against the CNN. The model complexity is rendered as the number of parameters in the model, which is the horizontal axis of the plot. As MatNet gets more complex, it approaches the CNN steadily. Fig.~\ref{fig:matnetmulti} shows some statistics of all the tested MatNets, where the depth is also included.
The bar plots are mainly histograms of given pairs of variables. The diagonal panels are densities for the corresponding variables; for example, the bottom right one is the test accuracy density, which shows that the majority of MatNets achieved more than 98\% accuracy. The two bottom left panels show scatter plots of accuracy against depth and number of parameters. The two panels on the top right summarise these as box plots, which are more informative. They show that the most complex models are not necessarily the best models on average. The best model (with the highest test accuracy) is the one with depth 4, i.e. two hidden layers of $160\times160$ neurons each and 316,160 parameters in total, which achieved 98.48\% accuracy, very close to that of the CNN, despite the fact that MatNet has not been optimised in almost any aspect, such as the optimisation strategy. This implies that MatNet has the potential to match the performance of CNNs with future effort, with foreseeable great savings in computation.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{MatNetvsCNN.pdf}\\
\caption{Test accuracy of MatNet vs CNN}\label{fig:matnetvscnn}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{MatNetstats.pdf}\\
\caption{Some statistics of MatNet in this experiment. }\label{fig:matnetmulti}
\end{figure*}
\subsection{Image Super Resolution}
For image super resolution, we use the multimodal MatNet detailed in Section \ref{sec:mmMatNet}. The training proceeds as follows. From a set of high resolution images, we downsample by bicubic interpolation to the ratio $1/s$, where $s$ is the target up-scaling factor; in this experiment, $s=2$. From these downscaled images, we sample patches, say $15\times15$, from their feature images, i.e. the first and second derivatives along the x and y directions, giving 4 feature images for each. These are the modalities $X^2$ to $X^5$. We also sample patches of the same size from the original high resolution images as $X^1$; see Eq.~\eqref{eq:MatNetmm-1}. These data were fed into the multimodal MatNet for training.
To obtain a high resolution image, we use the following procedure. First upscale the image by bicubic interpolation to the ratio $s$ and convert it to YCbCr space. The luminance component is then the working image, on which patches of the same size are sampled by a sliding window as the new input $X^1$. Obtain 4 feature images from this working image, on which patches are sampled in exactly the same way to form $X^2$ to $X^5$. Feed these to a well-trained multimodal MatNet to obtain high resolution image patches from the network output. The high resolution patches are then merged together by averaging pixels across patches. This gives the high resolution luminance image, which is in turn combined with the up-scaled chrominance (Cb and Cr) images, obtained simply by bicubic interpolation, to form the final high resolution image in YCbCr space. For better display, it is converted to RGB format as the final image.
We applied MatNet to the dataset used in SR \cite{YangWrightHuangM2010}, both for training and testing. There are 69 images for training. The patch size was $15\times15$. We randomly sampled 10,000 patches altogether from all images for training. Additional parameters for MatNet were $\lambda = 0.001$, $\rho = 0.05$ and $\beta=1$; that is, we turned on the weight decay and sparsity constraints but left out the manifold constraint. Figure \ref{fig:srweights} shows the network parameters learnt from the data, in which we can observe scale-changing filters in the weights for the high resolution patches.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{mmnnetSRweights}\\
\caption{Multimodal MatNet weights learnt for super resolution.}\label{fig:srweights}
\end{figure}
Fig.~\ref{fig:srimages} shows the results on two testing images. The multimodal MatNet has performance comparable to SR, the state-of-the-art super resolution method, as evaluated by PSNR: for the Lena image, multimodal MatNet, SR and bicubic interpolation achieved PSNRs of 33.966dB, 35.037dB and 32.795dB respectively; for the kingfisher image, they achieved 36.056dB, 36.541dB and 34.518dB respectively. We applied the method to a number of images of similar size ($256\times256$) and observed similar behaviour. Fig.~\ref{fig:srtestpsnrcomp}(a) shows all the test images, including the two in Fig.~\ref{fig:srimages}, and the PSNRs obtained by the different methods are shown in Fig.~\ref{fig:srtestpsnrcomp}(b). MatNet is very close to SR in terms of PSNR, especially for images 5 and 8.
\begin{figure}
\centering
\subfloat[Lena image ($128\times128$)]{\includegraphics[width=0.9\linewidth]{set1res.png}}\\
\subfloat[Kingfisher image ($256\times256$)]{\includegraphics[width=0.9\linewidth]{set2res.png}}
\caption{Super resolution on 2 sets of testing images. From left to right: input small size image, true high resolution image, up-scaled images (2 times) produced by multimodal MatNet, SR and bicubic interpolation respectively. }\label{fig:srimages}
\end{figure}
\begin{figure}
\centering
\subfloat[All 12 test images]{\includegraphics[width=0.45\linewidth]{SRalltestimgs.png}}
\subfloat[PSNR results]{\includegraphics[width=0.45\linewidth]{SRresbar}}
\caption{Super resolution results comparison. The images are indexed from left to right, from top to bottom. }\label{fig:srtestpsnrcomp}
\end{figure}
\section{Discussion}\label{sec:discussion}
We have proposed a matrix neural network (MatNet) in this paper, which takes matrix input directly without vectorisation. The most prominent advantage of MatNet over traditional vector-based neural networks is that it reduces the complexity of the optimisation problem drastically, while managing to obtain performance comparable to state-of-the-art methods. This has been demonstrated in the applications of MNIST handwritten digit classification and image super resolution.
As we mentioned several times in the text, MatNet was not specially optimised for the tasks shown in the experiment section. There is a lot of potential for further improvement. Many techniques used for deep networks can readily be applied to MatNet with appropriate adaptation, e.g. the ReLU activation function, max-pooling, etc., which will certainly be part of our future research.
\bibliographystyle{plain}
\subsubsection*{Derivation of Eq.~(\ref{equ1})}
A drive at angular frequency $\omega$ near the readout frequency induces damped Rabi oscillations of the readout transition, with the Hamiltonian
\begin{equation}
H/\hbar=\frac{(\omega-\omega_{03})}{2}(|3\rangle\langle3|-|0\rangle\langle0|)-i\frac{\Omega}{2}(|3\rangle\langle0|-|0\rangle\langle3|)
\end{equation}
and $\Omega=2\sqrt{\Gamma}\alpha_\mathrm{in}$. In full generality, one should consider all sources of loss and dephasing, such as spontaneous emission into the line, non-radiative decay and extra dephasing due to flux noise, including transitions from the readout subspace to other states of the atom. However, the readout transition has been designed so that its emission rate into the line overcomes the other decay and dephasing processes by many orders of magnitude. Assuming an environment at zero temperature, the density matrix of the atom $\rho$ evolves according to the following Lindblad equation
\begin{equation}
\dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\Gamma\mathcal{D}[|0\rangle\langle3|](\rho)
\end{equation}
where $\mathcal{D}$ is the Lindblad superoperator. The steady state of this equation yields the expectation value
\begin{equation}
\langle|0\rangle\langle3|\rangle = \frac{\Omega}{\Gamma}\frac{\Gamma^2/2-i(\omega-\omega_{03})\Gamma}{\Gamma^2/2+\Omega^2+2(\omega-\omega_{03})^2},
\end{equation}
from which we derive Eq.~(\ref{equ1}). The good agreement between the theoretical model and the experimental data validates the assumptions made in the previous derivation.
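As an independent numerical check (our own sketch, not part of the experimental code), one can build the Liouvillian for this two-level problem, solve for the steady state directly, and compare with the closed form; we compare magnitudes, since the overall phase of the coherence depends on the drive-phase convention.

```python
import numpy as np

Gamma, Omega, Delta = 1.0, 0.4, 0.3           # Delta = omega - omega_03
s0, s3 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
H = 0.5 * Delta * (np.outer(s3, s3) - np.outer(s0, s0)) \
    - 0.5j * Omega * (np.outer(s3, s0) - np.outer(s0, s3))
c = np.sqrt(Gamma) * np.outer(s0, s3)         # jump operator sqrt(Gamma)|0><3|
Id, cdc = np.eye(2), c.conj().T @ c
# Liouvillian in column-stacking (Fortran) vectorisation.
L = (-1j * (np.kron(Id, H) - np.kron(H.T, Id))
     + np.kron(c.conj(), c)
     - 0.5 * (np.kron(Id, cdc) + np.kron(cdc.T, Id)))
A = np.vstack([L, np.array([[1, 0, 0, 1]])])  # append Tr(rho) = 1
b = np.zeros(5, dtype=complex); b[-1] = 1.0
rho = np.linalg.lstsq(A, b, rcond=None)[0].reshape(2, 2, order='F')
numeric = rho[1, 0]                           # <|0><3|> = <3|rho_ss|0>
closed = (Omega / Gamma) * (Gamma**2 / 2 - 1j * Delta * Gamma) \
         / (Gamma**2 / 2 + Omega**2 + 2 * Delta**2)
print(np.isclose(abs(numeric), abs(closed)))  # True
```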
\subsubsection*{Reflection coefficient calibration}
The measured reflection coefficient is only known up to a scaling factor and has to be calibrated. Indeed, the readout drive undergoes several stages of attenuation before reaching the artificial atom, while the fluorescence signal is amplified before digitization. Calibration is performed as follows. The circuit is flux biased at $\phi_\textrm{ext}=0$ and probed in reflection between 6.3 and 6.6~GHz. This external flux is chosen so that no circuit transition exists in the frequency band of interest. The acquired signal $s_\textrm{cal}(\omega)$ therefore accounts for the temperature-dependent attenuation and filtering of the lines as well as the amplification chain of the acquisition setup. Once the atom is biased at the desired flux, the reflection coefficient $r$ is deduced from the experimentally measured signal $s_\textrm{exp}(\omega)$ by
\begin{equation}
r = \frac{s_\textrm{exp}(\omega)}{s_\textrm{cal}(\omega)}10^{(P_\textrm{cal}-P_\textrm{exp})/20}
\end{equation}
with $P_\textrm{cal/exp}$ the room-temperature power of the readout drive during the calibration/experiment, expressed in dBm.
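The calibration step amounts to a one-line computation (sketch with made-up signal values).

```python
import numpy as np

def calibrated_reflection(s_exp, s_cal, P_exp_dBm, P_cal_dBm):
    """r = (s_exp / s_cal) * 10**((P_cal - P_exp)/20), removing the setup's
    attenuation and gain via the off-resonant calibration trace."""
    return (s_exp / s_cal) * 10 ** ((P_cal_dBm - P_exp_dBm) / 20)

# Sanity checks: identical powers and signals give r = 1, and a 20 dB
# difference in drive power rescales r by a factor of 10.
s = np.array([1 + 1j, 0.5 - 0.2j])
print(np.allclose(calibrated_reflection(s, s, -30.0, -30.0), 1.0))  # True
print(calibrated_reflection(1.0, 1.0, 0.0, 20.0))                   # 10.0
```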
\subsubsection*{Relaxation model of transient dynamics}
The relaxation model used to reproduce the data represented on Fig.~\ref{fig3} considers three distinct decay mechanisms. A jump from a state $|i\rangle$ to $|j\rangle$ can occur due to radiative decay in the line, dielectric loss, or quasiparticle tunneling in the weak junction, with the rates $\Gamma_{ij}^\textrm{rad},\ \Gamma_{ij}^\textrm{diel},\ \Gamma_{ij}^\textrm{qp}$, respectively. All rates are computed assuming an equal bath temperature $T=50$~mK for all decay mechanisms~\cite{supp}.
We simulate the dynamics of the atom using the Python library QuTiP \cite{johansson2013}. The spectrum of the fluxonium Hamiltonian $H_f=4E_C(-i\partial\phi)^2+E_L\phi^2-E_J\cos(\phi-\phi_\textrm{ext})$ is first computed in a Hilbert space of dimension $100\times100$, to ensure accuracy of the computed eigenfrequencies $\omega_i$ and eigenstates $|i\rangle,\ i\leq99$. We then solve the Lindblad master equation numerically in the Fock state basis for a smaller Hilbert space containing the first 10 eigenstates. This takes into account the fact that high-energy states do not contribute to the atom dynamics, and allows for a faster computing time. The master equation contains all the jump operators $\{\sqrt{\Gamma_{ij}^\textrm{rad}+\Gamma_{ij}^\textrm{diel}+\Gamma_{ij}^\textrm{qp}}|j\rangle\langle i|\}_{i,j\leq9}$ computed from the diagonalization of $H_f$. The initial state of the atom is the thermal equilibrium state $\rho_\textrm{th}$ for Fig.~\ref{fig3}a, and a perfect swap between $|0\rangle$ and $|2\rangle$ applied to $\rho_\textrm{th}$ for Fig.~\ref{fig3}b.
The Hamiltonian for Fig.~\ref{fig3}a is $H=-i\frac{\Omega}{2}(|3\rangle\langle0|-|0\rangle\langle3|)$, representing the readout drive, while it is 0 for Fig.~\ref{fig3}b. These correspond to computing the evolution in the frame rotating at all the eigenfrequencies, i.e. applying the unitary $U(t)=\exp(-iH_f t/\hbar)=\exp(-i\sum_j\omega_j|j\rangle\langle j|t)$ to the driven and undriven Hamiltonians, respectively. We find that the transient dynamics of the fluorescence is best reproduced for $\Omega=0.95\Gamma/\sqrt{2}$. This deviation from the ideal value may be due to small drifts between the measurements of Fig.~\ref{fig2} and~\ref{fig3}.\\
\noindent\textbf{Data Availability.} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\noindent\textbf{Acknowledgement.} We thank Ray Mencia for providing samples at the initial stages of this work and Nathan Langford, Shimon Kolkowitz, and Benjamin Huard for useful discussions. We acknowledge funding from Sloan Foundation, NSF-PFC at JQI, and ARO-MURI. \\
\noindent\textbf{Author contributions.} H.X. fabricated the device and along with Y.H.L acquired and analyzed the data. Y.H.L. designed the coaxial-to-waveguide adaptor and built the low-temperature measurement setup. L.B.N. contributed to circuit design, room-temperature instrumentation, and to identifying decoherence mechanisms. N.C. performed initial conditional fluorescence experiments and developed procedures for analyzing and modeling the data. V.E.M. managed the project. The authors declare no financial conflict of interest.
\newpage
\section{Introduction}\label{sec:intro}
This is the first in a two-part series of papers, in which we develop a high-order numerical algorithm to simulate compressible fluid flow
with shock waves and contact discontinuities, as well as shock-wall collision and bounce-back. In the second part of this series \cite{RaReSh2018b}, we treat problems in two space dimensions. In this first part, we begin the development for one-dimensional flows.
The initial-value problem
for a nonlinear system of conservation laws in one space dimension is given as
\begin{subequations}\label{nonlinear-conservation-law}
\begin{align}
\partial_t {\bm u}(x,t) + \partial_x F({\bm u}(x,t)) &= 0, \\
{\bm u}(x,t=0) &= {\bm u}_0(x),
\end{align}
\end{subequations}
where ${\bm u}(x,t)$ denotes a vector of conserved quantities, $x$ denotes the space coordinate, and $t$ denotes the time coordinate.
Many different physical phenomena can be modeled by \eqref{nonlinear-conservation-law},
including gas dynamics, described by the compressible Euler equations, which shall be the focus of this paper.
It is well known that
solutions of \eqref{nonlinear-conservation-law} can develop finite-time discontinuities, even for smooth initial
data $\bm{u}_0$. In this case, the discontinuities are propagated according to the \emph{Rankine-Hugoniot}
conditions (see \S\ref{subsec:review}). Consequently, it is important to develop robust numerical schemes that
can approximate discontinuous solutions. This is a nontrivial task, since approximations to discontinuous
solutions usually result in the occurrence of small-scale oscillations, or the Gibbs phenomenon;
however, a variety of high-order discretization schemes and techniques have been developed to combat this issue
and produce non-oscillatory solutions.
In the case of 1-D gas dynamics, the construction of non-oscillatory, higher-order numerical algorithms such as ENO by Harten, Engquist, Osher \& Chakravarthy \cite{Harten1987231} and Shu \& Osher \cite{Shu1988439}, \cite{OsherShu1989}; WENO by Liu, Osher, \& Chan \cite{LiOsCh1994} and
Jiang \& Shu \cite{JiangShu1996}; MUSCL by Van Leer \cite{VanLeer1979101}, Colella \cite{Colella1985104}, and Huynh \cite{Huynh19951565}; or PPM by Colella \& Woodward \cite{CoWo1984} requires
carefully chosen {\it reconstruction} and {\it numerical flux}.
Such numerical methods evolve cell-averaged quantities; to calculate an accurate approximation of the flux at cell-interfaces, these schemes reconstruct $k$th-order ($k\ge 2$) polynomial
approximations of the solution (and hence the flux) from the computed cell-averages, and thus provide $k$th-order accuracy away from discontinuities. See, for example, the convergence plots
of Greenough \& Rider \cite{GreenoughRider2004} and Liska \& Wendroff \cite{Liska2003995}. Given a polynomial representation of the solution, a strategy is chosen to compute the
most accurate cell-interface flux, and this is achieved by a variety of algorithms.
Centered numerical fluxes, such as Lax-Friedrichs, add dissipation as a mechanism to preserve stability and monotonicity. On the other hand, {\it characteristic-type} upwinding based upon exact (Godunov) or approximate (Roe, Osher, HLL, HLLC) Riemann solvers,
which preserve monotonicity without adding too much dissipation, tend to be rather complex and PDE-specific; moreover, for strong shocks, other techniques may be
required to dampen post-shock oscillations or to yield entropy-satisfying approximations (see Quirk \cite{Quirk1994555}).
Again, we refer the reader to the papers \cite{GreenoughRider2004}, \cite{Liska2003995} or Colella \& Woodward \cite{Colella1984115} for a thorough overview, as well as a comparison of the effectiveness of a variety of competitive schemes.
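For readers unfamiliar with such schemes, the conservative finite-volume update with a (local) Lax-Friedrichs flux can be sketched in a few lines; the example below solves Burgers' equation, not the Euler system or the $C$-method treated in this paper.

```python
import numpy as np

def lax_friedrichs_step(u, f, alpha, dx, dt):
    """One conservative update u_i <- u_i - (dt/dx)(F_{i+1/2} - F_{i-1/2})
    with the flux F = (f(uL) + f(uR))/2 - alpha (uR - uL)/2 and periodic
    boundaries; alpha bounds the local wave speed."""
    uL, uR = u, np.roll(u, -1)                   # states at each interface
    F = 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)
    return u - dt / dx * (F - np.roll(F, 1))

# Burgers' equation u_t + (u^2/2)_x = 0 on [0, 1): a shock forms from smooth
# data, yet the conservative form preserves the total mass exactly.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
dx, dt = x[1] - x[0], 0.4 * (x[1] - x[0])
for _ in range(100):
    u = lax_friedrichs_step(u, lambda v: 0.5 * v**2, np.abs(u).max(), dx, dt)
print(np.isclose(u.sum() * dx, 0.0))  # True: the mean is conserved
```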
Majda \& Osher {\cite{MaOs1977}} have shown that \emph{any} numerical scheme for a
problem with discontinuities
will suffer from a formal loss of accuracy near the discontinuity. Nonetheless, the use of high-order
schemes is imperative for the resolution of finer structures in smooth regions of the flow. Formally
high-order WENO schemes (as well as other high-order methods) maintain high-order accuracy in regions away from shocks, but
are only first-order accurate at the discontinuity.
In order to
ascertain the performance of a method, it is essential to conduct numerical tests for a range of problems
with different features of varying complexity. These tests are made precise by calculating error norms
of the computed solution
relative to either an exact solution (if available), or a highly resolved solution which may be regarded
as the exact solution. Proposed numerical algorithms should demonstrate small error norms and
close to optimal convergence for a range of test problems. However, due to the fact that different tests
can exhibit very different phenomena and features, it is not so surprising that there are a number of
situations in which anomalous behavior of solutions is observed, which results in large errors and
poor rates of convergence. Examples of such errors are wall-heating,
the carbuncle phenomenon, long wavelength instabilities in slow-moving shocks, and non-entropy-satisfying
``expansion shocks'' (see Quirk {\cite{Quirk1994555}} for further details).
In this paper, we continue the development of the $C$-method {\cite{ReSeSh2012}}, a nonlinear artificial viscosity
modification of the Euler equations of gas dynamics, whose numerical discretization by a simple WENO-type (or even
central differencing) scheme can stabilize the type of instabilities noted above.
As proven in {\cite{ReSeSh2012}}, weak solutions of the $C$-method modification of the Euler equations
converge to the unique (entropy) solutions of the Euler equations as the artificial viscosity parameter tends to zero.
Herein, we present
numerical error analysis and order of accuracy studies for a number of classical shock tube experiments; we
show that a highly simplified WENO discretization of the $C$-method yields highly accurate solutions displaying close to optimal rates of convergence.
For instance, we show that for the Sod problem, our simple WENO-type discretization of the $C$-method yields
smaller errors and faster rates of convergence in the $L^1$, $L^2$, and $L^ \infty$ norms as compared to the
same WENO discretization of the unmodified Euler equations.
In particular, for the difficult problem of shock-wall collision
(to be introduced in \S{\ref{intro:shock-collision}} and developed in \S{\ref{sec:wallvisc}}) for the
Sod problem on a grid with 801 cells, we show that the $L^1$ error with the $C$-method
is 35\% of the error without the $C$-method. Moreover, the order of convergence of solutions is
approximately 0.95, which is close to optimal and more than twice the
order of convergence when the $C$-method is not employed.
Similar conclusions hold also for the extremely difficult LeBlanc problem, for which
we show that the use of the $C$-method produces $L^1$ errors that are approximately four times smaller
prior to shock-wall collision, and approximately three times smaller post shock-wall collision.
Our quantitative analysis, together with the qualitative observations we make via plot comparison, lead us
to conclude that the use of a simple discretization of the $C$-method provides a flexible, highly accurate scheme that produces solutions
with close to optimal rates of convergence for a variety of problems with different features.
\subsection{Using artificial viscosity with conservation laws}
Artificial viscosity is an effective method for
the numerical stabilization of shock waves in gas dynamics; the simplest such regularization of \eqref{nonlinear-conservation-law}
replaces the right-hand side with the linear second-order operator (see, for instance, \mbox{\cite{Landshoff1955,Wilkins1980,DiPerna1983}})
\begin{equation}\label{linear-viscosity}
\beta \, \Delta x \, \partial_{xx} {\bm u}(x,t)\,,
\end{equation}
where $\beta=O(1)$ is a constant, and $\Delta x $ denotes a small asymptotic parameter that, when the term
\eqref{linear-viscosity} is numerically discretized, represents the grid spacing.
For each such $\beta >0$, solutions to the regularized conservation law smooth the shock across a
small region of width proportional to $\Delta x$, and simultaneously prevent small-scale oscillations from corrupting sound waves in numerical simulations; nevertheless,
the uniform application of diffusion given by \eqref{linear-viscosity} ensures only first-order accuracy of numerical schemes and overly diffuses wave
amplitudes and speeds.
In \cite{vNRi1950}, von Neumann and Richtmyer replaced the
uniform linear viscosity \eqref{linear-viscosity} with a nonlinear term given by
\begin{equation}\label{nonlinear-artificial-viscosity}
\beta \, (\Delta x)^2 \partial_x \left( |\partial_x u|\, \partial_x {\bm u} \right)\,,
\end{equation}
which we shall refer to as \emph{classical artificial viscosity}.
Here, $u(x,t)$ represents, in the case of the Euler equations of gas dynamics, the velocity of the
fluid.
The use of the localizing coefficient $|\partial_x u|$ in
\eqref{nonlinear-artificial-viscosity}
concentrates the addition of viscosity to the narrow intervals containing shocks,
while
maintaining high-order accuracy in regions away from the shock, wherein the solution is smooth. See also Margolin
\cite{Margolin2018} and Mattsson \& Rider \cite{MaRi2015} for a description of the origin
and the interpretation of artificial viscosity as a physical phenomenon.
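To fix ideas, the following sketch (with purely illustrative parameter values) evaluates the classical artificial viscosity term \eqref{nonlinear-artificial-viscosity} for a smoothed shock profile $u = -\tanh(x/\delta)$, and confirms that the nonlinear coefficient $|\partial_x u|$ concentrates the viscosity in a narrow interval about the shock:

```python
import numpy as np

# Evaluate the von Neumann-Richtmyer term  beta * dx^2 * d/dx(|u_x| u_x)
# for a smoothed shock u(x) = -tanh(x/delta).  The values of beta, delta,
# and the grid are illustrative choices, not those used in the paper.
N = 401
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
delta = 0.05                        # illustrative shock width
u = -np.tanh(x / delta)             # compressive profile: u_x < 0 at x = 0

ux = np.gradient(u, dx)                     # central-difference u_x
visc = np.gradient(np.abs(ux) * ux, dx)     # d/dx(|u_x| u_x)
beta = 1.0
term = beta * dx**2 * visc

# The coefficient |u_x| localizes the viscosity to the shock region:
# the term is large near x = 0 and negligible a few shock widths away.
center = np.abs(term[np.abs(x) < 2 * delta]).max()
far = np.abs(term[np.abs(x) > 10 * delta]).max()
```

The ratio of the near-shock to far-field magnitudes makes the localization quantitative.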
It is now well-known \cite{Lapidus1967154, GeMaDa1966} that classical
artificial viscosity corrects for the over-dissipation of the linear viscosity \eqref{linear-viscosity}, and allows for
the implementation of numerical methods that are both non-oscillatory at shocks, as well as high-order accurate
in smooth regions.
On the other hand, the fact that the localizing coefficient $|\partial_x u|$ itself
becomes highly irregular in regions containing
shocks often results in the failure of such schemes to suppress spurious oscillations.
This inadequacy of classical artificial viscosity may be observed with the highly singular phenomenon of
shock-wave wall collision. In this case, large amplitude, high frequency, non-physical oscillations appear in the
solution post-collision behind the shock curve, and the rough nature of $|\partial_x u|$ in both space and time
means that the classical artificial viscosity is often unable to remove such oscillations.
This suggests that a space-time smoothed variant of the localizing coefficient $|\partial_x u|$ might allow for
a less oscillatory, more accurate solution profile.
We propose the use of the $C$-method as a
means of producing such a localizing coefficient.
A similar method is employed by Cabot \& Cook
\mbox{\cite{CaCo2004,CaCo2004b}}, who use a
high-wavenumber indicator together with a Gaussian filter to produce such a function, though we note that
the produced function is only spatially regularized, and not temporally. See also the work of
Barter \& Darmofal {\cite{BaDa2010}}, who utilize a PDE-based approach to smooth the localizing function.
As we shall explain below, the function $C(x,t)$ will play the role of $|\partial_x u|$; not only
will it be a space-time smooth approximation, but it will moreover be an envelope for $|\partial_x u|$, maintaining its highly localized
properties while retaining a certain memory of the behavior of the shock wave.
\subsection{Stabilizing shock collision}\label{intro:shock-collision}
In the context of fixed-grid, explicit, finite-difference schemes, shock-wall collision and bounce-back
leads to egregious oscillatory behavior. This is primarily due to the fact that the shock-wall collision causes an
immediate change in the sign of the shock speed $\dot{\sigma}(t)$, leading to a discontinuity in
$\dot{\sigma}(t)$. Consequently, shock-wall collision is a highly singular phenomenon that requires explicit
stabilization. In \S\ref{sec:wallvisc}, we introduce a simple modification of the $C$-method, which we call the wall $C$-method,
that implements a space-time smooth stabilization for shock-wall collision.
This method is then applied to various test cases in \S\ref{sec:simulations}, with the computed solutions showing excellent agreement with the exact solution post shock-wall collision.
Error analysis and convergence tests show that the wall $C$-method produces solutions with much
smaller errors, even
for the difficult LeBlanc shock tube problem.
\subsection{High-frequency noise}
The occurrence of high-frequency, often small amplitude, spurious oscillations (or \emph{noise}) is a common
issue in numerical schemes. One cause of this noise is related to the stability (CFL) condition for explicit
time-integration methods. A simple method for suppressing such noise is the use of the linear viscosity
\eqref{linear-viscosity}, though, as explained above, this often results in the degradation of the solution
in regions without noise.
An alternative is to first decompose the solution using a basis of orthogonal \emph{wavelets}, then
truncate the decomposition so as to remove the high frequency components (which correspond to noise),
though this may be very computationally expensive and, moreover,
requires the use of a fully orthogonal basis of
wavelets. In \S\ref{sec:noiseind},
we introduce a hybridized version of the two above methods, wherein wavelets are used
to accurately locate high frequency noise, and then a linear viscosity is used, via a localized heat equation
solver, to remove this noise. This noise detection and removal algorithm is very simple
to implement, and is applied to a number of test problems in \S\ref{sec:simulations}.
Error analysis shows that the algorithm improves the accuracy of the solution while retaining the order of
convergence; in particular, the algorithm is able to suppress high-frequency noise while preserving the
amplitude of lower frequency (physical) sinusoidal waves for the Osher-Shu problem.
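The idea behind this hybrid approach can be illustrated on a toy example; we emphasize that the following sketch is an illustration only, and not the wavelet-based algorithm of \S\ref{sec:noiseind}: a grid-scale indicator flags noisy cells, and a single local heat-equation (Jacobi) smoothing step is applied only in the flagged cells. The indicator and threshold below are hypothetical stand-ins.

```python
import numpy as np

# Toy illustration: detect cells where the grid-scale (Nyquist) component
# is large, then smooth only those cells with one local heat step.
N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
clean = np.sin(x)
noise = 0.05 * (-1.0) ** np.arange(N)     # pure grid-scale noise
u = clean + noise

# Second difference as a crude high-frequency indicator (stand-in for
# the wavelet detail coefficients used in the paper).
d = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
flagged = np.abs(d) > 0.05                # hypothetical threshold

# One explicit heat step with nu = 1/4 annihilates the Nyquist mode,
# while perturbing the smooth component only at O(dx^2).
u_smooth = u.copy()
u_smooth[flagged] = u[flagged] + 0.25 * d[flagged]
```

The point of the localization is that cells where the solution is smooth are left untouched, so the amplitude of resolved waves is preserved.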
\subsection{Outline of the paper}
In \S\ref{subsec:compEuler}, we introduce the compressible Euler equations, the corresponding flux, and the Rankine-Hugoniot jump
conditions. In \S\ref{subsec:review}, we review the original $C$-method, as introduced in \cite{ReSeSh2012}.
Then in \S\ref{sec:wallvisc}, we discuss the problem of shock-wave wall collision, and introduce
a novel generalization of the $C$-method, which relies on a new artificial {\it wall viscosity} mechanism
that suppresses post shock-collision oscillations. We then introduce our WENO-$C$-$W$ scheme as a discretized version of our
new $C$-method for shock-wall collision.
In \S\ref{sec:noiseind}, we present a wavelet based \emph{noise indicator}, that locates regions of
noise containing high frequency oscillations on the discretized domain. A noise removal algorithm, based on a localized
solution of
the heat equation, is then used to remove high frequency oscillations. We then describe our WENO-$C$-$W$-$N$ algorithm which
adds the noise indicator and noise removal scheme to our WENO-$C$-$W$ method. Finally, in \S\ref{sec:simulations}, we demonstrate
the efficacy of our method for a number of classical shock tube problems, including the Sod, LeBlanc, Peak, and Osher-Shu tests. We show
numerical results and order-of-accuracy studies, and in the process, we explain the cause and solution to the wall-heating problem.
In Appendix \ref{sec:appendix}, we describe two WENO schemes that we use for comparison purposes: the first couples WENO with classical
artificial viscosity, while the second couples WENO with Noh's artificial viscosity operator, designed specifically for the
case of shock-wall collision and the wall-heating phenomenon.
\section{The compressible Euler equations and the original $C$-method}
\subsection{The conservation laws of gas dynamics}
\label{subsec:compEuler}
The compressible Euler equations on a 1-$D$ spatial interval $ x_1 \leq x \leq x_M$,
and a time interval $0 \le t \le T$
are written in vector-form as the following coupled system of nonlinear conservation laws:
\begin{subequations}
\label{subeq:consLaw}
\begin{alignat}{2}
\partial_t {\bm u}(x,t)+ \partial_x {\bm F}({\bm u}(x,t)) = \bm{0},& && \ \ \ x_1< x< x_M \,, t > 0, \label{eqn:consLawEvolution} \\
{\bm u}(x,0) = {\bm u}_0(x),& && \ \ \ x_1 \le x \le x_M \,, t = 0, \label{eqn:consLawIC}
\end{alignat}
\end{subequations}
where the $3$-vector ${\bm u}(x,t)$ and {\it flux function} ${\bm F}({\bm u}(x,t))$ are defined, respectively, as
$$
{\bm u} = \left ( \begin{array}{c} \rho \\ \rho u \\ E \end{array} \right ) \quad \text{and} \quad {\bm F}({\bm u}) = \left ( \begin{array}{c} \rho u \\ \rho u^2 + p \\ u \left ( E + p \right ) \end{array} \right )\,,
$$
while the given initial data for the problem is
$$
{\bm u}_0(x) = \left (\begin{array}{c} \rho_0(x) \\ (\rho u)_0(x) \\ E_0(x) \end{array}\right ) \,.
$$
The {\it{conservative variables}} $\rho, \ \rho u$, and $E$ denote
the {\it density}, {\it momentum}, and {\it energy} of a compressible gas,
while the variable $u$ represents the {\it{velocity field}}.
The variable $p$ denotes the {\it pressure} function, and according to the
ideal gas law is given by
\begin{equation}\label{eq-of-state}
p = (\gamma - 1) \left ( E - \frac{1}{2} \rho u^2 \right )\,,
\end{equation}
where $ \gamma $ is the adiabatic constant.
Equations (\ref{subeq:consLaw}) represent the conservation
of mass, linear momentum, and energy in the evolution of a compressible gas.
The total energy per unit volume $E$ is the sum of the
kinetic and potential energies,
\begin{equation}\label{energy_identity}
E= \underbrace{\mystrut{2.5ex} \frac{1}{2}\rho u^2}_{\text{kinetic}} + \underbrace{\mystrut{2.5ex} \frac{p}{\gamma -1}}_{\text{potential}} \,.
\end{equation}
We also define the {\it{specific internal energy per unit mass}} $e$ of the system as
\begin{equation}\label{defn-internal-energy}
e = \frac{p}{(\gamma-1)\rho}\,,
\end{equation}
so that the total energy of the system may be written as the sum of the kinetic energy and the internal energy
per unit volume $\rho e$,
$$
E=\frac{1}{2}\rho u^2 + \rho e\,.
$$
The gradient of the flux ${\bm F}({\bm u})$ is given by
$$
\mathrm{D}{\bm F}({\bm u}) =
\left[
\begin{array}{ccc}
0 & 1 & 0 \\[0.5em]
\frac{1}{2} (\gamma -3) u^2 & (3-\gamma) u & \gamma -1 \\[0.5em]
- \gamma \frac{u E }{\rho} + (\gamma -1) u^3 &
\frac{ \gamma E}{\rho} + \frac{3}{2}(1- \gamma ) u^2 & \gamma u
\end{array}\right]
$$
with eigenvalues
\begin{equation}
\label{subeq:maxWaveSpeed}
\lambda_1 = u + c\,, \ \lambda_2 = u\,, \ \lambda_3 = u - c \,,
\end{equation}
where $c =\sqrt{\gamma p /\rho }$ denotes the sound speed
(see, for example, Toro \cite{Toro2009}). These eigenvalues determine the wave speeds. Since the behavior of the various wave patterns is greatly influenced by the speed of propagation, we define
the {\it maximum wave speed} $S(\bm{u})$ as
\begin{equation}\label{wave-speed}
S(\bm{u}) = [S({\bm u})](t) = \max_{i =1,2,3} \max_{x} \left \{ |\lambda_i(x,t)| \right \} \,.
\end{equation}
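As a check of the formulas above, the following snippet assembles $\mathrm{D}{\bm F}({\bm u})$ for an arbitrary illustrative state and verifies numerically that its eigenvalues are $u-c$, $u$, and $u+c$, from which the maximum wave speed \eqref{wave-speed} follows:

```python
import numpy as np

# Verify the eigenvalues of the flux Jacobian DF(u) for a sample state.
# The state values (rho, u, p) are arbitrary illustrative choices.
gamma = 1.4
rho, u, p = 1.0, 0.5, 1.0
E = p / (gamma - 1.0) + 0.5 * rho * u**2     # energy identity
c = np.sqrt(gamma * p / rho)                 # sound speed

DF = np.array([
    [0.0, 1.0, 0.0],
    [0.5 * (gamma - 3.0) * u**2, (3.0 - gamma) * u, gamma - 1.0],
    [-gamma * u * E / rho + (gamma - 1.0) * u**3,
     gamma * E / rho + 1.5 * (1.0 - gamma) * u**2, gamma * u],
])
lam = np.sort(np.linalg.eigvals(DF).real)
S = np.abs(lam).max()    # maximum wave speed for this single state
```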
We are interested in solutions $\bm{u}$ with discontinuous wave profiles,
such as those with shock waves and contact discontinuities.
The Rankine-Hugoniot (R-H) conditions
determine the speed $\dot \sigma = \dot \sigma(t)$ of the moving shock or contact discontinuity,
and represent conservation of mass, linear momentum and energy across the
discontinuity (see, for example, \cite{Leveque2002}). For a shock wave, the R-H condition is given by the relation
$$
{\bm F}(\bm{u} _l) - {\bm F}(\bm{u} _r) = \dot \sigma ( \bm{u}_l - \bm{u}_r)\,,
$$
where the subscript $l$ denotes the state to the left of the discontinuity, and the
subscript $r$ denotes the state to the right of
the discontinuity. This means that the following three {\it jump conditions} must hold:
\begin{subequations}\label{RHconditions}
\begin{align}
\left( \rho_l u_l \right) - \left(\rho_r u_r \right) & = \dot \sigma (\rho_l - \rho_r) \label{RH1} \\
\left(\rho_l u_l^2 + p_l \right) -
\left( \rho_r u_r^2 + p_r \right) & = \dot \sigma \left( \left( \rho u\right)_l - \left(\rho u\right)_r \right) \label{RH2} \\
\left(u_l (E_l + p_l) \right) - \left( u_r (E_r + p_r) \right) &= \dot \sigma (E_l -E_r) \,. \label{RH3}
\end{align}
\end{subequations}
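As a simple illustration of \eqref{RHconditions}, consider a contact discontinuity, across which the velocity and pressure are continuous and only the density jumps; all three jump conditions are then satisfied with $\dot\sigma = u$. The following snippet (with illustrative state values) verifies this:

```python
import numpy as np

# Rankine-Hugoniot check for a contact discontinuity: equal velocity and
# pressure on both sides, jump in density; then sigma_dot = u satisfies
# all three jump conditions.  State values are illustrative.
gamma = 1.4
u_vel, p = 0.7, 1.0
rho_l, rho_r = 1.0, 0.125

def conserved(rho):
    E = p / (gamma - 1.0) + 0.5 * rho * u_vel**2
    return np.array([rho, rho * u_vel, E])

def flux(U):
    rho, m, E = U
    u = m / rho
    pr = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([m, m * u + pr, u * (E + pr)])

U_l, U_r = conserved(rho_l), conserved(rho_r)
jump_F = flux(U_l) - flux(U_r)
jump_U = U_l - U_r
sigma = jump_F[0] / jump_U[0]    # speed from the mass jump condition
```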
In general, weak solutions with jump discontinuities are not unique,
unless entropy conditions are satisfied
(see the discussion in \S2.9.4 in \cite{ReSeSh2012}). However, solutions obtained in the limit of zero viscosity
are known to satisfy the entropy condition
and are hence unique. We refer the reader to \cite{ReSeSh2012} for a discussion of the
convergence of $C$-method solutions as $ \Delta x \to 0$.
\subsection{A review of the original $C$-method}\label{subsec:review}
We now briefly review the $C$-method from \cite{ReSeSh2012}, which is a spacetime smooth version of classical artificial viscosity
with a \emph{compression switch}:
\begin{equation*}
\beta (\Delta x)^2 \, \partial_x \Big( \mathbbm{1}_{(-\infty,0)} (\partial_x u) \, |\partial_x u| \,\partial_x u \Big)\,,
\end{equation*}
where the compression switch $\mathbbm{1}_{(-\infty,0)} (\partial_x u)$ ensures that artificial viscosity is only activated
during compression, and not in regions of expansion where there are no shocks.
The localizing function $C(x,t)$ is given as the solution to the
scalar reaction-diffusion equation\footnote{We note that this scalar reaction-diffusion equation
is not Galilean invariant. In the current implementation, the $C$-method is viewed purely as a numerical
tool, whereas the function $C$ may very well be viewed as an important physical quantity, in which case
the $C$-equation itself should be preserved under Galilean transformations. This can be accomplished by
the addition of an advection term to the current $C$-equation. We have checked for some 1-$D$ examples
that the addition of such a term has little effect on the demonstrated success of the $C$-method.}
\begin{equation*}
\partial_t C + \frac{S(\bm{u})}{\Delta x} C - S(\bm{u}) \Delta x \, \partial_{xx} C = \frac{S(\bm{u})}{\Delta x} \, G\,,
\end{equation*}
where the forcing $G$ is
\begin{equation}\label{C-forcing}
G \equiv G(x,t) = \mathbbm{1}_{(-\infty,0)} (\partial_x u) \, \frac{ | \partial_x u | }{ \max_{x} | \partial_x u |}\,,
\end{equation}
and $S(\bm{u})$ is the maximum wave speed \eqref{wave-speed}.
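The forcing \eqref{C-forcing} is straightforward to evaluate on a grid; the snippet below (for an illustrative smooth velocity field) confirms that the compression switch zeroes $G$ in regions of expansion and that the normalization keeps $0 \le G \le 1$:

```python
import numpy as np

# Evaluate G = 1_{(-inf,0)}(u_x) * |u_x| / max|u_x| for u = sin(x);
# compression (u_x < 0) occurs where cos(x) < 0.  Grid is illustrative.
N = 401
x = np.linspace(0.0, 2.0 * np.pi, N)
u = np.sin(x)
ux = np.gradient(u, x)

switch = (ux < 0.0).astype(float)      # compression switch
G = switch * np.abs(ux) / np.abs(ux).max()
```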
The $C$-method artificial viscosity term is then given by
\begin{equation*}
\tilde{\beta} (\Delta x)^2 \, \partial_x \left( C \, \partial_x u \right)\,, \text{ where } \tilde{\beta} = \frac{\max_{x} | \partial_x u |}{\max_{x} C} \, \beta\,,
\end{equation*}
and the compressible Euler equations coupled with the $C$-method are written as the following Euler-$C$
system:
\begin{subequations}
\label{C-method-1d}
\begin{align}
\partial_t \rho + \partial_x (\rho u) & = 0\,, \\
\partial_t (\rho u) + \partial_x (\rho u^2 + p) & = \tilde{\beta} (\Delta x)^2 \, \partial_x \left( \rho C \, \partial_x u \right)\,, \\
\partial_t E + \partial_x (u(E+p)) &= \tilde{\beta} (\Delta x)^2 \, \partial_x \left( \rho C \, \partial_x (E/\rho) \right) \,, \\
\partial_t C + \frac{S(\bm{u})}{\Delta x} C - S(\bm{u}) \Delta x \, \partial_{xx} C &= \frac{S(\bm{u})}{\Delta x} \, G(\partial_x u) \,.
\end{align}
\end{subequations}
Solutions of the Euler-$C$ equations \eqref{C-method-1d} converge to solutions of the Euler equations \eqref{subeq:consLaw} as $\beta \to 0$ (see
Section 2.9 in \cite{ReSeSh2012} for a proof).
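A minimal forward-Euler discretization of the $C$-equation (shown below with illustrative parameters, and not the scheme of \S\ref{sec-weno-reconstruction-procedure}) illustrates its key property: $C$ relaxes on the fast time scale $\Delta x / S$ toward a diffused version of the forcing $G$.

```python
import numpy as np

# Explicit sketch of  C_t + (S/dx) C - S dx C_xx = (S/dx) G.
# With constant forcing G = 1, the steady state is C = 1, which the
# relaxation reaches on the time scale dx/S.  Parameters are illustrative.
N = 101
dx = 1.0 / (N - 1)
S = 1.0
G = np.ones(N)
C = np.zeros(N)

dt = 0.2 * dx                   # small enough for explicit stability
for _ in range(500):
    Cxx = np.zeros(N)
    Cxx[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    Cxx[0], Cxx[-1] = Cxx[1], Cxx[-2]    # homogeneous Neumann ends
    C = C + dt * (-(S / dx) * C + S * dx * Cxx + (S / dx) * G)
```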
As was demonstrated in \cite{ReSeSh2012}, a simple WENO-type numerical
discretization of the Euler-$C$ equations \eqref{C-method-1d} (as
will be described in \S\ref{sec:wallvisc}) is an effective high-order scheme which compares favorably to the best state-of-the-art algorithms
for the classical shock-tube experiments of
Sod, Osher-Shu, Woodward-Colella, and
LeBlanc.
In particular, this simple WENO-type discretization of the $C$-method is able to remove the large
overshoot in the LeBlanc contact discontinuity for the internal energy function (see \cite{ReSeSh2012}), whereas the other state-of-the-art schemes were not able
to do so.
Herein, we generalize the $C$-method to allow for
shock-wave wall collision and bounce-back, and introduce a wavelet-based \emph{noise indicator} algorithm that locates high-frequency
noise; a heat equation-based local solver will be used for noise removal.
We shall also explain the well-known problem of wall-heating (see, for example, \cite{Rider2000,Noh1987}).
\section{A new $C$-method for shock-wall collision}\label{sec:wallvisc}
We now consider the highly singular problem of shock-wall collision and bounce-back, and specifically, the removal of spurious post collision
oscillations.
\subsection{The Euler-$C$-$W$ system}
As a generalization to the Euler-$C$ system \eqref{C-method-1d}, we consider the following coupled Euler-$C$-$W$ system:
\begin{subequations}\label{EulerC}
\begin{align}
\partial_t \rho + \partial_x (\rho u) &=0, \label{EulerC-density}\\
\partial_t (\rho u) + \partial_x (\rho u^2 + p) &= \partial_x \left( \mathcal{B} ^{(u)}(t) \, \rho \, C \, \partial_x u \right), \label{EulerC-momentum}\\
\partial_t E + \partial_x (u(E + p)) &= \partial_x \left( \mathcal{B} ^{(E)}(t) \, \rho \, C \, \partial_x (E / \rho) \right) \,, \label{EulerC-energy} \\
\partial_t C + \frac{S(\bm{u})}{\varepsilon \Delta x} C - \kappa \Delta x \cdot S(\bm{u}) \partial_{xx} C &
= \frac{S(\bm{u})}{\varepsilon \Delta x} G \,, \label{C-Sod} \\
\partial_t C_w + \frac{S(\bm{u})}{\varepsilon_w \Delta x} C_w - \kappa_w \Delta x \cdot S(\bm{u}) \partial_{xx} C_w
& = \frac{S(\bm{u})}{\varepsilon_{w} \Delta x} G \,, \label{Cwall-sod}
\end{align}
\end{subequations}
where
\begin{subequations}
\label{artificial_visc}
\begin{align}
\mathcal{B} ^{(u )}(t) & = (\Delta x)^2 \cdot \frac{\max_{x} | \partial_x u| }{\max_{x} C} \left( \beta^u + \beta^{u }_w \cdot \overline{C}_w(t)\right)\,, \\
\mathcal{B} ^{(E )}(t) & = (\Delta x)^2 \cdot \frac{ \max_{x} | \partial_x u| }{ \max_{x} C}
\left(\beta^{E} + \beta^{E}_w \cdot \overline{C}_w(t)\right) \,,
\end{align}
\end{subequations}
and where
the smooth and localized bump function $\overline{C}_w(t)$ is defined as
\begin{equation}\label{wall-ind-fn}
\overline{C}_w(t) = \frac{C_w(x_M,t)}{\max_{x} C_w(x,t)},
\end{equation}
and $x_M$ denotes the right boundary, where the shock-wall collision and bounce-back is
assumed (for simplicity) to occur.
Furthermore, $S(\bm{u})$ is the maximum wave-speed \eqref{wave-speed}, and $G = G(x,t)$ is the forcing to the $C$-equation, defined by \eqref{C-forcing}.
The indicator function $ \mathbbm{1}_{(-\infty,0)}(\partial_x u) $ is
the {\it compression switch}, which ensures that $G$ is non-zero only if $\partial_x u < 0$. For convenience, we list all of the
parameters and variables associated with the system \eqref{EulerC} in Table \ref{table:sod}. We note that due to the presence of the
compression switch in the definition of $G$, we can instead define $G(x,t) = \mathbbm{1}_{(-\infty,0)} (\partial_x \rho) \cdot \frac{|\partial_x \rho|}{\max_{x} | \partial_x \rho|} $ and obtain identical results.\footnote{Indeed, this will be our strategy for the 2-$D$ $C$-method that we introduce
in \cite{RaReSh2018b}.}
We shall explain the use of this new $C_w(x,t)$ function and the localized time-function $\overline{C}_w(t)$ below, when we present
the results of numerical experiments of shock-wall collision.
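The role of the bump function \eqref{wall-ind-fn} in the coefficients \eqref{artificial_visc} can be sketched as follows: with a stand-in Gaussian profile for $C_w$ (purely illustrative), $\overline{C}_w \approx 0$ while the shock is far from the wall, so that the extra wall viscosity $\beta_w$ is switched on only around the time of collision.

```python
import numpy as np

# Illustration of the wall-activated amplification: Cw_bar(t) is near 0
# while C_w peaks away from the wall, and equals 1 when C_w peaks at the
# wall x_M.  The Gaussian stand-in and all parameters are illustrative.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
beta, beta_w = 1.0, 10.0

def B_coeff(peak_pos):
    C_w = np.exp(-((x - peak_pos) / 0.05) ** 2)    # stand-in for C_w
    Cw_bar = C_w[-1] / C_w.max()                   # C_w(x_M) / max_x C_w
    grad_ratio = 1.0                               # max|u_x|/max C (illustrative)
    return dx**2 * grad_ratio * (beta + beta_w * Cw_bar), Cw_bar

B_far, bar_far = B_coeff(0.5)     # shock mid-domain: no amplification
B_wall, bar_wall = B_coeff(1.0)   # shock at the wall: full amplification
```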
We remark that the artificial viscosity terms on the right-hand side of the momentum equation \eqref{EulerC-momentum} and the
energy equation \eqref{EulerC-energy} ensure that the total energy is conserved; in particular, the solution $E(x,t)$ of \eqref{EulerC-energy}
continues to obey the identity \eqref{energy_identity}. For simplicity, we consider the case of periodic boundary conditions. On the one
hand, integration of
the energy equation \eqref{EulerC-energy} over the spatial domain $[x_1,x_M]$ shows that
$\frac{\mathrm{d}}{\mathrm{d}t} \int_{x_1}^{x_M} E\, \mathrm{d}x = 0$.
On the other hand, multiplying the momentum equation \eqref{EulerC-momentum} by $u$, integrating over the domain $[x_1,x_M]$, and utilizing the
conservation of mass equation
\eqref{EulerC-density} together with the pressure identity \eqref{eq-of-state} and the energy equation \eqref{EulerC-energy}, we find that
$$\frac{\mathrm{d}}{\mathrm{d}t} \int_{x_1}^{x_M} \left( \frac{1}{2} \rho u^2 + \frac{p}{\gamma -1} \right) \, \mathrm{d}x = 0 \,.$$
This shows that the velocity $u$ and pressure $p$ adjust accordingly to maintain the relation \eqref{energy_identity}, and that our
modified Euler-$C$-$W$ system conserves total energy.
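The discrete analogue of this conservation argument is the telescoping of flux differences: for any periodic array of cell-edge fluxes, the sum of the flux-difference updates vanishes identically, as the following snippet (with arbitrary flux values) illustrates.

```python
import numpy as np

# Why divergence-form right-hand sides conserve totals: on a periodic
# grid, the update (z_{i+1/2} - z_{i-1/2})/dx telescopes, so its
# discrete integral vanishes and the cell sum of E is unchanged.
M = 64
dx = 1.0 / M
rng = np.random.default_rng(0)
z_edges = rng.standard_normal(M)             # arbitrary fluxes z_{i+1/2}

rhs = (z_edges - np.roll(z_edges, 1)) / dx   # (z_{i+1/2} - z_{i-1/2})/dx
total_change = rhs.sum() * dx                # discrete d/dt of int E dx
```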
{
\begin{table}[H]
\centering
{\small
\begin{tabular}{|M{4cm} | M{6cm}|}
\hline
Parameter / Variable & Description \\ [0.0em]
\hline \hline
$\beta^{u}$, $\beta^E$ & artificial viscosity coefficients for the momentum
and energy, respectively. \\[0.5em]
\hline
$\beta^{u}_w$,
$\beta^E_w$ & wall viscosity coefficients for the
momentum and energy, respectively. \\[0.5em]
\hline
$S(\bm{u})(t)$ & maximum wave speed $\max_{x} \left( \max \left\{ \, | u(x,t) | , | u(x,t) \pm c | \,\right\} \right) $. \\[0.5em]
\hline
$\varepsilon$, $\varepsilon_w$ & parameters controlling support
of $C$ and $C_w$, respectively. \\[0.5em]
\hline
$\kappa$, $\kappa_w$ & parameters controlling smoothness
of $C$ and $C_w$, respectively. \\[0.5em]
\hline
$\overline{C}_w(t)$ & smooth and localized bump function. \\[0.5em]
\hline
\end{tabular}}
\caption{Relevant parameters and variables for the Euler-$C$-$W$ system \eqref{EulerC}.}
\label{table:sod}
\end{table}}
\subsection{Boundary conditions for the Euler-$C$-$W$ system}
We consider two types of boundary conditions on the interval $x_1 \leq x \leq x_M$.
For many of the test problems,
we employ the so-called {\it{reflective}} or {\it{solid wall}} boundary
conditions at $x=x_1$ and $x=x_M$ and $t\ge 0$:
\begin{equation}
\label{var-bcs}
\partial_x \rho (x,t) = 0 \,, \ \
\rho u (x,t) =0 \,, \ \
\partial_x E (x,t) =0 \,, \ \ \partial_x C(x,t) = 0 \,, \ \ \partial_x C_w(x,t) = 0 \,.
\end{equation}
Alternatively, we shall sometimes use the {\it{free flow}} boundary conditions:
\begin{equation}
\label{var-bcs-alternate}
\partial_x \rho (x,t) = 0 \,, \ \
\partial_x\left(\rho u\right) (x,t) =0 \,, \ \
\partial_x E (x,t) =0 \,, \ \ \partial_x C(x,t) = 0 \,, \ \ \partial_x C_w(x,t) = 0 \,.
\end{equation}
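One plausible ghost-cell realization of the reflective conditions \eqref{var-bcs} (assuming, as in \S\ref{sec-weno-reconstruction-procedure} below, that the boundary nodes are cell centers with three ghost cells per side) extends the Neumann fields evenly about the boundary node, and the momentum, which vanishes at the wall, oddly:

```python
import numpy as np

# Sketch of reflective ghost cells: even extension about the boundary
# node for rho, E, C, C_w; odd extension for the momentum.  This is one
# plausible discretization, not necessarily the paper's exact one.
def fill_reflective(w, odd=False):
    """Return w with 3 ghost values mirrored on each side."""
    ng = 3
    sign = -1.0 if odd else 1.0
    left = sign * w[1:ng + 1][::-1]        # mirror about w[0]
    right = sign * w[-ng - 1:-1][::-1]     # mirror about w[-1]
    return np.concatenate([left, w, right])

rho = np.array([1.0, 0.9, 0.8, 0.7])
m = np.array([0.0, 0.1, 0.2, 0.3])         # momentum, zero at the wall

rho_ext = fill_reflective(rho)             # even: rho_0 = rho_2, ...
m_ext = fill_reflective(m, odd=True)       # odd:  m_0 = -m_2, ...
```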
\subsection{The WENO-$C$-$W$ algorithm}\label{sec-weno-reconstruction-procedure}
\subsubsection{Discretization of the Euler-$C$-$W$ system}
We now describe the simple WENO-based space discretization scheme used for the Euler-$C$-$W$ system \eqref{EulerC}.
We use a formally fifth-order WENO reconstruction procedure together with upwinding,
based on the sign of the velocity at the cell edges. We stress that the WENO-type discretization we
use is highly simplified, and is not meant to be representative of the class of full WENO solvers.
However, we note that, for certain problems, our simplified WENO-type discretization produces
solutions with similar errors and convergence rates to those produced using a standard WENO scheme
(see \S{\ref{sec:comparison-with-other-schemes}}).
The spatial domain $x_1\leq x \leq x_M$ is subdivided into $M$ equally sized cells of width $\Delta x$, where
the left-most and right-most cells are centered on the left and right boundaries, respectively. We
denote the cell centers by $x_i$ for $i =1,\ldots,M$, and the cell edges with the fractional index
$$
x_{i+\frac{1}{2}} = \frac{ x_i + x_{i+1}}{2} , \text{ for } i=1,\ldots,M-1 \,.
$$
\begin{figure}
\centering
\begin{tikzpicture}
\draw[ultra thick,dashed] (-3.5,3) -- (-3.5,0);
\draw[ultra thick,dashed] (3.5,3) -- (3.5,0);
\filldraw [lightgray] (-4,2) --(4,2) -- (4,1) -- (-4,1);
\draw[very thick] (-4,2) -- (4,2) -- (4,1)--(-4,1)--cycle;
\draw[ultra thick,dashed] (-7,2) -- (-4,2);
\draw[ultra thick,dashed] (7,2) -- (4,2);
\draw[ultra thick,dashed] (-7,1) -- (-4,1);
\draw[ultra thick,dashed] (7,1) -- (4,1);
\draw[ultra thick,dashed] (-7,2) -- (-7,1);
\draw[ultra thick,dashed] (7,2) -- (7,1);
\draw[very thick] (-5,2) -- (-5,1);
\draw[very thick] (-6,2) -- (-6,1);
\draw[very thick] (5,2) -- (5,1);
\draw[very thick] (6,2) -- (6,1);
\draw[very thick] (3,2) -- (3,1);
\draw[very thick] (-3,2) -- (-3,1);
\draw[very thick] (2,2) -- (2,1);
\draw[very thick] (-2,2) -- (-2,1);
\filldraw [black] (-6.5,1.5) circle (2pt);
\filldraw [black] (6.5,1.5) circle (2pt);
\filldraw [black] (-3.5,1.5) circle (2pt);
\filldraw [black] (3.5,1.5) circle (2pt);
\filldraw [black] (5.5,1.5) circle (2pt);
\filldraw [black] (-5.5,1.5) circle (2pt);
\filldraw [black] (4.5,1.5) circle (2pt);
\filldraw [black] (-4.5,1.5) circle (2pt);
\filldraw [black] (2.5,1.5) circle (2pt);
\filldraw [black] (-2.5,1.5) circle (2pt);
\node at (-6.5,1.2) {\footnotesize $x_{-2}$};
\node at (-5.5,1.2) {\footnotesize $x_{-1}$};
\node at (-4.5,1.2) {\footnotesize $x_0$};
\node at (-3.5,1.2) {\footnotesize $x_1$};
\node at (-2.5,1.2) {\footnotesize $x_2$};
\node at (2.5,1.2) {\footnotesize $x_{M-1}$};
\node at (3.5,1.2) {\footnotesize $x_M$};
\node at (4.5,1.2) {\footnotesize $x_{M+1}$};
\node at (5.5,1.2) {\footnotesize $x_{M+2}$};
\node at (6.5,1.2) {\footnotesize $x_{M+3}$};
\node at (0,1.5) {$\ldots$};
\node at (-3.5,3.5) {left boundary};
\node at (3.5,3.5) {right boundary};
\draw[very thick,decoration={brace,amplitude=10pt,mirror,raise=10pt},decorate]
(-7,1) -- node[below=20pt] {Ghost cells} (-4,1);
\draw[very thick,decoration={brace,amplitude=10pt,mirror,raise=10pt},decorate]
(4,1) -- node[below=20pt] {Ghost cells} (7,1);
\end{tikzpicture}
\caption{The grid, together with ghost cells, for the WENO-$C$-$W$ algorithm.}\label{fig:cells}
\end{figure}
Any quantity evaluated at a cell center $x_i$ shall be denoted by $w_i$, and a quantity evaluated
at a cell edge $x_{i+\frac{1}{2}}$ is denoted by $w_{i+\frac{1}{2}}$.
Given a vector $w_i$ corresponding to cell-center values, and vectors $z_{i-\frac{1}{2}}$ and
$z_{i+\frac{1}{2}}$ corresponding to cell edge values, we define the $j^{\text{th}}$ component
by
$$
\left[ \operatorname{WENO}(w_i,z_{i \pm \frac{1}{2}}) \right]_j = \frac{1}{\Delta x} \left( \tilde{w}_{j+\frac{1}{2}} z_{j+\frac{1}{2}} - \tilde{w}_{j-\frac{1}{2}} z_{j-\frac{1}{2}} \right),
$$
where the cell-edge values $\tilde{w}_{j+\frac{1}{2}}$ are calculated using a standard fifth-order
WENO reconstruction procedure (see \cite{JiangShu1996}, \cite{Shu2003})
with upwinding based on the sign of $z_{j+\frac{1}{2}}$.
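For concreteness, the following sketch implements the standard Jiang--Shu fifth-order WENO reconstruction of the left-biased edge value $\tilde{w}_{j+\frac{1}{2}}$ from five cell values; on smooth (here, linear) data it reproduces the edge value exactly, while near a discontinuity the nonlinear weights select the smoothest stencil.

```python
import numpy as np

# Standard (Jiang-Shu) fifth-order WENO reconstruction of the left-biased
# edge value w_{i+1/2} from w_{i-2}, ..., w_{i+2}; used when the upwind
# velocity at the edge is positive (the right-biased case mirrors this).
def weno5_edge(wm2, wm1, w0, wp1, wp2, eps=1e-6):
    # Three third-order candidate reconstructions at x_{i+1/2}.
    p0 = (2.0 * wm2 - 7.0 * wm1 + 11.0 * w0) / 6.0
    p1 = (-wm1 + 5.0 * w0 + 2.0 * wp1) / 6.0
    p2 = (2.0 * w0 + 5.0 * wp1 - wp2) / 6.0
    # Smoothness indicators.
    b0 = 13.0 / 12.0 * (wm2 - 2.0 * wm1 + w0) ** 2 \
        + 0.25 * (wm2 - 4.0 * wm1 + 3.0 * w0) ** 2
    b1 = 13.0 / 12.0 * (wm1 - 2.0 * w0 + wp1) ** 2 \
        + 0.25 * (wm1 - wp1) ** 2
    b2 = 13.0 / 12.0 * (w0 - 2.0 * wp1 + wp2) ** 2 \
        + 0.25 * (3.0 * w0 - 4.0 * wp1 + wp2) ** 2
    # Nonlinear weights biased toward the smoothest stencils.
    a0 = 0.1 / (eps + b0) ** 2
    a1 = 0.6 / (eps + b1) ** 2
    a2 = 0.3 / (eps + b2) ** 2
    return (a0 * p0 + a1 * p1 + a2 * p2) / (a0 + a1 + a2)
```

For linear data the reconstruction is exact; across a step, the weight of the stencil containing the jump is essentially zero.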
Then, defining the vectors $\bm{u} = [\rho,\rho u,E]^T$ and
$\bm{C} = [C,C_w]^{T}$, we now construct the
operators $\mathcal{A}_{\operatorname{WENO}}$ and $\mathcal{B}_{\operatorname{WENO}}$ as
\begin{equation}\label{A-weno}
\left[ \mathcal{A}_{\operatorname{WENO}}(\bm{u}_i,\bm{C}_i) \right] =
\begin{bmatrix}
\left[ \operatorname{WENO}\left( \rho_i, \hat{u}_{i \pm \frac{1}{2}} \right) \right]_i \\[1.5em]
\left[ \operatorname{WENO}\left( (\rho u )_i, \hat{u}_{i \pm \frac{1}{2}} \right) \right]_i + \tilde{\partial_4} p_i - \mathcal{B}^{(u)}(t) \cdot \frac{ \tilde{\partial_{C}} \left(u_{i+\frac{1}{2}}\right) - \tilde{\partial_{C}} \left( u_{i-\frac{1}{2}} \right) }{\Delta x} \\[1.5em]
\left[ \operatorname{WENO}\left( (E + p )_i, \hat{u}_{i \pm \frac{1}{2}} \right) \right]_i - \mathcal{B}^{(E)}(t) \cdot \frac{ \tilde{\partial_{C}} \left( (E/\rho)_{i+\frac{1}{2}} \right) - \tilde{\partial_{C}} \left( (E/\rho)_{i-\frac{1}{2}} \right) }{\Delta x}
\end{bmatrix}
\end{equation}
and
\begin{equation}\label{B-weno}
\left[ \mathcal{B}_{\operatorname{WENO}}(\bm{u}_i,\bm{C}_i) \right] =
\begin{bmatrix}
\frac{S(\bm{u}_i)}{\varepsilon \Delta x} \left\{ C_i - G_i \right\} + \frac{\tilde{\partial_S} C_{i+\frac{1}{2}} - \tilde{\partial_S} C_{i-\frac{1}{2}}}{\Delta x} \\[1.5em]
\frac{S(\bm{u}_i)}{\varepsilon_{w} \Delta x} \left\{ \left[C_w\right]_i - G_i \right\} + \frac{\tilde{\partial_S} \left[ C_w\right]_{i+\frac{1}{2}} - \tilde{\partial_S} \left[ C_w\right]_{i-\frac{1}{2}}}{\Delta x}
\end{bmatrix}.
\end{equation}
Here, we have used the notation $\tilde{\partial_4} p_i$ to denote the fourth-order central
difference approximation for the derivative of the pressure at the cell center $x_i$:
$$
\tilde{\partial_4} p_i = \frac{p_{i-2} - 8p_{i-1} + 8p_{i+1}-p_{i+2}}{12 \cdot \Delta x}.
$$
The cell-edge velocities $\hat{u}_{i \pm \frac{1}{2}}$ used for upwinding
are calculated using a fourth-order averaging:
$$
\hat{u}_{i-\frac{1}{2}} = \frac{-u_{i-2} + 7u_{i-1} + 7u_i - u_{i+1}}{12} \,.
$$
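For concreteness, these stencils can be sketched in vectorized Python (an illustrative sketch of ours, not the authors' code; 0-based NumPy arrays are assumed):

```python
import numpy as np

def d4_central(p, dx):
    # Fourth-order central difference (p_{i-2} - 8p_{i-1} + 8p_{i+1} - p_{i+2}) / (12 dx),
    # returned at the interior cell centers that have a two-cell stencil on each side.
    return (p[:-4] - 8.0 * p[1:-3] + 8.0 * p[3:-1] - p[4:]) / (12.0 * dx)

def edge_average(u):
    # Fourth-order edge average (-u_{i-2} + 7u_{i-1} + 7u_i - u_{i+1}) / 12,
    # returned at the interior cell edges x_{i-1/2}.
    return (-u[:-3] + 7.0 * u[1:-2] + 7.0 * u[2:-1] - u[3:]) / 12.0
```

A useful consistency check: differencing the edge averages recovers the fourth-order central difference, $(\hat{u}_{i+\frac{1}{2}} - \hat{u}_{i-\frac{1}{2}})/\Delta x = \tilde{\partial_4} u_i$, so the two formulas are the flux form of one another.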
We have also used the notation $\tilde{\partial_C} \left( w_{i+\frac{1}{2}} \right)$ and $\tilde{\partial_S} C_{i+\frac{1}{2}}$ to denote
\begin{align*}
\tilde{\partial_C} \left( w_{i+\frac{1}{2}} \right) &= \rho_{i+\frac{1}{2}} C_{i+\frac{1}{2}} \tilde{\partial} w_{i+\frac{1}{2}}, \\
\tilde{\partial_S} C_{i+\frac{1}{2}} &= \kappa \Delta x \, S(\bm{u}_i) \, \tilde{\partial}C_{i+\frac{1}{2}},
\end{align*}
respectively. Here, the notation $z_{i+\frac{1}{2}}$ denotes a quantity calculated at the
cell edge $x_{i+\frac{1}{2}}$ using the standard averaging
$$
z_{i+\frac{1}{2}} = \frac{z_i + z_{i+1}}{2},
$$
while the quantity $\tilde{\partial} w_{i+\frac{1}{2}}$ denotes the central difference
approximation for $\partial_x w$ at the cell edge $x_{i+\frac{1}{2}}$,
$$
\tilde{\partial} w_{i+\frac{1}{2}} = \frac{w_{i+1}-w_i}{\Delta x}.
$$
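The artificial viscosity flux divergence $\big(\tilde{\partial_C}(w_{i+\frac{1}{2}}) - \tilde{\partial_C}(w_{i-\frac{1}{2}})\big)/\Delta x$ appearing in \eqref{A-weno} can then be assembled as follows (again an illustrative sketch of ours, with 0-based arrays):

```python
import numpy as np

def dC_edge(rho, C, w, dx):
    # Edge flux  rho_{i+1/2} C_{i+1/2} (w_{i+1} - w_i)/dx  at each cell edge,
    # with edge values obtained by the standard two-point average.
    rho_e = 0.5 * (rho[:-1] + rho[1:])
    C_e = 0.5 * (C[:-1] + C[1:])
    return rho_e * C_e * (w[1:] - w[:-1]) / dx

def diffusion_divergence(rho, C, w, dx):
    # Conservative divergence (flux_{i+1/2} - flux_{i-1/2}) / dx at interior cells.
    flux = dC_edge(rho, C, w, dx)
    return (flux[1:] - flux[:-1]) / dx
```

With $\rho \equiv C \equiv 1$ this reduces to the standard three-point Laplacian, which is exact for quadratic data.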
Now, given $\bm{u}^n$ at a time $t = t_n = n \Delta t$, we evolve the solution as follows:
\begin{subequations}\label{Euler-semi-discrete}
\begin{align}
\bm{{u}}_i^{n+1} &= \text{RK}\left( \bm{{u}}_i^{n}, \mathcal{A}_{\operatorname{WENO}}(\bm{{u}}_i^{n},\bm{{C}}_i^n) \right) \,, \\
\bm{{C}}_i^{n+1} &=\text{RK}\left( \bm{{C}}_i^{n} ,\mathcal{B}_{\operatorname{WENO}}(\bm{{u}}_i^{n},\bm{{C}}_i^n) \right) \,,
\end{align}
\end{subequations}
where RK denotes the explicit fourth-order Runge-Kutta time-integration method.
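For reference, one classical fourth-order Runge-Kutta step for an autonomous system $y' = f(y)$ reads as follows (a generic sketch, not the authors' code; in the scheme above, $f$ stands for the spatial operators $\mathcal{A}_{\operatorname{WENO}}$ or $\mathcal{B}_{\operatorname{WENO}}$, re-evaluated at each stage):

```python
def rk4_step(y, f, dt):
    # One classical fourth-order Runge-Kutta step for y' = f(y).
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```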
\subsubsection{Discretization of boundary conditions and ghost node values}
Boundary conditions for the functions $C$ and $C_w$ are imposed by assigning so-called \textit{ghost node} values. More precisely, the ghost node values for the functions $C$ and $C_w$ are prescribed via an even extension:
\begin{equation}\label{C-ghost-node}
C_{1-k} = C_{1+k} \quad \text{and} \quad C_{M+k} = C_{M-k}
\end{equation}
for $k=1,\ldots,M_g$, where $M_g$ is the number of ghost nodes. For our (formally) fifth-order
WENO scheme, $M_g = 3$.
The associated boundary conditions for the conservative variables are also imposed via the {ghost node}
conditions. For the Dirichlet boundary condition, an odd extension is used, while for the Neumann boundary condition, an even extension is used. More precisely, suppose that we wish to impose the free-flow boundary
conditions \eqref{var-bcs-alternate}. This is done by choosing the ghost node values as
\begin{subequations}\label{ghost-node-alternate}
\begin{alignat}{3}
\rho_{1-k} &= \rho_{1+k} \quad &&\text{and} & \quad \rho_{M+k} &= \rho_{M-k},\\
\rho u_{1-k} &= \rho u_{1+k} \quad &&\text{and} & \quad \rho u_{M+k} &= \rho u_{M-k},\\
E_{1-k} &= E_{1+k} \quad &&\text{and} & \quad E_{M+k} &= E_{M-k},
\end{alignat}
\end{subequations}
for $k=1,\ldots,M_g$.
The solid wall boundary conditions \eqref{var-bcs} are imposed
by replacing the even extension of the momentum in \eqref{ghost-node-alternate} with the odd extension
of the momentum:
\begin{subequations}\label{ghost-node}
\begin{alignat}{3}
\rho_{1-k} &= \rho_{1+k} \quad &&\text{and} & \quad \rho_{M+k} &= \rho_{M-k},\\
\rho u_{1-k} &= -\rho u_{1+k} \quad &&\text{and} & \quad \rho u_{M+k} &= -\rho u_{M-k},\\
E_{1-k} &= E_{1+k} \quad &&\text{and} & \quad E_{M+k} &= E_{M-k},
\end{alignat}
\end{subequations}
for $k=1,\ldots,M_g$. Again, it is easy to verify that the density $\rho$ and the energy $E$ satisfy
the homogeneous Neumann boundary condition in \eqref{var-bcs}. To verify that
the momentum satisfies the homogeneous Dirichlet boundary condition, we
need to use the momentum equation in the semi-discrete form \eqref{Euler-semi-discrete}. Suppose that at time-step $n$, the velocity at the boundaries vanishes:
$u_{M}^n = u_1^n = 0$.
For simplicity, we restrict to the right boundary in cell $x_M$.
The even extensions of $\rho$ and $C$ and the
odd extension of $u$ mean that the diffusion term on the right-hand side of the momentum equation vanishes
since
\begin{align*}
\tilde{\partial_{C}} \left( u_{M+\frac{1}{2}} \right) &= \rho_{M+\frac{1}{2}} \cdot C_{M+\frac{1}{2}} \cdot \frac{\left(u_{M+1}-u_M\right)}{\Delta x} \\
&= \rho_{M-\frac{1}{2}} \cdot C_{M-\frac{1}{2}} \cdot \frac{\left(-u_{M-1}+u_M\right)}{\Delta x}
=\tilde{\partial_{C}} \left( u_{M-\frac{1}{2}} \right).
\end{align*}
Moreover, since the pressure $p$ is evenly extended, the derivatives at the
boundaries, $\tilde{\partial_4} p_M$ and $\tilde{\partial_4} p_1$, also vanish. One can also check
that the derivative of the flux term at the boundary vanishes:
$
\left[ \operatorname{WENO}\left( (\rho u)_i, \hat{u}_{i\pm\frac{1}{2}} \right) \right]_M = 0$.
This means that $\partial_t (\rho u) = 0$ at the boundaries, so that momentum satisfies
$\rho u = 0$ at the boundaries for $t\ge 0$,
provided that the initial momentum vanishes on the boundaries.
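The even and odd ghost-node extensions can be sketched in a few lines (our illustration; `parity=+1` gives the even/Neumann extension and `parity=-1` the odd/Dirichlet one, with $M_g$ ghost cells per side):

```python
import numpy as np

def extend(q, Mg, parity):
    # Pad q with Mg ghost cells on each side:
    #   q_{1-k} = parity * q_{1+k}  and  q_{M+k} = parity * q_{M-k},  k = 1..Mg.
    left = parity * q[1:Mg + 1][::-1]
    right = parity * q[-Mg - 1:-1][::-1]
    return np.concatenate([left, q, right])
```

With the odd extension of the momentum and even extensions of $\rho$, $E$, and $C$, the cancellations displayed above for $\tilde{\partial_{C}}(u_{M \pm \frac{1}{2}})$ follow directly.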
\subsection{Using WENO-$C$-$W$ for the Sod shock-wall collision problem}
The reflection of a shock wave from a fixed wall was first considered in \cite{courant1999supersonic}
from a theoretical viewpoint (see also \cite{Alpher1954,meyer1957}).
Further investigations in
\cite{Noh1987,DONAT199642} were done primarily in the context of the
wall-heating phenomenon (to be discussed below).
The reflection of a shock-wave from a non-rigid boundary was considered in
\cite{mazor1992head,Igra1992}, wherein an artificial viscosity method was utilized to stabilize the solution.
As a motivating example, we first consider the classical Sod shock tube experiment. This is a Riemann problem
on the domain $0 \leq x \leq 1$, with initial data given by
\begin{equation}\label{sod_initialdata}
\begin{bmatrix}
\rho_0 \\ (\rho u)_0 \\ E_0
\end{bmatrix}
=
\begin{bmatrix}
1 \\ 0 \\ 2.5
\end{bmatrix}
\mathbbm{1}_{[0,\frac{1}{2})}(x)
+
\begin{bmatrix}
0.125 \\ 0 \\ 0.25
\end{bmatrix}
\mathbbm{1}_{[\frac{1}{2},1]}(x) \text{ and } \gamma=1.4\,,
\end{equation}
where $\mathbbm{1}_{[a,b)}(x)$ denotes the indicator function on the interval $a \leq x < b$.
The solution consists of a rarefaction wave, a contact discontinuity, and a shock wave.
The shock propagating to the right collides with the wall, modeled by the point $x=1$, at time $t \approx 0.28$.
In Fig.\ref{fig:sod-before-collision1}, we show the success of the WENO-$C$ method
for this problem prior to the collision of the shock wave with the wall at $x=1$; however, as shown
in Fig.\ref{fig:sod-after-collision1}, the WENO-$C$ scheme (without the addition of the wall function
$\overline{C}_w(t)$)
is not sufficient to remove spurious oscillations post shock collision in the case of small $\beta=0.5$.
On the other hand, by setting
$\beta=4.0$, the velocity is mostly free of post shock-wall collision oscillations at $t=0.36$, at the expense of an overly diffused
shock profile prior to shock-wall collision at $t=0.2$. Moreover, for more difficult problems, such as the
LeBlanc problem considered in \S\ref{subsec:leblanc}, very
precise choices of the artificial viscosity parameters are
required to maintain stability and correct wave speeds. Consequently, it is difficult to choose $\beta$ such that
the solutions both pre and post shock-wall collision are accurate and noise-free.
The use of the wall viscosity provides an effective solution to this dilemma.
\begin{figure}[H]
\centering
\subfigure[$t=0.20$]{\label{fig:sod-before-collision1}\includegraphics[width=75mm]{sod-before-collision1}}
\subfigure[$t=0.36$]{\label{fig:sod-after-collision1}\includegraphics[width=75mm]{sod-after-collision1}}
\caption{The velocity profile for the Sod shock tube problem, calculated using our
WENO-$C$ scheme with 201 cells. The blue and red curves are the velocity profiles and the dashed
green curve is the exact solution.}
\end{figure}
\subsubsection{An explanation of the temporal bump function $\overline{C}_w(t)$}
We now explain the use of the new $C_w(x,t)$ function together with the
temporal bump function $\overline{C}_w(t)$. We shall assume, for simplicity, that the shock wave
is traveling to the right, so that the shock wave collides with the wall $x = 1$.
Thanks to the homogeneous Neumann
boundary condition $\partial_x C_w=0$ at the wall $x=1$, there is a smooth
growth (in time) of the amplitude of $C_w(1,t)$ just prior to shock-wall collision,
followed by a smooth decrease of amplitude during shock bounce-back.
In Fig.\ref{fig:wallvisc-expln}, we illustrate the
WENO-$C$-$W$ scheme as applied to Sod.
While the shock is away from the wall, $C_w(1,t)$ is zero, and
thus by formula \eqref{wall-ind-fn} so is $\overline{C}_w(t)$; see
the purple curve in Fig.\ref{fig:wallvisc-expln1}.
As the shock approaches the wall (as shown in Fig.\ref{fig:wallvisc-expln2}), the
Neumann boundary condition for the $C_w$-equation ensures that
$\overline{C}_w(t)$ increases smoothly, until it reaches a maximum
when the shock collides with the wall (Fig.\ref{fig:wallvisc-expln3}), before smoothly
decreasing back to zero as the shock moves away from the wall (Fig.\ref{fig:wallvisc-expln4}).
In Fig.\ref{fig:wallvisc-expln-alt}, we plot the graph of $\overline{C}_w(t)$.
The localized nature of the temporal bump function $\overline{C}_w(t)$ means that the extra viscosity,
given by $\beta_w$ in \eqref{artificial_visc},
is added only during shock-wall collision and bounce-back; prior to collision, no extra viscosity is added and
the solution is consequently not overly diffused.
In \S \ref{sec:simulations}, we apply the
WENO-$C$-$W$ scheme to a number of different shock tube problems for shock collision and bounce-back.
\begin{figure}[H]
\centering
\subfigure[$t=0.200$]{\label{fig:wallvisc-expln1}\includegraphics[width=75mm]{wallvisc-expln1}}
\subfigure[$t=0.272$]{\label{fig:wallvisc-expln2}\includegraphics[width=75mm]{wallvisc-expln2}}
\subfigure[$t=0.296$]{\label{fig:wallvisc-expln3}\includegraphics[width=75mm]{wallvisc-expln3}}
\subfigure[$t=0.360$]{\label{fig:wallvisc-expln4}\includegraphics[width=75mm]{wallvisc-expln4}}
\caption{The velocity profile for the Sod shock tube problem, calculated using our
WENO-$C$-$W$ scheme with 201 cells. The blue curve is the velocity profile and the dashed
green curve is the exact solution. The red curve is the (normalized and resized) function
$C_w(x,t)$.}
\label{fig:wallvisc-expln}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[$\overline{C}_w(t)$]{\label{fig:wallvisc-expln5}\includegraphics[width=75mm]{wallvisc-expln5}}
\subfigure[zooming in on $\overline{C}_w(t)$ during shock collision]{\label{fig:wallvisc-expln6}\includegraphics[width=75mm]{wallvisc-expln6}}
\caption{The wall indicator function $\overline{C}_w(t)$ for the
Sod shock tube problem. The function is zero when the shock is away from the wall, increases
smoothly as the shock approaches the wall, and reaches a maximum when the shock
collides with the wall, before decreasing smoothly as the shock moves away from the wall.}
\label{fig:wallvisc-expln-alt}
\end{figure}
\subsubsection{A generalization of our algorithm to shock-shock collision problems}
We remark here that a shock hitting a wall is simply a special case of shock-shock collision;
indeed, the
shock-wall collision problem may be viewed as the collision between two identical shocks whose shock
speeds have opposite signs. A simple generalization of the Euler-$C$-$W$ algorithm which allows for arbitrary shock-shock collision
is obtained by
redefining the temporal bump function \mbox{\eqref{wall-ind-fn}} with the new function
\begin{equation}\label{wall-ind-fn-general}
\overline{C}_w(t) = \sum_{i} \frac{C_w(x^*_i,t)}{\max_{x} C_w(x,t)},
\end{equation}
where $x^*_i(t)$ denotes the time-dependent local minima of the function $C_w(x,t)$ and approximates the location of the
shock-shock collision (at the collision time).
The functions $x^*_i(t)$
are analogous to the time-independent wall location $x_M$ in the shock-wall collision
problem (where the location of the collision is predetermined).
Fig.{\ref{fig:shock-shock-collision}} shows the density function during shock-shock collision. Also shown is the temporal bump function $\overline{C}_w$,
which
naturally increases as two shock waves approach one another, and provides a natural method for the addition of spacetime smooth
additional artificial viscosity during the shock-shock collision process. As can be seen, the two shocks collide at $t =0.192$, at which time
the function $\overline{C}_w$ achieves its maximum value. We examine this problem of shock-shock collision in great detail in {\cite{RaReSh2019}}.
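On a grid, one way to evaluate \eqref{wall-ind-fn-general} is to locate the strict interior local minima of $C_w$ and normalize by the maximum; the sketch below is our illustration (function names are ours, not part of the published scheme):

```python
import numpy as np

def local_minima(c):
    # Indices i with c[i-1] > c[i] < c[i+1] (strict interior local minima),
    # serving as the discrete collision points x*_i.
    c = np.asarray(c, dtype=float)
    return np.where((c[1:-1] < c[:-2]) & (c[1:-1] < c[2:]))[0] + 1

def temporal_bump(c):
    # Sum of C_w at the collision points, normalized by max_x C_w.
    idx = local_minima(c)
    return float(np.sum(c[idx]) / np.max(c))
```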
\begin{figure}[H]
\centering
\subfigure[$t=0.160$]{\label{fig:shock-shock1}\includegraphics[width=75mm]{shock-shock1}}
\subfigure[$t=0.184$]{\label{fig:shock-shock2}\includegraphics[width=75mm]{shock-shock2}}
\subfigure[$t=0.192$]{\label{fig:shock-shock3}\includegraphics[width=75mm]{shock-shock3}}
\subfigure[$\overline{C}_w(t)$]{\label{fig:shock-shock4}\includegraphics[width=75mm]{shock-shock4}}
\caption{The density profile for a non-identical shock-shock collision problem.
The blue curve is the density profile and the purple curve in Fig.\ref{fig:shock-shock1}-\ref{fig:shock-shock3}
is the (normalized and resized) function $C_w(x,t)$. The red curve in Fig.\ref{fig:shock-shock4} is the
temporal bump function $\overline{C}_w(t)$.}
\label{fig:shock-shock-collision}
\end{figure}
\section{A wavelet-based noise indicator: the WENO-$C$-$W$-$N$ method}\label{sec:noiseind}
Numerical solutions of gas dynamics often develop
high-frequency noise.
These (often small amplitude) spurious oscillations can occur if the time-step is too large
or because of the smearing of
contact discontinuities. Large time-step noise can be seen
with any explicit numerical scheme, while noise in the velocity field at the contact discontinuity is illustrated
in Fig.\ref{fig:noise-at-contact} for the Sod problem.
This noise is caused by the slightly different slopes that the momentum and
density profiles have at the contact discontinuity.
\begin{figure}[H]
\centering
\subfigure[velocity profile at $t = 0.20$]{\label{fig:noise-at-contact1}\includegraphics[width=75mm]{noise-at-contact1}}
\subfigure[zooming in on the noise]{\label{fig:noise-at-contact2}\includegraphics[width=75mm]{noise-at-contact2}}
\caption{The velocity profile for the Sod shock tube problem, calculated using our WENO-$C$
scheme with 201 cells. The blue curve is the velocity profile and the dashed green curve
is the exact solution. There is noise in the region $x \in [0.65,0.75]$. This is the location
of the contact discontinuity in the density and momentum profiles.}
\label{fig:noise-at-contact}
\end{figure}
To deal with the occurrence of spurious noise, we implement a localized {\it{wavelet}}-based noise indicator.
Wavelets were first used in fluid dynamics in the analysis of turbulence by Farge \cite{Farge1992} and Meneveau \cite{Meneveau1991}. They have also been used in the numerical solution of PDE
on adaptive grids (see the review paper
\cite{SchVas2010}).
With regards to noise detection and removal, wavelets have
generally been used in the form of a nonlinear {\it{filter}}, in which
a noisy function is first decomposed using wavelets, and
the function is then
{\it{de-noised}} by retaining only the low-frequency components. Such filtering techniques often over-smooth
the noisy data, or introduce additional Gibbs-like oscillations \cite{Coifman1995}.
The main novelty of our approach is the use
of wavelets only for high-frequency noise detection, while noise removal is achieved by a highly localized heat equation approach.
\subsection{Construction of wavelets}
A wavelet is similar to a traditional wave (such as a sine or cosine wave), but is localized in space, i.e., it
has compact support. We define a \emph{mother wavelet}
$\psi(x) = \psi_{0,0}(x)$ that represents the lowest frequency oscillation, and then use a dyadic
scaling and integral translation to produce wavelets of higher frequencies:
$$
\psi_{r,s}(x) = 2^{r/2}\psi(2^rx-s); \,\, r = 0,1,2,\ldots \text{ and } s = \pm 1, \pm 3, \ldots, \pm (2^r-1).
$$
Suppose that the spatial domain is given by $x_1 \leq x \leq x_M$. For our purposes, there
are two key properties that the wavelet family $\{ \psi_{r,s} \}$ needs to satisfy:
\begin{enumerate}
\item Zero mean:
$$
\int_{x_1}^{x_M} \psi(x) \,\mathrm{d}x = 0.
$$
Note that due to the dyadic scaling and integral translation, this condition also ensures that
wavelets of higher frequency have zero mean.
\item ``Quasi-orthogonality'' of the form:
$$
\int_{x_1}^{x_M} \psi_{r,s}(x) \cdot \psi_{r,s'}(x) \,\mathrm{d}x = 0, \text{ for } r \geq 0 \text{ and } 0 \leq s, s' \leq 2^r - 1 \text{ with } s \neq s'.
$$
That is, each wavelet is orthogonal to every other wavelet of the same frequency. This is to
ensure that one can locate exactly where each frequency is active.
\end{enumerate}
We define our wavelets to take the form shown in Fig.\ref{fig:highest-frequency-wavelet}.
Since we are only interested in the highest frequency noise, we provide the exact formula for the
highest frequency wavelet as
\begin{equation}\label{highest-freq-wavelet}
\psi_i (x) = \left\{ \begin{alignedat}{5}
&-\frac{a}{\Delta x} (x-x_{2i-1}), \quad & \text{if} && \quad x_{2i-1} &\leq x \leq \,\, && x_{2i-\frac{1}{2}} \\[0.5em]
&+\frac{3a}{\Delta x} (x-x_{2i}) + a, \quad & \text{if} && \quad x_{2i-\frac{1}{2}} &\leq x \leq && x_{2i} \\[0.5em]
&-\frac{3a}{\Delta x} (x-x_{2i}) + a, \quad & \text{if} && \quad x_{2i} &\leq x \leq && x_{2i+\frac{1}{2}} \\[0.5em]
&+\frac{a}{\Delta x} (x- x_{2i+1}), \quad & \text{if} && \quad x_{2i+\frac{1}{2}} & \leq x \leq && x_{2i+1}
\end{alignedat}\right.\vspace{1em}
\end{equation}
for each $i = 1, 2, \ldots, \frac{M-1}{2}$, where the notation $x_{k+\frac{1}{2}}$ denotes the midpoint of
$x_k$ and $x_{k+1}$.
It is clear from formula \eqref{highest-freq-wavelet} that
each $\psi_i$ is supported in the interval $\mathcal{I}_i \coloneqq [x_{2i-1},x_{2i+1}]$.
\begin{figure}[H]
\centering
\scalebox{.5}{\begin{tikzpicture}
\filldraw [black] (-5,0) circle (2pt);
\filldraw [black] (0,0) circle (2pt);
\filldraw [black] (5,0) circle (2pt);
\draw[- >,very thick,dashed] (-7,0) -- (7,0);
\node at (6.7,0.3) {\huge $x$};
\draw[blue,very thick] (-5,0) -- (-2.5,-2) -- (0,4) -- (2.5,-2) -- (5,0);
\node at (-5,0.5) {\huge $x_{2i-1}$};
\node at (0,0.5) {\huge $x_{2i}$};
\node at (5,0.5) {\huge $x_{2i+1}$};
\end{tikzpicture}}
\caption{The highest frequency wavelet $\psi_i$.}
\label{fig:highest-frequency-wavelet}
\end{figure}
The constant $a \coloneqq \sqrt{3/ \Delta x}$ in \eqref{highest-freq-wavelet}
is a normalization factor to ensure that
the wavelets have $L^2$ norm equal to 1. Since the highest frequency wavelets have
disjoint supports, it is obvious that the quasi-orthogonality property is satisfied.
One can also check that the zero mean property is satisfied.
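Both properties can be verified numerically for \eqref{highest-freq-wavelet}; the sketch below (ours, not the authors' code) writes the wavelet in the local coordinate $s = x - x_{2i}$ and checks the zero mean and unit $L^2$ norm with the trapezoid rule:

```python
import numpy as np

def psi(s, dx):
    # Highest-frequency wavelet in the local coordinate s = x - x_{2i};
    # supported on [-dx, dx], with normalization a = sqrt(3/dx).
    a = np.sqrt(3.0 / dx)
    y = np.zeros_like(s)
    m1 = (s >= -dx) & (s < -0.5 * dx)
    m2 = (s >= -0.5 * dx) & (s < 0.0)
    m3 = (s >= 0.0) & (s < 0.5 * dx)
    m4 = (s >= 0.5 * dx) & (s <= dx)
    y[m1] = -(a / dx) * (s[m1] + dx)
    y[m2] = (3.0 * a / dx) * s[m2] + a
    y[m3] = -(3.0 * a / dx) * s[m3] + a
    y[m4] = (a / dx) * (s[m4] - dx)
    return y

def trapz(y, h):
    # Trapezoid rule with uniform spacing h.
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
```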
\subsection{High-frequency noise detection}\label{sec:how-does-noise-ind-work}
Given a discretized spatial domain,
the highest frequency wavelet is supported over two grid cells and is shown in Fig.\ref{fig:highest-frequency-wavelet}.
There are $\frac{M-1}{2}$ two-cell intervals in the computational domain. Each two-cell interval is denoted by
$ \mathcal{I} _j$, and there is a highest frequency wavelet $\psi_j(x)$ corresponding to each
$ \mathcal{I} _j$ for every $j=1,...,\frac{M-1}{2}$.
For a given function $f(x)$, we
next compute the {\it wavelet coefficients} $ \mathcal{C} _j(f)$ for this function.
For each $j=1,..., \frac{M-1}{2}$,
$$
\mathcal{C} _j(f) := \langle f, \,\psi_j \rangle_{L^2} = \int_{ \mathcal{I} _j} f(x) \, \psi_j(x) \,\mathrm{d}x\,.
$$
Given the cell-center values $f(x_{2j-1}), f(x_{2j}), f(x_{2j+1})$, we can approximate the given function $f(x)$ on
the interval $\mathcal{I}_j= [x_{2j-1}, x_{2j+1}]$ by a piecewise linear function $\tilde{f}(x)$; in particular,
we define $\tilde{f}(x)$ by linear interpolation of the cell-center values
of $f(x)$. We then approximate the wavelet coefficients by $\mathcal{C}_j(f) \approx \mathcal{C}_j(\tilde f)$, and can compute
\begin{equation}\label{l2-inner-product}
\mathcal{C}_j(\tilde f) = \langle \tilde f, \psi_j \rangle_{L^2} = -\sqrt{\frac{\Delta x}{48}} \cdot \Big[f(x_{2j+1}) - 2f(x_{2j}) + f(x_{2j-1}) \Big] \,.
\end{equation}
Notice that the right-hand side of \eqref{l2-inner-product} is proportional to the second-order central difference
approximation to $f''(x_{2j})$. Also, note that if
$f(x_{2j}) = \frac{1}{2} \left(f(x_{2j+1}) + f(x_{2j-1}) \right)$, i.e., if the
function $\tilde{f}$ is linear on $\mathcal{I}_j$, then the associated wavelet coefficient is
zero. This is crucial in ensuring that only the \emph{highest} frequency noise is detected.
The magnitude of the wavelet coefficients grows with the amplitude of the
high-frequency oscillations. For example, consider the case that $f(x)$ is
a hat function over the interval $\mathcal{I}_j= [x_{2j-1}, x_{2j+1}]$ and that
$f(x_{2j-1}) = f(x_{2j+1}) = 0$. Then the amplitude of the oscillation is given by the magnitude at the peak of the hat, $f(x_{2j})$, and
$|\mathcal{C}_j(f)|$ is proportional
to $f(x_{2j})$. Consequently, $| \mathcal{C}_j(f) | $ grows linearly with the amplitude of the oscillation.
On the other hand, suppose that we have a lower frequency oscillation, given by a hat function
that spans 4 cells, say the intervals $\mathcal{I}_j$ and $\mathcal{I}_{j+1}$. In each of these
intervals, the oscillating function is linear, so that the associated wavelet coefficients $\mathcal{C}_j(f)$
and $\mathcal{C}_{j+1}(f)$ are equal to zero. This illustrates the fact that the highest frequency
wavelets detect \emph{only} the highest frequency noise.
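These observations are easy to reproduce; the following sketch (ours, with 0-based arrays and an odd number of cells) evaluates \eqref{l2-inner-product} over all two-cell intervals, so that linear data yield zero coefficients while a one-interval hat of amplitude $h$ yields $|\mathcal{C}_j| = h\sqrt{\Delta x/12}$:

```python
import numpy as np

def wavelet_coeffs(f, dx):
    # C_j = -sqrt(dx/48) [ f(x_{2j+1}) - 2 f(x_{2j}) + f(x_{2j-1}) ]
    # over the two-cell intervals I_j; f holds the cell-center values.
    ends = f[::2]   # f(x_1), f(x_3), ...  (interval endpoints)
    mids = f[1::2]  # f(x_2), f(x_4), ...  (interval midpoints)
    return -np.sqrt(dx / 48.0) * (ends[1:] - 2.0 * mids + ends[:-1])
```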
\subsection{Noise detection in the presence of a shock wave}\label{sec:noise-detection}
We next examine
the noise detection algorithm, applied to a function $u(x)$ which has a shock discontinuity.
For $j=1,...,(M-1)/2$, we again compute the wavelet coefficients $\mathcal{C}_j(u)$ for each two-cell interval $ \mathcal{I} _j$
according to \eqref{l2-inner-product}. Suppose that the shock discontinuity spans the two-cell interval $ \mathcal{I} _j$; then,
on $ \mathcal{I} _j$ the shock curve is essentially linear and $\mathcal{C}_j(u)=0$, but if the shock is out of phase by one cell with $ \mathcal{I} _j$,
then the wavelet coefficient $\mathcal{C}_j(u)$ can be large (see
Fig.\ref{fig:wavelet-coeff-expln}).
\begin{figure}[H]
\centering
\subfigure[shock is ``in phase'' with the wavelet]{\label{fig:wavelet-zero}\includegraphics[width=75mm]{wavelet-zero}}
\subfigure[shock is ``out of phase'' with the wavelet]{\label{fig:wavelet-large}\includegraphics[width=75mm]{wavelet-large}}
\caption{Wavelet coefficients at the shock curve compared with wavelet coefficients in regions
where there is noise. The blue curve is $u(x)$, and the red dots indicate the relative magnitude
of the associated wavelet coefficient $\mathcal{C}_j(u)$.
The wave profile in Fig.\ref{fig:wavelet-large} is identical
to that in Fig.\ref{fig:wavelet-zero}, but shifted by
one cell to the left.}
\label{fig:wavelet-coeff-expln}
\end{figure}
In order to avoid over-diffusion at the shock, we prevent noise detection near shock discontinuities.
This is achieved by noting that the function $C(x,t)$ attains a local maximum for points $x$ along the shock curve.
Consequently, we locate the local maxima of $C(x,t)$ by
finding the cells for which $\partial_x C = 0$ and $\partial_{xx} C < 0$. We then deactivate the noise detection
in the cells surrounding the shock curve.
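A discrete stand-in for this deactivation (our illustrative sketch; the pad width is our choice, not prescribed by the scheme) flags the cells surrounding each discrete local maximum of $C$:

```python
import numpy as np

def shock_mask(C, pad=3):
    # Flag cells within `pad` cells of a discrete local maximum of C;
    # noise detection is switched off wherever the mask is True.
    C = np.asarray(C, dtype=float)
    peaks = np.where((C[1:-1] > C[:-2]) & (C[1:-1] >= C[2:]))[0] + 1
    mask = np.zeros(C.size, dtype=bool)
    for p in peaks:
        mask[max(0, p - pad):p + pad + 1] = True
    return mask
```

The asymmetric comparison (strict on the left, non-strict on the right) selects one representative cell per flat-topped peak.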
Having deactivated the noise indicator near the discontinuity, the largest wavelet coefficients are
now those where the high-frequency oscillations exist. We may then define the \emph{noise detector function}
$\mathbbm{1}_{\operatorname{noise}}(x)$ as follows: for each $j=1,..., \frac{M-1}{2}$ and
$x \in \mathcal{I} _j$,
we set $\mathbbm{1}_{\operatorname{noise}}(x)=1$ if $ |\mathcal{C} _j(u)| > \mathcal{C} _{\text{ref}} >0$ and set
$\mathbbm{1}_{\operatorname{noise}}(x)=0$ otherwise.
The constant $\mathcal{C}_{\operatorname{ref}}$ is obtained by computing the wavelet coefficient of a standard hat function
on the interval
$[-\Delta x, +\Delta x]$ with amplitude $\delta h$:
\begin{equation}\label{cref}
\mathcal{C}_{\operatorname{ref}} = \delta h \cdot \sqrt{\frac{\Delta x}{12}}\,.
\end{equation}
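Putting the pieces together, the indicator can be sketched in a few self-contained lines (our illustration, with \eqref{l2-inner-product} and \eqref{cref} inlined and the shock-deactivation step omitted for brevity):

```python
import numpy as np

def noise_flags(f, dx, delta_h):
    # Flag the two-cell intervals I_j where |C_j(f)| exceeds C_ref,
    # the coefficient of a two-cell hat of amplitude delta_h.
    ends = f[::2]
    mids = f[1::2]
    coeffs = -np.sqrt(dx / 48.0) * (ends[1:] - 2.0 * mids + ends[:-1])
    c_ref = delta_h * np.sqrt(dx / 12.0)
    return np.abs(coeffs) > c_ref
```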
\subsection{Noise removal algorithm}\label{sec:noise-removal}
Having described the noise detection algorithm, we next propose an efficient scheme for
removing noise from a given function $u(x)$ by
solving a localized heat equation over the collection of intervals $ \mathcal{I} _j$ where high-frequency noise has been detected.
The union of all noisy intervals $ \mathcal{I} _j$ consists of $K$ connected intervals $V_1,..., V_K$. For each set $V_k$, $k=1,...,K$,
we define the set $W_k$ by affixing one cell on the left and one cell on the right.
We then solve a localized
heat equation for the ``de-noised'' solution $w(x,\tau)$ in each of the domains $W_k$:
\begin{subequations}\label{localized-heat-equation}
\begin{alignat}{2}
\partial_\tau {w}(x,\tau) &= \eta \cdot \partial_{xx} {w}(x,\tau), &&\text { for } x \in W_k \text{ and } \tau \geq 0 \,, \\
{w}(x,0) &= u(x), &&\text{ for } x \in W_k \,, \\
{w}(x,\tau) &= u(x), &&\text{ for } x \in \partial W_k \,,
\end{alignat}
\end{subequations}
where $0< \eta \ll 1$ is a small constant, which we refer to as the noise removal viscosity. We set
$w(x,\tau) = u(x)$ for $x \in \left( \bigcup_{k=1}^K W_k \right)^\mathcal{C}$ and $\tau \geq 0$.
We remark that the time $\tau$ is a ``fictitious'' time, introduced for the
diffusion mechanism. Equation (\ref{localized-heat-equation}b) is the initial condition over the intervals where noise has been
detected, and (\ref{localized-heat-equation}c) is a Dirichlet boundary condition ensuring that $w(x,\tau)$ continuously transitions to $u(x)$.
We use an explicit scheme to solve \eqref{localized-heat-equation}, and in practice, one time-step is sufficient to remove noise.
If an explicit time-stepping scheme is used to solve \eqref{localized-heat-equation}, it is
not necessary to construct the domains $W_k$. Instead, one can simply use the
noise indicator function $\mathbbm{1}_{\operatorname{noise}}(x)$, and solve a modified heat equation:
\begin{subequations}
\begin{alignat}{2}
\partial_\tau {w}(x,\tau) &= \eta \cdot \mathbbm{1}_{\operatorname{noise}}(x) \cdot \partial_{xx} {w}(x,\tau), \quad &&\text {for } x_1<x<x_M\text{ and } \tau \geq 0, \label{heat-equation-local1}\\
{w}(x,0) &= u(x), &&\text{for } x_1<x<x_M, \\
{w}(x,\tau) &= u(x), &&\text{for } x=x_1 \text{ and } x=x_M.
\end{alignat}
\end{subequations}
The utilization of an explicit scheme results in the stability constraint
$\eta \Delta \tau/ (\Delta x)^2 < 1/2$.
However, in practice, we have found that much smaller values
$\eta \Delta \tau /(\Delta x)^2 \ll 1/2$ are sufficient to dampen spurious noise. We also remark that
the use of a single time-step means that the noise removal provided by the localized heat equation
can be viewed as a
\emph{filtering} process, in which noise is removed through a local averaging. Consequently, the
averaging provided by the Laplacian term on the right-hand side of {\eqref{heat-equation-local1}},
namely $(w_{i+1}-2w_i+w_{i-1})/2$, can be replaced by other local averages, such as that provided
by Gaussian filtering {\cite{CaCo2004}}.
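A single masked explicit step of {\eqref{heat-equation-local1}} can be sketched as follows (our illustration; `nu` denotes the ratio $\eta \Delta\tau/(\Delta x)^2 < 1/2$ and `noise` holds the per-cell values of $\mathbbm{1}_{\operatorname{noise}}$):

```python
import numpy as np

def denoise_step(u, noise, nu):
    # One explicit step of  w_tau = eta * 1_noise * w_xx,
    # with the endpoint values held fixed (Dirichlet conditions).
    u = np.asarray(u, dtype=float)
    noise = np.asarray(noise, dtype=float)
    w = u.copy()
    lap = u[:-2] - 2.0 * u[1:-1] + u[2:]
    w[1:-1] += nu * noise[1:-1] * lap
    return w
```

With `noise` identically one, a single step acts as the local averaging filter described above.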
However, we wish to stress that there is a distinction between the operation of
\emph{smoothing} a noisy function and the noise removal process we have outlined. While, of course,
removing high-frequency noise does indeed smooth the function, because we remove highly localized
(in space and time) packets of oscillations, the procedure is quite different from more traditional smoothing
algorithms, in which one uses truncation of frequencies in Fourier space or the analogous hyperviscosity
operators in physical space. As such, it is difficult to obtain analytically the truncation error by means of
a Taylor expansion, but it is possible to measure the error improvement by virtue of convergence
studies comparing the algorithm with and without the noise removal algorithm activated. We provide results
of such studies in \S{\ref{sec:simulations}}.
\subsection{The WENO-$C$-$W$-$N$ algorithm}\label{sec-weno-algorithm}
We now describe how we implement the above noise indicator algorithm for the
Euler equations. The algorithm proceeds in two stages; in
the first stage, we use the WENO-$C$-$W$ scheme described in \S\ref{sec-weno-reconstruction-procedure}
to solve for an intermediary
solution $\bm{\tilde{u}} = \left[ \tilde{\rho}, \tilde{\rho u}, \tilde{E} \right]^T$; in the second stage, we
feed this intermediary solution $\bm{\tilde{u}}$ into the noise indicator algorithm to de-noise the solution. The two-stage process is now
described.
\begin{enumerate}
\item An intermediary solution $\bm{\tilde{u}}$ is obtained as
\begin{align*}
\bm{\tilde{u}}_i &= \text{RK}\left( \bm{{u}}_i^{n}, \mathcal{A}_{\operatorname{WENO}}(\bm{{u}}_i^{n},\bm{{C}}_i^n) \right) \,, \\
\bm{{C}}_i^{n+1} &=\text{RK}\left( \bm{{C}}_i^{n} ,\mathcal{B}_{\operatorname{WENO}}(\bm{{u}}_i^{n},\bm{{C}}_i^n) \right) \,.
\end{align*}
\item The intermediate velocity $\tilde{u}_i $ is then
de-noised using the procedure described
in \S \ref{sec:noise-detection} and \S\ref{sec:noise-removal},
producing the noise-free velocity $u(x,t_{n+1})$. The
updated solution $\bm{u}(x,t_{n+1})$ is then obtained as
$$
\renewcommand\arraystretch{1.5}
\bm{u}(x,t_{n+1}) \equiv
\left(
\rho(x,t_{n+1}) , \rho u (x,t_{n+1}) , E(x,t_{n+1}) \right)
\coloneqq
\left(
\tilde{\rho}(x) \,, \tilde{\rho}(x) \cdot u(x,t_{n+1}) \,, \tilde{E} (x) \right) \,.
$$
\end{enumerate}
\begin{remark}\label{remark-noise}
Implementation of our noise removal scheme
has been motivated by the high-frequency oscillations of the velocity field that occur exactly
at the contact discontinuity (see, for example, Fig.{\ref{fig:noise-at-contact}}).
We note that once the noise indicator function
$\mathbbm{1}_{\mathrm{noise}}(x)$ is computed,
there are many possible choices for the noise removal portion of the algorithm. For example,
while our implemented algorithm only removes high-frequency oscillations from the velocity,
we could also remove such oscillations from $\rho$ and $E$, or, in place of the velocity field,
we could instead remove oscillations from the momentum $\rho u$. We have found that any of these
choices produce the same
relative errors in the one-dimensional test problems considered herein. Moreover, as we demonstrate in
\S{\ref{sec:simulations}}, the removal of high-frequency oscillations from $u$ alone is sufficient to remove noise from
the density and energy as well.
A more detailed examination of various noise removal algorithms (as well as those more ideally suited for parallelization) is made in {\cite{RaReSh2019}}.
\end{remark}
\section{Numerical simulations of classical shock tube experiments}\label{sec:simulations}
In this section, we show results of the discretized $C$-method for a variety of classical shock tube experiments. For some of the
problems, we will compare against WENO-based classical artificial viscosity schemes and Noh schemes. See Appendix
\ref{sec:appendix} and Table \ref{table:schemes} for a description of all of the numerical methods
employed herein.
{
\begin{table}[H]
\centering
{\small
\begin{tabular}{|M{4cm} | M{6cm}|}
\hline
Parameter / Variable & Description \\ [0.0em]
\hline \hline
$\beta^{u}$, $\beta^E$ & artificial viscosity coefficients for the momentum
and energy, respectively. \\[0.5em]
\hline
$\beta^{u}_w$,
$\beta^E_w$ & wall viscosity coefficients for the
momentum and energy, respectively. \\[0.5em]
\hline
$\delta h$, $\eta$ & amplitude of noise and noise removal viscosity, respectively. \\[0.5em]
\hline
$\varepsilon$, $\varepsilon_w$ & parameters controlling support
of $C$ and $C_w$, respectively. \\[0.5em]
\hline
$\kappa$, $\kappa_w$ & parameters controlling smoothness
of $C$ and $C_w$, respectively. \\[0.5em]
\hline
\end{tabular}}
\caption{Relevant parameters and variables used in the numerical tests.}
\label{table:parameters}
\end{table}}
As with any artificial viscosity scheme, parameters must be chosen for the problem under consideration.
Before presenting our numerical results, we consider this issue for
the $C$-method, whose parameters are
listed in Table {\ref{table:parameters}}. The artificial viscosity parameters $\beta$ are chosen in the following
manner: we set $\beta^E=0$ and choose $\beta^u$ and $\beta^u_w$ large enough that
post-shock oscillations are removed both before and after the shock-wall collision; we then choose $\beta^E_w$ large
enough that the wall-heating phenomenon (discussed later in \S{\ref{sec-sod-shock-tube}}) does not occur.
A similar philosophy is applied to the choice of parameters for the noise detection and removal algorithm: first, we determine
the amplitude $\delta h$ of the highest-frequency oscillations, and then choose the artificial viscosity parameter $\eta$ large enough
to diffuse the noise.
The parameters $\varepsilon$, $\varepsilon_w$, $\kappa$, and $\kappa_w$ are $O(1)$ constants.
Setting a larger value for $\varepsilon$ or $\varepsilon_w$ serves to increase the support of the corresponding
$C$-function, while increasing the value of $\kappa$ or $\kappa_w$ produces smoother $C$-functions.
For certain problems, smoothing the $C$-variables by using a larger $\kappa$ further minimizes noise that
occurs in the solution.
In Appendix {\ref{sec:appendix2}} we demonstrate the accuracy of the $C$-method when the
values of the
parameters $\varepsilon$, $\varepsilon_w$, $\kappa$, and $\kappa_w$ are held fixed across the
different test problems. It is shown that the differences between the solutions computed using the
optimized parameter sets we use for the
problems in this section and the fixed-parameter sets we use for the runs in Appendix {\ref{sec:appendix2}} are
minimal, and that the fixed choice of parameters can be used for general problems. However, we wish to
emphasize that one of the strengths of the $C$-method is its
flexibility to optimize parameters for specific features associated with particular data.
The error analysis and convergence studies we perform for the numerical experiments considered in the
following sections use the $L^1$, $L^2$, and
$L^ \infty$ norms. Given two functions $f(x)$ and $g(x)$ defined on the computational grid with $M$ cells,
these error norms are defined by
\begin{subequations}
\begin{alignat}{2}
&\lvert \lvert f- g\rvert \rvert_{L^1} &&= \frac{1}{M} \sum_{i=1}^M | f(x_i) - g(x_i) |\,, \label{error-formula} \\
&\lvert \lvert f- g\rvert \rvert_{L^2} &&= \sqrt{\frac{1}{M} \sum_{i=1}^M | f(x_i) - g(x_i) |^2} \,, \label{error-L2}\\
&\lvert \lvert f- g\rvert \rvert_{L^ \infty} &&= \max_{i=1,\ldots,M} | f(x_i) - g(x_i) |\,. \label{error-Linf}
\end{alignat}
\end{subequations}
As stated by Greenough \& Rider {\cite{GreenoughRider2004}}, the
$L^1$ and $L^2$ norms provide a global view of the errors in the computed solution, whereas the
$L^ \infty$ norm highlights local errors, such as the undershoot or overshoot that occurs at a discontinuity.
Thus, these three norms together provide a precise \emph{quantitative} description of the errors of numerical
solutions, and complement the \emph{qualitative} evidence we provide through the visualization of
numerical simulations.
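For concreteness, these norms can be evaluated directly from the grid values; the following Python sketch (a hypothetical helper, not part of our solver) mirrors the formulas \eqref{error-formula}--\eqref{error-Linf}.

```python
import math

def error_norms(f, g):
    """Cell-averaged L1 and L2 errors, and the max-norm error, between
    two grid functions f and g sampled at the M cell centers."""
    M = len(f)
    diffs = [abs(fi - gi) for fi, gi in zip(f, g)]
    l1 = sum(diffs) / M
    l2 = math.sqrt(sum(d * d for d in diffs) / M)
    linf = max(diffs)
    return l1, l2, linf
```

For instance, two grid functions differing by $1$ in a single cell out of four give $L^1$, $L^2$, and $L^\infty$ errors of $0.25$, $0.5$, and $1$, respectively.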
\subsection{Linear advection}\label{sec-linear-advection}
We begin by considering a linear advection problem to demonstrate the high-order convergence of the
base WENO scheme. The domain is $0 \leq x \leq 1$, the adiabatic constant is $\gamma=1.4$, the
initial data is
$$
\begin{bmatrix}
\rho_0 \\ (\rho u)_0 \\ E_0
\end{bmatrix}
=
\begin{bmatrix}
1 + 0.5 \sin(2 \pi x) \\ 1 + 0.5 \sin(2 \pi x) \\ 0.5 + 0.25 \sin(2 \pi x) + \frac{1}{\gamma -1}
\end{bmatrix}\,,
$$
and we employ periodic boundary conditions. In the exact solution,
the velocity and pressure remain a
constant value of 1, while the initial density field is advected by the velocity, so that the density at time $t$
satisfies $\rho(x,t) = \rho_0(x-t)$. We employ our simplified WENO scheme on grids with 51, 101, 201,
and 401 cells. Each simulation is run with a CFL number of approximately 0.6, and the final time is $t=1.0$.
In Table {\ref{table:linear-convergence}},
we list the $L^1$ error of the computed density minus the exact solution; as expected, the solutions
converge with almost fifth-order accuracy.
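The convergence orders reported in the tables are computed in the standard way: upon halving the grid spacing, the observed order is $\log_2$ of the ratio of successive errors. A minimal sketch (hypothetical helper; the sample values are the first two $L^1$ errors from Table \ref{table:linear-convergence}):

```python
import math

def observed_order(e_coarse, e_fine, ratio=2.0):
    """Observed convergence rate when the mesh is refined by `ratio`."""
    return math.log(e_coarse / e_fine) / math.log(ratio)

# 51 -> 101 cells for the linear advection test: order is roughly 4.977.
order = observed_order(7.298e-6, 2.318e-7)
```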
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|lc|cccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{4}{c|}{\textbf{Cells}}\\
{} & & 51 & 101 & 201 & 401\\
\midrule
\multirow{2}{*}{WENO} & Error &
$7.298 \times 10^{-6}$ & $2.318 \times 10^{-7}$ & $7.526 \times 10^{-9}$ & $2.654 \times 10^{-10}$\\
& Order & -- & 4.977 & 4.945 & 4.826\\
\midrule
\bottomrule
\end{tabular}}
\caption{$L^1$ error of the computed density minus the exact solution and convergence for the
linear advection problem.}
\label{table:linear-convergence}
\end{table}
\subsection{The Sod shock tube problem}\label{sec-sod-shock-tube}
The data for the Sod shock tube is given in \eqref{sod_initialdata}, with the
exact solution given by a shock wave, a rarefaction wave, and a contact discontinuity.
To simulate the shock-wave wall collision, we employ reflective boundary conditions \eqref{var-bcs}.
In the tests below, we employ our WENO-$C$-$W$ scheme with 201 cells.
\subsubsection{The wall-heating problem in the Sod shock tube}\label{sod-bounce-back}
We first demonstrate the well-known wall-heating problem (\cite{Noh1987, Rider2000}) in which an anomalous slope appears in
the density and internal energy upon shock bounce-back. We shall then explain the root cause of this problem and present its solution.
We begin by choosing the parameters
in equations \eqref{EulerC} and \eqref{artificial_visc} as
\begin{alignat*}{4}
\beta^u&=0.5, \qquad \beta^E&=0.0, \qquad \beta^u_w &= 3.0, \qquad \beta^E_w &= 0.0 \\
\varepsilon&=1.0, \qquad \kappa&=5.0, \qquad \varepsilon_w &=50.0, \qquad \kappa_w &=1.0 \,.
\end{alignat*}
The resulting solutions for the
velocity before and after the shock-wall collision are shown in Fig.\ref{fig:sod-collision3}. Before
the shock collision with the wall (Fig.\ref{fig:sod-before-collision3}), the solution maintains a sharp shock
front. After the shock collision (Fig.\ref{fig:sod-after-collision3}), the high-frequency oscillations behind the
shock wave are damped out for sufficiently large $\beta^u_w>0$, while maintaining a sharp front.
While post-collision oscillations in the density profile are suppressed, Fig.\ref{fig:sod-collision-rho1} shows the presence of the
anomalous density slope $\partial_x\rho(1,t)$ at the wall (which should be zero).
This incorrect slope is termed {\it{wall heating}} because the undershoot in the density results in
an overshoot in the
internal energy \eqref{defn-internal-energy} (and hence temperature) at the wall. Noh \cite{Noh1987} suggested
that wall heating would occur in {\it{any}} artificial viscosity scheme, and is in fact built into the
exact solutions of the difference equations of the artificial viscosity method. Menikoff \cite{Menikoff1993}
argues that wall-heating is caused by the {\it{smearing}} of the shock curve that occurs with any artificial
viscosity scheme, and is thus unavoidable. Rider \cite{Rider2000}
argues that incorrect wave speeds result in too much or too little dissipation.
\begin{figure}[H]
\centering
\subfigure[$t=0.20$: pre shock-wall collision]{\label{fig:sod-before-collision3}\includegraphics[width=75mm]{sod-before-collision3}}
\subfigure[$t=0.36$: post shock-wall collision]{\label{fig:sod-after-collision3}\includegraphics[width=75mm]{sod-after-collision3}}
\caption{The velocity profile for the Sod shock tube problem, with the wall viscosity activated for the
momentum equation. The dashed green curve is the exact solution.}
\label{fig:sod-collision3}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[$t=0.36$: post shock-wall collision]{\label{fig:sod-after-collision-rho1}\includegraphics[width=75mm]{sod-after-collision-rho1}}
\subfigure[$t = 0.36$: zooming in on the undershoot]{\label{fig:sod-after-collision-zoom-rho1}\includegraphics[width=75mm]{sod-after-collision-zoom-rho1}}
\caption{The density profile for the Sod shock tube problem, calculated with the wall viscosity activated
for the momentum equation. The dashed green curve is the exact solution.}
\label{fig:sod-collision-rho1}
\end{figure}
In fact, it appears that the wall-heating error
is the result of the misalignment of the {\it gradient of fluxes} for the density, momentum and
energy equations, which in turn is caused by a slight difference in the speed of the shock fronts for the
density, momentum and energy.
We define the {\it{forcing terms}}
\begin{align*}
\mathcal{H}(\rho) &= -\partial_x ( \rho u ), \\
\mathcal{H}(\rho u ) &= -\partial_x (\rho u^2 + p) + \partial_x \left( \mathcal{B}^{(u)}(t)\,\rho\,C\,\partial_x u \right), \\
\mathcal{H}( E ) &= -\partial_x (u(E+p)) + \partial_x \left( \mathcal{B}^{(E)}(t)\,\rho\,C\,\partial_x (E/\rho) \right).
\end{align*}
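For illustration, the inviscid parts of these forcing terms can be evaluated with simple central differences on a periodic grid; the sketch below is a minimal stand-in (it is not the WENO discretization employed in our scheme, and the artificial viscosity terms are omitted).

```python
def ddx_central(f, dx):
    """Second-order central difference on a periodic grid."""
    M = len(f)
    return [(f[(i + 1) % M] - f[(i - 1) % M]) / (2.0 * dx) for i in range(M)]

def forcing_terms(rho, u, p, E, dx):
    """Inviscid parts of H(rho), H(rho u), H(E): negated flux gradients."""
    H_rho = [-v for v in ddx_central([r * ui for r, ui in zip(rho, u)], dx)]
    H_mom = [-v for v in ddx_central(
        [r * ui * ui + pi for r, ui, pi in zip(rho, u, p)], dx)]
    H_E = [-v for v in ddx_central(
        [ui * (Ei + pi) for ui, Ei, pi in zip(u, E, p)], dx)]
    return H_rho, H_mom, H_E
```

As a sanity check, a spatially constant state yields identically vanishing forcing terms.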
In Fig.\ref{fig:bad-flux-compare}, we compare the energy and density profiles,
along with the terms $\mathcal{H}(\rho)$, $\mathcal{H}(\rho u )$, and $\mathcal{H}(E)$,
all suitably resized\footnote{More precisely, we plot the following: first,
$\frac{3}{2} \frac{\mathcal{H}}{\max_{\Omega} \mathcal{H}}$ for each of the forcing terms
$\mathcal{H}(\rho)$, $\mathcal{H}(\rho u)$, and $\mathcal{H}(E)$; second, the function
$1.1928+4.4403 \rho$; and finally, the energy $E$.} for ease of comparison, at various times just before or
after the shock
fronts have collided with the wall, zoomed-in on the region next to the wall. In
Fig.\ref{fig:bad-flux-compare1}, the density and energy profiles are very similar, but the forcing terms
are slightly misaligned; it is clear that $\mathcal{H}(E)$ is
slightly behind both $\mathcal{H}(\rho)$ and $\mathcal{H}(\rho u)$.
This misalignment causes the solution profiles for the
energy and density to begin to diverge, as can be seen in Fig.\ref{fig:bad-flux-compare2}.
Again, there is a misalignment between the forcing terms $\mathcal{H}(E)$ and $\mathcal{H}(\rho)$.
As the shock moves
away from the wall in Fig.\ref{fig:bad-flux-compare3} and Fig.\ref{fig:bad-flux-compare4},
the difference between the solution profiles is now clear. Even though the forcing terms
are now better aligned, the earlier misalignment ensures that the difference between the
energy and density profiles is permanent.
\begin{figure}[H]
\centering
\subfigure[$t=0.2725$: pre-collision]{\label{fig:bad-flux-compare1}\includegraphics[width=75mm]{bad-flux-compare1}}
\subfigure[$t=0.2925$: post-collision]{\label{fig:bad-flux-compare2}\includegraphics[width=75mm]{bad-flux-compare2}}
\subfigure[$t=0.3000$: post-collision]{\label{fig:bad-flux-compare3}\includegraphics[width=75mm]{bad-flux-compare3}}
\subfigure[$t=0.3050$: post-collision]{\label{fig:bad-flux-compare4}\includegraphics[width=75mm]{bad-flux-compare4}}
\caption{Comparison of the energy and energy forcing term $\mathcal{H}(E)$
(blue/blue dashed) with the density, suitably
resized, and the density forcing term $\mathcal{H}(\rho)$ (red/red dashed) and the
momentum forcing term $\mathcal{H}(\rho u)$ (black dashed) for the Sod shock tube problem with the wall
viscosity activated for the momentum equation. The green dashed curve is the exact solution. The figures
shown are zoomed in at the shock just before or just after the shock front has collided with the wall at $x=1$.}
\label{fig:bad-flux-compare}
\end{figure}
\subsubsection{A solution to the wall-heating problem}
The solution to the wall heating problem suggested by
Noh \cite{Noh1987} is the addition of a heat conduction term to the energy equation. For the WENO-Noh
scheme we implement in this study, we shall use a heat conduction term of the form\footnote{In equations (2.1)-(2.5) in the paper of Noh {\cite{Noh1987}}, there is, in fact, an additional
term proportional to $-\rho |\partial_x u|^2 \partial_x u$
on the right-hand side of {\eqref{Noh-heat-conduction}}. We have found that this term is not
necessary to remove the wall-heating error, and thus omit it from the Noh scheme we implement in
this paper.}
\begin{equation}\label{Noh-heat-conduction}
\partial_x \left( \beta^E_{\operatorname{Noh}}\, \rho \, |\partial_x u| \,\partial_x e \right),
\end{equation}
where $e=\frac{p}{(\gamma-1)\rho} = c_v \Theta$ is the internal energy of the system, proportional to the temperature $\Theta$, with $c_v$ the specific heat capacity at constant volume.
We use the following artificial (wall) viscosity for the
energy equation \eqref{EulerC-energy}:
\begin{equation}\label{heat-conduction}
\partial_x \left( \mathcal{B}^{(E)}(t) \, \rho \, C\, \partial_x(E/\rho) \right) \,.
\end{equation}
There are two differences between the terms \eqref{heat-conduction} and \eqref{Noh-heat-conduction}:
\begin{enumerate}
\item While \eqref{Noh-heat-conduction} uses the oscillatory localizing coefficient $|\partial_x u(x,t)|$, we instead use the space-time smooth localizer $C(x,t)$.
\item We use $\partial_x (E/\rho)$ in our diffusion operator rather than the function $\partial_x e$. This
difference can be explained as follows:
equation \eqref{defn-internal-energy} shows that
\begin{equation}\label{twoterms}
\partial_x \left( \mathcal{B}^{(E)}(t) \, \rho \, C\, \partial_x(E/\rho) \right) =
\partial_x \left( \mathcal{B}^{(E)}(t) \, \rho \, C\, \partial_xe \right) +
\partial_x \left( \mathcal{B}^{(E)}(t) \, \rho \, C\, u \partial_x u \right)\,.
\end{equation}
Hence, \eqref{twoterms} has a similar form to \eqref{Noh-heat-conduction} (with $C$ replacing $|u_x|$), but with
the additional term $\partial_x \left( \mathcal{B}^{(E)}(t) \, \rho \, C\, u \partial_x u \right)$. Indeed, the two terms in \eqref{twoterms}
are both proper diffusion operators near shock waves. This is easy to see: multiplying \eqref{twoterms} by $E$, integrating over the domain, and integrating by parts (the boundary terms vanish) shows that the contribution to $\frac{1}{2} \frac{d}{dt} \int_{x_1}^{x_M} E^2 \, dx$ is the negative of
$$
\int_{x_1}^{x_M} \mathcal{B}^{(E)}(t) \, \rho \, C\, \partial_x e \partial_x E \, dx +
\int_{x_1}^{x_M} \mathcal{B}^{(E)}(t) \, \rho \, C\, u \, \partial_x u \partial_x E \, dx \,.
$$
At the shock, $\partial_x e$ has the same sign as $\partial_x E$ so that $\int_{x_1}^{x_M} \mathcal{B}^{(E)}(t) \, \rho \, C\, \partial_x e \partial_x E \, dx \ge 0$;
moreover, in the case of
a right-traveling shock front, $\partial_x u < 0$, $\partial_x E < 0$ and $u > 0$ at the shock, so that
$ \int_{x_1}^{x_M} \mathcal{B}^{(E)}(t) \, \rho \, C\, u \, \partial_x u \partial_x E \, dx \ge 0$, while for
a left-moving shock, $\partial_x u < 0$, $\partial_x E > 0$ and $u < 0$, so that once again $ \int_{x_1}^{x_M} \mathcal{B}^{(E)}(t) \, \rho \, C\, u \, \partial_x u \partial_x E \, dx \ge 0$.
This then ensures that
\eqref{heat-conduction} is a {\it dissipative} operator and that the structure of the artificial viscosity term in \eqref{heat-conduction}
adjusts the Noh-type dissipation $\partial_x \left( \mathcal{B}^{(E)}(t) \, \rho \, C\, \partial_xe \right)$ by the velocity-dependent term
$\partial_x \left( \mathcal{B}^{(E)}(t) \, \rho \, C\, u \partial_x u \right)$.
\end{enumerate}
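The decomposition \eqref{twoterms} rests on the pointwise identity $E/\rho = e + \tfrac{1}{2}u^2$, which follows from \eqref{defn-internal-energy} and gives $\partial_x(E/\rho) = \partial_x e + u\,\partial_x u$. The following sketch verifies this identity by finite differences on hypothetical smooth fields (an illustration only, not part of our scheme).

```python
import math

GAMMA = 1.4

# Hypothetical smooth fields used only to test the identity.
rho = lambda x: 2.0 + math.sin(x)
u = lambda x: 1.0 + 0.5 * math.cos(x)
p = lambda x: 1.5 + 0.25 * math.sin(2.0 * x)

e = lambda x: p(x) / ((GAMMA - 1.0) * rho(x))           # internal energy
E = lambda x: rho(x) * e(x) + 0.5 * rho(x) * u(x) ** 2  # total energy

def ddx(f, x, h=1e-6):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def identity_residual(x):
    """d/dx (E/rho) - (d/dx e + u du/dx); should vanish to truncation error."""
    lhs = ddx(lambda y: E(y) / rho(y), x)
    rhs = ddx(e, x) + u(x) * ddx(u, x)
    return lhs - rhs
```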
In our numerical experiments, presented below, we compare Noh's scheme, called WENO-Noh (see Appendix \ref{sec:appendix}), with our WENO-$C$-$W$ scheme. For
WENO-Noh, we set
$
\beta^{u}_{\operatorname{Noh}} = 15.0$ and $ \beta^{E}_{\operatorname{Noh}} = 10.0$ in \eqref{EulerC-noh}.
These viscosity coefficients were chosen in the following manner: $\beta^{u}_{\operatorname{Noh}}$ was
first chosen large enough to suppress the post-collision oscillations, and then
$\beta^{E}_{\operatorname{Noh}}$ was chosen to correct the wall-heating error.
In our WENO-$C$-$W$ scheme, we set
$\beta^u = 0.5$, $ \beta^{u}_w = 3.0$, $\beta^{E}=0.0$, and $\beta^{E}_w = 6.0$.
In Fig.\ref{fig:sod-collision-veleng1}, we compare the velocity and density
profiles computed with the two schemes above. It is clear that the WENO-$C$-$W$ scheme
produces a superior solution both before and after the shock-wall collision. The large amount
of viscosity needed in the WENO-Noh scheme post-collision means that the solution prior to
shock-wall collision is affected, with a smeared shock curve and overshoot at the top of the
expansion wave. Moreover, even the relatively large value of $\beta^{u}_{\operatorname{Noh}}$ as
compared with $\beta^{u} + \beta^{u}_{w}$ is unable to fully suppress the oscillations behind the
shock curve that occur post-collision. This is due to the smoothness of the localizing coefficient
$C$ as compared with the
rough nature of $|\partial_x u|$.
\begin{figure}[H]
\centering
\subfigure[$t=0.20$: velocity, pre-collision]{\label{fig:sod-before-collision4}\includegraphics[width=75mm]{sod-before-collision4}}
\subfigure[$t=0.36$: velocity, post-collision]{\label{fig:sod-after-collision4}\includegraphics[width=75mm]{sod-after-collision4}}
\subfigure[$t=0.36$: density, post-collision]{\label{fig:sod-after-collision-rho2}\includegraphics[width=75mm]{sod-after-collision-rho2}}
\subfigure[$t=0.36$: density, post-collision zoom-in]{\label{fig:sod-after-collision-zoom-rho2}\includegraphics[width=75mm]{sod-after-collision-zoom-rho2}}
\caption{The velocity and density
profiles for the Sod shock tube problem before and after shock-wall collision.}
\label{fig:sod-collision-veleng1}
\end{figure}
As is the case with the velocity profile,
prior to shock collision the WENO-$C$-$W$ scheme produces a superior solution for
the density profile, with a much
sharper shock front and more accurate expansion wave. Post-collision, the heat conduction terms
ensure that neither of the methods exhibits the wall heating error. However, there are still small oscillations
present in the solution computed with the WENO-Noh scheme, and the shock front is much more
smeared than that of the solution computed with WENO-$C$-$W$.
Finally, comparing Fig.\ref{fig:bad-flux-compare} and Fig.\ref{fig:good-flux-compare}, we
see that the wall viscosity for the energy equation has properly
aligned the forcing terms $\mathcal{H}(\rho)$, $\mathcal{H}(\rho u)$ and $\mathcal{H}(E)$. This realignment of the gradient of fluxes, effected by the artificial
viscosity term \eqref{heat-conduction}, removes the wall-heating problem created by the smearing of the shock fronts.
\begin{figure}[H]
\centering
\subfigure[$t=0.2725$: pre-collision]{\label{fig:good-flux-compare1}\includegraphics[width=75mm]{good-flux-compare1}}
\subfigure[$t=0.2925$: post-collision]{\label{fig:good-flux-compare2}\includegraphics[width=75mm]{good-flux-compare2}}
\subfigure[$t=0.3000$: post-collision]{\label{fig:good-flux-compare3}\includegraphics[width=75mm]{good-flux-compare3}}
\subfigure[$t=0.3050$: post-collision]{\label{fig:good-flux-compare4}\includegraphics[width=75mm]{good-flux-compare4}}
\caption{Comparison of the energy and energy forcing term $\mathcal{H}(E)$
(blue/blue dashed) with the density, suitably
resized, and the density forcing term $\mathcal{H}(\rho)$ (red/red dashed) and the
momentum forcing term $\mathcal{H}(\rho u)$ (black dashed) for the Sod shock tube problem with the wall
viscosity activated for the momentum and energy equations.
The green dashed curve is the exact solution. The figures
shown are zoomed in at the shock just before or just after the shock front has collided with the wall at $x=1$.}
\label{fig:good-flux-compare}
\end{figure}
\subsubsection{Noise removal with the noise indicator}
We now apply our noise indicator algorithm to the Sod shock tube problem with the aim of removing
the noise present in the velocity profile at the contact discontinuity.
In the test below, we employ our WENO-$C$-$N$ scheme with $\eta$ chosen
such that $\eta \Delta \tau /\Delta x^2 = 0.005$ in the heat equation used for noise removal; an explicit time-stepping scheme is used
and only one time-step is taken. For the noise detection algorithm, $C_{\text{ref}}$ in \eqref{cref} is computed using
$\delta h = 0.0001$. Fig.\ref{fig:sod-noise-indicator2} shows that the noise indicator removes the spurious
oscillations from the velocity profile. The localized diffusion mechanism ensures that the solution
in other regions is unchanged, and this is demonstrated in Fig.\ref{fig:sod-noise-indicator1}. We note that the
noise indicator algorithm affects neither the sharpness of the shock front, nor the speed of the shock, nor the order of the numerical
method (for which we show results below).
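A single explicit step of the localized heat equation used for noise removal can be sketched as follows; the 0/1 mask stands in for the indicator $\mathbbm{1}_{\operatorname{noise}}(x)$, and the helper is a simplified illustration rather than our implementation.

```python
def noise_removal_step(u, mask, eta_dt_over_dx2):
    """One explicit step of u_t = eta u_xx, applied only where the noise
    indicator mask is 1 (interior points; endpoints left fixed).
    eta_dt_over_dx2 is the dimensionless ratio eta*dtau/dx^2 (0.005 in
    the test above)."""
    M = len(u)
    new = list(u)
    for i in range(1, M - 1):
        if mask[i]:
            new[i] = u[i] + eta_dt_over_dx2 * (u[i + 1] - 2.0 * u[i] + u[i - 1])
    return new
```

A flagged grid-scale spike is damped while unflagged cells are untouched, which is the sense in which the diffusion is localized.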
\begin{figure}[H]
\centering
\subfigure[$t=0.20$]{\label{fig:sod-noise-indicator1}\includegraphics[width=75mm]{sod-noise-indicator1}}
\subfigure[$t=0.20$: zooming in on the noise at the contact discontinuity]{\label{fig:sod-noise-indicator2}\includegraphics[width=75mm]{sod-noise-indicator2}}
\caption{Comparison of the velocity profiles for the Sod shock tube problem computed with
WENO-$C$ and WENO-$C$-$N$ with 201 cells.
The red crosses in Fig.\ref{fig:sod-noise-indicator1} indicate where the function
$\mathbbm{1}_{\operatorname{noise}}(x)$ is active.}
\label{fig:sod-noise-indicator}
\end{figure}
\begin{wrapfigure}{L}{0.4\textwidth}
\includegraphics[scale=0.15]{sod-noise-indicator3}
\end{wrapfigure}
There is also high-frequency noise present to the left of the expansion wave (shown in the figure to the left); again, the noise indicator
detects and removes this noise.
In Fig.\ref{fig:sod-noise-indicator-wall-viscosity1}, we show the velocity profile, computed using the
WENO-$C$-$W$-$N$ scheme, after the shock-wall collision. Here, all of the post-collision noise is damped
by the wall viscosity. The WENO-$C$-$W$-$N$ scheme removes spurious oscillations in the solution,
while ensuring that a sharp shock front and the correct wave speed are retained, even after multiple
shock-wall collisions, as shown in Fig.\ref{fig:sod-noise-indicator-wall-viscosity2}.
\begin{figure}[H]
\centering
\subfigure[$t=0.36$]{\label{fig:sod-noise-indicator-wall-viscosity1}\includegraphics[width=75mm]{sod-noise-indicator-wall-viscosity1}}
\subfigure[$t=0.90$]{\label{fig:sod-noise-indicator-wall-viscosity2}\includegraphics[width=75mm]{sod-noise-indicator-wall-viscosity2}}
\caption{Comparison of the velocity solution profile produced using
WENO-$C$ and WENO-$C$-$W$-$N$ for the post-collision bounce-back in the Sod shock tube problem with
201 cells.}
\label{fig:sod-noise-indicator-wall-viscosity}
\end{figure}
\subsubsection{Error analysis and convergence tests}
We now compare the errors of the various numerical schemes given in Table \ref{table:schemes} in Appendix \ref{sec:appendix} applied to the
Sod shock-wall collision and bounce-back problem.
The solutions are computed with all parameters fixed across the different methods,
giving an objective evaluation of each scheme.
As advised by Greenough \& Rider {\cite{GreenoughRider2004}}, and in order
to fairly compare our results with those found in the numerics literature, we use a CFL number of
0.6.
We have found that the use of a smaller CFL number of 0.3 does not (appreciably) change the
conclusions of our numerical tests. For instance, the solutions produced using WENO and
WENO-$C$-$W$-$N$ show (roughly) the same relative error and order of convergence when CFL=0.3 as
they do when CFL=0.6.
However, the presence of the nonlinear artificial viscosity terms in the Euler-$C$ equations, when combined with
an explicit time-integration scheme, places restrictions on the CFL number that would otherwise not be present
in the stand-alone WENO algorithm. In particular, for the Sod test problem, due to the additional artificial viscosity
present during the shock-wall collision phase, we have found the upper bound on the CFL number to be approximately 0.7.
While
our stand-alone WENO scheme is (formally) stable for
much larger CFL numbers, the relative
error and the order of convergence degrade as the CFL number is increased. Indeed, it is demonstrated
by Greenough \& Rider {\cite{GreenoughRider2004}} that only 75\% of the fifth-order convergence rate of WENO is achieved when
CFL=1.0, whereas the full fifth-order convergence is achieved for CFL=0.6. Therefore, the use of the smaller
CFL=0.6 is also necessary for the stand-alone WENO scheme.
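For reference, the time step associated with a given CFL number is determined by the maximal characteristic speed $|u|+c$, with $c=\sqrt{\gamma p/\rho}$ the sound speed; the sketch below assumes this standard choice and omits the additional viscous restriction discussed above.

```python
import math

def cfl_time_step(rho, u, p, dx, cfl=0.6, gamma=1.4):
    """dt = CFL * dx / max_i(|u_i| + c_i), with c = sqrt(gamma p / rho)."""
    smax = max(abs(ui) + math.sqrt(gamma * pi / ri)
               for ri, ui, pi in zip(rho, u, p))
    return cfl * dx / smax
```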
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|lc|cccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{4}{c|}{\textbf{Cells}}\\
{} & & 101 & 201 & 401 & 801\\
\midrule
\multirow{2}{*}{WENO} & Error &
$1.662 \times 10^{-2}$ & $1.772 \times 10^{-2}$ & $1.086 \times 10^{-2}$ & $8.214 \times 10^{-3}$\\
& Order & -- & -0.093 & 0.706 & 0.403\\
\midrule
\multirow{2}{*}{WENO-$|u_x|$} & Error &
$1.534 \times 10^{-2}$ & $1.441 \times 10^{-2}$ & $8.444 \times 10^{-3}$ & $5.864 \times 10^{-3}$\\
& Order & -- & 0.090 & 0.771 & 0.526\\
\midrule
\multirow{2}{*}{WENO-Noh} & Error &
$3.436 \times 10^{-2}$ & $1.799 \times 10^{-2}$ & $9.117 \times 10^{-3}$ & $4.795 \times 10^{-3}$\\
& Order & -- & 0.934 & 0.980 & 0.927\\
\midrule
\multirow{2}{*}{WENO-$N$} & Error &
$1.667 \times 10^{-2}$ & $1.666 \times 10^{-2}$ & $1.064 \times 10^{-2}$ & $7.262 \times 10^{-3}$\\
& Order & -- & 0.001 & 0.648 & 0.550\\
\midrule
\multirow{2}{*}{WENO-$C$} & Error &
$1.520 \times 10^{-2}$ & $1.160 \times 10^{-2}$ & $6.453 \times 10^{-3}$ & $3.927 \times 10^{-3}$\\
& Order & -- & 0.390 & 0.846 & 0.717\\
\midrule
\multirow{2}{*}{WENO-$C$-$N$} & Error &
$1.504 \times 10^{-2}$ & $1.134 \times 10^{-2}$ & $6.412 \times 10^{-3}$ & $3.780 \times 10^{-3}$\\
& Order & -- & 0.407 & 0.823 & 0.763\\
\midrule
\multirow{2}{*}{WENO-$C$-$W$} & Error &
$1.990 \times 10^{-2}$ & $1.151 \times 10^{-2}$ & $5.774 \times 10^{-3}$ & $3.019 \times 10^{-3}$\\
& Order & -- & 0.790 & 0.995 & 0.936\\
\midrule
\multirow{2}{*}{WENO-$C$-$W$-$N$} & Error &
$1.983 \times 10^{-2}$ & $1.146 \times 10^{-2}$ & $5.770 \times 10^{-3}$ & $3.018 \times 10^{-3}$\\
& Order & -- & 0.791 & 0.990 & 0.935\\
\midrule
\bottomrule
\end{tabular}}
\caption{Post shock-wall collision ($t=0.36$) $L^1$ error of the computed velocity minus the exact solution and convergence for the
Sod problem with shock-wall collision and bounce-back.}
\label{table:sod-error-post}
\end{table}
In Table \ref{table:sod-error-post}, we list the $L^1$ error of the computed velocity minus the exact solution, and study the
order of convergence for the Sod problem with shock-wall collision and bounce-back.
WENO-$C$ produces solutions that are significantly better than
those produced with
WENO-$|u_x|$, which are significantly better than those solutions produced with the stand-alone WENO algorithm.
Note that the use of the noise removal algorithm consistently improves the error bounds, while maintaining the order of accuracy.
On the coarser grids containing 101 or 201 cells, the WENO-$C$-$W$ and WENO-$C$-$W$-$N$ schemes
produce solutions with slightly larger errors than the solution produced with WENO-$C$-$N$; this is caused
by the slight smearing of the shock,
post wall collision. The solutions are, however, {\it{qualitatively}} significantly better, as evidenced by
Fig.\ref{fig:sod-collision-veleng1} and Fig.\ref{fig:sod-noise-indicator-wall-viscosity} above, as well as
Fig.\ref{fig:sod-weno-c-comparison} below.
Both WENO-$C$-$W$ and WENO-$C$-$W$-$N$ maintain a relatively high order of accuracy,
whereas the presence of the post-collision noise ensures that both WENO and WENO-$|u_x|$ have
convergence rates that are irregular and relatively poor.
\begin{figure}[H]
\centering
\subfigure[$t=0.36$, velocity post-collision]{\label{fig:sod-weno-c-comparison1}\includegraphics[width=75mm]{sod-weno-c-comparison1}}
\subfigure[$t=0.36$, velocity post-collision zoom-in]{\label{fig:sod-weno-c-comparison2}\includegraphics[width=75mm]{sod-weno-c-comparison2}}
\caption{Comparison of the velocity solution profile produced using
WENO-$C$-$W$-$N$, WENO-$C$-$W$, and WENO-$C$-$N$ for the post-collision bounce-back in the Sod shock tube problem with
201 cells.}
\label{fig:sod-weno-c-comparison}
\end{figure}
We remark that our conclusions described above do not change if we replace the $L^1$ norm with either the $L^2$ or $L^ \infty$
norms. We list in Table {\ref{table:sod-error-post-2}} the $L^2$ and $L^ \infty$ error analysis for the
post shock-wall collision velocity for the Sod problem, where the velocity is computed using
either WENO-$|u_x|$ or WENO-$C$-$W$-$N$.
For the $L^2$ error analysis, we first note the odd behavior for WENO-$|u_x|$ on the coarser grids with
101 and 201 cells. The increase in the $L^2$ error despite mesh refinement is caused by the base WENO
scheme; referring to Table {\ref{table:sod-error-post}}, we see that the $L^1$ error of solutions produced with
WENO increases as the mesh is refined from 101 to 201 cells. This phenomenon
is due to the large oscillations that occur post shock-wall collision. The WENO-$C$-$W$-$N$ scheme removes
these oscillations, at the cost of a slight smearing of the shock front; this smearing results in a larger $L^2$
error on coarse grids when compared with WENO-$|u_x|$, but smaller $L^2$ errors and better rates of
convergence as the mesh is refined.
Table {\ref{table:sod-error-post-2}} shows that the $L^ \infty$ errors for both WENO-$|u_x|$ and
WENO-$C$-$W$-$N$ grow as the mesh is refined. As noted in {\cite{GreenoughRider2004}}, this is due
to the localization of the error at the shock. However, we remark that the $L^ \infty$ errors
for WENO-$C$-$W$-$N$ are smaller than the $L^ \infty$ errors for WENO-$|u_x|$ on the grids
with 201, 401, and 801 cells; moreover, these errors grow at a faster rate for WENO-$|u_x|$ than for
WENO-$C$-$W$-$N$.
In addition to the
figures and qualitative evidence provided, the $L^1$, $L^2$, and $L^ \infty$ error studies indicate that
the $C$-method produces highly accurate solutions with close to optimal rates of convergence for the
Sod shock-wall collision and bounce-back test.
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|llc|cccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Norm}} & \multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{4}{c|}{\textbf{Cells}}\\
& {} & & 101 & 201 & 401 & 801\\
\midrule
\multirow{4}{*}{\vspace{-1.25em}${L^2}$} & \multirow{2}{*}{WENO-$|u_x|$} & Error &
$4.775 \times 10^{-2}$ & $6.068 \times 10^{-2}$ & $4.640 \times 10^{-2}$ & $3.765 \times 10^{-2}$ \\
& & Order & -- & -0.346 & 0.387 & 0.302 \\[1.25em]
& \multirow{2}{*}{WENO-$C$-$W$-$N$} & Error &
$6.953 \times 10^{-2}$ & $6.098 \times 10^{-2}$ & $4.423 \times 10^{-2}$ & $3.324 \times 10^{-2}$ \\
& & Order & -- & 0.189 & 0.463 & 0.412 \\[1.25em]
\midrule
\multirow{4}{*}{\vspace{-1.25em}$L^ \infty$} & \multirow{2}{*}{WENO-$|u_x|$} & Error &
$4.262 \times 10^{-1}$ & $7.417 \times 10^{-1}$ & $8.024 \times 10^{-1}$ & $8.925 \times 10^{-1}$ \\
& & Order & -- & -0.799 & -0.113 & -0.154 \\[1.25em]
& \multirow{2}{*}{WENO-$C$-$W$-$N$} & Error &
$5.663 \times 10^{-1}$ & $6.926 \times 10^{-1}$ & $7.124 \times 10^{-1}$ & $7.456 \times 10^{-1}$ \\
& & Order & -- & -0.290 & -0.041 & -0.066 \\[1.25em]
\midrule
\bottomrule
\end{tabular}}
\caption{Post shock-wall collision ($t=0.36$) $L^2$ and $L^ \infty$ error of the computed velocity
minus the exact solution and convergence for the
Sod problem with shock-wall collision and bounce-back.}
\label{table:sod-error-post-2}
\end{table}
\subsubsection{Comparison with other schemes}\label{sec:comparison-with-other-schemes}
For the purposes of benchmarking our WENO and WENO-$N$ schemes prior to
shock-wall collision, we present error analysis and convergence rates comparing our simplified WENO scheme
with the scheme utilized by Greenough and Rider in \cite{GreenoughRider2004}. The WENO scheme
that is presented in \cite{GreenoughRider2004} is
formally fifth-order accurate in space, with time integration
done using a total variation diminishing (TVD) third-order Runge-Kutta method. Flux-splitting is accomplished
using a method similar to the Lax-Friedrichs flux-splitting (see
\cite{GreenoughRider2004} for the details). We will refer to this method
as RK3-WENO5.
The error norm utilized in \cite{GreenoughRider2004} is of the form
$$
\lVert \rho(\cdot,t) - \rho^*(\cdot,t) \rVert_{L^1_{GR}} \coloneqq \frac{1}{M} \sum_{i=1}^{M} \frac{| \rho(x_i,t) - \rho^*(x_i,t)| }{| \rho^*(x_i,t) |}\,,
$$
where $\rho$ is the computed density and $\rho^*$ is the exact solution for the density.
We will refer to this norm as the $L^1_{GR}$ norm.
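For concreteness, the $L^1_{GR}$ norm may be evaluated as in the following minimal NumPy sketch; the array names \texttt{rho} and \texttt{rho\_exact} are chosen only for illustration.

```python
import numpy as np

def l1_gr_error(rho, rho_exact):
    """Relative cell-averaged L1 error of Greenough & Rider:
    (1/M) * sum_i |rho_i - rho*_i| / |rho*_i|."""
    rho = np.asarray(rho, dtype=float)
    rho_exact = np.asarray(rho_exact, dtype=float)
    return float(np.mean(np.abs(rho - rho_exact) / np.abs(rho_exact)))
```

Note that each cell's deviation is normalized by the exact value there, so the norm is dimensionless and well suited to the percentage-style comparisons of \cite{GreenoughRider2004}.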
In Table \ref{table:sod-error-GR}, we calculate the $L^1_{GR}$ errors for the density for the Sod
shock tube problem computed with WENO and WENO-$N$, and compare them with the corresponding values
in \cite{GreenoughRider2004}. All simulations were run with a CFL number of 0.6. We see that our simplified WENO scheme and noise indicator algorithm compare well with
the more complicated RK3-WENO5 scheme. Consequently, using our simplified WENO algorithm for the
purposes of comparison in our error analysis for the artificial viscosity methods presented is justified; that is to
say, comparing the performance of our artificial viscosity methods with our simplified WENO scheme is
similar to comparing the performance of our artificial viscosity methods with the more complicated
(and `industry-standard') RK3-WENO5.
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|lc|ccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{3}{c|}{\textbf{Cells}}\\
{} & & 101 & 201 & 401 \\
\midrule
\multirow{2}{*}{WENO} & Error &
$1.57 \times 10^{-2}$ & $7.93 \times 10^{-3}$ & $4.49 \times 10^{-3}$ \\
& Order & -- & 0.99 & 0.82 \\
\midrule
\multirow{2}{*}{WENO-N} & Error &
$1.60 \times 10^{-2}$ & $7.90 \times 10^{-3}$ & $4.37 \times 10^{-3}$ \\
& Order & -- & 1.02 & 0.85 \\
\midrule
\multirow{2}{*}{RK3-WENO5} & Error &
$1.58 \times 10^{-2}$ & $8.24 \times 10^{-3}$ & $4.47 \times 10^{-3}$ \\
& Order & -- & 0.93 & 0.88 \\
\midrule
\bottomrule
\end{tabular}}
\caption{Pre shock-wall collision ($t=0.20$) $L^1_{GR}$ error analysis and convergence tests for the
density for the Sod shock tube problem.}
\label{table:sod-error-GR}
\end{table}
\subsection{The Noh problem}\label{subsec:noh}
As a further example of wall-heating, we next consider the classical 1-$D$ planar Noh problem
\mbox{\cite{Noh1987,Liska2003995}}. The domain of interest is $-0.5 \leq x \leq 0.5$, the
adiabatic constant is $\gamma = 5/3$, and the initial data is given by
$$
\begin{bmatrix}
\rho_0 \\ (\rho u)_0 \\ E_0
\end{bmatrix}
=
\begin{bmatrix}
1 \\ 1 \\ 0.5 + \frac{10^{-6}}{\gamma -1}
\end{bmatrix}
\mathbbm{1}_{[-0.5,0)}(x)
+
\begin{bmatrix}
1 \\ -1 \\ 0.5 + \frac{10^{-6}}{\gamma -1}
\end{bmatrix}
\mathbbm{1}_{[0,0.5]}(x)\,.
$$
The solution for this problem consists of two infinite strength shocks propagating with speed $1/3$
outwards from the origin, with a state of constant density and pressure left behind.
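The exact solution just described admits a simple closed form, sketched below; the post-shock values $\rho = (\gamma+1)/(\gamma-1) = 4$, $u = 0$, and $p = (\gamma+1)/2 = 4/3$ follow from the infinite-strength shock jump conditions with $\rho_0 = 1$ and $|u_0| = 1$, and the tiny ambient pressure $10^{-6}$ is neglected in this sketch.

```python
import numpy as np

def noh_exact(x, t, gamma=5.0 / 3.0):
    """Exact planar Noh solution: two infinite-strength shocks move
    outward from the origin with speed (gamma-1)/2 = 1/3, leaving
    behind rho = (gamma+1)/(gamma-1) = 4, u = 0, p = (gamma+1)/2 = 4/3.
    The tiny ambient pressure 1e-6 of the initial data is neglected."""
    x = np.asarray(x, dtype=float)
    s = (gamma - 1.0) / 2.0                # shock speed
    inside = np.abs(x) < s * t             # post-shock region
    rho = np.where(inside, (gamma + 1.0) / (gamma - 1.0), 1.0)
    u = np.where(inside, 0.0, -np.sign(x))
    p = np.where(inside, (gamma + 1.0) / 2.0, 0.0)
    return rho, u, p
```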
As demonstrated in {\cite{Liska2003995}}, most schemes tend to produce anomalous wall-heating at
the origin. We utilize our WENO-$C$ method (i.e. with no shock-wall
collision algorithm) with 101 cells. We choose the relevant parameters as
\begin{gather*}
\beta^u=1.0, \qquad \beta^E=10.0, \qquad \varepsilon=50.0, \qquad \kappa=1.0 \,.
\end{gather*}
The value of $\beta^u$ is chosen large enough to
eliminate post-shock oscillations, while $\beta^E$ is chosen to minimize the wall-heating.
In Fig.{\ref{fig:noh}}, we compare the solutions computed using WENO and WENO-$C$; it is clear that
WENO-$C$ produces a much more accurate solution, with the post-shock oscillations and wall-heating
eliminated.
\begin{figure}[H]
\centering
\subfigure[$t=1.0$: density, comparison of WENO-$C$ and WENO]{\label{fig:noh1}\includegraphics[width=75mm]{noh1}}
\subfigure[$t=1.0$: density, comparison of WENO-$C$ with different grid spacings]{\label{fig:noh2}\includegraphics[width=75mm]{noh2}}
\subfigure[$t=1.0$: density zoom-in at the shock, comparison of WENO-$C$ with different grid spacings]{\label{fig:noh3}\includegraphics[width=75mm]{noh3}}
\caption{The density profile at time $t = 1.0$ for the Noh problem, with the solution
computed using (a) WENO or (b,c) WENO-$C$. The dashed green curve is the exact solution.}
\label{fig:noh}
\end{figure}
\subsection{The LeBlanc shock tube problem}\label{subsec:leblanc}
We now turn our attention to the LeBlanc shock tube problem. Here, the domain of
interest is $0 \leq x \leq 9$, the adiabatic constant is $\gamma = \frac{5}{3}$,
and the initial data is given by
$$
\begin{bmatrix}
\rho_0 \\ (\rho u)_0 \\ E_0
\end{bmatrix}
=
\begin{bmatrix}
1 \\ 0 \\ 10^{-1}
\end{bmatrix}
\mathbbm{1}_{[0,3)}(x)
+
\begin{bmatrix}
10^{-3} \\ 0 \\ 10^{-9}
\end{bmatrix}
\mathbbm{1}_{[3,9]}(x)\,.
$$
The large jump in the initial energy $E_0$ produces a very strong shock wave, making the
LeBlanc shock-tube problem a very difficult test case. Most numerical schemes tend to
produce large overshoots in the internal energy
at the contact discontinuity, which results in a loss of accuracy in the shock speed, as shown
in Fig.\ref{fig:weno-leblanc}. This overshoot in the internal energy is in fact an example of
wall-heating; a small undershoot in the density and the continuity of the pressure at the contact
produce this observed overshoot in the internal energy. We refer the reader
to \cite{ReSeSh2012,Liu20098872,Loubere2005105} for further details.
\begin{figure}[H]
\centering
\subfigure[$t=6.0$: internal energy]{\label{fig:weno-leblanc1}\includegraphics[width=75mm]{weno-leblanc1}}
\subfigure[$t=6.0$: internal energy, zoomed in]{\label{fig:weno-leblanc2}\includegraphics[width=75mm]{weno-leblanc2}}
\caption{The internal energy profile at time $t = 6.0$ for the LeBlanc shock tube problem, with the solution
computed using WENO. The dashed green curve is the exact solution.}
\label{fig:weno-leblanc}
\end{figure}
Our strategy is to add an additional diffusion term to the right-hand side
of the energy equation \eqref{EulerC-energy} that will serve to remove
the large overshoot in the internal energy at the contact discontinuity. Specifically, we solve an
additional $C$-equation for a variable $C^e$ forced by $| \partial_x e |/ \max_{x} | \partial_x e|$. Thus,
equation \eqref{EulerC-energy} is replaced by
\begin{equation}
\partial_t E + \partial_x (uE + up) = \partial_x \left( \mathcal{B} ^{(E)}(t) \, \rho \, C \, \partial_x (E/\rho) \right) + \partial_x \left( \mathcal{B} ^{(e)}(t) \, \rho \, C^e \, \partial_x \left( E/\rho \right) \right) \,,
\end{equation}
where the function $C^e$ is computed using
\begin{equation}
\partial_t C^e + \frac{S(\bm{u})}{\varepsilon_e \Delta x} C^e - \kappa_e \Delta x \cdot S(\bm{u}) \partial_{xx} C^e = \frac{S(\bm{u})}{\varepsilon_e \Delta x} G^e \,. \label{C-energy-LeBlanc}
\end{equation}
The artificial viscosity coefficients are given by \eqref{artificial_visc} and
$$
\mathcal{B} ^{(e)}(t) = (\Delta x)^2 \cdot \frac{ \max_{x} | \partial_x u| }{ \max_{x} C^e }
\left(\beta^{e} + \beta^{e}_w \cdot \overline{C}_w(t)\right) \,,
$$
and $C$, $C_w(x,t)$, and $\overline{C}_w(t)$ are defined by
\eqref{C-Sod}, \eqref{Cwall-sod}, and \eqref{wall-ind-fn}, respectively.
The forcing to the $C^e$ equation \eqref{C-energy-LeBlanc} is
$$
G^e(x,t) = \mathbbm{1}_{(0,\infty)}(\partial_x u) \frac{ | \partial_x e |}{\max_{x} | \partial_x e |} \,.
$$
Here, the indicator function $\mathbbm{1}_{(0,\infty)}(\partial_x u)$ acts as an
{\it expansion switch}: $G^e$ is non-zero only where $\partial_x u > 0$.
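The forcing $G^e$ with its expansion switch can be sketched as follows; the centered differences (via \texttt{np.gradient}) are an assumption of this illustration, not a statement of the scheme's actual discretization.

```python
import numpy as np

def forcing_Ge(u, e, dx):
    """Forcing for the C^e equation:
    G^e = 1_{(0,inf)}(u_x) * |e_x| / max_x |e_x|,
    i.e. the normalized internal-energy gradient, switched on only in
    regions of expansion (u_x > 0). Centered differences throughout."""
    ux = np.gradient(np.asarray(u, dtype=float), dx)
    ex = np.gradient(np.asarray(e, dtype=float), dx)
    m = np.max(np.abs(ex))
    if m == 0.0:
        return np.zeros_like(ex)
    return np.where(ux > 0.0, np.abs(ex) / m, 0.0)
```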
\subsubsection{Stabilizing shock-wall collision}
To simulate the collision of the shock-wave with the wall, we use solid wall boundary conditions \eqref{var-bcs}.
Motivated by the results for the Sod shock tube problem
presented in \ref{sod-bounce-back}, we add wall viscosity to the momentum
and energy equations; we choose the parameters as
\begin{alignat*}{6}
\beta^u&=0.001, \qquad \beta^E&=0.0, \qquad \beta^e&=0.4, \qquad \beta^u_w&= 4.0, \qquad \beta^E_w&= 0.0, \qquad \beta^e_w&=0.0, \\
\varepsilon&=1.25, \qquad \kappa&=10.0, \qquad \varepsilon_e&=1.25, \qquad \kappa_e&=14.0, \qquad \varepsilon_w &=50.0, \qquad \kappa_w&=4 \,.
\end{alignat*}
We employ our WENO-$C$-$W$ scheme with 721 cells. Since this is a more challenging problem than
the Sod shock tube problem, we use the smaller CFL number of 0.25.
For the purpose of comparison,
we also implement the WENO-Noh scheme, with the parameters in \eqref{EulerC-noh} set as
$\beta^u_{\operatorname{Noh}} = 8.0$ and
$\beta^E_{\operatorname{Noh}} = 9.0$.
These parameters were chosen with the aim of suppressing post-collision noise while preventing
the occurrence of the wall heating error. We remark that WENO-Noh failed for CFL=0.25, and
required the much smaller CFL $\approx 0.045$ to run.
The shock-wave moves to the right and collides with the right wall at time $t \approx 7.2$.
Prior to shock collision, the WENO-Noh scheme produces a solution with an overshoot in the internal energy
at the contact discontinuity. This results in an incorrect shock front and wave speed.
The viscosity for the momentum at the shock and the energy at the
contact discontinuity in our WENO-$C$-$W$ scheme remove
post-shock oscillations and the overshoot in the internal
energy, respectively, as shown in Fig.\ref{fig:leblanc-before-collision}.
\begin{figure}[H]
\centering
\subfigure[$t=6.0$: velocity]{\label{fig:leblanc-before-collision1}\includegraphics[width=75mm]{leblanc-before-collision1}}
\subfigure[$t=6.0$: internal energy]{\label{fig:leblanc-before-collision-eng1}\includegraphics[width=75mm]{leblanc-before-collision-eng1}}
\caption{The (a) velocity and (b) internal energy profiles for the LeBlanc shock tube problem
before the collision with the wall. The solution is computed with viscosity activated for the
momentum and energy equations.}
\label{fig:leblanc-before-collision}
\end{figure}
Post shock-wall collision, the wall viscosity for the momentum and energy equations damps out the
oscillations behind the shock, while ensuring that the solution
maintains a sharp shock front and the
correct shock speed (see Fig.\ref{fig:leblanc-after-collision}). Moreover, the wall viscosity for
the energy equation
prevents the wall heating error from occurring, as shown in
Fig.\ref{fig:leblanc-after-collision-zoom-rho}.
Due to the lack of smoothness of the localizing artificial viscosity coefficient
$|\partial_x u|$, the WENO-Noh scheme is unable to fully suppress all the post-collision oscillations, though
the heat conduction term in the energy equation prevents the wall heating error from occurring.
In Fig.\ref{fig:leblanc-after-collision-zoom-inteng}, we zoom
in on the internal energy profile near the wall; it is evident that the solution computed with
WENO-$C$-$W$ is better than that computed with WENO-Noh, but there is a small error between
the computed solution and the exact solution. This error occurs because of a very small inaccuracy
in the density profile, shown in Fig.\ref{fig:leblanc-after-collision-zoom-rho}. Since the
density is so small here, and since the internal energy is given by \eqref{defn-internal-energy},
even tiny errors are greatly amplified, making it very difficult to get a completely accurate solution for the
internal energy.
\begin{figure}[H]
\centering
\subfigure[$t=8.0$: velocity]{\label{fig:leblanc-after-collision1}\includegraphics[width=75mm]{leblanc-after-collision1}}
\subfigure[$t=8.0$: internal energy]{\label{fig:leblanc-after-collision-eng1}\includegraphics[width=75mm]{leblanc-after-collision-eng1}}
\subfigure[$t=8.0$: internal energy, zoomed in at the wall]{\label{fig:leblanc-after-collision-zoom-inteng}\includegraphics[width=75mm]{leblanc-after-collision-zoom-inteng}}
\subfigure[$t=8.0$: density, zoomed in at the wall]{\label{fig:leblanc-after-collision-zoom-rho}\includegraphics[width=75mm]{leblanc-after-collision-zoom-rho}}
\caption{The (a) velocity, (b) internal energy, (c) zoomed in internal energy and (d) zoomed in density profiles
for the LeBlanc shock tube problem
after the collision with the wall. The solution computed with WENO-$C$-$W$ has the wall
viscosity activated for the momentum and energy equations.}
\label{fig:leblanc-after-collision}
\end{figure}
\subsubsection{Error analysis and convergence tests}
We now compare the errors of the various numerical schemes
listed in Table \ref{table:schemes} applied
to the LeBlanc shock tube problem, with the various relevant parameters fixed across the different methods.
The $L^1$ errors in the velocity are computed using formula \eqref{error-formula}, and are listed
in Table \ref{table:leblanc-error-pre} (time $t = 6.0$) and
Table \ref{table:leblanc-error-post} (time $t=8.0$). All of the simulations were run with a CFL number of
0.25, except for WENO-Noh, which required CFL $\approx 0.045$.
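The observed orders reported in the tables follow from successive mesh doublings as $p = \log(E_{\text{coarse}}/E_{\text{fine}})/\log 2$; the sketch below makes this explicit, with the cell-averaged form of the $L^1$ error an assumption matching \eqref{error-formula}.

```python
import math

def l1_error(u, u_exact):
    """Cell-averaged L1 error (1/M) * sum_i |u_i - u*_i| (the assumed
    form of the error formula used throughout the tables)."""
    return sum(abs(a - b) for a, b in zip(u, u_exact)) / len(u)

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order between two successive meshes:
    p = log(E_coarse / E_fine) / log(refinement)."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

For example, the WENO-$C$ entries at 1441 and 2881 cells in Table \ref{table:leblanc-error-pre} give \texttt{observed\_order(2.008e-3, 1.096e-3)} $\approx 0.873$, as listed.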
Prior to shock-wall collision, it is evident from Table \ref{table:leblanc-error-pre} that the $C$-method produces
a solution that is significantly better than those solutions produced without the $C$-method. The $L^1$ errors
for the velocity computed using WENO-$C$ are almost an order of magnitude smaller than the
$L^1$ errors for the velocity
computed using WENO and WENO-Noh. This is primarily due to the removal of the overshoot
in the internal energy, which results in an accurate shock speed.
On the other hand, the removal of the overshoot in the internal energy through the use of $C^e$ results in
a more smeared contact discontinuity, as shown in Fig.\ref{fig:leblanc-before-collision-eng1}. The
smearing of the contact discontinuity results in a non-physical ``bump'' appearing in the velocity profile,
as shown in Fig.\ref{fig:leblanc-before-collision1}. Note that this bump does not appear in the velocity
profile computed using WENO-Noh, since in this case the contact discontinuity is sharper, at the expense
of a large overshoot in the internal energy. We suggest, however, that this defect in the solution computed
using $C^e$ is relatively insignificant when compared against the magnitude of the error in the internal
energy solutions computed without $C^e$. This is primarily due to the fact that the internal energy error
results in a highly inaccurate shock speed which, in turn, leads to a highly inaccurate solution, as evidenced
by Table {\ref{table:leblanc-error-pre}}. On the other hand, the velocity bump error arises from the
correction of the overshoot in the internal energy, and subsequently the shock speed and location; the latter
two corrections result in a much more accurate solution overall, again demonstrated in Table
{\ref{table:leblanc-error-pre}}.
Moreover, we note that the bump error decreases with mesh
refinement approximately four times as fast as the overshoot error, as shown in
Table \ref{table:leblanc-overshoot-error}. Here, the overshoot/bump error is computed by calculating
the difference between the value at the peak of the
overshoot/bump and the value of the exact solution there\footnote{This error is thus a \emph{local}
$L^ \infty$ error.}.
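This local peak error can be sketched as follows; the index window used to isolate the overshoot or bump is an illustrative device of this sketch.

```python
import numpy as np

def peak_error(u, u_exact, window):
    """Local L-infinity error of an overshoot/bump: locate the peak of
    the computed profile inside an index window around the feature and
    compare with the exact solution at that cell."""
    u = np.asarray(u, dtype=float)
    u_exact = np.asarray(u_exact, dtype=float)
    i0, i1 = window
    i = i0 + int(np.argmax(u[i0:i1]))
    return float(abs(u[i] - u_exact[i]))
```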
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.25}
\scalebox{0.8}{
\begin{tabular}{|lc|cccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{4}{c|}{\textbf{Cells}}\\
{} & & 361 & 721 & 1441 & 2881\\
\midrule
\multirow{2}{*}{WENO} & Error &
$3.469 \times 10^{-2}$ & $1.659 \times 10^{-2}$ & $8.010 \times 10^{-3}$ & $4.016 \times 10^{-3}$\\
& Order & -- &1.065 & 1.050 & 0.996\\
\midrule
\multirow{2}{*}{WENO-Noh} & Error &
$2.546 \times 10^{-2}$ & $1.239 \times 10^{-2}$ & $6.001 \times 10^{-3}$ & $3.010 \times 10^{-3}$\\
& Order & -- & 1.040 & 1.045 & 0.996\\
\midrule
\multirow{2}{*}{WENO-$N$} & Error &
$3.468 \times 10^{-2}$ & $1.661 \times 10^{-2}$ & $8.015 \times 10^{-3}$ & $4.022 \times 10^{-3}$\\
& Order & -- & 1.062 & 1.051 & 0.995\\
\midrule
\multirow{2}{*}{WENO-$C$} & Error &
$7.190 \times 10^{-3}$ & $3.959 \times 10^{-3}$ & $2.008 \times 10^{-3}$ & $1.096 \times 10^{-3}$\\
& Order & -- & 0.864 & 0.976 & 0.873\\
\midrule
\multirow{2}{*}{WENO-$C$-$N$} & Error &
$7.169 \times 10^{-3}$ & $3.881 \times 10^{-3}$ & $2.007 \times 10^{-3}$ & $1.113 \times 10^{-3}$\\
& Order & -- & 0.885 & 0.951 & 0.851\\
\midrule
\bottomrule
\end{tabular}}
\caption{Pre shock-wall collision ($t = 6.0$) $L^1$ error analysis and convergence tests for the
velocity for the LeBlanc shock tube problem.}
\label{table:leblanc-error-pre}
\end{table}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|lc|cccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{4}{c|}{\textbf{Cells}}\\
{} & & 361 & 721 & 1441 & 2881\\
\midrule
\multirow{2}{*}{WENO} & Overshoot Error &
$7.330 \times 10^{-2}$ & $6.780 \times 10^{-2}$ & $6.200 \times 10^{-2}$ & $5.620 \times 10^{-2}$\\
& Order & -- &0.113 & 0.129 & 0.142 \\
\midrule
\multirow{2}{*}{WENO-$C$} & Bump Error &
$1.500 \times 10^{-2}$ & $9.700 \times 10^{-3}$ & $6.600 \times 10^{-3}$ & $3.900 \times 10^{-3}$\\
& Order & -- & 0.629 & 0.556 & 0.759 \\
\midrule
\bottomrule
\end{tabular}}
\caption{Comparison of the overshoot error in the internal energy and the bump error in the velocity
for solutions to the LeBlanc shock tube problem at time $t = 6.0$.}
\label{table:leblanc-overshoot-error}
\end{table}
We note here that Table {\ref{table:leblanc-error-pre}} seems to suggest that the WENO and
WENO-Noh schemes produce solutions that converge at first-order,
even though the solutions computed using these schemes are very poor,
as shown, for example, in Fig.{\ref{fig:weno-leblanc}}. This ``super-convergence'' {\cite{JiangShu1996}} is
due to large errors on coarser meshes, rather than smaller errors on finer meshes, and is therefore
superficial. On the other hand,
the WENO-$C$ and WENO-$C$-$N$ schemes produce solutions with much smaller errors, and suggest close
to first-order convergence.
Post shock-wall collision, the wall $C$-method produces a highly accurate non-oscillatory solution, while
ensuring that the wall heating error does not occur. While the WENO-Noh scheme is able to suppress
most of the oscillations, the large amount of viscosity needed due to the lack of smoothness of
$|\partial_x u|$ results in a shock front that is too smeared, as well as the imposition of a smaller time-step.
Again, we see that the noise indicator algorithm
serves primarily as an error correction mechanism, removing small-scale high-frequency oscillations from
the solution.
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|lc|cccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{4}{c|}{\textbf{Cells}}\\
{} & & 361 & 721 & 1441 & 2881\\
\midrule
\multirow{2}{*}{WENO} & Error &
$2.160 \times 10^{-2}$ & $9.832 \times 10^{-3}$ & $5.336 \times 10^{-3}$ & $2.896 \times 10^{-3}$\\
& Order & -- &1.136 & 0.882 & 0.882 \\
\midrule
\multirow{2}{*}{WENO-Noh} & Error &
$1.528 \times 10^{-2}$ & $6.544 \times 10^{-3}$ & $3.407 \times 10^{-3}$ & $1.668 \times 10^{-3}$ \\
& Order & -- & 1.224 & 0.942 & 1.030\\
\midrule
\multirow{2}{*}{WENO-$N$} & Error &
$2.141 \times 10^{-2}$ & $9.684 \times 10^{-3}$ & $5.178 \times 10^{-3}$ & $2.793 \times 10^{-3}$ \\
& Order & -- & 1.144 & 0.903 & 0.891 \\
\midrule
\multirow{2}{*}{WENO-$C$} & Error &
$5.703 \times 10^{-3}$ & $3.486 \times 10^{-3}$ & $2.024 \times 10^{-3}$ & $1.052 \times 10^{-3}$\\
& Order & -- & 0.710 & 0.785 & 0.944 \\
\midrule
\multirow{2}{*}{WENO-$C$-$N$} & Error &
$5.627 \times 10^{-3}$ & $3.384 \times 10^{-3}$ & $2.001 \times 10^{-3}$ & $1.045 \times 10^{-3}$\\
& Order & -- & 0.734 & 0.758 & 0.937 \\
\midrule
\multirow{2}{*}{WENO-$C$-$W$} & Error &
$6.077 \times 10^{-3}$ & $3.170 \times 10^{-3}$ & $1.694 \times 10^{-3}$ & $8.257 \times 10^{-4}$ \\
& Order & -- & 0.939 & 0.904 & 1.037 \\
\midrule
\multirow{2}{*}{WENO-$C$-$W$-$N$} & Error &
$6.064 \times 10^{-3}$ & $3.143 \times 10^{-3}$ & $1.703 \times 10^{-3}$ & $8.363 \times 10^{-4}$\\
& Order & -- & 0.948 & 0.884 & 1.026 \\
\midrule
\bottomrule
\end{tabular}}
\caption{Post shock-wall collision ($t=8.0$) $L^1$ error analysis and convergence tests for the
velocity for the LeBlanc shock tube problem.}
\label{table:leblanc-error-post}
\end{table}
\subsection{The Peak shock tube problem}
We next consider the Peak shock tube problem, introduced in \cite{Liska2003995}. The domain of interest is $0.1 \leq x \leq 0.6$, the adiabatic gas constant is $\gamma = 1.4$, and the
initial data is given by
$$
\begin{bmatrix}
\rho_0 \\ (\rho u)_0 \\ E_0
\end{bmatrix}
=
\begin{bmatrix}
0.1261192 \\ 11.1230540 \\ 1.962323 \times 10^{3}
\end{bmatrix}
\mathbbm{1}_{[0.1,0.5)}(x)
+
\begin{bmatrix}
6.591493 \\ 14.932505 \\ 24.800422
\end{bmatrix}
\mathbbm{1}_{[0.5,0.6]}(x).
$$
The difficulty in simulating solutions to the Peak problem is due to the fact that the shock wave moves significantly slower than the expansion wave;
moreover,
the distance between the
contact discontinuity and the shock is very small, resulting in a sharp and narrow peak in the density.
Most schemes produce inaccurate velocity profiles with large overshoots and low-frequency noise at the
expansion wave \cite{Liska2003995,GreenoughRider2004}.
The stand-alone WENO scheme produces a similarly poor velocity profile, but the $C$-method can be used to produce a good solution.
Since the noise appears in the velocity profile in the
region with the rarefaction wave, and since the usual $C$-method includes a compression
switch so that artificial viscosity is active only in regions of compression, we employ an additional $C$-equation for the rarefaction wave,
whose solution is
$C^r(x,t)$. We consider the following modification to
(\ref{EulerC}b):
\begin{align*}
&\partial_t (\rho u) + \partial_x (\rho u^2 + p) = \partial_x \left( \rho\, \left(\tilde{\beta}^u \, C + \tilde{\beta}^r\,C^r\right) \, \partial_x u \right) \,, \\
&\partial_t C^r + \frac{S(\bm{u})}{\varepsilon_r \Delta x} C^r - \kappa_r \Delta x \cdot S(\bm{u}) \partial_{xx} C^r = \frac{S(\bm{u})}{\varepsilon_r \Delta x} G^r \,,
\end{align*}
where
$C$ is the solution to \eqref{C-Sod}, and where
$\tilde{\beta}^u = \frac{\max_{x} |\partial_x u |}{\max_{x} C} \beta^u $ and $\tilde{\beta}^r = \frac{\max_{x} |\partial_x u |}{\max_{x} C^r} \beta^r$,
with
$$
G^r(x,t) = \mathbbm{1}_{(0,+\infty)} (\partial_x u) \cdot \frac{|\partial_x u(x,t)|}{\max_{x} | \partial_x u(x,t)|} \,.
$$
We remark that we have omitted the wall function $\overline C_w$ since we are not simulating the
shock-wall collision for this problem.
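The combined viscosity term in the modified momentum equation above can be assembled as in the following sketch; the centered-difference gradient and the small floor guarding against division by zero are illustrative assumptions, and the discretization of the outer flux divergence is omitted.

```python
import numpy as np

def momentum_viscosity_flux(rho, u, C, Cr, dx, beta_u=1.0, beta_r=10.0):
    """Artificial viscosity flux for the modified momentum equation:
    rho * (btilde_u * C + btilde_r * Cr) * u_x, with
    btilde_u = beta_u * max|u_x| / max C (and analogously for Cr).
    Default betas match the parameters chosen for the Peak problem."""
    ux = np.gradient(np.asarray(u, dtype=float), dx)
    m = np.max(np.abs(ux))
    bu = beta_u * m / max(float(np.max(C)), 1e-14)
    br = beta_r * m / max(float(np.max(Cr)), 1e-14)
    return np.asarray(rho, dtype=float) * (bu * np.asarray(C) + br * np.asarray(Cr)) * ux
```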
WENO and WENO-$C$ (with the above modification) are used on a grid with 801 cells and
with a time-step $\Delta t \approx 3.55 \times 10^{-6}$, giving CFL=0.6. The final time is $t=0.0039$, and the
results are shown in Fig.\ref{fig:peak}. The relevant parameters are chosen as
\begin{gather*}
\beta^u=1.0, \qquad \beta^r=10.0, \qquad \varepsilon=10.0, \qquad \kappa=40.0, \qquad \varepsilon_r=1.0, \qquad \kappa_r=20.0 \,.
\end{gather*}
As shown in Fig.\ref{fig:peak}, the extra viscosity provided by $\beta^r$
at the rarefaction wave removes the large overshoot and
low frequency non-physical oscillations that are present in the solution produced with WENO.
\begin{figure}[H]
\centering
\subfigure[$t=0.0039$: velocity]{\label{fig:peak1}\includegraphics[width=75mm]{peak1}}
\subfigure[$t=0.0039$: velocity, zoomed in]{\label{fig:peak-zoom1}\includegraphics[width=75mm]{peak-zoom1}}
\caption{Comparison of WENO and WENO-$C$ for the Peak shock tube problem with 801
cells. The green curve is the exact solution.}
\label{fig:peak}
\end{figure}
In \cite{Liska2003995}, an error analysis of various schemes applied to the Peak shock tube problem with the
above specifications is provided. We compute the $L^1$ and $L^2$ errors for the computed velocity minus the exact solution, using
\eqref{error-formula} for the $L^1$ error and
$$
\lVert u(\cdot,t) - u^*(\cdot,t) \rVert_{L^2} = \sqrt{\frac{1}{M} \sum_{i=1}^M | u(x_i,t) - u^*(x_i,t) |^2}\,,
$$
where $M$ is the number of cells used in the simulation and $u^*$ is the exact solution. Following
\cite{Liska2003995}, we list the errors in percentage form with the ratio in question given by
$\frac{\lVert u-u^* \rVert}{\lVert u^* \rVert}$. We also list the smallest error computed
from all the schemes considered
in \cite{Liska2003995}; namely, the error computed from the scheme of Liu and Lax
\cite{LiuLax1998,LiuLax2003}, which we will refer to as LL. We see in Table \ref{table:peakl} that WENO-$C$ compares very well
with LL, with the solution producing smaller errors in both the $L^1$ and $L^2$ norms.
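The percentage errors listed in Table \ref{table:peakl} are of the form $100 \cdot \lVert u - u^* \rVert / \lVert u^* \rVert$; a minimal sketch of this computation, using the cell-averaged norms above, follows.

```python
import math

def relative_error_percent(u, u_exact, norm="l1"):
    """Relative error 100 * ||u - u*|| / ||u*|| in percent, using the
    cell-averaged L1 or L2 norms."""
    M = len(u)
    if norm == "l1":
        num = sum(abs(a - b) for a, b in zip(u, u_exact)) / M
        den = sum(abs(b) for b in u_exact) / M
    else:  # "l2"
        num = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, u_exact)) / M)
        den = math.sqrt(sum(b * b for b in u_exact) / M)
    return 100.0 * num / den
```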
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|llc|c|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Norm}} & \multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{1}{c|}{\textbf{Cells}}\\
& {} & & 801\\
\midrule
\multirow{6}{*}{\vspace{-1.25em}$\lVert u-u^* \rVert_{L^1}$} & \multirow{2}{*}{WENO} & Error &
$1.057 \times 10^{-1}$ \\
& & \% & 1.0 \% \\[1.25em]
& \multirow{2}{*}{WENO-$C$} & Error &
$7.260 \times 10^{-2}$ \\
& & \% & 0.7 \% \\[1.25em]
& \multirow{2}{*}{LL} & Error &
-- \\
& & \% & 0.8 \% \\
\midrule
\multirow{6}{*}{\vspace{-1.25em}$\lVert u -u^* \rVert_{L^2}$} & \multirow{2}{*}{WENO} & Error &
$5.168 \times 10^{-1}$ \\
& & \% & 4.7 \% \\[1.25em]
& \multirow{2}{*}{WENO-$C$} & Error &
$4.684 \times 10^{-1}$ \\
& & \% & 4.3 \% \\[1.25em]
& \multirow{2}{*}{LL} & Error &
-- \\
& & \% & 4.4 \% \\
\midrule
\bottomrule
\end{tabular}}
\caption{$L^1$ and $L^2$ error analysis for the velocity $u$ for the Peak
shock tube problem at time $t=0.0039$.}
\label{table:peakl}
\end{table}
This test demonstrates the flexibility of the $C$-method; although a standard WENO scheme produces an
inaccurate and oscillatory solution, a very simple modification of the $C$-method allows for the suppression
of these oscillations, resulting in a more accurate solution.
\subsection{The Osher-Shu shock tube problem}
The Osher-Shu shock tube problem, introduced in \cite{OsherShu1989}, simulates a shock front
interacting with sinusoidal density fluctuations.
The computational domain is $-1 \leq x \leq 1$, $\gamma =1.4$, with initial
data
\begin{equation}\label{osher-shu-initial}
\begin{bmatrix}
\rho_0 \\ (\rho u)_0 \\ E_0
\end{bmatrix}
=
\begin{bmatrix}
3.857143 \\ 10.14185 \\ 39.1666
\end{bmatrix}
\mathbbm{1}_{[-1,-0.8)}(x)
+
\begin{bmatrix}
1 + 0.2\sin(5 \pi x) \\ 0 \\ 2.5
\end{bmatrix}
\mathbbm{1}_{[-0.8,1]}(x) \,.
\end{equation}
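The piecewise initial data \eqref{osher-shu-initial} may be assembled as in the following sketch.

```python
import numpy as np

def osher_shu_initial_data(x):
    """Initial data (rho, rho*u, E) for the Osher-Shu problem on [-1, 1]:
    a uniform shocked state for x < -0.8, and a sinusoidally perturbed
    density with zero velocity and E = 2.5 for x >= -0.8."""
    x = np.asarray(x, dtype=float)
    left = x < -0.8
    rho = np.where(left, 3.857143, 1.0 + 0.2 * np.sin(5.0 * np.pi * x))
    mom = np.where(left, 10.14185, 0.0)
    E = np.where(left, 39.1666, 2.5)
    return rho, mom, E
```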
We employ free-flow boundary
conditions \eqref{var-bcs-alternate} at the left wall $x=-1$, and solid wall boundary conditions \eqref{var-bcs} at the right wall $x=1$.
\subsubsection{Noise removal with the noise indicator}
In order to test the efficacy of our noise detection and removal algorithm for the Osher-Shu test, we
perform our numerical simulations using too large a time-step and hence a numerically unstable
CFL number, which
produces spurious high-frequency oscillations behind the shock\footnote{Artificially inflating the CFL number
allows us to model a typical scenario in computational physics in which a DNS-type simulation requires
a prohibitively small time-step, and forces simulations that require entering the unstable CFL regime.
Our objective is to demonstrate that this high-frequency instability can be suppressed by use
of our localized noise removal algorithm.}.
Of course, high-frequency oscillations can be created by numerous numerical instabilities,
but an unstable CFL number creates the prototypical oscillation pattern for testing a noise removal scheme.
Our goal is to remove the high-frequency noise from the solution without affecting the low-frequency sinusoidal
oscillations that are the main feature of this test problem. To this end, we first compute a solution using
WENO with 1025 cells with a time-step $\Delta t = 5.0 \times 10^{-4}$, giving a CFL number of 1.2.
The relatively large number of
cells and time-step produce noise with a frequency that is significantly higher than the lower frequency
non-spurious oscillations present in the solution. The WENO-$N$ scheme is used with
the reference coefficient $C_{\mathrm{ref}}$ in \eqref{cref} calculated using
$\delta h = 10^{-3}$. The
noise removal viscosity $\eta$ is chosen such that $\eta \Delta \tau / \Delta x^2 = 0.25$ and only one time-step is taken in the
heat equation.
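With $\eta \Delta \tau / \Delta x^2 = 0.25$ and a single time-step, the noise removal amounts to one explicit pass of a masked discrete heat equation, as in the sketch below; the precise form of the masking by $\mathbbm{1}_{\operatorname{noise}}$ is an assumption of this illustration.

```python
import numpy as np

def noise_removal_step(u, noise_mask, mu=0.25):
    """One explicit step of the localized heat equation used for noise
    removal: u <- u + mu * (u_{i+1} - 2*u_i + u_{i-1}), applied only in
    cells flagged by the noise indicator. mu = eta * dtau / dx^2 = 0.25
    as in the text; boundary cells are left untouched."""
    u = np.asarray(u, dtype=float)
    mask = np.asarray(noise_mask, dtype=float)
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    return u + mu * mask * lap
```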
Since an
exact solution is not available for this problem, our ``exact'' solution is computed with
WENO using 8193 cells and a time-step of $\Delta t = 3.125 \times 10^{-5}$, so that CFL $\approx 0.6$.
In Fig.\ref{fig:oshu-noise}, we compare the solutions computed with WENO and WENO-$N$. The
noise indicator algorithm locates and removes the high-frequency noise present in the solution, without
affecting the sinusoidal oscillations.
The sharpness of the shock front remains unaffected with the use of the noise indicator, due
to the deactivation of noise detection in a small region surrounding the shock.
\begin{figure}[H]
\centering
\subfigure[$t=0.36$: velocity]{\label{fig:oshu-noise1}\includegraphics[width=75mm]{oshu-noise1}}
\subfigure[$t=0.36$: velocity, zoomed in]{\label{fig:oshu-noise-zoom1}\includegraphics[width=75mm]{oshu-noise-zoom1}}
\caption{Comparison of WENO and WENO-$N$ for the Osher-Shu problem with 1025
cells. The red crosses indicate where the
noise indicator function $\mathbbm{1}_{\operatorname{noise}}(x)$ is active. The green curve is
the ``exact'' solution.}
\label{fig:oshu-noise}
\end{figure}
For the purpose of benchmarking our noise detection and removal algorithm, we also conduct
tests in which we use linear (hyperviscosity) operators
(see \mbox{\cite{Landshoff1955,Wilkins1980,CaShWh1998,PaPo1988,CaCo2004,CaCo2004b}}) of the form
\begin{equation}\label{hyperviscosity}
(-1)^{r-1} \beta_r (\Delta x)^{2r-1} \frac{\partial^{2r} u}{\partial x^{2r}}
\end{equation}
to remove noise, where $r \geq 1$.
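Discretely, the operator \eqref{hyperviscosity} can be formed by applying the standard three-point Laplacian $r$ times, as in the following sketch; the periodic wrap-around at the boundaries is an assumption made only for brevity.

```python
import numpy as np

def hyperviscosity_term(u, dx, r, beta_r):
    """Discrete linear (hyper)viscosity term
    (-1)^(r-1) * beta_r * dx^(2r-1) * d^{2r} u / dx^{2r},
    formed by applying the three-point Laplacian r times (periodic
    wrap-around at the boundaries, for brevity only)."""
    v = np.asarray(u, dtype=float)
    for _ in range(r):
        v = (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx**2
    return (-1.0) ** (r - 1) * beta_r * dx ** (2 * r - 1) * v
```

The alternating sign $(-1)^{r-1}$ ensures that each operator is dissipative, while the $(\Delta x)^{2r-1}$ scaling keeps the added term formally of order $\Delta x$.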
The equations of motion we consider are the Euler equations
{\eqref{eqn:consLawEvolution}} with the term {\eqref{hyperviscosity}} on the right-hand side of the
momentum equation. When numerically approximated using our WENO-type discretization, the
resulting scheme is referred to as the WENO-$\Delta^r u$ scheme.
We perform numerical tests for the WENO-$\Delta^r u$ scheme with
$r=1, 2, 3$, and set $\beta_1=0.2$, $\beta_2=0.05$, and $\beta_3=0.01$, with these values determined
\emph{a posteriori} to optimize the resulting solutions.
We compare in Fig.{\ref{fig:hyperviscosity}} the WENO-$N$ and WENO-$\Delta^r u$ simulations;
each subfigure shows the computed velocity, obtained using one of
the schemes on grids with 513, 1025,
2049, and 4097 cells, as well as the exact solution. The plots shown are zoomed in on the region behind the
shock where there is high-frequency noise.
\begin{figure}[H]
\centering
\subfigure[WENO-$N$]{\label{fig:oshu-convergence1}\includegraphics[width=75mm]{oshu-convergence1}}
\subfigure[WENO-$\Delta u$]{\label{fig:oshu-convergence2}\includegraphics[width=75mm]{oshu-convergence2}}
\subfigure[WENO-$\Delta^2 u$]{\label{fig:oshu-convergence3}\includegraphics[width=75mm]{oshu-convergence3}}
\subfigure[WENO-$\Delta^3 u$]{\label{fig:oshu-convergence4}\includegraphics[width=75mm]{oshu-convergence4}}
\caption{Comparison of the velocity profiles at $t=0.36$ for the Osher-Shu test. The green curve is the
exact solution.}
\label{fig:hyperviscosity}
\end{figure}
It is clear from these figures that, qualitatively, the WENO-$N$ scheme produces solutions with minimal noise
that appear to converge to the exact solution. The hyperviscosity schemes, on the other hand, produce
solutions with erratic behavior; for instance, despite mesh refinement,
the WENO-$\Delta^3 u$ solution on the 4097 cell grid appears
much worse than the solutions on the 1025 and 2049 cell grids. Similarly inconsistent convergence
behavior can be observed with WENO-$\Delta u$ and WENO-$\Delta^2 u$. This is due to a violation of the CFL
condition. It is interesting to observe, on the other hand,
that the WENO-$N$ solutions are not subject to this erratic
convergence behavior. This is likely the result of the highly localized (in both space and time) nature
of the noise detection.
Overall, WENO-$N$ appears to produce noise-free, accurate, and
convergent solutions.
Defining the $L^1_t$ norm as
$$
\lVert f \rVert_{L^1_t} = \frac{1}{KM} \sum_{j=1}^{K} \sum_{i=1}^{M} \lvert f(x_i,t_j) \rvert\,,
$$
in Table \ref{table:oshu-noise-error-vel}
we compute the $L^1$ and $L^1_t $ errors
for the velocity at time $t = 0.36$ at various mesh refinements.
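The $L^1_t$ norm above is a straightforward average over both grid points and output times; a minimal sketch (the function name is ours):

```python
import numpy as np

def l1t_norm(f):
    """L1_t norm as defined above: f has shape (K, M), holding the error
    f(x_i, t_j) at M grid points for each of K output times; the norm is
    (1/(KM)) * sum_j sum_i |f(x_i, t_j)|.  The fixed-time L1 error in the
    table corresponds to the K = 1 special case."""
    f = np.asarray(f)
    K, M = f.shape
    return np.abs(f).sum() / (K * M)
```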
Once again, we see that the noise indicator algorithm
functions as an ``error corrector'', reducing the numerical error through the removal of high-frequency
noise, while maintaining a relatively high order of accuracy. Among all the schemes considered, WENO-$N$ produces solutions with the smallest errors,
providing a quantitative validation of the observations made from Fig.{\ref{fig:hyperviscosity}}.
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.0}
\scalebox{0.8}{
\begin{tabular}{|llc|cccc|}
\toprule
\midrule
\multirow{2}{*}{\textbf{Norm}} & \multirow{2}{*}{\textbf{Scheme}} & & \multicolumn{4}{c|}{\textbf{Cells}}\\
& {} & & 513 & 1025 & 2049 & 4097\\
\midrule
\multirow{10}{*}{\vspace{-5.0em}$\Vert \tilde{u} \rVert_{L^1}$}
& \multirow{2}{*}{WENO} & Error &
$1.003 \times 10^{-2}$ & $5.478 \times 10^{-3}$ & $2.018 \times 10^{-3}$ & $1.258 \times 10^{-3}$\\
& & Order & -- & 0.873 & 1.440 & 0.682 \\[1.125em]
& \multirow{2}{*}{WENO-$\Delta u$} & Error &
$1.045 \times 10^{-2}$ & $4.717 \times 10^{-3}$ & $1.990 \times 10^{-3}$ & $9.770 \times 10^{-4}$ \\
& & Order & -- & 1.148 & 1.245 & 1.026 \\[1.125em]
& \multirow{2}{*}{WENO-$\Delta^2 u$} & Error &
$1.050 \times 10^{-2}$ & $4.774 \times 10^{-3}$ & $2.459 \times 10^{-3}$ & $1.132 \times 10^{-3}$ \\
& & Order & -- & 1.137 & 0.957 & 1.119 \\[1.125em]
& \multirow{2}{*}{WENO-$\Delta^3 u$} & Error &
$1.084 \times 10^{-2}$ & $4.806 \times 10^{-3}$ & $1.981 \times 10^{-3}$ & $1.109 \times 10^{-3}$ \\
& & Order & -- & 1.174 & 1.279 & 0.838 \\[1.125em]
& \multirow{2}{*}{WENO-$N$} & Error &
$1.013 \times 10^{-2}$ & $4.432 \times 10^{-3}$ & $1.973 \times 10^{-3}$ & $1.005 \times 10^{-3}$ \\
& & Order & -- & 1.193 & 1.168 & 0.973 \\
\midrule
\multirow{10}{*}{\vspace{-5em}$\lVert \tilde{u} \rVert_{L^1_t}$}
& \multirow{2}{*}{WENO} & Error &
$7.328 \times 10^{-3}$ & $3.223 \times 10^{-3}$ & $1.139 \times 10^{-3}$ & $6.761 \times 10^{-4}$\\
& & Order & -- & 1.185 & 1.501 & 0.752 \\[1.25em]
& \multirow{2}{*}{WENO-$\Delta u$} & Error &
$7.484 \times 10^{-3}$ & $3.348 \times 10^{-3}$ & $1.192 \times 10^{-3}$ & $6.418 \times 10^{-4}$\\
& & Order & -- & 1.161 & 1.490 & 0.893 \\[1.25em]
& \multirow{2}{*}{WENO-$\Delta^2 u$} & Error &
$7.333 \times 10^{-3}$ & $3.316 \times 10^{-3}$ & $1.254 \times 10^{-3}$ & $7.255 \times 10^{-4}$\\
& & Order & -- & 1.145 & 1.403 & 0.789 \\[1.25em]
& \multirow{2}{*}{WENO-$\Delta^3 u$} & Error &
$7.419 \times 10^{-3}$ & $3.340 \times 10^{-3}$ & $1.196 \times 10^{-3}$ & $6.903 \times 10^{-4}$\\
& & Order & -- & 1.151 & 1.482 & 0.793 \\[1.25em]
& \multirow{2}{*}{WENO-$N$} & Error &
$7.066 \times 10^{-3}$ & $3.004 \times 10^{-3}$ & $1.050 \times 10^{-3}$ & $5.656 \times 10^{-4}$ \\
& & Order & -- & 1.234 & 1.517 & 0.893 \\
\midrule
\bottomrule
\end{tabular}}
\caption{$L^1$ and $L^1_t$ error analysis and convergence tests for the velocity $u$ for the Osher-Shu
problem at time $t=0.36$, with $\tilde{u} = u - u^*$ the difference between the computed
solution $u$ and the ``exact solution'' $u^*$.}
\label{table:oshu-noise-error-vel}
\end{table}
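The ``Order'' rows in the table are the observed convergence orders between successive grids; since each refinement roughly halves the mesh width (e.g.\ 513 to 1025 cells), they follow from the ratio of consecutive errors. A sketch (function name is ours):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed order of accuracy between two successive grids, assuming
    the mesh width shrinks by `refinement` (here roughly halved):
    order = log(e_coarse / e_fine) / log(refinement)."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# First WENO row of the L1 table: errors 1.003e-2 (513 cells) and
# 5.478e-3 (1025 cells) give an observed order of about 0.873.
```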
We note that the numerical error and the order of convergence remain essentially unchanged when
using the
density $\rho$ instead of the velocity $u$;
in particular, the WENO-$N$ algorithm produces solutions with smaller errors and
similar rates of convergence as WENO, when errors and accuracy are computed using $\rho$.
And so, the removal of high-frequency noise in $u$, in turn, provides a density field that is also
free of high-frequency oscillations (cf.\ Remark {\ref{remark-noise}}).
\subsubsection{Stabilizing shock-wall collision for Osher-Shu}
We now turn to the issue of stabilizing shock-wall collision for the Osher-Shu problem. The problem is set up
as follows: the initial data is \eqref{osher-shu-initial}, the time-step is given by $\Delta t = 5 \times 10^{-4}$ with
final time $t = 0.63$, and the number of cells is 512, so that the CFL number is 0.6. We impose the
solid wall boundary conditions \eqref{ghost-node} at the
right boundary $x=1$, and free-flow boundary conditions \eqref{ghost-node-alternate}
at the left boundary $x=-1$.
The shock-wave moves to the right
and collides with the wall at $x=1$
at time $t \approx 0.5$. Post-collision, there is a large amount of noise present
in the solution behind the shock-wave, and our aim is to remove the noise while preserving the
sharpness of the shock front and minimizing the damping of the post-shock low frequency
oscillations.
\begin{figure}[H]
\centering
\subfigure[$t=0.63$: density]{\label{fig:oshu-after-rho}\includegraphics[width=75mm]{oshu-after-collision-rho}}
\subfigure[$t=0.63$: density, zoomed in]{\label{fig:oshu-after-rho-zoom}\includegraphics[width=75mm]{oshu-after-collision-rho-zoom}}
\caption{Comparison of WENO vs. WENO-$C$-$W$ for the density just after the shock-wall collision problem for the Osher-Shu problem with 512 cells.}
\label{fig:oshu-collision}
\end{figure}
We employ our WENO-$C$-$W$ scheme and choose the relevant parameters as
\begin{alignat*}{4}
\beta^u&=1.0, \qquad \beta^E&=0.0, \qquad \beta^u_w &= 2.5, \qquad \beta^E_w &= 0.85 \\
\varepsilon&=1.0, \qquad \kappa&=5.0, \qquad \varepsilon_w &=40.0, \qquad \kappa_w &=4.0 \,.
\end{alignat*}
The results are shown in Fig.{\ref{fig:oshu-collision}}.
Post shock-wall collision, WENO produces a noisy solution with high frequency noise
interfering with the sinusoidal oscillations, while
WENO-$C$-$W$ produces a solution with a sharp front and without noise.
\section{Concluding remarks}
In this paper, we have presented three ideas: the first is a space-time smooth artificial viscosity
method that is versatile and
simple to implement; the second is a shock-wall collision scheme that can be used to suppress post-collision
noise that occurs when a shock-wave collides with a fixed boundary and bounces back; the third is a wavelet-based noise
detection and removal scheme that is highly localized and can be used to remove noise present in solutions.
We have demonstrated the efficacy of the new method on a variety of 1-$D$
test problems with different features, and demonstrated that the solutions produced retain sharp fronts, correct wave speeds, remain oscillation-free, are not subject to the wall-heating error, and maintain high-order accuracy.
\section{Introduction}
\section{Survey}
A multitude of studies is available on initial access techniques at sub-6 GHz frequencies in ad-hoc wireless network scenarios, or at 60 GHz in IEEE 802.11ad WLAN and WPAN scenarios \cite{wang2009beam,alkhateeb2013hybrid, tsang2011coding,baykas2011ieee,kim2014fast, zhou2012efficient}. However, the literature on initial access for mmWave cellular networks is burgeoning, with very recent papers \cite{hur2013millimeter,barati2014directional,desai2014initial,barati2015directional,li2016performance, palacios2016speeding, jeong2015random}.
The initial access at high frequencies involves a directional search to align the directional beams of the two end users for maximum gain, using a set of procedures known as beam training. It is also referred to by other names, such as initial beamforming, beamforming training, or beam steering. We use these terms interchangeably for a complete survey of all of the techniques.
The techniques for beam training can be broadly classified into two categories: (i) Sequential Search and (ii) Hierarchical Search.
In sequential search, the AP and the UE use a highly directional beam and scan the beam space to find the desired beam. Initial work as in \cite{jeong2015random, barati2014directional} proposed a blind exhaustive search where the AP randomly transmits synchronization signals in different directions for each time slot, eventually scanning the whole angular space. This naive approach might entail a high delay in obtaining the desired beam.
Thus, multiple proposals have been introduced to reduce the beam training delay. In particular, \cite{zhou2012efficient}, \cite{barati2016initial}, and \cite{barati2015directional} proposed the use of a quasi-omnidirectional beam pattern at the transmitter, while the receiver does an exhaustive scan over all possible beam space; this process is reversed in a second phase, with the transmitter scanning the space while the receiver uses a quasi-omnidirectional beam.
\cite{li2017design} is a recent paper that compares four different exhaustive search techniques using a combination of directional and omnidirectional transmission.
However, using an omnidirectional antenna in the mmWave cellular environment requires high power to achieve large cellular coverage and causes substantial interference to other devices.
Another smart approach is the use of multiple directional beams simultaneously.
For instance, \cite{tsang2011coding} generated multiple beams simultaneously by manipulating the antenna weights and then coded the beams with a unique signature. Agile link beam training \cite{abari2016millimeter} instead used sparse FFT and randomized hashing to generate multiple beams and then used a voting procedure to find the right direction.
However, obtaining multiple beams simultaneously is difficult and it suffers from high power in the side lobes.
On the other hand, the hierarchical search techniques use a combination of high- and low-resolution antenna patterns iteratively for beamforming. The AP first performs an exhaustive search over wide beams and then refines the search over narrow beams. These protocols follow a codebook for the beam pattern. For instance, the beam coding presented in \cite{wang2009beam} consists of 3 stages---namely, device-to-device linking, sector-level searching, and beam-level searching. This is also an optional part of the IEEE 802.15.3c standard for 60 GHz mmWave WPAN systems.
Various other codebook designs are presented in \cite{alkhateeb2013hybrid,hur2013millimeter,kim2014fast,eltayeb2015opportunistic,li2016performance,de2017millimeter}. In particular, \cite{hur2013millimeter} is designed for wireless backhaul between fixed APs and \cite{kim2014fast} is designed for adaptive modulation schemes. Since these techniques use a wide beam during the initial phase, they all suffer from the limited coverage, with the worst performance for the users at the boundary of the cell. Moreover, high power is needed for initial sector-level searching which may result in high interference with other users.
A recent survey \cite{giordani2016comparative} compared the sequential search and hierarchical search techniques for the mmWave cellular network scenario. It showed a trade-off between beam training delay and misdetection probability: the hierarchical search gives a smaller delay, but since it makes use of a small antenna array in the first phase, it presents an order of magnitude higher misdetection probability compared to the exhaustive search technique.
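The delay side of this trade-off can be illustrated with a simple beam-pair count; the following sketch is an idealized model of our own (not taken from any of the cited papers) that ignores misdetection and retransmissions:

```python
def exhaustive_count(n_tx, n_rx):
    """Sequential (exhaustive) search: every narrow TX/RX beam pair is
    measured once, so the training delay scales with n_tx * n_rx slots."""
    return n_tx * n_rx

def hierarchical_count(n_beams, branching=2):
    """Idealized per-link-end count for a hierarchical codebook: at each
    of the log_b(n_beams) levels, the b candidate child beams are scanned.
    Misdetection of a wide first-stage beam, the weakness noted above,
    is not modeled."""
    levels = 0
    while branching ** levels < n_beams:
        levels += 1
    return branching * levels
```

For example, 64 TX and 16 RX narrow beams give 1024 exhaustive measurements, while a binary hierarchical codebook over 64 beams needs only 12 measurements per link end, which is the order-of-magnitude delay gap the survey describes.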
Implementing these beam training techniques at mmWave frequencies requires special beamforming architectures.
The fully digital architecture used for sub-6 GHz Massive MIMO systems may provide a small access delay \cite{barati2014directional}, but it is impractical at mmWave frequencies because of the high cost, high power consumption at the analog-to-digital converters, and the complexity of mixed-signal hardware which prevents the use of dedicated RF chain per antenna element.
On the other hand, analog architecture requires only a single RF chain to process all the antennas. Unlike MIMO systems, it uses a network of phase shifters that controls the phase of each antenna element to produce a directional beam \cite{hur2013millimeter,kim2014fast,li2016performance,abari2016millimeter}.
A hybrid of digital and analog architecture which requires only a few RF chains is capable of simultaneous multi-direction scanning. Adaptive hierarchical beam training and codebook design for hybrid architecture is considered in \cite{alkhateeb2013hybrid, eltayeb2015opportunistic, palacios2016speeding, de2017millimeter}. In fact,
\cite{alkhateeb2013hybrid} is the first paper that exploited the sparsity of mmWave channel and the availability of partial channel knowledge and proposed a hybrid iterative beamforming protocol utilizing a variant of matching pursuit algorithm.
A comparison of analog architecture using an exhaustive sequential search procedure with the digital and hybrid architectures for both sequential and hierarchical search procedures is presented in \cite{desai2014initial}. Their simulation reveals that the hierarchical search always performs worse than sequential search. The reason mentioned in the paper is that the wide beams (\textit{e.g.} $45^\circ$) used for hierarchical search had more power in the sidelobes than narrow beams, which resulted in UE selecting an incorrect beam at the initial stage of the hierarchical search procedure. The performance of digital and hybrid architecture was almost similar but worse than the analog architecture with exhaustive search.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent years, contextualized embeddings have become increasingly important.
Embeddings created by the BERT model and its variants have been used to get state-of-the-art performance in many tasks \citep{devlin-etal-2019-bert, liu2019roberta, yang2019xlnet, radford2018gpt, radford2019gpt2,brown2020gpt3}.
Several multimodal-BERT models have been developed that learn multimodal contextual embeddings through training jointly on linguistic data and visual data \citep{lu2019vilbert, su2019vlbert, li2019visualbert, chen2020uniter}.
They achieve state-of-the-art results across many tasks and benchmarks, such as Visual Question Answering \citep{balanced_vqa_v2}, image and text retrieval \citep{lin2014microsoft}, and Visual Commonsense Reasoning \citep{suhr2018corpus}.\footnote{From here on we refer to the text-only BERT models as ``BERT'' and the multimodal-BERT models as ``multimodal-BERTs''.}
BERT and multimodal-BERTs are black-box models that are not easily interpretable.
It is not trivial to know what knowledge is encoded in the models and their embeddings.
A common method for getting insight into the embeddings of both textual and visual content is probing.
Language utterances have an inherent grammatical structure that contributes to their meaning.
Natural images have a characteristic spatial structure that likewise allows humans to interpret their meaning.
In this paper we hypothesize that the textual and visual embeddings learned from images that are paired with their descriptions encode structural knowledge of both the language and the visual data.
Our goal is to reveal this structural knowledge with the use of probing.
More specifically, in order to perform this probing, we first make the inherent structures of language and visuals explicit: for language, by the dependency parse of the sentence that describes the image; for visuals, by a dependency structure over the object regions in the image.
Because the language truthfully describes the image, and inspired by \citet{draschkow2017scene}, we define a visual structure that correlates with the dependency tree structure and that arranges object regions in the image in a tree structure.
We call this visual dependency tree the \textit{scene tree}.
An example of this mapping to the scene tree is visualized in Figure~\ref{fig:tree_mapping}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/visualize-1077546505_1.png}
\caption{Example of the mapping from the linguistic dependency tree to the visual tree. The borders of the regions in the image have the same color as the phrase they are attached to. The rows below the image are the textual tree depth (in black), the visual tree depth (in red), the phrase index, and the words in the sentence.}
\label{fig:tree_mapping}
\end{figure}
The aligned dependency tree and scene tree allow us to
conduct a large set of experiments aimed at discovering encoded structures in neural representations obtained from multimodal-BERTs.
By making use of the structural probes proposed by \citet{hewitt-manning-2019-structural}, we compare the dependency trees learned by models with or without provided image features.
Furthermore, we investigate if scene trees are learned in the object region embeddings.
\paragraph{Research Questions}
In this study, we aim to answer the following research questions.
\begin{itemize}
\item \textbf{RQ 1:} Do the textual embeddings trained with a multimodal-BERT retain their structural knowledge? \\
\textbf{Sub-RQ 1.1:} To what extent does the joint training in a multimodal-BERT influence the structures learned in the textual embeddings?
\item \textbf{RQ 2:} Do the visual embeddings trained with a multimodal-BERT learn to encode a scene tree?
\end{itemize}
In a broader framework this study might contribute to better representation learning inspired by how humans acquire language in a perceptual context.
It stimulates the learning of representations that are compositional in nature and are jointly influenced by the structure of language and the corresponding structure of objects in visuals.
\section{Related Work}
\paragraph{Probing studies}
Several studies have been performed that aim at analyzing BERT and multimodal-BERTs. For BERT, probes are designed that explore gender bias \citep{bhardwaj2021probe_gender_bias},
relational knowledge \citep{singh-etal-2020-bertnesia},
linguistic knowledge for downstream tasks \citep{liu-etal-2019-linguistic},
part-of-speech knowledge \citep{hewitt2019designing,hewitt-etal-2021-conditional},
and for sentence and dependency structures \citep{tenney2019structure,hewitt-manning-2019-structural}.
These studies have shown that BERT latently learns to encode linguistic structures in its textual embeddings.
\citet{basaj2021explaining} made a first attempt at converting the probes to the visual modality and evaluated the information stored in the features created by visual models trained with self-supervision.
For multimodal-BERTs, one study by \citet{parcalabescu2020seeing} investigates how well these models learn to count objects in images and how well they generalize to new quantities.
They found that the multimodal-BERTs overfit the dataset bias and fail to generalize to out-of-distribution quantities.
\citet{frank-etal-2021-vision} found that visual information is much more used for textual tasks than textual information is used for visual tasks when using multimodal models.
These findings suggest more needed research into other capabilities of and knowledge in multimodal-BERT embeddings.
We build on this line of work but aim to discover structures encoded in the textual and visual embeddings learned with multimodal-BERTs. This is a first step towards finding an aligned structure between text and images. Future work could exploit this to make textual information more useful for visual tasks.
\paragraph{Structures in visual data}
There is large research interest in identifying structural properties of images e.g., scene graph annotation of the visual genome dataset \citep{krishnavisualgenome}.
In the field of psychology, research towards scene grammars \citep{draschkow2017scene} evidences that humans assign certain grammatical structures to the visual world.
Furthermore, some studies investigate the grounding of textual structures in images,
such as syntax learners \citep{shi_visually_2019} and visually grounded grammar inducers \cite{zhao_visually_2020}. Here the complete image is used, without considering object regions and their composing structure, to aid in predicting linguistic structures.
Closer to our work, \citet{Elliott2013ImageDU} introduced visual dependency relations (VDR), where spatial relations are created between objects in the image. The VDR can also be created by locating the object and subject in a caption and matching them with object annotations in the image \citep{Elliott2015DescribingIU}. Our scene tree differs, since it makes use of the entire dependency tree of the caption to create the visual structure.
\section{Background}
\paragraph{Multimodal-BERT}
Many variations of the BERT model implement a transformer architecture to process both visual and linguistic data, e.g., images and sentences. These multimodal-BERTs can be categorized into two groups: single-stream and dual-stream encoders. In the former, a regular BERT architecture processes the concatenated input of the textual description and the image through a transformer stack.
This allows for an ``unconstrained fusion of cross-modal features'' \citep{bugliarello2021volta}. Some examples of these models are VL-BERT \citep{su2019vlbert}, VisualBERT \citep{li2019visualbert}, and UNITER \citep{chen2020uniter}.
In the dual-stream models, the visual and linguistic features are first processed separately by different transformer stacks, followed by several transformer layers with alternating \textit{intra-modal} and \textit{inter-modal} interactions.
For the \textit{inter-modal} interactions, the query-key-value matrices modeling the multi-head self-attention are computed, and then the key-value matrices are exchanged between the modalities.
This limits the interactions between the modalities but increases the expressive power with separate parameters. Examples of such dual-stream models are ViLBERT \citep{lu2019vilbert}, LXMERT \citep{tan2019lxmert}, and ERNIE-ViL \citep{yu2021ernie}.\footnote{The ERNIE-ViL model is trained with scene graphs of the visual genome dataset. We do not probe this model as there is an overlap between the training data of ERNIE-ViL and our evaluation data.}
\section{Method}
\subsection{Tree Structures}\label{sec:tree}
In the probing experiments we assume that the structural knowledge of a sentence is made explicit by its dependency tree structure and that likewise the structural knowledge of an image is represented by a tree featuring the dependencies between object regions.
Further, we assume that the nodes of a tree (words in the dependency tree of the sentence, phrase labels in the region dependency tree of the image) are represented as embeddings obtained from a layer in BERT or in a multimodal-BERT.
To generate the depth and distance values from the tree, we use properties of the embedding representation space \citep{mikolov2013distributed}.
For example, similar types of relations between embeddings have a similar distance between them, such as countries and their capital cities.
The properties we use are the norm of a vector, which encodes the depth of a node in the tree, and the distance between two vectors, which encodes the distance between the corresponding nodes.
\paragraph{Generating distance values}
For the distance labels, a matrix ${\bm{D}} \in {\mathbb{N}}^{n\times n}$ is required, with each ${\bm{D}}_{ij}$ describing the distance between nodes $i$ and $j$.
To fill the matrix, we iterate over all possible pairs of nodes.
For nodes $i$ and $j$, the distance is computed by starting at node $i$ and traversing the tree until node $j$ is reached, taking the minimum number of edges.
This is achieved by using the breadth-first search algorithm.
\paragraph{Generating depth values}
For the depth labels, we generate a vector ${\bm{d}} \in {\mathbb{N}}^n$, with $n$ the number of nodes in the tree.
There is a single node that is the root of the tree, to which we assign a depth of zero.
The depth increases at every level below.
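The two label-generation steps above can be sketched as follows (a minimal implementation of ours, representing the tree as a parent array):

```python
from collections import deque

def tree_labels(parent):
    """Generate the depth vector d and pairwise distance matrix D for a
    tree given as a parent array: parent[i] is the parent of node i, and
    the root has parent -1.  Distances use breadth-first search from every
    node, which guarantees the minimum number of edges."""
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for i, p in enumerate(parent):
        if p < 0:
            root = i
        else:
            children[p].append(i)

    # depth: root gets 0, and depth increases at every level below
    d = [0] * n
    stack = [root]
    while stack:
        v = stack.pop()
        for c in children[v]:
            d[c] = d[v] + 1
            stack.append(c)

    # undirected adjacency, then BFS from each node for shortest distances
    adj = [[] for _ in range(n)]
    for i, p in enumerate(parent):
        if p >= 0:
            adj[i].append(p)
            adj[p].append(i)
    D = [[0] * n for _ in range(n)]
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        for t, dv in dist.items():
            D[s][t] = dv
    return d, D
```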
\subsection{Constructing the Trees}\label{sec:data_processing}
\begin{algorithm}[pt]
\caption{$ConstructSceneTree(T_t, P, I)$}
\label{alg:tree_mapping}
\begin{algorithmic}[1]
\REQUIRE Language dependency tree $T_t = \{E_t,V_t\}$, with $V_t$ the set of $TextIDs$ for words in a sentence and $E_t$ the set of edges such that each $e_{t} = (v_{t,j}, v_{t,k})$, where $v_{t,k}$ is a child node of $v_{t,j}$
\REQUIRE Set of phrases $P$, each $p_i$ describes one or more regions and covers multiple words
\REQUIRE Image $I$
\ENSURE Scene tree $T_s$
\STATE $V_s = \{\}$, set of Nodes in Scene Tree $T_s$
\STATE $E_s= \{\}$, set of Edges in Scene Tree $T_s$
\STATE $v_{s,0} = I$, set Image as root node
\STATE $D_0 = 0$, set root node depth as 0
\STATE $add(V_s, v_{s,0})$
\STATE $v_{t,0} = FindRootNode(T_t)$
\STATE $PhraseID2TextID(0) = v_{t,0}$
\FOR{$p_i \in P$}
\STATE $v_{t,k} = FindHighestNode(p_i)$
\STATE $PhraseID2TextID(p_i) = v_{t,k}$
\STATE $D_i = DepthInTree(T_t, v_{t,k})$
\ENDFOR
\FOR{$p_i \in P$ \textbf{ordered by} $D$}
\STATE $v_{t,k} = PhraseID2TextID(p_i)$
\WHILE{\textbf{True}}
\STATE $e_{t} = EdgeWithChildNode(E, v_{t,k})$
\STATE $v_{t,j} = SelectParentNode(e_{t})$
\STATE $p_p = TextID2PhraseID(v_{t,j})$
\IF{$p_p \in V_s$}
\STATE $add(V_s, p_i), \quad add(E_s, (p_p, p_i))$
\STATE $D_i = D_p + 1$
\STATE \textbf{break while loop}
\ELSE
\STATE $v_{t,k} = v_{t,j}$
\ENDIF
\ENDWHILE
\ENDFOR
\RETURN $T_s$
\end{algorithmic}
\end{algorithm}
\paragraph{Language dependency tree}
We use the dependency tree as linguistic structure. The tree annotations are according to the Stanford dependency guidelines \citep{de2008stanford}.
They can either be provided as gold standard in the dataset, or generated using the spaCy dependency parser \citep{spacy}.
\paragraph{Scene tree}
\citet{draschkow2017scene} found that there are commonalities between words in language and objects in scenes, allowing to construct a scene grammar.
Furthermore, \citet{zhao_visually_2020} have shown that an image provides clues that improve grammar induction.
In line with these works, we want a visual structure that aligns with a linguistic representation like the dependency tree.
As visual structure, a scene graph could be used for the relations between regions \citep{krishnavisualgenome}.
However, the unconstrained graph
is difficult to align with the dependency tree.
Therefore, we propose a novel visual structure, the \textit{scene tree}, that is created by mapping a textual dependency tree to the object regions of an image.
An example of such a mapping for an image-sentence pair is given in Figure~\ref{fig:tree_mapping}.
This process requires a tree for the sentence and paired data for images and sentences.
Each node in the scene tree directly matches one or more visual regions.
The node description is a phrase that covers multiple words in the sentence (or nodes in the dependency tree).
The output of this method is a tree that contains the phrase trees that directly correspond to the regions.
The algorithm is completely described as pseudo-code in Algorithm~\ref{alg:tree_mapping}.
The algorithm starts by initializing the scene tree.
We set the full image as the root node.
For each phrase that describes an image region, we select the dependency tree node (or word with a $TextID$) that is closest to the root and assign this a phrase ID. This creates a mapping between the phrases (Phrase IDs) and dependency tree nodes (Text IDs) $PhraseID2TextID$, and its reverse $TextID2PhraseID$.
We assign each phrase an initial depth, based on the word it maps to in $PhraseID2TextID$.
On line 12, the loop over the phrases that describe the object regions starts, to find the direct parent for each phrase so it can be added to the new scene tree.
For each phrase $p_i$, we select the matching dependency tree node $v_{t,k}$ from $PhraseID2TextID$. From $v_{t,k}$ we follow the chain of parent nodes until an ancestor $v_{t,j}$ is found that points back (using $TextID2PhraseID$) to a phrase $p_p$ that is already a member of the scene tree. Phrase $p_i$ is then added to the tree as a child of $p_p$.
The completed tree of phrases is our \textit{scene tree}.
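The core of Algorithm~\ref{alg:tree_mapping} can be condensed into a short sketch (our own simplification; words and phrases are integer IDs, and phrase 0 is reserved for the full image, whose head is the dependency root):

```python
def construct_scene_tree(dep_parent, phrase_head):
    """Condensed sketch of the scene-tree construction.  dep_parent[w] is
    the dependency parent of word w (the root word has parent -1);
    phrase_head[p] is the word closest to the root covered by phrase p,
    with phrase 0 mapped to the root word (the full image).
    Returns scene_parent[p], the parent phrase of each phrase."""
    word2phrase = {w: p for p, w in phrase_head.items()}

    def depth(w):
        k = 0
        while dep_parent[w] >= 0:
            w = dep_parent[w]
            k += 1
        return k

    scene_parent = {0: None}  # image is the root of the scene tree
    # process phrases shallow-to-deep so each parent is already in the tree
    for p in sorted((q for q in phrase_head if q != 0),
                    key=lambda q: depth(phrase_head[q])):
        w = dep_parent[phrase_head[p]]
        # climb the dependency tree until an ancestor maps to a phrase
        # that is already a member of the scene tree
        while w >= 0 and word2phrase.get(w) not in scene_parent:
            w = dep_parent[w]
        scene_parent[p] = word2phrase[w] if w >= 0 else 0
    return scene_parent
```

Since the root word is always mapped to phrase 0, the climb terminates at the image node at the latest, so every phrase receives a parent.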
\subsection{Embeddings}\label{sec:embeddings}
\paragraph{Textual embeddings}
For each sentence $l$, every word becomes a node $n_i$ in the tree, such that we have a sequence of $s$ nodes $n_{1:s}^l$. To obtain the textual embeddings ${\bm{h}}_{1:s}^l \in {\mathbb{R}}^{m}$, we do a wordpiece tokenization \citep{wu2016google} and pass the sentence into BERT. Depending on the requested layer, we take the output of that BERT layer as the embeddings. For nodes with multiple embeddings because of the wordpiece tokenization, we take the average of those embeddings.
To obtain the textual embeddings ${\bm{h}}_{1:s}^l$ for a multimodal-BERT, we use the same process but also provide visual features.
When an image is present, we enter the visual features (as described in the next paragraph), otherwise, a single masked all-zero feature is entered.
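The wordpiece-averaging step described above can be sketched as follows (function name is ours; embeddings are plain lists of floats for clarity):

```python
from collections import defaultdict

def pool_wordpieces(piece_embs, word_ids):
    """Average wordpiece embeddings back to word-level node embeddings:
    piece_embs is one vector per wordpiece, and word_ids[i] gives the
    index of the word that wordpiece i belongs to."""
    buckets = defaultdict(list)
    for emb, w in zip(piece_embs, word_ids):
        buckets[w].append(emb)
    out = []
    for w in sorted(buckets):
        vecs = buckets[w]
        out.append([sum(c) / len(vecs) for c in zip(*vecs)])
    return out
```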
\paragraph{Visual embeddings}
For a sentence paired with image $l$, the sequence of $s$ nodes $n_{1:s}^l$ consists of the object regions plus the full image.
The visual embeddings ${\bm{h}}_{1:s}^l \in {\mathbb{R}}^{m}$ are obtained by passing the raw Faster R-CNN features \citep{ren2015faster} into the multimodal-BERT.
Depending on the requested layer, we take the output of that multimodal-BERT layer as the embeddings.
\subsection{Structural Probes}\label{sec:probes}
Here we shortly describe the structural probes as defined by \citet{hewitt-manning-2019-structural}.
Originally designed for text, we use these probes to map from an embedding space (either textual embeddings or visual embeddings) to depth or distance values as defined in Section~\ref{sec:tree}.
\paragraph{Distance probe}
Given a sequence of $s$ nodes $n_{1:s}^l$ (words or objects) and their embeddings ${\bm{h}}_{1:s}^l \in {\mathbb{R}}^{m}$, where $l$ identifies the sequence and $m$ the embedding size, we predict a matrix of $s\times s$ distances.
First, we define a linear transformation ${\bm{B}} \in \mathbb{R}^{k\times m}$ with $k$ the probe rank, such that ${\bm{B}}^T{\bm{B}}$ is a positive semi-definite, symmetric matrix.
By first transforming a vector ${\bm{h}}$ with matrix ${\bm{B}}$, we obtain its squared norm as $({\bm{B}}{\bm{h}})^T({\bm{B}}{\bm{h}})$.
To get the squared distance between two nodes $i$ and $j$ in sequence $l$, we compute the difference between node embeddings ${\bm{h}}_i$ and ${\bm{h}}_j$ and take the norm following equation~\ref{eq:distance}:
\begin{equation}
{\bm{D}}_{ij} = ({\bm{B}}({\bm{h}}_i^l-{\bm{h}}_j^l))^T({\bm{B}}({\bm{h}}_i^l-{\bm{h}}_j^l))\label{eq:distance}
\end{equation}
The only parameters of the distance probe are now the transformation matrix ${\bm{B}}$, which can easily be implemented as a fully connected linear layer.
Identical to the work by \citet{hewitt-manning-2019-structural}, the probe is trained through stochastic gradient descent.
\paragraph{Depth probe}
For the depth probe, we transform the embedding of each node $n_i$ to its norm, so we can construct the vector ${\bm{d}}$.
This imposes a total order on the elements, resulting in the depths.
We compute the squared vector norm $\norm{{\bm{h}}_i}_{\bm{B}}^2$ with the following equation:
\begin{equation}
{\bm{d}}_i = \norm{{\bm{h}}_i}_{\bm{B}}^2 = ({\bm{B}}{\bm{h}}_i^l)^T({\bm{B}}{\bm{h}}_i^l)\label{eq:depth}
\end{equation}
\section{Experimental Setup}
\subsection{Data}\label{sec:data}
By using a text-only dataset, we can test how the textual embeddings of the multimodal-BERTs perform compared to the BERT model, without the interference from the visual embeddings.
This allows us to see how much information the multimodal-BERTs encode in the visual embeddings.
Therefore, we use the Penn Treebank (PTB3) \citep{marcus1999treebank}.
It is commonly used for dependency parsing (also by \citet{hewitt-manning-2019-structural} from whom we borrow the probes) and consists of gold-standard dependency tree annotations according to the Stanford dependency guidelines \citep{de2008stanford}.
We use the default training/validation/testing split of the Wall Street Journal sentences, that is, sections 2--21 for training, 22 for validation, and 23 for testing.
This provides us with 39.8k/1.7k/2.4k sentences for the splits, respectively.
The second dataset is the Flickr30k dataset \citep{young2014flickr30k}, which consists of multimodal image captioning data.
It has five caption annotations for each of the 30k images. An additional benefit of this dataset is its existing extensions, specifically Flickr30k-Entities (F30E) \citep{plummer2015flickr30k_entities}.
In F30E, all phrases in the captions are annotated and matched with region annotations in the image.
This paired dataset is used to create the scene trees proposed in Section~\ref{sec:data_processing}.
The Flickr30k dataset does not provide gold-standard dependency trees.
Therefore, the transformer-based spaCy dependency parser \citep{spacy} is used to generate silver-standard dependency trees according to the Stanford dependency guidelines \citep{de2008stanford}.
The dataset consists of 30k images, with (mostly) 5 captions each, resulting in 148.9k/5k/5k sentences for the training/validation/testing splits, respectively.
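Both probes are supervised with node depths and pairwise distances read off such gold or silver dependency trees. As a minimal sketch of that derivation (our own helper functions, assuming the tree is given as a head-index array, with $-1$ marking the root):

```python
def tree_depths(heads):
    """Depth of each token in the dependency tree; heads[i] = head of token i."""
    def depth(i):
        return 0 if heads[i] < 0 else 1 + depth(heads[i])
    return [depth(i) for i in range(len(heads))]

def tree_distances(heads):
    """Pairwise path lengths in the (undirected) dependency tree via BFS."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = [[0] * n for _ in range(n)]
    for s in range(n):
        seen, frontier, d = {s}, [s], 0
        while frontier:
            for v in frontier:
                dist[s][v] = d
            # expand one BFS level; seen.add() marks nodes as visited inline
            frontier = [w for v in frontier for w in adj[v]
                        if w not in seen and not seen.add(w)]
            d += 1
    return dist
```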
\subsection{Models}\label{sec:models}
We use two different multimodal-BERTs, one \textbf{single-stream} and one \textbf{dual-stream} model.
As implementation for the multimodal-BERTs, we make use of the \textsc{Volta} library \citep{bugliarello2021volta}.
Here, all the models are implemented and trained under a controlled and unified setup with regard to hyperparameters and training data.
Based on the performance under this unified setup on the Flickr30k image-sentence matching task, we have chosen the best performing models: UNITER \citep{chen2020uniter} as single-stream model and ViLBERT \citep{lu2019vilbert} as dual-stream model.
When probing the textual embeddings, we also use a text-only \textbf{BERT-base model} (from here on referred to as BERT) \citep{devlin-etal-2019-bert}.
\citet{hewitt-manning-2019-structural} use the same model, allowing for easy comparability.
The implementation used is from the HuggingFace Transformer library \citep{wolf-etal-2020-transformers}.
\paragraph{Hyperparameters}
For our setup and metrics, we follow the setup from \citet{hewitt-manning-2019-structural}.
The batch size is set to 32 and we train for a maximum of 40 epochs.
Early stopping is used to terminate training after no improvement on the validation L1-loss for 5 epochs.
\subsection{Metrics}\label{sec:metrics}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\linewidth}
\centering
\includegraphics[scale=0.4]{figures/results/ptb3/BERT/select_layer_ParseDepthTask_rank128.test.legend.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.48]{figures/results/ptb3/BERT/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{BERT}
\label{fig:ptb_depth_bert}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.48]{figures/results/ptb3/UNITER/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{UNITER}
\label{fig:ptb_depth_unit}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.48]{figures/results/ptb3/ViLBERT/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{ViLBERT}
\label{fig:ptb_depth_vil}
\end{subfigure}
\caption{Comparison for the depth probe on the PTB3 test set, with textual embeddings.}
\label{fig:ptb3_depth}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\linewidth}
\centering
\includegraphics[scale=0.4]{figures/results/ptb3/BERT/select_layer_ParseDistanceTask_rank128.test.legend.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/ptb3/BERT/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{BERT}
\label{fig:ptb_dist_bert}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/ptb3/UNITER/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{UNITER}
\label{fig:ptb_dist_unit}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/ptb3/ViLBERT/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{ViLBERT}
\label{fig:ptb_dist_vil}
\end{subfigure}
\caption{Comparison for the distance probe on the PTB3 test set, with textual embeddings.}
\label{fig:ptb3_distance}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\linewidth}
\centering
\includegraphics[scale=0.4]{figures/results/flickr30k/BERT/select_layer_ParseDepthTask_rank128.test.legend.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/BERT/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{BERT}
\label{fig:layer_flickr_dep_bert}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/UNITER/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{UNITER}
\label{fig:layer_flickr_dep_unit}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/ViLBERT/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{ViLBERT}
\label{fig:layer_flickr_dep_vil}
\end{subfigure}
\caption{Comparison for the depth probe on the Flickr30k test set, with textual embeddings.}
\label{fig:layer_flickr_dep}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\linewidth}
\centering
\includegraphics[scale=0.4]{figures/results/flickr30k/BERT/select_layer_ParseDistanceTask_rank128.test.legend.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/BERT/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{BERT}
\label{fig:layer_flickr_dist_bert}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/UNITER/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{UNITER}
\label{fig:layer_flickr_dist_unit}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/ViLBERT/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{ViLBERT}
\label{fig:layer_flickr_dist_vil}
\end{subfigure}
\caption{Comparison for the distance probe on the Flickr30k test set, with textual embeddings.}
\label{fig:layer_flickr_dist}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\linewidth}
\centering
\includegraphics[scale=0.4]{figures/results/flickr30k/BERT/select_layer_ParseDepthTask_rank128.test.legend.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/BERT/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{BERT}
\label{fig:layer_flickr_dep_bert_just_text}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/UNITER/just_text/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{UNITER - only text}
\label{fig:layer_flickr_dep_unit_just_text}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/ViLBERT/just_text/select_layer_ParseDepthTask_rank128.test.pdf}
\caption{ViLBERT - only text}
\label{fig:layer_flickr_dep_vil_just_text}
\end{subfigure}
\caption{Ablation comparison for the depth probe on the Flickr30k test set while just providing textual embeddings to the multimodal-BERTs.}
\label{fig:layer_flickr_dep_just_text}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\linewidth}
\centering
\includegraphics[scale=0.4]{figures/results/flickr30k/BERT/select_layer_ParseDistanceTask_rank128.test.legend.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/BERT/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{BERT}
\label{fig:layer_flickr_dist_bert_just_text}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/UNITER/just_text/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{UNITER - only text}
\label{fig:layer_flickr_dist_unit_just_text}
\end{subfigure}
\begin{subfigure}[t]{0.32\linewidth}
\centering
\includegraphics[scale=0.47]{figures/results/flickr30k/ViLBERT/just_text/select_layer_ParseDistanceTask_rank128.test.pdf}
\caption{ViLBERT - only text}
\label{fig:layer_flickr_dist_vil_just_text}
\end{subfigure}
\caption{Ablation comparison for the distance probe on the Flickr30k test set while just providing textual embeddings to the multimodal-BERTs.}
\label{fig:layer_flickr_dist_just_text}
\end{figure*}
\begin{figure*}[t]
\begin{minipage}{.49\textwidth}
\flushleft
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[scale=0.4]{figures/results/flickr30k/UNITER/select_layer_ParseVisualDepthTask_rank128.test.legend.pdf}
\end{subfigure}\\
\begin{subfigure}[t]{0.48\linewidth}
\flushleft
\includegraphics[scale=0.4]{figures/results/flickr30k/UNITER/select_layer_ParseVisualDepthTask_rank128.test.pdf}
\caption{UNITER}
\label{fig:layer_flickr_visdep_unit}
\end{subfigure}
~
\begin{subfigure}[t]{0.48\linewidth}
\flushright
\includegraphics[scale=0.4]{figures/results/flickr30k/ViLBERT/select_layer_ParseVisualDepthTask_rank128.test.pdf}
\caption{ViLBERT}
\label{fig:layer_flickr_visdep_vil}
\end{subfigure}
\caption{Comparison for the depth probe on the Flickr30k test set, with visual embeddings. Note that the scale differs in this figure.}
\label{fig:layer_flickr_visdep}
\end{minipage}
~
\begin{minipage}{.49\textwidth}
\flushright
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[trim={0 10 0 3},clip,scale=0.4]{figures/results/flickr30k/UNITER/select_layer_ParseVisualDistanceTask_rank128.test.legend.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.47\linewidth}
\flushleft
\includegraphics[trim={0 0 30 0},clip,scale=0.4]{figures/results/flickr30k/UNITER/select_layer_ParseVisualDistanceTask_rank128.test.pdf}
\caption{UNITER}
\label{fig:layer_flickr_visdist_unit}
\end{subfigure}
~
\begin{subfigure}[t]{0.47\linewidth}
\flushright
\includegraphics[trim={30 0 0 0},clip,scale=0.4]{figures/results/flickr30k/ViLBERT/select_layer_ParseVisualDistanceTask_rank128.test.pdf}
\caption{ViLBERT}
\label{fig:layer_flickr_visdist_vil}
\end{subfigure}
\caption{Comparison for the distance probe on the Flickr30k test set, with visual embeddings. Note that the scale differs in this figure.}
\label{fig:layer_flickr_visdist}
\end{minipage}
\end{figure*}
The main metric used for both the distance and the depth probes is the Spearman rank correlation coefficient.
This indicates whether the predicted depth vector of the nodes, or the predicted distance matrix of the nodes, correlates with the gold-standard (or silver) depths and distances generated according to the method in Section~\ref{sec:probes}.
The Spearman correlation is computed for each sentence length separately.
We take the average over the scores of the lengths between 5 and 50 and call this the Distance Spearman (DSpr.) for the distance probe and the Norm Spearman (NSpr.) for the depth probe.\footnote{Just as done by \citet{hewitt-manning-2019-structural}.}
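The length-bucketed averaging can be sketched in pure Python as follows (our own helper names; a library routine such as `scipy.stats.spearmanr` could replace the hand-rolled correlation):

```python
def spearman(xs, ys):
    """Spearman correlation: Pearson correlation on average ranks (handles ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend run of tied values
            avg = (i + j) / 2 + 1           # average rank for the tie group
            for t in order[i:j + 1]:
                r[t] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def norm_spearman(gold_by_sent, pred_by_sent, min_len=5, max_len=50):
    """NSpr./DSpr.: per-sentence Spearman, averaged within each sentence
    length, then averaged over the lengths min_len..max_len."""
    by_len = {}
    for g, p in zip(gold_by_sent, pred_by_sent):
        if min_len <= len(g) <= max_len:
            by_len.setdefault(len(g), []).append(spearman(g, p))
    return sum(sum(v) / len(v) for v in by_len.values()) / len(by_len)
```

For the distance probe the same scheme applies, with each sentence's flattened distance matrix in place of the depth vector.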
For the depth probes, we also use the root accuracy (root\_acc). This computes the accuracy of predicting the root of the sequence.
This metric is only applicable for the textual embeddings, due to our method of generating the visual tree, where the root is always the full image at the start of the sequence.
For the distance probe, we make use of the undirected unlabelled attachment score (UUAS). This directly tests how accurate the predicted tree is compared to the ground-truth (or silver) tree by computing the accuracy of predicted connections between nodes in the tree. It does not consider the label for the connection or the direction of the connection \citep{jurafsky2021speech}.
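A minimal UUAS sketch (our own helpers; we assume, as is standard for this probe, that the predicted tree is the minimum spanning tree over the probe's predicted distances, and that the gold tree is given as a head-index array):

```python
def mst_edges(dist):
    """Prim's MST over a predicted distance matrix; returns undirected edges."""
    n = len(dist)
    in_tree, edges = {0}, set()
    while len(in_tree) < n:
        # cheapest edge crossing the cut between tree and non-tree nodes
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.add(frozenset((i, j)))
        in_tree.add(j)
    return edges

def uuas(pred_dist, heads):
    """heads[i] is the gold head of token i (root has head -1); the score is
    the fraction of gold edges recovered, ignoring label and direction."""
    gold = {frozenset((i, h)) for i, h in enumerate(heads) if h >= 0}
    return len(mst_edges(pred_dist) & gold) / len(gold)
```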
\paragraph{Baseline comparisons}
We design one baseline for the textual data and two for the visual data. For the textual baseline, we use the initial word piece textual embeddings (from either BERT or a multimodal-BERT) before inserting them into the transformer stack.
We simply refer to it as \textbf{baseline}.
The first visual baseline uses the raw Faster R-CNN features \citep{ren2015faster} of each object region. However, these have a larger dimension than the BERT embeddings.
We refer to it as \textbf{R-CNN baseline}.
The second baseline uses the visual embeddings before they are fed to the transformer stack. This is a mapping from the Faster R-CNN features to the BERT embedding size. We refer to it as \textbf{baseline}.
\subsection{Hypotheses}\label{sec:hypotheses}
First, we want to determine the probe rank of the linear transformation used on the textual or the visual embeddings.
Based on results by \citet{hewitt-manning-2019-structural}, we set the probe rank for BERT to 128. We run a comparison with several probe ranks on UNITER and ViLBERT to find the optimal setting for the textual and visual embeddings. The results are shown and discussed in Appendix~\ref{app:tune_rank}. We use a rank of 128 for all our following experiments.
\paragraph{RQ 1}
The multimodal-BERT models are pre-trained on language data. We assume that the resulting embeddings integrate structural grammatical knowledge and hypothesize that this knowledge will not be forgotten during multimodal training.
To determine if training on multimodal data affects the quality of predicting the dependency tree when trained solely with textual data, we train the probes with BERT and both multimodal-BERTs and evaluate on the PTB3 dataset \citep{marcus1999treebank}.
\paragraph{Sub-RQ 1.1}
We expect that more interaction between the regions and the text will have a stronger impact. Some dependency attachments that are hard to predict might require visual knowledge.
Next to the effect on the linguistic knowledge, we also want to discover if the multimodal data helps the multimodal-BERTs in learning structural knowledge.
We run the probes on Flickr30k dataset \citep{young2014flickr30k} with the textual embeddings for all our models. Furthermore, we compare these to the difference in scores on the PTB3 dataset \citep{marcus1999treebank}.
\paragraph{RQ 2}
The Multimodal-BERTs learn highly contextualized embeddings. Therefore, we hypothesize that a model should be able to discover important interactions between object regions in the image.
To see if the model has learned to encode the scene tree in the visual region embeddings, we run the probes on the Flickr30k dataset \citep{young2014flickr30k} with the visual embeddings.
Furthermore, to see if the scene tree is learned mainly through joint interaction with the textual embeddings, we compare the scores between the single-stream model UNITER (with many cross-modal interactions) and the dual-stream model ViLBERT (with limited cross-modal interactions).
\section{Results and Discussion}\label{sec:results}
This discussion is based on the results from the test split. The results on the validation split (see Appendix~\ref{sec:val_results}) lead to the same observations.
\paragraph{RQ 1: Do the textual embeddings trained with a multimodal-BERT retain their structural knowledge?}
To answer RQ 1, we report the results for both structural probes on the PTB3 dataset. Here we only use the textual embeddings, since no visual features are available. The results for the depth probe are in Figure~\ref{fig:ptb3_depth}, and for the distance probe in Figure~\ref{fig:ptb3_distance}.
The results of both multimodal-BERTs (Figures~\ref{fig:ptb_depth_vil} and \ref{fig:ptb_dist_vil} for ViLBERT and Figures~\ref{fig:ptb_depth_unit} and \ref{fig:ptb_dist_unit} for UNITER) in terms of NSpr. and Root Acc are very comparable, showing similar curves and scores.
For both, the seventh layer is the best performing one. The shape of the curves across the layers is similar to those for the BERT model in Figures~\ref{fig:ptb_depth_bert} and \ref{fig:ptb_dist_bert}.
However, the scores of the multimodal-BERTs drop significantly.
While the multimodal-BERTs were initialized with weights from BERT, they were further trained on additional multimodal data with a different multimodal objective. This shows that the multimodal training hampers the retention of grammatical structural knowledge in the resulting embeddings.
\paragraph{Sub-RQ 1.1: To what extent does the joint training in a multimodal-BERT influence the structures learned in the textual embeddings?}
For this experiment, we compare the effect of having visual features present when using the structural probes on the textual embeddings. We run the probes on Flickr30k. The results for the depth probe are in Figure~\ref{fig:layer_flickr_dep}, and for the distance probe in Figure~\ref{fig:layer_flickr_dist}.
First, we see that for all models (BERT and multimodal-BERTs) the scores increase compared to the results on the PTB3 dataset (see the discussion of RQ 1), but still follow a similar trend across the layers.
The increase is most likely due to the complexity of the sentences and language of the PTB3 dataset, whereas the captions use simpler language.
For ViLBERT, there is a drop in performance for the earlier layers. We believe this is caused by the early stopping method firing early with these settings. Another explanation is that it is more difficult for the dual-stream model to use the additional parameters.
BERT outperforms the multimodal-BERTs on PTB3; however, this is not the case on Flickr30k.
For the depth probe (Figure~\ref{fig:layer_flickr_dep}) and the UUAS metric on the distance probe (Figure~\ref{fig:layer_flickr_dist}), the results obtained by BERT and the multimodal-BERTs are almost equal.
This can be due to the additional pretraining of the multimodal-BERTs on similar captioning sentences.
Another explanation is that, during such pretraining, the models learned to store relevant information in the visual embeddings.
We run an additional experiment where we use the pretrained multimodal-BERT, but while probing we only provide the sentence to the model, and mask out the image. The results for the depth probe are in Figure~\ref{fig:layer_flickr_dep_just_text}, and for the distance probe in Figure~\ref{fig:layer_flickr_dist_just_text}. Here we can see that the results are almost identical to when we provide the model with the visual embeddings. This indicates that the model does not have any benefit from the visual data when predicting the structures for textual embeddings, and it seems that the model uses the extra parameters of the vision layers to store knowledge about the text.
\paragraph{RQ 2: Do the visual embeddings trained with a multimodal-BERT learn to encode a scene tree?}
We aim to find the layer with the most structural knowledge learned when applied to multimodal data. See the results in Figures~\ref{fig:layer_flickr_visdep} and \ref{fig:layer_flickr_visdist}.
Regarding the results for the depth probe (Figure~\ref{fig:layer_flickr_visdep}), the scores between layers fluctuate inconsistently. The scores do improve slightly over the baselines, indicating that the multimodal-BERT encodes some knowledge of depth in the layers.
With regard to the distance probe (Figure~\ref{fig:layer_flickr_visdist}), the trend in the curves across the layers indicates that this is a type of knowledge that can be learned for the regions.
The multimodal-BERTs seem to disregard scene trees. There is a strong downward trend across the layers.
Furthermore, all the scores are much lower than the baseline and the R-CNN baseline scores.
This lack of learning of the scene tree can be caused by the chosen training objective of the multimodal-BERTs.
These objectives require an abstract type of information, where only basic features are needed to predict the masked items.
For the distance probe, there is a noticeable difference between the single-stream (Figure~\ref{fig:layer_flickr_visdist_unit}) and the dual-stream (Figure~\ref{fig:layer_flickr_visdist_vil}) models, where single-stream models benefit from the multimodal interactions to retain structural knowledge.
For UNITER, the scores in the first layers are very close to the baseline, showing that the single stream interaction benefits the memorizing of the scene tree structure.
\section{Conclusion and Future Work}
We made a first attempt at investigating whether the current multimodal-BERT models encode structural grammatical knowledge in their textual embeddings, in a similar way as text-only BERT models encode this knowledge.
Furthermore, we were the first to investigate the existence of encoded structural compositional knowledge of the object regions in image embeddings.
For this purpose, we created a novel scene tree structure that is mapped from the textual dependency tree of the paired caption.
We discovered that the multimodal-BERTs encode less structural grammatical knowledge than BERT. However, with image features present, it is still possible to achieve similar results. The cause for this requires more research.
While tree depths from the scene tree are not natively present in the features, we found that the scene tree could be a promising method for finding connections and distances between regions, which are already decently predicted with the raw Faster R-CNN features.
The multimodal-BERT models are currently trained with an objective that does not enforce the learning or storing of these types of structural information. Hence, we assume that the models learn to encode more abstract knowledge in their features.
Our work opens possibilities to further research on scene trees as a joint representation of object compositions in an image and the grammatical structure of its caption.
Furthermore, we recommend investigating the training of multimodal-BERTs with objectives that enforce the encoding of structural knowledge.
\section*{Acknowledgments}
We would like to thank Desmond Elliott, Djam\'e Seddah, and Liesbeth Allein for feedback on the paper.
Victor Milewski and Marie-Francine Moens were funded by the European Research Council (ERC) Advanced Grant CALCULUS (grant agreement No. 788506).
Miryam de Lhoneux was funded by the Swedish Research Council (grant 2020-00437).
\bibliographystyle{acl_natbib}
\section{Background and Related Works}
\label{sec:background}
\subsection{ML Model Inferences as Queries}
\eat{Tab.~\ref{tab:input-loading-overhead} illustrates that such data transfer overhead is significant, and could be multiple times of the inference time. \footnote{All the tests in Tab.~\ref{tab:input-loading-overhead} are conducted under Ubuntu 18.04 with $8$ Processor Cores and $10,752$ MB memory. The version of TensorFlow is 2.5.0 and the version of TensorFlow Hub is 0.12.0. The RDBMS is using PostgreSQL 13.3. }.
\begin{table}[h]
\centering
\scriptsize
\caption{\label{tab:input-loading-overhead} {\small \color{black} Overhead of loading input for inference in TensorFlow.}}
\scalebox{0.78}{
\begin{tabular}{|l|l|c|c|} \hline
Model Architecture&Input size and location&inference time(s)&input loading time(s)\\\hline \hline
VGG16~\cite{Krizhevsky09learningmultiple}~\cite{cifar-10}&$10,000$ images($32$x$32$x$3$)($31.0$ MB Binary file)&$0.42$&$0.57$\\ \hline
NNLM~\cite{tf2-text-classification}~\cite{TFDS}&$25,000$ texts($66.2$ MB CSV file)&$2.25$&$15.58$\\ \hline
BigBiGAN~\cite{donahue2019large}~\cite{tfflowers}&$3,670$ pictures($180$x$180$x$3$)($240.19$ MB JPEG file)&$8.36$&$65.31$\\ \hline
Transformer ~\cite{cer2018universal}~\cite{tf-universal-sentence-encoder-large}&$8,628$ sentences($590.0$ MB Binary file)&$0.83$&$0.98$\\ \hline
VGG16~\cite{Krizhevsky09learningmultiple}~\cite{cifar-10}&$10,000$ images($32$x$32$x$3$)(from PostgreSQL)&$0.42$&$1.03$\\ \hline
NNLM~\cite{tf2-text-classification}~\cite{TFDS}&$25,000$ texts(from PostgreSQL)&$2.25$&$16.75$\\ \hline
BigBiGAN~\cite{donahue2019large}~\cite{tfflowers}&$3,670$ pictures($180$x$180$x$3$)(from PostgreSQL)&$8.36$&$93.39$\\ \hline
Transformer ~\cite{cer2018universal}~\cite{tf-universal-sentence-encoder-large}&$8,628$ sentences(from PostgreSQL)&$0.83$&$7.76$\\ \hline
\end{tabular}
}
\end{table}
}
Existing works~\cite{luo2018scalable, meng2016mllib, boehm2016systemml} propose to: (1) Abstract the tensor as a set of tensor blocks; (2) Encode local linear algebra computation logic that manipulates a single tensor block or a pair of tensor blocks, in user-defined functions (UDFs), also called kernel functions, such as matrix multiplication, matrix addition, etc.; (3) Apply the relational algebra operators nested with these UDFs to perform linear algebra computations. Based on the above ideas, tensor relational algebra (TRA)~\cite{yuan2020tensor} further introduces a set of tensor-oriented relational operations, such as \texttt{tile}, \texttt{concat}, \texttt{rekey}, \texttt{transform}, \texttt{join}, \texttt{aggregation}, \texttt{selection}, etc.
We found that most ML workloads can be decomposed into linear algebra operations that are further represented in such TRA.
For example, \textbf{matrix multiplication} is a \texttt{join} followed by \texttt{aggregation}~\cite{yuan2020tensor, boehm2016systemml}. The \texttt{join} pairs two blocks from the two tensors if the first block's column index equals the second's row index. Then each joined pair of tensor blocks is applied with a UDF that multiplies these two tensor blocks. An output block has its row index being the first block's row index and its column index being the second block's column index. Then all tensor blocks output from the transformation are \texttt{grouped by} their row and column indexes, and all tensor blocks in the same group will be added up in an aggregate/reduce UDF. Similarly, \textbf{matrix addition} is a \texttt{join}.
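As a concrete sketch of this mapping, blocked matrix multiplication as a join on block indices followed by a group-by aggregation might look as follows (a hypothetical pure-Python illustration; the dict-of-blocks representation stands in for the relational tensor-block tables):

```python
from collections import defaultdict

def block_matmul(A_blocks, B_blocks):
    """A_blocks / B_blocks: dicts {(row_idx, col_idx): 2D-list block}.
    Join on A's column index == B's row index, multiply each joined pair
    (the transform UDF), then group by (A row, B col) and aggregate by
    elementwise addition (the aggregate UDF)."""
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    groups = defaultdict(list)
    for (ri, ci), X in A_blocks.items():          # join phase
        for (rj, cj), Y in B_blocks.items():
            if ci == rj:
                groups[(ri, cj)].append(mul(X, Y))  # transform UDF
    out = {}
    for key, blocks in groups.items():            # aggregation phase
        acc = blocks[0]
        for b in blocks[1:]:
            acc = add(acc, b)
        out[key] = acc
    return out
```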
In addition, \textbf{matrix transpose} is a \texttt{rekey}~\cite{yuan2020tensor}; \textbf{activations} such as relu, tanh, and sigmoid are \texttt{transform}s; \textbf{softmax and normalization} can be represented as an \texttt{aggregation} followed by a \texttt{transform}.
Therefore, as illustrated in Fig.~\ref{fig:la-and-ra}, a fully-connected feed-forward network (FFNN) can be represented in relational algebra~\cite{jankov2019declarative, luo2018scalable}.
\begin{figure}[h]
\vspace{-5pt}
\centering
\includegraphics[width=0.4\textwidth]{Figures/la-and-ra.pdf}
\caption{\label{fig:la-and-ra} \small
Example of mapping linear algebra to relational algebra.
}
\vspace{-5pt}
\end{figure}
While the experiments in this work (Sec.~\ref{sec:evaluation}) mainly used the aforementioned operators, other types of neural networks can also be represented in relational algebra. For example, convolution can be converted into a multiplication of two matrices~\cite{Conv-spatial-rewrite, spatial-rewrite}, where the first matrix is created by spatially flattening every filtered area of the input features into a vector and concatenating these vectors, and the second matrix is created by concatenating all filters and biases. Long short-term memory (LSTM) consists of \texttt{concat}, matrix multiplication, matrix addition, tanh, and sigmoid; and the transformer's attention mechanism consists of matrix multiplication, transpose, softmax, etc.~\cite{vaswani2017attention}.
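The convolution-to-matmul rewrite can be sketched as follows (a minimal valid-padding, stride-1 NumPy example; variable names are our own):

```python
import numpy as np

def conv2d_as_matmul(x, filters, bias):
    """x: (H, W, C); filters: (kh, kw, C, F); valid padding, stride 1.
    Flatten every filtered area of x into a row (the 'im2col' matrix),
    flatten each filter into a column, then one matrix multiply + bias."""
    H, W, C = x.shape
    kh, kw, _, F = filters.shape
    oh, ow = H - kh + 1, W - kw + 1
    # first matrix: one row per sliding window, flattened in (kh, kw, C) order
    patches = np.stack([x[i:i + kh, j:j + kw].ravel()
                        for i in range(oh) for j in range(ow)])
    # second matrix: filters flattened in the same (kh, kw, C) order
    Wmat = filters.reshape(-1, F)
    return (patches @ Wmat + bias).reshape(oh, ow, F)
```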
The storage optimization techniques proposed in this work can be easily extended to other tensor/array-based machine learning systems, which adopt a similar tensor representation that chunks a tensor to blocks, such as SystemML~\cite{boehm2016systemml}, Spark MLlib~\cite{meng2016mllib}, SciDB~\cite{stonebraker2011architecture}, SPORES~\cite{wang2020spores}, LaraDB~\cite{hutchison2017laradb}, etc.
In contrast, Raven~\cite{karanasos2019extending} and HummingBird~\cite{nakandala2020tensor} propose to transform relational data to tensors and leverage deep learning frameworks to run tensor computations. We will investigate how to apply the proposed deduplication techniques to these and other systems~\cite{DBLP:journals/pvldb/KoutsoukosNKSAI21, dolmatova2020relational} in the future.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Figures/system-overview.pdf}
\caption{\label{fig:overview} \small
Overview of the proposed model deduplication workflow.
}
\end{figure*}
\subsection{Tensor Deduplication and Virtualization }
\label{sec:related-works1}
Mistique~\cite{vartak2018mistique} proposed a data store for managing and querying the historical intermediate data generated from ML models.
It optimized the storage of fuzzy intermediate data using quantization, summarization, and deduplication. However, these techniques are designed for diagnosis queries, which are not linear algebra computations and have significantly less stringent accuracy and latency requirements compared to model inferences. While they considered both exact and approximate deduplication for traditional ML models, they only considered exact deduplication for DNN models, which is another limitation. In addition, they didn't consider page packing and caching optimization.
Jeong et al.~\cite{jeong2020accelerating} proposed to merge related models resulting from ensemble learning, transfer learning, and retraining into a single model through input-weight connectors, so that multiple models can be served in one process and context switch overheads caused by running multiple concurrent model processes can be avoided. However, their method makes strong assumptions about the model architecture, achieves only coarse-grained deduplication, and is not applicable to models that are owned by different individuals and organizations.
Weight virtualization~\cite{lee2020fast} is a recently proposed technique for edge device environments. It merges pages across multiple heterogeneous models into a single page that is shared by these models. However, their work relied on each weight's fisher information that must be extracted from the training process, which is usually not available at the serving stage in production. It also models the page matching and merging process as an expensive optimization process, which may work for small-scale models on edge devices, but not scalable to large-scale models. In addition, they didn't consider the integration with relational databases.
\subsection{Other Existing Deduplication Techniques}
\label{sec:related-works2}
Deduplication of relational data in RDBMS, also known as record linkage, identifies duplicate items through entity matching~\cite{elmagarmid2006duplicate}, using various blocking techniques to avoid the pair-wise comparison for dissimilar items~\cite{bilenko2006adaptive, ananthakrishna2002eliminating, hernandez1995merge, borthwick2020scalable}. Various distributed algorithms were proposed to further accelerate such deduplication~\cite{chu2016distributed}. For example, Dedoop~\cite{kolb2012load, kolb2012dedoop} leveraged the MapReduce platform, and Dis-Dedup~\cite{chu2016distributed} provided strong theoretical guarantees for load balance. In addition, various similarity join techniques were proposed to identify pairs of similar items, which leveraged similarity functions to filter out pairs that have similarity scores below a threshold~\cite{xiao2008ed} or used LSH to convert similarity join to an equi-join problem~\cite{yu2016generic}. While these works are helpful for cleaning data in RDBMS, they are not optimized for numerical tensor data. For example, they never considered how deduplication of tensor data will affect the accuracy of ML applications.
There exists abundant work in storage deduplication to facilitate the file backup process~\cite{meyer2012study}.
Bhagwat et al.~\cite{bhagwat2009extreme} proposed a two-tier index managing the fingerprints and file chunks. Zhu et al.~\cite{zhu2008avoiding} proposed RAM prefetching and bloom-filter based techniques, which can avoid disk I/Os on close to $99\%$ of the index lookups. ChunkStash~\cite{debnath2010chunkstash} proposed to construct the chunk index using flash memory. CacheDedup~\cite{li2016cachededup} proposed
duplication-aware cache replacement algorithms (D-LRU, DARC) to optimize both cache performance and endurance. AustereCache~\cite{wang2020austere} proposed a new flash caching design that aims for memory-efficient
indexing for deduplication and compression. All such works focus on exact deduplication of file chunks, because information integrity is required for file storage. However, the storage of model parameters for model serving can tolerate a certain degree of approximation if such approximation will not harm the inference accuracy.
\section{Buffer Pool Management}
\label{sec:caching}
A model serving workload involves multiple types of tensors that have different locality patterns. For example, the model parameter tensors at each layer are persisted to disk and are repeatedly read for making inferences; the input feature vector also needs to be persisted, but is read only once. The intermediate features output from each layer do not need to be persisted and are read only once.
Existing works showed that, compared to LRU/MRU/LFU, which consider only reference time, distance, or frequency, a fine-grained buffer pool management strategy can achieve better data locality for large-scale data analytics processing~\cite{zou2019pangea, zou2020architecture}. Such a strategy groups different types of data into locality sets~\cite{zou2019pangea, zou2020architecture, chou1986evaluation} and considers the access pattern and durability requirements of each locality set. A locality set is a set of pages that will be processed similarly; for example, the pages in each equivalence class
are regarded as a separate locality set. Users can configure the page eviction policy, e.g., MRU or LRU, for each locality set. When pages need to be evicted from the buffer pool to make room for new pages,
the system chooses as victim the locality set whose next page-to-be-evicted
has the lowest expected eviction cost among all locality sets.
The expected eviction cost is formalized in Eq.~\ref{eq:cost}.
\vspace{-10pt}
\begin{equation}
\label{eq:cost}
c_w + p_{reuse} \times c_r
\end{equation}
\noindent
Here, $c_w$ is the cost of writing out the page,
$c_r$ is the cost of loading it back for reading, and $p_{reuse}$ is the probability of accessing the page within the next $t$ time ticks. The formulations of $c_w$ and $c_r$ in existing works~\cite{zou2020architecture, zou2019pangea, chou1986evaluation} consider the lifetime, durability requirements, access patterns, etc., of each locality set, and can be reused for this work. However, when modeling $p_{reuse}$, existing works did not consider page sharing caused by model deduplication. To address this problem, we need to reformulate this factor.
In the scenario of serving multiple models, we propose to apply queueing theory~\cite{kleinrock1976queueing} to model page accesses: each page acts as a server, and each model inference request that triggers a page access acts as a customer. Because a page may be shared by multiple models, inference requests from each model are dispatched to a queue associated with that model. If we model the arrival time of the next access to each page from each queue as an independent Poisson point process~\cite{kleinrock1976queueing}, the probability of reusing each page (i.e., the probability that the page will be accessed within $t$ time ticks) can be estimated using Eq.~\ref{eq:cost1}. Here, $M =\{m_1, ..., m_s\}$ represents the set of models that share this page, and $\lambda_i$ denotes the access rate per time tick of model $m_i$.
\vspace{-10pt}
\begin{equation}
\label{eq:cost1}
p_{reuse}=1 - e^{-\sum_{m_i \in M}{\lambda_i} t}
\end{equation}
This approach is more accurate than simply estimating $p_{reuse}$ from the reference frequency or distance measured for each page, because the access pattern of the datasets involved in each model inference is fixed, so page reuse is mostly determined by the model access rates $\lambda_i$.
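As a concrete illustration, the two cost formulas can be combined into a minimal Python sketch; the helper names and example cost values below are hypothetical and not part of the netsDB implementation:

```python
import math

def p_reuse(rates, t=1.0):
    """Eq. (cost1): probability that a page shared by models with Poisson
    access rates `rates` (per tick) is accessed within the next t ticks."""
    return 1.0 - math.exp(-sum(rates) * t)

def expected_eviction_cost(c_w, c_r, rates, t=1.0):
    """Eq. (cost): write-out cost plus read-back cost weighted by the
    probability that the page is reused within t ticks."""
    return c_w + p_reuse(rates, t) * c_r

# A page shared by two models accumulates both access rates, so it is a
# costlier eviction victim than a private page with the same per-model rate.
shared = expected_eviction_cost(c_w=1.0, c_r=2.0, rates=[0.5, 0.5])
private = expected_eviction_cost(c_w=1.0, c_r=2.0, rates=[0.5])
assert shared > private
```

The victim selection then simply picks the locality set whose next page-to-be-evicted minimizes this expected cost.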
\eat{
tracking the frequency or reference distance of each page, is less efficient than tracking the frequency of each model as well as how pages are shared (or deduplicated) across models. That's because the access patterns of various datasets involved in each model inference are fixed. If once the inference frequency for each model and the set of models that share each page is known, the probability of reusing each page can be derived by modeling the arrival time of the next access to each model as a Poisson point process~\cite{kleinrock1976queueing}, and uses the cumulative density of the exponential distribution to predict the time-until-next arrival for a Poisson point process. Here, $M =\{m_1, ..., m_s\}$ represents a set of models that share this page, and $\lambda_i$ denotes the access rate per time tick for the model $m_i$:}
\eat{
(1) the reference distance. $p_{reuse}$ for a page is computed from the page's $\lambda$ value, where $\lambda$
is the rate (per time tick) at which the page is referenced.
If we model the arrival time of the next reference to each page as a Poisson point process \cite{kleinrock1976queueing}, then the probability that the page is referenced in the next $t$ time
ticks is $1 - e^{-\lambda t}$ (this follows from the cumulative density of the exponential distribution, which models the time-until-next arrival for a Poisson point process).
There are a number of ways that $\lambda$ for a page can be estimated.
We can collect the number of references $n$ to the page in the
last $t'$ time ticks, and estimate the rate of references per time tick as $\lambda \approx n/t'$.
This quantity is a bit difficult to deal with in practice, however. It requires storing multiple
references to each page, maintained over a sliding time window.
$\lambda$ can also be estimated from the time since the last reference, which is used in our Pangea implementation.
If a page was last referenced at time tick $t_{ref}$ and the current time tick is $t_{now}$, the number of references
since the beginning of time can be estimated as $t_{now} / (t_{now} - t_{ref})$; dividing by $t_{now}$ to get the number
of references per time tick gives $\lambda \approx 1 / (t_{now} - t_{ref})$; that is, $\lambda$ is the inverse of the time-to-last
reference of the page.\footnote{\small {The inverse of the page's reference distance can be seen as yet another reasonable
estimate for $\lambda$, as this
replaces $t_{now} - t_{ref}$ with the page's
last observed between-reference time as an estimate for the expected time interval between page references. We choose the time
since the last reference, however, as it requires only a single reference to be valid.}}
Finally, note that the previous discussion assumes that
\texttt{lifetime-ended} is false for each locality set. If there are one or more locality sets where
\texttt{lifetime-ended} is true, these are always chosen for eviction first, again according to the minimum expected
cost of evicting a page from the locality set.
\vspace{5 pt}
\noindent \textbf{A note on rate vs. probability.}
There is a strong relationship between using $p_{reuse}$ computed via an exponential distribution
with a time horizon of $t = 1$,
and simply weighting
a page's read cost is $c_r$ by $\lambda$ (the inverse of the time since last reference in the case of Pangea).
In fact, the latter is a linear approximation to the former.
If one approximates the exponential computation of $p_{reuse}$ with a linear function
(a first-degree Taylor series approximation of the exponential function about the point $\lambda' = 0$), we have:
\begin{align}
p_{reuse} &= 1 - e^{-\lambda t}
\approx 1 - e^{-\lambda' t} + te^{-\lambda' t}(\lambda - \lambda') \nonumber \\
&= 1 - e^{0} + te^{0}\lambda
= t\lambda = \lambda \nonumber
\end{align}
}
\section{Evaluation}
\label{sec:evaluation}
In this section, we will answer the following questions:
\noindent
(1) How effective is the proposed synergistic model deduplication mechanism in reducing the latency and improving the storage efficiency for various model serving scenarios? (Sec.~\ref{sec:overall})
\noindent
(2) How will the proposed index approach affect the time required for detecting the duplicate blocks, the overall storage efficiency, and the model serving accuracy? (Sec.~\ref{sec:dedup-index})
\noindent
(3) How will the proposed strategies of packing blocks to pages affect the storage efficiency and the computation overheads, compared to various baselines? (Sec.~\ref{sec:dedup-paging})
\noindent
(4) How will optimized caching improve memory locality? (Sec.~\ref{sec:dedup-caching})
\noindent
(5) How will deduplication work with popular model compression techniques, such as pruning and quantization? (Sec.~\ref{sec:dedup-compression})
\vspace{-6pt}
\subsection{Workloads and Evaluation Setup}
\subsubsection{Multiple Versions of Personalized Text Embedding Models}
\label{sec:workload-word2vec}
Text embedding is important for many natural language processing applications, and its accuracy can be greatly improved using large open corpora such as Wikipedia~\cite{wikipedia-data}. At the same time, every enterprise or domain has its own terminology, which is not covered by the open data. To personalize the text embeddings, for each domain we need to train a separate model on both the shared open data and the private domain/enterprise data. Word2Vec is a two-layer neural network used to generate word embeddings.
We use skip-gram Word2Vec with negative sampling ($64$ negative samples) and noise contrastive estimation (NCE) loss.
We deploy a Word2Vec model pretrained on a Wikipedia dump and downloaded from TFHub~\cite{tfhub}. The model embeds the 1 million most frequent tokens in the
corpus. We then finetune the pre-trained model using different domain-specific corpora, including texts extracted from Shakespeare's plays~\cite{TF-shakespeare}, posts collected from the Firefox support forum~\cite{Web-text-corpus}, articles collected from Fine Wine Diary~\cite{Web-text-corpus}, Yelp reviews~\cite{zhang2015character}, and IMDB reviews~\cite{maas2011learning}. The input document is processed with a skip window size of 1.
The Word2Vec embedding layer has one million $500$-dimensional embedding vectors corresponding to the one million words in the dictionary.
Therefore, the weight tensor has the shape $10^6 \times 500$.
The inference of a Word2Vec model on netsDB is implemented via matrix multiplication, where the input feature matrix has the shape $[100, 10^6]$, representing a batch of $100$ input words, sentences, or documents. A word is represented as a $10^6$-dimensional one-hot encoding vector, where the entry of the corresponding word in the vocabulary is set to $1$ and all other entries are $0$. Multiplying the batch of word encoding vectors with the embedding weight matrix then outputs the batch of embedding vectors for these words. Similarly, the encoding vector for a sentence or document, viewed as a ``bag of words'', is the sum of the one-hot encoding vectors of all the words it contains. Multiplying the batch of encoding vectors with the embedding weight matrix yields, for each sentence or document, its embedding as the weighted sum of the embedding vectors of its words.
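The matrix-multiplication formulation above can be sketched in a few lines of NumPy; the toy vocabulary and embedding sizes are placeholders for the paper's $10^6$ and $500$:

```python
import numpy as np

# Toy sizes: the paper uses a vocabulary of 10^6 and an embedding
# dimension of 500; we shrink both so the sketch runs instantly.
vocab, dim = 8, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(vocab, dim))      # embedding weight matrix

# Each document is a "bag of words": its encoding vector is the sum
# of the one-hot vectors of its words.
docs = [[1, 3], [0, 0, 5], [7]]
X = np.zeros((len(docs), vocab))
for i, words in enumerate(docs):
    for w in words:
        X[i, w] += 1.0

# One matrix multiplication embeds the whole batch: row i is the
# (weighted) sum of the embeddings of the words in document i.
E = X @ W
assert np.allclose(E[2], W[7])            # single-word doc == lookup
assert np.allclose(E[0], W[1] + W[3])
```

For a single word, the multiplication reduces to an embedding lookup, which is why the lookup-based approach discussed later computes the same result.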
\subsubsection{Multiple Versions of Text Classification Models}
\label{sec:text-classification-desc}
We further investigate a scenario that serves five different text semantic classification models. Each classification task takes a review as input and outputs a binary label indicating whether the input is toxic or nontoxic~\cite{zhang2015character, maas2011learning, borkan2019nuanced}. All tasks use the same model architecture, with three layers per model. The first layer is a Word2Vec layer as mentioned in Sec.~\ref{sec:word2vec}, with a vocabulary size of one million and an embedding dimension of $500$. The second layer is a fully connected layer that consists of merely $500\times16$ parameters, and the third layer is an output layer that consists of $16\times2$ parameters. Because the fully connected layer is small in size, we encode it in a UDF that is applied to the output of the Word2Vec embedding layer.
The first two text semantic classification models are trained using the same IMDB review dataset. The difference is that Model-1's Word2Vec layer uses the weights of a pre-trained model directly downloaded from TFHub, as mentioned in Sec.~\ref{sec:word2vec}, and is set to \texttt{Non-Trainable}, so that only the weights of the fully connected layers change during training. Model-2's Word2Vec layer, however, is set to \texttt{Trainable}, meaning the weights of that layer also change during training. Similarly, Model-3 and Model-4 are trained using Yelp datasets, with the Word2Vec layer set to \texttt{Non-Trainable} and \texttt{Trainable}, respectively. Model-5 is trained using the civil comments dataset~\cite{borkan2019nuanced}, collected from news sites with labeled toxicity values, with its Word2Vec layer set to \texttt{Trainable}.
\vspace{-5pt}
\subsubsection{Transfer Learning of Extreme Classification Models}
\label{sec:workload-amazon14k}
Following TRA~\cite{yuan2020tensor}, a two-layer feed-forward neural network (FFNN) is implemented in our proposed system for the AmazonCat-14K~\cite{mcauley2015image, mcauley2015inferring} benchmark. This FFNN involves five tensors: the weight and bias tensors of the two layers, plus the input tensor for which predictions are generated. The input tensor includes $1,000$ data points with $597,540$ features each, and the extreme classification task uses $14,588$ labels. The hidden layer has $1,000$ neurons. Therefore, the weight tensor of the first layer (denoted $W_1$) has $597,540,000$ parameters, and the weight tensor of the second layer (denoted $W_2$) has $14,588,000$ parameters.
A transfer learning scenario is tested, in which the first-layer weight tensor $W_1$ is frozen and $W_2$ is specialized for different tasks. Only for this scenario, the inputs, weights, and biases are randomly generated instead of being trained from real-world data as in the other scenarios. The experiments remain meaningful because deduplication in this scenario hardly affects the inference accuracy: $W_1$ is identical across all the models, so no weights need to be approximated to deduplicate it, and we choose not to deduplicate any blocks from the smaller, specialized $W_2$ layer.
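A minimal sketch of the shared-layer inference follows, with toy sizes standing in for the real dimensions; the ReLU hidden activation is our assumption, since the text does not name the activation function:

```python
import numpy as np

def ffnn_infer(X, W1, b1, W2, b2):
    """Two-layer feed-forward inference: a shared, frozen first layer
    followed by a task-specialized second layer."""
    H = np.maximum(X @ W1 + b1, 0.0)   # hidden layer (ReLU assumed)
    return H @ W2 + b2                 # per-task output scores

# Toy sizes standing in for 597,540 features / 1,000 hidden units /
# 14,588 labels; the single W1 object is shared across all tasks,
# which is exactly what deduplicating the shared layer exploits.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))                          # 5 data points
W1, b1 = rng.normal(size=(20, 8)), np.zeros(8)        # shared, frozen
tasks = [(rng.normal(size=(8, 3)), np.zeros(3)) for _ in range(3)]

outputs = [ffnn_infer(X, W1, b1, W2, b2) for W2, b2 in tasks]
assert all(o.shape == (5, 3) for o in outputs)
```

Because every task reuses the same `W1` object, only one copy of the large first layer needs to be resident, mirroring the shared-set storage in netsDB.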
The implementation of the feed-forward inference at each fully-connected layer is illustrated in Fig.~\ref{fig:la-and-ra}.
\vspace{6pt}
\noindent
\textbf{Evaluation Environment Setup.} Unless explicitly specified, most experiments used an AWS r4.xlarge instance that has four vCPU cores and $30$ gigabytes of RAM.
The storage volumes include a $128$ GB SSD and a $128$ GB hard disk drive.
For the experiments on the GPU, we used an AWS g4dn.2xlarge instance equipped with one NVIDIA T4 Tensor Core GPU with $16$ gigabytes of memory, in addition to eight vCPU cores and $32$ gigabytes of host memory.
\subsection{Overall Evaluation Results}
\label{sec:overall}
\subsubsection{Multiple Versions of Personalized Text Embeddings}
\label{sec:word2vec}
We find that word embedding models finetuned from the same TFHub pretrained Word2Vec model share more than $90\%$ of their pages. (The accuracy of each Word2Vec model after finetuning is above $99\%$.) Each model is a $1,000,000 \times 500$ tensor, stored as a set of tensor blocks of shape $10,000 \times 100$, with each weight stored in double precision.
Without our proposed deduplication mechanism, storing six word embedding models separately requires more than $24$ gigabytes of storage space. By applying our work, only $6.7$ gigabytes are required, a \textbf{$3.6\times$} reduction. Note that the overall memory requirement for serving the $6$ models is higher than the storage requirement, as we also need to cache the intermediate data, which includes the \texttt{join} HashMap constructed for probing the model parameters and about $1$ gigabyte of input data.
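These figures can be sanity-checked with simple arithmetic (decimal gigabytes and $8$ bytes per double-precision weight assumed; page and metadata overhead pushes the measured total slightly above the raw-weight total):

```python
# Raw weight storage for six 10^6 x 500 embedding tensors stored in
# double precision (8 bytes per weight), in decimal gigabytes.
num_models, vocab, dim, bytes_per_weight = 6, 10**6, 500, 8
raw_gb = num_models * vocab * dim * bytes_per_weight / 10**9
dedup_gb = 6.7                    # measured footprint after dedup
assert raw_gb == 24.0             # measured "> 24 GB" adds page overhead
assert round(raw_gb / dedup_gb, 1) == 3.6
```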
In Tab.~\ref{tab:word2vec-overall-1} and Tab.~\ref{tab:word2vec-overall-2}, we measured the total latency of making a batch of $100$ inferences on all six models using different configurations of buffer pool size and storage hardware. We observed that our proposed deduplication mechanism brought up to $1.4\times$ and $4.7\times$ speedups in model serving latency for SSD and HDD storage, respectively.
\begin{table}[h]
\vspace{-5pt}
\centering
\scriptsize
\caption{\label{tab:word2vec-overall-1} {\small \color{black} Overall latency for serving different numbers of Word2Vec models, tested on an r4.xlarge instance, using SSD and HDD. The buffer pool size is set to $15$ gigabytes. (Unit: seconds)}}
\begin{tabular}{|l|r|r|r|} \hline
num models& disk type &w/o dedup&w/ dedup \& optimized caching\\\hline \hline
2&SSD&191 &175 \\ \hline
3&SSD&350&262\\ \hline
4&SSD&506&381\\ \hline
6&SSD&720&513\\ \hline
2&HDD&430&425\\ \hline
3&HDD&1112&639\\ \hline
4&HDD&1474&962\\ \hline
6&HDD&2209&1398\\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\vspace{-15pt}
\centering
\scriptsize
\caption{\label{tab:word2vec-overall-2} {\small \color{black} Overall latency for serving six word2vec models using different storage configurations (Unit: Seconds)}}
\begin{tabular}{|l|r|r|r|r|} \hline
disk type& buffer pool size &w/o dedup&w/ dedup&w/ dedup \& optimized caching\\\hline \hline
SSD&$15$GB&$720$ &$513$ &$513$ \\ \hline
SSD&$10$GB& $762$&$594$ & $580$\\ \hline
SSD&$8$GB&$786$ &$710$ & $638$\\ \hline
HDD&$15$GB&$2209$ &$1398$ &$1398$ \\ \hline
HDD&$10$GB&$2264$&$1435$&$1435$ \\ \hline
HDD&$8$GB&$8120$&$4921$&$1720$\\ \hline
\end{tabular}
\end{table}
We also compared netsDB's performance to CPU-based TensorFlow on the same AWS r4.xlarge instance and to GPU-based TensorFlow on a g4dn.2xlarge instance. On TensorFlow, we developed two approaches for Word2Vec inference.
The first approach uses matrix multiplication (\texttt{tf.matmul}), similar to the netsDB implementation of Word2Vec inference described in Sec.~\ref{sec:workload-word2vec}. In the experiments comparing this approach with netsDB, we used double precision for both systems.
The second approach is based on embedding lookup using Keras' embedding layer (i.e., \texttt{keras.layers.Embedding}). The implementation takes a list of IDs as input and looks up the embedding for each ID (via index) in parallel.
For the second approach, because Keras' embedding layer enforces single precision, we changed the netsDB implementation to use the single-precision float type. The experiments for this approach used $1$ million IDs in each batch. For netsDB's matrix-multiplication-based implementation, we assume the $1$ million IDs come from $100$ documents, each containing $10,000$ words, so the input features consist of $100$ vectors, each the sum of the one-hot encoding vectors of $10,000$ words, as described in Sec.~\ref{sec:workload-word2vec}.
The input batch is $800$ megabytes in size for the matrix-multiplication-based implementation, but only $8$ megabytes for the embedding-lookup-based implementation.
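The two input sizes follow directly from the representations, assuming $8$ bytes per stored value and decimal megabytes:

```python
# Input-batch sizes for the two Word2Vec inference approaches.
ids_per_batch, bytes_per_value = 10**6, 8
lookup_mb = ids_per_batch * bytes_per_value / 10**6        # ID list
docs, vocab = 100, 10**6
matmul_mb = docs * vocab * bytes_per_value / 10**6         # dense rows
assert (lookup_mb, matmul_mb) == (8.0, 800.0)
```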
In Tab.~\ref{tab:word2vec-comparison}, TF-mem, TF-file, and TF-DB load an input batch from local memory, a local CSV file, and a PostgreSQL table ($400$ BLOB fields for the first approach, $1$ BLOB field for the second), respectively. We observed that netsDB supports the inference of significantly more models in the same system than TensorFlow.
In this case, we did not observe a performance gain from GPU acceleration in TensorFlow, mainly because inference is less compute-intensive than training, and a batch of such inferences cannot fully utilize GPU parallelism.
\begin{table}[h]
\vspace{-10pt}
\centering
\scriptsize
\caption{\label{tab:word2vec-comparison} {\small \color{black} Comparing the serving performance of multiple word2vec models deployed in netsDB to TensorFlow. (Unit: Seconds)}}
\begin{tabular}{|r|r|r|r|r|r|r|r|} \hline
\multicolumn{2}{|c|}{\texttt{}}&\multicolumn{3}{|c|}{\texttt{TensorFlow CPU}} &\multicolumn{3}{|c|}{\texttt{TensorFlow GPU}}\\ \hline
numModels&netsDB&TF-mem&TF-file&TF-DB&TF-mem&TF-file&TF-DB\\\hline \hline
\multicolumn{8}{|c|}{\texttt{Matrix-Multiplication-based inference, \texttt{double precision}}}\\\hline \hline
$3$&$252$&$9$&$64$&$96$&$14$&$69$&$128$\\ \hline
$6$&$503$&Failed&Failed&Failed&Failed&Failed&Failed\\ \hline
$12$&$1008$&Failed&Failed&Failed&Failed&Failed&Failed \\ \hline
\multicolumn{8}{|c|}{\texttt{Embedding-lookup-based inference ($1$ million IDs/batch), \texttt{single precision}}}\\\hline \hline
$3$&$114$&$57$&$58$&$58$&Failed&Failed&Failed\\ \hline
$6$&$229$&Failed&Failed&Failed&Failed&Failed&Failed\\ \hline
$12$&$456$&Failed&Failed&Failed&Failed&Failed&Failed \\ \hline
\end{tabular}
\vspace{-5pt}
\end{table}
\subsubsection{Multiple Versions of Text Classification Models}
Based on the above results, we further evaluated the proposed techniques on the text classification task described in Sec.~\ref{sec:text-classification-desc}.
We imported these text classification models into netsDB. The default page size used in this experiment is $64$ megabytes; with a block shape of $100\times10000$, each text classification model requires $64$ pages of storage before deduplication. We first compared the numbers of private and shared pages after deduplication, as well as the classifier inference accuracy before and after deduplication. The comparison results are illustrated in Tab.~\ref{tab:text-classification-storage-and-accuracy}.
Without deduplication, the total storage space required is $20.5$GB for $320$ pages in total. After applying the proposed deduplication mechanism, the required storage space is reduced to $5.6$GB for $87$ pages, using the block size of $100\times10000$.
\begin{table}[h]
\vspace{-10pt}
\centering
\scriptsize
\caption{\label{tab:text-classification-storage-and-accuracy} {\small \color{black} Private and shared (deduplicated) pages and inference accuracy before and after deduplication.}}
\begin{tabular}{|l|r|r|r|r|} \hline
&private pages&num shared pages&auc before dedup&auc after dedup\\\hline \hline
Model-1&$2$&$62$&$85.01\%$&$85.01\%$ \\ \hline
Model-2&$7$&$57$&$81.25\%$&$81.25\%$ \\ \hline
Model-3&$1$&$63$&$84.69\%$&$81.11\%$\\ \hline
Model-4&$13$&$51$&$90.38\%$&$86.79\%$\\ \hline
Model-5&$1$&$63$&$94.80\%$&$94.09\%$\\ \hline
\end{tabular}
\vspace{-10pt}
\end{table}
\begin{table}[h]
\vspace{-10pt}
\centering
\scriptsize
\caption{\label{tab:page-sharing} {\small \color{black} Page reference count distribution after deduplication}}
\begin{tabular}{|l|r|r|r|r|r|r|} \hline
&Model-1&Model-2&Model-3&Model-4&Model-5&Total\\\hline \hline
pages shared by 5 models&$51$&$51$&$51$&$51$&$51$&$51$ \\ \hline
pages shared by 4 models&$6$&$6$&$6$&$0$&$6$&$6$ \\ \hline
pages shared by 3 models&$5$&$0$&$5$&$0$&$5$&$5$ \\ \hline
pages shared by 2 models&$0$&$0$&$1$&$0$&$1$&$1$ \\ \hline
private pages&$2$&$7$&$1$&$13$&$1$&$24$ \\ \hline
\end{tabular}
\vspace{-10pt}
\end{table}
Each shared page may have a different reference count (i.e., be shared by a different set of tensors), so we illustrate the reference counts of pages for each model in Tab.~\ref{tab:page-sharing}.
The overall inference latency of all five text classification models, using different block sizes and storage configurations, is compared in Tab.~\ref{tab:text-classification-overall}. We observed that speedups of $1.1\times$ to $1.6\times$ were achieved by applying our proposed techniques.
\eat{
\noindent
\textbf{Tuning of Block Size} We also tried to use a smaller block size of $300\times300$. We find that by using a smaller block size, we can identify more similar blocks. However, the overall results discourage the use of a smaller block size, because of two reasons. First, using smaller tensor blocks size will significantly increase the latency even using the same page size, because more objects need to be handled by each relational operator. Second, it will significantly increase the complexity of the page packing algorithm. For this case, packing $300\times300$ blocks to $64$megabytes pages, leads to $108$ pages by using the Greedy-2 algorithm, when the two-stage algorithm failed due to the large search space of packing $88$ blocks to one page.}
\subsubsection{Transfer Learning of Extreme Classification Models}
In this experiment, all three models have the same architecture as described in Sec.~\ref{sec:workload-amazon14k}, use double-precision weights, and are specialized from the same feed-forward model through transfer learning; they share a fully connected layer that contains $597$ million parameters. This layer is stored as a shared set in netsDB and accounts for $4.8$ gigabytes of storage space, while each model's specialized layer accounts for only $0.2$ gigabytes. Therefore, with deduplication of the shared layer, the overall required storage space is reduced from $15$ gigabytes to $5.4$ gigabytes. Note that the memory required for the working sets involved in this model-serving workload is almost twice the required storage space, considering the input batch of $1,000$ $597,540$-dimensional feature vectors and the intermediate data between layers for these models.
Besides a significant reduction in storage space, we also observed up to \textbf{$1.18\times$} and \textbf{$1.45\times$} speedups on SSD and HDD storage, respectively, because of the improvement in cache hit ratio ($40\%-46\%$). Because this is a transfer learning scenario, the shared pages involve no approximation at all, so there is no influence on accuracy.
\begin{table}[h]
\vspace{-5pt}
\centering
\scriptsize
\caption{\label{tab:text-classification-overall} {\small \color{black} Overall latency for serving text classification models using different storage configurations. (Unit: Seconds)}}
\begin{tabular}{|l|r|r|r|r|} \hline
disk type& buffer pool size &w/o dedup&w/ dedup&w/ dedup \& optimized caching\\\hline \hline
SSD&$15$GB&$646$ &$427$ &$426$ \\ \hline
SSD&$10$GB& $655$&$572$ & $540$\\ \hline
SSD&$8$GB&$675$ &$595$ & $557$\\ \hline
HDD&$15$GB&$1,675$ &$1,091$ &$1,085$ \\ \hline
HDD&$10$GB&$1,815$&$1,515$&$1,467$ \\ \hline
HDD&$8$GB&$1,815$&$1,686$&$1,620$\\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\vspace{-10pt}
\centering
\scriptsize
\caption{\label{tab:transfer-learning-overall} {\small \color{black} Overall deduplication results for transfer learning with FFNN. (Unit: Seconds)}}
\begin{tabular}{|l|r|r|r|r|} \hline
disk type& buffer pool size &w/o dedup&w/ dedup&w/ dedup \& optimized caching\\\hline \hline
SSD&$9$GB&$115$ &$109$ &$103$ \\ \hline
SSD&$13$GB&$114$ &$96$ &$96$ \\ \hline
HDD&$9$GB&$221$ &$203$ &$157$ \\ \hline
HDD&$13$GB&$204$ &$141$ &$141$ \\ \hline
\end{tabular}
\end{table}
We also compared netsDB's performance to TensorFlow, using the Keras implementation of the FFNN model. As illustrated in Tab.~\ref{tab:transfer-comparison}, netsDB outperforms TensorFlow when the input is loaded from a CSV file or a BLOB field of a PostgreSQL table.
If we compute and store the input feature vectors in a table of $400$ BLOB fields, the TF-DB latency for CPU and GPU is $1,274$ and $945$ seconds, respectively, significantly slower than netsDB, which serves data and model in the same system.
\begin{table}[h]
\vspace{-5pt}
\centering
\scriptsize
\caption{\label{tab:transfer-comparison} {\small \color{black} Comparing the serving performance of multiple FFNN models deployed in netsDB to TensorFlow. (Unit: Seconds)}}
\begin{tabular}{|r|r|r|r|r|r|r|r|} \hline
\multicolumn{2}{|c|}{\texttt{}}&\multicolumn{3}{|c|}{\texttt{TensorFlow CPU}} &\multicolumn{3}{|c|}{\texttt{TensorFlow GPU}}\\ \hline
numModels&netsDB&TF-mem&TF-file&TF-DB&TF-mem&TF-file&TF-DB\\\hline \hline
$2$&$64$&$43$&$383$&$94$&$17$&$310$&$55$\\ \hline
$3$&$96$&$64$&$Failed$&$115$&$Failed$&$Failed$&$Failed$\\ \hline
\end{tabular}
\end{table}
\begin{figure}[H]
\vspace{-15pt}
\centering{%
\includegraphics[width=3.4in]{Figures/text-classification-large-block_v2.pdf}
}
\caption{\label{fig:deduplication-detection-text-classification} \small
Comparison of deduplicating a text classification model using different indexing approaches (block size: $100\times10000$)
}
\end{figure}
\subsection{Evaluation of Duplicate Block Detection}
\label{sec:dedup-index}
We compared our indexing strategy, illustrated in Alg.~\ref{alg:index-building}, to two baselines: (1) a naive indexing scheme that uses pair-wise comparison to identify similar blocks based on Euclidean distance; and (2) Mistique's approximate deduplication using MinHash~\cite{vartak2018mistique}. As illustrated in Fig.~\ref{fig:deduplication-detection-text-classification}, our proposed deduplication detection approaches (w/ and w/o finetuning) achieve significantly better accuracy for deduplicating the same number of blocks. That is because both baselines fail to consider a block's magnitude as well as its impact on accuracy.
Moreover, we also compared the compression ratio, the average latency for querying one tensor block from the index, and the accuracy of our proposed approach against (1) Mistique's exact deduplication approach, in which two tensor blocks are deduplicated only if they have the same hash code; (2) Mistique's approximate deduplication; and (3) an enhanced pairwise comparison approach with magnitude ordering applied. Both (2) and (3) use periodic accuracy checks: we evaluate the accuracy of a model once for every five blocks indexed from the model, and we stop deduplicating a model once its accuracy drop exceeds $3.5\%$. We do not roll back to ensure the accuracy drop stays within $3.5\%$ in these experiments, though such rollbacks can easily be implemented. As illustrated in Tab.~\ref{tab:exact-comparison} and Tab.~\ref{tab:lsh-comparison}, the proposed approach based on L2 LSH still achieved the best compression ratio.
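The periodic accuracy check used by baselines (2) and (3) can be sketched as the following control loop; all function parameters are hypothetical stand-ins for the actual index-probing and evaluation routines:

```python
def deduplicate_with_accuracy_check(blocks, find_similar, dedup_block,
                                    evaluate, base_auc,
                                    check_every=5, max_drop=0.035):
    """Deduplicate a model's blocks, evaluating the model once every
    `check_every` deduplicated blocks and stopping (without rollback)
    once the accuracy drop exceeds `max_drop`."""
    deduped = 0
    for block in blocks:
        match = find_similar(block)      # e.g. probe a similarity index
        if match is None:
            continue
        dedup_block(block, match)        # point `block` at `match`
        deduped += 1
        if deduped % check_every == 0 and base_auc - evaluate() > max_drop:
            break                        # stop deduplicating this model
    return deduped

# Simulated run: every block has a match, and each deduplicated block
# costs 0.4% accuracy, so the 2% drop at block 5 is tolerated while the
# 4% drop at block 10 stops the loop.
done = []
n = deduplicate_with_accuracy_check(
    list(range(20)),
    find_similar=lambda b: b,
    dedup_block=lambda b, m: done.append(b),
    evaluate=lambda: 1.0 - 0.004 * len(done),
    base_auc=1.0)
assert n == 10
```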
Mistique's approximate approach is significantly slower in querying the index because each new block must be discretized, and the MinHash generation requires multiple rounds of permutations.
Due to this overhead, the latency of building an index using Mistique's approximate approach is significantly higher than that of our proposed approach.
\begin{table}[h]
\centering
\scriptsize
\caption{\label{tab:exact-comparison} {\small \color{black} Comparison of compression ratio and index query time.}}
\begin{tabular}{|l|c|c|c|} \hline
\multicolumn{1}{|l|}{} & Blocks w/o dedup & Blocks w/ dedup & \begin{tabular}[c]{@{}c@{}}Query Time \\ (Per Block, second)\end{tabular} \\\hline \hline
Mistique Exact Dedup & $2545$ & $2040$ & $0.02$ \\ \hline
Mistique Approximate Dedup & $2545$ & $712$ & $10+$ \\ \hline
Enhanced Pairwise & $2545$ & $693$ & $2.9$ \\ \hline
Proposed (w/o finetune) & $2545$ & $662$ & $0.2$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\vspace{-15pt}
\centering
\scriptsize
\caption{\label{tab:lsh-comparison} {\small \color{black} Comparison of model accuracy drop.}}
\begin{tabular}{|l|c|c|c|c|c|} \hline
&Model-1&Model-2&Model-3&Model-4&Model-5\\\hline \hline
Mistique Exact Dedup & $0.00\%$ & $0.00\%$ & $0.00\%$ & $0.00\%$ & $0.00\%$ \\ \hline
Mistique Approximate Dedup & $0.00\%$ & $0.00\%$ & $3.64\%$ & $4.06\%$ & $0.71\%$ \\ \hline
Enhanced Pairwise & $0.00\%$ & $0.00\%$ & $3.57\%$ & $3.58\%$ & $2.92\%$ \\ \hline
Proposed (w/o finetune) & $0.00\%$ & $0.00\%$ & $3.58\%$ & $3.59\%$ & $0.71\%$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}[H]
\vspace{-15pt}
\centering
\includegraphics[width=3.4in]{Figures/Figure_6_ver2.pdf}
\caption{\label{fig:sharing} \small
Block sharing in Text Classification
}
\vspace{-5pt}
\end{figure}
We also visualized the distribution of duplicate blocks across the models for the text classification workload, as illustrated in Fig.~\ref{fig:sharing}. The results showed that blocks shared across models tend to be located at the same position in their tensors. This observation leads to the metadata optimization described in Sec.~\ref{sec:overview}: metadata such as the index (i.e., position) of a shared tensor block in each tensor can be simplified.
\subsection{Evaluation of Page Packing Algorithms}
\label{sec:dedup-paging}
We evaluated our proposed page packing algorithms using four evaluation scenarios: (1) The two-stage algorithm, which applies Alg.~\ref{alg:greedy} in stage 1 and then applies Alg.~\ref{alg:greedy1} to items in non-full bins in stage 2;
(2) The Greedy-1 algorithm, which is based on equivalent classes (Alg.~\ref{alg:greedy});
(3) The Greedy-2 algorithm, which applies Alg.~\ref{alg:greedy1} to the overall page packing;
(4) A baseline algorithm, which simply packs tensor blocks to pages in order and then eliminates duplicate pages that contain the same set of tensor blocks.
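For concreteness, baseline (4) can be sketched as follows. This is a simplified Python sketch rather than the system's implementation: page capacity is measured in blocks instead of bytes, and the block identifiers are illustrative.

```python
def baseline_pack(tensors, blocks_per_page):
    """Pack each tensor's blocks into pages in order, then eliminate
    pages that contain the same set of blocks as an earlier page."""
    pages = []
    for blocks in tensors:  # each tensor is a list of distinct-block ids
        for i in range(0, len(blocks), blocks_per_page):
            pages.append(frozenset(blocks[i:i + blocks_per_page]))
    seen, kept = set(), []
    for page in pages:
        if page not in seen:  # duplicate pages are stored only once
            seen.add(page)
            kept.append(page)
    return kept

# two tensors that share their first two blocks end up sharing one page
t1, t2 = [0, 1, 2, 3], [0, 1, 4, 5]
print(baseline_pack([t1, t2], blocks_per_page=2))
# → [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
```

Because the baseline only detects pages whose block sets coincide by accident of ordering, it misses most sharing opportunities that the equivalent-class algorithms exploit.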
We observed a significant improvement in storage efficiency brought by our proposed two-stage algorithm compared to the alternatives, as illustrated in Tab.~\ref{tab:page-packing-storage}. In addition, the computational efficiency of the two-stage algorithm is comparable to Greedy-1, as illustrated in Tab.~\ref{tab:page-packing}.
As mentioned, the extreme classification workload involves models that share the same fully connected layer, which means all tensor blocks in that layer are fully shared by all models. In such a special case, all algorithms achieve similar storage efficiency.
\begin{table}[h]
\centering
\scriptsize
\caption{\label{tab:page-packing-storage} {\small \color{black} Comparison of required number of pages using different page packing algorithms.}}
\begin{tabular}{|l|r|r|r|r|} \hline
Scenario (block size, page size)& Baseline & Two-Stage&Greedy-1&Greedy-2\\\hline \hline
word2vec ($100\times10000$, $64$MB)&$130$&$\textbf{98}$&$99$&$\textbf{98}$\\ \hline
text classification ($100\times10000$, $64$MB)&$101$&$\textbf{87}$&$91$&$\textbf{87}$\\ \hline
text classification ($300\times300$, $64$MB)&$156$&$\textbf{104}$&$108$&$109$\\ \hline
text classification ($300\times300$, $32$MB)&$270$&$\textbf{195}$&$198$&$202$\\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\vspace{-15pt}
\centering
\scriptsize
\caption{\label{tab:page-packing} {\small \color{black} Comparison of page packing latency using different page packing algorithms. (Unit: seconds)}}
\begin{tabular}{|l|r|r|r|r|} \hline
Scenario (block size, page size)&Baseline&Two-Stage&Greedy-1&Greedy-2\\\hline \hline
word2vec($100\times10000$, $64$MB)&1.29&0.02&\textbf{0.01}&0.82\\ \hline
text classification ($100\times10000$, $64$MB)&0.68&\textbf{0.01}&\textbf{0.01}&0.52\\ \hline
text classification ($300\times300$, $64$MB)&13.65&\textbf{0.05}&\textbf{0.05}&11.50\\ \hline
text classification ($300\times300$, $32$MB)&44.72&\textbf{0.04}&\textbf{0.04}&42.72\\ \hline
\end{tabular}
\end{table}
The above results are based on offline page packing. We also tested the online page packing approach. We found that in the text classification workload, when using a $100\times10000$ block size and $64$MB pages, each time a new model is added, about $20\%$ of the pages need to be reorganized, while $80\%$ of the pages can be reused and thus do not need to change, as illustrated in Tab.~\ref{tab:online-page-packing}.
\begin{table}[h]
\centering
\scriptsize
\caption{\label{tab:online-page-packing} {\small \color{black} Page reuse and reorganization for online page packing.}}
\begin{tabular}{|l|c|c|c|c|} \hline
Step & New Model to Pack & Pages Reused & Pages Discarded & Pages Created\\\hline \hline
1&Model-1&0&0&64\\ \hline
2&Model-2&52&11&15\\ \hline
3&Model-3&52&9&15\\ \hline
4&Model-4&50&13&23\\ \hline
5&Model-5&52&13&16\\ \hline
\end{tabular}
\end{table}
\vspace{-5pt}
\subsection{Evaluation of Caching Optimization}
\label{sec:dedup-caching}
We also compared the proposed caching optimization to a number of baselines, including LRU, MRU, and the locality-set-based page replacement policy that does not consider page sharing. The detailed cache hit ratio comparison for the Word2Vec and text classification applications is illustrated in Fig.~\ref{fig:cache-miss-ratio}. Locality Set-M/L refers to the locality set page replacement policy~\cite{zou2020architecture, zou2019pangea} that treats shared pages as one locality set and applies MRU/LRU to this locality set of shared pages. Optimized-M/L refers to Locality Set-M/L with the proposed caching optimization applied (i.e., shared pages are given a higher priority to be kept in memory).
We observed that, after deduplication, the cache hit ratio improved significantly because of the reduction in memory footprint. In addition, with the proposed deduplication approach applied, Optimized-M/L achieved a significantly better cache hit ratio than alternative page replacement policies.
\begin{figure}[H]
\centering
\includegraphics[width=3.2in]{Figures/hit-ratio.pdf}
\caption{\label{fig:cache-miss-ratio} \small
Comparison of different page replacement policies
}
\vspace{-5pt}
\end{figure}
\subsection{Relationship to Model Compression }
\label{sec:dedup-compression}
Besides deduplication, there exist a number of model compression techniques, such as pruning~\cite{han2015deep, han2015learning} and quantization~\cite{jacob2018quantization}, which can only be applied to each model separately. In this work, we found that, as a cross-model compression technique, model deduplication can be applied after pruning or quantizing individual models, achieving $2\times$ to $3\times$ better storage efficiency. The reason is that pruning and quantization do not significantly change the similarity of tensor blocks across models.
We also observed similar results for an ensemble of VGG-16 models. We omit the details here, because the use cases of convolutional neural networks in RDBMS are unclear and the volume of model parameters is relatively small (up to hundreds of megabytes).
\begin{table}[h]
\vspace{-5pt}
\centering
\scriptsize
\caption{\label{tab:comparison} \small Comparison of compression techniques (Compression ratio is defined as the ratio of the size after compression to the size before compression. Accuracy drop is measured as the maximum accuracy drop of the models after compression.)}
\begin{tabular}{|r|r|r|r|r|r|} \hline
& pruning & quantization & dedup & dedup+pruning & dedup+quant\\\hline \hline
AUC drop&$3.2$\%&$1.33$\%&$3.98$\%&$3.6$\%&$3.78$\%\\ \hline
compression ratio&$23.4$\%&$12.5$\%&$27.32$\%&$6.74$\%&$5.24$\%\\ \hline
\end{tabular}
\end{table}
\section{Index for Duplication Detection}
\label{sec:index}
\subsection{Problem Description}
\eat{While the tensor block shape will also affect the block-based deduplication, we found smaller blocks will not necessarily lead to more duplicate blocks. Fig.~\ref{}.illustrates how the number of similar blocks will change with the tensor block shapes in various model serving scenarios.}
In this section, we focus on one problem: \textit{For tensors with the same blocking shape, how do we divide all of their tensor blocks into distinct groups, so that the tensor blocks in each group can replace each other without a significant drop in the inference accuracy of each model?} We can then pick one block, i.e., the first identified block, in each group as a representative tensor block to replace the other blocks in its group without a significant accuracy drop.
The problem is formalized as follows:
Given $k$ tensors $T=\{t_1, ..., t_k\}$, where the $i$-th tensor $t_i$ is split into $n_i$ tensor blocks $t_i = \{b_1, ..., b_{n_i}\}$, the question is how to divide all tensor blocks, $B=\cup_i{t_i}$, into $m$ clusters $C=\{c_1, ..., c_m\}$, such that (1) $\forall c \in C, c \subseteq B$, and $\bigcup_{c \in C}{c} = B$; (2) $\forall c_i, c_j \in C$ with $i \neq j$, $c_i \cap c_j = \emptyset$; (3) $\forall c \in C$, $\forall b_i, b_j \in c, b_i \approx b_j$.
Here, $b_i \approx b_j$ means that $b_i$ can be replaced by $b_j$ so that the drop in model accuracy is smaller than a threshold $t$.
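The three conditions can be checked mechanically. The following is a minimal sketch in which blocks are plain Python values and `similar` is a stand-in for the accuracy-preserving relation $b_i \approx b_j$ (defined above via the accuracy-drop threshold $t$):

```python
from itertools import combinations

def is_valid_clustering(B, C, similar):
    """Check that C partitions B into groups of mutually replaceable blocks."""
    # (1) every cluster only contains blocks from B, and C jointly covers B
    if any(not c <= B for c in C):
        return False
    if set().union(*C) != B:
        return False
    # (2) clusters are pairwise disjoint
    if any(ci & cj for ci, cj in combinations(C, 2)):
        return False
    # (3) blocks within a cluster are mutually replaceable
    return all(similar(bi, bj) for c in C for bi, bj in combinations(c, 2))

# toy example: blocks are ints, "similar" means same parity
B = {1, 2, 3, 4}
C = [{1, 3}, {2, 4}]
print(is_valid_clustering(B, C, lambda a, b: a % 2 == b % 2))  # → True
```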
\vspace{-5pt}
\subsection{Main Ideas}
\subsubsection{Magnitude-aware Duplicate Detection}
Existing work on deduplication~\cite{li2016cachededup, elmagarmid2006duplicate, bilenko2006adaptive, ananthakrishna2002eliminating, hernandez1995merge, borthwick2020scalable, chu2016distributed, kolb2012load, kolb2012dedoop} and tensor chunk deduplication~\cite{vartak2018mistique} includes exact page deduplication and similar/approximate page deduplication, as detailed in Sec.~\ref{sec:related-works1} and Sec.~\ref{sec:related-works2}. However, we found that these techniques cannot be directly applied to tensor block deduplication for model serving applications:
(1) Exact deduplication of tensor chunks does not consider the fuzziness or similarity of model weights. In fact, the number of tensor blocks that can be deduplicated based on exact matching is significantly lower than with similarity-based matching.
(2) We also found it \textit{ineffective} to perform deduplication based solely on similarity, without considering the impact of the weights on prediction accuracy. For example, deduplicating similar blocks in a batch normalization layer of a ResNet50 model (two blocks whose weights differed by less than $0.1\%$ were considered similar), without considering the importance of the weights, reduced accuracy from $81\%$ to $8\%$.
Therefore, it is \underline{critical} to develop new methods to identify tensor blocks that can be deduplicated with limited impacts on accuracy.
Motivated by the iterative pruning process~\cite{han2015deep, han2015learning}, in which weights with small magnitude are pruned first, we developed a process of magnitude-aware duplicate detection, where blocks of smaller magnitude are deduplicated first, and the model accuracy is periodically validated after deduplicating every $k$ blocks.
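A minimal sketch of this process is shown below. The callbacks `try_dedup` (attempt to replace one block via the index) and `accuracy` (validate the model) are hypothetical placeholders for the system's components; the percentile-based magnitude score follows the choice described later in this section.

```python
import numpy as np

def dedup_order(blocks, percentile=3):
    """Order block indices so that blocks whose weights have small
    magnitude (scored by a percentile of |weights|) come first."""
    scores = [np.percentile(np.abs(b), percentile) for b in blocks]
    return sorted(range(len(blocks)), key=lambda i: scores[i])

def magnitude_aware_dedup(blocks, try_dedup, accuracy, k=5, threshold=0.035):
    """Deduplicate blocks in magnitude order, re-validating the model
    after every k blocks and stopping once the drop exceeds the threshold."""
    base = accuracy()
    order = dedup_order(blocks)
    for start in range(0, len(order), k):
        for i in order[start:start + k]:
            try_dedup(i)                   # attempt to replace block i
        if base - accuracy() > threshold:  # periodic accuracy check
            break

blocks = [np.full((2, 2), 0.5), np.full((2, 2), 0.01)]
print(dedup_order(blocks))  # → [1, 0]
```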
\vspace{-5pt}
\subsubsection{LSH-based Approximate Tensor Block Deduplication}
To reduce the pairwise similarity comparison overhead, we leverage Locality Sensitive Hashing (LSH), a popular technique for solving nearest neighbor problems. LSH families based on Hamming distance~\cite{datar2004locality}, Euclidean distance~\cite{indyk1998approximate}, and cosine similarity~\cite{charikar2002similarity} are designed to identify similar numerical vectors with fixed dimensions and can be directly applied to detect similar tensor blocks. In addition, MinHash, based on Jaccard similarity~\cite{broder1997resemblance}, is designed to identify similar binary vectors or similar sets of items. In this work, we mainly use the LSH based on Euclidean distance~\cite{indyk1998approximate, chen2019locality}, which we call L2 LSH, because it is easy to compute (e.g., it does not require an expensive numeric value discretization process like MinHash) and it can be linked to the JS-divergence~\cite{lin1991divergence} of the weight probability distributions of two tensor blocks~\cite{chen2019locality}.
For each block, its LSH signature is computed and used as the search key, and the identifier of the block (\texttt{TensorID}, \texttt{BlockID}) is used as the value. The key-value pair is sent to an index to look up a group of similar blocks that collide on the signature. For each group, the \textit{first indexed block} is used as the representative block of this group, and other blocks are replaced by this representative block if accuracy drop is tolerable.
If another block in the group has the same \texttt{BlockID} as the representative block, the \texttt{BlockID} field, which encodes the block's position along all dimensions of the tensor, can be omitted to save space.
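A sketch of how an L2 LSH signature can be computed for a flattened tensor block, following the standard p-stable construction (random Gaussian projections, random offsets, bucket width $w$). The parameter values here are illustrative, not the ones used in our system:

```python
import numpy as np

class L2LSH:
    """Signature = tuple of floor((a.v + b) / w) hashes; blocks whose
    flattened weights are close in L2 distance tend to collide."""
    def __init__(self, dim, n_hashes=4, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((n_hashes, dim))  # p-stable projections
        self.b = rng.uniform(0, w, n_hashes)           # random offsets
        self.w = w

    def signature(self, block):
        v = np.asarray(block, dtype=float).ravel()
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

lsh = L2LSH(dim=4)
print(lsh.signature([0.10, 0.20, 0.30, 0.40]))
```

With a sufficiently wide bucket $w$, near-duplicate blocks usually receive identical signatures and therefore land in the same group in the index.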
\subsection{Index Building}
Given a set of models, we execute the following steps \textit{for each model}:
\noindent
Step 1. Calculate an aggregated magnitude value (e.g., average, median, 1st percentile, 3rd percentile, etc.) for each tensor block in the tensors of the model. We use the 3rd percentile because, even if a block contains only a few large-magnitude weights, it may significantly impact the inference accuracy and should not be deduplicated; the 3rd percentile better reflects the magnitude of large weights in a block than the aforementioned alternatives.
\noindent
Step 2. Order all tensor blocks in the model by their magnitude values in ascending order.
\noindent
Step 3. Select the $k$ blocks that have the lowest magnitude values; for each block, its LSH signature is computed and used to query the index. If the index has seen similar blocks before, the block's identifier is added to the corresponding group and the block is replaced by the representative block, i.e., the first indexed block in the group. If the index has not seen a similar block before, a new group is created, and this block becomes the representative block of the group.
\noindent
Step 4. We test the model on a validation dataset to check whether its accuracy drop is smaller than a threshold $t$. If so, the algorithm repeats Steps 3 and 4.
Otherwise, it \textit{stops deduplication} for this model: it simply adds each remaining block to the corresponding group, but such a block will NOT be replaced by the representative block of the group. These remaining blocks, as well as the representative blocks, are called distinct blocks\footnote{It is possible that a remaining block is also the representative block of its own group.}, each of which has only one physical copy.
We repeat the above process for each model to incrementally construct the index, as illustrated in Alg.~\ref{alg:index-building}. The inputs of the algorithm include: (1) $T=\{t_1, ..., t_k\}$, a set of tensors belonging to the model; (2) $idx$, which maps an LSH signature to a representative block $d_c$ and a cluster $c$ consisting of the identifiers of blocks whose signatures collide and which are thus similar to the representative block; (3) $L$, a list of distinct tensor blocks derived from previous models. Both $idx$ and $L$ are \textit{shared by all models} and are updated during the execution of the algorithm.
The output of the algorithm is $F_{T}=\{f_1, ..., f_k\}$. Each $f_i$ is a mapping for the $i$-th tensor in the model, which specifies the identifier of the distinct tensor block corresponding to each (logical) block in the tensor.
The deduplication is achieved by allowing multiple tensor blocks across models mapped to one distinct block. The output information is needed to pack distinct tensor blocks to pages as detailed in Sec.~\ref{sec:paging}.
\begin{algorithm}\small
\caption{\bf Index Building}
\label{alg:index-building}
\begin{algorithmic}[1]
\STATE INPUT1: $T=\{t_1, ..., t_k\}$ (A set of parameter tensors in a model)
\STATE INPUT2: $idx$ (The index that has been constructed for previous models, and will be updated by this model.)
\STATE INPUT3: $L=\{d_1, ..., d_m\}$ (A set of distinct blocks derived from previous models, which will be updated by this model.)
\STATE OUTPUT: $F_{T}=\{f_1, ..., f_k\}$ ($f_i$ maps each tensor block in $t_i$ to a distinct block)
\STATE $B=\{b_1, ..., b_n\} \leftarrow \cup_{i=1}^{k}{t_i}$
\STATE $a_0 \leftarrow accuracy(Model_B)$
\FOR {$i=1,...n$}
\STATE $(b_i, v_i) \leftarrow (b_i, getMagnitude(b_i))$
\ENDFOR
\STATE $B'=\{b'_1, ..., b'_n\} \leftarrow$ sort $B$ by $v_i$ in ascending order
\STATE $i \leftarrow 0$
\WHILE{$i < n$}
\FOR{$j=i+1, ..., \min(i+k, n)$}
\STATE $s_j \leftarrow lsh(b'_j)$
\IF{$idx$.count($s_j$) > 0}
\STATE $(b_c, c) \leftarrow$ $idx$.look\_up($s_j$)
\STATE $c \leftarrow$ $\{(tensorID(b'_j), blockID(b'_j))\} \cup c $
\STATE $idx$.update($s_j$, ($b_c, c$))
\STATE $f_{tensorID(b'_j)}[blockID(b'_j)]\leftarrow IndexInL(b_c)$ //record the mapping before replacing $b'_j$
\STATE $b'_j \leftarrow b_c$ //use representative block $b_c$ to replace $b'_j$
\ELSE
\STATE $idx$.insert($<s_j, (b'_j, \{(tensorID(b'_j), blockID(b'_j))\})>$)
\STATE $L$.push\_back($b'_j$)
\STATE $f_{tensorID(b'_j)}[blockID(b'_j)]\leftarrow IndexInL(b'_j)$
\ENDIF
\ENDFOR
\STATE $a \leftarrow accuracy(Model_B)$
\IF {$a_0 - a > t$}
\FOR{$u = i+k+1, ..., n$}
\STATE $idx$.insert($<lsh(b'_u), (b'_u, \{(tensorID(b'_u), blockID(b'_u))\})>$)
\STATE $L$.push\_back($b'_u$)
\STATE $f_{tensorID(b'_u)}[blockID(b'_u)]\leftarrow IndexInL(b'_u)$
\ENDFOR
\RETURN $F_{T}$
\ENDIF
\STATE $i \leftarrow i+k$
\ENDWHILE
\RETURN $F_{T}$
\end{algorithmic}
\end{algorithm}
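For illustration, the core of Alg.~\ref{alg:index-building} (grouping blocks by signature collision and mapping each logical block to a distinct block) can be condensed into the following Python sketch. It omits the magnitude ordering and the periodic accuracy check, and the toy signature function is a stand-in for L2 LSH:

```python
def build_index(blocks, lsh_signature, idx=None, L=None):
    """Map each block of one model to a distinct-block position in L,
    grouping blocks whose LSH signatures collide under one representative.

    idx: signature -> (index of representative block in L, cluster of ids)
    L:   list of distinct blocks, shared across models
    """
    idx = {} if idx is None else idx
    L = [] if L is None else L
    mapping = []                      # F_T: logical block -> index in L
    for block_id, block in enumerate(blocks):
        s = lsh_signature(block)
        if s in idx:                  # a similar block was seen before
            rep_pos, cluster = idx[s]
            cluster.append(block_id)
            mapping.append(rep_pos)   # reuse the representative block
        else:                         # first block of a new group
            L.append(block)
            idx[s] = (len(L) - 1, [block_id])
            mapping.append(len(L) - 1)
    return mapping

# toy signature: bucket blocks by their rounded mean
sig = lambda b: round(sum(b) / len(b), 1)
print(build_index([[0.11, 0.09], [0.10, 0.10], [0.90, 0.90]], sig))
# → [0, 0, 1]  (the first two blocks share one distinct block)
```

Passing the same `idx` and `L` across successive calls mirrors how the index is constructed incrementally over all models.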
\noindent
\textbf{Further Optimizations.} To further improve accuracy, after deduplicating the models based on the constructed index, an additional parameter fine-tuning stage can be carried out. In our implementation, for simplicity, during the fine-tuning process, the tensor blocks that are shared by multiple models are frozen, and only the weights in the private pages are tuned for each model.
\noindent
\textbf{Removal and Updates.} If a tensor block in a model needs to be removed, the LSH signature of the block is computed to query the index.
If there is a match and the block's identifier exists in the corresponding group, the identifier is removed from the group. Adding or removing blocks from a group does not affect the representative block of the group. If the representative block is the only block in the group and it is to be removed, the group is removed. The update of a tensor block can be regarded as a removal followed by an insertion.
\section{Introduction}
\label{sec:intro}
In the life cycle of deep learning, serving models for inferences is a vital stage and usually incurs significant operational costs.
An Amazon user study found that model serving is responsible for $45$-$65$\% of the total cost of ownership of data science solutions~\cite{amazon-tco}.
One important reason is that most of today's platforms that serve deep neural network (DNN) models, such as Nexus~\cite{shen2019nexus}, Clipper~\cite{crankshaw2017clipper}, Pretzel~\cite{lee2018pretzel}, TensorFlow Serving~\cite{olston2017tensorflow}, and Rafiki~\cite{wang2018rafiki}, are standalone systems that are totally decoupled from the data management systems. From the perspective of end-to-end applications, this decoupling incurs significant costs as follows:
\noindent
(1) Existing deep learning serving frameworks are compute-focused and require that models, input features, and intermediate feature maps all fit in memory; failing to meet this requirement leads to system failure. For large models with large working sets, which are common in applications such as natural language processing and extreme multi-label classification~\cite{extreme-classification}, this problem significantly impacts the availability of a model serving system.
\noindent
(2) The physical decoupling of data serving and model serving introduces management complexity and extra latency for transferring input features from the databases where they are extracted to the deep learning frameworks.
Therefore, it is imperative to investigate serving deep learning models natively from the relational database management system (RDBMS)~\cite{yuan2020tensor, jankov2019declarative, nakandala2020tensor, karanasos2019extending, hutchison2017laradb, DBLP:journals/pvldb/KoutsoukosNKSAI21, wang2020spores, dolmatova2020relational, boehm2016systemml}. RDBMS has a long history of optimizing memory locality for computations (i.e., queries), whether or not the working set size exceeds memory capacity, through effective buffer pool management. It also eases data management through data independence, views, and fine-grained authorization. All of these capabilities, if leveraged for model serving, will significantly reduce operational costs and simplify system management for a broad class of real-world workloads~\cite{olteanu2020relational}, such as credit-card fraud detection, personalized targeting recommendation, and personalized conversational AI for customer support. In such applications, the features are extracted from historical transaction records or customer profiles, which are stored in RDBMS.
\vspace{5pt}
As aforementioned, unlike deep learning frameworks, workloads in RDBMS are not expected to have a working set fit in the available memory. The RDBMS buffer pool manager moves pages between disk and memory to optimize the data locality while continuing query processing. This allows more models to be served concurrently than deep learning frameworks such as TensorFlow with the same memory capacity. Nonetheless, there is always a desire to \textit{increase buffer reuse and minimize page displacement}. To achieve this in model serving, we look into \textbf{model deduplication}.
Serving multiple similar models, such as ensembles and personalized models, can greatly improve accuracy and customer experience, and has thus become a common pattern of DNN model serving~\cite{anyscale, crankshaw2015scalable, crankshaw2017clipper}. Such DNN models contain abundant \textit{similar} tensor blocks that can be deduplicated without affecting accuracy. As a result, proper deduplication of such DNN models significantly reduces storage space, memory footprint, and cache misses, and thus reduces inference costs and latency.
However, existing deduplication techniques for tensors~\cite{vartak2018mistique}, files~\cite{meyer2012study, zhu2008avoiding, bhagwat2009extreme, li2016cachededup, debnath2010chunkstash, wang2020austere}, relational data~\cite{elmagarmid2006duplicate, bilenko2006adaptive, ananthakrishna2002eliminating, hernandez1995merge, borthwick2020scalable, yu2016generic, xiao2008ed}, and MapReduce platforms~\cite{kolb2012load, kolb2012dedoop, chu2016distributed} are not applicable to this problem because: (1) they do not consider the impact on model inference accuracy; and (2) they do not consider how existing database storage functionalities, including indexing, page packing, and caching, should be enhanced to better support the inference and deduplication of DNN models.
The \underline{challenges} that we focus on in this work include:
\vspace{3pt}
\noindent
1. How to leverage indexing to efficiently detect similar parameters that can be deduplicated without hurting the inference accuracy?
\noindent
2. A database page can contain multiple tensor blocks. How to pack tensor blocks into pages to maximize page sharing across multiple models and minimize the total number of needed pages for representing all tensors?
\noindent
3. How to augment the caching policy to increase the data locality for deduplicated model parameters, so that pages that are needed by multiple models have a higher priority to be kept in memory?
\vspace{6pt}
\noindent
To address these challenges, in this work, we propose a novel RDBMS storage design optimized for tensors and DNN inference workloads. We mainly leverage our previous works on Tensor Relational Algebra~\cite{yuan2020tensor, jankov2019declarative} to map deep learning computations to relational algebra expressions.
A tensor is partitioned and stored as a set of tensor blocks of equivalent shape, where each block contains the metadata that specifies its position in the tensor. A tensor is similar to a relation and a tensor block is similar to a tuple. A DNN model inference is represented as a relational algebra graph, as detailed in \textbf{Sec.~\ref{sec:background}}. This high-level abstraction is also consistent with many popular systems that integrate database and machine learning, such as SystemML~\cite{boehm2016systemml}, Spark MLlib~\cite{meng2016mllib}, SciDB~\cite{stonebraker2011architecture}, SPORES~\cite{wang2020spores}, LaraDB~\cite{hutchison2017laradb}, among others.
Similar to the classical physical representation of a relation, we store a tensor as a set of database pages, with each page containing multiple tensor blocks. The difference is that each tensor relation consists of a set of private pages, and an array of references to shared pages that belong to more than one tensor, as detailed in \textbf{Sec.~\ref{sec:overview}}.
On top of such physical representation, we propose novel and synergistic indexing, paging, and caching techniques as follows:
\vspace{3pt}
\noindent
\textbf{Tensor block index for fast duplication detection (Sec.~\ref{sec:index}).} It is widely observed that a small portion of model parameters (e.g., weights, bias) are critical to prediction accuracy. Deduplicating these parameters will lead to a significant reduction in accuracy~\cite{lee2020fast}. To address the problem, different from existing tensor deduplication works~\cite{vartak2018mistique}, we propose to first measure each tensor block's sensitivity to prediction accuracy based on weight magnitude or other post-hoc analysis~\cite{han2015learning}, and thus avoid deduplicating accuracy-critical blocks.
Because pairwise similarity comparison across tensor blocks incurs prohibitive overhead, we use the Locality Sensitive Hash (LSH) based on Euclidean (L2) distance~\cite{indyk1998approximate, zhou2020s}
to facilitate nearest neighbor clustering.
\vspace{3pt}
\noindent
\textbf{Packing distinct tensor blocks to pages for minimizing storage size (Sec.~\ref{sec:paging}).} The problem is a variant of the bin-packing problem with different constraints: (1) Two bins (i.e., pages) can share space if they contain the same set of items (i.e., tensor blocks)~\cite{korf2002new, sindelar2011sharing}; (2) For each tensor, there must exist a set of pages that exactly contain all blocks of that tensor. To address this problem, we propose a concept called \texttt{equivalent class} so that blocks that are owned by the same set of tensors will be assigned to the same class. Then, we propose a two-stage algorithm that first employs a divide-and-conquer approach to pack tensor blocks in each equivalent class to pages respectively, and later it adopts an approximation algorithm to repack the tensor blocks from non-full pages.
\vspace{3pt}
\noindent
\textbf{Deduplication-aware buffer pool management (Sec.~\ref{sec:caching}).} Existing deduplication-aware cache replacement strategies~\cite{li2016cachededup, wang2020austere} do not consider the locality patterns of different sets of pages, which are important for model inference workloads where the input and the output of each layer have different locality patterns. However, existing locality-aware buffer pool management policies~\cite{chou1986evaluation} do not distinguish private pages from shared pages. To address this problem, we propose a cost model for page eviction that considers the reference count of a page (i.e., the number of locality sets/tensors that share the page) and gives pages shared by more tensors a higher priority to be kept in memory.
\vspace{6pt}
\noindent
The key contributions of our work are as follows:
\noindent
1. We are the first to systematically explore the storage optimization for DNN models
in RDBMS, with an overall goal of supporting deep learning model serving (i.e., inferences) natively from RDBMS.
\noindent
2. We propose three synergistic storage optimizations: (a) A novel index based on L2 LSH and magnitude ordering to accelerate the discovery of duplicate tensor blocks with limited impacts on the accuracy; (b) A two-stage strategy to group tensor blocks to pages to minimize the number of pages that are needed to store the tensor blocks across all tensors; (c) A novel caching algorithm that recognizes and rewards shared pages across locality sets. It is noteworthy that our optimization can work together with other compression techniques such as pruning~\cite{han2015deep, han2015learning} and quantization~\cite{jacob2018quantization} to achieve a better compression ratio, as detailed in Sec.~\ref{sec:dedup-compression}.
\noindent
3. We implement the system in an object-oriented relational database based on our previous work of PlinyCompute~\cite{zou2018plinycompute, zou2019pangea, zou2020architecture, zou2020lachesis}, called netsDB~\footnote{https://github.com/asu-cactus/netsdb. Related documentation can be found in https://github.com/asu-cactus/netsdb/tree/master/model-inference/.}. We evaluate the proposed techniques using the serving of (1) multiple customized Word2Vec embedding models; (2) multiple versions of text classification models; (3) multiple specialized models for extreme classification.
The results show that our proposed deduplication techniques achieved a $2.7\times$ to $3.6\times$ reduction in storage size, sped up inference by $1.1\times$ to $4.7\times$, and improved the cache hit ratio by up to $1.6\times$. The results also show that netsDB outperformed TensorFlow for these workloads.
\section{Conclusions}
Serving deep learning models from RDBMS can greatly benefit from the RDBMS's physical data independence and manageability. This work proposed several synergistic storage optimization techniques covering indexing, page packing, and caching. We implemented the system in netsDB, an object-oriented relational database. We evaluated the proposed techniques using several typical model serving scenarios, including the serving of (1) multiple fine-tuned word embedding models, (2) multiple text classification models, and (3) multiple extreme classification models specialized through transfer learning. The results showed that our proposed deduplication techniques achieved a $2.7\times$ to $3.6\times$ reduction in storage size, sped up inference by $1.1\times$ to $4.7\times$, and improved the cache hit ratio by up to $1.6\times$. The results also showed that significantly more models can be served from RDBMS than from TensorFlow, which helps reduce the operational costs of model inference.
\balance
\bibliographystyle{ACM-Reference-Format}
\section{System Overview}
\label{sec:overview}
Leveraging tensor relational algebra~\cite{yuan2020tensor, jankov2019declarative}, a tensor is represented as a set of tensor blocks\footnote{Luo et al~\cite{luo2021automatic} proposed an auto tuning strategy for blocking tensors for TRA~\cite{yuan2020tensor}.}. Without deduplication, the set is physically stored as an array of pages of equal size, where each page consists of multiple tensor blocks. With deduplication, certain pages are shared by multiple tensors. These shared pages are stored separately in a special type of set. Each tensor not only stores an array of private pages, but also maintains a list of page IDs that point to the shared pages belonging to the set.
Given a set of models, we propose a novel \textbf{deduplication process}, as illustrated in Fig.~\ref{fig:overview} and described below:
(1) An LSH-based index is incrementally constructed to group tensor blocks based on similarity, so that similar tensor blocks can be replaced by one representative tensor block in their group, with limited impact on the model inference accuracy. To achieve this goal, the main ideas include: (a) Always examining the tensor blocks in ascending order of their estimated impacts on accuracy; (b) Periodically testing the deduplicated model's inference accuracy during the duplicate detection process, and stopping deduplication for tensor blocks from a model once its accuracy drop exceeds a threshold. (Sec.~\ref{sec:index})
(2) Each set of tensor blocks is physically stored as an array of fixed-size pages on disk. Distinct tensor blocks identified by the indexing are carefully grouped into pages so that each tensor is exactly covered by a subset of pages, and the number of pages required by all models is minimized. We optimize these objectives by assigning distinct tensor blocks that are shared by the same set of tensors to one equivalent class. Blocks in the same equivalent class are then grouped into the same set of pages. After this initial packing, tensor blocks from non-full pages are repacked to further improve the storage efficiency. (Sec.~\ref{sec:paging})
(3) The pages are automatically cached in the buffer pool. When memory resources become insufficient, the buffer pool manager will consider the locality patterns of each tensor and give hot pages and shared pages higher priority to be kept in memory through a novel cost model. (Sec.~\ref{sec:caching})
\vspace{5pt}
\noindent
\textbf{Block Metadata.}
A major portion of the overhead of the proposed deduplication mechanism is incurred by the additional metadata used to map each tensor block in these shared pages to the correct position in each tensor. Each tensor block needs $m\times d$ integers to specify such a mapping, where $m$ is the number of tensors that share the block and $d$ is the number of dimensions of the tensor. The metadata size is usually much smaller than the block size. For an $8$ megabyte block (e.g., $100\times 10000$ with double precision), the metadata for position mapping is merely $400$ bytes, supposing such a $2$D block is shared by $100$ tensors and block indexes are stored as short integers. Even when we use small block sizes such as $100\times 100$, the block size is hundreds of times larger than the metadata size.
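The sizes quoted above can be checked with a few lines of arithmetic, assuming 8-byte double-precision values and 2-byte short integers for the block-index metadata:

```python
# Sanity check of the block-vs-metadata sizes quoted in the text.

block_bytes = 100 * 10000 * 8   # a 100 x 10000 double-precision block
m, d = 100, 2                   # block shared by m=100 tensors, d=2 dimensions
metadata_bytes = m * d * 2      # m*d short integers for position mapping

print(block_bytes)              # 8000000, i.e. ~8 megabytes
print(metadata_bytes)           # 400 bytes

# Even for a small 100 x 100 block the data dwarfs the metadata:
small_block_bytes = 100 * 100 * 8
print(small_block_bytes // metadata_bytes)  # 200x larger
```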
As aforementioned, an important pattern of model serving involves multiple versions of models that share the same architecture, e.g., obtained by retraining/finetuning a model on different datasets. We found that the deduplication of such models does not require tensor block remapping at all, as a shared tensor block is often mapped to the same position in all tensors it belongs to. This is because finetuning and retraining change only part of the weights. For a tensor block in such scenarios, we only need $m$ integers to specify the IDs of the tensors that share it.
\vspace{5pt}
\noindent
\textbf{Model Removal and Updates.}
To remove a tensor, all private pages belonging to the tensor are removed, and then, for each shared page belonging to this tensor, its reference count is decremented. Once a shared page's reference count drops to $1$, the page is moved from the shared page set to the private set of the tensor that owns it.
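The removal logic can be sketched as follows, using hypothetical in-memory structures: deleting a tensor drops its private pages and decrements the reference count of each of its shared pages; a shared page whose count drops to 1 is demoted to the private set of its sole remaining owner.

```python
# Hypothetical sketch of tensor removal with reference-counted shared pages.

def remove_tensor(name, private, shared, refcount, owners):
    private.pop(name, None)                      # drop all private pages
    for pid in [p for p, s in owners.items() if name in s]:
        owners[pid].discard(name)
        refcount[pid] -= 1
        if refcount[pid] == 1:                   # demote to a private page
            sole_owner = owners[pid].pop()
            private.setdefault(sole_owner, []).append(shared.pop(pid))
            del refcount[pid], owners[pid]

private  = {"t1": ["page_a"], "t2": []}
shared   = {7: "page_s"}                 # page 7 is shared by t1 and t2
refcount = {7: 2}
owners   = {7: {"t1", "t2"}}

remove_tensor("t1", private, shared, refcount, owners)
print(private)   # {'t2': ['page_s']} -- page 7 became private to t2
print(shared)    # {}
```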
Given that the models in a serving scenario are less frequently updated than models in a training scenario, an update is implemented as a removal of the old tensor followed by an insertion of the new tensor. However, the index can be easily extended to facilitate model updates at a fine-grained level, as discussed in Sec.~\ref{sec:index}.
\section{Grouping Tensor Blocks into Pages}
\label{sec:paging}
Based on Sec.~\ref{sec:index}, we obtain a mapping from each (logical) tensor block to a (physical) distinct block. Each tensor may consist of both private distinct blocks that belong to only one tensor and shared distinct blocks that belong to multiple tensors. We now investigate how to pack multiple tensor blocks into database pages, so that we maximize the sharing of pages and minimize the total number of pages needed.
\subsection{Inconsistent Pages and Tensor Blocks}
Database storage organizes data in pages, so that a page is the smallest unit of data for I/O read/write and cache load/evict operations.
Analytics databases usually use a page size significantly larger than a tensor block (e.g., Spark uses $128$ megabytes page size and $1024\times 1024$ block shape by default~\cite{meng2016mllib}). As a result, a database page may contain multiple tensor blocks. Each tensor consists of a set of pages that should contain exactly the set of tensor blocks belonging to the tensor: no more and no less. If these pages contain tensor blocks that do not belong to the tensor, it will significantly complicate the scanning and various operations over the tensor.
However, the default paging process used in database systems cannot work well with deduplication.
By default, tensor blocks are packed into pages based on the ordering of the time when each block is written to the storage. If a page can hold up to $l$ tensor blocks, every batch of $l$ consecutive tensor blocks are packed into one page. Then a page deduplication process is performed, so that each distinct page will be physically stored once. However, such default packing with page-level deduplication is sub-optimal, because deduplicable tensor blocks may not be adjacent to each other spatially. As illustrated in Fig.~\ref{fig:page-packing-motivation}, the default packing requires $8$ pages, while the optimal packing scheme requires only $5$ pages.
\begin{figure}[h]
\vspace{-10pt}
\centering
\includegraphics[width=0.4\textwidth]{Figures/page-packing-motivation.pdf}
\caption{\label{fig:page-packing-motivation} \small
Motivation of page packing optimization
}
\vspace{-10pt}
\end{figure}
\subsection{Problem Formalization}
The problem is: \textit{How to group the tensor blocks across all models to pages to satisfy that: (1) For each tensor, we can find a subset of pages so that the set of tensor blocks contained in the pages is exactly the set of all tensor blocks that belong to the tensor; (2) The total number of distinct pages that need to be physically stored is minimized.}
Here we formalize the problem definition as a variant of the bin packing problem, where each \textbf{bin} represents a page that holds a limited number of tensor blocks, and each distinct tensor block represents an \textbf{item}.
Given $k$ tensors $T=\{t_1, ..., t_k\}$ and a set of distinct tensor blocks $I=\{item_1, ..., item_m\}$ derived from these tensors, a Boolean value $a_{ij}$ specifies whether $item_i$ exists in the $j$-th tensor. $\forall t_i\in T$, $t_i \subset I$, as described in Sec.~\ref{sec:index}.
The problem is to look for a bin-packing scheme that packs the items (i.e., distinct tensor blocks) into $n$ bins (i.e., pages), denoted as $bins=\{bin_1, ..., bin_n\}$, where each bin can hold at most $l$ items and each item can be allocated to one or more bins, denoted as $bin_i \subset I$ and $|bin_i|\leq l$. The Boolean value $p_{ij}$ denotes whether $item_i$ exists in $bin_j$. The bin-packing scheme $P=\{p_{ij}\}$ must satisfy the following conditions: (1) the total number of bins, $\sum_{j=1}^n{y_j}$, is minimized, where the Boolean value $y_j$ denotes whether $bin_j$ is used; (2) $\forall t_i \in T, \exists bins' \subset bins$, so that $t_i = \cup_{bin \in bins'}{bin}$, which means the set of distinct items contained in a tensor $t_i$ is equivalent to the set of distinct items contained in all bins belonging to $bins'$.
\begin{equation}
\min {\sum_{j=1}^n{y_j}}
\end{equation}
\begin{equation}
\;\;\;\; y_j =
\begin{cases}
1 & if \sum_{i=1}^m{p_{ij}} > 0\\
0 & otherwise
\end{cases}
\end{equation}
\begin{equation}
\;\;\;\;\; \forall bin_j \in bins, bin_j \subset I, p_{ij} =
\begin{cases}
1 & if item_i \in bin_j\\
0 & otherwise
\end{cases}
\end{equation}
\begin{equation}
s.t. \;\;\;\;\; \forall j, \sum_{i=1}^m{p_{ij}} \leq l
\end{equation}
\begin{equation}
\forall t_j \in T, t_j = \{item_k | a_{kj} = 1\}, \exists bins' \subset bins, t_j = \cup_{bin \in bins'}{bin}
\end{equation}
\noindent
\textbf{Problem Importance and Hardness.}
It is an \underline{important} problem: large page sizes, up to hundreds of megabytes, are widely adopted in analytics databases~\cite{zaharia2010spark}, and when memory resources become insufficient, even saving a few pages may significantly reduce the memory footprint and improve the performance.
The problem is a variant of the bin-packing problem in which items (i.e., distinct blocks) can share space when packed into a bin (i.e., a page)~\cite{korf2002new, sindelar2011sharing}, which is NP-hard. A dynamic programming strategy, which searches packing plans for one tensor first and then repeatedly packs more tensors based on previously searched packing plans, quickly becomes infeasible as the search space explodes.
\eat{
\subsection{Greedy-1: Dynamic Bin-Packing}
\eat{
}
Instead of using a time-consuming dynamic programming approach that requires a large amount of memory space for storing intermediate results, we may utilize following heuristics that are widely adopted for bin-packing~\cite{coffman2013bin}:
(1) Large tensors first. Packing for a large tensor is more likely to generate bins that can be reused by other (smaller) tensors. Therefore, we want to pack the items in large tensors first.
(2) Hot items first. Packing frequently shared blocks into a bin is more likely to generate bins that can be reused across multiple tensors. Therefore, each time we pack for a tensor, we want to pack the hot blocks together (into the same bin).
Based on these heuristics, we propose a greedy strategy, which first packs for large tensors, and then small tensors. When we pack for a given tensor, we order tensor blocks based on their frequency (i.e., the number of tensors in which a block is used), and then simply pack the tensor blocks to pages in order, without leaving any holes in a page except for the last page.
The greatest benefit of this algorithm is that it can scale to a large number of tensors.
\eat{
\begin{algorithm}\small
\caption{\bf Greedy Strategy-1 ($pack1(I=I, l)$)}
\label{alg:greedy}
\begin{algorithmic}[1]
\STATE INPUT1: $T$ (a set of tensors)
\STATE INPUT2: $l$ (the maximum number of items for each bin)
\STATE OUTPUT: $P=\{p_{ij}\}$ (an approximate optimal bin-packing scheme)
\STATE $I \leftarrow \phi$
\WHILE{$t_i \in T$}
\STATE $I \leftarrow I \cup t_i$
\ENDWHILE
\STATE $\{t_1, ..., t_k\} \leftarrow orderBySizeDescend(T)$
\STATE $\{item_1, ..., item_{m1}\} \leftarrow orderByGlobalFreqDescend(t_1)$
\STATE $\forall i,j, p_{ij} \leftarrow 0$
\FOR{$i=1,...,m1$}
\STATE $j \leftarrow indexInI(item_i)$
\STATE $s \leftarrow \ceil{j/l}$
\STATE $p_{js} \leftarrow 1$
\ENDFOR
\STATE $numBins \leftarrow \ceil{m1/l}$
\FOR{$i=2,...,k$}
\STATE based on $P$, find a minimal set of $bins$ that can maximally cover $t_i$
\STATE $I_{\delta} \leftarrow t_i-\cup_{bin\in bins} bin$
\IF {$I_{\delta}=\phi$}
\STATE continue
\ELSE
\STATE $\{item_1, ..., item_{\delta i}\} \leftarrow orderByGlobalFreqDescend(I_{\delta})$
\FOR{$j=1,...,\delta i$}
\STATE $s \leftarrow indexInI(item_j)$
\STATE $u \leftarrow numBins + \ceil{\delta i/l}$
\STATE $p_{su} \leftarrow 1$
\ENDFOR
\STATE $numBins \leftarrow numBins + \ceil{\delta i/l}$
\ENDIF
\ENDFOR
\RETURN $P=\{p_{ij}\}$
\end{algorithmic}
\end{algorithm}
}
}
\subsection{Equivalent Class-based Packing}
\label{sec:greedy}
While approximation algorithms~\cite{coffman2013bin}, such as Best-fit, First-fit, and Next-fit, are widely used for general bin-packing problems, they are suboptimal for the above problem because they do not consider how tensor blocks are shared by tensors.
To solve the problem, we propose to group tensor blocks that are shared by the same set of tensors together into \textbf{equivalent classes}. Different tensor blocks that are shared by the same set of tensors are regarded as equivalent in terms of page packing.
As illustrated in Fig.~\ref{fig:page-packing-example}, which depicts the tensor sharing relationship for the example in Fig.~\ref{fig:page-packing-motivation}, $12$ distinct blocks are shared by Tensor 1 ($t_1$) and Tensor 2 ($t_2$); these distinct tensor blocks can be grouped into the same equivalent class $C_3$. Four distinct tensor blocks are private to $t_1$ and can be grouped into the same equivalent class $C_1$, as can the blocks private to $t_2$ ($C_2$).
It is beneficial to use a divide-and-conquer strategy that packs each equivalent class in parallel, grouping the blocks falling into the same equivalent class into the same page(s). This is because each page can then be shared by all tensors associated with the page's corresponding equivalent class. By doing so, in the above example (Fig.~\ref{fig:page-packing-motivation} and Fig.~\ref{fig:page-packing-example}), the $12$ distinct blocks in equivalent class $C_3$ will be packed into three pages, the four distinct blocks in $C_1$ into one page, and the four distinct blocks in $C_2$ into one page, which leads to the optimal plan, as shown in Fig.~\ref{fig:page-packing-motivation}. The algorithm is illustrated in Alg.~\ref{alg:greedy}.
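The class-based packing can be sketched on numbers matching the running example: 12 blocks shared by $t_1$ and $t_2$, 4 private to each. The page capacity of 4 blocks and the particular write order of $t_2$ are our assumptions, chosen so that the counts reproduce the 8-vs-5 pages of Fig.~\ref{fig:page-packing-motivation}.

```python
import math

# Equivalent class-based packing vs. default write-order packing (sketch).
l = 4  # assumed page capacity, in blocks
shared = [f"s{i}" for i in range(12)]
t1 = [f"a{i}" for i in range(4)] + shared
# t2 happens to be written in a different order, so the default scheme's
# page boundaries do not align with t1's.
t2 = ["b0"] + shared + ["b1", "b2", "b3"]

# Group distinct blocks into equivalent classes keyed by their owner set.
owner_sets = {}
for name, blocks in (("t1", t1), ("t2", t2)):
    for b in blocks:
        owner_sets.setdefault(b, set()).add(name)
classes = {}
for b, owners in owner_sets.items():
    classes.setdefault(frozenset(owners), []).append(b)

# Each class is packed separately: ceil(|C|/l) pages per class, and every
# such page can be shared by all tensors owning the class.
pages_equiv = sum(math.ceil(len(c) / l) for c in classes.values())

# Default scheme: chunk each tensor in write order, then deduplicate only
# identical whole pages -- nothing deduplicates here.
chunk = lambda t: {tuple(t[i:i + l]) for i in range(0, len(t), l)}
pages_default = len(chunk(t1) | chunk(t2))

print(pages_equiv, pages_default)  # 5 8
```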
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Figures/page-packing-example.pdf}
\caption{\label{fig:page-packing-example} \small
Illustration of equivalent classes of tensor blocks for page packing for the example in Fig.~\ref{fig:page-packing-motivation}.
}
\end{figure}
\begin{algorithm}\small
\caption{\bf Equivalent Class-Based Greedy Strategy }
\label{alg:greedy}
\begin{algorithmic}[1]
\STATE INPUT1: $T$ (a list of tensors)
\STATE INPUT2: $l$ (the maximum number of items for each bin)
\STATE OUTPUT: $P=\{p_{ij}\}$ (a bin-packing scheme)
\STATE $\{C_1, ..., C_m\} \leftarrow T$ \COMMENT{divide $I$ into multiple equivalent classes, so items in each class are shared by the same set of tensors}
\STATE $numBins \leftarrow 0$
\FOR{$k=1,...,m$}
\FOR{$item$ : $C_k$}
\STATE $i \leftarrow indexInI(item)$
\STATE $j \leftarrow numBins+\ceil{indexInC_k(item)/l}$
\STATE $p_{ij} \leftarrow 1$
\ENDFOR
\STATE $numBins \leftarrow numBins+\ceil{|C_k|/l}$
\ENDFOR
\RETURN $P=\{p_{ij}\}$
\end{algorithmic}
\end{algorithm}
\subsection{A Two-Stage Page Packing Strategy}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Figures/page-packing-2.pdf}
\caption{\label{fig:page-packing-2} \small
Another example: the equivalent class-based greedy strategy leads to three non-full pages for $C_1$, $C_2$, $C_6$.
}
\end{figure}
The problem with the equivalent class-based packing is that it may lead to non-full pages, because the items in certain equivalent classes may not fully fill their bins. As another example, illustrated in Fig.~\ref{fig:page-packing-2}, if a bin can hold at most two items, the items in $C_1$, $C_2$, and $C_6$ will be packed into three non-full bins, respectively. However, a better scheme is to pack these items into two bins: $bin_1=C_1 \cup C_6$ and $bin_2=C_1 \cup C_2$.
A page may have a size of tens or hundreds of megabytes, so repacking non-full pages can bring significant improvements in storage efficiency, memory footprint, and data locality. We therefore propose a two-stage strategy for optimizing page packing schemes. In the first stage, items from each equivalent class are packed into bins separately, and no bin is allowed to mix items from different equivalent classes. In the second stage, we repack items from non-full bins by applying an approximation algorithm based on the following heuristics:
(1) Largest-Tensor-first. A tensor that contains more tensor blocks to be repacked is more likely to generate pages that can be reused by other tensors.
(2) Hottest-Block-First. Frequently shared tensor blocks, if packed together, are more likely to generate pages that can be reused across multiple tensors.
The approximation algorithm picks the tensor that has the most tensor blocks in non-full pages to repack first. When it repacks for a given tensor, it first attempts to identify and reuse packed pages that cover as many blocks to repack as possible. Then it orders the remaining tensor blocks based on their sharing frequency (i.e., the number of tensors a block is shared by), and simply packs these blocks to pages in order, without leaving any holes in a page except for the last page.
We formalize the algorithm for the second stage as Alg.~\ref{alg:greedy1}. The algorithm for the first stage is the same as Alg.~\ref{alg:greedy}.
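The two Stage-2 heuristics can be sketched as follows, under simplifying assumptions: a "page" is a frozenset of block IDs, and "reuse" means an already packed page fully contained in the tensor's still-unpacked blocks.

```python
# Sketch of the Stage-2 repacking heuristics (Largest-Tensor-First,
# Hottest-Block-First); a simplified model, not the netsDB implementation.

def repack(tensors, l):
    """tensors: name -> set of blocks left in non-full pages after Stage 1."""
    pages = []
    # Largest-Tensor-First: repack the tensor with the most blocks first.
    for name in sorted(tensors, key=lambda n: -len(tensors[n])):
        todo = set(tensors[name])
        for p in pages:                   # reuse fully covered packed pages
            if p <= todo:
                todo -= p
        # Hottest-Block-First: order leftovers by sharing frequency.
        freq = lambda b: sum(b in t for t in tensors.values())
        rest = sorted(todo, key=lambda b: (-freq(b), b))
        pages += [frozenset(rest[i:i + l]) for i in range(0, len(rest), l)]
    return pages

pages = repack({"t1": {"a", "b", "c", "d"}, "t2": {"a", "b"}}, l=2)
print(pages)  # t2 reuses the {a, b} page created for t1: only 2 pages total
```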
\eat{
\noindent
\textbf{Heuristics: Use equivalent classes to prune candidate schemes.} $bin_1=\{item1, item2\}, bin_2=\{item3, item4\}$ is equivalent with $bin_1=\{item1, item4\}, bin_2=\{item2, item3\}$, if $item1$ and $item3$ are in the same equivalent class, and $item2$ and $item4$ are in the same equivalent class. That's because the two schemes have the same influence on the packing of the rest of the tensor blocks and lead to the same storage costs. Therefore, one of them can be pruned.
This heuristics can be used to prune a significant portion (more than $95\%$) of candidate packing schemes.}
\eat{
\noindent
\textbf{Heuristics 2: Increase the lower bound number of tensor blocks in a page.} For example, we should not allow a page that has the number of tensor blocks smaller than a threshold $min_{i}{(|t_i|\%l})$, where $|t_i|$ denotes the number of blocks in the $i$-th tensor and $|t_i|\%l\neq0$, and $l$ denotes the maximal number of blocks allowed in a page.
Otherwise, the few blocks in this page can be combined with other remaining blocks to create a better scheme.
\noindent
\textbf{Heuristics 3: Early stop.} We also identify several early stopping criteria. For example, when each tensor has smaller than $l$ blocks that include at least one private block, we should directly pack all remaining unpacked blocks into a page for each tensor.
}
\begin{algorithm}\small
\caption{\bf Approximation Strategy}
\label{alg:greedy1}
\begin{algorithmic}[1]
\STATE INPUT1: $T=\{t_1, ...,t_k\}$ (A set of tensors to pack into pages. When applied to Stage 2, each tensor contains only items from non-full bins resulting from Stage 1)
\STATE INPUT2: $l$ (the maximum number of items for each bin)
\STATE OUTPUT: $P=\{p_{ij}\}$ (a bin-packing scheme)
\STATE $T \leftarrow orderByNumTensorBlocksDescend(T)$
\STATE $I \leftarrow \phi$
\WHILE{$t_i \in T$}
\STATE $I \leftarrow I \cup t_i$
\ENDWHILE
\STATE $numBins \leftarrow 0$
\FOR{$i=1,...,k$}
\IF{$i > 1$}
\STATE $bins \leftarrow$ a set of existing bins that form a maximal subset of $t_i$
\STATE $I_{\delta} \leftarrow t_i-\cup_{bin\in bins} bin$
\IF {$I_{\delta}=\phi$}
\STATE continue
\ENDIF
\ELSE
\STATE $I_{\delta} \leftarrow t_1$
\ENDIF
\STATE $\{item_1, ..., item_{\delta i}\} \leftarrow orderBySharingFreqDescend(I_{\delta})$
\FOR{$j=1,...,\delta i$}
\STATE $s \leftarrow indexInI(item_j)$ //index of $item_j$ in $I$
\STATE $u \leftarrow numBins + \ceil{\delta i/l}$
\STATE $p_{su} \leftarrow 1$
\ENDFOR
\STATE $numBins \leftarrow numBins + \ceil{\delta i/l}$
\ENDFOR
\RETURN $P=\{p_{ij}\}$
\end{algorithmic}
\end{algorithm}
\noindent
\textbf{Online Packing.}
The proposed algorithms can also be utilized for online packing of tensor blocks to pages.
Each time a new tensor is about to be added to the database, the list of tensor blocks in this tensor, as well as in all related tensors (i.e., tensors that share at least one block with the new tensor), is retrieved to run the proposed algorithm and obtain a new packing scheme. Then the difference between the new packing scheme and the existing packing scheme is computed. Only those pages that need to be changed are repacked.
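The online step above amounts to a set difference between the old and the new packing schemes; only the difference is materialized. A minimal sketch, with pages modeled as frozensets of block IDs:

```python
# Sketch of the online-packing diff: write only pages that changed.

def diff_schemes(old_pages, new_pages):
    old, new = set(old_pages), set(new_pages)
    return new - old, old - new   # (pages to write, pages to retire)

old = [frozenset({"a", "b"}), frozenset({"c", "d"})]
new = [frozenset({"a", "b"}), frozenset({"c", "d", "e"})]  # a tensor grew
to_write, to_retire = diff_schemes(old, new)
print(len(to_write), len(to_retire))  # 1 1 -- page {a, b} is left untouched
```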
\section*{Introduction}
Graphene is one of the most promising physical systems discovered in the last ten years, since its properties offer a setting in which many physical theories can be studied and observed in the laboratory. In particular, graphene can be modelled by using relativistic quantum mechanics and quantum field theoretic methods \cite{graph1,graph2,graph3,graph4,graph5,graph6,graph7,graphfourfermion}, giving rise to interesting and useful properties for electrons \cite{graphnew1,wilzek,graphnew3,graphnew4,graphnew5,graphnew6,graphnew7}. Graphene is a one-atom-thick layer of carbon atoms with a hexagonal lattice structure, in which electrons obey a Dirac equation and have a linear dispersion relation with Fermi velocity $v_F$ \cite{graphnew1}. The hexagonal lattice brings along a number of new phenomena of topological and geometrical origin \cite{review} that have rendered graphene an experimental material where ideas coming from 2+1 gravity can be tested \cite{graphcurv}. With respect to the latter perspective, it is the existence of multilayers and their intersections that creates singularities, where physical quantities blow up. At these singular points the curvature is singular and the surfaces can be modelled by using a metric \cite{graphcurv}, so contact with $2+1$ gravity can be achieved. Graphene was identified for the first time in the laboratory in 2004 \cite{graphexper} and has since created a new research stream for many theoretical ideas. For a recent review see \cite{review} and references therein.
One of the most interesting phenomena in graphene is the localization of Dirac electrons and the related problem of how to achieve localization \cite{graphnew1,wilzek,graphnew6,graphnew7}. Localization of fermions always occurs in the presence of defects, and the first study was done in the seminal paper of Jackiw and Rossi \cite{graphzero}. The topology of the physical system's configuration space is critically affected by the presence of the defect, and this plays an important role in the localization effect. The localized fermionic modes can be classified according to a topological index theorem \cite{graphindex}, which relates the net number of fermionic modes to the topology of the state space. In this article we shall be interested in graphene layers with two types of defects, namely gapped graphene in the presence of a domain wall \cite{graphnew1} and superconducting graphene \cite{wilzek} in the presence of multivortices. In both frameworks there exist localized fermionic modes near the defects. Our study focuses on another field theoretic aspect of graphene localized fermions, namely the existence of a rich extended supersymmetric structure underlying the system of localized fermions. As we shall explicitly demonstrate, these one dimensional supersymmetries have non-trivial topological charges, a fact probably indicating the existence of a non-linear supersymmetry. The relation between supersymmetry and graphene was also pointed out in \cite{graphenesusy}, but from a completely different point of view. In our study the focus will be on revealing the supersymmetric structures in both gapped and superconducting graphene and relating the corresponding Witten index with the localized modes. As we shall see, the lowest supersymmetric structure that underlies both systems is a number of distinct unbroken $N=2$, $d=1$ supersymmetries.
The fact that the supersymmetries are unbroken is closely related to the existence of localized modes on the defects, but the proof for this is different in the two systems under study. The $N=2$ supersymmetries are combined to form $N$-extended supersymmetries with non-trivial topological charges. These extended supersymmetries are not simply higher order representations but are actually new supersymmetric realizations of the two systems. We shall see that supersymmetries remain unbroken even if the systems are perturbed by compact odd perturbations. This can help us to further understand the topological properties of the two systems, since from a physical point of view, compact perturbations can be caused by changing the pairing gap function $\Delta(r)$, with this perturbation leaving the Witten index of the system invariant. We believe that our study could provide another important field theoretic aspect of graphene-defects systems.
This paper is organized as follows: In section 1 we present the $N=2$ supersymmetric structure of the gapped graphene system, along with some implications for the Hilbert space of the localized electrons. In section 2 we present the $N=4$ extended supersymmetric structure of the gapped graphene localized electrons and also give a brief account of the non-reducible representations of $N=4$, $d=1$ supersymmetry. In section 3, we study the effect of domain wall perturbations on the Witten index, and also whether the Witten index changes when the gap function is changed. In section 4 we study the underlying supersymmetric structure in a superconducting graphene framework. The conclusions follow at the end of the paper.
\section{Gapped Graphene and One Dimensional Extended Supersymmetry-Non-Fredholm Operators Case}
\subsection{A Brief Gapped Graphene Primer}
A useful modification in graphene constructions is to introduce an energy gap in the electron spectrum \cite{graphnew1}. This can, for example, be realized by using a staggered chemical potential \cite{graphnew1}. As was shown in \cite{graphnew1}, domain walls can be materialized in a realistic way in graphene, and these give rise to a band of mid-gap electron states. Specifically, the domain walls practically imitate a one-dimensional metal embedded in a semi-conductor and can thereby be used as a single-channel quantum wire. The mid-gap states contain localized fermion modes in the Dirac Hamiltonian spectrum of the quantum system. These localized fermionic modes exist in the presence of topological defects such as domain walls. With regard to gapped graphene, the focus of this article is on domain walls, which were studied in detail in \cite{graphnew1}. We shall adopt the notation of \cite{graphnew1} in our study of the gapped graphene fermionic system. In \cite{graphnew1} it was shown that, owing to the existence of the domain wall, localized fermions occur at the location of the domain wall. As we shall demonstrate in this section, there exists a rich one dimensional supersymmetric structure underlying the system of localized mid-gap electron states.
The two valley electrons of graphene are described by the following Hamiltonian \cite{graphnew1},
\begin{equation}\label{hamgraph}
H_{graph}=\hbar v_F\left ( \begin{array}{cccc}
\frac{m(x)v_F}{\hbar} & i\frac{\mathrm{d}}{\mathrm{d}x}+ \frac{\mathrm{d}}{\mathrm{d}y}& 0 & 0\\
i\frac{\mathrm{d}}{\mathrm{d}x}- \frac{\mathrm{d}}{\mathrm{d}y} & -\frac{m(x)v_F}{\hbar} & 0 & 0 \\
0 & 0 & \frac{m(x)v_F}{\hbar} & i\frac{\mathrm{d}}{\mathrm{d}x}- \frac{\mathrm{d}}{\mathrm{d}y} \\
0 & 0 & i\frac{\mathrm{d}}{\mathrm{d}x}+ \frac{\mathrm{d}}{\mathrm{d}y} & -\frac{m(x)v_F}{\hbar} \\
\end{array}\right )
\end{equation}
The two graphene valleys are described by the diagonal blocks, which can be transformed into each other by a time-reversal and parity transformation. The domain wall is described by the function $m(x)$, which has a solitonic profile of the following form,
\begin{equation}\label{solitonprofilemass}
\lim_{x\rightarrow -\infty}m(x)=-m<0,{\,}{\,}{\,}\lim_{x\rightarrow \infty}m(x)=m>0
\end{equation}
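A standard concrete realization of such a profile, quoted here only for illustration (the analysis below uses nothing beyond the asymptotics of relation (\ref{solitonprofilemass})), is the kink

```latex
\begin{equation}
m(x)=m\tanh\left(\frac{x}{\ell}\right)
\end{equation}
```

with $\ell$ the width of the domain wall, which clearly satisfies both limits of relation (\ref{solitonprofilemass}).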
Without loss of generality, we make the simple assumption that there exists at least one mid-gap electron state for each graphene valley. This is enough to reveal the underlying supersymmetric structure.
\subsection{$N=2$, $d=1$ Supersymmetric Subalgebras}
The supersymmetries we shall present are one dimensional supersymmetries, belonging to the research field of supersymmetric quantum mechanics. The latter \cite{reviewsusyqm} was introduced as a simplified model for the study of supersymmetry breaking in quantum field theory and is nowadays an independent research field, with many applications in various research areas. For example, in \cite{diffgeomsusyduyalities} and \cite{susyqminquantumsystems} interesting Hilbert space properties of supersymmetric quantum mechanical systems were studied, along with applications and non-linear realizations of supersymmetry. Some applications of supersymmetry in scattering related phenomena were studied in \cite{susyqmscatter}, and various features of supersymmetry breaking were presented in \cite{susybreaking}. In addition, one dimensional supersymmetries are of great importance since higher $N$-extended one dimensional supersymmetries \cite{extendedsusy} have a link to harmonic superspaces; see for example \cite{ivanov}. For some important works on supersymmetric quantum field theory see \cite{witten1,odi1,odi2,odi3} and references therein.
Before we reveal the extended one dimensional supersymmetric algebra that underlies the fermionic system on the gapped graphene domain wall, it is of crucial importance to present the four unbroken one dimensional $N=2$ subalgebras that underlie the system. Their importance is owing to the fact that these four algebras actually combine to form a higher non-trivial supersymmetric algebra; note that this is not simply the formation of a higher reducible representation of the two simple algebras. The latter is ensured, as we explicitly demonstrate, by the existence of non-trivial topological supercharges.
To start with, consider the Hamiltonian (\ref{hamgraph}) in the limits described in relation (\ref{solitonprofilemass}). The corresponding Dirac equation reads,
\begin{equation}\label{shroe}
H_{graph}\psi= E\psi
\end{equation}
which can in turn be written as two operator equations in terms of the operators $\mathcal{D}_1$ and $\mathcal{D}_2$, given by,
\begin{equation}\label{susyqmrn5safsfsf67m}
\mathcal{D}_{1}=\left(%
\begin{array}{cc}
\frac{(m-E)v_F}{\hbar} & i\frac{\mathrm{d}}{\mathrm{d}x}+ \frac{\mathrm{d}}{\mathrm{d}y}
\\ i\frac{\mathrm{d}}{\mathrm{d}x}- \frac{\mathrm{d}}{\mathrm{d}y} & -\frac{(m+E)v_F}{\hbar} \\
\end{array}%
\right),{\,}{\,}{\,}\mathcal{D}_{2}=\left(%
\begin{array}{cc}
\frac{(m-E)v_F}{\hbar} & i\frac{\mathrm{d}}{\mathrm{d}x}- \frac{\mathrm{d}}{\mathrm{d}y}
\\ i\frac{\mathrm{d}}{\mathrm{d}x}+ \frac{\mathrm{d}}{\mathrm{d}y} & -\frac{(m+E)v_F}{\hbar} \\
\end{array}%
\right)
\end{equation}
which act on the following two 2-component spinors $\psi_1$ and $\psi_2$ as follows,
\begin{equation}\label{twocompbispinors}
\mathcal{D}_{1}\psi_1=\mathcal{D}_{1}\left(%
\begin{array}{c}
u_{-} \\
u_{+} \\
\end{array}%
\right)=0,{\,}{\,}{\,}\mathcal{D}_{2}\psi_2=\mathcal{D}_{2}\left(%
\begin{array}{c}
v_{-} \\
v_{+} \\
\end{array}%
\right)=0
\end{equation}
The spinor $\psi$ is written in terms of the 2-component spinors $\psi_1$ and $\psi_2$ in the following way,
\begin{equation}\label{bispinor}
\psi=\left(%
\begin{array}{c}
\psi_1 \\
\psi_2 \\
\end{array}%
\right)
\end{equation}
It is worth writing down the zero-mode equations for the adjoint operators $\mathcal{D}_{1,2}$, which will prove useful later on in this section. The operators $\mathcal{D}_1^{\dag}$ and $\mathcal{D}_2^{\dag}$ satisfy the following equations,
\begin{equation}\label{diffeqnforadj1}
\mathcal{D}_1^{\dag}\psi_3=0,{\,}{\,}{\,}\mathcal{D}_2^{\dag}\psi_4=0
\end{equation}
By inspecting the form of the operators $\mathcal{D}_1$ and $\mathcal{D}_2$ in relation (\ref{susyqmrn5safsfsf67m}), we can easily verify that the exact form of the vectors $\psi_3$ and $\psi_4$ is,
\begin{equation}\label{formpsi34}
\psi_3=\left(%
\begin{array}{c}
-v_{-} \\
v_{+} \\
\end{array}%
\right),{\,}{\,}{\,}\psi_4=\left(%
\begin{array}{c}
-u_{-} \\
u_{+} \\
\end{array}%
\right)
\end{equation}
So, practically speaking, the zero modes of the operators $\mathcal{D}_1^{\dag}$ and $\mathcal{D}_2^{\dag}$ are exactly the same as those of the operators $\mathcal{D}_2$ and $\mathcal{D}_1$ respectively, and in particular,
\begin{equation}\label{dimeke1r11firstrelation}
\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_{1}^{\dag}=\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_{2},{\,}{\,}{\,}\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_{2}^{\dag}=\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_{1}
\end{equation}
Relation (\ref{dimeke1r11firstrelation}) will prove quite useful when we address the issue of whether supersymmetry is unbroken, later on in this section.
Notice that from the previous section, the energy eigenvalue $E$ takes two values, namely,
\begin{equation}\label{energy}
E=\pm v_F\sqrt{k^2+m^2v_F^2}
\end{equation}
so everything that follows is assumed to hold for each energy eigenvalue separately. Let us focus on the operator $\mathcal{D}_1$ first and demonstrate that we can build an $N=2$, $d=1$ algebra whose basic constituent is the operator $\mathcal{D}_1$. Indeed, the supercharges $\mathcal{Q}_1$, $\mathcal{Q}_1^{\dag}$ and the Hamiltonian $\mathcal{H}_1$ of the $N=2$, $d=1$ superalgebra, written in terms of the operator $\mathcal{D}_1$, are,
\begin{equation}\label{s7gsgdsgdgrdd}
\mathcal{Q}_{1}=\bigg{(}\begin{array}{ccc}
0 & \mathcal{D}_{1} \\
0 & 0 \\
\end{array}\bigg{)},{\,}{\,}{\,}\mathcal{Q}^{\dag}_{1}=\bigg{(}\begin{array}{ccc}
0 & 0 \\
\mathcal{D}_{1}^{\dag} & 0 \\
\end{array}\bigg{)},{\,}{\,}{\,}\mathcal{H}_{1}=\bigg{(}\begin{array}{ccc}
\mathcal{D}_{1}\mathcal{D}_{1}^{\dag} & 0 \\
0 & \mathcal{D}_{1}^{\dag}\mathcal{D}_{1} \\
\end{array}\bigg{)}
\end{equation}
which satisfy the relations,
\begin{equation}\label{relationsforsusysddssdg}
\{\mathcal{Q}_{1},\mathcal{Q}^{\dag}_{1}\}=\mathcal{H}_{1},{\,}{\,}\mathcal{Q}_{1}^2=0,{\,}{\,}{\mathcal{Q}_{1}^{\dag}}^2=0
\end{equation}
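The block structure of relations (\ref{s7gsgdsgdgrdd}) and (\ref{relationsforsusysddssdg}) can be checked explicitly in a finite-dimensional toy setting. In the Python sketch below, an arbitrary (random) complex matrix stands in for the differential operator $\mathcal{D}_1$; this is an assumption made purely for illustration, since the algebraic identities hold for any operator in that slot.

```python
import numpy as np

rng = np.random.default_rng(0)
# Finite-dimensional stand-in for the operator D_1 (illustrative choice;
# the algebra below holds for any matrix placed here).
D1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Z = np.zeros((2, 2), dtype=complex)

Q1 = np.block([[Z, D1], [Z, Z]])            # supercharge Q_1
Q1d = Q1.conj().T                           # adjoint supercharge
H1 = np.block([[D1 @ D1.conj().T, Z],
               [Z, D1.conj().T @ D1]])      # Hamiltonian H_1

assert np.allclose(Q1 @ Q1d + Q1d @ Q1, H1)  # {Q_1, Q_1^dag} = H_1
assert np.allclose(Q1 @ Q1, 0)               # Q_1^2 = 0
assert np.allclose(Q1d @ Q1d, 0)             # (Q_1^dag)^2 = 0
print("N=2, d=1 algebra relations verified")
```

The nilpotency of the supercharges is manifest from the strictly upper-triangular block form, independently of the matrix chosen for $\mathcal{D}_1$.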
These relations (\ref{relationsforsusysddssdg}) are the constituting equations of an $N=2$, $d=1$ algebra \cite{reviewsusyqm}. The issue of whether supersymmetry is broken or not is a bit more involved and will be properly addressed later on in this section; as we explicitly demonstrate there, supersymmetry is unbroken, so we take this for granted for the moment. It is worth presenting the implications of unbroken supersymmetry for the Hilbert space that consists of the gapped graphene fermion states. We denote the Hilbert space of the supersymmetric quantum mechanical system as $\mathcal{H}_{sp}$, which is rendered a $Z_2$ graded space by the action of the involution operator $\mathcal{W}$. This operator is called the Witten parity, and it satisfies,
\begin{equation}\label{s45}
[\mathcal{W},\mathcal{H}_{sp}]=0,{\,}{\,}{\,}\{\mathcal{W},\mathcal{Q}_{1}\}=\{\mathcal{W},\mathcal{Q}_{1}^{\dag}\}=0
\end{equation}
Moreover, the Witten parity satisfies the following identity,
\begin{equation}\label{s6}
\mathcal{W}^{2}=1
\end{equation}
which is the defining property of an involution. The operator $\mathcal{W}$ has the following matrix representation in the case at hand,
\begin{equation}\label{wittndrf}
\mathcal{W}=\bigg{(}\begin{array}{ccc}
1 & 0 \\
0 & -1 \\
\end{array}\bigg{)}
\end{equation}
The action of the Witten parity on the Hilbert space of the supersymmetric quantum system is to split the total Hilbert space into two $Z_2$ graded subspaces. As a consequence, the total Hilbert space of the quantum system acquires the following decomposition \cite{reviewsusyqm},
\begin{equation}\label{fgjhil}
\mathcal{H}_{sp}=\mathcal{H}^+\oplus \mathcal{H}^-
\end{equation}
with the vector states belonging to the subspaces $\mathcal{H}^{\pm}$ being classified into even and odd parity states, according to their Witten parity, that is:
\begin{equation}\label{shoes}
\mathcal{H}^{\pm}=\mathcal{P}^{\pm}\mathcal{H}_{sp}=\{|\psi\rangle :
\mathcal{W}|\psi\rangle=\pm |\psi\rangle \}
\end{equation}
The decomposition of the Hilbert space has a direct implication for the total Hamiltonian $\mathcal{H}_1$, whose diagonal components are written as follows,
\begin{equation}\label{h1}
{\mathcal{H}}_{+}=\mathcal{D}_{1}{\,}\mathcal{D}_{1}^{\dag},{\,}{\,}{\,}{\,}{\,}{\,}{\,}{\mathcal{H}}_{-}=\mathcal{D}_{1}^{\dag}{\,}\mathcal{D}_{1}
\end{equation}
The operators $\mathcal{P}^{\pm}$, introduced in (\ref{shoes}), classify the Hilbert space vectors into even and odd states, since their eigenstates $|\psi^{\pm}\rangle$ satisfy the following relation,
\begin{equation}\label{fd1}
\mathcal{P}^{\pm}|\psi^{\pm}\rangle =\pm |\psi^{\pm}\rangle
\end{equation}
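The grading and projector properties described above can likewise be illustrated numerically. In the sketch below, $\mathcal{W}$ is taken in a $4\times 4$ representation, namely the block form (\ref{wittndrf}) with each entry a $2\times 2$ identity block; the block sizes and the random matrix standing in for $\mathcal{D}_1$ are illustrative assumptions.

```python
import numpy as np

# Witten parity: block form diag(+1, -1) of Eq. (wittndrf), with each
# entry realized as a 2x2 identity block (assumed sizes).
W = np.diag([1.0, 1.0, -1.0, -1.0])
P_plus = (np.eye(4) + W) / 2    # projector onto H^+
P_minus = (np.eye(4) - W) / 2   # projector onto H^-

assert np.allclose(W @ W, np.eye(4))             # W^2 = 1
assert np.allclose(P_plus @ P_plus, P_plus)      # idempotent
assert np.allclose(P_plus @ P_minus, 0)          # orthogonal subspaces
assert np.allclose(P_plus + P_minus, np.eye(4))  # completeness

# A supercharge Q = [[0, D], [0, 0]] anticommutes with W:
rng = np.random.default_rng(1)
D = rng.normal(size=(2, 2))
Z = np.zeros((2, 2))
Q = np.block([[Z, D], [Z, Z]])
assert np.allclose(W @ Q + Q @ W, 0)             # {W, Q} = 0
print("Z_2 grading verified")
```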
We shall call these, for brevity, positive and negative parity eigenstates \cite{reviewsusyqm}, with the term ``parity'' referring to the operator $\mathcal{P}^{\pm}$. Using the Witten parity operator in the representation (\ref{wittndrf}), the parity eigenstates acquire the following vector representation,
\begin{equation}\label{phi5}
|\psi^{+}\rangle =\left(%
\begin{array}{c}
|\phi^{+}\rangle \\
0 \\
\end{array}%
\right),{\,}{\,}{\,}
|\psi^{-}\rangle =\left(%
\begin{array}{c}
0 \\
|\phi^{-}\rangle \\
\end{array}%
\right)
\end{equation}
with $|\phi^{\pm}\rangle \in \mathcal{H}^{\pm}$. We can express the vectors defined in the relations above in terms of the spinors $\psi_1$ and $\psi_3$. Indeed, one can easily verify that,
\begin{equation}\label{fdgdfgh}
\psi_1 =|\phi^{-}\rangle=\left(%
\begin{array}{c}
u_{-} \\
u_{+} \\
\end{array}%
\right),{\,}{\,}{\,}\psi_3 =|\phi^{+}\rangle=\left(%
\begin{array}{c}
-v_{-} \\
v_{+} \\
\end{array}%
\right)
\end{equation}
Hence, we can write the corresponding even and odd parity supersymmetric quantum states in terms of the vectors $\psi_1$ and $\psi_3$ as follows,
\begin{equation}\label{phi5b}
|\psi^{+}\rangle =\left(%
\begin{array}{c}
\psi_3 \\
0 \\
\end{array}%
\right),{\,}{\,}{\,}
|\psi^{-}\rangle =\left(%
\begin{array}{c}
0 \\
\psi_1 \\
\end{array}%
\right)
\end{equation}
on which the Hamiltonian and the supercharges of the supersymmetric algebra act.
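The statement that the supercharges act across the two parity sectors can also be checked numerically: acting with $\mathcal{Q}_1$ on a negative-parity state produces a positive-parity state. Below is a minimal sketch, again with an arbitrary matrix standing in for $\mathcal{D}_1$ purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
D1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # stand-in for D_1
Z = np.zeros((2, 2), dtype=complex)
Q1 = np.block([[Z, D1], [Z, Z]])              # supercharge
W = np.diag([1, 1, -1, -1]).astype(complex)   # Witten parity

psi_minus = np.array([0, 0, 1.0, 2.0], dtype=complex)  # a state in H^-
assert np.allclose(W @ psi_minus, -psi_minus)

mapped = Q1 @ psi_minus
assert np.allclose(W @ mapped, mapped)  # Q_1 maps H^- into H^+
print("supercharge exchanges the parity sectors")
```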
Having established that a supersymmetric algebra can be constructed using the operator $\mathcal{D}_1$, we can easily show, using the same line of argument, that another $N=2$, $d=1$ supersymmetric algebra can be constructed using the operator $\mathcal{D}_2$. Indeed, the supercharges and the Hamiltonian of this algebra are,
\begin{equation}\label{s7gsgdsgdgrddfffg}
\mathcal{Q}_{2}=\bigg{(}\begin{array}{ccc}
0 & \mathcal{D}_{2} \\
0 & 0 \\
\end{array}\bigg{)},{\,}{\,}{\,}\mathcal{Q}^{\dag}_{2}=\bigg{(}\begin{array}{ccc}
0 & 0 \\
\mathcal{D}_{2}^{\dag} & 0 \\
\end{array}\bigg{)},{\,}{\,}{\,}\mathcal{H}_{2}=\bigg{(}\begin{array}{ccc}
\mathcal{D}_{2}\mathcal{D}_{2}^{\dag} & 0 \\
0 & \mathcal{D}_{2}^{\dag}\mathcal{D}_{2} \\
\end{array}\bigg{)}
\end{equation}
which satisfy the $N=2$, $d=1$ supersymmetric algebra:
\begin{equation}\label{relationsforsusysddghhfdssdgfffggf}
\{\mathcal{Q}_{2},\mathcal{Q}^{\dag}_{2}\}=\mathcal{H}_{2},{\,}{\,}\mathcal{Q}_{2}^2=0,{\,}{\,}{\mathcal{Q}_{2}^{\dag}}^2=0
\end{equation}
The rest of the analysis is the same, and we omit it for the sake of brevity.
Now comes the issue of whether this supersymmetry is unbroken, which is closely related to the following observation: the zero modes of the operator $\mathcal{D}_1$ are exactly the eigenfunctions of the gapped graphene fermionic system. Notice that this holds true for each energy eigenvalue taken into account one at a time (the operator $\mathcal{D}_1$ is built using only one of them at a time, namely $E$). Of course, the same applies for the operator $\mathcal{D}_2$. As we shall see, the zero modes of the operators $\mathcal{D}_1$ and $\mathcal{D}_2$ play a crucial role in determining whether supersymmetry is unbroken or not, due to the existence of an index theorem. Recall that supersymmetry is unbroken if the Witten index is a non-zero integer. The Witten index for Fredholm operators is equal to,
\begin{equation}\label{phil}
\Delta =n_{-}-n_{+}
\end{equation}
with $n_{\pm}$ the number of zero modes of the operators ${\mathcal{H}}_{\pm}$ in the subspaces $\mathcal{H}^{\pm}$. Notice that for Fredholm operators the zero modes are finitely many. In the case at hand, the operators are not Fredholm, since the energy parameter takes values in a continuous range, hence we have to make use of a continuum generalized Witten index. In a later section, when we study the superconducting graphene case, we shall come back to the issue of Fredholm operators and supersymmetry breaking. Let us focus on the first $N=2$ algebra and the operator $\mathcal{D}_1$, for which the heat-kernel regularized index, denoted $\mathrm{ind}_t\mathcal{D}_1$, and the Witten index, denoted $\Delta_t$, are formally defined as follows \cite{reviewsusyqm,thaller},
\begin{align}\label{heatkerw}
& \mathrm{ind}_t\mathcal{D}_1=\mathrm{Tr}\left(-\mathcal{W}e^{-t\mathcal{H}_1}\right)=\mathrm{tr}_{-}\left(e^{-t\mathcal{D}_1^{\dag}\mathcal{D}_1}\right)-\mathrm{tr}_{+}\left(e^{-t\mathcal{D}_1\mathcal{D}_1^{\dag}}\right) \\ \notag
& \Delta_t=\lim_{t\rightarrow \infty}\mathrm{ind}_t\mathcal{D}_1.
\end{align}
where we assumed that $t>0$, and $\mathrm{tr}_{\pm }$ denotes the trace in the subspaces $\mathcal{H}^{\pm}$. The formal definition of the heat-kernel regularized index involves trace-class operators, which have a finite trace norm \cite{thaller}. In the case at hand, the operator that must be trace-class is $-\mathcal{W}e^{-t\mathcal{D}_1^{\dag}\mathcal{D}_1}$. Now recall relation (\ref{dimeke1r11firstrelation}), from which we can easily establish the result that,
\begin{equation}\label{keeerrrr}
\mathrm{ker}\mathcal{D}_1=\mathrm{ker}\mathcal{D}_1^{\dag}\neq 0,
\end{equation}
owing to the existence of gapped graphene fermionic states. Relation (\ref{keeerrrr}) implies that,
\begin{equation}\label{keeerrrr1}
\mathrm{ker}\mathcal{D}_1^{\dag}\mathcal{D}_1=\mathrm{ker}\mathcal{D}_1\mathcal{D}_1^{\dag}\neq 0,
\end{equation}
and consequently, the following relation holds true, regarding the operators
$e^{-t\mathcal{D}_1^{\dag}\mathcal{D}_1}$ and $e^{-t\mathcal{D}_1\mathcal{D}_1^{\dag}}$,
\begin{equation}\label{qggeeee123}
\mathrm{tr}_{-}e^{-t\mathcal{D}_1^{\dag}\mathcal{D}_1}=\mathrm{tr}_{+}e^{-t\mathcal{D}_1\mathcal{D}_1^{\dag}}
\end{equation}
Recalling that $\mathrm{tr}_{\pm }$ denotes the trace corresponding to the subspaces $\mathcal{H}^{\pm}$, relation (\ref{qggeeee123}) implies that the regularized index of the operator $\mathcal{D}_{1}$ is actually equal to zero. As a consequence, the regularized Witten index is also zero, and in conjunction with relation (\ref{keeerrrr}) we come to the conclusion that the $N=2$ supersymmetric algebra corresponding to the operator $\mathcal{D}_1$ is unbroken. The same argument holds true for the algebra built on the operator $\mathcal{D}_2$; hence, we finally have two unbroken $N=2$, $d=1$ supersymmetric algebras corresponding to the energy eigenvalue $E$. Bearing in mind that there are another two corresponding to the energy eigenvalue $-E$, we end up with four unbroken $N=2$, $d=1$ supersymmetries. The question now is whether these supersymmetries combine in some way to form a higher order, non-trivial extended supersymmetry. This is the subject of the next section.
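The mechanism behind the vanishing of the regularized index, Eq. (\ref{qggeeee123}), can be mimicked in a finite-dimensional sketch: for a square matrix $D$ the nonzero spectra of $D^{\dag}D$ and $DD^{\dag}$ coincide and the kernel dimensions match, so the heat-kernel trace difference vanishes for every $t$. The matrix below is a hypothetical example with a nontrivial kernel, chosen purely for illustration.

```python
import numpy as np

# Hypothetical square operator with a nontrivial kernel (third direction).
D = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

t = 2.0
# tr_- e^{-t D^dag D} and tr_+ e^{-t D D^dag} via the (real) spectra:
tr_minus = np.sum(np.exp(-t * np.linalg.eigvalsh(D.conj().T @ D)))
tr_plus = np.sum(np.exp(-t * np.linalg.eigvalsh(D @ D.conj().T)))

# Matching kernels and isospectral nonzero modes => vanishing index.
assert np.isclose(tr_minus - tr_plus, 0.0)
print("ind_t D =", tr_minus - tr_plus)
```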
\subsection{Global R-Symmetries}
The $N=2$ supersymmetric quantum mechanics algebra has some implications for the Hilbert space of the fermionic states that are localized on the domain wall. Specifically, it implies a global R-symmetry, as we explicitly demonstrate. Focusing on the first algebra, described by the supercharges $\mathcal{Q}_1$ and $\mathcal{Q}_1^{\dag}$, the supercharge algebra is invariant under the following transformations:
\begin{align}\label{transformationu1}
& \mathcal{Q}_{1}=e^{-ia}\mathcal{Q'}_{1}, {\,}{\,}{\,}{\,}{\,}{\,}{\,}{\,}
{\,}{\,}\mathcal{Q}_{1}^{\dag}=e^{ia}\mathcal{Q'}_{1}^{\dag}
\end{align}
Therefore, the $N=2$ supersymmetric system is invariant under the global $U(1)$ transformation of relation (\ref{transformationu1}). This global
$U(1)$ symmetry is inherited by the Hilbert space states
corresponding to the spaces $\mathcal{H}^{+}$,
$\mathcal{H}^{-}$ we presented earlier. Let $\psi^{+}_{1}$ and
$\psi^{-}_{1}$ represent the Hilbert space states corresponding to the graded Hilbert
spaces $\mathcal{H}^{+}$ and $\mathcal{H}^{-}$. Then the $U(1)$ transformation acts on the Hilbert space states as follows,
\begin{equation}
\psi^{'+}_{1}=e^{-i\beta_{+}}\psi^{+}_{1},
{\,}{\,}{\,}{\,}{\,}{\,}{\,}{\,}
{\,}{\,}\psi^{'-}_{1}=e^{-i\beta_{-}}\psi^{-}_{1}
\end{equation}
where $\beta_{+}$ and $\beta_{-}$ are global parameters satisfying $a=\beta_{+}-\beta_{-}$. Bearing in mind that there are two $N=2$ supersymmetries for the energy eigenvalue $E$ and two for $-E$, the total $R$-symmetry of the localized fermionic system, denoted $R_{tot}$, is a product of four distinct $U(1)$ symmetries, that is,
\begin{equation}\label{fouru1s}
R_{tot}=U(1)\times U(1)\times U(1)\times U(1)
\end{equation}
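The invariance under (\ref{transformationu1}) is immediate to check numerically: the global phase cancels in every anticommutator, leaving the Hamiltonian unchanged. As before, a random matrix stands in for $\mathcal{D}_1$ purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
D1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # stand-in for D_1
Z = np.zeros((2, 2), dtype=complex)
Q1 = np.block([[Z, D1], [Z, Z]])    # supercharge

a = 0.7                              # arbitrary global phase
Q1p = np.exp(-1j * a) * Q1           # U(1)-transformed supercharge
Q1pd = Q1p.conj().T

# The phase cancels in the anticommutator, so H_1 is unchanged:
H1 = Q1 @ Q1.conj().T + Q1.conj().T @ Q1
H1p = Q1p @ Q1pd + Q1pd @ Q1p
assert np.allclose(H1, H1p)
assert np.allclose(Q1p @ Q1p, 0)     # nilpotency also survives
print("global U(1) R-symmetry verified")
```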
\section{Topological Charge Extended $N=4$ Superalgebras in Gapped Graphene}
As we explicitly demonstrated, the domain wall graphene fermionic modes with energy $E$ constitute two separate $N=2$, $d=1$ algebras. We shall now demonstrate that these two supersymmetric algebras can be combined to give an $N=4$ extended supersymmetric algebra with non-trivial topological charges. This extended supersymmetric structure is not a simple higher dimensional representation of the supersymmetry algebra, but a non-trivial supersymmetry with higher $N$ and non-trivial topological charges, which are, however, not central charges. In order to reveal the inherent extended supersymmetric structure of the system under study, we compute the following commutation and anticommutation relations satisfied by the supercharges and the Hamiltonians of the two algebras,
\begin{align}\label{commutatorsanticommedfgdfgrgdxvxtgrs}
&\{{{\mathcal{Q}}_{1}},{{\mathcal{Q}}^{\dag}_{1}}\}=2\mathcal{H}+\mathcal{Z}_{11},
{\,}\{{{\mathcal{Q}}_{2}},{{\mathcal{Q}}^{\dag}_{2}}\}=2\mathcal{H}+\mathcal{Z}_{22}
,{\,}\{{{\mathcal{Q}}_{2}},{{\mathcal{Q}}_{2}}\}=0,
\\ \notag &\{{{\mathcal{Q}}_{1}},{{\mathcal{Q}}_{ 1}}\}=0, {\,} \{{{\mathcal{Q}}_{2}},{{\mathcal{Q}}_{ 1}}^{\dag}\}=\mathcal{Z}_{2 1},{\,}\{{{\mathcal{Q}}_{1}},{{\mathcal{Q}}^{\dag}_{2}}\}=\mathcal{Z}_{1 2},{\,}\\ \notag
&\{{{\mathcal{Q}}^{\dag}_{1}},{{\mathcal{Q}}^{\dag}_{1}}\}=0,\{{{\mathcal{Q}}^{\dag}_{2}},{{\mathcal{Q}}^{\dag}_{2}}\}=0,\\
\notag
&[{{\mathcal{Q}}_{1}},{{\mathcal{Q}}_{2}}]=0,{\,}[{{\mathcal{Q}}^{\dag}_{2}},{{\mathcal{Q}}^{\dag}_{1}}]=0
\end{align}
These relations constitute an extended $N=4$, $d=1$ superalgebra with four non-trivial topological charges, which we denote as $\mathcal{Z}_{11},\mathcal{Z}_{22},\mathcal{Z}_{12},\mathcal{Z}_{21}$, and with Hamiltonian $\mathcal{H}$. The Hamiltonian is equal to,
\begin{equation}\label{hamgrapheaven}
\mathcal{H}=\mathrm{diag}\left ( \Delta_1 ,\Delta_2 , \Delta_2 ,\Delta_1 \right )
\end{equation}
with $\Delta_1$ and $\Delta_2$ being equal to,
\begin{align}\label{diagmatxixhamilt}
& \Delta_1=\frac{\mathrm{d^2}}{\mathrm{d}x^2}+\frac{\mathrm{d^2}}{\mathrm{d}y^2}+\frac{(m-E)^2v_F^2}{\hbar^2} \\ \notag &
\Delta_2=\frac{\mathrm{d^2}}{\mathrm{d}x^2}+\frac{\mathrm{d^2}}{\mathrm{d}y^2}+\frac{(m+E)^2v_F^2}{\hbar^2}
\end{align}
The topological charge $\mathcal{Z}_{11}$ is equal to,
\begin{equation}\label{newsusymat22newsuperch}
\mathcal{Z}_{11 }=\left ( \begin{array}{ccccc}
\mathcal{Z}^1_{11 } & 0 \\
0 & \mathcal{Z}^2_{11 } \\
\end{array}\right ).
\end{equation}
and the operator $\mathcal{Z}^1_{11 }$ stands for the following matrix,
\begin{equation}\label{newsusymat32sjhiuhguhg}
\mathcal{Z}^1_{11 }=\left ( \begin{array}{cc}
i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}-i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x} & (i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m+E)v_F}{\hbar}+\frac{(m-E)v_F}{\hbar}(-i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y}) \\ -\frac{(m+E)v_F}{\hbar}(-i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})+(i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m-E)v_F}{\hbar} & i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}-i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y} \\
\end{array}\right )
\end{equation}
while the operator $\mathcal{Z}^2_{11 }$ stands for,
\begin{equation}\label{newsusymat42sfgghuguohfdogudfg}
\mathcal{Z}^2_{11 }=\left ( \begin{array}{cc}
-i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}+i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x} & -(-i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m+E)v_F}{\hbar}+\frac{(m-E)v_F}{\hbar}(-i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y}) \\ (i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m+E)v_F}{\hbar}+\frac{(m-E)v_F}{\hbar}(i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y}) & -i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}+i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y} \\
\end{array}\right )
\end{equation}
Moreover, the topological charge $\mathcal{Z}_{22}$ is,
\begin{equation}\label{newsusymat22newsuperchb}
\mathcal{Z}_{22 }=\left ( \begin{array}{cc}
\mathcal{Z}^1_{22 } & 0 \\
0 & \mathcal{Z}^2_{22 } \\
\end{array}\right ).
\end{equation}
with the operator $\mathcal{Z}^1_{22 }$ being equal to,
\begin{equation}\label{newsusymat32sjhiuhguhgb}
\mathcal{Z}^1_{22 }=\left ( \begin{array}{cc}
i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}-i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x} & (i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m-E)v_F}{\hbar}-\frac{(m+E)v_F}{\hbar}(-i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y}) \\ \frac{(m-E)v_F}{\hbar}(-i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y})-(i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m+E)v_F}{\hbar} & i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}-i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y} \\
\end{array}\right )
\end{equation}
and, in addition, the operator $\mathcal{Z}^2_{22 }$ is equal to,
\begin{equation}\label{newsusymat42sfgghuguohfdogudfgb}
\mathcal{Z}^2_{22 }=\left ( \begin{array}{cc}
-i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}+i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x} & (i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m-E)v_F}{\hbar}-\frac{(m-E)v_F}{\hbar}(i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y}) \\ -(i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m+E)v_F}{\hbar}+\frac{(m-E)v_F}{\hbar}(i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y}) & -i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}+i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y} \\
\end{array}\right )
\end{equation}
Finally, the topological charge $\mathcal{Z}_{12 }$ is,
\begin{equation}\label{newsusymat22newsuperchwt123}
\mathcal{Z}_{12 }=\left ( \begin{array}{ccccc}
\mathcal{Z}^1_{12 } & 0 \\
0 & \mathcal{Z}^2_{12 } \\
\end{array}\right ).
\end{equation}
with $\mathcal{Z}^1_{12 }$ being equal to,
\begin{equation}\label{newsusymat32sjhiuhguhgwt}
\mathcal{Z}^1_{12 }=\left ( \begin{array}{cc}
-\frac{\mathrm{d^2}}{\mathrm{d}x^2}-i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}+i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}-\frac{\mathrm{d^2}}{\mathrm{d}y^2}+\frac{(m-E)^2v_F^2}{\hbar^2} & -(i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m+E)v_F}{\hbar}+\frac{(m-E)v_F}{\hbar}(i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y}) \\ -\frac{(m+E)v_F}{\hbar}(-i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y})+(i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m-E)v_F}{\hbar} & -\frac{\mathrm{d^2}}{\mathrm{d}x^2}+i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}-i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}-\frac{\mathrm{d^2}}{\mathrm{d}y^2}+\frac{(m+E)^2v_F^2}{\hbar^2} \\
\end{array}\right )
\end{equation}
and the operator $\mathcal{Z}^2_{12 }$ is equal to:
\begin{equation}\label{newsusymat42sfgghuguohfdogudfgwt1}
\mathcal{Z}^2_{12 }=\left ( \begin{array}{cc}
-\frac{\mathrm{d^2}}{\mathrm{d}x^2}-i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}-i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}-\frac{\mathrm{d^2}}{\mathrm{d}y^2}+\frac{(m+E)^2v_F}{\hbar^2} & -(i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m-E)v_F}{\hbar}-\frac{(m+E)v_F}{\hbar}(i\frac{\mathrm{d}}{\mathrm{d}x}-\frac{\mathrm{d}}{\mathrm{d}y}) \\ -(-i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y})\frac{(m+E)v_F}{\hbar}+\frac{(m-E)v_F}{\hbar}(i\frac{\mathrm{d}}{\mathrm{d}x}+\frac{\mathrm{d}}{\mathrm{d}y}) & \frac{\mathrm{d^2}}{\mathrm{d}x^2}+i\frac{\mathrm{d^2}}{\mathrm{d}x\mathrm{d}y}+i\frac{\mathrm{d^2}}{\mathrm{d}y\mathrm{d}x}-\frac{\mathrm{d^2}}{\mathrm{d}y^2}+\frac{(m-E)^2v_F}{\hbar^2} \\
\end{array}\right )
\end{equation}
The remaining topological charge $\mathcal{Z}_{21 }$ is the adjoint of $\mathcal{Z}_{12 }$, that is, $\mathcal{Z}_{21 }=\mathcal{Z}_{12 }^{\dag}$, so we omit the details.
Before we close this section, a brief comment on the topological charges we found is in order. The appearance of non-trivial topological charges in supersymmetric algebras of any dimension was first noticed in \cite{wittentplc}, where the terminology ``topological charge'' was actually first used. Topological charges cannot be considered central charges \cite{fayet}, since there exist non-vanishing commutation relations between these and some operators of the superalgebra. Intriguingly enough, the theoretical framework of reference \cite{wittentplc}, where topological charges were first pointed out, consisted of a supersymmetric algebra in the presence of topological defects. It seems that supersymmetry and topological charges have a deeper interconnection, as was also pointed out in \cite{topologicalcharges}; moreover, the existence of non-trivial topological charges in a supersymmetric algebra could be an indicator of a non-linear and certainly non-trivial supersymmetric structure. We defer this investigation to future work.
\subsection{Representations of the Algebras}
Having established the result that the gapped graphene fermions possess a rich supersymmetric structure, which is at most an $N=4$ extended supersymmetry, it is natural to ask whether there is any realistic structure in which this supersymmetry can actually be realized. In view of this question, we present in this section an indirect way of perhaps observing the supersymmetric structure. What we actually intend to do is briefly present the irreducible representations of the supersymmetric algebra at hand. In this way we do not actually find a realistic structure, but since the gapped graphene fermions constitute this supersymmetry, the observation that these are classified according to a specific pattern could perhaps be linked to our investigation. By irreducible representations we do not mean the perspective we adopted in \cite{oikonomoudomain}; rather, we are interested in irreducible representations of $N$-extended supersymmetry.
It is well known in the literature on the subject \cite{toppannew} that $N$-extended supersymmetry and the division algebras of real, complex, quaternionic and octonionic numbers \cite{toppannew} are in close connection. When applied to one dimensional supersymmetric algebras, this close relation can actually be viewed as a correlation between Clifford algebras and the $N$-extended supersymmetric algebras. Clifford algebras have irreducible representations which are classified in terms of division algebras, and more specifically octonions (see for example \cite{toppannew} for details). The classification of the $N$-extended one dimensional supersymmetry irreducible representations can be realized if the admissible ordered integers $(n_1,n_2,n_3,...,n_k)$ can be found for any given $N$. These admissible ordered integers then correspond to the irreducible multiplets with $n_i$ fields of dimension $d_i$. Note that the length of the irreducible representation is the integer $m$ which corresponds to the maximum dimensionality of a representation $d_m$.
The complete and detailed study of the irreducible representations of extended supersymmetry was carried out in references \cite{toppannew}, and specifically by Pashnev and Toppan (2001). We shall not go into details, since the work is done in \cite{toppannew}; we just present the most relevant results of these studies, corresponding of course to the $N=4$ case.
All the multiplets can be formed in such a way that they are in one-to-one correspondence with the set of irreducible representations, so that each multiplet corresponds to only one representation. This can be achieved if an equivalence relation is imposed among the multiplets \cite{toppannew}. The representations contain, in general, a number $n$ of bosonic and fermionic fields, with $n$ and $N$ being related as follows,
\begin{equation}\label{ehdhd}
N=8p+q,{\,}{\,}{\,}n=2^{4p}G(q)
\end{equation}
with $p=0,1,2,...$ and $q=1,2,...,8$. The function $G(q)$, which is related to the mod 8 Bott periodicity, is known as the Radon-Hurwitz function. Notice that the mod 8 Bott periodicity is a consequence of the underlying octonionic structure. Given the number $N$ of extended supersymmetry, one can find length-3 representations of the form $(n-k,n,k)$, with $k$ a non-negative integer, $k=0,1,\ldots,n$. In addition, one can form length-4 representations, which however exist only for specific extended supersymmetric structures, with the total number $N$ taking the values $N=3,5,7$ and also $N\geq 9$. Higher length representations exist only for $N\geq 10$. In our case we are interested in $N=4$, so we have at most length-3 representations, and below we quote all the irreducible length-3
representations,
\begin{equation}\label{4lengthirreps}
(4,4,0),{\,}{\,}{\,}(3,4,1),{\,}{\,}{\,}(2,4,2),{\,}{\,}{\,}(1,4,3)
\end{equation}
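The counting of Eqs. (\ref{ehdhd}) and (\ref{4lengthirreps}) is easy to reproduce programmatically. The short sketch below hard-codes the Radon-Hurwitz values $G(1),\dots,G(8)=1,2,4,4,8,8,8,8$ and enumerates the length-3 multiplets $(n-k,n,k)$; the helper names are of course our own.

```python
# Radon-Hurwitz function G(q), q = 1..8 (mod-8 Bott periodicity).
RADON_HURWITZ = {1: 1, 2: 2, 3: 4, 4: 4, 5: 8, 6: 8, 7: 8, 8: 8}

def n_fields(N):
    """Number n of bosonic (= fermionic) fields: N = 8p + q, n = 2^{4p} G(q)."""
    p, q = divmod(N - 1, 8)   # write N = 8p + q with q in 1..8
    return 2 ** (4 * p) * RADON_HURWITZ[q + 1]

def length3_reps(N):
    """Length-3 multiplets (n - k, n, k) for N-extended d=1 supersymmetry."""
    n = n_fields(N)
    return [(n - k, n, k) for k in range(n + 1)]

assert n_fields(4) == 4          # N = 4 -> n = G(4) = 4
reps = length3_reps(4)
for r in [(4, 4, 0), (3, 4, 1), (2, 4, 2), (1, 4, 3)]:
    assert r in reps             # the multiplets quoted in the text
print(reps)
```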
In addition to the above, it is possible to form tensor product representations using the representations (\ref{4lengthirreps}), but we refrain from going into further details, since these can be found in \cite{toppannew}.
\section{Domain Wall Perturbations in Gapped Graphene and the Witten Index}
Having found a rich supersymmetric structure underlying the fermionic system corresponding to gapped graphene, in this section we shall study the effect of domain wall perturbations on the Witten index of the supersymmetric algebras. Recall that the domain wall effect was assumed to be a solitonic type effect described by relation (\ref{solitonprofilemass}). Now consider that we perturb this profile by adding a slowly varying function of $x$, so that the solitonic profile at infinity is described by the following limits,
\begin{equation}\label{solitonprofilemasspert}
\lim_{x\rightarrow -\infty}m(x)=-m-m^{-\infty}(x)<0,{\,}{\,}{\,}\lim_{x\rightarrow \infty}m(x)=m+m^{\infty}(x)>0
\end{equation}
This behavior of the domain wall has a direct effect on the Dirac equation of the fermionic modes, which in turn has an impact on the operators $\mathcal{D}_1$ and $\mathcal{D}_2$ defined in (\ref{susyqmrn5safsfsf67m}). We focus for the moment on the former, but the same argument applies to the latter too. The operator $\mathcal{D}_1$ is modified, and we denote the new operator as $\mathcal{D}_{n}$, which has the following form,
\begin{equation}\label{susyqmrn5safsfsf67mnewrel}
\mathcal{D}_{n}=\left(%
\begin{array}{cc}
\frac{(m+m^{\infty}(x)-E)v_F}{\hbar} & i\frac{\mathrm{d}}{\mathrm{d}x}+ \frac{\mathrm{d}}{\mathrm{d}y}
\\ i\frac{\mathrm{d}}{\mathrm{d}x}- \frac{\mathrm{d}}{\mathrm{d}y} & -\frac{(m^{-\infty}(x)+E)v_F}{\hbar} \\
\end{array}%
\right)
\end{equation}
which can be written in the following equivalent form,
\begin{equation}\label{bnewfoirf}
\mathcal{D}_{n}=\mathcal{D}_{1}+\mathcal{C}
\end{equation}
with $\mathcal{C}$,
\begin{equation}\label{codd}
\mathcal{C}=\left(%
\begin{array}{cc}
0& \frac{m^{\infty}(x)v_F}{\hbar}
\\ -\frac{m^{-\infty}(x)v_F}{\hbar} & 0\\
\end{array}%
\right)
\end{equation}
The operator $\mathcal{C}$ contains only finite terms, since the functions $m^{\pm \infty}(x)$ are slowly varying, and as a consequence it is a bounded operator. In addition, it is an odd matrix, and therefore the operators $\mathcal{D}_n$, $\mathcal{D}_1$ and $\mathcal{C}$ satisfy the following theorem (see \cite{thaller}, page 168, Theorem 5.28):
\begin{itemize}
\item Let $D$ be a trace class operator and $C$ a bounded odd operator. Then the regularized indices of $D+C$ and $D$ are equal, that is
\begin{equation}\label{indperturbhfgatrn}
\mathrm{ind}_{t}(D+C)=\mathrm{ind}_{t}D
\end{equation}
\end{itemize}
In the case at hand, the operator $\mathcal{D}_1$ is trace-class and, as we saw, the operator $\mathcal{C}$ is bounded, so the theorem applies directly. As a consequence, we have,
\begin{equation}\label{indperturbhhgjhjghkjgjfgatrn}
\mathrm{ind}_t\mathcal{D}_{n}=\mathrm{ind}_t(\mathcal{D}_{1}+\mathcal{C})=\mathrm{ind}_t\mathcal{D}_{1}
\end{equation}
Therefore, by recalling how the Witten index is connected to the regularized index of the operator $\mathcal{D}_1$ (see relation (\ref{heatkerw})), we conclude that the modification of the domain wall solitonic profile has no effect on the Witten index of the supersymmetric algebra underlying the system, and thereby the $N=2$, $d=1$ supersymmetric algebra remains unbroken. The same of course applies for the remaining three $N=2$ supersymmetric algebras.
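The stability of the regularized index under a bounded perturbation can also be illustrated with finite matrices. In the sketch below, a rectangular matrix stands in for the operator (so that the index is nonzero) and a small matrix of the same shape plays the role of the bounded perturbation $\mathcal{C}$; both are illustrative assumptions. Since the nonzero spectra of $D^{\dag}D$ and $DD^{\dag}$ always coincide, the heat-kernel index only counts kernel dimensions and is unchanged by the perturbation.

```python
import numpy as np

def ind_t(D, t=1.0):
    """Heat-kernel regularized index tr e^{-t D^dag D} - tr e^{-t D D^dag}."""
    tr_minus = np.sum(np.exp(-t * np.linalg.eigvalsh(D.conj().T @ D)))
    tr_plus = np.sum(np.exp(-t * np.linalg.eigvalsh(D @ D.conj().T)))
    return tr_minus - tr_plus

rng = np.random.default_rng(2)
D = rng.normal(size=(3, 5))        # finite stand-in with a 2-dimensional kernel
C = 0.1 * rng.normal(size=(3, 5))  # bounded perturbation (domain-wall term)

assert np.isclose(ind_t(D), ind_t(D + C))   # index unaffected by C
print(ind_t(D), "==", ind_t(D + C))
```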
We note that if we add hopping effects \cite{graphnew1}, the Witten index still remains invariant, but we omit this analysis since the line of argument is essentially the same as in the case we just presented.
\section{Superconducting Graphene and One Dimensional Extended Supersymmetries: The Fredholm Operators Case}
It is well known that topological defects such as kinks, domain walls and vortices arise quite frequently in physical systems, as excitations in a background quantum field theory or in an ordered state of matter \cite{graphnew6}. Fermions existing in these topological backgrounds can have fractional quantum numbers, with the fractionalization mediated by zero-energy bound states of the fermions at the defect. The number of zero modes is a central feature of these systems and helps towards a complete understanding of the physical properties of the system consisting of fermions and defects \cite{graphnew1,wilzek,graphnew6}. This number of zero modes is usually given in terms of an index theorem \cite{graphnew1,wilzek,graphnew6}.
In view of the importance of zero modes in defect-fermion physical systems, we shall study the zero modes of superconducting graphene and express the number of electron zero modes of graphene in terms of a supersymmetric Witten index. We will explicitly demonstrate that, as in the case of gapped graphene, in superconducting graphene the electron zero modes constitute unbroken $N=2$, $d=1$ algebras, which actually combine to give an extended supersymmetric algebra that is much more complicated than in the gapped graphene case. The difference between the two cases is that in the superconducting graphene case, it is the zero modes of the fermions that constitute the supersymmetric algebras.
Superconductivity in graphene can be induced in multiple ways \cite{wilzek}, the simplest being the one involving multivortices \cite{wilzek}. The vortices in the superconducting graphene state acquire an interesting internal structure \cite{wilzek}, because each vortex supports a low energy mode of the Bogoliubov-de Gennes equations. These structured vortices resemble in some aspects vortices in superconductors, with the surface of topological insulators supporting Dirac zero modes. It is intriguing that a supersymmetric structure, quite similar to but much simpler than the one we present here, exists in topological insulators, as was demonstrated in \cite{oikonomousuperconductors}.
The full details of superconducting graphene can be found in the article by Ghaemi and Wilczek \cite{wilzek}; here we present only the information necessary for our presentation. The electron zero modes in the presence of a vortex in superconducting graphene are described by the Bogoliubov-de Gennes equation,
\begin{equation}\label{hamgraphsupercondgraphen}
\left ( \begin{array}{ccccc}
H_+^p+H_+^A & 0 & \Delta (r) & 0\\
0 & H_+^p+H_+^A & 0 & \Delta (r) \\
\Delta^* (r) & 0 & -H_+^p+H_+^A & 0 \\
0 & \Delta^* (r) & 0 & -H_+^p+H_+^A \\
\end{array}\right )\Psi =0
\end{equation}
with $\Psi $ being equal to the 4-component spinor,
\begin{equation}\label{formpsi3434345}
\Psi=\left(%
\begin{array}{c}
u_{-} \\
u_{+} \\
v_{-} \\
v_{+} \\
\end{array}%
\right)
\end{equation}
and with the operators $H_{\pm}^p$ and $H_{\pm}^A$ being equal to,
\begin{equation}\label{oprs}
H_{\pm}^p=-i(\sigma_x\partial_x\pm\sigma_y\partial_y),{\,}{\,}{\,}H_{\pm}^A=-q(\sigma_xA_x\pm \sigma_yA_y)
\end{equation}
Here $(A_x,A_y)$ denote the components of the electromagnetic vector potential describing the multivortices; the subscripts refer to valley index and pseudospin. The equations (\ref{hamgraphsupercondgraphen}) decouple into two independent sets, one involving $(u_+,v_{-})$ and the other $(u_{-},v_{+})$, with the two sets of solutions related by a reflection about the $x$ axis, so we focus on the first set of equations. Note that the supersymmetries we will find are doubled in the end, owing to the existence of these two sets of equations. By setting $v=\sigma_yu^*$, choosing $\vec{A}=-e_{\theta}A(r)$ and decomposing $u_{+}$ as follows,
\begin{equation}\label{formpsi3434345dec}
u_{+}=\left(%
\begin{array}{c}
a(r) \\
b(r) \\
\end{array}%
\right)
\end{equation}
we end up with the following set of equations,
\begin{align}\label{finaleqnsgraphe1}
& e^{i\theta}\left (\frac{\partial}{\partial r}+\frac{i}{r}\frac{\partial}{\partial \theta}\right ) a-q A(r)e^{i\theta}a+\Delta (r) a^*=0 \\ \notag & -e^{i\theta}\left (\frac{\partial}{\partial r}-\frac{i}{r}\frac{\partial}{\partial \theta}\right ) b+q A(r)e^{i\theta}b+\Delta (r) b^*=0
\end{align}
A rescaling can eliminate the vector field \cite{wilzek}, and we assume that the condensate function $\Delta (r)$ has the form $\Delta (r)=\Delta_n(r)e^{in\theta}$. The function $\Delta_n$ is assumed to behave as follows,
\begin{equation}\label{functionbehga}
\lim_{r\rightarrow 0}\Delta_n(r)\rightarrow r^{|n|},{\,}{\,}{\,}\lim_{r\rightarrow \infty}\Delta_n(r)\rightarrow \mathrm{const}
\end{equation}
a configuration that is appropriate to describe an $n$-fold multivortex \cite{wilzek}. We focus on the first equation of (\ref{finaleqnsgraphe1}); the $b(r)$ equation also contributes to the final number of supersymmetries. There are two kinds of solutions, as was explicitly shown in \cite{wilzek}, and we are interested in the one in which $a(r)$ is decomposed as $a(r)=f(r)e^{il\theta}+g(r)e^{im\theta}$, with $l+m=n-1$. Assuming real functions, we end up with the set of equations,
\begin{align}\label{seteqns}
& \frac{\mathrm{d}f(r)}{\mathrm{d}r}-\frac{l}{r}f(r)+\Delta_n(r)g(r)=0\\ \notag &
\frac{\mathrm{d}g(r)}{\mathrm{d}r}-\frac{m}{r}g(r)+\Delta_n(r)f(r)=0
\end{align}
We now turn to the solutions. For $n$ odd, there are $n-1$ zero modes for $a(r)$, with $n-1\geq l\geq 1$, while for $n$ even, $n-1\geq l\geq 0$. As for $b(r)$, there are $n$ zero modes for $n\geq 0$ and none for $n<0$. All the zero modes are assumed to be normalizable.
\subsection{Construction of $N=2$, $d=1$ Supersymmetries}
Along the same lines as in the gapped graphene case, we can construct the $N=2$, $d=1$ supersymmetries in a straightforward way. Our analysis is based on equation (\ref{seteqns}), which gives the zero modes of superconducting graphene corresponding to the function $u_+$. For each set $(n,m,l)$ we can construct an operator similar to $\mathcal{D}_1$ of the gapped graphene case. For simplicity we denote the triad of numbers by $i=(n,m,l)$; the operator constructed from (\ref{seteqns}) is the following,
\begin{equation}\label{dfsaskia}
\mathcal{D}_i=\left(%
\begin{array}{cc}
\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l}{r} & \Delta_n(r)
\\ \Delta_n(r) & \frac{\mathrm{d}}{\mathrm{d}r}-\frac{m}{r} \\
\end{array}%
\right)
\end{equation}
For each set of numbers $i$, we can form the following supercharges and Hamiltonians,
\begin{equation}\label{s7gsgdsgdgrddtriad}
\mathcal{Q}_{i}=\bigg{(}\begin{array}{ccc}
0 & \mathcal{D}_{i} \\
0 & 0 \\
\end{array}\bigg{)},{\,}{\,}{\,}\mathcal{Q}^{\dag}_{i}=\bigg{(}\begin{array}{ccc}
0 & 0 \\
\mathcal{D}_{i}^{\dag} & 0 \\
\end{array}\bigg{)},{\,}{\,}{\,}\mathcal{H}_{i}=\bigg{(}\begin{array}{ccc}
\mathcal{D}_{i}\mathcal{D}_{i}^{\dag} & 0 \\
0 & \mathcal{D}_{i}^{\dag}\mathcal{D}_{i} \\
\end{array}\bigg{)}
\end{equation}
which satisfy the relations,
\begin{equation}\label{relationsforsusysddssdgtriad}
\{\mathcal{Q}_{i},\mathcal{Q}^{\dag}_{i}\}=\mathcal{H}_{i}{\,}{\,},\mathcal{Q}_{i}^2=0,{\,}{\,}{\mathcal{Q}_{i}^{\dag}}^2=0
\end{equation}
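The first relation in (\ref{relationsforsusysddssdgtriad}) follows directly by block multiplication of the matrices in (\ref{s7gsgdsgdgrddtriad}); as a short check,

```latex
\begin{equation}
\mathcal{Q}_{i}\mathcal{Q}^{\dag}_{i}=\bigg{(}\begin{array}{cc}
\mathcal{D}_{i}\mathcal{D}_{i}^{\dag} & 0 \\
0 & 0 \\
\end{array}\bigg{)},{\,}{\,}{\,}
\mathcal{Q}^{\dag}_{i}\mathcal{Q}_{i}=\bigg{(}\begin{array}{cc}
0 & 0 \\
0 & \mathcal{D}_{i}^{\dag}\mathcal{D}_{i} \\
\end{array}\bigg{)}
\end{equation}
```

so that $\mathcal{Q}_{i}\mathcal{Q}^{\dag}_{i}+\mathcal{Q}^{\dag}_{i}\mathcal{Q}_{i}=\mathcal{H}_{i}$, while the strictly triangular form of $\mathcal{Q}_{i}$ and $\mathcal{Q}^{\dag}_{i}$ makes the nilpotency relations manifest.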
Therefore, since we have $n$ sets of such operators, we have $n$ distinct $N=2$, $d=1$ supersymmetries. Each of these supersymmetries shares all the characteristics we presented in the gapped graphene case, so we omit the details. In addition, each of these supersymmetries is unbroken. This is easy to demonstrate, and it is based on the fact that for each $n$ there are exactly $n-1$ zero modes. This means that the kernel of each of the operators is finite dimensional; in particular,
\begin{equation}\label{kefrdi}
\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_i=n-1
\end{equation}
In addition, the kernel of the adjoint operator $\mathcal{D}_i^{\dag}$ consists of exactly the same zero modes as that of $\mathcal{D}_i$, since the latter operator is self-adjoint; hence we have,
\begin{equation}\label{kefrdi1}
\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_i^{\dag}=n-1
\end{equation}
Since the operators $\mathcal{D}_i$ and $\mathcal{D}_i^{\dag}$ have finite dimensional kernels, they are Fredholm operators. Let us now formally address the supersymmetry breaking issue for Fredholm operators. Supersymmetry breaking is controlled by the Witten index, which in the finite kernel case is defined as follows,
\begin{equation}\label{phil}
\Delta =n_{-}-n_{+}
\end{equation}
with $n_+$ and $n_{-}$ the number of zero modes of the operators $\mathcal{D}_i\mathcal{D}_i^{\dag}$ and $\mathcal{D}_i^{\dag}\mathcal{D}_i$ respectively. When the Witten index is a non-zero integer, supersymmetry is unbroken. The case when the Witten index is zero is a bit more complicated: when $n_{+}=n_{-}=0$, supersymmetry is broken, but when $n_{+}= n_{-}\neq 0$, supersymmetry is not broken. The latter case applies to the operator $\mathcal{D}_i$. Indeed, recall that the Fredholm index of the operator $\mathcal{D}_i$ is equal to,
\begin{equation}\label{ker}
\mathrm{ind} \mathcal{D}_i = \mathrm{dim}{\,}\mathrm{ker}
\mathcal{D}_i-\mathrm{dim}{\,}\mathrm{ker} \mathcal{D}_i^{\dag}=
\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_i^{\dag}\mathcal{D}_i-\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_i\mathcal{D}_i^{\dag}
\end{equation}
Then, the Witten index is related to the Fredholm index as follows,
\begin{equation}\label{ker1}
\Delta=\mathrm{ind} \mathcal{D}_i=\mathrm{dim}{\,}\mathrm{ker}
\mathcal{D}_i^{\dag}\mathcal{D}_i-\mathrm{dim}{\,}\mathrm{ker} \mathcal{D}_i\mathcal{D}_i^{\dag}
\end{equation}
Owing to the fact that,
\begin{equation}\label{kefrdishfdh}
\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_i=\mathrm{dim}{\,}\mathrm{ker}\mathcal{D}_i^{\dag}=n-1\neq 0
\end{equation}
each of the $n$ different supersymmetries is unbroken. Bearing in mind that there is another set of solutions which we did not take into account (see below relation (\ref{oprs})), the final number of $N=2$, $d=1$ supersymmetries is actually $4n$. In the next section we focus on one type of these supersymmetries for simplicity, and we shall reveal the extended supersymmetric structure of these $n$ $N=2$ supersymmetries.
Before closing this section, it is worth mentioning that since the operators we dealt with in this section are Fredholm, compact perturbations of these operators leave the Witten index invariant, a standard property of the Fredholm index. This means that if we perturb the gap function $\Delta(r)$ in such a way that the perturbation is compact, supersymmetry remains unbroken. A similar conclusion was reached in \cite{wilzek}, but in relation to the Fredholm index of the corresponding Dirac operators.
\subsection{Extended Supersymmetric Structure for $(u_+,v_{-})$ Subsystem}
As in the gapped graphene case, we shall investigate whether the $n$ different supersymmetries found in the previous section combine to form an extended supersymmetric structure. As we shall explicitly demonstrate, they indeed combine to form a much more complicated structure than in the gapped graphene case, where we found only an $N=4$ supersymmetry with non-trivial supercharges. In the present case of superconducting graphene, we can form $n$ different supercharges of the following form:
\begin{equation}\label{wit2jdnhdgeneralc}
\mathcal{Q}_{i}=\bigg{(}\begin{array}{ccc}
0 & \mathcal{D}_{i} \\
0 & 0 \\
\end{array}\bigg{)}
\end{equation}
with $\mathcal{D}_{i}$ given in relation (\ref{dfsaskia}) and $i=1,2,\dots,n$. The supercharges $\mathcal{Q}_i$ form an $N=2n$ extended one dimensional supersymmetric algebra with non-trivial topological charges, described by the following relations,
\begin{align}\label{n4algbe1sjdjfgeneraldfgdfg}
&\{Q_{i},Q_{j}^{\dag}\}=2\delta_{ij}\mathcal{H}+\mathcal{Z}_{ij},{\,}{\,}{\,}{\,}i,j=1,2,\dots,n \\ \notag &
\{Q_{i},Q_{j}\}=0,{\,}{\,}\{Q_{i}^{\dag},Q_{j}^{\dag}\}=0
\end{align}
In the relation above, the generalized Hamiltonian $\mathcal{H}$ is equal to,
\begin{equation}\label{newsusymat121233}
\mathcal{H}=\left ( \begin{array}{ccccc}
\frac{\mathrm{d}^2}{\mathrm{d}r^2} & 0 & 0 & 0\\
0 & \frac{\mathrm{d}^2}{\mathrm{d}r^2} & 0 & 0 \\
0 & 0 & \frac{\mathrm{d}^2}{\mathrm{d}r^2} & 0 \\
0 & 0 & 0 & \frac{\mathrm{d}^2}{\mathrm{d}r^2} \\
\end{array}\right ).
\end{equation}
The topological charges appearing in (\ref{n4algbe1sjdjfgeneraldfgdfg}) can be classified more easily than in the gapped graphene case, since the operator $\mathcal{D}_i$ is self-adjoint. Each of the topological charges $\mathcal{Z}_{ii}$ is given by,
\begin{equation}\label{newsusymat22}
\mathcal{Z}_{ii}=\left ( \begin{array}{ccccc}
\mathcal{Z}^1_{ii} & 0 \\
0 & \mathcal{Z}^2_{ii } \\
\end{array}\right ).
\end{equation}
with the operator $\mathcal{Z}^1_{ii }$ being equal to,
\begin{equation}\label{newsusymat32}
\mathcal{Z}^1_{ii}=\left ( \begin{array}{cc}
-\frac{\mathrm{d}}{\mathrm{d}r}(\frac{l}{r})-\frac{l}{r}\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l^2}{r^2}+\Delta_n^2(r) & (\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l}{r})\Delta_n(r)+\Delta_n(r)(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m}{r}) \\
(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m}{r})\Delta_n(r)+\Delta_n(r)(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l}{r}) & -\frac{\mathrm{d}}{\mathrm{d}r}(\frac{m}{r})-\frac{m}{r}\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m^2}{r^2}+\Delta_n^2(r) \\
\end{array}\right )
\end{equation}
and with the operator $\mathcal{Z}^2_{ii }$ being equal to $\mathcal{Z}^1_{ii }$, that is $\mathcal{Z}^1_{ii }=\mathcal{Z}^2_{ii }$. In addition, the topological charges $\mathcal{Z}_{ij }$ are equal to,
\begin{equation}\label{newsusymat52}
\mathcal{Z}_{ i j }=\left ( \begin{array}{ccccc}
\mathcal{Z}^1_{ i j } & 0 \\
0 & \mathcal{Z}^2_{ i j } \\
\end{array}\right ).
\end{equation}
with $\mathcal{Z}^1_{ i j }$ being equal to the matrix,
\begin{equation}\label{newsusymat62}
\mathcal{Z}^1_{i j }=\left ( \begin{array}{cc}
\frac{\mathrm{d}^2}{\mathrm{d}r^2}-\frac{\mathrm{d}}{\mathrm{d}r}(\frac{l'}{r})-\frac{l}{r}\frac{\mathrm{d}}{\mathrm{d}r}-\frac{ll'}{r^2}+\Delta_n(r)\Delta_{n'}(r) & (\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l}{r})\Delta_{n'}(r)+\Delta_n(r)(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m'}{r}) \\
(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m}{r})\Delta_{n'}(r)+\Delta_n(r)(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l'}{r}) & (\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m}{r})(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m'}{r})+\Delta_n(r)\Delta_{n'}(r) \\
\end{array}\right )
\end{equation}
and the operator $\mathcal{Z}^2_{ i j }$ is given by,
\begin{equation}\label{newsusymat72}
\mathcal{Z}^2_{ i j }=\left ( \begin{array}{cc}
\frac{\mathrm{d}^2}{\mathrm{d}r^2}-\frac{\mathrm{d}}{\mathrm{d}r}(\frac{l}{r})-\frac{l'}{r}\frac{\mathrm{d}}{\mathrm{d}r}-\frac{ll'}{r^2}+\Delta_n(r)\Delta_{n'}(r) & (\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l'}{r})\Delta_{n}(r)+\Delta_{n'}(r)(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m}{r}) \\
(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m'}{r})\Delta_{n}(r)+\Delta_{n'}(r)(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{l}{r}) & (\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m'}{r})(\frac{\mathrm{d}}{\mathrm{d}r}-\frac{m}{r})+\Delta_n(r)\Delta_{n'}(r) \\
\end{array}\right )
\end{equation}
Therefore, given the number $n$, the electrons in superconducting graphene form an extended $N=2n$ one dimensional supersymmetry with non-trivial topological charges. Note that we have four sets of these $N=2n$ supersymmetries, for the same reasons as explained in detail in the $N=2$ case.
\section*{Conclusions}
In this paper we studied some field theoretic attributes of two graphene configurations, namely a gapped graphene setup and superconducting graphene. Specifically, we found that the electron states constitute a number of one dimensional $N=2$ supersymmetries in both gapped and superconducting graphene, and we explicitly demonstrated that these supersymmetries are unbroken in both cases. We note that these supersymmetries cannot be broken dynamically or spontaneously, since such breaking mechanisms are not available for one dimensional supersymmetries. In the case of gapped graphene, the $N=2$ supersymmetries combine to form an $N=4$ one dimensional supersymmetry with non-trivial topological charges. In the superconducting graphene case, the extended supersymmetric structure is much more involved and depends on the number of electron zero modes around the vortex defect: if there exist $n$ distinct zero modes, the extended supersymmetry is an $N=2n$ supersymmetry with non-trivial topological charges. In both cases the topological charges cannot be central charges, since they do not commute with all the operators of the algebra.
An interesting feature of both superconducting and gapped graphene is that the supersymmetries remain unbroken, a result that holds true even if hopping effects and compact perturbations of the gap function $\Delta (r)$ are taken into account. As we explicitly showed, the Witten index is robust against such changes. In the case of superconducting graphene, where zero modes are considered, this confirms our findings, since such changes never affect the Fredholm index of the Dirac operators associated with the system; supersymmetry thus offers another point of view on the problem at hand. The same could apply to gapped graphene, although there the modes have a specific energy eigenvalue. The supersymmetric perspective we adopted for gapped graphene is new and could possibly be an indicator of an underlying non-linear supersymmetry. The latter feature strongly supports the field theoretic limit of gapped graphene, which is also suggested and used in the relevant literature (see the review \cite{review}). We hope to address the field theoretic character of gapped graphene further in the future.
\section{History and Overview}
A real polynomial $f(x_1,\dots,x_n)$ is {\it psd} or {\it positive} if
$f(a) \ge
0$ for all $a \in \mathbb R^n$; it is {\it sos} or a {\it sum of
squares} if there exist real polynomials $h_j$ so that $f = \sum h_j^2$.
For forms, we follow the notation
of \cite{CL1} and use $P_{n,m}$ to denote the cone of real psd
forms of even degree $m$ in $n$ variables, $\Sigma_{n,m}$ to denote its
subcone of sos forms and let $\Delta_{n,m} = P_{n,m} \smallsetminus
\Sigma_{n,m}$. The
Fundamental Theorem of Algebra implies that $\Delta_{2,m} = \emptyset$;
$\Delta_{n,2} = \emptyset$ follows from the diagonalization of psd
quadratic forms.
The first suggestion that a psd form might not be sos was made by
Minkowski in the oral defense of his 1885 doctoral dissertation: Minkowski
proposed the thesis that not every psd form is sos. Hilbert was one of his
official ``opponents'' and remarked that Minkowski's
arguments had convinced him that this thesis should be true for
ternary forms. (See \cite{Hi3}, \cite{Min} and \cite{Sch}.) Three years later,
in a single remarkable paper, Hilbert \cite{Hi1} resolved the question.
He first showed that $F \in P_{3,4}$ is a sum of three squares of
quadratic forms; see \cite{Ru} and \cite{Sw} for recent
expositions and \cite{PR,PRSS} for another approach.
Hilbert then described a construction of
forms in $\Delta_{3,6}$ and $\Delta_{4,4}$;
after multiplying these by powers of linear forms if necessary, it
follows that $\Delta_{n,m} \neq \emptyset$ if $n \ge 3$ and $m \ge 6$ or
$n \ge 4$ and $m \ge 4$.
The goal of this paper is to
isolate the underlying mechanism of Hilbert's construction, show that
it applies to situations more general than those in
\cite{Hi1}, and use it to produce many new examples.
In \cite{Hi1}, Hilbert first worked with polynomials in two variables,
which homogenize to ternary forms.
Suppose $f_1(x,y)$ and $f_2(x,y)$ are two relatively
prime real cubic polynomials with nine distinct real
common zeros -- $\{\pi_i\}$, indexed arbitrarily -- so that no three
of the $\pi_i$'s lie on a line and no six lie on a quadratic. By counting
coefficients, one sees that there exists a non-zero quadratic $\phi(x,y)$
with zeros at
$\{\pi_1,\dots,\pi_5\}$ and a non-zero quartic $\psi(x,y)$ with the
same zeros, and which is singular at $\{\pi_6,\pi_7,\pi_8\}$:
the sextic $\phi\psi$ is thus singular at $\{\pi_1,\dots,\pi_8\}$. Hilbert
showed that $(\phi\psi)(\pi_9) \neq 0$ and that there exists
$c \neq 0$ so that the perturbed polynomial
$p = f_1^2 + f_2^2 + c\phi\psi$ is positive. If $p =
\sum h_j^2$, then each $h_j$ would be a cubic which vanishes on
$\{\pi_1,\dots,\pi_8\}$. But Cayley-Bacharach implies that $h_j(\pi_9) = 0$ for
each $j$, hence $p(\pi_9) = 0$, a contradiction. Thus, $p$ homogenizes
to a form $P \in \Delta_{3,6}$.
Hilbert also considered in \cite{Hi1} three relatively prime real quadratic
polynomials, $f_i(x,y,z)$, $1 \le i \le 3$, with
eight distinct real common zeros -- $\{\pi_i\}$, indexed arbitrarily --
so that no four of the zeros lie on a plane. There exists
a non-zero linear $\phi(x,y,z)$ with zeros at
$\{\pi_1,\pi_2,\pi_3\}$ and a non-zero cubic $\psi(x,y,z)$ with the
same zeros, and which is singular at $\{\pi_4,\pi_5,\pi_6,\pi_7\}$.
Similarly, $(\phi\psi)(\pi_8) \neq 0$ and there exists
$c \neq 0$ so that $f_1^2 + f_2^2 + f_3^2+ c\phi\psi$ is positive and not
sos. This homogenizes to a form in $\Delta_{4,4}$.
In 1893, Hilbert \cite{Hi2} showed that if
$F \in P_{3,m}$ with $m \ge 4$, then there exists a form
$G \in P_{3,m-4}$ and forms $H_{k}$, $1 \le k \le 3$, so that
$GF= H_{1}^2 + H_{2}^2 + H_{3}^2$.
(Hilbert's construction does not readily identify $G$ or the $H_k$'s.)
In particular, if $F \in P_{3,6}$, then there exists
$Q\in P_{3,2}$ so that $QF \in \Sigma_{3,8}$; since $Q\cdot QF \in
\Sigma_{3,10}$, $F$ is a sum of squares of rational functions with
common denominator $Q$. An iteration of this argument shows
that if $F \in P_{3,m}$, then there exists $G$ so that $G^2F$ is sos.
Hilbert's 17th Problem \cite{Hi23} asked whether this representation
as a sum of squares of rational functions exists for forms in
$P_{n,m}$ when $n \ge 4$.
For much more on the history of this subject up to 1999, see
\cite{Re2}. Recently, Blekherman \cite{Bl}
has shown that for fixed degree $m$, the ``probability'' that a psd
form is sos goes to 0 as $n$ increases. This result highlights the importance
of understanding psd forms which are not sos.
Hilbert's restriction on the common zeros meant
that no very simple or symmetric example could be constructed, and the first
explicit example of any $P \in \Delta_{n,m}$ did not appear for many
decades. The only two detailed references to Hilbert's construction
before the late 1960s (known to the author) are by
Terpstra \cite{Ter} (on biquadratic forms, related to
$\Delta_{4,6}$, thanks to Roland Hildebrand for the reference), and an
exposition \cite[pp.232-235]{GV} by Gel'fand and Vilenkin of the
sextic case only.
At a 1965 conference on inequalities,
Motzkin \cite{Mo} presented a specific sextic polynomial
$m(x,y)$ which is positive by the
arithmetic-geometric inequality and not sos by the
arrangement of monomials in its Newton polytope. (Hilbert's last
assistant, Olga Taussky-Todd, who
had a lifelong interest in sums of squares, heard Motzkin speak, and
informed him
that $m(x,y)$ was the first specific polynomial known to be positive but not
sos.) After
homogenization, Motzkin's example is
\begin{equation}
M(x,y,z) = x^4y^2 + x^2y^4 + z^6 - 3x^2y^2z^2 \in \Delta_{3,6}.
\end{equation}
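Both properties of (1.1) are easy to probe numerically; the following sketch (a sanity check, not a proof) verifies that $M$ vanishes at $(\pm 1,\pm 1,1)$, where the arithmetic-geometric inequality is tight, and is nonnegative on random samples, as that inequality guarantees.

```python
import itertools
import random

def motzkin(x, y, z):
    # The Motzkin form (1.1): psd by AM-GM applied to x^4y^2, x^2y^4, z^6.
    return x**4 * y**2 + x**2 * y**4 + z**6 - 3 * x**2 * y**2 * z**2

# AM-GM is tight at the zeros (+-1, +-1, 1).
for sx, sy in itertools.product([1, -1], repeat=2):
    assert motzkin(sx, sy, 1) == 0

# Nonnegativity on random samples (small tolerance for rounding).
random.seed(0)
assert all(motzkin(*(random.uniform(-2, 2) for _ in range(3))) >= -1e-9
           for _ in range(10_000))
print("Motzkin form: zeros and nonnegativity confirmed")
```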
Around the same time and independently,
R. M. Robinson \cite[p.264]{Ro} wrote that he
saw ``an unpublished example of a ternary
sextic worked out recently by W. J. Ellison using Hilbert's Method. It
is, as would be expected,
very complicated. After seeing this, I discovered that an astonishing
simplification would be possible
by dropping some unnecessary assumptions made by Hilbert."
Robinson observed that the cubics $f_1(x,y) = x^3-x$ and
$f_2(x,y) = y^3-y$ have nine common zeros: the $3 \times 3$ square
$\{-1,0,1\}^2$. There are eight lines which each contain three of the
zeros. Still, the sextic $(x^2-1)(y^2-1)(1-x^2-y^2)$ is positive
at (0,0) and singular at the other eight points. By taking the
maximum value for $c$ in Hilbert's construction and homogenizing,
Robinson showed that
\begin{equation}
R(x,y,z) = x^6 + y^6 + z^6 - x^4y^2 - x^2y^4 - x^4z^2-y^4z^2 -
x^2z^4-y^2z^4+3x^2y^2z^2
\end{equation}
is in $\Delta_{3,6}$. Similarly, by taking the three quadratics $x^2-x$,
$y^2-y$ and $z^2-z$, whose common zeros are $\{0,1\}^3$,
choosing $(1,1,1)$ as the eighth point, and then
homogenizing, Robinson showed that
\begin{equation}
\tilde R(x,y,z,w) = x^2(x-w)^2 + y^2(y-w)^2+z^2(z-w)^2+2xyz(x+y+z-2w)
\end{equation}
is in $\Delta_{4,4}$. (The only other published implementation of
Hilbert's Method known to the author is a 1979 sextic studied by Schm\"udgen
\cite{Schm} using $\{-2,0,2\}^2$, with ninth
point $(2,0)$.)
The papers of Motzkin and Robinson renewed interest in these
polynomials, and two more examples in the style
of $M$ were presented by Choi and Lam \cite{CL1,CL2}:
\begin{equation}
S(x,y,z) = x^4y^2 + y^4z^2 + z^4x^2 - 3x^2y^2z^2 \in \Delta_{3,6},
\end{equation}
\begin{equation}
Q(x,y,z,w) = x^2y^2 + x^2z^2 + y^2z^2 + w^4 - 4wxyz \in \Delta_{4,4}.
\end{equation}
Here is an overview of the rest of the paper.
In section two, we
present some preliminary material, mainly from curve theory; it is important
to consider reducible (as well as irreducible) polynomials.
In section three, we present our version of Hilbert's Method (see
Theorem 3.4), based on more general perturbations and contradictions.
There is a class of perturbations of a given positive polynomial with
fixed zeros by a polynomial which is singular at these zeros, in which
positivity is preserved. By counting dimensions, under certain
circumstances, there are
polynomials of degree $2d$ which are singular on a set $A$, but are
not in the vector space generated by products of pairs of polynomials of degree
$d$ which vanish on $A$. If such a polynomial is positive, it cannot be sos.
In Robinson's work, the set of cubics vanishing
at the eight points is spanned by $\{f_1,f_2\}$, but the vector space of
sextics which are singular at the eight points has dimension four and
so cannot be spanned by $\{f_1^2,f_1f_2,f_2^2\}$. It is not necessary
to construct $\phi$ and $\psi$ to find this new sextic, although its behavior
at the ninth point must be analyzed to show that
a successful perturbation is possible.
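Robinson's dimension counts above can be reproduced by linear algebra; the sketch below (using numpy, with exact small-integer evaluation matrices) computes the dimension of the cubics vanishing at the eight points and of the sextics singular there.

```python
import itertools
import numpy as np

# Robinson's configuration: the 3 x 3 grid {-1,0,1}^2, with the eight
# singular points excluding the origin.
eight = [p for p in itertools.product([-1, 0, 1], repeat=2) if p != (0, 0)]

monos3 = [(i, j) for i in range(4) for j in range(4 - i)]  # deg <= 3: 10 monomials
monos6 = [(i, j) for i in range(7) for j in range(7 - i)]  # deg <= 6: 28 monomials

# Cubics vanishing at the eight points (Cayley-Bacharach forces the ninth).
V = np.array([[a**i * b**j for i, j in monos3] for a, b in eight], float)
dim_cubics = len(monos3) - np.linalg.matrix_rank(V)

# Sextics singular at the eight points: value and both partials vanish.
def jet(a, b):
    val = [a**i * b**j for i, j in monos6]
    dx = [i * a**(i - 1) * b**j if i else 0.0 for i, j in monos6]
    dy = [j * a**i * b**(j - 1) if j else 0.0 for i, j in monos6]
    return [val, dx, dy]

W = np.array([row for a, b in eight for row in jet(a, b)], float)  # 24 x 28
dim_singular = len(monos6) - np.linalg.matrix_rank(W)

print(dim_cubics, dim_singular)  # 2 and 4: too big for span{f1^2, f1f2, f2^2}
```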
We show in Theorem 4.1 that Hilbert's Method works
when $f$ and $g$ are ternary cubics with exactly nine real
intersections, whether or not three are on a line or six on a
quadratic. (In other
words, Robinson's ``astonishing simplification'' always works.)
We also show that
Hilbert's Method applies to the set of cubics which vanish on a
set of seven zeros, no four on a line, not all on a quadratic; see
Theorem 4.3.
\begin{example}
Let
\begin{equation}
\begin{gathered}
{\mathcal A} =
\{(1,0,0),(0,1,0),(0,0,1),(1,1,1),(1,1,-1),(1,-1,1),(1,-1,-1)\}, \\
F_1(x,y,z) = x(y^2-z^2), F_2(x,y,z) = y(z^2-x^2), F_3(x,y,z)
= z(x^2-y^2), \\
G(x,y,z) = (x^2-y^2)(x^2-z^2)(y^2-z^2).
\end{gathered}
\end{equation}
It is easy to show that the $F_k$'s span the set of ternary cubics
which vanish on $\mathcal A$ and that $G$ is singular on $\mathcal A$
and not in the
span of the $F_jF_k$'s. It follows from Theorem 4.3 that for some
$c>0$, $P_c =
F_1^2+F_2^2+F_3^2+cG$ is psd and not sos. In fact, $P_1 = 2S$,
providing a new construction of (1.4).
\end{example}
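The identity $P_1 = 2S$ in Example 1.1 is a routine expansion; a short symbolic check (using sympy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

F1 = x * (y**2 - z**2)
F2 = y * (z**2 - x**2)
F3 = z * (x**2 - y**2)
G = (x**2 - y**2) * (x**2 - z**2) * (y**2 - z**2)
S = x**4*y**2 + y**4*z**2 + z**4*x**2 - 3*x**2*y**2*z**2  # Choi-Lam form (1.4)

P1 = F1**2 + F2**2 + F3**2 + G  # the perturbation of Example 1.1 with c = 1
assert sp.expand(P1 - 2 * S) == 0
print("P_1 = 2S verified")
```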
In section five, we look at the sections of the cones $P_{3,6}$ and
$\Sigma_{3,6}$ consisting of ternary sextics with the eight zeros of
Theorem 4.1. In addition to some general results,
we give a one-parameter family $\{R_t: t > 0\}$ of forms in
$\Delta_{3,6}$ with ten zeros and such that $R_1 = R$:
\begin{equation}
\begin{gathered}
R_t(x,y,z) := \\
\left(\frac{t^4+2t^2-3}{3} \right)(x^3-x z^2)^2 +
\left(\frac{1+2t^2-3t^4}{3t^4}\right)(y^3-y z^2)^2 + R(x,y,z).
\end{gathered}
\end{equation}
We give necessary and sufficient conditions for a sextic polynomial
$p(x,y)$ with zeros at $\{-1,0,1\}^2 \setminus (0,0)$ to be psd and to
be sos.
In section six, we present more examples in $\Delta_{3,6}$. This paper
would not be complete without an explicit illustration of Hilbert's
Method under his original restrictions. Theorems 4.1 and 4.3 and other
techniques are then applied to produce new forms in
$\Delta_{3,6}$, including one-parameter families which
include $R$, $S$ and $M$. For $t^2 < \frac 12$, let
\begin{equation}
\begin{gathered}
M_t(x,y,z) = (1-2t^2)(x^4y^2+x^2y^4) + t^4(x^4z^2+y^4z^2)\\- (3 -
8t^2+2t^4)x^2y^2z^2 -2t^2(x^2+y^2)z^4 + z^6;
\end{gathered}
\end{equation}
$M_t \in \Delta_{3,6}$ has ten zeros and $M_0 = M$. Let
\begin{equation}
\begin{gathered}
S_t(x,y,z) = t^4(x^6+y^6+z^6) + (1-2t^6)(x^4y^2+y^4z^2+z^4x^2)\\ +
(t^8 - 2t^2)(x^2y^4+y^2z^4+z^2x^4)-3(1-2t^2+t^4-2t^6+t^8)x^2y^2z^2;
\end{gathered}
\end{equation}
$S_t \in \Delta_{3,6}$ has ten zeros if $t>0$. Note that $S_0 =
S$ and $S_1 = R$, so $S_t$
provides a ``homotopy'' between $S$ and $R$ in
$\Delta_{3,6}$ in the set of forms with ten zeros. We also show that
\begin{equation}
U_c(x,y,z) = x^2y^2(x-y)^2 + y^2z^2(y-z)^2 + z^2x^2(z-x)^2 +
cxyz(x-y)(y-z)(z-x)
\end{equation}
is psd if and only if $|c| \le 4\sqrt{\sqrt 2 - 1}$ and sos only if $c=0$.
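The specializations asserted for the families (1.8) and (1.9), namely $M_0 = M$, $S_0 = S$ and $S_1 = R$, are polynomial identities that can be checked symbolically; a sympy sketch:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

M = x**4*y**2 + x**2*y**4 + z**6 - 3*x**2*y**2*z**2                 # (1.1)
R = (x**6 + y**6 + z**6 - x**4*y**2 - x**2*y**4 - x**4*z**2
     - y**4*z**2 - x**2*z**4 - y**2*z**4 + 3*x**2*y**2*z**2)        # (1.2)
S = x**4*y**2 + y**4*z**2 + z**4*x**2 - 3*x**2*y**2*z**2            # (1.4)

Mt = ((1 - 2*t**2)*(x**4*y**2 + x**2*y**4) + t**4*(x**4*z**2 + y**4*z**2)
      - (3 - 8*t**2 + 2*t**4)*x**2*y**2*z**2
      - 2*t**2*(x**2 + y**2)*z**4 + z**6)                           # (1.8)
St = (t**4*(x**6 + y**6 + z**6)
      + (1 - 2*t**6)*(x**4*y**2 + y**4*z**2 + z**4*x**2)
      + (t**8 - 2*t**2)*(x**2*y**4 + y**2*z**4 + z**2*x**4)
      - 3*(1 - 2*t**2 + t**4 - 2*t**6 + t**8)*x**2*y**2*z**2)       # (1.9)

assert sp.expand(Mt.subs(t, 0) - M) == 0  # M_0 = M
assert sp.expand(St.subs(t, 0) - S) == 0  # S_0 = S
assert sp.expand(St.subs(t, 1) - R) == 0  # S_1 = R
print("family specializations verified")
```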
We conclude the section by returning to a subject brought up by Robinson:
$(ax^2+by^2+cz^2)R(x,y,z)$ is sos if and only if $a,b,c \ge
0$ and $\sqrt a, \sqrt b, \sqrt c$ are
the sides of a (possibly degenerate) triangle.
In section seven, we discuss the zeros of extremal ternary forms,
using the perturbation argument from Hilbert's Method and show that
if $p \in \Delta_{3,6}$ has exactly ten zeros, then it is
extremal in the cone $P_{3,6}$. We present supporting evidence for the
conjecture that, at least in a limiting sense, all extremal forms in
$\Delta_{3,6}$ have ten zeros.
Finally, in section eight, we apply Hilbert's Method to provide a
family of positive polynomials in two variables in even degree $\ge 6$
which are not sos. We also
speculate on the general applicability of Hilbert's Method in
higher degree.
Bezout's Theorem becomes more complicated in more variables,
and for that reason, we have confined our
discussions to ternary forms. However, we wish to record
a somewhat unexpected connection between $\tilde R$ and $Q$
(c.f. (1.3), (1.5)):
\begin{equation}
\tilde R(x-w,y-w,z-w,x+y+z-w) = 2 Q(x,y,z,w).
\end{equation}
Robinson's example, after homogenization and this change in
variables, gives a new derivation of the Choi-Lam example.
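The change of variables (1.11) is a finite symbolic computation; a sympy check:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')

def R_tilde(a, b, c, d):
    # Robinson's quaternary quartic (1.3)
    return (a**2*(a - d)**2 + b**2*(b - d)**2 + c**2*(c - d)**2
            + 2*a*b*c*(a + b + c - 2*d))

Q = x**2*y**2 + x**2*z**2 + y**2*z**2 + w**4 - 4*w*x*y*z  # Choi-Lam form (1.5)

lhs = sp.expand(R_tilde(x - w, y - w, z - w, x + y + z - w))
assert sp.expand(lhs - 2 * Q) == 0
print("identity (1.11) verified")
```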
The set of quaternary quadratics which vanish on
\begin{equation}
\begin{gathered}
{\mathcal A} =
\{(1,0,0,0),(0,1,0,0),(0,0,1,0),\\ (1,1,1,1),(1,1,-1,-1),
(1,-1,1,-1),(1,-1,-1,1)\}
\end{gathered}
\end{equation}
is spanned by $\{xy - zw, xz - yw, xw - yz\}$, and any such quadratic
also vanishes at $(0,0,0,1)$. The form $Q$ is evidently psd by the
arithmetic-geometric inequality, singular on $\mathcal A$ and
positive at $(0,0,0,1)$, and so is not sos.
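All the finite checks in this argument — that the three listed quadratics and $Q$ vanish on $\mathcal A$, that $Q$ is singular there, and that $Q$ is positive at the forced zero $(0,0,0,1)$ — can be verified symbolically (the spanning claim itself is taken from the text above); a sympy sketch:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
Q = x**2*y**2 + x**2*z**2 + y**2*z**2 + w**4 - 4*w*x*y*z  # (1.5)

A = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0),
     (1, 1, 1, 1), (1, 1, -1, -1), (1, -1, 1, -1), (1, -1, -1, 1)]
basis = [x*y - z*w, x*z - y*w, x*w - y*z]  # quadratics vanishing on A

grad = [sp.diff(Q, v) for v in (x, y, z, w)]
for p in A:
    subs = dict(zip((x, y, z, w), p))
    assert Q.subs(subs) == 0                      # Q vanishes on A ...
    assert all(g.subs(subs) == 0 for g in grad)   # ... to second order
    assert all(q.subs(subs) == 0 for q in basis)  # basis quadratics vanish

# Each listed quadratic also vanishes at the forced zero (0,0,0,1),
# while Q is positive there:
e4 = dict(zip((x, y, z, w), (0, 0, 0, 1)))
assert all(q.subs(e4) == 0 for q in basis)
assert Q.subs(e4) == 1
print("Q is singular on A and positive at the forced zero")
```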
Parts of this paper have been presented at many conferences over the
last several years. The author thanks the organizers for their many
invitations to speak, and his friends and colleagues for
their encouragement and suggestions.
\section{Preliminaries}
Throughout this paper, we toggle between forms $F$ in $k$ variables
and polynomials $f$ in $k-1$ variables, with the ordinary convention that
\begin{equation}
\begin{gathered}
f(x_1,\dots,x_{k-1}) := F(x_1,\dots,x_{k-1},1), \\
F(x_1,\dots,x_k) := x_k^d f(\tfrac{x_1}{x_k}, \dots, \tfrac{x_{k-1}}{x_k}),
\end{gathered}
\end{equation}
where $d = \deg f$. For even $d$, it is easy to see that $F$ and $f$
are simultaneously psd or sos. It is usually more convenient to use
forms, since $F \in P_{k,m}$ if and only if $F(u) \ge 0$ for $u$ in
the compact set
$S^{k-1}$, simplifying perturbation. On the other
hand, the zeros of $f$ can be isolated, whereas those of $F$ are not.
Following \cite{CLR1}, we define the {\it zero-set} of any $k$-ary $m$-ic
form $F$ by
\begin{equation}
{\mathcal Z}(F):= \{(a_1,\dots,a_k) \in \mathbb R^k\ : \ F(a_1,\dots,a_k) =0 \}.
\end{equation}
We have $0 \notin {\mathcal Z}(F)$ by convention, $|{\mathcal Z}(F)|$ will be
interpreted as the number of lines in ${\mathcal Z}(F)$ and
only one representative of each line need be given.
If $a \in {\mathcal Z}(F)$ and $a_k \neq 0$, then $a$ corresponds to
a unique zero of $f$; if $a_k = 0$, then $a$
corresponds to a {\it zero of $f$ at infinity}. We also define
\begin{equation}
{\mathcal Z}(f):= \{(a_1,\dots,a_{k-1}) \in \mathbb R^{k-1}\ : \
f(a_1,\dots,a_{k-1}) =0 \},
\end{equation}
for non-homogeneous $f$. It is
possible for a strictly positive $f$ to have zeros at infinity.
Consider $f(x,y) = x^2 + (xy-1)^2$ (and $F(x,y,z) = x^2z^2 + (xy-z^2)^2$):
clearly, $f(a,b)>0$ for $(a,b) \in \mathbb R^2$ and ${\mathcal Z}(F) =
\{(1,0,0),(0,1,0)\}$.
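This example can be checked directly from the convention (2.1); a sympy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + (x*y - 1)**2
# Homogenize via (2.1) with d = deg f = 4.
F = sp.expand(z**4 * f.subs({x: x/z, y: y/z}))

assert sp.expand(F - (x**2*z**2 + (x*y - z**2)**2)) == 0
# F vanishes at (1,0,0) and (0,1,0): zeros of f at infinity ...
assert F.subs({x: 1, y: 0, z: 0}) == 0
assert F.subs({x: 0, y: 1, z: 0}) == 0
# ... even though f itself is strictly positive; it only approaches 0:
assert f.subs({x: sp.Rational(1, 100), y: 100}) == sp.Rational(1, 10000)
print("zeros at infinity confirmed")
```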
If $f$ is positive and $a \in {\mathcal Z}(f)$, then of course $\frac{\partial
f}{\partial x_i}(a) = 0$
for all $i$. We shall say that $f$ is {\it round at $a$} if $f_a$, the
second-order component of the
Taylor series to $f$ at $a$, is a positive definite quadratic
form. This is a ``singular non troppo''
zero for a positive polynomial. The
corresponding second-order component of Taylor series for $F$ is psd
but not positive definite, since $F$ vanishes on lines through the
origin.
If $F \in P_{n,m}$ (resp. $\Sigma_{n,m}$), and $G$ is derived from $F$
by an invertible linear change of variables, then $G \in P_{n,m}$
(resp. $\Sigma_{n,m}$). Thus, it is harmless to assume when convenient
that ${\mathcal Z}(F)$ avoids the hyperplane $a_{n} = 0$; that is, $f$ has no zeros at
infinity.
Let $\mathbb R_{n,d} \subset \mathbb R[x_1,\dots,x_n]$ denote the
$\binom{n+d}n$-dimensional vector space of real polynomials
$f(x_1,\dots,x_n)$ with $\deg f \le d$. Suppose
$A = \{\pi_1,\dots,\pi_r\} \subset \mathbb R^n$ is given. Let
$I_{s,d}(A)$ denote the vector space of those $p \in \mathbb R_{n,d}$
which have an
$s$-th order zero at each $\pi_j$. In particular,
\begin{equation}
\begin{gathered}
I_{1,d}(A) = \{ p \in \mathbb R_{n,d}\ : \ p(\pi_j) = 0, \quad 1 \le j
\le r \}; \\
I_{2,2d}(A) = \left\{ p \in \mathbb R_{n,2d}\ : \ p(\pi_j) = \tfrac {\partial
p}{\partial x_i}(\pi_j) = 0, \quad 1 \le i \le n,\quad 1 \le j \le r
\right\}.
\end{gathered}
\end{equation}
Since an $s$-th order zero in $n$ variables imposes $\binom{n+s-1}n$
linear conditions,
\begin{equation}
\dim I_{s,d}(A) \ge \binom{n+d}n - r\binom{n+s-1}n.
\end{equation}
In Hilbert's sextic construction, $A = \{\pi_1,\dots,\pi_9\}$ is the
set of common zeros of $f_1(x,y)$ and $f_2(x,y)$,
and $\dim(I_{1,3}(A)) = 2 > \binom 52 - 9\binom 22 = 1$.
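The counts coming from the dimension bound recur below; as a quick check (the helper name is ours), the instances used in this paper are:

```python
from math import comb

def lower_bound(n, d, s, r):
    # dim I_{s,d}(A) >= C(n+d, n) - r * C(n+s-1, n), the bound displayed above
    return comb(n + d, n) - r * comb(n + s - 1, n)

# Hilbert's sextic construction: 9 simple base points for plane cubics
assert lower_bound(2, 3, 1, 9) == 1   # yet dim I_{1,3}(A) = 2 > 1

# second-order vanishing for sextics, used in Section 4
assert lower_bound(2, 6, 2, 8) == 4   # 28 - 8*3
assert lower_bound(2, 6, 2, 7) == 7   # 28 - 7*3
```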
Let
\begin{equation}
I_{1,d}^2(A): = \left\{ \sum f_ig_i\ : \ f_i, g_i \in I_{1,d}(A) \right\}.
\end{equation}
Clearly, $I_{1,d}^2(A) \subseteq I_{2,2d}(A)$.
It is essential to Hilbert's Method that this inclusion may be strict;
for example, $\phi\psi(\pi_9) > 0$ so
$\phi\psi \in I_{2,6}(A) \smallsetminus I_{1,3}^2(A)$.
We also need to consider the ``forced'' zeros, familiar from the
Cayley-Bacharach Theorem; see \cite{EGH}. Suppose
$A\subset \mathbb R^n$ and $I_{1,d}(A)$ are given as above. Let
\begin{equation}
\tilde A := \bigcap_{j=1}^r {\mathcal Z}(f_j)
\smallsetminus A = {\mathcal Z} \biggl(\sum_{j=1}^r f_j^2 \biggr)
\smallsetminus A.
\end{equation}
Unfortunately, this notation fails to capture forced zeros at
infinity. Accordingly, for $A \subset \mathbb R^n$, define the
associated projective set ${\mathcal A} \subset \mathbb R^{n+1}$ by
\begin{equation}
(a_1,\dots,a_n) \in A \iff (a_1,\dots, a_n,1) \in {\mathcal A}.
\end{equation}
As before, we define $I_{s,d}({\mathcal A})$ to be the set of $d$-ic
forms $F(x_1,\dots,x_{n+1})$ which have $s$-th order zeros on
$\mathcal A$. Then $f \in I_{s,d}(A)$ if and only if $F \in I_{s,d}({\mathcal
A})$. We define
\begin{equation}
\tilde {\mathcal A} := \bigcap_{j=1}^r {\mathcal Z}(F_j)
\smallsetminus {\mathcal A} = {\mathcal Z} \biggl(\sum_{j=1}^r F_j^2 \biggr)
\smallsetminus {\mathcal A}.
\end{equation}
Given $A \subset \mathbb R^n$, $\tilde{\mathcal A} = \emptyset$
when there are no forced zeros, even at infinity.
We say that $I_{1,d}(A)$ is {\it full} if, for any $\pi \in A$ and $v
\in \mathbb R^n$, there exists $f\in I_{1,d}(A)$ such that $\vec\nabla
f(\pi) = v$. Equivalently, if $\{f_1,\dots,f_s\}$ is a basis for
$I_{1,d}(A)$ and $f = \sum_j f_j^2$, then $I_{1,d}(A)$ is full if and
only if $f$ is round at each $\pi \in A$.
Bezout's Theorem in a relatively simple form is essential to our
proofs. Suppose
$f_1(x,y)$ and $f_2(x,y)$ are relatively prime polynomials of degrees
$d_1$ and $d_2$. Let ${\mathcal Z} \subset \mathbb C^2$ denote the set of
common (complex) zeros of $f_1$ and $f_2$. Then
\begin{equation}
d_1d_2 = \sum_{\pi \in {\mathcal Z}} {\mathcal I}_\pi(f_1,f_2),
\end{equation}
where ${\mathcal I}_\pi(f_1,f_2)$ measures the singularity of the intersection
of the curves $f_1=0$ and $f_2=0$ at $\pi$.
In particular, ${\mathcal I}_\pi(f_1,f_2) = 1$ if and only if the curves
$f_1=0$ and $f_2=0$ are nonsingular at $\pi$ and have different tangents.
Thus, ${\mathcal I}_\pi(f_1,f_2) = 1$ if and only if
$f_1^2+f_2^2$ is round at $\pi$,
and ${\mathcal I}_\pi(f_1,f_2) \ge 2$ otherwise. If $f_1$ and $f_2$ are both
singular at $\pi$, then ${\mathcal I}_\pi(f_1,f_2) \ge 4$.
\begin{lemma}
Suppose $f_1(x,y), f_2(x,y) \in \mathbb R_{2,d}$ and $|{\mathcal Z}(f_1) \cap
{\mathcal Z} (f_2)| = d^2$. If $A \subseteq {\mathcal Z}(f_1) \cap {\mathcal Z} (f_2)$ is
such that $I_{1,d}(A)$ has basis $\{f_1,f_2\}$, then $A$ is full.
\end{lemma}
\begin{proof}
It follows from (2.10) that any common zero of $f_1$ and $f_2$
must be real, and that ${\mathcal I}_\pi(f_1,f_2) = 1$ for each
common zero $\pi$. Hence $f_1^2+f_2^2$ is round at each $\pi \in A$; that is, $A$ is full.
\end{proof}
The next proposition collects some useful information from curve
theory. As is customary, if $f(\pi) = 0$, we say that
$\pi$ {\it lies on $f$} or $f$ {\it contains} $\pi$.
\begin{proposition}
All polynomials herein
are assumed to be in $\mathbb R[x,y]$, and all
enumerated sets of points are assumed to be distinct. These
results apply to ternary forms with the obvious modifications.
\begin{enumerate}
\item If a quadratic $q$ is singular at $\pi$ and $q(\pi') = 0$ for
some $\pi' \neq \pi$, then $q = \ell_1\ell_2$ is a product of two
linear forms $\ell_j$ containing $\pi$.
\item If a set of eight points $A = \{\pi_1,\dots,\pi_8\}$
is given, no four on a line and no seven on a quadratic, then
$\dim I_{1,3}(A) = 2$.
\item In the situation of (2), if $A_j = A \smallsetminus \{\pi_j\}$, then
there exists a cubic $f$ so that $f |_{A_j} = 0$, but $f(\pi_j) \neq
0$; in particular, $\dim I_{1,3}(A_j) = 3$.
\item Suppose $f(x,y)$ and $g(x,y)$ are cubics, $A = {\mathcal Z}(f) \cap
{\mathcal Z} (g) = \{\pi_1,\dots,\pi_9\}$ and $A_j = A \smallsetminus \{\pi_j\}$.
For each $j$,
$I_{1,3}(A_j) = I_{1,3}(A)$. In other words, if eight of the points
lie on a cubic $h$, then so will the ninth.
\item Under the same conditions as (4), no four of the $\pi_i$'s
lie on a line and no seven lie on a quadratic. Three of the
$\pi_i$'s lie on a line if and
only if the other six lie on a quadratic if
and only if $I_{1,3}(A)$ contains a reducible cubic.
\end{enumerate}
\end{proposition}
\begin{proof}
For (1), write $q(x,y) = a + bx + cy + dx^2 + exy + fy^2$ and assume by
translation that $\pi = (0,0)$. Then $a=b=c=0$ and $q(x,y) = dx^2 +
exy + fy^2$. If $\pi' = (r,s) \neq (0,0)$, then $sx - ry$ is a factor
of $q$. The next two assertions are classical and proofs can be found,
for example, in \cite[Ch.15]{Bi}; (4) is well-known and is often
attributed to Cayley-Bacharach, but it was discovered by Chasles; see \cite{EGH}.
For (5),
if four $\pi_i$'s lie on a line $\ell$, then $\ell$ divides both
$f$ and $g$ by Bezout, so that $|{\mathcal Z}(f) \cap {\mathcal Z} (g)| = \infty$. If seven
$\pi_i$'s lie on a reducible quadratic $q = \ell_1\ell_2$, then at
least four lie on one $\ell_i$, and we are in the earlier case. If
they lie on an irreducible $q$,
then it must be indefinite, and again, $q$ divides both
$f$ and $g$ by Bezout, so that $|{\mathcal Z}(f) \cap {\mathcal Z} (g)| = \infty$.
Suppose now that three points of $A$, say $\{\pi_1,\pi_2,\pi_3\}$, lie
on the line $\ell$ and let
$q$ be the quadratic containing $\{\pi_4,\dots,\pi_8\}$. Then $\ell q =
0$ on $A_9$, so by (4), $(\ell q)(\pi_9) = 0$. Since $\ell(\pi_9)\neq 0$,
we must have $q(\pi_9) = 0$; thus six zeros lie on $q$ and $\ell q \in
I_{1,3}(A)$.
(A similar proof follows if we start with six points lying on the
quadratic $q$.) Finally, if $\ell q \in
I_{1,3}(A)$, then at most three of the $\pi_i$'s can lie on $\ell$, and
at most six can lie on $q$, hence these numbers are exact.
\end{proof}
\begin{lemma}
Suppose $A$ is a set of eight distinct points, no four on a line and no seven
on a quadratic, and let $\{f_1,f_2\}$ be a basis for
$I_{1,3}(A)$. Then $f_1$ and $f_2$ are relatively prime.
\end{lemma}
\begin{proof}
If $f_1$ and $f_2$ have a common quadratic factor $q$,
then $f_i = \ell_i q$ and at most six points of $A$ lie on $q$, so
$\ell_1$ and $\ell_2$ share two points and so are proportional, a
contradiction. If $f_1$ and $f_2$ have only a common linear factor $\ell$,
then $f_i = \ell q_i$, and at most three points of $A$ lie on $\ell$, so
$q_1$ and $q_2$ share five points and so are proportional, again a
contradiction.
\end{proof}
In the situation of Lemma 2.3,
Bezout's Theorem has one of three possible implications:
(a) there is a ninth point $\pi \in \tilde A$ so that $f_1(\pi) = f_2(\pi) =
0$; (b) $\tilde A = \emptyset$, but $(a,b,0) \in \tilde {\mathcal A}$
is a common zero of $f_1$ and $f_2$ at infinity; (c)
${\mathcal I}_\pi(f_1,f_2) = 2$
for some $\pi \in A$. The first two cases are essentially the same: if (b)
occurs, we homogenize and change variables so that the zero is no
longer at infinity after dehomogenization. Any necessary construction
can then be performed, and the variables changed back.
The third case is singular, but seems to be difficult to identify in
advance, and is equivalent to the existence of a cubic in $I_{1,3}(A)$
which is singular at some $\pi \in A$.
We shall say that a set of eight points $A$ for which (a) or (b)
occurs is {\it
copacetic}. Since $f_1$ and $f_2$ are real, $f_1(\pi) = f_2(\pi) = 0
\implies f_1(\bar \pi) = f_2(\bar \pi) = 0$. Bezout implies that
$\pi = \bar \pi$;
that is, the ninth point $\pi$ must be real. We have the following corollary
to Lemma 2.1.
\begin{lemma}
If $A$ is copacetic, then it is full.
\end{lemma}
The following lemma was probably known a hundred years ago.
\begin{lemma}
Suppose seven points $A = \{\pi_1,\dots,\pi_7\}$ in the plane are given,
not all on a quadratic and no four on a line. Then, up to multiple,
there is a unique cubic $f(x,y)$ which is singular at $\pi_1$ and contains
$\{\pi_2,\dots,\pi_7\}$.
\end{lemma}
\begin{proof}
Since $1 \cdot 3+6\cdot 1 = 9 < 10$ linear conditions are imposed, at least one such
nonzero $f$
exists. Suppose $f_1$ and $f_2$ satisfy these properties and are not
proportional. Then
$\sum_j {\mathcal I}_{\pi_j}(f_1,f_2) \ge 2^2 + 6\cdot 1 > 3 \cdot 3$, hence
$f_1$ and $f_2$ have a common factor. The common factor could be an
irreducible quadratic, a reducible quadratic, or linear.
In the first case, $f_1 = \ell_1 q$ and $f_2 = \ell_2 q$, where
$q(\pi_1) = \ell_i(\pi_1) = 0$ by Prop. 2.2(1). At least one point,
say $\pi_7$, does not lie on $q$, hence $\ell_i(\pi_7) = 0$ as well.
Thus the two $\ell_i$'s share two zeros and are proportional, a contradiction.
In the second case, we have $f_1 = \ell_1\ell_2\ell_3$ and $f_2 =
\ell_1\ell_2\ell_4$, and $\pi_1$ lies on at least two of
$\{\ell_1,\ell_2,\ell_3\}$ and two of
$\{\ell_1,\ell_2,\ell_4\}$. If $\ell_1(\pi_1) = \ell_2(\pi_1) = 0$,
then $\ell_1$ and $\ell_2$ together can contain at most four of the six points
$\{\pi_2,\dots,\pi_7\}$, hence $\ell_3$ and $\ell_4$ must share
at least two of the remaining points, and so are proportional, again
a contradiction. Otherwise, without loss of generality,
$\ell_1(\pi_1)=0$ and $\ell_2(\pi_1) \neq 0$, hence
$\ell_3(\pi_1)=\ell_4(\pi_1)=0$.
In this case, $\ell_1$ and $\ell_2$ can
together contain at most five of the six points
$\{\pi_2,\dots,\pi_7\}$, so that
$\ell_3$ and $\ell_4$ must both contain some $\pi_j$ other than $\pi_1$;
since they also share $\pi_1$, they are proportional, again a contradiction.
Finally, suppose $f_1 = \ell q_1$ and $f_2 = \ell q_2$, where $q_1$
and $q_2$ are relatively prime quadratics, so they share at most four
points. If $\ell(\pi_1) = 0$, then $q_j(\pi_1) = 0$ as well and since
at least four of $\{\pi_2,\dots,\pi_7\}$ do not lie on $\ell$, they
must lie on both $q_1$ and $q_2$. Thus $q_1$
and $q_2$ share five points, a contradiction. If $\ell(\pi_1)
\neq 0$, then $f_1 = \ell \ell_1 \ell_2$ and $f_2 = \ell \ell_3 \ell_4$,
where the $\ell_i$'s are distinct lines containing $\pi_1$. But if
$\ell(\pi_j) \neq 0$ (which is true for at least three $\pi_j$'s) then
$\pi_j$ must also lie on one of $\{\ell_1,\ell_2\}$ and one of
$\{\ell_3,\ell_4\}$. That is, the line through $\pi_1$ and $\pi_j$
divides both $\ell_1\ell_2$ and $\ell_3\ell_4$, a final contradiction.
\end{proof}
The last lemma in this section is used in the proof of Theorem 4.3.
\begin{lemma}
If $d=3$ and $A$ is a set of seven points in $\mathbb R^2$, no four on a line
and not all on a quadratic, then $A$ is full and
$\tilde{\mathcal A} = \emptyset.$
\end{lemma}
\begin{proof}
Choose $\pi_8$ to avoid every line through two points of $A$ and every
quadratic determined by five points of $A$. Then $A \cup \{\pi_8\}$
has no four points in a line and no seven on a quadratic, and so
$\dim I_{1,3}(A) = 3$ by Prop. 2.2(3). Suppose $\{f_1,f_2,f_3\}$ is
a basis for $I_{1,3}(A)$ and for each $j$, consider the map
\begin{equation}
T_j: (c_1,c_2,c_3) \mapsto \sum_{k=1}^3 c_k \vec\nabla f_k(\pi_j).
\end{equation}
By Lemma 2.5, $\dim(\ker(T_j)) = 1$, hence each $T_j$ is surjective, and
so $A$ is full.
Suppose $\pi \in \tilde {\mathcal A}$; after an invertible linear
change, we may assume without loss of generality that $\pi \in \tilde
A$. By the contrapositive to Prop. 2.2(2), either $A \cup
\{\pi\}$ has four points on
a line or has seven points on a quadratic.
Again, choose $\pi_8$ so that $A_1 = A \cup \{\pi_8\}$ has no four
points in a line and no seven on a quadratic. By Prop. 2.2(2),
we may assume without loss of generality that $I_{1,3}(A_1)$ has basis
$\{f_1,f_2\}$, so $\pi \in \tilde A_1$. Let $A_2 = A_1 \cup
\{\pi\}$. Thus $f_1$ and $f_2$ are two cubics which vanish on a set
$A_2$ with four points on a line $\ell$ or seven points on a quadratic
$q$, and so $f_1$ and $f_2$ have a common factor by Bezout, a
contradiction.
\end{proof}
\section{Hilbert's Method}
We begin this section with a general perturbation result.
\begin{lemma}[The Perturbation Lemma]
Suppose $f,g \in \mathbb R_{n,2d}$ satisfy the following conditions:
\begin{enumerate}
\item The polynomial $f$ is positive with no zeros at infinity,
and $2d = \deg f \ge \deg g$;
\item There is a finite set $V_1$ so that if $v \in
V_1$, then $f$ is round at $v$ and $g$ vanishes to second-order at $v$;
\item The set $V_2:= {\mathcal Z}(f)\smallsetminus V_1$ is finite and if $w \in
V_2$, then $g(w) > 0$.
\end{enumerate}
Then there exists $c = c(f,g)>0$ so that $f+cg $ is a positive polynomial.
\end{lemma}
\begin{proof}
For $v \in V_1$, let $g_v$ denote the second-order (lowest degree)
term of the Taylor
series for $g$ at $v$. Since $f_v$ is positive definite, there
exists $\alpha(v) > 0$ so that $f_v + \alpha g_v$ is positive definite for $0
\le \alpha \le \alpha(v)$.
If $\alpha_0 = \min_v \alpha(v)$, then there exist neighborhoods
$\mathcal N_v$ of each $v$ so that $f + \alpha_0 g$ is
positive on each ${\mathcal N_v} \smallsetminus \{v\}$.
Further, for $w\in V_2$, $(f+\alpha_0 g)(w) = \alpha_0 g(w) > 0$,
hence there is a neighborhood $\mathcal N_w$ of $w$ on which $f + \alpha_0
g$ is positive. It follows that $f + \alpha_0g$ is non-negative on the open set
$\mathcal N = \cup \mathcal N_v \cup \mathcal N_w$.
Homogenize $f,g$ to forms $F,G$ of degree $2d$ in $n+1$ variables. For
$x \in \mathbb R^{n}$, let $||x||
= (1+\sum_i x_i^2)^{1/2}$ and let $\widetilde {\mathcal N}$ be
the image of $\mathcal N$ under the map
\begin{equation}
(x_1,\dots,x_n) \mapsto \left( \frac{x_1}{||x||},\dots,
\frac{x_n}{||x||}, \frac 1{||x||}\right) \in S^n.
\end{equation}
Then $\widetilde {\mathcal N}$ is open and $(F+\alpha_0 G)(x) \ge 0$ for $x
\in \widetilde {\mathcal N}$. By hypothesis, ${\mathcal Z}(F) \subset
\widetilde {\mathcal N}$, hence $F$ is positive on the complement
$\widetilde{\mathcal N}^c$, so it achieves a positive minimum on the
compact set
$\widetilde{\mathcal N}^c$. Since $G$ is bounded on $S^n$, there exists
$\beta > 0$ so that $(F+\beta G)(x) \ge 0$ for $x \in \widetilde
{\mathcal N}^c$. It follows that
$F+cG$ is psd, where $c = \min\{\alpha_0,\beta\}$. The desired result follows
upon dehomogenizing.
\end{proof}
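The Perturbation Lemma can be sanity-checked on Robinson's data (introduced later in this paper). The sketch below, in exact rational arithmetic, verifies that the perturbed polynomial still vanishes on the eight base points and is nonnegative on a sample grid for the particular choice $c=1$; the lemma itself only guarantees some $c>0$:

```python
from fractions import Fraction

def f(x, y):
    # f = f1^2 + f2^2 with f1 = x^3 - x, f2 = y^3 - y; round at its eight zeros
    return (x**3 - x)**2 + (y**3 - y)**2

def g(x, y):
    # Robinson's perturbation, vanishing to second order at each zero of f
    return (x**2 - 1) * (y**2 - 1) * (1 - x**2 - y**2)

def p(x, y, c=1):
    # the perturbed polynomial f + c*g
    return f(x, y) + c * g(x, y)

zeros = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1) if (x, y) != (0, 0)]

# the perturbation does not disturb the common zeros
assert all(p(x, y) == 0 for (x, y) in zeros)

# exact nonnegativity of f + g (Robinson's polynomial) on a rational grid
grid = [Fraction(k, 4) for k in range(-8, 9)]
assert min(p(x, y) for x in grid for y in grid) == 0
```

A grid check is of course only evidence, not a proof, of positivity; the exact statement is Theorem 4.1 below.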
The following two theorems generalize the contradiction of Hilbert's
construction.
\begin{theorem}
If $p \in I_{2,2d}(A)$ is sos, then $p \in I_{1,d}^2(A)$.
\end{theorem}
\begin{proof}
If $p = \sum_k h_k^2$, then $p(a) = 0$ for $a \in A$, hence $h_k(a) =
0$, and so $h_k \in I_{1,d}(A)$, implying $p \in I_{1,d}^2 (A)$.
\end{proof}
Let $I_{1,d}(A)$ have basis $\{f_1,\dots,f_r\}$, and suppose
the $\binom {r+1}2$ polynomials $f_if_j, 1 \le i \le j \le r$ are
linearly independent; in other words, for each $p \in I_{1,d}^2(A)$ there is
a unique quadratic form $Q$ so that $p = Q(f_1,\dots,f_r)$. We call
this the {\it independent case}. (We have been unable to find $I_{1,d}(A)$
for which this does not hold.) Let
\begin{equation}
R_f:= \{ (f_1(x),\dots,f_r(x))\ : \ x \in \mathbb R^n \} \subseteq \mathbb
R^r
\end{equation}
denote the range of the basis as an $r$-tuple.
\begin{theorem}
Suppose $p = Q(f_1,\dots,f_r) \in I_{1,d}^2(A)$ in the independent case:
\begin{enumerate}
\item $p$ is sos if and only if $Q$ is an sos quadratic form;
\item $p$ is psd if and only if $Q(u) \ge 0$ for $u \in R_f$;
\item if $n=2$, $r=2$, and $f_1$ and $f_2$ are relatively prime
polynomials with odd degree $d$, then $R_f
= {\mathbb R}^2$, hence $p \in I_{1,d}^2(A)$ is psd if and only if it is sos.
\end{enumerate}
\end{theorem}
\begin{proof}
If $p = \sum_k h_k^2$ is sos, then as in the last proof, $h_k \in
I_{1,d}(A)$. To be specific, if $h_k = \sum_\ell
c_{k\ell} f_\ell$, then by the uniqueness of $Q$,
$Q(u_1,\dots,u_r) = \sum_k
\bigl(\sum_\ell c_{k\ell}u_\ell\bigr)^2$. Conversely, if $Q = \sum_\ell T_\ell^2$ for
linear forms $T_\ell$, then $p = \sum_\ell T_\ell(f_1,\dots,f_r)^2$.
The assertion in (2) is immediate.
For (3), we first note that, since
$(f_1(\lambda x), f_2(\lambda x)) = \lambda^d (f_1(x),f_2(x))$, it suffices to
show that every line through the origin intersects $R_f$. By
hypothesis, ${\mathcal Z}(f_1)$ and ${\mathcal Z}(f_2)$ are infinite sets, but
$|{\mathcal Z}(f_1) \cap {\mathcal Z}(f_2)| \le d^2$.
It follows that there exist $\pi$ and $\pi'$ so that
$(f_1(\pi),f_2(\pi)) = (1,0)$ and $(f_1(\pi'),f_2(\pi')) = (0,1)$. Now
take a curve $\gamma(t) \in {\mathbb R}^2$ so that $\gamma(0) = \pi$, $\gamma(1) =
\pi'$, $\gamma(2) = -\pi$, and $\gamma(t) \notin {\mathcal Z}(f_1) \cap
{\mathcal Z}(f_2)$, and let $h(t) =(f_1(\gamma(t)),f_2(\gamma(t)))$. We have $h(0) =
(1,0)$, $h(1) = (0,1)$, $h(2) = (-1,0)$ and $h(t) \neq (0,0)$, so by
continuity,
each line through the origin contains some $h(t)$, $0 \le t \le 2$.
\end{proof}
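For Robinson's basis $f_1 = x^3-x$, $f_2 = y^3-y$, the conclusion $R_f = \mathbb R^2$ of part (3) can be seen directly, since each coordinate is a surjective odd-degree polynomial in its own variable; a minimal numerical sketch (the helper name is ours):

```python
def solve_cubic(a, lo=-10.0, hi=10.0):
    # bisection for x with x^3 - x = a; the bracket covers |a| < 990
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid**3 - mid < a:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# any target pair (a, b) is hit by (f1(x), f2(y)), so R_f = R^2 here
for a, b in [(1.0, -2.5), (-7.0, 0.3), (0.0, 5.0)]:
    x, y = solve_cubic(a), solve_cubic(b)
    assert abs((x**3 - x) - a) < 1e-9
    assert abs((y**3 - y) - b) < 1e-9
```

The general case of (3), where the two coordinates do not separate, requires the continuity argument of the proof.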
The hypotheses of Theorem 3.3(3) apply in Hilbert's
original construction, with $d=3$. We show in Example 3.1 below
that $R_f \neq {\mathbb R}^r$ in general, and combine this with Theorem 3.3(2)
to give one instance of a positive form in $I_{1,d}^2(A)$ which is
not sos.
\begin{theorem}[Hilbert's Method]
Suppose a finite set $A \subset \mathbb R^n$ is such that $I_{1,d}(A)$
has basis $\{f_1,\dots,f_s\}$, where $\tilde A$ is
finite, $A$ is full and $f = \sum_j f_j^2$ has no zeros at infinity.
Further, suppose there exists $g \in I_{2,2d}(A) \smallsetminus
I_{1,d}^2(A)$ so that $g(w) > 0$ for each $w \in \tilde A$. Then there
exists $c > 0$ so that
\begin{equation}
p_c = \sum_{j=1}^s f_j^2 + c g
\end{equation}
is positive and not sos.
\end{theorem}
\begin{proof}
In the notation of Lemma 3.1, let $V_1 = A$ and $V_2 = \tilde A$.
Since $f$ has no zeros at infinity, $\deg f = 2d$, and
$A$ is full, the hypotheses of Lemma 3.1 are satisfied.
Thus there exists $c>0$ so that $p_c$ is positive, and since $p_c \notin
I_{1,d}^2(A)$, it is not sos by Theorem 3.2.
\end{proof}
\begin{remarks}
\qquad
\begin{enumerate}
\item
If $\tilde A = \emptyset$,
then the Perturbation Lemma can be
applied to $(f,\pm g)$ for both signs, so that $f \pm c g$ is positive
for some $c >
0$ and both choices of sign.
\item
In any particular case, the condition that $f$ is round at $v \in V_1$
can be relaxed in the Perturbation Lemma, so long as a stronger
condition is imposed on $g$ to ensure
that $f + \alpha g$ is positive in some punctured
neighborhood $\mathcal N_v$ of $v$.
\item
Since Hilbert's Method applies to any basis of $I_{1,d}(A)$, we may
replace $\sum_j f_j^2$ by any positive definite quadratic form in the
$f_j$'s.
\item
Hilbert's original sextic contradiction follows from
$(\phi\psi)(\pi_9) \neq 0$, which implies that $\phi\psi \in
I_{2,6}(A)\smallsetminus I_{1,3}^2(A)$.
\item
Theorem 4.3 covers a situation
in which $\tilde A = \emptyset$ but $I_{2,2d}(A)\smallsetminus
I_{1,d}^2(A)$ is non-empty, so Hilbert's Method still applies.
\end{enumerate}
\end{remarks}
\begin{example}
We revisit Example 1.1, keeping the notation of (1.6). It is easy to
check that
\begin{equation}
\{F_1^2,F_2^2,F_3^2,F_1F_2,F_1F_3,F_2F_3\}
\end{equation}
is linearly independent, so that Theorem 3.3 applies.
Let
\begin{equation}
Q(u_1,u_2,u_3) = 5u_1^2 + 5u_2^2 + 5u_3^2 -6u_1u_2-6u_1u_3-6u_2u_3;
\end{equation}
evidently, $Q$ is not a psd quadratic form. We show now (in two ways) that
\begin{equation}
T:= Q(F_1,F_2,F_3)
\end{equation}
is psd; note that $T$ is not sos by Theorem 3.3(1).
Let
\begin{equation}
\begin{gathered}
P(v_1,v_2,v_3) : = v_1^4 +v_2^4+v_3^4 -
2v_1^2v_2^2-2v_1^2v_3^2-2v_2^2v_3^2 \\ =
(v_1+v_2+v_3)(v_1+v_2-v_3)(v_1-v_2+v_3)(v_1-v_2-v_3).
\end{gathered}
\end{equation}
A computation shows that
\begin{equation}
P(F_1,F_2,F_3) = (x^2-y^2)^2(x^2 - z^2)^2(y^2 - z^2)^2
\end{equation}
is psd, hence $R_F \subseteq \{(x,y,z): P(x,y,z) \ge 0\}$. We claim
that $Q \ge 0$ on $R_F$ and so $T$ is psd by Theorem 3.3(2).
Since
\begin{equation}
5u_1^2 + 5u_2^2 + 5u_3^2 -6u_1u_2-6u_1u_3
\end{equation}
is psd, if $\bar u_2\bar u_3 < 0$, say, then $Q(\bar u_1,\bar u_2,\bar
u_3) \ge 0$, since $-6\bar u_2\bar u_3 > 0$. By symmetry, it follows that
$Q(v_1,v_2,v_3)\ge 0$ unless the $v_i$'s have the same sign, and it
suffices to suppose $v_1\ge v_2 \ge v_3 \ge 0$. The first
three linear factors of $P$ in (3.7) are always positive, so
$P(v_1,v_2,v_3) \ge
0$ if and only if $v_1 = v_2 + v_3 + t$ with $t \ge 0$. Since
\begin{equation}
Q(v_2+v_3+t,v_2,v_3) = 4(v_2-v_3)^2 + t(4v_2+4v_3+5t),
\end{equation}
the claim is verified.
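The quadratic identity just displayed can be confirmed mechanically; since both sides have degree at most two in each variable, agreement on a small integer grid already proves it. A Python sketch (our names):

```python
def Q(u1, u2, u3):
    # the quadratic form Q from the example
    return 5*u1**2 + 5*u2**2 + 5*u3**2 - 6*u1*u2 - 6*u1*u3 - 6*u2*u3

def rhs(v2, v3, t):
    # the claimed value of Q(v2 + v3 + t, v2, v3)
    return 4*(v2 - v3)**2 + t*(4*v2 + 4*v3 + 5*t)

# both sides are quadratics, so agreement on this grid proves the identity
assert all(Q(v2 + v3 + t, v2, v3) == rhs(v2, v3, t)
           for v2 in range(-3, 4) for v3 in range(-3, 4) for t in range(-3, 4))
```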
The second proof is direct. We note that $T$ is symmetric:
\begin{equation}
T(x,y,z) = 5\sum^6 x^4y^2 + 6\sum^3 x^4yz + 6\sum^3 x^3y^3 -
6\sum^6 x^3y^2z - 30x^2y^2z^2.
\end{equation}
A calculation shows that
\begin{equation}
\begin{gathered}
2(x^2+y^2+z^2-xy-xz-yz)T(x,y,z) =
(x-y)^4(xy + 3xz+3yz+z^2)^2 \\ + (x-z)^4(xz + 3xy+3yz+y^2)^2 +
(y-z)^4(yz + 3xy+3xz+x^2)^2,
\end{gathered}
\end{equation}
so $T$ is psd. Although $|{\mathcal Z}(T)| = 7$, the zeros at
$(1,1,-1),(1,-1,1),(-1,1,1)$ are not round. In fact, $T(1+t,1-t,-1) =
48t^4 + 4t^6$, etc. These singularities are
useful in constructing the representation (3.12).
\end{example}
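The assertions of this example are easy to check by machine. The sketch below (our encodings of the symmetric sums) verifies the zeros of $T$, the expansion $T(1+t,1-t,-1) = 48t^4+4t^6$, and the representation (3.12) on an integer grid large enough to determine forms of this degree:

```python
from itertools import permutations

def T(x, y, z):
    # the symmetric sextic displayed in the example
    s6a = sum(a**4 * b**2 for (a, b, c) in permutations((x, y, z)))      # sum^6 x^4 y^2
    s3a = x**4*y*z + y**4*x*z + z**4*x*y                                 # sum^3 x^4 yz
    s3b = x**3*y**3 + x**3*z**3 + y**3*z**3                              # sum^3 x^3 y^3
    s6b = sum(a**3 * b**2 * c for (a, b, c) in permutations((x, y, z)))  # sum^6 x^3 y^2 z
    return 5*s6a + 6*s3a + 6*s3b - 6*s6b - 30*x**2*y**2*z**2

# T vanishes at (1,1,1) and at the sign-flipped points
assert T(1, 1, 1) == 0 and T(1, 1, -1) == 0 and T(1, -1, 1) == 0

# the non-round zero at (1,1,-1): T(1+t, 1-t, -1) = 48 t^4 + 4 t^6
assert all(T(1 + t, 1 - t, -1) == 48*t**4 + 4*t**6 for t in range(-5, 6))

def lhs(x, y, z):
    return 2*(x**2 + y**2 + z**2 - x*y - x*z - y*z) * T(x, y, z)

def rhs(x, y, z):
    return ((x - y)**4 * (x*y + 3*x*z + 3*y*z + z**2)**2
            + (x - z)**4 * (x*z + 3*x*y + 3*y*z + y**2)**2
            + (y - z)**4 * (y*z + 3*x*y + 3*x*z + x**2)**2)

# both sides have degree at most 6 in each variable, so this grid suffices
assert all(lhs(x, y, z) == rhs(x, y, z)
           for x in range(-4, 5) for y in range(-4, 5) for z in range(-4, 5))
```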
\section{Two applications of Hilbert's Method to ternary sextics}
In this section we show that Robinson's simplification of Hilbert's
Method works in general. By Prop. 2.2(5), the assumption that no
three of the nine points are on a line and no six are on a quadratic
is equivalent to saying that no $\alpha f_1 + \beta f_2$ is reducible.
Theorem 4.1 removes this restriction. In Theorem 4.3, we show that
Hilbert's Method also applies to the set of ternary
sextics which share seven zeros, no four on a line and not all seven on a
quadratic.
\begin{theorem}
Suppose $f_1(x,y)$ and $f_2(x,y)$ are two
relatively prime real cubics with exactly nine distinct real
common zeros. Then Hilbert's Method applies to any subset $A$ of eight of
the common zeros.
\end{theorem}
\begin{proof}
Lemma 2.4 shows that if $A = \{\pi_1,\dots,\pi_8\}$
is copacetic, as it is under the present hypotheses, then $\tilde A =\{\pi_9\}$ and $A$ is full.
It follows from (2.5) that $\dim I_{2,6}(A) \ge \binom 82 - 3
\cdot 8 = 4$. Since $I_{1,3}^2(A)$ is spanned by $\{f_1^2,f_1f_2,f_2^2\}$,
there exists $0 \neq g \in I_{2,6}(A) \smallsetminus
I_{1,3}^2(A)$. If we can show that $g(\pi_9)
\neq 0$, then $\pm g(\pi_9) > 0$ for some choice of sign, and
Theorem 3.4 applies.
Suppose to the contrary that $g(\pi_9) = 0$. Either $g$ is
singular at $\pi_9$, or there exists $(\alpha_1,\alpha_2) \neq (0,0)$ so that
the tangents of $g$ and $\alpha_1f_1+\alpha_2f_2$ are parallel at
$\pi_9$. Since the choice of basis for $I_{1,3}(A)$ was arbitrary, we
may assume without loss of generality that $(\alpha_1,\alpha_2) = (1,0)$
from the beginning. In either case, $ {\mathcal I}_{\pi_9}(f_1,g) \ge 2$, so
\begin{equation}
\sum_{j=1}^9 {\mathcal I}_{\pi_j}(f_1,g) \ge 2 \cdot 9 = \deg(f_1) \cdot \deg(g).
\end{equation}
Since $f_1$ is a real cubic, there exists $\pi_0 \notin A \cup \tilde
A$ so that $f_1(\pi_0) = 0$
and, necessarily, $f_2(\pi_0) \neq 0$. Now let
\begin{equation}
\tilde g = g - \frac{g(\pi_0)}{f_2^2(\pi_0)} f_2^2,
\end{equation}
so that $\tilde g(\pi_0) = 0$.
Observe that $\tilde g \in I_{2,6}(A) \smallsetminus I_{1,3}^2(A)$, and $g$
and $\tilde g$ agree to second-order at $\pi_9$. In particular, they
are either both singular or have the same tangents. Thus, we may
replace $g$ by $\tilde g$ for purposes of the argument, and
assume that $g(\pi_0) = 0$. Combining ${\mathcal I}_{\pi_0}(f_1,g) \ge 1$ with
(4.1), we see that $f_1$ and $g$ have a common factor by Bezout. Let $d =
\deg(\gcd(f_1,g))$.
If $d=3$, then $g = f_1 k$ for some cubic $k$. Since $g$ is singular
on $A$ and $f_1$ is singular at no point of $A$, we must have $k \in
I_{1,3}(A)$, so that $g \in I_{1,3}^2(A)$, a contradiction. (Under
Hilbert's restrictions, $f_1$ is irreducible, so this is the
only case.)
Suppose $d = 2$ and write $f_1 = \ell q$ and $g = p q$, where $\ell$
is linear, $q$ is quadratic and $p$ is quartic and $\ell$ and $p$ are
relatively prime. Then $\ell=0$
on exactly three of the $\pi_i$'s. After reindexing, there are two
cases: either $\ell = 0$ on $\{\pi_1,\pi_2,\pi_3\}$ or $\ell = 0$ on
$\{\pi_1,\pi_2,\pi_9\}$, with $q = 0$ on the complementary sets.
In the first case, $q(\pi_i)\neq 0$ for $i=1,2,3$, so $p$ is singular
at these three points and ${\mathcal I}_{\pi_1}(\ell,p) + {\mathcal I}_{\pi_2}(\ell,p) +
{\mathcal I}_{\pi_3}(\ell,p) \ge 6 > 1 \cdot 4$. Since $\ell$ and $p$ are
relatively prime, this is a contradiction by Bezout. In the second
case, $p$ is still singular at $\pi_1,\pi_2$ and $q(\pi_9) \neq 0$, so
$p(\pi_9) = 0$ and
${\mathcal I}_{\pi_1}(\ell,p)
+ {\mathcal I}_{\pi_2}(\ell,p) + {\mathcal I}_{\pi_9}(\ell,p) \ge 2+2+1 = 5 > 1 \cdot 4$,
another contradiction by Bezout.
Finally, suppose $d = 1$ and write $f_1 = \ell q$ and $g = \ell p$,
where $\ell$ is linear, $q$ is quadratic and $p$ is quintic and $q$
and $p$ are relatively prime. In either case for $\ell$ as above,
$\ell$ does not vanish at $\pi_4,\dots,\pi_8$, so $p$ is singular there, and ${\mathcal I}_{\pi_4}(q,p) +
\dots + {\mathcal I}_{\pi_8}(q,p) \ge 10 = 2 \cdot 5$. In the first case,
$\ell(\pi_9) \neq 0$,
so ${\mathcal I}_{\pi_9}(q,p) \ge 1$; in the second case, $\ell(\pi_3) \neq 0$, so
${\mathcal I}_{\pi_3}(q,p) \ge 2$. In either case Bezout implies that $q$
and $p$ are not relatively prime, and this contradiction completes the
proof.
\end{proof}
It is possible for $g$ and the $f_i$'s to have a common factor,
provided it does not contain $\pi_9$.
This happens in Robinson's example: $f_1 = x(x^2-1)$, $f_2
= y(y^2-1)$ and $g = (x^2-1)(y^2-1)(1-x^2-y^2)$.
\begin{corollary}
If $A$ is copacetic, then there exists a positive sextic polynomial
$p(x,y)$ so that $A \subseteq {\mathcal Z}(p)$ and $p$ is not sos.
\end{corollary}
\begin{theorem}
Suppose $A = \{\pi_1,\dots,\pi_7\}\subset \mathbb R^2$,
with no four $\pi_i$'s in a line and
not all seven on one quadratic. Then Hilbert's Method applies to $A$.
\end{theorem}
\begin{proof}
It follows from Lemma 2.6 that
$A$ is full and $\tilde {\mathcal A} = \emptyset$. We have
$\dim I_{1,3}(A) = 3$, so $\dim I_{1,3}^2(A) \le 6$, but by (2.5),
$\dim I_{2,6}(A) \ge \binom 82 - 7\cdot \binom 32 = 7$. Thus there
exists $g \in I_{2,6}(A) \smallsetminus I_{1,3}^2(A)$ and since
$\tilde {\mathcal A} = \emptyset$, Hilbert's Method can be applied.
\end{proof}
Theorem 4.3 is implemented in Examples 1.1 and 6.3.
\begin{corollary}
If $A$ is a set of seven points in $\mathbb R^2$, no four on a line and not all
on a quadratic, then there exists a positive sextic polynomial
$p(x,y)$ so that $A \subseteq {\mathcal Z}(p)$ and $p$ is not sos.
\end{corollary}
\section{Psd and sos sections}
We now consider $I_{2,6}({\mathcal A}) \cap P_{3,6}$ and
$I_{2,6}({\mathcal A}) \cap \Sigma_{3,6}$ in detail.
Our motivation is that $P_{3,6}$ and $\Sigma_{3,6}$ lie
in $\mathbb R^{28}$ and are difficult to visualize. These two
sections, in general, lie in $\mathbb R^4$, and thus are
more comprehensible. We work in the homogeneous case.
\begin{theorem}
In the notation of Theorem 4.1, suppose
\begin{equation}
P = c_1 F_1^2 + 2 c_2 F_1F_2 + c_3 F_2^2 + c_4 G.
\end{equation}
If $P$ is sos, then $c_4 = 0$. If $c_4=0$, then $P$ is sos if and only
if $P$ is psd if and only if $c_1 \ge 0$, $c_3 \ge 0$ and $c_1c_3 \ge c_2^2$.
\end{theorem}
\begin{proof}
These are Theorems 3.2 and 3.3(1),(2) in the homogeneous case.
\end{proof}
Because $G$ is only defined modulo $I_{1,d}^2({\mathcal A})$, it is
difficult to make any general statements about the circumstances under
which $P$ is psd. However, one can identify the possible zeros of
$P$.
\begin{theorem}
Suppose $P = c_1 F_1^2 + 2 c_2 F_1F_2 + c_3 F_2^2 + c_4 G$ is psd,
where $c_4 \neq 0$ and let $J$ be the Jacobian of $F_1,F_2$ and $G$. Then
\begin{equation}
{\mathcal Z}(P) \subseteq {\mathcal Z}(F_1) \cup {\mathcal Z}(F_2) \cup {\mathcal Z}(J).
\end{equation}
\end{theorem}
\begin{proof}
If $P(a) = 0$ and $(F_1(a),F_2(a)) \neq (0,0)$, then $P$ and
$(F_1(a)F_2-F_2(a)F_1)^2$ are linearly independent sextics which are both
singular at $a$. Thus the Jacobian of $(F_1^2, F_1F_2, F_2^2,G)$, when
evaluated at $a$, has rank $\le 2$. In
particular, the $3 \times 3$ minor omitting $F_1F_2$ vanishes; this
minor reduces to $4F_1F_2J$.
\end{proof}
A maximal perturbation
might not lead to a new zero, but rather to a greater singularity
at a pre-existing zero; see Example 3.1.
In the special case of Robinson's example, we are able to give a much
more precise description of these sections. Let $A =
\{-1,0,1\}^2 \smallsetminus \{(0,0)\}$. A routine calculation shows that
$f_1(x,y) = x^3-x$ and $f_2(x,y) = y^3-y$ span $I_{1,3}(A)$ and
$f_1^2,f_1f_2,f_2^2$ and $g(x,y) =
(x^2-1)(y^2-1)(1-x^2-y^2)$ span $I_{2,6}(A)$. It is convenient to
replace $g$ with $f_1^2+f_2^2+g$, which homogenizes to $R$.
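This routine calculation can be reproduced with exact linear algebra: the $8 \times 10$ matrix of cubic monomials evaluated at $A$ has rank $8$, so $\dim I_{1,3}(A) = 2$ and the two vanishing cubics $f_1, f_2$ form a basis. The sketch below (our code) also confirms that $f_1^2+f_2^2+g$ agrees with the dehomogenization of $R$, whose explicit form appears in (5.3) below:

```python
from fractions import Fraction
from itertools import product

A = [(x, y) for x, y in product((-1, 0, 1), repeat=2) if (x, y) != (0, 0)]

def f1(x, y): return x**3 - x
def f2(x, y): return y**3 - y
def g(x, y):  return (x**2 - 1) * (y**2 - 1) * (1 - x**2 - y**2)

def R(x, y, z):
    # the homogeneous Robinson form from (5.3)
    return (x**6 + y**6 + z**6 - x**4*y**2 - x**2*y**4 - x**4*z**2
            - y**4*z**2 - x**2*z**4 - y**2*z**4 + 3*x**2*y**2*z**2)

# f1 and f2 vanish on all eight points of A
assert all(f1(x, y) == 0 and f2(x, y) == 0 for (x, y) in A)

# evaluation matrix of the 10 cubic monomials x^i y^j (i + j <= 3) at A
monomials = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
M = [[Fraction(x**i * y**j) for (i, j) in monomials] for (x, y) in A]

def rank(rows):
    # fraction-exact Gaussian elimination
    rows = [r[:] for r in rows]
    rk = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][c] != 0:
                t = rows[i][c] / rows[rk][c]
                rows[i] = [a - t * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

# rank 8 gives dim I_{1,3}(A) = 10 - 8 = 2, so {f1, f2} is a basis
assert rank(M) == 8

# f1^2 + f2^2 + g dehomogenizes R: degree <= 6 in each variable,
# so agreement on a 7 x 7 grid proves the identity
assert all(R(x, y, 1) == f1(x, y)**2 + f2(x, y)**2 + g(x, y)
           for x in range(7) for y in range(7))
```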
Consider now
\begin{equation}
\begin{gathered}
\Phi[c_1,c_2,c_3,c_4](x,y,z):= c_1F_1^2 + 2c_2F_1F_2+c_3F_2^2+c_4R
\\ = c_1(x^3-xz^2)^2 +
2c_2(x^3-xz^2)(y^3-yz^2)+c_3(y^3-yz^2)^2 \\
+ c_4(x^6 + y^6 + z^6 - x^4y^2 - x^2y^4 - x^4z^2-y^4z^2 -
x^2z^4-y^2z^4+3x^2y^2z^2).
\end{gathered}
\end{equation}
This is the general form of $\Phi \in I_{2,6}({\mathcal A})$, where
\begin{equation}
{\mathcal A} = \{(\pm 1,0,1), (0,\pm 1,1),(\pm 1, \pm 1,1)\}.
\end{equation}
Theorem 5.1 implies that
$\Phi[c_1,c_2,c_3,0]$ is psd if and only if it is sos if and only if
$c_1,c_3,c_1c_3-c_2^2 \ge 0$, so we may henceforth assume that $c_4 \neq 0$.
We begin our discussion of positivity with a collection of short
observations.
\begin{lemma}
Suppose $\Phi[c_1,c_2,c_3,c_4]$ is psd. Then the following are true:
\begin{enumerate}
\item $c_4 \ge 0$;
\item $\Phi[c_1,-c_2,c_3,c_4]$ and $\Phi[c_3,c_2,c_1,c_4]$ are psd;
\item $\Gamma(x,y):=
(c_1+c_4) x^6 - c_4 x^4y^2 + 2c_2 x^3y^3 - c_4 x^2y^4 + (c_3+c_4)
y^6$ is psd;
\item $\Phi[c_1,0,c_3,c_4]$ is psd.
\end{enumerate}
\end{lemma}
\begin{proof}
The first observation follows from evaluation at $(0,0,1)$, the second
from taking $(x,y,z)\mapsto (x,-y,z),(y,x,z)$, the third from setting
$z=0$, and the fourth from averaging the psd forms $\Phi[c_1,\pm
c_2,c_3,c_4]$.
\end{proof}
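Item (3) of the lemma is simply the restriction of (5.3) to $z = 0$; this can be confirmed on a grid, since $\Phi$ is linear in each $c_i$ and of degree at most six in each of $x,y$ (our encodings):

```python
def Phi(c1, c2, c3, c4, x, y, z):
    # the general element of I_{2,6}(A) from (5.3)
    F1, F2 = x**3 - x*z**2, y**3 - y*z**2
    R = (x**6 + y**6 + z**6 - x**4*y**2 - x**2*y**4 - x**4*z**2
         - y**4*z**2 - x**2*z**4 - y**2*z**4 + 3*x**2*y**2*z**2)
    return c1*F1**2 + 2*c2*F1*F2 + c3*F2**2 + c4*R

def Gamma(c1, c2, c3, c4, x, y):
    # the binary sextic of Lemma 5.3(3)
    return ((c1 + c4)*x**6 - c4*x**4*y**2 + 2*c2*x**3*y**3
            - c4*x**2*y**4 + (c3 + c4)*y**6)

assert all(Phi(c1, c2, c3, c4, x, y, 0) == Gamma(c1, c2, c3, c4, x, y)
           for c1 in (0, 1, 2) for c2 in (-1, 0, 1) for c3 in (0, 1)
           for c4 in (0, 1, 3) for x in range(-3, 4) for y in range(-3, 4))
```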
In view of Lemma 5.3(1), it suffices now to assume $c_4 = 1$.
For $t > 0$, let
\begin{equation}
\alpha(t) = \frac{2t^2+t^4}3, \qquad \beta(t) = \frac{1+2t^2}{3t^4}, \qquad
\gamma(t) = \beta(\alpha^{-1}(t)).
\end{equation}
Then $\beta(t) = \alpha(t^{-1})$, and as $t$ increases from 0 to $\infty$,
so does $\alpha(t)$, monotonically.
\begin{lemma}
For $t > 0$, the sextic $\Phi_t(x,y):= \alpha(t)x^6 - x^4y^2 - x^2y^4
+ \beta(t)y^6$ is positive with zeros at $(1,\pm t)$.
\end{lemma}
\begin{proof}
A computation shows that
\begin{equation}
\Phi_t(x,y) = \frac {(t^2x^2-y^2)^2((t^4+2t^2)x^2 + (2t^2+1)y^2)}{3t^4}.
\end{equation}
\end{proof}
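The factorization in the proof can be checked mechanically; the sketch below verifies it, together with the zeros at $(1, \pm t)$, for several integer values of $t$ in exact rational arithmetic:

```python
from fractions import Fraction

def alpha(t): return Fraction(2*t**2 + t**4, 3)
def beta(t):  return Fraction(1 + 2*t**2, 3*t**4)

def Phi_t(t, x, y):
    # the binary sextic of Lemma 5.4
    return alpha(t)*x**6 - x**4*y**2 - x**2*y**4 + beta(t)*y**6

def factored(t, x, y):
    # the factored form from the proof
    return Fraction((t**2*x**2 - y**2)**2
                    * ((t**4 + 2*t**2)*x**2 + (2*t**2 + 1)*y**2), 3*t**4)

# degree <= 6 in each of x, y, so this grid proves the identity for each t
assert all(Phi_t(t, x, y) == factored(t, x, y)
           for t in (1, 2, 3) for x in range(-4, 5) for y in range(-4, 5))

# the asserted zeros at (1, +t) and (1, -t)
assert all(Phi_t(t, 1, t) == 0 and Phi_t(t, 1, -t) == 0 for t in (1, 2, 3, 5))
```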
Let $K= \{(x,y): x > 0,\ y \ge \gamma(x)\}$ denote the region
lying above the curve $C = \{(\alpha(t),\beta(t)): t > 0\}$, which
partially parametrizes the quartic curve $27x^2y^2
- 18x y - 4x - 4y - 1 =0$. For this reason,
\begin{equation}
\gamma(x) = \frac{2+9x + 2(1+3x)^{3/2}}{27x^2}.
\end{equation}
\begin{lemma}
The binary sextic $\Psi(x,y) = r x^6 - x^4y^2 - x^2y^4 + s y^6$ is psd
if and only if $(r,s) \in K$.
\end{lemma}
\begin{proof}
A necessary condition for the positivity of $\Psi$ is $r > 0$. Let $t_0
= \alpha^{-1}(r)>0$, so
\begin{equation}
\Psi(x,y) = \Phi_{t_0}(x,y) + (s - \beta(t_0))y^6.
\end{equation}
If $(r,s) \in K$; that is, if $s\ge \gamma(r) = \beta(t_0)$, then Lemma 5.4 and
(5.8) show that
$\Psi$ is positive. Conversely, $\Psi(1,t_0) = (s - \beta(t_0))t_0^6$,
so if $\Psi$ is positive, then $s\ge \beta(t_0)$.
\end{proof}
\begin{theorem}
The sextic $\Phi[c_1,0,c_3,1]$ is psd if and only if $(1+c_1,1+c_3)
\in K$.
\end{theorem}
\begin{proof}
One direction is clear by Lemmas 5.3(3) and 5.5. For the converse,
note that $(1+c_1,1+c_3) \in K$ if and only if $1+c_1 = \alpha(t_0)$
implies $1+c_3 \ge \beta(t_0)$. In other words, we need to show that,
with $\lambda = 1 + c_3 - \beta(t_0)$,
\begin{equation}
\Phi[\alpha(t_0) -1,0,\beta(t_0)+\lambda -1,1] = \Phi[\alpha(t_0) -1,0,\beta(t_0)-1,1]
+ \lambda F_2^2
\end{equation}
is psd whenever $\lambda \ge 0$. To this end, for $t >0$, define
\begin{equation}
\begin{gathered}
R_t(x,y,z) := \Phi[\alpha(t) -1,0,\beta(t)-1,1](x,y,z) = \\
\left(\frac{t^4+2t^2-3}{3} \right) F_1^2(x,y,z)+
\left(\frac{1+2t^2-3t^4}{3t^4}\right)F_2^2(x,y,z) + R(x,y,z).
\end{gathered}
\end{equation}
Note that $R_1 = R$, $R_{1/t}(x,y,z) = R_t(y,x,z)$ and that for $t \neq
1$, the coefficients of $F_1^2$ and $F_2^2$ have opposite sign.
The following algebraic identity gives $Q_tR_t$ as
a sum of four squares for a psd quadratic form $Q_t(x,y)$, which
implies that $R_t$ is psd, and completes the proof.
\begin{equation}
\begin{gathered}
((2t^4+t^2)x^2+(t^2+2)y^2)3t^4R_t(x,y,z)
\\= 3t^6(1 + 2t^2)x^2z^2(x^2 - z^2)^2 + 3t^4(2 + t^2)y^2z^2(y^2 - z^2)^2 \\+
t^2(t^2 - 1)^2x^2y^2(t^2x^2 - y^2 + (1 - t^2)z^2)^2\\ + (2 + t^2)(1
+ 2t^2)(t^4x^4 - y^4 - t^4x^2z^2 + y^2z^2)^2. \end{gathered}
\end{equation}
\end{proof}
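The four-square identity (5.11) can be verified symbolically. The sketch below assumes $F_1 = x(x^2-z^2)$, $F_2 = y(y^2-z^2)$ and Robinson's form $R$, which is what these symbols denote earlier in the paper:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# Assumed from earlier in the paper: Robinson's form R and the cubics F1, F2
F1 = x*(x**2 - z**2)
F2 = y*(y**2 - z**2)
R = (x**6 + y**6 + z**6
     - (x**4*y**2 + x**2*y**4 + y**4*z**2 + y**2*z**4 + z**4*x**2 + z**2*x**4)
     + 3*x**2*y**2*z**2)

# R_t as in (5.10)
Rt = ((t**4 + 2*t**2 - 3)/3*F1**2
      + (1 + 2*t**2 - 3*t**4)/(3*t**4)*F2**2 + R)

# the identity (5.11): Q_t * 3t^4 * R_t as a sum of four squares
lhs = ((2*t**4 + t**2)*x**2 + (t**2 + 2)*y**2)*3*t**4*Rt
rhs = (3*t**6*(1 + 2*t**2)*x**2*z**2*(x**2 - z**2)**2
       + 3*t**4*(2 + t**2)*y**2*z**2*(y**2 - z**2)**2
       + t**2*(t**2 - 1)**2*x**2*y**2*(t**2*x**2 - y**2 + (1 - t**2)*z**2)**2
       + (2 + t**2)*(1 + 2*t**2)*(t**4*x**4 - y**4 - t**4*x**2*z**2 + y**2*z**2)**2)

gap = sp.expand(lhs - rhs)
```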
For $t=1$, (5.11) essentially appears in \cite[p.273]{Ro}.
In view of the foregoing, ${\mathcal Z}(R_t)$ contains, at least, ${\mathcal A} \cup
\{(1,\pm t,0)\}$. If $R_t(a,b,c)=0$, then each of the squares in (5.11)
vanishes. In particular, $cF_1(a,b,c)=cF_2(a,b,c) = 0$, so either
$c=0$ or $(a,b,c) \in {\mathcal A} \cup \{(0,0,1)\}$. These cases have
already been
discussed and we may conclude that ${\mathcal Z}(R_t)= {\mathcal A} \cup
\{(1,\pm t,0)\}$ and $|{\mathcal Z}(R_t)| = 10$.
We now complete our discussion of the psd case.
\begin{theorem}
The sextic $\Phi[c_1,c_2,c_3,1]$ is psd if and only if $(c_1,c_3) \in
K$ and $|c_2| \le \sigma(c_1,c_3)$ for a function $\sigma(c_1,c_3) \ge 0$
defined on $K$ (see (5.15)). If $c_2 = \pm \sigma(c_1,c_3)$, then
$\Phi[c_1,c_2,c_3,1]=R_t+\mu(t^3 F_1 \pm F_2)^2$ (for suitable $t,
\mu$ and choice of sign).
\end{theorem}
\begin{proof}
First, suppose $\Phi[c_1,c_2,c_3,1]$ is psd. Then $(c_1,c_3) \in K$ by Lemma
5.3(4) and Theorem 5.6. Setting $z=0$, we obtain the psd binary sextic
\begin{equation}
\Gamma(x,y) = (1+c_1)x^6 - x^4y^2 + 2c_2 x^3y^3 - x^2y^4 + (1+c_3)y^6.
\end{equation}
Define $t_0$ so that $1+c_1 = \alpha(t_0)$. If $1+c_3 = \beta(t_0)$, then
$\Gamma(1,\pm t_0) = \pm 2c_2t_0^3$ implies that $c_2 = 0$; otherwise,
$(1+c_1,1+c_3)$ lies
strictly above $C$. Suppose now that $c_2 < 0$ without loss of
generality (taking $y \mapsto -y$ if necessary), so that for $u > 0$,
\begin{equation}
\Gamma(1,-u) > \Gamma(1,u) = (1+c_1) - u^2 - 2|c_2| u^3 - u^4
+(1+c_3)u^6 \ge 0.
\end{equation}
Let $\Psi(u) = (1+c_1)u^{-3} - u^{-1} - u + (1+c_3)u^3$, so that
\begin{equation}
0 \le \Gamma(1,u) = u^3(\Psi(u) - 2 |c_2|).
\end{equation}
Now define
\begin{equation}
\sigma(c_1,c_3): = \min_{u > 0} \tfrac 12 \Psi(u) = \tfrac 12 \Psi(v);
\end{equation}
since $\Psi(u) \to \infty$ as $u \to 0$ or $u \to \infty$,
the minimum exists. It follows that $|c_2| \le
\sigma(c_1,c_3)$. (Although $\sigma(c_1,c_3)$ is computable explicitly, it is
quite complicated. For example,
$2\sigma(1,0)$ is the unique real positive root of the sextic
$729x^6 - 22518 x^4 + 182744 x^2 - 111392$, approximately $0.81392$.)
We must now show that every $\Phi[c_1,\pm\sigma(c_1,c_3),c_3,1]$ is
psd. Since $\Psi'(v) = 0$, we have the system
\begin{equation}
\begin{gathered}
\sigma(c_1,c_3) = \frac12 \left( (1+c_3)v^3 - v - v^{-1} +
(1+c_1)v^{-3}\right); \\
3(1+c_3)v^2 - 1 + v^{-2} - 3(1+c_1)v^{-4} = 0.
\end{gathered}
\end{equation}
A calculation shows that (5.16) implies
\begin{equation}
\Phi[c_1,-\sigma(c_1,c_3),c_3,1] = R_v + \mu (v^3F_1 -F_2)^2,
\end{equation}
where $R_v$ is defined in (5.10) and
\begin{equation}
\mu = \frac{3(1+c_3)v^4 -(2v^2+1)}{3v^4}.
\end{equation}
We are done if we can show that $\mu \ge 0$.
By hypothesis, both sides of (5.17) vanish at $(1,v,0)$. But if we
evaluate (5.17) at $(1,-v,0)$, we have already seen that the left-hand side is
positive, and the right-hand side is $0+4v^6\mu$, hence $\mu > 0$.
\end{proof}
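The numerical value quoted for $\sigma(1,0)$ can be checked directly by minimizing $\Psi$ for $(c_1,c_3) = (1,0)$:

```python
import numpy as np

# Psi(u) = 2u^{-3} - u^{-1} - u + u^3 for (c1, c3) = (1, 0);
# its minimum over u > 0 equals 2*sigma(1,0)
u = np.linspace(0.5, 2.0, 200001)
psi = 2*u**-3 - 1/u - u + u**3
two_sigma = psi.min()   # should be approximately 0.81392
```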
If $\Phi[c_1,c_2,c_3,1](a,b,c) = 0$, then Theorem 5.2 implies that
$(a,b,c) \in \mathcal A$ or
\begin{equation}
abc(a^2-c^2)(b^2-c^2)(a^2-ab+b^2-c^2)(a^2+ab+b^2-c^2)= 0.
\end{equation}
This includes the new zeros of $R_t$ on $c=0$ but also the extraneous
points $(a,b,c)$ for which $a^2+b^2-c^2 = \pm ab$, which never appear
non-trivially as zeros for any $R_t$.
To sum up, we have described sections of the two cones
\begin{equation}
\begin{gathered}
P = \{(c_1,c_2,c_3,c_4) : c_1F_1^2 +2c_2F_1F_2 + c_3F_2^2 + c_4R \in
P_{3,6} \} \subseteq \mathbb R^4, \\
\Sigma= \{(c_1,c_2,c_3,c_4) : c_1F_1^2 +2c_2F_1F_2 + c_3F_2^2 + c_4R
\in \Sigma_{3,6} \} \subseteq \mathbb R^4;
\end{gathered}
\end{equation}
at $c_4=0$ and at $c_4=1$.
In the first case, the sections coincide and are literally a right
circular cone. In the second case
$\Sigma$ disappears, and if we think of $(c_1,c_3)$ as lying in a
plane and $c_2$ as the vertical dimension, then $P$ is a kind of
clam-shell, with a convex boundary curve $C$ lying in the plane and
rays emanating at
varying angles from the points on the boundary.
\section{More ternary sextic examples}
\begin{example}
Let $A= \{\pi_i\} = \{(a_i,b_i)\}$
be given by $\pi_1 = (-1,0), \pi_2 = (-1,-1), \pi_3 = (0,1), \pi_4 =
(0,-1), \pi_5 = (1,0), \pi_6 = (2,2), \pi_7 = (2,-2), \pi_8 =
(1,-3)$. By looking at the $3 \times 3$ minors of the matrix with rows
$(1,a_i,b_i)$ and the $6 \times 6$ minors of the matrix with rows
$(1,a_i,b_i,a_i^2,a_ib_i,b_i^2)$, one can check that no three of the
$\pi_i$'s lie in a line, and no six on a quadratic. According to
Mathematica, $I_{1,3}(A)$ is spanned by
\begin{equation}
\begin{gathered}
f_1(x,y) = -42 + 49 x + 42x^2 - 49x^3 - 20 y - 38xy + 4x^2y + 42y^2 +
20y^3, \\
f_2(x,y) = -22 + 31 x + 22 x^2 - 31x^3 - 12y - 18xy + 22y^2 +
4xy^2 + 12y^3,
\end{gathered}
\end{equation}
and $\tilde A = \{\left(\frac{2516}{1297},\frac{4991}{2594}\right)\}$,
so $A$ is
copacetic. In Hilbert's notation, $\phi(x,y) = x^2 -xy+y^2-1$ and
\begin{equation}
\begin{gathered}
\psi(x,y) = -6136 + 2924x + 5784x^2 - 2924x^3 + 352x^4\\ - 2804y -
7000xy + 6299x^2y - 1049x^3y + 5818y^2\\ - 7803xy^2 +
1811x^2y^2 + 2804y^3 - 1402xy^3 + 318y^4.
\end{gathered}
\end{equation}
It follows that there exists $c>0$ so that $f_1^2+f_2^2 + c\phi\psi$
is psd and not sos. We do not offer an estimate for $c$.
\end{example}
In the examples in the rest of this section, the
symmetries are more clearly seen when the polynomials are homogenized.
\begin{example}
We present one of several ways to generalize Robinson's
original set of eight points. For $t > 0$, let
\begin{equation}
A_t = \{(\pm 1,\pm 1), (\pm t, 0), (0, \pm t)\}.
\end{equation}
It is not hard to see that $A_t$ is copacetic (with ninth point
$(0,0)$) unless $t = \sqrt 2$, in
which case $A_t$ lies on $x^2 + y^2 =2$. Since $A_t \mapsto
A_{2/t}$ under the invertible map
$(x,y) \mapsto ((x+y)/t,(x-y)/t)$,
we may assume $0 < t < \sqrt 2$. After homogenizing to
$\mathcal A_t $, we note that a basis of $I_{1,3}({\mathcal A_t})$ is given by
\begin{equation}
\{F_{1,t},F_{2,t}\} =
\{x(x^2 + (t^2-1)y^2 - t^2z^2), y((t^2-1)x^2 + y^2 - t^2z^2)\}
\end{equation}
and that $\tilde{\mathcal A_t} = \{(0,0,1)\}$.
It is not hard to see that
\begin{equation}
G_t(x,y,z) =
(x^2 + (t^2-1)y^2 - t^2z^2)((t^2-1)x^2 + y^2 - t^2z^2)(-x^2-y^2+t^2z^2)
\end{equation}
is singular on $\mathcal{A}_t$ and is positive at $(0,0,1)$.
(Robinson's example is recovered by setting $t=1$.)
Consider now
\begin{equation}
\begin{gathered}
P_t := F_{1,t}^2 + F_{2,t}^2 + 1\cdot G_t = (2-t^2)(x^6
-x^4y^2-x^2y^4+ y^6) + \\
(2t^4 - 3t^2)(x^4+y^4)z^2 +(6t^2-4t^4+t^6)x^2y^2z^2 - t^6 (x^2z^4 +
y^2z^4-z^6).
\end{gathered}
\end{equation}
The proof that $P_t$ is psd follows from the identity
\begin{equation}
\begin{gathered}
(x^2+y^2)P_t = (2-t^2)(x^2-y^2)^2(x^2+y^2-t^2z^2)^2 + \\
t^2x^2z^2(x^2+(t^2-1)y^2-t^2z^2)^2
+t^2y^2z^2((t^2-1)x^2+y^2-t^2z^2)^2.
\end{gathered}
\end{equation}
For $t=1$, this formula is in \cite{Ro}. For $t = 0, \sqrt 2$, $P_t$
is sos. It is not hard to show that if $0<t<\sqrt 2$, then ${\mathcal Z}(P_t) =
{\mathcal A}_t \cup \{(1,\pm 1,0) \}$ has 10 points and $P_t$ is not sos.
\end{example}
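Both the expansion (6.6) of $P_t$ and the multiplier identity (6.7) are easy to confirm symbolically; a sympy sketch:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

q1 = x**2 + (t**2 - 1)*y**2 - t**2*z**2
q2 = (t**2 - 1)*x**2 + y**2 - t**2*z**2
F1t, F2t, Gt = x*q1, y*q2, q1*q2*(-x**2 - y**2 + t**2*z**2)

Pt = F1t**2 + F2t**2 + Gt

# the expanded form (6.6)
Pt_claimed = ((2 - t**2)*(x**6 - x**4*y**2 - x**2*y**4 + y**6)
              + (2*t**4 - 3*t**2)*(x**4 + y**4)*z**2
              + (6*t**2 - 4*t**4 + t**6)*x**2*y**2*z**2
              - t**6*(x**2*z**4 + y**2*z**4 - z**6))
gap1 = sp.expand(Pt - Pt_claimed)

# the multiplier identity (6.7)
rhs = ((2 - t**2)*(x**2 - y**2)**2*(x**2 + y**2 - t**2*z**2)**2
       + t**2*x**2*z**2*q1**2 + t**2*y**2*z**2*q2**2)
gap2 = sp.expand((x**2 + y**2)*Pt - rhs)
```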
\begin{example}
Let
\begin{equation}
{\mathcal A} = \{(1,0,0),(0,1,0),(0,0,1),(1,1,0),(1,0,1),(0,1,1),(1,1,1) \}.
\end{equation}
It is again simple to show that $I_{1,3}({\mathcal A})$ is spanned by
\begin{equation}
F_1(x,y,z) = xy(x-y),\quad F_2(x,y,z) = yz(y-z),\quad F_3(x,y,z) = zx(z-x),
\end{equation}
and that
\begin{equation}
G(x,y,z) = xyz(x-y)(y-z)(z-x)
\end{equation}
is in $I_{2,6}({\mathcal A}) \smallsetminus I_{1,3}^2({\mathcal
A})$. Accordingly, by Theorem 4.3, there exists $c>0$ so that
\begin{equation}
U_c(x,y,z) = x^2y^2(x-y)^2 + y^2z^2(y-z)^2 + z^2x^2(z-x)^2 +
cxyz(x-y)(y-z)(z-x)
\end{equation}
is psd and not sos.
Since $U_c(x,y,z) \ge 0$ whenever $xyz=0$, we define
\begin{equation}
\begin{gathered}
Q_c(x,y,z) := \frac{U_c(x,y,z)}{x^2y^2z^2} \\ = \frac {(x-y)^2}{z^2} +
\frac{(y-z)^2}{x^2} + \frac{(z-x)^2}{y^2} + c \left(\frac {x-y}{z}\right)
\left(\frac {y-z}{x}\right) \left(\frac {z-x}{y}\right).
\end{gathered}
\end{equation}
It is now sensible to make a substitution: let
\begin{equation}
u := \frac {x-y}{z};\quad v := \frac {y-z}{x}; \quad w := \frac {z-x}{y}.
\end{equation}
Then $Q_c = u^2 + v^2 + w^2 + cuvw$; somewhat surprisingly,
$\{u,v,w\}$ is not algebraically independent: in fact,
\begin{equation}
u+v+w+uvw = 0.
\end{equation}
An application of Lagrange multipliers to minimize $Q_c$, subject to
(6.14), shows that two of $\{u,v,w\}$ are equal; by symmetry, we
may take $u=v$, so that $w = -\frac{2u}{u^2+1}$, and
\begin{equation}
Q_c\left(u, u, -\tfrac{2u}{u^2+1}\right) =
\frac{2u^2(u^4+2u^2+3 -cu(1+u^2))}{(1+u^2)^2}.
\end{equation}
Let $\sigma = \sqrt{\sqrt 2 + 1}$. A little calculus
shows that the numerator is psd provided $|c| \le c_0:=
4/\sigma$, with $Q_{c_0} = 0$ when $u = \pm \sigma$. Solving back for
$(x,y,z)$ yields, up to multiple, that $(1+\sigma,
1+\sigma^2,1-\sigma)$ and its cyclic images are in ${\mathcal Z}(U_{c_0})$, together
with (6.8). Here, $| {\mathcal Z}(U_{c_0}) | = 10$.
\end{example}
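The relation (6.14) and the critical value $c_0 = 4/\sigma$ can be checked as follows; the minimization concerns the ratio $(u^4+2u^2+3)/(u(1+u^2))$ appearing in (6.15):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = (x - y)/z
v = (y - z)/x
w = (z - x)/y

# the constraint (6.14): u + v + w + uvw = 0
constraint = sp.cancel(sp.together(u + v + w + u*v*w))

# minimize (s^4 + 2s^2 + 3)/(s(1 + s^2)) over s > 0;
# the claimed minimizer is sigma = sqrt(sqrt(2)+1), with minimum 4/sigma
s = sp.symbols('s', positive=True)
ratio = (s**4 + 2*s**2 + 3)/(s*(1 + s**2))
sigma = float(sp.sqrt(sp.sqrt(2) + 1))
crit = float(ratio.diff(s).subs(s, sigma))       # ~ 0 at the minimizer
c0_gap = float(ratio.subs(s, sigma) - 4/sigma)   # ~ 0: the minimum is 4/sigma
```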
\begin{example}
The Motzkin form $M$ cannot be derived directly from Theorems 4.1 or
4.3 because $|{\mathcal Z}(M)| = 6$;
however, $M$ has zeros at $(1,0,0)$ and $(0,1,0)$ at which it vanishes
to sixth order in the $z$-direction. It is possible to construct
psd ternary sextics $M_t$ with $|{\mathcal Z}(M_t)| = 10$ for $t>0$ and such
that $M_t \to M$ as $t \to 0$. We do this with an Ansatz by supposing
that there is a non-zero even ternary sextic which is
symmetric in $(x,y)$ and lies
in $I_{2,6}({\mathcal A_t})$ for
\begin{equation}
{\mathcal A_t} = \{(1,0,0), (0,1,0), (1, 0, \pm t), (0, 1, \pm t), (1,
\pm 1, \pm 1)\}.
\end{equation}
Although these impose 30 equations on the 28 coefficients of a ternary
sextic, there is some redundancy, and it can be verified that
\begin{equation}
\begin{gathered}
M_t(x,y,z) = (1-2t^2)(x^4y^2+x^2y^4) + t^4(x^4z^2+y^4z^2)\\- (3 -
8t^2+2t^4)x^2y^2z^2 -2t^2(x^2+y^2)z^4 + z^6
\end{gathered}
\end{equation}
satisfies this criterion. It is
not clear that $M_t$ is psd; in fact, it is not psd when
$t^2 > 1/2$. We
note that $M_0 = M$ and $M_t$ is a square when $t^2 = 1/2$. The
proof that $M_t$ is psd for $t^2 < 1/2$ is given by an sos representation of
$(x^2+y^2)M_t$:
\begin{equation}
\begin{gathered}
(x^2+y^2)M_t(x,y,z) = (1-2t^2)x^2y^2(x^2+y^2-2z^2)^2 + \\
y^2z^2(t^2(x^2-y^2)-(x^2-z^2))^2 + x^2z^2(t^2(y^2-x^2)-(y^2-z^2))^2.
\end{gathered}
\end{equation}
This equation also shows that, at least when $t^2 < 1/2$, ${\mathcal Z}(M_t)=
\mathcal A_t$.
We may also derive $M_t$ using Theorem 4.1, by first
choosing any eight points in $\mathcal A_t$.
\end{example}
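A quick symbolic check of Example 6.4, assuming $M$ is the Motzkin form $x^4y^2 + x^2y^4 - 3x^2y^2z^2 + z^6$:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

Mt = ((1 - 2*t**2)*(x**4*y**2 + x**2*y**4) + t**4*(x**4*z**2 + y**4*z**2)
      - (3 - 8*t**2 + 2*t**4)*x**2*y**2*z**2
      - 2*t**2*(x**2 + y**2)*z**4 + z**6)

# t = 0 recovers the Motzkin form
motzkin = x**4*y**2 + x**2*y**4 - 3*x**2*y**2*z**2 + z**6
gap0 = sp.expand(Mt.subs(t, 0) - motzkin)

# the multiplier identity (6.18)
rhs = ((1 - 2*t**2)*x**2*y**2*(x**2 + y**2 - 2*z**2)**2
       + y**2*z**2*(t**2*(x**2 - y**2) - (x**2 - z**2))**2
       + x**2*z**2*(t**2*(y**2 - x**2) - (y**2 - z**2))**2)
gap = sp.expand((x**2 + y**2)*Mt - rhs)
```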
\begin{example}
Similarly, one can approach $S(x,y,z)$ by Ansatz and look for a
cyclically symmetric even sextic $S_t$ which is singular at
\begin{equation}
{\mathcal A_t} = \{(\pm t,1,0), (0, \pm t,1), (1,0,\pm t), (1, \pm 1,
\pm 1)\}.
\end{equation}
Again, although there is no reason to expect a non-zero solution,
there is one:
\begin{equation}
\begin{gathered}
S_t(x,y,z) = t^4(x^6+y^6+z^6) + (1-2t^6)(x^4y^2+
y^4z^2+z^4x^2)\\ +
(t^8 - 2t^2)(x^2y^4+y^2z^4+z^2x^4)-3(1-2t^2+t^4-2t^6+t^8)x^2y^2z^2.
\end{gathered}
\end{equation}
We find that
$t^8S_{1/t}(x,y,z) = S_t(x,z,y)$, $S_0(x,y,z) = S(x,y,z)$ and
$S_1(x,y,z) = R(x,y,z)$.
The proof that $S_t$ is psd follows from yet
another algebraic identity:
\begin{equation}
\begin{gathered}
(x^2+y^2)S_t(x,y,z) =
(t^2x^4+x^2y^2-t^4x^2y^2-t^2y^4-x^2z^2+t^4y^2z^2)^2\\ +
y^2z^2(y^2-x^2+t^2(x^2-z^2))^2 + t^4x^2z^2(y^2-z^2 +
t^2(x^2-y^2))^2 \\
+(t^2-1)^2x^2y^2((z^2-x^2)+t^2(y^2-z^2))^2.
\end{gathered}
\end{equation}
When $t=1$, (5.11) and (6.21) coincide. This example was
announced, without proof, in \cite[p.261]{Re2}.
\end{example}
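Example 6.5 can be checked the same way; here $S = x^4y^2 + y^4z^2 + z^4x^2 - 3x^2y^2z^2$ and $R$ is Robinson's form, both defined earlier in the paper:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

St = (t**4*(x**6 + y**6 + z**6)
      + (1 - 2*t**6)*(x**4*y**2 + y**4*z**2 + z**4*x**2)
      + (t**8 - 2*t**2)*(x**2*y**4 + y**2*z**4 + z**2*x**4)
      - 3*(1 - 2*t**2 + t**4 - 2*t**6 + t**8)*x**2*y**2*z**2)

# endpoints: S_0 = S and S_1 = R
S = x**4*y**2 + y**4*z**2 + z**4*x**2 - 3*x**2*y**2*z**2
R = (x**6 + y**6 + z**6
     - (x**4*y**2 + x**2*y**4 + y**4*z**2 + y**2*z**4 + z**4*x**2 + z**2*x**4)
     + 3*x**2*y**2*z**2)
gap_S = sp.expand(St.subs(t, 0) - S)
gap_R = sp.expand(St.subs(t, 1) - R)

# the multiplier identity (6.21)
rhs = ((t**2*x**4 + x**2*y**2 - t**4*x**2*y**2 - t**2*y**4
        - x**2*z**2 + t**4*y**2*z**2)**2
       + y**2*z**2*(y**2 - x**2 + t**2*(x**2 - z**2))**2
       + t**4*x**2*z**2*(y**2 - z**2 + t**2*(x**2 - y**2))**2
       + (t**2 - 1)**2*x**2*y**2*((z**2 - x**2) + t**2*(y**2 - z**2))**2)
gap = sp.expand((x**2 + y**2)*St - rhs)
```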
Robinson \cite[p.273]{Ro} observed that $(ax^2+by^2+cz^2)R(x,y,z)$ is sos, ``at
least if $0 \le a \le b+c,\ 0 \le b \le a+c,\ 0 \le c \le a+b$."
We revisit this situation and simultaneously
illustrate the method used to discover (5.11), (6.7), (6.18) and (6.21).
\begin{theorem}
If $r,s,t \ge 0$, then
$(r^2x^2 + s^2y^2 + t^2z^2)R(x,y,z)$ is sos if
and only if $r \le s+t$, $s \le r+t$ and $t \le r+s$.
\end{theorem}
\begin{proof}
It was shown in \cite[p.569]{CLR2} (by a polarization argument)
that an even sos polynomial $F$ has an sos
representation $F = \sum H_j^2$ in which each $H_j^2$ is even. Suppose
\begin{equation}
(r^2x^2 + s^2y^2 + t^2z^2)R(x,y,z) = \sum_{j=1}^N H_j^2(x,y,z)
\end{equation}
is such an ``even'' representation.
Then ${\mathcal Z}(R) \subseteq {\mathcal Z}(H_j)$ for the quartic $H_j$'s
(cf. (5.4)). It follows that
\begin{equation}
\begin{gathered}
H_j(x,y,z) = c_{1j} xy(x^2-y^2) + c_{2j} xz(x^2-z^2) + c_{3j}
yz(y^2-z^2) \\ + (c_{4j}(x^2-z^2)(x^2-y^2+z^2)+c_{5j}(y^2-z^2)(-x^2+y^2+z^2)).
\end{gathered}
\end{equation}
Each $H_j^2$ is even, so the only
cross-terms which can appear in $\sum_j H_j^2$ are the $c_{4j}c_{5j}$ terms, and
\begin{equation}
\begin{gathered}
(r^2x^2 + s^2y^2 + t^2z^2)R(x,y,z) = \lambda_1x^2y^2(x^2-y^2)^2 +
\lambda_2x^2z^2(x^2-z^2)^2 \\ + \lambda_3 y^2z^2(y^2-z^2)^2 + \lambda_4
(x^2-z^2)^2(x^2-y^2+z^2)^2 +\\ 2\lambda_5
(x^2-z^2)(x^2-y^2+z^2)(y^2-z^2)(-x^2+y^2+z^2) \\+ \lambda_6
(y^2-z^2)^2(-x^2+y^2+z^2)^2,
\end{gathered}
\end{equation}
for $\lambda_j$'s, defined by
\begin{equation}
\begin{gathered}
\lambda_1 = \sum_j c_{1j}^2,\quad \lambda_2 = \sum_j c_{2j}^2, \quad \lambda_3 = \sum_j
c_{3j}^2,\\ \lambda_4 = \sum_j c_{4j}^2,\quad \lambda_5 = \sum_j
c_{4j}c_{5j}, \quad \lambda_6 = \sum_j c_{5j}^2.
\end{gathered}
\end{equation}
We solve for the $\lambda_j$ in (6.24):
\begin{equation}
\lambda_1 = t^2,\quad \lambda_2 = s^2,\quad \lambda_3 = r^2,\quad \lambda_4 =
r^2,\quad \lambda_6 = s^2,\quad \lambda_5 = (t^2-r^2-s^2)/2.
\end{equation}
There exist $c_{ij}$ to satisfy (6.25) and (6.26) if and only if
\begin{equation}
0 \le \lambda_4\lambda_6- \lambda_5^2 = \frac14 (r+s-t)(r+t-s)(s+t-r)(r+s+t).
\end{equation}
If, say, $r \ge s \ge t\ge 0$, then $r+s\ge t$ and $r+t \ge s$
automatically, and so (6.27) holds if and only if $s+t \ge r$. By
symmetry, we see that (6.27) is true if and only if all three
inequalities hold.
\end{proof}
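The identity (6.24) with the values (6.26), and the factorization used in (6.27), can be confirmed symbolically (Robinson's $R$ is recalled from earlier in the paper):

```python
import sympy as sp

x, y, z, r, s, t = sp.symbols('x y z r s t')

# Robinson's form R, recalled from earlier in the paper
R = (x**6 + y**6 + z**6
     - (x**4*y**2 + x**2*y**4 + y**4*z**2 + y**2*z**4 + z**4*x**2 + z**2*x**4)
     + 3*x**2*y**2*z**2)

# the lambda values (6.26)
l1, l2, l3, l4, l6 = t**2, s**2, r**2, r**2, s**2
l5 = (t**2 - r**2 - s**2)/2

rhs = (l1*x**2*y**2*(x**2 - y**2)**2 + l2*x**2*z**2*(x**2 - z**2)**2
       + l3*y**2*z**2*(y**2 - z**2)**2
       + l4*(x**2 - z**2)**2*(x**2 - y**2 + z**2)**2
       + 2*l5*(x**2 - z**2)*(x**2 - y**2 + z**2)*(y**2 - z**2)*(-x**2 + y**2 + z**2)
       + l6*(y**2 - z**2)**2*(-x**2 + y**2 + z**2)**2)

gap = sp.expand((r**2*x**2 + s**2*y**2 + t**2*z**2)*R - rhs)

# the discriminant factorization in (6.27)
disc_gap = sp.expand(l4*l6 - l5**2
                     - sp.Rational(1, 4)*(r + s - t)*(r + t - s)*(s + t - r)*(r + s + t))
```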
\section{Extremal psd ternary forms}
In 1980, Choi, Lam and the author \cite{CLR1} studied
$|{\mathcal Z} (F)|$ for $F \in P_{3,m}$. Let
\begin{equation}
\alpha(m):= \max \left( \frac {m^2}4, \frac {(m-1)(m-2)}2 \right).
\end{equation}
By Theorem 3.5 in \cite{CLR1}, if $F \in P_{3,m}$, then $|{\mathcal Z}(F)| >
\alpha(m)$ implies $|{\mathcal Z}(F)| =
\infty$, and this occurs if and only if $F$ is divisible by the square
of an indefinite form. Let
\begin{equation}
B_{3,m} = \sup \{ |{\mathcal Z}(F)| \ : \ F \in P_{3,m},\ |{\mathcal Z}(F)| < \infty \}.
\end{equation}
Then by Theorem 4.3 in \cite{CLR1},
\begin{equation}
\begin{gathered}
\frac{m^2}4 \le B_{3,m} \le \frac {(m-1)(m-2)}2; \\B_{3,6k} \ge 10k^2,\quad
B_{3,6k+2} \ge 10k^2+1,\quad B_{3,6k+4} \ge 10k^2+4.
\end{gathered}
\end{equation}
In particular, $B_{3,6} = 10$. Further, if $F \in P_{3,6}$, and $|{\mathcal Z}(F)| >
10$, then $|{\mathcal Z}(F)| = \infty$ and
$F\in \Sigma_{3,6}$ is a sum of three squares (Theorem 3.7).
If $G$ is a ternary sextic and
$|{\mathcal Z}(G)| = 10$, then one of $\pm G$ is psd and not sos (Corollary
4.8). We wrote (p.12): ``it would be of interest to determine, if
possible, {\it all} forms $p \in P_{3,6}$ with exactly 10 zeros. From
a combinatorial point of view, it would already be of interest to
determine (or classify) all configurations of 10-point sets $S \subset
\mathbb P^2$ for which there exist $p \in P_{3,6}$ such that $S =
{\mathcal Z}(p)$ $\dots$ The only known psd ternary sextic
with 10 zeros is $R$.''
Sections five and six of this paper are inspired by this remark.
\begin{lemma}
If $F \in P_{3,6}$ is reducible, then $F \in \Sigma_{3,6}$.
\end{lemma}
\begin{proof}
If $F$ has an indefinite factor $H$, then $F = H^2G$, where
$G \in P_{3,2d} = \Sigma_{3,2d}$ for $2d \le 4$. If $F = F_1F_2$ for
definite $F_i$, then $\deg F_i \le 4$ again implies $F \in \Sigma_{3,6}$.
\end{proof}
A form $F$ in the closed convex cone
$P_{n,m}$ is {\it extremal} if $F = G_1 + G_2$ for $G_j \in
P_{n,m}$ implies that $G_j = \lambda_j F$ for $0 \le \lambda_j \in
{\mathbb R}$. Equivalently, $F$ is extremal if $F \ge G \ge 0$ implies $G =
\lambda F$. The set of extremal forms in $P_{n,m}$ is denoted by
$E(P_{n,m})$.
\begin{theorem}
Suppose $F \in P_{3,6}$ and $|{\mathcal Z}(F)| = 10$. Then $F \in E(P_{3,6})$.
\end{theorem}
\begin{proof}
Since $F \in \Delta_{3,6}$ by \cite{CLR1}, Lemma 7.1 implies that $F$ is
irreducible. Suppose $F \ge G \ge 0$. Then $F$ and $G$ are both
singular at the ten zeros of $F$, and since $10\cdot 2^2 > 6\cdot 6$,
Bezout implies that $F$ and $G$ have a common factor. Thus $G = \lambda F$
and $F$ is extremal.
\end{proof}
Theorems 5.1 and 5.7 imply that if $F \in E(P_{3,6})$ has Robinson's 8
zeros, then either $F = P_t \in \Delta_{3,6}$ for some $t > 0$ has ten
zeros, or $F = (\alpha F_1 + \beta F_2)^2 \in E(\Sigma_{3,6})$.
We can use the Perturbation Lemma to put a strong restriction on those
extremal forms which only have round zeros.
\begin{theorem}
If $P \in E(P_{3,2d})\cap \Delta_{3,2d}$ and all zeros of $P$ are
round, then $|{\mathcal Z}(P)| \ge \frac{(d+1)(d+2)}2$.
\end{theorem}
\begin{proof}
Suppose $P$ is psd, all its zeros are round, and $|{\mathcal Z}(P)|<
\frac{(d+1)(d+2)}2$. Then there exists a non-zero $H \in
I_{1,d}({\mathcal Z}(P))$
and the Perturbation Lemma applies to $(P,\pm H^2)$.
It follows that $P \pm cH^2$ is psd for some $c>0$ and
$P$ is not extremal because
\begin{equation}
P= \tfrac 12(P - cH^2) + \tfrac 12(P+ cH^2);
\end{equation}
$P \neq \lambda H^2$ since $P$ is not sos.
\end{proof}
\begin{corollary}
If $P \in E(P_{3,6})\cap \Delta_{3,6}$ and all zeros of $P$ are
round, then $|{\mathcal Z}(P)|=10$.
\end{corollary}
\begin{lemma}
If $P \in P_{3,6}$, and ${\mathcal Z}(P)$ contains four points in a line or
seven points on a quadratic, then $P \in \Sigma_{3,6}$.
\end{lemma}
\begin{proof}
If ${\mathcal Z}(P)$ contains four points $\pi_i$ on the line $L$, then
since $P$ is singular at its zeros, Bezout implies that
$L$ divides $P$ and $P \in \Sigma_{3,6}$ by Lemma 7.1. Similarly, if
${\mathcal Z}(P)$ contains seven points $\pi_i$ on the quadratic $Q$, then
Bezout again implies that $P$ is reducible.
\end{proof}
\begin{theorem}
If $P \in E(P_{3,6})\cap \Delta_{3,6}$ and all zeros of $P$ are
round, then $P$ can be derived by Hilbert's Method using Theorem 4.3.
\end{theorem}
\begin{proof}
Let $A$ denote any subset of seven of the ten zeros of $P$. By Lemma
7.5, $A$ satisfies the hypotheses of Theorem 4.3.
\end{proof}
Given positive $f \in \mathbb R_{n,2d}$ and $\pi \in \mathbb R^n$, let
$E(f,\pi)$ denote the set of $g \in \mathbb R_{n,d}$
such that there exists a neighborhood ${\mathcal N}_g$ of $\pi$ and $c
> 0$ so that $f - cg^2$ is non-negative on
${\mathcal N}_g$.
\begin{lemma}
$E(f,\pi)$ is a subspace of $\mathbb R_{n,d}$.
\end{lemma}
\begin{proof}
Clearly, $g \in E(f,\pi)$ implies $\lambda g \in
E(f,\pi)$ for $\lambda \in \mathbb R$. Suppose $g_1, g_2 \in E(f,\pi)$;
specifically, $f - c_1g_1^2 \ge 0$ on ${\mathcal N}_1$ and $f -
c_2g_2^2 \ge 0$ on ${\mathcal N}_2$, and let
$\mathcal N = {\mathcal N}_{1} \cap {\mathcal N}_{2}$ and $c =
\min(c_1,c_2)$. The identity
\begin{equation}
f - \tfrac c4(g_1+g_2)^2 = \tfrac 12( f - cg_1^2) + \tfrac 12(f -
cg_2^2) + \tfrac c4(g_1-g_2)^2
\end{equation}
shows that $g_1+g_2 \in E(f,\pi)$.
\end{proof}
If $f(\pi) > 0$, then $E(f,\pi) = \mathbb R_{n,d}$. Let
\begin{equation}
\delta(f,\pi) := \binom{n+d}d - \dim E(f,\pi)
\end{equation}
measure the singularity of the zero of $f$ at $\pi$; the argument of
the Perturbation Lemma shows that
$\delta(f,\pi) = 1$ if and only if $f$ has a round zero at $\pi$. These
definitions also apply in the obvious way to the homogeneous case.
\begin{theorem}
If $P \in E(P_{3,2d})\cap\Delta_{3,2d}$, then
\begin{equation}
\delta(P): = \sum_{\pi \in {\mathcal Z}(P)} \delta(P,\pi) \ge \frac{(d+1)(d+2)}2.
\end{equation}
\end{theorem}
\begin{proof}
Let
\begin{equation}
{\mathcal E}:= \bigcap_{\pi \in {\mathcal Z}(P)} E(P,\pi).
\end{equation}
Since
\begin{equation}
\dim {\mathcal E} \ge \frac{(d+1)(d+2)}2 - \delta(P),
\end{equation}
if (7.7) fails, then there exists $0 \neq H \in {\mathcal E}$. The
argument of
the Perturbation Lemma applies to $(P,\pm H^2)$, so that (7.4)
holds for some $c > 0$, and $P$ is not extremal.
\end{proof}
It can be checked that $M$ has round zeros at $(1,\pm 1,
\pm 1)$. Let $\pi = (1,0,0)$. If $M - c F^2$ is non-negative near
$(1,0,0)$ for a ternary cubic $F$, then by the method of cages (see
\cite[\S 3]{CLR3}), $x^3, x^2z, xz^2$ cannot appear in $F$, whereas every
other monomial is in $E(M,\pi)$, and so $\delta(M,\pi) =
3$. By symmetry, $\delta(M,(0,1,0)) = 3$,
so that $\delta(M) = 4\cdot 1 + 2 \cdot 3 = 10.$
A similar calculation for $S$ shows that it has round zeros at $(1,\pm 1,
\pm 1)$ and that $\delta(S,e_i) = 2$ at the unit vectors $e_i$
so $\delta(S) = 4 \cdot 1 + 3 \cdot 2 = 10$ as well.
Examples 6.4 and 6.5 were constructed under a heuristic in which
``coalescing'' zeros explain higher-order singularities. This leads to
a perhaps overly-optimistic conjecture:
\begin{conjecture}
If $P \in E(P_{3,6}) \cap \Delta_{3,6}$, then $\delta(P) = 10$,
and either $P$ has ten round zeros, or is the limit of psd extremal
ternary sextics with ten round zeros.
\end{conjecture}
These results are likely more complicated in higher
degree. The ternary octic
\begin{equation}
T(x,y,z) = x^4y^4 + x^2z^6+y^2z^6 - 3x^2y^2z^4 = x^4y^4z^6M(1/x,1/y,1/z)
\end{equation}
is in $E(P_{3,8}) \cap \Delta_{3,8}$; see \cite[p.372]{Re0}. It has
five round zeros at $(0,0,1)$ and $(1,\pm1, \pm 1)$, and more
singular zeros at $(1,0,0)$ and $(0,1,0)$ at which $\delta = 5$,
so that $\delta(T) = 15$. On the other hand, for
\begin{equation}
U(x,y,z) = x^2(x-z)^2(x-2z)^2(x-3z)^2 + y^2(y-z)^2(y-2z)^2(y-3z)^2 \in
\Sigma_{3,8},
\end{equation}
${\mathcal Z}(U) = \{(i,j,1): 0 \le i, j \le 3\}$, so $\delta(U) = 16$.
Thus, there is no threshold value for $\delta$ separating
$\Sigma_{3,8}$ and $\Delta_{3,8}$, as there is for sextics.
\section{Ternary forms in higher degree}
For $d \ge 3$, let
\begin{equation}
T_d = \{(i,j)\ : \ 0 \le i, j,\ i+j \le d\} \subset \mathbb Z^2
\end{equation}
denote a right triangle of $\frac{(d+1)(d+2)}2$ lattice points.
Define the falling product by
\begin{equation}
(t)_m=\prod_{j=0}^{m-1}(t-j).
\end{equation}
The following
construction is due to Biermann \cite{Bie}, see \cite[pp.31-32]{Re1}.
For $(r,s) \in T_d$, let
\begin{equation}
\phi_{r,s,d}(x,y) := \frac{ (x)_r(y)_s(d-x-y)_{d-r-s}}{r!s!(d-r-s)!}.
\end{equation}
\begin{lemma}
If $(i,j) \in T_d$, then $\phi_{r,s,d}(i,j) = 0$ if $(i,j) \neq (r,s)$
and $\phi_{r,s,d}(r,s) = 1$.
\end{lemma}
\begin{proof}
Observe that $(n)_m=0$ if $n \in \{0,\dots,m-1\}$ and $(m)_m =
m!$. If $(i,j) \in T_d$, then $0 \le i$, $0 \le j$ and $0 \le d - i -
j$. Thus $\phi_{r,s,d}(i,j) = 0$ unless $i \ge r$, $j \ge s$ and $d-i-j
\ge d - r - s$, i.e., $i+j \le r+s$; together these force $(i,j) = (r,s)$.
The second assertion is immediate.
\end{proof}
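Lemma 8.1 is easy to test numerically; the sketch below checks the Lagrange (delta) property of the $\phi_{r,s,d}$ on $T_d$ for $d = 5$:

```python
from math import factorial

def falling(t, m):
    # the falling product (t)_m = t(t-1)...(t-m+1)
    out = 1
    for j in range(m):
        out *= t - j
    return out

def phi(r, s, d, x, y):
    # phi_{r,s,d}(x,y) from (8.3)
    return falling(x, r)*falling(y, s)*falling(d - x - y, d - r - s) \
        / (factorial(r)*factorial(s)*factorial(d - r - s))

d = 5
Td = [(i, j) for i in range(d + 1) for j in range(d + 1) if i + j <= d]
delta_ok = all(phi(r, s, d, i, j) == (1 if (i, j) == (r, s) else 0)
               for (r, s) in Td for (i, j) in Td)
```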
\begin{theorem}
Suppose $B \subseteq T_d$ and $A = T_d \smallsetminus B$. Then a basis for
$I_{1,d}(A)$ is given by $\{\phi_{r,s,d} : (r,s) \in B\}$.
\end{theorem}
\begin{proof}
The set $\{\phi_{r,s,d}: (r,s) \in T_d\}$ consists of the correct
number of linearly independent polynomials and so is a basis for
${\mathbb R}_{2,d}$.
If $p \in {\mathbb R}_{2,d}$, then upon evaluation at $(r,s)
\in T_d$, we immediately obtain
\begin{equation}
p(x,y) = \sum_{(r,s) \in T_d} p(r,s)\phi_{r,s,d}(x,y).
\end{equation}
If $p \in I_{1,d}(A)$, then $\phi_{r,s,d}$ has non-zero
coefficient in (8.4) only if $(r,s) \in B$.
\end{proof}
We use this construction in the following example, which was inspired by
looking at the regular pattern of pine trees below the Sulphur
Mountain tram, during a break in the October 2006 BIRS program on
``Positive Polynomials and Optimization''.
\begin{example}[The Banff Gondola Polynomials]
Suppose $d \ge 3$ and let
\begin{equation}
A_d = T_d \smallsetminus \{(d,0),(0,d)\} = \{(i,j): 0 \le i,j \le d-1,
i+j \le d\}.
\end{equation}
By Theorem 8.2, $I_{1,d}(A_d)$ is spanned by $f_1(x,y) =
\phi_{d,0,d}(x,y) = (x)_d$ and
$f_2(x,y) = \phi_{0,d,d}(x,y) = (y)_d$, and it is easy to see that
${\mathcal Z}(f_1) \cap {\mathcal Z}(f_2)
= \{0,\dots,d-1\}^2$, so that
\begin{equation}
\tilde A_d = \{(i,j): 0 \le i,j \le d-1, i+j \ge d+1\}.
\end{equation}
Note that $(i,j) \in \tilde A_d$ implies that $i,j \ge 2$.
Let
\begin{equation}
\begin{gathered}
g_d(x,y) = (x)_2(y)_2(x+y-2)_{d-1}(x+y-4)_{d-3} \\=
x(x-1)y(y-1)(x+y-2)(x+y-3) \prod_{k=0}^{d-4} (x+y-4-k)^2.
\end{gathered}
\end{equation}
We claim that $g_d$ is singular at $\pi \in A_d$ and positive at $\pi \in
\tilde A_d$. First, it is easy to check that each point
in $A_3$ lies on at least two of the lines, and $g_3(2,2) =8$. Now
suppose $d \ge 4$ and $(r,s) \in A_d$. If $4 \le r+s \le d$, then
$(r,s)$ lies on a squared factor; if $2 \le r+s \le 3$, then $(r,s)$
lies on $x+y-2=0$ or $x+y-3=0$, but also, at least one of $\{r,s\}$ is 0
or 1. Finally, if $0 \le r+s \le 1$, then $\{r,s\}\subseteq \{0,1\}$.
If $(r,s) \in \tilde A_d$ for any $d$, then $r,s \ge 2$ and $r+s \ge d+1$,
so each factor in $g_d$ is positive at $(r,s)$. It follows from
Theorem 3.4 that there exists $c_d > 0$ so that
\begin{equation}
(x)_d^2 + (y)_d^2 + c_d(x)_2(y)_2(x+y-2)_{d-1}(x+y-4)_{d-3}
\end{equation}
is positive and not a sum of squares. Note that this polynomial has at
least $|A_d|$ zeros, so $B_{3,2d} \ge \frac{d^2+3d-2}2$. This improves
the lower bound in (7.3) for $2d = 8, 10$.
It can be shown that $c_3 = 4/3$ (exactly) and
that $c_d \le 12d^{-2}$, so $c_d \to 0$.
\end{example}
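The claims about $g_d$ in Example 8.3 can be checked symbolically for a specific degree; below, $d = 5$:

```python
import sympy as sp

x, y = sp.symbols('x y')
d = 5

def falling(e, m):
    # symbolic falling product (e)_m
    out = sp.Integer(1)
    for j in range(m):
        out = out*(e - j)
    return out

g = falling(x, 2)*falling(y, 2)*falling(x + y - 2, d - 1)*falling(x + y - 4, d - 3)
gx, gy = sp.diff(g, x), sp.diff(g, y)

Ad = [(i, j) for i in range(d) for j in range(d) if i + j <= d]
tildeAd = [(i, j) for i in range(d) for j in range(d) if i + j >= d + 1]

# g_d is singular (value and gradient vanish) at each point of A_d ...
singular = all(g.subs({x: i, y: j}) == 0 and gx.subs({x: i, y: j}) == 0
               and gy.subs({x: i, y: j}) == 0 for (i, j) in Ad)
# ... and strictly positive on the residual set
positive = all(g.subs({x: i, y: j}) > 0 for (i, j) in tildeAd)
```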
We conclude with some speculations about Hilbert's Method in
degree $d\ge 4$. Suppose $A$ is a set of $\binom {d+2}2-2$
points in general position, so that $I_{1,d}(A)$ has basis
$\{f_1,f_2\}$. By Bezout, we can only say that $|\tilde A| \le d^2 - |A| =
\binom{d-1}2$ as the common zeros do not have to be real or distinct. We have
$\dim I_{1,d}^2(A) = 3$ and, from (2.5),
\begin{equation}
\dim I_{2,2d}(A) \ge \binom{2d+2}2 - 3\left(\binom {d+2}2-2\right) =
\binom{d-1}2 + 3.
\end{equation}
There exist $\binom{d-1}2$ linearly independent polynomials
in $I_{2,2d}(A) \smallsetminus I_{1,d}^2(A)$, and it is plausible that
one is positive on $\tilde A$. If so, then
Hilbert's Method could be applied.
If $r \ge 3$, and $A$ is a set of $\binom {d+2}2-r$
points in general position, so that $\dim I_{1,d}(A) = r$,
then it is plausible to expect $\tilde {\mathcal A} =
\emptyset$. We have
\begin{equation}
\begin{gathered}
\dim I_{2,2d}(A) \ge \binom{2d+2}2 - 3\left(\binom {d+2}2-r\right) =
\binom{d-1}2 + 3r-3\\ = \frac {r(r+1)}2 + \frac{(d+1-r)(d+r-4)}2
\ge \dim I_{1,d}^2 + \frac{(d+1-r)(d+r-4)}2,
\end{gathered}
\end{equation}
so if $r \le d$, $I_{2,2d}(A) \smallsetminus
I_{1,d}^2(A)$ would be non-empty, and again Hilbert's Method
could be applied. We hope to return to these questions elsewhere.
\section{Introduction}
CAD models describe surfaces using a collection of patches that meet in curves and points. Ideally, the CAD surface is watertight, but in practice, there are often gaps or overlaps between neighboring patches. These gaps/overlaps may cause serious meshing and finite element analysis problems, and in practical applications the CAD model often needs to be corrected before meshing is possible. This paper develops a robust isogeometric method \cite{IGABook} for handling CAD surfaces with gaps/overlaps. The main idea is to
cover the gaps/overlaps with a three-dimensional mesh and then use a hybrid variable on this mesh together with a Nitsche-type formulation.
The hybrid variable transfers data between neighboring patches, and there is no direct communication between the patches. To
obtain a convergent method, the hybrid variable must be given enough stiffness in the directions normal to the interface. We show that this
can be done by adding a suitable term to the weak statement.
We allow trimmed patches and add appropriate stabilization terms to control the behavior of the finite element functions in the vicinity of the
trimmed boundaries using techniques from CutFEM, see \cite{BurCla15}. In practice, we suggest an octree mesh for setting up the hybrid mesh to facilitate efficient computation of the involved terms. We allow standard conforming finite element spaces as well as spline
spaces with higher regularity. We derive error estimates and present several numerical examples illustrating the method's convergence and application to a realistic CAD model.
\paragraph{Related Work.}
A framework that is also based on a patchwise parametrically described geometry combined with a Nitsche-type method to couple the solution over patch interfaces is discontinuous Galerkin isogeometric analysis \cite{MR3630844,MR3643563}; gaps/overlaps are considered in \cite{MR3566910, MR3547686}. One major difference from the present work is that the method involves the explicit construction of a parametric map between corresponding points over interfaces with gaps, which in our method is implicit through the stabilization of the hybrid variable. In our view, the hybridized approach leads to a considerably more convenient and robust implementation that also has the benefit of supporting interfaces coupling more than two patches, cf. \cite{HanJonLar17}.
Our usage of the hybrid variable in spirit resembles the idea of the bending strip method for Kirchhoff plates \cite{MR2672111}, in which strips of fictitious material with unidirectional bending stiffness and zero membrane stiffness are placed to cover the gaps and are used for coupling the solution over the patch interfaces.
The coupling of solutions over imperfect interfaces is also addressed in overlapping mesh problems where the solution is defined on two separate meshes whose boundaries do not match, but rather intersect each other's meshes. This was extended to gaps in \cite{MR2431595,MR2344045}, where elements close to the interface were modified to cover the gap, eliminating the gap regions and creating an overlapping mesh situation instead. However, it is not clear how overlapping mesh techniques could be utilized to couple solutions on surfaces since the patch meshes do not necessarily lie on the same smooth surface.
\paragraph{Outline.}
The paper is organized as follows: In Section 2 we present the method, in Section 3 we show stability and error estimates, and in Section 4
we present numerical experiments and examples.
\section{Model Problem and Method}
The main contribution of this paper is the robust coupling of solutions over patch interfaces with gaps/overlaps. To simplify the derivation and analysis of the method, we consider a simplified model problem that allows us to focus on the central issue and avoid complicated notation and unrelated technical arguments. We include remarks and references on how the method is extended to more general problems on CAD surfaces.
\subsection{Model Problem}
We introduce a two-dimensional model problem with a gap at an internal interface, derive a hybridized formulation
and the corresponding finite element method, together with the necessary notation to proceed with the analysis.
\paragraph{Model for a Domain with Gap.} We introduce the following set-up and notation, illustrated in Figure~\ref{fig:model-domain}:
\begin{itemize}
\item Consider a domain $\Omega\subset \IR^2$ and let $\Omega_1$ and $\Omega_2$ be a partition of $\Omega$ into two subsets separated by a smooth interface $\Gamma$, such that $\Omega_1$ is the exterior domain and $\Omega_2$ is the interior domain. Let $U_\delta(\Gamma)$ be the open tubular neighborhood of $\Gamma$ with thickness $2 \delta$. Then there is $\delta_0>0$ such that the closest point mapping $p_\Gamma:U_{\delta_0} (\Gamma) \rightarrow \Gamma$ is well defined.
\item Let $\Omega_{i,\delta}$ be obtained by perturbing $\Gamma$ in the normal
direction by a function $\gamma_i \in C(\Gamma)$ such that
\begin{equation}\label{eq:gapbound}
\| \gamma_i \|_{L^\infty(\Gamma)} \lesssim \delta \leq \delta_0
\end{equation}
More precisely,
\begin{align}
\partial \Omega_{i,\delta} = \{ x + \gamma_i(x) n_\Gamma(x) \,:\, x \in \Gamma \}
\end{align}
where $n_\Gamma(x)$ is the unit normal to $\Gamma$ exterior to $\Omega_2$. Note that the
functions $\gamma_1$ and $\gamma_2$ are different, and therefore the domains
$\Omega_{1,\delta}$ and $\Omega_{2,\delta}$ do not match perfectly at the interface; instead there
may be a gap or an overlap. However, in view of (\ref{eq:gapbound}) we have
\begin{align}
\partial \Omega_{1,\delta} \cup \partial \Omega_{2,\delta} \subset U_{\delta}(\Gamma)
\subset U_{\delta_0}(\Gamma)
\end{align}
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[t]{0.27\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-model-problem-1}
\subcaption{Two patch domain}
\label{fig:model-a}
\end{subfigure}
\begin{subfigure}[t]{0.27\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-model-problem-2}
\subcaption{Perturbed patches}
\label{fig:model-b}
\end{subfigure}
\begin{subfigure}[t]{0.27\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-model-problem-3}
\subcaption{Tubular neighborhood}
\label{fig:model-c}
\end{subfigure}
\\[1.0ex]
\begin{subfigure}[t]{0.7\linewidth}\centering
\includegraphics[width=0.8\linewidth]{model-problem-3d-blockmesh}
\subcaption{Three-dimensional hybrid mesh covering the interface}
\label{fig:model-bx}
\end{subfigure}
\caption{\emph{Model problem with gap/overlap.} \emph{Top:} In the derivation and analysis of the method we use this conceptual construction of a two-dimensional two-patch domain with gaps/overlaps stemming from perturbation of the patch boundaries facing the interface.
\emph{Bottom:} While the perturbed two-patch domain lives entirely in the two-dimensional plane, the hybrid variable will, for increased generality, live on a three-dimensional mesh covering the imperfect interface.}
\label{fig:model-domain}
\end{figure}
\paragraph{Exact Model Problem.}
Consider the following model interface problem on the exact partition of $\Omega$ (without a gap/overlap): Find $u$ fulfilling
\begin{align} \label{eq:model-problem}
-\Delta u_i = f_i \qquad \text{in $\Omega_i$},\quad i=1,2
\end{align}
with interface conditions
\begin{align} \label{eq:model-problem-interface}
u_1 = u_2, \qquad \nabla_{n_1} u_1 + \nabla_{n_2} u_2 = 0 \qquad \text{on $\Gamma$}
\end{align}
and a homogeneous Dirichlet boundary condition $u=0$ on $\partial\Omega$.
Here $u_i$ denotes the solution on the patch $\Omega_i$, and analogously we let $u_0$ denote the solution on the interface $\Gamma$. We assume that the weak solution on each patch satisfies $u_i \in H^{s}(\Omega_i) \cap H^1_0(\Omega)|_{\Omega_i}$, where $s>3/2$. Further, for the solution on the interface we assume $u_0 \in H^{s}(\Gamma)$, which is likely $1/2$ more regularity than is required, since $u_0$ is essentially the trace of $u$ along $\Gamma$, but we maintain this assumption for simplicity. In summary, we assume a weak solution with the following decomposition into three fields
\begin{align}
u = (u_0; u_1; u_2) \in W = V_0 \oplus V_1 \oplus V_2 = H^s(\Gamma) \oplus \bigl( H^s(\Omega_1)\cap H^1_0(\Omega)|_{\Omega_1} \bigr) \oplus H^s(\Omega_2)
\end{align}
\paragraph{Extended Solution.}
We will next derive a weak formulation on the perturbed patches $\Omega_{i,\delta}$ instead of on the exact patches $\Omega_i$. To make sense of the exact solution $u$ in such a formulation we must first extend $u$ to the perturbed domains. We recall that there is an extension operator $E_i : H^s(\Omega_{i}) \rightarrow H^s(\IR^2)$, independent of $s$,
such that
\begin{align}
\| E_i v \|_{H^s(\IR^2)} \lesssim \| v \|_{H^s(\Omega_{i})}
\end{align}
and $E_i v = v$ on $\Omega_i$, see \cite{stein70}. For the derivation of the hybridized formulation we introduce fields $u_0,v_0$ defined on a domain $\Omega_0 \subset \IR^3$ such that
\begin{equation} \label{eq:extension-patch}
\partial \Omega_{1,\delta} \cup \partial \Omega_{2,\delta} \cup \Gamma
\subset \Omega_0 \subset U_{\delta_0}(\Gamma)
\end{equation}
and hence we must also extend the exact solution $u$ on $\Gamma$ to $\Omega_0$. To this end we define an extension $E_0: H^s(\Gamma) \rightarrow H^s(U_{\delta_0}(\Gamma))$
such that $(E_0 v)|_x = v\circ p_\Gamma (x)$. Clearly, $E_0 v = v$ on $\Gamma$. We then have
\begin{align} \label{eq:extension-interface}
\| E_0 v \|_{H^s(U_{\delta_0}(\Gamma))} \lesssim \delta_0 \| v \|_{H^s(\Gamma)}
\end{align}
see \cite{BurHanLarMas18}.
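As an illustration of the closest point extension $E_0$, the following minimal Python sketch (a hypothetical example, with $\Gamma$ taken as the unit circle so that $p_\Gamma(x) = x/|x|$; the helper names are ours) verifies that $(E_0 v)(x) = v \circ p_\Gamma(x)$ is constant in the directions normal to $\Gamma$:

```python
import numpy as np

def closest_point_circle(x):
    """Closest point projection p_Gamma onto the unit circle Gamma."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def extend(v, x):
    """Closest point extension (E_0 v)(x) = v(p_Gamma(x))."""
    return v(closest_point_circle(x))

# A function defined on Gamma, parametrized by the angle:
v = lambda p: np.cos(3.0 * np.arctan2(p[1], p[0]))

# The extension is constant in the radial (normal) direction:
p = np.array([np.cos(0.7), np.sin(0.7)])
values = [extend(v, r * p) for r in (0.8, 1.0, 1.2)]
print(np.allclose(values, values[0]))  # True
```

In particular, $E_0 v = v$ on $\Gamma$ itself, since $p_\Gamma$ is the identity there.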
For compactness we introduce the notation
\begin{align}
u^e = (u_0^e; u_1^e; u_2^e) = (E_0 u_0; E_1 u_1; E_2 u_2)
\end{align}
where the subscript of the field indicates which extension operator is used.
We also apply this notation to spaces such that, for instance, $W^e = \{ v = w^e \,:\, w \in W \}$.
\paragraph{Hybridized Weak Formulation.}
Since an extended function coincides with the original function on its original domain, we may replace the fields in the continuous problem \eqref{eq:model-problem}--\eqref{eq:model-problem-interface} by their extensions.
We then, patchwise, multiply \eqref{eq:model-problem} by a test function $v_i^e \in V_i^e$, integrate over the perturbed patch $\Omega_{i,\delta}$, and apply a Green's formula to obtain
\begin{align}
\sum_{i=1}^2 (f_i^e,v_i^e)_{\Omega_{i,\delta}} &= \sum_{i=1}^2 (-\Delta u_i^e, v_i^e)_{\Omega_{i,\delta}}
\\
&= \sum_{i=1}^2 (\nabla u_i^e, \nabla v_i^e)_{\Omega_{i,\delta}}
- (\nabla_n u_i^e, v_i^e)_{\partial \Omega_{i,\delta}}
\\
&= \sum_{i=1}^2 (\nabla u_i^e, \nabla v_i^e)_{\Omega_{i,\delta}} - (\nabla_n u_i^e, v_i^e - v_0^e)_{\partial \Omega_{i,\delta}} - (\nabla_n u_i^e, v_0^e)_{\partial \Omega_{i,\delta}}
\\
&\approx \sum_{i=1}^2 (\nabla u_i^e, \nabla v_i^e)_{\Omega_{i,\delta}} - (\nabla_n u_i^e, v_i^e - v_0^e)_{\partial \Omega_{i,\delta}}
- (u_i^e - u_0^e, \nabla_n v_i^e)_{\partial \Omega_{i,\delta}}
\\
&\qquad\quad
+ \beta h^{-1} ( u_i^e - u_0^e, v_i^e - v_0^e)_{\partial \Omega_{i,\delta}}
- (\nabla_n u_i^e, v_0^e)_{\partial \Omega_{i,\delta}}
\end{align}
where we added and subtracted the functions $u_0^e = u_0 \circ p_\Gamma = u|_\Gamma \circ p_\Gamma$ and $v_0^e$, and in the
last step we added terms involving $u_i^e - u_0^e$ that are not exactly zero, since they are
evaluated on the perturbed curves $\partial \Omega_{i,\delta}$, which differ from $\Gamma$.
Due to the construction of $E_0$ using the closest point mapping $p_\Gamma$, the functions $u_0^e$ and $v_0^e$ in the continuous problem are constant in the directions orthogonal to $\Gamma$. In the discrete setting, this property will instead be imposed weakly, since it is not straightforward to enforce strongly.
\paragraph{Application to Surfaces.} The model problem can be directly extended to a setting with a surface built up by a set of patches,
$\mcO = \{ \Omega_i : i \in I\}$ with $I$ an index set, and interfaces $\{\Gamma_{ij} = \partial \Omega_i \cap \partial \Omega_j\}$. The patches
are defined by a mapping $F_i : \IR^2 \supset \widehat{\Omega}_i \rightarrow \Omega_i \subset \IR^3$, and a set of trim curves
$\widehat{\Gamma}_{ij}$. In the model problem \eqref{eq:model-problem} the Laplace operator is replaced by the Laplace-Beltrami operator $\Delta_\Omega$, the gradients are replaced by tangential gradients $\nabla_\Omega$, and
the interface conditions are
\begin{align}
u_i = u_j, \qquad \nabla_{\nu_i} u_i + \nabla_{\nu_j} u_j = 0 \qquad \text{on $\Gamma_{ij}$}
\end{align}
where $\nabla_{\nu_i} = \nu_i \cdot \nabla_\Omega$ is the tangential derivative along the exterior unit co-normal $\nu_i$ to $\partial \Omega_i$. Note that here $\nu_i$ may be different from $-\nu_j$ and
thus $\Gamma_{ij}$ may be a sharp edge on the surface across which the surface normal is discontinuous. The perturbation of the surface may
be precisely defined by first extending $\Omega_i$ to a slightly larger smooth surface $\widetilde{\Omega}_i$ and then assuming that
$\partial \Omega_{i,\delta}$ is a smooth curve on $\widetilde{\Omega}_i$ such that
\begin{align}\label{eq:pert-int}
\partial \Omega_{i,\delta} \subset U_\delta(\Gamma)
\end{align}
The surface patches can be further perturbed by the action of a rigid body motion in $\IR^3$ with norm less than $\delta$. The analysis we
present is essentially directly applicable to this setting, since the key assumption is (\ref{eq:pert-int}). A further difficulty, which we do not consider
here, is a more general perturbation of the mapping $F_i$. We have chosen to present the method
and analysis in the simple setting outlined in the previous paragraph, since it captures the main new challenges while the notation is much simpler.
\paragraph{Implementation.} In practice we first import a number of patches that do not match perfectly. These patches $\{ \Omega_i : i \in I \}$
are each described by the mapping $F_i$ together with a set of trim curves $\{ \gamma_j : j \in J_i \}$ defining the boundary of the patch in
the reference domain. We then compute the intersections between the mapped trim curves $F_i(\gamma_j)$ and the voxels of an octree, which allows local refinement. We can then extract a suitable cover of the gaps between the mapped patches, consisting of a face-connected set of voxels, which serves as the mesh for the hybrid variable. The precise formulation of such algorithms is not the focus of this paper and we leave it for future work. Note, in particular, that no information is passed directly between two patches; instead, all information is passed through the hybrid variable.
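While we leave the precise algorithms for future work, the extraction of a face-connected voxel set can be sketched as a breadth-first search over integer voxel indices. The following Python fragment is an illustration under our own simplified data model (voxels as index triples), not the actual implementation:

```python
from collections import deque

def face_connected_component(voxels, seed):
    """Extract the face-connected subset of `voxels` reachable from `seed`.

    `voxels` is a set of integer index triples (i, j, k); two voxels are
    face-connected if they differ by one in exactly one index.
    """
    if seed not in voxels:
        return set()
    component, queue = {seed}, deque([seed])
    while queue:
        i, j, k = queue.popleft()
        for di, dj, dk in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = (i + di, j + dj, k + dk)
            if nb in voxels and nb not in component:
                component.add(nb)
                queue.append(nb)
    return component

# Voxels along a gap, plus one voxel touching only at an edge:
voxels = {(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 1, 1)}
print(sorted(face_connected_component(voxels, (0, 0, 0))))
# [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
```

The edge-connected voxel $(3,1,1)$ is excluded, so the resulting set is suitable as a face-connected cover of the gap.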
\subsection{Hybridized Finite Element Method}
\paragraph{Finite Element Spaces.}
To define the finite element spaces we assume that we have polygonal domains $\widetilde{\Omega}_{i} \subset \IR^2$ with $\Omega_{i,\delta} \subset \widetilde{\Omega}_{i}$
and families of quasiuniform meshes $\widetilde{\mcT}_{h,i} $ on $\widetilde{\Omega}_i$ with mesh parameter $h \in (0,h_0]$, for $i=1,2$.
We define the active meshes and the corresponding discrete domains by
\begin{equation}
\mcT_{h,i} = \{ T \in \widetilde{\mcT}_{h,i} : T \cap \Omega_{i,\delta} \neq \emptyset \}, \qquad \Omega_{h,i} = \cup_{T \in \mcT_{h,i} }T , \qquad i =1,2
\end{equation}
For the hybrid mesh we instead consider a polyhedral domain $\widetilde{\Omega}_0 \subset \IR^3$ with $\Gamma \subset U_{\delta_0}(\Gamma) \subset \widetilde{\Omega}_0$
and a family of quasiuniform meshes $\widetilde{\mcT}_{h,0}$ on $\widetilde{\Omega}_{0}$ with mesh parameter $h \in (0,h_0]$. Then we define the active mesh by
\begin{equation}
\mcT_{h,0} = \{ T \in \widetilde{\mcT}_{h,0} : T \cap U_\delta(\Gamma) \neq \emptyset \}, \qquad \Omega_{h,0} = \cup_{T \in \mcT_{h,0}} T
\end{equation}
Next we let $\widetilde{V}_{h,i}$ be a conforming finite element or spline space on $\widetilde{\mcT}_{h,i}$ and we define the active finite
element spaces by restriction to the active mesh
\begin{equation}
V_{h,i} = \widetilde{V}_{h,i} |_{\Omega_{h,i}}, \qquad i = 0,1,2
\end{equation}
Finally, the finite element space is the direct sum of our three spaces
\begin{align}
W_h = V_{h,0} \oplus V_{h,1} \oplus V_{h,2}
\end{align}
Here we emphasize that the space $V_{h,0}$ is defined on the three dimensional mesh $\mcT_{h,0}$ and the spaces $V_{h,i}$ are defined on the two dimensional meshes $\mcT_{h,i}$, $i=1,2.$
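The active mesh construction only requires a predicate testing whether a background element intersects the computational domain. A one-dimensional analogue (purely illustrative; in the actual implementation the intersection tests are geometric, in two and three dimensions) can be sketched as:

```python
import numpy as np

def active_cells(nodes, a, b):
    """Indices of background cells [nodes[i], nodes[i+1]] that intersect
    the open interval (a, b) -- a 1D analogue of the active mesh T_{h,i}."""
    left, right = nodes[:-1], nodes[1:]
    return np.where((right > a) & (left < b))[0]

# Background mesh on [0, 2] with h = 0.25; computational domain (0, 1.1):
nodes = np.linspace(0.0, 2.0, 9)
print(active_cells(nodes, 0.0, 1.1))  # [0 1 2 3 4]
```

Note that the cell $[1.0, 1.25]$ is active although it is only partially covered by the domain; this is precisely the cut element situation handled by the ghost penalty stabilization below.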
\paragraph{Definition of the Method.}
Based on the derivation we define the method: find $u_{h} \in W_h$ such that
\begin{equation}
A_h(u_h,v) = l_h(v)\qquad \forall v \in W_h
\end{equation}
where
\begin{align}
A_h(v,w) &= s_{h,0}(v_0,w_0) + \sum_{i=1}^2 \bigl( a_{h,i}(v_i,w_i) + s_{h,i}(v_i,w_i) \bigr)
\\
l_h(v) &= \sum_{i=1}^2 (f_i^e, v_i)_{\Omega_{i,\delta}}
\\
\shortintertext{and we have the hybrid variable stabilization}
\label{eq:blockstab}
s_{h,0}(v_0,w_0) &= \tau_0
h^{-\alpha}
\Bigl(
( \nabla_\Gamma^\perp v_0,\nabla_\Gamma^\perp w_0)_{\mcT_{h,0}}
+
\sum_{l=1}^p h^{2l + 1} \bigl( \llbracket \nabla_n^l v_0 \rrbracket , \llbracket \nabla_n^l w_0 \rrbracket \bigr)_{\mcF_{h,0}}
\Bigr)
\end{align}
where $\alpha$ is a parameter, $\nabla_\Gamma^\perp$ is the component of $\nabla_{\IR^3}$ normal to $\Gamma$, and $\llbracket \nabla_n^l w \rrbracket$ denotes the jump over a face in the $l$th directional derivative of $w$ in the direction of the face normal. The forthcoming analysis shows that $\alpha = 2$ is a suitable choice. The remaining forms are defined by
\begin{align} \label{eq:ahi}
a_{h,i}(v_i,w_i) &= (\nabla v_i , \nabla w_i)_{\Omega_{i,\delta}}
- (\nabla_n v_i, w_i - w_0)_{\partial \Omega_{i,\delta}}
- (v_i - v_0, \nabla_n w_i)_{\partial \Omega_{i,\delta}}
\\ \nonumber
&\qquad
+ \beta h^{-1} ( v_i - v_0, w_i - w_0)_{\partial \Omega_{i,\delta}}
\\ \label{eq:ghost-penalty}
s_{h,i}(v_i,w_i) &= \tau_i \sum_{l=1}^p h^{2l-1} \bigl( \llbracket \nabla_n^l v_i \rrbracket , \llbracket \nabla_n^l w_i \rrbracket \bigr)_{\mcF_{h,i}}
\end{align}
where $\tau_0,\tau_1,\tau_2$ and $\beta$ are positive parameters.
For simplicity, we do not consider the implementation of the Dirichlet boundary condition on the
exterior boundary $\partial \Omega$. We could either assume that we have a matching mesh at $\partial \Omega$ and use strong boundary conditions
or use a weak Nitsche-type method.
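To make the structure of the stabilization forms concrete, the following one-dimensional sketch (illustrative only; we model the face measure by the adjacent cell size, and the helper name is ours) evaluates a ghost-penalty-type term $\tau \sum_F h \, \llbracket v' \rrbracket_F^2$ for a piecewise linear function, corresponding to \eqref{eq:ghost-penalty} with $p = 1$:

```python
import numpy as np

def ghost_penalty_1d(nodes, vals, tau=1.0):
    """s_h(v, v) = tau * sum over interior nodes of h * [[v']]^2 for a
    piecewise linear v (1D analogue of the ghost penalty with p = 1)."""
    h = np.diff(nodes)
    slopes = np.diff(vals) / h            # elementwise derivative v'
    jumps = np.diff(slopes)               # jump in v' at interior nodes
    return tau * np.sum(h[:-1] * jumps**2)

nodes = np.array([0.0, 0.5, 1.0, 1.5])
# A globally linear function has no derivative jumps:
print(ghost_penalty_1d(nodes, 2.0 * nodes))          # 0.0
# A kink at x = 0.5 is penalized:
print(ghost_penalty_1d(nodes, np.abs(nodes - 0.5)))  # 2.0
```

The form vanishes exactly on globally smooth (here, globally linear) functions, which is why adding it does not affect the optimal approximation properties.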
\begin{rem}[Hybrid Variable Stabilization] \label{rem:hybrid-stab}
The first term in the stabilization \eqref{eq:blockstab} of the hybrid variable is the most important and provides the necessary control of the
variation of the hybrid variable across the gap, see estimate \eqref{eq:v0-pert} below. The second term is added to increase
robustness and the well-conditioning of the algebraic equations.
In the first term we must be able to evaluate the gradient $\nabla_\Gamma^\perp = (I - t_\Gamma \otimes t_\Gamma)\nabla_{\IR^3}$, where $t_\Gamma$ is the tangent to $\Gamma$, extended to the complete hybrid variable domain $\Omega_{h,0}$. One option is to extend $t_\Gamma$ to $\mcT_{h,0}$ using the closest point mapping $p_\Gamma(x)$. While $\Gamma$ in the description above is the location of the exact interface, this location is unknown in most practical situations. However, since $\Gamma$ is just a theoretical construction, we may instead define the position of $\Gamma$ based on the perturbed interfaces, for instance as the midpoint between the closest points on $\partial\Omega_{1,\delta}$ and $\partial\Omega_{2,\delta}$.
A more elaborate option would be to introduce a discrete field variable for $t_\Gamma$ on $\Omega_{h,0}$ that is determined via projection of the tangent vectors of $\partial\Omega_{i,\delta}$. Such an approach would have the benefits of not relying on identifying closest points and facilitating higher-order approximations of how information flows over the gap.
For suitable stabilization when there is no gap/overlap, see \cite{MR4021278}, where a similar patch coupling with a hybridized approach is considered.
\end{rem}
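Given an extended tangent field, the evaluation of $\nabla_\Gamma^\perp$ amounts to applying the projector $I - t_\Gamma \otimes t_\Gamma$ to the full gradient. A minimal sketch (the helper is hypothetical, for illustration only):

```python
import numpy as np

def normal_gradient_component(t, g):
    """(I - t t^T) g : the component of a gradient g orthogonal to the
    (extended) tangent t of Gamma; t is normalized internally."""
    t = np.asarray(t, dtype=float) / np.linalg.norm(t)
    return g - t * np.dot(t, g)

t = np.array([1.0, 0.0, 0.0])          # Gamma locally along the x-axis
g = np.array([2.0, 3.0, 4.0])
print(normal_gradient_component(t, g))  # [0. 3. 4.]
```

Penalizing this component drives the discrete hybrid variable towards being constant in the directions orthogonal to $\Gamma$, mimicking the closest point extension $E_0$.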
\begin{rem}[Patch Stabilization] \label{rem:sh_i}
On each patch we include \eqref{eq:ghost-penalty}, which is a so-called ghost penalty stabilization term \cite{Bur10}. The inclusion of this stabilization allows us to use so-called cut finite element methods \cite{BurCla15,JonLarLar17} for discretizing the solution on each patch. Essentially, the mesh on each patch is not required to conform to the patch geometry --- it is sufficient that the mesh covers the geometry ---
and still, the method enjoys the same approximation and stability properties as a standard FEM. In a cut setting, it is natural to use a weak Nitsche-type method for implementing the Dirichlet boundary condition.
\end{rem}
\begin{rem}[Extension to Isogeometry]
In the surface CAD description, each surface patch $\Omega_{i,\delta} \subset \IR^3$ is described using a parametric map $F_i:\widehat \Omega_{i,\delta} \to \Omega_{i,\delta}$ from a two-dimensional reference domain $\widehat \Omega_{i,\delta} \subset [0,1]^2$. Following the procedure outlined above for the extension to surfaces, we then patchwise transform the problem back to $\widehat \Omega_{i,\delta}$ before discretizing. For instance, this means that the form corresponding to \eqref{eq:ahi} takes the structure
\begin{align}
\label{eq:ahi-surf}
a_{h,i}(v_i,w_i) &= (|G_i|^{1/2} G_i^{-1}\nabla v_i , \nabla w_i)_{\widehat\Omega_{i,\delta}}
\\ \nonumber &\qquad
- (n \cdot (|G_i|^{1/2} G_i^{-1}\nabla v_i), w_i - w_0\circ F_i)_{\partial\widehat\Omega_{i,\delta}}
\\ \nonumber &\qquad
- (v_i - v_0\circ F_i, n\cdot(|G_i|^{1/2} G_i^{-1}\nabla w_i))_{\partial\widehat\Omega_{i,\delta}}
\\ \nonumber &\qquad
+ \beta h^{-1} ( |G_i|^{1/2} (n\cdot G_i^{-1} n) (v_i - v_0\circ F_i), w_i - w_0\circ F_i)_{\partial\widehat\Omega_{i,\delta}}
\end{align}
where $G_i$ is the metric tensor implied by the map $F_i$. Note that the patch mesh in this case is directly defined on the two-dimensional reference domain, and so is the patch stabilization. For more details on this topic, we refer to our work in \cite{JonLarLar17}.
\end{rem}
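The geometric factors entering \eqref{eq:ahi-surf} are computed directly from the Jacobian of the patch map. The following sketch (illustrative only; the cylinder patch and helper names are our own assumptions) assembles $|G_i|^{1/2}$ and $|G_i|^{1/2} G_i^{-1}$ from $G_i = DF_i^T DF_i$:

```python
import numpy as np

def metric_factors(jacobian):
    """Given the 3x2 Jacobian DF of a patch map F, return |G|^{1/2} and
    |G|^{1/2} G^{-1}, where G = DF^T DF is the metric tensor."""
    G = jacobian.T @ jacobian
    detG = np.linalg.det(G)
    return np.sqrt(detG), np.sqrt(detG) * np.linalg.inv(G)

# Patch of a unit cylinder: F(s, t) = (cos s, sin s, t).
def DF(s, t):
    return np.array([[-np.sin(s), 0.0],
                     [ np.cos(s), 0.0],
                     [ 0.0,       1.0]])

area_factor, weighted_inv = metric_factors(DF(0.3, 0.5))
print(np.isclose(area_factor, 1.0))           # True: the map is an isometry
print(np.allclose(weighted_inv, np.eye(2)))   # True
```

For the cylinder patch the map is an isometry, so both factors reduce to the identity; for a general CAD patch they vary over the reference domain and enter the quadrature weights.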
\section{Error Estimates}
In this section, we derive an error estimate for the method applied to the model problem. To keep the complexity of the paper to a minimum, we limit our analysis to the most fundamental stability and energy estimates.
\paragraph{Norms, Stabilization, and Poincar\'e Inequality.}
Define the energy norm
\begin{equation} \label{eq:energy-norm}
\tn v \tn_h^2 = \| v_0 \|^2_{s_{h,0}} + \sum_{i=1}^2 \Bigl( \| \nabla v_i \|^2_{\Omega_{i,\delta}}
+ \| v_i \|^2_{s_{h,i}}
+ h \| \nabla v_i \|^2_{\partial \Omega_{i,\delta}}
+ h^{-1} \| v_i - v_0\|^2_{\partial \Omega_{i,\delta}} \Bigr)
\end{equation}
where $\| w \|^2_{s_{h,i}} = s_{h,i}(w,w)$, $i=0,1,2$, and $\| w \|_\omega^2 = \int_\omega w^2$ is the usual $L^2(\omega)$ norm.
The stabilization forms provide the control
\begin{align}
\label{eq:sh-inverseL2}
\| \nabla^m v_i \|^2_{\Omega_{h,i}} &\lesssim \| \nabla^m v_i \|^2_{\Omega_{i,\delta}} + \| v_i \|^2_{s_{h,i}}, \qquad i=1,2, \quad m=0,1
\\
\label{eq:sh0-inverseL2}
h^{-2} \| v_0 \|^2_{\Omega_{h,0}}
&\lesssim
\| v_0 \|^2_{\partial\Omega_{i,\delta}} + \| v_0 \|^2_{s_{h,0}}, \qquad i=1,2
\end{align}
see \cite{BurHanLarMas18, HanLarLar17, LarZah20} for proofs. We also have the following result that quantifies the control provided by the stabilization of the hybrid variable.
\begin{lem}[Hybrid Variable Control] There are constants such that for $v \in W_h$ and $i=1,2$,
\begin{alignat}{2}
\label{eq:v0-pert}
\| v_0 - v_0^e \|_{\partial\Omega_{i,\delta}}^2
&\lesssim \delta^2 h^{\alpha-2} \| v_0 \|_{s_{h,0}}^2
\\
\label{eq:v0-control}
\| v_0 \|_{\partial\Omega_{i,\delta}}^2
&\lesssim
\delta^2 h^{\alpha-2} \| v_0 \|_{s_{h,0}}^2 + \tn v \tn_h^2
\end{alignat}
where $v_0^e(x) = v_{0}\circ p_\Gamma(x)$. Assuming $\alpha\geq 2$, these bounds may be simplified since then
\begin{align} h^{\alpha-2} \| v_0 \|_{s_{h,0}}^2 \leq \tn v \tn_h^2
\end{align}
\end{lem}
\begin{proof} {\bf (\ref{eq:v0-pert}).} Let $I(x,p_\Gamma(x))$ be the line segment connecting $x$ and $p_\Gamma(x)$. We then have
\begin{align}
v_0(x) - v_{0}^e(x)
=
v_0(x) - v_{0} (p_\Gamma(x)) = \int_{ I(x,p_\Gamma(x)) } t \cdot \nabla_\Gamma^\perp v_0
\end{align}
where $t$ is the unit tangent vector to $I(x,p_\Gamma(x))$. Estimating the right-hand side using a Hölder inequality
we get
\begin{align}
|v_0(x) - v_{0}^e (x) | \leq \delta \| \nabla_\Gamma^\perp v_0 \|_{L^\infty( I(x,p_\Gamma(x)) ) }
\end{align}
Squaring and integrating over $\partial\Omega_{i,\delta}$ give
\begin{align}
\| v_0 - v_{0}^e \|_{\partial\Omega_{i,\delta}}^2
&
\leq
\delta^2 \int_{\partial\Omega_{i,\delta}} \| \nabla^\perp_\Gamma v_0 \|_{L^\infty( I(x,p_\Gamma(x)))}^2 \, dx
\\& \label{eq:here-tech-bound}
\lesssim \delta^2 h^{-2} \| \nabla^\perp_\Gamma v_0 \|_{\mcT_{h,0}}^2
\\&
\lesssim \delta^2 h^{-2} h^\alpha \| v_0 \|_{s_{h,0}}^2
\end{align}
which is our desired estimate.
In \eqref{eq:here-tech-bound} we used the following technical bound
\begin{equation}\label{eq:technical-a}
\int_{\partial\Omega_{i,\delta}} \| w \|_{L^\infty( \tilde I(x,p_\Gamma(x))\cap T)}^2 \, dx
\lesssim h^{-2} \| w \|_T^2
\end{equation}
for an element $T \in \mcT_{h,0}$, where $w \in \mathbb{P}_k(T)$, the polynomials of degree $k$ on $T$, and $\tilde I(x,p_\Gamma(x))$ is the straight line covering $I(x,p_\Gamma(x))$. To verify (\ref{eq:technical-a}) we first
recall that since the elements are shape regular and the mesh quasi-uniform there are balls $B_{r_1} \subset T \subset B_{r_2}$, with the same
center and radii that satisfy $r_1 \sim r_2 \sim h$. For any line $l$ in $\IR^3$ that intersects $T$ we have the inverse inequality
\begin{align}
\| w \|^2_{L^\infty(l \cap T)} &\lesssim \| w \|^2_{L^\infty(l \cap B_{r_2})} \lesssim \| w \|^2_{L^\infty(l \cap B_{2 r_2})}
\\ \label{eq:tech-b}
&\qquad \lesssim h^{-1} \| w \|^2_{l \cap B_{2 r_2}} \lesssim h^{-3} \| w \|^2_{B_{2 r_2}} \lesssim h^{-3} \| w \|^2_{B_{r_1}} \lesssim h^{-3} \| w \|^2_T
\end{align}
where we used the fact that the length $|l \cap B_{2 r_2}|$ of the line segment $l \cap B_{2 r_2}$ satisfies $|l \cap B_{2 r_2}| > r_2 \gtrsim h$, an
inverse inequality to pass from the line to the ball $B_{2 r_2}$, and finally an inverse inequality to pass to $B_{r_1}$, which is contained in $T$
by shape regularity. Using (\ref{eq:tech-b}) we get
\begin{align}
\int_{\partial \Omega_{i,\delta}} \| w \|^2_{L^\infty( \tilde I(x,p_\Gamma(x))\cap T)} dx
&\lesssim
\int_{\partial \Omega_{i,\delta}} h^{-3} \| w \|^2_T dx
\\
&\lesssim |\partial \Omega_{i,\delta} \cap p_\Gamma^{-1} (T)| h^{-3} \| w \|^2_T
\\
&\lesssim h^{-2} \| w \|^2_T
\end{align}
where we finally used the fact that $|\partial \Omega_{i,\delta} \cap p_\Gamma^{-1} (T)|\lesssim h$. This completes the verification of \eqref{eq:technical-a}, and hence, the proof of \eqref{eq:v0-pert}.
\noindent{\bf (\ref{eq:v0-control}).} For $i=1$ we add and subtract $v_1 \in V_{h,1}$ and estimate using standard inequalities
\begin{align}
\| v_0 \|_{\partial\Omega_{1,\delta}}^2
&\lesssim \| v_0 - v_1 \|_{\partial\Omega_{1,\delta}}^2 + \| v_1 \|_{\partial\Omega_{1,\delta}}^2
\\
&\lesssim \| v_0 - v_1 \|_{\partial\Omega_{1,\delta}}^2 + \| v_1 \|_{\Omega_{1,\delta}}^2 + \| \nabla v_1 \|_{\Omega_{1,\delta}}^2
\\
&\lesssim \| v_0 - v_1 \|_{\partial\Omega_{1,\delta}}^2 + \| \nabla v_1 \|_{\Omega_{1,\delta}}^2
\\ \label{eq:tech-c}
&\leq \tn v \tn_h^2
\end{align}
where we used a trace inequality followed by the control provided by the Dirichlet condition on $\partial \Omega$. In the case $i=2$
we instead add and subtract $v_{0}^e$,
\begin{align}
\| v_0 \|_{\partial\Omega_{2,\delta}}
&\leq
\| v_0 - v_{0}^e \|_{\partial\Omega_{2,\delta}}
+
\| v_{0}^e \|_{\partial\Omega_{2,\delta}}
\\&\lesssim
\delta h^{\alpha/2-1} \| v_0 \|_{s_{h,0}}
+
\| v_{0}^e \|_{\partial\Omega_{1,\delta}}
\\&\leq
\delta h^{\alpha/2-1} \| v_0 \|_{s_{h,0}}
+
\| v_0 - v_{0}^e \|_{\partial\Omega_{1,\delta}}
+
\| v_0 \|_{\partial\Omega_{1,\delta}}
\\&\lesssim \label{eq:last-stab-bound}
\delta h^{\alpha/2-1} \| v_0 \|_{s_{h,0}}
+ \tn v \tn_h
\end{align}
where we used (\ref{eq:v0-pert}), the fact that $v_{0}^e$ is constant orthogonally to $\Gamma$ to pass from
$\partial \Omega_{2,\delta}$ to $\partial \Omega_{1,\delta}$, and then the bound (\ref{eq:tech-c}) for $i=1$. This concludes the proof of \eqref{eq:v0-control}.
\end{proof}
\begin{lem}[Poincar\'e Inequality]
Assuming $\alpha \geq 2$, there is a constant such that
\begin{align}\label{eq:poincare}
\boxed{
h^{-2} \| v_0 \|^2_{\mcT_{h,0}} + \sum_{i=1}^2 \| v_i \|^2_{\mcT_{h,i}} \lesssim \tn v \tn^2_h , \qquad v \in W_h
}
\end{align}
and as a consequence $\tn \cdot \tn_h$ is a norm on $W_h$.
\end{lem}
\begin{proof}
Let $\phi$ be the solution to the dual problem
\begin{equation}\label{eq:poincare-dual}
\text{$-\Delta \phi = \psi$ in $\Omega$} ,
\qquad
\text{$\phi = 0$ on $\partial \Omega$}
\end{equation}
with $\psi \in L^2(\Omega)$, which satisfies the standard regularity estimate
\begin{equation}\label{eq:poincare-dual-reg}
\| \phi \|_{H^{2}(\Omega)} \lesssim \|\psi \|_\Omega
\end{equation}
Consider first the estimation of the bulk
subdomain contributions. Using \eqref{eq:sh-inverseL2} we have
\begin{align}
\label{eq:init-calc-a}
\sum_{i=1}^2 \| v_i \|^2_{\mcT_{h,i}} &\lesssim \sum_{i=1}^2 \| v_i \|^2_{\Omega_{i,\delta}} + \| v_i \|^2_{s_{h,i}}
\end{align}
where the last term is trivially bounded by $\tn v\tn_h^2$.
To estimate $\sum_{i=1}^2 \| v_i \|^2_{\Omega_{i,\delta}}$ we multiply the dual problem
\eqref{eq:poincare-dual} by $v_i \in V_{h,i}$ and then using integration by parts on
each of the patch domains
$\Omega_{i,\delta}$, $i=1,2$, we obtain
\begin{align}\label{eq:poincare-proof-a}
\sum_{i=1}^2 (v_i,\psi)_{\Omega_{i,\delta}}
&=
\sum_{i=1}^2 (\nabla v_i, \nabla \phi )_{\Omega_{i,\delta}} - (v_i, \nabla_n \phi )_{\partial \Omega_{i,\delta}}
\\ \label{eq:poincare-proof-b}
&=
\sum_{i=1}^2 ( \nabla v_i, \nabla \phi )_{\Omega_{i,\delta}}
- (v_i - v_0, \nabla_n \phi )_{\partial \Omega_{i,\delta}}
- (v_{0}, \nabla_n \phi )_{\partial \Omega_{i,\delta}}
\\ \label{eq:poincare-proof-c}
&\lesssim
\sum_{i=1}^2 \| \nabla v_i\|_{\Omega_{i,\delta}} \|\nabla \phi \|_{\Omega_{i,\delta}}
\\\nonumber &\qquad\quad
+ \left(\|v_i - v_{0} \|_{\partial \Omega_{i,\delta}} + \| v_{0} \|_{\partial \Omega_{i,\delta}}\right) \| \nabla \phi \|_{\partial \Omega_{i,\delta}}
\\ \label{eq:poincare-proof-e}
&\lesssim
(1+\delta h^{\alpha/2-1}) \tn v \tn_h \underbrace{\Big( \sum_{i=1}^2 \|\nabla \phi \|^2_{\Omega_{i,\delta}}
+ \| \phi \|^2_{H^{2}(\Omega_{i,\delta})} \Big)^{1/2}}_{ \lesssim \| \phi \|_{H^{2}(\Omega)}}
\\ \label{eq:poincare-proof-f}
&\lesssim (1+\delta h^{\alpha/2-1}) \tn v \tn_h \| \psi \|_{\Omega}
\end{align}
where in (\ref{eq:poincare-proof-b}) we added and subtracted $v_{0}$
in the boundary terms; in (\ref{eq:poincare-proof-c}) we
used the Cauchy-Schwarz inequality; in (\ref{eq:poincare-proof-e}) we used the definition
of the energy norm (\ref{eq:energy-norm}), the control for $v_0$ we have from \eqref{eq:v0-control}, and a standard
trace inequality for $\phi$ on $\Omega_{i,\delta}$;
and finally, in (\ref{eq:poincare-proof-f}) we used the regularity estimate (\ref{eq:poincare-dual-reg}). Choosing the data $\psi\in L^2(\Omega)$ of the dual problem as
\begin{align}
\psi=
\left\{
\begin{alignedat}{2}
&v_1 &\quad&\text{on $\Omega_{1,\delta}$}
\\
&v_2 &\quad&\text{on $\Omega_{2,\delta}\setminus\Omega_{1,\delta}$}
\\
&0 &\quad&\text{on $\Omega\setminus (\Omega_{1,\delta}\cup \Omega_{2,\delta})$}
\end{alignedat}
\right.
\end{align}
we have
\begin{align}
\| \psi \|_{\Omega}^2 = \| v_1 \|_{\Omega_{1,\delta}}^2 + \| v_2 \|_{\Omega_{2,\delta}\setminus\Omega_{1,\delta}}^2
\leq \sum_{i=1}^2 \| v_i \|_{\Omega_{i,\delta}}^2
\end{align}
and thus we obtain
\begin{equation} \label{eq:poincare-patch}
\sum_{i=1}^2 \| v_i \|^2_{\mcT_{h,i}} \lesssim (1+\delta h^{\alpha/2-1}) \tn v \tn_h^2
\lesssim
\tn v \tn_h^2
\end{equation}
where in the last inequality we used $\alpha\geq 2$.
Finally, using \eqref{eq:sh0-inverseL2}, \eqref{eq:v0-control}, and $\alpha \geq 2$ we directly obtain a bound for the hybrid variable
\begin{align}
h^{-2} \| v_0 \|^2_{\mcT_{h,0}}
\lesssim
\| v_0 \|^2_{\partial\Omega_{1,\delta}} + \| v_0 \|^2_{s_{h,0}}
\lesssim
\tn v \tn_h^2
\end{align}
which concludes the proof.
\end{proof}
\paragraph{Continuity and Coercivity.}
The form $A_h$ is continuous
\begin{align}\label{eq:cont}
\boxed{
A_h(v,w) \lesssim \tn v \tn_h \tn w \tn_h, \qquad v,w \in W^e + W_h
}
\end{align}
and for $\beta > 0$ large enough coercive
\begin{align}\label{eq:coer}
\boxed{
\tn v \tn_h^2 \lesssim A_h(v,v), \qquad v \in W_h
}
\end{align}
The continuity follows from the Cauchy-Schwarz inequality and for the coercivity, we note that
\begin{align}
A_h(v,v) &= \| v_0 \|^2_{s_{h,0}} + \sum_{i=1}^2 \|\nabla v_i \|^2_{\Omega_{i,\delta}} + \| v_i \|_{s_{h,i}}^2
\\&\nonumber \qquad\qquad\qquad\quad
- 2(\nabla_n v_i, v_i - v_0)_{\partial \Omega_{i,\delta}}
+ \beta h^{-1} \| v_i - v_0 \|^2_{\partial \Omega_{i,\delta}}
\end{align}
and we can use the usual arguments provided the parameter $\beta$ is large enough.
\paragraph{Interpolation.}
Before deriving the error estimates we recall some interpolation results.
By virtue of the patch extensions \eqref{eq:extension-patch} and the interface extension \eqref{eq:extension-interface}, the three fields of a function $v\in W^e$ are defined on the full mesh domains $\Omega_{h,0}$, $\Omega_{h,1}$, and $\Omega_{h,2}$.
We define an interpolation operator
\begin{align}
\pi_{h} : W^e \ni v=(v_0;v_1;v_2) \mapsto (\pi_{h,0}v_0;\pi_{h,1}v_1;\pi_{h,2}v_2) \in W_h
\end{align}
where $\pi_{h,i} : H^1(\Omega_{h,i}) \rightarrow V_{h,i}$ is the Scott-Zhang interpolation operator. We
choose the Scott-Zhang operator to preserve strong Dirichlet boundary conditions on $\partial \Omega$.
We now derive an interpolation estimate in the energy norm \eqref{eq:energy-norm}.
First, we consider the interpolation of the patch fields.
Combining standard interpolation error estimates and the stability of the extension operator we obtain
\begin{align} \label{eq:patch-interpolant}
\| ( I - \pi_{h,i} ) v_i^e \|_{H^m(\Omega_{i,\delta})} &\lesssim h^{p+1 - m} \| v_i \|_{H^{p+1}(\Omega_{i})}, \qquad m=0,1
\end{align}
In the boundary terms in $\tn v \tn_h$ we separate the patch fields $v_i$ from the hybrid variable field $v_0$ using the triangle inequality, and then move $v_i$ onto $\Omega_{i,\delta}$ using a trace inequality.
The remaining patch field
term in $s_{h,i}$ can be directly estimated using elementwise trace inequalities and interpolation estimates.
Next, we consider the interpolation of the hybrid variable field.
Similarly, as for \eqref{eq:patch-interpolant} we combine standard interpolation estimates with the stability of the extension operator and obtain
\begin{align}
\| ( I - \pi_{h,0} ) v_0^e \|_{H^m(\Omega_{h,0})} &\lesssim h^{p+2 - m} \| v_0 \|_{H^{p+1}(\Gamma)}, \qquad m=0,1
\end{align}
On the boundary terms, we apply an elementwise trace inequality to move onto $\Omega_{h,0}$ and then apply the above estimate.
What remains is to estimate the $s_{h,0}$-norm, where the first term from \eqref{eq:blockstab} is estimated
\begin{align} \label{:eq:sh0-partII}
h^{-\alpha} \|\nabla_\Gamma^\perp (I - \pi_{h,0}) u_0^e \|^2_{\Omega_{h,0}}
&\leq
h^{-\alpha}
\| ( I - \pi_{h,0} ) u_0^e \|_{H^1(\Omega_{h,0})}^2
\lesssim
h^{2p+2 - \alpha} \| u_0 \|^2_{H^{p+1}(\Gamma)}
\end{align}
which holds for $\alpha \leq 2$, and the second term is estimated analogously to the patchwise $s_{h,i}$-norm.
Combining these estimates we obtain
\begin{equation}\label{eq:interpol}
\boxed{\tn v - \pi_h v \tn_h \lesssim h^p \Big( \| v \|_{H^{p+1}(\Omega)} + \| v \|_{H^{p+1}(\Gamma)} \Big) }
\end{equation}
\paragraph{Error Estimate.} We are now ready to prove an error estimate in the energy norm.
\begin{thm} \label{thm:energy}
There is a constant such that for $\alpha = 2$,
\begin{align}
\boxed{\tn u^e - u_h \tn_h \lesssim (h^p + h^{-1/2}\delta) \Big( \| u \|_{H^{p+1}(\Omega)} + \| u \|_{H^{p+1}(\Gamma)} + \| u_0^e \|_{W^{2}_\infty(\Omega\cap U_\delta(\Gamma))} \Big) }
\end{align}
\end{thm}
\begin{proof} It follows from coercivity that
\begin{align}
\tn \pi_h u^e - u_h \tn_h
\lesssim \sup_{v \in W_h \setminus \{0\}}
\frac{A_h(\pi_h u^e - u_h,v)}{\tn v \tn_h}
\end{align}
and we need to estimate the numerator. We have
\begin{align}
A_h(\pi_h u^e - u_h,v)&=A_h(\pi_h u^e ,v) - A_h(u_h,v)
\\
&= A_h(\pi_h u^e ,v) - l_h(v)
\\
&= a_h(\pi_h u^e ,v) + s_{h,0}(\pi_h u_0^e,v) - l_h(v)
\\
&= \underbrace{a_h(\pi_h u^e - u^e ,v)}_{I} + \underbrace{s_{h,0}(\pi_h u_0^e,v)}_{II}
+ \underbrace{ a_h(u^e,v) - l_h(v) }_{III}
\end{align}
Here $I$ is estimated using continuity (\ref{eq:cont}) and the interpolation error estimate (\ref{eq:interpol}),
\begin{align}
a_h(\pi_h u^e - u^e ,v) \lesssim \tn \pi_h u^e - u^e \tn_h \tn v \tn_h
\end{align}
For $II$ we have
\begin{equation}\label{eq:II}
\| \pi_h u_0^e \|_{s_{h,0}} = \| (\pi_h - I) u_0^e \|_{s_{h,0}} \lesssim h^p \| u_0 \|_{H^{p+1}(\Gamma)}
\end{equation}
where we can subtract $u_0^e$ inside the $s_{h,0}$-norm without affecting its value, since for the first term in the norm we have
\begin{align}
h^{-\alpha} \|\nabla_\Gamma^\perp \pi_{h,0} u_0^e \|^2_{\Omega_{h,0}}
&=
h^{-\alpha} \|\nabla_\Gamma^\perp (\pi_{h,0} - I) u_0^e \|^2_{\Omega_{h,0}}
\end{align}
as the extension $u_0^e$ is constant orthogonal to $\Gamma$,
and the second term is defined in terms of jumps over mesh edges, which are zero for sufficiently regular $u_0^e$.
For $III$ we use integration by parts,
\begin{align}
III &= \sum_{i=1}^2 (\nabla u_i^e, \nabla v_i)_{\Omega_{i,\delta}} - (\nabla_n u_i^e, v_i - v_0)_{\partial \Omega_{i,\delta}}
\\
&\qquad \quad
- (u_i^e - u_0^e, \nabla_n v_i)_{\partial \Omega_{i,\delta}}
+ \beta h^{-1} ( u_i^e - u_0^e, v_i - v_0)_{\partial \Omega_{i,\delta}}
- (f_i^e,v_i )_{\Omega_{i,\delta}}
\\
&= \sum_{i=1}^2 \underbrace{-(\Delta u_i^e, v_i)_{\Omega_{i,\delta}}
+ (\nabla_n u_i^e, v_i )_{\partial \Omega_{i,\delta}}
- (\nabla_n u_i^e, v_i - v_0)_{\partial \Omega_{i,\delta}} - (f_i^e,v_i )_{\Omega_{i,\delta}}}_{=\sum_{i=1}^2 (\nabla_n u_i^e, v_0)_{\partial \Omega_{i,\delta}}}
\\
&\qquad \quad
- (u_i^e - u_0^e, \nabla_n v_i)_{\partial \Omega_{i,\delta}}
+ \beta h^{-1} ( u_i^e - u_0^e, v_i - v_0)_{\partial \Omega_{i,\delta}}
\\
&= \sum_{i=1}^2 (\nabla_n u_i^e, v_0)_{\partial \Omega_{i,\delta}}
- (u_i^e - u_0^e, \nabla_n v_i)_{\partial \Omega_{i,\delta}}
+ \beta h^{-1} ( u_i^e - u_0^e, v_i - v_0)_{\partial \Omega_{i,\delta}}
\\
&\leq \Big| \sum_{i=1}^2 (\nabla_n u_i^e, v_0)_{\partial \Omega_{i,\delta}} \Big|
\\
&\qquad + \sum_{i=1}^2 \|u_i^e - u_0^e \|_{\partial \Omega_{i,\delta}} \| \nabla_n v_i\|_{\partial \Omega_{i,\delta}}
+ \beta h^{-1} \| u_i^e - u_0^e \|_{\partial \Omega_{i,\delta}} \|v_i - v_0 \|_{\partial \Omega_{i,\delta}}
\\ \label{eq:xyz}
&\lesssim \underbrace{\Big| \sum_{i=1}^2 (\nabla_n u_i^e, v_0)_{\partial \Omega_{i,\delta}} \Big|}_{III_1}
+ \underbrace{\Big( \sum_{i=1}^2 h^{-1/2} \|u_i^e - u_0^e \|_{\partial \Omega_{i,\delta}}\Big)}_{III_2} \tn v \tn_h
\end{align}
To estimate $III_1$, we add and subtract $v_{0}^e$ and utilize the interface condition \eqref{eq:model-problem-interface} to insert $0=\sum_{i=1}^2 ((\nabla_{n_i} u_i)|_\Gamma)^e$, where the implied extension is $E_0$, in the second
term,
\begin{align}
&\sum_{i=1}^2 (\nabla_n u_i^e, v_0)_{\partial \Omega_{i,\delta}} = \sum_{i=1}^2 (\nabla_n u_i^e , v_0 - v_{0}^e)_{\partial \Omega_{i,\delta}}
+ \sum_{i=1}^2 (\nabla_n u_i^e - ((\nabla_n u_i)|_\Gamma)^e , v_{0}^e)_{\partial \Omega_{i,\delta}}
\\
&\lesssim \sum_{i=1}^2 \|\nabla_n u_i^e\|_{\partial \Omega_{i,\delta}} \|v_0 - v_{0}^e\|_{\partial \Omega_{i,\delta}}
+ \sum_{i=1}^2 \| \nabla_n u_i^e - ( (\nabla_n u_i)|_\Gamma )^e \|_{\partial \Omega_{i,\delta}} \|v_{0}^e\|_{\partial \Omega_{i,\delta}}
\\
&\lesssim \sum_{i=1}^2 \|u_i^e \|_{H^2 (\Omega_{i,\delta})} \delta h^{\alpha/2 -1} \| v_0 \|_{s_{h,0}}
+ \sum_{i=1}^2\delta \| u \|_{ W^2_\infty( \Omega \cap U_\delta(\Gamma))} \|v_{0}^e\|_{\partial \Omega_{i,\delta}}
\end{align}
In the last inequality, we utilize \eqref{eq:v0-pert} for the first term and a Taylor argument for the second term. We then use $\|v_{0}^e\|_{\partial \Omega_{i,\delta}} \lesssim h^{-1} \|v_{0}\|_{\mcT_{h,0}}$ and the Poincar\'e inequality \eqref{eq:poincare} to bound the test function in terms of the energy norm.
Finally, for $III_2$, using similar estimates we have
\begin{align}
h^{-1} \|u_i^e - u_0^e \|^2_{\partial \Omega_{i,\delta}} \lesssim h^{-1} \delta^2 \| u \|^2_{W^1_\infty(U_\delta(\Gamma))}
\end{align}
and thus the proof is complete.
\end{proof}
\section{Numerical Experiments}
\paragraph{Implementation.}
The method was implemented in MATLAB, largely following the details presented in \cite{JonLarLar17,HanJonLar17}. This implementation utilizes the parametric mappings available in the surface description, where patchwise surface terms are pulled back to a two-dimensional reference domain before assembly. An upshot of the hybridized approach is that the assembly of the interface terms is done patchwise, so that no knowledge about patches on the other side of the interface is required. Hence, there is also no need to find the corresponding point in adjacent patches, which can be cumbersome to do efficiently and robustly since it involves inverting surface mappings -- in particular when the interfaces are not exact.
A new component of this work is the implementation of the hybrid variable, which entails the construction of its approximation space and the assembly of the hybrid variable stabilization. In our implementation, the hybrid variable mesh is extracted from a three-dimensional structured hexahedral background grid by traversing all patch boundaries without boundary conditions and marking the background-grid elements that are passed, and we equip this mesh with a continuous approximation space.
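As an illustration of this extraction step, the following sketch (in Python rather than MATLAB, with a hypothetical helper name and inputs) marks the cells of a structured background grid that are hit by sample points along the patch boundaries:

```python
import numpy as np

def extract_hybrid_cells(boundary_pts, origin, h):
    """Mark the background-grid cells hit by patch-boundary sample points.

    boundary_pts : (N, 3) array of points sampled along the patch boundaries
    origin       : (3,) lower corner of the structured hexahedral grid
    h            : cell size of the background grid
    Returns the set of integer (i, j, k) indices of the marked cells.
    """
    idx = np.floor((np.asarray(boundary_pts) - np.asarray(origin)) / h)
    return {tuple(c) for c in idx.astype(int)}

# Three sample points; the first two fall in the same background cell
pts = [(0.12, 0.05, 0.0), (0.14, 0.08, 0.0), (0.31, 0.05, 0.0)]
cells = extract_hybrid_cells(pts, origin=(0.0, 0.0, 0.0), h=0.1)
print(sorted(cells))  # [(1, 0, 0), (3, 0, 0)]
```

The marked cells would then carry the continuous approximation space for the hybrid variable; the actual implementation traverses the patch boundaries rather than a precomputed point cloud.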
The assembly of the stabilization includes evaluation of $\nabla_\Gamma^\perp = (I - t_\Gamma \otimes t_\Gamma)\nabla_{\IR^3}$, the part of the gradient normal to the (artificial) interface $\Gamma$. We base our implementation on interpolation of $t_\Gamma\otimes t_\Gamma$ using tensor product Lagrange elements of degree $p$, where the value of $t_\Gamma$ at each interpolation point is set to the tangent at the closest point on the patch boundaries. Different approaches to this assembly are outlined in Remark~\ref{rem:hybrid-stab}, and we believe that, in practice, an octree-based mesh structure in combination with a projection-based method for extending the tangential field $t_\Gamma$, avoiding the closest point mapping, would give the most flexible, efficient and robust implementation.
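The action of the projector in $\nabla_\Gamma^\perp = (I - t_\Gamma \otimes t_\Gamma)\nabla_{\IR^3}$ at a single point can be sketched as follows (a minimal illustration, assuming the unit tangent is already available; not the actual assembly code):

```python
import numpy as np

def grad_perp(t, grad):
    """Remove from a gradient vector its component along the unit
    interface tangent t, i.e. apply the projector I - t (x) t."""
    t = np.asarray(t, dtype=float)
    t = t / np.linalg.norm(t)          # normalize, in case t is not unit
    P = np.eye(3) - np.outer(t, t)     # projector onto the plane normal to t
    return P @ np.asarray(grad, dtype=float)

g = grad_perp(t=[1.0, 0.0, 0.0], grad=[2.0, 3.0, 4.0])
print(g)  # [0. 3. 4.] -- the component along t is removed
```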
For the import of CAD geometries into MATLAB we utilize the IGES Toolbox~\cite{IGES}, which allows us to import CAD surfaces in IGES format. This gives us a set of patches described via NURBS as well as a set of trim curves. To evaluate the NURBS and their derivatives we use the NURBS toolbox~\cite{NURBS}.
\paragraph{Parameter Choices and Approximation Spaces.}
As described above, we cover all patch boundaries corresponding to interfaces with a structured hexahedral mesh with global mesh size $h$, where typically $h \geq \delta$; this is the mesh for the hybrid variable. We equip each surface patch $\Omega_{i,\delta}$ with a structured quadrilateral mesh in the two-dimensional reference domain, covering $\widehat\Omega_{i,\delta}$, where the mesh size in the reference domain is chosen such that the mapped elements on the surface approximately have size $h$. On each mesh, we define an approximation space using full regularity tensor product B-splines of degree $p$, where $p=2$ unless otherwise stated.
For the Nitsche penalty parameter we use $\beta = 25 p^2$ and for the stabilization parameters we use $\tau_0=\tau_i=0.01$.
\paragraph{Convergence Studies.}
As a model problem for our quantitative studies we consider the Laplace-Beltrami problem $-\Delta_\Omega u = f$ with non-homogeneous Dirichlet boundary conditions. We construct a sequence of surface domains with a gap, where we can vary the gap size $\delta$, and which is illustrated in Figure~\ref{fig:surface-model-problem}.
Specifically, we map the unit square onto the surface of a torus, where the unit square has the partition
\begin{align}
\widehat\Omega_{1,\delta} = \{ (\hatx,\haty) : 0 < \hatx,\haty < 1 \,;\ \hatx^2+ \haty^2 > 0.9^2 \}
\,,\
\widehat\Omega_{2,\delta} = \{ (\hatx,\haty) : \hatx^2+ \haty^2 < 0.9^2 \}
\end{align}
where $\widehat\Omega_{2,\delta}$ is an inner disc and $\widehat\Omega_{1,\delta}$ is the remaining outer part.
We map these reference domains onto the surface, such that $\Omega_{i,\delta} = F_i(\widehat\Omega_{i,\delta})$, using the mappings
\begin{align}
F_1(\hatx, \haty) &= [(R + r \cos{\scriptstyle\frac{5\pi\hatx}{3}} ) \cos{\scriptstyle\frac{5\pi\haty}{18}},\,
(R + r \cos{\scriptstyle\frac{5\pi\hatx}{3}}) \sin{\scriptstyle\frac{5\pi\haty}{18}},\,
r \sin{\scriptstyle\frac{5\pi\hatx}{3}} ]
\\
F_2(\hatx, \haty) &= F_1(\hatx, \haty)
+ \delta [\cos{\scriptstyle\frac{5\pi}{6}} \cos{\scriptstyle\frac{5\pi}{36}},\, \cos{\scriptstyle\frac{5\pi}{6}} \sin{\scriptstyle\frac{5\pi}{36}},\, \sin{\scriptstyle\frac{5\pi}{6}} ]
\label{eq:param_torus}
\end{align}
where we note that the latter mapping is shifted a distance $\delta$ in the normal direction of the disc midpoint.
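The mappings can be checked numerically. The sketch below (with illustrative values for the torus radii $R$ and $r$, which are not fixed in the text) verifies that the shift vector in \eqref{eq:param_torus} has unit length, so the two patches are indeed offset by exactly $\delta$:

```python
import numpy as np

R, r, delta = 1.0, 0.5, 1e-3   # illustrative radii and gap size

def F1(xh, yh):
    a, b = 5*np.pi*xh/3, 5*np.pi*yh/18
    return np.array([(R + r*np.cos(a))*np.cos(b),
                     (R + r*np.cos(a))*np.sin(b),
                     r*np.sin(a)])

def F2(xh, yh):
    shift = np.array([np.cos(5*np.pi/6)*np.cos(5*np.pi/36),
                      np.cos(5*np.pi/6)*np.sin(5*np.pi/36),
                      np.sin(5*np.pi/6)])
    return F1(xh, yh) + delta*shift    # rigid shift by delta along a unit vector

gap = np.linalg.norm(F2(0.3, 0.7) - F1(0.3, 0.7))
print(abs(gap - delta) < 1e-12)  # True: the offset equals the gap size delta
```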
We manufacture a problem on the exact ($\delta=0$) surface with known analytical solution $u=\sin(3x)\sin(3y)\sin(3z)$. This ansatz is a restriction of a function of three-dimensional Cartesian coordinates to the surface, and to evaluate the data $f=-\Delta_\Omega u$ we express the Laplace-Beltrami operator
$\Delta_\Omega u = \Delta_{\IR^3} u - \partial_{nn} u - 2H\partial_n u$
where $\Delta_{\IR^3}$ is the three-dimensional Laplacian, $\partial_{n}$ and $\partial_{nn}$ are the first and second order derivatives in the direction of the surface normal $n$, and $H$ is the mean curvature of the surface.
When measuring the error in the experiments below, on the shifted patch $\Omega_{2,\delta}$ we lift the analytical solution from the exact surface using the closest point mapping of the torus.
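The formula for $\Delta_\Omega u$ can be sanity-checked symbolically. The sketch below applies it on the unit sphere, where $H=1$ and where $u=z$ is an $\ell=1$ spherical harmonic with $\Delta_\Omega z = -2z$; the torus case follows the same recipe with its own normal and mean curvature:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
u = z                                    # Cartesian function restricted to the surface
r = sp.sqrt(x**2 + y**2 + z**2)
n = sp.Matrix([x, y, z]) / r             # outward unit normal of the sphere r = const
grad = sp.Matrix([sp.diff(u, v) for v in (x, y, z)])
hess = sp.hessian(u, (x, y, z))
lap3 = sum(sp.diff(u, v, 2) for v in (x, y, z))
H = 1 / r                                # mean curvature of a sphere of radius r
lapS = lap3 - (n.T * hess * n)[0] - 2*H*(n.T * grad)[0]

# On the unit sphere, u = z is an l = 1 spherical harmonic: Delta_S z = -2 z
pt = {x: 0, y: sp.Rational(3, 5), z: sp.Rational(4, 5)}   # a point with r = 1
print(sp.simplify(lapS.subs(pt)))  # -8/5, which equals -2*z at this point
```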
In the standard situation we foresee, the gap is caused by the finite precision in the parameterization of the trim curves in the CAD description, meaning that the gap size $\delta$ is fixed with respect to the mesh size $h$. Convergence results for the model problem with various sizes of a fixed gap are presented in Figure~\ref{fig:fixed-gap-convergence}. As expected, we note optimal order convergence until the error levels out due to the geometric error induced by the gap, where a smaller gap size gives a smaller lower bound on the error.
To give some validation to our error estimate in Theorem~\ref{thm:energy}, in Figure~\ref{fig:gap-scaling-convergence} we also consider the convergence of the model problem where the gap size $\delta$ is scaled by the mesh size $h$ to various powers. We note that gap size scalings of $\delta \sim h^p$ and $\delta \sim h^{p+1}$ seem to be needed to achieve optimal order convergence in the $H^1$-seminorm and the $L^2$-norm, respectively. The former result is $h^{1/2}$ better than would be expected from the energy norm bound in Theorem~\ref{thm:energy}. We believe our estimate to be sharp and that the reason for this discrepancy is that the $H^1$-seminorm on the patches is in fact better behaved than the full energy norm, which also includes the interface terms. We will return to the analysis of this in another contribution.
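The role of the gap scaling in the energy estimate of Theorem~\ref{thm:energy} can be illustrated directly: evaluating the bound $h^p + h^{-1/2}\delta$ with $\delta = c\,h^s$ and fitting a log-log slope gives an estimated order of $\min(p, s - 1/2)$, so $s \geq p + 1/2$ retains optimal order. This is an illustration of the bound itself, not of the actual discretization errors:

```python
import numpy as np

def observed_rate(p, s, c=0.1):
    """Least-squares slope of log(bound) versus log(h) for the
    energy-norm bound h**p + h**(-1/2)*delta with delta = c*h**s."""
    h = 2.0 ** -np.arange(6, 12)              # sequence of refined mesh sizes
    bound = h**p + h**(-0.5) * c * h**s
    slope, _ = np.polyfit(np.log(h), np.log(bound), 1)
    return slope

print(f"{observed_rate(p=2, s=3.0):.2f}")  # close to 2: optimal order retained
print(f"{observed_rate(p=2, s=1.0):.2f}")  # close to 0.5: the gap term dominates
```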
\begin{figure}
\centering
\begin{subfigure}[t]{0.35\linewidth}\centering
\includegraphics[width=0.9\linewidth]{torus}
\subcaption{Perturbed surface domain}
\label{fig:cad}
\end{subfigure}
\begin{subfigure}[t]{0.35\linewidth}\centering
\includegraphics[width=0.9\linewidth]{torus_block5}
\subcaption{Hybrid variable mesh}
\label{fig:physdom}
\end{subfigure}
\caption{\emph{Surface model problem.} A sequence of model geometries is constructed by decomposing the unit square into two parts by cutting away a circle, mapping the two parts onto the surface of a torus, and shifting the inner part (disc) a distance $\delta$ in the normal direction at its midpoint. The resulting domain (a) features a gap whose direction varies over the interface: in some parts the gap lies mainly in the tangential plane of the torus surface, and in other parts it is mainly normal to the surface.
In (b) the hybrid variable mesh is shown, which covers both sides of the interface and is extracted from a uniform background grid.
}
\label{fig:surface-model-problem}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.35\linewidth}\centering
\includegraphics[width=0.9\linewidth]{H1_c1spline_fix_gapsize.pdf}
\subcaption{Error in $H^1$-seminorm}
\end{subfigure}
\begin{subfigure}{0.35\linewidth}\centering
\includegraphics[width=0.9\linewidth]{L2_c1spline_fix_gapsize.pdf}
\subcaption{Error in $L^2$-norm}
\end{subfigure}
\caption{\emph{Convergence for a fixed gap size.} In most practical situations the gap size $\delta$ is not something we can choose but is rather given by a provided CAD surface. Here, we consider convergence in the surface model problem for a number of different fixed gap sizes $\delta$ when discretizing using full regularity B-spline basis functions of degree $p=2$. We note optimal order convergence until the error eventually levels out due to the geometric error.}
\label{fig:fixed-gap-convergence}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.3\linewidth}\centering
\includegraphics[width=0.95\linewidth]{L2_c0spline_potenser_alpha=2.pdf}
\subcaption{$p=1$}
\label{fig:torus_L2_q1_alpha}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}\centering
\includegraphics[width=0.95\linewidth]{L2_c1spline_potenser_alpha=2.pdf}
\subcaption{$p=2$}
\label{fig:torus_L2_q2_alpha}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}\centering
\includegraphics[width=0.95\linewidth]{L2_c2spline_potenser_alpha=2.pdf}
\subcaption{$p=3$}
\label{fig:torus_L2_q3_alpha}
\end{subfigure}
\\[0.5em]
\begin{subfigure}{0.3\linewidth}\centering
\includegraphics[width=0.95\linewidth]{H1_c0spline_potenser_alpha=2.pdf}
\subcaption{$p=1$}
\label{fig:torus_H1_q1_alpha}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}\centering
\includegraphics[width=0.95\linewidth]{H1_c1spline_potenser_alpha=2.pdf}
\subcaption{$p=2$}
\label{fig:torus_H1_q2_alpha}
\end{subfigure}
\begin{subfigure}{0.3\linewidth}\centering
\includegraphics[width=0.95\linewidth]{H1_c2spline_potenser_alpha=2.pdf}
\subcaption{$p=3$}
\label{fig:torus_H1_q3_alpha}
\end{subfigure}
\caption{\emph{Convergence with gap scaling.}
In this numerical study, we investigate how the gap size $\delta$ must scale with $h$ to maintain optimal order convergence. We scale the gap size for the surface model problem as $\delta = 0.1h_0^{s}$ for various values of $s$.
Here, $h_0$ is the mesh size $h$ normalized by the largest mesh size to have the same initial gap size independently of $s$.
We discretize using full regularity B-spline basis functions of degree $p$.
}
\label{fig:gap-scaling-convergence}
\end{figure}
\paragraph{Hybrid Variable Studies.}
Next, we study the behavior of the hybrid variable stabilization. To facilitate better visualizations of the numerical solution, including the hybrid variable, we construct a model problem in the two-dimensional plane by taking the unit square, cutting out a disc, and shifting this disc a distance $\delta$ in the plane, causing a gap. While this model geometry is entirely defined in the two-dimensional plane, the hybrid variable is still defined on a three-dimensional mesh covering the gap, so for visual clarity we plot the hybrid variable solution along its intersection with the plane.
Intuitively, the desired effect of the hybrid variable stabilization is to make the hybrid variable solution constant across the gap while being sufficiently weak not to affect the solution along either side of the gap. In Figure~\ref{fig:blockstab-weak-good-strong} we vary the strength of this stabilization in one gap situation and plot the hybrid variable solution. We note that a too weak stabilization causes the hybrid variable solution to vary significantly over the gap, while a suitable stabilization keeps the solution constant across the gap, as desired. On the other hand, a too strong stabilization induces locking due to the curved interfaces, which deteriorates the solution also along the gap. This illustrates the importance of choosing an accurate scaling of the hybrid variable stabilization.
In Figure~\ref{fig:increasing-gap} we look at how the patch error is qualitatively affected by the gap size $\delta$. Looking at the outer patch, whose location is constant with respect to the gap size, we see, as expected, that the error increases with the gap size. The hybrid variable stabilization seems to do its job, since the hybrid variable solution remains approximately constant across the gap for all gap sizes. Due to the way we extract the hybrid mesh in our implementation, there are, in the case of the largest gap, some elements missing in the region covering the gap. This, however, seems to have little influence on the hybrid variable solution, which is likely thanks to the extended support of the B-spline basis functions.
\begin{figure}
\centering
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-blockmeshSol_lowStab.png}
\subcaption{Too weak stabilization}
\label{fig:square:blocksol:low}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-blockmeshSol_midStab.png}
\subcaption{Suitable stabilization}
\label{fig:square:blocksol:mid}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-blockmeshSol_highStab.png}
\subcaption{Too strong stabilization}
\label{fig:square:blocksol:high}
\end{subfigure}
\caption{\emph{Effect of hybrid variable stabilization.} Numerical solution of the hybrid variable using a too weak, a suitable, and a too strong hybrid variable stabilization, respectively. For reference, the (faded) numerical solution in the patches is also presented.
In the case of a too weak stabilization as seen in (a) the hybrid variable varies substantially across the gap, whereas a suitable stabilization as in (b) yields the desired behavior where the hybrid variable is almost constant across the gap. Using a too strong stabilization, as in (c), comes with the risk of locking in the hybrid variable as the coupling between the normal and tangential components, induced by the curvature of the interface, may become dominant.
}
\label{fig:blockstab-weak-good-strong}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=0,05_abserr_blocks.png}
\subcaption{$\log|u-u_h|$, $\delta h^{-1}=0.05$}
\label{fig:square:abserr:005}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=0,05_mesh.png}
\subcaption{$u_{h,0}$, $\delta h^{-1}=0.05$}
\end{subfigure}
\\[0.5em]
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=0,2_abserr_blocks.png}
\subcaption{$\log|u-u_h|$, $\delta h^{-1}=0.2$}
\label{fig:square:abserr:02}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=0,2_mesh.png}
\subcaption{$u_{h,0}$, $\delta h^{-1}=0.2$}
\end{subfigure}
\\[0.5em]
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=0,5_abserr_blocks.png}
\subcaption{$\log|u-u_h|$, $\delta h^{-1}=0.5$}
\label{fig:square:abserr:05}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=0,5_mesh.png}
\subcaption{$u_{h,0}$, $\delta h^{-1}=0.5$}
\end{subfigure}
\\[0.5em]
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=1_abserr_blocks.png}
\subcaption{$\log|u-u_h|$, $\delta h^{-1}=1$}
\label{fig:square:abserr:1}
\end{subfigure}
\begin{subfigure}[t]{0.3\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-dh=1_mesh.png}
\subcaption{$u_{h,0}$, $\delta h^{-1}=1$}
\end{subfigure}
\caption{\emph{Effect of gap increase.} Sequence of two-patch domain with a gap, where the gap size $\delta$ is gradually increased. \emph{Left:} The absolute error for the numerical solution in the outer patch. \emph{Right:} The numerical solution of the hybrid variable on top of the (faded) numerical solution in the outer patch.}
\label{fig:increasing-gap}
\end{figure}
\paragraph{Surface CAD Example.}
As a final example, we consider the surface CAD geometry of a tube intersection presented in Figure~\ref{fig:cad-surface}. This geometry was created using the surface CAD modeling software Rhino \cite{rhino} and exported in IGES format. In the CAD model, each of the three tubes is described as a parametric mapping from $[0,1]^2$ onto a tube surface, along with trim curves in $[0,1]^2$ defining the parts of the tube surface to remove, which in this case are given by the tube intersections. Note that this surface CAD description does not include any connectivity information. To emphasize the gaps along the interfaces, we manually shifted the tube pieces for the final geometry.
In Figure~\ref{fig:cad-example} we present a numerical solution to a Dirichlet problem without load, where we impose a different constant value on each of the four tube ends. Looking at the surface solution, we note that it seems to flow nicely over the gaps. The hybrid variable stabilization seems to do its job, as the hybrid variable solution does not appear to vary across the gap. Due to the exaggerated gap size a quite large mesh size is used for the hybrid variable, and, while seemingly not problematic in this example, we realize that the hybrid variable actually has some unwanted coupling between the various interfaces. This is a potential drawback of the simple implementation of the method, where we define the hybrid variable for all interfaces using one continuous field. On the other hand, in practice this is not an issue for a problem with a more reasonable gap size, and the simple and robust implementation is a strength of the method.
\begin{figure}
\centering
\includegraphics[width=0.35\linewidth]{fig-cad-surface}
\caption{\emph{CAD surface with gaps.}
The three tube pieces are patchwise described, where each piece is given by a parametric mapping from $[0,1]^2\subset\IR^2$ onto the tube surface. The interfaces where the tubes intersect are described using trim curves in each patch, defining parts of the surface to remove. Since the trim curves only give approximations to the true interfaces, the CAD surface includes small gaps over the interfaces.
In this example, we have exaggerated the gaps by translating the upper tubes vertically.}
\label{fig:cad-surface}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\linewidth}\centering
\includegraphics[width=0.86\linewidth]{fig-cad-surface-uh}
\subcaption{Surface solution}
\label{fig:cad-uh}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}\centering
\includegraphics[width=0.9\linewidth]{fig-cad-surface-uh0}
\subcaption{Hybrid variable solution}
\label{fig:cad-uh0}
\end{subfigure}
\caption{\emph{Solution on CAD surface.}
In (a) we see the numerical solution to a Dirichlet problem with zero load where a constant value is imposed along each tube ending, with a different value for each ending. The solution flows nicely over the interfaces, both when coupling different patches over the gaps and when coupling a patch to itself.
As seen in (b), both types of couplings are handled by the hybrid variable.
}
\label{fig:cad-example}
\end{figure}
\section{Summary}
In this contribution we have utilized weak enforcement techniques for interface problems, based on a hybridized Nitsche's formulation, to robustly couple solutions on surface CAD geometries with gaps/overlaps at the interfaces. Our approach has several benefits:
\begin{itemize}
\item \emph{Convenient and robust implementation.}
The use of a hybridized Nitsche formulation for the coupling makes for a very convenient and robust implementation.
The convenience lies in that surface patches only couple directly to the hybrid variable, so assembly is naturally done patchwise, and that the hybrid variable is defined on a structured (potentially octree) grid in an embedding Euclidean space, which makes operations such as identifying which element contains a given point easy and efficient.
In contrast to other multipatch methods based on Nitsche formulations, the hybrid formulation limits the need for computing inverses of the NURBS mappings in adjacent patches, which increases robustness.
Further, the use of CutFEM techniques in the patches makes for very flexible and convenient discretization choices, since the computational meshes are not required to conform to the trimmed reference domains.
\item \emph{Ease of application.} Since surface CAD models do not always include good connectivity information, i.e., the topological relationship of how the patch boundaries are coupled to each other, it significantly simplifies the application of the method that this information is not needed, but is rather implicit through the hybrid variable. Actually, the method is agnostic to both the number of surface patches joining at an interface, and whether the interface couples a patch to itself or to another patch.
\item \emph{Mathematical and numerical analysis.} Our preliminary mathematical analysis shows that we can devise an optimal order method using this technique, how the error is affected by the gap size, and what a suitable scaling of the hybrid variable stabilization is.
Our numerical results give further verification of the performance and insights into the behavior of the method.
\end{itemize}
A limitation in our current extraction of the hybrid variable mesh $\mcT_{h,0}$ is that we essentially assume the gap size $\delta$ to be smaller than the mesh size $h$, since, from an accuracy perspective, there is little motivation to use a smaller $h$. However, it would be interesting to make the method robust also when $h \ll \delta$. This would require a technique for estimating the gap sizes along with an approach for padding $\mcT_{h,0}$ such that it is always simply connected across the gap.
\bigskip
\paragraph{Acknowledgement.} This research was supported in part by the Swedish Research
Council Grants Nos.\ 2017-03911, 2021-04925, and the Swedish
Research Programme Essence.
\bibliographystyle{habbrv}
\footnotesize{
\section{Introduction}
Bound pairs of spinning black holes (BHs) orbiting around each other, known as binary BHs (BBHs), have only very recently been observed. The first gravitational waves detected by LIGO \cite{2016PhRvL.116f1102A} confirmed the existence of BBHs in the Universe, by detecting the final stages of their inspiral and their merger. The subsequent LIGO-Virgo detections~\cite{Abbott:2016nmj,Abbott:2017vtc,Abbott:2017gyy,Abbott:2017oio} confirmed an abundant BBH population, when one considers mildly cosmological distances. In our galaxy, on the other hand, BBH \textit{mergers} will be extremely rare events, but indirect evidence from the electromagnetic channel supports the existence of BH binaries. For instance, recent observations have detected an abundant number of binary systems that contain stellar-mass BHs in the central parsec of the Galactic Centre, where the supermassive BH, Sagittarius A*, resides~\cite{2018Natur.556...70H}. This finding is in agreement with current models of galactic stellar dynamics, which also predict a population of isolated BHs and of BBHs in this central galactic region. Thus, BBHs are expected to be common astrophysical systems.
Theoretical and phenomenological properties of BBHs have been studied for a long time - see, $e.g.$~\cite{1987thyg.book..330T,Schutz:1989yw,Kulkarni:1993fr,Sigurdsson:1993zrm,Colpi:2003wb,2016PhRvL.116f1102A,Belczynski:2016ieo}. A particularly interesting feature is their strong lensing effect. Like stationary isolated BHs, dynamical BBHs bend light in their proximity, creating deformed images, or even multiple images of background bright objects. Moreover, these dynamical sources cast {\it shadows} - regions in the local sky lacking radiation, associated with null geodesics that, when propagated backwards in time, are absorbed by the BHs (see~\cite{Perlick:2004tq,Cunha:2018acu} for reviews). Solving for the lensing effects, including the shadows, of general-relativistic BBHs is, however, more challenging than for isolated BHs. The spacetime geometry created by astrophysical binaries is {\it dynamical} and not known analytically. Thus, the lensing effects/shadows are typically resolved to high accuracy via ray tracing on top of dynamical fully non-linear numerical simulations. Specific features of the shadows of BBHs have been identified in these numerical studies. For instance, in dynamical BBHs there are two prominent visible shadows, each associated with one of the two BHs, with narrow secondary `eyebrow' shadows close to the outside of each primary shadow. Such eyebrows also occur in static double BH configurations~\cite{Yumoto:2012kz,Nitta:2011in,Shipley:2016omi,Cunha:2018gql}. In these static binary systems one typically has axial symmetry, with the lensing images, including the aligned eyebrows and main shadows, manifesting this symmetry. In dynamical BBHs, by contrast, both the intrinsic spin of each BH and the orbital spin of the binary are responsible for frame-dragging, producing a shift of the eyebrows' position in the direction opposite to the spin, as shown in \cite{2015CQGra..32f5002B}.
Motivated by the recent BBHs discoveries, in this paper we report on a computationally simpler method to reproduce, as a proxy, what an observer in the vicinity of a BBH would see, due to the strong lensing of light induced by the dynamical binary. In particular, this conceptually simple method is able to reproduce the leading effects of the orbital angular momentum of the binary.
The method presented herein is based on a {\it quasi}-static approach to resolve the photon orbits for BBHs. The strategy is to locally compute null geodesics on top of an exact \textit{static or stationary} BBH background, such as the double Schwarzschild (a.k.a. Bach-Weyl) geometry~\cite{Bach:1922}, and periodically adjust them by small rigid rotational corrections along an axial vector field that does not coincide with the axi-symmetry of the exact solution, thus mimicking the orbital spin of the BHs. These corrections along the photon positions will be discrete rotations, with the frame of the two BHs fixed. This procedure provides a proxy to computing the paths of light rays that meet an observer in the vicinity of a truly dynamical binary. A snapshot of such a {\it quasi}-static evolution of the geodesics on a static double-Schwarzschild BH solution \cite{Israel:1966rt}, using an image of the Milky Way as background, is depicted in Fig.~\ref{stars}. Supplementary movies for the shadows and lensing due to this quasi-static BBHs can be found in \cite{webpage}. As we shall illustrate below the leading characteristic features of the shadows of the full dynamical BBHs are replicated by this procedure.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.4\textwidth]{binary2.jpg}
\caption{\small
Shadows and lensing in a quasi-static binary BH, using ESO's Milky Way sky~\cite{ESO-sky} as background. The separation between the (equal-mass) BHs is $z_o=3M$, where $M$ is the mass of each component, and the counter-clockwise rotation rate is $\omega M=0.02$ (see Section~\ref{section:quasi}). Supplementary movies can be found in \cite{webpage}.}
\label{stars}
\end{center}
\end{figure}
To study the effects of the \textit{intrinsic} (rather than orbital) angular momenta of the BHs in the BBH system, we also compute the shadows of stationary (non-dynamical) spinning BBH solutions of general relativity. We shall use the double-Kerr BH solutions \cite{Herdeiro:2008kq,Manko:2017avt,Cabrera-Munguia:2017dol,Costa:2009wj,Manko:2013iva} that are known exactly. These are asymptotically flat metrics that represent two Kerr BHs with a conical singularity between them (in the representation we use). Similar conical defects are found in the double-Schwarzschild BH solutions. In the latter case, it was observed in~\cite{Cunha:2018gql} that, thanks to the underlying cylindrical symmetry of both the geometry and the spatial part of the fundamental photon orbits, the conical singularity has essentially no observable effect on the shadows. Since a cylindrical symmetry is also present in the double-Kerr solution, we also expect no observable effect on the shadows due to the conical singularities. This is confirmed by computing the null geodesics in these backgrounds: we are able to produce images of the shadows of the co-rotating (even) and counter-rotating (odd) exact stationary BBH configurations. Our resulting images show complex and in some cases self-similar structure across different angular scales. Among the stationary BBHs there is a set of extremal, maximally spinning solutions. The extremal configurations have finite size (event horizon area) and zero temperature. While we present a few images of the shadows of stationary BBHs spinning near extremality, images of the first representations of the exactly extremal BBHs will be presented in a forthcoming publication, where we shall make contact with the recent analysis of the near-horizon geometry of these BH binaries~\cite{Ciafre:2018jpe}.
In what follows we describe, in Section 2, the shadows of \textit{quasi}-static BBHs, to address the effect of orbital angular momentum on the lensing. We focus on explaining the method we developed to trace light rays and present our results. In Section 3, we turn to the effect of the intrinsic spin on the lensing, considering double-Kerr BBHs without any orbital angular momentum. In Section \ref{section:even}, we compute the shadows of the stationary double-Kerr BH \cite{Manko:2017avt,Cabrera-Munguia:2017dol} with co-rotating (even) spins, and in Section \ref{section:odd} we find and analyze the shadows of the stationary counter-rotating (odd) double-Kerr BH \cite{Costa:2009wj,Manko:2013iva}. These more analytical approaches (compared to ray tracing on numerical simulations) will hopefully enable a better understanding of the shadows of astrophysical BBHs.
\section{\textit{Quasi}-static Binary Black Holes}
\label{section:quasi}
The double-Schwarzschild BH is a {\it static} solution of the vacuum Einstein's equations. Starting from it, however, we can construct a rotation proxy that mimics the leading effects of a fully dynamical BH binary, as far as lensing effects are concerned. In this section we will describe such a proxy, focusing on signatures at the level of the shadows.
The rotation of the binary will be assumed to be {\it adiabatic-like}, $i.e.$ the BHs will move on a time scale much longer than the travel time of a typical photon reaching the observer. Under this approximation, photons will locally follow null geodesics in the double-Schwarzschild (static) background, with the trajectory periodically suffering small corrections due to the rotation of the BHs. These corrections are simply discrete (active) rotations of the photon position along its path, keeping the frame of the two BHs fixed; such a procedure is straightforward to implement numerically. At the end of the trajectory the photon position is rotated back into the observer's frame. This system will be dubbed a \textit{quasi}-static BH binary.
The static BH binary (the double Schwarzschild solution) will be described in Weyl coordinates $x^\alpha=(t,\rho,\varphi,z)$ - see~\cite{Cunha:2018gql} for the details of the solution. Consider then a map
\begin{eqnarray}
\Omega: &&\mathcal{M} \to \mathcal{M} \\
&& x^{\alpha} \to x^{\alpha'}=\Omega^{\alpha'}(x^{\alpha})\ ,
\end{eqnarray}
where $x^{\alpha'}=(t',\rho',\varphi',z')$, that takes each point of our manifold $\mathcal{M}$ to another point in $\mathcal{M}$. In order to naively mimic a Cartesian rotation, the map $\Omega$ is defined as follows (in Weyl coordinates):
\[\
\begin{cases}
t'=t\\
\rho'=\sqrt{x'^2+y'^2}\\
\varphi'=\left\{
\begin{array}{l}
\displaystyle{ \textrm{asin}\frac{y'}{\rho'} \quad \textrm{if}\quad(x'\geqslant 0)}\\
\displaystyle{ \pi-\textrm{asin}\frac{y'}{\rho'}\quad \textrm{if}\quad(x'<0)} \end{array} \right. \\
z'= z',
\end{cases} \ ,
%
\left(
\begin{array}{l}
x' \\
y' \\
z' \\
\end{array}
\right) =
\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos \omega\delta t & \sin\omega\delta t \\
0 &- \sin\omega\delta t & \cos \omega\delta t \\
\end{array}
\right)
%
\left(
\begin{array}{l}
x \\
y \\
z \\
\end{array}
\right) \ ,
\]
and $x+iy=\rho e^{i\varphi}$.
Hence, after a time interval $\delta t$, the photon position is corrected by changing its initial location $P=(t,\rho,\varphi,z)$ to a new point $P'=(t',\rho',\varphi',z')$ under the map $\Omega$. We remark that $\Omega$ is well defined even when $(\omega\delta t)\gg 1$, although this will not usually be the case during the numerical integration of the trajectory.
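As an illustration, the discrete correction of the photon position can be implemented as follows. This is a minimal sketch in Python (function names are ours); the asin branch prescription of the text is replaced by the equivalent atan2.

```python
import numpy as np

def weyl_to_cart(rho, phi, z):
    # x + i y = rho * exp(i phi)
    return np.array([rho * np.cos(phi), rho * np.sin(phi), z])

def cart_to_weyl(v):
    x, y, z = v
    # atan2 reproduces the asin branch prescription of the text
    return np.hypot(x, y), np.arctan2(y, x), z

def rotate_photon(rho, phi, z, omega, dt):
    """One discrete (active) rotation of the photon position about the
    x-axis (the orbital rotation axis) by the angle omega*dt."""
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    return cart_to_weyl(R @ weyl_to_cart(rho, phi, z))
```

Rotating by $\omega\delta t$ and then by $-\omega\delta t$ returns the initial point, and both $x$ and $y^2+z^2$ are invariant, as expected for a rotation about the $x$-axis.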
The next issue is how the photon's 4-momentum should be modified. The vector $p=p^\mu\partial_\mu$ at point $P$ can be projected via $\Omega$ into the push-forward vector $(\Omega^*p)$ at $P'$~\cite{Carroll:1997ar,Wald:1984rg}:
\begin{align*}
(\Omega^*p)=p^\mu\partial_\mu\Omega^{\alpha'} \partial_{\alpha'}= p^t\partial_{t'} + (p^i\partial_i\Omega^{a'}) \partial_{a'},
\end{align*}
where $i\in\{\rho,\varphi,z\}$ and $a'\in\{\rho',\varphi',z'\}$. However, restrictions have to be imposed on $(\Omega^*p)$ in order for it to represent the 4-momentum at $P'$. We require the new momentum $\widetilde{p}$ to satisfy the following two conditions:
\begin{itemize}
\item The photon's local energy $\mathcal{E}$ is the same for a static observer in $P$ and $P'$, $i.e.$ $\mathcal{E}=\sqrt{-g_{tt}}\,p^t=\sqrt{-g'_{tt}}\,\widetilde{p}^{t'}$~\cite{Cunha:2016bjh}. This is reasonable because the physical rotation is performed by the BHs, and an observer in $P$ can be identified with one in $P'$.
\item The norm of $\widetilde{p}$ vanishes, $i.e.$ $\widetilde{p}^{\alpha'}\,\widetilde{p}_{\alpha'}=0$.
\end{itemize}
It follows that the new momentum $\widetilde{p}$ at $P'$ is then defined as
\[\widetilde{p}= \left(\sqrt{\frac{g_{tt}}{g'_{tt}}}\,p^t\right) \partial_{t'} + \gamma\left(p^i\partial_i\Omega^{a'}\right)\partial_{a'},\]
where the (positive) factor $\gamma$ enforces the vanishing of the norm. We further remark that this procedure modifies the values of the photon's energy $E=-p_t$ and axial angular momentum $L=p_\varphi$ with respect to infinity, which otherwise would be Killing constants of motion. This implies that a photon can in principle escape the system with more (or less) energy than it started with. We stress that this operation does not amount (generically) to a simple coordinate transformation.
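The two requirements fix $\widetilde{p}$ uniquely up to the sign of $\gamma$. A minimal numerical sketch of this rescaling (ours, assuming a metric with no $t$-space cross terms, as holds for the static binary in Weyl coordinates):

```python
import numpy as np

def corrected_momentum(pt, p_space, gtt_P, gtt_Pp, g_space_Pp):
    """Rescale the pushed-forward momentum at P'.

    pt         : p^t at P
    p_space    : spatial components of the push-forward (Omega* p) at P'
    gtt_P/Pp   : g_tt at P and at P' (both negative)
    g_space_Pp : spatial metric block at P'
    """
    # preserve the local energy E = sqrt(-g_tt) p^t of a static observer
    pt_new = np.sqrt(gtt_P / gtt_Pp) * pt
    # positive factor gamma enforcing the vanishing norm
    Q = p_space @ g_space_Pp @ p_space
    gamma = np.sqrt(-gtt_Pp * pt_new**2 / Q)
    return pt_new, gamma * p_space
```

In flat space ($g_{tt}=-1$ everywhere) a null momentum is returned unchanged; when $g_{tt}$ differs between $P$ and $P'$, the factor $\gamma$ compensates so that the corrected momentum is still null.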
Although the angular frequency $\omega$ of the BH binary is a free parameter that was introduced in the model, a physically reasonable value of $\omega$ can be estimated from the Keplerian orbital frequency:
\[\omega\sim \left(\Delta z+1\right)^{-3/2}\,M^{-1},\]
where $M$ is the ADM mass, and $\Delta z$ is the proper distance between the two BHs. The latter can be computed with a complete elliptic integral of the second kind (see~\cite{PhysRevD.80.104036}):
\[\Delta z =(2z_o+1)\left(1-\frac{1}{4z_o^2}\right)E\left(\frac{2z_o-1}{2z_o+1}\right).\]
The parameter $z_o$ in the previous expression parametrises the BH distance and is the same that was used in~\cite{Cunha:2018gql}.
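These two estimates are simple to evaluate numerically. A sketch (function names ours); note that scipy's `ellipe` takes the parameter $m=k^2$, and we assume here that the argument quoted above follows this convention (conventions for $E$ differ in the literature):

```python
import numpy as np
from scipy.special import ellipe

def proper_distance(z_o):
    """Proper distance between the two horizons, in units of the ADM
    mass M (assuming the elliptic-integral argument is the parameter m)."""
    m = (2.0 * z_o - 1.0) / (2.0 * z_o + 1.0)
    return (2.0 * z_o + 1.0) * (1.0 - 1.0 / (4.0 * z_o**2)) * ellipe(m)

def kepler_omega(z_o):
    """Keplerian estimate of the orbital frequency, in units of 1/M."""
    return (proper_distance(z_o) + 1.0) ** (-1.5)
```

For $z_o=3M$ this gives $\omega$ of order $10^{-2}\,M^{-1}$, the same order as the value $\omega M=0.02$ used in Fig.~\ref{rotation}.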
Applying the approach just described to the double-Schwarzschild solution, with the same setup as used in~\cite{Cunha:2018gql} (see also~\cite{Cunha:2015yba,Cunha:2016bjh,Cunha:2017eoe}), we obtain the lensing and shadows displayed in~Fig.~\ref{rotation}. The first column of this figure displays the lensing and shadows of a static double-Schwarzschild BH with $z_o=3M$, already discussed in~\cite{Cunha:2018gql}. The second column displays the corresponding quasi-static binary with $\omega=0.02 M^{-1}$, with the BHs rotating \textit{counter-clockwise} in the image (see movie in~\cite{webpage}). Observe that the shadows are twisted \textit{clockwise} in the image with respect to the static case. This can be interpreted as follows. The observation image was taken at coordinate time $t=0$; at this time the binary had the same vertical orientation as in the static case. Since light takes a finite amount of time to reach the observer, the shadows are actually recording the BH positions at a past time ($t<0$), when the BHs were rotated clockwise with respect to $t=0$. The shadow eyebrows are a second-order lensing effect, related to a time even further into the past, thus presenting an additional clockwise rotation with respect to the main shadows. For illustration purposes we have included Fig.~\ref{stars}, with the same lensing and shadows as the rightmost image of Fig.~\ref{rotation}, but replacing the colored background with an image of the Milky Way (see movie in~\cite{webpage}).
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.35\textwidth]{A3-w0_0.png}
\ \ \ \includegraphics[width=0.35\textwidth]{A3-w0_02.png}
\includegraphics[width=0.35\textwidth]{A3-w0_0-shadow.png} \ \ \
\includegraphics[width=0.35\textwidth]{A3-w0_02-shadow.png}
\caption{\small \textit{Top row:} Lensing of a static (left) and quasi-static (right) BH binary with $z_o=3M$ and $\omega M=\{0\,,\,0.02\}$. \textit{Bottom row:} Shadows of the previous images. The observer sits along the axis of orbital rotation (the $x$-axis). The BHs rotate counter-clockwise in the image for positive $\omega$.}
\label{rotation}
\end{center}
\end{figure}
To assess the accuracy of the method described above as a proxy for the lensing in a dynamical BBH, we perform, in Fig.~\ref{bohn}, a lensing comparison between a fully dynamical binary close to merger~\cite{2015CQGra..32f5002B} and a similar quasi-static binary. Despite clear differences concerning specific details of the lensing, the overall \textit{qualitative} resemblance between both cases at the level of the shadow structure is uncanny. Still, in order to have a more \textit{quantitative} comparison between both images in Fig.~\ref{bohn}, we define two parameters $\chi,\psi$. The first parameter, $\chi$, is the ratio between the shadow area\footnote{The shadow area corresponds to a solid angle in the observer's sky.} of the main shadows and that of the associated eyebrows; one obtains $\chi\simeq \{15,20\}$, respectively, for the left (right) image of Fig.~\ref{bohn}.
The second parameter, $\psi$, is an angle that parametrises the eyebrows' angular displacement with respect to the main shadows. By first computing the average position of all points within each shadow element, one can draw two straight lines connecting similar average position points, $e.g.$ eyebrow to eyebrow and main shadow to main shadow. It is then possible to define $\psi$ as the angle formed between these two lines. We obtain $\psi\simeq\{42^{\circ}, 33^{\circ}\}$ respectively for the left (right) image of Fig.~\ref{bohn}. We remark that $\psi=0$ for the static BH binary, by symmetry (see left image of Fig.~\ref{rotation}).
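A sketch of how $\chi$ and $\psi$ can be extracted from a segmented shadow image (the point sets would come from labelling the shadow pixels; all names here are ours):

```python
import numpy as np

def centroid(points):
    return np.mean(np.asarray(points, dtype=float), axis=0)

def chi(main_shadows, eyebrows):
    """Ratio of the total pixel count of the main shadows to that of
    the eyebrows (a proxy for the solid-angle ratio)."""
    return sum(len(s) for s in main_shadows) / sum(len(e) for e in eyebrows)

def psi(main_top, main_bot, brow_top, brow_bot):
    """Angle (degrees) between the line joining the main-shadow
    centroids and the line joining the eyebrow centroids."""
    v1 = centroid(main_top) - centroid(main_bot)
    v2 = centroid(brow_top) - centroid(brow_bot)
    c = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

By construction $\psi=0$ whenever the eyebrow axis is aligned with the main-shadow axis, as in the static configuration.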
Although the values of $\chi,\psi$ are not exactly the same for both images in Fig.~\ref{bohn}, the quasi-static binary is here displayed mainly as a proof of concept. In particular, the values of $\{z_o,\omega\}$ of the quasi-static binary are quite \textit{ad hoc}, leaving some room for optimization. Moreover, note that we have chosen a BBH close to merger, in which case the adiabatic approximation of the quasi-static binary is starting to break down, as the BHs change their positions on a time scale comparable to the light ray's travel time towards the observer. In addition, unusual effects at the level of the shadow start to be noticeable, in particular a non-smooth edge ($i.e.$ a cusp) due to the combination of the conical singularity and rotation.\\
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.4\textwidth]{Bohn3.png} \ \ \
\includegraphics[width=0.4\textwidth]{Bohn-similar3.jpg}
\caption{\small {\it Left:} Shadows and lensing of a fully dynamical binary of equal-mass BHs with no
spin (adapted from~\cite{2015CQGra..32f5002B}). {\it Right:} Quasi-static BH binary with $z_o=1.5M$ and $\omega=0.06\,M^{-1}$.}
\label{bohn}
\end{center}
\end{figure}
\section{Stationary Binary Black Holes}
The previous section illustrated the mimicked effect of the orbital angular momentum of a dynamical binary on the lensing of light, by using the double-Schwarzschild solution in a quasi-static approximation. Generically, however, dynamical binaries also have intrinsic BH spin. We now show that the lensing effect of the intrinsic spin is taken into account by considering a \textit{stationary}, rather than static, binary, described by the double-Kerr solution. We shall be interested in the particular cases of the double-Kerr solution describing two equal-mass BHs with either equal (even case) or opposite (odd case) spins. In both cases the double-Kerr solution has a conical singularity in between the BHs and is described by the line element, in Weyl coordinates:
\begin{equation}
ds^2= -f(dt-\omega\,d\varphi)^2 + \frac{e^{2\gamma}}{f}\left(d\rho^2 +dz^2\right) +\frac{\rho^2}{f}d\varphi^2,
\end{equation}
where the metric functions $f,\gamma,\omega$ only depend on the coordinates $\rho,z$.
\subsection{The even case}
\label{section:even}
For two equal mass and equal spin BHs, the metric functions are defined as~\cite{Manko:2017avt,Cabrera-Munguia:2017dol}:
\[f=\frac{A\bar{A}-B\bar{B}}{(A+B)(\bar{A}+\bar{B})},\quad e^{2\gamma}=\frac{A\bar{A}-B\bar{B}}{K_o^2R_{11}R_{01}R_{10}R_{00}},\quad \omega=4a-\frac{2\textrm{Im}\left\{(\bar{A}+\bar{B})G\right\}}{A\bar{A}-B\bar{B}},\]
where the overbar denotes complex conjugation and
\[A=4z_o^2(R_{11}-R_{01})(R_{10}-R_{00})-4\sigma^2\left(R_{11}-R_{10}\right)\left(R_{01}-R_{00}\right),\]
\[B=8z_o\sigma\bigg[(z_o+\sigma)(R_{01}-R_{10})-(z_o-\sigma)(R_{11}-R_{00})\bigg],\]
\vspace{0.2cm}
\[G=-zB + 8z_o\sigma\bigg[z_o(R_{01}R_{00}-R_{11}R_{10}) +\sigma(R_{11}R_{01}-R_{10}R_{00}) -(z_o^2-\sigma^2)(R_{11}-R_{01}-R_{10}+R_{00})\bigg],\]
\vspace{0.2cm}
\[R_{jk}(\rho,z)=\frac{-2(\epsilon\sigma+\kappa z_o)+2id}{1+4(\kappa z_o+ia)(\epsilon\sigma+ia)}\sqrt{\rho^2+(z+\kappa z_o+\epsilon\sigma)^2},\qquad \epsilon=2j-1,\quad \kappa=2k-1,\]
\vspace{0.2cm}
\[\sigma=\sqrt{\frac{1}{4}-a^2 + d^2\left(4z_o^2-1+4a^2\right)^{-1}},\qquad K_o=16\sigma^2\left\{\frac{(2z_o^2+z_o+2a^2)^2-a^2}{(z_o+1/2)^2+a^2}\right\},\]
\vspace{0.2cm}
\[d=\frac{a(4z_o^2-1+4a^2)}{(4z_o^2+2z_o+4a^2)},\]
with quantities normalized to the ADM mass $M$ of the solution. This solution has two free parameters, $z_o$ and $a$, with $z_o$ denoting the coordinate position of each BH on the $z$-axis (see Fig.~\ref{setup}), whereas $a$ is a spin parameter related to the ADM axial angular momentum $J=2a-d$.
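As a quick sanity check, $\sigma$ and $d$ can be evaluated numerically and tested against two limits of the parameter space (a sketch; function names ours):

```python
import numpy as np

def d_param(z_o, a):
    # the auxiliary quantity d of the even double-Kerr solution
    return a * (4*z_o**2 - 1 + 4*a**2) / (4*z_o**2 + 2*z_o + 4*a**2)

def sigma_even(z_o, a):
    """sigma for the even (co-rotating) double Kerr, in units of M."""
    d = d_param(z_o, a)
    return np.sqrt(0.25 - a**2 + d**2 / (4*z_o**2 - 1 + 4*a**2))
```

At $a=0$ this reduces to the double-Schwarzschild value $\sigma=1/2$, and for $z_o\to\infty$ it approaches the isolated-Kerr value $\sqrt{1/4-a^2}$, consistent with limits I and IV below.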
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.22\textwidth]{setup.pdf}
\caption{\small Schematic representation of the double-Kerr BH system with identical BHs. The solid black rods along the $z$-direction each represent a BH, while the dashed line between these rods corresponds to the conical singularity. The quantity $\sigma$ is proportional to the horizon temperature~\cite{Cabrera-Munguia:2017dol}.}
\label{setup}
\end{center}
\end{figure}
The physical domain of the parameter space $\{z_o,a\}$ obeys the condition $z_o\geqslant \sigma\geqslant 0$, with $\sigma$ and all metric functions real. The domain with $a\geqslant 0$ has the following limits (see Fig.~\ref{domain}):
\begin{enumerate}
\item[I.] Double-Schwarzschild solution ($a=0\implies J=0$), with $z_o\geqslant 1/2$;
\item[II.] Single Kerr BH, given by $\sigma=z_o$; this leads to $a^2 + z_o^2=1/4$ (blue dashed line in Fig.~\ref{domain});
\item[III.] Extremal limit, provided by $\sigma=0\implies$ vanishing temperature (green solid line in Fig.~\ref{domain});
\item[IV.] Two isolated Kerr BHs with $z_o\to \infty$.
\end{enumerate}
We remark that there is an additional independent region which also satisfies $z_o\geqslant \sigma\geqslant 0$ but for which the metric can have Closed-Timelike-Curves (CTCs)~\cite{Wald:1984rg}; it is thus discarded as unphysical.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.5\textwidth]{solution-space.pdf}
\caption{\small Parameter space ($z_o,a$) of the double-Kerr solution with identical co-rotating BHs. The shaded regions are considered unphysical, with the dashed (solid) line representing the limit II (III). The shadows of the configurations 1 $\to$ 4 are displayed in Fig.~\ref{shadows-2Kerr}.}
\label{domain}
\end{center}
\end{figure}
The shadows and lensing of four solutions, marked in Fig.~\ref{domain} with red dots, are displayed in Fig.~\ref{shadows-2Kerr}. There appear to be no strikingly new features in the shadows. In particular the D-like shadow profile, characteristic of a fast spinning (single) Kerr BH, still holds in the double Kerr case (namely solution 3), as one could have naively anticipated. The third row of Fig.~\ref{shadows-2Kerr} also displays observations outside the equatorial plane, with $\theta_o=\textrm{acos}(z/\sqrt{z^2+\rho^2})=\pi/4$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.25\textwidth]{z0_3-a0_52.png}\includegraphics[width=0.25\textwidth]{z1-a0_5.png}\includegraphics[width=0.25\textwidth]{z2-a0_50925.png}\includegraphics[width=0.25\textwidth]{z2-a0_4.png}\\
\includegraphics[width=0.25\textwidth]{z0_3-a0_52-shadow.png}\includegraphics[width=0.25\textwidth]{z1-a0_5-shadow.png}\includegraphics[width=0.25\textwidth]{z2-a0_50925-shadow.png}\includegraphics[width=0.25\textwidth]{z2-a0_4-shadow.png}\\
\includegraphics[width=0.25\textwidth]{z0_3-a0_52th45-shadow.png}\includegraphics[width=0.25\textwidth]{z1-a0_5th45-shadow.png}\includegraphics[width=0.25\textwidth]{z2-a0_50925th45-shadow.png}\includegraphics[width=0.25\textwidth]{z2-a0_4th45-shadow.png}
\caption{\small Lensing of configurations 1 $\to$ 4 of Fig.~\ref{domain} (columns from left to right). The second (third) row displays only the shadows, as observed with $\theta_o=\pi/2$ ($\theta_o=\pi/4$).}
\label{shadows-2Kerr}
\end{center}
\end{figure}
\subsection{The odd case}
\label{section:odd}
For two equal mass and opposite spin BHs, the metric functions are defined as~\cite{Costa:2009wj,Manko:2013iva,Manko:2008pv}:
\[f=\frac{A\bar{A}-B\bar{B}}{(A+B)(\bar{A}+\bar{B})},\quad e^{2\gamma}=\frac{A\bar{A}-B\bar{B}}{(4z_o\sigma)^4R_{11}R_{01}R_{10}R_{00}},\quad \omega=-\frac{2\textrm{Im}\left\{(\bar{A}+\bar{B})G\right\}}{A\bar{A}-B\bar{B}},\]
where the overbar denotes complex conjugation and
\[A=\sigma^2(R_{11}R_{01}+R_{10}R_{00}) +z_o^2(R_{11}R_{10}+R_{01}R_{00})+\]
\[+(R_{11}R_{00}+R_{01}R_{10})\left(\frac{z_o}{2}+\sigma^2[8z_o^2-1]\right) -4ia\sigma z_o(2z_o-1)(R_{11}R_{00}-R_{01}R_{10}),\]
\vspace{0.2cm}
\[B=4\sigma^2z_o^2(R_{11}+R_{01}+R_{10}+R_{00})-\sigma z_o\bigg(1 +2ia[2z_o-1]\bigg)(R_{11}-R_{01}-R_{10}+R_{00}),\]
\vspace{0.2cm}
\begin{align*}
G=&-zB + 2\sigma^2z_o(R_{10}R_{00}-R_{11}R_{01})+2\sigma z_o^2(R_{01}R_{00}-R_{11}R_{10}) +\\
&+z_o\sigma(z_o+\sigma)(R_{11}-R_{00})\bigg(4z_o\sigma-1-2ia[2z_o-1]\bigg) +\\&+ z_o\sigma(z_o-\sigma)(R_{01}-R_{10})\bigg(4z_o\sigma+1+2ia[2z_o-1]\bigg),
\end{align*}
\vspace{0.2cm}
\[R_{jk}(\rho,z)=\sqrt{\rho^2+(z+\kappa z_o+\epsilon\sigma)^2},\qquad \epsilon=2j-1,\quad \kappa=2k-1,\]
\vspace{0.2cm}
\[\sigma=\sqrt{\frac{1}{4}- a^2\left(\frac{2z_o-1}{2z_o+1}\right)},\]
with quantities normalized to the ADM mass $M$ of the solution. Again, this solution has two free parameters, $z_o$ and $a$, with $z_o$ denoting the coordinate position of each BH on the $z$-axis (see Fig.~\ref{setup}), whereas $a$ is a spin parameter fixing the (Komar) angular momentum of the lower BH, $J^-=a/2$. We further remark that the total ADM angular momentum vanishes, since the upper BH has $J^+=-a/2$.\\
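The odd-case $\sigma$ is simple enough to check directly against limits I and III below (a sketch; function names ours):

```python
import numpy as np

def sigma_odd(z_o, a):
    """sigma for the odd (counter-rotating) double Kerr, in units of M."""
    val = 0.25 - a**2 * (2*z_o - 1) / (2*z_o + 1)
    # clip tiny negative round-off exactly at the extremal boundary
    return np.sqrt(np.maximum(val, 0.0))

def a_extremal(z_o):
    """Spin parameter on the extremal boundary sigma = 0 (limit III)."""
    return 0.5 * np.sqrt((2*z_o + 1) / (2*z_o - 1))
```

At $a=0$ one recovers the double-Schwarzschild value $\sigma=1/2$, while inserting $a=a_{\rm ext}(z_o)$ gives $\sigma=0$, the zero-temperature boundary.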
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.5\textwidth]{solution-space_Kerr_odd.pdf}
\caption{\small Parameter space ($z_o,a$) of the double-Kerr (odd) solution with counter-rotating BHs. The shaded regions are considered unphysical. The shadows of the configurations 1 $\to$ 6 are displayed in Fig.~\ref{shadows-2Kerr-odd}.}
\label{domain-2Kerr-odd}
\end{center}
\end{figure}
Again, the physical domain of the parameter space $\{z_o,a\}$ obeys the condition $z_o\geqslant \sigma\geqslant 0$, with $\sigma$ and all metric functions real. The domain with $a\geqslant 0$ has the following limits (see Fig.~\ref{domain-2Kerr-odd}):
\begin{enumerate}
\item[I.] Double-Schwarzschild solution ($a=0\implies J^{\pm}=0$), with $z_o\geqslant 1/2$;
\item[II.] Single BH, given by $\sigma=z_o=1/2$ (vertical dotted line in Fig.~\ref{domain-2Kerr-odd});
\item[III.] Extremal limit, provided by $\sigma=0\implies a=\pm \frac{1}{2}\sqrt{{(2z_o+1)}/{(2z_o-1)}}$ (blue line in Fig.~\ref{domain-2Kerr-odd});
\item[IV.] Two isolated Kerr BHs with $z_o\to \infty$ and opposite rotation.
\end{enumerate}
The boundary II corresponds to a Schwarzschild BH when $a=0$ and $z_o=1/2$, whereas for $a\neq 0$ and $z_o=1/2$ the horizon is singular~\cite{Costa:2009wj}. Nevertheless, in terms of shadows and gravitational lensing, the boundary II appears to be indistinguishable from the Schwarzschild case.
The shadows and lensing of six solutions, marked in Fig.~\ref{domain-2Kerr-odd} with red dots, are displayed in Fig.~\ref{shadows-2Kerr-odd}.\footnote{Geodesics in the counter-rotating Kerr-Newman solution were previously discussed in~\cite{Dubeibe:2016vhp}.} The lensing and shadows appear to display a rotation effect, similar to that in Fig.~\ref{rotation}. However, despite the apparent similarities, both cases are quite different, with the anti-symmetry of the (odd) double-Kerr only giving the appearance of an image rotation. For instance, notice that the surface $z=0$ is not a totally geodesic sub-manifold, $i.e.$ a geodesic initially tangent to that plane can leave it, going up or down depending on the sign of the geodesic angular momentum $L$. This effect, together with the anti-symmetric frame-dragging, creates the perception of a rotation at the level of the lensing. The image is stationary and not dynamical, in contrast to the quasi-static case in Fig.~\ref{rotation}. Another observation is that as $a\to \infty$ the shadows look increasingly Schwarzschild-like, although there is still some shadow inner structure that quickly becomes imperceptible (see configuration 1 in Fig.~\ref{shadows-2Kerr-odd}). In addition, the shadow topology changes along the solutions, with configuration 3 displaying a shadow close to a topological transition.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.25\textwidth]{1-z0_6-a1_5.png}\includegraphics[width=0.25\textwidth]{2-z0_9-a0_9336.png}\includegraphics[width=0.25\textwidth]{3-z1_097-a0_737.png}\\
\includegraphics[width=0.25\textwidth]{1-z0_6-a1_5-shadow.png}\includegraphics[width=0.25\textwidth]{2-z0_9-a0_9336-shadow.png}\includegraphics[width=0.25\textwidth]{3-z1_097-a0_737-shadow.png}\\
\vspace{1.cm}
\includegraphics[width=0.25\textwidth]{4-z1_2-a0_7.png}\includegraphics[width=0.25\textwidth]{5-z2-a0_645.png}\includegraphics[width=0.25\textwidth]{6-z2-a0_3.png}\\
\includegraphics[width=0.25\textwidth]{4-z1_2-a0_7-shadow.png}\includegraphics[width=0.25\textwidth]{5-z2-a0_645-shadow.png}\includegraphics[width=0.25\textwidth]{6-z2-a0_3-shadow.png}
\caption{\small Lensing of configurations 1 $\to$ 6 of Fig.~\ref{domain-2Kerr-odd} (from left to right and from top to bottom).}
\label{shadows-2Kerr-odd}
\end{center}
\end{figure}
\section{Discussion}
In this paper we have studied the effect of the orbital and intrinsic angular momentum on the lensing of light due to a BH binary, by using \textit{analytically} known solutions of General Relativity. In order to consider the effect of the orbital angular momentum, we have studied the double-Schwarzschild solution, which is static, under a quasi-static procedure that mimics an orbital rotation. The corresponding lensing is able to reproduce the main features of the shadows observed in dynamical binaries, obtained through a considerably more complex procedure which relies on producing fully non-linear numerical evolutions of BHs and performing ray tracing on top of them.
To observe the effect of the intrinsic spin of the BHs in the binary, we have considered the double-Kerr solution, which is stationary, for two particular cases: equal masses and equal or opposite spins. The lensing effects and shadow structure can be quite different in these two cases. In particular, for the odd case, an effect on the shadows similar, to some extent, to that of the orbital angular momentum can be observed, which can be traced back to the opposite dragging effects acting in the vicinity of the two BHs.
One obvious further step would be to apply the quasi-static method of Section 2 to the double-Kerr stationary binaries of Section 3. Whereas the procedure should be straightforward, the involved nature of the double-Kerr metric makes it cumbersome. We expect the end result for the shadows to be a superposition of the orbital effect seen in Section 2 with the corresponding intrinsic spin effect seen in Section 3.
\section*{Acknowledgements}
We would like to thank the authors of Ref.~\cite{2015CQGra..32f5002B} for their permission to use the left panel of Fig.~\ref{bohn}. We would also like to thank E. Radu for discussions. P.C. is supported by Grant No. PD/BD/114071/2015 under the FCT-IDPASC Portugal Ph.D. program.
C.H. acknowledges funding from the FCT-IF programme. This work was partially supported by the H2020-MSCA-RISE-2015 Grant No. StronGrHEP-690904,
the H2020-MSCA-RISE-2017 Grant No. FunFiCO-777740 and by the CIDMA project UID/MAT/04106/2013. The authors would like to acknowledge networking support by the
COST Action CA16104. The work of MJR was supported by the Max Planck Gesellschaft through the Gravitation and Black Hole Theory Independent Research Group and by NSF grant PHY-1707571 at Utah State University.
\section{Introduction}
The running charge plays a particularly distinguished role in QCD, both in the history of QCD and in extensive applications. Its perturbative behavior in the UV exemplifies asymptotic freedom, and it is constantly invoked in phenomenology. But it has its difficulties: One must choose a single momentum scale for the running charge, while for many processes (for example, octet $q\bar{q}$ annihilation into two gluon jets) it may be more appropriate physically to use the parent three-point function with its three momentum scales.
The PT \cite{cornbinpap} offers a way to construct off-shell Green's functions of a non-Abelian gauge theory (NAGT) that are gauge- and process-independent. Recently, a trivial extension was introduced \cite{corn141} that made PT Green's functions also renormalization-group independent (RGI), meaning independent of any renormalization mass $\mu$. This is possible because the PT leads to ghost-free Ward identities, like those of QED, and through these to the equality of certain renormalization constants.
First, we describe conventional renormalization in order to contrast it with the PT-RGI procedure. Consider a renormalizable\footnote{Not superrenormalizable---so either in $d=6$, or in $d=4$ with additional four-point couplings.} field theory with a bare coupling term $g_0\phi_1\phi_2\phi_3$, yielding an unrenormalized vertex $\Gamma_U$. Use subscripts $U$ to denote fully-dressed but unrenormalized Green's functions or fields, and $R$ to denote renormalized quantities. These are related by renormalization constants:
\begin{equation}
\label{genvert}
\phi_{Ui}=Z_i^{1/2}\phi_{Ri},\;\;g_0=\frac{Z_Vg_R}{(Z_1Z_2Z_3)^{1/2}},\;\;\Gamma_U=\frac{\Gamma_R}{Z_V}.
\end{equation}
In this generic theory the renormalized coupling is arbitrary, and there are four different renormalization constants. A typical renormalization equation applies to the propagators $D_i$:
\begin{equation}
\label{typrenorm}
D_{Ui}(p;\Lambda_{UV})=Z_i(\mu,\Lambda_{UV})D_{Ri}(p; \mu)
\end{equation}
in which $\Lambda_{UV}$ is a UV cutoff and $\mu$ is an essentially arbitrary mass scale---the renormalization point. (Of course, one can introduce many renormalization points, but usually one is enough.) Now $D_{Ri}$ is finite and independent of $\Lambda_{UV}$, but the price paid is that this propagator depends on $\mu$. Various choices of $\mu$ are related by the renormalization group.
Our objective is to modify the PT so that this $\mu$-dependence goes away at the level of off-shell Green's functions, and not just at the level of the S-matrix.
Although we will not carry out any calculations in a ghost-free gauge such as $n\cdot A =0$, it simplifies understanding of the PT-RGI connection to imagine using such a gauge\footnote{As in the original Pinch Technique \cite{corn076}.} for conventional (gauge-dependent) Feynman graphs. This cannot affect any properties of the PT itself, which always deals with gauge-invariant quantities constructed from re-organizing the Feynman graphs. We begin with pure gluonic Green's functions; it is a simple matter to add closed quark loops,\footnote{And also closed ghost loops in covariant gauges. The point is that such closed loops only depend on the product $\Gamma S$, where $\Gamma$ is, for example, the quark-gluon vertex and $S$ the quark propagator; this product is RGI in the PT.} which do not change any of our conclusions for Green's functions with only gluon external legs.
An essential feature of the PT is that even for an NAGT the fundamental Ward identities are QED-like. Indeed, for the NAGT in a ghost-free gauge they are QED-like even before applying the PT re-organization. This means, as we show below, that there is only one renormalization constant in a ghost-free gauge for the PT: The propagator, three-vertex, and four-vertex renormalization constants are equal to a common value $Z$. Let us denote the unrenormalized PT propagator by $d_{U\mu\nu}(p)$, the PT three-vertex by $\Gamma_{U\mu\nu\alpha}(p_1,p_2,p_3)$, and the PT four-vertex by $\Gamma^{(4)}_{U\mu\nu\alpha\beta}(p_1,p_2,p_3,p_4)$ (suppressing group indices; often we also suppress vector indices when there can be no confusion).
For future reference, we introduce the proper self-energy of the PT propagator $d_{U\mu\nu}(p)$:
\begin{equation}
\label{ptpropform}
d^{-1}_{U\mu\nu}(p)=d^{-1}_{0\mu\nu}(p)+\Pi_{U\mu\nu}(p)
\end{equation}
where $d^{-1}_{0\mu\nu}(p)$ is the inverse free propagator in whatever gauge is chosen; it carries all the gauge dependence. The self-energy is explicitly free of all gauge dependence.
We need not indicate whether the Green's functions in the Ward identities are unrenormalized or renormalized. In the PT $\Pi_{\mu\nu}$ is independent of the gauge chosen, and in a ghost-free gauge it cannot depend on the vector $n_{\mu}$ that sets the gauge $n\cdot A =0$. Therefore it is of the form
\begin{equation}
\label{piform}
\Pi_{\mu\nu}= P_{\mu\nu}(p)\Pi (p)
\end{equation}
where
\begin{equation}
\label{proj}
P_{\alpha\beta}(p) = \delta_{\alpha\beta}-\frac{p_{\alpha}p_{\beta}}{p^2}
\end{equation}
is a projector. [In the light-cone gauge, the self-energy does depend on $n_{\mu}$ before applying the PT, and there is another conserved form that is eliminated by the PT; see \cite{corn076}.]
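The defining properties of $P_{\alpha\beta}$, transversality and idempotence, are easy to verify numerically; a minimal sketch:

```python
import numpy as np

def projector(p):
    """Transverse projector P_ab = delta_ab - p_a p_b / p^2 (Euclidean)."""
    p = np.asarray(p, dtype=float)
    return np.eye(p.size) - np.outer(p, p) / (p @ p)
```

For any momentum $p$, one finds $p_\alpha P_{\alpha\beta}=0$, $P^2=P$, and $\textrm{tr}\,P=d-1$, the count of transverse polarizations.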
Just as in QED the Ward identity implies that $Z_1=Z_2$, the present Ward identities tell us that there is just one renormalization constant $Z$, such that:
\begin{equation}
\label{renormdef}
d_U=Zd_R;\;\;\Gamma_U=\frac{\Gamma_R}{Z};\;\;\Gamma_U^{(4)}=\frac{\Gamma_R^{(4)}}{Z};\;\; g_0^2=\frac{g_R^2}{Z}.
\end{equation}
Throughout this paper we work in a Euclidean metric.
\section{PT-RGI Green's functions and Ward identities}
All that is necessary to form PT-RGI Green's functions is to divide the
(proper) PT Green's functions by $g_0^2$. The resulting Green's functions $\Delta, G$, and $G^{(4)}$ are the same whether unrenormalized or renormalized and therefore independent of the renormalization point $\mu$, as eqn.~(\ref{renormdef}) shows:
\begin{eqnarray}
\Delta_R = \Delta_U & = & g_0^2d_U = g_R^2d_R, \\ \nonumber
G_R(p_i) = G_U(p_i) & = & \frac{\Gamma_U(p_i)}{g_0^2}=\frac{\Gamma_R(p_i)}{g_R^2}, \\ \nonumber
G^{(4)}_R(p_i)= G^{(4)}_U(p_i) & = & \frac{\Gamma^{(4)}_U(p_i)}{g_0^2} = \frac{\Gamma^{(4)}_R(p_i)}{g_R^2}.
\end{eqnarray}
We sometimes write
\begin{equation}
\label{znotation}
d_U(p)=\frac{H(p)}{\tilde{Z}_U(p)},\;\;\bar{g}^2(p)=\frac{g^2}{\tilde{Z}_R(p)},\;\;
\tilde{Z}_U(p)=\frac{\tilde{Z}_R(p)}{Z}
\end{equation}
with $H_U=H_R\equiv H$ an RGI function.
In perturbation theory, $H=1/p^2$, so we {\bf define}
\begin{equation}
\label{hdefinition}
H=\frac{1}{p^2+m^2(p^2)}
\end{equation}
at all momenta.
Moreover,
\begin{equation}
\label{tildez}
\tilde{Z}(p)/g^2 =\bar{g}^{-2}(p)
\end{equation}
is RGI. Finally, we have:
\begin{equation}
\label{hruneqn}
\Delta_R(p^2)=H(p^2)\bar{g}^2(p^2).
\end{equation}
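The cancellation of the renormalization constant $Z$ in these combinations can be checked symbolically; a sketch using sympy, with eqs.~(\ref{renormdef}) and (\ref{znotation}) as input (the momentum dependence of $H$ and $\tilde{Z}_R$ plays no role here, so they are treated as plain symbols):

```python
import sympy as sp

Z, gR = sp.symbols('Z g_R', positive=True)
H, Zt_R = sp.symbols('H Zt_R', positive=True)  # H(p) and renormalized tilde-Z(p)

g0_sq = gR**2 / Z        # bare coupling, eq. (renormdef)
Zt_U = Zt_R / Z          # unrenormalized tilde-Z, eq. (znotation)
d_U = H / Zt_U           # unrenormalized PT propagator
gbar_sq = gR**2 / Zt_R   # running charge

Delta = sp.simplify(g0_sq * d_U)            # PT-RGI propagator g_0^2 d_U
assert sp.simplify(Delta - H * gbar_sq) == 0  # Z drops out: Delta = H gbar^2
```

The same cancellation operates for $G$ and $G^{(4)}$, since each carries a single factor of $1/g_0^2$ against a single factor of $1/Z$.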
From now on we drop the irrelevant subscripts $U,R$ on PT-RGI Green's functions. These relations mean that, for example, $\Delta$ is independent of $\mu$ as well as of $\Lambda_{UV}$. We show below that the ghost-free Ward identities continue to be satisfied by the PT-RGI Green's functions. And finally, the process of dividing by $g_0^2$ is unnecessary for all Green's functions with five or more legs, since the skeleton graphs for these are finite.
Now we come to the essence of constructing PT-RGI Green's functions.
The PT Ward identities are:
\begin{equation}
\label{wardiden1}
p_{\mu}\Pi_{\mu\nu}(p)=0;
\end{equation}
\begin{equation}
\label{wardiden3}
p_{1\alpha}\Gamma_{\alpha\beta\gamma}(p_1,p_2,p_3) =
d^{-1}_{\beta\gamma}(p_2)-d^{-1}_{\beta\gamma}(p_3);
\end{equation}
and
\begin{eqnarray}
\label{wardiden4}
p_{1\mu}\Gamma^{(4)abcd}_{\mu\nu\alpha\beta} & = & f^{aeb}\Gamma^{ecd}_{\nu\alpha\beta}(p_1+p_2,p_3,p_4)\\ \nonumber
~ & + & f^{aed}\Gamma^{edb}_{\nu\alpha\beta}(p_1+p_3,p_4,p_2)\\ \nonumber
~ & + & f^{aec}\Gamma^{ebd}_{\alpha\nu\beta}(p_1+p_4,p_2,p_3).
\end{eqnarray}
Note that the Ward identities are of the same form in terms of PT-RGI Green's functions, for example:
\begin{equation}
\label{genwi}
p_{1\alpha}G_{\alpha\beta\gamma}(p_1,p_2,p_3)=P_{\beta\gamma}(p_2)\Delta^{-1}(p_2)
-P_{\beta\gamma}(p_3)\Delta^{-1}(p_3).
\end{equation}
Certain quantities require special treatment, notably seagull graphs. These are indeed formally PT-RGI as one can easily check, but still naively divergent, and there is no mass term in the Lagrangian to absorb this divergence. But it has been argued \cite{papag} that, due to an equation given in \cite{corn076}, there is a cancellation between the seagull and other graphs that removes this divergence.
\section{Schwinger-Dyson equations}
The SDEs of NAGTs are complex in any formulation, and have additional features---both complexities and simplifications---in the PT. One key point for the PT is that it has ghost-free Ward identities allowing precise construction of the two-point function (propagator) from the three-point vertex.\footnote{In contrast to the gauge technique \cite{cornbinpap}, which attempts to approximate the three-vertex from the propagator, and which unlike the PT-RGI approach is not exact in the UV.} (Similarly, the three-vertex can be constructed from the four-vertex, but we will not mention the four-vertex further in this initial investigation.) Of course, the three-vertex depends on the propagator, so this sounds like a vicious circle. We suggest here some methods for breaking into this circle.
The skeleton-graph expansion equivalent to the SDEs will be expressed in terms of fully-dressed vertices, and then the Ward identity will produce a propagator SDE with only fully-dressed vertices. This is the easiest way to see full Bose symmetry, etc., and it also avoids difficult issues of multiplying things by infinite $Z$s, instead of subtracting, as in eqn.~(\ref{approxvert}) below. Such an expansion with only dressed vertices and propagators differs in structure from, but is equivalent to, the more conventional SDE treatment in which some vertices are bare.
A key point of this paper is the suggestion that a good approximation to the fully-dressed one-loop SDEs is to use free vertices and free massive propagators in the loop and then apply the PT as in \cite{corn099}. Some minor ``by hand" adjustments to seagulls yield an output which satisfies all the requirements for a one-loop PT-RGI three-vertex and, through the Ward identity, a one-loop PT-RGI gluon propagator, both of which are well-behaved in the IR.
\subsection{The SDE in the UV}
It turns out that the Ward identities allow a precise characterization of the UV behavior of the full non-linear SDEs; to no one's surprise, this is exactly the same as in perturbation theory, because the gauge theory is asymptotically-free.
Write the vertex as
\begin{equation}
\label{vertequ}
G_{\alpha\beta\gamma}(p_i)=G^0_{\alpha\beta\gamma}G(p_i)+\dots
\end{equation}
where
\begin{equation}
\label{bornvert}
G^0_{\alpha\beta\gamma}(p_i)=(p_1-p_2)_{\gamma}\delta_{\alpha\beta}+(p_2-p_3)_{\alpha}\delta_{\beta\gamma}
+(p_3-p_1)_{\beta}\delta_{\alpha\gamma}
\end{equation}
is the Born term.
For the propagator, write
\begin{equation}
\label{zeromom}
\Delta_{\beta\gamma}(q) = \Delta (q)P_{\beta\gamma}(q)+ \mathrm{gauge-fixing \;term}
\end{equation}
where $q$ is one of the $p_i$.
If we save only the coefficient $G$ of the Born term in the vertex, the Ward identity has two terms, each of which says:
\begin{equation}
\label{uvwi}
G(p_i)\Delta (q)=\frac{1}{q^2}.
\end{equation}
Clearly, this cannot be an equation for all $p_i$, but in the UV region $p_i\sim q$ with $q$ large, it is true to leading order of logarithms; non-leading orders are compensated in the terms we omitted in the vertex.\footnote{Note that omitted terms are necessarily non-leading and cannot contribute to UV divergences; if they did, there would have to be a corresponding counterterm in the Lagrangian.} We do not have space here to show a crucial result, which is however elementary for $\phi^3_6$: Using a ghost-free gauge for the PT, all one-loop skeleton graphs for the vertex depend only on the product $G\Delta$. But this in the UV is the same, according to eqn.~(\ref{uvwi}), as if all vertices and propagators are free.\footnote{Since the one-loop skeleton graphs contain four-vertices, it is necessary to show that this result holds in the presence of such four-vertices, which we have done.}
\subsection{The SDE in the IR}
What happens beyond the UV and into the IR, where eqn.~(\ref{uvwi}) needs correction? This is a hard problem, and we have only some suggestive remarks. These are based on extending the calculation in \cite{corn099} of the perturbative one-loop three-vertex (see Fig.~\ref{3gv}) to be phenomenologically useful in the IR.
\begin{figure}
\begin{center}
\includegraphics[width=5in]{trent-2.pdf}
\caption{\label{3gv} The PT graphs for the one-loop three-vertex are extracted from an S-matrix element for on-shell quarks through pinching out internal quark lines with Ward identities, as explained in \cite{cornbinpap}.}
\end{center}
\end{figure}
We are interested in these graphs as skeleton graphs, so all vertices and lines are to be thought of as dressed.
The simplest possible IR stabilization of an asymptotically-free theory is to change free massless propagators into massive ones in the PT calculation of the one-loop three-vertex of \cite{corn099}. This calculation is effectively in the background-field Feynman gauge, so we use as the propagator
\begin{equation}
\label{proppart}
d_{\alpha\beta}(p)=P_{\alpha\beta}(p) d(p)+\frac{p_{\alpha}p_{\beta}}{p^4}
\end{equation}
and
\begin{equation}
\label{invproppart}
d^{-1}_{\alpha\beta}(p)=P_{\alpha\beta}(p)d^{-1}(p)+p_{\alpha}p_{\beta}
\end{equation}
with $d(p)=1/(p^2+m^2)$. It is important for what follows that $m^2$ is constant in momentum space. The corresponding vertex is the free vertex in the background-field Feynman gauge, which is not Bose-symmetric on all three gluon lines, but obeys a simple Ward identity on one of the lines. We always choose this special line to be one of the gluon lines attached to quark lines in Fig.~\ref{3gv}. The Ward identity is:
\begin{equation}
\label{usuwi}
p_{1\alpha}\Gamma^F_{\alpha\mu\nu}(p_1,p_2,p_3)=\delta_{\mu\nu}[(p_3^2+m^2)-(p_2^2+m^2)]=
\Delta^{-1}_{\mu\nu}(p_3)-\Delta^{-1}_{\mu\nu}(p_2).
\end{equation}
The crucial point is that if $m^2$ is constant in momentum space, the mass terms in the inverse propagators cancel in this Ward identity. In consequence, since the PT is based on repeatedly applying such Ward identities, one can choose all numerators in the graphs of Fig.~\ref{3gv} to be the same as in the massless case.
We are not trying to calculate $m^2$ self-consistently, which would involve a careful treatment of seagulls, so we take the liberty of defining seagull terms, with numerators $\sim m^2$, as necessary for satisfying Ward identities. Furthermore, for the same reason,
at the end of the calculation we must add a term as prescribed in \cite{corn090} to account for the massless poles in the propagator side of the Ward identity. This term cannot contribute to S-matrix elements, since it annihilates conserved currents, and is of the form:
\begin{eqnarray}
\label{v2def}
V_{\alpha\beta\gamma}(p_1,k_3,-k_2) & = & \frac{m^2}{2}[-\frac{p_{1\alpha}k_{3\beta}(p_1-k_3)_{\lambda}P_{\lambda\gamma}(k_2)}{p_1^2k_3^2} +\\ \nonumber
& & \frac{k_{2\gamma}k_{3\beta}(k_3+k_2)_{\lambda}P_{\lambda\alpha}(p_1)}{k_2^2k_3^2}-
\frac{p_{1\alpha}k_{2\gamma}(p_1+k_2)_{\lambda}P_{\lambda\beta}(k_3)}{p_1^2k_2^2}]
\end{eqnarray}
This addition corrects the vertex side of the Ward identity to have the massless poles of the inverse propagators on the right-hand side. Of course, none of these tweaks affect the UV properties.
This one-loop approach is guaranteed to be self-consistent in the UV, for reasons already given. But it gives useful results in the IR too. In fact, it reproduces an earlier suggestion \cite{corn138} for the PT-RGI propagator:
\begin{eqnarray}
\label{fullprop}
\Delta^{-1}_{\alpha\nu}(p) & = & P_{\alpha\nu}(p)\,\frac{p^2}{g_0^2} \\ \nonumber
~ & - & \frac{11NP_{\alpha\nu}(p)}{48\pi^4}\int\!\mathrm{d}^4k\,\frac{1}{(k^2+m^2)((k+p)^2+m^2)}\Big[p^2-\frac{m^2}{11}\Big] + P_{\alpha\nu}(p)M^2
\end{eqnarray}
where $M^2$ is chosen so that $\Delta$ has a pole at $p^2=-m^2$.
This form is also essentially the same as given in \cite{corn076,corn090}. As discussed in \cite{corn076,corn138}, a good approximation for spacelike momentum is
\begin{equation}
\label{delta82}
\Delta(p)\equiv g^2(\mu )d(p) = \frac{1}{(p^2+m^2)b\ln [\frac{p^2+4m^2}{\Lambda^2}]}.
\end{equation}
From this we suggest that
\begin{equation}
\label{runningch}
\frac{1}{\bar{g}^2(q)}=b\ln [\frac{q^2+4m^2}{\Lambda^2}]
\end{equation}
is a decent approximation; a better form usable for both spacelike and timelike momentum is given in \cite{corn138}.
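As a numerical illustration (not part of the original analysis, and with placeholder parameter values), eqns.~(\ref{delta82}) and (\ref{runningch}) are straightforward to evaluate; the one-loop coefficient is taken as $b=11N/(48\pi^2)$ for a pure-glue $SU(N)$ theory:

```python
import math

# Illustrative parameters (placeholders, not fitted values): pure-glue SU(N)
# one-loop coefficient b = 11N/(48 pi^2), a gluon mass m, and Lambda.
N = 3
b = 11 * N / (48 * math.pi ** 2)
m2 = 0.36    # m^2 in GeV^2 (m ~ 600 MeV, a typical phenomenological value)
Lam2 = 0.09  # Lambda^2 in GeV^2 (Lambda ~ 300 MeV)

def gbar2(q2):
    """Running charge of eqn. (runningch): 1/gbar^2(q) = b ln[(q^2 + 4m^2)/Lambda^2]."""
    return 1.0 / (b * math.log((q2 + 4.0 * m2) / Lam2))

def Delta(p2):
    """PT-RGI propagator of eqn. (delta82): Delta(p) = gbar^2(p)/(p^2 + m^2)."""
    return gbar2(p2) / (p2 + m2)

# The charge freezes in the IR and falls logarithmically in the UV:
print(gbar2(0.0), gbar2(100.0))
```

Note that the IR-frozen value $\bar{g}^2(0)=1/(b\ln[4m^2/\Lambda^2])$ is sensible only for $2m>\Lambda$, which the placeholder values respect.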
Next we study a version of $\phi^3_6$ that shows very similar results.
\section{\label{phisec} The main issues for the three-vertex SDE, illustrated in $\phi^3_6$}
It is far too complicated to go very far with the SDEs for an NAGT, so we will instead illustrate the general features in a modified version of asymptotically-free $\phi^3_6$.
Fig.~\ref{sde} shows the only one-loop skeleton graph for $\phi^3_6$.
\begin{figure}
\begin{center}
\includegraphics[width=4in]{trent-1.pdf}
\caption{\label{sde} The one-loop skeleton graph for $\phi^3_6$.}
\end{center}
\end{figure}
Although there is no Ward identity as such in $\phi^3_6$, it is easy to construct one ``by hand", through giving two of the $\phi$ particles an Abelian charge. We arrange coefficients so that the Ward identity for the charged particles is analogous to eqn.~(\ref{genwi}):
\begin{equation}
\label{phi3wi}
p_{1\alpha}G_{\alpha}(p_1,p_2,p_3)=\Delta^{-1}(p_2)-\Delta^{-1}(p_3).
\end{equation}
Analogously to eqn.~(\ref{vertequ}) we define a scalar form factor $G(p_i)$ through:
\begin{equation}
\label{vertequ2}
G_{\alpha}(p_i)=(p_2+p_3)_{\alpha}G(p_i)+\dots
\end{equation}
and equate this with the three-vertex of Fig.~\ref{sde}.
Then in the UV, where $p_i^2$ scales like a common large momentum $p^2$, it should be that $G\Delta\rightarrow 1/p^2$. We assume it is a decent approximation, therefore, to include IR effects by the simple rule
\begin{equation}
\label{simplerule}
G(p_i)\Delta (p) \rightarrow \frac{1}{p^2+m^2}.
\end{equation}
In effect, the vertex and propagator loop corrections cancel each other. Then we assert that it is a good approximation, exact in the UV, to calculate the skeleton graph of Fig.~\ref{sde} by using free vertices and the free massive propagator of eqn.~(\ref{simplerule}).
The result is:
\begin{equation}
\label{approxvert}
G(p_i)= \frac{1}{g_0^2}-b\int\![\mathrm{d}z]\ln [\frac{\Lambda_{UV}^2}{D+m^2}]
\end{equation}
where, as in the gauge theory, at one loop the bare coupling is
\begin{equation}
\label{barecoupl}
\frac{1}{g_0^2}=b\ln (\frac{\Lambda_{UV}^2}{\Lambda^2})
\end{equation}
and
\begin{eqnarray}
\label{zint}
\int\![\mathrm{d}z] & = & 2\int_0^1\!\mathrm{d}z_1\,\int_0^1\!\mathrm{d}z_2\,\int_0^1\!\mathrm{d}z_3
\,\delta (1-\sum z_i),\\
D & = & p_1^2\,z_2z_3+p_2^2\,z_3z_1+p_3^2\,z_1z_2 .
\end{eqnarray}
(The Feynman parameter $z_i$ goes with the line labeled $k_i$.)
Clearly when any momentum is large, say $p_1^2\approx p^2 \gg m^2$, $G$ behaves like
\begin{equation}
\label{approxvert2}
G(p_i)= b\int\![\mathrm{d}z]\ln [\frac{D+m^2}{\Lambda^2}]\approx b\ln p^2.
\end{equation}
In the same UV limit we expect $\Delta \rightarrow 1/(bp^2\ln p^2)$ ({\em e.g.}, \cite{corn076}). So the UV behavior, for large $k_i$, is self-consistent, since $G\Delta \approx 1/p^2$ and the integral corresponding to Fig.~\ref{sde} behaves exactly like our hypothesis.
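These statements are easy to check numerically. The following sketch (illustrative only; $b$, $m^2$, and $\Lambda^2$ are placeholder values) integrates the UV-subtracted form $G(p_i)=b\int[\mathrm{d}z]\ln[(D+m^2)/\Lambda^2]$, obtained by combining eqns.~(\ref{approxvert}) and (\ref{barecoupl}), over the Feynman-parameter simplex, and confirms the $b\ln p^2$ growth at a symmetric UV point:

```python
import math

def G_vertex(p2s, m2, lam2, b, n=300):
    """Evaluate G = b * int[dz] ln[(D + m^2)/Lambda^2] with
    [dz] = 2 dz1 dz2 dz3 delta(1 - z1 - z2 - z3) and
    D = p1^2 z2 z3 + p2^2 z3 z1 + p3^2 z1 z2, by a midpoint grid:
    z3 is eliminated with the delta function and z2 is mapped to (0, 1 - z1)."""
    p12, p22, p32 = p2s
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        z1 = (i + 0.5) * h
        w1 = 1.0 - z1          # length of the z2 interval
        for j in range(n):
            z2 = (j + 0.5) * h * w1
            z3 = 1.0 - z1 - z2
            D = p12 * z2 * z3 + p22 * z3 * z1 + p32 * z1 * z2
            total += math.log((D + m2) / lam2) * h * h * w1
    return 2.0 * b * total

# At symmetric points p_i^2 = p^2 >> m^2, G grows like b ln(p^2):
b, m2, lam2 = 1.0, 0.36, 0.09
diff = G_vertex((1e4, 1e4, 1e4), m2, lam2, b) - G_vertex((1e2, 1e2, 1e2), m2, lam2, b)
print(diff)   # close to b*ln(100) for this choice of momenta
```

The constant $\int[\mathrm{d}z]\ln(z_1z_2+z_2z_3+z_3z_1)$ piece cancels in the difference, which is why the logarithmic growth is seen cleanly.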
In an NAGT a similar thing happens for $\Gamma^{(4)}$, as one sees by inspecting graphs and using PT-RGI Ward identities based on eqn.~(\ref{wardiden4}).
For example, the one-loop skeleton graph with two $G^{(4)}$ depends only on the product $G^{(4)}\Delta \sim 1/p^2$, so as with the three-vertex we get an output vertex $\sim \ln p^2$ by inputting {\bf free} $G^{(4)},\Delta$.
\section{Running charge from the three-vertex: A low-energy Ward identity}
The main point of this paper is the derivation of a ``low-energy" theorem for PT-RGI Green's functions that yields the running charge $\bar{g}^2(q)$, at all momenta, as the value of the scalar coefficient $G(0,q,-q)$ associated with Born-term kinematics in the three-vertex, evaluated when one momentum is zero. The general Green's function $G(p_1,p_2,p_3)$ is in some sense the extension of the running charge to a gauge-invariant, scheme-independent, process-independent, renormalization-point independent vertex that depends on three momenta.
The running charge is related to the propagator via eqns.~(\ref{hdefinition},\ref{hruneqn}), or:
\begin{equation}
\label{zeromom2}
\Delta_{\beta\gamma}^{-1}(q) = \frac{(q^2+m^2)}{\bar{g}^2(q)}P_{\beta\gamma}(q)+ \mathrm{gauge-fixing \;term}
\end{equation}
which has a massless longitudinal pole with residue $\sim m^2$. (We need not indicate the momentum dependence of the mass.)
Of the many different kinematic structures in the three-vertex, we are interested only in the part having the Born structure. Terms not having this structure include the longitudinal poles, so we can ignore the longitudinal $m^2$ terms in eqn.~(\ref{zeromom2}).
Saving only the kinematical structure of eqn.~(\ref{bornvert}), the linear terms both on the left and right of the Ward identity have the kinematics
\begin{equation}
\label{linterms}
p_{\alpha}[2q_{\alpha}\delta_{\beta\gamma}-\delta_{\alpha\beta}q_{\gamma}-\delta_{\alpha\gamma}q_{\beta}].
\end{equation}
Equating coefficients of the linear terms in the Ward identity at $p=0$ yields:
\begin{equation}
\label{lethm}
G(0,q,-q)=\bar{g}^{-2}(q).
\end{equation}
If eqn.~(\ref{approxvert2}) holds for $G$, then
\begin{equation}
\label{approxrunch}
\bar{g}^{-2}(q)=b\int_0^1\!\mathrm{d}z\,2(1-z)\ln [\frac{q^2z(1-z)+m^2}{\Lambda^2}]
\end{equation}
which has the correct UV behavior, a threshold at $-q^2=4m^2$, and a somewhat larger value of $\bar{g}^2(0)$ at the same value of the mass $m$ than that given by the supposition of eqn.~(\ref{runningch}). Such a discrepancy is only to be expected, given our approximations in the IR. Probably the most accurate IR approximation to date is that of \cite{corn138}, based on eqn.~(\ref{fullprop}) and other considerations that we cannot describe here.
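The integral in eqn.~(\ref{approxrunch}) is one-dimensional and easy to check numerically (an illustrative sketch; the parameter values are placeholders with $m=2\Lambda$). At $q=0$ it collapses to $b\ln(m^2/\Lambda^2)$, since $\int_0^1 2(1-z)\,\mathrm{d}z=1$; this lies below the $b\ln(4m^2/\Lambda^2)$ of eqn.~(\ref{runningch}), which is the larger $\bar{g}^2(0)$ just noted:

```python
import math

def inv_gbar2(q2, m2, lam2, b, n=2000):
    """Midpoint-rule evaluation of eqn. (approxrunch):
    gbar^{-2}(q) = b int_0^1 dz 2(1-z) ln[(q^2 z(1-z) + m^2)/Lambda^2]."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        s += 2.0 * (1.0 - z) * math.log((q2 * z * (1.0 - z) + m2) / lam2)
    return b * s * h

b, m2, lam2 = 0.07, 0.36, 0.09   # placeholder values with m = 2*Lambda
print(inv_gbar2(0.0, m2, lam2, b))      # collapses to b ln(m^2/Lambda^2)
print(b * math.log(4.0 * m2 / lam2))    # the eqn (runningch) value at q = 0
```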
The ``standard" RGI physical Green's function $\tilde{G}(p_1,p_2,p_3)$ of Eq.~(42) of \cite{corn138} is just:
\begin{equation}
\label{tildevert}
\tilde{G}(p_1,p_2,p_3)= \frac{g\Gamma (p_1,p_2,p_3)}{[\tilde{Z}(p_1)\tilde{Z}(p_2)\tilde{Z}(p_3)]^{1/2}}=G(p_1,p_2,p_3)\bar{g}(p_1)\bar{g}(p_2)\bar{g}(p_3).
\end{equation}
In view of (\ref{lethm}) we have
\begin{equation}
\label{vertzero2}
\tilde{G}_{\alpha\beta\gamma}(0,q,-q)\approx G^0_{\alpha\beta\gamma}(0,q,-q)\bar{g}(0)+ \dots
\end{equation}
where the omitted terms have massless longitudinal poles, but do not contribute to the S-matrix.
[One must be careful to analyze the massless poles in the vertex and propagator before blindly using the Ward identity, because it is perfectly possible for there to be a pole in the vertex whose existence is unsuspected from the Ward identity alone. For example, consider an Abelian vertex:
\begin{equation}
\label{poleex}
\Gamma_{\alpha}(p,q)=\frac{q_{\alpha}p\cdot q}{q^2}+\dots
\end{equation}
Then $q\cdot \Gamma =p\cdot q+\dots$ could as well have come from a term $\Gamma_{\alpha}=p_{\alpha}+\dots$ with no pole. The difference of these two possible vertices is $p_{\beta}P_{\alpha\beta}(q)$.]
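This caveat can be made concrete in a few lines of code (purely illustrative; the momenta are arbitrary Euclidean test values): both candidate vertices satisfy the same Ward identity $q\cdot\Gamma=p\cdot q$, and their difference is the transverse combination $p_{\beta}P_{\alpha\beta}(q)$, which the Ward identity cannot see:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vertex_with_pole(p, q):
    """Gamma_alpha = q_alpha (p.q)/q^2, as in eqn. (poleex)."""
    c = dot(p, q) / dot(q, q)
    return [c * qa for qa in q]

def vertex_no_pole(p, q):
    """Gamma_alpha = p_alpha: no pole, same divergence."""
    return list(p)

# Arbitrary Euclidean 4-vectors:
p = [0.3, -1.2, 0.7, 2.0]
q = [1.1, 0.4, -0.9, 0.5]
g1, g2 = vertex_with_pole(p, q), vertex_no_pole(p, q)
diff = [a - c for a, c in zip(g2, g1)]

# Both contractions equal p.q, and the difference is transverse to q:
print(dot(q, g1), dot(q, g2), dot(q, diff))
```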
\section{The PT-RGI three-vertex and the beta-function}
Long ago \cite{corn099} there were speculations on deriving the beta-function from the PT three-vertex. We can now be more precise. From eqn.~(\ref{lethm}) and the usual beta-function equation for the running charge we find
\begin{equation}
\label{betaeqn}
\beta (\bar{g})=-\frac{\bar{g}^3}{2}\frac{q\partial}{\partial q}G(0,q,-q)
\end{equation}
where on the left-hand side we could express $\bar{g}$ in terms of the vertex via $\bar{g}^{-2}(q)=G(0,q,-q)$, but this is unnecessary. If we use the speculation of eqn.~(\ref{approxrunch}) then the one-dressed-loop beta-function is
\begin{equation}
\label{specbeta}
\beta (\bar{g})=-b\bar{g}^3\int_0^1\!\mathrm{d}z\,2(1-z)\frac{q^2z(1-z)}{q^2z(1-z)+m^2}
\end{equation}
from which we are to eliminate $q^2$ in favor of $\bar{g}^2$ with the help of eqn.~(\ref{approxrunch}). In perturbation theory ($m^2=0$) all of this trivially yields $\beta (g)=-bg^3$. When $m^2\neq 0$ the elimination of $q^2$ in favor of $\bar{g}^2$ is not possible analytically, but the approximate running charge of eqn.~(\ref{runningch}) easily yields, with this procedure,
\begin{equation}
\label{massbeta}
\beta (g) = -bg^3[1-\frac{4m^2}{\Lambda^2}e^{-1/bg^2}],
\end{equation}
showing the expected non-perturbative behavior coming from $m^2$.
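A numerical sketch of eqn.~(\ref{specbeta}) (illustrative placeholder values, not a self-consistent calculation): in the $m^2=0$ limit the $z$-integral is exactly 1, recovering $\beta(g)=-bg^3$, while $m^2>0$ suppresses the running, consistent with eqn.~(\ref{massbeta}):

```python
import math

def beta_dressed(q2, gbar2, m2, b, n=2000):
    """Midpoint-rule evaluation of eqn. (specbeta):
    beta(gbar) = -b gbar^3 int_0^1 dz 2(1-z) q^2 z(1-z) / (q^2 z(1-z) + m^2)."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        x = q2 * z * (1.0 - z)
        s += 2.0 * (1.0 - z) * x / (x + m2)
    return -b * gbar2 ** 1.5 * s * h

b = 0.07   # placeholder one-loop coefficient
print(beta_dressed(25.0, 2.0, 0.0, b))    # perturbative limit: exactly -b g^3
print(beta_dressed(25.0, 2.0, 0.36, b))   # the mass term reduces |beta|
```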
In general, a non-perturbative term such as the mass in the approximate running charge of eqn.~(\ref{runningch}), or the $\exp [-1/(bg^2)]$ in the beta-function, yields inverse powers of $q^2$ (modulo logarithms) in the UV asymptotics. If the mass does not run, possibly corresponding to a bare mass term, this condensate could be interpreted as an $\langle A^2\rangle$ condensate, but this is not what we have in mind. In conventional QCD the one-loop operator-product expansion of the PT propagator gives \cite{lavelle} a mass running in the UV as $m^2(q)\rightarrow C\langle G_{\mu\nu}^2\rangle/q^2$ where $C$ is a positive constant, given in \cite{lavelle}. This, along with eqn.~(\ref{runningch}), yields a condensate term in the UV expansion of the form
\begin{equation}
\label{condterm}
\bar{g}^{-2}(q)\rightarrow b\ln (\frac{q^2}{\Lambda^2})+\frac{4bC\langle G_{\mu\nu}^2\rangle}{q^4}+\dots
\end{equation}
The fate of higher-order terms is not known, since this behavior is based on a one-dressed-loop result. Earlier it was suggested \cite{corn112} that condensate terms were closely associated with the taming of the factorial divergences coming from IR renormalons; of course, a mass automatically does this.
\newpage
\section{Introduction}
A blind search has been made for cosmic ray sources of neutral hadrons yielding a peak just above the knee. We report support for such a possibility, i.e., a peak at $5.86\pm 0.75 PeV,$ using data from an international collaboration (Tunka).\citep{Tunka,Tunka1,tunka1} This search was motivated by a 1999 claim by this author of such a peak at $4.5\pm 2.2 PeV,$\citep{Ehrlich2} and also some recent results by at least three experiments showing an $E\approx 5.6 PeV$ peak in the all-particle cosmic ray spectrum.\citep{rio_conf} Moreover, its existence had been suggested before the first 1999 claim, based on a model\citep{Ehrlich1} that fit the cosmic ray spectrum, assuming the knee is the threshold for proton beta decay -- a process that becomes energetically allowed if the electron neutrino were a tachyon of mass $m^2\approx -0.25eV^2,$ based on an idea proposed by Chodos, Hauser, and Kostelecky.\citep{Chodos} As noted in ref.~\citep{Ehrlich2} the threshold in PeV for proton decay can be written as $E_{th}\approx 1.7/\sqrt{-m_{\nu}^2},$ with $m_{\nu}^2$ in $eV^2.$
Furthermore, apart from the peak itself, it is noteworthy that the 1999 cosmic ray spectrum fit (Fig. 1 of ref.~\citep{Ehrlich1}) showed significant oscillations in the region above the knee that match those in a recent report of fine structure in this energy region (see Fig. 1 of ref.~\citep{rio_conf}). Regrettably, that 1999 model did err for the region $E > 10^{20} eV$ in failing to include a GZK cut-off.\citep{GZK} That flaw is not serious, however, because the inclusion of a GZK cut-off was not an essential feature of the model. In fact, had one merely assumed a greater distance for the extragalactic sources, that would have accommodated a GZK cut-off without affecting the fit for $E < 10^{20} eV.$ The one essential feature of the 1999 model was that cosmic ray protons began to decay when $E>E_{knee},$ resulting in a decay chain: $p\rightarrow n \rightarrow p\rightarrow n \rightarrow \cdots$ Such a decay chain would continue until the baryon's energy drops below $E_{knee},$ thereby giving rise to the knee, and resulting in a pile-up of neutrons just above it, i.e., a peak at $E = 4.5 \pm 2.2 PeV.$\citep{Ehrlich1} Neutrons, being uncharged, mostly point back to their sources, unlike protons whose directions are randomized by the galactic magnetic field. Thus, if the baryon in the decay chain spends most of its time as a neutron, most of its directional information should be preserved en route to Earth. Moreover, the hypothesized decay chain could allow PeV neutrons to reach us from sources, such as Cygnus X-3, normally considered too distant given the neutron lifetime.
The first claim\citep{Ehrlich2} for a $4.5 PeV$ peak was based on Lloyd-Evans data for Cygnus X-3.\citep{Lloyd-Evans} Since Cygnus X-3 is a binary having an orbital period of 4.79 h, for certain rotation phases jets emanating along the rotation axis of one member of the binary might point towards Earth and increase the observed signal, which is what the Lloyd-Evans data seemed to show. In fact, the reported signal was seen by selecting events in a particular $2.5\%$ wide phase window, and using as background the remaining $97.5\%$. If the signal were real, such a phase window cut would reduce background by a factor of as much as 40, greatly enhancing the signal. Using the Lloyd-Evans data Ehrlich showed that the two bins straddling $5 PeV$ had $28.4\pm 4.7$ excess events ($6.0\sigma$).\citep{Ehrlich2} Apart from skepticism of this claim, there is also much skepticism about the existence of Cygnus X-3 as a source of PeV cosmic rays. However, the basis of that skepticism may be poorly justified, especially if Cygnus X-3 is an episodic source, and if a weak $E=4.5 PeV$ signal needs cuts to suppress background, as discussed in more detail in Appendix I.
\section{The Tunka experiment}
Recent support for a cosmic ray peak just above the spectrum knee can be found in data reported by the Tunka collaboration, even though the authors characterize their observation instead as an example of a ``remarkable fine structure" seen above the knee in the all-particle cosmic ray spectrum.\citep{rio_conf} Nevertheless, Fig. 1 of their paper does show an unambiguous peak at $E\approx 5.6 PeV$ in the combined Tunka-25 and Tunka-133 data. One might well be suspicious of any peak that occurs just at an energy where the spectra from the two data sets join. On the other hand, it is quite significant that Fig. 1 of their paper shows data from three other experiments that exhibit the same peak as Tunka (KASCADE Grande, Ice Top, and Tibet).\citep{Kascade,Ice_top,Tibet1} The Tunka authors interpret the fine structure above the knee as being quite consistent with a combined source model (galactic SN remnants plus extragalactic source(s)), with a suitable choice of free parameters. However, the authors do not consider an alternative hypothesis that can also account for the observed ``remarkable fine structure" (including the $E\approx 5.6 PeV$ peak) -- in particular they do not mention the peak previously predicted and then observed at $4.5 \pm 2.2 PeV.$ In what follows, we present an independent analysis of the Tunka-133 data, which seeks corroborating evidence for such a peak by attempting to find possible cosmic ray sources in a blind all-sky survey. We have no knowledge of the position of Dr. Kuzmichev, head of Tunka, or other members of the collaboration on the results presented here.
\subsection{Tunka: history, efficiency, and exposure}
Tunka began in the 1990's; it observes the extensive air showers produced by cosmic rays. Originally Tunka operated using 25 Cherenkov counters, but the newer Tunka-133 array used 133 Cherenkov counters covering an area of $1 km^2.$ The Tunka-133 data analyzed here was collected during three successive winter seasons from 2009 to 2012 during clear moonless nights. It consists of approximately $1.8\times10^6$ events with zenith angle less than $45^0$ and energies $E>1PeV$ measured to a precision of $15\%.$\citep{Tunka} The efficiency of the detectors as a function of zenith angle is close to $100\%$ up to $30^0$ and falls to $50\%$ at zenith angles near $45^0$. As a function of energy, the efficiency for $E > 6 PeV$ is $100\%$ and the threshold is $1 PeV,$ so at $E=4.5 PeV$ it is above $50\%.$ Tunka-133 also has good exposure in the Northern Hemisphere, with a field of view of $\pi$ steradians.\citep{Exposure}
\subsection{Data analysis}
The only cut made on the Tunka-133 data set was that the zenith angle should be less than $45^0,$ above which the acceptance drops below $50\%.$ An energy histogram of the whole data set using energy intervals of width $\Delta Log_{10} E =0.1$ shows about 280,000 events in the peak energy bin straddling $2.8 PeV.$ In order to search for evidence for a peak near $E \approx 5.6 PeV$ we do not focus on any specific possible source such as Cygnus X-3, but instead first identify a large number of ``candidate sources," defined arbitrarily as small non-overlapping circular regions of the sky having $S > 3.3\sigma$ excess counts above background in each energy bin, and then examine the energy distribution of these candidate sources in five energy bins.
The statistical significance S is found in terms of the on-source counts $n_{on}$ and the background counts $n_{Bkd}$ using:
\begin{equation}
S=\frac{n_{on}- n_{Bkd}}{\sqrt{n_{Bkd}}}
\end{equation}
In Eq. 1, the background count is found using the shuffling method,\citep{Cassiday} which involves shuffling the arrival times of events having very close altitude and azimuth coordinates to generate a set of artificial events. In the absence of any real sources, this method allows one to calculate an accurate background in celestial coordinates (right ascension and declination). The systematic error using the shuffling method is less than 0.0008 events for $0.2^0 \times 0.2^0$ bins, based on 10,000 cycles of shuffling. Thus, for the largest radius search window used ($4.5^0$) one would expect a systematic error of less than 1.3 counts, which is negligible compared to the statistical fluctuations.
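Eq. 1 and the shuffling method can be sketched in a few lines (illustrative only; the `in_window` test below is a hypothetical stand-in for the real conversion of arrival time and local coordinates to a celestial search window):

```python
import math
import random

def significance(n_on, n_bkd):
    """Eq. 1: the excess over background in units of the background's Poisson fluctuation."""
    return (n_on - n_bkd) / math.sqrt(n_bkd)

def shuffled_background(times, local_coords, in_window, n_cycles=200, seed=0):
    """Toy sketch of the shuffling method: arrival times are reassigned at random
    among events while their local (altitude/azimuth) coordinates stay fixed;
    the mean on-source count of the artificial events estimates the background.
    in_window(t, coord) is a hypothetical placeholder for mapping (time, local
    coordinates) to a celestial search window."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_cycles):
        shuffled = times[:]
        rng.shuffle(shuffled)
        total += sum(1 for t, c in zip(shuffled, local_coords) if in_window(t, c))
    return total / n_cycles

print(significance(130, 100))   # a 3-sigma excess
```

Because the artificial events keep the true local-coordinate distribution while decoupling it from arrival time, the shuffled sets reproduce the detector acceptance without any real source contribution.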
The search procedure is to scan the sky in right ascension $\alpha$ and declination $\delta,$ and calculate $S$ based on the number of events above background in each small non-overlapping circular region. The procedure is then repeated many times, each time shifting the search pattern of circles by $1^0$ in $\alpha$ or $\delta$. Note that in the model in which the $4.5 \pm 2.2 PeV$ peak was predicted, there was no mention of the angular spread of neutrons arriving from cosmic ray sources, because that would depend on proton decay dynamics and assumptions on the distribution of source distances. Therefore, when doing a blind search it is reasonable to use multiple search radii, and the following three are used: $r=1.5^0,$ $3.0^0,$ and $4.5^0.$ No other search radii were tried. This choice of the three radii was based on the following considerations: (a) the smallest window used is large enough compared to the angular resolution\citep{tunka1} to give meaningful results, (b) the largest one equals the largest window for which signals had earlier been claimed for Cygnus X-3. Finally, (c) one intermediate choice between them was considered sufficiently different from the others that one might expect to find many possible candidate sources not picked up by the other two.
Let us define $n(r, S,E)$ as the number of times we find an excess above background at a significance level S using a search radius $r$ for cosmic rays in the energy bin centered on energy $E$ (in PeV), and we also define:
\begin{equation}
N(S,E)=n(1.5^0, S,E) +n(3.0^0, S,E) +n(4.5^0, S,E)
\end{equation}
as the number of times we find an excess above background at a significance level S using any of the three search radii. We only use cases, however, where each $n(r, S, E)>50,$ so as to avoid spurious large values of S resulting from very small numbers, and to avoid a breakdown in the Gaussian $\sqrt{n}$ approximation of errors.
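As a toy null-hypothesis check (illustrative; the 40,000-window count is hypothetical), one can draw standard-normal $S$ values and count excursions beyond $\pm 3.3$; roughly $5\times 10^{-4}$ of the windows land on each side, which sets the scale of the chance expectation for candidate sources and sinks:

```python
import random

def count_candidates(S_values, threshold=3.3):
    """Count candidate 'sources' (S > +threshold) and 'sinks' (S < -threshold)."""
    sources = sum(1 for s in S_values if s > threshold)
    sinks = sum(1 for s in S_values if s < -threshold)
    return sources, sinks

# Under the null hypothesis S is approximately standard normal, so a
# hypothetical 40,000 independent windows should yield about 19 counts
# beyond 3.3 sigma on each side:
rng = random.Random(1)
S_null = [rng.gauss(0.0, 1.0) for _ in range(40000)]
print(count_candidates(S_null))
```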
\subsection{Results}
We have examined evidence for a peak near the one at $E\approx 5.6 PeV$ seen by Tunka and three other experiments in the all-particle cosmic ray spectrum. Note, however, that an all-sky spectrum of Tunka-133 data (not combined with Tunka-25) shows no hint of a peak at $E\approx 5.6 PeV;$ it is only when searching for numbers of candidate sources that one appears. Specifically, when we use five bins of width $\Delta Log_{10} E = 0.1$ centered on $5.86 PeV,$ it is that central energy bin which shows the greatest excess number of candidate sources above what chance predicts. A peak at this energy is quite consistent both with the all-particle spectrum peak at $E \approx 5.6 PeV$ and with the earlier reported peak at $E = 4.5\pm 2.2 PeV.$ Fig. 1(a) shows a histogram of $N(S,5.86)$ versus $S,$ i.e., the number of times various positive and negative $S$ values are found in the all-sky scan for the energy bin centered on $E=5.86 PeV.$ As can be seen, the data agree quite well with the expected $\sigma=1$ Gaussian distribution for $S<0,$ but for $S >0$ the departure from the Gaussian appears more pronounced at large $S.$ For example, while there are 68 candidate sources, i.e., $S > +3.3$ regions, there are only 18 cases where $S < -3.3,$ which is very close to what the Gaussian distribution predicts, i.e., 20. For greater clarity in showing the magnitude of the deviation from the Gaussian for large $S >0,$ Fig.~1(b) shows a blow-up of Fig.~1(a) for the region $S > 3.0.$
Most interestingly, this very pronounced deviation from a Gaussian for large positive $S$ is seen \emph{only} when the S-distribution is examined for the energy bin centered on $E=5.86 PeV,$ and it is significantly less at smaller and larger energies -- see Fig.~2(a), which shows the number of candidate sources as a function of energy in excess of the 20 that the Gaussian distribution predicts for the only five energy bins we have examined. It must be emphasized that it is the \emph{excess} number of candidate sources above background that is plotted in Fig.~2(a), so that one expects the five data points in the absence of real sources to be consistent with zero at all energies, not with an arbitrary horizontal line. Thus, in contrast to the systematic effect seen for the candidate sources, Fig.~2(b) shows that the number of excess candidate ``sinks", i.e., $S < -3.3$ regions, in fact shows no statistically significant departures from zero for all energy bins.
\subsection{The oversampling bias}
The over-sampling of the same regions of the sky in our search procedure can greatly inflate the statistical significance of the number of candidate sources. Thus, based on Fig. 2(a) it would \emph{not} be valid to conclude that the \emph{excess} number of candidate sources for the $E=5.86 PeV$ bin is $68 -20 = 48 \pm \sqrt{20}.$ The oversampling bias can be removed only partly by enlarging the uncertainty on the number of sources using 3 search windows by a factor of $\sqrt{3},$ yielding for the number of excess candidate sources above chance $48 \pm 7.8.$ This penalty factor does not go nearly far enough, however, because the biggest source of oversampling (especially for the largest size search window) results from the fact that a localized excess or deficiency having a size comparable to the search window might be picked up by many nearby windows of that size, as we step the whole grid in $1^0$ steps in $\alpha$ and $\delta$ across the sky. A partial way to remove this source of oversampling is to use the distribution in the number of candidate sinks to estimate the uncertainty in the number that chance would predict. In other words, since no real sinks are physically possible, their rms deviations from zero serve as a good estimate of the uncertainty in the excess numbers of sources and sinks. We find, using the five energy bins, an rms deviation of 12, which is how the size of the error bars in Figs. 2(a) and 2(b) were determined. Combining the excess counts for the bin centered on $5.86 PeV$ with the two adjacent bins, one finds an excess of 102.7 with an uncertainty of $12\sqrt{3}.$ Were we confident that the oversampling has been removed based on this analysis, the result could be expressed as $102.7\pm 20.8\; (5.0 \sigma)$ excess candidate sources above chance, \emph{but no such claim can be made here}, and hence our result remains to be corroborated by other data sets.
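The error propagation quoted above is elementary to reproduce (a check of arithmetic only; the rms sink deviation of 12 is taken from the text):

```python
import math

# Single-bin estimate with the sqrt(3) penalty for using three search radii:
excess = 68 - 20
err = math.sqrt(20) * math.sqrt(3)      # sqrt(60) ~ 7.75, quoted above as 7.8
print(excess, err)

# Three adjacent bins combined, with the rms sink deviation (12) per bin:
combined_excess = 102.7
combined_err = 12 * math.sqrt(3)        # ~ 20.8
print(combined_err, combined_excess / combined_err)   # a ratio of about 5 sigma
```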
Fig. 3 shows the locations (right ascension $\alpha$ and declination $\delta$) of the 68 candidate sources (upper graph) and the $1/4$ as numerous candidate sinks (lower graph) for the $5.86$ PeV energy bin. It is clear that the largest fraction of the candidate sources in this energy bin have the $4.5^\circ$ radius. The concentrations of candidate sources in specific $\alpha, \delta$ regions could in principle have a physical basis, but they are also undoubtedly due in part to the previously discussed over-sampling bias, especially the concentration in the vicinity of $\alpha=10^\circ$ and $\delta=15^\circ$, where the degree of concentration is most extreme. In the other three concentrations, near $(\alpha, \delta) = (260^\circ, 15^\circ), (260^\circ, 50^\circ), (10^\circ, 50^\circ)$, the spacings between candidate sources are sufficiently large that real physical concentrations of sources are a realistic possibility, even if some amount of oversampling also exists.
\subsection{Summary}
In summary, support for a peak at $5.86 \pm 0.75$ PeV is presented -- a value consistent with (a) the $E\approx 5.6$ PeV peak seen in the all-particle spectra from Tunka and three other groups, (b) the 1999 claim of a peak at $E=4.5\pm 2.2$ PeV reported for Cygnus X-3 data, and (c) the prior 1999 prediction of such a peak at this energy. The new evidence is \emph{not} in the form of excess counts above background for any suspected identifiable source(s), but rather an excessive number of candidate sources, i.e., small circular regions of the sky having at least $3.3\sigma$ excess counts above chance in energy bins near $5.86$ PeV -- a procedure that has the effect of greatly enhancing any weak signal, assuming that there are numerous real sources at unknown locations. Moreover, the number of candidate sources seen versus energy falls as one moves away from $5.86$ PeV in either direction, as expected, and no comparable excess is seen in the number of candidate ``sinks" for any of the five energy bins examined. Although Fig. 2(b) might suggest a hint of an energy dependence for the numbers of excess sinks above chance, the chi-square probability for a fit to a horizontal straight line at zero height $(29\%)$ indicates consistency with the previous assertion. Despite the possible evidence for a peak at $5.86 \pm 0.75$ PeV presented here, it is recognized that in view of the methodology used, i.e., searching for sources at locations where no objects are known to exist, and a failure to deal completely with the oversampling bias, our result can only serve as a motivation to others to see if there is evidence for the enhancement claimed for sources in the same regions of the sky that have been identified here.
If the peak is real, one reason it may not have been claimed previously by others is that most experimenters who saw no statistically significant excess of cosmic rays from the direction of any suspected source would have had little reason to look at any particular energy band. Obviously, the peak identified here will need to be seen in the new data collected by Tunka and/or by others having data sets with sufficient numbers of events near the knee before it can be regarded seriously. Any positive result from future searches would be quite interesting -- \emph{particularly} if the locations of most of the candidate sources match those in Fig. 3(a).
\begin{acknowledgments}
I am immensely grateful to Dr. Kuzmichev for providing access to the Tunka data, and to Dr. Mikhail Zotov for providing his analysis of the data, and to both of them for supplying specific requested information about the Tunka data.
\end{acknowledgments}
\section{Introduction}
Pfaffian forms are a particular case of differential forms: the \textit{1-forms}. The treatment of this subject via differential forms, including the use of exterior algebra and differentiable manifolds more general than ${\mathbb {R}}^{n}$, is beyond the scope of this paper\footnote{To the reader interested in an extensive approach to the subject via differential forms, we suggest consulting the book by Flanders \cite{flanders}, for an applied exposition in physics, or the book by Morita \cite{morita}, for one based on pure mathematics.}. The study of pfaffian forms has theoretical and practical relevance in itself, particularly in mathematical physics \cite{antoniou,arens}.
This paper has two main objectives. The first is to present to the reader, in some detail, the lesser known conditions for the integrability of pfaffian forms on ${\mathbb {R}}^{n}$, which have a \textit{local} character and rarely appear in textbooks on differential equations and differential forms. The second is to discuss the possibility of obtaining a \textit{global} integrability criterion for pfaffian forms. The Frobenius Theorem does not fit the roadmap proposed for this paper and will thus be omitted, except for a brief comment in section \ref{sec:finais}.
First studied by Clairaut, Fontaine, and Euler, according to Katz \cite{katz}, pfaffian forms were so named in honor of Pfaff, who, between 1814 and 1815, treated the subject in greater detail \cite{samelson}. Notable mathematicians later expanded on Pfaff's work, most notably Frobenius \cite{frobenius} and Cartan \cite{cartan}.
A suitable way to define pfaffian forms here is similar to that of Morita \cite{morita}.
\begin{definition}\label{itm:1.1}{\normalfont(Pfaffian form)}. Let $x_{1},x_{2},...,x_{n}$ be a collection of $n$ independent variables in ${\mathbb {R}}^{n}$, and let $F_{i}=F_{i}(x_{1},x_{2},...,x_{n})$ be a collection of $n$ functions of class $C^{\infty}$ on an open set $B\:{\subseteq}\:{\mathbb {R}}^{n}$. For the objects $\delta\xi$,
\begin{align*}
\delta{\xi}=\sum_{i=1}^{n}F_{i}(x_{1},x_{2},...,x_{n})dx_{i},
\end{align*}
defined in $B$, with $\delta\xi$ representing the infinitesimal of a certain finite quantity $\xi$ in $B$, we call them pfaffian forms in $n$ variables.
\end{definition}
Henceforth, when we refer to a generic pfaffian form, we will mean a pfaffian form $\delta\xi$ whose parameters are as in Definition \ref{itm:1.1}. Now, a pfaffian form may or may not be the differential of a function. When a pfaffian form $\delta\xi$ does not have its functions $F_{i}$ identified as the partial derivatives of a function $\xi=\xi(x_{1},x_{2},...,x_{n})$ with respect to the respective variables $x_{i}$ ($F_{i}={\partial}{\xi}/{\partial}x_{i}$), then it does not represent the differential of such a function $\xi$ and we call $\delta\xi$ an inexact differential. That is, in this case, the quantity ${\delta}{\xi}$ represents only the infinitesimal of a certain finite quantity ${\xi}$, and ${\xi}$ is not a function of the $n$ independent variables $x_{i}$ in the sense of ${\xi}=\xi(x_{1},x_{2},...,x_{n})$.
Otherwise, if the pfaffian form ${\delta}\xi$ has its functions $F_{i}$ identified as the partial derivatives of a function ${\xi}=\xi(x_{1},x_{2},...,x_{n})$ with respect to the respective variables $x_{i}$ ($F_{i}={\partial}{\xi}/{\partial}x_{i}$), then it is the differential of such a function $\xi$, which actually exists. Moreover, in this case we also call $\delta{\xi}$ an exact differential and replace, in the symbolism of its designation, the symbol $\delta$ by the usual symbol $d$, traditionally used to denote the infinitesimal of a quantity that is an ordinary function.
It turns out that in some cases, even if the pfaffian form $\delta\xi$ is an inexact differential, it can be written as the product of a function ${\mu}={\mu}(x_{1},x_{2},...,x_{n})$ with an exact differential $d\psi$, where $\psi$ is a function of the $n$ independent variables $x_{1},x_{2},...,x_{n}$, that is, $\psi=\psi(x_{1},x_{2},...,x_{n})$. In other words, if such a $\mu$ exists, with $\mu(x_{1},x_{2},...,x_{n})\;{\neq}\;0$ on an open set $A$, where $A\:{\subseteq}\:B$, then the quantity ${\delta}{\xi}/{\mu}$ will be an exact differential $d\psi$ in $A$. When this happens we say that $\delta\xi$ is integrable in $A$, and we call the function ${\mu}^{-1}$ the integrating factor of $\delta\xi$. Furthermore, given the smoothness of the functions $F_{i}$ on all of $B$, for the discussion of integrability we will also assume that the pfaffian forms studied are always \textit{non-singular} in $B$; i.e., not identically null on all of $B$.
\begin{definition}\label{itm:1.2}{\normalfont(Integrable pfaffian form)}. Let $\delta\xi$ be a non-singular pfaffian form. If there exist functions ${\mu}={\mu}(x_{1},x_{2},...,x_{n})$, with $\mu(x_{1},x_{2},...,x_{n})\;{\neq}\;0$, and $\psi=\psi(x_{1},x_{2},...,x_{n})$ such that
\begin{align*}
\delta{\xi}=\mu{d\psi},
\end{align*}
on an open $A\:{\subseteq}\:B$, then $\delta\xi$ is said to be integrable on $A$. Also, the function ${\mu}^{-1}$ is called the integrating factor of $\delta\xi$.
\end{definition}
Naturally, by Definition \ref{itm:1.2}, every $\delta\xi$ such that $\delta\xi=d\xi$ is integrable.
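As a concrete illustration of Definition \ref{itm:1.2} (our example, not part of the original text), consider the two-variable form $\delta\xi = y\,dx - x\,dy$. It is an inexact differential, but it is integrable with $\mu = y^{2}$ and $\psi = x/y$ on any open set where $y \neq 0$. The following Python/SymPy sketch verifies $\delta\xi = \mu\,d\psi$ term by term.

```python
import sympy as sp

x, y = sp.symbols('x y')

# delta xi = F1 dx + F2 dy with F1 = y, F2 = -x: an inexact differential,
# since dF1/dy = 1 differs from dF2/dx = -1 (Schwarz's criterion fails)
F1, F2 = y, -x

# Candidate decomposition delta xi = mu * d(psi), with mu = y**2, psi = x/y
mu, psi = y**2, x / y

# Definition 1.2 requires F_i = mu * dpsi/dx_i for each variable
ok1 = sp.simplify(mu * sp.diff(psi, x) - F1) == 0
ok2 = sp.simplify(mu * sp.diff(psi, y) - F2) == 0
```

Both checks succeed, so ${\mu}^{-1} = 1/y^{2}$ is an integrating factor of $\delta\xi$ away from $y=0$.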
To continue our discussion we need to mention the important situation that occurs on paths in $B$ along which a pfaffian form vanishes, which leads to the so-called Pfaff equation associated with that pfaffian form.
\begin{definition}\label{itm:1.3}{\normalfont(Pfaff equation)}. The associated Pfaff equation for the pfaffian form $\delta{\xi}$ is
\begin{align*}
\delta{\xi}=0.
\end{align*}
\end{definition}
It is very important to note that a Pfaff equation does \textit{not} say that $\delta{\xi}$ is identically null on $B$; the equation says on which \textit{paths} of $B$ the equation $\delta{\xi}=0$ has a solution\footnote{This can be exemplified with some uses of Pfaff equations in physics. In Analytic Mechanics, {\textit{constraints}} are often modeled by a Pfaff equation \cite{papastavridis}, and in Classical Thermodynamics the usual condition for an adiabatic {\textit{infinitesimal process}}, $\delta{\altmathcal{Q}}=0$, is precisely the Pfaff equation of the pfaffian form {\textit{heat}}, $\delta{\altmathcal{Q}}$ \cite{silvajunior}.}. Next, to gain more familiarity with the idea of integrable pfaffian forms, we will analyze the solutions of the Pfaff equations associated with them.
\begin{definition}\label{itm:1.4}{\normalfont(Pfaff Exact and Integrable Equation)}. The Pfaff equation associated with the pfaffian form $\delta{\xi}$,
\begin{align*}
\delta{\xi}=0,
\end{align*}
is called exact if, and only if, $\delta{\xi}$ is an exact differential, $\delta{\xi}=d{\xi}$; if, and only if, the pfaffian form $\delta{\xi}$ is integrable, the associated Pfaff equation is called integrable.
\end{definition}
If $\delta{\xi}$ is a pfaffian form that constitutes an exact differential, then $\delta{\xi}=d{\xi}$ and the associated Pfaff equation $d{\xi}=0$ has as its solution ${\xi}={\xi}(x_{1},x_{2},...,x_{n})=$ constant, which is geometrically a hypersurface of dimension $n-1$ in $B$.
On the other hand, if $\delta{\xi}$ is a pfaffian form that is an inexact differential but integrable, then $\delta{\xi}=0$ occurs on the same paths where $d\psi=0$, i.e., where $\delta{\xi}=\mu{d\psi}$ holds, according to Definition \ref{itm:1.2}. In this situation, the solution of $\delta{\xi}=0$ is $\psi=\psi(x_{1},x_{2},...,x_{n})=$ constant, which defines a hypersurface of dimension $n-1$, now in $A$. Of course, if $\delta{\xi}$ is a non-integrable pfaffian form, then the solution of the equation $\delta{\xi}=0$ need not define any geometric object restricted to $n-1$ dimensions as in the previous cases.
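The two-variable case can be made concrete (the example is ours, not from the text): for the inexact but integrable form $\delta{\xi} = y\,dx + 2x\,dy$, the associated Pfaff equation defines the ODE $dy/dx = -y/(2x)$, whose solution curves are the level sets $\psi = x y^{2} =$ constant. A Python/SymPy sketch, assuming $x>0$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Pfaff equation y dx + 2x dy = 0, rewritten as the ODE dy/dx = -y/(2x)
ode = sp.Eq(y(x).diff(x), -y(x) / (2 * x))
sol = sp.dsolve(ode, y(x))            # y(x) = C1/sqrt(x)

# The solution curves are the level sets psi(x, y) = x*y**2 = constant:
# substituting the solution into psi leaves no x-dependence
level = sp.simplify(x * sol.rhs**2)
```

Along each curve $x y^{2} =$ constant the Pfaff equation is satisfied, illustrating that the solution set is a one-parameter family of curves (hypersurfaces of dimension $n-1=1$) in ${\mathbb {R}}^{2}$.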
This said, we can now ask the main question: in which situations is a pfaffian form integrable? Sections \ref{sec:local} and \ref{sec:global} give us the answer.
\section{Local Integrability} \label{sec:local}
This section deals with integrability conditions that have a \textit{local} character for pfaffian forms; i.e., conditions that, when satisfied, are restricted to some neighborhood $M$ contained in $B$, around some point $p$ of $B$. This will become clearer in section \ref{sec:global}, where we will discuss conditions for \textit{global} integrability. For now, it is useful to formalize a definition of local integrability.
\begin{definition}\label{itm:2.1}{\normalfont(Local integrability)}. If the non-singular pfaffian form $\delta{\xi}$ is integrable restrictedly to some neighborhood $M\:{\subset}\:B$ of every point $p\:{\in}\:B$, we say that $\delta{\xi}$ is locally integrable on $B$.
\end{definition}
We will first discuss the simplest cases for local integrability: those of pfaffian forms in two and three variables.
\subsection{Pfaffian forms in two and three variables}
To avoid unnecessary repetition, we will henceforth consider only \textit{non-singular} pfaffian forms. Following Definition \ref{itm:1.2}, let $\delta{\xi}$ be a pfaffian form in two variables, ${x_1}$ and ${x_2}$:
\begin{equation}\label{eq:1}
\delta{\xi}=F_{1}({x_1},{x_2})dx_{1}+F_{2}({x_1},{x_2})dx_{2}.
\end{equation}
\noindent
The respective Pfaff equation associated with the pfaffian form of the expression (\ref{eq:1}) is
\begin{equation}\label{eq:2}
F_{1}({x_1},{x_2})dx_{1}+F_{2}({x_1},{x_2})dx_{2}=0,
\end{equation}
\noindent
which defines the following first-order ordinary differential equation,
\begin{equation}\label{eq:3}
\frac{d{x_2}}{d{x_1}}=-\frac{F_{1}({x_1},{x_2})}{F_{2}({x_1},{x_2})}\,{\equiv}\,f({x_1},{x_2}),
\end{equation}
\noindent
where $x_{2}=x_{2}(x_{1})$. Now, by the Existence and Uniqueness Theorem for ordinary differential equations\footnote{Formally stating this theorem would be redundant to the purpose of this paper. For details on this fundamental theorem we suggest the book by Coddington and Levinson \cite{coddington}.}, if $f({x_1},{x_2})$ and ${\partial}f({x_1},{x_2})/{\partial}x_{2}$ are continuous on the open set $B$, then given some point $p=({x_1}^{0},{x_2}^{0})\;{\in}\;B\;{\subseteq}\:{\mathbb {R}}^{2}$, there exists in $B$ a single curve ${\psi}({x_1},x_{2}(x_{1}))=$ constant, parametrized by $x_1$, which provides the function ${x_2}=x_{2}({x_1})$ solving equation (\ref{eq:3}) and satisfying ${x_2}^{0}={x_2}({x_1}^{0})$ on some open interval $I$ containing ${x_1}^{0}$. The functions $F_{1}({x_1},{x_2})$ and $F_{2}({x_1},{x_2})$ are $C^{\infty}$ on $B$ by definition, and we must assume here that they are also, by construction, always non-null on $B$, given the arbitrariness involved in setting up equation (\ref{eq:3}) so that $x_{2}=x_{2}(x_{1})$ instead of $x_{1}=x_{1}(x_{2})$; otherwise, by this arbitrariness, $\delta{\xi}$ could be identically null, or indeterminate. These considerations allow the use of the theorem for equation (\ref{eq:3}). With this in mind, we can address one of the most important theorems in the theory of integrability of pfaffian forms.
\begin{theorem}\label{itm:1} Every pfaffian form in two variables on an open $B$ is locally integrable on $B$.
\end{theorem}
\noindent
\textbf{Proof.} Every Pfaff equation of a pfaffian form in two variables, whether exact or not, defines a first-order ordinary differential equation like equation (\ref{eq:3}), which ensures, by the Existence and Uniqueness Theorem, at least \textit{locally}, the existence of a unique solution curve ${\psi}({x_1},x_{2}(x_{1}))=$ constant. Consider $d\psi$ in an open $B\:{\subseteq}\:{\mathbb {R}}^{2}$:
\begin{equation}\label{eq:4}
d\psi=\frac{{\partial}\psi}{{\partial}x_{1}}d{x_1}+\frac{{\partial}\psi}{{\partial}x_{2}}d{x_2}=0.
\end{equation}
Notice that equation (\ref{eq:4}) describes the same curves $x_{2}=x_{2}(x_{1})$ as equation (\ref{eq:3}). By then substituting equation (\ref{eq:3}) into equation (\ref{eq:4}), we have:
\begin{equation}\label{eq:5}
d\psi=\frac{{\partial}\psi}{{\partial}x_{1}}d{x_1}-\frac{F_{1}({x_1},{x_2})}{F_{2}({x_1},{x_2})}\frac{{\partial}\psi}{{\partial}x_{2}}d{x_1}=0.
\end{equation}
Rearranging the equation (\ref{eq:5}), and in view of Definition \ref{itm:1.2}, we are invited to define the function ${\mu}={\mu}(x_{1},x_{2})$, clearly non-zero, given by,
\begin{equation}\label{eq:6}
{{\mu}(x_{1},x_{2})}^{-1}\,{\equiv}\,\frac{1}{F_{1}({x_1},{x_2})}\frac{{\partial}\psi}{{\partial}x_{1}}=\frac{1}{F_{2}({x_1},{x_2})}\frac{{\partial}\psi}{{\partial}x_{2}},
\end{equation}
\noindent
from which it immediately follows that:
\begin{equation}\label{eq:7}
{\mu}d{\psi}=F_{1}({x_1},{x_2})dx_{1}+F_{2}({x_1},{x_2})dx_{2}=\delta{\xi}.
\end{equation}
\hfill$\square$
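The construction in the proof can be traced on a concrete case (our example): for $\delta\xi = y^{2}\,dx + xy\,dy$, equation (\ref{eq:3}) gives $dy/dx = -y/x$, whose solution curves are the level sets $\psi = xy =$ constant, and equation (\ref{eq:6}) then yields ${\mu}^{-1} = 1/y$, hence $\delta\xi = y\,d(xy)$. A SymPy check:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# delta xi = F1 dx + F2 dy with F1 = y**2, F2 = x*y (an inexact differential)
F1, F2 = y**2, x * y

# The ODE dy/dx = -F1/F2 = -y/x has the level sets psi = x*y = const
# as solution curves (found here by separation of variables)
psi = x * y

# Equation (6): both quotients must agree and define mu**(-1)
inv_mu_1 = sp.simplify(sp.diff(psi, x) / F1)   # = 1/y
inv_mu_2 = sp.simplify(sp.diff(psi, y) / F2)   # = 1/y

# Hence delta xi = mu * d(psi) with mu = y
mu = 1 / inv_mu_1
```

The agreement of the two quotients is exactly the consistency required by equation (\ref{eq:6}); it fails only where the construction degenerates ($y = 0$), consistent with the local character of Theorem \ref{itm:1}.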
Another proof of Theorem \ref{itm:1} can be found in \cite{diaz}. In studying the integrability of a pfaffian form $\delta{\xi}$ in three variables, $x_1$, $x_2$ and $x_3$,
\begin{equation}\label{eq:8}
\delta{\xi}=F_{1}({x_1},{x_2},{x_3})dx_{1}+F_{2}({x_1},{x_2},{x_3})dx_{2}+F_{3}({x_1},{x_2},{x_3})dx_{3},
\end{equation}
\noindent
it is more pertinent to use the vector notation: $\textbf{F}\,{\equiv}\,(F_{1},F_{2},F_{3})$, $d\textbf{r}\,{\equiv}\,(d{x_1},d{x_2},d{x_3})$. Thus, $\delta{\xi}$ and its associated Pfaff equation are represented by, respectively: $\delta{\xi}=\textbf{F}\,{\cdot}\,d\textbf{r}$ and $\textbf{F}\,{\cdot}\,d\textbf{r}=0$. In an open set, verification of the following equation is necessary and sufficient for the integrability of the pfaffian form in question:
\begin{equation}\label{eq:9}
\textbf{F}\,{\cdot}\,\nabla{\times}\textbf{F}=0.
\end{equation}
The preceding statement is a theorem. Its proof using the means discussed so far is long, in particular the part showing that equation (\ref{eq:9}) is sufficient for integrability, and so it will be omitted in this paper. Later, when we deal with the integrability of pfaffian forms in any number of variables, the demonstration of condition (\ref{eq:9}) for the case of three variables will be immediately recovered. A fact to be pointed out now is that the complete demonstration of the integrability condition (\ref{eq:9}) makes use of Theorem \ref{itm:1} to conclude the integrability of pfaffian forms in three variables \cite{sneddon}. As previously discussed, Theorem \ref{itm:1} has an exclusively \textit{local} character. Consequently, condition (\ref{eq:9}) guarantees the integrability of pfaffian forms in three variables also only \textit{locally}, on an appropriate open $B\:{\subseteq}\:{\mathbb {R}}^{3}$.
\begin{theorem}\label{itm:2} A pfaffian form in three variables on an open $B$ is locally integrable on $B$ if, and only if, $\textbf{F}\;{\cdot}\,\nabla{\times}\textbf{F}=0$ on $B$.
\end{theorem}
\noindent
\textbf{Indication of proof.} Keeping Theorem \ref{itm:1} in mind, see Chapter 1 of Sneddon's book \cite{sneddon}.
\hfill$\square$
However, it is not difficult to see that equation (\ref{eq:9}) is a necessary condition for the integrability of a pfaffian form in three variables. Using the vector notation presented earlier, vector calculus tells us that if $\nabla{\times}\textbf{F}\neq\textbf{0}$, then $\delta{\xi}$ is an inexact differential, because of Schwarz's Theorem. For $\delta{\xi}$ to be integrable, there must then exist a not identically null function $\mu$ such that $\nabla{\times}({\mu}^{-1}\textbf{F})=\textbf{0}$. In other words,
\begin{equation}\label{eq:10}
{\mu}^{-1}\nabla{\times}\textbf{F}+\nabla({\mu}^{-1}){\times}\textbf{F}=\textbf{0}.
\end{equation}
\noindent
Taking the scalar product of (\ref{eq:10}) with $\textbf{F}$, we obtain the sought condition:
\begin{equation}\label{eq:11}
\textbf{F}\,{\cdot}\,\nabla{\times}\textbf{F}=0.
\end{equation}
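Condition (\ref{eq:9}) is easy to test symbolically. The sketch below (our examples, not from the text) checks it for the locally integrable form $x_2\,dx_1 - x_1\,dx_2$ and for the classical non-integrable form $x_2\,dx_1 + x_3\,dx_2 + x_1\,dx_3$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def F_dot_curl_F(F):
    """Compute F . (curl F) for F = (F1, F2, F3) in the variables x1, x2, x3."""
    F1, F2, F3 = F
    curl = (sp.diff(F3, x2) - sp.diff(F2, x3),
            sp.diff(F1, x3) - sp.diff(F3, x1),
            sp.diff(F2, x1) - sp.diff(F1, x2))
    return sp.simplify(sum(a * b for a, b in zip(F, curl)))

# Locally integrable: delta xi = x2 dx1 - x1 dx2 has curl F = (0, 0, -2),
# which is orthogonal to F, so F . curl F = 0
test_integrable = F_dot_curl_F((x2, -x1, sp.Integer(0)))

# Not integrable: delta xi = x2 dx1 + x3 dx2 + x1 dx3 gives
# curl F = (-1, -1, -1) and F . curl F = -(x1 + x2 + x3) != 0
test_non_integrable = F_dot_curl_F((x2, x3, x1))
```

The first form is the $\delta\xi = y\,dx - x\,dy$ example written in the present variables; the second admits no integrating factor anywhere.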
For pfaffian forms in $n$ variables, we begin by finding a necessary condition for integrability that generalizes (\ref{eq:11}).
\subsection{Pfaffian forms in $n$ variables}
The first important results that follow were, according to Katz \cite{katz}, initially obtained by Clairaut.
\begin{lemma}\label{lemma1} A necessary condition for the integrability of a pfaffian form in $n$ variables is that the quantity $\mathfrak{R}_{ijk}$,
\begin{align*}
\mathfrak{R}_{ijk}\:{\equiv}\:F_{i}\Bigg[\frac{{\partial}F_{k}}{{\partial}x_{j}}-\frac{{\partial}F_{j}}{{\partial}x_{k}}\Bigg]+F_{j}\Bigg[\frac{{\partial}F_{i}}{{\partial}x_{k}}-\frac{{\partial}F_{k}}{{\partial}x_{i}}\Bigg]+F_{k}\Bigg[\frac{{\partial}F_{j}}{{\partial}x_{i}}-\frac{{\partial}F_{i}}{{\partial}x_{j}}\Bigg],
\end{align*}
\noindent
vanish, for all $i$, $j$, $k$.
\end{lemma}
\noindent
\textbf{Proof.} We start by writing a pfaffian form $\delta{\xi}$ in $n$ variables, $x_1, x_2,..., x_n$:
\begin{equation}\label{eq:12}
\delta{\xi}=\sum_{i=1}^{n}F_{i}(x_{1},x_{2},...,x_{n})dx_{i}.
\end{equation}
From Definition \ref{itm:1.2}, if $\delta{\xi}$ is integrable, then there exist functions $\mu$ and $\psi$ which, under the appropriate conditions, satisfy $\delta{\xi}=\mu{d{\psi}}$. It follows that, for each $i$:
\begin{equation}\label{eq:13}
\frac{{\partial}\psi}{{\partial}x_{i}}=\frac{1}{\mu}F_{i}.
\end{equation}
Now, differentiating equation (\ref{eq:13}) with respect to some other variable, say $x_j$, we have,
\begin{equation}\label{eq:14}
\frac{{\partial}^{2}\psi}{{\partial}x_{j}{\partial}x_{i}}=\frac{{\partial}{\mu}^{-1}}{{\partial}x_{j}}F_{i}+{\mu}^{-1}\frac{{\partial}F_{i}}{{\partial}x_{j}}.
\end{equation}
\noindent
By Schwarz's Theorem, ${{\partial}^{2}\psi}/{{\partial}x_{j}{\partial}x_{i}}={{\partial}^{2}\psi}/{{\partial}x_{i}{\partial}x_{j}}$, so,
\begin{equation}\label{eq:15}
\frac{{\partial}{\mu}^{-1}}{{\partial}x_{i}}F_{j}+{\mu}^{-1}\frac{{\partial}F_{j}}{{\partial}x_{i}}=\frac{{\partial}{\mu}^{-1}}{{\partial}x_{j}}F_{i}+{\mu}^{-1}\frac{{\partial}F_{i}}{{\partial}x_{j}}.
\end{equation}
\noindent
Regrouping (\ref{eq:15}) and multiplying the whole equation by $\mu$,
\begin{equation}\label{eq:16}
\frac{{\partial}F_{j}}{{\partial}x_{i}}-\frac{{\partial}F_{i}}{{\partial}x_{j}}={\mu}F_{i}\frac{{\partial}{\mu}^{-1}}{{\partial}x_{j}}-{\mu}F_{j}\frac{{\partial}{\mu}^{-1}}{{\partial}x_{i}}.
\end{equation}
Compared with condition (\ref{eq:11}) for three variables, the left-hand side of (\ref{eq:16}) invites us to look for ways to annul it and thus obtain an integrability condition that depends only on the functions $F_i$ and their derivatives. This occurs if we multiply (\ref{eq:16}) by another function $F_k$ and then cyclically add the analogous terms of the form $F_k[{{\partial}F_{j}}/{{\partial}x_{i}}-{{\partial}F_{i}}/{{\partial}x_{j}}]$, so that,
\begin{equation}\label{eq:17}
\mathfrak{R}_{ijk}\,{\equiv}\,F_{i}\Bigg[\frac{{\partial}F_{k}}{{\partial}x_{j}}-\frac{{\partial}F_{j}}{{\partial}x_{k}}\Bigg]+F_{j}\Bigg[\frac{{\partial}F_{i}}{{\partial}x_{k}}-\frac{{\partial}F_{k}}{{\partial}x_{i}}\Bigg]+F_{k}\Bigg[\frac{{\partial}F_{j}}{{\partial}x_{i}}-\frac{{\partial}F_{i}}{{\partial}x_{j}}\Bigg]=0,
\end{equation}
\noindent
because the terms on the right-hand side of (\ref{eq:16}) cancel out with the analogous terms when we add them up.
\hfill$\square$
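Lemma \ref{lemma1} can be exercised numerically. In the sketch below (our examples), an integrable form in four variables, built directly as $\mu\,d\psi$, has all quantities $\mathfrak{R}_{ijk}$ null, while a generic non-integrable form does not:

```python
import itertools
import sympy as sp

n = 4
xs = sp.symbols('x1:5')  # the variables x1, x2, x3, x4

def R(F, i, j, k):
    """The quantity R_ijk of Lemma 1 (indices are 0-based into F and xs)."""
    d = lambda a, b: sp.diff(F[a], xs[b])
    return sp.simplify(F[i] * (d(k, j) - d(j, k))
                       + F[j] * (d(i, k) - d(k, i))
                       + F[k] * (d(j, i) - d(i, j)))

# Integrable example: delta xi = mu * d(psi) with mu = x1, psi = x2*x3 + x4,
# so F = (0, x1*x3, x1*x2, x1); every R_ijk must vanish
mu, psi = xs[0], xs[1] * xs[2] + xs[3]
F = [sp.simplify(mu * sp.diff(psi, v)) for v in xs]
all_null = all(R(F, i, j, k) == 0
               for i, j, k in itertools.combinations(range(n), 3))

# Generic example: F = (x2, x3, x4, x1); some R_ijk is nonzero,
# so by Lemma 1 this form cannot be integrable
G = [xs[1], xs[2], xs[3], xs[0]]
some_nonnull = any(R(G, i, j, k) != 0
                   for i, j, k in itertools.combinations(range(n), 3))
```

For the second form, e.g., $\mathfrak{R}_{123} = -x_2 - x_4 \neq 0$, which rules out any integrating factor.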
One immediately recovers condition (\ref{eq:11}) by setting $(i,j,k)=(1,2,3)$ in (\ref{eq:17}). The converse of Lemma \ref{lemma1} is valid, at least \textit{locally}. To show this, we first need to observe that if the quantity $\mathfrak{R}_{ijk}$ vanishes in one collection of variables, it remains null under a change of those variables.
\begin{lemma}\label{lemma2} The nullity of $\mathfrak{R}_{ijk}$ is invariant under a change of variables.
\end{lemma}
\noindent
\textbf{Proof.} Let $\delta{\xi}$ be a pfaffian form in $n$ variables $x_1, x_2,..., x_n$, and consider a change of variables to $n$ new variables ${\bar{x}_1}, {\bar{x}_2},..., {\bar{x}_n}$. The differential of a variable $x_{i}=x_{i}({\bar{x}_1}, {\bar{x}_2},..., {\bar{x}_n})$ is then:
\begin{equation}\label{eq:18}
d{x_i}=\sum_{j=1}^{n} \frac{{\partial}x_{i}}{{\partial}{\bar{x}}_{j}}d{\bar{x}}_{j}.
\end{equation}
The pfaffian form $\delta{\xi}$ can be represented in both collections of variables, each with its associated functions. Hence,
\begin{equation}\label{eq:19}
\delta{\xi}=\sum_{i=1}^{n} {F_i}({x}_1, {x}_2,..., {x}_n)d{x}_{i}=\sum_{j=1}^{n} {\bar{F}}_{j}({\bar{x}_1}, {\bar{x}_2},..., {\bar{x}_n})d{\bar{x}}_{j},
\end{equation}
\noindent
where, substituting (\ref{eq:18}) into (\ref{eq:19}), we obtain:
\begin{equation}\label{eq:20}
{\bar{F}}_{j}({\bar{x}_1}, {\bar{x}_2},..., {\bar{x}_n})=\sum_{i=1}^{n}\frac{{\partial}x_{i}}{{\partial}{\bar{x}}_{j}}{F_i}({x}_1, {x}_2,..., {x}_n).
\end{equation}
Suppressing the explicit dependence on the variables, by (\ref{eq:17}), in the collection of new variables the quantity ${\bar{\mathfrak{R}}}_{ijk}$ is:
\begin{equation}\label{eq:21}
{\bar{F}}_{i}\Bigg[\frac{{\partial}{\bar{F}}_{k}}{{\partial}{\bar{x}}_{j}}-\frac{{\partial}{\bar{F}}_{j}}{{\partial}{\bar{x}}_{k}}\Bigg]+{\bar{F}}_{j}\Bigg[\frac{{\partial}{\bar{F}}_{i}}{{\partial}{\bar{x}}_{k}}-\frac{{\partial}{\bar{F}}_{k}}{{\partial}{\bar{x}}_{i}}\Bigg]+{\bar{F}}_{k}\Bigg[\frac{{\partial}{\bar{F}}_{j}}{{\partial}{\bar{x}}_{i}}-\frac{{\partial}{\bar{F}}_{i}}{{\partial}{\bar{x}}_{j}}\Bigg].
\end{equation}
We will initially analyze only the second term of (\ref{eq:21}), substituting the new functions given in (\ref{eq:20}) appropriately. Doing so,
\begin{equation}\label{eq:22}
\sum_{i=1}^{n}\frac{{\partial}x_{i}}{{\partial}{\bar{x}}_{j}}{F_i}\Bigg[\sum_{k=1}^{n}\sum_{j=1}^{n}\frac{{\partial}x_{k}}{{\partial}{\bar{x}}_{i}}\frac{{\partial}{F}_{k}}{{\partial}{x}_{j}}\frac{{\partial}x_{j}}{{\partial}{\bar{x}}_{k}}-\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{{\partial}x_{j}}{{\partial}{\bar{x}}_{k}}\frac{{\partial}{F}_{j}}{{\partial}{x}_{k}}\frac{{\partial}x_{k}}{{\partial}{\bar{x}}_{i}}\Bigg],
\end{equation}
\noindent
we see that we obtain a term proportional to the first term of ${\mathfrak{R}}_{ijk}$, since the terms with second-order partial derivatives of the variables cancel, by Schwarz's Theorem. Repeating the same for the remaining terms of (\ref{eq:21}), we have that:
\begin{equation}\label{eq:23}
{\bar{\mathfrak{R}}}_{ijk}=\Bigg[\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{{\partial}x_{i}}{{\partial}{\bar{x}}_{j}}\frac{{\partial}x_{j}}{{\partial}{\bar{x}}_{k}}\frac{{\partial}x_{k}}{{\partial}{\bar{x}}_{i}}\Bigg]{\mathfrak{R}}_{ijk}.
\end{equation}
\noindent
Therefore, if ${\mathfrak{R}}_{ijk}=0$, then ${\bar{\mathfrak{R}}}_{ijk}=0$.
\hfill$\square$
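The invariance asserted by Lemma \ref{lemma2} can be checked on a concrete change of variables (our example; the change $x_1 = u_1 u_2$, $x_2 = u_2 + u_3$, $x_3 = u_3^{2}$ is locally invertible for positive variables). Transforming an integrable form via equation (\ref{eq:20}) preserves the nullity of $\mathfrak{R}_{ijk}$:

```python
import sympy as sp

# Old variables x and new variables u, with a chosen change x = x(u)
x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
u1, u2, u3 = sp.symbols('u1 u2 u3', positive=True)
xs, us = (x1, x2, x3), (u1, u2, u3)
subs_x = [u1 * u2, u2 + u3, u3**2]   # x1, x2, x3 expressed in the new variables

# An integrable form in the old variables: F_i = mu * dpsi/dx_i
mu, psi = x1, x2 * x3
F_old = [mu * sp.diff(psi, v) for v in xs]

# Equation (20): Fbar_j = sum_i (dx_i/du_j) * F_i(x(u))
to_u = dict(zip(xs, subs_x))
F_new = [sp.simplify(sum(sp.diff(subs_x[i], us[j]) * F_old[i].subs(to_u)
                         for i in range(3)))
         for j in range(3)]

def R(F, vs):
    # The quantity R_123 of Lemma 1, taken in the variables vs
    d = lambda a, b: sp.diff(F[a], vs[b])
    return sp.simplify(F[0] * (d(2, 1) - d(1, 2))
                       + F[1] * (d(0, 2) - d(2, 0))
                       + F[2] * (d(1, 0) - d(0, 1)))
```

Here $R(F_{\text{old}})$ vanishes because the form is integrable, and $R(F_{\text{new}})$ vanishes as well, as Lemma \ref{lemma2} predicts.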
\begin{theorem}\label{itm:3} A sufficient condition for the local integrability of a pfaffian form in $n$ variables, on an open $B$, is that $\mathfrak{R}_{ijk}$ vanish, for all $i$, $j$, $k$.
\end{theorem}
\noindent
\textbf{Proof.} We present a proof by finite induction which initially seeks to show that the condition $\mathfrak{R}_{ijk}=0$, for all $i$, $j$, $k$, is sufficient for integrability. From it, it will follow that the most we can say about this condition is that it is, in fact, sufficient only for \textit{local} integrability.
For a pfaffian form in one variable, $x_1$, it is clear by construction that $\delta{\xi}$ is always integrable. For a pfaffian form in $n$ variables,
\begin{equation}\label{eq:24}
\delta{\xi}=\sum_{i=1}^{n}F_{i}(x_{1},x_{2},...,x_{n})dx_{i},
\end{equation}
\noindent
we assume that ${\mathfrak{R}}_{ijk}=0$, for all $i,j,k$. Next, we choose to examine $\delta{\xi}$ along paths in an open $B\;{\subseteq}\;{\mathbb{R}}^{n}$ such that $d{x_n}=0$. The pfaffian form that results from fixing $x_n$ in (\ref{eq:24}) is,
\begin{equation}\label{eq:25}
{\delta}\eta=\sum_{i=1}^{n-1}F_{i}(x_{1},x_{2},...,x_{n})dx_{i},
\end{equation}
\noindent
so that ${\mathfrak{R}}_{ijk}=0$ naturally remains valid in (\ref{eq:25}), since the nullity of this quantity is not affected by fixing a variable. As the induction hypothesis, we then assume that ${\delta}\eta$ is integrable under the circumstance ${\mathfrak{R}}_{ijk}=0$, for all $i,j,k$ different from $n$, in $B$. On account of this, there must exist functions $\lambda$, with $\lambda(x_{1},x_{2},...,x_{n-1})\;{\neq}\;0$, and ${\sigma}={\sigma}(x_{1},x_{2},...,x_{n-1})$, on some open $A\:{\subseteq}\:B$, such that:
\begin{equation}\label{eq:26}
{\delta}\eta=\lambda{d\sigma}=\lambda\sum_{i=1}^{n-1}\frac{{\partial}\sigma}{{\partial}{x_i}}d{x_i}.
\end{equation}
Now, letting $x_n$ vary again, we can rewrite ${\delta}\xi$ in terms of ${\delta}\eta$, with ${\delta}\eta$ integrable by hypothesis. That is,
\begin{equation}\label{eq:27}
{\delta}\xi=\lambda{d\sigma}+{F_n}d{x_n},
\end{equation}
\noindent
where, since ${\delta}\xi$ is a pfaffian form in $n$ variables, writing (\ref{eq:27}) is equivalent to a change of variables in ${\delta}\xi$, from the variables $x_1, x_2,..., x_n$ to certain new variables ${\bar{x}_1}, {\bar{x}_2},..., {\bar{x}_{n-2}},{\sigma},{x_n}$. In this new collection of variables it happens that ${{\bar{F}}_i}=0$ for all $i\:{\in}\:\{1,2,...,n-2\}$. By Lemma \ref{lemma2}, the hypothesis of the nullity of ${\mathfrak{R}}_{ijk}$ holds in the new collection of variables. Again, this relationship is preserved when examining only the collection of variables ${\bar{x}_1}, {\bar{x}_2},..., {\bar{x}_{n-2}}$. Explicitly,
\begin{equation}\label{eq:28}
{\bar{\mathfrak{R}}}_{ijk}={\lambda}\frac{{\partial}{F_n}}{{\partial}{\bar{x}}_{i}}-{F_n}\frac{{\partial}\lambda}{{\partial}{\bar{x}}_{i}}=0,
\end{equation}
\noindent
and we obtain that, in the variables ${\bar{x}_1}, {\bar{x}_2},..., {\bar{x}_{n-2}},{\sigma},{x_n}$, the quotient ${F_n}/\lambda$ must depend solely on $\sigma$ and $x_n$. With $\lambda\;{\neq}\;0$, we can rewrite (\ref{eq:27}) as:
\begin{equation}\label{eq:29}
{\delta}\xi=\lambda\Bigg({d\sigma}+\frac{F_n}{\lambda}d{x_n}\Bigg).
\end{equation}
The term in parentheses in (\ref{eq:29}) is a pfaffian form in two variables, which, by Theorem \ref{itm:1}, is locally integrable. Therefore, there exist functions $\mu$ and $\psi$ such that,
\begin{equation}\label{eq:30}
{\delta}\xi=\lambda\mu{d\psi},
\end{equation}
\noindent
under the appropriate conditions, and so ${\delta}\xi$ is locally integrable on $B$.
\hfill$\square$
We now present one last result, originally obtained by C. Carathéodory in 1909 in his formalization of Classical Thermodynamics \cite{caratheodory}, and probably the integrability criterion for pfaffian forms most absent from differential equations textbooks ever since. It provides the local integrability of a pfaffian form from a topological condition on the set $B$ in which the pfaffian form resides. The proof of Carathéodory's Theorem is presented here in somewhat more detail than in the works that first investigated it \cite{buchdahl, buchdahl2}, after Carathéodory's original proof \cite{caratheodory}.
\begin{theorem}\label{itm:4} {\normalfont(Carathéodory's Theorem)} A necessary and sufficient condition for the local integrability of a pfaffian form ${\delta}{\xi}$ in $n$ variables, on an open $B$, is that in every neighborhood $M\:{\subset}{\:}B$ of any point $p\:{\in}{\:}B$ there exist points arbitrarily close to $p$ that are unreachable from $p$ by curves along which ${\delta}{\xi}=0$.
\end{theorem}
\noindent
\textbf{Proof.} \cite{buchdahl2} Let ${\delta}\xi$ be a pfaffian form in $n$ variables, on an open $B\:{\subseteq}\:{\mathbb {R}}^{n}$, and let $p=({x_1}^{0},{x_2}^{0},...,{x_n}^{0})$, $q=({x_1}^{*},{x_2}^{*},...,{x_n}^{*})$, $r=({x_1}^{**},{x_2}^{**},...,{x_n}^{**})$ be points of $B$. Let ${\gamma}_1$ and ${\gamma}_2$ be smooth curves in $B$, parametrized by a real parameter $t$,
\begin{subequations}\label{31}
\begin{gather}
{\gamma}_{1}(t)=(f_{1}(t),f_{2}(t),...,f_{n}(t))\,{\equiv}{\,}(f_{i}(t)), \label{31a}\\
{\gamma}_{2}(t)=(f_{1}(t)+{\nu}g_{1}(t),f_{2}(t)+{\nu}g_{2}(t),...,f_{n}(t)+{\nu}g_{n}(t))\,{\equiv}{\,}(f_{i}(t)+{\nu}g_{i}(t)), \label{31b}
\end{gather}
\end{subequations}
\noindent
with ${\nu}$ real, along which ${\delta}{\xi}=0$, subject to the following conditions, respectively,
\begin{subequations} \label{32}
\begin{align}
{\gamma}_{1}(t_{0})=&{\,}p,\quad {\gamma}_{1}(t_{*})=q, \label{32a}\\
{\gamma}_{2}(t_{0})=&{\,}p,\quad {\gamma}_{2}(t_{**})=r, \label{32b}
\end{align}
\end{subequations}
\noindent
where $t_{0}<t_{*}<t_{**}$, with $|t_{*}-t_{0}|<\epsilon_{1}$ and $|t_{**}-t_{*}|<\epsilon_{2}$, for arbitrarily small $\epsilon_{1}$ and $\epsilon_{2}$. For sufficiently small $\nu$, our goal is to examine the limit $\epsilon_{2}\;{\rightarrow}\;0$. The Pfaff equation of which ${\gamma}_{2}$ is a solution is given by,
\begin{equation} \label{33}
\begin{aligned}
0=&\sum_{i=1}^{n}F_{i}(f_{l}(t)+{\nu}g_{l}(t))\,d(f_{i}(t)+{\nu}g_{i}(t))\\
=&\sum_{i=1}^{n}F_{i}(f_{l}(t)+{\nu}g_{l}(t))[\dot{f}_{i}(t)+{\nu}\dot{g}_{i}(t)]\,dt,
\end{aligned}
\end{equation}
\noindent
with $d{f_i}(t)/dt\,{\equiv}{\,}\dot{f}_{i}(t)$, $d{g_i}(t)/dt\,{\equiv}{\,}\dot{g}_{i}(t)$. Differentiating (\ref{33}) with respect to $\nu$, at $\nu=0$,
\begin{equation}\label{eq:34}
\sum_{i=1}^{n}F_{i}(f_{l}(t))\dot{g}_{i}(t)+\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_j}}\dot{f}_{i}(t){g}_{j}(t)=0,
\end{equation}
\noindent
or
\begin{equation}\label{eq:35}
\sum_{i=1}^{n}F_{i}(f_{l}(t))\dot{g}_{i}(t)=-\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_j}}\dot{f}_{i}(t){g}_{j}(t).
\end{equation}
Equation (\ref{eq:35}) allows us to choose $n-1$ of the functions ${g}_{j}(t)$ arbitrarily, with the $n$-th one constrained to obey (\ref{eq:35}). Let ${g}_{k}(t)$ be that $n$-th function, so that, isolating the $k$-index terms, we have:
\begin{equation}\label{eq:36}
\begin{aligned}
F_{k}(f_{l}(t))\dot{g}_{k}(t)+&\sum_{\substack{i=1 \\ i\neq k}}^{n}\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_k}}\dot{f}_{i}(t){g}_{k}(t)=\\-&\sum_{\substack{j=1 \\ j\neq k}}^{n}F_{j}(f_{l}(t))\dot{g}_{j}(t)-\sum_{\substack{i=1 \\ i\neq k}}^{n}\sum_{\substack{j=1 \\ j\neq k}}^{n}\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_j}}\dot{f}_{i}(t){g}_{j}(t).
\end{aligned}
\end{equation}
The function ${g}_{k}(t)$ then becomes the object of study as we let $\epsilon_{2}\;{\rightarrow}\;0$. The left-hand side of (\ref{eq:36}) is that of a first-order linear ordinary differential equation in ${g}_{k}(t)$, so, by Leibniz's method of the integrating factor, let ${\eta}={\eta}(t)$ be a non-null function such that:
\begin{equation}\label{eq:37}
\frac{d({\eta}(t)F_{k}(f_{l}(t)){g}_{k}(t))}{dt}={\eta}(t)\Bigg[F_{k}(f_{l}(t))\dot{g}_{k}(t)+\sum_{\substack{i=1 \\ i\neq k}}^{n}\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_k}}\dot{f}_{i}(t){g}_{k}(t)\Bigg].
\end{equation}
\noindent
Expanding the left-hand side of (\ref{eq:37}),
\begin{equation}\label{eq:38}
\begin{aligned}
\frac{d({\eta}(t)F_{k}(f_{l}(t)){g}_{k}(t))}{dt}&\\={\eta}(t)F_{k}(f_{l}(t))\dot{g}_{k}(t)+&{\eta}(t)\sum_{\substack{i=1 \\ i\neq k}}^{n}\frac{{\partial}F_{k}(f_{l}(t))}{{\partial}{x_i}}\dot{f}_{i}(t){g}_{k}(t)+\dot{{\eta}}(t)F_{k}(f_{l}(t)){g}_{k}(t),
\end{aligned}
\end{equation}
\noindent
and comparison between (\ref{eq:37}) and (\ref{eq:38}) gives,
\begin{equation}\label{eq:39}
\dot{{\eta}}(t)F_{k}(f_{l}(t))={\eta}(t)\sum_{\substack{i=1 \\ i\neq k}}^{n}\dot{f}_{i}(t)\Bigg[\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_k}}-\frac{{\partial}F_{k}(f_{l}(t))}{{\partial}{x_i}}\Bigg].
\end{equation}
Now, continuing with Leibniz's method, we multiply both sides of (\ref{eq:36}) by ${\eta}(t)$ and, using (\ref{eq:38}) and (\ref{eq:39}), isolate ${g}_{k}(t)$ by integrating the left-hand side of (\ref{eq:37}), which equals the right-hand side of (\ref{eq:36}) multiplied by ${\eta}(t)$. We then have:
\begin{equation}\label{eq:40}
{\eta}(t')F_{k}(f_{l}(t')){g}_{k}(t')=-\int_{t_0}^{t'}{\eta}(t)\Bigg[\sum_{\substack{j=1 \\ j\neq k}}^{n}F_{j}(f_{l}(t))\dot{g}_{j}(t)+\sum_{\substack{j=1 \\ j\neq k}}^{n}\sum_{\substack{i=1 \\ i\neq k}}^{n}\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_j}}\dot{f}_{i}(t){g}_{j}(t)\Bigg]dt,
\end{equation}
\noindent
where $g_i(t_0)=0$ for all $i$, by the conditions (\ref{32}). Integrating by parts the first term in the integrand of (\ref{eq:40}), using (\ref{eq:39}) and again that $g_i(t_0)=0$, we obtain directly:
\begin{equation}\label{eq:41}
\begin{aligned}
&-\int_{t_0}^{t'}{\eta}(t)\sum_{\substack{j=1 \\ j\neq k}}^{n}F_{j}(f_{l}(t))\dot{g}_{j}(t)dt=-{\eta}(t')\sum_{\substack{j=1 \\ j\neq k}}^{n}F_{j}(f_{l}(t')){g}_{j}(t')\\&+\int_{t_0}^{t'}{\eta}(t)\sum_{\substack{i=1 \\ i\neq k}}^{n}\sum_{\substack{j=1 \\ j\neq k}}^{n}\dot{f}_{i}(t){g}_{j}(t)\Bigg(\frac{F_{j}(f_{l}(t))}{F_{k}(f_{l}(t))}\Bigg[\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_k}}-\frac{{\partial}F_{k}(f_{l}(t))}{{\partial}{x_i}}\Bigg]+\frac{{\partial}F_{j}(f_{l}(t))}{{\partial}{x_i}}\Bigg)dt.
\end{aligned}
\end{equation}
\noindent
Substituting (\ref{eq:41}) into (\ref{eq:40}), and collecting the integrand over the common denominator $F_{k}(f_{l}(t))$, we have:
\begin{equation}\label{eq:42}
\begin{aligned}
{\eta}(t')F_{k}(f_{l}(t')){g}_{k}(t')=&-{\eta}(t')\sum_{\substack{j=1 \\ j\neq k}}^{n}F_{j}(f_{l}(t')){g}_{j}(t')\\&+\sum_{\substack{i=1 \\ i\neq k}}^{n}\sum_{\substack{j=1 \\ j\neq k}}^{n}\int_{t_0}^{t'}\frac{{\eta}(t)\dot{f}_{i}(t){g}_{j}(t)}{F_{k}(f_{l}(t))}\Bigg(F_{j}(f_{l}(t))\Bigg[\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_k}}-\frac{{\partial}F_{k}(f_{l}(t))}{{\partial}{x_i}}\Bigg]\\&+F_{k}(f_{l}(t))\Bigg[\frac{{\partial}F_{j}(f_{l}(t))}{{\partial}{x_i}}-\frac{{\partial}F_{i}(f_{l}(t))}{{\partial}{x_j}}\Bigg]\Bigg)dt.
\end{aligned}
\end{equation}
\noindent
Since, along the curve $\gamma_{1}$, it holds that,
\begin{equation}\label{eq:43}
\sum_{i=1}^{n}F_{i}(f_{l}(t))\dot{f}_{i}(t)=0,
\end{equation}
\noindent
then,
\begin{equation}\label{eq:44}
\sum_{\substack{i=1 \\ i\neq k}}^{n}\sum_{\substack{j=1 \\ j\neq k}}^{n}\int_{t_0}^{t'}\frac{{\eta}(t)\dot{f}_{i}(t){g}_{j}(t)}{F_{k}(f_{l}(t))}F_{i}(f_{l}(t))\Bigg[\frac{{\partial}F_{k}(f_{l}(t))}{{\partial}{x_j}}-\frac{{\partial}F_{j}(f_{l}(t))}{{\partial}{x_k}}\Bigg]dt=0.
\end{equation}
Substituting (\ref{eq:44}) into (\ref{eq:42}), isolating ${g}_{k}(t')$ in the process, and identifying ${\mathfrak{R}}_{ijk}$ in the integrand, we have:
\begin{equation}\label{eq:45}
\begin{aligned}
{g}_{k}(t')=\frac{1}{{\eta}(t')F_{k}(f_{l}(t'))}\Bigg\{&-{\eta}(t')\sum_{\substack{j=1 \\ j\neq k}}^{n}F_{j}(f_{l}(t')){g}_{j}(t')\\&+\sum_{\substack{i=1 \\ i\neq k}}^{n}\sum_{\substack{j=1 \\ j\neq k}}^{n}\int_{t_0}^{t'}\frac{{\eta}(t)\dot{f}_{i}(t){g}_{j}(t)}{F_{k}(f_{l}(t))}{\mathfrak{R}}_{ijk}dt\Bigg\}.
\end{aligned}
\end{equation}
Let $t'{\rightarrow}{t_{*}}$, with $\epsilon_{1}$ arbitrarily small, where $|t_{*}-t_{0}|<\epsilon_{1}$, and likewise $\epsilon_{2}$, where $|t_{**}-t_{*}|<\epsilon_{2}$. Then, in the limit $\epsilon_{2}\;{\rightarrow}{\;}0$, a neighborhood $M{\:}{\subset}{\:}B$ forms around $p$, arbitrarily close to $p$, whose points are reachable from $p$ by curves on which ${\delta}{\xi}=0$. The only exception occurs if every function ${g}_{k}(t)$ is identically null, for all $t$. In the integrand of (\ref{eq:45}), this requires that,
\begin{equation}\label{eq:46}
\sum_{\substack{i=1 \\ i\neq k}}^{n}\dot{f}_{i}(t){\mathfrak{R}}_{ijk}=0,
\end{equation}
\noindent
since, by construction, ${\eta}={\eta}(t){\:}{\neq}{\:}0$, the ${g_j}(t)$ cannot all be fixed as zero and, obviously, ${F_{k}(f_{l}(t))}^{-1}{\:}{\neq}{\:}0$, for all $k$. Since the $\dot{f}_{i}(t)$ can be chosen arbitrarily, except $\dot{f}_{k}(t)$, which does not appear in (\ref{eq:46}), we conclude that ${\mathfrak{R}}_{ijk}=0$, for any $i$, $j$, $k$. Lemma \ref{lemma1} and Theorem \ref{itm:3} then complete the proof.
\hfill$\square$
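Though not part of the proof, the conclusion ${\mathfrak{R}}_{ijk}=0$ for integrable forms can be checked symbolically. The sketch below assumes the standard cyclic expression ${\mathfrak{R}}_{ijk}=F_i\left({\partial}F_j/{\partial}x_k-{\partial}F_k/{\partial}x_j\right)+F_j\left({\partial}F_k/{\partial}x_i-{\partial}F_i/{\partial}x_k\right)+F_k\left({\partial}F_i/{\partial}x_j-{\partial}F_j/{\partial}x_i\right)$, which matches the integrand assembled from (\ref{eq:42}) and (\ref{eq:44}) up to sign and index permutation; the two pfaffian forms used, ${\mu}\,d{\phi}$ and $dx_3-x_2\,dx_1$, are illustrative choices, not forms taken from the text.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def R(F, xs):
    # cyclic integrability expression for the pfaffian form sum_i F_i dx_i
    cyc = lambda a, b, c: F[a] * (sp.diff(F[b], xs[c]) - sp.diff(F[c], xs[b]))
    return sp.simplify(cyc(0, 1, 2) + cyc(1, 2, 0) + cyc(2, 0, 1))

# integrable by construction: delta xi = mu * d(phi), with mu = exp(x1), phi = x1*x2 + x3
mu, phi = sp.exp(x1), x1 * x2 + x3
F_int = [mu * sp.diff(phi, v) for v in (x1, x2, x3)]
print(R(F_int, (x1, x2, x3)))   # 0

# non-integrable: delta xi = dx3 - x2*dx1
F_non = [-x2, sp.Integer(0), sp.Integer(1)]
print(R(F_non, (x1, x2, x3)))   # -1
```

As expected, the expression vanishes identically for an exact form multiplied by an integrating factor, and is nonzero for the classical non-integrable form.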
\section{Global Integrability}\label{sec:global}
Carathéodory's Theorem \ref{itm:4} has historically been debated for guaranteeing only a \textit{local} integrability of pfaffian forms \cite{boyling}. However, its use in Classical Thermodynamics provides clues that the local nature of Carathéodory's Theorem lies in the generality of its premise in topological terms. More than that, when analyzed according to the descriptive needs of Classical Thermodynamics \cite{silvajunior}, this theorem appears to demand more than it needs in order to obtain an integrating factor, by presuming a non-connection relation between points of the space (in the present context, ${\mathbb {R}}^{n}$) valid for every, arbitrarily small, neighborhood in this space.
This indicates the possibility of a hidden content in Carathéodory's Theorem that can be revealed by an appropriate modification of the notion of \textit{neighborhood}. We do this next, and obtain as a result a sufficient condition for the integrability of pfaffian forms on all of $B$, except for a set of measure zero contained in $B$; this is what we call here \textit{global} integrability on $B$.
\begin{definition}\label{itm:3.1}{\normalfont(Surrounding line)}. Given a point $p=({x_1}^{0},{x_2}^{0},...,{x_n}^{0})\:{\in}{\:}B\:{\subseteq}{\:}{\mathbb {R}}^{n}$, the line ${\Pi}(x_{i})$ of the points $({x_1}^{0},{x_2}^{0},...,{x_i},...,{x_n}^{0})$, where the arbitrary variable $x_i$ is called the free variable, is called the surrounding line to $p$ associated with $x_i$.
\end{definition}
Notice that the union $\bigcup_{i=1}^{n} {\Pi}(x_{i})$ of the $n$ possible surrounding lines ${\Pi}(x_{i})$ to a point $p\:{\in}{\:}B\:{\subseteq}{\:}{\mathbb {R}}^{n}$ does not constitute a neighborhood $M$ of $p$.
\begin{theorem}\label{itm:5} A sufficient condition for the global integrability of a pfaffian form ${\delta}\xi$ in $n$ variables in an open $B$ is that on the surrounding line ${\Pi}(x_{i})$ of any point $p\:{\in}{\:}B$, for some free variable $x_{i}$, there exist points arbitrarily close to $p$ unreachable from $p$ by curves such that ${\delta}\xi=0$.
\end{theorem}
\noindent
\textbf{Proof.} Let ${\delta}\xi$ be a pfaffian form in $n$ variables, on an open $B\:{\subseteq}\:{\mathbb {R}}^{n}$, and $p=({x_1}^{0},{x_2}^{0},...,{x_n}^{0})$ an arbitrary point of $B$. Let $q=({x_1}^{0},{x_2}^{0},...,{x_n}^{*})$ also be a point on the surrounding line to $p$ associated with the variable $x_n$, ${\Pi}(x_{n})$, and let ${\gamma}$ be a curve in $B$ such that:
\begin{equation}\label{eq:47}
\sum_{i=1}^{n}{F_i}({\gamma})d{x_i}({\gamma})=0.
\end{equation}
With $|{x_n}^{*}-{x_n}^{0}|<{\varepsilon}$, let us assume that $\gamma$ passes through $p$ but does not pass through $q$, for any arbitrarily small $\varepsilon$. It follows from this that the equation,
\begin{equation}\label{eq:48}
d{x_n}({x_1},{x_2},...,{x_{n-1}})=-\sum_{i=1}^{n-1}\frac{{F_i}({x_1},{x_2},...,{x_n})}{{F_n}({x_1},{x_2},...,{x_n})}d{x_i},
\end{equation}
\noindent
arising from (\ref{eq:47}), exhibits $d{x_n}$ as the differential of a function ${x_n}={x_n}({x_1},{x_2},...,{x_{n-1}})$, since, as we know, the function $F_n({x_1},{x_2},...,{x_n})$ is not identically null. We obtain ${x_n}={x_n}({x_1},{x_2},...,{x_{n-1}})$ explicitly by integrating (\ref{eq:48}),
\begin{equation}\label{eq:49}
{x_n}({x_1},{x_2},...,{x_{n-1}})={x_n}^{0}-\int_{({x_1}^{0},{x_2}^{0},...,{x_{n-1}}^{0})}^{({x_1},{x_2},...,{x_{n-1}})}\sum_{i=1}^{n-1}\frac{{F_i}({x_1},{x_2},...,{x_n})}{{F_n}({x_1},{x_2},...,{x_n})}d{x_i},
\end{equation}
\noindent
where ${x_n}^{0}={x_n}({x_1}^{0},{x_2}^{0},...,{x_{n-1}}^{0})$. Now, if we let the quantities ${x_1}^{0},{x_2}^{0},...,{x_{n-1}}^{0}$ vary, we obtain ${x_1},{x_2},...,{x_{n-1}},{x_n}^{0}$ as the new independent variables on which the function ${x_n}$, now ${x_n}={x_n}({x_1},{x_2},...,{x_n}^{0})$, depends. The function ${x_n}$ is continuous with respect to the variables ${x_1},{x_2},...,{x_{n-1}},{x_n}^{0}$, and differentiable with respect to the variables ${x_1},{x_2},...,{x_{n-1}}$, by (\ref{eq:48}). Hence:
\begin{equation}\label{eq:50}
\frac{{\partial}{x_n}}{{\partial}{x_i}}=-\frac{F_i}{F_n}.
\end{equation}
Furthermore, by equation (\ref{eq:49}), the quotients ${F_i}/{F_n}$ might depend on ${x_n}^{0}$ in some way. However, if we fix the other variables ${x_1},{x_2},...,{x_{n-1}}$ in (\ref{eq:49}), we get that ${x_n}={x_n}({x_1},{x_2},...,{x_n}^{0})$ is a monotone function of ${x_n}^{0}$. This remains true on any closed interval contained in ${\Pi}(x_{n})$ on which the assumptions posited so far can be maintained. As a consequence, by Lebesgue's Differentiation Theorem \cite{botsko}, ${x_n}={x_n}({x_1},{x_2},...,{x_n}^{0})$ is a differentiable function of ${x_n}^{0}$ on all of $B$, except for a set of measure zero contained in $B$. Thus, we observe that the differential of the function ${x_n}={x_n}({x_1},{x_2},...,{x_n}^{0})$,
\begin{equation}\label{eq:51}
d{x_n}({x_1},{x_2},...,{x_n}^{0})=\sum_{i=1}^{n-1}\frac{{\partial}{x_n}}{{\partial}{x_i}}d{x_i}+\frac{{\partial}{x_n}}{{\partial}{x_n}^{0}}d{x_n}^{0},
\end{equation}
\noindent
likewise refers to all of $B$, except for the same set of measure zero contained in $B$. Rewriting (\ref{eq:47}) explicitly, and using (\ref{eq:51}) and (\ref{eq:50}), we have:
\begin{equation}\label{eq:52}
\begin{aligned}
{\delta}{\xi}=&\sum_{i=1}^{n-1}{F_i}d{x_i}+{F_n}d{x_n}\\=&\sum_{i=1}^{n-1}{F_i}d{x_i}+{F_n}\Bigg(\sum_{i=1}^{n-1}\frac{{\partial}{x_n}}{{\partial}{x_i}}d{x_i}+\frac{{\partial}{x_n}}{{\partial}{x_n}^{0}}d{x_n}^{0}\Bigg)\\=&{F_n}\frac{{\partial}{x_n}}{{\partial}{x_n}^{0}}d{x_n}^{0}.
\end{aligned}
\end{equation}
\noindent
Therefore, ${\delta}{\xi}$ is integrable almost everywhere on $B$.
\hfill$\square$
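A minimal two-variable illustration of the cancellation that produces the last line of (\ref{eq:52}) can be run symbolically; the form $x_2\,dx_1+x_1\,dx_2$ below is an illustrative choice, not an example from the text.

```python
import sympy as sp

# n = 2 sketch of the construction in the proof: delta xi = x2 dx1 + x1 dx2,
# i.e. F1 = x2, F2 = x1 (an illustrative integrable form).
x1, x10, x20 = sp.symbols('x1 x10 x20', positive=True)

# Solve dx2/dx1 = -F1/F2 = -x2/x1 with x2(x10) = x20  ->  x2 = x10*x20/x1.
x2 = x10 * x20 / x1
assert sp.simplify(sp.diff(x2, x1) + x2 / x1) == 0

# Analogue of Eq. (52): with x1 and x2^0 as independent variables, the dx1
# contribution cancels and delta xi reduces to F2 * (d x2 / d x2^0) * dx2^0.
F1, F2 = x2, x1
coeff_dx1 = sp.simplify(F1 + F2 * sp.diff(x2, x1))    # coefficient of dx1
coeff_dx20 = sp.simplify(F2 * sp.diff(x2, x20))       # coefficient of dx2^0
print(coeff_dx1, coeff_dx20)   # 0 and x10
```

Here ${\delta}{\xi}=d(x_1x_2)$ and indeed $x_1x_2={x_1}^{0}{x_2}^{0}$ along solutions, so the surviving coefficient ${x_1}^{0}$ is exactly $F_n\,{\partial}x_n/{\partial}{x_n}^{0}$.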
\section{Applications and concluding remarks}\label{sec:finais}
In this paper we introduce the reader to integrability conditions for pfaffian forms on ${\mathbb {R}}^{n}$, apart from the well-known Frobenius Theorem. We divide our discussion between local aspects, in section \ref{sec:local}, and global aspects, in section \ref{sec:global}, with respect to integrability. Inspired by Carathéodory's Theorem, and its use in Classical Thermodynamics \cite{silvajunior}, an integrability criterion of \textit{global} character, namely Theorem \ref{itm:5}, was obtained in section \ref{sec:global}.
Theorem \ref{itm:5}, when applied to Classical Thermodynamics, may generate a very important result: the construction of an almost everywhere differentiable entropy function, such that the regions where differentiability fails are, by the experimental justification of the theory, the points of the thermodynamic space of states related to phase transitions. Indeed, introducing the zeroth law of thermodynamics and consequently the concept of an empirical temperature $\vartheta$, this quantity can now be identified as the free variable in the context of Theorem \ref{itm:5}; i.e., we assume that $\vartheta=x_n$. Next, in accordance with \cite{buchdahl2}, by also identifying ${\delta}{\xi}$ as the quantity of heat ${\delta}{\altmathcal{Q}}$, ${F_n}={F_{\vartheta}}$, and ${x_n}^{0}={\vartheta}^{0}$, the last expression in (\ref{eq:52}) can be rewritten once one observes the deduction of the premise of Theorem \ref{itm:5} from Kelvin's, or Clausius's, statement of the second law of thermodynamics \cite{silvajunior}. Then, changing to new variables ${\mu}$ and $\sigma$, one obtains,
\begin{equation*}
{\delta}{\altmathcal{Q}}={\mu}d{\sigma},
\end{equation*}
\noindent
where it is not difficult to check that $\sigma$ is an almost everywhere differentiable function of the appropriate variables under consideration. With a few more thermodynamic arguments \cite{buchdahl2}, one can conclude that
\begin{equation*}
{\delta}{\altmathcal{Q}}=Td{\altmathcal{S}},
\end{equation*}
\noindent
where $T$ is the absolute temperature and ${\altmathcal{S}}$ the absolute entropy. This absolute entropy function, or just entropy function, has the following properties: a) additivity (over thermodynamic systems); b) extremization (maximization or minimization\footnote{For the reader familiar with physics, this possibility of maximization or minimization, together with the absence of a property of monotone increasing variation with respect to the internal energy, is actually one of the blessings of Carathéodory's thermodynamics; a wide set of experimental evidence supports these apparent gaps in the theory \cite{lavis}.}) in an irreversible adiabatic process (the premise of Theorem \ref{itm:5} can be generalized from the thermodynamic point of view of irreversible processes); c) differentiability almost everywhere. With the experimentally based assumption of local bounded variation of the thermodynamic quantities \cite{lieb}, we can assume that the first derivatives of the entropy function also have the local bounded variation property. This, together with differentiability almost everywhere, yields a final property for the entropy function in question: d) local Lipschitz continuity. Actually, by Rademacher's Theorem, the properties of this entropy function can be summarized as: additivity, extremization, and local Lipschitz continuity.
In Analytical Mechanics as well, the application of an integrability criterion for pfaffian forms is the procedure that verifies whether non-holonomic constraints are, or are not, integrable. Constraints are relations between the generalized coordinates, the generalized velocities, and eventually the time coordinate; when a constraint is a relation between only the generalized coordinates and time, it is called holonomic, otherwise non-holonomic. A set of $n$ non-holonomic constraints imposed on a mechanical system is often represented by $n$ corresponding Pfaff equations: $\delta{{\xi}_{1}}=0,...,\delta{{\xi}_{n}}=0$. Verifying the integrability of non-holonomic constraints has a practical importance for Analytical Mechanics: integrable constraints can be applied directly in the lagrangian function of the mechanical system, making it easier to obtain the equations of motion by solving the Euler-Lagrange equations \cite{nivaldo}. Besides that, the traditional method for the integrability test is the Frobenius Theorem, which calls for handling exterior algebra and requires analytical knowledge of the Pfaff equations in question.
On the other hand, in terms of the original Carathéodory's Theorem, the integrability test for non-holonomic constraints can be performed without further mathematical machinery, instead making use of physical inferences in the phase space of the mechanical system and its respective constraints. For example, in the traditional problem of a perfect cylinder rolling without sliding on an inclined plane, one easily sees that there are states in the phase space of the system that are not accessible, hence the respective constraint must be integrable. From this point of view, the original Carathéodory's Theorem shows itself to be more physically substantial than the Frobenius Theorem, although the latter has greater rigour and is more useful, especially with more complicated constraints. However, the application, and the respective descriptive consequences, of a variant of Carathéodory's Theorem, like Theorem \ref{itm:5}, to the integrability of non-holonomic constraints must be better investigated.
\section{Introduction}
\label{sec_intro}
Considerable progress has been made in coronal seismology thanks to the abundantly identified waves
and oscillations in the structured solar atmosphere
(for a recent review,
see~\citeauthor{2012RSPTA.370.3193D}~\citeyear{2012RSPTA.370.3193D},
and also~\citeauthor{2007SoPh..246....1B}~\citeyear{2007SoPh..246....1B},
\citeauthor{2009SSRv..149....1N}~\citeyear{2009SSRv..149....1N},
\citeauthor{2011SSRv..158..167E}~\citeyear{2011SSRv..158..167E} for three recent topical issues).
Equally important is a detailed theoretical understanding of the collective
wave modes supported by magnetized cylinders~\citep[e.g.,][]{2000SoPh..193..139R}.
While kink waves (with azimuthal wavenumber $m=1$) have attracted much attention
since their measurements with TRACE~\citep{1999Sci...285..862N,1999ApJ...520..880A},
sausage waves prove important in interpreting second-scale quasi-periodic pulsations (QPPs)
in the lightcurves of solar flares~\citep[see][for a recent review]{2009SSRv..149..119N}.
Their importance is strengthened given their {\bf recent detection} in both the chromosphere~\citep{2012NatCo...3E1315M}
and the photosphere~\citep{2014A&A...563A..12D,2015ApJ...806..132G}.
Standing sausage modes are well understood.
For instance, two distinct regimes are known to exist, depending on
the longitudinal wavenumber $k$~\citep{2005LRSP....2....3N}.
The trapped regime results when $k$ exceeds some critical value $k_{\rm c}$,
whereas the leaky regime
arises when the opposite is true.
Both eigen-mode analyses~\citep{2007AstL...33..706K,2014ApJ...781...92V}
and numerical simulations from an initial-value-problem perspective~\citep[e.g.,][]{2012ApJ...761..134N}
indicate that the period $P$ of sausage modes increases smoothly with decreasing $k$
until reaching some $P_0$ in the thin-cylinder limit
($ka\rightarrow 0$ with $a$ being the cylinder radius).
Likewise, being identically infinite in the trapped regime, the attenuation time $\tau$ decreases
with decreasing $k$ before saturating at $\tau_0$ when $ka =0$.
Furthermore, $P_0$ is determined primarily by $a/v_{\rm Ai}$, where $v_{\rm Ai}$ is the
Alfv\'en speed in the cylinder~\citep{1984ApJ...279..857R}, with the detailed transverse density distribution
playing {\bf an important role (\citeauthor{2012ApJ...761..134N}~\citeyear{2012ApJ...761..134N},
also \citeauthor{2015arXiv150702169C}~\citeyear{2015arXiv150702169C})}.
This is why second-scale QPPs are attributed to standing sausage modes, since $a/v_{\rm Ai}$ is of the order
of seconds for typical coronal structures.
On the other hand, the ratio $\tau_0/P_0$ is basically proportional to the density contrast~\citep[e.g.,][]{2007AstL...33..706K},
meaning that high-quality sausage modes are associated with coronal structures with densities
considerably exceeding their surroundings.
Interestingly, the dispersive behavior of trapped modes is important also in understanding impulsively
generated sausage waves~\citep{1984ApJ...279..857R}.
When measured at a distance from the source, the signals from these propagating waves possess three phases:
periodic, quasi-periodic, and decay.
The frequency dependence of the longitudinal group speed $v_{\rm gr}$ is crucial in this context.
In particular, whether the quasi-periodic phase exists depends on the existence of a local minimum in $v_{\rm gr}$,
which in turn depends on the density profile transverse
to the structure~\citep{1988A&A...192..343E, 1995SoPh..159..399N}.
This analytical expectation, extensively reproduced in numerical
simulations~\citep[e.g.,][]{1993SoPh..144..101M,2004A&A...422.1067S,2004MNRAS.349..705N},
well explains the time signatures of the wave trains discovered with the Solar Eclipse Coronal
Imaging System~\citep{2001MNRAS.326..428W,2002MNRAS.336..747W,2003A&A...406..709K}
{\bf as well as those measured with SDO/AIA~\citep{2013A&A...554A.144Y}}.
We intend to examine the spatial damping of leaky sausage waves propagating
along coronal cylinders in response to photospheric motions due to, say, granular convection~\citep{1996ApJ...472..398B}.
One reason for conducting this study is that,
besides the observations showing that propagating sausage waves abound
in the chromosphere~\citep{2012NatCo...3E1315M},
a recent study clearly demonstrates the spatial damping of sausage waves propagating from
the photosphere to the transition region in a pore~\citep{2015ApJ...806..132G}.
Another motivation is connected to the intensive interest~\citep{2010A&A...524A..23T,2013A&A...551A..39H,2013A&A...551A..40P}
in employing resonant absorption to understand the spatial damping of propagating
kink waves measured with the Coronal Multi-Channel Polarimeter (CoMP) instrument~\citep{2007Sci...317.1192T,2009ApJ...697.1384T}.
A leading mechanism for interpreting the temporal damping of standing
kink modes~\citep[][and references therein]{2002ApJ...577..475R, 2003ApJ...598.1375A},
resonant absorption is found to attenuate propagating kink waves with
a spatial length inversely proportional to wave frequency~\citep{2010A&A...524A..23T}.
If attributing the generation of these kink waves to broadband photospheric perturbations,
one expects that resonant absorption essentially filters out the high-frequency components.
One then naturally asks: What role does wave leakage play in attenuating propagating sausage waves?
And what will be the frequency dependence of the associated damping length?
This manuscript is structured as follows.
We present the necessary equations, the dispersion relation (DR) in particular, in Sect.~\ref{sec_model},
and then present our numerical solutions to the DR in Sect.~\ref{sec_numres},
where we also derive a couple of analytical approximations to the DR for validation purposes.
Finally, a summary is given in Sect.~\ref{sec_conc}.
\section{Problem Formulation}
\label{sec_model}
We consider sausage waves propagating in a structured corona modeled by a plasma cylinder with radius $a$
embedded in a uniform magnetic field ${\bf B}=B\hat{z}$,
where a cylindrical coordinate system $(r,\theta,z)$ is adopted.
The cylinder is directed along the $z$-direction.
A piece-wise constant (top-hat) density profile is adopted, with the densities inside and external to
the cylinder being $\rho_{\rm i}$ and $\rho_{\rm e}$, respectively ($\rho_{\rm e} < \rho_{\rm i}$).
The Alfv\'en speeds, $v_{\rm Ai}$ and $v_{\rm Ae}$, follow from the definition $v_{\rm A} = \sqrt{B^2/4\pi\rho}$.
Appropriate for the solar corona, zero-$\beta$, ideal MHD equations are adopted.
In such a case, sausage waves do not perturb the $z$-component of the plasma velocity.
Let $\delta v_r$ denote the radial velocity perturbation, and
$\delta b_r$, $\delta b_z$ denote the radial and longitudinal components of the perturbed magnetic field $\delta \vec{b}$,
respectively.
The perturbed magnetic pressure, or equivalently total pressure in the zero-$\beta$ case,
is then $\delta p_{\mathrm{tot}} = B\delta b_z/4\pi$.
Let us Fourier-decompose any perturbed value $\delta f(r, z;t)$ as
\begin{eqnarray}
\label{eq_Fourier}
\delta f(r,z;t)={\rm Re}\left\{\tilde{f}(r)\exp\left[-i\left(\omega t-kz\right)\right]\right\}~.
\end{eqnarray}
{\bf With the definition
\begin{eqnarray}
\label{eq_mu}
\mu_{\rm i}^2=\frac{\omega^2}{v_{\rm Ai}^2}-k^2, \hspace{0.2cm}
\mu_{\rm e}^2=\frac{\omega^2}{v_{\rm Ae}^2}-k^2 \hspace{0.2cm}
(-\frac{\pi}{2} < \arg{\mu}_{\rm i}, \arg{\mu}_{\rm e} \le \frac{\pi}{2}) ,
\end{eqnarray}
linearizing the ideal MHD equations then leads to~\citep[e.g.,][]{1986SoPh..103..277C}
}
\begin{eqnarray}
\label{eq_ptilde}
\frac{1}{r}\left(r\tilde{p}'_{\rm tot}\right)'
+ \mu^2 \tilde{p}_{\rm tot} = 0 ,
\end{eqnarray}
where the prime $' = d/dr${\bf, and this equation is valid
both inside (denoted by the subscript i) and outside (the subscript e) the cylinder.}
The solutions to Eq.~(\ref{eq_ptilde}) are given by
\begin{eqnarray}
\tilde{p}_{\rm tot,i} = A_{\rm i} J_0(\mu_{\rm i} r),
\tilde{p}_{\rm tot,e} = A_{\rm e} H_0^{(1)}(\mu_{\rm e} r),
\end{eqnarray}
where $J_n$ and $H_n^{(1)}$ are the $n$-th-order Bessel and Hankel functions of
the first kind, respectively (here $n=0$).
For future reference, we also give the expressions for the Fourier amplitudes
of some relevant perturbations,
\begin{eqnarray}
\label{eq_Fourier_amp}
\tilde{b}_z = \frac{4\pi \tilde{p}_{\rm tot}}{B}, \hskip 0.1cm
\tilde{v}_r = \frac{-i \omega}{\mu^2}\frac{\tilde{b}_z'}{B}, \hskip 0.1cm
\mbox{and} \hskip 0.1cm
\tilde{b}_r = -\frac{k B}{\omega} \tilde{v}_r .
\end{eqnarray}
In addition, the energy flux density carried by the sausage waves is given by
\begin{eqnarray}
\label{eq_eflx}
\vec{F} = -\frac{1}{4\pi}\left(\delta \vec{v}\times \vec{B}\right)\times \delta \vec{b}
= \delta p_{\mathrm{tot}}\delta v_r \hat{r} - \frac{\delta v_r \delta b_r}{4\pi}B\hat{z} .
\end{eqnarray}
The dispersion relation (DR) for sausage waves follows from the requirements that
{\bf the Fourier amplitudes of both the total pressure perturbation $\tilde{p}_{\rm tot}$
and the radial velocity perturbation $\tilde{v}_r$}
be continuous at the
cylinder boundary $r=a$.
{\bf Note that more properly, the continuity of the Fourier amplitude of the radial Lagrangian displacement $\tilde{\xi}_r$
should be used in place of that of $\tilde{v}_r$.
Even though in the static case the two requirements are equivalent given that $\tilde{v}_r = -i\omega\tilde{\xi}_r$,
the equivalence will not be present when axial flows exist
\citep[e.g.,][]{1992SoPh..138..233G, 2014A&A...568A..31L}.}
The DR reads~\citep[e.g.,][]{1986SoPh..103..277C}
\begin{eqnarray}
\label{eq_DR}
\frac{\mu_{\rm i}}{\mu_{\rm e}}=\frac{J_1(\mu_{\rm i}a)}{J_0(\mu_{\rm i}a)}
\frac{H^{(1)}_0(\mu_{\rm e}a)}{H^{(1)}_1(\mu_{\rm e}a)}.
\end{eqnarray}
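For concreteness, Eq.~(\ref{eq_DR}) is straightforward to solve numerically. The sketch below uses illustrative parameters ($a=1$, $v_{\rm Ai}=1$, $\rho_{\rm i}/\rho_{\rm e}=4$; these are not the parameters of any figure here) and brackets a trapped-mode root at a real frequency above the cutoff. In the trapped regime the residual of Eq.~(\ref{eq_DR}) is real for real $k$, so a one-dimensional bracketing solver suffices; leaky modes, with complex $k$, would require a complex root finder.

```python
import numpy as np
from scipy.special import jv, hankel1
from scipy.optimize import brentq

# Illustrative parameters (not taken from the figures):
a, vAi, contrast = 1.0, 1.0, 4.0
vAe = vAi * np.sqrt(contrast)
omega = 3.5                      # real frequency above the cutoff -> trapped regime

def dr(k):
    """Residual of the sausage-wave DR: mu_i J0 H1 - mu_e J1 H0 = 0."""
    mui = np.sqrt(complex(omega**2 / vAi**2 - k**2))   # principal branch matches the stated cut
    mue = np.sqrt(complex(omega**2 / vAe**2 - k**2))
    return (mui * jv(0, mui * a) * hankel1(1, mue * a)
            - mue * jv(1, mui * a) * hankel1(0, mue * a))

# Bracket within the trapped band omega/vAe < k < omega/vAi and solve.
k = brentq(lambda kk: dr(kk).real, 1.8, 2.5)
print(k, omega / k)
```

The recovered phase speed $\omega/k$ lies between $v_{\rm Ai}$ and $v_{\rm Ae}$, as required for a trapped mode.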
Throughout this study, we examine only the solutions that correspond to the lowest
critical longitudinal wavenumber in the trapped regime.
Unless otherwise specified, we solve Eq.~(\ref{eq_DR}) by assuming that
the angular frequency $\omega$ is real,
and the longitudinal wavenumber $k$ is complex ($k = k_{\rm R} + i k_{\rm I}$).
For these propagating waves, the energy flux density averaged over a wave period $2\pi/\omega$ is
\begin{eqnarray}
&& \left< F_r \right> = \frac{1}{2}\mathrm{Re}\left(\tilde{p}_{\rm tot} \tilde{v}_r^*\right) \exp(-2 k_{\rm I} z) ,
\label{eq_eflx_r} \\
&& \left< F_z \right> = \frac{1}{2}\mathrm{Re}\left( -\frac{B}{4\pi}\tilde{b}_r\tilde{v}_r^*\right) \exp(-2 k_{\rm I} z) ,
\label{eq_eflx_z}
\end{eqnarray}
where the subscripts $r$ and $z$ denote the radial and longitudinal components, respectively.
Furthermore, $\tilde{f}^*$ means taking the complex conjugate of some complex-valued $\tilde{f}$.
{\bf (See Appendix~A for a derivation of Eqs.~(\ref{eq_eflx_r}) and (\ref{eq_eflx_z}).)}
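The factor of $1/2$ and the combination $\mathrm{Re}(\tilde{p}_{\rm tot}\tilde{v}_r^*)$ follow from averaging products of harmonic signals over one period; a quick numerical check, with arbitrary illustrative amplitudes, is:

```python
import numpy as np

# Period-average of Re[p e^{-i w t}] * Re[v e^{-i w t}] equals Re(p v*)/2.
# The complex amplitudes below are arbitrary illustrative values.
p, v = 0.7 - 0.4j, -0.2 + 1.1j
w = 2.0
N = 4096
t = np.linspace(0.0, 2.0 * np.pi / w, N, endpoint=False)  # uniform samples over one period
product = np.real(p * np.exp(-1j * w * t)) * np.real(v * np.exp(-1j * w * t))
avg = product.mean()             # exact for trigonometric polynomials on a uniform grid
print(avg, 0.5 * np.real(p * np.conj(v)))
```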
{\bf Before proceeding, we note that a comprehensive study was conducted recently
by~\citet[][hereafter M15]{2015A&A...578A..60M}
to work out the specific expressions for the energy and energy flux densities
for both fast and slow sausage waves.
That study differs from ours primarily in the objectives.
To enable a calculation of the energy content of
sausage waves in a variety of solar structures, M15 employed
the specific expressions for the eigen-functions and focused on trapped waves.
However, the aim of the present study is to examine the spatial damping of propagating leaky waves.
For this purpose, the specific expressions for the energy flux densities are not necessary.
Rather, we only need to evaluate the sign of $\left< F_r \right>$ at large distances from
the cylinder to offer a physical explanation for the spatial damping.
Likewise, the sign of $\left< F_z \right>$ is necessary to address whether the spatially damped waves
can impart their energy upwards.
We would like to stress that, while the energetics of leaky waves cannot be addressed with an eigen-value-problem approach
(see our discussion towards the end of Sect.~\ref{sec_numres}),
evaluating the signs of $\left< F_r \right>$ and $\left< F_z \right>$ makes physical sense.
A similar conclusion was reached in M15 for the trapped waves at the critical axial wavenumber,
below which trapped waves are no longer allowed.
Regarding the technical details, in the present study the energy flux density, Eq.~(\ref{eq_eflx}), is equivalent to
the Poynting flux given the cold MHD approximation.
In other words, the contribution due to thermal pressure, given by $\left<\vec{T}\right>$ in Eq.~(4) of M15,
vanishes here.
}
\section{Results}
\label{sec_numres}
Figure~\ref{fig_vali} presents the solutions to the DR for two different density ratios,
one mild ($\rho_{\rm i}/\rho_{\rm e} =4$, the red curves
and symbols)
and the other rather large ($\rho_{\rm i}/\rho_{\rm e} = 100$, blue).
The real (the filled dots) and imaginary (open) parts of the longitudinal wavenumber $k$ are presented
as functions of the angular frequency $\omega$.
The vertical dash-dotted lines separate the trapped regime (where $k_{\rm I} \equiv 0$)
from the leaky one ($k_{\rm I} \ne 0$), and correspond to
$\omega_{\rm c} = k_{\rm c} v_{\rm Ae}$
with $k_{\rm c} = 2.4048/(a\sqrt{\rho_{\rm i}/\rho_{\rm e}-1})$~\citep{2005LRSP....2....3N}.
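As a quick numerical illustration (the values $a = 1$ and $v_{\rm Ai} = 1$ are assumptions for this sketch, and $v_{\rm Ae} = v_{\rm Ai}\sqrt{\rho_{\rm i}/\rho_{\rm e}}$ in cold MHD), the cutoff values for the two density ratios can be evaluated directly:

```python
import math

# Illustrative evaluation of the cutoff wavenumber and frequency.
# The parameter values (a = 1, v_Ai = 1) are assumptions for this sketch.
def cutoff(a, v_Ai, density_ratio):
    # k_c = 2.4048 / (a * sqrt(rho_i/rho_e - 1)); omega_c = k_c * v_Ae
    k_c = 2.4048 / (a * math.sqrt(density_ratio - 1.0))
    v_Ae = v_Ai * math.sqrt(density_ratio)  # cold MHD: v_A scales as rho^(-1/2)
    return k_c, k_c * v_Ae

for ratio in (4.0, 100.0):
    k_c, omega_c = cutoff(a=1.0, v_Ai=1.0, density_ratio=ratio)
    print(f"rho_i/rho_e = {ratio:5.0f}: k_c a = {k_c:.4f}, omega_c a/v_Ai = {omega_c:.4f}")
```

Both contrasts give $\omega_{\rm c}$ of order a few $v_{\rm Ai}/a$, even though the corresponding $k_{\rm c}$ values differ by a factor of several.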
Let us leave the physical interpretation of the results till later,
and for now provide a validation study on the numerical solutions.
To this end, the dispersion curves in the neighborhood of $\omega_{\rm c}$
are enlarged in Fig.~\ref{fig_vali}b.
Assuming that $|\Delta \omega| = |\omega - \omega_{\rm c}| \ll \omega_{\rm c}$,
one can derive an analytical expression,
\begin{eqnarray}
\label{eq_kc}
\Delta k =
\frac{1+\frac{i}{\pi}
\left\{\ln\left[\frac{k^2_{\rm c}a^2}{2}\left(\frac{\Delta \omega}{\omega_{\rm c}}-\frac{\Delta k}{k_{\rm c}}\right)\right]
+2\gamma-\frac{v_{\rm Ae}^2}{v_{\rm Ai}^2}\right\}}
{1+\frac{i}{\pi}
\left\{\ln\left[\frac{k^2_{\rm c}a^2}{2}\left(\frac{\Delta \omega}{\omega_{\rm c}}-\frac{\Delta k}{k_{\rm c}}\right)\right]
+2\gamma-1\right\}}
\frac{k_{\rm c}}{\omega_{\rm c}}\Delta \omega ,
\end{eqnarray}
where $\Delta k = k-k_{\rm c}$.
Interestingly, Eq.~(\ref{eq_kc}) agrees with Eq.~(16) in~\citet{2014ApJ...781...92V} even though standing modes
with real $k$ and complex-valued $\omega$ are examined there.
The primary difference between the two is
the appearance of the Euler constant $\gamma = 0.577$, which we find necessary to retain
when the logarithmic term does not substantially exceed unity.
In Fig.~\ref{fig_vali}b, the solid curves represent the solutions to Eq.~(\ref{eq_kc}).
It can be seen that they closely approximate the solutions to the full DR given by the dots
in both trapped and leaky regimes.
Another analytically tractable portion of the dispersion curves is the limit $\omega \rightarrow 0$.
In this case, the DR can be approximated by
\begin{eqnarray}
\label{eq_smallomega}
\mu_{\rm i}a = {\rm arctan}\left(\frac{\mu_{\rm e}-i\mu_{\rm i}}{\mu_{\rm e} +i\mu_{\rm i}}\right)~.
\end{eqnarray}
To arrive at Eq.~(\ref{eq_smallomega}), we have assumed that $|\mu_{\rm i} a|, |\mu_{\rm e} a| \gg 1$,
which can be justified a posteriori.
The real (imaginary) part of the solution to Eq.~(\ref{eq_smallomega})
is presented by the curves in Fig.~\ref{fig_vali}c (Fig.~\ref{fig_vali}d),
and is found to agree well with the solutions to the full DR, represented by the dots.
It is also clear that
in the low-frequency portion the solutions to the DR for the two density ratios are close to each other,
even though the ratios differ substantially.
This is understandable from Eqs.~(\ref{eq_smallomega}) and (\ref{eq_mu}),
because $\mu_{\rm i}^2$ and $\mu_{\rm e}^2$ both approach $-k^2$ when $\omega$ approaches zero,
therefore having little dependence on the Alfv\'en speeds or densities.
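This insensitivity is easy to check numerically. The sketch below assumes the cold-MHD expressions $\mu_{\rm i,e}^2 = \omega^2/v_{\rm Ai,e}^2 - k^2$ (our reading of Eq.~(\ref{eq_mu})) together with illustrative parameter values:

```python
import math

# Sketch: as omega -> 0, both mu_i^2 and mu_e^2 approach -k^2, so the
# low-frequency DR barely feels the density contrast.  The expressions
# mu_j^2 = omega^2/v_Aj^2 - k^2 and all parameter values here are
# assumptions for illustration.
def mu2(omega, k, v_A):
    return (omega / v_A) ** 2 - k ** 2

a, v_Ai, k = 1.0, 1.0, 1.0
omega = 1e-3 * v_Ai / a          # "low frequency"
for ratio in (4.0, 100.0):
    v_Ae = v_Ai * math.sqrt(ratio)
    dev_i = abs(mu2(omega, k, v_Ai) + k**2)   # = omega^2 / v_Ai^2
    dev_e = abs(mu2(omega, k, v_Ae) + k**2)   # = omega^2 / v_Ae^2
    print(f"rho_i/rho_e = {ratio:5.0f}: |mu_i^2 + k^2| = {dev_i:.1e}, "
          f"|mu_e^2 + k^2| = {dev_e:.1e}")
```

Both deviations are of order $\omega^2$, regardless of the density ratio.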
\begin{figure}
\centering
\includegraphics[width=0.95\hsize]{f1.eps}
\caption{Dependence of the real ($k_{\rm R}$, the solid dots) and imaginary ($k_{\rm I}$, open dots)
parts of the longitudinal wavenumber on the angular frequency $\omega$.
Two density ratios $\rho_{\rm i}/\rho_{\rm e} = 4$ and $100$ are examined, and are given
by the symbols and curves in red and blue, respectively.
The vertical dash-dotted lines correspond to the critical angular frequency $\omega_{\rm c}$ that
separates the leaky (to the left of $\omega_{\rm c}$) from trapped (right) regimes.
Figure~\ref{fig_vali}a presents an overview of the dispersion curves,
while Fig.~\ref{fig_vali}b (Figs.~\ref{fig_vali}c and \ref{fig_vali}d)
examines the portion where $\omega$ is close to $\omega_{\rm c}$ ($\omega$ approaches zero).
The dots are found by numerically solving the full dispersion relation (DR, Eq.~(\ref{eq_DR}))
by assuming a real $\omega$ but a complex-valued $k=k_{\rm R} + i k_{\rm I}$,
while the curves represent the solutions to the approximate DR (see text for details).
}
\label{fig_vali}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\hsize]{f2.eps}
\caption{Dispersion curves of (a) propagating and (b) standing sausage waves.
In (a) the real ($k_{\rm R}$) and imaginary ($k_{\rm I}$)
parts of the longitudinal wavenumber are given as functions of the real
angular frequency $\omega$.
In (b) the real ($\omega_{\rm R}$) and imaginary ($\omega_{\rm I}$) parts of
$\omega$ are shown as functions of the real wavenumber $k$, which is
plotted on the vertical axis.
In both panels, the solid (dashed) curves represent the trapped (leaky) regime.
Two density ratios ($\rho_{\rm i}/\rho_{\rm e} = 4$ and $100$) are examined
as represented by the red and blue curves, respectively.
}
\label{fig_prop_vs_stand}
\end{figure}
Figure~\ref{fig_prop_vs_stand}a presents once again the dispersion curves for propagating sausage waves,
where the real and imaginary parts of $k$ are given as functions of $\omega$.
For comparison, Fig.~\ref{fig_prop_vs_stand}b presents the dispersion curves for standing waves,
obtained by solving Eq.~(\ref{eq_DR}) for complex-valued $\omega$ ($\omega = \omega_{\rm R} + i \omega_{\rm I}$)
at given real values of $k$.
Note that here $k$ is given by the vertical axis, and $-\omega_{\rm I}$ is plotted instead of $\omega_{\rm I}$
because $\omega_{\rm I} \le 0$.
In both cases, we examine two density ratios with $\rho_{\rm i}/\rho_{\rm e}$ being $4$ (the red curves)
and $100$ (blue), and present the trapped (leaky) regime with the solid (dashed) curves.
One can see that in the trapped regime, Figs.~\ref{fig_prop_vs_stand}a and \ref{fig_prop_vs_stand}b
agree exactly with one another, as expected for the situation where both $\omega$ and $k$ are real.
In the leaky regime, however, some distinct differences appear.
For standing waves, with decreasing $k$, the real (imaginary) part of the angular frequency $\omega_{\rm R}$ ($\omega_{\rm I}$)
decreases (increases in magnitude) and saturates when $k$ approaches zero.
This $k$-dependence of both $\omega_{\rm R}$ and $\omega_{\rm I}$ is well
understood~\citep[e.g.,][]{2007AstL...33..706K,2012ApJ...761..134N,2014ApJ...781...92V}.
It suffices to note that
the temporal damping clearly depends on $\rho_{\rm i}/\rho_{\rm e}$: the higher
the density ratio, the slower the temporal damping.
In fact, for large $\rho_{\rm i}/\rho_{\rm e}$, a simple but reasonably accurate expression
exists, namely, $|\omega_{\rm I}|/\omega_{\rm R} \approx (\pi/2)(\rho_{\rm e}/\rho_{\rm i})$~\citep{2007AstL...33..706K}.
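As a quick arithmetic check of this estimate (the density ratio here is chosen purely for illustration):

```python
import math

# Arithmetic sketch of the large-contrast estimate
# |omega_I|/omega_R ~ (pi/2)(rho_e/rho_i); the chosen density ratio is
# an assumption for illustration.
def damping_ratio(rho_i_over_rho_e):
    return (math.pi / 2.0) / rho_i_over_rho_e

r = damping_ratio(100.0)
# amplitude ~ exp(omega_I t): e-folding time in units of the wave period
periods_per_efold = 1.0 / (2.0 * math.pi * r)
print(f"|omega_I|/omega_R ~ {r:.4f}, i.e. ~{periods_per_efold:.1f} periods per e-fold")
```

For $\rho_{\rm i}/\rho_{\rm e} = 100$ this gives $|\omega_{\rm I}|/\omega_{\rm R} \approx 0.016$, i.e., the amplitude e-folds only after roughly ten wave periods.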
When examining propagating waves, Fig.~\ref{fig_prop_vs_stand}a shows that the DR permits
waves with $\omega$ much lower than $\omega_{\rm c}$, in contrast to the standing case.
In addition, for $\omega \lesssim v_{\rm Ai}/a$, neither $k_{\rm R}$ nor $k_{\rm I}$
shows a significant dependence on the density ratio.
This is particularly true for $k_{\rm I}$, and has been explained in view of Eq.~(\ref{eq_smallomega}).
{\bf Why should the results be different if one simply changes from one perspective, where a DR
is solved for complex $\omega$ as a function of real $k$, to another standpoint where the same DR
is solved for complex $k$ as a function of real $\omega$?
As has been discussed in detail by~\citet[][hereafter TFS95]{1995A&A...299..940T},
the two perspectives indeed have a close relationship when wave attenuation is weak or absent.
However, some considerable difference between the two may result when strong attenuation takes place.
An example of this can be found in~TFS95, where the authors examined linear Alfv\'en waves in a partially ionized gas
in which ions and neutrals are imperfectly coupled.
Solving the relevant DR for complex $\omega$ at real $k$ yields that $\omega$ may be purely imaginary in certain ranges of $k$,
in other words, the waves may be overdamped and non-propagating.
In contrast, solving the same DR for complex $k$ at real $\omega$ yields that the real part of $k$ is always non-zero, meaning that
the waves are always propagating.
This led TFS95 to conclude that the choice of perspective depends on how the waves are excited.
One chooses complex $k$ and real $\omega$ to examine the spatial variation of the waves
excited at a given location with a given real frequency.
On the other hand, one chooses complex $\omega$ and real $k$ to follow the temporal variation of the waves
in response to perturbations initially imposed in a coherent way over many wavelengths.
These discussions are also valid if one examines the differences in Figs.~\ref{fig_prop_vs_stand}a and \ref{fig_prop_vs_stand}b.
In particular, the absence of standing waves with frequencies below a certain value (see Fig.~\ref{fig_prop_vs_stand}b)
can be understood because, in view of Fig.~\ref{fig_prop_vs_stand}a,
an initial perturbation cannot stay coherent in a longitudinal spatial range spanning many wavelengths
and hence many damping lengths.
Instead, the system will select the frequencies and damping rates corresponding to the spatial periods
enforced externally.
}
Let us now focus on Fig.~\ref{fig_prop_vs_stand}a, where one can see that for both density ratios,
once in the leaky regime, $k_{\rm I}$ increases with decreasing $\omega$ and
levels off when $\omega \lesssim 1.5 v_{\rm Ai}/a$.
On the other hand, $k_{\rm R}$ decreases when $\omega$ decreases from $\omega_{\rm c}$
before increasing monotonically when $\omega$ further decreases.
Let $\omega_{\rm m}$ denote the point where $k_{\rm R}$ reaches a local minimum.
It then follows that the apparent group speed $d \omega/d k_{\rm R} \le 0$ for $\omega \le \omega_{\rm m}$.
One may question whether the sausage waves in this low-frequency portion can impart energy upwards.
As discussed in \citeauthor{1960Brillouin} (\citeyear{1960Brillouin}, chapter V),
when waves are heavily damped, the apparent group velocity may not represent
the velocity at which energy propagates.
In this case, we may directly use Eq.~(\ref{eq_eflx_z}) to evaluate the $z$-component of
the wave energy flux density.
In view of Eq.~(\ref{eq_Fourier_amp}), one finds that
\begin{eqnarray*}
\left< F_z \right> = \frac{B^2}{8\pi}\left|\tilde{v}_r\right|^2\left(\frac{k_{\rm R}}{\omega}\right) \exp(-2 k_{\rm I} z) ,
\end{eqnarray*}
which is positive for positive $k_{\rm R}/\omega$, meaning that
the low-frequency waves in question can still direct their energy upwards.
However, in this case the plasma cylinder is such an inefficient waveguide
that the wave energy is attenuated over
a longitudinal distance of the order of the cylinder radius.
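To put a rough number on this inefficiency (with an assumed $k_{\rm I} \approx 1/a$, consistent with the order-of-magnitude statement above):

```python
import math

# Sketch: with an attenuation length 1/k_I of order a, the axial flux
# <F_z> ∝ exp(-2 k_I z) decays by e^-2 (~86%) over one cylinder radius.
# k_I = 1/a is an assumed order-of-magnitude value, not read off the figures.
a = 1.0
k_I = 1.0 / a
for z in (0.5 * a, 1.0 * a, 2.0 * a):
    frac = math.exp(-2.0 * k_I * z)
    print(f"z = {z:.1f} a: remaining flux fraction = {frac:.3f}")
```

Already at $z = a$ only $\sim$14\% of the flux survives, illustrating how quickly the energy leaks away.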
More insights can be gained by further comparing the dispersive behavior of
leaky standing and propagating sausage waves as given in Fig.~\ref{fig_prop_vs_stand}.
To start, let us note that for standing waves with real $k$ and complex $\omega$
(propagating waves with real $\omega$ and complex $k$)
the radial energy flux density in the external medium,
when averaged over a longitudinal wavelength $2\pi/k$ (a wave period $2\pi/\omega$), evaluates to
\begin{eqnarray}
\label{eq_eflx_r_stand}
\left<F_r\right> = \frac{2\pi}{B^2}\left|\tilde{p}_{\rm tot, e}\right|^2
\mathrm{Re}\left(\frac{-i\omega}{\mu_{\rm e}^2}\frac{\tilde{p}'_{\rm tot, e}}{\tilde{p}_{\rm tot, e}}\right)
\begin{cases}
{\rm e}^{2\omega_{\rm I}t} & \quad \text{standing} \\
{\rm e}^{-2 k_{\rm I} z} & \quad \text{propagating} .
\end{cases}
\end{eqnarray}
We note that $\left<F_r\right>$ for standing waves was originally derived in
\citeauthor{1986SoPh..103..277C} (\citeyear{1986SoPh..103..277C}, Eq.~(3.2)).
{\bf (See Appendix~A for a derivation of Eq.~(\ref{eq_eflx_r_stand}).)}
Whether the waves are standing or propagating, $\tilde{p}_{\rm tot, e} \propto H_0^{(1)}(\mu_{\rm e}r)$ at large distances
can be approximated by
\begin{eqnarray}
\label{eq_ptot_largeR}
\tilde{p}_{\rm tot, e} \propto \sqrt{\frac{2}{\pi \mu_{\rm e}r}}\exp\left[i\left(\mu_{\rm e}r-\frac{\pi}{4}\right)\right] ,
\end{eqnarray}
resulting in $\tilde{p}'_{\rm tot, e}/\tilde{p}_{\rm tot, e} \approx i\mu_{\rm e}$.
It then follows that
\begin{eqnarray}
\label{eq_eflx_r_stand_largeR}
\left<F_r\right> \approx \frac{2\pi}{B^2}\left|\tilde{p}_{\rm tot, e}\right|^2
\mathrm{Re}\left(\frac{\omega}{\mu_{\rm e}}\right)
\begin{cases}
{\rm e}^{2\omega_{\rm I}t} & \quad \text{standing} \\
{\rm e}^{-2 k_{\rm I} z} & \quad \text{propagating} .
\end{cases}
\end{eqnarray}
From Fig.~\ref{fig_prop_vs_stand}b one can find that ${\rm Im}\left(\mu_{\rm e}\right) <0$ for leaky standing modes,
meaning that $|\tilde{p}_{\rm tot, e}|$ tends to grow exponentially
(Eq.~(\ref{eq_ptot_largeR}), apart from the $r^{-1/2}$-dependence)
and an eigen-mode analysis does not allow us to examine the energetics of the system.
However, this does not make the analysis physically irrelevant: what matters is that
the apparent temporal damping can be accounted for by the outwardly going energy flux density
($\left<F_r\right> >0$ because ${\rm Re}(\omega/\mu_e) >0$, see Eq.~(\ref{eq_eflx_r_stand_largeR})).
Indeed, numerical studies starting with initial standing waves suggest that the temporal damping
after a transient stage matches exactly
the attenuation rate given by the eigen-mode analysis~\citep{2007SoPh..246..231T}.
For propagating waves, one finds from Fig.~\ref{fig_prop_vs_stand}a that
once again ${\rm Im}\left(\mu_{\rm e}\right) <0$ and ${\rm Re}(\omega/\mu_e) >0$.
Hence, as in the standing case, an eigen-mode analysis does not permit an investigation
into the energetics of propagating waves.
However, if a coronal structure is perturbed with a harmonic boundary driver,
one expects to see that the apparent spatial damping after some transient phase will be given
by the attenuation length $1/k_{\rm I}$ obtained from this eigen-mode analysis.
This attenuation is once again associated with the outwardly directed $\left<F_r\right>$.
We note that this expectation can be readily tested numerically, in much the same way that the expected
spatial damping of kink waves due to resonant absorption
was tested~\citep{2013A&A...551A..40P}.
\section{Summary}
\label{sec_conc}
This study is motivated by the apparent lack of a dedicated study on the role that wave leakage plays in
spatially attenuating propagating sausage waves supported by density-enhanced cylinders in the corona.
To this end, we worked in the framework of cold magnetohydrodynamics (MHD),
and numerically solved the dispersion relation~(DR, Eq.~(\ref{eq_DR})) for complex-valued longitudinal
wavenumbers $k=k_{\rm R} + i k_{\rm I}$ at given real angular frequencies $\omega$.
To validate our numerical results, we also provided the analytical approximations to the full DR
in the low-frequency limit $\omega\rightarrow 0$
and in the neighborhood of $\omega_{\rm c}$, the critical angular frequency separating
trapped from leaky waves.
Our solutions indicate that while sausage waves can propagate for $\omega<\omega_{\rm c}$
and can direct their energy upwards, they suffer substantial spatial attenuation.
The attenuation length ($1/k_{\rm I}$) is of the order of the cylinder radius $a$
and shows little dependence on frequency or
the density contrast between a coronal structure and its surroundings
for $\omega \lesssim 1.5 v_{\rm Ai}/a$, where $v_{\rm Ai}$ is
the Alfv\'en speed in the cylinder.
This means that when a coronal cylinder is subject to boundary perturbations with a broadband spectrum
(e.g., granular motions), wave leakage removes the low-frequency components rather efficiently.
A comparison with the solutions to the DR for standing waves (real $k$, complex $\omega$) indicates
that a close relationship between propagating and standing waves
exists only when the waves are trapped or weakly damped.
In addition, while an eigen-mode analysis does not allow an investigation into the energetics of
propagating waves, the attenuation length is expected to play an essential role
in numerical simulations where coronal structures are perturbed by harmonic boundary drivers.
\begin{acknowledgements}
This research is supported by the 973 program 2012CB825601, National Natural Science Foundation of China
(41174154, 41274176, 41274178, and 41474149),
the Provincial Natural Science Foundation of Shandong via Grant JQ201212,
and also by a special fund of Key Laboratory of Chinese Academy of Sciences.
\end{acknowledgements}
\section{Introduction}
The Minimal Model Program (MMP) is one of the main guiding principles in the study of algebraic varieties. It predicts that every projective variety $X$ with mild singularities should have a birational model $X'$ which is also mildly singular and either has $K_{X'}$ nef (minimal model) or admits a Mori fiber space. So, informally speaking, all algebraic varieties should be constructed, using birational equivalences and fibrations, from those of the three classes: $K_X$ positive (general type), $K_X$ numerically trivial (Calabi--Yau) and $K_X$ negative (Fano).
The works of Brunella \cite{Bru97, Bru03, Bru15}, Mendes \cite{Men00} and McQuillan \cite{McQ08} established a version of the MMP for holomorphic foliations on surfaces. Analogously to the case of varieties, for a foliation $\FF$ with mild singularities it is possible to define the canonical class $K_{\FF}$ and its invariants such as Kodaira and numerical dimension. Foliations on surfaces admit a classification according to these invariants; moreover, this classification is rather explicit for foliations ``not of general type''. The final step in the classification of minimal models of foliations on surfaces is the following theorem (see \cite[Theorem on p. 122]{Bru03} and \cite[Section IV.5]{McQ08}).
\begin{theorem}\label{BruMcQ} Let $\FF$ be a foliation with reduced singularities on a surface $S$ of general type. Suppose that $K_{\FF}$ is nef and not big. Then either
\begin{enumerate}
\item $\FF$ is algebraically integrable and induced by an isotrivial fibration, $\kappa(K_{\FF}) = \nu(K_{\FF}) = 1$;
\item $\FF$ is transcendental and the pair $(S, \FF)$ is isomorphic to a Hilbert modular surface with a Hilbert modular foliation, $\kappa(K_{\FF}) = -\infty, \nu(K_{\FF}) = 1$.
\end{enumerate}
\end{theorem}
The setting of the above theorem is interesting from several points of view. It involves the interplay between positivity of $K_X$ and $K_{\FF}$ (see e.g. \cite{CP15, CP19} for related topics); the failure of abundance for foliations (see \cite{McQ08, Tou16}). Another context, in which these foliations appear, is the study of Kobayashi hyperbolicity (see e.g. \cite{Kob70, Dem97}). Entire curves tangent to foliations on surfaces have been studied in connection with the Green--Griffiths conjecture (see \cite{McQ98, Bru99, DR15}). In higher dimensions we have, for example, the following theorem \cite[Theorem F]{GPR13}.
\begin{theorem} Let $\FF$ be a codimension one foliation with canonical singularities on a projective threefold $X$. Suppose that there exists a generically nondegenerate meromorphic map $f \colon \mathbb{C}^2 \dasharrow X$ such that the image $f(\mathbb{C}^2)$ is Zariski-dense in $X$ and tangent to $\FF$. Then the canonical class of $\FF$ is not big.
\end{theorem}
Recently, in a series of works \cite{Spi20, CS21, SS19} the MMP for codimension $1$ foliations on threefolds has been established. Moreover, some structure theorems for codimension $1$ foliations are known in higher dimensions, for example in the case $K_{\FF} \equiv 0$ \cite{LPT18, Dru21} or $-K_{\FF}$ ample \cite{AD13, AD17, AD19}. These results suggest that the MMP-type classification for codimension $1$ foliations should hold in any dimension.
Motivated by the above results, we would like to describe codimension $1$ foliations $\FF$ with canonical singularities and $\nu(K_{\FF}) < 3$ on threefolds of general type. In this paper we are able to prove the following partial analogue of Theorem \ref{BruMcQ}.
\begin{ttheorem}[A] Let $\FF$ be a codimension one foliation with canonical singularities on a threefold $X$ of general type. Suppose that $K_{\FF}$ is not big and the algebraic rank $r_a(\FF)$ is positive. Then one of the following two cases occurs.
\begin{enumerate}
\item $\FF$ is algebraically integrable. Then, up to a generically finite morphism, the threefold $X$ splits as a product $S \times C$ of a surface $S$ and a curve $C$, and $\FF$ is the relative tangent bundle of the projection $S \times C \to C$. We have $\kappa(K_{\FF}) = \nu(K_{\FF}) = 2$.
\item The algebraic rank of $\FF$ is equal to $1$. Then, up to a generically finite morphism, the threefold $X$ splits as a product $S \times C$ of a Hilbert modular surface $S$ and a curve $C$, and $\FF$ is the pullback via the projection $S \times C \to S$ of a Hilbert modular foliation $\GG$ on $S$. We have $\kappa(K_{\FF}) = -\infty$ and $\nu(K_{\FF}) = 2$.
\end{enumerate}
\end{ttheorem}
At the moment we do not know how to classify purely transcendental foliations with canonical singularities and not of general type on threefolds of general type. However, if we restrict ourselves to foliations regular in codimension $2$, then we are able to prove a result which confirms our expectations. The starting point for us is another remarkable theorem by Brunella (see \cite[pp. 587-588]{Bru97}).
\begin{theorem} \label{brureg}
Let $\FF$ be a regular foliation on a minimal surface $X$ of general type. Then the conormal bundle $N^*_{\FF}$ is pseudoeffective (even numerically effective).
\end{theorem}
This theorem essentially follows from Baum--Bott formula, Riemann--Roch theorem and intersection theory. Assuming that $\FF$ is non-singular in codimension $2$, we use the Baum--Bott formula and intersection computations in a similar way to study purely transcendental foliations in higher dimensions. Together with Lefschetz-type theorems, it allows us to ``lift'' positivity of the conormal bundle from hyperplane sections. Then we use classification results of Touzet \cite{Tou13, Tou16} to describe foliations from this class.
\begin{ttheorem}[B] Let $X$ be a smooth projective manifold of general type, $\dim(X) = n \geqslant 2$. Let $\FF$ be a codimension 1 foliation on $X$. Suppose that \begin{enumerate}
\item $K_{\FF}$ is not big;
\item $\FF$ is purely transcendental;
\item $\mathrm{codim}_X\Sing(\FF) \geqslant 3$.
\end{enumerate}
Then the foliation $\FF$ is induced by a Hilbert modular foliation via a morphism $X \to M_H$, generically finite onto its image.
\end{ttheorem}
Our paper is organized as follows. In section \ref{prelim} we gather the information on Kodaira and numerical dimensions for $\QQ$-divisors, basic notions from foliation theory and foliated birational geometry. In section \ref{main1} we recall some important definitions and results concerning fibrations with fibers of general type. Then we use these results to prove Theorem (A), see Proposition \ref{AlgInt} and Theorem \ref{MainThm} below. In section \ref{main2} we prove Theorem (B) (see Theorem \ref{terminal}) and pose some questions for future research.
\subsection*{Acknowledgements} This paper is a part of a project on hyperbolicity and foliations we have been working on for a long time. We would like to thank our advisor Constantin Shramov for encouraging us to write down these results. We also thank Jorge Vit\'orio Pereira, Erwan Rousseau, Jean-Pierre Demailly, Misha Verbitsky and Vladimir Lazi\'c for very helpful discussions on various topics related to this work.
\section{Preliminaries} \label{prelim}
\subsection{Kodaira and numerical dimensions} In this subsection we recall the notions of Kodaira and numerical dimension for $\QQ$-divisors.
\begin{definition} Let $D$ be a $\QQ$-divisor on a normal projective variety $X$. The Kodaira dimension (or Kodaira--Iitaka dimension) of $D$ is defined as $$\kappa(X, D) = \max\{k \mid \limsup_{m \to \infty}\frac{h^0(X, \OO_X(\lfloor mD\rfloor))}{m^k} > 0\}$$ if $h^0(X, \OO_X(\lfloor mD\rfloor)) > 0$ for some $m$ and $\kappa(X, D) = -\infty$ otherwise.
\end{definition}
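For example, if $X = \mathbb{P}^n$ and $H$ is a hyperplane then $$h^0(X, \OO_X(mH)) = \frac{(m+n)!}{m!\, n!} \sim \frac{m^n}{n!},$$ so $\kappa(X, H) = n$, while $\kappa(X, -H) = -\infty$ since no positive multiple of $-H$ is effective.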
\begin{definition} \label{num} Let $X$ be a normal projective variety, $D$ a $\QQ$-divisor on $X$ and $A$ an ample divisor. We define the quantity $$\nu(D, A) = \max\left\{k \in \mathbb{Z}_{\geqslant 0} \mid \limsup_{m \to \infty}\frac{h^0(X, \OO_X(\lfloor mD\rfloor + A))}{m^k} > 0\right\}.$$ The numerical dimension of $D$ is defined to be $$\nu(D) = \max\{\nu(D, A) \mid \mbox{$A$ ample}\}.$$ If $D$ is not pseudoeffective then we set $\nu(D) = -\infty$.
\end{definition}
The main property of $\nu(D)$ is that it depends only on the numerical class of a divisor.
\begin{proposition} \label{Numprop} We have the following basic properties of Kodaira and numerical dimensions (see \cite[Lemma II.3.11, Proposition III.5.7]{Nak04} and \cite[Sec. V]{Nak04} for proofs).
\begin{enumerate}
\item We have $\kappa(X, D) \leqslant \nu(X, D) \leqslant n$ for any $\QQ$-divisor $D$;
\item For a nef $\QQ$-divisor $D$ we have $\nu(X, D) = \max\{k \mid D^k \cdot A^{n-k} \neq 0\}$;
\item If $D$ and $E$ are pseudoeffective $\QQ$-divisors then $\nu(D + E) \geqslant \max\{\nu(D), \nu(E)\};$
\item If $f \colon Y \to X$ is a surjective morphism then $\kappa(Y, f^*D) = \kappa(X, D)$ and $\nu(Y, f^*D) = \nu(X, D)$.
\item If $\varphi \colon \widetilde X \to X$ is a birational morphism and $D$ is a $\QQ$-divisor on $X$ then $$\kappa(X, D) = \kappa(\widetilde X, \varphi^*D + E) \quad \mbox{and} \quad \nu(X, D) = \nu(\widetilde X, \varphi^*D + E)$$ for any effective $\varphi$-exceptional $\QQ$-divisor $E$.
\end{enumerate}
\end{proposition}
We also recall the notion of \emph{movable intersection product} for pseudoeffective classes from \cite{BDPP13} (see also \cite[Sec. 4]{Leh13}).
\begin{theorem} \label{posprod} Let $X$ be a smooth projective variety of dimension $n$. Denote by $\mathcal{E}$ the cone of pseudoeffective $(1,1)$-classes on $X$. Then for every $k \in \{1, \ldots, n\}$ there exists a map $$\prod_{i=1}^k\mathcal{E}_i \to H^{k,k}_{\geqslant 0}(X, \mathbb{R}), \quad (L_1, L_2, \ldots, L_k) \to \langle L_1 \cdot L_2 \cdots L_k\rangle$$ called the movable intersection product, such that the following properties hold.
\begin{enumerate}
\item We have $\mathrm{vol}(L) = \langle L\rangle^n$;
\item The movable intersection product is increasing, homogeneous of degree $1$ and superadditive in each variable: $$\langle L_1 \cdots (M + N) \cdots L_k\rangle \geqslant \langle L_1 \cdots M\cdots L_k\rangle + \langle L_1 \cdots N\cdots L_k\rangle.$$
\item For $k=1$ the movable intersection product gives a divisorial Zariski decomposition: $$L = P_L + N_L$$ where $P_L = \langle L\rangle$ is nef in codimension $1$.
\end{enumerate}
\end{theorem}
\begin{remark} This product was used in \cite[Definition 3.6]{BDPP13} to define another version of numerical dimension for a pseudoeffective class $D$:
$$\nu_{BDPP}(D) = \max\{k \mid \langle D\rangle^k \neq 0 \}.$$ The equivalence of this definition to Definition \ref{num} was claimed in \cite{Leh13}, see the subsequent corrections in \cite{E16, Les19}. Still, by the results in \cite[Sec. 6]{Leh13} we have an inequality $$\nu_{BDPP}(D) \leqslant \nu(D)$$ for any pseudoeffective $\QQ$-divisor $D$.
\end{remark}
\subsection{Foliations} In this subsection we recollect basic definitions and facts from foliation theory. General references for this subsection are e.g. \cite{AD13, Bru15}. We denote by $X, Y$ normal and $\mathbb{Q}$-factorial projective varieties over $\mathbb{C}$ and by $\FF$ (resp. $\GG$) coherent sheaves of $\OO_X$-modules (resp. $\OO_Y$-modules). If $\mathscr{E}$ is a coherent torsion-free sheaf of rank $r$ then by $\det(\mathscr{E})$ we denote the reflexive hull $(\bigwedge^{r}\mathscr{E})^{**}$. A subset $U \subset X$ is called \emph{big} if the complement $X \setminus U$ has codimension at least $2$ in $X$.
\begin{definition} A {\it foliation} on a variety $X$ is a coherent subsheaf $\FF \subset T_X$ which is \begin{itemize} \item saturated, that is, the quotient $T_X / \FF$ is torsion-free; \item closed under the Lie bracket, i.e.\ the map $ \bigwedge^2 \FF \to T_X /\FF$ is zero.\end{itemize} The {\it rank} $r = \rk(\FF)$ of the foliation is the rank of the sheaf $\FF$ at a general point of $X$ and the {\it codimension} of $\FF$ is $q = \dim(X) - \rk(\FF)$.
\end{definition}
A foliation on a smooth variety $X$ is {\it regular} if $\FF$ is a subbundle of $T_X$ at every point of $X$. In general, let $X^{\reg}$ be the maximal open subset of $X$ such that $\FF|_{X^{\reg}}$ is regular. Then $X^{\reg}$ is a big subset of $X$; the complement $X \setminus X^{\reg}$ is denoted by $\Sing(\FF)$ and is a closed subscheme of codimension at least $2$ (this follows from the fact that $\FF$ is saturated).
\begin{remark} The {\it normal sheaf} to $\FF$ is defined to be $N_{\FF} = (T_X/\FF)^{**}$. Taking the $q$-th wedge power of the map $N^{*}_{\FF} \hookrightarrow \Omega_X^1$ gives rise to a $q$-form $\omega_{\FF} \in H^0(X, \Omega^1_X \otimes \det N_{\FF})$ with zero locus of codimension at least two. This twisted $q$-form is locally decomposable and integrable, which means that locally in the Euclidean topology around a general point of $X$ we have $$\omega_{\FF} = \omega_1 \wedge \omega_2 \wedge \cdots \wedge \omega_q$$ for locally defined $1$-forms $\omega_1, \ldots, \omega_q$ satisfying the integrability condition $d\omega_i \wedge \omega_{\FF} = 0$. Conversely, any locally decomposable and integrable twisted $q$-form $\omega \in H^0(X, \Omega^q_X \otimes \mathscr{L})$ defines a codimension $q$ foliation. The subsheaf $\FF \subsetneq T_X$ is obtained as the kernel of the morphism $c_{\omega} \colon T_X \to \Omega^{q-1}_X \otimes \mathscr{L}$ given by contraction with $\omega$.
\end{remark}
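For example, on $X = \mathbb{C}^2$ the twisted $1$-form can be taken to be $\omega = x\,dy - \lambda y\,dx$ with $\lambda \in \mathbb{C}^*$ (for $q = 1$ on a surface the integrability condition is automatic). Its zero locus is the origin, so the corresponding codimension $1$ foliation is regular on $\mathbb{C}^2 \setminus \{0\}$, and on $\{xy \neq 0\}$ its leaves are the curves $y = c x^{\lambda}$.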
\begin{remark}\label{pullback} We will need the following facts about the behaviour of foliations under morphisms and rational maps: \begin{itemize} \item Let $f \colon Y \dasharrow X$ be a dominant rational map of varieties restricting to a morphism $f^{\circ} \colon Y^{\circ} \to X^{\circ}$ where $Y^{\circ} \subset Y$ and $X^{\circ} \subset X$ are Zariski open subsets. Let $\FF$ be a foliation on $X$ given by a $q$-form $\omega_{\FF} \in H^0(X, \Omega^q_X \otimes \det N_{\FF})$. Then we have an induced $q$-form $$\omega_{Y^{\circ}} \in H^0(Y^{\circ}, \Omega_{Y^{\circ}}^q \otimes (f^{\circ})^*(\det N_{\FF}|_{X^{\circ}}))$$ which defines a foliation $\GG$ on $Y^{\circ}$. We define the {\it pulled-back} foliation $f^{-1}\FF$ to be the saturation of $\GG$ in $T_Y$.
\item Let $f \colon Y \to X$ be a dominant morphism and let $\GG$ be a foliation on $Y$. We have an induced map $df \colon T_Y \to f^*T_X$. The foliation $\GG$ is called {\it projectable} under $f$ if for a general point $x \in X$ the image $df_{y}(\GG_y)$ does not depend on the choice of $y \in f^{-1}(x)$ and $\dim(df_{y}(\GG_y)) = r = \rk(\GG)$. In particular, if $f \colon Y \to X$ is a birational contraction then any foliation $\GG \subsetneq T_Y$ is projectable under $f$ and induces a foliation $\FF = f_*\GG$ of the same rank on $X$.
\item Let $Z \subseteq X$ be a smooth subvariety transverse to a foliation $\FF$ given by a twisted $q$-form $\omega_{\FF} \in H^0(X, \Omega^q_X \otimes \det N_{\FF})$. Suppose that the restriction of $\omega_{\FF}$ to $Z$ is nonzero. Then we obtain a nonzero induced $q$-form $\omega_Z \in H^0(Z, \Omega^q_Z \otimes \det N_{\FF}|_Z)$. Let $B$ be the maximal effective divisor on $Z$ such that $$\omega_Z \in H^0(Z, \Omega^q_Z \otimes \det N_{\FF}|_Z(-B)).$$ This $q$-form defines a codimension $q$ foliation $\FF_Z$ on $Z$.
\end{itemize}
\end{remark}
\begin{definition} Let $\FF$ be a foliation on a smooth variety $X$. Since the integrability condition holds for $\FF$, the Frobenius theorem implies that for every point $x \in X^{\reg}$ there exists an open neighbourhood $U = U_x$ and a submersion $p \colon U \to V$ such that $\FF|_U = T_{U/V}$. A {\it leaf} of $\FF$ is a maximal connected, locally closed submanifold $L \subset X^{\reg}$ such that $\FF|_L = T_L$. A leaf $L$ is {\it algebraic} if it is open in its Zariski closure or, equivalently, if $\dim(L) = \dim({\overline L}^{\Zar})$. In this case we use the word ``leaf'' for the Zariski closure of a leaf as well.
\end{definition}
\begin{definition}[Tangent and transverse subvarieties] Let $W$ be an irreducible and reduced subvariety of $X$. We say that $W$ is {\it tangent} to $\FF$ if the tangent space $T_W$ factors through $\FF$ at all points $x \in W \cap X^{\reg}$. A subvariety is {\it transverse} to $\FF$ if it is not tangent to $\FF$. If $\FF$ factors through the tangent space of $W$ then $W$ is called {\it invariant} by $\FF$. In the special case of a codimension $1$ foliation we call a hypersurface tangent to $\FF$ an {\it invariant hypersurface}.
\end{definition}
\begin{definition}[Algebraic integrability] A foliation $\FF$ on $X$ is called {\it algebraically integrable} or simply {\it algebraic} if the leaf of $\FF$ through a general point of $X$ is algebraic. An example of an algebraically integrable foliation is given by (the saturation of) the relative tangent sheaf $\FF = (T_{X/Y})^{\sat}$ of a fibration $f \colon X \to Y$ where $\dim(Y) < \dim(X)$. The leaves of $\FF$ in this case are just the fibers of $f$. We say that $\FF$ is {\it induced by the fibration} $f$.
\end{definition}
\begin{proposition}[Rational first integrals] Let $\FF$ be an algebraically integrable foliation on $X$. Then there is a unique irreducible subvariety $W$ of the cycle space $\mathrm{Chow}(X)$ parameterizing the closure of a general leaf of $\FF$. Let $V \subset W \times X$ be the universal cycle with universal morphisms $\pi \colon V \to W$ and $e \colon V \to X$. Then the morphism $e$ is birational and for general $w \in W$ the cycle $e(\pi^{-1}(w)) \subset X$ is the closure of a leaf of $\FF$. The normalization $\widetilde{W}$ of $W$ is called the {\it space of leaves} of $\FF$ and the induced rational map $X \dasharrow \widetilde{W}$ is called a {\it rational first integral} of $\FF$. In other words, an algebraically integrable foliation is induced by a fibration on a suitable birational model of $X$.
\end{proposition}
\begin{proposition}[Algebraic and purely transcendental parts] \label{algred} Let $\FF$ be a foliation on $X$. Then there exists a normal variety $Y$ with a dominant rational map $\varphi \colon X \dasharrow Y$ with connected fibers and a foliation $\GG$ on $Y$ such that \begin{itemize} \item The foliation $\GG$ is {\it purely transcendental}, that is, there is no subvariety tangent to $\GG$ through a general point of $Y$; \item We have $\FF = \varphi^{-1}\GG$.\end{itemize} The pair $(Y, \GG)$ is unique up to birational equivalence. The foliation induced by $\varphi$ is called the {\it algebraic part} of $\FF$ and denoted by $\FF^{\alg}$. The rank of $\FF^{\alg}$ is called the {\it algebraic rank} of $\FF$; by construction, it is a birational invariant.
\end{proposition}
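\begin{remark} A basic example of a purely transcendental foliation is the rank $1$ foliation on an abelian surface $A = \mathbb{C}^2/\Lambda$ defined by a sufficiently general constant vector field: every leaf is Zariski-dense, so there is no subvariety tangent to the foliation through a general point of $A$. In this case the algebraic part is trivial and the algebraic rank is $0$.
\end{remark}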
\begin{proposition}[Baum--Bott formula, \cite{BP06}] Let $\FF$ be a codimension $1$ foliation on a complex manifold $X$ of dimension at least $2$. We have the following equality in $H^4(X, \mathbb{C})$: $$c^2_1(N_{\FF}) = \sum_{Y}BB(\FF, Y)[Y],$$ where $Y$ ranges over irreducible components of $\mathrm{Sing}(\FF)$ of codimension $2$, and $BB(\FF, Y)$ is a number, called the Baum--Bott index of $\FF$ at $Y$.
\end{proposition}
\subsection{Canonical class and singularities of foliations}
\begin{definition}The {\it canonical class} of a foliation $\FF$ is a linear equivalence class of Weil divisors $K_{\FF}$ such that $\OO_X(K_{\FF}) \cong \det(\FF)^*$. We have the relation $$ K_X = K_{\FF} + \det(N^*_{\FF}).$$
\end{definition}
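\begin{remark} For instance, let $X = S \times C$ be a product with projections $p_S$ and $p_C$, and let $\FF = T_{X/C} \cong p_S^*T_S$ be the foliation by the fibers of $p_C$. Then $K_{\FF} = p_S^*K_S$ and $N_{\FF} \cong p_C^*T_C$, so that $\det(N^*_{\FF}) = p_C^*K_C$ and the relation above reads $$K_{\FF} + \det(N^*_{\FF}) = p_S^*K_S + p_C^*K_C = K_X.$$
\end{remark}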
\begin{definition} Let $f \colon Y \to X$ be a birational morphism and let $\FF$ be a foliation on $X$. Then we have an induced foliation $\widetilde\FF = f^{-1}\FF$ on $Y$. We express the canonical divisor $K_{\widetilde\FF}$ as $$ K_{\widetilde\FF} = f^*K_{\FF} + \sum_ia(E_i, \FF, X)E_i$$ where the sum is over all prime $f$-exceptional divisors on $Y$. The foliation $\FF$ is said to have {\it canonical (resp., terminal) singularities} if all discrepancies $a(E_i, \FF, X)$ are nonnegative (resp., positive) for any such birational morphism.
\end{definition}
\begin{remark} By property (5) in Proposition \ref{Numprop}, if $\FF$ is a foliation with canonical singularities and $\varphi \colon (\widetilde{X}, \widetilde{\FF}) \to (X, \FF)$ is a birational morphism, then $$\kappa(K_{\FF}) = \kappa(K_{\widetilde\FF}) \quad \mbox{and} \quad \nu(K_{\FF}) = \nu(K_{\widetilde\FF}).$$ Thus, the class of foliations with canonical singularities is natural to consider in birational geometry. A famous theorem of Seidenberg \cite{Sei68} says that any foliation singularity on a smooth surface can be transformed to a \emph{reduced} singularity by a finite number of blow-ups. For codimension one foliations in dimension $3$ there is a resolution theorem of Cano \cite{Can04}, in terms of so-called \emph{simple} foliation singularities. Both reduced and simple foliation singularities are canonical.
\end{remark}
\begin{remark} Unlike canonical ones, terminal foliation singularities form a rather restricted class. A result of McQuillan \cite[Corollary I.2.2]{McQ08} says that terminal foliations on (normal $\QQ$-Gorenstein) surfaces are, up to finite cyclic covers, regular foliations on smooth surfaces. Thus terminal singularities of codimension one foliations on smooth projective varieties are regular in codimension 2. For more details on terminal foliation singularities on threefolds see \cite[Section 5]{SS19}.
\end{remark}
To express the canonical class of an algebraic foliation we recall the notion of ramification (see e.g. \cite[Definition 2.5]{AD19}).
\begin{definition} \label{ramif} Let $f \colon X \dasharrow Y$ be a dominant rational map between normal and $\QQ$-factorial projective varieties. Let $Y^{\circ} \subset Y$ be a maximal open subset such that $f^{\circ} = f|_{f^{-1}(Y^{\circ})} \colon f^{-1}(Y^{\circ}) \to Y^{\circ}$ is an equidimensional morphism. Define $$R(f^{\circ}) = \sum_{D}\left ((f^{\circ})^*D - ((f^{\circ})^*D)_{\mathrm{red}}\right),$$ where the sum is over all prime divisors $D$ on $Y^{\circ}$. The ramification divisor $R(f)$ of $f$ is defined as the Zariski closure of $R(f^{\circ})$ in $X$.
\end{definition}
\begin{proposition}[{\cite[2.5]{AD19}}] \label{canclassalg} Let $\FF$ be a foliation induced by an equidimensional morphism $\pi \colon X \to Y$. Then the canonical class of $\FF$ is given by the formula $$K_{\FF} = K_{X/Y} - R(\pi).$$
\end{proposition}
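\begin{remark} For instance, let $\pi \colon X \to C$ be a fibration from a smooth surface onto a curve and let $\FF = (T_{X/C})^{\sat}$. If the fiber over a point $p \in C$ is $\pi^*(p) = m_pF_p$ with $F_p$ irreducible and reduced, then $F_p$ contributes $(m_p - 1)F_p$ to $R(\pi)$, so Proposition \ref{canclassalg} gives $$K_{\FF} = K_{X/C} - \sum_p(m_p - 1)F_p,$$ the sum running over the multiple fibers of this form. In particular, if $\pi$ is a fiber bundle then $R(\pi) = 0$ and $K_{\FF} = K_{X/C}$.
\end{remark}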
We also state the Hurwitz formula for codimension one foliations (see e.g. \cite[Proposition 3.7]{Spi20}).
\begin{proposition} \label{hurwitz} Let $f \colon \overline X \to X$ be a finite surjective morphism of projective varieties. Let $\FF$ be a codimension $1$ foliation on $X$ and denote by $\overline\FF$ the induced foliation on $\overline X$. Then the canonical classes of $\FF$ and $\overline\FF$ are related by the formula $$K_{\overline\FF} = f^*K_{\FF} + \sum_D \epsilon(D)(r_D - 1)D.$$ Here the sum is over prime divisors $D$ on $\overline X$ with ramification index $r_D$, and $\epsilon(D)$ is zero if $D$ is $\overline\FF$-invariant and $1$ otherwise.
In particular, if $f \colon \overline{X} \to X$ is a ramified cover with $\FF$-invariant branch divisor then $K_{\overline \FF} = f^*K_{\FF}$.
\end{proposition}
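\begin{remark} Both cases of the formula can be observed on a product. Let $h \colon C' \to C$ be a double cover of smooth curves branched at points $p_1, \ldots, p_{2k}$, let $X = C \times \PP^1$ and $f = h \times \mathrm{id} \colon \overline{X} = C' \times \PP^1 \to X$, with ramification divisors $D_i = \{q_i\} \times \PP^1$, where $h(q_i) = p_i$. For the foliation $\FF = T_{X/C}$, whose leaves are the fibers $\{c\} \times \PP^1$, each $D_i$ is $\overline\FF$-invariant, so $\epsilon(D_i) = 0$ and indeed $K_{\overline\FF} = f^*K_{\FF}$. For the foliation $\FF = T_{X/\PP^1}$, whose leaves are the sections $C \times \{t\}$, each $D_i$ is transverse, so $\epsilon(D_i) = 1$ and the formula reads $$K_{\overline\FF} = p_{C'}^*K_{C'} = p_{C'}^*\Big(h^*K_C + \sum_iq_i\Big) = f^*K_{\FF} + \sum_iD_i,$$ where $p_{C'} \colon \overline{X} \to C'$ is the projection, in agreement with the Riemann--Hurwitz formula on $C'$.
\end{remark}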
We will need an adjunction formula for foliations induced on general hyperplane sections of a projective variety \cite[Lemma 2.9]{AD19}.
\begin{proposition} \label{adj} Let $X$ be a smooth projective variety and let $\FF$ be a codimension $1$ foliation on $X$. Let $A$ be a very ample divisor on $X$ and take a general element $D \in |A|$. Then $\FF$ induces a codimension $1$ foliation $\FF|_D$ on $D$ such that $$K_{\FF|_D} = (K_{\FF} + D)|_D \quad \mbox{and} \quad N^*_{\FF|_D} = (N^*_{\FF})|_D.$$
\end{proposition}
\section{Foliations with positive algebraic rank} \label{main1}
\subsection{Families with general fibers of general type} We start by recalling a few facts about fibrations on varieties of general type.
\begin{definition} A fibration is a proper and surjective map $\pi \colon X \to Y$ between normal projective varieties, such that the fibers of $\pi$ are connected or, equivalently, $\pi_*\OO_X = \OO_Y$.
\end{definition}
The fibrations we consider in this section are algebraic parts of our foliations. In particular, general fibers of such fibrations are either curves or surfaces of general type. These fibrations belong to a wider class considered by Kawamata in his work \cite{Kaw85} on the Iitaka conjecture. Namely, he considered fibrations with the geometric generic fiber $\overline{X_{\eta}}$ having a good minimal model. For these fibrations it is possible to define the \emph{birational variation} and to compare this invariant to the Kodaira dimension of the relative canonical bundle.
\begin{proposition}{\cite[Theorem 7.2]{Kaw85}} Let $\pi \colon X \to Y$ be a fibration such that the geometric generic fiber $\overline{X_{\eta}}$ has a good minimal model. We call a {\it minimal closed field of definition} of $\pi$ a minimal element in the set of all algebraically closed subfields $K \subset \overline{k(Y)}$ satisfying the condition $$\mathrm{Frac}(L \otimes_K \overline{k(Y)}) \cong \mathrm{Frac}(k(X) \otimes_{k(Y)} \overline{k(Y)}) \quad \mbox{over $\overline{k(Y)}$}$$ for some finitely generated extension $L \supset K$. Then a minimal closed field of definition exists and is unique.
\end{proposition}
\begin{definition} Let $\pi \colon X \to Y$ be a fibration as above. We define the \emph{birational variation} $\Var(\pi)$ as transcendence degree over $\mathbb{C}$ of the minimal closed field of definition of $\pi$.
\end{definition}
\begin{remark}\label{Var} Suppose that the fibration $\pi$ is semistable and there is a moduli space for the fibers (for example, the fibers are curves or canonically polarized varieties). Then $\Var(\pi)$ is equal to the variation in the sense of moduli theory.
\end{remark}
\begin{theorem}{\cite[Theorem 1.1]{Kaw85}} \label{Kaw1} Let $\pi \colon X \to Y$ be a fibration between smooth projective varieties. Suppose that the geometric generic fiber $\overline{X_{\eta}}$ of $\pi$ has a good minimal model. Then the following inequalities hold:
\begin{enumerate}
\item $\kappa(Y, \det(\pi_*\OO_X(mK_{X/Y}))) \geqslant \Var(\pi)$ for some $m \in \mathbb{N}$;
\item If $L$ is a line bundle on $Y$ such that $\kappa(Y, L) \geqslant 0$, then $$\kappa(X, \OO_X(K_{X/Y}) \otimes \pi^*L) \geqslant \kappa(X_{\eta}) + \max\{\kappa(Y, L), \Var(\pi)\}.$$
\end{enumerate}
\end{theorem}
\begin{corollary}{\cite[Corollary 1.2]{Kaw85}} \label{cor:Kaw2} In the assumptions of Theorem \ref{Kaw1}, let $F$ be a general fiber of $\pi$. Then we have the inequality $$\kappa(X, \OO_X(K_{X/Y})) \geqslant \kappa(K_F) + \Var(\pi).$$
\end{corollary}
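\begin{remark} Both terms on the right hand side of this inequality are visible in simple situations. If $X = F \times Y$ and $\pi$ is the projection to $Y$, then $\Var(\pi) = 0$ and $K_{X/Y}$ is the pullback of $K_F$, so $$\kappa(X, \OO_X(K_{X/Y})) = \kappa(K_F) = \kappa(K_F) + \Var(\pi)$$ and the inequality is an equality. On the other hand, for a non-isotrivial family of curves of genus at least $2$ over a curve $Y$ we have $\kappa(K_F) = \Var(\pi) = 1$, so the corollary together with $\kappa \leqslant \dim(X) = 2$ forces $\kappa(X, \OO_X(K_{X/Y})) = 2$, that is, $K_{X/Y}$ is big.
\end{remark}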
We reproduce here a very useful construction from the proof of \cite[Theorem 7.1]{CKT16}. Informally speaking, this construction allows us to ``eliminate the ramification'' of the algebraic part of a foliation $\FF$ by generically finite base change, preserving the Kodaira and numerical dimension of $\FF^{\mathrm{alg}}$.
\begin{construction} \label{CKT} Let $f \colon X \dasharrow Y$ be a rational map with connected fibers between normal projective varieties. Then there exists a diagram
$$
\xymatrix{
\overline{X} \ar[rr]^{a} \ar[d]^{\overline{f}} && \widetilde{X} \ar[r]^b \ar[d] & X \ar@{-->}[d]^f\\
\overline{Y} \ar[r]_{\alpha} & \widetilde{Y} \ar[r]_{\beta} & Y \ar@{=}[r] & Y
}
$$
where the maps are as follows:
\begin{itemize}
\item $b \colon \widetilde{X} \to X$ is a resolution of indeterminacies of $f$ and of singularities of $X$;
\item $\beta \colon \widetilde{Y} \to Y$ is an adapted Galois cover of the pair $(Y, B)$, where $B$ is the orbifold branch divisor of the map $f$;
\item $\alpha \colon \overline{Y} \to \widetilde{Y}$ is a log resolution of the pair $(\widetilde{Y}, \beta^*B)$;
\item $a \colon \overline{X} \to \widetilde{X}$ is a log resolution of the fiber product $\overline{Y} \times_{Y}\widetilde{X}$.
\end{itemize}
As a result, we obtain a morphism of smooth varieties $\overline{f} \colon \overline{X} \to \overline{Y}$ and generically finite morphisms $b\circ a \colon \overline{X} \to X$ and $\beta \circ \alpha \colon \overline{Y} \to Y$. Moreover, there exist big open subsets $X^{\circ} \subset X$ and $Y^{\circ} \subset Y$ with preimages $\overline{X}^{\circ} = (b \circ a)^{-1}(X^{\circ})$ and $\overline{Y}^{\circ} = (\beta \circ \alpha)^{-1}(Y^{\circ})$ also being big and such that $$K_{\overline{X}^{\circ}/\overline{Y}^{\circ}} \sim_{\QQ} (b \circ a)^*(K_{X^{\circ}/Y^{\circ}} - R(f)).$$
\end{construction}
\begin{remark} \label{KawMF} Suppose that we have a fibration $f \colon X \to Y$ satisfying the assumptions of Theorem \ref{Kaw1}. Then Construction \ref{CKT} gives us a morphism $\overline{f} \colon \overline{X} \to \overline{Y}$. Let us consider the Stein factorization of $\overline{f}$:
$$
\xymatrix{
\overline{X} \ar[rr]^{\overline{f}} \ar[dr]_{g} && \overline{Y}\\
& Z \ar[ur]_{h} &
}
$$
Here the morphism $g \colon \overline{X} \to Z$ is a fibration and $h \colon Z \to \overline{Y}$ is finite. Thus we can set $$\Var(\overline{f}) := \Var(g).$$ By the Hurwitz formula we have $K_Z \geqslant h^*K_{\overline{Y}}$, therefore $K_{\overline{X}/Z} \leqslant K_{\overline{X}/\overline{Y}}$. Moreover, if $F$ and $\overline{F}$ are general fibers of $f$ and $g$, respectively, then again by the Hurwitz formula we have $K_{\overline{F}} \geqslant (b \circ a)^*K_{F}$. Therefore, $$\kappa(K_{\overline{X}^{\circ}/\overline{Y}^{\circ}}) = \kappa(K_{\overline{X}/\overline{Y}}) \geqslant \kappa(K_{\overline{F}}) + \Var(\overline{f}),$$ so Corollary \ref{cor:Kaw2} works in this case as well. Analogously, we can check that the same is true for part (2) of Theorem \ref{Kaw1}.
\end{remark}
\subsection{The classification: positive algebraic rank}
\begin{proposition} \label{AlgInt}
Let $\FF$ be a codimension 1 foliation with canonical singularities on a threefold $X$ of general type. Suppose that $\FF$ is algebraically integrable and that the canonical class $K_{\FF}$ is not big. Then, up to a generically finite morphism, the threefold $X$ splits as a product $S \times C$ of a surface $S$ and a curve $C$, and $\FF$ is the relative tangent bundle of the projection $S \times C \to C$. In particular, we have $\nu(K_{\FF}) = 2$.
\end{proposition}
\begin{proof} Consider a fibration $\pi \colon \widetilde X \to C$ which is a resolution of indeterminacies of a rational map $X \dasharrow C$ inducing the foliation $\FF$. We have the induced foliation $\widetilde \FF = T_{\widetilde X/C}^{\sat}$ on $\widetilde{X}$. By Proposition \ref{canclassalg} the canonical class of $\widetilde{\FF}$ is given by the formula $$K_{\widetilde \FF} = K_{\widetilde X/C} - R(\pi).$$
We apply Construction \ref{CKT} to $\pi \colon \widetilde X \to C$ and obtain a morphism $\overline{\pi} \colon \overline{X} \to \overline{C}$ and a foliation $\overline{\FF} = T_{\overline{X}/\overline{C}}$.
Then by Remark \ref{KawMF} we can apply Corollary \ref{cor:Kaw2} to the map $\overline{\pi}$ and obtain $$2 \geqslant \nu(K_{\widetilde{X}/C} - R(\pi)) = \nu(K_{\overline{X}^{\circ}/\overline{C}^{\circ}}) \geqslant \kappa(K_F) + \Var(\overline{\pi}).$$ By adjunction, a general fiber $F$ of $\pi$ is a surface of general type, that is, $\kappa(K_F) = 2$. Therefore we have $\Var(\overline{\pi}) = \Var(\pi) = 0$ and by definition of birational variation, some finite cover of $X$ birationally splits as a product $S \times C$. We have $\nu(K_{X/C}) = 2$ for the foliation $T_{X/C}$ on $X = S \times C$. Since $\FF$ has canonical singularities, by Proposition \ref{hurwitz} we have $\nu(K_{\FF}) = 2$ as well. The proposition is proved.
\end{proof}
Next, we treat the case of foliations with algebraic rank $1$. The idea is the same as in the proof of Proposition \ref{AlgInt}. We express the canonical class of $\FF$ in terms of the canonical classes of its algebraic and transcendental parts. Then we use our assumption $\nu(K_{\FF}) < 3$ together with Construction \ref{CKT} and Theorem \ref{Kaw1} in order to obtain restrictions on the birational variation of the algebraic reduction $\pi$ of $\FF$. The rest of the proof is a case-by-case analysis according to the possible values of $\Var(\pi)$ and $\nu(K_{\GG})$.
\begin{theorem} \label{MainThm} Let $\FF$ be a codimension 1 foliation with canonical singularities on a threefold $X$ of general type. Suppose that the algebraic rank of $\FF$ is equal to 1 and that the canonical class of $\FF$ is not big. Then, up to a generically finite morphism, the threefold $X$ is isomorphic to $S \times C$, where $S$ is a Hilbert modular surface, $C$ is a curve of genus at least 2, and the foliation $\FF$ is the pullback of a Hilbert modular foliation on $S$ by the projection $S \times C \to S$. The numerical dimension of $K_{\FF}$ is equal to 2.
\end{theorem}
\begin{proof} We consider the algebraic reduction of $\FF$, see Proposition \ref{algred}. After resolving the indeterminacies and singularities, we obtain a fibration $\pi \colon X \to S$ from a smooth projective threefold (which we also denote by $X$) of general type to a smooth surface $S$. Moreover, the foliation $\FF$ is the pullback of a transcendental foliation $\GG$ on $S$. The canonical class of $\FF$ can then be expressed by the formula $$K_{\FF} = \pi^*K_{\GG} + K_{X/S} - R(\pi).$$ Since $\GG$ is transcendental, the canonical class $K_{\GG}$ is pseudoeffective. By \cite[Theorem 1.3]{CP19}, the canonical class $K_{X/S} - R(\pi)$ of the algebraic part of $\FF$ is pseudoeffective as well. We have the inequalities $\nu(K_{X/S} - R(\pi)) \leqslant \nu(K_{\FF}) \leqslant 2$ by our assumption on $\FF$. On the other hand, we apply Construction \ref{CKT} to obtain a diagram as before:
$$
\xymatrix{
\overline{X} \ar[r] \ar[d]_{\overline{\pi}} & X \ar[d]^{\pi}\\
\overline{S} \ar[r] & S
}
$$
We can use this diagram to estimate the numerical dimension of $K_{X/S} - R(\pi)$ from below. Indeed, by Corollary \ref{cor:Kaw2} and Remark \ref{KawMF} we obtain $$\nu(K_{X/S} - R(\pi)) = \nu(K_{\overline{X}^{\circ}/\overline{S}^{\circ}}) \geqslant \kappa(K_{\overline{X}/\overline{S}}) \geqslant \kappa(K_{F}) + \Var(\overline{\pi}) = 1 + \Var(\pi),$$ since a general fiber $F$ of $\pi$ is a curve of general type. Thus we are left with two possibilities: either $\Var(\pi) = 0$ or $\Var(\pi) = 1$.
If $\Var(\pi) = 0$ then after passing to a finite cover, the threefold $X$ becomes birational to a product $S \times C$ of a surface and a curve. In particular, both the surface $S$ and the curve $C$ are of general type. Moreover, if $p_S \colon S \times C \to S$ and $p_C \colon S \times C \to C$ are projections and $\FF = p_S^{-1}\GG$ then $K_{\FF} = p_S^*K_{\GG} + p_C^*K_C$. From this formula we see that if $K_{\FF}$ is not big on $X = S \times C$ then $K_{\GG}$ is not big on $S$. Since the foliation $\GG$ is moreover transcendental and the singularities of $\GG$ are canonical, by Theorem \ref{BruMcQ} we conclude that $\GG$ is birational to a Hilbert modular foliation on a Hilbert modular surface. The case $\Var(\pi) = 0$ is thus settled.
Suppose now that $\Var(\pi) = 1$; then since a general fiber $F$ of $\pi$ is a curve of general type, we necessarily have that $$\nu(K_{X/S} - R(\pi)) = 2.$$ Again, we apply Construction \ref{CKT} to obtain a morphism $\overline{\pi} \colon \overline{X} \to \overline{S}$ and induced foliations $\overline{\FF}$ and $\overline{\GG}$. By Remark \ref{KawMF} we can apply part (2) of Theorem \ref{Kaw1} to $\overline{\pi}$, taking $L = \OO_{\overline{S}}(K_{\overline{\GG}})$, and obtain $$2 \geqslant \kappa(K_{\overline{\FF}}) = \kappa(\overline{\pi}^*K_{\overline{\GG}} + K_{\overline{X}/\overline{S}}) \geqslant 1 + \max\{\Var(\pi), \kappa(K_{\overline{\GG}})\},$$ provided that $\kappa(K_{\GG}) \geqslant 0$. Therefore, the foliation $\overline{\GG}$ (and therefore $\GG$) is not of general type. To complete the proof, we need to exclude the remaining cases $\nu(K_{\GG}) = 0$ and $\nu(K_{\GG}) = 1$.
If $\nu(K_{\GG}) = 0$ then by the classification theorem of McQuillan \cite[Theorem 2 IV.3.6]{McQ08} (see also \cite[Theorem 8.2]{Bru15}), we can replace $\overline{S}$ by a further finite cover followed by a sequence of birational contractions such that the foliation $\overline{\GG}$ is given by a holomorphic vector field with isolated zeroes. Let $D \subset \overline{S}$ be a minimal reduced divisor such that the restriction of $\overline{\pi}$ is a smooth family over $\overline{S}\setminus D$. Take a minimal log resolution $\varphi \colon (\widetilde{S}, \widetilde{D}) \to (\overline{S}, D)$. Then the induced foliation $\widetilde{\GG}$ is logarithmic for the pair $(\widetilde{S}, \widetilde{D})$, since the support of $\widetilde{D}$ is snc and every component of $\widetilde{D}$ is $\widetilde{\GG}$-invariant. Moreover, the foliation $\widetilde{\GG}$ is given by a section $$s \in H^0(\widetilde{S}, K_{\widetilde{\GG}}^{-1}) \subset H^0(\widetilde{S}, T_{\widetilde{S}}(-\log\widetilde{D})).$$ On the other hand, the induced fibration $\widetilde{\pi} \colon \widetilde{X} \to \widetilde{S}$ restricts to a smooth family of genus $\geqslant 2$ curves over the complement $\widetilde{S}\setminus\widetilde{D}$. Our assumption $\Var(\pi) = 1$ together with Remark \ref{Var} and the Torelli theorem for curves gives us a non-trivial variation of polarized Hodge structures supported on $\widetilde{S}\setminus\widetilde{D}$. By a result of Brunebarbe \cite[Theorem 1.2]{Brun18}, there exists a section $$\sigma \in H^0(\widetilde{S}, \Sym^m\Omega^1_{\widetilde{S}}(\log \widetilde{D})).$$ Therefore, the line bundle $\OO_{\PP(\Omega^1_{\widetilde{S}}(\log \widetilde{D}))}(1)$ is $\QQ$-effective. However, since there is a section $$s \in H^0(\widetilde{S}, T_{\widetilde{S}}(-\log\widetilde{D})) = H^0(\PP(\Omega^1_{\widetilde{S}}(\log \widetilde{D})), \OO_{\PP(\Omega^1_{\widetilde{S}}(\log \widetilde{D}))}(-1)),$$ the logarithmic tangent bundle $T_{\widetilde{S}}(-\log\widetilde{D})$ has to be trivial. 
Then, setting $\widetilde{D}_X = \widetilde{\pi}^*\widetilde{D}$, by adjunction we have $$K_{\widetilde{X}} + \widetilde{D}_X = \widetilde{\pi}^*(K_{\widetilde{S}} + \widetilde{D}) + K_{\widetilde{X}/\widetilde{S}}.$$ The left hand side of this linear equivalence is a big divisor class, while on the right hand side we have the sum of a trivial divisor class $\widetilde{\pi}^*(K_{\widetilde{S}} + \widetilde{D})$ (recall that $T_{\widetilde{S}}(-\log\widetilde{D})$ is trivial) and the class $K_{\widetilde{X}/\widetilde{S}}$. By the above discussion we have $\nu(K_{\widetilde{X}/\widetilde{S}}) = \nu(K_{X/S} - R(\pi)) = 2$, so the class on the right hand side is not big. We obtain a contradiction, which shows that the case $\nu(K_{\GG}) = 0$ does not occur.
Finally, we need to exclude the case $\nu(K_{\GG}) = 1$ and $\Var(\pi) = 1$. In this case we have $$\nu(K_{X/S} - R(\pi)) = 2.$$ Let us consider the divisorial Zariski decomposition $K_{X/S} - R(\pi) = P + N$. Then $P$ is nef in codimension 1 and $\nu(P) = 2$. By the MMP for foliations on surfaces, we can assume $K_{\GG}$ to be nef. Then by the formula for $K_{\FF}$ and by properties of the restricted positive product from Theorem \ref{posprod} we obtain \begin{multline*}0 = \mathrm{vol}(K_{\FF}) = \langle K_{\FF}\rangle^3 \geqslant \langle\pi^*K_{\GG}\rangle^3 + 3\langle(\pi^*K_{\GG})^2\cdot P\rangle + 3\langle\pi^*K_{\GG}\cdot P^2\rangle + \langle P\rangle^3 = \\ = 0 + 0 + 3\,\pi^*K_{\GG}\cdot\langle P\rangle^2 + 0.\end{multline*} On the other hand, since $\nu(P) = 2$, it follows from duality that every movable class orthogonal to $\langle P\rangle^2$ is proportional to $P$. The class $\pi^*K_{\GG}$ is nef of numerical dimension 1 and is obviously not proportional to $P$. Therefore, $\pi^*K_{\GG}\cdot\langle P\rangle^2 > 0$, which contradicts the above computation (recall that $\mathrm{vol}(K_{\FF}) = 0$ since $K_{\FF}$ is not big). Therefore this case is impossible and the theorem is proved.
\end{proof}
\section{The case of purely transcendental foliations} \label{main2} In this section we consider purely transcendental foliations on smooth projective varieties of general type. Assuming that the foliation is non-singular in codimension $2$, we can obtain the following description for these foliations.
\begin{theorem} \label{terminal} Let $X$ be a smooth projective manifold of general type, $\dim(X) = n \geqslant 2$. Let $\FF$ be a codimension 1 foliation on $X$. Suppose that \begin{enumerate}
\item $K_{\FF}$ is not big;
\item $\FF$ is purely transcendental;
\item $\mathrm{codim}_X\Sing(\FF) \geqslant 3$.
\end{enumerate}
Then the foliation $\FF$ is induced by a Hilbert modular foliation via a morphism $X \to M_H$, generically finite onto its image.
\end{theorem}
\begin{proof} If $X$ is a surface then by assumption $\FF$ is regular, therefore canonical (see \cite{AD13}). The statement then follows from Theorem \ref{BruMcQ} with the morphism $X \to M_H$ being the minimal model of $\FF$.
Suppose now that $\dim(X) = n \geqslant 3$.
\emph{Step 1: Find a suitable complete intersection surface.} By our assumptions, $K_X$ is big and $K_{\FF}$ is not big. From Proposition \ref{Numprop} and from the formula $$K_{\FF} = K_X + N_{\FF}$$ we obtain that the normal bundle $N_{\FF}$ is not pseudoeffective. Since $X$ is smooth, by \cite[Theorem 1.5]{BDPP13} this is equivalent to the following condition: there exists a birational model $\varphi \colon \widetilde X \to X$ and a complete intersection class $\alpha = H_1 \cap \cdots \cap H_{n-1}$ on $\widetilde X$ such that \begin{equation} N_{\FF} \cdot \varphi_*(H_1 \cap \cdots \cap H_{n-1}) = \varphi^*N_{\FF} \cdot H_1 \cdots H_{n-1} < 0. \label{norm}\end{equation} By Bertini's theorem we can choose general members $D_i \in |m_iH_i|$ for sufficiently large multiples $m_i$, $i \in \{2, \ldots, n-1\}$, such that the complete intersection $$D = D_2 \cap \cdots \cap D_{n-1}$$ is a smooth surface. Moreover, since $\FF$ is purely transcendental, a very general $D$ as above satisfies the following condition: every compact curve $C \subset D$, invariant under $\FF|_D$, is an intersection $C = D \cap E$ for an $\FF$-invariant hypersurface $E \subset X$. Note that this condition implies that the number of $\FF|_D$-invariant curves is finite. Indeed, if the number of $\FF|_D$-invariant curves were infinite, then the number of $\FF$-invariant hypersurfaces $E \subset X$ would be infinite as well. Therefore by Jouanolou's theorem \cite{Jou78} the foliation $\FF$ would be algebraically integrable, which is not the case.
\emph{Step 2: Prove pseudoeffectivity of $\varphi^*N^*_{\FF}$ on $D$.} Recall from Remark \ref{pullback} that we have the relation $$N^*_{\widetilde\FF} = \varphi^*N^*_{\FF} + E$$ where $E$ is an effective $\varphi$-exceptional divisor. Therefore, if $(\varphi^*N^*_{\FF})|_D$ is pseudoeffective then the same is true for $N^*_{\widetilde\FF}|_D$. Let us consider the following $\QQ$-divisors on $\widetilde X$: $$L_{\varepsilon} := \varphi^*N^*_{\FF} + \varepsilon H_1, \quad \varepsilon \in \QQ_{>0}.$$ By the Riemann--Roch theorem we obtain $$h^0(D, mL_{\varepsilon}|_D) + h^2(D, mL_{\varepsilon}|_D) \geqslant C_{\varepsilon}\cdot(L_{\varepsilon}|_D)^2m^2$$ for a positive constant $C_{\varepsilon}$ and $m$ large and sufficiently divisible. The intersection number is equal to \begin{multline} (L_{\varepsilon}|_D)^2 = (\varphi^* N^*_{\FF}|_D + \varepsilon H_1|_D)^2 = ((\varphi^* N^*_{\FF})^2 + 2\varepsilon \varphi^* N^*_{\FF} \cdot H_1 + \varepsilon^2 H_1^2)\cdot D = \\ = (\varphi^* N^*_{\FF})^2\cdot m_2 H_2 \cdots m_{n-1} H_{n-1} + 2\varepsilon \varphi^* N^*_{\FF} \cdot H_1 \cdot m_2H_2 \cdots m_{n-1}H_{n-1} + \varepsilon^2 H_1 \cdot m_2 H_2 \cdots m_{n-1} H_{n-1}.\end{multline} The second summand in the above equality is positive by the condition \eqref{norm}. The third one is also positive, since $H_i$ are ample divisors. As for the first summand, by the projection formula we have $$(\varphi^*N^*_{\FF})^2 \cdot H_2 \cdots H_{n-1} = N^*_{\FF} \cdot \varphi_*(\varphi^*N^*_{\FF} \cdot H_2 \cdots H_{n-1}) = N^2_{\FF} \cdot \varphi_*(H_2 \cap \cdots \cap H_{n-1}).$$ By our assumption $\mathrm{codim}_X\Sing(\FF) \geqslant 3$ and by the Baum--Bott formula we have
\begin{equation}N^2_{\FF} \cdot \varphi_*(H_2 \cap \cdots \cap H_{n-1}) = 0 \quad \mbox{since} \quad N^2_{\FF} \equiv 0.\label{BBott}\end{equation} Thus $(L_{\varepsilon}|_D)^2 > 0$ for all $\varepsilon \in \QQ_{>0}$. Moreover, by Serre duality and by the condition \eqref{norm} we have $$h^2(D, mL_{\varepsilon}|_D) = h^0(D, K_D + m(\varphi^*N_{\FF} - \varepsilon H_1)|_D) = 0$$ for $m$ large enough. Therefore for every $\varepsilon \in \QQ_{>0}$ we obtain $$h^0(D, mL_{\varepsilon}|_D) > C_{\varepsilon} \cdot (L_{\varepsilon}|_D)^2 m^2 > 0$$ for $m$ large and sufficiently divisible (depending on $\varepsilon$). So the class of $L_0|_D = \varphi^*N^*_{\FF}|_D$ is a limit of classes of $\QQ$-effective divisors, hence it is pseudoeffective. Then the conormal line bundle $$N^*_{\widetilde\FF|_D} = N^*_{\widetilde\FF}|_D = (\varphi^* N^*_{\FF} + E)|_D$$ is pseudoeffective as well.
\emph{Step 3: Case-by-case analysis.}
By the classification result of Touzet \cite[Proposition 2.14]{Tou13}, we have three possibilities for the numerical and Kodaira dimensions of $N_{\widetilde\FF}^*|_D$: \begin{enumerate}
\item $\nu(N_{\widetilde\FF}^*|_D) = 0$;
\item $\nu(N_{\widetilde\FF}^*|_D) = 1, \kappa(N_{\widetilde\FF}^*|_D) = 1$;
\item $\nu(N_{\widetilde\FF}^*|_D) = 1, \kappa(N_{\widetilde\FF}^*|_D) = -\infty$.
\end{enumerate}
We consider these three cases separately.
\textit{Case $\nu(N_{\widetilde\FF}^*|_D) = 0$}. Since $\varphi^*N^*_{\FF}|_D$ is pseudoeffective and $N^*_{\widetilde\FF}|_D = (\varphi^*N^*_{\FF} + E)|_D$, it follows that $\nu(\varphi^*N^*_{\FF}|_D) = 0$. We consider the Zariski decomposition $(\varphi^*N_{\FF}^*)|_D = L + \sum_ia_iC_i$, where $L$ is numerically trivial and $C_i$ are exceptional curves on $D$. We obtain \begin{equation}\begin{split}(\varphi^*N^*_{\FF}|_D)^2 &= (\varphi^*N^*_{\FF})^2\cdot m_2H_2 \cdots m_{n-1}H_{n-1} = 0 \qquad \mbox{(by the equality \eqref{BBott})} \\ &= L^2 + (\sum a_iC_i)^2 = (\sum a_iC_i)^2.\end{split}\end{equation} Since the Gram matrix of $\{C_i\}$ is negative definite, we obtain that $\sum a_iC_i = 0$, that is, $a_i = 0$ for all $i$, so that $\varphi^*N^*_{\FF}|_D = L \equiv 0$. By the Lefschetz hyperplane section theorem the map $i^* \colon H^2(\widetilde X, \mathbb{C}) \to H^2(D, \mathbb{C})$ is injective, therefore we have $\varphi^*N^*_{\FF} \equiv 0$ on $\widetilde X$. However, this implies $\nu(\varphi^*K_X) = \nu(\varphi^*K_{\FF}) = n$, which contradicts our assumptions.
\textit{Case $\nu(N_{\widetilde\FF}^*|_D) = 1, \kappa(N_{\widetilde\FF}^*|_D) = 1$}. In this case we apply a result of Bogomolov \cite[Lemma 12.4]{Bog79}
and obtain that $\widetilde\FF|_D$ is algebraically integrable. However, the algebraic reduction of $\widetilde\FF|_D$ is induced by that of $\widetilde\FF$ (see \cite[Lemma 4]{PS20}), so the algebraic rank of $\widetilde\FF$ has to be positive, which is not the case by assumption.
\textit{Case $\nu(N_{\widetilde\FF}^*|_D) = 1, \kappa(N_{\widetilde\FF}^*|_D) = -\infty$}. By a theorem of Touzet \cite[Theorem 1]{Tou16}, there exists a map $f \colon D \to M_H$, where $M_H = \mathbb{D}^N/\Gamma$ is a Hilbert modular variety, and $\widetilde\FF|_D$ is the pullback via $f$ of one of the Hilbert modular foliations on $M_H$. The map $f$ is constructed from a monodromy representation $$\rho_D \colon \pi_1(D\setminus \mathrm{Supp}(N_D)) \to PSL_2(\mathbb{C})$$ using the Corlette--Simpson correspondence. Here $N_D$ is the negative part in the Zariski decomposition of $N^*_{\widetilde\FF}|_D$. By our assumption on $D$ we have $N_D = N_{\widetilde X}|_D$ where $N_{\widetilde X}$ is a divisor on $\widetilde X$ with $\mathrm{Supp}(N_{\widetilde X})$ being $\widetilde\FF$-invariant. Applying the Lefschetz hyperplane section theorem for quasi-projective varieties \cite[Theorem 1.1.1]{HL85}, we obtain that $$\pi_1(D\setminus(\mathrm{Supp}(N_D))) \simeq \pi_1(\widetilde X\setminus\mathrm{Supp}(N_{\widetilde X})).$$ Therefore we have a representation of $\pi_1(\widetilde X\setminus\mathrm{Supp}(N_{\widetilde X}))$ satisfying all the assumptions used in the proof of Theorem $1$ in \cite{Tou16}. By the same argument as in \textit{loc. cit.} we can construct a map $f_{\widetilde X} \colon \widetilde X \to M_H$ such that $\widetilde \FF$ is induced via $f_{\widetilde X}$ by one of the Hilbert modular foliations on $M_H$. In particular, the conormal bundle $N^*_{\widetilde\FF}$ is pseudoeffective. The map $f_{\widetilde X}$ has to be generically finite onto its image, since $\FF$ is purely transcendental.
Finally, the conormal bundle $N^*_{\FF} = \varphi_*N^*_{\widetilde\FF}$ is pseudoeffective as well. From the above analysis it follows that the only possible case is $\nu(N^*_{\FF}) = 1$ and $\kappa(N^*_{\FF}) = -\infty$. Applying \cite[Theorem 1]{Tou16}, we obtain the desired conclusion.
\end{proof}
Finally, we list some questions which are natural to ask in view of our results.
\begin{question} Let $\FF$ be a purely transcendental foliation of codimension $1$ on a threefold $X$ of general type. Suppose that $\FF$ has canonical singularities and $\nu(K_{\FF}) < 3$. Is the conormal (or log conormal, for some $\FF$-invariant boundary) bundle always pseudoeffective in this case? The logarithmic version of Touzet's theorem \cite[Theorem 2]{Tou16} gives an affirmative answer to this question for singular Hilbert modular foliations.
\end{question}
\begin{question} Let $\FF$ be a foliation as in the previous question. Can the numerical dimension be equal to $1$? More generally, if $X$ is an $n$-dimensional variety of general type, what is the minimal numerical dimension of a foliation $\FF$ on $X$?
\end{question}
\begin{question} Let $\FF$ be a codimension $1$ foliation with canonical singularities on a projective threefold. Suppose that $\kappa(K_{\FF}) \geqslant 0$; does it follow that $\kappa(K_{\FF}) = \nu(K_{\FF})$?
\end{question}
\subsection{Dataset}
We evaluate our method MACDA on the Davis dataset \cite{Davis2011ComprehensiveSelectivity},
which contains the drug-target binding affinity of 442 target proteins
and 72 drugs. We use $pK_{D}$ (log kinase dissociation constant) to measure the binding affinity between the target protein and
the drug molecule, similar to \cite{Ozturk2018DeepDTA:Prediction}. The drug-target pairs between Tyrosine-protein
kinase ABL1 (Human) and 50 drugs in the training set are chosen to
generate counterfactual instances. We choose ABL1 because crystallized complexes of ABL1 with various drugs are available for evaluation.
\subsection{Baselines}
We compare MACDA with 2 baseline methods.
First, for \textbf{Joint-List}, we choose the top ten drug and protein counterfactual instances having highest $\Delta$ affinity and similarity separately (i.e., top 10 for drugs and top 10 for proteins). Then, the two lists are joined to form drug-protein counterfactual instances. Second, for \textbf{MA-MEG}, we extend the molecule counterfactual generation MEG framework \cite{Numeroso2020ExplainingCounterfactuals}
to the drug-target counterfactual generation task. As the MEG framework
only has a single agent handling the optimization for drug molecule,
we add another agent handling the protein sequence optimization. The
protein agent has the action space described in Sec.~\ref{sec:protgen}.
The protein agent calculates and updates its Q-function in the same
manner as the drug agent. Two agents work independently to optimize
the common reward function (see Eq.~(\ref{eq:reward})).
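The Joint-List baseline can be sketched in a few lines: rank drug and protein counterfactual candidates separately, then take the cross product of the two top-$k$ lists. The function name and the `(candidate, score)` tuple layout below are illustrative assumptions, not the exact code used in our experiments.

```python
# Illustrative sketch of the Joint-List baseline: rank drug and protein
# counterfactual candidates separately, then pair the two top-k lists.
def joint_list(drug_cands, prot_cands, k=10):
    """Each candidate list holds (candidate, score) tuples; the score plays
    the role of the per-modality Delta-affinity/similarity ranking criterion."""
    top_drugs = sorted(drug_cands, key=lambda c: c[1], reverse=True)[:k]
    top_prots = sorted(prot_cands, key=lambda c: c[1], reverse=True)[:k]
    # Cross join: every top drug paired with every top protein.
    return [(d, p) for d, _ in top_drugs for p, _ in top_prots]
```

Because the two lists are ranked independently, nothing in this construction accounts for drug-protein interaction, which is exactly the weakness the baseline is meant to expose.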
\subsection{Implementation details}
The MACDA framework is implemented in Python using Pytorch. GraphDTA-GCNNet
\cite{Nguyen2020GraphDTA:Networks} is used as a drug-target binding
affinity prediction model because of its simplicity and high performance.
GraphDTA-GCNNet receives the drug molecule graph and protein sequence
as the inputs. The drug molecule observation in MACDA framework is
the drug fingerprint. The protein observation in MACDA framework is
the alphabet sequence encoded to integer sequence. The protein sequence
length is fixed at 1000 residues. To follow the similarity constraint and alanine scanning procedure \cite{Numeroso2021MEG:Networks}, we set the original drug molecule and protein sequence as the starting point and set the episode length to 1. The balance coefficients $\alpha_{r} = 1.0$, $\alpha_{d} = 0.05$, and $\alpha_{p} = 0.01$ in Eq.~(\ref{eq:reward}) are chosen based on our experience.
For each drug-target instance, the top ten counterfactual instances
with the highest reward are chosen. The hyperparameters in the experiment
are shown in Table~\ref{tab:hyperparam}. The hyperparameters are chosen based on our experience.
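For concreteness, a weighted-sum reward with the stated coefficients might look as follows; the exact functional form of Eq.~(\ref{eq:reward}) is assumed here for illustration, not quoted from the implementation.

```python
ALPHA_R, ALPHA_D, ALPHA_P = 1.0, 0.05, 0.01  # balance coefficients from the text

def reward(delta_affinity, drug_sim, prot_sim):
    # Assumed weighted-sum form of the common reward: trade off the change
    # in predicted binding affinity against drug and protein similarity.
    return ALPHA_R * delta_affinity + ALPHA_D * drug_sim + ALPHA_P * prot_sim
```

With these weights, the affinity-change term dominates and the similarity terms act as mild regularizers.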
\begin{table}[h]
\centering{}\caption{The hyper-parameters used in the experiments.\label{tab:hyperparam}}
\begin{tabular}{p{7cm}p{1cm}}
\toprule
Hyper parameters & Value\tabularnewline
\midrule
$\gamma$ & 0.99\tabularnewline
Batch size & 1024 \tabularnewline
Policy learning rate & 0.001 \tabularnewline
Critic learning rate & 0.001\tabularnewline
Number of episode & 10000\tabularnewline
\bottomrule
\end{tabular}
\end{table}
\subsection{Evaluation metrics}
The methods are evaluated using four metrics: average
drug encoding similarity, average protein encoding similarity, average $\Delta_{joint}$ between the predicted binding affinities of the
original and counterfactual instances, defined in Eq.~(\ref{eq:deltajoint}), and drug-likeness (QED) \cite{Bickerton2012QuantifyingDrugs}. The similarity
score is defined in Eq.~(\ref{eq:sim}). The protein and drug similarities, together with the $\Delta$ affinity, evaluate how good the generated counterfactual is; these metrics follow the counterfactual definition in Sec.~\ref{sec:problem}. The $\Delta$ affinity metric ensures that the generated counterfactual produces a substantial change in the predicted affinity, while the drug and protein encoding similarities estimate how closely the generated drug and protein resemble the original instance. The QED assesses the validity of the drug counterfactual
instance \cite{Bickerton2012QuantifyingDrugs}. Because we use alanine scanning as the protein single-point mutation, the protein sequence does not change significantly; therefore, a separate protein sequence validation is not necessary.
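A common choice for a fingerprint-based drug similarity is the Tanimoto coefficient; the sketch below assumes fingerprints are given as sets of "on" bit indices, which is an illustrative convention (the exact similarity of Eq.~(\ref{eq:sim}) may differ).

```python
def tanimoto(fp_a, fp_b):
    # Tanimoto similarity between two fingerprints represented as sets of
    # set-bit indices: |A ∩ B| / |A ∪ B|; identical fingerprints score 1.
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0
```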
\section{Introduction}
\input{intro.tex}
\section{Related works}
\input{relatedworks.tex}
\section{Methods and materials}
\subsection{Problem definition}
\label{sec:problem}
\input{method_problemstatement}
\subsection{Counterfactual generation with RL}
\label{sec:SingleAC}
\input{method_CF_RL}
\subsection{Drug-protein pair counterfactual generation with MARL}
\input{method_overview.tex}
\subsubsection{Available drug actions}
\label{subsec:Drug-gen}
\input{method_drug.tex}
\subsubsection{Available protein actions} \label{sec:protgen}
\input{method_protein.tex}
\subsubsection{Multi-agent actor-attention-critic for counterfactual generation}
\label{sec:maac}
\input{method_MAAC.tex}
\input{exp.tex}
\section{Results and Discussion}
\input{result_discuss}
\section{Conclusion}
\input{conclusion.tex}
\bibliographystyle{natbib}
\subsubsection{Actor-Critic RL}
\textbf{Actor-critic} In this learning framework,
the agent learns to maximize the expected discounted return over the
future $T$ steps: $J=\mathbb{E}\left[\sum_{t=1}^{T}\gamma^{t}r^{t}\right]$
for discount factor $\gamma\in[0,1]$. To optimize the agent policy
with respect to the expected discounted return, the gradient is estimated
as:
\begin{equation}
\nabla_{\theta}J(\pi_{\theta})=\nabla_{\theta}\log\pi_{\theta}(a_{t}|s_{t})R(t)
\end{equation}
where $R(t)=\sum_{t'=t}^{\infty}\gamma^{t'-t}r^{t'}(s^{t'},a^{t'})$.
However, due to the high variance of $R(t)$, the critic function
$Q_{\psi}(s_{t},a_{t})$ is used instead.
\begin{equation}
Q_{\psi}(s_{t},a_{t})=\mathbb{E}_{a_{1}\sim\pi_{1},...,a_{N}\sim\pi_{N},s\sim T}[R(t)]
\end{equation}
The gradient $\nabla_{\theta}J(\pi_{\theta})$ is replaced as:
\begin{equation}
\nabla_{\theta}J(\pi_{\theta})=\nabla_{\theta}\log(\pi_{\theta}(a_{t}|s_{t}))Q_{\psi}(s_{t},a_{t})
\end{equation}
The $Q_{\psi}(s_{t},a_{t})$ function is learned by minimizing the
TD loss:
\begin{align}
\mathcal{L}_{Q}(\psi) & =\mathbb{E}_{(s,a,r,s')\sim D}\left[(Q_{\psi}(s,a)-y)^{2}\right]\label{eq:critic_target}\\
y & =r(s,a)+\gamma\mathbb{E}_{a'\sim\pi(s')}[Q_{\Tilde{\psi}}(s',a')]
\end{align}
where $y$ is the target and $Q_{\Tilde{\psi}}$ is the target Q-value
function, a slowly updated copy of the Q-function used to stabilize
the TD target.
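For a single sampled transition, the TD target and squared error above reduce to a couple of lines. This is a minimal sketch: `q_next` stands for the sampled target-network value $Q_{\Tilde{\psi}}(s',a')$.

```python
def td_target(r, q_next, gamma=0.99):
    # One-step TD target y = r(s, a) + gamma * Q_target(s', a'),
    # with q_next the target network's value at the sampled next action.
    return r + gamma * q_next

def critic_loss(q, y):
    # Squared TD error (Q_psi(s, a) - y)^2 for a single transition;
    # in practice this is averaged over a minibatch from the replay buffer.
    return (q - y) ** 2
```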
\textbf{Soft Actor-Critic} To avoid premature convergence to a local optimum,
the entropy term $-\alpha\log\pi_{\theta}(a|s)$ is incorporated into the policy gradient:
\begin{align*}
\nabla_{\theta}J(\pi_{\theta}) & =\mathbb{E}_{s\sim D,a\sim\pi}\left[\nabla_{\theta}\log\pi_{\theta}(a|s)Q^{*}(a,s)\right]\\
Q^{*}(a,s) & =Q_{\psi}(s,a)-\alpha\log\pi_{\theta}(a|s)-b(s)
\end{align*}
where $b(s)$ is a state-dependent baseline and $\alpha>0$ is a trade-off coefficient. Then the target $y$ in Eq.~(\ref{eq:critic_target})
is updated as:
\begin{equation}
y=r(s,a)+\gamma\mathbb{E}_{a'\sim\pi(s')}\left[Q_{\Tilde{\psi}}(s',a')-\alpha\log\pi_{\Tilde{\theta}}(a'|s')\right]
\end{equation}
where $\Tilde{\psi}$ and $\Tilde{\theta}$ are the network parameters
of the target critics and policies.
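The entropy-regularized target can be sketched the same way; `logp_next` stands for $\log\pi_{\Tilde{\theta}}(a'|s')$, and the default value of $\alpha$ is illustrative.

```python
def soft_td_target(r, q_next, logp_next, gamma=0.99, alpha=0.2):
    # y = r + gamma * (Q_target(s', a') - alpha * log pi(a'|s')): the
    # TD target with an entropy bonus that discourages premature determinism.
    return r + gamma * (q_next - alpha * logp_next)
```

Setting `alpha=0` recovers the plain actor-critic target, which makes the role of the entropy term easy to isolate in ablations.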
\subsubsection{Multi-agent reinforcement learning (MARL)}
\label{sec:MARL} The multi-agent reinforcement learning framework
is described as a Markov decision game with a tuple $\langle N,S,\{O_{i}\}_{i\in N},\{A_{i}\}_{i\in N},P,\{R_{i}\}_{i\in N}\rangle$,
where $N$ is the number of agents, $S$ is the set of states, $\{O_{i}\}_{i\in N}$
is the observation space of the $N$ agents, $\{A_{i}\}_{i\in N}$ is the joint
action space of the $N$ agents, and $\{R_{i}\}_{i\in N}$ are the rewards of the agents.
Agent $i$ chooses an action based on the policy function $\pi_{\theta_{i}}:O_{i}\rightarrow P(A_{i})$,
where $P(A_{i})$ is the distribution over the action set $A_{i}$ given
observation $O_{i}$. The state $s$ and joint action $A$ then lead
to the next state $s'$ according to the transition probability function $P$. Agent
$i$ receives a reward depending on the state and the actions of all
agents, $R_{i}:S\times\{A_{i}\}_{i\in N}\rightarrow\mathbb{R}$.
\subsection{Drug-target counterfactual problem formulation}
Given the drug $D$ and the target protein $P$, the counterfactual instance of the drug-target complex ($D$--$P$) consists of the drug $\Tilde{D} = D + \Delta D$ and the target protein $\Tilde{P} = P + \Delta P$.
We denote the drug-target binding affinity prediction model as:
\begin{equation}
A=\mathcal{F}_{\theta}(P,D),
\end{equation}
The counterfactual instance of the drug-target complex satisfies two properties: a substantial change in the binding affinity and a minimal change in the drug and target protein. Therefore, counterfactual instance generation can be formulated as a maximization problem:
\begin{equation}
\argmax_{\Tilde{D}, \Tilde{P}} \mathcal{L}(\mathcal{F}_{\theta}(P,D), \mathcal{F}_{\theta}(\Tilde{P},\Tilde{D})) + \mathcal{K}(P,\Tilde{P}) + \mathcal{K}(D,\Tilde{D})
\end{equation}
where $\mathcal{L}$ measures the change in the predicted affinity and $\mathcal{K}$ is the similarity function.
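The objective can be sketched as a plain scoring function. Taking $\mathcal{L}$ to be the absolute difference in predicted affinity and $\mathcal{K}$ a similarity in $[0,1]$ are both assumptions made here for illustration.

```python
def cf_score(aff_orig, aff_cf, prot_sim, drug_sim):
    # Score to maximize over candidate pairs (D~, P~): change in predicted
    # affinity plus protein- and drug-similarity terms.
    return abs(aff_orig - aff_cf) + prot_sim + drug_sim
```

A search procedure (here, the multi-agent RL agents) then proposes candidate pairs and keeps those with the highest score.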
\subsection{Drug-target affinity models}
Drug-target binding affinity, measured by a dissociation constant
$K_{d}$, indicates the strength of the binding force between the target
protein and its ligand (drug or inhibitor). Drug-target binding affinity
prediction methods can be categorized into two main approaches: structural
approach and non-structural approach \cite{Thafar2019ComparisonAffinities}.
The structural approach \cite{Meng1992AutomatedEvaluation,Jorgensen1983ComparisonWater,Pullman2013IntermolecularForces,Raha2007TheDesign}
uses the 3D information of the protein structure and ligand to run
a drug-target interaction simulation. The non-structural
approach \cite{Nguyen2020GraphDTA:Networks,Ozturk2018DeepDTA:Prediction,Ozturk2019WideDTA:Affinity,Tri2020GEFA:Prediction}
uses other information, such as protein sequence, atom valence, and
hydrophobicity, drawn from existing databases to
predict the binding affinity.
In MACDA, we take the non-structural approach.
\subsection{Explaining deep neural networks for DTA}
Recently, high accuracy DTA models are often based on deep neural networks, which are mostly black-box. This poses a great question of how to explain the behaviours of the deep models, and distill the explicit knowledge from them.
There are two general approaches relevant to the explanation of drug-target affinity models: feature attribution and graph-based
methods.
Feature attribution measures the relevance score of the input feature
with respect to the predicted affinity score $y$, either using the gradient
\cite{Preuer2019InterpretableDiscovery,Pope2019ExplainabilityNetworks}
or a surrogate model \cite{Rodriguez-Perez2020InterpretationValues}. Gradient-based
methods take advantage of the derivative of the output with respect
to the input. \citet{McCloskey2019UsingChemistry} use integrated gradients
on graph convolution model trained on the molecular binding synthesis
dataset to analyze the binding mechanism. However, gradient-based
methods could be misleading or prone to gradient saturation \cite{Ying2019GNNExplainer:Networks}.
Surrogate-based methods generate a surrogate explanatory model $g$
which is interpretable (linear or decision tree) and can approximate
the original function $f$.
Graph-based methods are suitable for DTA because the structure of
the drug molecule can be represented naturally with the graph structure.
The graph can be explained by subsets of edges and node features which
are important for model $f$ prediction of class $c$. For example,
GNNExplainer \cite{Ying2019GNNExplainer:Networks} finds a subgraph $G'$
of input graph $G$, and subfeature $X'$ of input feature $X$ which
maximizes the mutual information between $f(G,X)$ and $f(G',X')$,
but is argued to not generalize well \cite{Numeroso2020ExplainingCounterfactuals}.
Attention-based graph neural network \cite{Velickovic2018GraphNetworks,Shang2018EdgeNetworksb,Ryu2018DeeplyNetwork}
is a mechanism that can facilitate explanation such as the influence
of substructure to the solubility property \cite{Shang2018EdgeNetworksb},
visualizing the importance of neighbor nodes via an attention score.
In MACDA, we use a graph-based method.
\subsection{Reinforcement learning: Single and Multi-agent}
Reinforcement learning (RL) is the process of agent learning to find
the optimal action for situations that maximizes the long-term rewards
\cite{Sutton2018ReinforcementIntroduction}. For the single agent case, an agent
interacts with the environment. At each time step $t$, the agent
observes environment state $s_{t}\in S$ and chooses an action $a_{t}\in A$
using its policy $\pi(a_{t}|s_{t})$. By completing the action, the
agent receives a reward $r_{t}$ and changes the environment to the
next state $s_{t+1}$.
There are two main approaches to RL: value-based and policy-based.
The value-based methods estimate the value function for each state,
where the action is chosen based
on the action value.
The policy-based methods, on the other hand, optimize the agent's
policy as a function $\pi(a|s,\theta)$ where $\theta$ is the parameter. The two
methods can be combined, e.g., both value function and the policy
are estimated, as in the celebrated actor-critic methods \cite{Konda1999Actor-CriticTypeProcesses,Morimura2009AAlgorithm}.
In MACDA, we use an actor-critic method.
Multi-agent reinforcement learning (MARL) is the generalization from
a single agent to multiple agents that share the same environment.
Each agent interacts with the environment and with other agents. The
challenge of multi-agent RL is finding the optimal policy for each
agent with respect not only to the environment but also to the other agents'
policies. Many approaches to the multi-agent setting have been
proposed, ranging from cooperative communication \cite{Tan1993Multi-AgentAgents,Fischer2004HierarchicalCoordination}
to competitive environment \cite{Littman1994MarkovLearning,Perez-Liebana2019TheCompetition}.
In MACDA, we use cooperative communication.
\subsection{Using reinforcement learning for explanations}
Instead of assigning a relevance score to the input features of
the model, counterfactual explanation finds the simplest perturbed
instance with a maximal difference in model prediction outcome \cite{Wachter2017CounterfactualGDPR}.
The motivation here is that if a small change in part of the input causes a big change in the output, then that part of the input is important.
Reinforcement learning has been used previously to generate counter-factual explanations
\cite{Hendricks2016GeneratingExplanations,Li2016UnderstandingErasure}. \citet{Numeroso2020ExplainingCounterfactuals} generate a counterfactual
explanation for a molecule using MEG framework, a multi-objective
reinforcement learning, to maximize the prediction model output change
while maintaining the similarity between original and counterfactual molecule instances.
We use the multi-agent version of the MEG framework as the baseline
for our experiments. Compared to the multi-agent MEG framework, our proposed
MACDA framework uses a multi-agent actor-critic approach in which modifications to the drug and the protein are considered simultaneously.
\subsection{MACDA produces highly parsimonious explanations}
Table~\ref{tab:result} shows a comparison of our proposed MACDA framework with baseline methods, where our method exhibits state-of-the-art average $\Delta_{joint}$ affinity, drug similarity, protein similarity, and QED. The first baseline, \textbf{Joint-List}, simply chooses the top drug and protein counterfactuals, then joins them together. As such, it cannot find an interacting counterfactual pair. Thus, the average $\Delta_{joint}$ of the first baseline is negative. The second baseline, \textbf{MA-MEG}, does better, having an average $\Delta_{joint}$ that is above zero, but still underperforms MACDA.
QED measures the drug-likeness of the molecules, providing an estimate of the validity of a generated drug.
In a variant of MACDA, called \textbf{MACDA-QED}, we incorporate the QED into the reward function to increase the validity of the generated drug counterfactual. As a result, MACDA-QED increases the QED by $13.1\%$ compared to the QED of the original data. As the QED is higher, the drug similarity is also slightly higher. However, as a trade-off, the average $\Delta_{joint}$ is lower, since the QED imposes another constraint on the generated drug distribution. Therefore, the counterfactual drug distribution is closer to the original drug distribution.
The average drug similarity and protein similarity columns show us that the MACDA counterfactuals are very similar to the original drug/protein input. Yet, the changes have a big impact on $\Delta_{joint}$.
We consider MACDA explanations to be more parsimonious because small changes in the input can produce big changes in joint affinity prediction.
\begin{table}[h]
\caption{The average $\Delta_{joint}$, drug encoding similarity, target
encoding similarity, and QED. MACDA is highly parsimonious in that it can find small changes to the input that produce big changes to the joint predicted affinity.\label{tab:result}}
\resizebox{\columnwidth}{!}{{%
\begin{tabular}{p{2cm}ccccc}
\toprule
Method & Avg. $\Delta_{joint}$ $\uparrow$ & Avg. Drug Sim.$\uparrow$ & Avg. Protein Sim. $\uparrow$ & QED $\uparrow$\tabularnewline
\midrule
\textbf{Original drug/protein} & 0 & 1 & 1 & 0.4366 \tabularnewline
\textbf{Joint-List baseline} (Ours) & -0.0085 & 0.9208 & 0.9992 & 0.4051 \tabularnewline
\textbf{MA-MEG*} \cite{Numeroso2020ExplainingCounterfactuals} & 0.0178 & 0.9274 & \textbf{0.9993} & 0.4086 \tabularnewline
\textbf{MACDA} (Ours) & \textbf{0.0254} & 0.9209 & \textbf{0.9993} & 0.4056 \tabularnewline
\textbf{MACDA-QED} (Ours) & 0.0224 & \textbf{0.9481} & \textbf{0.9993} & \textbf{0.4586} \tabularnewline
\midrule
\multicolumn{5}{l}{(*) Our extended version of single-agent MEG framework}\tabularnewline
\bottomrule
\end{tabular}}{ } }
\end{table}
\subsection{MACDA explains DTA model binding site}
We measure the frequency of mutation points in the ABL1 kinase domain over 500 counterfactual instances made from the Davis dataset (see Fig.~\ref{fig:prot_mutation_pos}). The most common mutation points are MET.244, LYS.247, VAL.260, GLU.286, LYS.291, and VAL.448. Importantly, most of these are, or are in close proximity to, known binding sites of various drugs such as Nilotinib (LYS.247, GLU.286, LYS.291, and MET.318, see Fig.~\ref{fig:ABL1_Nilotinib}), Imatinib (LYS.247, GLU.286, and LYS.291), and Asciminib (VAL.448, see Fig.~\ref{fig:ABL1_Asciminib}).
\begin{figure}[h]
\centering{}\includegraphics[width=0.5\textwidth]{img/prot_mutation_pos-adta_align_joint_loss_des_aff}
\caption{The mutation point distribution in the kinase domain of ABL1 over 500 counterfactual instances
of 50 ABL1-drugs pairs. The top occurrences of alanine replacements are highlighted in red: residues MET.244, LYS.247, VAL.260, GLU.286, LYS.291, and VAL.448 cause the highest change in binding affinity. \label{fig:prot_mutation_pos}}
\end{figure}
\begin{figure}[h]
\centering{}\includegraphics[width=0.45\textwidth]{img/ABL1_Nilotinib.PNG}
\caption{Visualization of residue LYS.247, GLU.286, and LYS.291 of protein ABL1 in the ABL1-Nilotinib (PDB
3CS9). Note the proximity between Nilotinib and residues LYS.247, GLU.291. Another important residue, GLU.286, is the binding site of ABL1-Nilotinib. Figure best viewed in color. \label{fig:ABL1_Nilotinib}}
\end{figure}
\begin{figure}[h]
\centering{}\includegraphics[width=0.45\textwidth]{img/ABL1_Asciminib.PNG}
\caption{Visualization of residue VAL.448 of protein ABL1-Asciminib (PDB
5MO4). The residue VAL.448 is the binding site of ABL1-Asciminib. Figure best viewed in color.\label{fig:ABL1_Asciminib}}
\end{figure}
\subsection{ABL1-Nilotinib study case}
We choose Nilotinib for an in-depth case study because ABL1-Nilotinib has the smallest prediction error among all ABL1-drug pairs that also have the interaction and crystal structure available for assessment (c.f., PDB 3CS9).
Fig.~\ref{fig:Nilotinib_org_cf}
shows the 3 Nilotinib counterfactuals that have the highest $\Delta_{joint}$ when interacting with ABL1.
In counterfactuals (b) and (c), the (trifluoromethyl)benzenes and residue GLU.286 are modified, which leads to a high $\Delta_{joint}$. We interpret this to mean that the interaction between the (trifluoromethyl)benzenes and E286 contributes strongly to the DTA model prediction for ABL1-Nilotinib.
This also seems to be a biologically plausible binding mechanism.
Based on the crystal structure, there is a hydrogen bond between the (trifluoromethyl)benzenes and the GLU.286 (see Fig. \ref{fig:nilotinib_abl1}).
Counterfactual (d) suggests that the DTA model also takes the interaction between pyrimidine and MET.318 into account in its prediction.
\begin{figure}[h]
\centering{}\includegraphics[width=0.5\textwidth]{img/Nilotinib_cfs.png}
\caption{Panel (a) is the original Nilotinib molecule. Panels (b)-(d) are three high $\Delta_{joint}$ counterfactual
instances of Nilotinib coupled with counterfactual instances of ABL1 as protein reference. The modification is circled in red. The counterfactual samples explain the interaction between the drug substructures and residues. For example in panel (b), the (trifluoromethyl)benzenes and residue E286 are modified which leads to the highest $\Delta_{joint}$. Therefore, MACDA suggests that the interaction between (trifluoromethyl)benzenes and E286 contributes most to the DTA model decision. \label{fig:Nilotinib_org_cf}}
\end{figure}
\begin{figure}[h]
\centering{}\includegraphics[width=0.45\textwidth]{img/Nilotinib_cf_bond.PNG}
\caption{The hydrogen bond between GLU.286 of ABL1 and the (trifluoromethyl)benzenes group
of Nilotinib (PDB 3CS9). Our model suggests cooperative binding between this position which seems plausible given the crystal structure. Figure best viewed in color. \label{fig:nilotinib_abl1}}
\end{figure}
\section{Introduction}
A number of papers \cite{besp-08},\cite{besp-09} have recently appeared claiming that the
chromosphere-corona transition region of the solar atmosphere should be treated
in the non-collisional approximation.
These papers argue that ion-acoustic turbulence is the reason for the differentiation of solar
plasma into two regions with high
($ T_{\rm e} \stackrel{ > }{ _{\sim} } 10^6 $~K) and low ($ T_{\rm e} \stackrel{ < }{ _{\sim} } 10^4
$~K) electron temperature, respectively.
In this paper we present the solution of the equation of balance between
thermal heating and radiative cooling with classical electron conductivity. This solution
explains the differentiation of solar plasma into regions of high and low electron temperature. The characteristic
thickness of the chromosphere-corona transition region is greater than the thickness corresponding to the
mean free path for thermal electron collisions, and the resulting temperature distribution agrees with the observed
solar radiation.
\section{ Calculation of temperature distribution with depth }
In this section we obtain the temperature distribution $T(\xi)$ with depth $\xi$ for solar
plasma heated by a thermal flux, taking radiative losses into account.
{\em The thickness\/} is defined as
\eq{
\xi=\int_0^x n(x) dx,
}
where $n(x)$ is the number density of particles.
The thickness represents the number of atoms in the column of unit cross-section along the
direction of thermal flux.
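Numerically, the thickness is just the column integral of the number density; a trapezoidal-rule sketch:

```python
def column_thickness(n_values, x_grid):
    # xi = integral of n(x) dx along the flux direction, approximated with
    # the trapezoidal rule on the sampled grid (n_values[i] = n(x_grid[i])).
    xi = 0.0
    for i in range(len(x_grid) - 1):
        dx = x_grid[i + 1] - x_grid[i]
        xi += 0.5 * (n_values[i] + n_values[i + 1]) * dx
    return xi
```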
The distribution $T=T(\xi)$ can be obtained with the help of the equation of balance between
the
thermal heating and radiative cooling:
\eql{tepl}{n\frac{d}{d\xi} \left( \kappa n \frac{dT}{d\xi}
\right) = L(T) n^2 - P_\infty,}
where $\kappa$ is electron thermal conductivity,
\eql{kappa}{
\kappa \approx \kappa_0\ T^{5/2}=\frac{1.84 \times 10^{-5} \, T^{5/2}}{ \ln \Lambda} \, , }
\cite{Braginskij},\cite{spitzer},
$L=L(T)$ is the distribution of radiative energy losses with temperature
(see Fig.\ref{fig1}),
and $P_\infty=L(T_\infty)n^2$ is the stationary thermal heating at infinity.
The parameter $L$ of equation \re{tepl} is known:
$L=L(T)$ was taken from \cite{po4ta}.
To determine the number density dependence on the temperature $n=n(T)$ let us consider the cases
of slow and fast heating.
One has to specify boundary conditions for the temperature in the equation \re{tepl}:
\eq{\frac{dT}{d\xi}\Big|_{\xi \rightarrow \infty}=0, \quad
T\Big|_{\xi=0} =T_0.}
According to \cite{Shm},
to obtain the dependence $T=T(\xi)$
from the equation of the thermal balance \re{tepl} one has to multiply both parts of the
equation \re{tepl} by
$\kappa\frac{dT}{d\xi}$.
Then after simple manipulations we have the following expression for the thermal flux $F$:
\eql{F(T)}{F= \left( \int_{T_\infty}^T 2(L(T')-L(T_{\infty}))
\kappa n^2 dT' \right)^{\! 1/2} ,}
where the thermal flux $F$ is defined as:
\eql{defF}{F=-\kappa n \frac{dT}{d\xi}.}
To express \re{F(T)} in dimensionless form we multiply \re{F(T)} by
$$\frac{\xi_\i^2}{n_\i^2 \kappa_\i T_\i L(T_\i)},$$
where
$\xi_\i,n_\i,T_\i,\kappa_\i$ are the units of the depth, concentration,
temperature and electron conductivity, correspondingly.
We obtain
\eq{F= \left( \int_1^T 2 K_1 (L(T')-1)\,\frac{T'^{5/2}}{\ln\Lambda}\,n^2\, dT'
\right)^{\! 1/2} ,}
where $\ln \Lambda$ is the Coulomb logarithm,
\begin{displaymath}
\ln \Lambda = \left\{
\begin{array}{c}
\ln \left( 1.24 \times 10^4\ (T\ T_\infty)^{3/2}/(n\ n_\infty)^{1/2}\right)
,\ \ \ T_e < 5.8\times 10^5\ K,\\
\ln \left( 9.44 \times 10^6\ (T\ T_\infty)/(n\ n_\infty)^{1/2} \right)
,\ \ \ T_e \ge 5.8\times 10^5\ K
\end{array}
\right.
\end{displaymath}
and the dimensionless parameter $K_1$ is given by
\eq{K_1=\frac{\xi_\infty^2 L(T_{\infty})}{\kappa_\infty T_\infty}.}
Let us choose the units to be the following (recall that these values are simply our chosen units
and not the actual values at infinity):
$$T_\infty=10^4 K,$$
$$n_\infty=10^{10} cm^{-3},$$
\eq{ \xi_\infty = n_\infty l_\infty = 3.15 \times 10^{15}\, {\rm cm}^{-2} , \quad
l_\infty = \left(
\frac{\kappa_\infty T_\infty }{ L (T_{\infty}) n_\infty^2}
\right)^{1/2} = 3.15 \times 10^5\, {\rm cm} .}
From equation \re{defF} we obtain the following expression for $\xi$:
\eq{\xi=\int_T^{T_c}\frac{\kappa n}{F(T')}\, dT',}
We choose the origin of $\xi$-coordinate to be the point where the temperature has a fixed value
$T_c$, $T_c=10^6 K$ ( $\xi=0$ when $T=10^6 K$).
\newline
Finally, to get the distribution $T(\xi)$ we can find $F(T)$ from equation \re{F(T)}
and then, after putting $F(T)$ into \re{defF}, obtain $\xi(T)$ and thus $T(\xi)$.
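The procedure just described is straightforward to carry out numerically. The sketch below evaluates $F(T)$ by a midpoint-rule quadrature in dimensionless variables, with toy stand-ins for $L(T)$, $\kappa(T)$ and $n(T)$; the real $L(T)$ is tabulated radiative-loss data.

```python
import math

def heat_flux(T, L, kappa, n, T_inf=1.0, steps=200):
    # F(T) = sqrt( int_{T_inf}^{T} 2 (L(T') - L(T_inf)) kappa(T') n(T')^2 dT' ),
    # evaluated with a midpoint rule (dimensionless form, constants absorbed).
    h = (T - T_inf) / steps
    s = 0.0
    for i in range(steps):
        Tp = T_inf + (i + 0.5) * h
        s += 2.0 * (L(Tp) - L(T_inf)) * kappa(Tp) * n(Tp) ** 2 * h
    return math.sqrt(max(s, 0.0))
```

Given $F(T)$, the profile $\xi(T)$ follows from a second quadrature of $\kappa n/F$ between $T$ and $T_c$.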
In the case of fast heating, when concentration is constant throughout the layer depth ($n=const$),
let us set $n=n_\i$, or in dimensionless form $n=1$.
In the case of slow heating gas pressure is constant throughout the layer depth ($p=const$).
In this case,
because
\eq{p=nk_BT,}
concentration becomes \eq{n=\frac{p}{k_B T}} or in dimensionless form:
\eq{n=\frac{1}{T}.}
\newline
The dependence of the thermal flux on the temperature $F=F(T)$ for the cases $n=const$ and
$p=const$ is shown in Fig.\ref{fig2}. Here
\begin{displaymath}
F_\infty=\frac{\kappa(T_\infty,n_\infty)\ n_\infty}{\xi_\infty}=425\, {\rm erg/s}.
\end{displaymath}
The dependence of the temperature along the depth $T=T(\xi)$
for the cases $n=const$ and $p=const$
is shown in Fig.\ref{fig3}.
\section{The width of the transition region ($\delta\xi(T)$ and $\xi_e$ comparison) }
Classical collisional heat conduction \cite{Braginskij},\cite{spitzer} is valid if the following
condition is satisfied:
\eql{cond1}{\lambda_e<l_T=\frac{T_e}{|\nabla T_e|}}
where $\lambda_e$ is the mean free path for thermal electron collisions, $l_T$
is the characteristic scale of length on the temperature profile.
Condition \re{cond1} may be written as:
\eql{cond1-1}{\delta \xi > \xi_e,}
where $\delta \xi$ is the characteristic depth for equilibrium temperature distribution obtained
above, and $\xi_e$ is the thickness corresponding to the free path for thermal electron
collisions.
\eq{ \delta \xi (T) = \frac{ d \xi(T) }{ d \ln(T) },}
\eq{ \xi_e = n_e \lambda_e =
\frac{ k_{_{\rm B}}^2 T^2}{\pi e^4 \ln \Lambda}.}
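In CGS units the thickness $\xi_e$ can be evaluated directly from the formula above; the Coulomb-logarithm value used as a default here is a placeholder, since in the text $\ln\Lambda$ depends on $T$ and $n$.

```python
import math

K_B = 1.380649e-16    # Boltzmann constant, erg/K (CGS)
E_CHARGE = 4.803e-10  # elementary charge, statC (CGS)

def xi_e(T, coulomb_log=20.0):
    # xi_e = n_e * lambda_e = k_B^2 T^2 / (pi e^4 ln Lambda), in cm^-2.
    return K_B ** 2 * T ** 2 / (math.pi * E_CHARGE ** 4 * coulomb_log)
```

The quadratic scaling with $T$ is the reason the collisional condition $\delta\xi > \xi_e$ is hardest to satisfy at coronal temperatures.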
The dependence $\delta\xi=\delta\xi(T)$ for the cases $n=const$ and $p=const$ and the dependence
$\xi_e=\xi_e(T)$ are shown in Fig.\ref{fig4}.
As we can see from Fig.\ref{fig4}, $\delta\xi \gg \xi_e$; that is, the characteristic thickness over which the
temperature changes is 400--500 times greater than the thickness corresponding to the mean free path for thermal
electron collisions.
For example, at the temperature $T=10^5$~K the temperature changes over 35 km, while the mean free path
for thermal electron collisions is 70 m.
Thus the chromosphere-corona transition region of the solar atmosphere should be
treated in the collisional approximation.
\section{Stability of the solution}
Linear theory of the thermal instability was constructed in the monograph \cite{fild} by Field.
The uniform medium in the thermal and mechanical balance in linear theory is characterised by 3
dimensionless parameters $\alpha$, $\beta$, $\gamma$.
If the heating is such that the energy gain per second per gram of substance does not depend on
temperature and density, and the cooling of the medium is due to volume radiative energy
losses ($n^2\,L(T)$), then the parameter $\alpha$ depends only on temperature.
And in this case $\alpha$ is the logarithmic derivative of $L(T)$
\eql{alpha}{
\alpha(T)=\frac{d\ lnL}{d\ lnT}.
}
Parameter $\beta$ characterises comparative significance of the thermal conductivity. If
the conductivity is defined only by free electrons \re{kappa}, then
\eql{beta}{
\beta(T)=\frac{(\gamma-1)^2}{\gamma}\frac{\mu \kappa_0}{k_B^3}\sqrt{T}\ L(T)
}
That is, $\beta$ also depends only on the temperature; here $\mu$ is the effective
molecular weight (for plasma with cosmic abundance of elements $\mu \approx 1.44\, m_H$).
Dependence of $\alpha$ and $\beta$ on the temperature is shown in the Fig.\ref{fig5}.
We assume $\gamma=5/3$ in the equation \re{beta}.
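The logarithmic slope $\alpha(T)$ can be checked numerically with a centered finite difference in $\ln T$:

```python
import math

def log_slope(L, T, eps=1.0e-6):
    # alpha(T) = d ln L / d ln T, via a centered difference in ln T.
    lt = math.log(T)
    return (math.log(L(math.exp(lt + eps))) -
            math.log(L(math.exp(lt - eps)))) / (2.0 * eps)
```

Applied to a tabulated $L(T)$, this recovers the sign changes of $\alpha$ that delimit the unstable temperature ranges discussed below.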
In Fig.~\ref{fig5} we can see the regions where perturbations of the following types can be unstable:
1) \textit{isobaric} perturbations, for which $p=const$, are unstable in regions where $\alpha < 1$;
this mode is called the \textit{condensation} mode of thermal instability.
2) In regions where $\alpha < -3/2$, \textit{adiabatic} (entropy $=const$) perturbations
are unstable; this mode of thermal instability is named the \textit{wave} or
\textit{sonic} mode. 3) In regions with $\alpha < 0$, \textit{isochoric} ($n = const$) perturbations
are unstable \cite{152}.
Following \cite[Eq.~(16)]{fild}, let:
\eq{
\xi_\rho=\frac{n}{k_\rho}=\frac{\gamma^{1/2}}{\gamma-1}\frac{k_B^{3/2}}{\mu^{1/2}}\frac{T^{3/2}}{
L(T)}
,}
\eq{
\xi_T=\frac{n}{k_T}=\alpha^{-1}\ \xi_\rho
,}
\eq{
\xi_\kappa=\frac{n}{k_\kappa}=\beta\ \xi_\rho
.}
On scales smaller than the critical ones \cite[Eq.~(26)]{fild}, thermal instability is
stabilized by conductivity:
\eql{cc}{
\xi_{cc}=\xi_\rho\ \beta^{1/2}\ (1-\alpha)^{-1/2}
,}
\eql{cw}{
\xi_{cw}=\xi_\rho\ \beta^{-1/2}\ (-\alpha-\frac{1}{\gamma-1})^{-1/2}
}
for the condensation and wave modes. These critical thicknesses, which also depend only on the
temperature, are shown in Fig.~\ref{fig7}.
The characteristic thicknesses corresponding to the largest growth rate of the
thermal instability are (\cite{fild}, Eq.~(46)):
\eql{mc}{
\xi_{mc}=\left(\frac{(1-\alpha)^2}{\gamma^2}+\frac{\alpha(1-\alpha)}{\gamma}\right)^{-1/4}
\left(\xi_\rho \xi_{cc}\right)^{1/2}
,}
\eql{mw}{
\xi_{mw}=\left|\frac{\alpha-1}{\gamma}\right|^{-1/2}(\xi_\rho\ \xi_{cw})^{1/2}
}
for the condensation and wave modes, respectively; they are also shown in Fig.~\ref{fig7}.
The characteristic thicknesses corresponding to the stationary chromospheric heating in the cases
$p=const$ and $n = const$, for the obtained temperature profiles, are shown in Fig.~\ref{fig7}.
At every point of the distribution, the balance between thermal heating and radiative cooling
holds.
Let us look at Fig.~\ref{fig7}. If $\lambda<\lambda_{cr}$, the perturbations are smoothed out by
electron
conductivity;
if $\lambda \approx \lambda_{cr}$, the perturbations grow.
Note that our solutions (in the cases $p=const$ and $n=const$) are crossed by the curve
$\xi_{cc}$ corresponding to the condensation
instability. Hence, if at the places of instability there is a spontaneous change of the
temperature
profile, the profile returns to its original position.
Thus, the temperature profile cannot be flatter than the equilibrium profile.
It also cannot be steeper, because then the same temperature range would be accumulated over a
smaller thickness, so the emission in this temperature range would be lower.
\section{The temperature profile emission}
The emissivity of a certain region is characterised by the emission measure ($ME$):
\eql{defME}{
ME=\int\limits_0^x\ n_e^2\ dl,
}
where $x$ is the length of the emitting region along the line of sight and $dl$ is the length element.
The differential emission measure ($DME$) is the derivative of the emission measure with respect
to temperature:
\eql{defDME}{
DME=\frac{d\ ME}{d\ T}
.}
Let us rewrite $DME$ in terms of $\xi$:
\eql{DME}{
DME=\frac{d\ ME}{d\ T}=\frac{n_e^2\ dx}{dT}=n_e\ \frac{n_e\ d\ x}{d\ T}=n_e\ \frac{d\ \xi}{d\ T}
}
Now we can calculate the distribution $DME=DME(T)$ for the cases $n=const$ and $p=const$, using
the dependences $\xi(T)$ and $n_e=n(T)$.
The result, together with the measured $DME$ points (for different lines), is shown in Fig.~\ref{fig8}.
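Schematically, the $DME(T)$ tabulation reduces to a product of $n_e(T)$ and a temperature derivative of $\xi(T)$. The sketch below uses hypothetical power-law model forms for $\xi(T)$ and $n_e(T)$ (stand-ins for the computed profiles, with an arbitrary isobaric normalization), not the actual solution of this paper:

```python
import math

K_B = 1.38e-16  # Boltzmann constant, erg/K (CGS)

# hypothetical model dependences standing in for the computed profiles
def xi(T):
    # column measure xi(T); illustrative power-law form
    return 1.0e12 * (T / 1.0e5)**2.5

def n_e(T, p=1.0e-1):
    # isobaric case: n_e proportional to p/T (illustrative normalization)
    return p / (K_B * T)

def dme(T, dT_rel=1e-3):
    # DME = n_e * d(xi)/dT via a centered difference
    Tp, Tm = T * (1 + dT_rel), T * (1 - dT_rel)
    return n_e(T) * (xi(Tp) - xi(Tm)) / (Tp - Tm)

for T in (2.0e4, 5.0e4, 1.0e5, 3.0e5):
    print(f"T = {T:.1e} K, DME = {dme(T):.3e}")
```

Replacing the two model functions by the computed $\xi(T)$ and $n(T)$ profiles gives the theoretical $DME(T)$ curves compared with the measured points.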
\section{Conclusion}
The distribution of temperature with depth was found under the assumptions that electron
conductivity operates and that at every point of the distribution the balance between
thermal heating and radiative cooling holds. Our solution is stable (see Sec.~IV) and
the observed UV radiation can be explained by it (see Sec.~V).
The obtained results can be used to show that the temperatures are distributed in such a way
that
\textit{classical collisional} heat conduction is valid in the chromosphere--corona transition
region of the solar atmosphere, because the characteristic thickness over which the
temperature changes is greater than the mean free path of thermal electrons between
collisions.
\section{Acknowledgements}
O.P. thanks P. Dunin-Barkowski for useful discussions.
This work is partly supported by RFBR grants
10-02-01315, 08-02-01033-a and by the Russian President's Grant of Support for the Scientific
Schools NSh-65290.2010.2.
\section{\label{sec:Lagrangian} Heisenberg--Euler Lagrangian}
The Heisenberg--Euler Lagrangian in the weak field approximation is given by
\begin{equation}
\mathcal{L}_{HE}=\mathcal{L}_{0}+\mathcal{L}',\label{eq:Lagrangian}
\end{equation}
where $\mathcal{L}_{0}=-(1/16\pi)F_{\mu\nu}F^{\mu\nu}$ is the classical electromagnetic Lagrangian, $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the electromagnetic field tensor, $\mu,\nu=0,1,2,3$; $A_{\mu}$ is the 4-vector of the electromagnetic field, and $\mathcal{L}'$ is the radiative correction of the Heisenberg--Euler theory \cite{HeylHernquist,Electrodynamics}. In the weak field approximation, the correction has the form
\begin{align}
\mathcal{L}'=\kappa_{1}\left\{4 \mathcal{F}^2 + 7 \mathcal{G}^2 + \frac{90}{315}\mathcal{F}\left[16 \mathcal{F}^2 + 13 \mathcal{G}^2\right]\right\},\label{eq:Lagrangiandash}
\end{align}
where $\kappa_{1}=e^4/360 \pi^2 {m}^4$, $\mathcal{F}$ and $\mathcal{G}$ are the Poincar\'{e} invariants, which are defined in terms of the field tensor $F_{\mu\nu}$,
\begin{align}
&\mathcal{F}=\frac{1}{4}F_{\mu\nu}F^{\mu\nu}=\tfrac{1}{2}({\bf B}^2-{\bf E}^2),\\
&\mathcal{G}=\frac{1}{4}F_{\mu\nu}{\overset{\star}{F}}{}^{\mu\nu}={\bf E}\cdot {\bf B},\\
&{\overset{\star}{F}}{}^{\mu\nu}=\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma},
\end{align}
where ${\bf E}$ and ${\bf B}$ are electric and magnetic fields, $\varepsilon^{\mu\nu\rho\sigma}$ being the Levi-Civita symbol in four dimensions and we use the units $c=\hbar=1$.
The Lagrangian $\mathcal{L}'$ can be used if $\omega \ll m$ and $E \ll E_{S}$, where $\omega$ is the characteristic frequency of the radiation and the field
\begin{equation}
E_{S}=m^2_{e}/e \quad (m^2_{e}c^3/e\hbar \text{ in ordinary units}),
\end{equation}
is the critical field (or Schwinger field) of QED; $m_{e}$ is the electron rest mass and $e$ is the elementary charge.
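In ordinary (SI) units the critical field evaluates numerically as follows (a quick check using CODATA constant values):

```python
# Schwinger critical field E_S = m_e^2 c^3 / (e * hbar), SI units
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s

E_S = m_e**2 * c**3 / (e * hbar)
print(f"E_S = {E_S:.3e} V/m")  # about 1.3e18 V/m
```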
Expanding the Lagrangian (\ref{eq:Lagrangiandash}) in a series, we keep the terms up to the third order in the field amplitude within the weak field approximation to describe the singular solutions. The fourth-order contributions cancel each other in the calculation of the dispersive properties of the QED vacuum. The remaining contribution is of the same order as that from the Heisenberg--Euler Lagrangian expanded to the sixth order in the fields. The first two terms on the right-hand side of the Lagrangian (\ref{eq:Lagrangiandash}) describe four interacting photons, and the last two terms correspond to six-photon interaction.
The field invariants vanish, $\mathcal{F}=\mathcal{G}=0$, in the limit of co--propagating waves.
The field equations are given by
\begin{equation}
\partial_{\mu}(\partial\mathcal{L}_{HE}/\partial(\partial_{\mu}{\Phi}))-\partial{\mathcal{L}_{HE}}/\partial\Phi=0,
\end{equation}
where
\begin{equation}
\Phi=(-\phi,{\bf A}).
\end{equation}
The first pair of Maxwell field equations reads
\begin{align}
\nabla \cdot {\bf B}&=0,\nonumber\\
\nabla \times {\bf E}&=-{\partial_{t} {\bf B}}. \label{FirstMax}
\end{align}
The second pair can be found by varying the Heisenberg--Euler Lagrangian (\ref{eq:Lagrangian}) which gives the field equations. The second pair of equations can be written as
\begin{align}
\nabla \times {\bf H}&=\partial_{t} {\bf D},\nonumber\\
\nabla \cdot {\bf D}&=0, \label{SecondMax}
\end{align}
where
\begin{align}
{\bf D}&={\bf E}+4\pi{\bf P},\nonumber\\
{\bf H}&={\bf B}-4\pi{\bf M},\label{eq:DH}\\
{\bf P}&=\partial_{\bf E}{\mathcal{L}_{\textrm{HE}}},\nonumber\\
{\bf M}&=\partial_{{\bf B}}{\mathcal{L}_{\textrm{HE}}}\nonumber,
\end{align}
and ${\bf P}$ and ${\bf M}$ are the electric and magnetic polarization vectors. The derivatives are defined as
\begin{align}
\partial_{\bf E}&=(\partial_{E_{x}},\partial_{E_{y}},\dots),\nonumber\\
\partial_{\bf B}&=(\partial_{B_{x}},\partial_{B_{y}},\dots)\label{eq:derivEB}.
\end{align}
\section{\label{sec:DerivationEquations} Heisenberg--Euler field equations}
We work in the orthogonal coordinate system $(x,y,z)$, in which the two waves propagate along the $x$-axis.
For the ordinary wave we assume ${\bf E}=(0,0,E_{z})$ and ${\bf B}=(0,B_{y},0)$,
the simplest case with non--vanishing components $E_{z}$ and $B_{y}$, in order to investigate the crossed field case (${\bf E}\cdot{\bf B}=0$). The first of the following equations comes from the set (\ref{FirstMax}); the second is found by varying the Lagrangian (\ref{eq:Lagrangian}) with respect to the potential ${\bf A}$:
\begin{equation}
\partial_{t}B_{y}-\partial_{x}E_{z}=0,\label{eq:nonlinear1}
\end{equation}
\begin{align}
-\left[1+8\kappa_{1} E^2_{z}+4(E^2_{z}-B^2_{y})(\kappa_{1}-3\kappa_{2}E^2_{z})\right.&\nonumber\\
-6\left.\kappa_2(E^2_{z}-B^2_{y})^2\right]&\partial_{t}E_{z}\nonumber\\
+\left[1-8\kappa_{1} B^2_{y}+4(E^2_{z}-B^2_{y})(\kappa_{1}+3\kappa_{2}B^2_{y})\right.&\nonumber\\
-6\left.\kappa_2(E^2_{z}-B^2_{y})^2\right]&\partial_{x}B_{y}\nonumber\\
+4\left[2\kappa_{1}-3\kappa_{2}(E^2_{z}-B^2_{y})\right] E_{z}B_{y}(\partial_{t}B_{y}+\partial_{x}E_{z})=&0,\label{eq:nonlinearr}
\end{align}
where we denote $\kappa_{2}=(180/315)\,\kappa_{1}$, $E_{z}\equiv E$ and $B_{y}\equiv B$, and add weak linear amplitude corrections to the fields,
\begin{align}
E&=E_{0}+a(x,t),\nonumber\\
B&=B_{0}+b(x,t).\label{eq:EB}
\end{align}
The fields $E_{0}, B_{0}$ represent the constant electromagnetic background field, while $a(x,t)$ and $b(x,t)$ are functions of $x$ and $t$. Using expressions (\ref{eq:EB}), equations (\ref{eq:nonlinearr}) can be rewritten in the form
\begin{align}
\partial_{t}b(x,t)&=\partial_{x}a(x,t), \label{eq:abequations}\\
\alpha\,\partial_{t}a(x,t)&-\beta\,[\partial_{x}a(x,t)+\partial_{t}b(x,t)]-\gamma\,\partial_{x}b(x,t)=0,\label{eq:shift}
\end{align}
where the coefficients $\alpha, \beta$ and $\gamma$ are:
\begin{align}
\alpha&=1+8\kappa_{1} (E_{0}+a)^2\nonumber\\
+&4\left[(E_{0}+a)^2-(B_{0}+b)^2\right](\kappa_{1}-3(E_{0}+a)^2\kappa_{2})\\
-&6\kappa_{2}\left[(E_{0}+a)^2-(B_{0}+b)^2\right]^2,\nonumber\\
\beta&=4(E_{0}+a)(B_{0}+b)\left[2\kappa_{1}-3\kappa_{2}[(E_{0}+a)^2-(B_{0}+b)^2]\right],\\
\gamma&=1-8\kappa_{1} (B_{0}+b)^2\nonumber\\
+&4\left[(E_{0}+a)^2-(B_{0}+b)^2\right](\kappa_{1}+3(B_{0}+b)^2\kappa_{2})\label{eq:ABC}\\
-&6\kappa_{2}\left[(E_{0}+a)^2-(B_{0}+b)^2\right]^2.\nonumber
\end{align}
Assuming $a(x,t)=b(x,t)=0$ and the crossed field case $E_{0}=B_{0}$, we obtain
\begin{align}
\alpha_{0}&=1+8\kappa_{1} E^2_{0},\nonumber\\
\beta_{0}&=8\kappa_{1} E^2_{0},\label{eq:ABCcrossed}\\
\gamma_{0}&=1-8\kappa_{1} E^2_{0}.\nonumber
\end{align}
To find the wave phase velocity from the linearized equations (\ref{eq:abequations}) and (\ref{eq:shift}), we look for solutions of the form
\begin{equation}
a \propto \exp(-i\omega t + i qx),\;\; b \propto \exp(-i\omega t + i qx), \label{eq:ab}
\end{equation}
where $q$ is the wave number and $\omega$ is the frequency.
Substituting (\ref{eq:ab}) into the equations (\ref{eq:abequations}) and (\ref{eq:shift}) and dividing them by the wave number $q$, we obtain an algebraic set of equations for the wave velocity $v={\omega}/q$. Since the medium is dispersionless in our study (see Eq.~(\ref{eq:finalResult})), the phase and group velocities coincide and we denote them by a single symbol, $v=v_{ph}=v_{g}$, where $v_{ph}=\omega/q$ and $v_{g}=\partial{\omega}/\partial{q}$. This yields the equations
\begin{align}
a+vb&=0,\nonumber\\
v(b\beta_{0}-a\alpha_{0})-(a\beta_{0}+b\gamma_{0})&=0, \label{eq:algset}
\end{align}
whose solutions are
\begin{align}
v_{1,2}&=\frac{-\beta_{0}\pm \sqrt{\beta^2_{0}+\alpha_{0}\gamma_{0}}}{\alpha_{0}}.\label{eq:vphsolutionExtra}
\end{align}
Using relationships given by Eq.~(\ref{eq:ABCcrossed}) we find
\begin{align}
v_{1}&=-1,\nonumber\\
v_{2}&=\frac{\gamma_{0}}{\alpha_{0}}=\frac{1-8\kappa_{1} E^2_{0}}{1+8\kappa_{1} E^2_{0}}. \label{eq:vphasetwo}
\end{align}
This is the phase velocity $v=v_{1,2}$ of a wave propagating over the crossed background field in the weak field approximation of the Heisenberg--Euler theory. A similar problem is studied in \cite{Bialynicka, Marklund, DittrichGies} and \cite{Rozanov}, where a strong static homogeneous background field is considered. The obtained result is used below as a limiting case for the background crossed field.
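Eliminating $a=-vb$ from the first of equations (\ref{eq:vphasetwo}) turns the second into the quadratic $\alpha_{0}v^2+2\beta_{0}v-\gamma_{0}=0$, whose roots are (\ref{eq:vphsolutionExtra}). A quick numerical check, with an illustrative dimensionless value for $8\kappa_{1}E_{0}^{2}$, confirms $v_{1}=-1$ and $v_{2}=(1-8\kappa_{1}E_{0}^{2})/(1+8\kappa_{1}E_{0}^{2})$:

```python
import math

k = 0.01                          # illustrative value of 8*kappa1*E0^2
alpha0, beta0, gamma0 = 1 + k, k, 1 - k

# v_{1,2} = (-beta0 +- sqrt(beta0^2 + alpha0*gamma0)) / alpha0
disc = math.sqrt(beta0**2 + alpha0 * gamma0)  # equals 1 identically here
v1 = (-beta0 - disc) / alpha0
v2 = (-beta0 + disc) / alpha0

# residuals of the quadratic alpha0*v^2 + 2*beta0*v - gamma0 = 0
for v in (v1, v2):
    print(v, alpha0 * v**2 + 2 * beta0 * v - gamma0)
```

Note that $\beta_{0}^2+\alpha_{0}\gamma_{0}=1$ holds identically for the crossed-field coefficients (\ref{eq:ABCcrossed}), which is why $v_{1}=-1$ is exact.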
We now assume the coefficients $\alpha, \beta$ and $\gamma$ in the form:
\begin{align}
\alpha&=\alpha_{0}+\alpha_{a}a+\alpha_{b}b,\nonumber\\
\beta&=\beta_{0}+\beta_{a}a+\beta_{b}b,\label{eq:ABCdelta}\\
\gamma&=\gamma_{0}+\gamma_{a}a+\gamma_{b}b,\nonumber
\end{align}
where
\begin{align}
\alpha_{a}&=(\partial_{a} {\alpha})|_{a=0},\quad
\alpha_{b}=(\partial_{b}{\alpha})|_{b=0},\nonumber\\
\beta_{a}&=(\partial_{a}{\beta})|_{a=0},\quad
\beta_{b}=(\partial_{b}{\beta})|_{b=0},\label{eq:alphabeta}\\
\gamma_{a}&=(\partial_{a}{\gamma})|_{a=0},\quad
\gamma_{b}=(\partial_{b}{\gamma})|_{b=0}.\nonumber
\end{align}
We can identify the coefficients $\alpha_{a}, \beta_{a}, \gamma_{a}$ and $\alpha_{b}, \beta_{b}, \gamma_{b}$ by comparison with the general form of $\alpha, \beta, \gamma$ (\ref{eq:ABC}) for the crossed field $E_{0}=B_{0}$. This yields
\begin{align}
\alpha_{a}&=24 E_{0}(\kappa_{1} - \kappa_{2} E^2_{0})+48\kappa_{2}E_{0}(b^2+2E_{0}b)|_{a=0},\nonumber\\
\alpha_{b}&=-8E_{0}(\kappa_{1} - 3\kappa_{2}E^2_{0})+48\kappa_{2}E_{0}(a^2+2E_{0}a)|_{b=0},\nonumber\\
\beta_{a}&=4(E_{0}+b|_{a=0}) \left[2\kappa_{1}-6\kappa_{2}E^2_{0}-3\kappa_{2}(b^2+2E_{0}b)|_{a=0}\right],\nonumber\\
\beta_{b}&=4(E_{0}+a|_{b=0}) \left[2\kappa_{1}+6\kappa_{2}E^2_{0}-3\kappa_{2}(a^2+2E_{0}a)|_{b=0}\right],\nonumber\\
\gamma_{a}&=8E_{0}(\kappa_{1}+3\kappa_{2}E^2_{0})+48\kappa_{2}E_{0}(b^2+2E_{0}b)|_{a=0},\label{eq:koef}\\
\gamma_{b}&=-24E_{0}(\kappa_{1}+\kappa_{2}E^2_{0})+48\kappa_{2}E_{0}(a^2+2E_{0}a)|_{b=0}.\nonumber
\end{align}
The terms
\begin{align}
b|_{a=0}&=0,\quad
a|_{b=0}=0, \label{ordinary:conditions}
\end{align}
must vanish because they are nonlinear and would break the linear approximation we assume. We will use the conditions (\ref{ordinary:conditions}) to fix the integration constant while solving the non--linear equations.
\section{\label{sec:SolvingEquations} Self--similar solutions}
First, we consider the equations (\ref{eq:abequations}), (\ref{eq:shift}) for the ordinary wave with the functions $\alpha(a,b), \beta(a,b)$ and $\gamma(a,b)$ (\ref{eq:ABC}) in the linear approximation (\ref{eq:ABCdelta}). We solve the non--linear equations using the simple wave (Riemann wave) concept known in nonlinear wave theory \cite{KadomtsevKarpman, Kadomtsev, Whitham}.
That is, we assume the dependence $b=b(a)$,
and consequently $\partial_{t}b=({\rm d} b/{\rm d} a)\partial_{t}a,\;\partial_{x}b=({\rm d} b/{\rm d} a)\partial_{x}a$. Eqs.~(\ref{eq:abequations}), (\ref{eq:shift}) become
\begin{align}
\partial_{t}a&=\frac{{\rm d} a}{{\rm d} b} \partial_{x}a,\label{eq:partOne}\\
\partial_{t}a&=\frac{1}{\alpha}\left(2\beta+\gamma\frac{{\rm d} b}{{\rm d} a}\right) \partial_{x}a,\label{eq:partTwo}
\end{align}
Comparing the two equations, we obtain a quadratic equation for the function $b(a)$. It has the form
\begin{align}
\gamma\left(\frac{{\rm d}b}{{\rm d}a}\right)^2+2\beta\frac{{\rm d}b}{{\rm d}a}-\alpha=0,\label{eq:quadraticOrdinary}
\end{align}
and has two solutions
\begin{align}
\left(\frac{{\rm d}b}{{\rm d}a}\right)=\frac{-\beta\pm \sqrt{\beta^2+\alpha\gamma}}{\gamma}.\label{eq:solutionsI}
\end{align}
We use a weak but finite amplitude approximation, assuming that the solution has the form
\begin{align}
\left(\frac{{\rm d}b}{{\rm d}a}\right)=\nu,\quad \nu=\nu_{0}+\nu_{a}a+\nu_{b}b.\label{eq:linearSolutionsOO}
\end{align}
For the calculation, we use the first-order Taylor (tangent plane) approximation of $f$ at the point $(\alpha_{0}, \beta_{0},\gamma_{0})$,
\begin{align}
f(\alpha,\beta,\gamma)&=f(\alpha,\beta,\gamma)|_{\alpha_{0},\beta_{0},\gamma_{0}}+\partial_{\alpha}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}(\alpha-\alpha_{0})\nonumber\\
+&\partial_{\beta}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}(\beta-\beta_{0})+\partial_{\gamma}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}(\gamma-\gamma_{0}),
\end{align}
where ${{\rm d}b}/{{\rm d}a}=f(\alpha,\beta,\gamma)$.
As a result we obtain the coefficients
\begin{align}
\nu_{0}=&f|_{\alpha_{0},\beta_{0},\gamma_{0}}=\frac{-\beta_{0}\pm 1}{\gamma_{0}},\label{eq:nu0}\\
\partial_{\alpha}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}&=\pm\frac{1}{2},\label{eq:nu1}\\
\partial_{\beta}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}&=\frac{1}{\gamma_{0}}\left(-1\pm\beta_{0}\right),\label{eq:nu2}\\
\partial_{\gamma}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}&=\pm\frac{\alpha_{0}}{2\gamma_{0}}-\frac{\left(-\beta_{0}\pm 1 \right)}{\gamma_{0}^2},\label{eq:nu3}
\end{align}
and $\alpha-\alpha_{0}=\alpha_{a}a+\alpha_{b}b$, $\beta-\beta_{0}=\beta_{a}a+\beta_{b}b$ and $\gamma-\gamma_{0}=\gamma_{a}a+\gamma_{b}b$, where we have used $\beta^2_{0}+\alpha_{0}\gamma_{0}=1$.
The complete set of linear coefficients in (\ref{eq:linearSolutionsOO}) is
\begin{align}
\nu_{0}=&f|_{\alpha_{0},\beta_{0},\gamma_{0}},\nonumber\\
\nu_{a}=&\alpha_{a}f_{\alpha}+\beta_{a}f_{\beta}+\gamma_{a}f_{\gamma},\label{eq:nu}\\
\nu_{b}=&\alpha_{b}f_{\alpha}+\beta_{b}f_{\beta}+\gamma_{b}f_{\gamma},\nonumber
\end{align}
where the derivatives are denoted
\begin{align}
f_{\alpha}=&\partial_{\alpha}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}},\;
f_{\beta}=\partial_{\beta}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}},\;
f_{\gamma}=\partial_{\gamma}{f}|_{\alpha_{0},\beta_{0},\gamma_{0}}.\label{eq:fff}
\end{align}
Since the equation (\ref{eq:solutionsI}) has two solutions, we need to choose the physical one, corresponding to the case of two counter--propagating waves. We can do that knowing the phase velocity for this case, $v=v_{2}>0$ (\ref{eq:vphasetwo}), and the expression for $\nu_{0}$ (\ref{eq:nu0}). It shows that we need to choose the $-$ solution; the $+$ solution corresponds to two waves propagating in the same direction.
Therefore, evaluating $f_{\alpha}, f_{\beta}, f_{\gamma}$ (\ref{eq:fff}) by means of the expressions (\ref{eq:nu0}), (\ref{eq:nu1}), (\ref{eq:nu2}) and (\ref{eq:nu3}), we get
\begin{align}
f_{\alpha}&=-\frac{1}{2},\nonumber\\
f_{\beta}&=-\frac{1}{\gamma_{0}}\left(1 + \beta_{0}\right),\label{eq:ff}\\
f_{\gamma}&=-\frac{\alpha_{0}}{2\gamma_{0}}+\left(\frac{\beta_{0}+1}{\gamma^2_{0}}\right).\nonumber
\end{align}
Now the problem reduces to solving the differential equation (\ref{eq:linearSolutionsOO}). It is a linear first--order equation and can be solved by the integrating factor method, with the factor chosen as $m(a)=\exp(-\nu_{b}a)$. The dependence $b=b(a)$ is
\begin{equation}
\frac{1}{\nu_{b}}\exp{(-\nu_{b}a)}\left((\nu_{0}+\nu_{b}b)+\frac{\nu_{a}}{\nu_{b}}(\nu_{b}a+1)\right)=\delta,\label{eq:solutionImplicit}
\end{equation}
where $\delta$ is arbitrary constant. Therefore the function $b=b(a)$ has a form
\begin{equation}
b=\delta\,\exp(\nu_{b}a)-\frac{\nu_{a}}{\nu^2_{b}}(\nu_{b}a+1)-\frac{\nu_{0}}{\nu_{b}}.\label{eq:solutionExplicit}
\end{equation}
The remaining constant $\delta$ is determined by the conditions (\ref{ordinary:conditions}), which give
\begin{equation}
\delta=\frac{\nu_{a}+\nu_{0}\nu_{b}}{\nu^2_{b}}.\label{eq:constOrdinary}
\end{equation}
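As a sanity check (with arbitrary illustrative values of $\nu_{0},\nu_{a},\nu_{b}$, not the physical coefficients), the explicit solution (\ref{eq:solutionExplicit}) with this constant satisfies both ${\rm d}b/{\rm d}a=\nu_{0}+\nu_{a}a+\nu_{b}b$ and the condition $b|_{a=0}=0$:

```python
import math

nu0, nua, nub = -1.05, 0.3, -0.2   # illustrative coefficient values
delta = (nua + nu0 * nub) / nub**2

def b(a):
    # explicit solution of db/da = nu0 + nua*a + nub*b
    return delta * math.exp(nub * a) - nua / nub**2 * (nub * a + 1) - nu0 / nub

def db_da(a, h=1e-6):
    # centered finite-difference derivative
    return (b(a + h) - b(a - h)) / (2.0 * h)

a = 0.37
print(b(0.0))                                    # b(0) = 0
print(db_da(a) - (nu0 + nua * a + nub * b(a)))   # ODE residual, ~0
```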
Then the coefficients (\ref{eq:koef}) get a final form
\begin{align}
\alpha_{a}&=24 E_{0}(\kappa_{1} - \kappa_{2} E^2_{0}),\;
\alpha_{b}=-8E_{0}(\kappa_{1} - 3\kappa_{2}E^2_{0}),\nonumber\\
\beta_{a}&=8E_{0} \left[\kappa_{1}-3\kappa_{2}E^2_{0}\right],\;
\beta_{b}=8E_{0} \left[\kappa_{1}+3\kappa_{2}E^2_{0}\right],\label{eq:koeffinal}\\
\gamma_{a}&=8E_{0}(\kappa_{1}+3\kappa_{2}E^2_{0}),\; \gamma_{b}=-24E_{0}(\kappa_{1}+\kappa_{2}E^2_{0}).\nonumber
\end{align}
In order to stay within the weak amplitude approximation, we Taylor expand the first term in (\ref{eq:solutionExplicit}) to first order, $\exp{(\nu_{b}a)}\approx 1+\nu_{b}a$, which gives
\begin{equation}
b=\delta\,(\nu_{b}a+1)-\frac{\nu_{a}}{\nu^2_{b}}(\nu_{b}a+1)-\frac{\nu_{0}}{\nu_{b}}.\label{eq:solutionExplicitExp}
\end{equation}
After substituting (\ref{eq:constOrdinary}) into (\ref{eq:solutionExplicitExp}), we obtain the solution showing a linear relationship between $a$ and $b$:
\begin{equation}
b=\nu_{0}a,\label{eq:baOrdinary}
\end{equation}
where
\begin{equation}
\nu_{0}=-1/v,\label{eq:nuphase}
\end{equation}
and $v$ is the phase velocity (\ref{eq:vphsolutionExtra}).
We now return to the equations (\ref{eq:partOne}) and (\ref{eq:partTwo}).
It is more convenient to use Eq.~(\ref{eq:partOne}), which we rewrite as
\begin{equation}\label{eq:finalOrdinary}
\partial_{t}a-\frac{1}{\nu}\partial_{x}a=0,
\end{equation}
where it is denoted
\begin{equation}
\nu=\nu_{0}+\nu_{a}a+\nu_{b}b.
\end{equation}
We perform another linearization of $1/\nu$ as
\begin{align}
f(\nu)&=f(\nu)|_{\nu_{0}}+\partial_{\nu}{f}|_{\nu_{0}}(\nu-\nu_{0}),
\end{align}
and obtain
\begin{align}
\frac{1}{\nu}=\frac{1}{\nu_{0}}\left(1-a\frac{\nu_{a}+\nu_{0}\nu_{b}}{\nu_{0}}\right).\label{eq:1nu}
\end{align}
It is convenient to rewrite Eq.~(\ref{eq:finalOrdinary}) with $1/\nu$ (\ref{eq:1nu}) to a final form:
\begin{equation}\label{eq:finalOrdinary2}
\partial_{t}a+f(a)\partial_{x}a=0,
\end{equation}
with
\begin{equation}\label{eq:finalResult}
f(a)=-\frac{1}{\nu_{0}}\left[1-a\frac{(\nu_{a}+\nu_{0}\nu_{b})}{\nu_{0}}\right]
\end{equation}
or
\begin{equation}\label{eq:finalResult1}
f(a)=v+a\frac{(\nu_{a}+\nu_{0}\nu_{b})}{\nu_{0}^2},
\end{equation}
where $v$ is the phase velocity of the electromagnetic wave.
The equation (\ref{eq:finalOrdinary2}) can be rewritten for the function
\begin{equation}
\bar{a}=\frac{(\nu_{a}+\nu_{0}\nu_{b})}{\nu_{0}^2}\, a,\label{eq:abar}
\end{equation}
in a standard form, \cite{KadomtsevKarpman,Kadomtsev},
\begin{equation}\label{eq:finalOrdinary22}
\partial_{t}\bar{a}+(v+\bar{a})\partial_{x}\bar{a}=0.
\end{equation}
This is the final equation, which we analyze below. The form of the equation (\ref{eq:finalOrdinary2}) corresponds to the equation of a non--linear wave without dispersion
\cite{Kadomtsev}. Wave steepening takes place: the ordinary wave overturns, as we demonstrate in detail, together with the higher--order harmonics analysis, in the next Section~\ref{sec:AnalyzingEquations}.
In the limit $a=0$, the wave moves with the phase velocity of the unperturbed case.
\section{\label{sec:AnalyzingEquations} Properties of self--similar solutions}
In this Section, the equation (\ref{eq:finalOrdinary2}) is analyzed.
\subsection{\label{sub:char} Analyzing the equations using characteristics}
The equation (\ref{eq:finalOrdinary2}) can be solved by the method of characteristics. The characteristic equations for Eq.~(\ref{eq:finalOrdinary2}) are
\begin{equation}
\frac{{\rm d}x}{{\rm d}t}=f(a),\; \frac{{\rm d}a}{{\rm d}t}=0.
\end{equation}
Their solutions are $a(x,t)=A_{0}(x_{0})$ and $x=f(A_{0}(x_{0}))t+x_{0}$. The function $a(x,t)$ is transferred along the characteristic $x_{0}$ without distortion. Therefore, for any differentiable function $A_{0}=A_{0}(x)$ we can write the solution $a$ in the form
\begin{equation}
a(x,t)=A_{0}(x_{0})=A_{0}[x-f(a(x,t))t],\label{eq:A}
\end{equation}
where $A_{0}$ is an arbitrary function determined by the initial condition, $a(x)|_{t=0}=A_{0}(x)$. We choose the arbitrary function as $a(x,t)=A_{0}(x_{0})=a_{m}\sin(kx_{0})$, giving
\begin{equation}
a(x,t)=a_{m}\sin{[k(x-f(a(x,t))t)]}.\label{eq:axt}
\end{equation}
\subsection{\label{sub:wave} The wave breaking}
Wave breaking is typical behavior of waves in nonlinear dispersionless media. The solution of equation (\ref{eq:finalOrdinary2}) can be written in the implicit form (\ref{eq:A}), with the Euler coordinate $x$ dependent on the Lagrange coordinate $x_{0}$ and time.
The location where the wave breaks is determined by the gradient of the function $a(x,t)$: the wave breaks when the gradient becomes infinite \cite{Panchenko}. We obtain this result by differentiating (\ref{eq:A}),
\begin{align}
\partial_{x}a=\frac{A'_{0}(x_{0})}{1+A'_{0}(x_{0})f'\,t},\quad t_{br}=-\frac{1}{A'_{0}(x_{0})f'},\label{eq:gradient}
\end{align}
where we denote $A'_{0}(x_{0})={\rm d}A_{0}/{\rm d}x_{0}$ and $f'=\partial_{a}f(a)$.
The gradient becomes infinite at the time $t_{br}$ when the denominator of (\ref{eq:gradient}) vanishes at some point $x_{br}$. At the breaking time $t_{br}$ the velocity $a(x_{br},t_{br})$ remains constant. Such a singularity is called wave breaking or the gradient catastrophe.
By using our ansatz for the solution (\ref{eq:axt}) we obtain
\begin{align}
A'_{0}(x_{0})&=a_{m}k\cos{(kx_{0})},\\
f'&=\frac{\nu_{a}+\nu_{0}\nu_{b}}{\nu^2_{0}},
\end{align}
therefore
the gradient (\ref{eq:gradient}) and the wave breaking time $t_{br}$ result in
\begin{align}
\partial_{x}a=\frac{a_{m}k\cos{(kx_{0})}}{1+a_{m}k f'\,\cos{(kx_{0})}t},\nonumber\\
t_{br}=-\frac{1}{a_{m}k\cos{(k x_{0})}\, f'}, \label{eq:time}
\end{align}
and at the coordinate $x_{0}$ where $a_{m}k\cos{(kx_{0})}$ is maximal, the velocity $a(x_{br},t_{br})=a_{m}\sin{[k(x_{br}-f(a(x_{br},t_{br}))t_{br})]}$ remains constant. In \cite{KadlecovaKornBulanov2019}, we showed the wave steepening evolving in time. Here, we concentrate on investigating the direction of the wave breaking in detail. The direction of the wave breaking depends on the sign of $f'$ in (\ref{eq:time}), which we discuss in the next subsection.
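For illustrative dimensionless parameters, the earliest breaking time follows directly from (\ref{eq:time}): the denominator of the gradient (\ref{eq:gradient}) vanishes first at the point $x_{0}$ where $f'\cos(kx_{0})$ is most negative, giving $t_{br}=1/(a_{m}k|f'|)$:

```python
import math

a_m, k, fprime = 1.0, 1.0, -0.35   # illustrative dimensionless values

# earliest positive breaking time, minimized over x0:
# t_br = 1/(a_m*k*|f'|), reached where f'*cos(k*x0) is most negative
t_br = 1.0 / (a_m * k * abs(fprime))
print(f"t_br = {t_br:.4f}")

x0 = 0.0                            # cos(k*x0) = 1; first breaking for f' < 0
den = 1.0 + a_m * k * fprime * math.cos(k * x0) * t_br
print(abs(den))                     # ~0: the gradient diverges
```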
\subsection{\label{sub:charbreak} Analyzing the character of the wave breaking}
We need to investigate the expression for
\begin{equation}
\bar{a}=f' a.\label{eq:koefi}
\end{equation}
The resulting electromagnetic wave propagates along the $x$--coordinate according to (\ref{eq:finalOrdinary2}), and the direction of the wave breaking is given by the sign of $f'$.
As noted above, $f(a)|_{a=0}=v>0$ is the phase velocity of the wave over the background field.
By substituting $\alpha_{0}$, $\beta_{0}$ and $\gamma_{0}$ (\ref{eq:ABCcrossed}) into $f_{\alpha}, f_{\beta}, f_{\gamma}$ (\ref{eq:ff}), we observe that it is convenient to express these functions in terms of the phase velocity $v$. It yields
\begin{align}
f_{\alpha}&=- \frac{1}{2},\;
f_{\beta}=-\frac{1}{v},\label{eq:ff11}\;
f_{\gamma}=\frac{1}{2}\frac{1}{v^2}.\nonumber
\end{align}
Then the coefficients $\nu_{a}, \nu_{b}$ are
\begin{align}
\nu_{a}&=\frac{4E_{0}}{v^2}\left[\kappa_{1}(1-2v-3v^2)+3\kappa_{2}E^2_{0}(v^2+2v-1)\right],\nonumber\\
\nu_{b}&=-\frac{4E_{0}}{v^2}\left[\kappa_{1}(3-2v-v^2)+3\kappa_{2}E^2_{0}(v-1)^2\right].
\end{align}
The function $f'$ then becomes
\begin{align}
f'=4E_{0}&\left[\kappa_{1}\left(-1-3v-3v^2+\frac{3}{v}\right)\right.\nonumber\\
+&3\left.\kappa_{2}E^2_{0}\left(v^2+3v-3+\frac{1}{v}\right)\right],\label{eq:f1res}
\end{align}
where $v=v_{2}$ (\ref{eq:vphasetwo}) as
\begin{equation}
v=\frac{1-8\kappa_{1} E^2_{0}}{1+8\kappa_{1} E^2_{0}}.\label{eq:velocity}
\end{equation}
We have obtained a more general formula for the steepening factor $f'$ than in \cite{KadlecovaKornBulanov2019}, where $f'=-2(4\epsilon^2_{2}+3\epsilon_{3})W^3$, $W^3=-2\sqrt{2}E^3_{0}$ and $\epsilon_{2}=8\kappa_{1}$.
If we substitute a Taylor expansion of $(\ref{eq:velocity})$ as
\begin{equation}
v\approx 1-16\kappa_{1}E^2_{0}+128\kappa^2_{1}E^4_{0}, \label{eq:Taylor}
\end{equation}
into (\ref{eq:f1res}) and look for the terms with $E^3_{0}$,
we obtain
\begin{equation}
f'=48E^3_{0}[12\kappa^2_{1}+\kappa_{2}],
\end{equation}
which corresponds to the result in \cite{KadlecovaKornBulanov2019} where the wave has rarefaction character.
We can rewrite the function $f'$ in a final form as
\begin{align}
f'=&\frac{4E_{0}}{v}\left\{3(\kappa_{1}+\kappa_{2}E^2_{0})-v(\kappa_{1}+9\kappa_{2}E^2_{0})\right.\nonumber\\
-&3v^2\left.(\kappa_{1}-3\kappa_{2}E^2_{0})-3v^3(\kappa_{1}-\kappa_{2}E^2_{0})\right\},\label{eq:f1explicit}
\end{align}
where the phase velocity is $v < 1$, the constants are $\kappa_{1}=\alpha/360\pi^2\times 1/E^2_{S}$ and $\kappa_{2}=\kappa_{1}\times 180/315$, and $\alpha=1/137$ is the fine structure constant. Without the scaling factor $1/E^2_{S}$, the constants have the values $\kappa_{1}\approx 2\times 10^{-6}$ and $\kappa_{2}\approx 10^{-6}$.
When the singularity is formed, the electromagnetic wave breaking forms a shock wave, which has a forward character for $f'>0$ and a rarefaction character, i.e. the wave breaks in the backward direction, for $f'<0$.
The rarefaction character of the wave steepening is shown in Fig.~\ref{fig:shift}, where we plot
\begin{equation}
x=x_{0}+(1+f'a_{0}(x_{0}))t,\quad a_{0}(x_{0})=a_{m}\sin(x_{0}),\label{eq:image}
\end{equation}
for $f'=-0.35$ and $a_{m}=1$. The wave front gradually shifts to the left. The situation in 3D is shown in Fig.~\ref{fig:shift3D} for $x_{0}\in[-2\pi, 2\pi]$.
\begin{figure}[h]
\centering
\includegraphics[width=0.515\textwidth]{Fig00.eps}
\caption{\label{fig:shift} Visualization of equation (\ref{eq:image}); the gradual shift of the wave to the left-hand side in time is visible.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{WaveSteepening3DD.eps}
\caption{\label{fig:shift3D} Visualization of equation (\ref{eq:image}) in 3D; the shift of the wave to the left-hand side in time is visible.}
\end{figure}
\subsection{\label{sub:pert} Analyzing the equations by perturbation method}
Another way to describe the wave breaking is to use the perturbation method \cite{Panchenko, Kadomtsev} to find the solutions of the equation (\ref{eq:finalOrdinary2}). We can write the solution as
\begin{equation}
a=a^{(0)}+\varepsilon a^{(1)}+\varepsilon^2 a^{(2)}+\dots,\label{per}
\end{equation}
where $\varepsilon\ll 1$, and we assume that in the zeroth order the wave amplitude $a^{(0)}$ is homogeneous, constant in space and time.
In the first order $\varepsilon^{0}$, we obtain
\begin{equation}
\partial_{t}a^{(1)}+f(a)|_{a=0}\partial_{x}a^{(1)}=0,\label{eq:first}
\end{equation}
where $f(a)|_{a=0}=-1/\nu_{0}=v$, (\ref{eq:nuphase}). This is the simplest wave equation, describing a wave whose frequency and wave number are related via the dispersion relation $\omega=k f(a)|_{a=0}$. Therefore the wave propagates without dispersion, and both the phase velocity, $\omega/k$, and the group velocity, $\partial \omega/ \partial k$, are equal to $f(a)|_{a=0}=v$.
The solution of Eq.~(\ref{eq:first}) is an arbitrary function of $x-vt$, where $v=f(a)|_{a=0}$; we therefore choose the same form as before,
\begin{equation}
a^{(1)}=a_{m}\sin{[k(x-vt)]}.
\end{equation}
To the second order, $\varepsilon^{1}$, we obtain
\begin{align}
\partial_{t}a^{(2)}+3v\partial_{x}a^{(2)}&=-a^{(1)}f'\partial_{x}a^{(1)},\nonumber\\
&=-\frac{bk}{2}\sin{[2k(x-vt)]},\label{eq:secondorder}
\end{align}
where $b=a^2_{m}f'$. The solution of this equation,
\begin{equation}
a^{(2)}=\frac{b}{8v}\left[\cos{[2k(x-vt)]}-\cos{(2kx)}\right],
\end{equation}
where we assumed $a^{(2)}|_{t=0}=0$, describes the second harmonic with resonant growth of the amplitude in time. In the third order, $\varepsilon^{2}$, we find that the third harmonic grows as $\sin{[3k(x-vt)]}$, and so on.
In general, the harmonics spectrum in the expansion can be estimated as
\begin{equation}
a_{n}=\left(f'\frac{E_{pulse}}{E_{S}}\right)^n,
\end{equation}
where $n$ is the order of the harmonic and $E_{pulse}$ is the typical field of the electromagnetic pulse, not scaled with $E_{S}$.
We have demonstrated that the second harmonic is in resonance with the first one and that the two counter--propagating electromagnetic waves propagate in vacuum without dispersion. The higher harmonics are generated up to the point of wave overturning.
\subsection{\label{sub:pert2} Analyzing the solution by perturbation method}
It is possible to analyze the solution (\ref{eq:axt}), which is in implicit form, directly by the perturbation method. We assume the solutions in the form (\ref{per}) and rewrite the function $f(a)$ as
\begin{equation}
f(a)=f_{0}+\varepsilon a, \label{eq:favar}
\end{equation}
and
\begin{equation}
a(x,t)=a_{m}\sin{[k(x-(f_{0}+\varepsilon a(x,t))t)]},\label{eq:axti}
\end{equation}
where $\varepsilon=f'$ and $f_{0}=-1/\nu_{0}$. In the first order, $\varepsilon^{0}$, we obtain
\begin{equation}
a_{0}=a_{m}\sin{[k(x-f_{0}t)]}.
\end{equation}
In the second order, $\varepsilon^{1}$, using a Taylor expansion of the right-hand side, we
obtain
\begin{equation}
a_{1}=-\frac{a^2_{m}kt}{2}\sin{(2kx_{0})},
\end{equation}
which describes the second harmonic with resonant growth of the amplitude in time. Again, we have demonstrated the resonance between the first two harmonics; this holds for all harmonics, because the phase velocity is the same for all of them and does not depend on the wave number. This again leads to wave breaking.
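The secular growth of the correction can be checked numerically by solving the implicit equation (\ref{eq:axti}) by fixed-point iteration for a small $\varepsilon$ and comparing $a-a_{0}$ with $\varepsilon a_{1}$ (illustrative dimensionless parameters):

```python
import math

# illustrative, dimensionless parameters
a_m, k, f0, eps = 1.0, 1.0, 1.0, 1e-4

def implicit_a(x, t, iters=200):
    # solve a = a_m*sin(k*(x - (f0 + eps*a)*t)) by fixed-point iteration;
    # the map is a strong contraction for eps << 1
    a = 0.0
    for _ in range(iters):
        a = a_m * math.sin(k * (x - (f0 + eps * a) * t))
    return a

x, t = 0.3, 0.7
x0 = x - f0 * t
a_full = implicit_a(x, t)
a0 = a_m * math.sin(k * x0)
a1 = -(a_m**2) * k * t / 2.0 * math.sin(2 * k * x0)   # predicted correction

print(a_full - a0, eps * a1)   # agree to O(eps^2)
```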
\subsection{\label{sub:deformationExample} Self--similar solutions with uniform deformation}
We assume the solution $a(x,t)$ of Eq.~(\ref{eq:finalOrdinary2}) in the form
\begin{equation}
a(x,t)=a_{0}(t)+a_{1}(t)x.\label{eq:triangular}
\end{equation}
It represents a triangular shape of the solution $a(x,t)$.
The function $f(a)$ (\ref{eq:finalResult}) can be rewritten as
\begin{equation}
f(a)=f_{0}+f' a,\quad f_{0}=-\frac{1}{\nu_{0}},\quad f'=\frac{\nu_{a}+\nu_{0}\nu_{b}}{\nu^2_{0}}.
\end{equation}
After substituting the solution (\ref{eq:triangular}) into the equation (\ref{eq:finalOrdinary2}), we obtain the set of equations:
\begin{align}
\partial_{t}a_{0}+a_{1}(f_{0}+f'a_{0})&=0,\nonumber\\
\partial_{t}a_{1}+f'a^2_{1}&=0.\label{eq:Ex2}
\end{align}
The profile $a_{1}$ of the solution can be investigated by solving the second equation of (\ref{eq:Ex2}), which gives
\begin{equation}
a_{1}=\frac{a_{1}(0)}{1+f'a_{1}(0)t},\label{eq:steep}
\end{equation}
where $a_{1}(0)=a_{1}(t)|_{t=0}$.
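The profile (\ref{eq:steep}) can be checked against the second equation of (\ref{eq:Ex2}) numerically; the following is a minimal sketch, with parameter values chosen only for illustration:

```python
# Check (illustrative) that a1(t) = a1(0)/(1 + f' a1(0) t) solves
# da1/dt + f' a1^2 = 0, and locate the breaking time t* = -1/(f' a1(0)).
def a1_closed(a10, fprime, t):
    return a10 / (1.0 + fprime * a10 * t)

def a1_numeric(a10, fprime, t, n=200_000):
    # simple forward-Euler integration of da1/dt = -f' a1^2
    dt = t / n
    a1 = a10
    for _ in range(n):
        a1 += -fprime * a1 * a1 * dt
    return a1

# non-breaking branch: f' > 0, a1(0) > 0 -> the profile decays smoothly
print(a1_closed(1.0, 0.5, 2.0), a1_numeric(1.0, 0.5, 2.0))  # both ~0.5
# breaking branch: f' < 0, a1(0) > 0 -> blow-up at t* = -1/(f' a1(0)) = 2
print(a1_closed(1.0, -0.5, 1.99))                           # large, near blow-up
```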
For $a_{1}(0)>0$ and $f'<0$, we find $a_{1} \rightarrow \infty$ as $t \rightarrow -1/f'a_{1}(0)$. The wave steepens to the left-hand side, in the direction opposite to the propagation along the positive $x$ axis, i.e. it has a rarefaction character; such behaviour is shown in Fig.~\ref{fig:ex1}.
\begin{figure}[!tbp]
\centering
\subfloat[The rarefaction wave breaks.]{\includegraphics[width=0.43\textwidth]{Fig1.eps}\label{fig:ex1}}
\hfill
\subfloat[The rarefaction waves do not break.]{\includegraphics[width=0.45\textwidth]{Fig2.eps}\label{fig:ex2}}
\caption{Equation (\ref{eq:triangular}) together with the solution (\ref{eq:steep}) is visualized. In Fig.~\ref{fig:ex1}, the shift of the wave to the left-hand side in time is visible. We have chosen $a_{1}(0)=\cos(0)=1>0$, $a_{0}(t)=-3t+1.8$, $f'=-0.5$. In Fig.~\ref{fig:ex2}, we have changed $a_{1}(0)=\cos(\pi)=-1<0$ and the value of $f'=0.5$ to a positive one. The wave does not break and continues to infinity in time.}
\end{figure}
If $a_{1}(0)<0$ and $f'<0$, then $a_{1} \rightarrow 1/f't$ as $t \rightarrow \infty$. The wave does not break and continues indefinitely; such behaviour is shown in Fig.~\ref{fig:ex2}.
For the case when $f'>0$, the direction of propagation just changes to the opposite direction.
\section{\label{sec:discussion} Dissipation due to the electron--positron pair creation}
Our work is performed within the approximation of the Heisenberg--Euler theory of QED in the low photon energy region $\omega\ll m$, i.e. in the weak field limit. Therefore our results are limited to this low energy regime and lose validity as we approach the Schwinger limit $E_{S}$. After the ordinary wave breaks, we cannot predict its behaviour in this approximation.
As we showed in \cite{KadlecovaKornBulanov2019}, the long--wavelength approximation breaks down when the frequencies of the interacting waves, $\omega_{\gamma}$ and $\Omega$, become high enough that
\begin{equation}
\omega_{\gamma}\Omega > m^2_{e}c^4/\hbar^2,
\end{equation}
At this level, the photon--photon interaction can result in the creation of real electron--positron pairs via the Breit--Wheeler process \cite{BreitWheeler}, in the saturation of wave steepening, and in electromagnetic shock wave formation. Near the threshold, the electron--positron creation cross section has the form \cite{Electrodynamics,GouldSchreder},
\begin{equation}
\sigma_{\gamma\gamma\rightarrow ep}=\pi r_e^2 \sqrt{\frac{\hbar^2 \omega_{\gamma} \Omega}{m_e^2 c^4}-1},
\label{eq:sigma-e-p}
\end{equation}
where $\omega_{\gamma}$ and $\Omega$ are the frequencies of high energy photons and low frequency counter--propagating electromagnetic waves, respectively, and $r_{e}=e^2/m_{e}c^2$ is the classical electron radius.
\begin{figure}[h]
\centering
\includegraphics[width=0.52\textwidth]{Fig.eps}
\caption{\label{fig:CrossSectionEE} The dependence of the cross section $\sigma_{\gamma\gamma\rightarrow ep}$ on the photon energy $\hbar\omega/m_{e}c^2$. A detailed view of the graph near the threshold is shown in the right inset. Reaching the energies for electron--positron generation requires much lower laser intensities than reaching the Schwinger field $E_{S}$. }
\end{figure}
The cross section for the formation of an electron--positron pair in the collision of two photons is given by the general formula \cite{Electrodynamics},
\begin{align}
\sigma_{\gamma\gamma\rightarrow ep}&=\frac{1}{2}\pi r_e^2(1-\beta_{e}^2)\times\nonumber\\
&\left\{(3-\beta_{e}^4)\log\left(\frac{1+\beta_{e}}{1-\beta_{e}}\right)-2\beta_{e}(2-\beta_{e}^2)\right\},
\label{eq:sigma-e-e}
\end{align}
where
\begin{equation}
\beta_{e}=\sqrt{1-\frac{m_{e}^2 c^4}{\hbar^2 \omega_{1}\omega_{2}}},
\end{equation}
and $\omega_{1}$ and $\omega_{2}$ are the frequencies of the two colliding photons.
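As a consistency check, the general formula above reduces to the near-threshold expression (\ref{eq:sigma-e-p}); the following numerical sketch works in units $\hbar=c=m_{e}=r_{e}=1$ with $s=\omega_{1}\omega_{2}$:

```python
import math

# Near the threshold s = 1, the general two-photon cross section approaches
# the near-threshold form sigma ~ pi*sqrt(s - 1) (units hbar = c = m_e = r_e = 1).
def sigma_threshold(s):
    return math.pi * math.sqrt(s - 1.0)

def sigma_general(s):
    beta = math.sqrt(1.0 - 1.0 / s)  # velocity of the created pair
    bracket = ((3.0 - beta**4) * math.log((1.0 + beta) / (1.0 - beta))
               - 2.0 * beta * (2.0 - beta**2))
    return 0.5 * math.pi * (1.0 - beta**2) * bracket

s = 1.0001  # just above the pair-creation threshold
print(sigma_threshold(s), sigma_general(s))  # nearly equal near threshold
```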
The cross section (\ref{eq:sigma-e-p}) is plotted in Fig.~\ref{fig:CrossSectionEE} with a detailed view of the region around the threshold value of one, where electron--positron pair creation starts to appear.
When the shock wave has been formed, we can find the number of electron pairs in a straightforward way using energy--momentum conservation.
For a laser pulse of length $l_{pulse}$ and the beam profile $S'$, we can obtain the energy of the pulse as
\begin{equation}
\mathcal{E}_{pulse} \approx \frac{E^2_{pulse}S'l_{pulse}}{4\pi}=\frac{I_{pulse}S'l_{pulse}}{c},
\end{equation}
where $E_{pulse}$ and $I_{pulse}$ are the typical field and intensity of the electromagnetic pulse.
The number of electron pairs and their creation rate are then given by
\begin{equation}
N_{e^{\pm}}=\frac{\mathcal{E}_{pulse}}{2mc^2},\quad \frac{{\rm d} N_{e^{\pm}}}{{\rm d} t}=\frac{\mathcal{E}_{pulse}}{2mc^2}\frac{\delta v}{l_{pulse}},
\end{equation}
where we have denoted $\delta v=\bar{a}$. It can be shown that the number of created electron pairs can be expressed as
\begin{equation}
N_{e^{\pm}}=\alpha\frac{I^3_{em}}{I_{S}^2},
\end{equation}
where $I_{em}$ is the intensity of electromagnetic field.
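For a rough sense of scale, these estimates can be evaluated for an assumed set of pulse parameters; the numbers below are hypothetical and serve only to illustrate the formulas:

```python
# Order-of-magnitude estimate (hypothetical parameters, illustration only):
# pulse energy E_pulse = I * S' * l / c, pair number N = E_pulse / (2 m_e c^2).
c = 3.0e8              # speed of light, m/s
mc2 = 8.187e-14        # electron rest energy m_e c^2, J
I = 1.0e27             # pulse intensity, W/m^2 (= 1e23 W/cm^2, assumed)
S = 1.0e-12            # beam cross section S', m^2 (1 um x 1 um, assumed)
l = 1.0e-5             # pulse length l_pulse, m (10 um, assumed)

E_pulse = I * S * l / c            # pulse energy, J
N_pairs = E_pulse / (2.0 * mc2)    # pair number from energy conservation
print(E_pulse, N_pairs)            # roughly tens of joules, ~1e14 pairs
```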
In our proposed model, we target the lower energy region near the threshold value of one, where the cross section curve in Fig.~\ref{fig:CrossSectionEE} starts.
The electron--positron pairs created at the electromagnetic shock wave front being accelerated by the
electromagnetic wave emit gamma-ray photons which lead to the
electron--positron avalanche via the multi-photon Breit-Wheeler mechanism \cite{NikishovRitus} as discussed in Refs. \cite{BellKirk, Fedotov} (see also review article \cite{DiPizzaReview} and the literature cited therein).
Recently, in \cite{Yu2019}, a novel approach was developed to demonstrate the two-photon Breit--Wheeler process by using collimated and wide-bandwidth $\gamma$-ray pulses driven by $10$ PW lasers. The positron signal, which is roughly 100 times higher than the detection limit, can be measured by using the existing spectrometers. This approach, which could demonstrate the electron--positron pair creation process from two photons, would provide important tests for two-photon physics and other fundamental physical theories.
Additional terms corresponding to the dissipation effect due to viscosity and to dispersion in Eq.~(\ref{eq:finalOrdinary2}) can lead to saturation of high order harmonics \cite{Bulanov}. The dissipation effect can be described, for example, by an additional term $\mu\partial_{xx}a^{(2)}=-\mu k^2 a^{(2)}$ on the left-hand side of Eq.~(\ref{eq:secondorder}) as
\begin{align}
\partial_{t}a^{(2)}-\mu\partial_{xx}a^{(2)}=-\frac{bk}{2}\sin{[2k(x-vt)]},\label{eq:secondorderDiss}
\end{align}
assuming that the dissipation effect has the same order as the second order in the wave amplitude perturbations.
The dispersion effect, which is equivalent to a dependence of the phase velocity on the wave number, can also lead to saturation of the high harmonic generation. It is described by an additional term $\tau\partial_{xxx}a^{(2)}=-i\tau k^3 a^{(2)}$ on the left-hand side of Eq.~(\ref{eq:secondorder}) as
\begin{align}
\partial_{t}a^{(2)}-\tau\partial_{xxx}a^{(2)}=-\frac{bk}{2}\sin{[2k(x-vt)]}.\label{eq:secondorderDiss1}
\end{align}
The solutions of Eqs.~(\ref{eq:secondorderDiss}) and (\ref{eq:secondorderDiss1}) both have amplitudes $a^{(2)}$ smaller than the first harmonic amplitude in the limit $t\rightarrow\infty$. Saturation of the amplitude growth in the dispersive medium appears because the propagation velocity of the second harmonic differs from the background velocity $v$.
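Schematically, the resonantly driven second-harmonic amplitude with dissipation behaves like $y'+\gamma y=F$, with $\gamma\sim\mu k^{2}$ and $F\sim bk/2$; without damping the amplitude grows secularly, while with damping it saturates at $F/\gamma$. The scalar reduction below is an assumption made only for illustration:

```python
import math

# y' + gamma*y = F with y(0) = 0: secular growth for gamma = 0,
# saturation at F/gamma for gamma > 0 (schematic model of the a^(2) equation).
def amplitude(F, gamma, t):
    if gamma == 0.0:
        return F * t                                   # resonant linear growth
    return (F / gamma) * (1.0 - math.exp(-gamma * t))  # saturates at F/gamma

F, gamma = 1.0, 0.5
print(amplitude(F, 0.0, 100.0))    # grows without bound: 100.0
print(amplitude(F, gamma, 100.0))  # saturated near F/gamma = 2.0
```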
\section{\label{sec:conclusion} Conclusion}
In conclusion, we have presented an analytical method of solving the system of non--linear Heisenberg--Euler electrodynamics equations for a problem describing a finite amplitude electromagnetic wave counter--propagating to the crossed electromagnetic field presented in \cite{KadlecovaKornBulanov2019}. We have used the weak field approximation to the sixth order in the field amplitude to include four-- and six--photon interactions in order to study the singularity formation. It was shown that the non--linear field equations decouple in the ordinary wave case when we look for the solution in the form of a simple wave, i.e. a Riemann wave, and we have solved the equations exactly.
The solution has the form of a non--linear wave equation for the relatively short wavelength pulse in the linear approximation and generalizes our previous result in \cite{KadlecovaKornBulanov2019}. The solution was analyzed by the method of characteristics and by a perturbation method, and it was demonstrated in more detail that the solution describes high order harmonic generation, wave steepening, and the formation of a shock wave. The properties of the solution were discussed in detail, for example in the case of self--similar solutions with uniform (homogeneous) deformation.
We analyzed the direction of electromagnetic wave steepening or wave breaking; it depends on the strength of the electromagnetic field $E_{0}$ (the sign of $f'$) and has a forward character for a weak field and a rarefaction shock wave character for stronger fields, as illustrated in Figs.~\ref{fig:shift} and \ref{fig:shift3D}.
In general, photon--photon scattering in vacuum is governed by the dimensionless parameter $\alpha (I_{em}/I_S)$ as regards shock-like configuration formation, high order harmonic generation, and the electron--positron and gamma ray flash at the electromagnetic shock wave front.
\begin{acknowledgments}
We thank Dr.~T. Pech\'{a}\v{c}ek for motivating discussions.
Supported by the project High Field Initiative (CZ$.02.1.01/0.0/0.0/15\_003/0000449$) from European Regional Development Fund.
\end{acknowledgments}
\section{Introduction}
\hspace{0.5cm}The use of string rewriting systems or Thue systems has been proved to be a very efficient tool to solve the word
problem. Indeed, Book shows that there is a linear-time algorithm to decide the word problem for a monoid that is
defined by a finite and complete rewriting system \cite{book_linear}.
A question that arises naturally is whether the use of rewriting systems may be an efficient tool for solving other
decision problems, specifically the conjugacy problem. Several authors have studied this question, see
\cite{naren_otto2,naren_otto}, \cite{otto}, and \cite{pedersen}. Several facts make this question complex. One point is that, for monoids, the conjugacy problem and the word problem are independent of each other \cite{otto}, in contrast to the situation for groups.
Another point is that in semigroups and monoids, there are several different notions of conjugacy that are not
equivalent in general. We describe them in the following.
Let $M$ be a monoid (or a semigroup) generated by $\Sigma$ and let $u$ and $v$ be two words in the free monoid
$\Sigma^{*}$. The right conjugacy problem asks if there is a word $x$ in the free monoid $\Sigma^{*}$ such that
$xv=_{M}ux$, and is denoted by $\operatorname{RConj}$. The left conjugacy problem asks if there is a word $y$ in
the free monoid $\Sigma^{*}$ such that $vy=_{M}yu$, and is denoted by $\operatorname{LConj}$. The conjunction of the
left and the right conjugacy problems is denoted by $\operatorname{Conj}$. The relations $\operatorname{LConj}$ and
$\operatorname{RConj}$ are reflexive and transitive but not necessarily symmetric, while $\operatorname{Conj}$ is an
equivalence relation.
A different generalization of conjugacy asks if there are words $x,y$ in the free monoid such that $u=_{M}xy$ and
$v=_{M}yx$. This is called the \emph{transposition problem} and it is denoted by $\operatorname{Trans}$.
This relation is reflexive and symmetric, but not necessarily transitive.
In general, if the answer to this question is positive then the answer
to the above questions is also positive, that is $\operatorname{Trans} \subseteq \operatorname{Conj} \subseteq
\operatorname{LConj},\operatorname{RConj}$. For free monoids, Lentin and Sch\"{u}tzenberger show that
$\operatorname{Trans}= \operatorname{Conj}= \operatorname{LConj}=\operatorname{RConj}$ \cite{schutz} and for monoids
with a special presentation (that is all the relations have the form $r=1$) Zhang shows that $\operatorname{Trans}=
\operatorname{RConj}$ \cite{zhang}. We denote by $\operatorname{Trans^{*}}$ the transitive closure of
$\operatorname{Trans}$. Choffrut shows that $\operatorname{Trans^{*}}= \operatorname{Conj}=
\operatorname{LConj}=\operatorname{RConj}$ holds in a free inverse monoid $FIM(X)$ when restricted to the set of
non-idempotents \cite{choffrut}. He shows that $\operatorname{LConj}$ is an equivalence relation on $FIM(X)$ and he
proves the decidability of this problem in this case. Silva generalized the results of Choffrut to a certain class of
one-relator inverse monoids. He proves the decidability of $\operatorname{Trans}$ for $FIM(X)$ with one idempotent
relator \cite{silva}.
In this work, we use rewriting systems in order to solve the conjugacy problems presented above in some semigroups and
monoids.
A special rewriting system satisfies the condition that all the rules have the form $l \rightarrow 1$, where $l$ is any
word. Otto shows that $\operatorname{Trans}= \operatorname{Conj}= \operatorname{LConj}$
for a monoid with a special complete rewriting system and that $\operatorname{Trans}$ is an equivalence relation.
Moreover, he shows that whenever the rewriting system is finite then the conjugacy problems are solvable \cite{otto}.
Narendran and Otto show that $\operatorname{LConj}$ and $\operatorname{Conj}$ are decidable for a finite,
length-decreasing and complete rewriting system \cite{naren_otto} and that $\operatorname{Trans}$
is not decidable \cite{naren_otto2}. We describe our approach to solve the conjugacy problems using rewriting systems
in the following.
Let $M$ be the finitely presented monoid $\operatorname{Mon}\langle\Sigma\mid R\rangle$ and let $\Re$
be a complete rewriting system for $M$.
Let $u$ be a word in $\Sigma^{*}$, we consider $u$ and all its cyclic conjugates in $\Sigma^{*}$, $\{u_{1}=u,
u_{2},..,u_{k}\}$, and we apply on each element $u_{i}$ rules from $\Re$ (whenever this is possible).
We say that a word $u$ is \emph{cyclically irreducible} if $u$ and all its cyclic conjugates are irreducible modulo
$\Re$.
If for some $1 \leq i \leq k$, $u_{i}$ reduces to $v$, then we say that $u$ \emph{cyclically reduces to $v$} and we
denote it by $u \looparrowright v$, where $\looparrowright$ denotes a binary relation on the words in $\Sigma^{*}$.
We define on $\looparrowright$ the properties of terminating and confluent in the same way as for $\rightarrow$ and
if $\looparrowright$ is terminating and confluent then each word $u$ reduces to a unique cyclically
irreducible element denoted by $\rho(u)$. We have the following result that describes the relation between
$\looparrowright$ and the conjugacy problems; we write $\rho(u) \bumpeq \rho(v)$ to mean that $\rho(u)$ and $\rho(v)$ are cyclic conjugates in the free monoid $\Sigma^{*}$.
\begin{thm_A}
Let $M$ be the finitely presented monoid $\operatorname{Mon}\langle\Sigma\mid R\rangle$ and let $\Re$
be a complete rewriting system for $M$. Let $u$ and $v$ be words in $\Sigma^{*}$. Assume that $\looparrowright$ is
terminating and confluent. Then\\
$(i)$ If $u$ and $v$ are transposed, then $\rho(u) \bumpeq \rho(v)$.\\
$(ii)$ If $\rho(u) \bumpeq \rho(v)$, then $u$ and $v$ are left and right conjugates.
\end{thm_A}
A \emph{completely simple semigroup} is a semigroup that has no non-trivial two-sided ideals and that
possesses minimal one-sided ideals. Using the results of McKnight and Storey in \cite{macknight}, it holds that
$\operatorname{Trans}= \operatorname{Conj}$ in a completely simple semigroup.
So, in the case of completely simple semigroups and monoids with a finite special complete rewriting system, our
result gives a solution to the conjugacy problems, whenever $\looparrowright$ is terminating and confluent. Assuming that $\looparrowright$ is terminating, we find a sufficient condition for the confluence of
$\looparrowright$ that is based on an analysis of the rules in $\Re$. Using this condition, we give an algorithm of
cyclical completion that is very much inspired by the Knuth-Bendix algorithm of completion. We have the following
main result.
\begin{thm_A}
Let $M$ be the finitely presented monoid $\operatorname{Mon}\langle\Sigma\mid R\rangle$ and let $\Re$
be a complete rewriting system for $M$. Assume that $\looparrowright$ is terminating. Then there exists an algorithm
that gives as an output an equivalent relation $\looparrowright^{+}$ that is terminating and confluent (whenever it
converges).
\end{thm_A}
The paper is organized as follows. In Section $2$, we define the binary relation $\looparrowright$ on the words in
$\Sigma^{*}$ and we
establish its main properties. In Section $3$, we describe the connection between a
terminating and confluent relation $\looparrowright$ and the conjugacy problems. In Section $4$, we adopt a local approach, as it is very difficult to decide whether a relation $\looparrowright$ is terminating; there we define the notion of a triple that is $\widetilde{c}$-defined. In Section $5$, we give a sufficient condition for the confluence
of $\looparrowright$, given that it terminates. In Section $6$, using the results from Section $5$, we give an
algorithm of cyclical completion that is very much inspired by the Knuth-Bendix algorithm of completion. Given a
terminating relation $\looparrowright$, if it is not confluent then some new cyclical reductions are added in
order to obtain an equivalent relation $\looparrowright^{+}$ that is terminating and confluent. At last, in Section
$7$, we address the case of length-preserving rewriting systems. All along this paper, $\Re$ denotes a complete
rewriting system, not necessarily a finite one.
\begin{acknowledgment}
This work is a part of the author's PhD research, done at the Technion. I am very grateful to
Professor Arye Juhasz, for his patience, his encouragement and his many helpful remarks.
I am also grateful to Professor Stuart Margolis for his comments on this result. I would like to thank the anonymous referee for his comments which significantly helped in improving the presentation of the paper.
\end{acknowledgment}
\section{Definition of the relation $\looparrowright$}
\hspace{0.5cm}Let $\Sigma$ be a non-empty set. We denote by $\Sigma^{*}$ the free monoid
generated by $\Sigma$; elements of $\Sigma^{*}$ are finite sequences
called \emph{words} and the empty word will be denoted by 1.
A \emph{rewriting system} $\Re$ on $\Sigma$ is a set of ordered
pairs in $\Sigma^{*}\times \Sigma^{*}$.
If $(l,r) \in \Re$ then for any words $u$ and $v$ in $\Sigma^{*}$,
we say that the word $ulv$ \emph{reduces} to the word $urv$ and we
write $ulv\rightarrow urv$. A word $w$ is said to be \emph{reducible}
if there is a word $z$ such that $w\rightarrow z$. If there is no
such $z$ we call $w$ \emph{irreducible}. A rewriting system $\Re$ is called \emph{terminating} \emph{(or
Noetherian)} if there is no infinite sequence of reductions.
We denote by ``$\rightarrow^{*}$'' the reflexive
transitive closure of the relation ``$\rightarrow$''.
A rewriting system $\Re$ is called \emph{confluent} if for any words $u,v,w$ in $\Sigma^{*}$,
$w\rightarrow^{*}u$ and $w\rightarrow^{*}v$ implies that there is
a word $z$ in $\Sigma^{*}$ such that $u\rightarrow^{*}z$ and
$v\rightarrow^{*}z$ (that is if $u$ and $v$ have a common ancestor
then they have a common descendant). A rewriting system $\Re$ is called \emph{complete (or convergent)} if $\Re$ is
terminating and confluent. If $\Re$ is complete then every
word $w$ in $\Sigma^{*}$ has a unique irreducible equivalent word
that is called the \emph{normal form} of $w$.
We refer the reader to \cite{book,sims,handbook} for more details.
Let $\operatorname{Mon}\langle \Sigma \mid R \rangle$ be a finitely presented monoid $M$ and let $\Re$
be a complete rewriting system for $M$. Let $u$ and $v$ be elements in $\Sigma^{*}$. We define the following binary
relation
$ u \circlearrowleft^{1}v$ if $v$ is a cyclic conjugate of $u$ obtained by moving the first letter of $u$ to be the
last
letter of $v$. We define $u\circlearrowleft^{i} v$ if $v$ is a cyclic conjugate of $u$ obtained from $i$
successive applications of $\circlearrowleft^{1}$.
We allow $i$ to be $0$; in this case, if $u\circlearrowleft^{0} v$ then $v=u$
in the free monoid $\Sigma^{*}$. As an example, let $u$ be the word $abcdef$ in
$\Sigma^{*}$. If $u\circlearrowleft^{1} v$ and
$u\circlearrowleft^{4} w$, then $v$
is the word $bcdefa$ and $w$ is the word $efabcd$ in $\Sigma^{*}$.
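The operation $\circlearrowleft^{i}$ is straightforward to implement; a minimal Python sketch (the function names are ours):

```python
def rotate(u, i):
    """u rotated i times: move the first i letters of u to the end (a cyclic conjugate)."""
    i %= len(u)
    return u[i:] + u[:i]

def cyclic_conjugates(u):
    """All cyclic conjugates of u, obtained by successive rotations."""
    return [rotate(u, i) for i in range(len(u))]

print(rotate("abcdef", 1))  # bcdefa
print(rotate("abcdef", 4))  # efabcd
```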
We now translate the operation of taking cyclic conjugates and reducing them using the rewriting system $\Re$ in terms
of a binary relation. We say that \emph{$u$ cyclically reduces to $v$} and we write
\begin{gather}
u\looparrowright v
\end{gather} if there is a sequence
\begin{gather}
u\circlearrowleft^{i} \widetilde{u}\rightarrow v
\end{gather}
From its definition, the relation $\looparrowright$ is \emph{not} compatible with concatenation. We define by
$\looparrowright^{*}$ the reflexive and transitive closure of $\looparrowright$, that is $u\looparrowright^{*} v$ if
there is a sequence $u\looparrowright u_{1}\looparrowright
u_{2}\looparrowright...u_{k-1}\looparrowright v$. We call such a sequence \emph{a sequence of cyclical reductions}. A
sequence of cyclical reductions is \emph{trivial} if it is equivalent to $\circlearrowleft^{*}$. We use the following notation:\\
- $\widetilde{u}$ denotes a cyclic conjugate of $u$ in the free monoid $\Sigma^{*}$.\\
- $u\bumpeq v$ if $u$ and $v$ are cyclic conjugates in the free monoid $\Sigma^{*}$.\\
- $u =_{M} v$ if the words $u$ and $v$ are equal as elements in $M$.\\
- $u = v$ if the words $u$ and $v$ are equal in the free monoid $\Sigma^{*}$.\\
Now, we define the properties of terminating and confluent for $\looparrowright$ in the same way as it is done for
$\rightarrow$. Note that given words $u$ and $v$ if we write $u\looparrowright v$ or $u\looparrowright^{*} v$, we
assume implicitly that this is done in a finite number of steps.
\begin{defn}
We say that $\Re$ is \emph{cyclically terminating} (or $\looparrowright$ is \emph{terminating}) if there is no
(non-trivial) infinite sequence of cyclical reductions.
We say that $\Re$ is \emph{cyclically confluent}
(or $\looparrowright$ is \emph{confluent}) if for any words $u,v,w$ in $\Sigma^{*}$,
$w\looparrowright^{*}u$ and $w\looparrowright^{*}v$ implies that
there exist cyclically conjugate words $z$ and $z'$ in $\Sigma^{*}$ such that
$u\looparrowright^{*}z$ and $v\looparrowright^{*}z'$.
We say that $\Re$ is \emph{locally cyclically confluent}
(or $\looparrowright$ is \emph{locally confluent}) if for any words $u,v,w$ in $\Sigma^{*}$,
$w\looparrowright u$ and $w\looparrowright v$ implies that
there exist cyclically conjugate words $z$ and $z'$ in $\Sigma^{*}$ such that
$u\looparrowright^{*}z$ and $v\looparrowright^{*}z'$.
We say that $\Re$ is \emph{cyclically complete} if $\Re$ is cyclically terminating
and cyclically confluent.
\end{defn}
\begin{ex}\label{ex_not_termin_no_cyc_irred}
Let $\Re=\{ ab\rightarrow bc,cd\rightarrow da\}$, $\Re$ is a complete and finite rewriting system.
Consider the word $bcd$: we have $bcd\rightarrow
bda\circlearrowleft^{2} abd\rightarrow bcd \rightarrow\cdots$, that is, there is an infinite sequence of cyclical
reductions. So, $\Re$ is not cyclically terminating.
\end{ex}
\begin{defn}
We say that a word $u$ is \emph{cyclically irreducible} if $u$ and all its cyclic conjugates are irreducible modulo
$\Re$, that is there is no $v$ in $\Sigma^{*}$ such that
$u\looparrowright v$ (unless $u \bumpeq v$). We define a \emph{cyclically irreducible
form of $u$} (if it exists) to be a
cyclically irreducible word $v$ (up to $\bumpeq$) such that
$u\looparrowright^{*}v$.
We denote by $\rho(u)$ a cyclically irreducible
form of $u$, if it exists.
\end{defn}
\begin{ex}
Let $\Re=\{ab\rightarrow bc,cd\rightarrow da\}$ as before. From Ex. \ref{ex_not_termin_no_cyc_irred}, $bcd$ does not
have any cyclically irreducible form. But, the word $acd$ has a unique cyclically irreducible form $ada$ since
$acd\rightarrow ada$ and no rule from $\Re$ can be applied on $ada$ or on any cyclic conjugate of $ada$ in
$\Sigma^{*}$.
\end{ex}
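The two examples above can be reproduced mechanically. The sketch below implements one-step reduction and the cyclic-irreducibility check for $\Re=\{ab\rightarrow bc, cd\rightarrow da\}$; the function names are ours, and `reduce_once` applies the first applicable rule at its leftmost occurrence:

```python
R = {"ab": "bc", "cd": "da"}  # the rewriting system of the examples

def rotate(u, i):
    i %= len(u)
    return u[i:] + u[:i]

def reducible(w):
    return any(lhs in w for lhs in R)

def cyclically_irreducible(w):
    # w and all of its cyclic conjugates must be irreducible modulo R
    return not any(reducible(rotate(w, i)) for i in range(len(w)))

def reduce_once(w):
    for lhs, rhs in R.items():
        j = w.find(lhs)
        if j >= 0:
            return w[:j] + rhs + w[j + len(lhs):]
    return w

print(cyclically_irreducible("ada"))               # True
print(reduce_once("acd"))                          # ada
# the loop of the first example: bcd -> bda, rotate twice, abd -> bcd
print(reduce_once(rotate(reduce_once("bcd"), 2)))  # bcd again
```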
As in the case of $\rightarrow$, the following facts hold also for $\looparrowright$ with a very similar proof. If
$\Re$ is cyclically terminating, then each word in $\Sigma^{*}$ has at least one cyclically irreducible form. If $\Re$
is cyclically confluent, then each word in $\Sigma^{*}$ has at most one cyclically irreducible form. So, if $\Re$ is
cyclically complete, then each word in $\Sigma^{*}$ has a unique cyclically irreducible form. Moreover, if $w \bumpeq
w'$, then $w$ and $w'$ have the same cyclically irreducible form (up to $\bumpeq$).
Given that $\looparrowright$ is terminating, $\Re$ is cyclically confluent if and only if $\Re$ is locally cyclically
confluent.
\begin{ex}\label{ex_not_cyc_confluent}
In \cite{hermiller+meier}, Hermiller and Meier construct a finite and complete rewriting system for the group $\operatorname{Gp} \langle a,b \mid aba=bab \rangle$, using another set of generators. For the monoid with the same presentation, the set of generators is: $\{a,b,\underline{ab},\underline{ba},\Delta=\underline{aba}\}$, where the underlining of a sequence of letters means that it is a generator in the new generating set.
The complete and finite rewriting system is $\Re=\{ ab \rightarrow \underline{ab},ba\rightarrow\underline{ba},
a\underline{ba}\rightarrow \Delta, \underline{ab}a\rightarrow \Delta, b\underline{ab}\rightarrow \Delta, \underline{ab}\,\underline{ab}\rightarrow a \Delta, \underline{ba}b\rightarrow \Delta,
\underline{ba}\,\underline{ba}\rightarrow b\Delta, \Delta a\rightarrow b\Delta, \Delta b\rightarrow a\Delta, \Delta \underline{ab}\rightarrow \underline{ba}\Delta,\Delta\underline{ba}\rightarrow \underline{ab}\Delta
\}$. Let us consider the word $ab$: then $ab\rightarrow \underline{ab}$ and $ab\circlearrowleft^{1}ba\rightarrow \underline{ba}$. That is, $ab\looparrowright \underline{ab}$ and $ab\looparrowright \underline{ba}$, where both $\underline{ab}$ and $\underline{ba}$ are cyclically irreducible, so $\Re$ is not cyclically confluent (nor locally cyclically confluent).
\end{ex}
\section{The relation $\looparrowright$ and the conjugacy problems}
\hspace{0.5cm}We denote by
$u\equiv_{M} v$ the following equivalence relation: there are words $x,y$ in $\Sigma^{*}$ such that $ux=_{M} xv$ and
$yu=_{M}vy$, that is $u$ and $v$ are left and right conjugates.
We describe the connection between the relations $\looparrowright$, $\equiv$ and the transposition problem.
\begin{prop} \label{prop_samerho_conj}
Let $M$ denote the finitely presented monoid $\operatorname{Mon} \langle \Sigma\mid R \rangle$ and let $\Re$ be a
complete rewriting system for $M$. Let $u$ and $v$ be in $\Sigma^{*}$.\\
(i) If $u\looparrowright^{*}v$, then the pair $(u,v)$ is in the transitive closure of the transposition relation
and therefore $u \equiv_{M} v$.\\
(ii) If $\rho(u) \bumpeq \rho(v)$, then $u \equiv_{M} v$ (whenever $\rho(u)$ and $\rho(v)$ exist).
\end{prop}
\begin{proof}
$(i)$ If the sequence of cyclical reductions has the following form: $u\circlearrowleft^{i} \widetilde{u}
\rightarrow^{*}v$, then $u$ and $v$ are transposed. Otherwise,
if $u=u_{1}\circlearrowleft^{i} \widetilde{u} \rightarrow^{*}u_{2}\circlearrowleft^{i} \widetilde{u_{2}}
\rightarrow^{*}u_{3}\cdots \rightarrow^{*}u_{k}=v$, then each pair $(u_{i},u_{i+1})$ is transposed. So, the pair
$(u,v)$ is in the transitive closure of the transposition relation and therefore $u \equiv_{M} v$. $(ii)$ From $(i)$, $u \equiv_{M} \rho(u)$ and $v \equiv_{M} \rho(v)$, so $u \equiv_{M} v$, since $\rho(u) \bumpeq
\rho(v)$ and $\equiv_{M}$ is an equivalence relation.
\end{proof}
The converse of $(ii)$ is not true in general, namely $u \equiv_{M} v$ does not imply that $\rho(u) \bumpeq \rho(v)$.
Let $\Re=\{ bab\rightarrow aba,ba^{n}ba\rightarrow aba^{2}b^{n-1}, n \geq
2\}$. Then $\Re$ is a complete and infinite rewriting system for the braid monoid presented by
$\operatorname{Mon}\langle a,b \mid aba=bab \rangle$.
It holds that $a\equiv_{M}b$, since $a(aba)=_{M}(aba)b$ and
$(aba)a=_{M}b(aba)$, but $\rho(a)=a$ and $\rho(b)=b$ and they are not cyclic conjugates.
This example is due to Patrick Dehornoy.
\begin{lem}\label{lem_equal_same_cycl}
Let $\Re$ be a complete and cyclically complete rewriting system for $M$. Let $u$ and $v$ be words in $\Sigma^{*}$. If
$u=_{M}v$, then $\rho(u)\bumpeq \rho(v)$.
\end{lem}
\begin{proof}
Assume that $u \looparrowright^{*} z$
and $v \looparrowright^{*} z'$, where $z,z'$ are cyclically irreducible. We show that $z\bumpeq z'$.
Since $\Re$ is a complete rewriting system, equivalent words (modulo $\Re$) reduce to the same normal form. Here
$u=_{M}v$, so there is a unique irreducible word $w$ such
that $u \rightarrow ^{*} w$ and $v \rightarrow ^{*} w$. \\
We have the following diagram:
$\begin{array}{cccccccc}
u & \looparrowright^{*}& z\\
&\searrow^{*} \\
&& w \\
& \nearrow^{*} \\
v & \looparrowright^{*}& z'
\end{array}$\\
Assume that $w \looparrowright^{*} z''$, so $u \looparrowright^{*} z''$ and $v \looparrowright^{*} z''$.
But $u \looparrowright^{*} z$ and $v \looparrowright^{*} z'$ and $\Re$ is cyclically complete, so $z\bumpeq
z''\bumpeq z'$.
\end{proof}
\begin{thm}
Let $\Re$ be a complete and cyclically
complete rewriting system for $M$. Let $u$ and $v$ be words in
$\Sigma^{*}$. \\
$(i)$ If $u$ and $v$ are transposed, then $\rho(u) \bumpeq \rho(v)$.\\
$(ii)$ If $\rho(u) \bumpeq \rho(v)$, then $u \equiv_{M} v$.
\end{thm}
\begin{proof}
$(i)$ Since $u$ and $v$ are transposed, there are words $x$ and $y$ in
$\Sigma^{*}$ such that $u =_{M}xy$ and $v=_{M}yx$.
From lemma \ref{lem_equal_same_cycl}, $\rho(xy) \bumpeq \rho(u)$ and $\rho(yx) \bumpeq \rho(v)$. Moreover, since $xy
\bumpeq yx$ and $\Re$ is cyclically complete, $\rho(xy) \bumpeq \rho(yx)$, so $\rho(u) \bumpeq \rho(v)$.
$(ii)$ holds from Proposition \ref{prop_samerho_conj} in a more general context.
\end{proof}
\section{A local approach for $\looparrowright$: definition of $\operatorname{Allseq}$$(w)$}
\hspace{0.5cm}Given a complete rewriting system $\Re$, it is a very hard task to determine if $\Re$ is cyclically terminating, since
we have to check a potentially infinite number of words. So, we adopt a local approach, that is for each word $w$ in
$\Sigma^{*}$ we consider all the possible sequences of cyclical reductions that begin by each word from
$\{w_{1},..,w_{k}\}$, where $w_{1}=w,w_{2},..,w_{k}$ are all the cyclic conjugates of $w$ in $\Sigma^{*}$. We call the
set of all these sequences \emph{$\operatorname{Allseq}$$(w)$}. We say that $\operatorname{Allseq}$$(w)$
\emph{terminates} if there is no infinite sequence of cyclical reductions in $\operatorname{Allseq}$$(w)$.
Clearly, $\Re$ is cyclically terminating if and only if $\operatorname{Allseq}$$(w)$ terminates for every $w$ in
$\Sigma^{*}$.
\begin{ex}\label{ex_braid_B3}
Let $\Re=\{ bab\rightarrow aba,ba^{n}ba\rightarrow aba^{2}b^{n-1},$ where $n\geq 2\}$.
Then $\Re$ is a complete and infinite rewriting system for the braid monoid presented by
$\operatorname{Mon}\langle a,b \mid aba=bab \rangle$.
We denote by $w$ the word $ba^{2}ba$. We have the following infinite
sequence of cyclical reductions: $ba^{2}ba \rightarrow aba^{2}b
\circlearrowleft^{1} ba^{2}ba$, that is $\operatorname{Allseq}$$(w)$ does not terminate.
This holds also for $ba^{n}ba$ for each $n \geq 2$.
\end{ex}
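The loop in this example can be checked mechanically. The following sketch (Python, an illustration rather than part of the paper; words over $\{a,b\}$ are strings, $\circlearrowleft^{i}$ is a left rotation by $i$ letters, and the helper names are ours) reproduces the sequence $ba^{2}ba \rightarrow aba^{2}b \circlearrowleft^{1} ba^{2}ba$:

```python
def lrot(w, i):
    # cyclic permutation: move the first i letters of w to the end
    return w[i:] + w[:i]

def apply_rule(w, lhs, rhs):
    # apply lhs -> rhs at the first occurrence of lhs in w (None if absent)
    j = w.find(lhs)
    return None if j < 0 else w[:j] + rhs + w[j + len(lhs):]

w = "baaba"                              # the word ba^2ba
v = apply_rule(w, "baaba", "abaab")      # rule ba^2ba -> aba^2b (case n = 2)
print(v, lrot(v, 1))                     # abaab baaba
assert lrot(v, 1) == w                   # the cyclical reduction loops
```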
We say that $\operatorname{Allseq}$$(w)$ \emph{converges} if a unique cyclically irreducible form is achieved in
$\operatorname{Allseq}$$(w)$ (up to $\bumpeq$). Clearly, if $\Re$ is cyclically confluent then
$\operatorname{Allseq}$$(w)$ converges for every $w$ in $\Sigma^{*}$. The converse is true only if $\Re$ is cyclically
terminating. We illustrate this with an example.
\begin{ex}\label{ex_braid_uniquecyclicalform}
Let $\Re=\{ bab\rightarrow aba,ba^{n}ba\rightarrow aba^{2}b^{n-1},$ where $n\geq 2\}$ as in Ex. \ref{ex_braid_B3}.
It holds that $\operatorname{Allseq}$$(ba^{2}ba)$ does not terminate (see Ex. \ref{ex_braid_B3}). Yet,
$\operatorname{Allseq}$$(ba^{2}ba)$ converges, since $a^{3}ba$ is the unique cyclically irreducible form achieved in
$\operatorname{Allseq}$$(w)$. Indeed, there is the following sequence of cyclical reductions: $ba^{2}ba
\circlearrowleft^{1}a^{2}bab \rightarrow a^{3}ba$ and all the cyclic conjugates of $w$ cyclically reduce to
$a^{3}ba$. So, although $\operatorname{Allseq(ba^{2}ba)}$ does not terminate, a unique cyclically irreducible form
$a^{3}ba$ is achieved.
\end{ex}
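This convergence claim can also be checked exhaustively: the two rules involved preserve length, so the cyclical reduction graph reachable from $ba^{2}ba$ is finite. The following sketch (Python; an illustration, not part of the paper, with function names of our choosing) explores all cyclical reductions reachable from $ba^{2}ba$ and collects the cyclically irreducible forms:

```python
RULES = [("bab", "aba"), ("baaba", "abaab")]   # the rules relevant at length 5

def rotations(w):
    return {w[i:] + w[:i] for i in range(len(w))}

def one_step(w):
    # every word obtained by one cyclical reduction: rotate, then apply a rule
    out = set()
    for r in rotations(w):
        for lhs, rhs in RULES:
            j = r.find(lhs)
            while j >= 0:
                out.add(r[:j] + rhs + r[j + len(lhs):])
                j = r.find(lhs, j + 1)
    return out

seen, frontier, forms = set(), {"baaba"}, set()
while frontier:
    w = frontier.pop()
    seen.add(w)
    nxt = one_step(w)
    if not nxt:
        forms.add(w)                 # cyclically irreducible
    frontier |= nxt - seen
print(forms)                          # every form is a rotation of a^3ba
assert forms and forms <= rotations("aaaba")
```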
We find a condition that ensures that $\operatorname{Allseq}$$(w)$ converges, given that $\operatorname{Allseq}$$(w)$
terminates. Before we proceed, we give the following definition.
\begin{defn}
Let $\Re$ be a complete rewriting system and let $w$ be a word in $\Sigma^{*}$. Let $r_{1}$ and $r_{2}$ be rules in $\Re$ such that $r_{1}$ can be applied on a cyclic conjugate of $w$ and $r_{2}$ can be applied on another one.
We say that \emph{the triple $(w,r_{1},r_{2})$ is $\widetilde{c}$-defined} if there is a cyclic conjugate
$\widetilde{w}$ of $w$ such that both rules $r_{1}$ and $r_{2}$ can be applied on
$\widetilde{w}$. We allow an empty entry in a triple $(w,r_{1},r_{2})$, that is only $r_{1}$ or $r_{2}$ can be applied.
\end{defn}
\begin{ex}\label{ex_def_triple}
Consider $\operatorname{Mon}\langle x,y,z \mid xy=yz=zx \rangle$; this is the Wirtinger presentation of the trefoil knot
group. Let $\Re= \{xy \rightarrow zx, yz \rightarrow zx, xz^{n}x \rightarrow zxzy^{n-1}, n \geq 1\}$ be a complete and
infinite rewriting system for the monoid with this presentation (see \cite{chou1}). Consider the word $yxz^{2}x$;
the words $yxz^{2}x$ and $xyxz^{2}$ are cyclic conjugates on which the rules $xz^{2}x \rightarrow zxzy$ and $xy \rightarrow zx$
can be applied, respectively. We claim that the triple $(yxz^{2}x,xz^{2}x \rightarrow zxzy, xy \rightarrow zx)$ is
$\widetilde{c}$-defined.
Indeed, there is the cyclic conjugate $xz^{2}xy$ on which both the rules $xz^{2}x \rightarrow zxzy$ and $xy \rightarrow
zx$ can be applied. But, as an example the triple $(xz^{2}xz^{3},xz^{2}x \rightarrow zxzy, xz^{3}x \rightarrow
zxzy^{2})$ is not $\widetilde{c}$-defined.
\end{ex}
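Whether a given triple is $\widetilde{c}$-defined is a finite check over the cyclic conjugates of $w$. A sketch (Python; illustrative only, with the left-hand sides passed as strings and function names of our choosing):

```python
def rotations(w):
    return [w[i:] + w[:i] for i in range(len(w))]

def c_defined(w, l1, l2):
    # the triple (w, r1, r2) is c~-defined iff some cyclic conjugate of w
    # contains an occurrence of both left-hand sides l1 and l2
    return any(l1 in r and l2 in r for r in rotations(w))

print(c_defined("yxzzx", "xzzx", "xy"))       # True: xzzxy contains both
print(c_defined("xzzxzzz", "xzzx", "xzzzx"))  # False: no conjugate has both
```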
In what follows, we show that if $\operatorname{Allseq}$$(w)$ terminates and all the triples occurring there are
$\widetilde{c}$-defined, then $\operatorname{Allseq}$$(w)$ converges. The following lemma is the induction basis of
the proof. For brevity, we write $u \looparrowright^{r_{1}} v_{1}$ for $u\circlearrowleft u_{1} \rightarrow^{r_{1}}
v_{1}$, where $u_{1} \rightarrow^{r_{1}} v_{1}$ means that $v_{1}$ is obtained from the application of the rule
$r_{1}$ on $u_{1}$.
\begin{lem}\label{lem_one_step}
Let the triple $(w,r_{1},r_{2})$ be $\widetilde{c}$-defined.
Assume that $w \looparrowright^{r_{1}} v_{1}$ and $w \looparrowright^{r_{2}} v_{2}$. Then there are cyclically
conjugate words $z_{1}$ and $z_{2}$ such that $v_{1} \looparrowright^{*} z_{1}$ and
$v_{2} \looparrowright^{*} z_{2}$.
\end{lem}
\begin{proof}
We denote by $\ell_{1}$ and $\ell_{2}$ the left-hand sides of the rules $r_{1}$ and $r_{2}$ respectively and by $m _{1}$
and $m_{2}$ the corresponding right-hand sides. Then $\ell_{1}$ has an occurrence in $w_{1}$ and $\ell_{2}$ has an
occurrence in $w_{2}$, where $w_{1} \bumpeq w_{2} \bumpeq w$.
Since $(w,r_{1},r_{2})$ is $\widetilde{c}$-defined, there exists $\widetilde{w}$ such that $\widetilde{w} \bumpeq
w$ and $\ell_{1}$ and $\ell_{2}$ both have an occurrence in $\widetilde{w}$. Then one of the following holds:\\
$(i)$ $\widetilde{w}= x\ell_{1}y\ell_{2}s$, where $x,y,s$ are words.\\
$(ii)$ $\widetilde{w}= x\ell_{2}y\ell_{1}s$, where $x,y,s$ are words.\\
$(iii)$ $\widetilde{w}= x\ell_{1}\ell''_{2}y$, where $x,y$ are words, $\ell_{1}=\ell'_{1}\ell''_{1}$, $\ell_{2}=\ell'_{2}\ell''_{2}$ and
$\ell''_{1}=\ell'_{2}$.\\
$(iv)$ $\widetilde{w}= x\ell_{2}\ell''_{1}y$, where $x,y$ are words, $\ell_{1}=\ell'_{1}\ell''_{1}$, $\ell_{2}=\ell'_{2}\ell''_{2}$ and
$\ell''_{2}=\ell'_{1}$.\\
$(v)$ $\widetilde{w}= x\ell_{2}y$, where $x,y$ are words, $\ell_{1}$ is a subword of $\ell_{2}$.\\
$(vi)$ $\widetilde{w}= x\ell_{1}y$, where $x,y$ are words, $\ell_{2}$ is a subword of $\ell_{1}$.\\
We check the cases $(i)$, $(iii)$ and $(v)$; the other three cases are symmetric.
If both $\ell_{1}$ and $\ell_{2}$ have an occurrence in $w_{1}$ and in $w_{2}$, then obviously there are words $z_{1}$ and
$z_{2}$ such that $v_{1} \looparrowright z_{1}$ and
$v_{2} \looparrowright z_{2}$, where $z_{1} \bumpeq z_{2}$. So, assume that
$\ell_{1}$ has no occurrence in $w_{2}$ and $\ell_{2}$ has no occurrence in $w_{1}$.\\
Case $(i)$: Assume that $\widetilde{w}= x\ell_{1}y\ell_{2}s$. Then the words $w_{1}$ and $w_{2}$ have the
following form:
$w_{1}= \ell''_{2} sx\ell_{1}y\ell'_{2}$ and $w_{2}=\ell''_{1} y\ell_{2}sx\ell'_{1}$, where $\ell_{1}=\ell'_{1}\ell''_{1}$ and
$\ell_{2}=\ell'_{2}\ell''_{2}$. This is due to the fact that $\ell_{1}$ has no occurrence in $w_{2}$ and $\ell_{2}$ has no occurrence
in $w_{1}$. So, $w_{1}= \ell''_{2} sx\ell_{1}y\ell'_{2} \rightarrow \ell''_{2} sxm_{1}y\ell'_{2}\circlearrowleft^{i}
sxm_{1}y\ell'_{2}\ell''_{2}\rightarrow sxm_{1}ym_{2}$ and
$w_{2}=\ell''_{1} y\ell_{2}sx\ell'_{1} \rightarrow \ell''_{1} ym_{2}sx\ell'_{1} \circlearrowleft^{j}
ym_{2}sx\ell'_{1}\ell''_{1}\rightarrow ym_{2}sxm_{1}$.
We take then $z_{1}$ to be $sxm_{1}ym_{2}$ and $z_{2}$ to be $ym_{2}sxm_{1}$.\\
Case $(iii)$: Assume that $\widetilde{w}= x\ell_{1}\ell''_{2}y$, where $\ell''_{1}=\ell'_{2}$.
There is an overlap ambiguity between these rules which resolves, since $\Re$ is complete:\\
$\begin{array}{cccccccccc}
&&\ell'_{1}\ell''_{1}\ell''_{2}\\
&\swarrow &&\searrow\\
m_{1}\ell''_{2} &&&& \ell'_{1}m_{2}\\
&\searrow^{*}&&\swarrow^{*}\\
&&z
\end{array}$ \\
The words $w_{1}$ and $w_{2}$ have the following form:
$w_{1}= \ell''_{2} yx\ell_{1}$
and $w_{2}=\ell_{2} yx\ell'_{1}$.
So, $w_{1}= \ell''_{2} yx\ell_{1} \rightarrow \ell''_{2} yxm_{1} \circlearrowleft^{i} m_{1}\ell''_{2} yx \rightarrow^{*}
zyx$
and $w_{2}=\ell_{2} yx\ell'_{1}\rightarrow m_{2} yx\ell'_{1} \circlearrowleft^{j} \ell'_{1}m_{2} yx \rightarrow^{*} zyx$.
So, we take $z_{1}$ and $z_{2}$ to be $zyx$.\\
Case $(v)$: Assume that $\widetilde{w}= x\ell_{2}y$, where $\ell_{2}=s\ell_{1}t$.
There is an inclusion ambiguity between these rules which resolves, since $\Re$ is complete:\\
$\begin{array}{cccccccccc}
&&\ell_{2}=s\ell_{1}t\\
&\swarrow &&\searrow\\
sm_{1}t &&&& m_{2}\\
&\searrow^{*}&&\swarrow^{*}\\
&&z
\end{array}$ \\
The words $w_{1}$ and $w_{2}$ have the following form:
$w_{1}= tyxs\ell_{1}$
and $w_{2}=\widetilde{w}= x\ell_{2}y$.
So, $w_{1}= tyxs\ell_{1} \rightarrow tyxsm_{1} \circlearrowleft^{i} sm_{1}tyx \rightarrow^{*}
zyx$
and $w_{2}= x\ell_{2}y\rightarrow xm_{2}y \rightarrow^{*} xzy$.
So, we take $z_{1}$ to be $zyx$ and $z_{2}$ to be $xzy$.
\end{proof}
\begin{prop}\label{prop_triple_defined_converge}
Let $w$ be a word in $\Sigma^{*}$ and assume that $\operatorname{Allseq}$$(w)$ terminates.
Assume all the triples in
$\operatorname{Allseq}$$(w)$ are $\widetilde{c}$-defined, then $\operatorname{Allseq}$$(w)$ converges.
\end{prop}
\begin{proof}
We show that the restriction of $\looparrowright$ to $\operatorname{Allseq}$$(w)$ is confluent. Since
$\operatorname{Allseq}$$(w)$ terminates, it is enough to show that the restriction of $\looparrowright$ to
$\operatorname{Allseq}$$(w)$ is locally confluent. All the triples in $\operatorname{Allseq}$$(w)$ are
$\widetilde{c}$-defined, so from lemma \ref{lem_one_step} the restriction of $\looparrowright$ to
$\operatorname{Allseq}$$(w)$ is locally confluent.
\end{proof}
\section{A sufficient condition for the confluence of $\looparrowright$}
\hspace{0.5cm}We find a sufficient condition for the confluence of $\looparrowright$ that is based on an analysis of the rules in
$\Re$. For that, we translate the meaning of a triple that is not $\widetilde{c}$-defined in terms of the rules
in $\Re$.
\begin{defn}
Let $w=x_{1}x_{2}x_{3}..x_{k}$ be a word, where the $x_{i}$ are
generators for $1 \leq i \leq k$. Then we define the following
sets of words:\\
$\operatorname{pre}(w)=\{x_{1},x_{1}x_{2},x_{1}x_{2}x_{3},..,x_{1}x_{2}x_{3}..x_{k}\}$\\
$\operatorname{suf}(w)=\{x_{k},x_{k-1}x_{k},x_{k-2}x_{k-1}x_{k},..,x_{1}x_{2}x_{3}..x_{k}
\}$
\end{defn}
\begin{lem}\label{lem:no_Conj_pref_suff}
Let $(w,r_{1},r_{2})$ be a triple and let $\ell_{1}$ and $\ell_{2}$ denote the left-hand sides of the rules $r_{1}$ and
$r_{2}$, respectively. If $\operatorname{pre}(\ell_{2})\cap \operatorname{suf}(\ell_{1}) = \emptyset$ or
$\operatorname{pre}(\ell_{1})\cap
\operatorname{suf}(\ell_{2}) = \emptyset$, then the triple $(w,r_{1},r_{2})$ is $\widetilde{c}$-defined.
\end{lem}
\begin{proof}
From the assumption, $\ell_{1}$ is a subword of $w_{1}$ and $\ell_{2}$ is a subword of $w_{2}$, where $w_{1}$ and $w_{2}$ are
cyclic conjugates of $w$.
We show that there exists a cyclic conjugate of $w$, $\widetilde{w}$, such that both $\ell_{1}$ and $\ell_{2}$ are subwords
of $\widetilde{w}$.
If $\operatorname{pre}(\ell_{2})\cap \operatorname{suf}(\ell_{1}) = \emptyset$ and $\operatorname{pre}(\ell_{1})\cap
\operatorname{suf}(\ell_{2}) = \emptyset$ or if $\operatorname{pre}(\ell_{2})\cap \operatorname{suf}(\ell_{1}) \neq \emptyset$
and $\operatorname{pre}(\ell_{1})\cap \operatorname{suf}(\ell_{2})= \emptyset$, take $\widetilde{w}$ to be such that it ends in $\ell_{2}$ and then $\ell_{1}$ will also have an occurrence in $\widetilde{w}$. If
$\operatorname{pre}(\ell_{2})\cap \operatorname{suf}(\ell_{1}) = \emptyset$ and $\operatorname{pre}(\ell_{1})\cap
\operatorname{suf}(\ell_{2}) \neq \emptyset$, take $\widetilde{w}$ to be such that it
ends in $\ell_{1}$ and then $\ell_{2}$ will also have an occurrence in $\widetilde{w}$.
\end{proof}
Note that if $\operatorname{pre}(\ell_{2})\cap \operatorname{suf}(\ell_{1}) \neq \emptyset$ and
$\operatorname{pre}(\ell_{1})\cap
\operatorname{suf}(\ell_{2}) \neq \emptyset$, then it does not necessarily imply
that all the triples of the form $(w,r_{1},r_{2})$ are not $\widetilde{c}$-defined.
Yet, as the following example illustrates, there may exist a triple $(w,r_{1},r_{2})$ that is not
$\widetilde{c}$-defined.
\begin{ex}\label{ex_wirt_tripledefined}
Let $\Re= \{xy \rightarrow zx, yz \rightarrow zx, xz^{n}x \rightarrow zxzy^{n-1}, n \geq 1\}$ from Ex.
\ref{ex_def_triple}. The rules $xz^{2}x\rightarrow zxzy$ and $xz^{3}x\rightarrow zxzy^{2}$ satisfy
$\operatorname{pre}(xz^{2}x)\cap \operatorname{suf}(xz^{3}x)=\{x\}$ and $\operatorname{pre}(xz^{3}x)\cap
\operatorname{suf}(xz^{2}x)=\{x\}$. Yet, the triple $(xz^{2}xz^{3}x,\allowbreak xz^{2}x \rightarrow zxzy, xz^{3}x
\rightarrow zxzy^{2})$ is $\widetilde{c}$-defined, but the triple $(xz^{2}xz^{3},xz^{2}x \rightarrow zxzy, xz^{3}x
\allowbreak \rightarrow zxzy^{2})$ is not $\widetilde{c}$-defined.
\end{ex}
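The two intersections in this example follow directly from the definitions of $\operatorname{pre}$ and $\operatorname{suf}$. A sketch (Python; illustrative only, function names ours):

```python
def pre(w):
    # the set of non-empty prefixes of w (including w itself)
    return {w[:i] for i in range(1, len(w) + 1)}

def suf(w):
    # the set of non-empty suffixes of w (including w itself)
    return {w[-i:] for i in range(1, len(w) + 1)}

l1, l2 = "xzzx", "xzzzx"                      # left-hand sides xz^2x and xz^3x
print(pre(l1) & suf(l2), pre(l2) & suf(l1))   # {'x'} {'x'}, as in the example
```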
\begin{lem}\label{lem_notdefined_form_cyclicaloverlap}
Let $(w,r_{1},r_{2})$ be a triple and we denote by $\ell_{1}$ and $\ell_{2}$ the left-hand sides of the rules $r_{1}$ and
$r_{2}$, respectively.
Assume that $(w,r_{1},r_{2})$ is not $\widetilde{c}$-defined. Then $\ell_{1}=xuy$ and $\ell_{2}=yvx$, where $u,v$ are words
and $x,y$ are non-empty words.
\end{lem}
\begin{proof}
The triple $(w,r_{1},r_{2})$ is not $\widetilde{c}$-defined, so from lemma \ref{lem:no_Conj_pref_suff},
$\operatorname{pre}(\ell_{2})\cap \operatorname{suf}(\ell_{1}) \neq \emptyset$ and $\operatorname{pre}(\ell_{1})\cap
\operatorname{suf}(\ell_{2}) \neq \emptyset$. Assume that $\operatorname{pre}(\ell_{2})\cap
\operatorname{suf}(\ell_{1})\supseteq \{x\}$ and $\operatorname{pre}(\ell_{1})\cap
\operatorname{suf}(\ell_{2}) \supseteq \{y\}$, where $x,y$ are non-empty words.
So, $\ell_{1}$ and $\ell_{2}$ have one of the following forms:\\
$(i)$ $\ell_{1}=xuy$ and $\ell_{2}=yvx$, where $u,v$ are words.\\
$(ii)$ $\ell_{1}=xy$ and $\ell_{2}=yx''$, where $x=x'x''$, $y=y'y''$ and $y''=x'$.\\
$(iii)$ $\ell_{1}=xy''$ and $\ell_{2}=yx$, where $x=x'x''$, $y=y'y''$ and $x''=y'$.\\
$(iv)$ $\ell_{1}=xy''$ and $\ell_{2}=yx''$, where $x=x'x''$, $y=y'y''$, and $y''=x'$, $x''=y'$.\\
We show that only case $(i)$ occurs, by showing that in the cases $(ii)$, $(iii)$ and $(iv)$ the triple
$(w,r_{1},r_{2})$ is $\widetilde{c}$-defined. This is done by describing $\widetilde{w}$ on which both $r_{1}$ and
$r_{2}$ can be applied.
In any case, $w_{1}$ has to contain an occurrence of $\ell_{1}$ and $w_{2}$ has to contain an occurrence of $\ell_{2}$, where
$w_{1}$ and $w_{2}$ are cyclic conjugates of $w$. In case $(ii)$, $\ell_{1}=x'x''y'y''$ and $\ell_{2}=y'y''x''$, where
$y''=x'$, so there exists $\widetilde{w}=x'x''y'y''x''$ that contains an occurrence of $\ell_{1}$ and an
occurrence of $\ell_{2}$. Case $(iii)$ is symmetric to case $(ii)$ and we consider case $(iv)$.
In case $(iv)$, $\ell_{1}=x'x''y''$ and $\ell_{2}=y'y''x''$, where $y''=x'$ and $x''=y'$, so using the same argument as
before, take $\widetilde{w}$ to be $x'x''y''x''$.
So, case $(i)$ occurs and $w$ has the form $xuyv$.
\end{proof}
\begin{defn}
We say that there is a \emph{cyclical overlap} between rules, if
there are two rules in $\Re$ of the form $xuy \rightarrow u'$ and
$yvx \rightarrow v'$, where $u',v'$ are words, $u,v,x,y$ are non-empty words and such that
$u'v$ and $v'u$ are not cyclic conjugates in $\Sigma^{*}$.
We say that there is a \emph{cyclical inclusion} if there are
two rules in $\Re$, $l\rightarrow v$ and $l' \rightarrow v'$,
where $l,v,l',v'$ are words and
$l'$ is a cyclic conjugate of $l$ or $l'$ is a proper subword of a cyclic conjugate of
$l$. Whenever $l'$ is a cyclic conjugate of $l$, $v$ and $v'$ are not cyclic
conjugates in $\Sigma^{*}$ and whenever $l'$ is a proper subword of $\ell_{1}$, where $\ell_{1}$ is a cyclic conjugate of
$l$ (there is a non-empty word $u$ such that $\ell_{1}=ul'$),
then it holds that $l \rightarrow v$ and $l\circlearrowleft^{i}
\ell_{1}=ul' \rightarrow
uv'$, and $v$ and $uv'$ are not cyclic conjugates in $\Sigma^{*}$.
\end{defn}
In Example \ref{ex_wirt_tripledefined}, there is a cyclical overlap between the rules $xz^{2}x \rightarrow zxzy$ and
$xz^{3}x \rightarrow zxzy^{2}$. In Example \ref{ex_not_cyc_confluent}, there is a cyclical inclusion between the rules $ab \rightarrow \underline{ab}$ and $ba \rightarrow \underline{ba}$, since $ab$ is a cyclic conjugate of $ba$. In Example \ref{ex_braid_B3}, there is a cyclical
inclusion of the rule $bab\rightarrow aba$ in the rule $ba^{2}ba \rightarrow aba^{2}b$, since $bab$ is a subword of $baba^{2}$ (a cyclic conjugate of $ba^{2}ba$).
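Detecting the $\ell_{1}=xuy$, $\ell_{2}=yvx$ shape behind a cyclical overlap is a finite search over splittings of the left-hand sides. A sketch (Python; illustrative only, function names ours; note that an empty $u$ or $v$ in the returned splitting corresponds to a cyclical inclusion rather than an overlap):

```python
def overlap_shape(l1, l2):
    # search for non-empty x, y and (possibly empty) u, v with
    # l1 = x + u + y and l2 = y + v + x; returns one such splitting
    for i in range(1, len(l1)):
        for j in range(i, len(l1)):
            x, u, y = l1[:i], l1[i:j], l1[j:]
            if (len(l2) >= len(x) + len(y)
                    and l2.startswith(y) and l2.endswith(x)):
                return x, u, y, l2[len(y):len(l2) - len(x)]
    return None

def are_rot(a, b):
    # cyclic conjugacy test
    return len(a) == len(b) and b in a + a

x, u, y, v = overlap_shape("xzzx", "xzzzx")   # lhs of the xz^2x and xz^3x rules
print((x, u, y, v))                           # ('x', 'zz', 'x', 'zzz')
# with right-hand sides u' = zxzy and v' = zxzy^2, the words u'v and v'u
# are not cyclic conjugates, so this is a genuine cyclical overlap
assert not are_rot("zxzy" + v, "zxzyy" + u)
```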
\begin{lem}\label{lem_pref_overlap}
Let $(w,r_{1},r_{2})$ be a triple and let $\ell_{1}$ and $\ell_{2}$ be the left-hand sides of the rules $r_{1}$ and
$r_{2}$, respectively.
Assume that the triple $(w,r_{1},r_{2})$ is not $\widetilde{c}$-defined.
Then there is a cyclical overlap or a cyclical inclusion between $r_{1}$ and $r_{2}$.
\end{lem}
\begin{proof}
The triple $(w,r_{1},r_{2})$ is not $\widetilde{c}$-defined, so from lemma \ref{lem_notdefined_form_cyclicaloverlap},
$\ell_{1}=xuy$ and $\ell_{2}=yvx$, where $x,y$ are non-empty words and
$u,v$ are words. If $u$ and $v$ are both the empty word, then
$\ell_{1}$ and $\ell_{2}$ are cyclic conjugates, that is there is a
cyclical inclusion. If $u$ is the empty word but $v$ is not the
empty word, then $\ell_{1}=xy$ and $\ell_{2}=yvx$, which means that
$\ell_{1}$ is a subword of a cyclic conjugate of $\ell_{2}$. So, in this
case and in the symmetric case (that is $v$ is the empty word but $u$
is not the empty word) there is a cyclical inclusion. If none of
$u$ and $v$ is the empty word, then $\ell_{1}=xuy$ and $\ell_{2}=yvx$,
that is there is a cyclical overlap between these two rules.
\end{proof}
\begin{prop}\label{prop_nooverlap_allseq_converges}
Let $w$ be a word in $\Sigma^{*}$
and assume that $\operatorname{Allseq}$$(w)$ terminates.
If there are no cyclical overlaps and cyclical inclusions in $\operatorname{Allseq}$$(w)$, then
$\operatorname{Allseq}$$(w)$ converges.
\end{prop}
\begin{proof}
If $\operatorname{Allseq}$$(w)$ does not converge, then from Proposition \ref{prop_triple_defined_converge}
there is a triple $(w,r_{1},r_{2})$ in $\operatorname{Allseq}$$(w)$ that is not $\widetilde{c}$-defined.
From lemma \ref{lem_pref_overlap}, this implies that there is a cyclical overlap or a cyclical inclusion in
$\operatorname{Allseq}$$(w)$.
\end{proof}
Note that the converse is not necessarily true, that is there may be a cyclical overlap or a cyclical inclusion in
$\operatorname{Allseq}$$(w)$ and yet a unique cyclically irreducible form is achieved in $\operatorname{Allseq}$$(w)$,
as in the following example.
\begin{ex}
Let $\Re=\{ bab\rightarrow aba,ba^{n}ba\rightarrow aba^{2}b^{n-1}, n \geq 2\}$.
Let $w=ba^{2}ba$; then $\operatorname{Allseq}$$(w)$ does not terminate (see Ex. \ref{ex_braid_B3}). The triple
$(w,bab\rightarrow aba, ba^{2}ba \rightarrow aba^{2}b )$ is not $\widetilde{c}-$defined since there is a cyclical
inclusion of the rule $bab\rightarrow aba$ in the rule $ba^{2}ba \rightarrow aba^{2}b$. Nevertheless, $w$ has a unique
cyclically irreducible form $ba^{4}$ (up to $\bumpeq$): $ba^{2}ba\rightarrow aba^{2}b \circlearrowleft^{4}
baba^{2}\rightarrow aba^{3}$.
In fact, each $w=ba^{n}ba$ where $n \geq 2$ has a unique cyclically irreducible form $ba^{n+2}$ (up to $\bumpeq$).
\end{ex}
\begin{thm}\label{theo:conf_over_inc_amb_resolv}
Let $\Re$ be a complete rewriting system that is
cyclically terminating. If there are no rules in $\Re$ with cyclical overlaps or cyclical inclusions,
then $\Re$ is cyclically confluent.
\end{thm}
\begin{proof}
From Proposition \ref{prop_nooverlap_allseq_converges}, if there are no rules in $\Re$ with cyclical overlaps or
cyclical inclusions then $\operatorname{Allseq}$$(w)$ converges for all $w$.
Since $\Re$ is cyclically terminating, $\Re$ is cyclically confluent if and only if
$\operatorname{Allseq}$$(w)$ converges for all $w$, so the proof is done.
\end{proof}
\section{The algorithm of cyclical completion}
\hspace{0.5cm}Knuth and Bendix have elaborated an
algorithm which, for a given finite and terminating rewriting system $\Re$, tests its completeness; if $\Re$ is not
complete,
then new rules are added to complete it. This procedure can have one of three outcomes: success in finding a finite and
complete system, failure in finding anything, or looping and generating an infinite number of rules (see
\cite{handbook}). Instead of testing the
confluence of $\Re$, the algorithm tests the local confluence of
$\Re$, since for a terminating rewriting system local
confluence and confluence are equivalent. Two rewriting systems
$\Re$ and $\Re'$ are said to be \emph{equivalent} if:
$w_{1}\leftrightarrow^{*}w_{2}$ modulo $\Re$ if and only if
$w_{1}\leftrightarrow^{*}w_{2}$ modulo $\Re'$. So, by applying the
Knuth-Bendix algorithm on a terminating rewriting system $\Re$ a
complete rewriting system $\Re'$ that is equivalent to $\Re$ can
be found (if the algorithm does not fail). Our aim in this section is to provide an algorithm of
cyclical completion which is much inspired by the Knuth-Bendix
algorithm of completion.
Let $\Re$ be a complete and cyclically terminating rewriting system; we assume that $\Re$ is finite.
From Theorem \ref{theo:conf_over_inc_amb_resolv},
if there are no cyclical overlaps or cyclical inclusions then $\Re$ is cyclically
confluent. Nevertheless, if there is a cyclical overlap or a cyclical inclusion, we define when it resolves in the
following way. We say that the cyclical overlap between the rules $xuy \rightarrow u'$ and
$yvx \rightarrow v'$, where $u,v,u',v'$ are words, $x,y$ are non-empty words \emph{resolves} if there exist cyclically
conjugate words $z$ and $z'$ such that $u'v\looparrowright^{*}z$ and
$uv'\looparrowright^{*}z'$.
If there is a cyclical inclusion between the rules $l\rightarrow v$ and $l' \rightarrow v'$,
where $l,v,l',v'$ are words and
$l'$ is a cyclic conjugate of $l$ or $l'$ is a proper subword of a cyclic conjugate of
$l$, then we say that it resolves if there exist cyclically conjugate words $z$ and $z'$ such that
$v\looparrowright^{*}z$ and
$v'\looparrowright^{*}z'$ in the first case or
$v\looparrowright^{*}z$ and $uv'\looparrowright^{*}z'$
in the second case ($z \bumpeq z'$).
\begin{ex}
We consider the complete and finite rewriting system from Ex. \ref{ex_not_cyc_confluent}. Since there is a cyclical inclusion between the rules $ab \rightarrow \underline{ab}$ and $ba \rightarrow \underline{ba}$, it holds that $ab \looparrowright \underline{ab}$
and $ab \looparrowright \underline{ba}$, where $\underline{ab}$ and $\underline{ba}$ are cyclically irreducible. We can decide arbitrarily whether $\underline{ab} \looparrowright^{+}\underline{ba}$ or $\underline{ba} \looparrowright^{+}\underline{ab}$; in any case this cyclical inclusion resolves.
\end{ex}
In the following, we describe the algorithm of cyclical completion in which we add some new cyclical reductions. We
denote by $\Re^{+}$ the rewriting system with the added cyclical reductions and we add ``$+$'' in
$\looparrowright^{+}$ for each cyclical reduction that is added in the process of cyclical completion. We assume that
$\Re$ is a finite, complete and cyclically terminating rewriting system. The algorithm is described in the
following.\\
$(i)$ If there are no cyclical overlaps or cyclical inclusions, then $\Re$ is cyclically
confluent, from Theorem \ref{theo:conf_over_inc_amb_resolv} and $\Re^{+}=\Re$.\\
$(ii)$ Assume there is a cyclical overlap or a cyclical inclusion in the word $w$:
$ w \looparrowright z_{1}$ and $ w \looparrowright z_{2}$.\\
With no loss of generality, we can assume that $z_{1}$ and $z_{2}$ are cyclically irreducible (since otherwise we can
first cyclically reduce them), then decide $z_{1}\looparrowright^{+} z_{2}$ or $z_{2}\looparrowright^{+} z_{1}$.
If, at a former step, no cyclical reduction $z_{i}\looparrowright^{+} u$ or $u\looparrowright^{+} z_{i}$ for $i=1,2$ was added, then
we can decide arbitrarily whether $z_{1}\looparrowright^{+} z_{2}$ or $z_{2}\looparrowright^{+} z_{1}$.
As an example, if $z_{1}\looparrowright^{+} u$ was added, then we choose $z_{2}\looparrowright^{+} z_{1}$.
The algorithm fails if the addition of a new cyclical reduction creates a contradiction: assume $z_{1}$ and
$z_{2}$ are cyclically irreducible and we need to add $z_{1}\looparrowright^{+} z_{2}$ or
$z_{2}\looparrowright^{+} z_{1}$ but $z_{1}\looparrowright^{+} u$ and $z_{2}\looparrowright^{+} v$ are
already in $\Re^{+}$. In the Knuth-Bendix algorithm of completion, the addition of the new rules may create
some additional overlap or inclusion ambiguities. We show in the following that this is not the case with the
algorithm of cyclical completion; this is due to the fact that the relation $\looparrowright$ is not
compatible with concatenation. From Proposition \ref{prop_samerho_conj}, if $u \looparrowright^{*} v$ then
$u\equiv_{M} v$. In the following lemma, we show that this holds also for $\looparrowright^{+}$.
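The orientation step $(ii)$ of the procedure can be sketched as follows (Python; a simplified illustration under the assumption that the cyclical ambiguities have already been collected as pairs of cyclically irreducible words; it performs only the orientation and failure test described above, and the function name is ours):

```python
def orient_ambiguities(ambiguities):
    # ambiguities: pairs (z1, z2) of cyclically irreducible forms reached
    # from a common word; returns the added reductions, or None on failure
    added = {}                        # z -> word it now cyclically reduces to
    for z1, z2 in ambiguities:
        if z1 in added and z2 in added:
            return None               # contradiction: the algorithm fails
        if z1 in added:               # z1 already reduces, so add z2 -> z1
            added[z2] = z1
        else:                         # otherwise orient (arbitrarily) z1 -> z2
            added[z1] = z2
    return added

print(orient_ambiguities([("ab", "ba")]))                        # {'ab': 'ba'}
print(orient_ambiguities([("p", "q"), ("r", "s"), ("p", "r")]))  # None: fails
```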
\begin{lem}\label{lem_new_cycl_rule_equiv}
Let $\Re$ be a complete and cyclically terminating rewriting system. We
assume that $\Re$ is finite.
Let $\Re^{+}$ be the cyclical rewriting system obtained from the application of the algorithm of cyclical completion on
$\Re$. If $u \looparrowright^{+} v$ then $u \equiv_{M} v$ modulo $\Re$.
\end{lem}
\begin{proof}
There are two cases to check: if $u \looparrowright^{+} v$ and if $u \looparrowright^{+}u_{2}
\looparrowright^{+}u_{3}.. \looparrowright^{+} v$.
If $u \looparrowright^{+} v$, then from the algorithm of cyclical completion, there is a word $w$ such that $w
\looparrowright^{*} u$ and $w \looparrowright^{*} v$. From Proposition \ref{prop_samerho_conj}, this implies $w
\equiv_{M} u $ and $w \equiv_{M} v $ (modulo $\Re$), so $u \equiv_{M} v$ (modulo $\Re$). If $u
\looparrowright^{+}u_{2} \looparrowright^{+}u_{3}.. u_{k} \looparrowright^{+} v$, then $u_{i} \equiv_{M} u_{i+1}$
(modulo $\Re$) from the first case, so $u \equiv_{M} v$ (modulo $\Re$).
\end{proof}
Given two complete and cyclically terminating rewriting systems $\Re$ and $\Re'$, we say that $\Re$ and $\Re'$
are \emph{cyclically equivalent} if the following condition holds: $u\equiv_{M} v$ modulo $\Re'$ if and only if
$u\equiv_{M} v$ modulo $\Re$. We show that the cyclical rewriting system $\Re^{+}$ obtained from the application of
the algorithm of cyclical completion on $\Re$ is cyclically equivalent to $\Re$.
\begin{lem}
Let $\Re$ be a complete and cyclically terminating rewriting system; we assume that $\Re$ is finite.
Let $\Re^{+}$ be the cyclical rewriting system obtained from the application of the algorithm of cyclical completion
on $\Re$. Then $\Re^{+}$ and $\Re$ are cyclically equivalent, that is $u\equiv_{M} v$ modulo $\Re^{+}$ if and only if
$u\equiv_{M} v$ modulo $\Re$.
\end{lem}
\begin{proof}
It holds that $u\equiv_{M} v$ modulo $\Re$ if and only if there are words $x,y$ in $\Sigma^{*}$ such that $ux=_{M}xv$
and $yu=_{M}vy$. Since the (linear) rules in $\Re^{+}$ are the same as those in $\Re$, this holds if and only if
$u\equiv_{M} v$ modulo $\Re^{+}$ also.
\end{proof}
We say that there is a \emph{cyclical ambiguity} in $w$ if
$w\looparrowright^{*}u$ and $w\looparrowright^{*}v$, where $u$ and $v$ are not cyclic conjugates.
If there exist cyclically conjugate words $z$ and $z'$ in $\Sigma^{*}$ such that
$u\looparrowright^{*}z$ and $v\looparrowright^{*}z'$, then we say that this cyclical ambiguity \emph{resolves}.
Clearly, a rewriting system is cyclically confluent if and only if all the cyclical ambiguities resolve. Now, we show
that whenever the algorithm of cyclical completion does not fail, the rewriting system obtained $\Re^{+}$ is cyclically
complete.
\begin{prop}\label{thm_proof_algo_compl}
Let $\Re$ be a complete and cyclically terminating rewriting system; we
assume that $\Re$ is finite.
Let $\Re^{+}$ be the cyclical rewriting system obtained from the application of the algorithm of cyclical completion on
$\Re$. Then $\Re^{+}$ is cyclically complete.
\end{prop}
\begin{proof}
We need to show that $\Re^{+}$ is cyclically confluent.
Clearly, by the application of the algorithm of cyclical completion on $\Re$ the cyclical overlaps and inclusions in
$\Re$ are resolved. So, it remains to show that the addition of the new cyclical rules in $\Re^{+}$ does not create a
cyclical ambiguity. If a cyclical ambiguity occurs, then there should be one of the following kinds of rules in
$\Re^{+}$:\\
- $u \looparrowright^{+} v$ and $l\rightarrow x$, where $l\bumpeq u$.\\
- $u \looparrowright^{+} v$ and $l \looparrowright^{+} x$, where $l\bumpeq u$.\\
The first case cannot occur, since $u$ is cyclically irreducible modulo $\Re$ and the second case cannot occur, since
in this case the algorithm of cyclical completion fails.
\end{proof}
\section{Length-preserving rewriting systems}
\hspace{0.5cm}We say that a rewriting system $\Re$ is
\emph{length-preserving} if $\Re$ satisfies the condition that
the left-hand sides of rules have the same length as their
corresponding right-hand sides. We show that if $\Re$ is a length-preserving
rewriting system, then an infinite sequence of cyclical reductions occurs only if there is a
repetition of some word in the sequence or if a word and its cyclic conjugate occur there. Using this fact, we define
an equivalence relation on the words that permits us to obtain some partial results in the case that $\Re$ is not
cyclically terminating.
\begin{lem}\label{lem_length_termin}
Let $\Re$ be a complete rewriting system that is length-preserving. If there is an infinite sequence of cyclical reductions, then it contains
(at two different positions) words that are cyclic conjugates.
\end{lem}
\begin{proof}
From the assumption,
applying $\Re$ on a word $u$ does not change its length $\ell(u)$, so all the words appearing in such an infinite
sequence have the same length. Since the number of
words of length $\ell(u)$ is finite, an infinite sequence of cyclical reductions occurs only if it contains
words that are cyclic conjugates at two different positions.
\end{proof}
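For a length-preserving system, the lemma yields a decision procedure for the termination of a single sequence of cyclical reductions: the state space is finite, so one watches for a repeated rotation class. A sketch (Python; illustrative only, with function names ours; it follows one deterministic greedy path, not all of $\operatorname{Allseq}$$(w)$):

```python
RULES = [("bab", "aba"), ("baaba", "abaab")]    # length-preserving rules

def step(w):
    # one cyclical reduction: first rotation admitting a rule, first rule
    for r in (w[i:] + w[:i] for i in range(len(w))):
        for lhs, rhs in RULES:
            j = r.find(lhs)
            if j >= 0:
                return r[:j] + rhs + r[j + len(lhs):]
    return None                                  # cyclically irreducible

def greedy_path_loops(w):
    seen = set()
    while w is not None:
        c = min(w[i:] + w[:i] for i in range(len(w)))  # rotation-class key
        if c in seen:
            return True          # a cyclic conjugate repeats: infinite sequence
        seen.add(c)
        w = step(w)
    return False                 # the path reaches a cyclically irreducible word

print(greedy_path_loops("baaba"), greedy_path_loops("aabab"))  # True False
```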
Note that using the same argument as in lemma \ref{lem_length_termin}, we have that if $\Re$ is length-decreasing,
that is all the left-hand sides of rules have length greater than their
corresponding right-hand sides, then there is no infinite sequence of cyclical reductions, that is $\Re$ is cyclically
terminating. In the following lemma, we show that if there is an infinite sequence of cyclical reductions that results
from the occurrence of a word $w$ and its cyclic conjugate $\widetilde{w}$, then there are some relations of
commutativity involving $w$ and $\widetilde{w}$.
It is not clear whether these relations of commutativity are a sufficient condition for the occurrence of an infinite
sequence, nor whether such a sufficient condition can be found.
\begin{lem}
Assume there is an infinite sequence $w \looparrowright^{*}\widetilde{w}$,
where $w \bumpeq \widetilde{w}$.
Then there are words $x,y$ such that $yx\widetilde{w}=_{M}\widetilde{w}yx$ and
$xyw=_{M}wxy$.
\end{lem}
\begin{proof}
From Proposition \ref{prop_samerho_conj}, $w \equiv_{M} \widetilde{w}$, that is there are words $x,y$ in $\Sigma^{*}$
such that $wx=_{M}x\widetilde{w}$ and $yw=_{M}\widetilde{w}y$. So, $wxy=_{M}x\widetilde{w}y=_{M}xyw$ and
$yx\widetilde{w}=_{M}ywx=_{M}\widetilde{w}yx$.
\end{proof}
We now define the following equivalence relation $\sim$ on $\Sigma^{*}$. Let $u,v$ be different words in $\Sigma^{*}$.
We define $u \sim v$ if and only if $u\looparrowright^{*}v$ and
$v\looparrowright^{*}u$; this is an equivalence relation.
Clearly, if $\Re$ is cyclically terminating, then each
equivalence class contains a single word, up to $\bumpeq$.
Now, we show that there is a partial solution to the left and right conjugacy problem, using
$\sim$ in the case that $\Re$ is not cyclically terminating. Note that given a word $w$ such that
$\operatorname{Allseq}$$(w)$ does not terminate, it may occur one of the following; either there is no cyclically
irreducible form achieved in $\operatorname{Allseq}$$(w)$ (as in Ex. \ref{ex_not_termin_no_cyc_irred}) or there is a
unique cyclically irreducible form achieved in $\operatorname{Allseq}$$(w)$ (as in Ex.
\ref{ex_braid_uniquecyclicalform}).
\begin{prop}
Let $u$ and $v$ be in $\Sigma^{*}$. If there exists a word $z$ such that $u \sim z$ and $v \sim z$, then $u \equiv_{M}
v$.
\end{prop}
\begin{proof}
If there exists a word $z$ such that $u \sim z$ and $v \sim z$, then from the definition of $\sim$ there are sequences
$u \looparrowright^{*} z $ and $v \looparrowright^{*}z$. From Proposition
\ref{prop_samerho_conj}, this implies $u \equiv_{M} z$ and $v \equiv_{M} z$, so $u \equiv_{M} v$.
\end{proof}
Note that the converse is not true, as the following example illustrates.
\begin{ex}
Let $\Re=\{ bab\rightarrow aba,ba^{n}ba\rightarrow aba^{2}b^{n-1}, n \geq 2\}$.
It holds that $a\equiv_{M}b$, since $a(aba)=_{M}(aba)b$ and
$(aba)a=_{M}b(aba)$. Yet, there is no sequence of cyclical reductions relating $a$ and $b$, so $a \sim b$ does not hold.
\end{ex}
A rewriting system that is not length increasing (that is, all the rules preserve or decrease the
length) can be considered \emph{cyclically terminating up to $\sim$}; applying the algorithm of cyclical completion to it
then yields a system that is \emph{cyclically complete up to $\sim$}. This is because, also in this case, infinite
cyclical sequences would result from the occurrence of a word and one of its cyclic conjugates. If a cyclically
irreducible form exists then it is unique, but its existence is not ensured.
The complete and finite rewriting system $\Re$ from Ex. \ref{ex_not_cyc_confluent} illustrates this situation. It is not length increasing and not cyclically terminating, since there are infinite sequences of cyclical reductions (for example, $\Delta a \rightarrow b \Delta \circlearrowleft ^{1} \Delta b \rightarrow a \Delta$). Applying the algorithm of cyclical completion to $\Re$ gives $\Re^{+}=\Re \cup \{\underline{ab} \looparrowright^{+} \underline{ba}\}$, which is cyclically complete up to $\sim$. Nevertheless, there are words that have no cyclically irreducible form ($\Delta a$, for example).
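The behaviour just described can be sketched computationally. The following Python fragment is our own illustrative encoding — words as strings, rules as pairs of strings — and not part of the paper's formalism. It performs a breadth-first search over cyclical reductions: a rule may be applied inside any cyclic conjugate of the current word, and a word is cyclically irreducible when no rule applies to any of its conjugates.

```python
from collections import deque

def rotations(w):
    """All cyclic conjugates of the word w."""
    return {w[i:] + w[:i] for i in range(max(len(w), 1))}

def one_step(w, rules):
    """Words reachable from w by one cyclical reduction, i.e. by
    applying some rule inside some cyclic conjugate of w."""
    out = set()
    for r in rotations(w):
        for lhs, rhs in rules:
            i = r.find(lhs)
            while i != -1:
                out.add(r[:i] + rhs + r[i + len(lhs):])
                i = r.find(lhs, i + 1)
    return out

def cyclically_irreducible_forms(w, rules, cap=10_000):
    """Breadth-first search over cyclical reductions; returns the set of
    cyclically irreducible words reachable from w, each represented by its
    least rotation (i.e. up to cyclic conjugacy)."""
    seen, queue, irreducible = set(), deque([w]), set()
    while queue and len(seen) < cap:
        v = queue.popleft()
        key = min(rotations(v))          # canonical representative
        if key in seen:
            continue
        seen.add(key)
        succ = one_step(v, rules)
        if not succ:
            irreducible.add(key)
        queue.extend(succ)
    return irreducible

# With the single length-preserving rule ba -> ab, every cyclic conjugate of a
# word containing both letters still matches the rule, so the search cycles
# between conjugacy classes and finds no cyclically irreducible form:
print(cyclically_irreducible_forms("abab", [("ba", "ab")]))   # prints set()
# A length-decreasing rule, by contrast, terminates cyclically:
print(cyclically_irreducible_forms("aaa", [("aa", "a")]))     # prints {'a'}
```

The first call mirrors the situation of $\Delta a$ above: the search keeps moving between conjugacy classes and no cyclically irreducible form is ever reached.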
\bibliographystyle{amsplain}
\section{Introduction}
A high-energy lepton collider (LC) \cite{Heuer:2012gi} offers the possibility of direct
production of lepton number ($L$) carrying heavy states (such as sleptons $\tilde l$ in supersymmetry) and of doing
precision physics profiting from low QCD activity. The possibility of polarising the colliding beams with
great accuracy makes the physics potential of a lepton collider even more ambitious. Indeed, by polarising
the initial beams, one is able to (approximately) project the chirality of the couplings intervening in the
primary production process, thus probing the chirality structure of an underlying new physics model. Polarisation can also be
used to suppress the background from Standard Model (SM) processes (e.g.\ from weak charged currents) to charged
lepton flavour violation (cLFV) signals. Moreover, threshold scans of chirality projected primary produced particles can be used to determine
their masses \cite{Feng:1998ud}.
Supersymmetry (SUSY) remains one of the most attractive extensions of the SM. Thus, a very appealing mechanism to explain
the smallness of neutrino masses and generate neutrino mixing is to consider a supersymmetric
seesaw. A type-I seesaw mechanism in which the right-handed (RH) neutrinos have masses close to the grand
unification (GUT) scale, generates neutrino masses and mixings with naturally large Yukawa couplings,
and can offer an explanation for the observed baryon asymmetry of the universe. However, the drawback
of such a set-up is that it is very hard to probe since the very heavy RH neutrinos cannot be produced at
colliders. On the other hand, in a supersymmetric type-I seesaw (usually dubbed SUSY seesaw) the radiative
corrections generate flavour violation in the slepton sector \cite{Borzumati:1986qx}, giving
additional sources of LFV at low energy. At a high-energy LC, sleptons can be copiously produced and their decays
lead to potentially observable cLFV final states, thus providing a unique probe of this mass
generation mechanism.
SUSY-seesaw induced cLFV at a LC has been studied in \cite{Deppisch:2003wt,Deppisch:2004pc} focusing on
$\tau$-$\mu$ flavour violation, while in \cite{Carquin:2011rg} slepton driven cLFV was also considered but
without relying on a particular origin of slepton mixing.
Motivated by the excellent muon reconstruction capabilities and the recently improved upper-limit on
$\mu\to e\gamma$ \cite{Adam:2011ch}, we studied the SUSY seesaw induced $\mu$-$e$
cLFV and its discovery potential at a LC working with polarisable beams \cite{Abada:2012re}.
\section{The SUSY seesaw}
The SUSY seesaw model comprises the minimal supersymmetric standard model (MSSM)
extended by three\footnote{We assume that the number of RH neutrino generations mimics the number of generations of SM fermions. In fact,
two are sufficient in order to comply with neutrino oscillation parameters.} chiral superfields
$\hat N^c_i \sim \left( \nu^c, \tilde \nu^\dagger_R \right)_i$, so-called RH neutrino superfields, that are singlets under the SM gauge group.
The leptonic part of the superpotential is given by
\begin{equation}
\mathcal{W}^{\text{lepton}} = \frac{1}{2} \hat N^c M_N \hat N^c + \hat N^c Y^\nu \hat L \hat H_2 + \hat E^c Y^l \hat L \hat H_1 \,.
\end{equation}
Hereafter we work in a flavour basis where the charged lepton Yukawa couplings $Y^l$ and the RH neutrino mass matrix $M_N$
are diagonal. The slepton soft breaking terms are
\begin{eqnarray}
\mathcal{V}^{\text{slepton}}_{\text{soft}} & = & m^2_{\tilde L} \tilde \ell_L \tilde \ell^*_L + m^2_{\tilde E} \tilde \ell_R \tilde \ell^*_R + m^2_{\tilde \nu_R} \tilde \nu_R \tilde \nu^*_R \nonumber\\
&& + \left( A_l H_1 \tilde l_L \tilde l^*_R + A_\nu H_2 \tilde \nu_L \tilde \nu^*_R + B_\nu \tilde \nu_R \tilde \nu_R + \text{H.c.} \right) \, .
\end{eqnarray}
Our analysis is conducted in a framework where SUSY breaking is flavour blind (as in minimal supergravity mediated SUSY breaking) and
the soft breaking parameters satisfy certain universality conditions at a high-energy scale, which we take to be the gauge coupling
unification scale $M_{\text{GUT}} \sim 10^{16}$ GeV:
\begin{eqnarray}
&& \left( m_{\tilde L} \right)^2_{ij} = \left( m_{\tilde E} \right)^2_{ij} = \left( m_{\tilde \nu_R} \right)^2_{ij} = m^2_0 \delta_{ij} \,,\\
&& \left( A_l \right)_{ij} = A_0 \left( Y^l \right)_{ij} \,, \left( A_\nu \right)_{ij} = A_0 \left( Y^\nu \right)_{ij} \,,
\end{eqnarray}
where $m_0$ and $A_0$ are the universal scalar soft-breaking mass and trilinear couplings of the constrained MSSM (cMSSM), and $i,j$
denote lepton flavour indices ($i,j = 1,2,3$).
After electroweak (EW) symmetry breaking, the neutrino mass matrix is approximately given by $m_\nu \simeq -{m^\nu_D}^T M^{-1}_N m^\nu_D$,
where $m^\nu_D = Y^\nu v_2$ and $v_i$ is the vacuum expectation value of $H_i$ ($v_{1(2)} = v \cos(\sin)\beta$, with $v = 174$ GeV).
A convenient means of parametrizing the neutrino Yukawa couplings, while at the same time allowing to accommodate the experimental data,
is given by the Casas-Ibarra parametrization \cite{Casas:2001sr}, which reads at the seesaw scale $M_N$
\begin{equation}
Y^\nu v_2 = m^\nu_D = i \sqrt{M^{\text{diag}}_N} R \sqrt{m^{\text{diag}}_\nu} U^\dagger_{\text{MNS}} \,, \label{eq.CasasIbarra}
\end{equation}
where $U_{\text{MNS}}$ is the light neutrino mixing matrix and $R$ is a $3 \times 3$ complex orthogonal matrix that encodes the possible
mixings involving RH neutrinos. We use the standard parametrization for the $U_{\text{MNS}}$, with the three mixing angles in the intervals
favoured by current data \cite{GonzalezGarcia:2012sz}.
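Equation \eqref{eq.CasasIbarra} is straightforward to evaluate numerically. The sketch below uses illustrative inputs — real PMNS angles with phases set to zero, a trivial $R$ matrix, a degenerate $M_N = 10^{12}$ GeV spectrum and normal-ordering-like light masses — which are assumptions for this example, not the benchmark values used later. It constructs $Y^\nu$ and verifies that the seesaw relation $m_\nu \simeq -{m^\nu_D}^T M^{-1}_N m^\nu_D$ reproduces the input light spectrum.

```python
import numpy as np

# --- illustrative low-energy inputs (not the paper's benchmark values) ---
v2  = 174.0 * np.sin(np.arctan(10.0))       # v*sin(beta) in GeV, tan(beta)=10
mnu = np.diag([1e-12, 8.7e-12, 5.0e-11])    # light masses in GeV (meV scale)
MN  = np.diag([1e12, 1e12, 1e12])           # degenerate seesaw scale in GeV

def pmns(t12, t23, t13):
    """Standard PMNS parametrization with all CP phases set to zero."""
    c12, s12 = np.cos(t12), np.sin(t12)
    c23, s23 = np.cos(t23), np.sin(t23)
    c13, s13 = np.cos(t13), np.sin(t13)
    return np.array([
        [c12 * c13, s12 * c13, s13],
        [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13, s23 * c13],
        [s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13]])

U = pmns(np.radians(34.0), np.radians(45.0), np.radians(10.0))
R = np.eye(3)                               # trivial orthogonal R matrix

# Casas-Ibarra: Y^nu v2 = i sqrt(MN) R sqrt(mnu) U^dagger
mD  = 1j * np.sqrt(MN) @ R @ np.sqrt(mnu) @ U.conj().T
Ynu = mD / v2

# consistency check: -mD^T MN^-1 mD must reproduce the light mass matrix
mnu_seesaw = -mD.T @ np.linalg.inv(MN) @ mD
assert np.allclose(U.T @ mnu_seesaw @ U, mnu, atol=1e-20)
print(np.abs(Ynu).max())   # ~0.03: naturally sizeable Yukawas for MN ~ 1e12 GeV
```

The same few lines make explicit why a high seesaw scale goes hand in hand with large Yukawa couplings: $|Y^\nu| \propto \sqrt{M_N m_\nu}/v_2$.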
\subsection{Flavour violation in the slepton sector}
Due to the non-trivial flavour structure of $Y^\nu$, the running from $M_{\text{GUT}}$ down to the seesaw scale will induce
flavour mixing \cite{Borzumati:1986qx} in the otherwise approximately flavour conserving slepton soft breaking terms.
This running is more
pronounced in the soft breaking terms involving slepton doublets since these have local interactions with the RH (s)neutrinos.
At leading order, the flavour mixing induced by these radiative corrections has the form
\begin{eqnarray}
&& \left( \Delta m^2_{\tilde L} \right)_{ij} = -\frac{1}{8\pi^2} \left( 3 m^2_0 + A^2_0 \right) \left( {Y^\nu}^\dagger L Y^\nu \right)_{ij} \,,\\
&& \left( \Delta A_l \right)_{ij} = -\frac{3}{16\pi^2} A_0 Y^l_{ij} \left( {Y^\nu}^\dagger L Y^\nu \right)_{ij} \,;\, L_{kl} \equiv \log\left(\frac{M_{\text{GUT}}}{M_{N_k}}\right) \delta_{kl} \,.
\end{eqnarray}
The amount of flavour violation in the slepton sector is encoded in $\left( {Y^\nu}^\dagger L Y^\nu \right)_{ij}$ which,
as made explicit by equation \eqref{eq.CasasIbarra}, is related to high- and low-energy neutrino parameters.
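The leading-log expressions above reduce to a single matrix product. The sketch below evaluates $(\Delta m^2_{\tilde L})_{ij}$ using a made-up $Y^\nu$ of the typical size obtained from the Casas-Ibarra parametrization; the soft parameters are illustrative and are not those of table \ref{tb.points}.

```python
import numpy as np

# leading-log slepton-doublet mass corrections (illustrative inputs)
m0, A0 = 200.0, 0.0                        # GeV, cMSSM-like soft terms
M_GUT  = 2.0e16                            # GeV
M_N    = np.array([1e12, 1e12, 1e12])      # degenerate RH-neutrino masses (GeV)

rng = np.random.default_rng(0)
Ynu = 0.03 * rng.standard_normal((3, 3))   # stand-in neutrino Yukawa matrix

L   = np.diag(np.log(M_GUT / M_N))         # L_kl = log(M_GUT/M_Nk) delta_kl
dm2 = -(3.0 * m0**2 + A0**2) / (8.0 * np.pi**2) * (Ynu.conj().T @ L @ Ynu)

# the 2-1 entry drives mu-e flavour violation in the LL slepton block
print(dm2[1, 0])
```

For real $Y^\nu$ the correction is symmetric, and its off-diagonal entries are what feed the cLFV signals discussed below.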
\section{cLFV in $e^{\pm} e^-$ collisions}
At collider energies, i.e.\ at energies of the order of the TeV, the (high-scale) SUSY seesaw with the aforementioned
assumptions for SUSY breaking can be interpreted as a minimal deviation from the cMSSM that allows for flavour mixing in
the slepton soft breaking parameters, since processes mediated by RH (s)neutrinos are greatly suppressed due to the
very high seesaw scale. We stress that, contrary to other analyses, the origin of this flavour mixing is rooted in, albeit
not uniquely determined by, neutrino oscillations.
In the SUSY seesaw model, as in the MSSM, sparticle production in $e^+ e^-$ collisions is dominated by the following 2-body processes
\begin{equation}
e^+ e^- \to \tilde \ell^+_i \tilde \ell^-_j\,,~ \chi^0_A \chi^0_B\,,~ \chi^+_A \chi^-_B\,,~ \tilde\nu^*_i \tilde\nu_j \,. \label{eq.EpEm2body}
\end{equation}
For $e^- e^-$ collisions, the available channels are considerably restricted, and we have
\begin{equation}
e^- e^- \to \tilde \ell^-_i \tilde \ell^-_j \,. \label{eq.EmEm2body}
\end{equation}
Since SUSY-seesaw induced cLFV final states are dominated by slepton decays via slepton-lepton-(EW gaugino) interactions, it is useful to
distinguish two cases with respect to the slepton/EW gaugino mass hierarchy. A dark-matter motivated scenario in which
sleptons are heavier than the EW gauginos occurs in the so-called ``Higgs funnel'' region, while the opposite hierarchy
happens in the so-called co-annihilation region. We thus define two types of points to guide the subsequent analysis
\begin{itemize}
\item F points: $m_{\tilde \ell, \tilde \nu} > m_{\chi^0_2, \chi^+_1}$ and $m_{\chi^0_2, \chi^+_1} > m_{\tilde\tau_1}$;
\item C points: $m_{\tilde \ell, \tilde \nu} < m_{\chi^0_2, \chi^+_1}$ and $m_{\chi^0_1} \lesssim m_{\tilde\tau_1}$.
\end{itemize}
In table \ref{tb.points} we give two C- and one F-type points that will be used in the numerical analysis.
\begin{table}[h]
\caption{\label{tb.points} Representative points used in the numerical analysis.}
\begin{center}
\lineup
\begin{tabular}{*{5}{l}}
\br
& C$_1$ & C$_2$ & F \cr
\mr
$m_0$ (GeV) & 150 & 200 & 750 \cr
$M_{1/2}$ (GeV) & 727.9 & 949.2 & 872.1 \cr
$\tan\beta$ & \010 & \010 & \052 \cr
$A_0$ (GeV) & \0\00 & \0\00 & \0\00 \cr
sign($\mu$) & \0\01 & \0\01 & \0\01 \cr
\br
\end{tabular}
\end{center}
\end{table}
In the model under consideration, R-parity is conserved implying that the final states we are interested in, i.e.\
those with intervening sleptons, have an even number of $\chi^0_1$ (the lightest supersymmetric particle, the LSP) which
escape the detector without interacting.
Hence, we focus our attention on cLFV in processes with di-lepton final states plus missing transverse momentum,
$\slash E_T$. The main source of background arises from weak charged currents producing neutrinos which, analogously
to the LSP, escape undetected. These are of two types: SUSY charged current backgrounds (type B), in which $\slash E_T$
contains both neutrinos and LSPs; and SM charged current backgrounds (type C), in which all $\slash E_T$ is due to neutrinos.
Here, we study the following possibilities:
\begin{eqnarray}
e^\pm e^- \to \left\{ \begin{array}{ll}
e^\pm \mu^- + 2 \chi^0_1 & \text{(A)} \\
e^\pm \mu^- + 2 \chi^0_1 + (2,4)\nu & \text{(B)} \\
e^\pm \mu^- + (2,4)\nu & \text{(C)}
\end{array} \right.
\end{eqnarray}
where (A) is the signal. In table \ref{tb.bckgs} we classify the main sources of (B) and (C) backgrounds in $e^+ e^-$
collisions, in which we allow for leptonically decaying $\tau$s, $W$s and $Z$s. Backgrounds in $e^- e^-$ collisions can
be similarly classified. However, $e^- e^- \to W^- W^-$ is very small, being suppressed by powers of $m_\nu / M_W$, and
$e^- e^- \to \tau^-\tau^-$ vanishes.
\begin{table}[h]
\caption{\label{tb.bckgs} SUSY charged current backgrounds (B) and SM charged current backgrounds (C), with
the corresponding total cross section (order of magnitude) for unpolarised beams, for
the signal $e^+ \mu^- + 2\chi^0_1$.}
\begin{center}
\lineup
\begin{tabular}{lll}
\br
\multicolumn{1}{l}{\multirow{3}{*}{\begin{tabular}{c}SUSY bckg (type B)\\ $\lesssim 5$ fb\end{tabular}}} & $0\tau$ & $e^+ \mu^- + (\bar\nu\nu,2\bar\nu\nu) + 2\chi^0_1$ \\ \cline{2-3}
\multicolumn{1}{l}{} & $\tau$+$0\nu$ & $({e^+ \tau^-}, {\tau^+ \mu^-}, \tau^+\tau^-) + 2\chi^0_1$ \\ \cline{2-3}
\multicolumn{1}{l}{} & $\tau$+$2\nu$ & $(e^+ \tau^-, \tau^+ \mu^-) + \bar\nu\nu + 2\chi^0_1$ \\ \mr
\multicolumn{1}{l}{\multirow{3}{*}{\begin{tabular}{c}SM bckg (type C)\\ $\approx 10^2$ fb\end{tabular}}} & $W$-strahlung & $W^- (e^+,\tau^+) \nu$, $W^+ (\mu^-,\tau^-) \bar\nu$ \\ \cline{2-3}
\multicolumn{1}{l}{} & $W$-pair & $W^+ W^-$ \\ \cline{2-3}
\multicolumn{1}{l}{} & {$\tau$}-pair & $\tau^+ \tau^-$ \\ \br
\end{tabular}
\end{center}
\end{table}
SM charged current backgrounds could {\it in principle} be reduced by devising kinematical cuts that rely
on the fact that neutrinos are much lighter than the LSP and that the seeding processes have different topologies.
However, such an analysis is beyond the scope of this study. In \cite{Abada:2012re} we conduct a phenomenological analysis, focusing on theoretical
estimations of the expected number of events at a LC operating at a given centre of mass energy with the possibility of polarised beams.
More precisely, our analysis is based on an algorithmic calculation of the possible production and decay modes, considering that the majority
of the events proceeds from an on-shell primary production (so that there are no interference effects between the different
contributions), with subsequent two-body cascade decays (the exception being 3-body decays of the $\tau$).
For sparticle primary production, we have considered the aforementioned channels\footnote{Total lepton number violating processes
such as $e^+ e^- \to \tilde \nu \tilde \nu$ and
$e^- e^- \to \chi^-_A \chi^-_B$ are severely suppressed by the seesaw scale.}, i.e.\ those listed in equations \eqref{eq.EpEm2body} and
\eqref{eq.EmEm2body}, while for SM primary production we have
considered those listed in table \ref{tb.bckgs}, i.e.\ $W$-strahlung for both $e^+ e^-$ and $e^- e^-$ collisions, and
$W$- and $\tau$-pair productions for $e^+ e^-$ collisions.
In figure \ref{fig.C1Fsqrts} we show the results for $e^+ e^-$ unpolarised beams for
point C$_1$ (left hand side) and F (right hand side). The cross sections for the signal (red crosses), SUSY background
(blue times) and SM background (green asterisks) are given as a function of the centre of mass energy, $\sqrt s$. We have
taken a degenerate RH neutrino spectrum with $M_R = 10^{12}$ GeV, and $\theta_{13} = 10^\circ$.
\begin{figure}[h]
\begin{minipage}{150mm}
\includegraphics[width=75mm]{figs/figC1.eps}
\hspace{2pc}\includegraphics[width=75mm]{figs/figF.eps}
\caption{\label{fig.C1Fsqrts} Cross sections for $e^+ e^- \to e^+ \mu^- + \slash E_T$
(with $\slash E_T = 2 \chi^0_1, 2\chi^0_1 + (2,4)\nu, (2,4)\nu$),
for points C$_1$ and F (left and right panels, respectively) as
a function of the centre of mass energy $\sqrt s$.
We fix $M_R = 10^{12}$ GeV, and denote the signal (A) with red crosses,
the SUSY charged current background (B) with blue times, and the SM
charged current background (C) by green asterisks. We have taken a
degenerate right-handed neutrino spectrum and set $\theta_{13} = 10^\circ$.}
\end{minipage}
\end{figure}
As can be seen, the SM background, consisting mainly of $W$-strahlung and $W$-pair production, dominates the signal by approximately
one order of magnitude for C$_1$ and three orders of magnitude for F. Moreover, the SUSY background dominates (is comparable to)
the signal in F (C$_1$). To understand these two observations, notice that the principal decay modes of sleptons and EW gauginos
in F-type points are given by
\begin{eqnarray}
&& (\tilde \ell^-,\tilde\nu^*) \to \chi^0_2 (\ell^-,\bar\nu)\,,~ \chi^-_1 (\nu, \ell^+)\,,~ (\ell^-,\bar\nu) \chi^0_1 \,, \\
&& \chi^-_1 \to \tilde\tau^-_1 \bar\nu, W^- \chi^0_1 \,, \\
&& \chi^0_2 \to \tilde\tau^-_1 \tau^+, Z \chi^0_1 \,,
\end{eqnarray}
while for C-type points we have
\begin{eqnarray}
&& \chi^-_1 \to \tilde \ell^- \bar\nu\,,~ \tilde\nu^* l^- \,, \\
&& \chi^0_2 \to \tilde \ell^\pm l^\mp\,,~ \tilde \nu \bar\nu\,,~ \tilde \nu^* \nu \,, \\
&& (\tilde \ell^-,\tilde \nu) \to (\ell^-, \nu) \chi^0_1 \,.
\end{eqnarray}
Thus, solely from the width of the decays we expect $\Gamma_{\text{total}}(\tilde \ell)_F > \Gamma_{\text{total}}(\tilde \ell)_C$,
which, for the same amount of flavour violation, implies that the expected signal cross sections for F points should be lower than for C points.
In fact, we find that typically $\left( \text{BR($\tilde e_L \to e \chi^0_1$)}_C \right)^2 \approx 10 \times \left( \text{BR($\tilde e_L \to e \chi^0_1$)}_F \right)^2$.
Additionally, we observe that in C-type points, by taking a centre of mass energy above the $\tilde \ell_L \tilde \ell_R$ production
threshold and below the production threshold of both $\chi^+_1 \chi^-_1$ and $\chi^0_2 \chi^0_2$, it is possible to evade the largest
contribution from SUSY charged current backgrounds. For C$_1$ this happens above $\sqrt s \approx 800$ GeV and below $\sqrt s \approx 1200$ GeV,
as can be inferred from figure \ref{fig.C1Fsqrts}.
\subsection{Beam polarisation effect}
Since $W$-pair production and $W$-strahlung backgrounds dominate the signal in $e^+ e^-$
collisions, it is interesting to consider how beam polarisation can be exploited to suppress their
contribution without compromising the signal. We will do this by considering limiting cases, that is,
$100$\% polarisations.
Since slepton flavour mixing occurs predominantly in the LL slepton sector, i.e.\ via
decays of mostly left-handed slepton mass eigenstates, to avoid suppressing the $e^+ \mu^-$ signal
we require the $e^-$ beam to be $100\%$ left-polarised, while no constraint is placed on the positron
beam.
Constraints on the positron beam arise by requiring the suppression of both $W$-pair production and
$W$-strahlung production cross sections. In $W$-pair production, the s-channel is strongly dominated by
vector boson interactions which can be suppressed by taking equally polarised beams. The t-channel
is suppressed either by $e^+_L$ or $e^-_R$ beams. Therefore, we can suppress both channels by left-polarising
the positron beam. Regarding $W$-strahlung, a maximal suppression would require $e^+_L e^-_R$ which would
strongly suppress the signal. Nevertheless, the choice $e^+_L e^-_L$ is preferable to the $e^+_R e^-_L$ one.
In figure \ref{fig.C1LL} we show the result of left-polarising both positron and electron beams for
point C$_1$, with parameters and line colour code as in figure \ref{fig.C1Fsqrts}. Clearly,
the signal is enhanced by a factor of $\approx 4$ near the ($\tilde e^+_R \tilde e^-_L$) production
threshold, while at higher energies the enhancement becomes smaller. The low energy tail of the SUSY background
``disappears'', as is understandable from the fact that such processes proceed via $\tilde \tau^+_1 \tilde \tau^-_1$
primary production. Moreover, both $\chi^+_1 \chi^-_1$ and $\chi^0_2 \chi^0_2$ primary production are
greatly suppressed. As expected, the SM background is also suppressed, becoming comparable to the signal.
\begin{figure}[h]
\begin{center}
\includegraphics[width=75mm]{figs/figC1LL.eps}
\end{center}
\caption{\label{fig.C1LL} Cross section for $e^+ e^- \to e^+ \mu^- + \slash E_T$ (with $\slash E_T = 2\chi^0_1,2\chi^0_1+(2,4)\nu,
(2,4)\nu$), for point C$_1$, as a function of the centre of mass energy $\sqrt s$, for $100$\% LL
polarised beams. We have taken a degenerate right-handed neutrino spectrum with $M_R = 10^{12}$ GeV, and set
$\theta_{13} = 10^\circ$. Line and colour code as in figure \ref{fig.C1Fsqrts}.}
\end{figure}
\subsection{Probing the SUSY seesaw}
We now discuss how the observation of a signal would allow us to probe the underlying mechanism
of slepton flavour mixing. If compatible
with seesaw predictions, an observation would indeed strengthen the seesaw hypothesis, since a single mechanism would
account for several observables: the smallness of neutrino masses, neutrino mixing, and
collider cLFV. Moreover, low energy observables such as CR($\mu$-$e$, N) and
BR($\mu \to e \gamma$) would serve to further strengthen the hypothesis, provided that
the predictions would be compatible with observations/exclusion limits.
In figure \ref{fig.nevents} we display the expected number of $e^- \mu^- + 2\chi^0_1$ events
from $80$\% LL polarised electron beams, as a function of the seesaw scale, $M_R$, for point
C$_2$. For illustrative purposes we have chosen $\sqrt s = 2$ TeV.
\begin{figure}[h]
\begin{center}
\includegraphics[width=75mm]{figs/nevents.eps}
\end{center}
\caption{\label{fig.nevents} Number of events for $e^- e^- \to e^- \mu^- + 2\chi^0_1$ for point
C$_2$ as a function of the seesaw scale $M_R$, for $(P_{e^-},P_{e^-})=(-80\%,-80\%)$
polarised beams. We fix $\sqrt s = 2$ TeV, and we have taken a degenerate
right-handed neutrino spectrum, with $\theta_{13}=10^\circ$. Vertical lines denote the $M_R$-corresponding
value of BR($\mu \to e \gamma$) while the (grey) shaded region represents values of $M_R$
already excluded by the present experimental bound on BR($\mu \to e \gamma$).}
\end{figure}
Due to the high number of expected events, if no $e^- \mu^-$ signal event is observed
at an LC, then a high-scale SUSY seesaw is clearly disfavoured. Supposing that events
are indeed observed, two possibilities arise. If the observed number of events is not accommodated
by predictions, then it is likely that either the SUSY seesaw is not the unique source of LFV or
that another mechanism for neutrino mass generation is at work. If the observed
number of events can be accommodated by predictions, as illustrated in figure \ref{fig.nevents} for
point C$_2$, then one can constrain (or even hint at) the seesaw scale. For example, if SUSY is realised
with a C$_2$-like spectrum and more than $10^5$ signal events are observed for a total integrated
luminosity of approximately $3$ ab$^{-1}$, then the seesaw scale, $M_R$, should be above $10^{12}$ GeV.
Moreover, evading the exclusion limits on $\mu\to e\gamma$ sets an upper bound on this scale, so that
we would be led to $10^{12}$ GeV $\lesssim$ $M_R$ $\lesssim$ $10^{14}$ GeV. If we now suppose that $\mu\to e\gamma$
is observed at MEG, e.g.\ with a branching ratio of the order of $10^{-12}$, we would expect
a number of $e^-\mu^-$ signal events below $10^{5}$. This incompatibility could be due to (unaccounted for)
destructive interferences in collider processes, or additional sources of LFV that only contribute to low-energy
observables.
\subsection{$\mu^-\mu^-$ golden channel}
Same-sign di-muon final states may well be a ``golden channel'' for the
detection of cLFV at an LC. From an experimental point of view, the efficiency of
the muon detectors can be fully exploited when looking for di-muon
final states. On the theoretical side, higher signal significances are expected,
in particular due to very suppressed SM backgrounds.
In figure \ref{fig.mumu} we display the $e^- e^- \to \mu^-\mu^- + \slash E_T$ cross section
for the signal and the SUSY background, considering unpolarised beams, as a function of the
centre of mass energy.
\begin{figure}[h]
\begin{center}
\includegraphics[width=75mm]{figs/ee_mumu.eps}
\end{center}
\caption{\label{fig.mumu} Cross section for $e^- e^- \to \mu^- \mu^- + \slash E_T$ (with $\slash E_T = 2\chi^0_1,2\chi^0_1+(2,4)\nu$),
for point C$_1$ as a function of the centre of mass energy, $\sqrt s$, for the case of
unpolarised beams. The signal is denoted by red crosses and the SUSY background by
green asterisks. We have taken a degenerate right-handed neutrino spectrum with $M_R = 10^{12}$ GeV
and set $\theta_{13} = 10^\circ$.}
\end{figure}
The SM background, not displayed in figure \ref{fig.mumu}, is dominated by double-$W$-strahlung
production, i.e.\ $e^- e^- \to W^- W^- \nu \nu \to \mu^-\mu^- 2(\bar\nu\nu)$, whose cross section is
of the order of $1$ fb. Therefore, we confirm that the $\mu^-\mu^-$ signal is indeed much cleaner than
$e^-\mu^-$ (and $e^+\mu^-$ in $e^+e^-$ collisions).
\section{Concluding remarks}
A high-energy lepton collider offers an enormous potential for cLFV discovery.
Beam polarisation can be
instrumental in maximising the signal significance, rendering the signal ``visible''
in a large part of the high-scale SUSY seesaw parameter space, even without dedicated
cuts, which could improve the observation prospects even further. We have also pointed out
that cLFV discovery at an LC, complemented with low-energy LFV observables, could substantiate
or disfavour the high-scale SUSY seesaw.
Finally, we commented on the truly remarkable channel for cLFV discovery: $e^- e^- \to \mu^- \mu^- + 2\chi^0_1$.
\ack
This work has been done partly under the ANR project CPV-LFV-LHC {NT09-508531}.
The work of A.\ J.\ R.\ F.\ has been supported by {\it Funda\c c\~ao para a Ci\^encia e a
Tecnologia} through the fellowship SFRH/BD/64666/2009. A.\ A.\, A.\ J.\ R.\ F.\ and A.\ M.\ T.\
acknowledge partial support from the European Union FP7 ITN INVISIBLES (Marie Curie Actions,
PITN-GA-2011-289442). A.\ J.\ R.\ F.\ and J.\ C.\ R.\ also acknowledge the financial support from
the EU Network grant UNILHC PITN-GA-2009-237920 and from {\it Funda\c{c}\~ao para a Ci\^encia
e a Tecnologia} project PEst-OE/FIS/UI0777/2011 and grants CERN/FP/116328/2010 and PTDC/FIS/102120/2008.
\section*{References}
\section{Supplemental Material}
\begin{table*}[h!]
\caption{The maximum Pearson correlation coefficient, obtained for {\it minimal sets}, between the individual EOS components at densities $\rho$ and different star properties. The maximum correlation coefficient $r_{x,y}$ at a given $\rho$ in fm$^{-3}$ is shown for $x=$ \{the $\beta$-equilibrium pressure ($P_\beta$), the SNM pressure ($P_{\rm SNM}$), the nuclear symmetry energy $S(\rho)$, and the slope of the symmetry energy $L(\rho)$\} and $y=$ \{the NS maximum mass $M_{\rm max}$, the radii $R_{1.4}$ and $R_{2.075}$ for 1.4 and 2.075 $M_\odot$ NS, respectively, the dimensionless tidal deformability $\Lambda_{1.4}$ for a 1.4 $M_\odot$ NS, and the minimum NS mass at which nucleonic direct Urca occurs, $M_{\rm dUrca}$\}.}
\setlength{\tabcolsep}{10.5pt}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{ccccccccccc}
\hline \hline
\multirow{3}{*}{x} & \multicolumn{10}{c}{y} \\
& \multicolumn{2}{c}{$M_{\rm max}$} & \multicolumn{2}{c}{$R_{1.4}$} & \multicolumn{2}{c}{$R_{2.075}$} & \multicolumn{2}{c}{$\Lambda_{1.4}$} & \multicolumn{2}{c}{$M_{\rm dUrca}$} \\
& $\rho$ & $r_{x,y}$ & $\rho$ & $r_{x,y}$ & $\rho$ & $r_{x,y}$ & $\rho$ & $r_{x,y}$ & $\rho$ & $r_{x,y}$ \\ \hline
$P_{\beta}$ & 0.662 & 0.99 & 0.255 & 0.96 & 0.710 & 0.64 & 0.279 & 0.99 & 0.136 & -0.74 \\
$P_{\rm SNM}$ & 0.710 & 0.96 & 0.303 & 0.74 & 0.710 & 0.62 & 0.327 & 0.85 & \multicolumn{2}{c}{NA} \\
$S(\rho)$ & \multicolumn{2}{c}{NA} & 0.303 & 0.73 & \multicolumn{2}{c}{NA} & 0.327 & 0.54 & 0.560 & -0.92 \\
$L(\rho)$ & \multicolumn{2}{c}{NA} & 0.231 & 0.67 & \multicolumn{2}{c}{NA} & 0.231 & 0.54 & 0.375 & -0.90 \\ \hline
\end{tabular}
\end{table*}
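The quantity reported in the table — the density at which an EOS component correlates most strongly with a star property across the posterior ensemble — can be computed as sketched below. The ensemble here is a toy stand-in (a one-parameter family of pressure curves plus noise), not the actual posterior samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n_eos = 500
densities = np.linspace(0.1, 1.0, 46)      # fm^-3 grid (illustrative)

# mock ensemble: one pressure curve P(rho) and one NS property per EOS sample
stiffness = rng.uniform(0.5, 1.5, n_eos)
P_beta = np.outer(stiffness, densities**2.5) * 100.0   # toy beta-eq. pressure
M_max  = 2.0 + 0.3 * stiffness + 0.02 * rng.standard_normal(n_eos)

def max_correlation(x_curves, y):
    """Pearson r between x(rho) and y at each grid density;
    return (rho*, r*) where |r| is maximal."""
    r = np.array([np.corrcoef(x_curves[:, k], y)[0, 1]
                  for k in range(x_curves.shape[1])])
    k = np.argmax(np.abs(r))
    return densities[k], r[k]

rho_star, r_star = max_correlation(P_beta, M_max)
print(rho_star, r_star)
```

Repeating the scan for each pair $(x, y)$ fills in one cell of the table above.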
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth,angle=0]{NS_mdurca.png}
\caption{Corner plots for the marginalized posterior distributions (PDs) of neutron star properties, namely gravitational mass $M_{\rm max}$, baryonic mass $M_{\rm B, max}$, the square of central speed-of-sound $c_s^2$, the central baryonic density $\rho_{c}$, the radius $R_{1.4}$ and the dimensionless tidal deformability $\Lambda_{1.4}$ for 1.4 $M_\odot$ NS
for the model with $M_{\rm dUrca}=1.4~M_\odot$ (dark red), $1.6~M_\odot$ (salmon) and $1.8~M_\odot$ (violet) constraints, considered in Sets 1, 2 and 3, respectively. The vertical lines indicate the 68\% CI minimum, the median and the 68\% CI maximum, and the different tonalities from dark to light indicate, respectively, the 1$\sigma$, 2$\sigma$, and 3$\sigma$ CI.}
\end{figure*}
\begin{table*}[]
\caption{The median values and the associated 90\% CI of the NS properties: the gravitational mass $M_{\rm max}$, baryonic mass $M_{\rm B, max}$, radius $R_{\rm max}$, central energy density $\varepsilon_c$, central baryon number density $\rho_c$ and square of the central speed of sound $c_s^2$ of the maximum-mass NS, as well as the radius $R_M$ and the dimensionless tidal deformability $\Lambda_M$ for a NS of mass $M$ (in solar masses). {The $\tilde \Lambda_{q=1}$ refers to GW170817 and corresponds to $M=1.36\,M_{\odot}$.}}
\setlength{\tabcolsep}{8.5pt}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{ccccccccccc}
\hline \hline
\multirow{3}{*}{Quantity} & \multirow{3}{*}{Units} & \multicolumn{3}{c}{$M_{\rm dUrca}=1.4 ~M_\odot$} & \multicolumn{3}{c}{$M_{\rm dUrca}=1.6 ~M_\odot$} & \multicolumn{3}{c}{$M_{\rm dUrca}=1.8 ~M_\odot$} \\ \cline{3-11}
& & \multirow{2}{*}{median} & \multicolumn{2}{c}{90\% CI} & \multirow{2}{*}{median} & \multicolumn{2}{c}{90\% CI} & \multirow{2}{*}{median} & \multicolumn{2}{c}{90\% CI} \\
& & & min & max & & min & max & & min & max \\ \hline
$M_{\rm max}$ & M $_\odot$ & $2.119$ & $2.017$ & $2.304$ & $2.120$ & $2.017$ & $2.310$ & $2.118$ & $2.015$ & $2.309$ \\
$M_{\rm B, max}$ & M $_\odot$ & $2.487$ & $2.350$ & $2.735$ & $2.492$ & $2.354$ & $2.746$ & $2.492$ & $2.357$ & $2.748$ \\
$c_{s}^2$ & $c^2$ & $0.59$ & $0.48$ & $0.66$ & $0.59$ & $0.47$ & $0.66$ & $0.59$ & $0.47$ & $0.66$ \\
$\rho_c$ & fm$^{-3}$ & $0.927$ & $0.872$ & $0.947$ & $0.927$ & $0.868$ & $0.949$ & $0.930$ & $0.870$ & $0.953$ \\
$\varepsilon_{c}$ & MeV fm$^{-3}$ & $1173$ & $1130$ & $1173$ & $1173$ & $1119$ & $1173$ & $1173$ & $1130$ & $1173$ \\
$R_{\rm max}$ & \multirow{8}{*}{km} & $11.30$ & $10.83$ & $11.86$ & $11.28$ & $10.80$ & $11.85$ & $11.24$ & $10.74$ & $11.83$ \\ \\
$R_{0.8}$ & & $13.49$ & $13.08$ & $13.91$ & $13.33$ & $12.91$ & $13.74$ & $13.18$ & $12.71$ & $13.61$ \\
$R_{1.0}$ & & $13.33$ & $12.92$ & $13.76$ & $13.19$ & $12.77$ & $13.61$ & $13.06$ & $12.60$ & $13.50$ \\
$R_{1.2}$ & & $13.21$ & $12.78$ & $13.68$ & $13.10$ & $12.66$ & $13.55$ & $12.98$ & $12.51$ & $13.46$ \\
$R_{1.4}$ & & $13.09$ & $12.63$ & $13.61$ & $13.00$ & $12.53$ & $13.51$ & $12.90$ & $12.41$ & $13.43$ \\
$R_{1.6}$ & & $12.93$ & $12.42$ & $13.52$ & $12.85$ & $12.34$ & $13.44$ & $12.77$ & $12.24$ & $13.37$ \\
$R_{1.8}$ & & $12.69$ & $12.10$ & $13.38$ & $12.63$ & $12.04$ & $13.32$ & $12.56$ & $11.95$ & $13.26$ \\
$R_{2.075}$ & & $12.21$ & $11.37$ & $13.09$ & $12.18$ & $11.34$ & $13.07$ & $12.14$ & $11.28$ & $13.04$ \\ \\
$\Lambda_{0.8}$ & \multirow{8}{*}{-} & $12044$ & $9994$ & $14515$ & $11384$ & $9376$ & $13741$ & $10777$ & $8789$ & $13130$ \\
$\Lambda_{1.0}$ & & $3866$ & $3171$ & $4772$ & $3693$ & $3012$ & $4556$ & $3520$ & $2840$ & $4382$ \\
$\Lambda_{1.2}$ & & $1335$ & $1074$ & $1701$ & $1287$ & $1029$ & $1638$ & $1237$ & $978$ & $1586$ \\
$\Lambda_{1.4}$ & & $543$ & $425$ & $717$ & $527$ & $409$ & $696$ & $509$ & $391$ & $678$ \\
$\Lambda_{1.6}$ & & $214$ & $160$ & $299$ & $209$ & $155$ & $293$ & $203$ & $149$ & $287$ \\
$\Lambda_{1.8}$ & & $88$ & $61$ & $135$ & $87$ & $60$ & $133$ & $85$ & $58$ & $131$ \\
$\Lambda_{2.075}$ & & $24$ & $12$ & $44$ & $23$ & $12$ & $44$ & $23$ & $12$ & $44$ \\
$\tilde \Lambda_{q=1}$ & & $665$ & $524$ & $870$ & $645$ & $504$ & $843$ & $622$ & $481$ & $820$ \\ \hline
\end{tabular}
\end{table*}
\section{Introduction}
\IEEEPARstart{W}{ind} power generation (WPG) introduces variability and uncertainty in operational planning. As defined in \cite{ela_2012}, \textit{variability} of WPG is the random fluctuation of wind speed caused by physical processes in the atmosphere, while \textit{uncertainty} of WPG results from wind forecast errors. To account for variability and uncertainty, approaches to robust unit commitment (RUC) \cite{guan_2014}, chance constrained optimal power flow (CC-OPF) \cite{bienstock_2009} and distributionally robust CC-OPF (RCC-OPF) \cite{bienstock_2009},\cite{new_submission} have been formulated and tested. However, the performance of these models depends on the accuracy of the parameters of the probability distributions that define the uncertainty sets used; therefore, the uncertainty in distribution parameters must be accounted for. \textcolor{black}{This letter makes two contributions: i) we use a data-driven analysis to relate intra-hour wind speed variability to the hourly-average wind speed and ii) using this relationship we construct uncertainty sets that can be interpreted in terms of variability and uncertainty of WPG.}
Bienstock et al. \cite{bienstock_2009} show that the CC-OPF formulation with Gaussian-distributed deviations of WPG and precisely known mean and variance can be extended to the RCC-OPF formulation, where the parameters of the Gaussian deviations (both mean and variance) fall within uncertainty sets. This RCC-OPF is implemented and tested on a large-scale system in \cite{new_submission}. \textcolor{black}{In this letter, we interpret the set for the mean in terms of the wind power uncertainty, while the set for the variance is interpreted in terms of the wind power variability. This approach differs from the uncertainty sets on a random variable in \cite{guan_2014} by introducing uncertainty sets on the distribution parameters describing the random variable. We derive these sets with a data-driven approach such that the resulting RCC-OPF remains tractable \cite{bienstock_2009, new_submission}, yielding a practical approach for modeling WPG. Although the RCC-OPF model is nominally more conservative than the CC-OPF model, its solution incurs a lower operating cost when tested against actual realizations of WPG \cite{new_submission}.}
\vspace{-5pt}
\section{Methodology}
\subsubsection{Data}
\label{sec::data}
{We use historical wind speed measurements at 5-minute resolution from the Goodnoe meteorological station in the Bonneville Power Administration (BPA) system \cite{bpa} and one-hour resolution wind speed forecasts produced by the NOAA Rapid Refresh numerical weather prediction model \cite{noaa} for the same location. The historical measurements and forecasts are detrended by using data from the same calendar season (December--February) and range of hours (00:00--04:00 AM). }
\subsubsection{Wind Speed Variability}
\label{sec::variability}
\begin{figure}[b]
\vspace{-20pt}
\begin{tabular}{c}
\subfloat{\includegraphics[width=0.95\linewidth]{figure_1a_new.eps}}\\
\subfloat{\includegraphics[width=0.95\linewidth]{figure_1b_new_rev2.eps}}\\
\end{tabular}
\caption{a) Empirical distributions $f^E(w_{\tau} | \mu^*)$ (histograms) and their Gaussian best fits for a few $\mu^*$. b) Empirical relationship between $\sigma^*$ and $\mu^*$ and its linear fit $\sigma^*(\mu^*)$. }
\label{fig:BPApdfs}
\end{figure}
For any subinterval $\tau$ of an hour-long interval $t$, the wind speed $w_{\tau}$ can be written as $w_{\tau}= \mu_t + \epsilon_{\tau}$ \cite{bienstock_2009}, where $\mu_t$ and $\epsilon_{\tau}$ are the hourly-average wind speed and the intra-hour, zero-mean variability around $\mu_t$ \cite{ela_2012}, respectively. \textcolor{black}{To parametrize $w_{\tau}$, we calculate the average wind speed ($\mu_t$) for each hourly interval within the studied period using the 5-minute resolution Goodnoe data \cite{bpa}. Next, we bin the hourly intervals based on their hourly-average wind speed $\mu_t$. This binning covers the range from 0 m/s to 25 m/s (the typical operating range of wind turbines) with a bin width of 1 m/s. For each bin we obtain a histogram that represents an empirical intra-hour wind speed distribution $f^E(w_{\tau}|\mu_t)$ conditioned on the hourly-average wind speed ($\mu_t$). Each histogram is then fit to a Gaussian distribution, yielding estimated mean ($\mu^*$) and standard deviation ($\sigma^*$) values for every bin. Both the histograms and their Gaussian fits are shown in Fig.~\ref{fig:BPApdfs}a) for several values of $\mu_t$. Figure~\ref{fig:BPApdfs}b) shows that $\sigma^*$ scales linearly with $\mu^*$ as $\sigma^*(\mu^*) = 0.231 + 0.197 \cdot \mu^*$ for $\mu^* \in \left[0, 25\right]$ m/s, which is consistent with velocity distributions for high Reynolds number atmospheric flows \cite{kaimal}.} If there is no error in a given hourly-average wind speed forecast $\mu_t^f$, the wind speed variability is estimated using the linear mapping $\sigma^*(\mu^*)$ shown in Fig.~\ref{fig:BPApdfs}b). The resulting distribution is then given as $N [w_{\tau}; \mu_t^f, (\sigma^*(\mu_t^f))^2 ]$, and can be used in \cite{guan_2014} and \cite{bienstock_2009} for uncertainty sets accounting for wind speed variability.
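The binning and per-bin Gaussian fitting described above can be sketched in a few lines of Python. The synthetic data, the 12-samples-per-hour layout, and the helper names are illustrative assumptions, not the processing code used for the BPA data.

```python
import numpy as np

def bin_and_fit(wind_5min, bin_width=1.0, w_max=25.0):
    """Group 5-minute wind speeds by their hourly average and estimate a
    Gaussian (mean, std) for each 1 m/s bin of hourly-average speed."""
    hours = np.asarray(wind_5min).reshape(-1, 12)   # 12 samples per hour
    mu_t = hours.mean(axis=1)                       # hourly-average speed
    edges = np.arange(0.0, w_max + bin_width, bin_width)
    stats = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (mu_t >= lo) & (mu_t < hi)
        if mask.sum() < 2:
            continue                                # too few hours in bin
        samples = hours[mask].ravel()               # all 5-min speeds in bin
        stats[(lo, hi)] = (samples.mean(), samples.std(ddof=1))
    return stats

def linear_sigma_fit(stats):
    """Least-squares line sigma*(mu*) = a + b*mu*, as in the linear fit of
    the variability-vs-mean relationship."""
    mu = np.array([m for m, _ in stats.values()])
    sg = np.array([s for _, s in stats.values()])
    b, a = np.polyfit(mu, sg, 1)                    # slope first, then intercept
    return a, b
```

Note that `np.polyfit` returns the highest-degree coefficient first, so the slope is unpacked before the intercept.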
\subsubsection{Wind Speed Uncertainty and Variability}
\label{sec::uncertainty}
{Wind forecast errors cause $\mu_t\neq \mu_t^f$. The wind forecast error $e_t$ is calculated from the NOAA \cite{noaa} data as $e_{t}(\Delta T)=\mu_t-\mu_t^f(\Delta T)$, where $\mu_t^f(\Delta T)$ is the forecast for hour $t$ made $\Delta T$ hours in advance. The empirical distribution $f^E \left(e_{t};\Delta T \right)$ for $\Delta T=1$ hour is shown in Fig.~\ref{fig:NOAApdds}a). We propose to represent $f^E(\cdot)$ with a generalized Gaussian distribution, $f^G(\cdot)$:}
\begin{equation}
\label{eq1}
f^G(e_t;\Delta T,\mu^-,\mu^+)=\frac{ \int_{\mu^-}^{\mu^+}d\mu N[e_t;\mu,\sigma^*(\mu)] }{(\mu^+ - \mu^-)},
\end{equation}
{\noindent where $\sigma^*(\mu)$ is the fit from Fig.~\ref{fig:BPApdfs}b) and the dependence of $\mu^+$ and $\mu^-$ on $\Delta T$ is suppressed. \textcolor{black}{The advantage of $f^G(\cdot)$ over a single Gaussian distribution is that it improves the goodness-of-fit to empirical data \cite{hahn_1967} by encapsulating a linear superposition of Gaussian distributions over a range of means $[\mu^-, \mu^+]$ that represents the wind uncertainty. Furthermore, \cite{hahn_1967} explains that other single non-Gaussian distributions can be modelled using $f^G(\cdot)$.} The best fit $\mu^-$ and $\mu^+$ are computed by solving the optimization problem:}
\begin{equation}
\label{eq2}
\arg\min_{\!\!\!\!\!\!\mu^-, \mu^+} \int [f^G(e_t;\Delta T,\mu^-, \mu^+) -f^E(e_t;\Delta T)]^2 de_t,
\end{equation}
which minimizes the mean square difference between $f^E(\cdot)$ and $f^G(\cdot)$ and ensures a better fit to the historical data than a single Gaussian distribution, as shown in Fig.~\ref{fig:NOAApdds}a). We interpret the range $[\mu_t^{f-},\mu_t^{f+}]$, where $\mu_t^{f-}=\mu^f_t+\mu^-$ and $\mu_t^{f+}=\mu^f_t+\mu^+$, as the bounds of the uncertainty set for the mean wind speed. Fig.~\ref{fig:NOAApdds}b) displays $\mu_t^{f-}$ and $\mu_t^{f+}$ for $\mu^f_{t} = 10$ m/s for different $\Delta T$. Using $\sigma^*(\mu^*)$ from Fig.~\ref{fig:BPApdfs}b), we compute the bounds on $\sigma^*$ as $\sigma^{*} (\mu_t^{f-})$ and $\sigma^{*} (\mu_t^{f+})$, which are shown in Fig.~\ref{fig:NOAApdds}c). The ranges $[\mu_t^{f-},\mu_t^{f+}]$ and $[\sigma^{*} (\mu_t^{f-}),\sigma^{*} (\mu_t^{f+})]$, if converted to wind generation as explained below, can be used in the RCC-OPF formulation from \cite{bienstock_2009}, \cite{new_submission}.
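A minimal numerical version of Eqs.~(\ref{eq1})--(\ref{eq2}) can be sketched as follows. The uniform $\mu$-grid average, the coarse grid search over candidate bounds, and the clamping of $\sigma^*(\mu)$ at $\mu=0$ are simplifying assumptions, not the fitting procedure used to produce the figures.

```python
import numpy as np

def sigma_star(mu):
    """Linear variability fit; clamped at mu = 0 (an assumption)."""
    return 0.231 + 0.197 * np.maximum(mu, 0.0)

def normal_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

def f_G(e, mu_minus, mu_plus, n=201):
    """Eq. (1): uniform average of N[e; mu, sigma*(mu)] over [mu-, mu+]."""
    mus = np.linspace(mu_minus, mu_plus, n)
    pdfs = normal_pdf(np.asarray(e, dtype=float)[:, None],
                      mus[None, :], sigma_star(mus)[None, :])
    return pdfs.mean(axis=1)

def fit_bounds(e_grid, f_emp, candidates):
    """Eq. (2): grid search for (mu-, mu+) minimising the L2 misfit."""
    de = e_grid[1] - e_grid[0]
    best, best_err = None, np.inf
    for lo in candidates:
        for hi in candidates:
            if hi <= lo:
                continue
            err = np.sum((f_G(e_grid, lo, hi) - f_emp) ** 2) * de
            if err < best_err:
                best, best_err = (lo, hi), err
    return best
```

Because each component of the mixture is a normalized Gaussian, $f^G(\cdot)$ remains a proper density for any bounds.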
\begin{figure}[t]
\begin{tabular}{c}
\subfloat{\includegraphics[width=0.95\linewidth]{fig_3_new_new_new.eps}}\\
\subfloat{\includegraphics[width=0.95\linewidth]{figure_4speed.eps}}\\
\end{tabular}
\caption{a) Empirical distribution $f^E(e_t;\Delta T)$ (histogram), the best fit single Gaussian distribution (black) and the best fit distribution $f^G(\cdot)$ (red) from Eq.~(\ref{eq1}) for $\Delta T$= 1 h. b) $\mu_t^{f-}$ and $\mu_t^{f+}$ versus $\Delta T$ for $\mu^f_{t} = 10$ m/s. c) $\sigma^*(\mu_t^{f-})$ and $\sigma^*(\mu_t^{f+})$ for the data in b).}
\label{fig:NOAApdds}
\end{figure}
\begin{figure}[t]
\vspace{-10pt}
\begin{tabular}{c}
\subfloat{\includegraphics[width=\linewidth]{figure_5_yury.eps}} \\
\subfloat{\includegraphics[width=\linewidth]{figure_6.eps}}\\
\end{tabular}
\caption{a) Typical wind turbine power curve $p(\mu)$ with four nominal operating regions \cite{ge}. b) Ranges on the mean WPG (i.e., forecast error) computed using Eq.~(\ref{eq:power_interval}) for the data in Fig.~\ref{fig:NOAApdds}c). c) Ranges on the WPG standard deviation (variability) computed using Eqs.~(\ref{eq:uset_sigma-}) and (\ref{eq:uset_sigma+}) for the data in b). }
\label{fig:ranges}
\vspace{-5pt}
\end{figure}
\subsubsection{Conversion to Wind Power}
\label{sec::conversion}
We illustrate the conversion to wind power using the single wind turbine power curve $p(\mu)$ shown in Fig.~\ref{fig:ranges}a), \cite{ge}. This procedure can be generalized to multiple turbines by using an aggregated wind power curve \cite{hayes_2011}. The conversion of the range $[\mu_t^{f-},\mu_t^{f+}]$ is given by:
\begin{align}
[\mu_t^{f-},\mu_t^{f+}]\rightarrow [p(\mu_t^{f-}),p(\mu_t^{f+})].\label{eq:power_interval}
\end{align}
Figure~\ref{fig:ranges}b) shows $p(\mu_t^{f-})$ and $p(\mu_t^{f+})$ corresponding to $\mu_t^{f-}$ and $\mu_t^{f+}$, respectively, from Fig.~\ref{fig:NOAApdds}b). In Fig.~\ref{fig:ranges}b), the growth of $p(\mu_t^{f+})$ at larger $\Delta T$ is eventually clipped by the turbine curve $p(\mu)$ as $\mu_t^{f+}$ enters Region III. If the entire range $[\mu_t^{f-},\mu_t^{f+}]$ is in Region III, then the range $[p(\mu_t^{f-}),p(\mu_t^{f+})]$ collapses to zero width around the maximum output.
The conversion of the range $[\sigma^{*} (\mu_t^{f-}),\sigma^{*} (\mu_t^{f+})]$ is shaped by the RCC-OPF formulation in \cite{new_submission}. Note that $\sigma_p$ can be obtained $\forall p (\mu_t) \in [p(\mu_t^{f-}),p(\mu_t^{f+})]$ using the relation $\sigma^*(\mu)$ from Fig.~\ref{fig:BPApdfs}b) and the slope $s$ of the turbine curve as $\sigma_p=s (\mu_t) \cdot \sigma^*(\mu_t)$. However, using the wind turbine power curve from Fig.~\ref{fig:ranges}a) results in a difficult nonconvexity in distributionally robust formulations (e.g., Eqs.~(27)-(28) in \cite{new_submission}). To avoid this nonconvexity, we assume that the range on the mean value in Eq.~(\ref{eq:power_interval}) and the range on the standard deviation are independent. Therefore, the range on the standard deviation is given by:
\begin{gather}
\sigma_p^- = \min_{\mu \in [\mu_t^{f-},\mu_t^{f+}]} s(\mu) \sigma^*(\mu )\label{eq:uset_sigma-} \\
\sigma_p^+ = \max_{\mu \in [\mu_t^{f-},\mu_t^{f+}]} s(\mu ) \sigma^*(\mu ) \label{eq:uset_sigma+}
\end{gather}
Figure~\ref{fig:ranges}c) shows the conversion to the robust interval on the standard deviation of WPG for the data in Fig.~\ref{fig:NOAApdds}c).
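The conversion in Eqs.~(\ref{eq:power_interval})--(\ref{eq:uset_sigma+}) can be illustrated with a hypothetical piecewise turbine curve. The cut-in, rated, and cut-out speeds, the rated power, and the cubic Region-II ramp below are assumptions for illustration, not the GE curve cited in the letter.

```python
import numpy as np

# Hypothetical turbine parameters: cut-in 3 m/s, rated 12 m/s,
# cut-out 25 m/s, 1.5 MW rated output (assumed values).
CUT_IN, RATED, CUT_OUT, P_RATED = 3.0, 12.0, 25.0, 1.5

def p(mu):
    """Piecewise power curve: cubic ramp in Region II, flat in Region III."""
    mu = np.asarray(mu, dtype=float)
    frac = np.clip((mu - CUT_IN) / (RATED - CUT_IN), 0.0, None) ** 3
    out = np.where((mu >= CUT_IN) & (mu < RATED), P_RATED * frac, 0.0)
    return np.where((mu >= RATED) & (mu <= CUT_OUT), P_RATED, out)

def sigma_star(mu):
    return 0.231 + 0.197 * mu          # linear variability fit

def sigma_p_bounds(mu_lo, mu_hi, n=501):
    """Eqs. (4)-(5): extremize s(mu)*sigma*(mu) over [mu_lo, mu_hi], with
    the slope s(mu) taken as a numerical derivative of p(mu)."""
    mus = np.linspace(mu_lo, mu_hi, n)
    s = np.gradient(p(mus), mus)       # slope of the turbine curve
    sp = s * sigma_star(mus)
    return float(sp.min()), float(sp.max())
```

When the whole speed interval lies in Region III the slope vanishes, so both the mean-power interval and the standard-deviation interval collapse, mirroring the clipping behavior described above.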
\vspace{-10pt}
\section{Conclusion} We have presented a data-driven method to develop robust intervals for distribution parameters, which preserves the physical relationship between instantaneous and hourly-average wind speed and power. This method is suitable for constructing uncertainty sets in the RUC \cite{guan_2014} and RCC-OPF \cite{bienstock_2009}, \cite{new_submission}, which can be interpreted in terms of variability and uncertainty of WPG. \textcolor{black}{The case study in \cite{new_submission} shows that the RCC-OPF model with these uncertainty sets outperforms several benchmarks in terms of several cost and reliability metrics.}
\vspace{-10pt}
\section*{Acknowledgment}
This work was supported by the Advanced Grid Modeling Program in the U.S. Department of Energy Office of Electricity under Contract No. DE-AC52-06NA25396.
\vspace{-10pt}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{INTRODUCTION}
In the past decades, different aspects of Shape Memory Alloys (SMA) have been
investigated intensively by mathematicians, physicists, and engineers \cite{Birman1997}.
This interest in a larger scientific community
is due to SMA unique properties of being able to convert thermal energy into mechanical and
vice versa. These properties are promising for many applications of SMAs, including mechanical and
control engineering, biomedicine, communication, robotics to name just a few \cite{Birman1997}.
Motivated by application developments of advanced composite materials involving SMAs,
nonlinear wave propagations in these materials have
been investigated as a stepping stone for the prediction and understanding of the dynamic
response of the composite under various dynamic loadings \cite{Abeyaratne1994,Berezovski2005,Falk1984}.
Compared to wave propagations in conventional solid materials, impact-induced wave
propagations in materials such as SMAs require delicate treatment, as additional
difficulties arise due to phase transformations \cite{Abeyaratne1994,Berezovski2005,Falk1984}.
In general, impact loadings of these materials will cause nonlinear
thermo-mechanical waves which are similar to those of other
thermo-elastic materials under impact loadings. The difference of
wave propagation in conventional solids and those in the
ferroelastic materials such as SMAs is that the first order
martensitic transformation may be induced in the latter case. The
transformation is reversible, and its native nonlinearity and
hysteresis will have a substantial influence on the wave propagation
and will make the wave propagation patterns
more complicated \cite{Abeyaratne1994,Berezovski2005,Falk1984}.
The first step to the modelling of impact induced wave propagations
and phase transformations is a sound constitutive theory upon which
the entire model can be built \cite{Abeyaratne1994,Falk1984}.
Various constitutive models have been proposed
on mesoscale and microscale to capture the phase boundary movement induced by
the dynamical loadings \cite{Abeyaratne2000,Dai2006}. An example of such a
constitutive model can be found in Refs. \cite{Abeyaratne1994,Abeyaratne2000}, where
a one-dimensional model for the modelling of shock
wave propagations with phase transformation was constructed on the basis of a non-convex
Helmholtz free energy function. Under this approach the entire structure was split into
different domains due to the phase transformation and the movement of boundaries
between the
domains was modelled using the ``jump conditions''. This approach is suitable for
microscopic problems, while for many engineering applications a
model is required at macroscale. In Ref.
\cite{Chen2000,Lagoudas2003}, the dynamic behaviour of
phase boundaries was modelled using a thermo-mechanical coupling approach.
The model was based on a linearized
constitutive theory, and hence its application potential was
inherently limited.
For many engineering applications, the dynamic response of SMA
materials caused by impact loadings needs to be better understood at
macroscale for design and control of SMA-based devices. For this
purpose, displacement and temperature evolutions in the material are
normally sought. Models at mesoscale may not be sufficient for this
purpose as another model needs to be constructed to link macroscale
properties and mesoscale domain structures.
Another aspect of modelling the dynamics of such materials as SMAs
under impact loadings is the thermo-mechanical coupling effects. In most of the existing
investigations, the thermal dynamics are either neglected \cite{Abeyaratne1994,
Berezovski2005,Dai2006}, or modelled separately from the mechanical dynamics
\cite{Chen2000,Lagoudas2003}. However, the physics-based models should account for the
intrinsic coupling of thermal and mechanical fields in SMAs. When the SMAs are used for
damping purposes or for other purposes where the conversion of energy between the thermal and
mechanical fields is essential, the coupling effects are expected to be particularly
important, and the constitutive theory should be constructed by taking into account both
fields simultaneously.
In this paper, the nonlinear thermo-mechanical wave propagations in
SMA rods induced by impact loadings are modelled and analyzed at
macroscale. To capture the thermo-mechanical coupling and nonlinear
nature of the phase transformations, the Ginzburg-Landau-Devonshire
theory is applied for the modelling of the nonlinear dynamics. The
governing equations for the mechanical field are obtained by
minimizing the mechanical energy, while those for the thermal field
are obtained by using the conservation law of internal energy. The
intrinsic coupling of the two fields is built-in into the model by
including both fields in the potential energy functional. In the
following sections, a mathematical model describing SMA dynamics is
developed based on a system of coupled partial differential
equations which is re-cast in the form of differential algebraic
equations, and the Chebyshev collocation method
is employed together with the backward differentiation formula to
integrate the resulting system. Nonlinear wave propagation patterns caused by an impact stress
loading at one end of the rod are simulated with different initial
temperatures and computational parameters. Finally, the influence of
phase transformation on the wave propagations is analyzed
numerically, along with the influence of other effects such as
internal friction and capillary effects.
\section{THE INITIAL-BOUNDARY VALUE PROBLEM}
We restrict our analysis to one-dimensional cases as sketched in Figure (\ref{RodShock}). The
SMA rod under consideration occupies an interval $[0, L]$, and is subjected to an impact
loading from the right end $x=L$, while the other end $x=0$ is fixed. The rod is thermally
insulated at both ends so there is no heat loss (gain) to (from) the ambient environment.
Under external loadings, a material point $x$ in the SMA rod will be moved to a new position
$x+u(x,t)$ due to deformation, where $u(x,t)$ is the longitudinal displacement at time $t$.
Function $u(x,t)$ is assumed to be continuous at any time $t$ and position $x$ based on the
continuity of the rod at macroscale. The stress $\sigma$ is related to the deformation $\varepsilon$
by $\sigma(x,t) = \mathcal N (\varepsilon(x,t))$ where $\varepsilon(x,t)= \partial u(x,t)/ \partial x$
is the strain.
By now, it is well understood that the first order phase
transformation in SMAs holds the key to unique properties of the
material, such as the shape memory and pseudo-elastic effects.
Therefore, it is expected that the adequate mathematical model for
the dynamics of SMAs at macroscale should be capable of capturing the
first order martensite phase transformations. On the other hand, the
intrinsic coupling of mechanical and thermal fields should also be
captured by the model. To satisfy these requirements, the
mathematical model based on the modified Ginzburg-Landau-Devonshire
theory has been established
\cite{Bubner1996,Bubner2000,Falk1984}:
\begin{eqnarray} \label{GoverningEq}
\rho \ddot u &=& \D{x}{}\left ( k_1(\theta -\theta_1) \varepsilon + k_2 \varepsilon^3 + k_3 \varepsilon^5 \right )
+ \nu \D{t}{}\DD{x}{u} - k_g \frac{\partial ^4u}{\partial x^4} + f, \\
c_{v}\D{t}{\theta}&=& k\DD{x}{\theta}+k_{1}\theta \varepsilon \D{t}{\varepsilon}
+ \nu \left ( \D{t}{\varepsilon} \right)^2 + g ,
\end{eqnarray}
where $k_{1}$, $k_{2}, k_{3}$ and $k_g$ are material-specific constants, $\nu$
internal friction coefficient, $\rho$ the density of the material, $\theta_1$ the reference
temperature, $c_v$ the specific heat capacitance, $k$ heat conductance, and $f$ and
$g$ mechanical and thermal loadings, respectively.
It is essential that the above model is constructed on the basis of
the potential energy function $\mathcal F$, which is a non-convex
function
of the chosen \emph{order parameters} and temperature $\theta$, at mesoscale, according to
Ginzburg-Landau-Devonshire theory. It is a sum of local energy
function $(\mathcal F_l)$ and non-local energy function ($\mathcal
F_g$). For the current one-dimensional problem,
the strain $\varepsilon(x,t)$ is chosen as the order parameter, and the local free energy density
can be constructed as the Landau free energy density $\mathcal F_l(\theta, \varepsilon)$, while
the non-local part can be formulated as a Ginzburg term $\mathcal F_g( \nabla \varepsilon)$:
\cite{Bales1991,Chaplygin2004,Falk1984}:
\begin{eqnarray} \label{Potentials}
\mathcal F(\theta, \varepsilon) &=& \mathcal F_l(\theta, \varepsilon) + \mathcal F_g( \nabla \varepsilon) ,
\nonumber \\
\mathcal F_l(\theta, \varepsilon) &=& \frac{k_1(\theta -\theta_1)}{2} \varepsilon^2 +
\frac{k_2}{4} \varepsilon^4 + \frac{k_3}{6} \varepsilon^6, \\
\mathcal F_g (\nabla \varepsilon) &=& \frac{k_g}{2} \left(\D{x}{\varepsilon}\right)^2, \nonumber
\end{eqnarray}
The local minima of the local term (Landau free energy function) are introduced to characterize
martensite variants, while the non-local term (Ginzburg term) above accounts for
inhomogeneous strain field, which represents energy contributions from domain walls and
other boundaries among different phases. It will be translated into capillary effects at
macroscale.
In order to account for the internal friction, accompanying wave
propagations and phase transformations, which will be translated
into viscous effects at macroscale, a Rayleigh dissipation term is
also included in the above model by using the following term
\cite{Abeyaratne2000}:
\begin{eqnarray}
\mathcal F_R = \frac{1}{2}\nu (\D{t}{\varepsilon})^2.
\end{eqnarray}
The constitutive relations for the material in the above model at
macroscale can be obtained by using the thermodynamic equilibrium
conditions:
\begin{eqnarray}
\sigma = \D{\varepsilon}{\mathcal H}, \quad e= \mathcal H - \theta \D{\theta}{\mathcal H},
\end{eqnarray}
where $\mathcal H(\theta, \varepsilon)= \mathcal F - c_v \theta \ln \theta$ is the Helmholtz free
energy function. The mechanical and thermal fields are intrinsically coupled since the
internal energy $e$ is associated with the same potential energy as above, and the
governing equations for the thermal field are formulated using the conservation law for the
internal energy \cite{Bubner1996,Falk1984}.
The difficulties associated with the above model are understood
better by analyzing the profiles of the non-convex potential energy
$F_l$, its temperature dependence, and the non-convex constitutive
curves,
as sketched in Figure~(\ref{GinzburgLandau}). At low temperature $\theta=201$ K, as sketched in
the top row, there are two local minima in the potential function,
which are introduced to characterize martensite plus and minus,
respectively (in 1D cases). The stress-strain relation of the
material under dynamical loadings will not follow the sketched
constitutive relations exactly. Instead, there will be jumps from
point ($A$ to $B$) or vice versa. Such jumps
are associated with the transition from martensite plus to minus or vice versa.
This is the origin of mechanical hysteresis which will dissipate mechanical energy quickly by
converting it into thermal form due to the thermo-mechanical coupling.
The amount of energy converted into thermal form can be estimated by using the area enclosed
under the hysteresis loop, as marked by the dashed lines in the stress-strain plot on the left.
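The enclosed-area estimate amounts to the loop integral $\oint \sigma \, \mathrm{d}\varepsilon$; for a hysteresis loop sampled at discrete $(\varepsilon, \sigma)$ points it can be evaluated with the shoelace formula, as in the generic sketch below (not tied to the data of the figure).

```python
import numpy as np

def hysteresis_energy(eps, sig):
    """Area enclosed by a closed stress-strain loop (shoelace formula),
    i.e. the mechanical energy density dissipated per loading cycle."""
    eps = np.asarray(eps, dtype=float)
    sig = np.asarray(sig, dtype=float)
    # Sum of cross products of consecutive loop vertices (loop closes itself
    # via np.roll), halved and taken in absolute value.
    return 0.5 * abs(np.sum(eps * np.roll(sig, -1) - sig * np.roll(eps, -1)))
```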
When the SMA rod temperature is intermediate ($\theta=240$ K), there
are still hysteresis loops, and the jump phenomena become more
complicated, since there are three local minima present in the
potential energy function, which means that austenite and two
martensite variants may co-exist in the material. The dissipation of mechanical energy
will be slower in this case
since the hysteresis loops become smaller. At high temperature
($\theta=310$ K), it is shown that there are no jump phenomena any
longer, therefore no hysteresis, because there is only one local
minimum. In this case, there will be only austenite present and the dynamics become
fairly simple.
In the current paper, the mechanical loading is implemented in terms
of impact stress at one of the rod ends. Hence, it is convenient to
keep the constitutive relation
as an extra equation for the model and consider the stress as a dependent variable.
This representation will make the treatment of boundary conditions much easier for the
current discussion. The resulting system of Differential Algebraic Equations (DAE) can
be written as follows \cite{Melnik2000}:
\begin{equation} \label{SMAEq}
\begin{array}{l}
\displaystyle \D{t}{u} = v, \quad
\rho\D{t}{v} = \D {x}{\sigma} + \nu \D{x}{} \D{x}{v}
- k_g \D{x}{}\DD{x}{\varepsilon}, \\
\displaystyle
c_{v}\frac{\partial\theta}{\partial
t}=k\frac{\partial^{2}\theta}{\partial x^{2}}+k_{1}\theta \varepsilon
\D{t}{\varepsilon}, \\
\displaystyle
\sigma = k_{1}(\theta-\theta_{1}) \varepsilon + k_{2}\varepsilon^{3}+k_{3}\varepsilon^{5},
\end{array}
\end{equation}
where $v$ is the velocity. The mechanical and thermal loadings, $f$
and $g$, are all set to zero, so that only boundary loadings will be
taken into account in the current investigation.
In order to investigate the thermo-mechanical wave propagations in
the SMA rod, the following boundary conditions are employed for the
mechanical and thermal fields similarly as in
Ref.\cite{Bubner1996,Wang2006} :
\begin{eqnarray}
\left. \D{x}{\theta} \right |_{x=0} =0, \quad \left. \D{x}{\theta}\right |_{x=L} =0,
\nonumber \\
u(0, t) = 0, \quad \sigma(L, t) = S(t), \\
\left. \DD{x}{u}\right |_{x=0} = 0, \quad \left. \DD{x}{u} \right |_{x=L}=0,
\end{eqnarray}
where $S(t)$ is a given function describing the stress impact profile.
\section{Wave Propagations}
Since mechanical responses caused by external loadings in most materials are
normally much faster than thermal ones, which are also more interesting for numerical
investigations and many applications, the emphasis of the current discussion is
put on mechanical waves caused by mechanical loadings.
\subsection{Temperature Dependence}
For the analysis of elastic waves in the SMA rod, Equation~(\ref{GoverningEq}) is firstly
linearized at the point $(\sigma_L, \varepsilon_L)$ where $\sigma_L=0$:
\begin{eqnarray}\label{WaveLinear}
\rho \ddot u &=& \D{x}{}k_L \varepsilon + \nu \D{t}{}\D{x}{\varepsilon} -
k_g \frac{\partial ^4u}{\partial x^4},
\end{eqnarray}
where the external mechanical loading is dropped. $k_L$ is the stiffness constant for the
linearized system, which is temperature dependent.
When the SMA is at high temperature ($\theta=310$ K), only austenite is stable and there is no
phase transformation. In this case it can be easily calculated that
$\varepsilon_L=0$, so $k_L$ can be simply formulated as $ k_L = k_1(\theta -\theta_1)$.
At lower temperature ($\theta=210$ K), the stress strain relation is
not a monotone curve any longer, and the linearization has to be
carried out by using at least three points, as indicated by the plot
in the top row of Figure~(\ref{GinzburgLandau}) (the three
intersections between the horizontal axis with the $\sigma-\varepsilon$
curve). The central one is $\varepsilon_L=0$, which is not a stable
equilibrium state of the system, and is not interesting for the
analysis. The other two intersections can be easily calculated using
the following condition:
\begin{eqnarray}
k_1(\theta -\theta_1) + k_2 \varepsilon^2 + k_3 \varepsilon^4= 0,
\end{eqnarray}
which gives the following formulation (using parameter values given in section 5):
\begin{eqnarray} \label{EquiPoint}
\varepsilon_L^2= \frac{-k_2 \pm \sqrt{ k_2^2-4 k_1k_3(\theta-\theta_1) } }{2 k_3},
\quad \varepsilon_L = \pm 0.115.
\end{eqnarray}
These two values are associated with strains for martensite plus ($\varepsilon_L=0.115$)
and minus ($\varepsilon_L=-0.115$), respectively.
The linearized coefficient is then the tangent stiffness $\mathrm{d}\sigma/\mathrm{d}\varepsilon$ evaluated at the equilibrium:
\begin{eqnarray}
k_L = k_1(\theta -\theta_1) + 3 k_2 \varepsilon_L^2 + 5 k_3 \varepsilon_L^4,
\end{eqnarray}
which gives $k_L$ the same value at the two points with different $\varepsilon_L$ values,
due to the symmetry property. The above analysis indicates that the
linearized wave motion in the material at martensite plus state will be the same
as those at martensite minus state.
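These formulas are easy to evaluate numerically. The sketch below assumes the Falk-model constants commonly quoted for Au-Cu-Zn alloys ($k_1=480$, $k_2=-6\times10^6$, $k_3=4.5\times10^8$, $\theta_1=208$ K), since the Section 5 values are not reproduced in this excerpt; with these assumed numbers Equation~(\ref{EquiPoint}) indeed gives $\varepsilon_L \approx \pm 0.115$ at $\theta=210$ K. The linearized stiffness is taken as the tangent modulus $\mathrm{d}\sigma/\mathrm{d}\varepsilon$ at the equilibrium strain.

```python
import math

# Assumed Falk-model constants (Au-Cu-Zn), standing in for Section 5:
k1, k2, k3, theta1 = 480.0, -6.0e6, 4.5e8, 208.0

def equilibrium_strains(theta):
    """Nonzero roots of sigma(eps) = 0, i.e. of
    k1*(theta - theta1) + k2*eps**2 + k3*eps**4 = 0 (Eq. (EquiPoint))."""
    disc = k2 ** 2 - 4.0 * k1 * k3 * (theta - theta1)
    if disc < 0.0:
        return []                       # austenite only: no nonzero equilibria
    sq = [(-k2 + s * math.sqrt(disc)) / (2.0 * k3) for s in (1.0, -1.0)]
    return sorted(math.sqrt(r) for r in sq if r > 0.0)

def k_L(theta, eps_L):
    """Tangent stiffness d(sigma)/d(eps) at the equilibrium strain."""
    return k1 * (theta - theta1) + 3.0 * k2 * eps_L ** 2 + 5.0 * k3 * eps_L ** 4
```

By symmetry only the positive strains are returned; the martensite-minus branch carries the same stiffness.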
For the cases where the SMA rod temperature is intermediate, the dependence
of $k_L$ on temperature is more
complicated since both austenite and martensite might occur and there might be $4$
values for $\varepsilon_L$ to satisfy $\sigma_L=0$, as indicated in the middle row in
Figure~(\ref{GinzburgLandau}). Although the symmetry property still
exists for martensitic variants, wave motions are different between
austenite and martensite states.
Let us consider the following wave solution to the linearized wave
motion given by Equation~(\ref{WaveLinear}):
\begin{eqnarray}
u = u(x- V t) = u(z), \quad z = x- V t,
\end{eqnarray}
where $V$ is a constant standing for the wave velocity. By substitution, the following relation
can be easily obtained:
\begin{eqnarray} \label{WaveSimplified}
k_g\DD{x}{\varepsilon} = \left( k_L -\rho V^2 \right) \varepsilon - \nu V \D{x}{\varepsilon},
\end{eqnarray}
where $ k_L -\rho V^2$ can be positive or negative. The problem is now formulated
as an ordinary differential equation and its general solution can be written as:
\begin{eqnarray}
\varepsilon(z) = C_1 e^{z \sqrt{ (k_L -\rho V^2 )/k_g}} + C_2 e^{-z \sqrt{ (k_L -\rho V^2 )/k_g}},
\end{eqnarray}
where $C_1, C_2$ are coefficients to be determined by boundary
conditions. The viscous term is temporarily ignored.
If the wave velocity $V$ is less than the velocity of sound in the material
$V_s =\left ( k_L/\rho \right)^{1/2}$, then $ k_L -\rho V^2 >0$, and no bounded solution
exists. It means that no waves can propagate in the material with velocity
$V$ smaller than the velocity of sound in the material. Since the velocity of sound of the material
is temperature dependent, the allowed speed for waves to propagate in the SMA
rod varies along with the variation of its temperature.
When the viscous term is also taken into account, then the wave propagation will
always be accompanied by dissipation effects, which can be characterized by the
following exponential function:
\begin{eqnarray}
| \varepsilon(z)| = e^{-\xi \omega z}, \quad \xi =\frac{\nu V}{2\sqrt{(\rho V^2 -k_L) k_g}},
\quad \omega = \sqrt{\frac{(\rho V^2 -k_L)}{k_g}},
\end{eqnarray}
where the initial amplitude of the considered waves is assumed to be
$1$. The dissipation effects can be estimated by the exponential
coefficient $\xi \omega = \frac{\nu V}{2k_g}$. For larger $V$,
faster dissipation will be induced. The dissipation effects are
independent of the material temperature.
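The decay law above is straightforward to evaluate; the sketch below only encodes the linearized result $|\varepsilon(z)| = e^{-\nu V z/(2 k_g)}$ with unit initial amplitude, with illustrative parameter values in the checks.

```python
import math

def decay_rate(nu, V, kg):
    """Spatial decay exponent xi*omega = nu*V/(2*kg) of the linearized wave."""
    return nu * V / (2.0 * kg)

def amplitude(z, nu, V, kg):
    """Amplitude envelope |eps(z)| = exp(-xi*omega*z), initial amplitude 1."""
    return math.exp(-decay_rate(nu, V, kg) * z)
```

As noted in the text, the exponent contains neither $\theta$ nor $k_L$, so the decay is temperature independent, while faster waves decay faster.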
\subsection{Effects of Ginzburg's Term}
In the wave equation, the term $k_g \frac{\partial^4 u}{\partial
x^4}$ results from the interfacial energy contribution to the
potential energy function. It is also called Ginzburg's term, and
accounts for the capillary effects
\cite{Bubner1996,Bubner2000,Falk1984,Falk1987}.
It is easy to see that this term is related to dispersion of wave propagations. For the
analysis, the following solution to the linearized wave equation is considered:
\begin{eqnarray}
u = \sin\frac{2\pi}{\lambda }(x - Vt),
\end{eqnarray}
where $\lambda$ is the wavelength. By substitution, the following relationship involving
the wave speed $V$ can be obtained if the viscous effects are ignored:
\begin{eqnarray} \label{WaveDisper}
\rho V^2 = k_L + \frac{4\pi^2 k_g}{\lambda^2},
\end{eqnarray}
which indicates that the wave propagation speeds are increased due
to the contribution of non-local potential energy (capillary
effects). The dispersion effects caused by non-local contributions
are stronger for those waves with smaller wavelengths. If the SMA
rod is at lower temperature, the linearized stiffness constant $k_L$
will be smaller, which will make the last term in
Equation~(\ref{WaveDisper}) more pronounced.
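The dispersion relation can be checked numerically. Substituting $u=\sin\left(2\pi(x-Vt)/\lambda\right)$ into the linearized equation gives $\rho V^2 = k_L + 4\pi^2 k_g/\lambda^2$ (the $\pi^2$ arising from differentiating the sinusoid twice), which the sketch below evaluates with arbitrary illustrative parameter values.

```python
import math

def phase_velocity(lam, k_L, k_g, rho):
    """Phase velocity from rho*V^2 = k_L + 4*pi^2*k_g/lam^2
    (viscosity ignored)."""
    return math.sqrt((k_L + 4.0 * math.pi ** 2 * k_g / lam ** 2) / rho)
```

Shorter wavelengths travel faster, and for long wavelengths the velocity approaches the non-dispersive sound speed $\sqrt{k_L/\rho}$.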
\section{NUMERICAL METHODOLOGY}
As mentioned in the previous section, the development of the
numerical methodology for simulation of the wave propagation based
on the above mathematical model is not a trivial task. In
particular, given that both dispersion and dissipation of wave
propagations are present in the physics of the problem, the
numerical algorithm for the problem has to be able to take care of
both dissipation and dispersion numerically, and the accuracy of
the algorithm will be affected by their treatment. In the present
paper, a multi-domain decomposition method combined with the
Chebyshev collocation methodology is the method of choice in
addressing the above issues. The compromise made here is that a
spectral method is employed to take advantage of its better
convergence property, while the domain decomposition method is
chosen for the purpose of reducing the order of basis functions for
the spectral method when the total node number is large.
\subsection{Chebyshev's Collocation Method}
For the Chebyshev pseudo-spectral approximation, a set of
Chebyshev points $\{x_i\}$ are chosen along the length direction as follows:
\begin{equation}
\label{num-eq4}
x_i = L\left(1-\cos(\frac{\pi i}{N})\right) /2, \quad i=1,2,\dots,N.
\end{equation}
Using these nodes, the $u, v, \theta$, and $\sigma$ distributions in the rod can
be expressed in terms of the following interpolation:
\begin{equation}
\label{num-eq2} f(x) = \sum_{i=0}^{N} f_i \phi_i(x),
\end{equation}
where $f(x)$ stands for any of $u, v, \theta$, or $\sigma$,
and $f_i$ is the function value at $x_i$. $\phi_i(x)$ is the $i^{th}$ interpolating
polynomial which has the following property:
\begin{equation}
\phi_i(x_j) = \left \{
\begin{array}{ll}
1, & i=j, \\ 0, & i\neq j.
\end{array} \right.
\end{equation}
It is easy to see that the well-known Lagrange interpolants satisfy
the interpolating requirements. Having obtained $f(x)$
approximately, the derivative $\partial f(x)/ \partial x$ can be
easily obtained by taking the derivative of the basis functions
$\phi_i(x)$ with respect to $x$:
\begin{equation}
\label{num-eq3}
\frac{\partial f}{\partial x} = \sum_{i=0}^{N} f_i
\frac {\partial \phi_i(x)}{\partial x},
\end{equation}
and similarly for the higher-order derivatives. All these approximations can be formulated in
matrix form for the convenience of programming.
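As an illustration of this matrix formulation, the sketch below assembles the first-derivative collocation matrix for the mapped points of Equation~(\ref{num-eq4}), following the standard Chebyshev differentiation-matrix construction; the function name and interface are ours, and the points are indexed from $0$ to $N$ so that both endpoints are included:

```python
import numpy as np

def cheb_diff_matrix(N, L):
    """First-derivative collocation matrix on the mapped Chebyshev
    points x_i = L*(1 - cos(pi*i/N))/2, indexed i = 0..N, built from
    the standard Chebyshev differentiation matrix on [-1, 1]."""
    xi = np.cos(np.pi * np.arange(N + 1) / N)        # standard points on [-1, 1]
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(xi, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal via row sums (D @ 1 = 0)
    x = L * (1.0 - xi) / 2.0                         # mapped points on [0, L]
    return (-2.0 / L) * D, x                         # chain rule: d(xi)/dx = -2/L
```

Applying the returned matrix to the nodal values of $x^2$, for instance, recovers $2x$ at the nodes to near machine precision.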
\subsection{Multi-Domain Decomposition}
It is known that spectral methods are able to give higher
accuracy with the same number of discretization nodes, compared to
finite difference or finite element methods. On the other
hand, when the solution to the problem lacks higher-order
derivatives, spectral methods may produce artificial
oscillations due to the Gibbs phenomenon. This is to be expected for
the current problem when the impact-induced wave propagation is
analyzed numerically. To alleviate this, a multi-domain decomposition
method is employed.
The entire computational domain $\mathcal D = [0, L]$ is evenly decomposed into $P$
intervals (subdomains), with an overlap region between each
pair of consecutive intervals, as sketched in Figure (\ref{DomainDec}):
\begin{equation}
\mathcal D = \bigcup_{p=1}^{P} D_p,
\end{equation}
where the number of subdomains $P$ is chosen according to the specific problem under
consideration. In each interval, the Chebyshev collocation method discussed in the previous
section is employed to approximate the solution and its derivatives.
The coupling between each pair of consecutive intervals can be implemented
by setting the following requirements:
\begin{eqnarray}
y_p^n = y_{p+1}^2, \quad y_p^{n-1} = y_{p+1}^1,
\end{eqnarray}
where the subscript $p$ stands for the interval number, while the superscript $n$ stands for
the node number in each interval. Variable $y_p^n$ is the function value at point $x_p^n$
(the $n$th node in the $p$th interval), which could be any of the dependent variables we are
solving for. Point $x_p^n$ is actually the same node as $x_{p+1}^2$, and $x_p^{n-1}$ is the
same node as $x_{p+1}^1$.
The derivatives of functions at the overlapping nodes are approximated by averaging
their values evaluated from the two intervals involved:
\begin{eqnarray}
\left. \D{x}{y} \right |_{x_p^{n-1}} = \frac{1}{2} \left (
\left. \sum_{i=0}^{N} y_p^i \D{x}{\phi_i(x)} \right |_{x_p^{n-1}} +
\left. \sum_{i=0}^{N} y_{p+1}^i \D{x}{ \phi_i(x)} \right |_{x_{p+1}^1}
\right ), \\
\left. \D{x}{y} \right |_{x_p^n} = \frac{1}{2} \left (
\left. \sum_{i=0}^{N} y_p^i \D{x}{\phi_i(x)} \right |_{x_p^n} +
\left. \sum_{i=0}^{N} y_{p+1}^i \D{x}{ \phi_i(x)} \right |_{x_{p+1}^2}
\right ). \nonumber
\end{eqnarray}
The approximation to the second-order derivatives can be obtained using the same averaging for the
nodes in the overlapping region.
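In code, this coupling amounts to a simple merge of the per-subdomain derivative vectors with averaging on the two shared nodes; the helper below is our own illustration of the rule:

```python
import numpy as np

def overlap_average(local_derivs):
    """Merge per-subdomain derivative values into one global vector.
    Each entry of local_derivs holds the derivative evaluated on one
    subdomain's nodes; the last two nodes of subdomain p coincide with
    the first two nodes of subdomain p+1, and the shared values are
    replaced by their averages, as in the coupling formulas above."""
    out = list(local_derivs[0])
    for nxt in local_derivs[1:]:
        out[-2] = 0.5 * (out[-2] + nxt[0])   # node x_p^{n-1} = x_{p+1}^1
        out[-1] = 0.5 * (out[-1] + nxt[1])   # node x_p^{n}   = x_{p+1}^2
        out.extend(nxt[2:])
    return np.array(out)
```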
\subsection{Backward Differentiation Formula Method}
By employing the multi-domain decomposition method combined with the Chebyshev collocation
methodology, the given set of partial differential equations, Eq.~(\ref{SMAEq}), can be
converted into a differential-algebraic equation (DAE) system, which can be generically written in the following form:
\begin{eqnarray}
\vec M \frac{d \vec X}{dt} + \vec N(t, \vec X, g(t)) = 0,
\end{eqnarray}
where $\vec X$ is the vector collecting all the variables we are solving for, $\vec M$ is
a singular matrix, and $\vec N$ is a vector collecting the nonlinear functions produced by the spatial
discretization. The resultant
DAE system is stiff and has to be solved by an implicit algorithm. Here the second-order
backward differentiation formula method is employed for this purpose. By discretizing
the time derivative using the second-order backward approximation, the DAE system is
converted into an algebraic
system at each time level, which can formally be written as follows:
\begin{equation}
\vec M \left( \frac{3}{2} \vec X^n - 2 \vec X^{n - 1} + \frac{1}{2}
\vec X^{n - 2} \right) + \Delta t \vec N \left( t_n, \vec X^n, g(t_n) \right) = 0 ,
\end{equation}
where $n$ denotes the current computational time level. At each time
level, Newton iterations are carried out to solve for
$\vec X^n$ using $\vec X^{n-1}$ and $\vec X^{n-2}$. Starting from the
initial values, the vector of unknowns $\vec X$ can be solved for at all specified time
instances employing this algorithm.
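A stripped-down version of one such time step, for a generic DAE $\vec M \, d\vec X/dt + \vec N(t, \vec X) = 0$, is sketched below; a finite-difference Jacobian is used inside Newton's method purely for illustration, and the loading $g(t)$ is assumed to be folded into $\vec N$:

```python
import numpy as np

def bdf2_step(M, Nfun, t_n, dt, X1, X2, newton_iters=20, tol=1e-12):
    """One step of the second-order BDF scheme
    M*(1.5*X^n - 2*X^{n-1} + 0.5*X^{n-2}) + dt*N(t_n, X^n) = 0,
    solved for X^n by Newton's method; X1 = X^{n-1}, X2 = X^{n-2}."""
    def residual(X):
        return M @ (1.5 * X - 2.0 * X1 + 0.5 * X2) + dt * Nfun(t_n, X)
    X = X1.copy()                       # predictor: previous time level
    for _ in range(newton_iters):
        R = residual(X)
        if np.linalg.norm(R) < tol:
            break
        m = len(X)
        J = np.zeros((m, m))
        h = 1e-7                        # finite-difference Jacobian of the residual
        for k in range(m):
            e = np.zeros(m)
            e[k] = h
            J[:, k] = (residual(X + e) - R) / h
        X = X - np.linalg.solve(J, R)
    return X
```

In practice an analytic or sparse Jacobian would be preferable for large systems.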
\section{Numerical Experiments}
A series of numerical experiments has been carried out to
investigate the nonlinear wave propagation in the SMA rod involving
phase transformations. All experiments reported here have been
performed on a $\textrm{Au}_{23}\textrm{Cu}_{30} \textrm{Zn}_{47}$
rod, with a length of $1$ cm. The physical parameters, except $\nu$
and $k_g$, for this specific material are taken the same as those in
\cite{Niezgodka1991}, which are listed as follows for convenience:
\begin{eqnarray*}
k_{1}=480\, g/ms^{2}cmK, \quad k_{2}=6\times10^{6}g/ms^{2}cmK,\qquad
k_{3}=4.5\times10^{8}g/ms^{2}cmK, \\
\theta_{1}=208K,\quad \rho=11.1g/cm{}^{3},\quad
c_{v}=3.1274g/ms^{2}cmK, \quad k=1.9\times10^{-2}cmg/ms^{3}K.
\end{eqnarray*}
Experiments indicate that the Ginzburg coefficient $k_g$ should be relatively
small compared to $k_1$, so it is first taken as $k_g =10\, g/ms^2$, by referring
to Ref.~\cite{Bubner1996}. The internal
friction coefficient is not an easily measurable quantity. In what follows, we assume
it to be a small fraction (2\%) of $k_1$, that is, approximately $10\, g/(cm \cdot ms)$.
The entire rod is divided into $10$ sub-intervals, and in each interval $15$ nodes are used
for the spatial discretization. All simulations have been carried out for the time span $[0,0.1]$ ms,
and the time step-size for the integration is chosen as $2.5 \times 10^{-5}$ ms.
The first numerical experiment for nonlinear wave propagation in
the SMA rod is performed at a higher temperature, $\theta=310$ K,
for which there is no phase transformation.
The other initial conditions are chosen as $u=v=s=0$, and the mechanical loading
is applied in terms of a stress impact at the right end, as follows:
\begin{eqnarray}
g(t) =\begin{cases}
4\times 10^3, & 0 \leq t \leq 0.005 \cr 0 , & t > 0.005
\end{cases}
\end{eqnarray}
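In code, this loading is a simple piecewise-constant pulse (the helper name is ours; times are in ms as in the text):

```python
def stress_pulse(t, amplitude=4.0e3, duration=0.005):
    """Piecewise-constant impact loading g(t) applied at the right end;
    t and duration are in ms, matching the pulse defined above."""
    return amplitude if 0.0 <= t <= duration else 0.0
```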
which can be regarded as an approximation to a pulse stress impact on the SMA rod.
The mechanical wave propagation is represented by the strain evolution,
and the thermal waves by the temperature evolution, in Figure (\ref{Pulse310}).
It is shown that the impact-induced waves start from $x=1$, propagate
along the negative $x$ direction, hit the opposite boundary at $x=0$, and are bounced back.
The temperature evolution indicates that there are associated thermal waves induced
by the stress impact loading, due to the thermo-mechanical coupling effects.
The propagation patterns of the thermal waves
are similar to those of mechanical waves. The evolution of the
displacement distribution due to the stress impact is also presented
in Figure (\ref{Pulse310}), in the left bottom sub-figure. To
clarify the patterns of wave propagations, the strain distributions
in the SMA rod at three chosen time instants are plotted in the
right bottom sub-figure in Figure (\ref{Pulse310})
($t=0.01,~0.02,~0.03$ ms, respectively). The arrow attached to each
wave profile indicates its propagation direction. It is seen
that the strain distributions in the SMA rod are always smooth and
no obvious sharp jumps occur, since only austenite is stable at
this temperature and there is no phase transformation. The amplitude
of the wave decreases and the wave peak becomes broader during the
propagation; this is easily explained by the fact that the dissipation
and dispersion effects, caused by internal
friction, capillary effects, and thermo-mechanical coupling, are all included in the model.
The average wave propagation
speed can be estimated from the location of the wave front or wave peak plotted in the
figure. With the
current initial temperature, the strain wave is bounced back from $x=0$ and its peak
is located around $x=0.65$ cm when $t=0.03$ ms. This experiment simulates
nonlinear thermo-mechanical waves like those in regular thermo-elastic materials.
The second example deals with the same computational conditions and
loading, except that the initial temperature is now set to
$\theta=240$ K, for which both martensite and austenite may co-exist
in the SMA rod. The numerical results for this case are presented
similarly in Figure (\ref{Pulse240}). It is easy to see that the
strain and temperature waves are not as regular as those in the
first experiment. There are some plateaus clearly visible in the
strain and temperature figures, which can be related to the martensite
and austenite regions in the SMA rod. The fronts of the waves are now more
easily identified since abrupt jumps occur in the strain
distribution, caused by the phase transformations between
austenite and martensite. In the displacement evolution there is now
only one peak within the simulated time span, while two peaks were
found in the first experiment. At the three chosen time instants, the strain distributions are not as
smooth as those at the higher temperature. At $t=0.03$ ms, the wave
front is around $x=0.65$ cm, propagating along the positive $x$ direction. This indicates that
the wave speed is slightly lower than that in the first experiment, as suggested by the analysis
given in Section 3. Similarly, there are thermal waves caused by the mechanical loading
due to the coupling effects.
For the third experiment, the initial temperature is set at
$\theta=210$ K. Because only martensite is stable at this
temperature, the initial condition is chosen such that the SMA rod
is initially in the martensite minus state, for which the displacement
is set to $u=-0.115x$ so that $\varepsilon_0=-0.115$. This strain value is one of
the local minima of the non-convex potential
energy plotted in Figure~(\ref{GinzburgLandau}), as calculated in Equation~(\ref{EquiPoint}).
Numerical results for this case are presented in
Figure (\ref{Pulse210}). It is seen that the entire SMA rod is
divided into two domains, one consisting of martensite plus (with
$\varepsilon \approx 0.115$) and the other of martensite minus ($\varepsilon
\approx -0.115$). The interface between the two domains is driven by
the impact stress loading, as sketched in the strain evolution plot
and the wave profiles at the chosen time instants in Figure
(\ref{Pulse210}). Under the given computational conditions, the SMA
rod is converted from martensite minus to martensite plus starting from the end
under the impact loading, and the phase boundary is driven to propagate
along the negative $x$ direction. Due to the hysteretic nature of
the phase transformation, the input mechanical energy is dissipated
continuously during the propagation of the phase boundary and is
not able to convert the whole rod into martensite plus. The
interface stops at around $x=0.5$ cm. Correspondingly, there are also
thermal waves accompanying the martensite transformation.
The wave propagation speed is much lower than in the previous
experiments, and the wave speed changes more markedly during the
propagation process, as indicated by the wave profiles plotted
at the chosen time instants.
At $t=0.03$ ms, the wave front is at $x=0.5$ cm and is unable to move further toward the
end $x=0$.
The fourth experiment investigates the dissipation effects due
to internal friction in the material. As analyzed in Section 3, the
dissipation effects are independent of temperature, so we set the
initial temperature at $\theta_0=310$ K to exclude the phase
transformation, so that the comparison is easier. The internal
friction $\nu$ is set three times larger, at $30$, and all other
computational parameters are chosen the same as those in the first
experiment. The strain evolution and the wave profiles at the same
three chosen time instants are presented in
Figure~(\ref{Pulse310Vis}). By comparing with those in
Figure~(\ref{Pulse310}), it is observed that the dissipation effects
are enhanced and the peak values of the wave decrease faster. At
$t=0.01$ ms, the peak value of the wave profile is around $0.075$,
located around $x=0.6$ cm, while the counterparts with $\nu=10$ are
a peak value of $0.089$ at around $x=0.55$ cm. This indicates that when
$\nu$ is increased, not only does the peak value dissipate faster, but
the wave speed is also slightly reduced.
The final experiment shows numerically the dispersion effects
due to the Ginzburg term in the wave equation. As indicated by
Equation~(\ref{WaveDisper}), the dispersion effects are more
pronounced at low temperature since $k_L=k_1(\theta-\theta_1)$ will
be smaller. To perform the analysis, the initial conditions are set
the same as those in the third experiment, except that $k_g$ is
set three times larger, at $30\, g/ms^2$. The strain evolution and wave
profiles for the three chosen time instants are presented in
Figure~(\ref{Pulse210Kg}). By comparing these results with those in
Figure~(\ref{Pulse210}), it can be seen that the entire rod is still
divided into two domains, one of martensite plus ($\varepsilon \approx
0.115$) and the other of martensite minus ($\varepsilon \approx
-0.115$). However, the wave propagation speed is faster with the larger
$k_g$ value. The interface between martensite minus and plus is
located at around $x=0.35$ cm when $t=0.03$ ms, while for $k_g=10$
it is at $x=0.5$ cm. This observation agrees with the linearized
analysis carried out in Section 3. At the same time, the entire rod
is eventually converted from martensite minus to plus, which indicates that
a smaller amount of input energy is required for the phase
transformation when the $k_g$ value is larger. In other words, the phase
transformation becomes easier to induce when the capillary
effects are enhanced.
From the above numerical experiments, it follows that the nonlinear
thermo-mechanical wave propagation caused by impact loadings in the
SMA rod can be remarkably influenced by the material temperature,
internal friction, and capillary effects. Thermal waves can be
induced by impact mechanical loadings. The wave propagation patterns are
more complicated when phase transformations
are involved, and the dynamic response of the material in this case
is very different from that without phase transformations.
\section{CONCLUSIONS}
In this paper, a mathematical model for the analysis of wave propagation in a shape memory
alloy rod induced by a stress impact was constructed. The modified
Ginzburg-Landau-Devonshire
theory was employed for modelling the dynamic processes in SMA rods. The
first-order martensite phase transformations and the thermo-mechanical
coupling were incorporated into the model. A multi-domain
decomposition method
was employed in conjunction with the Chebyshev collocation method
for the spatial discretization, and the backward differentiation formula was used for solving the
resulting differential-algebraic system. The nonlinear thermo-mechanical wave propagation
in the SMA rod was simulated for various initial temperatures (with and without phase
transformation), and the effects of phase transformations on the wave propagation were
analysed numerically, along with the effects of internal friction and capillarity.
\section{Introduction}
\label{intro}
In this paper, we propose new statistical inference procedures for linear functionals and their functions of two semicontinuous populations.
Specifically, suppose that we have two independent samples of interest, which are generated by the following mixture models:
\begin{equation}\label{eq:1}
X_{i1},\cdots, X_{in_i} \sim F_i(x)= v_iI(x \geq 0)+(1-v_i)I(x>0)G_i(x), \quad \mbox{for~}~i=0,1,
\end{equation}
where $v_i\in (0,1)$, $n_i$ is the sample size for the $i$th sample, $I(\cdot)$ is an indicator function, and $G_i(\cdot)$'s are cumulative distribution functions (CDFs) of positive observations in the $i$th sample.
We are interested in estimating linear functionals \citep[p. 6]{fernholz1983mises} of $F_0(x)$ and $F_1(x)$, defined as
\begin{equation}
\label{functional}
\mbox{\boldmath $\psi$}_0= \int_0^{\infty} {\bf a}(x) dF_0(x)
\quad \mbox{ and } \quad
\mbox{\boldmath $\psi$}_1= \int_0^{\infty} {\bf a}(x) dF_1(x)
\end{equation}
for some given function ${\bf a}(x)$,
and also functions of $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$.
The parameters $\mbox{\boldmath $\psi$}_0$, $\mbox{\boldmath $\psi$}_1$, and their functions
include many important summary quantities such as the centered and uncentered moments, the coefficient of variation, the generalized entropy class of inequality measures of each population, and the mean ratio of two such populations as special cases. More details can be found in Section \ref{examples}.
Without loss of generality, we assume that ${\bf a}(0)={\bf 0}$ throughout the paper; this assumption is satisfied by all the examples considered in Section \ref{examples}.
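For concreteness, a sample from model (\ref{eq:1}) can be generated by drawing a zero indicator with probability $v_i$ and otherwise drawing from $G_i$; the sketch below takes $G_i$ to be log-normal purely for illustration, since the model itself leaves $G_i$ unspecified:

```python
import numpy as np

def sample_semicontinuous(n, v, rng, mu=0.0, sigma=1.0):
    """Draw n observations from F(x) = v*I(x>=0) + (1-v)*I(x>0)*G(x),
    with G taken to be log-normal(mu, sigma^2) for illustration."""
    is_zero = rng.random(n) < v          # zero component, probability v
    x = rng.lognormal(mean=mu, sigma=sigma, size=n)
    x[is_zero] = 0.0
    return x
```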
Many statistical applications naturally produce semicontinuous data with a mixture of excessive zero values and skewed positive outcomes.
Examples include the medical cost data in public health research \citep{Zhou2000}, and seasonal activity pattern data of field mice in biological science \citep{mice}.
More examples can be found in \cite{Wang2017}, a special issue of the {\it Biometrical Journal} \citep{Bohning2016}, and references therein.
The parameters $\mbox{\boldmath $\psi$}_0$, $\mbox{\boldmath $\psi$}_1$, and their functions are widely used in many areas.
For example, the mean ratio of two populations is a useful summary quantity that characterizes the difference in medical costs between two groups in public health research \citep{Zhou2000}.
The moments and the generalized entropy class of inequality measures
are important summary measures in business and economic studies \citep{Dufour2019}.
Most existing procedures for making inference on $\mbox{\boldmath $\psi$}_0$, $\mbox{\boldmath $\psi$}_1$, and their functions
are either fully parametric or fully nonparametric.
The parametric procedures are developed under a parametric model assumption, for example, a log-normal assumption, on $G_i$, $i= 0, 1$.
Under this assumption,
\cite{Tu1999} and \cite{ZhouTu1999} developed a Wald-type test and a
likelihood ratio test for the equality of two population means.
Under the same assumption,
\cite{Zhou2000} proposed a maximum likelihood method and a two-stage bootstrap method
to construct the confidence intervals (CIs) for the mean ratio;
\cite{Chen2006} developed a set of approaches for constructing CIs for the mean ratio based on
the generalized pivotal and likelihood ratio test statistic.
The fully nonparametric methods usually first estimate $F_0(x)$ and $F_1(x)$ by the corresponding empirical CDFs,
which are then used to construct the estimators for $\mbox{\boldmath $\psi$}_0$, $\mbox{\boldmath $\psi$}_1$, and their functions.
The asymptotic results of this type of estimators have been well studied in the literature.
See \cite{Serfling1980} for more details.
Nonparametric Wald-type method \citep{Brunner1997,Pauly2015, Dufour2019}
and empirical likelihood (EL) method \citep{kang2010empirical,Wu2012,satter2020jackknife}
may also be used to construct the CIs and perform hypothesis testing for $\mbox{\boldmath $\psi$}_0$, $\mbox{\boldmath $\psi$}_1$, and their functions.
In general, the methods based on the parametric assumption on $G_i$'s are quite efficient.
However, in many applications, the parametric assumption, for example, the log-normal assumption for $G_i$ may be violated.
The corresponding parametric inference results may not be robust to the model misspecification on $G_i$'s \citep{Nixon2004}.
The fully nonparametric methods are generally quite robust to the model assumption on $G_i$'s.
In the two-sample setting, the two populations may share certain similar characteristics.
For example, the strength of lumber produced in Canada in different years may follow similar distributions \citep{ChenLiu2013,Cai2017,Cai2018}.
A certain relationship exists between the distributions of biomarkers for diagnosing Duchenne Muscular Dystrophy in the case and control groups \citep{yuan2020semiparametric}.
The fully nonparametric methods, however, ignore such information.
In this paper, we propose new semiparametric procedures for estimating $\mbox{\boldmath $\psi$}_0$, $\mbox{\boldmath $\psi$}_1$, and their functions
based on the semiparametric density ratio model (DRM) \citep{Anderson1979,Qin2017},
which effectively utilize the information in both populations.
Let $dG_i(x)$ be the probability density function of $G_i(x)$, $i=0,1$.
The DRM links
the two CDFs $G_0(x)$ and $G_1(x)$ in model (\ref{eq:1})
as
\begin{equation}
\label{drm}
dG_1(x) = \exp\{\alpha +\boldsymbol{\beta}^\top\boldsymbol q(x)\}dG_0(x)
\end{equation}
for a pre-specified, non-trivial, basis function $\boldsymbol q(x)$ of dimension $d$, and unknown parameters $\alpha$ and $\boldsymbol\beta$.
In (\ref{drm}), the baseline distribution $G_0(x)$ is not specified.
Hence the DRM is a semiparametric model and has the advantage to avoid making risky parametric assumptions on $G_0(x)$ and $G_1(x)$.
The DRM is also quite flexible and includes many important statistical models as special cases.
For example,
when $\boldsymbol{q}(x)=\log(x)$, the DRM embraces log-normal distributions with the same variance on the log scale,
as well as gamma distributions with the same scale parameter \citep{kay1987transformations}.
\cite{Jiang2012} pointed out that the DRM is actually broader than the Cox proportional hazards model.
The DRM is also closely related to the well-studied logistic regression \citep{Qin1997}.
Inference under the DRM can be converted to that under logistic regression \citep{Wang2017}.
The DRM has been proved to be a useful tool for making inference when there is an excess of zeros in the data. \cite{Wang2017,Wang2018} developed the EL ratio (ELR) statistics for testing the homogeneity of distributions and the equality of population means, respectively.
Under the same setup,
\cite{lu2020} considered a joint test for the equality of the zero proportions and the equality of the means of the two positive components.
Their simulation results show that the proposed tests have great power advantages over existing nonparametric tests.
To the best of our knowledge, semiparametric inference procedures such as point estimation and confidence regions
for the general parameters $\mbox{\boldmath $\psi$}_0$, $\mbox{\boldmath $\psi$}_1$, and their functions have not been explored
under the mixture model \eqref{eq:1} and the DRM \eqref{drm}. This paper aims to fill this void.
Under the mixture model \eqref{eq:1} and the DRM \eqref{drm},
we consider a class of general parameters $\mbox{\boldmath $\psi$}$ of length $p$,
defined as
\begin{equation}
\label{psi}
\mbox{\boldmath $\psi$}= \int_0^\infty \boldsymbol u(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$}) dG_0(x),
\end{equation}
where $\mbox{\boldmath $\nu$}=(\nu_0,\nu_1)^\top$, $\mbox{\boldmath $\theta$} = (\alpha,\mbox{\boldmath $\beta$}^{\top})^{\top}$, and $\boldsymbol u(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})=\left(u_1(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$}),\ldots,u_p(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})\right)^\top$ is some given $p\times 1$ dimensional function,
and the parameters defined through ${\bf g}(\mbox{\boldmath $\psi$})$, where ${\bf g}(\cdot): \mathbb{R}^p \to \mathbb{R}^q$ is a smooth function of $\mbox{\boldmath $\psi$}$.
Note that $\mbox{\boldmath $\psi$}$ covers $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$, defined in (\ref{functional}), as special cases.
To see this,
let
\begin{equation}
\label{ufun01}
\boldsymbol u(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})=\left(\begin{array}{c}
\boldsymbol u_0(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$}) \\
\boldsymbol u_1(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})
\end{array} \right)= \left(\begin{array}{c}
(1-\nu_0) {\bf a}(x) \\
(1-\nu_1) {\bf a}(x) \exp\{\alpha+\mbox{\boldmath $\beta$}^\top\boldsymbol q(x)\}
\end{array} \right).
\end{equation}
Then $\mbox{\boldmath $\psi$}=\left(\mbox{\boldmath $\psi$}_0^\top,\mbox{\boldmath $\psi$}_1^\top\right)^\top$ under the assumption that ${\bf a}(0)={\bf 0}$, as assumed after (\ref{functional}).
The parameters $\mbox{\boldmath $\psi$}$ and ${\bf g}(\mbox{\boldmath $\psi$})$ together cover many important summary quantities. See Section~\ref{examples} for examples.
Using the EL of \cite{owen},
we construct the maximum EL estimators (MELEs) of $\mbox{\boldmath $\psi$}$ and ${\bf g}(\mbox{\boldmath $\psi$})$.
We establish the asymptotic normality of the MELEs of $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$.
These results enable us to construct confidence regions for $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$ and perform hypothesis testing on $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$.
We apply the results for general $\mbox{\boldmath $\psi$}$ to $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$, and
show that the asymptotic variances of the MELEs of $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$
are smaller than or equal to those of nonparametric estimators of $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$.
The rest of this paper is organized as follows. In Section \ref{estimation}, we first present the MELE of $(\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})$,
and then propose the MELEs of $\mbox{\boldmath $\psi$}$ and ${\bf g}(\mbox{\boldmath $\psi$})$.
We study the asymptotic property of the MELEs of $(\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})$ as well as the MELEs of $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$.
These results are applied to the MELEs of $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$.
We further provide examples for $\mbox{\boldmath $\psi$}$ and ${\bf g}(\mbox{\boldmath $\psi$})$ which cover several important summary quantities.
Simulation results are presented in Section \ref{simu} and two real data applications are given in Section \ref{realdata}.
We conclude the paper with some discussion in Section \ref{conclude}.
For the convenience of presentation, all the technical details are provided in the supplementary material.
\section{Main results}
\label{estimation}
We denote by $n_{i0}$ and $n_{i1}$ the (random) numbers of zero and positive observations, respectively, in sample $i=0,1$. Clearly, $n_i = n_{i0}+n_{i1}$ for $i=0,1$.
Without loss of generality, we assume that the first $n_{i1}$ observations in group $i$, $X_{i1},\cdots, X_{in_{i1}}$, are positive,
and the remaining $n_{i0}$ observations are 0.
We use $n$ to denote the total (fixed) sample size, i.e., $n=n_0+n_1$.
\subsection{Point estimation of $\mbox{\boldmath $\psi$}$ and ${\bf g}(\mbox{\boldmath $\psi$})$}
We first discuss the maximum EL procedure for estimating the unknown parameters and functions in models (\ref{eq:1}) and (\ref{drm}).
With the two samples of observations from model (\ref{eq:1}), the full likelihood function is given as
\begin{eqnarray*}
\mathcal{L}_n& =& \prod_{i=0}^1 \left\{ v_i^{n_{i0}}\left(1-v_i\right)^{n_{i1}} \prod_{j=1}^{n_{i1}}dG_i\left(X_{ij}\right)\right\}.
\end{eqnarray*}
Following the EL principle \citep{owen}, we model the baseline distribution $G_0(x)$ as
\begin{equation} \label{elr.GA}
G_0(x)= \sum_{i=0}^1\sum_{j = 1}^{n_{i1}}p_{ij}I(X_{ij}\le x),
\end{equation}
where $p_{ij} = dG_0(X_{ij})$ for $i=0,1$ and $j=1,\ldots,n_{i1}$.
With (\ref{elr.GA}) and under the DRM (\ref{drm}), the full likelihood function can be rewritten as
\begin{eqnarray*}
\mathcal{L}_n& =& \prod_{i=0}^1 v_i^{n_{i0}}\left(1-v_i\right)^{n_{i1}}\cdot
\left\{\prod_{i=0}^1\prod_{j=1}^{n_{i1}}p_{ij}\right\}
\cdot
\left[ \prod_{j=1}^{n_{11}} \exp\left\{\alpha+\boldsymbol{\beta}^\top\boldsymbol q(X_{1j})\right\}\right],
\end{eqnarray*}
where $p_{ij}$'s satisfy the following constraints
\begin{equation}
\label{constraint1}
p_{ij}>0, \quad \sum_{i=0}^1\sum_{j=1}^{n_{i1}} p_{ij}=1, \quad \mbox{and} \quad \sum_{i=0}^1\sum_{j=1}^{n_{i1}} p_{ij}\exp\left\{\alpha +\boldsymbol \beta ^\top\boldsymbol q(X_{ij})\right\}=1.
\end{equation}
These constraints ensure that $G_0(x)$ and $G_1(x)$ are CDFs.
Let $\boldsymbol{P}=\{p_{ij}\}$.
The MELE of $(\boldsymbol{\nu},\boldsymbol{\theta}, \boldsymbol{P})$ is then defined as
$$
(\hat{\mbox{\boldmath $\nu$}},\hat{\boldsymbol{\theta}}, \hat {\boldsymbol{P}})=
\arg\max_{\mbox{\scriptsize \boldmath $\nu$}, \mbox{\scriptsize \boldmath $\theta$}, {\boldsymbol{P}} } \mathcal{L}_n
$$
subject to the constraints in (\ref{constraint1}).
To numerically calculate the MELE, we write the logarithm of the EL function $\mathcal{L}_n$ as
\begin{equation}\label{lik}
\tilde \ell(\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$}, \boldsymbol{P}) = \ell_0\left(\mbox{\boldmath $\nu$}\right) + \tilde \ell_1\left(\boldsymbol{\theta},\boldsymbol{P}\right),
\end{equation}
where
\begin{equation*}
\ell_0\left(\mbox{\boldmath $\nu$}\right) = \sum_{i=0}^1\log\left\{ v_i^{n_{i0}}\left( 1-v_i\right)^{n_{i1}} \right\}
~\text{and}~
\tilde \ell_1\left(\boldsymbol \theta,\boldsymbol{P}\right) = \sum_{j=1}^{n_{11}}\left\{\alpha +\boldsymbol \beta ^\top\boldsymbol q(X_{1j})\right\}+\sum_{i=0}^1\sum_{j=1}^{n_{i1}}\log p_{ij}.
\end{equation*}
Here $\ell_0\left(\mbox{\boldmath $\nu$}\right)$ is the binomial log-likelihood function corresponding to the number of zero observations and $\tilde \ell_1\left(\boldsymbol \theta, \boldsymbol{P}\right)$ represents the empirical log-likelihood function associated with the positive observations.
Following \cite{Wang2017}, we have $\hat{\mbox{\boldmath $\nu$}} = \arg\max_{\mbox{\scriptsize \boldmath $\nu$}} \ell_0(\mbox{\boldmath $\nu$})$ and
\begin{equation*}
(\hat \mbox{\boldmath $\theta$},\hat {\boldsymbol{P}} ) = \arg\max_{\mbox{\scriptsize \boldmath $\theta$},{\boldsymbol{P}} }\left\{\tilde \ell_1\left(\boldsymbol \theta,\boldsymbol{P}\right): ~
p_{ij}>0, ~\sum_{i=0}^1\sum_{j=1}^{n_{i1}} p_{ij}=1,~\sum_{i=0}^1\sum_{j=1}^{n_{i1}} p_{ij}\exp\left\{\alpha +\boldsymbol \beta ^\top\boldsymbol q(X_{ij})\right\}=1\right\}.
\end{equation*}
By the method of Lagrange multipliers, $\hat \mbox{\boldmath $\theta$}$ can be obtained by maximizing the following dual empirical log-likelihood function \citep{Cai2017}:
\begin{equation*}
\ell_1(\mbox{\boldmath $\theta$})=
- \sum_{i=0}^1\sum_{j=1}^{n_{i1}}
\log\left\{ 1 + \hat{\rho}[\exp\{\alpha + \mbox{\boldmath $\beta$}^{\top} \boldsymbol q(X_{ij})\}-1] \right\}
+ \sum_{j=1}^{n_{11}} \{\alpha + \mbox{\boldmath $\beta$}^{\top} \boldsymbol q(X_{1j}) \},
\end{equation*}
where $\hat{\rho} = n_{11}\{n_{01}+n_{11}\}^{-1}$.
That is, $\hat{\mbox{\boldmath $\theta$}}= \arg \max_{\mbox{\scriptsize \boldmath $\theta$}}\ell_1(\mbox{\boldmath $\theta$})$.
Note that $\hat\rho$ is a random variable under our setup.
This is fundamentally different from the case when there is no excess of zeros in the data \citep{Qin1997}, and it creates new theoretical challenges for our asymptotic development in the next section.
Once $\hat {\boldsymbol{\theta}} $ is obtained,
the MELEs of $\hat p_{ij}$'s are given as \citep{Wang2017}
$$
\hat{p}_{ij}= \{n_{01}+n_{11}\}^{-1}\left\{1 + \hat{\rho}[\exp\{\hat\alpha + \hat\mbox{\boldmath $\beta$}^{\top} \boldsymbol q(X_{ij})\}-1] \right\}^{-1},
$$
and
the MELEs of $G_0(x)$ and $G_1(x)$ are
\begin{equation*}
\hat G_0(x)=\sum_{i=0}^1\sum_{j=1}^{n_{i1}} \hat{p}_{ij} I(X_{ij}\leq x)~~~\text{and}~~~\hat G_1(x)=\sum_{i=0}^1\sum_{j=1}^{n_{i1}} \hat{p}_{ij}\exp\{\hat{\alpha} + \hat\mbox{\boldmath $\beta$}^{\top} \boldsymbol q(X_{ij})\} I(X_{ij}\leq x).
\end{equation*}
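As a quick numerical check, the fitted probabilities $\hat p_{ij}$ satisfy the two empirical-likelihood constraints by construction; a sketch for $\boldsymbol q(x)=\log x$ with illustrative names:

```python
import numpy as np

# Sketch (not the authors' code): given hat-theta = (alpha_hat, beta_hat),
# compute the fitted probabilities p_ij-hat and the MELEs of G0 and G1.

def fitted_probs(X0, X1, alpha_hat, beta_hat):
    X = np.concatenate([X0, X1])                  # pooled positive observations
    n, rho = len(X), len(X1) / len(X)             # rho-hat = n11 / (n01 + n11)
    w = np.exp(alpha_hat + beta_hat * np.log(X))  # exp{alpha + beta^T q(x)}
    p = 1.0 / (n * (1.0 + rho * (w - 1.0)))       # p_ij-hat formula
    return X, p, w

def G0_hat(x, X, p):
    # MELE of G0: sum of p_ij-hat over observations <= x
    return np.sum(p * (X <= x))

def G1_hat(x, X, p, w):
    # MELE of G1: tilted version using the DRM weights
    return np.sum(p * w * (X <= x))

# with alpha = beta = 0 the weights are 1 and each p_ij-hat equals 1/n
X, p, w = fitted_probs(np.array([1.0, 2.0]), np.array([3.0, 4.0]), 0.0, 0.0)
```

The constraints $\sum_{ij}\hat p_{ij}=1$ and $\sum_{ij}\hat p_{ij}\exp\{\hat\alpha+\hat\mbox{\boldmath $\beta$}^\top\boldsymbol q(X_{ij})\}=1$ then hold automatically.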
By the definition of $\mbox{\boldmath $\psi$}$ in \eqref{psi}, $\mbox{\boldmath $\psi$}$ is a function of $(\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})$ and $G_0$.
With MELEs $(\hat \mbox{\boldmath $\nu$},\hat \mbox{\boldmath $\theta$})$ and $\hat G_0$, the MELE of $\mbox{\boldmath $\psi$}$ is given by
\begin{eqnarray}
\label{hat.psi}
\hat{\mbox{\boldmath $\psi$}} =
\sum_{i=0}^1\sum_{j=1}^{n_{i1}} \hat p_{ij}\boldsymbol u(X_{ij};\hat\mbox{\boldmath $\nu$},\hat\mbox{\boldmath $\theta$}),
\end{eqnarray}
and the estimator of $\boldsymbol g(\mbox{\boldmath $\psi$})$ is $\boldsymbol g(\hat\mbox{\boldmath $\psi$})$.
{When $\boldsymbol u(x;\mbox{\boldmath $\nu$}, \mbox{\boldmath $\theta$})$ takes the specific form in (\ref{ufun01}), we obtain the MELEs of $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$, defined in (\ref{functional})}, as
\begin{eqnarray}
\label{hat.psi01}
\hat{\mbox{\boldmath $\psi$}}_0 &=& \sum_{i=0}^1\sum_{j=1}^{n_{i1}} \hat p_{ij}(1-\hat \nu_0) {\bf a}(X_{ij}) ~~\mbox{ and }~~
\hat{\mbox{\boldmath $\psi$}}_1 = \sum_{i=0}^1\sum_{j=1}^{n_{i1}} \hat p_{ij}(1-\hat \nu_1) {\bf a}(X_{ij})\exp\{\hat\alpha+\hat\mbox{\boldmath $\beta$}^\top\boldsymbol q(X_{ij})\}.
\end{eqnarray}
\subsection{Asymptotic properties}
In this section, we first study the asymptotic properties of $\hat\mbox{\boldmath $\eta$}=(\hat\mbox{\boldmath $\nu$}^\top,\hat\rho,\hat\mbox{\boldmath $\theta$}^\top)^\top$ and then apply these results
to establish the asymptotic properties of $\hat{\mbox{\boldmath $\psi$}}$ and $\boldsymbol g(\hat\mbox{\boldmath $\psi$})$.
For ease of presentation, we introduce some notation.
We use $\mbox{\boldmath $\nu$}^{*}$ and $\mbox{\boldmath $\theta$}^{*}$ to denote the true values of $\mbox{\boldmath $\nu$}$ and $\mbox{\boldmath $\theta$}$, respectively.
Let $\boldsymbol Q(x) = (1,\boldsymbol q(x)^\top)^\top$ and
\begin{eqnarray*}
&w = n_0/n,~\Delta^*=w(1-\nu_0^*)+(1-w)(1-\nu_1^*),
~\rho^*=\frac{(1-w)(1-\nu_1^*)}{\Delta^*},\\
&\omega(x)=\exp\{\mbox{\boldmath $\theta$}^{*\top} \boldsymbol Q(x)\},~h(x)=1+\rho^* \{\omega(x)-1\},~h_1(x)=\rho^* \omega(x)/h(x),\\
& \boldsymbol A_{\mbox{\scriptsize \boldmath $\nu$}} =
\mbox{diag}\left\{ \frac{w}{\nu_0^*(1-\nu_0^*)}, \frac{1-w}{\nu_1^*(1-\nu_1^*)} \right\},~
\boldsymbol A_{\mbox{\scriptsize \boldmath $\theta$}} =
\Delta^*(1-\rho^*)E_0\left\{h_1(X) \boldsymbol Q(X)\boldsymbol Q(X)^\top \right\},
\end{eqnarray*}
where $E_0(\cdot)$ represents the expectation operator with respect to $G_0$ and $X$ refers to a random variable from $G_0$.
Note that although $\omega(\cdot)$, $h(\cdot)$, and $h_1(\cdot)$ also depend on $\mbox{\boldmath $\theta$}^*$ and/or $\rho^*$, we drop these redundant parameters for notational simplicity.
The asymptotic results in this section are developed under the following regularity conditions.
\begin{itemize}
\item [C1:] The true value $\nu_i^*$ lies strictly between 0 and 1 for $i = 0,1$.
\item [C2:] As the total sample size $n$ goes to infinity, $n_0/n = w$ for some constant $w \in (0,1)$.
\item [C3:] The components of $\boldsymbol Q(x)$ are continuous and stochastically
linearly independent.
\item [C4:] $\int_0^{\infty} \exp\{\mbox{\boldmath $\beta$}^\top\boldsymbol q(x)\}dG_0(x) < \infty$ for all $\mbox{\boldmath $\beta$}$ in a neighbourhood of the true value $\mbox{\boldmath $\beta$}^*$.
\end{itemize}
Condition C1 ensures the binomial likelihood $\ell_0(\mbox{\boldmath $\nu$})$ has regular properties.
Condition C2 means that both $n_0$ and $n_1$ go to $\infty$ at the same rate.
Conditions C1 and C2 imply that $\boldsymbol A_{\mbox{\scriptsize \boldmath $\nu$}}$ is positive definite.
Condition C3 ensures that no linear combinations of any components of $\boldsymbol Q(x)$ can be 0 with probability 1 under $G_0$.
Condition C4 guarantees the existence of finite moments of $\boldsymbol q(X)$ in a neighborhood of $\mbox{\boldmath $\beta$}^*$ under both $G_0(x)$ and $G_1(x)$.
Conditions C3 and C4 together imply that $\boldsymbol A_{\mbox{\scriptsize \boldmath $\theta$}}$ is positive definite.
The following theorem establishes the asymptotic normality of $\hat{\mbox{\boldmath $\eta$}}$.
\begin{theorem}
\label{thm1}
Let $\mbox{\boldmath $\eta$}^*=(\mbox{\boldmath $\nu$}^{*\top},\rho^*,\mbox{\boldmath $\theta$}^{*\top})^\top$.
Assume that Conditions C1--C4 are satisfied.
As the total sample size $n\to\infty$,
\begin{eqnarray*}
n^{1/2}(\hat\mbox{\boldmath $\eta$}-\mbox{\boldmath $\eta$}^*)
\to
N\left(\mbox{\bf 0}, \mbox{\boldmath $\Lambda$} \right)
\end{eqnarray*}
in distribution, where
\begin{equation*}
\mbox{\boldmath $\Lambda$}=
\left( \begin{array}{ccc}
\boldsymbol A_{\mbox{\scriptsize \boldmath $\nu$}}^{-1} & \rho^*(1-\rho^*) \boldsymbol A_{\mbox{\scriptsize \boldmath $\nu$}}^{-1}\boldsymbol W^\top & \mbox{\bf 0} \\
\rho^*(1-\rho^*) \boldsymbol W \boldsymbol A_{\mbox{\scriptsize \boldmath $\nu$}}^{-1} & (\Delta^*)^{-1}\rho^*(1-\rho^*) \{\rho^*\nu_0^*+(1-\rho^*)\nu_1^*\} & \mbox{\bf 0} \\
\mbox{\bf 0} & \mbox{\bf 0} & \boldsymbol A_{\mbox{\scriptsize \boldmath $\theta$}}^{-1}-\frac{\boldsymbol e\bfe^\top}{\Delta^{*}\rho^*(1-\rho^*)} \\
\end{array}\right)
\end{equation*}
with $\boldsymbol W = \left( (1-\nu_0^*)^{-1}, -(1-\nu_1^*)^{-1} \right)$ and $\boldsymbol e=(1,\mbox{\bf 0}_{d\times1}^\top)^\top$.
\end{theorem}
\cite{Qin1997} considered the asymptotic normality of $\sqrt{n}(\hat\mbox{\boldmath $\theta$}-\mbox{\boldmath $\theta$}^*)$
when there is no excess of zeros in the data.
Theorem \ref{thm1} generalizes their results to the case when the data contains excessive zeros.
Furthermore, it establishes the joint limiting distribution of $\sqrt{n}(\hat\mbox{\boldmath $\theta$}-\mbox{\boldmath $\theta$}^*)$, $\sqrt{n}(\hat\mbox{\boldmath $\nu$}-\mbox{\boldmath $\nu$}^*)$, and $\sqrt{n}(\hat\rho-\rho^*)$,
where the latter two are induced by the semicontinuous data structure.
This joint limiting distribution plays an important role in deriving the asymptotic normality of $\hat\mbox{\boldmath $\psi$}$ in the following theorem.
\begin{theorem}
\label{thm2}
Let $\mbox{\boldmath $\psi$}^*$ be the true value of $\mbox{\boldmath $\psi$}$. Under the conditions of Theorem \ref{thm1},
as $n\to\infty$,
(a)
$
\sqrt{n}(\hat\mbox{\boldmath $\psi$} - \mbox{\boldmath $\psi$}^*) \to N(\mbox{\bf 0}, \mbox{\boldmath $\Gamma$})
$
in distribution, where
\begin{eqnarray*}
\label{bgamma.form}
\mbox{\boldmath $\Gamma$}= \frac{1}{\Delta^*}E_0\left\{\frac{\boldsymbol u(X;\mbox{\boldmath $\nu$}^*,\mbox{\boldmath $\theta$}^*)\boldsymbol u(X;\mbox{\boldmath $\nu$}^*,\mbox{\boldmath $\theta$}^*)^\top}{h(X)}\right\}
- \frac{\mbox{\boldmath $\psi$}^*\mbox{\boldmath $\psi$}^{*\top}}{\Delta^*}
+ \mathcal{M}_1 \boldsymbol A_{\mbox{\scriptsize \boldmath $\nu$}}^{-1} \mathcal{M}_1 ^\top
- \frac{\mathcal{M}_2 \mathcal{M}_2^\top}{\Delta^*\rho^*(1-\rho^*)}
+ \mathcal{M}_3 \boldsymbol A_{\mbox{\scriptsize \boldmath $\theta$}}^{-1} \mathcal{M}_3 ^\top,
\end{eqnarray*}
with
\begin{eqnarray*}
\label{CM1}
\mathcal{M}_1&=& E_0\left\{ \frac{\partial \boldsymbol u(X;\mbox{\boldmath $\nu$}^*,\mbox{\boldmath $\theta$}^*)}{\partial\mbox{\boldmath $\nu$}} \right\}, \\
\label{CM2}\mathcal{M}_2&=& E_0\left[\left\{\partial\boldsymbol u(X;\mbox{\boldmath $\nu$}^*,\mbox{\boldmath $\theta$}^*)/\partial\mbox{\boldmath $\theta$}\right\}\boldsymbol e\right]- \rho^*\mbox{\boldmath $\psi$}^*,\\
\label{CM3}\mathcal{M}_3&=& E_0\left\{\partial\boldsymbol u(X;\mbox{\boldmath $\nu$}^*,\mbox{\boldmath $\theta$}^*)/\partial\mbox{\boldmath $\theta$} -
h_1(X)\boldsymbol u(X;\mbox{\boldmath $\nu$}^*,\mbox{\boldmath $\theta$}^*)\boldsymbol Q(X)^\top\right\};
\end{eqnarray*}
(b) for a smooth function ${\bf g}(\cdot): \mathbb{R}^p \to \mathbb{R}^q$,
$
\sqrt{n}\left\{ \boldsymbol g(\hat\mbox{\boldmath $\psi$}) - \boldsymbol g(\mbox{\boldmath $\psi$}^*)\right\} \to N\left(\mbox{\bf 0}, \mbox{\boldmath $\Gamma$}_{\boldsymbol g} \right)
$
in distribution, where
$$
\mbox{\boldmath $\Gamma$}_{\boldsymbol g} = \left\{\frac{\partial \boldsymbol g(\mbox{\boldmath $\psi$}^*)}{\partial \mbox{\boldmath $\psi$}}\right\} \mbox{\boldmath $\Gamma$} \left\{\frac{\partial \boldsymbol g(\mbox{\boldmath $\psi$}^*)}{\partial \mbox{\boldmath $\psi$}}\right\}^\top.
$$
\end{theorem}
\cite{li2018comparison} derived a similar result in their Theorem 2.1 for $\hat\mbox{\boldmath $\psi$}$ when there is no excess of zeros in the data and $p=1$.
Theorem \ref{thm2} covers the case when there are excessive zeros.
The two results complement each other to cover both cases.
We now apply the results for $\hat\mbox{\boldmath $\psi$}$ in Theorem \ref{thm2} to $\hat\mbox{\boldmath $\psi$}_0$ and $\hat \mbox{\boldmath $\psi$}_1$ in (\ref{hat.psi01}),
and compare them with the fully nonparametric estimators $\tilde\mbox{\boldmath $\psi$}_0$ and $\tilde \mbox{\boldmath $\psi$}_1$:
$$
\tilde\mbox{\boldmath $\psi$}_0=\frac{1}{n_0}\sum_{j=1}^{n_0} {\bf a}(X_{0j})
\quad \mbox{ and }\quad
\tilde\mbox{\boldmath $\psi$}_1=\frac{1}{n_1}\sum_{j=1}^{n_1} {\bf a}(X_{1j}).
$$
For $i=0,1$, let
$$
{\bf V}_i=\int_{0}^\infty {\bf a}(x)\{ {\bf a}(x) \}^\top dF_i(x)- \int_{0}^\infty {\bf a}(x)dF_i(x) \int_{0}^\infty \{ {\bf a}(x) \}^\top dF_i(x) .
$$
Then
$
\sqrt{n}\left( \tilde\mbox{\boldmath $\psi$}_0^\top-\mbox{\boldmath $\psi$}_0^\top, \tilde\mbox{\boldmath $\psi$}_1^\top-\mbox{\boldmath $\psi$}_1^\top \right)^\top
$
has the asymptotic variance-covariance matrix
$$
\mbox{\boldmath $\Gamma$}_{non}=\left(
\begin{array}{cc}
w^{-1}{\bf V}_0 & \mbox{\bf 0} \\
\mbox{\bf 0} & (1-w)^{-1} {\bf V}_1
\end{array}
\right).
$$
In comparison with the asymptotic variance of the MELEs $\hat\mbox{\boldmath $\psi$}_0$ and $\hat \mbox{\boldmath $\psi$}_1$ given in (\ref{hat.psi01}), we have the following results.
\begin{corollary}
\label{thm3}
Under the conditions of Theorem \ref{thm1},
as $n\to\infty$,
\begin{eqnarray*}
\sqrt{n}
\left(
\begin{array}{c}
\hat\mbox{\boldmath $\psi$}_0-\mbox{\boldmath $\psi$}_0 \\
\hat\mbox{\boldmath $\psi$}_1-\mbox{\boldmath $\psi$}_1\\
\end{array}
\right)
\to N(\mbox{\bf 0}, \mbox{\boldmath $\Gamma$}_{sem})
\end{eqnarray*}
in distribution,
where
$$
\mbox{\boldmath $\Gamma$}_{sem}=
\mbox{\boldmath $\Gamma$}_{non}-\Delta^*(1-\rho^*) E_0\left\{h_1(X)
\left(
\begin{array}{c}
w^{-1} {\bf d}(X)\\
-(1-w)^{-1}{\bf d}(X)\\
\end{array}
\right)
\left(
\begin{array}{c}
w^{-1} {\bf d}(X)\\
-(1-w)^{-1}{\bf d}(X)\\
\end{array}
\right)^\top
\right\},
$$
with
$$
{\bf d}(X)={\bf a}(X)- \Delta^*(1-\rho^*) E_0\left\{h_1(X){\bf a}(X)\boldsymbol Q(X)^\top\right\} \boldsymbol A_{\mbox{\scriptsize \boldmath $\theta$}}^{-1} \boldsymbol Q(X).
$$
\end{corollary}
Corollary \ref{thm3} implies that $ \mbox{\boldmath $\Gamma$}_{non}- \mbox{\boldmath $\Gamma$}_{sem}$ is positive semidefinite.
Hence the proposed MELEs of $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$
are more efficient than the corresponding nonparametric ones.
Simulation studies in Section 3 further confirm this property.
\subsection{Confidence regions and hypothesis tests for $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$}
The two variance-covariance matrices $\mbox{\boldmath $\Gamma$}$ and $\mbox{\boldmath $\Gamma$}_{\boldsymbol g}$
may depend on $\mbox{\boldmath $\psi$}^*$ and $G_0(x)$.
Replacing them by $\hat\mbox{\boldmath $\psi$}$ and $\hat G_0(x)$,
we get the corresponding estimators $\hat\mbox{\boldmath $\Gamma$}$ and $\hat\mbox{\boldmath $\Gamma$}_{\boldsymbol g}$.
With the results in Theorem \ref{thm1}, it can be easily shown that both
$\hat\mbox{\boldmath $\Gamma$}$ and $\hat\mbox{\boldmath $\Gamma$}_{\boldsymbol g}$ are consistent. The details are hence omitted.
\begin{theorem}
\label{coro1}
Under the conditions of Theorem \ref{thm1}, as $n \to \infty$,
$\hat\mbox{\boldmath $\Gamma$}\to \mbox{\boldmath $\Gamma$}$
and
$\hat\mbox{\boldmath $\Gamma$}_{\boldsymbol g}\to \mbox{\boldmath $\Gamma$}_{\boldsymbol g}$
both in probability.
\end{theorem}
Theorems \ref{thm2} and \ref{coro1} together imply that
$$
(\hat \mbox{\boldmath $\psi$}-\mbox{\boldmath $\psi$}^*)^\top \hat \mbox{\boldmath $\Gamma$}^{-1}(\hat \mbox{\boldmath $\psi$}-\mbox{\boldmath $\psi$}^*)
\quad \mbox{ and }\quad
\{ \boldsymbol g(\hat \mbox{\boldmath $\psi$})-\boldsymbol g(\mbox{\boldmath $\psi$}^* )\}^\top \hat \mbox{\boldmath $\Gamma$}_{\boldsymbol g}^{-1}\{ \boldsymbol g(\hat \mbox{\boldmath $\psi$})-\boldsymbol g(\mbox{\boldmath $\psi$}^* )\}
$$
converge in distribution to $\chi^2_p$ and $\chi^2_q$, respectively.
Hence both of them are asymptotically pivotal and can be used to construct Wald-type
confidence regions for $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$ and perform hypothesis tests about $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$.
For illustration, we consider the case when the dimension $q$ of ${\bf g}(\cdot)$ is 1, which is perhaps the most common situation in applications.
Let $\phi=\boldsymbol g(\mbox{\boldmath $\psi$})$.
Next, we explain how to apply the obtained results to construct a $100(1-\gamma)\%$ CI for $\phi$
and perform the hypothesis test for $H_0: \phi=0$.
For general $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$, similar procedures apply.
Let $\hat\phi=\boldsymbol g(\hat\mbox{\boldmath $\psi$})$ and $\hat\sigma_\phi^2=\hat\mbox{\boldmath $\Gamma$}_{\boldsymbol g}$.
Then a $100(1-\gamma)\%$ CI for $\phi$ is
\begin{equation}
\label{CI.phi}
\mathcal{I}_{\phi}=\left\{\phi: (\hat\phi-\phi)^2/\hat\sigma_\phi^2\leq\chi^2_{1,\gamma} \right\}
=\left[\hat\phi-z_{\gamma/2}\hat\sigma_\phi,\hat\phi+z_{\gamma/2}\hat\sigma_\phi\right],
\end{equation}
where $\chi^2_{1,\gamma}$ and $z_{\gamma/2}$ denote the ($1-\gamma$) quantile of the $\chi^2_1$ distribution and the ($1-\gamma/2$) quantile of the $N(0,1)$ distribution, respectively.
For testing $H_0: \phi=0$,
we reject the null hypothesis if
\begin{equation}
\label{test.phi}
\hat\phi ^2/\hat\sigma_\phi^2>\chi^2_{1,\gamma}
~~
\mbox{
or equivalently
}
~~
|\hat\phi/\hat\sigma_\phi|>z_{\gamma/2}
\end{equation}
at the given significance level $\gamma$.
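The CI in (\ref{CI.phi}) and the rejection rule in (\ref{test.phi}) translate directly into code; the sketch below uses illustrative names and assumes $\hat\phi$ and $\hat\sigma_\phi$ have already been computed from the MELE machinery.

```python
import numpy as np
from scipy.stats import chi2, norm

# Wald-type CI and test for a scalar phi = g(psi), given phi-hat and its
# estimated standard error se_phi (square root of Gamma_g-hat / n terms).

def wald_ci(phi_hat, se_phi, gamma=0.05):
    z = norm.ppf(1 - gamma / 2)                 # z_{gamma/2}
    return phi_hat - z * se_phi, phi_hat + z * se_phi

def wald_test_reject(phi_hat, se_phi, gamma=0.05):
    # reject H0: phi = 0 iff phi-hat^2 / sigma-hat^2 > chi^2_{1,gamma};
    # equivalent to |phi-hat / sigma-hat| > z_{gamma/2}
    return phi_hat**2 / se_phi**2 > chi2.ppf(1 - gamma, df=1)

lo, hi = wald_ci(1.0, 0.5)
```

The two forms in (\ref{test.phi}) agree because $\chi^2_{1,\gamma}=z_{\gamma/2}^2$.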
\subsection{Examples of $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$}
\label{examples}
In this section, we provide some examples to demonstrate that $\mbox{\boldmath $\psi$}$ and $\boldsymbol g(\mbox{\boldmath $\psi$})$ cover many important summary quantities.
The proposed methods and the general results in Sections 2.1--2.3 can be readily applied to these quantities.
\begin{example} (Uncentered moments)
Let $\mu_i^{(k)}=\int_{0}^\infty x^k dF_i(x)$ be the $k$th (uncentered) moments of $F_i(x)$, $i=0,1$.
When $k=1$, we write $\mu_i=\mu_i^{(1)}$.
Clearly, if
\begin{equation}
\label{ufun.moments}
u_1(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})=(1-\nu_0)x^k~~~\text{and}~~~u_2(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$}) = (1-\nu_1)x^k\exp\{\alpha + \mbox{\boldmath $\beta$}^{\top} \boldsymbol q(x)\},
\end{equation}
then
$\mbox{\boldmath $\psi$}=(\mu_0^{(k)},\mu_1^{(k)})^\top$.
\end{example}
\begin{example} (Mean ratio)
\label{example2}
Let $\delta=\mu_1/\mu_0$ denote the mean ratio of two populations.
Setting $k=1$ in (\ref{ufun.moments}), we obtain $\mbox{\boldmath $\psi$}=(\mu_0,\mu_1)^\top$.
Further let $g(x_1,x_2)=x_2/x_1$; then $\delta=g(\mbox{\boldmath $\psi$})$.
We can directly construct a CI for $\delta$ by using the result given in (\ref{CI.phi}).
An alternative is to consider $g(x_1,x_2)=\log(x_2)- \log( x_1)$, which gives $g(\mbox{\boldmath $\psi$})=\log \delta$.
We can then use the form of (\ref{CI.phi}) to construct a CI for $\log\delta$ first and transform it to a CI for $\delta$.
Our simulations indicate that the second approach leads to a CI with better coverage accuracy.
\end{example}
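A sketch of the second approach in Example \ref{example2}, with illustrative names; \texttt{se\_log} denotes the estimated standard error of $\log\hat\delta$, which would be obtained from $\hat\mbox{\boldmath $\Gamma$}_{\boldsymbol g}$ via the delta method.

```python
import numpy as np
from scipy.stats import norm

# Log-transform CI for the mean ratio delta: construct the Wald CI for
# log(delta) and exponentiate the endpoints.

def ci_delta_log_scale(delta_hat, se_log, gamma=0.05):
    z = norm.ppf(1 - gamma / 2)
    lo = np.exp(np.log(delta_hat) - z * se_log)
    hi = np.exp(np.log(delta_hat) + z * se_log)
    return lo, hi   # endpoints are always positive, matching delta > 0

lo, hi = ci_delta_log_scale(2.0, 0.25)
```

A practical advantage of this construction is that the transformed CI always respects the parameter space $\delta>0$, whereas the direct Wald CI for $\delta$ can include negative values.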
\begin{example} (Centered moments)
\label{example3}
Let $C_i^{(k)}=\int_{0}^\infty (x-\mu_i) ^k dF_i(x)$ be the $k$th centered moments of $F_i(x)$, $i=0,1$.
When $k=2$, we write $\sigma_i^2=C_i^{(2)}$.
As demonstrated in \cite{Serfling1980}, centered moments $C_i^{(k)}$ can be written as functions of $\mu_i^{(1)},\ldots, \mu_i^{(k)}$.
For illustration, we concentrate on $k=2$ and consider the variances of the two populations, $\sigma_0^2$ and $\sigma_1^2$.
Let
\begin{equation*}
\boldsymbol u(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})=\left( (1-\nu_0)x, (1-\nu_0)x^2, (1-\nu_1)x\exp\{\alpha + \mbox{\boldmath $\beta$}^{\top} \boldsymbol q(x)\}, (1-\nu_1)x^2\exp\{\alpha + \mbox{\boldmath $\beta$}^{\top} \boldsymbol q(x)\}\right)^\top,
\end{equation*}
then
$\mbox{\boldmath $\psi$}=(\mu_0, \mu_0^{(2)},\mu_1,\mu_1 ^{(2)} )^\top$.
Define $\boldsymbol g(\cdot)$ as
$$
\boldsymbol g(x_1,x_2,x_3,x_4)=(x_2-x_1^2,x_4-x_3^2)^\top.
$$
We have $\boldsymbol g(\mbox{\boldmath $\psi$})=(\sigma_0^2,\sigma_1^2)^\top$.
The results in Theorem \ref{thm2} can be used to obtain the joint limiting distribution of $\sqrt{n}(\hat\sigma_0^2-\sigma_0^2, \hat\sigma_1^2-\sigma_1^2)^\top$, where $\hat\sigma_0^2$ and $\hat\sigma_1^2$ are the MELEs of $\sigma_0^2$ and $\sigma_1^2$, respectively.
If we choose
$$
\boldsymbol g(x_1,x_2,x_3,x_4)=(x_4-x_3^2)-(x_2-x_1^2),
$$
then $\boldsymbol g(\mbox{\boldmath $\psi$})= \sigma_1^2-\sigma_0^2$, and the procedure described in (\ref{test.phi}) can be used to test $H_0:\sigma_0^2=\sigma_1^2$.
\end{example}
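The delta-method step in Theorem \ref{thm2}(b) can be checked numerically with a finite-difference gradient; a sketch with illustrative names, using the $\boldsymbol g$ for the variance difference $\sigma_1^2-\sigma_0^2$ from Example \ref{example3}:

```python
import numpy as np

# Given an estimate Gamma_hat of the covariance of psi-hat and a smooth g,
# the asymptotic variance of g(psi-hat) is grad(g)^T Gamma grad(g).  The
# gradient is approximated by central finite differences.

def delta_method_var(g, psi_hat, Gamma_hat, eps=1e-6):
    p = len(psi_hat)
    grad = np.zeros(p)
    for k in range(p):
        e = np.zeros(p)
        e[k] = eps
        grad[k] = (g(psi_hat + e) - g(psi_hat - e)) / (2 * eps)
    return grad @ Gamma_hat @ grad

# g(psi) = (x4 - x3^2) - (x2 - x1^2) with psi = (mu_0, mu_0^(2), mu_1, mu_1^(2))
g_var_diff = lambda x: (x[3] - x[2] ** 2) - (x[1] - x[0] ** 2)

v = delta_method_var(g_var_diff, np.array([1.0, 2.0, 1.0, 3.0]), np.eye(4))
```

For this quadratic $\boldsymbol g$ the central difference recovers the exact gradient $(2x_1,-1,-2x_3,1)$, so the computed variance matches the closed-form value.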
\begin{example} (Coefficient of variation)
\label{example4}
Let $CV_i=\sigma_i/\mu_i$ be the coefficient of variation of the $i$th population, $i=0,1$.
Suppose that the same $\boldsymbol u(\cdot)$ function specified in Example \ref{example3} is used. If we choose $\boldsymbol g(\cdot)$ to be
$$
\boldsymbol g(x_1,x_2,x_3,x_4)=( \sqrt{x_2}/x_1,\sqrt{x_4}/x_3)^\top,
$$
then $\boldsymbol g(\mbox{\boldmath $\psi$})=(CV_0,CV_1)^\top$.
If we choose $\boldsymbol g(x_1,x_2,x_3,x_4)=\sqrt{x_4}/x_3- \sqrt{x_2}/x_1 $, then $\boldsymbol g(\mbox{\boldmath $\psi$})=CV_1-CV_0$, and the procedure described in (\ref{test.phi}) can be used to test $H_0:CV_0=CV_1$.
\end{example}
\begin{example} (Generalized entropy class of inequality measures)
Let
$$
GE_{i}^{(\xi)}
=\left\{
\begin{array}{ll}
\frac{1}{\xi^2-\xi}\left\{ \int_{0}^\infty \left( \frac{x}{\mu_i} \right )^\xi d F_i(x)-1 \right\},&\mbox{if }\xi\neq0,1,\\
-\int_{0}^\infty \log\left(\frac{x}{\mu_i}\right )dF_i(x),&\mbox{if }\xi=0,\\
\int_{0}^\infty \frac{x}{\mu_i} \log\left(\frac{x}{\mu_i}\right )dF_i(x) ,&\mbox{if }\xi=1,\\
\end{array}
\right.
$$
be the generalized entropy class of inequality measures of the $i$th population, $i=0,1$.
Note that $GE_{i}^{(\xi)}$ is not well-defined for a population with excessive zeros when $\xi=0$.
Under our setup, $(GE_0^{(\xi)},GE_1^{(\xi)})^\top$ can also be written as $\boldsymbol g(\mbox{\boldmath $\psi$})$
with certain $\boldsymbol u(\cdot)$ and $\boldsymbol g(\cdot)$ functions as long as $\xi\neq 0$.
For illustration, we consider $\xi=1$.
Let
\begin{equation*}
\boldsymbol u(x;\mbox{\boldmath $\nu$},\mbox{\boldmath $\theta$})=\left( (1-\nu_0)x, (1-\nu_0)x \log(x), (1-\nu_1)x\exp\{\alpha + \mbox{\boldmath $\beta$}^{\top} \boldsymbol q(x)\}, (1-\nu_1)x\log(x) \exp\{\alpha + \mbox{\boldmath $\beta$}^{\top} \boldsymbol q(x)\}\right)^\top
\end{equation*}
and
$$
\boldsymbol g(x_1,x_2,x_3,x_4)=( x_2/x_1-\log x_1,x_4/x_3-\log x_3)^\top.
$$
Then $\boldsymbol g(\mbox{\boldmath $\psi$})=( GE_0^{(1)}, GE_1^{(1)} )^\top$.
Similar to Examples \ref{example3} and \ref{example4}, we can choose an appropriate $\boldsymbol g(\cdot)$ function to construct a testing procedure
for $H_0: GE_0^{(1)}=GE_1^{(1)}$.
\end{example}
\section{Simulation study}
\label{simu}
In this section, we conduct simulations to compare the finite sample performance of our proposed estimators and CIs with some existing methods.
For the comparison of point estimators, we consider three parameters: the mean ratio $\delta$ discussed in Example \ref{example2}, and the population variances $\sigma^2_0$ and $\sigma^2_1$ discussed in Example \ref{example3}.
For comparison of CIs, we mainly focus on the mean ratio $\delta$.
\subsection{Simulation setup}
In our simulations, the random observations are generated from the mixture model (\ref{eq:1}), with $G_i$ being the log-normal distribution.
We use the log-normal distribution in simulations because it has positive support and is highly skewed to the right.
These properties allow us to check the applicability of the proposed method for skewed data, which is commonly encountered in practice.
We use $\mathcal{LN}(a , b)$ to denote a log-normal distribution, where $a$ and $b$ are respectively the mean and variance on the log scale. The parameter settings for the simulation studies are given in Table \ref{tab:1}.
\begin{table}[!htt]
\caption{Parameter settings for simulation studies: $G_0=\mathcal{LN}(a_0 , b_0)$ and $G_1=\mathcal{LN}(a_1 , b_1)$.}
\tabcolsep 1mm
\scriptsize
\centering
\begin{tabular}{ccccccc}
\hline
Model & $(\nu_0,\nu_1)$ & $(a_0,a_1)$ & $(b_0,b_1)$ & $(\mu_0,\mu_1)$ & $(\sigma_0^2,\sigma_1^2)$ & $\delta$ \\
\hline
1 & (0.30, 0.30) & (0.00, 0.00) & (1.00, 1.00) & (1.15, 1.15) & (3.84, 3.84) & 1.00 \\
2 & (0.70, 0.70) & (0.00, 0.00) & (1.00, 1.00) & (0.49, 0.49) & (1.97, 1.97) & 1.00 \\
3 & (0.30, 0.50) & (0.33, 0.66) & (1.00, 1.00) & (1.61, 1.59) & (7.43, 11.29) & 0.99 \\
4 & (0.50, 0.70) & (0.37, 0.89) & (1.00, 1.00) & (1.19, 1.20) & (6.32, 11.69) & 1.01 \\
5 & (0.50, 0.30) & (0.00, 0.00) & (1.00, 1.00) & (0.82, 1.15) & (3.02, 3.84) & 1.40 \\
6 & (0.70, 0.50) & (0.00, 0.00) & (1.00, 1.00) & (0.49, 0.82) & (1.97, 3.02) & 1.67 \\
7 & (0.60, 0.40) & (0.00, 0.00) & (1.00, 1.00) & (0.66, 0.99) & (2.52, 3.45) & 1.50 \\
8 & (0.30, 0.30) & (0.00, 0.50) & (1.00, 1.00) & (1.15, 1.90) & (3.84, 10.44) & 1.65 \\
9 & (0.70, 0.70) & (0.00, 0.75) & (1.00, 1.00) & (0.49, 1.05) & (1.97, 8.84) & 2.12 \\
10 & (0.40, 0.60) & (0.00, 1.00) & (1.00, 1.00) & (0.99, 1.79) & (3.45, 18.63) & 1.81 \\
\hline
\end{tabular}
\label{tab:1}
\end{table}
For all the models considered in Table \ref{tab:1}, the DRM \eqref{drm} is satisfied with $\boldsymbol q(x)=\log x$.
For each model, we consider four combinations of sample sizes $(n_0, n_1)$: $(50,50)$, $(100,100)$, $(50,150)$, and $(150,50)$.
The number of replications is 10,000 for each configuration of the parameter settings.
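The data-generating mechanism above can be sketched as follows (illustrative names; Model 1 of Table \ref{tab:1} as an example):

```python
import numpy as np

# Semicontinuous samples from the zero-inflated log-normal mixture:
# each observation is 0 with probability nu, otherwise drawn from LN(a, b),
# where a and b are the mean and variance on the log scale.

def draw_sample(n, nu, a, b, rng):
    x = rng.lognormal(mean=a, sigma=np.sqrt(b), size=n)
    x[rng.random(n) < nu] = 0.0
    return x

rng = np.random.default_rng(123)
x0 = draw_sample(200_000, 0.30, 0.0, 1.0, rng)   # group 0 under Model 1

# population mean of the mixture: (1 - nu) * exp(a + b/2)
mu0_theory = (1 - 0.30) * np.exp(0.0 + 1.0 / 2)  # = 1.15 as in Table 1
```

The large sample is used here only to verify that the mixture mean matches the $\mu_0$ column of Table \ref{tab:1}; the simulations themselves use the sample sizes listed above.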
\subsection{Comparison for point estimators}
We first study the finite sample performance of point estimators. Under the model \eqref{eq:1} and DRM \eqref{drm}, our proposed estimators for $\delta$, $\sigma_0^2$, and $\sigma_1^2$ are
\begin{equation*}
\hat{\delta} = \frac{\hat\mu_1}{\hat\mu_0},~~
\hat \sigma_0^2 = (1-\hat \nu_0)\sum_{i=0}^1\sum_{j=1}^{n_{i1}} \hat p_{ij}X_{ij}^2-\hat \mu_0^2,
~~\text{and}~~
\hat \sigma_1^2 = (1-\hat \nu_1)\sum_{i=0}^1\sum_{j=1}^{n_{i1}} \hat p_{ij}\exp\left\{\hat\alpha+\hat\mbox{\boldmath $\beta$}^\top\boldsymbol q(X_{ij})\right\} X_{ij}^2-\hat\mu_1^2,
\end{equation*}
respectively, with
\begin{equation*}
\hat\mu_0 = (1-\hat \nu_0)\sum_{i=0}^1\sum_{j=1}^{n_{i1}}\hat p_{ij}X_{ij}
~~\text{and}~~
\hat \mu_1 = (1-\hat \nu_1)\sum_{i=0}^1\sum_{j=1}^{n_{i1}}\hat p_{ij}\exp\left\{\hat\alpha+\hat\mbox{\boldmath $\beta$}^\top\boldsymbol q(X_{ij})\right\}X_{ij}.
\end{equation*}
We use simulation studies to compare the proposed estimators $\hat\delta$, $\hat\sigma^2_0$, and $\hat\sigma^2_1$ with the fully nonparametric estimators
\begin{equation*}
\tilde\delta= \frac{\tilde \mu_1}{\tilde \mu_0}, ~~
\tilde \sigma^2_i = \frac{1}{n_i-1}\sum_{j = 1}^{n_i}(X_{ij}-\tilde \mu_i)^2
~~\text{with}~~
\tilde \mu_i=\frac{1}{n_i}\sum_{j = 1}^{n_i}X_{ij},
\end{equation*}
for $i = 0,1$.
The performance of a point estimator is evaluated in terms of the bias and mean square error (MSE).
The simulation results are summarized in Table \ref{tabratio}.
\begin{table}[!htt]
\caption{Bias and mean square error of point estimates for $\delta$, $\sigma^2_0$, and $\sigma^2_1$.}
\centering
\tabcolsep 1mm
\scriptsize
{
\begin{tabular}{ cl | cc cc | cccc | cccc }
\hline
& & \multicolumn{2}{c}{$\tilde \delta$} & \multicolumn{2}{c |}{$\hat\delta$}& \multicolumn{2}{c}{$\tilde \sigma_0^2$} & \multicolumn{2}{c|}{$\hat\sigma_0^2$} &
\multicolumn{2}{c}{$\tilde\sigma_1^2$} & \multicolumn{2}{c }{$\hat\sigma_1^2$} \\
\hline
Model & $(n_0,n_1)$& Bias & MSE & Bias & MSE& Bias & MSE& Bias & MSE& Bias & MSE& Bias & MSE \\
\hline
\multirow{4}{*}{1} & (50, 50) & 0.06 & 0.13 & 0.04 & 0.09 & 0.05 & 39.62 & -0.03 & 21.72 & 0.03 & 34.87 & -0.01 & 22.25 \\
& (50, 150) & 0.06 & 0.09 & 0.03 & 0.06 & -0.04 & 23.11 & 0.01 & 8.38 & 0.02 & 9.23 & -0.03 & 7.07 \\
& (150, 50) & 0.01 & 0.08 & 0.02 & 0.05 & -0.01 & 10.74 & -0.08 & 8.09 & -0.02 & 50.72 & 0.06 & 18.71 \\
& (100, 100) & 0.03 & 0.06 & 0.02 & 0.04 & -0.01 & 17.81 & -0.05 & 10.12 & -0.03 & 14.53 & -0.06 & 8.14 \\ \hline
\multirow{4}{*}{2} & (50, 50) & 0.17 & 0.59 & 0.13 & 0.39 & 0.09 & 61.65 & 0.03 & 39.30 & -0.01 & 20.32 & -0.02 & 15.92 \\
& (50, 150) & 0.18 & 0.41 & 0.11 & 0.26 & -0.01 & 9.91 & 0.09 & 6.71 & 0.02 & 6.63 & -0.03 & 4.58 \\
& (150, 50) & 0.06 & 0.25 & 0.06 & 0.18 & 0.00 & 3.62 & -0.04 & 2.86 & 0.04 & 14.89 & 0.10 & 6.11 \\
& (100, 100) & 0.09 & 0.21 & 0.06 & 0.15 & -0.01 & 5.85 & -0.01 & 3.72 & 0.01 & 5.51 & -0.03 & 3.19 \\ \hline
\multirow{4}{*}{3} & (50, 50) & 0.06 & 0.17 & 0.03 & 0.11 & -0.05 & 104.16 & -0.03 & 54.97 & 0.19 & 426.17 & -0.15 & 292.11 \\
& (50, 150) & 0.06 & 0.10 & 0.03 & 0.07 & -0.05 & 118.37 & 0.19 & 36.01 & 0.11 & 120.63 & -0.08 & 100.58 \\
& (150, 50) & 0.01 & 0.11 & 0.02 & 0.08 & 0.02 & 42.75 & -0.12 & 23.86 & -0.26 & 193.29 & -0.16 & 135.72 \\
& (100, 100) & 0.03 & 0.08 & 0.02 & 0.05 & -0.06 & 57.48 & -0.10 & 21.76 & -0.09 & 147.69 & -0.21 & 108.06 \\ \hline
\multirow{4}{*}{4} & (50, 50) & 0.09 & 0.33 & 0.07 & 0.24 & 0.08 & 147.42 & 0.03 & 54.06 & -0.04 & 458.52 & -0.31 & 398.77 \\
& (50, 150) & 0.09 & 0.19 & 0.05 & 0.14 & -0.02 & 72.97 & 0.24 & 26.55 & -0.08 & 160.82 & -0.27 & 138.70 \\
& (150, 50) & 0.03 & 0.21 & 0.04 & 0.16 & 0.00 & 32.66 & -0.09 & 18.79 & 0.02 & 438.73 & -0.02 & 289.92 \\
& (100, 100) & 0.04 & 0.14 & 0.03 & 0.11 & 0.03 & 57.22 & 0.01 & 19.11 & 0.03 & 275.65 & -0.11 & 233.74 \\ \hline
\multirow{4}{*}{5} & (50, 50) & 0.13 & 0.38 & 0.08 & 0.24 & 0.01 & 32.45 & -0.02 & 13.00 & -0.04 & 25.04 & -0.12 & 19.46 \\
& (50, 150) & 0.13 & 0.28 & 0.07 & 0.18 & -0.05 & 16.82 & 0.03 & 6.33 & -0.07 & 8.81 & -0.13 & 6.94 \\
& (150, 50) & 0.04 & 0.18 & 0.04 & 0.12 & 0.00 & 7.68 & -0.05 & 5.76 & -0.04 & 41.11 & 0.02 & 16.64 \\
& (100, 100) & 0.06 & 0.16 & 0.04 & 0.10 & 0.02 & 10.98 & 0.03 & 6.97 & 0.03 & 18.57 & -0.03 & 9.81 \\ \hline
\multirow{4}{*}{6} & (50, 50) & 0.05 & 0.13 & 0.04 & 0.08 & -0.02 & 27.01 & -0.08 & 17.19 & -0.07 & 25.68 & -0.14 & 13.10 \\
& (50, 150) & 0.05 & 0.09 & 0.02 & 0.06 & 0.06 & 28.59 & 0.06 & 12.45 & -0.07 & 8.70 & -0.11 & 6.01 \\
& (150, 50) & 0.02 & 0.08 & 0.02 & 0.05 & 0.00 & 9.55 & -0.05 & 8.88 & 0.06 & 53.31 & 0.07 & 13.47 \\
& (100, 100) & 0.03 & 0.06 & 0.02 & 0.04 & 0.04 & 17.30 & -0.02 & 9.59 & 0.00 & 14.84 & -0.02 & 8.38 \\ \hline
\multirow{4}{*}{7} & (50, 50) & 0.17 & 0.61 & 0.11 & 0.39 & 0.09 & 33.66 & 0.02 & 14.45 & -0.10 & 21.87 & -0.14 & 17.33 \\
& (50, 150) & 0.19 & 0.48 & 0.10 & 0.30 & 0.01 & 25.25 & 0.10 & 6.54 & -0.01 & 8.87 & -0.07 & 7.44 \\
& (150, 50) & 0.06 & 0.28 & 0.06 & 0.19 & 0.00 & 5.25 & -0.05 & 3.62 & -0.05 & 21.97 & 0.00 & 10.64 \\
& (100, 100) & 0.09 & 0.26 & 0.05 & 0.17 & -0.02 & 7.74 & 0.02 & 4.52 & 0.07 & 18.22 & -0.02 & 10.97 \\ \hline
\multirow{4}{*}{8} & (50, 50) & 0.10 & 0.37 & 0.06 & 0.27 & 0.02 & 33.28 & 0.04 & 9.29 & -0.14 & 190.88 & -0.41 & 160.50 \\
& (50, 150) & 0.09 & 0.24 & 0.05 & 0.17 & -0.01 & 27.55 & 0.15 & 6.82 & 0.11 & 106.76 & -0.03 & 97.86 \\
& (150, 50) & 0.03 & 0.22 & 0.02 & 0.15 & -0.05 & 8.84 & -0.05 & 4.72 & -0.12 & 173.09 & -0.35 & 109.84 \\
& (100, 100) & 0.04 & 0.17 & 0.03 & 0.12 & 0.02 & 18.88 & 0.00 & 3.54 & -0.12 & 211.21 & -0.22 & 196.79 \\ \hline
\multirow{4}{*}{9} & (50, 50) & 0.37 & 2.66 & 0.26 & 2.09 & 0.12 & 37.11 & 0.16 & 8.87 & 0.13 & 389.19 & -0.11 & 352.51 \\
& (50, 150) & 0.36 & 1.82 & 0.23 & 1.36 & -0.08 & 7.39 & 0.19 & 3.65 & 0.06 & 135.93 & -0.10 & 128.85 \\
& (150, 50) & 0.13 & 1.16 & 0.10 & 0.90 & -0.01 & 4.89 & 0.03 & 3.01 & 0.23 & 381.40 & -0.08 & 279.96 \\
& (100, 100) & 0.17 & 0.96 & 0.11 & 0.74 & -0.01 & 6.76 & 0.04 & 1.85 & -0.19 & 124.13 & -0.34 & 114.59 \\ \hline
\multirow{4}{*}{10} & (50, 50) & 0.12 & 0.76 & 0.08 & 0.62 & -0.04 & 18.98 & 0.13 & 7.76 & 0.12 & 1461.27 & -0.47 & 1334.57 \\
& (50, 150) & 0.13 & 0.45 & 0.08 & 0.35 & -0.03 & 22.09 & 0.16 & 4.31 & -0.09 & 356.85 & -0.29 & 345.40 \\
& (150, 50) & 0.03 & 0.48 & 0.02 & 0.40 & -0.03 & 9.00 & 0.03 & 3.78 & -0.37 & 743.33 & -0.95 & 640.08 \\
& (100, 100) & 0.06 & 0.33 & 0.04 & 0.28 & 0.00 & 14.03 & 0.09 & 3.61 & -0.04 & 654.22 & -0.34 & 603.20 \\
\hline
\end{tabular}}
\label{tabratio}
\end{table}
We can see from Table \ref{tabratio} that the biases of $\hat\delta$ and $\tilde\delta$ are negligible in all cases, and that the proposed estimator $\hat\delta$ has smaller biases in most cases.
Moreover, the proposed estimator $\hat\delta$ outperforms $\tilde\delta$ in terms of MSE.
This is as expected since $\hat\delta
$ uses more information to estimate the population means $\mu_0$ and $\mu_1$.
{For all settings considered in Table \ref{tabratio}, the biases of $(\hat\sigma_0^2,\hat\sigma_1^2)$ and $(\tilde\sigma_0^2,\tilde\sigma_1^2)$ are quite small.
Meanwhile, the MSEs of $(\hat\sigma_0^2,\hat\sigma_1^2)$ are significantly smaller than those of $(\tilde\sigma_0^2,\tilde\sigma_1^2)$.
In some settings, such as Model 8 with sample sizes $(n_0,n_1)=(100,100)$, the MSE of $\hat\sigma_0^2$ is smaller than 20\% of the MSE of $\tilde\sigma_0^2$.}
\subsection{Comparison for confidence intervals}
We now examine the finite sample behaviour of the following 95\% CIs of the mean ratio $\delta$:
\begin{enumerate}
\item[--]$\mathcal{I}_{1}$: {Wald-type CI} based on $\log\tilde\delta$ using the quantile of $N(0,1)$;
\item[--]$\mathcal{I}_{1B}$: {bootstrap Wald-type CI} based on $\log\tilde\delta$ using the quantile from the nonparametric bootstrap method;
\item[--]$\mathcal{I}_2$: ELR-based CI proposed by \cite{Wu2012} using the quantile of $\chi^2_1$ distribution;
\item[--]$\mathcal{I}_{2B}$: {bootstrap} ELR-based CI proposed by \cite{Wu2012} using the quantile from the nonparametric bootstrap method;
\item[--]$\mathcal{I}_{3}$: ELR-based CI under the DRM \eqref{drm} proposed by \cite{Wang2018} using the quantile of $\chi^2_1$ distribution;
\item[--] $\mathcal{I}_{4}$: proposed Wald-type CI based on $\hat\delta$;
\item[--] $\mathcal{I}_{4L}$: proposed Wald-type CI based on $\log\hat\delta$.
\end{enumerate}
{We note that the normal and $\chi^2_1$ distributions may not provide good approximations to the distributions of $\log\tilde\delta$ and the ELR statistic in $\mathcal{I}_2$, respectively, especially when $n$ is not large enough.
This may be because of specific features of the two-sample semicontinuous data from model \eqref{eq:1}: excessive zeros and severe skewness of the positive observations.
Hence, we employ the nonparametric bootstrap method \citep{Efron1993}
to approximate the quantiles of target asymptotic distributions, which leads to $\mathcal{I}_{1B}$ and $\mathcal{I}_{2B}$.
The number of bootstrap samples is set to $999$.}
{We construct the first four CIs without using the DRM assumption \eqref{drm}, while the remaining three CIs are constructed under the DRM.
The performance of a CI is evaluated}
in terms of the coverage probability (CP) and average length
(AL), which are calculated as follows:
\begin{equation*}
\mbox{CP} (\%) =100\times \frac{\sum_{h=1}^{10000} I(\delta^{(h)}_L<\delta<\delta^{(h)}_U)}{10000},~~~
\mbox{AL}=\frac{\sum_{h=1}^{10000} \left(\delta^{(h)}_U -\delta^{(h)}_L \right)}{10000}.
\end{equation*}
Here $[\delta^{(h)}_L, \delta^{(h)}_U]$ {denotes a} CI for $\delta$ calculated from the $h$th simulated data.
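For concreteness, these two summaries are simply empirical averages over the simulated intervals; a minimal Python sketch (with hypothetical endpoint arrays \texttt{lower} and \texttt{upper}, one entry per simulated dataset) is:

```python
import numpy as np

def coverage_and_length(lower, upper, delta):
    """Coverage probability (%) and average length of simulated CIs.

    lower, upper: arrays of CI endpoints, one entry per simulated dataset.
    delta: the true parameter value.
    """
    lower, upper = np.asarray(lower), np.asarray(upper)
    cp = 100.0 * np.mean((lower < delta) & (delta < upper))  # CP in percent
    al = np.mean(upper - lower)                              # average length
    return cp, al
```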
The simulation results are summarized in Table~\ref{tab:m1}.
\begin{table}[!htt]
\centering
\caption{Coverage probability (\%) and average length of 95\% CIs for $\delta$.}
\tabcolsep 1mm
\scriptsize
{\begin{tabular}{cl | cccc | cccc | cccccc}
\hline
& & \multicolumn{2}{c}{$\mathcal{I}_1$} & \multicolumn{2}{c|}{$\mathcal{I}_{1B}$}&
\multicolumn{2}{c}{$\mathcal{I}_2$} &
\multicolumn{2}{c|}{$\mathcal{I}_{2B}$}
&\multicolumn{2}{c}{$\mathcal{I}_3$} & \multicolumn{2}{c}{$\mathcal{I}_4$}&
\multicolumn{2}{c}{$\mathcal{I}_{4L}$} \\
\hline
Model & $(n_0,n_1)$ & CP & AL & CP & AL & CP & AL & CP & AL & CP & AL & CP & AL & CP & AL\\
\hline
\multirow{4}{*}{1} & (50, 50) & 92.6 & 1.37 & 94.6 & 1.73 & 91.7 & 1.34 & 94.1 & 1.59 & 94.6 & 1.18 & 93.7 & 1.09 & 94.4 & 1.15 \\
& (50, 150) & 92.6 & 1.09 & 94.0 & 1.26 & 91.8 & 1.08 & 93.7 & 1.22 & 94.9 & 0.94 & 94.2 & 0.90 & 94.8 & 0.93 \\
& (150, 50) & 92.5 & 1.08 & 94.3 & 1.86 & 91.7 & 1.08 & 93.6 & 1.31 & 94.9 & 0.92 & 94.0 & 0.88 & 94.7 & 0.91 \\
& (100, 100) & 93.9 & 0.94 & 95.1 & 1.04 & 93.2 & 0.95 & 94.7 & 1.05 & 94.9 & 0.79 & 94.6 & 0.76 & 95.0 & 0.78 \\ \hline
\multirow{4}{*}{2}& (50, 50) & 91.5 & 2.73 & 94.4 & 3.70 & 89.8 & 2.70 & 93.3 & 3.63 & 94.5 & 2.48 & 91.5 & 2.10 & 94.9 & 2.45 \\
& (50, 150) & 91.7 & 2.13 & 94.0 & 2.65 & 90.7 & 2.16 & 93.6 & 2.82 & 94.5 & 1.99 & 92.9 & 1.74 & 94.9 & 1.94 \\
& (150, 50) & 91.6 & 1.92 & 93.6 & 2.78 & 90.6 & 1.88 & 93.6 & 2.58 & 94.9 & 1.72 & 92.2 & 1.58 & 94.7 & 1.75 \\
& (100, 100) & 92.3 & 1.72 & 94.5 & 2.11 & 92.5 & 1.71 & 94.5 & 2.00 & 94.9 & 1.52 & 93.5 & 1.40 & 95.3 & 1.51 \\ \hline
\multirow{4}{*}{3}& (50, 50) & 92.3 & 1.55 & 94.5 & 1.92 & 91.5 & 1.52 & 94.1 & 1.84 & 94.3 & 1.36 & 92.8 & 1.25 & 94.8 & 1.34 \\
& (50, 150) & 92.6 & 1.17 & 94.3 & 1.36 & 93.0 & 1.15 & 94.7 & 1.30 & 94.8 & 1.04 & 93.5 & 0.97 & 95.0 & 1.01 \\
& (150, 50) & 92.5 & 1.27 & 94.2 & 1.66 & 91.3 & 1.25 & 93.7 & 1.56 & 94.8 & 1.11 & 93.3 & 1.06 & 95.2 & 1.11 \\
& (100, 100) & 94.2 & 1.06 & 95.4 & 1.19 & 92.7 & 1.06 & 93.9 & 1.18 & 94.8 & 0.92 & 94.2 & 0.88 & 95.2 & 0.91 \\ \hline
\multirow{4}{*}{4}& (50, 50) & 91.5 & 2.20 & 94.0 & 2.99 & 90.7 & 2.09 & 93.9 & 2.76 & 93.7 & 1.96 & 90.4 & 1.73 & 94.5 & 1.94 \\
& (50, 150) & 92.6 & 1.60 & 94.4 & 1.88 & 92.0 & 1.60 & 93.8 & 1.86 & 93.9 & 1.47 & 92.2 & 1.35 & 94.3 & 1.45 \\
& (150, 50) & 91.8 & 1.78 & 93.9 & 2.66 & 90.4 & 1.68 & 93.1 & 2.30 & 94.5 & 1.57 & 91.5 & 1.46 & 94.6 & 1.59 \\
& (100, 100) & 93.0 & 1.45 & 94.7 & 1.71 & 92.5 & 1.45 & 94.3 & 1.70 & 94.7 & 1.30 & 92.4 & 1.21 & 94.2 & 1.29 \\ \hline
\multirow{4}{*}{5} & (50, 50) & 92.9 & 2.23 & 95.1 & 2.69 & 91.5 & 2.22 & 93.9 & 2.65 & 95.1 & 1.96 & 93.2 & 1.80 & 94.6 & 1.92 \\
& (50, 150) & 91.8 & 1.88 & 93.4 & 2.21 & 91.0 & 1.86 & 93.4 & 2.18 & 94.7 & 1.69 & 93.8 & 1.57 & 94.7 & 1.65 \\
& (150, 50) & 92.9 & 1.64 & 94.6 & 1.97 & 92.5 & 1.61 & 93.9 & 1.88 & 94.8 & 1.40 & 94.0 & 1.33 & 94.8 & 1.38 \\
& (100, 100) & 93.4 & 1.52 & 94.7 & 1.68 & 93.5 & 1.52 & 94.7 & 1.67 & 94.9 & 1.30 & 94.4 & 1.24 & 95.0 & 1.28 \\ \hline
\multirow{4}{*}{6} & (50, 50) & 91.6 & 3.95 & 94.3 & 5.08 & 90.9 & 3.89 & 93.6 & 5.06 & 94.5 & 3.63 & 92.5 & 3.09 & 94.9 & 3.49 \\
& (50, 150) & 91.4 & 3.31 & 93.6 & 4.18 & 90.5 & 3.32 & 93.3 & 4.45 & 94.8 & 3.13 & 93.2 & 2.75 & 94.9 & 3.03 \\
& (150, 50) & 92.6 & 2.58 & 94.3 & 3.39 & 91.9 & 2.55 & 94.2 & 3.06 & 94.9 & 2.26 & 93.2 & 2.12 & 94.7 & 2.26 \\
& (100, 100) & 92.5 & 2.50 & 94.2 & 2.85 & 92.4 & 2.49 & 94.3 & 2.83 & 94.5 & 2.22 & 94.0 & 2.05 & 94.9 & 2.17 \\ \hline
\multirow{4}{*}{7} & (50, 50) & 92.1 & 2.82 & 94.3 & 3.53 & 91.4 & 2.81 & 94.1 & 3.45 & 94.6 & 2.52 & 92.9 & 2.27 & 95.0 & 2.48 \\
& (50, 150) & 92.3 & 2.37 & 94.1 & 2.86 & 90.9 & 2.38 & 93.2 & 2.90 & 94.6 & 2.18 & 93.8 & 1.99 & 95.0 & 2.13 \\
& (150, 50) & 92.5 & 2.00 & 94.1 & 2.52 & 91.6 & 1.96 & 93.7 & 2.31 & 94.4 & 1.73 & 93.7 & 1.63 & 94.7 & 1.71 \\
& (100, 100) & 93.3 & 1.87 & 94.8 & 2.10 & 92.4 & 1.86 & 93.9 & 2.08 & 94.8 & 1.63 & 94.0 & 1.54 & 95.0 & 1.61 \\ \hline
\multirow{4}{*}{8} & (50, 50) & 92.6 & 2.23 & 94.5 & 2.66 & 91.4 & 2.24 & 93.8 & 2.65 & 94.0 & 1.98 & 92.6 & 1.85 & 94.0 & 1.95 \\
& (50, 150) & 92.6 & 1.79 & 94.2 & 2.07 & 91.5 & 1.77 & 93.5 & 2.00 & 94.8 & 1.61 & 93.8 & 1.53 & 94.5 & 1.58 \\
& (150, 50) & 92.7 & 1.77 & 94.5 & 2.68 & 92.2 & 1.78 & 94.2 & 2.14 & 94.4 & 1.54 & 92.9 & 1.47 & 94.6 & 1.52 \\
& (100, 100) & 93.5 & 1.56 & 95.0 & 1.75 & 92.8 & 1.56 & 94.4 & 1.72 & 94.3 & 1.37 & 93.5 & 1.30 & 94.5 & 1.34 \\ \hline
\multirow{4}{*}{9} & (50, 50) & 91.3 & 5.78 & 93.8 & 7.90 & 90.8 & 5.70 & 94.5 & 7.74 & 93.3 & 5.41 & 89.3 & 4.60 & 93.5 & 5.42 \\
& (50, 150) & 92.0 & 4.56 & 94.0 & 5.72 & 90.9 & 4.63 & 93.6 & 6.49 & 94.2 & 4.45 & 91.8 & 3.86 & 94.0 & 4.33 \\
& (150, 50) & 91.7 & 4.10 & 94.0 & 8.56 & 91.1 & 3.89 & 93.6 & 5.23 & 93.3 & 3.65 & 90.2 & 3.35 & 94.0 & 3.71 \\
& (100, 100) & 92.5 & 3.66 & 94.3 & 4.31 & 91.9 & 3.64 & 94.0 & 4.26 & 93.7 & 3.39 & 91.7 & 3.08 & 94.3 & 3.34 \\ \hline
\multirow{4}{*}{10} & (50, 50) & 92.3 & 3.26 & 94.5 & 4.10 & 91.3 & 3.19 & 94.1 & 4.00 & 93.0 & 3.00 & 89.6 & 2.74 & 93.3 & 3.01 \\
& (50, 150) & 92.8 & 2.46 & 94.5 & 2.83 & 91.8 & 2.42 & 93.8 & 2.75 & 94.1 & 2.33 & 92.6 & 2.16 & 94.0 & 2.28 \\
& (150, 50) & 92.4 & 2.68 & 94.1 & 3.83 & 90.8 & 2.63 & 93.3 & 3.43 & 93.0 & 2.41 & 90.4 & 2.27 & 93.7 & 2.44 \\
& (100, 100) & 93.7 & 2.22 & 95.1 & 2.53 & 92.6 & 2.22 & 94.5 & 2.52 & 94.0 & 2.06 & 91.8 & 1.94 & 93.0 & 2.04 \\
\hline
\end{tabular}}
\label{tab:m1}
\end{table}
From Table~\ref{tab:m1}, {we observe that} the bootstrap Wald-type CI $\mathcal{I}_{1B}$ and bootstrap ELR-based CI $\mathcal{I}_{2B}$ have much better coverage accuracy than
$\mathcal{I}_{1}$ and $\mathcal{I}_{2}$, respectively.
Comparing $\mathcal{I}_{1B}$ and $\mathcal{I}_{2B}$, we see that $\mathcal{I}_{1B}$
has slightly more accurate CP in most cases, but $\mathcal{I}_{2B}$ has shorter AL in most cases.
The behaviour of $\mathcal{I}_{3}$ and $\mathcal{I}_{4L}$ is comparable and satisfactory in terms of both CP and AL in all cases, while $\mathcal{I}_{4}$ gives shorter ALs but lower coverage rates than $\mathcal{I}_{3}$ and $\mathcal{I}_{4L}$, especially when the sample sizes are small.
In general, the performance of the CIs constructed under the DRM is better than that of the CIs {constructed without using} the DRM assumption.
{In conclusion, $\mathcal{I}_{3}$ and $\mathcal{I}_{4L}$ appear most attractive in terms of both CP and AL.
However, the computational time and complexity of the proposed $\mathcal{I}_{4L}$ are far lower than those of $\mathcal{I}_{3}$, and thus $\mathcal{I}_{4L}$ may be preferred.}
\section{Real data analysis}
\label{realdata}
In this section, we illustrate the performance of our proposed method by analyzing two {real datasets.
Our interest lies in estimating the mean ratio} $\delta$ and the population variances $\sigma^2_0$ and $\sigma^2_1$, and constructing CIs for $\delta$.
The first dataset is from a biological study of the seasonal activity patterns of a species of field mice, originally taken from \cite{mice}. It consists of the average distances (in meters) traveled between captures by mice captured at least twice in a given month.
A summary of this dataset is presented in Table \ref{micedata}.
\begin{table}[!htt]
\centering
\caption{Summary of mice dataset. }
\tabcolsep 1mm
\scriptsize
\begin{tabular}{ccc}
\hline
Season & sample size & proportion (number) of zeros\\
\hline
Spring & 17& 0.176 (3)\\
Summer & 27 & 0.111 (3)\\
Autumn & 27 & 0.370 (10)\\
Winter & 34 & 0.294 (10)\\
\hline
\end{tabular}
\label{micedata}
\end{table}
From Table \ref{micedata}, there are considerable proportions of zero measurements, especially in Autumn and Winter.
{\cite{Wang2018} have conducted hypothesis tests to discover if the mean traveled distance differs among the four seasons. They found no significant difference between the mean distance in Spring and that in Summer. Hence, we combine the distance measurements in Spring and Summer into one sample and denote it as sample $0$.
Similarly, sample $1$ is obtained by merging the distance measurements in Autumn and Winter.}
To analyze the dataset by our proposed method, we need to choose an appropriate $\boldsymbol q(x)$ in the DRM \eqref{drm}.
To balance the model fitting and model complexity, we choose $\boldsymbol q(x) = \log(x)$.
We apply the goodness-of-fit test proposed by \cite{Qin1997} for this choice to the mice data,
which gives $p$-value $=0.64$. This may indicate that $\boldsymbol q(x) = \log(x)$ is suitable for this dataset.
{All the methods discussed in our simulation studies are applied here. Our proposed estimate is $\hat\delta = 0.487$, while the fully nonparametric estimate is $\tilde \delta = 0.483$.
The proposed semiparametric estimates of the two-sample variances are $\hat\sigma^2_0 = 869.583$ for sample $0$ and $\hat\sigma^2_1 = 268.774$ for sample $1$. As a comparison, their fully nonparametric estimates are $\tilde\sigma^2_0 = 932.966$ for sample $0$ and $\tilde\sigma^2_1 = 239.961$ for sample $1$.}
Based on the simulation results in Table \ref{tabratio}, the proposed point estimates are expected to be more accurate.
The results of 95\% CIs for $\delta$ are presented in Table \ref{miceCI}.
{Among all the considered CIs, $\mathcal{I}_1$ is the shortest and $\mathcal{I}_{1B}$ is the longest.
The lower and upper bounds of $\mathcal{I}_4$ tend to be smaller than those of the other CIs.
The results of $\mathcal{I}_{2B}$, $\mathcal{I}_3$, and $\mathcal{I}_{4L}$ are similar.
None of these CIs includes 1, which indicates a significant mean difference between the two samples.}
\begin{table}[!htt]
\centering
\caption{95\% CIs of $\delta$ in mice data. }
\tabcolsep 1mm
\scriptsize
\begin{tabular}{cccccccc}
\hline
& $\mathcal{I}_1$ & $\mathcal{I}_{1B}$ & $\mathcal{I}_{2}$ & $\mathcal{I}_{2B}$ & $\mathcal{I}_{3}$ & $\mathcal{I}_{4}$&$\mathcal{I}_{4L}$\\
\hline
Lower bound & 0.341 & 0.314 & 0.318 & 0.322 & 0.325 &0.295 & 0.328 \\
Upper bound & 0.682 & 0.741 & 0.726 & 0.716 & 0.721 &0.679 & 0.722 \\
Length & 0.341 & 0.427 & 0.408 & 0.393 & 0.396 & 0.383 & 0.393 \\
\hline
\end{tabular}
\label{miceCI}
\end{table}
The second dataset is about the methylation of DNA,
which is a common method of gene regulation. The methylation patterns of tumor cells can be compared to those of normal cells; moreover, there are also differences between different types of cancer.
DNA methylation can serve as a biomarker for diagnosing cancer.
The dataset, collected from \cite{neuhauser2011nonparametric}, consists of two samples of methylation measurements: small cell lung cancer (sample $0$) and non-small cell lung cancer (sample $1$).
When methylation is undetectable or only partially present, the measurement result is negative and is treated as a zero value. The full presence of methylation gives a positive value.
Sample $0$ contains 41 measurements, out of which 25 are zero values. There are 46 measurements in sample $1$ and 16 of them are zero values.
\cite{satter2020jackknife} argued that this dataset is highly skewed. This may suggest that the dataset can be well fitted by the DRM with $\boldsymbol q(x) = \log(x)$.
The goodness-of-fit test of \cite{Qin1997} gives $p$-value $=0.133$. Therefore, there is no strong evidence to reject the DRM with $\boldsymbol q(x) = \log(x)$.
{All the methods discussed in our simulation studies are applied here.
Our proposed estimate is $\hat\delta = 2.906$, while the fully nonparametric estimate is $\tilde \delta = 3.679$.
For the two-sample variances, our proposed semiparametric estimates are $\hat\sigma^2_0 = 388.562$ for sample $0$ and $\hat\sigma^2_1 = 1028.079$ for sample $1$, while the fully nonparametric estimates are $\tilde\sigma^2_0 = 406.796$ for sample $0$ and $\tilde\sigma^2_1 = 1017.072$ for sample $1$.
There are some differences between the proposed estimates and the fully nonparametric estimates, especially for estimating $\delta$.
We rely more on our proposed estimates since, as observed in our simulations, the proposed estimators are more accurate than the competitors.
}
Table \ref{DNACI} presents the results of 95\% CIs for $\delta$.
According to the simulation results in Table \ref{tab:m1}, the CIs $\mathcal{I}_{1B}$, $\mathcal{I}_{2B}$, $\mathcal{I}_3$, and $\mathcal{I}_{4L}$
are more trustworthy in terms of coverage accuracy.
Among these four CIs, the CIs $\mathcal{I}_{1B}$ and $\mathcal{I}_{2B}$ contain 1,
whereas $\mathcal{I}_3$ and $\mathcal{I}_{4L}$ do not contain 1.
This indicates that $\mathcal{I}_3$ and $\mathcal{I}_{4L}$ provide more evidence than $\mathcal{I}_{1B}$ and $\mathcal{I}_{2B}$
to reject $H_0:\delta=1$.
{Comparing $\mathcal{I}_3$ and $\mathcal{I}_{4L}$, the latter has a slightly shorter interval length.}
\begin{table}[!htt]
\centering
\caption{95\% CIs of $\delta$ in methylation data. }
\tabcolsep 1mm
\scriptsize
\begin{tabular}{cccccccc}
\hline
& $\mathcal{I}_1$ & $\mathcal{I}_{1B}$ & $\mathcal{I}_{2}$ & $\mathcal{I}_{2B}$ & $\mathcal{I}_{3}$ & $\mathcal{I}_{4}$&$\mathcal{I}_{4L}$\\
\hline
Lower bound & 1.291 & 0.650 & 1.158 & 0.568 & 1.278 & 0.362 & 1.211 \\
Upper bound & 10.485 & 20.838 & 12.306 & 27.631 & 7.527 & 5.451 & 6.975 \\
Length & 9.194 & 20.189 & 11.148 & 27.063 & 6.249 & 5.089 & 5.764 \\
\hline
\end{tabular}
\label{DNACI}
\end{table}
\section{Concluding remarks}
\label{conclude}
{In this paper, we propose new statistical procedures for making semiparametric inference on the general parameters $\mbox{\boldmath $\psi$}$ defined in (\ref{psi}) and their functions
${\bf g}(\mbox{\boldmath $\psi$})$ with two samples of semicontinuous observations.}
The parameters $\mbox{\boldmath $\psi$}$ include the linear functionals $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$ as special cases.
Under the semiparametric DRM \eqref{drm},
we construct the MELEs of $\mbox{\boldmath $\psi$}$ and establish their asymptotic normality.
The MELEs of $\mbox{\boldmath $\psi$}_0$ and $\mbox{\boldmath $\psi$}_1$ are shown to be more efficient than
{the fully nonparametric alternatives in both theory and simulations.
We further apply the developed asymptotic results to construct the confidence regions and perform hypothesis tests for $\mbox{\boldmath $\psi$}$ and ${\bf g}(\mbox{\boldmath $\psi$})$.
It is worth mentioning that the proposed methods and the general results can be applied to many important summary quantities, such as the uncentered and centered moments, the mean ratio, the coefficient of variation, and the generalized entropy class of inequality measures.
As an illustration, we consider the construction of CIs for the mean ratio of two such populations.
Simulation results show that the proposed Wald-type CIs perform similarly to the ELR-based CI under the DRM, at a lower computational cost.}
We have implemented our methods in \texttt{R} language. The \texttt{R} code is available upon request.
{Given the observed advantages in making semiparametric inference on linear functionals,}
it would be interesting to extend the current framework to
general expectation functionals and their functions, for example, the receiver operating characteristic (ROC) curve, the area under the ROC curve, and the Gini index.
The theoretical development may become even more complicated and challenging.
We leave these as topics for future research.
\section*{Acknowledgement}
The authors would like to thank Dr. Changbao Wu for his constructive and helpful comments.
Dr. Wang's work is supported in part by the National Natural Science Foundation of China (grant numbers 12001454, 11971404, and 71988101) and the Natural Science Foundation of Fujian Province (grant number 2020J01031).
Dr. Li's work is supported in part by the Natural Sciences and Engineering Research Council of Canada grant number RGPIN-2020-04964.
\medskip
\subsection*{Single column equations}
Authors may use 1- or 2-column equations in their article, according to their preference.
To allow an equation to span both columns, options are to use the \verb|\begin{figure*}...\end{figure*}| environment mentioned above for figures, or to use the \verb|\begin{widetext}...\end{widetext}| environment as shown in equation \ref{eqn:example} below.
Please note that this option may run into problems with floats and footnotes, as mentioned in the \href{http://texdoc.net/pkg/cuted}{cuted package documentation}. In the case of problems with footnotes, it may be possible to correct the situation using commands \verb|\footnotemark| and \verb|\footnotetext|.
\begin{widetext}
\begin{align*}
(x+y)^3&=(x+y)(x+y)^2\\
&=(x+y)(x^2+2xy+y^2) \numberthis \label{eqn:example} \\
&=x^3+3x^2y+3xy^2+y^3.
\end{align*}
\end{widetext}
\subsection*{Supporting Information (SI)}
The main text of the paper must stand on its own without the SI. Refer to SI in the manuscript at an appropriate point in the text. Number supporting figures and tables starting with S1, S2, etc. Authors are limited to no more than 10 SI files, not including movie files. Authors who place detailed materials and methods in SI must provide sufficient detail in the main text methods to enable a reader to follow the logic of the procedures and results and also must reference the online methods. If a paper is fundamentally a study of a new method or technique, then the methods must be described completely in the main text. Because PNAS edits SI and composes it into a single PDF, authors must provide the following file formats only.
\subsubsection*{SI Text}
Supply Word, RTF, or LaTeX files (LaTeX files must be accompanied by a PDF with the same file name for visual reference).
\subsubsection*{SI Figures}
Provide a brief legend for each supporting figure after the supporting text. Provide figure images in TIFF, EPS, high-resolution PDF, JPEG, or GIF format; figures may not be embedded in manuscript text. When saving TIFF files, use only LZW compression; do not use JPEG compression. Do not save figure numbers, legends, or author names as part of the image. Composite figures must be pre-assembled.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{frog}
\caption{Placeholder image of a frog with a long example caption to show justification setting.}
\label{fig:frog}
\end{figure}
\subsection*{Digital Figures}
\label{sec:figures}
Only TIFF, EPS, and high-resolution PDF for Mac or PC are allowed for figures that will appear in the main text, and images must be final size. Authors may submit U3D or PRC files for 3D images; these must be accompanied by 2D representations in TIFF, EPS, or high-resolution PDF format. Color images must be in RGB (red, green, blue) mode. Include the font files for any text.
Figures and Tables should be labelled and referenced in the standard way using the \verb|\label{}| and \verb|\ref{}| commands.
Figure \ref{fig:frog} shows an example of how to insert a column-wide figure. To insert a figure wider than one column, please use the \verb|\begin{figure*}...\end{figure*}| environment. Figures wider than one column should be sized to 11.4 cm or 17.8 cm wide. Use \verb|\begin{SCfigure*}...\end{SCfigure*}| for a wide figure with side captions.
\subsection*{Manuscript Length}
PNAS generally uses a two-column format averaging 67 characters, including spaces, per line. The maximum length of a Direct Submission research article is six pages and a PNAS PLUS research article is ten pages including all text, spaces, and the number of characters displaced by figures, tables, and equations. When submitting tables, figures, and/or equations in addition to text, keep the text for your manuscript under 39,000 characters (including spaces) for Direct Submissions and 72,000 characters (including spaces) for PNAS PLUS.
\subsection*{Submitting Manuscripts}
All authors must submit their articles at \href{http://www.pnascentral.org/cgi-bin/main.plex}{PNAScentral}.
Authors must submit a 120-word maximum statement about the significance of their research paper written at a level understandable to an undergraduate educated scientist outside their field of speciality. The primary goal of the Significance Statement is to explain the relevance of the work in broad context to a broad readership. The Significance Statement appears in the paper itself and is required for all research papers.
\section{Introduction}\label{sec:introduction}
Persistent homology is an algebraic invariant of filtered topological spaces commonly used in topological data analysis and in other areas of applied and computational topology. At its core, persistent homology is typically computed using factorizations of the boundary matrices obtained from applying the chain functor (with field coefficients) to a finite cell complex \cite{ZCComputingPH2005}. A variety of improvements and optimizations to this algorithm have been developed \cite{desilvaDualitiesPersistentCo2011, chenPersistentHomologyComputation2011,mischaikowMorseTheoryFiltrations2013,bauerClearCompressComputing2014,GUDHI15,otterRoadmapComputationPersistent2017} along with efficient implementations \cite{GUDHI15, Ripser19, Eirene16} which have allowed for the computation of persistent homology of increasingly large filtrations. However, a variety of problems require not just the computation of persistent homology of a single large filtration but of many related filtrations - examples include feature generation for data in machine learning tasks \cite{carlssonTopologyData2009,giustiTwoCompanyThree2016,cangIntegrationElementSpecific2018,gaoRepositioning8565Existing2020} as well as in continuous optimization problems with persistent homology included in the objective \cite{chenTopologicalRegularizerClassifiers2018,rickardCNN2019, leygonieFrameworkDifferentialCalculus2019,topologyLayerMachine2020, carrierePersLayNeuralNetwork2020,kimEfficientTopologicalLayer2020}. In this work, we develop an update scheme for computing persistent homology which updates the computation for a related problem with a warm-start and this scheme can be used efficiently in applications which require iterated computations.
\paragraph{Background on Persistent Homology} We provide a brief introduction to the necessary building blocks from algebraic topology to describe our algorithms. For a more complete introduction to computational topology and persistent homology, we refer to \cite{edelsbrunnerHarerBook2010, otterRoadmapComputationPersistent2017}. A cell complex $\mathcal{X}$ is a collection of contractible cells of varying dimensions in which $q$-dimensional cells are connected to $(q-1)$-dimensional cells with maps on their boundaries. For simplicity, one may consider simplicial or cubical complexes where these boundary maps are determined combinatorially. Furthermore, we will only consider finite cell complexes.
\emph{Homology} (with field coefficients) in dimension $q$ is a functor from a topological category to the category of vector spaces over a field $k$. The homological dimension $q$ captures information about $q$-dimensional features: $q=0$ encodes connected components, $q=1$ encodes loops, and general $q$ encodes $q$-dimensional voids.
A filtration, or filtered cell complex, is a sequence of cell complexes related by inclusion
\begin{equation}\label{eq:filtration}
\mathcal{X}_0 \subseteq \mathcal{X}_1 \subseteq \dots
\end{equation}\emph{Persistent homology} is the application of homology to the filtration in \Cref{eq:filtration}. The result can be considered as a $k[T]$ module \cite{ZCComputingPH2005} or as a diagram of vector spaces connected by linear maps induced by inclusion known as a type-A quiver representation \cite{ZZtheory2010,Oudot}. Both representations are characterized up to isomorphism by persistence barcodes, which describe the birth and death of homological features in the filtration.
\paragraph{Computing Persistent Homology} Persistent Homology is computed by first applying the cellular chain functor to cell complexes. A chain complex $C_\ast(\mathcal{X})$ consists of vector spaces $C_q(\mathcal{X})$, $q=0,1,\dots$ with a basis element for each $q$-dimensional cell, and maps
\begin{equation}\label{eq:chain_boundary}
B_q: C_q(\mathcal{X}) \to C_{q-1}(\mathcal{X})
\end{equation}
which maps the basis element of a cell to a linear combination of basis elements of cells in its boundary. The boundary maps have the property $B_{q-1} B_q = 0$, and homology is computed as the quotient vector space
\begin{equation}\label{eq:homology}
H_q(\mathcal{X}) = \ker B_q / \img B_{q+1}.
\end{equation}
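As a concrete check of this setup, the following Python snippet builds the boundary matrices of a filled triangle with coefficients in GF(2) (a common simplifying choice) and verifies the identity $B_1 B_2 = 0$; the complex and the ordering of its cells are illustrative assumptions.

```python
import numpy as np

# Filled triangle: vertices 0,1,2; edges (0,1),(0,2),(1,2); one 2-cell.
# Over GF(2), entry [i,j] of B_q is 1 iff cell i is a face of cell j.
B1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]], dtype=np.uint8)  # rows: vertices, columns: edges
B2 = np.array([[1],
               [1],
               [1]], dtype=np.uint8)        # rows: edges, column: triangle

# The fundamental identity B_{q-1} B_q = 0 (here, mod 2)
assert not ((B1 @ B2) % 2).any()
```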
Most algorithms for computing persistent homology are based on computing a factorization of the filtered boundary matrix (meaning the rows and columns of $B_q$ are arranged in the order of appearance of cells in the filtration):
\begin{equation}\label{eq:factorization_in_q}
B_q U_q = R_q,
\end{equation}
where $U_q$ is upper-triangular and $R_q$ is \emph{reduced}, which means that it has unique \emph{low pivots}, i.e. the index of the last non-zero row of each column (if it exists) is unique. The computation of $R_q$ is implicit in the early work of Edelsbrunner, Letscher, and Zomorodian \cite{edelsbrunner2000topological}; an explicit algorithm and analysis for $R_q$ was given by Zomorodian and Carlsson \cite{ZCComputingPH2005}, and then a factorization viewpoint was introduced by Cohen-Steiner, Edelsbrunner, and Morozov \cite{vinesvineyards06} when developing a scheme for updating persistent homology, which is also the starting point for this work.
We can obtain the persistence information from the factorization in \Cref{eq:factorization_in_q} for each dimension $q$. Only $R_q$ is needed to read off persistent homology: a $q$-dimensional homology class is born when a cell is added that generates a zero column in $R_q$, and this class dies when the index of the birth cell is the pivot of a column of a cell in $R_{q+1}$ \cite{ZCComputingPH2005}. The formation of $U_q$ is only necessary if one wishes to obtain a representative for the homology class, or, as we shall see, update the factorization. A variety of optimizations have been developed for efficient computation of persistent homology which are incompatible with the formation of $U_q$, particularly the clearing \cite{chenPersistentHomologyComputation2011, desilvaDualitiesPersistentCo2011} and compression \cite{bauerClearCompressComputing2014} optimizations which are used by state-of-the-art implementations for computing persistent homology \cite{Eirene16, Ripser19}. Other practical accelerations for persistent homology include the use of discrete Morse theory \cite{mischaikowMorseTheoryFiltrations2013} and efficient data structures \cite{boissonnatSimplexTreeEfficient2014a}.
\paragraph{Motivations} Our work is motivated by several applications in topological data analysis. First, in exploratory data analysis, one may wish to compute the persistent homology of geometric filtrations (i.e. built using pairwise distances) on point-cloud data. Sometimes several constructions and metrics may be considered, and there may be large amounts of redundant computation done processing the same region for each choice. Second, in a variety of data analysis problems persistent homology is computed as a feature for each datum in a data set \cite{Dey2017ImprovedIC, garin2019topological_Classification_of_MNIST, Asaad2017TDA_Image_Tampering, Bae2017BeyondDR, Cang2018IntegrationOE, Qaiser2019FastAA}. Often there is a shared structure which we might expect to exploit. Finally, recent work using persistent homology in gradient-based optimization \cite{rickardCNN2019, topologyLayerMachine2020, carrierePersLayNeuralNetwork2020, chenTopologicalRegularizerClassifiers2018, kimEfficientTopologicalLayer2020, leygonieFrameworkDifferentialCalculus2019} produces a situation where a topological space undergoes relatively minor modifications in each gradient step. We seek to be able to reuse computation to the largest extent possible.
Theoretical stability results for persistent homology imply that minor modifications to a space will not cause large changes in persistence. As established in \cite{cohen-steiner_stability_2005}, the bottleneck distance between two persistence diagrams (multisets of points in the extended plane encoding persistence information) built from two continuous and tame functions is bounded by the $L_\infty$ difference of the two functions. In other words, given a filtration induced by a function $f$, if we slightly perturb or modify its values on some cells, the persistence diagram will not change much. This result also suggests that a scheme for updating persistence is reasonable to develop.
\paragraph{Warm Starts:} The idea of simply updating the factorization in \Cref{eq:factorization_in_q} for a series of iterated problems is related to a variety of similar techniques in sparse numerical linear algebra and numerical optimization to update $LU$ factorizations \cite{SNOPT2005, gill1987, reid1982, saundersLUSOLSparseLU}. Our goal is to re-use a previous computation to the largest extent possible, known as a ``warm start'' to the problem.
\paragraph{Contributions} In this work we provide algorithms to compute persistent homology of one filtration starting from the persistent homology of another by updating the associated matrix factorizations. We analyze the complexity of this update in terms of how close the two filtrations are to each other, namely in terms of the number of cells added and deleted from the filtration and in terms of how the filtrations' order changes. This approach generalizes the earlier work of Cohen-Steiner, Edelsbrunner, and Morozov \cite{vinesvineyards06} to include addition and removal of cells from a filtration, and includes an analysis that can be applied to general updates beyond elementary permutations. We provide several examples of how our techniques provide practical speedups for both level set and geometric filtrations, and our implementations are made publicly available at
\url{https://github.com/YuanL12/TDA_Updating_Persistence}.
\section{Algorithms and Analysis}
\subsection{Matrix Reduction}
\paragraph{Notation:} We denote column $j$ of a matrix $A$ as $A[j]$, entry $i$ of a (column) vector $v$ as $v[i]$, and the entry in row $i$ and column $j$ of a matrix $A$ as $A[i,j]$. We say the (low) pivot of a column vector $v$, denoted $\piv(v)$ is the largest index $i$ such that the entry $v[i]$ is non-zero.
We first review the reduction algorithm for computing persistent homology \cite{ZCComputingPH2005} which we use and modify for our algorithms. We say a matrix $R$ is \emph{reduced} if every column $R[j]$ is either zero or has a unique pivot. The reduction algorithm, \Cref{alg:reduction}, produces a reduced matrix $R$ from an input matrix $B$ using elementary column operations that preserve the grading of columns (meaning column $j'$ can be added to column $j$ only if $j'<j$), which means the transformation can be encoded using an invertible upper triangular matrix $U$
\begin{equation}\label{eq:RU_decomposition}
BU = R.
\end{equation}
\Cref{eq:RU_decomposition} can be re-written as a factorization $B = RU^{-1}$, and we refer to this as a $RU$-decomposition of $B$. This is equivalent to the terminology used in \cite{vinesvineyards06} up to inversion of the matrix $U$.
\begin{algorithm}
\caption{Reduction Algorithm \cite{ZCComputingPH2005}}
\label{alg:reduction}
\begin{algorithmic}[1]
\State \textbf{Input:} $m \times n$ matrix $B$ and $n \times n$ matrix $U$ (default: $U = I_n$)
\State \textbf{Result:} Decomposition $BU = R$
\State $R=B$
\For{ $j = 1,...,n$}
\While{$\piv(R[j]) > 0$ and there exists $j' < j$ such that $i = \piv(R[j]) = \piv(R[j'])$}
\State $\alpha = R[i, j]/R[i, j']$
\State $R[j] = R[j] - \alpha R[j']$
\State $U[j] = U[j] - \alpha U[j']$
\EndWhile
\EndFor
\State \textbf{return} $R$, $U$
\end{algorithmic}
\end{algorithm}
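To make the reduction concrete, the following is a minimal Python sketch of \Cref{alg:reduction} over GF(2), with each column stored as a set of its nonzero row indices (a simplification for illustration; the algorithm works over any field, and the implementations discussed later use a list-of-lists structure):

```python
def piv(col):
    """Low pivot: the largest nonzero row index of a column (None if zero)."""
    return max(col) if col else None

def reduce_matrix(B, U=None):
    """Standard reduction over GF(2): columns are sets of row indices.
    Returns (R, U) with B * U = R and U upper triangular."""
    n = len(B)
    R = [set(c) for c in B]
    U = [set(c) for c in U] if U is not None else [{j} for j in range(n)]
    low_of = {}                       # pivot row -> index of column owning it
    for j in range(n):
        while R[j] and piv(R[j]) in low_of:
            jp = low_of[piv(R[j])]    # earlier column with the same pivot
            R[j] ^= R[jp]             # column addition over GF(2) (alpha = 1)
            U[j] ^= U[jp]             # mirror the operation on U
        if R[j]:
            low_of[piv(R[j])] = j
    return R, U

# Boundary of a filtered triangle: vertices a, b, c, then edges ab, bc, ca.
B1 = [{0, 1}, {1, 2}, {0, 2}]
R1, U1 = reduce_matrix(B1)
# The column of ca reduces to zero: an H1 class is born when ca enters.
```

In this toy run, the third column reduces to zero and $U$ records that the cycle $ab + bc + ca$ is its representative.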
Computing this decomposition for each boundary matrix in a filtered chain complex yields the persistence barcode: a new $q$-dimensional bar is born for each column that is reduced to zero in $R_q$, and this bar dies when the same column index appears as the pivot of a column in $R_{q+1}$ \cite{ZCComputingPH2005}.
We will now examine the role of $U$ in the algorithm.
First, \Cref{alg:reduction} performs identical column operations on $U$ and $R$, so if we pass an argument $U$ different from $I_n$ into the reduction algorithm, the decomposition of the input $B$ will no longer hold, but $U$ will still record the column operations performed, which will be useful in our work.
Second, unless a representative of each homology class must be produced (e.g., for visualization), the matrix $U$ that provides this representative information is often not formed in practice. In that case, line 8 of \Cref{alg:reduction}, which performs the column operations on $U$, can be omitted and the algorithm does not need $U$ at all, which allows for optimizations such as clearing and compression \cite{bauerClearCompressComputing2014, chenPersistentHomologyComputation2011, desilvaDualitiesPersistentCo2011} based on the key observation that $\piv R_{q+1}[j] = i$ implies $R_q[i] = 0$.
Third, as mentioned earlier, columns are only ever added left-to-right, which keeps $U$ upper-triangular and invertible (provided the initial argument $U$ is upper-triangular and invertible). This corresponds to the fact that, at an intermediate complex of a filtration, only cells that have already appeared in the inclusion can be used to compute homology or homology representatives.
Finally, the decomposition $BU = R$ is not generally unique, as we can add columns with lower pivot or zero columns to columns with higher pivot and higher index without altering the pivots in $R$ or the upper triangular structure of $U$. This observation is related to the non-uniqueness of homology representatives.
A straightforward complexity analysis of \Cref{alg:reduction} yields a run time of $O(mn\max{(m,n)})$, or cubic in the number of cells \cite{ZCComputingPH2005}. The decomposition can be obtained in matrix-multiplication time with more complex algorithms \cite{ZZmatmultime2011}, but practical implementations use variants of the standard algorithm and sparsity typically makes asymptotic run time bounds pessimistic in practice \cite{otterRoadmapComputationPersistent2017}.
\subsection{Permuting Filtration Order}\label{sec:perm_update}
Assuming we have computed the decompositions $B_q U_q = R_q$, $q=0,1,2,\dots$, of boundary matrices of a filtered cell complex, we would like to update this decomposition to compute persistent homology of the same cell complex with a different filtration order. If $B_q'$ is the boundary of this new filtration, then
\begin{equation}
B_q' = P_{q-1} B_q P_q
\end{equation}
where $P_{q-1}$ and $P_q$ are permutations of the orderings of the $(q-1)$-cells and $q$-cells, respectively. We can then modify the decomposition of $B_q$:
\begin{align}
P_{q-1} B_q P_q P_q^T U_q &= P_{q-1} R_q\\
B_q' P_q^T U_q &= P_{q-1} R_q
\end{align}
where we use the identity $P_q P_q^T = I$. There are two obstacles that we must overcome to produce a decomposition $B_q' U_q' = R_q'$: first, $P_q^T U_q$ is not upper triangular, and second $P_{q-1} R_q$ may no longer have unique column pivots. We achieve the decomposition using \Cref{alg:perm_update}.
\begin{algorithm}
\caption{Update Decomposition with Permutation}
\label{alg:perm_update}
\begin{algorithmic}[1]
\State \textbf{Input:} $m \times n$ matrix $R$, $n \times n$ matrix $U$, $m \times m$ matrix $P_r$, and $n \times n$ matrix $P_c$
\State \textbf{Result:} Factorization $B' U' = R'$ where $B' = P_r R U^{-1} P_c$
\State $U = P_c^{T} U$
\State $R=P_r R$
\State $U, R$ = Reduction Algorithm($U$, $R$) \Comment{reduce $U$ using \Cref{alg:reduction}}
\For{ $j = 1,...,n$} \Comment{make $U$ upper-triangular}
\If {$\operatorname{piv}(U[j]) \neq j$ and $j'$ is the column index with $\operatorname{piv}(U[j^{'}]) = j$}
\State Swap column $U[j]$ and $U[j^{'}]$
\State Swap column $R[j]$ and $R[j^{'}]$
\EndIf
\EndFor
\State $R, U$ = Reduction Algorithm($R$, $U$) \Comment{reduce $R$ using \Cref{alg:reduction}}
\State \textbf{return} $R, U$
\end{algorithmic}
\end{algorithm}
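For intuition, here is a small self-contained Python sketch of \Cref{alg:perm_update} over GF(2), with columns stored as sets of row indices and permutations given as lists mapping new positions to old indices. This illustrates the logic of the algorithm rather than the list-of-lists implementation whose cost is analyzed later:

```python
def piv(col):
    return max(col) if col else None

def reduce_pair(M, C):
    """Reduce M left-to-right over GF(2), mirroring each column op on C."""
    low_of = {}
    for j in range(len(M)):
        while M[j] and piv(M[j]) in low_of:
            jp = low_of[piv(M[j])]
            M[j] ^= M[jp]
            C[j] ^= C[jp]
        if M[j]:
            low_of[piv(M[j])] = j
    return M, C

def remap_rows(M, old_to_new):
    return [{old_to_new[i] for i in c} for c in M]

def update_decomposition(R, U, row_order, col_order):
    """Given B U = R, return (R', U') with B' U' = R', where B' is B with
    rows reordered by row_order and columns by col_order (new -> old)."""
    old_to_new_r = {old: new for new, old in enumerate(row_order)}
    old_to_new_c = {old: new for new, old in enumerate(col_order)}
    # rows of U are indexed by columns of B, so apply the column reindexing
    U = remap_rows(U, old_to_new_c)
    R = remap_rows(R, old_to_new_r)
    U, R = reduce_pair(U, R)                  # reduce U, mirroring ops on R
    pos = {piv(U[j]): j for j in range(len(U))}
    order = [pos[j] for j in range(len(U))]   # column with pivot j -> slot j
    U = [U[j] for j in order]                 # make U upper triangular
    R = [R[j] for j in order]
    R, U = reduce_pair(R, U)                  # reduce R, mirroring ops on U
    return R, U

# Example: reverse the order of the three edges of a filtered triangle.
B = [{0, 1}, {1, 2}, {0, 2}]          # boundaries of edges ab, bc, ca
R = [{0, 1}, {1, 2}, set()]           # an RU-decomposition B * U = R
U = [{0}, {1}, {0, 1, 2}]
Rp, Up = update_decomposition(R, U, [0, 1, 2], [2, 1, 0])
```

After the update, $U'$ is upper triangular, $R'$ is reduced, and the decomposition holds against the column-reversed boundary matrix.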
\Cref{alg:perm_update} serves a similar purpose to the algorithm of Cohen-Steiner, Edelsbrunner, and Morozov \cite{vinesvineyards06}, which breaks a permutation into a sequence of elementary transpositions and applies updates based on one of four cases. In comparison, \Cref{alg:perm_update} is simpler to state, and its use of the standard reduction algorithm makes implementation straightforward in a package that already computes persistent homology. The algorithm of \cite{vinesvineyards06} uses a more specialized data structure that allows row swaps in constant time, which yields a better complexity bound in the case of elementary transpositions. While \Cref{alg:perm_update} could be adapted to use this optimization, the applications we consider typically permute enough of the matrix that there is no disadvantage to using whatever matrix data structure is already used for \Cref{alg:reduction}, typically a list-of-lists.
\paragraph{Proof of correctness:} Through line 4, \Cref{alg:perm_update} simply applies the requisite permutations to $U$ and $R$ to produce $B'U = R$. This decomposition invariant is maintained by applying the same column operations to $U$ and $R$ through the rest of the algorithm, since \Cref{alg:reduction} performs identical column operations, and the for loop beginning on line 6 also performs identical column operations. If $U$ is already upper-triangular, then the call to \Cref{alg:reduction} on line 12 will keep $U$ upper-triangular and reduce $R$. Thus, it suffices to show that lines 5--11 put $U$ in upper-triangular form. Note that the call to \Cref{alg:reduction} in line 5 reverses the inputs $U$ and $R$ compared to the call in line 12, so it has the effect of putting $U$ in reduced form. Because $U$ is invertible, after reduction no columns will be zeroed out and every column will have a distinct pivot. Because $U$ is square, there must be $n$ distinct pivots for the $n$ columns, so every row appears as a pivot. In lines 6--11 we then simply permute the columns of the reduced $U$ so that the column with pivot $j$ is put in the $j$-th position, which puts $U$ in upper-triangular form by definition. Denoting the output of \Cref{alg:perm_update} as $R', U'$, we conclude that \Cref{alg:perm_update} produces a valid decomposition $B'U' = R'$ where $U'$ is upper-triangular and $R'$ is reduced. $\qedsymbol$
\paragraph{Complexity} A trivial upper bound for the run time of \Cref{alg:perm_update} comes from the calls to \Cref{alg:reduction}. However, a tighter bound can be obtained based on how greatly the permutations change the filtration order.
\begin{theorem}
\Cref{alg:perm_update} performs the update in
\begin{equation}\label{eq:perm_update_bound}
O((m + n)(|P_r|_\tau + |P_c|_\tau) + \nnz(U) + \nnz(R))
\end{equation}
field operations, where $|P|_\tau$ is the Kendall tau distance between the permutation $P$ and the identity permutation \cite{diaconisGroupRepresentationsProbability1988}.
\end{theorem}
A proof is given in \Cref{sec:perm_complexity}. Practically, this means we expect an advantage from \Cref{alg:perm_update} when filtration values are not changed too drastically, and since $|P_r|_\tau =O(m^2)$ and $|P_c|_\tau = O(n^2)$ the algorithm is also worst-case cubic in the number of cells, comparable to \Cref{alg:reduction} albeit with a worse scaling constant. Note that we cannot expect to do better than this since we use the standard reduction algorithm as a subroutine.
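For reference, the Kendall tau distance $|P|_\tau$ appearing in the bound is simply the number of inversions of the permutation. A minimal quadratic-time sketch (a merge-sort-based count achieves $O(n \log n)$, but this direct version suffices for illustration):

```python
def kendall_tau(p):
    """Number of inversions of p, i.e. the Kendall tau distance between
    the permutation p (given as a list of images) and the identity."""
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
```

The identity has distance 0, and the full reversal of $n$ elements attains the maximum $n(n-1)/2$, matching the worst-case cubic behavior noted above.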
\subsection{General Updates}\label{sec:general_update}
Permutation of filtration order is sufficient for applications such as computing persistent homology of different super-level set filtrations on a fixed complex. However, we may also wish to insert and delete cells in a filtration. One such example is the Rips filtration, a useful tool for computing persistent homology of a metric space, where the persistence parameter is often truncated at the enclosing radius of the metric space for a practical speedup \cite{Eirene16, Ripser19}.
Suppose we have filtrations $F$ and $F'$ and wish to compute the decomposition $B_q' U_q' = R_q'$ for $F'$ starting from the decomposition $B_q U_q = R_q$. We first compute $I_q$ and $I_{q-1}$: the sets of cell indices which will be deleted from $F$ in dimensions $q$ and $q-1$; then $I_q'$ and $ I_{q-1}'$: the sets of cell indices which will be inserted into $F'$ in dimensions $q$ and $q-1$; and finally $P_q$ and $P_{q-1}$: the permutations of filtration order on the $q$ and $q-1$ cells that are present in both $F$ and $F'$. The key observations for our procedure are that in the context of the matrix decomposition $BU = R$,
\begin{enumerate}
\item\label{it:delete} Cells at the end of a filtration are trivial to remove without altering the upper-triangular structure of $U$ and the reduced structure of $R$;
\item\label{it:insert} Cells can be inserted in arbitrary locations without altering the upper-triangular structure of $U$.
\end{enumerate}
Observation \ref{it:delete} holds because if a $q$-cell is the final cell in a filtration, then its column in $B_q$ is furthest right and so is never used to reduce any other column in the $RU$-decomposition. Furthermore, its row in $B_{q+1}$ is the last row and is entirely zero, since the final cell cannot appear as a face of any cell in a valid filtration (faces must appear before the cells they bound). In contrast, deleting rows and columns in the middle of the filtration requires us to update columns to the right which use the deleted column in their reduction. Observation \ref{it:insert} is easy to see, since adding columns to the boundary $B_q$ (and thus rows and columns to $U_q$) does not invalidate the upper-triangular structure of $U_q$, although a final pass of \Cref{alg:reduction} is required to ensure $R_q$ and $R_{q+1}$ are reduced. We incorporate these observations into \Cref{alg:gen_update}, which generalizes \Cref{alg:perm_update}.
\begin{algorithm}
\caption{Update Decomposition with Permutation, Insertion, and Deletion}
\label{alg:gen_update}
\begin{algorithmic}[1]
\State \textbf{Input:} $m \times n$ matrix $R$, $n \times n$ matrix $U$, collections of row and column indices to delete $I_r$ and $I_c$, collections of row and column indices to insert $I_r'$ and $I_c'$ together with the columns $B_c'$ to be inserted, and an $m \times m$ matrix $P_r$ and $n \times n$ matrix $P_c$ which permute the rows and columns that are neither inserted nor deleted.
\State \textbf{Result:} Factorization $B' U' = R'$ incorporating updates
\State Form permutation matrices $Q_r$ and $Q_c$ that act as $P_r$ and $P_c$ on rows and columns which survive deletion and permute $I_r$ and $I_c$ to the end.
\State $U = Q_c^{T} U$
\State $R=Q_r R$
\State $U, R$ = Reduction Algorithm($U$, $R$) \Comment{reduce $U$ using \Cref{alg:reduction}}
\For{ $j = 1,...,n$} \Comment{make $U$ upper-triangular}
\If {$\operatorname{piv}(U[j]) \neq j$ and $j'$ is the column index with $\operatorname{piv}(U[j^{'}]) = j$}
\State Swap column $U[j]$ and $U[j^{'}]$
\State Swap column $R[j]$ and $R[j^{'}]$
\EndIf
\EndFor
\State Delete the last $|I_c|$ rows and columns from $U$, and the last $|I_r|$ rows and $|I_c|$ columns from $R$.
\State Insert zero rows at locations $I_r'$ in $R$
\State Insert columns $B'_c$ in locations specified by $I_c'$ in $R$
\State Insert rows and columns specified by $I_c'$ in $U$ which act as the identity.
\State $R, U$ = Reduction Algorithm($R$, $U$) \Comment{reduce $R$ using \Cref{alg:reduction}}
\State \textbf{return} $R, U$
\end{algorithmic}
\end{algorithm}
\paragraph{Proof of correctness:} The discussion in \Cref{sec:perm_update} applies here as well; the key modifications are the insertion and deletion of rows and columns of the matrices. From the discussion above, it is clear that by permuting the rows and columns to be deleted to the end of the matrices and then updating $U$ to be upper-triangular, we are free to delete these rows and columns in line 13 with no additional considerations while maintaining the decomposition invariant. After this, insertion of rows and columns in lines 14--16 maintains the decomposition invariant and keeps $U$ upper-triangular, so the final call to \Cref{alg:reduction} on line 17 puts $R$ in reduced form and gives the desired result. $\qedsymbol$
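As a sanity check of observation \ref{it:delete}, the following toy Python snippet (over GF(2), with columns stored as sets of row indices; an illustration, not the actual implementation) verifies that truncating the final cell of a filtered triangle preserves the decomposition invariant:

```python
def piv(c):
    return max(c) if c else None

# RU-decomposition B * U = R of the filtered triangle: vertices a, b, c,
# then edges ab, bc, ca, in that order.
B = [{0, 1}, {1, 2}, {0, 2}]
R = [{0, 1}, {1, 2}, set()]
U = [{0}, {1}, {0, 1, 2}]

# Observation 1: drop the final cell (ca) by truncating the last column of
# B and R and the last row and column of U.
B2, R2 = B[:2], R[:2]
U2 = [{i for i in c if i < 2} for c in U[:2]]

for j in range(2):                    # the invariant B2 * U2 = R2 survives
    col = set()
    for k in U2[j]:
        col ^= B2[k]
    assert col == R2[j]
assert all(piv(U2[j]) == j for j in range(2))   # U2 still upper triangular
```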
\paragraph{Complexity} Again, we are interested in a tighter-than-cubic bound for \Cref{alg:gen_update}; we defer the proof to \Cref{sec:general_complexity}.
\begin{theorem}
\Cref{alg:gen_update} performs the update in
\begin{equation}
O(\nnz(U) \log n + \nnz(R) \log m + (m + n)(|Q_c|_\tau + |Q_r|_\tau) + m \max(m,n)|I_c'|)
\end{equation}
field operations, where $|Q|_\tau$ is the Kendall tau distance between the permutation $Q$ and the identity permutation.
\end{theorem}
\section{Computational Complexity}
We will now analyze the computational complexity of \Cref{alg:perm_update,alg:gen_update}. We use $\nnz(A)$ to denote the number of non-zeros in a column or matrix $A$. In this section, we assume that matrices are stored in a list-of-lists format, as is standard in implementations of persistent homology: a list of columns, each of which is stored as a list of (index, value) pairs for its nonzero entries. This format is optimized for the column operations which take the most time in the standard reduction algorithm. In our update schemes, the first two steps permute the rows of a matrix, which can be inefficient for very small permutations. One option when dealing with very minor perturbations may be to use a more specialized data structure as in \cite{vinesvineyards06}, but in a variety of cases the standard list-of-lists format is a reasonable option.
\subsection{Permuting Filtrations}\label{sec:perm_complexity}
Let $B$ be a $m\times n$ matrix with $BU = R$, and $P_r$, $P_c$ be row and column permutations so that $B' = P_r B P_c$.
Since we use a list-of-lists format, applying a row permutation to $U$ and $R$ generally requires us to alter each non-zero index and re-sort each column by index (since the indices in each column are stored in increasing order). For a row permutation applied to an $m\times n$ matrix $A$, altering each nonzero index takes $\nnz(A)$ time, and sorting the indices of the non-zeros in each column using standard algorithms takes
$O(\sum_{j= 1}^n \nnz(A[j]) \log \nnz(A[j]))$ operations. Bounding $\nnz(A[j])$ by $m$ gives $O(\sum_{j=1}^n \nnz(A[j]) \log m) = O(\nnz(A) \log m)$. Thus, applying the row permutations in \Cref{alg:perm_update} takes $O(\nnz(U) \log n + \nnz(R) \log m)$.
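This step can be sketched directly (a hypothetical helper for illustration, assuming each column is a sorted list of (index, value) pairs):

```python
def permute_rows_lil(cols, old_to_new):
    """Apply a row permutation to a list-of-lists sparse matrix: remap each
    nonzero index, then re-sort each column at O(nnz(col) log nnz(col)) cost."""
    out = []
    for col in cols:
        remapped = [(old_to_new[i], v) for (i, v) in col]
        remapped.sort()    # restore the increasing-index invariant
        out.append(remapped)
    return out

# A 3 x 2 example: cycle the three rows, 0 -> 1, 1 -> 2, 2 -> 0.
A = [[(0, 1.0), (2, 1.0)], [(1, 1.0)]]
A_perm = permute_rows_lil(A, [1, 2, 0])
```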
Next, we analyze the complexity of reducing $U$ after the row permutation of \Cref{alg:perm_update}. We first give a lemma, using the following notation: for an $m \times n$ matrix $M$, we say row $i$ has $k$ pivots if there are $k$ columns in $M$ that have $i$ as their pivot, i.e., $|\{j \mid \piv (M[j])=i\}| = k$. We also use \emph{elementary transposition} to denote a swap of two adjacent elements of a list.
\begin{lemma}\label{lem:pivot_pair}
For an $m \times n$ matrix $M$, suppose row $i$ has $k$ pivots and row $i+1$ has $l$ pivots. If we swap rows $i$ and $i+1$, then row $i$ will have at most $l$ pivots and row $i+1$ will have at most $k+l$ pivots. We say the adjacent swap maps the pivot pair $(k,l)$ to $(l, k+l)$.
\end{lemma}
\begin{proof}
We consider the change in the number of pivots in each row in terms of the pivot change of each column; there are three cases to discuss. Denote the matrix after the adjacent swap by $M'$.
\begin{enumerate}
\item If $\piv(M[j]) < i$ or $\piv(M[j]) > i+1$, then after swapping, the column $j$ has no effect on the number of pivots that row $i$ and $i+1$ have.
\item \label{lem:pivot_pair:case 2} If $\piv(M[j]) = i$, then after swapping, $\piv(M'[j]) = i+1$, so row $i$ has one fewer pivot and row $i+1$ has one more pivot.
\item \label{lem:pivot_pair:case 3} If $\piv(M[j]) = i+1$, then after swapping, depending on whether $M[i,j] = 0$, the pivot of column $j$ will be either $\piv(M'[j]) = i$ or $\piv(M'[j]) = i+1$. In particular, if $M[i,j] = 0$, then swapping rows $i$ and $i+1$ gives $\piv(M'[j]) = i$, so row $i$ gains a pivot and row $i+1$ loses one; if $M[i,j] \neq 0$, then the swap keeps $\piv(M'[j]) = i+1$, and the pivot counts of rows $i$ and $i+1$ are unchanged. Thus, as an upper bound, column $j$ does not increase the number of pivots of row $i+1$, but may give row $i$ one more pivot.
\end{enumerate}
Applying the above three cases to each column and summing the pivot changes over all columns, the adjacent swap of rows $i$ and $i+1$ ensures that:
\begin{enumerate}
\item row $i$ has at most $l$ pivots, because the original $k$ pivots move to row $i+1$ (by case \ref{lem:pivot_pair:case 2}) and at most $l$ new pivots come from row $i+1$ (by case \ref{lem:pivot_pair:case 3});
\item row $i+1$ has at most $k+l$ pivots, because the original $l$ pivots can remain (by case \ref{lem:pivot_pair:case 3}) and the $k$ new pivots come from row $i$ (by case \ref{lem:pivot_pair:case 2}).
\end{enumerate}
\end{proof}
Note that \Cref{lem:pivot_pair} makes no assumption on the matrix structure, such as being reduced or upper-triangular.
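The bounds in \Cref{lem:pivot_pair} are easy to check numerically on random sparse matrices. A small self-contained script (columns stored as sets of row indices, as in a GF(2) setting):

```python
import random

def piv(col):
    return max(col) if col else None

def pivots_in_row(cols, i):
    """Count how many columns have row i as their pivot."""
    return sum(1 for c in cols if c and piv(c) == i)

def swap_rows(cols, i):
    """Swap rows i and i+1 of a matrix whose columns are sets of indices."""
    out = []
    for c in cols:
        d = set(c)
        a, b = i in d, i + 1 in d
        d.discard(i)
        d.discard(i + 1)
        if b:
            d.add(i)
        if a:
            d.add(i + 1)
        out.append(d)
    return out

random.seed(0)
for _ in range(200):
    m, n = 6, 8
    cols = [{r for r in range(m) if random.random() < 0.4} for _ in range(n)]
    i = random.randrange(m - 1)
    k, l = pivots_in_row(cols, i), pivots_in_row(cols, i + 1)
    swapped = swap_rows(cols, i)
    assert pivots_in_row(swapped, i) <= l          # (k, l) -> at most l
    assert pivots_in_row(swapped, i + 1) <= k + l  # and at most k + l
```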
For an $n \times n$ upper-triangular (and hence, by definition, reduced) matrix $U$, each row has exactly one pivot. However, after we apply a row permutation $P_c^T$ to it (line 3 of \Cref{alg:perm_update}), the pivots of the columns of $P_c^T U$ might no longer be unique, which means we need to eliminate the new duplicate pivots. To count the number of column operations needed to reduce $P_c^T U$ by the standard reduction \Cref{alg:reduction}, our approach is to loop inductively over the rows, starting from the last, counting the number of duplicate pivots in each row and then reducing them.
Note that this approach is very similar to bubble sort, in that each step only looks at one row above. We view the permutation as a series of \textit{passes} of elementary transpositions, each pass moving one row to its new index in $P_c^T U$. For each pass, let $\pi_c(i)$ be the index, before the pass, of the row that will be moved to row $i$ of $P_c^T U$; this requires $i - \pi_c(i) = |i - \pi_c(i)|$ elementary transpositions.
Then, the number of duplicate pivots needed to be eliminated will also be at most $|i-\pi_c(i)|$, by \Cref{prop:row_duplicate_pivots_upperbd}.
\begin{proposition}\label{prop:row_duplicate_pivots_upperbd}
Let $U$ be an $n\times n$ upper-triangular matrix and $P_c$ be an $n\times n$ permutation matrix. Then, for $P_c^T U$, the number of duplicate pivots needed to be eliminated in row $i$ is at most $|i-\pi_c(i)|$.
\end{proposition}
\begin{proof}
Starting from the last row we will count the number of duplicate pivots to be reduced.
First, we count the number of duplicate pivots in the last row $n$. We use \Cref{tab:pivots proof} to illustrate the change of pivots in rows from $\pi_c(n)$ to $n$.
For an upper triangular matrix $U$, the number of pivots in each row is one, which corresponds to the initial state row of \Cref{tab:pivots proof}.
Next, we swap row $\pi_c(n)$ with row $\pi_c(n)+1$; by \Cref{lem:pivot_pair}, the pivot pair of rows $\pi_c(n)$ and $\pi_c(n)+1$ changes from $(1,1)$ to $(1,2)$, which corresponds to the second row of \Cref{tab:pivots proof}. We then continue to perform elementary transpositions until reaching the last row $n$. In the end, every row other than row $n$ has one pivot, while row $n$ has at most $n-\pi_c(n)+1$ pivots, and so has at most $|n-\pi_c(n)|$ duplicate pivots. Now, to eliminate these duplicate pivots, we add the leftmost column with pivot $n$ to the columns to its right; the number of non-zeros in that column is at most $n$, so reducing row $n$ takes $O(n|n-\pi_c(n)|)$ operations.
Next, assume we have finished the \textit{passes} for rows $i+1,i+2,\dots,n$ and eliminated all duplicate pivots. Then, at the beginning of the pass permuting row $\pi_c(i)$ to $i$, $U$ is again reduced; that is, we return to the initial state in \Cref{tab:pivots proof} with unique pivots. To count the number of pivots in row $i$, since $\pi_c(i) < i$, the procedure is the same as in \Cref{tab:pivots proof}.
\end{proof}
\begin{table}[h]
\centering
\begin{tabular}{c || c c c c c c}
Pivots Map & $\pi_c(n)$ & $\pi_c(n)+1$ & $\pi_c(n)+2$ & ... & $n-1$ & $n$ \\
initial state & 1 & 1 & 1 &...& 1 & 1 \\
$(1,1)\rightarrow (1,2)$ & 1 & 2 & 1 &...& 1 & 1 \\
$(1,2)\rightarrow (1,3)$ & 1 & 1 & 3 &...& 1 & 1 \\
...&...&...&...&...&...&...\\
$(1,n -\pi_c(n)-1)\rightarrow (1,n -\pi_c(n))$ & 1 & 1 & 1 &...& $n -\pi_c(n)$ & 1\\
$(1,n -\pi_c(n))\rightarrow (1,n -\pi_c(n)+1)$ & 1 & 1 & 1 &...& 1 & $n -\pi_c(n) +1$\\
\end{tabular}
\caption{The upper bound of the number of pivots of each row after each elementary transposition. Every row records the number of pivots after a pivot map introduced in \Cref{lem:pivot_pair}.}
\label{tab:pivots proof}
\end{table}
\begin{corollary} \label{complexity of reducing row permuted U}
The total number of duplicate pivots needed to be eliminated over all rows is at most $ \sum_{i=1}^n |\pi_c(i) - i| = |P_c|_\tau$, where $|P_c|_\tau := d(P_c, I)_{K-\tau}$ is the Kendall tau distance from $P_c$ to the identity permutation. To reduce $P_c^T U$, we also incur an additional cost of $O(m |i - \pi_c(i)|)$ for row $i$, from the same operations performed on $R$. Thus,
the total cost of reducing $P_c^T U$ is
$$
O(\sum_{i=1}^n (i+m)|\pi_c(i) - i| )= O((n+m)\sum_{i=1}^n |\pi_c(i) - i|)=
O((n+m)|P_c|_\tau).
$$
\end{corollary}
After reducing the permuted $U$, we need to permute the columns of $U$ to make it upper-triangular. Permuting the columns of $U$ and $R$ can be done in $O(\nnz(U) + \nnz(R))$ time, or $O(n)$ time in the case of a list-of-lists data structure, since we can simply permute column pointers.
Finally, we turn to the complexity of reducing the matrix $R$ in line 12 of \Cref{alg:perm_update}. Note that $R$ undergoes three multiplications before being reduced:
$P_rR \tilde{U} \tilde{P}$, where $\tilde{U}$ comes from the reduction of $P_c^TU$ and $\tilde{P}$ comes from the column permutation. In $P_r R \tilde{U} \tilde{P}$, we only need to consider $P_r$ and $\tilde{U}$, because the column permutation does not affect the row structure. Reversing the column operations stored in $\tilde{U}$ costs $O(m|P_c|_\tau)$, and then we need $O(m|P_r|_\tau)$ to reduce $R$, by viewing a reduced matrix as an upper-triangular matrix after column permutation, which returns us to the argument made for $U$ in \Cref{complexity of reducing row permuted U}. At the same time, we incur an additional cost of $O(n|P_r|_\tau)$ from the same operations performed on $U$ while reducing $R$.
Thus, adding all of the above costs together, we obtain
$$
O((m + n)(|P_r|_\tau + |P_c|_\tau) + \nnz(U) \log n + \nnz(R) \log m)
$$
operations for \Cref{alg:perm_update} to execute.
\subsection{Addition and Deletion of Cells}\label{sec:general_complexity}
We will now turn to an analysis of \Cref{alg:gen_update}. Again, we apply row permutations for a cost of $O(\nnz(U) \log n + \nnz(R) \log m)$, and it is straightforward to extend the analysis of \Cref{sec:perm_complexity} to the reduction of $U$ on line 6, for a cost of $O((m + n)(|Q_r|_\tau + |Q_c|_\tau))$. Again, the permutation of columns in lines 7--12 can be accomplished by swapping column pointers for a cost of $O(n)$.
Because we use a list-of-lists matrix data structure, deleting the final $|I_c|$ columns of $U$ and $R$ is trivial. Furthermore, because after deleting these columns of $U$ there are no non-zeros in its last $|I_c|$ rows, deleting these rows does not affect any entries of the remaining columns and can be done in constant time. Inserting zero rows in $R$ potentially requires us to modify all non-zero indices in its columns, so may cost $O(\nnz(R))$ operations. Inserting columns can be done in $O(n)$ time by inserting pointers, plus the time to form the columns of $B_c'$, which have $O(m |I_c'|)$ non-zeros.
Assuming the new boundary matrix $B'$ has size $m' \times n'$ after insertion, we now investigate the cost of reducing $R$ after inserting the $|I_c'|$ columns.
For each inserted column $j$ with pivot $\piv R'[j] = i$, there are two possible cases:
\begin{itemize}
\item if the pivot is not shared by any other columns, then there is no need to reduce this inserted column and $R$ is still reduced.
\item if the pivot is shared by another column, then row $i$ has two pivots, and we need to eliminate the one in the column with the larger index, which can in turn create a new duplicate pivot in a row above $i$. We keep eliminating duplicate pivots in columns with larger indices until a new unique pivot appears. There are at most $\max(m',n')$ pivots to iterate through in this way.
\end{itemize}
Since each column operation costs $O(m')$, inserting $|I_c'|$ columns adds $O(|I_c'| m' \max(m',n'))$ to the cost of reducing $R$.
To simplify notation, let $m, n$ be the respective maximum numbers of rows and columns of $B$ and $B'$; the total run time of \Cref{alg:gen_update} is then
\begin{equation}
O(\nnz(U) \log n + \nnz(R) \log m + (m + n)(|Q_c|_\tau + |Q_r|_\tau) + m \max(m,n)|I_c'|)
\end{equation}
Again, this bound is pessimistic due to sparsity in the matrices $U$ and $R$. In addition, note that if we update from an empty complex, then $|I_c'| = n$ and so $O(m \max(m,n)|I_c'|) = O(m n \max(m,n))$, which is the same as the bound of \Cref{alg:reduction}.
\section{Examples and Experiments}
We will now turn to demonstrating the use of our algorithms in several practical situations. Our implementation builds on the implementation found in the Basic Applied Topology Subprograms (BATS) \cite{factorizationView2019} (\url{https://github.com/bnels/BATS}), which provides a standard list-of-lists matrix data structure as well as a variety of options for computing persistent homology including the standard reduction algorithm as well as the clearing \cite{chenPersistentHomologyComputation2011,desilvaDualitiesPersistentCo2011} and compression \cite{bauerClearCompressComputing2014} optimizations. This allows us to compare to several algorithmic options without needing to worry too much about implementation-specific variation. We also compare to the more highly optimized Gudhi \cite{GUDHI15} and Ripser \cite{Ripser19} packages as well as the commonly used Dionysus library \cite{Dionysus2}. These packages are all comparable using Python bindings for compiled C++ code (for Ripser, we use the bindings at \url{https://ripser.scikit-tda.org}). Our timing results are computed using single processes on machines with Intel Xeon 6248R processors and 192GB of random access memory (memory requirements are typically much lower).
\subsection{Level Set Filtrations}\label{sec:levelset}
One common filtration used in topological data analysis is obtained through (super) level sets of a function on a topological space. Given a function $f:X\to \mathbb{R}$ we denote a super-level set as $X_a = f^{-1}([a, \infty))$, and we consider a filtration via the inclusions $X_a \subseteq X_b$ for $b < a$. An application of this type of filtration is to single-channel images, where an image is considered as a pixel intensity function on an $m\times n$ grid, which is extended to a filtration on a cubical complex or on a simplicial complex via the Freudenthal triangulation.
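To make super-level set persistence concrete, here is a minimal sketch of its 0-dimensional barcode on a graph, computed by union-find with the elder rule (an illustration of the filtration's meaning, not the matrix-based pipeline used in our experiments):

```python
def superlevel_h0(f, edges):
    """H0 barcode of the superlevel filtration of vertex values f on a graph.
    Returns (finite bars as (birth, death) pairs, births of essential bars)."""
    n = len(f)
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    parent = list(range(n))
    birth = [None] * n                      # birth value of a component root

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    bars, added = [], [False] * n
    for v in sorted(range(n), key=lambda i: -f[i]):   # decreasing value
        added[v] = True
        birth[v] = f[v]
        for u in nbrs[v]:
            if not added[u]:
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            if birth[ru] < birth[rv]:       # elder rule: older root survives
                ru, rv = rv, ru
            if birth[rv] != f[v]:           # skip zero-length bars
                bars.append((birth[rv], f[v]))
            parent[rv] = ru
    roots = {find(i) for i in range(n)}
    return bars, sorted(birth[r] for r in roots)

# A path graph with values 1, 3, 2, 4, 0: the local maximum at value 3 spawns
# a component that merges (dies) at the saddle value 2; value 4 lives forever.
bars, essential = superlevel_h0([1, 3, 2, 4, 0],
                                [(i, i + 1) for i in range(4)])
```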
We investigate level set persistence using several real and synthetic 2-dimensional image data sets:
\begin{enumerate}
\item \textbf{MNIST} \cite{lecun-mnisthandwrittendigit-2010}: A collection of handwritten digit images containing a training set of 60,000 examples and a test set of 10,000 examples. Each image is $28 \times 28$ pixels. By default we compute persistent homology of each image as an update of a pixel-wise averaged image of the same size.
\item \textbf{Vert-64}: A 3-dimensional rotational angiography scan of a head with an aneurysm, used for benchmarking persistent homology in \cite{otterRoadmapComputationPersistent2017}. This data set is a 3-dimensional array of size $512 \times 512 \times 512$, where each voxel is a grey-scale integer. We obtained the data set from the repository \cite{volvis}. In our experiments, we subsample the data to form a $64\times 64\times 64$ image due to the memory overhead of forming the basis $U$. Our update test perturbs the voxels by random noise with mean 0 and variance $0.01$.
\item \textbf{S2D($\sigma$)} (sinusoid-2D): A synthetic $128 \times 128$ image $A$ defined as $A[i,j] = \sin(10\pi i/128) + \cos(10\pi j/128)$. The updated image adds normally distributed random noise with mean 0 and variance $\sigma$.
\item \textbf{S3D($\sigma$)} (sinusoid-3D): A 3-dimensional analog of the S2D($\sigma$) data on a $32\times 32 \times 32$ cube. In this case, $A[i,j,k] = \sin(4\pi i/32) + \cos(4\pi j/32) + \sin(4\pi k/32)$.
\end{enumerate}
Persistent homology is often used as a feature generation technique for data. In the case of images, this requires computing persistent homology for each image in the data set, which can be a computational bottleneck, in part due to implementation and algorithmic complexity and in part due to the lack of the hardware acceleration enjoyed by more popular image processing techniques such as convolutions. We use the MNIST handwritten digit dataset as an example, as it readily admits an interpretation of topological features. For example, an image of the digit ``0'' typically has a robust connected component ($H_0$ bar) and a single robust hole ($H_1$ bar), although smaller bars may appear due to variations in pixel intensity (e.g., from variations in how hard a pen was pressed down when writing the digit, or from noise at different points in the digitization process).
\begin{table}[h]
\centering
\begin{tabular}{c||c|c|c|c||c}
& Extension & Build Filtration & Reduction & Update & Total\\
\hline
Full (BATS) & $5.5\times 10^{-4}$ & $3.4\times 10^{-4}$ & $2.7\times 10^{-3}$ & -- & $3.6\times 10^{-3}$\\
\hline
Image init. & $5.2\times 10^{-4}$ & -- & -- & $8.6\times 10^{-4}$ & $\mathbf{1.3\times 10^{-3}}$\\
\hline
Avg. init. & $5.1\times 10^{-4}$ & -- & -- & $1.1\times 10^{-3}$ & $1.6\times 10^{-3}$\\
\hline
Zero init. & $4.9\times 10^{-4}$ & -- & -- & $1.1\times 10^{-3}$ & $1.6\times 10^{-3}$\\
\hline
Noise init. & $5.2\times 10^{-4}$ & -- & -- & $1.9\times 10^{-3}$ & $2.4\times 10^{-3}$
\end{tabular}
\caption{Average time to compute persistent homology on 1000 MNIST images. The Extension column gives the time to extend the filtration on pixels to a filtration on the complex. Our update scheme improves the total time per image by an approximate factor of 3. Updating the factorization from an actual image is approximately 20\% faster than initializing with a constant image or the ``average image''. Updating the factorization from an image with random pixel values is slower, but still gives a speedup likely because of memory efficiency.}
\label{tab:mnist_features}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
& MNIST & Vert-64 & S2D(0.01) &S2D(0.1) & S3D(0.01) & S3D(0.1) \\
\hline
\hline
Dionysus &
$2.5\times 10^{-3}$ &
--&
$10\times 10^{-2}$ &
$14\times 10^{-2}$ &
$17\times 10^{-1}$ &
$17 \times 10^{-1}$\\
\hline
Gudhi(f) &
$3.8\times 10^{-3}$ &
--&
$11\times 10^{-2}$ &
$13\times 10^{-2}$ &
$16\times 10^{-1}$ &
$15 \times 10^{-1}$\\
\hline
BATS(f) &
$4.8\times 10^{-3}$ &
--&
$18\times 10^{-2}$ &
$23\times 10^{-2}$ &
$52\times 10^{-1}$ &
$70\times 10^{-1}$\\
\hline
BATS(cl,f) &
$2.0\times 10^{-3}$ &
--&
$8.2\times 10^{-2}$ &
$8.1\times 10^{-2}$ &
$15\times 10^{-1}$ &
$13 \times 10^{-1}$\\
\hline
BATS(u,f) &
$\mathbf{1.6\times 10^{-3}}$ &
--&
$\mathbf{5.4\times 10^{-2}}$ &
$\mathbf{7.1\times 10^{-2}}$ &
$\mathbf{6.7\times 10^{-1}}$ &
$\mathbf{10 \times 10^{-1}}$\\
\hline
\hline
Gudhi(c)&
$2.2\times 10^{-3}$ &
$\mathbf{2.0}$&
$2.5\times 10^{-2}$ &
$\mathbf{2.7 \times 10^{-2}}$&
$\mathbf{1.6\times 10^{-1}}$ &
$\mathbf{1.7\times 10^{-1}}$\\
\hline
BATS(c)&
$3.6\times 10^{-3}$&
$76$&
$14\times 10^{-2}$ &
$15\times 10^{-2}$ &
$14\times 10^{-1}$ &
$18\times 10^{-1}$\\
\hline
BATS(cl,c)&
$2.1\times 10^{-3}$ &
$3.2$&
$7.2\times 10^{-2}$&
$7.8\times 10^{-2}$&
$6.8\times 10^{-1}$&
$6.2\times 10^{-1}$\\
\hline
BATS(u,c)&
$\mathbf{1.2\times 10^{-3}}$ &
$12$&
$\mathbf{1.8\times 10^{-2}}$&
$2.8\times 10^{-2}$&
$2.0\times 10^{-1}$&
$2.7\times 10^{-1}$\\
\hline
\end{tabular}
\caption{Average time in seconds to update persistent homology of level set filtrations on synthetic and real data. We compare Dionysus \cite{Dionysus2}, GUDHI \cite{GUDHI15}, and BATS \cite{BATS}. (f) denotes a simplicial complex built from the Freudenthal triangulation, (c) denotes a cubical complex, (cl,f) and (cl,c) denote the clearing optimization applied to each complex, and (u,f) and (u,c) denote our update scheme (\Cref{alg:perm_update}) applied to each complex. Timings are averaged over 1000 updates for MNIST, 1 update for Vert-64, 100 updates for the S2D columns, and 20 updates for the S3D columns. Timings for the Freudenthal triangulation of the Vert-64 data set are excluded due to memory constraints.}
\label{tab:levelset_comp}
\end{table}
In \Cref{tab:mnist_features}, we measure the average time to compute persistent homology in dimensions 0 and 1 on 1000 random MNIST images using a 2-dimensional Freudenthal triangulation of the $28 \times 28$ grid, for a total of 784 0-simplices, 2241 1-simplices, and 1458 2-simplices. We use a single initial filtration which is updated for each image. Overall, our update scheme gives almost a 3x speedup compared to a full persistent homology computation. We observe that initializing with an actual image produces slightly faster updates when compared to an ``average image" produced by averaging each pixel value over the data set or a constant ``zero image". Note that even initializing with the constant image gives a large speedup: because MNIST digits have a constant background, much of the factorization can be reused over this constant region. We also measure the time to update the persistent homology of an ``image'' generated from random pixel values, which still gives a noticeable speedup. This serves as a baseline separating how much of the speedup from a representative initialization is due to memory and implementation efficiency, and how much is due to updating persistent homology from a good rather than a bad starting point.
In \Cref{tab:levelset_comp} we measure the time needed to compute persistent homology on a variety of data, either from scratch or using our update scheme. On all the spaces built on the Freudenthal triangulation of a grid, our update scheme demonstrates a noticeable improvement in run time. For cubical complexes we outperform Gudhi on smaller and simpler updates, and are slightly outperformed on larger problems and updates. We also note that Dionysus has a built-in function for the Freudenthal triangulation of an image whereas Gudhi does not, so the better performance of Gudhi on persistent homology computations is offset by the need to construct the filtration in Python. We report the results of the clearing optimization in BATS; compression tends to perform slightly worse on these examples.
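The filtration-extension step timed in \Cref{tab:mnist_features} assigns each simplex the maximum value of its vertices (a lower-star extension of the pixel filtration). A minimal sketch of this step, with names of our own choosing:

```python
def lower_star_extension(vertex_values, simplices):
    """Extend a function on vertices (pixels) to the whole complex by
    assigning each simplex the max value over its vertices, then order
    simplices by (value, dimension) so faces precede cofaces."""
    filt = [(max(vertex_values[v] for v in s), len(s) - 1, s)
            for s in simplices]
    filt.sort()
    return filt
```

Sorting by value and then dimension guarantees a valid filtration order, since every face of a simplex has value at most that of the simplex and strictly smaller dimension.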
\subsection{Rips Filtrations}\label{sec:geom_filtration}
Rips filtrations are commonly used in conjunction with persistent homology to create features for finite metric spaces (point clouds). Given a metric space $(X, d)$, a Rips complex consists of the simplices whose maximum pairwise distance between vertices is at most some threshold $r$:
$$
X_r = \{(x_0,\dots,x_k) \mid x_i\in X, d(x_i,x_j) \le r\}.
$$
Since $X_r \subseteq X_s$ whenever $r \le s$, these complexes form a filtration, called the Rips filtration.
The number of simplices in a Rips filtration grows quickly with the size of the data set, and much effort has gone into developing efficient algorithms for computing persistent homology of Rips filtrations. While it is possible to update every simplex in a filtration, as done in \Cref{sec:levelset}, several high-performance packages for Rips computations \cite{Ripser19, Eirene16} stop a filtration at the enclosing \emph{radius} of the metric space, at which point the complex becomes contractible; this can reduce the total number of simplices in the filtration considerably without changing persistent homology. In order to combine this optimization with our approach, it is necessary to be able to add and remove simplices from filtrations as well as permute their filtration order, as in \Cref{alg:gen_update}.
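A brute-force sketch of the Rips construction and the enclosing-radius truncation makes the definitions concrete (illustrative only; real packages avoid enumerating all vertex subsets):

```python
from itertools import combinations

def enclosing_radius(dist):
    """min over x of max over y of d(x, y); beyond this radius the Rips
    complex is a cone over some vertex, hence contractible."""
    return min(max(row) for row in dist)

def rips_filtration(dist, r_max, max_dim=2):
    """All simplices with diameter <= r_max, each paired with its
    filtration value (the largest pairwise distance among its vertices)."""
    n = len(dist)
    simplices = [((i,), 0.0) for i in range(n)]
    for k in range(2, max_dim + 2):          # k vertices = a (k-1)-simplex
        for s in combinations(range(n), k):
            diam = max(dist[i][j] for i, j in combinations(s, 2))
            if diam <= r_max:
                simplices.append((s, diam))
    simplices.sort(key=lambda t: (t[1], len(t[0])))
    return simplices
```

Truncating at the enclosing radius discards every simplex born after the complex becomes contractible, which is why the persistence diagram is unchanged.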
\subsubsection{Updates on different data sets}
In \Cref{tab:rips_enclosing_radius}, each column records the time spent computing persistent homology on a data set and each row represents an algorithm in BATS or in another package. We list the data sets below. To mimic realistic persistent homology computations, for the synthetic data sets (\ref{ds:sphere})(\ref{ds:Klein3}) we add normal noise with standard deviation $0.001$;
for the data sets from empirical measurements and experiments (\ref{ds:Bunny})(\ref{ds:Dragon})(\ref{ds:H3N2})
we add $1\%$ standard normal noise to the original data points as measurement error.
\begin{enumerate}
\item \textbf{Sphere1} and \textbf{Sphere2} \label{ds:sphere}: We randomly generate two data sets, each with 100 points, on $S^1 \subset \mathbb{R}^2$ and on $S^2 \subset \mathbb{R}^3$ respectively, and then add normal noise with standard deviation $0.001$. We update persistence from the noiseless spheres.
\item \textbf{Klein3} \label{ds:Klein3}: The data set was introduced in \cite{otterRoadmapComputationPersistent2017}, which samples 400 points from the Klein bottle using its “figure-8” immersion in $\mathbb{R}^3$. We randomly re-sample 100 points from it and then add normal noise with standard deviation $0.001$ to it.
\item \textbf{Bunny}\label{ds:Bunny}: The Bunny model comes from the Stanford Computer Graphics Laboratory \cite{Stanford3D}. We use one of its 3D scans, with 40{,}256 points in $\mathbb{R}^3$, uniformly sample 400 points at random, and then add $1\%$ standard normal measurement error.
\item \textbf{Dragon}\label{ds:Dragon}: A 3-dimensional scan from the Stanford Dragon graphics model \cite{Stanford3D}, from which \cite{otterRoadmapComputationPersistent2017} sampled 1000 and 2000 points uniformly at random. We randomly re-sample 400 points from the 1000-point sample and then add $1\%$ standard normal measurement error.
\item \textbf{H3N2}\label{ds:H3N2}: The data set contains 2722 different genomic sequences of H3N2 influenza, each a vector in $\mathbb{R}^{1173}$. We retrieved it from the repository of \cite{otterRoadmapComputationPersistent2017}; it is also studied using persistent homology in \cite{chan2013Topology_of_viral_evolution}. Many genetic distances are used to measure the distance between two genomic sequences, but we focus on the Euclidean metric to illustrate our updating algorithm. We again randomly sample 400 points and then add $1\%$ standard normal measurement error.
\end{enumerate}
\begin{table}[h]
\centering
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
& Sphere1 & Sphere2 & Klein3 & Bunny & Dragon & H3N2\\
\hline
max. PH & 1 & 2 & 2 & 1 & 1 & 1 \\
\hline
\hline
BATS & 0.44 & 22.44 & 4.89 & 61.12 & 48.00 & 28.35 \\
\hline
BATS(nb) & 0.24 & 10.60 & 2.92 & 28.48 & 20.31 & 17.63 \\
\hline
BATS(cl) & 0.23 & 10.22 & 2.81 & 28.42 & 20.26 & \textbf{17.51} \\
\hline
BATS(co) & 0.22 & 7.67 & \textbf{2.49} & \textbf{24.01} & \textbf{16.08} & 18.05 \\
\hline
BATS(u) & \textbf{0.20} & \textbf{6.82} & 2.53 & 25.27 & 19.02 & 20.77 \\
\hline
\hline
Gudhi & 0.05 & 2.30 & 0.99 & 4.49 & 2.52 & 6.57\\
\hline
Dionysus & 0.73 & 37.24 & -- & -- & -- & --\\
\hline
Ripser & \textbf{0.01} & \textbf{0.17} & \textbf{0.08} & \textbf{0.08} & \textbf{0.08} & \textbf{0.08}\\
\hline
\end{tabular}
\caption{The average time to compute persistent homology of Rips filtrations on different data sets. The row `max. PH' is the maximum dimension up to which we compute persistent homology. Each column is a data set and each row is an algorithm. The first 5 algorithms are all implemented in BATS: (1) the standard reduction algorithm: BATS; (2) the standard reduction algorithm without forming $U$: BATS(nb); (3) clearing: BATS(cl); (4) compression: BATS(co); (5) \Cref{alg:gen_update}: BATS(u). Reported times are averaged over 20 runs. The enclosing radius is used for computations.}
\label{tab:rips_enclosing_radius}
\end{table}
In \Cref{tab:rips_enclosing_radius}, we find that our updating algorithm (\Cref{alg:gen_update}) achieves about a 2x speedup compared to the standard reduction algorithm with basis in BATS. Although it is still slower than the clearing and compression variants in some situations, its time is comparable, and it has the advantage of maintaining the homology basis for purposes such as visualizing homology representatives.
Ripser demonstrates a large performance advantage over the other options, but we note that it is specifically optimized for Rips filtrations. Gudhi also performs better relative to BATS than it did in our experiments on level set filtrations.
\subsubsection{Effect of Noise Variance}
In practice, although there is an upper bound on the complexity of the reduction algorithm, the empirical run time varies across data sets even for filtrations of the same size. In \Cref{tab:rips_enclosing_radius}, we add normal noise with standard deviation 0.001 to $S^1$; in \Cref{fig:subfig1 S^1 with different noise}, we examine the run time for larger variances. Starting with the computation of persistent homology on a noiseless sample of $S^1$, we investigate the time spent recomputing persistence using the standard algorithm (\Cref{alg:reduction}) and our \Cref{alg:gen_update}. When noise is small (variance less than 0.1), the filtration has not changed much and the updating method is faster than recomputing. However, as the variance exceeds 0.1, recomputing becomes the better option. Once the standard deviation exceeds 1, the updating time also converges, as the space starts to look like normally distributed noise at different scales.
In \Cref{fig:subfig2 S^1 with different noise}, we visualize the run time with respect to the logarithm of number of non-zeros of $U$ and $R$ after reduction. We see that the number of non-zeros in the factorization using the update scheme can be much higher, producing a much longer run time. This indicates that one of the potential drawbacks to using large updates to filtrations is fill-in in the decomposition.
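For reference, the non-zeros counted above come from the standard column reduction over $\mathbb{F}_2$. The following is an illustrative sketch, not the BATS implementation; columns are Python sets of row indices, and the matrix we call $U$ here records the column operations so that $R = D \cdot U$ (conventions for the $RU$ decomposition differ only in this bookkeeping):

```python
def reduce_boundary(D):
    """Column reduction over GF(2). D is a list of columns, each a set
    of row indices. Returns (R, U) with U upper-triangular recording the
    column operations; each nonzero column of R has a unique lowest row."""
    m = len(D)
    R = [set(col) for col in D]
    U = [{j} for j in range(m)]
    low_of = {}                       # lowest row index -> owning column
    for j in range(m):
        while R[j] and max(R[j]) in low_of:
            i = low_of[max(R[j])]
            R[j] ^= R[i]              # symmetric difference = GF(2) addition
            U[j] ^= U[i]
        if R[j]:
            low_of[max(R[j])] = j
    return R, U
```

On a filled triangle boundary (three vertices, then three edges) the third edge column reduces to zero, and its $U$ column records the 1-cycle; the sets $R[j]$ and $U[j]$ are exactly where fill-in accumulates.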
\begin{figure}[h]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{images/Time_vs_log_std.pdf}
\caption{}
\label{fig:subfig1 S^1 with different noise}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{images/Time_vs_log_nnz.pdf}
\caption{}
\label{fig:subfig2 S^1 with different noise}
\end{subfigure}
\caption{Left: The time spent on computing from scratch on points generated with different noise and updating the noiseless sample. Right: time spent on recomputing and updating as a function of the sum of $\nnz(U_k) + \nnz(R_k)$. The solid lines are a least squares fit.}
\label{fig:S^1 with different noise}
\end{figure}
\subsubsection{Updating the Metric}
As shown in \Cref{fig:subfig1 S^1 with different noise}, if we add too much noise to the metric space, the time to update is much higher than the time to recompute. This illustrates that our algorithm may not work well in cases where the filtration changes dramatically.
In \Cref{tab:Minkow 1 to Euclidean}, we measure the time to update persistence on a filtration when the metric changes from the Euclidean to the Minkowski-1 distance, compare it to recomputing, and repeat 20 times. Updating performs worse than recomputing in this setting.
\begin{table}[h]
\centering
\begin{tabular}{c||c|c}
& Recompute & Update\\
\hline
Sphere1 & $\mathbf{0.198762}$ & 0.312912\\
\hline
Sphere2 & $\mathbf{7.434643}$ & 21.799024\\
\end{tabular}
\caption{Average time in seconds to recompute vs. update persistent homology on Sphere1 and Sphere2 when the metric changes from the Euclidean to the Minkowski-1 distance.}
\label{tab:Minkow 1 to Euclidean}
\end{table}
\subsection{Optimization}
Optimization of a function of persistent homology is another potential application for \Cref{alg:perm_update,alg:gen_update}. A general framework for this is developed in \cite{topologyLayerMachine2020}, where functions of the birth and death times in persistence diagrams of the form
$$\mathcal{E}\left(p, q, i_{0} ; \mathrm{PD}_{k}\right)=\sum_{i=i_{0}}^{\left|\mathcal{I}_{k}\right|}\left|d_{i}-b_{i}\right|^{p}\left(\frac{d_{i}+b_{i}}{2}\right)^{q}$$
are used. In \Cref{tab:Opt rips}, we report on an experiment to maximize the sum of the lengths of 1-dimensional persistence bars starting from a point cloud sampled uniformly from the unit square. Explicitly, we use gradient descent to maximize the function $\mathcal{E}\left(2, 0, 1 ; \mathrm{PD}_{1}\right) = \sum_{i = 1}^{I_1} (d_i - b_i)^2 $, where $I_1$ is the number of 1-dimensional persistence pairs $(b_i, d_i)$. As shown in \Cref{fig:rips_opt_dataset}, after 100 iterations, points that are originally uniformly generated in the unit square are moved to form more holes.
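For concreteness, the energy above can be evaluated directly from a diagram. In the sketch below we assume, following \cite{topologyLayerMachine2020}, that bars are sorted by decreasing length so that $i_0$ skips the longest $i_0 - 1$ bars; the function name is ours:

```python
def persistence_energy(diagram, p=2, q=0, i0=1):
    """E(p, q, i0; PD_k): sum over bars i >= i0 (sorted by decreasing
    length) of |d - b|**p * ((d + b) / 2)**q."""
    bars = sorted(diagram, key=lambda bd: bd[1] - bd[0], reverse=True)
    return sum(abs(d - b) ** p * ((d + b) / 2) ** q
               for b, d in bars[i0 - 1:])
```

With $p = 2$, $q = 0$, $i_0 = 1$ this reduces to the objective $\sum_i (d_i - b_i)^2$ used in our experiment.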
Our updating scheme shows an average 2x speedup compared to recomputing, as shown in \Cref{tab:Opt rips}. Compared to the other four algorithms, the updating scheme is almost as fast as the fastest one, compression, but has the advantage over all the others that we obtain the $U$ matrix, i.e., the homology basis information, which can be used for visualization and interpretation.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
BATS(std) & BATS(nb) & BATS(clr) & BATS(comp) & BATS(upd)\\
\hline
$1.65\times10^{-1}$ & $9.74\times10^{-2}$ & $9.94\times10^{-2}$&
$\mathbf{8.88\times10^{-2}}$&
$8.90\times10^{-2}$\\
\hline
\end{tabular}
\caption{Average time over 100 iterations to maximize $\sum_{i = 1}^{I_1} (d_i - b_i)^2$ with the 5 algorithms.}
\label{tab:Opt rips}
\end{table}
\begin{figure}[h]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{images/original_dataset_rips.pdf}
\caption{}
\label{fig:sfig1 data original}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{images/optimized_dataset_rips.pdf}
\caption{}
\label{fig:sfig2 data optimized}
\end{subfigure}
\caption{Left: original data set generated by uniform distribution in a square. Right: after 100 iterations of maximizing $H_1$ length.}
\label{fig:rips_opt_dataset}
\end{figure}
We have also attempted to use \Cref{alg:perm_update} in the optimization of level set filtrations. While our experiments with optimization of Rips filtrations have been encouraging, we have observed that fill-in in the $RU$ decomposition is much more prohibitive in the case of level set filtrations. One potential explanation is that by deleting columns beyond the enclosing radius of the metric space in Rips filtrations there is never a large accumulation of non-zeros in columns of $R$ or $U$, whereas the accumulation of non-zeros in level set filtrations can grow unchecked. Strategies to deal with this issue are outside the scope of this paper, but would be an interesting direction for future investigation.
\section{Conclusion}
We present two algorithms for updating persistence: one for a fixed-sized filtration and another for a general filtration. The algorithms' asymptotic complexity is shown to be comparable to the standard reduction algorithm for computing persistent homology in the worst case, and we provide tighter bounds based on the details of the updates. Our algorithm demonstrates practical speedups on several experiments, especially where changes to the filtration are limited.
We implemented our method using the data structures in the Basic Applied Topology Subprograms (BATS) library \cite{BATS}, in order to obtain consistent comparisons with several variations of the reduction algorithm for persistent homology. We show that our update method's performance is comparable with other speedup strategies like clearing and compression, but has the advantage of maintaining homology basis information.
While we have demonstrated the utility of our approach in certain situations, there are also some limitations to its use. Some of these are inherent, for instance our approach does not work well when filtrations change too drastically, or when the additional memory requirements of maintaining the matrix $U$ are cost prohibitive. Other limitations may be implementation-specific, for instance we see that Gudhi \cite{GUDHI15} and Ripser \cite{Ripser19} outperform our update scheme on Rips computations. Some of this performance gap may be closed by adapting our algorithms to the cohomology algorithm \cite{desilvaDualitiesPersistentCo2011} or incorporating parallelism \cite{bauer2013DistrubutedComputationofPH}.
Deciding which algorithm to use for computing persistent homology on many similar problems is context-dependent. For fixed size filtrations, as in level set persistence, using our update scheme appears to be a reasonable choice for smaller perturbations, particularly when maintaining the basis matrix $U$ is desirable. For geometric filtrations, we recommend using a high-performance package designed for these computations, particularly if the homology basis is not required. In practice, a practitioner may wish to test several options experimentally as run times can be problem dependent.
There are several directions for future investigation which may build on this work. One direction is to develop methods to limit fill-in in the $RU$ decomposition when performing updates, a problem related to that of finding sparse homology generators \cite{obayashiVolumeOptimalCycleTightest2018a}. As we have discussed, this appears to be an important consideration in several potential applications of our update schemes such as optimization using level set filtrations. Another direction would be to adapt our update scheme to the cohomology algorithm. This may offer performance improvements for Rips filtrations, as has been observed for the standard reduction algorithm \cite{Ripser19}. There may also be ways to adapt our methods to the context of updating discrete Morse vector fields \cite{mischaikowMorseTheoryFiltrations2013}, which may offer another way to accelerate iterated persistent homology computations.
\section*{Acknowledgements:} BN was supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No.
HR00112190040. We are grateful for compute resources provided by the Research Computing Center (RCC) at the University of Chicago.
\bibliographystyle{acm}
\label{sec:intro}
Over the past decades, randomized search heuristics such as evolutionary algorithms and ant colony optimization have been applied successfully in various areas, including engineering and economics. To gain a deep insight into the behaviors of evolutionary algorithms, many theoretical techniques for analyzing their expected runtime have been developed~\cite{Auger11,ncs/Jansen13,BookNeuWit}. Using these techniques, evolutionary algorithms designed for some classic combinatorial optimization problems have been studied. In particular, the Vertex Cover problem plays a crucial role in this area~\cite{friedrich2010approximating,hansen2003reducing,jansen2013approximating,kratsch2013fixed,pourhassan2016ppsn}.
Consider an instance $I$ of a given combinatorial optimization problem, and a solution $S$ to $I$ satisfying a specific quality guarantee (optimal or approximated). If an operation on $I$ results in a new instance $I'$, which is similar to $I$
(the similarity between the two instances depends on the scale of the operation),
then a natural and interesting problem arises: Is it easy to find a solution $S'$ to $I'$ that satisfies the specific quality guarantee, starting from the original solution $S$?
In other words, how much runtime does a specific algorithm take to get a solution $S'$ to $I'$ with the quality guarantee, starting from $S$?
The above setting is referred to as the {\it dynamic model} of the given combinatorial optimization problem.
Studying the performances of evolutionary algorithms for dynamic models of combinatorial optimization problems is an emerging field in evolutionary computation
\cite{friedrich2017s,kotzing20151+,neumann2015runtime,pourhassan2015maintaining,roostapour2018performance,shi2019reoptimization}.
In this paper, we present the dynamic model of the Weighted Vertex Cover problem (WVC), which we name the Dynamic Weighted Vertex Cover problem (DWVC). Our goal is to analyze the behaviors of the well-studied algorithms Randomized Local Search (RLS) and (1+1) EA adapted to DWVC. More specifically, we study the expected runtime (i.e., the expected number of fitness evaluations) that the algorithms need to recompute a 2-approximate solution when the given weighted graph is modified by a graph-editing operation, starting from a given 2-approximate solution to the original weighted graph.
Note that all weighted graphs considered in the paper are vertex-weighted, i.e., the weight function is defined on the vertices, not edges.
{\bf Related work.} For the Vertex Cover problem, it is well-known that under the Unique Games Conjecture~\cite{khot2002power}, there does not exist an approximation algorithm with a constant ratio $r < 2$, unless P = NP~\cite{khot2008vertex}.
The best-known 2-approximation algorithm for the Vertex Cover problem is based on maximal matchings: construct a maximal matching by greedily adding edges, then let the vertex cover contain both endpoints of each edge in the matching.
For WVC, Hochbaum~\cite{LPForWeighteVC1983}
showed that a 2-approximate solution can be obtained by using the Linear Programming (LP) result of the fractional WVC.
Du et al.~\cite{du2011design} found that a maximal solution to the dual form~\cite{vazirani2013approximation} of the LP formulation (simply called dual formulation) for the fractional WVC also directly induces a 2-approximate solution.
Using this conclusion, Bar-Yehuda and Even~\cite{bar1981linear} presented a linear-time 2-approximation algorithm for WVC. The essential difference between the primal form of the LP formulation (simply called primal formulation) and the dual formulation for the fractional WVC is: The primal formulation considers the problem from the perspective of vertices; the dual formulation considers it from the perspective of edges~\cite{du2011design}. (More details of the LP formulation and its dual formulation for the fractional WVC can be found in the next section.)
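The linear-time algorithm of Bar-Yehuda and Even can be sketched as follows: raise each edge's dual value until one endpoint's constraint becomes tight, and take the tight vertices as the cover. This is an illustrative sketch under our own naming, not the authors' original presentation:

```python
def dual_fit_vertex_cover(weights, edges):
    """Primal-dual 2-approximation for weighted vertex cover: raise each
    edge's dual value y_e until some endpoint constraint
    sum_{e incident to v} y_e <= w(v) is tight; tight vertices cover."""
    slack = dict(weights)            # remaining capacity at each vertex
    y = {}
    for (u, v) in edges:
        inc = min(slack[u], slack[v])
        y[(u, v)] = inc
        slack[u] -= inc
        slack[v] -= inc
    cover = [v for v in weights if slack[v] == 0]
    return cover, y
```

The resulting dual solution is maximal (no edge value can be increased without violating a vertex constraint), which is exactly the property that yields the 2-approximation guarantee.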
Pourhassan et al.~\cite{pourhassan2015maintaining} presented a dynamic model of the Vertex Cover problem, in which the graph-editing operation adds (resp., removes) exactly one edge into (resp., from) the given unweighted graph, and analyzed evolutionary algorithms with respect to their abilities to maintain a 2-approximate solution.
They examined different variants of the RLS and (1+1) EA, node-based representation and edge-based representation.
If using the node-based representation, they gave classes of instances for which both algorithms cannot get a 2-approximate solution in polynomial expected runtime with high probability.
However, using the edge-based representation, they showed that the RLS and (1+1) EA can maintain 2-approximations efficiently if the algorithms start with a search point corresponding to a maximal matching of the original unweighted graph and use the fitness function given in~\cite{jansen2013approximating} penalizing the edges sharing vertices.
Inspired by the work of Pourhassan et al.~\cite{pourhassan2015maintaining} and the essential difference between the primal and dual formulations of the fractional WVC, it is promising to consider DWVC from the perspective of edges, i.e., utilize the dual formulation to analyze DWVC.
Here we give another example to show that using the dual formulation is better than using the primal formulation, to analyze DWVC.
Consider the simplest graph-editing operation, which removes or adds exactly one edge $[v,v']$.
For the primal formulation, if a new edge $[v,v']$ is added into the graph,
then the corresponding LP values of $v$ and $v'$ may be required to increase as their sum may be $< 1$ with respect to the given original LP solution; if an edge $[v,v']$ is removed from the graph, then the corresponding LP values of $v$ and $v'$ may have the room to decrease with respect to the given original LP solution.
Thus there are two possible adjustment directions for the LP values of the vertices if using the primal formulation.
For the dual formulation, the given original maximal solution does not violate the corresponding LP constraints whether the edge $[v,v']$ is removed or added, so we only need to consider increasing the LP values of the edges.
Therefore, using the dual formulation is able to simplify the analysis for DWVC, compared to using the primal formulation.
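The one-directional repair this enables can be made concrete with a small deterministic sketch (our evolutionary algorithms perform this repair stochastically; here we assume an edge-edit with weights unchanged, so surviving dual values stay feasible and only increases are needed):

```python
def rebuild_maximal_dual(weights, edges, y_old):
    """Restore a maximal dual solution on the edited graph, reusing old
    edge values where the edge survived; removed edges simply free slack
    at their endpoints, and remaining slack is greedily consumed."""
    y = {e: y_old.get(e, 0) for e in edges}
    slack = dict(weights)
    for (u, v), ye in y.items():
        slack[u] -= ye
        slack[v] -= ye
    for (u, v) in edges:             # raise any non-tight edge values
        inc = min(slack[u], slack[v])
        if inc > 0:
            y[(u, v)] += inc
            slack[u] -= inc
            slack[v] -= inc
    return y, [v for v in weights if slack[v] == 0]
```

After the pass, every edge has a tight endpoint, so the dual solution is again maximal and the tight vertices form a 2-approximate cover of the edited graph.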
We formulate DWVC in the paper as: Given a weighted graph $G = (V,E,W)$ and a maximal solution $S$ to the dual formulation of the fractional WVC on $G$, the goal is to find a maximal solution to the dual formulation of the fractional WVC on the weighted graph $G^* = (V^*,E^*,W^*)$ starting from the original maximal solution $S$, where $G^*$ is obtained by one of the two graph-editing operations on $G$:
(1) replace the edge-set $E$ with a new one $E^*$; (2) replace the weight function $W$ with a new one $W^*$.
The version of DWVC with edge-modification is denoted by DWVC-E, and the one with weight-modification is denoted by DWVC-W.
Denote by $D \in \mathds{N}^+$ the scale of the graph-editing operation, more specifically, $D = |(E^* \setminus E) \cup (E \setminus E^*)|$ or $D = |\{v \in V|W(v) \neq W^*(v)\}|$.
It is necessary to point out that both $G$ and $G^*$ are simple graphs (at most one edge between any two vertices).
Recently Pourhassan et al.~\cite{pourhassan2017use} studied WVC using the dual formulation of the fractional WVC. As the weights on the vertices may be exponentially large with respect to the size of the graph (the number of edges), they incorporated the {\it Step Size Adaption} strategy~\cite{beyer2002evolution} into their (1+1) EA (Algorithm 4 given in~\cite{pourhassan2017use}). However, their (1+1) EA was shown to take exponential expected runtime with high probability to get a maximal solution to the dual formulation.
There are two factors causing the long runtime of their algorithm. Firstly, for a mutation $M$ constructed by their (1+1) EA, there may exist two edges selected by $M$ whose LP values are increased and decreased respectively.
The randomness on the adjustment direction of the LP values leads that a mutation increases the sum of LP values for the edges with a relatively small probability, i.e., a mutation is rejected with a relatively large probability.
Secondly, for a mutation $M$ that is rejected by their (1+1) EA, the step sizes of all the edges selected by $M$ would be decreased. Under the combined impact of the two factors, the step sizes of the edges cannot be increased enough to overcome the exponentially large weights on the vertices. That is, the step size adaption strategy is nearly invalid for their (1+1) EA.
{\bf Contributions}. Building on the work of Pourhassan et al.~\cite{pourhassan2017use}, we give two algorithms, (1+1) EA and RLS, adapted to DWVC with the step size adaption strategy as well. To avoid the invalidation of the step size adaption strategy that occurs in the algorithm of~\cite{pourhassan2017use}, the two algorithms adopt an extra policy with three points: (1) the LP values of the edges selected by a mutation either all increase or all decrease (this only applies to the (1+1) EA, because any mutation of the RLS selects exactly one edge); (2) whether the algorithms increase or decrease the LP values of the edges depends on the fitness of the maintained solution; and (3) the condition to decrease the step size of a specific edge is very strict.
Under the cooperation of the step size adaption strategy and the policy given above, the (1+1) EA and RLS are shown to take expected runtime $\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ to solve the two versions of DWVC (including two special variants for DWVC-E, and two special variants for DWVC-W), where $m$ denotes the number of edges in $G^*$, $W_{\textup{max}} \ge 1$ denotes the maximum weight that the vertices in $G$ and $G^*$ have, and $\alpha \in \mathds{N}^+ \setminus \{1\}$ denotes the increasing/decreasing rate of the step size (i.e., the increment on the LP value for each edge can be exponentially increased or decreased by a factor $\alpha$).
\begin{table*}[t]
\vspace*{.25cm}
\scriptsize
\begin{center}
\renewcommand{\arraystretch}{1}
\begin{tabular}{@{}lcccc@{}}
\toprule
& {\bf RLS} \ or \ \textbf{(1+1) EA} & {\bf \mbox{\textup{RLS with 1/5-th Rule}}} & {\bf \mbox{\textup{(1+1) EA with 1/5-th Rule}}} &\\
\midrule
{\bf DWVC-E$^+$} &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{D,\log (\alpha D \cdot \log_{\alpha} W_{\textup{max}})\} \! \big)$ &
$\mathrm O \big(\alpha m D \log_{\alpha} W_{\textup{max}} \cdot \log W_{\textup{max}} \! \big)$ &
$\mathrm \Omega (2^{m^{\epsilon/2}})$, $0 < \epsilon \leq 1/2$ & \\
{\bf DWVC-E$^-$} &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{m \log W_{\textup{max}}, D \cdot W_{\textup{max}} \} \! \big)$ &
$\mathrm \Omega (2^{m^{\epsilon/2}})$, $0 < \epsilon \leq 1/2$ \\
{\bf DWVC-E} &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{m \log W_{\textup{max}}, D \cdot W_{\textup{max}} \} \! \big)$ &
$\mathrm \Omega (2^{m^{\epsilon/2}})$, $0 < \epsilon \leq 1/2$ \\
\cmidrule{1-5}
{\bf DWVC-W$^+$} &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{m \log W_{\textup{max}}, D \cdot W_{\textup{max}} \} \! \big)$ &
$\mathrm \Omega (2^{m^{\epsilon/2}})$, $0 < \epsilon \leq 1/2$ \\
{\bf DWVC-W$^-$} &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{m \log W_{\textup{max}}, D \cdot W_{\textup{max}} \} \! \big)$ &
$\mathrm \Omega (2^{m^{\epsilon/2}})$, $0 < \epsilon \leq 1/2$ \\
{\bf DWVC-W} &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ &
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{m \log W_{\textup{max}}, D \cdot W_{\textup{max}} \} \! \big)$ &
$\mathrm \Omega (2^{m^{\epsilon/2}})$, $0 < \epsilon \leq 1/2$ \\
\bottomrule\\
\end{tabular}
\end{center}
\vspace*{-5mm}
\caption{Overview on runtime performances of the four algorithms for the two versions of DWVC, DWVC-E and DWVC-W, including the two special variants for DWVC-E (DWVC-E$^+$ and DWVC-E$^-$) and the two special variants for DWVC-W (DWVC-W$^+$ and DWVC-W$^-$).
The notation $m$ denotes the number of edges in the new graph, $W_{\textup{max}}$ the maximum weight over the vertices of the original and new graphs, $D$ the scale of the graph-editing operation, and $\alpha \in \mathds{N}^+ \setminus \{1\}$ the increasing/decreasing rate of the step size, an integer between 2 and $W_{\textup{max}}$.
The lower bound of the runtime of the \mbox{\textup{(1+1) EA with 1/5-th Rule}} for DWVC holds with probability $1- e^{-{\rm \Omega}(m^{\epsilon})}$ if $W_{\textup{max}} \ge \alpha^{m}$.
}
\label{table:overviewResults}
\end{table*}
The last two points of the extra policy given above play an important role in keeping the step size adaption strategy valid, but they seem too restrictive and somewhat artificial. Thus we introduce the 1/5-th (success) rule, and give two algorithms combining it with the step size adaption strategy, called the (1+1) EA with 1/5-th Rule and the RLS with 1/5-th Rule.
The 1/5-th rule is one of the best-known techniques in parameter control, especially for controlling the mutation probability (for more details, please refer to~\cite{doerr2015optimal}). In this paper, we use the 1/5-th rule to control the increasing/decreasing rate of the step size.
More specifically, the LP values of the edges selected by a mutation of the two algorithms with the 1/5-th rule are increased or decreased with equal probability 1/2 (i.e., independently of the maintained solution). If the mutation is accepted, then the step sizes of the selected edges are increased by a factor $\alpha$; otherwise, they are decreased by a factor $\alpha^{1/4}$.
For the RLS with 1/5-th Rule, we show that it can solve the two versions of DWVC (including the four special variants) efficiently. However, for the (1+1) EA with 1/5-th Rule, we construct a special instance for each version of DWVC, and show that the algorithm takes at least pseudo-polynomial time to solve it.
The main results given in the paper are summarized in Table~\ref{table:overviewResults}.
The rest of the paper is structured as follows. We start by giving the related definitions and problem formulations in Section 2.
Then we present the algorithms (1+1) EA and RLS (with the step size adaption strategy), and the algorithms \mbox{\textup{(1+1) EA with 1/5-th Rule}} and \mbox{\textup{RLS with 1/5-th Rule}} for DWVC, in two separate subsections of Section 3.
For the two versions of DWVC, Sections 4 and 5, respectively, analyze the expected runtime of the (1+1) EA and RLS, and the \mbox{\textup{(1+1) EA with 1/5-th Rule}} and \mbox{\textup{RLS with 1/5-th Rule}}.
Finally, conclusions are presented in Section 6.
\section{Preliminaries}
\label{sec:prelims}
Consider a weighted graph $G=(V,E,W)$ with a vertex-set $V=\{v_1,\ldots, v_n\}$, an edge-set $E=\{e_1,\ldots, e_m\}$, and a weight function $W: V \rightarrow \mathds{N}^+$ on the vertices. For any vertex $v \in V$, denote by $N_G(v)$ the set containing all the neighbors of $v$ in $G$, and by $E_G(v)$ the set containing all the edges incident to $v$ in $G$. For any vertex-subset $V' \subseteq V$, let $E_G(V') = \bigcup_{v \in V'} E_G(v)$. For any edge $e \in E$, denote by $E_G(e)$ the set containing all the edges in $G$ that have a common endpoint with $e$. For any edge-subset $E' \subseteq E$, let $E_G(E') = \bigcup_{e \in E'} E_G(e) \setminus E'$.
A vertex-subset $V_c \subseteq V$ is a {\it vertex cover} of $G$ if for each edge $e \in E$, where $e$ can be represented by its two endpoints $v$ and $v'$ as $[v,v']$, at least one of its two endpoints $v$ and $v'$ is in $V_c$. The weight of $V_c$ is defined as the sum of the weights on the vertices in $V_c$, written $\sum_{v \in V_c} W(v)$.
The Weighted Vertex Cover problem (WVC) on the weighted graph $G$ asks for a vertex cover of $G$ with the minimum weight, among all vertex covers of $G$.
Using the node-based representation (i.e., the search space is $\{0,1\}^n$, and for any solution $x = x_1 \ldots x_n$ the node $v_i$ is chosen iff $x_i=1$), the Integer Linear Programming (ILP) formulation for WVC is given as follows.
\begin{eqnarray*}
&& \min \ \sum_{i=1}^n W(v_i)\cdot x_i \\
\textrm{s.t.} && x_i+x_j\geq 1 \ \ \ \ \ \forall \ [v_i,v_j]\in E \\
&& x_i\in \{0,1\} \ \ \ \ \ \ i = 1, \ldots, n
\end{eqnarray*}
By relaxing the constraint $x_i\in \{0,1\}$ of the ILP given above to $x_i\in [0,1]$, the Linear Programming (LP) formulation for the fractional WVC is obtained. Hochbaum~\cite{LPForWeighteVC1983} showed that a 2-approximate solution can be found by using the LP result of the fractional WVC --- include all the vertices $v_i$ with $x_i\geq 1/2$.
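To make the rounding step concrete, here is a minimal Python sketch (our own illustration, not part of the paper's algorithms) of Hochbaum's half-rounding, applied to a hard-coded fractional optimum for a unit-weight triangle; the helper name \texttt{round\_half} is ours.

```python
# A small illustration of Hochbaum's rounding: given a feasible fractional
# LP solution, taking every vertex with x_i >= 1/2 yields a vertex cover of
# weight at most twice the LP value. The fractional solution below is the
# known LP optimum for a unit-weight triangle (hard-coded, not computed).

def round_half(weights, edges, x):
    """Round a feasible fractional solution x to a vertex cover."""
    cover = {v for v, xv in x.items() if xv >= 0.5}
    # every edge has an endpoint with x-value >= 1/2,
    # since x_u + x_v >= 1 forces max(x_u, x_v) >= 1/2
    assert all(u in cover or v in cover for u, v in edges)
    return cover

weights = {"a": 1, "b": 1, "c": 1}
edges = [("a", "b"), ("b", "c"), ("a", "c")]
x = {"a": 0.5, "b": 0.5, "c": 0.5}          # fractional LP optimum, value 1.5

cover = round_half(weights, edges, x)
lp_value = sum(weights[v] * x[v] for v in weights)
cover_weight = sum(weights[v] for v in cover)
print(cover_weight, lp_value)                # prints: 3 1.5
```

Here the rounded cover has weight $3 \leq 2 \cdot 1.5$, matching the factor-2 guarantee: scaling each kept $x_i$ up to 1 at most doubles its contribution to the objective.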
The dual form of the LP formulation (or simply called dual formulation) for the fractional WVC is given as follows, where $Y: E \rightarrow \mathds{R}^+ \cup \{0\}$ denotes a value assignment on the edges.
\begin{eqnarray*}
&& \max \ \sum_{e \in E} Y(e)\\
\textrm{s.t.} && \sum_{e \in E_G(v)} Y(e) \leq W(v) \ \ \ \ \ \forall \ v\in V
\end{eqnarray*}
The value assignment $Y$ is called a {\it dual-solution} of $G$ in the paper. A vertex $v \in V$ {\it satisfies} the {\it dual-LP constraint} with respect to the dual-solution $Y$ if $\sum_{e \in E_G(v)} Y(e) \leq W(v)$. Similarly, an edge $e \in E$ {\it satisfies} the dual-LP constraint with respect to $Y$ if both its endpoints satisfy the dual-LP constraint with respect to $Y$.
The dual-solution $Y$ of $G$ is {\it feasible} if all the vertices in $G$ satisfy the dual-LP constraint with respect to $Y$; otherwise, {\it infeasible}. The vertex $v \in V$ is {\it tight} with respect to $Y$ if $\sum_{e \in E_G(v)} Y(e) = W(v)$, and the edge $e \in E$ is {\it tight} with respect to $Y$ if at least one of its two endpoints is tight with respect to $Y$.
Given a dual-solution $Y$ of $G$, denote by $V_G(Y)$ the set containing all the vertices in $G$ that do not satisfy the dual-LP constraint with respect to $Y$, and by $E_G(Y)$ the set containing all the edges that are incident to the vertices in $V_G(Y)$.
A {\it maximal feasible dual-solution} (MFDS) of $G$ is a feasible dual-solution of $G$ such that none of the edges can be assigned a larger LP value without violating the dual-LP constraint.
Given an MFDS $Y$ of $G$, the set of all tight vertices with respect to $Y$ directly induces a 2-approximate vertex cover of $G$ (a formal proof of the approximation ratio can be found in Theorem 8.4 of~\cite{du2011design}).
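As a concrete illustration, the following Python snippet builds an MFDS by a simple greedy scheme, raising each edge's dual value until an endpoint becomes tight, and then reads off the tight vertices as a vertex cover. The toy instance, the greedy processing order, and the helper names are our own assumptions, not taken from the paper.

```python
# Greedy construction of a maximal feasible dual-solution (MFDS): process the
# edges once, assigning each edge the largest value both endpoint constraints
# still allow. Every edge then has a tight endpoint, so Y is maximal, and the
# tight vertices form a vertex cover.

def greedy_mfds(weights, edges):
    """Return an MFDS Y (edge -> value) and the per-vertex loads."""
    load = {v: 0 for v in weights}   # sum of Y over edges incident to v
    Y = {}
    for (u, v) in edges:
        inc = min(weights[u] - load[u], weights[v] - load[v])
        Y[(u, v)] = inc
        load[u] += inc
        load[v] += inc
    return Y, load

weights = {"a": 4, "b": 3, "c": 5, "d": 2}
edges = [("a", "b"), ("b", "c"), ("c", "d")]
Y, load = greedy_mfds(weights, edges)
cover = {v for v in weights if load[v] == weights[v]}   # tight vertices
# here the cover {b, d} has weight 5, which equals the dual value sum(Y)
assert all(u in cover or v in cover for (u, v) in edges)
```

In this instance the cover weight even matches the dual value exactly; in general it is at most twice the dual value, hence at most twice the optimum.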
Two versions of the Dynamic Weighted Vertex Cover problem (DWVC) are studied in the paper, whose formal formulations are given below.
\begin{quote}
DWVC with Edge Modification ({\bf DWVC-E}) \\
{\it Input}: \ \ \ a weighted graph $G = (V,E,W)$, an MFDS $Y_{\textup{orig}}$ of $G$, and a new edge-set $E^*$; \\
{\it Output}: an MFDS $Y^*$ of $G^* = (V,E^*, W)$
\medskip
DWVC with Weight Modification ({\bf DWVC-W}) \\
{\it Input}: \ \ \ a weighted graph $G = (V,E,W)$, an MFDS $Y_{\textup{orig}}$ of $G$, and a new weight function $W^*$; \\
{\it Output}: an MFDS $Y^*$ of $G^* = (V,E, W^*)$
\end{quote}
There are two special variants for DWVC-E, DWVC-E$^+$ and DWVC-E$^-$.
Let $E^+ = E^* \setminus E$ and $E^- = E \setminus E^*$.
The variants DWVC-E$^+$ and DWVC-E$^-$ consider the cases $E^- = \emptyset$ and $E^+ = \emptyset$, respectively.
Similarly, there are two special variants for DWVC-W, DWVC-W$^+$ and DWVC-W$^-$.
Let $V^+ = \{v \in V|W^*(v) > W(v) \}$ and $V^- = \{v \in V|W^*(v) < W(v) \}$.
The variants DWVC-W$^+$ and DWVC-W$^-$ consider the cases $V^- = \emptyset$ and $V^+ = \emptyset$, respectively.
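The difference sets behind the four special variants can be written out directly; the toy modification below is our own example, not one from the paper.

```python
# Edge differences for DWVC-E: E+ holds added edges, E- holds removed edges.
E = {("a", "b"), ("b", "c")}
E_star = {("b", "c"), ("a", "c")}
E_plus = E_star - E      # DWVC-E+ is the case E_minus == set()
E_minus = E - E_star     # DWVC-E- is the case E_plus  == set()
assert E_plus == {("a", "c")} and E_minus == {("a", "b")}

# Vertex differences for DWVC-W: V+ holds weight increases, V- decreases.
W = {"a": 3, "b": 5}
W_star = {"a": 4, "b": 2}
V_plus = {v for v in W if W_star[v] > W[v]}   # DWVC-W+ : V_minus == set()
V_minus = {v for v in W if W_star[v] < W[v]}  # DWVC-W- : V_plus  == set()
assert V_plus == {"a"} and V_minus == {"b"}
```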
\section{Four Adaptive Algorithms}
\label{sec:algorithms}
We first introduce the (1+1)~EA and RLS, then the \mbox{\textup{(1+1)~EA with 1/5-th Rule}} and \mbox{\textup{RLS with 1/5-th Rule}}, and finally give two simple lemmata based on the selection mechanism of the four algorithms, in three separate subsections.
It is worth pointing out that the solution maintained by the four algorithms is actually an $m$-dimensional vector (recall that $m$ is the number of edges in $G^* = (V^*,E^*,W^*)$), $[x_1,x_2,\ldots,x_m]$, in which each $x_i$ ($1 \le i \le m$) is the LP value of the edge $e_i \in E^*$.
We always use the notation $M$ to denote a mutation of the four algorithms, which corresponds to an adjustment (an increment or a decrement) of the values of some elements of the vector (those corresponding to the edges chosen by $M$).
\subsection{(1+1)~EA and RLS}
Consider two weighted graphs $G = (V,E,W)$ and $G^* = (V^*,E^*,W^*)$, where $G^*$ is obtained by one of the two graph-editing operations mentioned above on $G$. We study the expected runtime (i.e., the expected number of fitness evaluations) of the (1+1) EA and RLS, given in Algorithm~\ref{alg:(1+1) EA} and~\ref{alg:RLS} respectively, to find an MFDS of $G^*$ starting with a given MFDS $Y_{\textup{orig}}$ of $G$ (not from scratch).
The two algorithms run in a similar way, except for the mechanism used to select edges for mutation. The (1+1) EA selects each edge in $E^*$ with probability $1/m$ at each iteration, resulting in an edge-subset $I$ containing all the selected edges (see step 7 of Algorithm~\ref{alg:(1+1) EA}), and adjusts the LP values of the edges in $I$. The RLS differs from the (1+1)~EA by selecting exactly one edge in $E^*$ in each iteration.
The two algorithms share the same general idea: If $Y_{\textup{orig}}$ is also a feasible dual-solution of $G^*$, then they directly increase the LP values of the edges in $G^*$ until no edge can be assigned a larger LP value under the dual-LP constraint (i.e., an MFDS of $G^*$ is found once this condition is met).
Note that no infeasible dual-solution would be accepted during the process.
If $Y_{\textup{orig}}$ is an infeasible dual-solution of $G^*$, then the two algorithms first decrease the LP values of the edges in $E_{G^*}(Y_{\textup{orig}})$ (because only the vertices in $V_{G^*}(Y_{\textup{orig}})$ violate the dual-LP constraint with respect to $Y_{\textup{orig}}$), aiming to obtain a feasible dual-solution $Y_t$ of $G^*$ as soon as possible; afterwards, they increase the LP values of the edges in $G^*$ to obtain an MFDS based on $Y_t$.
The general idea of the two algorithms shows that the feasibility of the maintained solution decides the adjustment directions of the LP values of the selected edges. Thus we give a sign function $s(Y)$ below, to judge whether or not the considered solution $Y$ is a feasible dual-solution of $G^*$.
\begin{displaymath}
s(Y) = \left\{
\begin{array}{ll}
-1 & \text{if} \ V_{G^*}(Y) \neq \emptyset, \ \text{i.e.,} \ Y \ \text{is infeasible,} \\
\phantom{-}1 & \text{otherwise, i.e.,} \ Y \ \text{is feasible.} \\
\end{array}
\right.
\end{displaymath}
It is necessary to point out that a mutation of the (1+1) EA may choose more than one edge, and the LP values of the chosen edges are required to be either all increased or all decreased.
In addition to the sign function $s()$, we also present a function $f(Y',Y)$ to compare the fitness of $Y'$ and $Y$, where $Y'$ is the dual-solution obtained by a mutation $M$ on the dual-solution $Y$ maintained by the two algorithms. It is defined as follows: $f(Y',Y) \ge 0$ if $Y'$ is not worse than $Y$; $f(Y',Y) < 0$ otherwise.
\begin{equation*}
f(Y',Y) =
\begin{cases}
s(Y') \cdot \sum_{e \in E^*} \big(Y'(e) - Y(e) \! \big) & \text{if $s(Y) = 1$} \\
\sum_{e \in E_{G^*}(Y)} \big(Y(e) - Y'(e) \! \big) - m \cdot W_{\textup{max}} \cdot \sum_{e \in E^* \setminus E_{G^*}(Y)} \big |Y(e) - Y'(e) \! \big| & \text{if $s(Y) = -1$}
\end{cases}
\end{equation*}
By the general idea of the two algorithms given above, if $Y$ is a feasible dual-solution of $G^*$, then the two algorithms increase the LP values of the edges, thus we always have that $\sum_{e \in E^*} Y'(e) \ge \sum_{e \in E^*} Y(e)$ for the obtained offspring $Y'$.
If $Y'$ is infeasible, then $s(Y') = -1$ and $f(Y',Y) < 0$; otherwise, $s(Y') = 1$ and $f(Y',Y) \ge 0$.
If $Y$ is an infeasible dual-solution of $G^*$, then the two algorithms
decrease the LP values of the edges firstly, aiming to get a feasible dual-solution of $G^*$.
Note that the LP values of the edges in $E^* \setminus E_{G^*}(Y)$ do not need to be decreased as they satisfy the dual-LP constraint with respect to $Y$.
If they are decreased during the process to get the first feasible dual-solution, then the algorithm may spend much extra time to make up the decrements on the LP values of the edges in $E^* \setminus E_{G^*}(Y)$ (i.e., spend extra time to make the edges in $E^* \setminus E_{G^*}(Y)$ be tight again).
Thus the term of $f(Y',Y)$,
$$-m \cdot W_{\textup{max}} \cdot \sum_{e \in E^* \setminus E_{G^*}(Y)} \big|Y(e) - Y'(e) \! \big| \ ,$$
penalizes the mutation that decreases the LP values of the edges in $E^* \setminus E_{G^*}(Y)$, which guides the mutation to decrease only the LP values of the edges in $E_{G^*}(Y)$.
More specifically, if the LP value of some edge in $E^* \setminus E_{G^*}(Y)$ is changed by the considered mutation (note that the increment or decrement on the LP value is always $\geq 1$), then
\begin{eqnarray}
\label{eqn:fitness1}
-m \cdot W_{\textup{max}} \cdot \sum_{e \in E^* \setminus E_{G^*}(Y)} \big|Y(e) - Y'(e) \! \big| \leq - m \cdot W_{\textup{max}} \ .
\end{eqnarray}
\begin{myAlgorithm}
\caption{(1+1) EA}
\label{alg:(1+1) EA}
Initialize solution $Y$ and step size function $\sigma : E^* \rightarrow 1$ \;
\tcp{\small{$Y(e) = Y_{\textup{orig}}(e)$ for each $e \in E^* \cap E$, and $Y(e) = 0$ for each $e \in E^* \setminus E$}}
Determine $s(Y)$ \;
\While {the termination criteria not satisfied}
{
$Y' := Y$ and $I := \emptyset$ \tcp*{\small{set $I$ keeps all edges chosen by the mutation}}
\For {each edge $e \in E^*$ with probability $1/m$}
{
$Y'(e) := \max \{ Y(e) + s(Y) \cdot \sigma(e), 0\}$ \;
$I := I \cup \{e\}$ \;
}
Determine $s(Y')$ and $f(Y',Y)$ \;
\eIf {$f(Y',Y) \ge 0$}
{
$Y := Y'$ \;
$\sigma(e) := \min \{\alpha \cdot \sigma(e), \alpha^{\lceil \log_{\alpha} W_{\textup{max}} \rceil +1} \}$ for all $e \in I$ \;
\tcp{\small{$\alpha$ is the increasing/decreasing rate of the step size}}
}
{
\If {$s(Y) > 0$}
{
Let $I'$ be the subset of $I$ such that each edge $e \in I'$ violates the dual-LP constraint with respect to $Y'$, but no other edge in $I$ shares the endpoint that violates the dual-LP constraint with $e$ \;
$\sigma(e) := \max \{\sigma(e)/\alpha,1\}$ for all $e \in I'$ \;
}
}
}
\end{myAlgorithm}
Now we consider the upper bound of the term $\sum_{e \in E_{G^*}(Y)} \big(Y(e) - Y'(e) \! \big)$ under the assumption that the LP value of some edge in $E^* \setminus E_{G^*}(Y)$ is decreased by the considered mutation.
Since all solutions obtained during the process from the initial solution $Y_{\textup{orig}}$ to $Y$ are infeasible (including $Y_{\textup{orig}}$), $Y(e) \le Y_{\textup{orig}}(e) \le W_{\textup{max}}$ for each edge $e \in E^*$, i.e., $0 \le Y(e) - Y'(e) \le W_{\textup{max}}$.
Moreover, as $E^* \setminus E_{G^*}(Y)$ cannot be empty, $|E_{G^*}(Y)| < m$. Therefore,
\begin{eqnarray}
\label{eqn:fitness2}
\sum_{e \in E_{G^*}(Y)} \big(Y(e) - Y'(e) \! \big) < m \cdot W_{\textup{max}} \ .
\end{eqnarray}
Combining Inequalities~\ref{eqn:fitness1} and~\ref{eqn:fitness2}, $f(Y',Y) < 0$ no matter whether $Y'$ is feasible or infeasible, implying that the mutation would be rejected if it changes the LP value of some edge in $E^* \setminus E_{G^*}(Y)$.
For the case that no LP value of the edges in $E^* \setminus E_{G^*}(Y)$ is decreased by the considered mutation, it is easy to see that $f(Y',Y) \ge 0$.
\begin{myAlgorithm}
\caption{RLS}
\label{alg:RLS}
Initialize solution $Y$ and step size function $\sigma : E^* \rightarrow 1$ \;
\tcp{\small{$Y(e) = Y_{\textup{orig}}(e)$ for each $e \in E^* \cap E$, and $Y(e) = 0$ for each $e \in E^* \setminus E$}}
Determine $s(Y)$ \;
\While {the termination criteria not satisfied}
{
$Y' := Y$ \;
Choose an edge $e \in E^*$ uniformly at random \;
$Y'(e) := \max \{Y(e) + \sigma(e) \cdot s(Y), 0\}$ \;
Determine $s(Y')$ and $f(Y',Y)$ \;
\eIf {$f(Y',Y) \ge 0$}
{$Y := Y'$ and $\sigma(e) := \min \{\alpha \cdot \sigma(e), \alpha^{\lceil \log_{\alpha} W_{\textup{max}} \rceil +1} \}$ \;
\tcp{\small{$\alpha$ is the increasing/decreasing rate of the step size}}
}
{
\If {$s(Y) > 0$}
{
$\sigma(e) := \max \{\sigma(e)/\alpha,1\}$ \;
}
}
}
\end{myAlgorithm}
To deal with the case that the weights on the vertices are exponentially large with respect to the size of the graph (the number $m$ of edges), the Step Size Adaption strategy~\cite{beyer2002evolution} is incorporated into the two algorithms
(see steps 9-15 of Algorithm~\ref{alg:(1+1) EA} and steps 8-12 of Algorithm~\ref{alg:RLS}): the increment (called {\it step size} in the following text) on the LP values of the edges can exponentially increase or decrease. Let $\sigma : E^* \rightarrow \mathds{N}^+$ be the step size function that keeps the step size for each edge in $E^*$, and let $\sigma$ be initialized as $\sigma : E^* \rightarrow 1$.
Given a mutation of the RLS on $Y$, if it is accepted (i.e., $f(Y',Y) \geq 0$, where $Y'$ is the solution obtained by the mutation on $Y$), then the step size of the chosen edge $e$ is increased by a factor $\alpha$, where $\alpha$ is an integer between 2 and $W_{\textup{max}}$; otherwise, decreased by a factor $\alpha$ if $s(Y) > 0$.
W.l.o.g., we assume that the step size of each edge can be upper and lower bounded by $\alpha^{\lceil \log_{\alpha} W_{\textup{max}} \rceil +1}$ and 1, respectively.
Given a mutation of the (1+1) EA on $Y$ resulting $Y'$, if it is accepted, then the step size of each edge $e \in I$ is increased by a factor $\alpha$; otherwise, the step size of each edge $e \in I'$ is decreased by a factor $\alpha$ if $s(Y) > 0$, where $I'$ is the subset of $I$ such that each edge $e \in I'$ violates the dual-LP constraint with respect to $Y'$, but no other edge in $I$ shares the endpoint that violates the dual-LP constraint with $e$ (see step 14 of Algorithm~\ref{alg:(1+1) EA}).
The reason why we define the subset $I'$ of $I$ is that we can ensure that the step size of each edge in $I'$ is unfit for $Y$.
For an edge $e$ in $I \setminus I'$, there are two cases: (1) neither of its two endpoints violates the dual-LP constraint with respect to the dual-solution $Y'$; (2) there is another edge $e' \in I \setminus \{e\}$ that has a common endpoint with $e$ such that this common endpoint violates the dual-LP constraint with respect to the dual-solution $Y'$. For case (1), we should not decrease its step size. For case (2),
we cannot conclude that the step size of $e$ is unfit for the solution $Y$, because the step size of $e$ may be fit for $Y$ if it is considered independently.
If the algorithms adopted a ``radical'' strategy that decreases the step sizes of all the edges in $I$ whenever the mutation is rejected, then they would spend much time on increasing the step sizes of the edges (in some extreme cases, the step size cannot increase exponentially, resulting in an exponential waiting time to get an MFDS~\cite{pourhassan2017use}). Thus we adopt a ``conservative'' strategy: only decrease the step sizes of the edges in $I'$.
Note that for any mutation of the (1+1) EA or RLS that is rejected, the step sizes of the edges selected by the mutation are not decreased if $s(Y) < 0$, because the rejection of the mutation is caused by the selection of the edges, not the violation of the dual-LP constraint.
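The following self-contained Python sketch re-enacts the RLS of Algorithm~\ref{alg:RLS} with step size adaption on a toy DWVC-E instance of our own making (one edge added to a path); it is an illustration under our reading of the algorithm, not the paper's implementation. The run stops once the maintained $Y$ is an MFDS of $G^*$, using the fact that, with integer weights and values, maximality is equivalent to every edge having a tight endpoint.

```python
import math
import random

def loads(weights, edges, Y):
    """Per-vertex sums of the dual values on incident edges."""
    L = {v: 0 for v in weights}
    for (u, v), y in Y.items():
        L[u] += y
        L[v] += y
    return L

def sign(weights, edges, Y):
    L = loads(weights, edges, Y)
    return -1 if any(L[v] > weights[v] for v in weights) else 1

def is_mfds(weights, edges, Y):
    L = loads(weights, edges, Y)
    if any(L[v] > weights[v] for v in weights):
        return False
    # maximal iff every edge has a tight endpoint (integer weights/values)
    return all(L[u] == weights[u] or L[v] == weights[v] for (u, v) in edges)

def fitness(weights, edges, Y_new, Y, W_max):
    """The fitness comparing function f(Y', Y) from the text."""
    m = len(edges)
    L = loads(weights, edges, Y)
    bad = {v for v in weights if L[v] > weights[v]}
    E_bad = [e for e in edges if e[0] in bad or e[1] in bad]
    if not bad:
        return sign(weights, edges, Y_new) * sum(Y_new[e] - Y[e] for e in edges)
    gain = sum(Y[e] - Y_new[e] for e in E_bad)
    pen = m * W_max * sum(abs(Y[e] - Y_new[e]) for e in edges if e not in E_bad)
    return gain - pen

def rls(weights, edges, Y0, W_max, alpha=2, seed=0, max_iters=10_000):
    rng = random.Random(seed)
    cap = alpha ** (math.ceil(math.log(W_max, alpha)) + 1)
    Y, sigma = dict(Y0), {e: 1 for e in edges}
    for t in range(max_iters):
        if is_mfds(weights, edges, Y):
            return Y, t
        e = rng.choice(edges)
        Y_new = dict(Y)
        Y_new[e] = max(Y[e] + sign(weights, edges, Y) * sigma[e], 0)
        if fitness(weights, edges, Y_new, Y, W_max) >= 0:
            Y = Y_new
            sigma[e] = min(alpha * sigma[e], cap)
        elif sign(weights, edges, Y) > 0:       # no decrease when s(Y) < 0
            sigma[e] = max(sigma[e] // alpha, 1)
    return Y, max_iters

# G* adds the edge (a, c) to the path a-b-c; Y0 is an MFDS of the old graph,
# extended by value 0 on the new edge, so the RLS only has to grow (a, c).
weights = {"a": 6, "b": 4, "c": 6}
edges = [("a", "b"), ("b", "c"), ("a", "c")]
Y0 = {("a", "b"): 4, ("b", "c"): 0, ("a", "c"): 0}
Y, iters = rls(weights, edges, Y0, W_max=6)
assert is_mfds(weights, edges, Y)
```

On this instance every mutation touching $(a,b)$ or $(b,c)$ is rejected (it would overload $b$), so only the new edge's value grows, with its step size doubling and shrinking exactly as in the analysis of Section 4.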
\subsection{(1+1)~EA with 1/5-th Rule and RLS with 1/5-th Rule}
The two algorithms given in the previous subsection contain some artificial ingredients, such as the adjustment direction (increasing or decreasing the LP values) being controlled by the sign function $s()$, and the strict condition in the (1+1) EA for decreasing the step size of a specific edge (only the step sizes of the edges in $I'$ can be decreased, and only if the mutation is rejected and $s() > 0$). To eliminate these influences, we incorporate the 1/5-th (success) rule, and present two algorithms, the (1+1) EA with 1/5-th Rule and the RLS with 1/5-th Rule, given in Algorithms~\ref{alg:(1+1) EA with 1/5-th Rule} and~\ref{alg:RLS with 1/5-th Rule}, respectively.
The two algorithms follow the fitness comparing function $f(Y',Y)$ defined in the previous subsection.
\begin{myAlgorithm}
\caption{(1+1) EA with 1/5-th Rule}
\label{alg:(1+1) EA with 1/5-th Rule}
Initialize solution $Y$ and step size function $\sigma : E^* \rightarrow 1$ \;
\tcp{\small{$Y(e) = Y_{\textup{orig}}(e)$ for each $e \in E^* \cap E$, and $Y(e) = 0$ for each $e \in E^* \setminus E$}}
\While {the termination criteria not satisfied}
{
$Y' := Y$ and $I := \emptyset$ \tcp*{\small{set $I$ keeps all edges chosen by the mutation}}
Choose $b \in \{-1,1\}$ uniformly at random \;
\For {each edge $e \in E^*$ with probability $1/m$}
{
$Y'(e) := \max \{ Y(e) + b \cdot \sigma(e), 0\}$ \;
$I := I \cup \{e\}$ \;
}
Determine $f(Y',Y)$ \;
\eIf {$f(Y',Y) \ge 0$}
{
$Y := Y'$ \;
$\sigma(e) := \min \{\alpha \cdot \sigma(e), \alpha^{\lceil \log_{\alpha} W_{\textup{max}} \rceil +1} \}$ for all $e \in I$ \;
\tcp{\small{$\alpha$ is the increasing/decreasing rate of the step size}}
}
{
$\sigma(e) := \max \{\alpha^{-1/4} \cdot \sigma(e),1\}$ for all $e \in I$ \;
}
}
\end{myAlgorithm}
The general idea of the two algorithms is as follows: regardless of whether the currently maintained dual-solution is feasible, they either increase or decrease the LP values of the edges selected by the mutation, each with probability 1/2 (depending on the value of $b$; see step 4 of Algorithm~\ref{alg:(1+1) EA with 1/5-th Rule} and step 5 of Algorithm~\ref{alg:RLS with 1/5-th Rule}). If the mutation is accepted, then the dual-solution is updated, and the step sizes of the chosen edges are increased by a factor $\alpha$; otherwise, the step sizes of the chosen edges are decreased by a factor $\alpha^{1/4}$. It is necessary to point out that for a mutation of the (1+1) EA with 1/5-th Rule, we still require that the LP values of the edges selected by the mutation either all increase or all decrease.
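The factor pair $(\alpha, \alpha^{-1/4})$ is what makes this a 1/5-th rule: one accepted mutation raises $\log_{\alpha} \sigma$ by 1, while one rejected mutation lowers it by $1/4$, so the expected change is zero exactly when one in five mutations succeeds. A quick numeric check (our own sketch, not code from the paper):

```python
import math

# With success probability p, one step changes log_alpha(sigma) by +1
# (accept) or -1/4 (reject); the expected change p - (1 - p)/4 vanishes
# exactly at p = 1/5, which is the classic 1/5-th rule equilibrium.
def expected_log_drift(p):
    return p * 1.0 + (1 - p) * (-0.25)

assert math.isclose(expected_log_drift(1 / 5), 0.0)
assert expected_log_drift(1 / 2) > 0    # frequent success -> sigma grows
assert expected_log_drift(1 / 10) < 0   # rare success -> sigma shrinks

# after a accepts and r rejects, sigma = sigma0 * alpha**(a - r/4)
alpha, sigma = 2, 1.0
for _ in range(2):                      # 2 accepted mutations
    sigma *= alpha
for _ in range(4):                      # 4 rejected mutations
    sigma *= alpha ** (-1 / 4)
assert math.isclose(sigma, 1.0 * alpha ** (2 - 4 / 4))
```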
The previous subsection analyzed the behavior of the fitness comparing function $f(Y',Y)$ when the LP values of the edges are increased with $s(Y) = 1$, or decreased with $s(Y) = -1$.
Here we supplement the analysis with the remaining two cases: the LP values of the edges are decreased although $s(Y) = 1$, and the LP values of the edges are increased although $s(Y) = -1$.
If $s(Y) =1$ and the LP values of the edges are decreased, then obviously $s(Y') =1$, and
$$f(Y',Y) = \sum_{e \in E^*} \left(Y'(e) - Y(e) \! \right) < 0 \ .$$
If $s(Y) = -1$ and the LP values of some edges are increased, then
$$f(Y',Y) = \sum_{e \in E_{G^*}(Y)} \left(Y(e) - Y'(e) \! \right) - m \cdot W_{\textup{max}} \cdot \sum_{e \in E^* \setminus E_{G^*}(Y)} \left|Y(e) - Y'(e) \! \right| < 0 \ .$$
\begin{myAlgorithm}
\caption{RLS with 1/5-th Rule}
\label{alg:RLS with 1/5-th Rule}
Initialize solution $Y$ and step size function $\sigma : E^* \rightarrow 1$ \;
\tcp{\small{$Y(e) = Y_{\textup{orig}}(e)$ for each $e \in E^* \cap E$, and $Y(e) = 0$ for each $e \in E^* \setminus E$}}
\While {the termination criteria not satisfied}
{
$Y' := Y$ \;
Choose an edge $e \in E^*$ uniformly at random \;
Choose $b \in \{-1,1\}$ uniformly at random \;
$Y'(e) := \max \{Y(e) + b \cdot \sigma(e), 0\}$ \;
Determine $f(Y',Y)$ \;
\eIf {$f(Y',Y) \ge 0$}
{$Y := Y'$ and $\sigma(e) := \min \{\alpha \cdot \sigma(e), \alpha^{\lceil \log_{\alpha} W_{\textup{max}} \rceil +1} \}$ \;
\tcp{\small{$\alpha$ is the increasing/decreasing rate of the step size}}
}
{
$\sigma(e) := \max \{\alpha^{-1/4} \cdot \sigma(e),1\}$ \;
}
}
\end{myAlgorithm}
\subsection{Observations based on Fitness Comparing Function}
The selection mechanism of the four algorithms given above implies the following two lemmata.
\begin{lemma}
\label{lem:sign function}
Given two dual-solutions $Y$ and $Y'$, where $Y'$ is obtained by a mutation of the four algorithms on $Y$, if $Y'$ is accepted then $s(Y') \geq s(Y)$.
\end{lemma}
\begin{lemma}
\label{lem:infeasible to MFDS}
Given two dual-solutions $Y$ and $Y'$, where $Y'$ is obtained by a mutation of the four algorithms on $Y$,
if $Y$ is infeasible and $Y'$ is accepted, then the mutation only decreases the LP values of the edges in $E_{G^*}(Y)$.
\end{lemma}
\section{Runtime Analysis for the (1+1)~EA and RLS}
We start this section with a notion related to mutations, which plays an important role in the following discussion.
Given an edge $e$ in the weighted graph $G^*$, a mutation of the (1+1) EA or RLS is a {\it valid mutation on $e$} if it results in an increment or decrement on the LP value of $e$, or on the step size $\sigma(e)$ of $e$. Note that if the mutation is of the (1+1) EA, then it may choose some other edges in addition to $e$.
The two lemmata given below study the behaviors of the (1+1) EA and RLS on a specific edge $e^* = [v_1,v_2]$ in $G^*$.
\begin{lemma}
\label{lem:(1+1) EA for increment of G1}
Consider a feasible dual-solution $Y^{\dagger}$ of $G^*$, and an initial value $\sigma_1$ of the step size of the edge $e^*$. For a feasible dual-solution $Y^{\ddagger}$ obtained by the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) starting with $Y^{\dagger}$, where $Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \geq \sigma_1$, the algorithm takes expected runtime $\mathrm O \! \left(\alpha m \log_{\alpha} \left(Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \! \right) \! \right)$ to increase the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$.
\end{lemma}
\begin{proof}
We start with the analysis for the (1+1) EA.
Since $Y^{\dagger}$ is a feasible dual-solution of $G^*$, by Lemma~\ref{lem:sign function}, the sign function $s()$ remains at 1 during the process from $Y^{\dagger}$ to $Y^{\ddagger}$, indicating that the LP value of $e^*$ is monotonically increased from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$.
Let $Y$ be an arbitrary accepted solution of the (1+1) EA during the process, $M$ be a mutation of the (1+1) EA on $Y$, and $Y'$ be the offspring obtained by $M$ on $Y$.
In the following discussion, we first analyze the impact of the mutation $M$ on the step size $\sigma(e^*)$ of $e^*$, where the notation $\sigma(e^*)$ here denotes the step size of $e^*$ before the generation of $M$.
Observe that $M$ cannot influence $\sigma(e^*)$ if $e^* \notin I$, where $I$ denotes the set containing all the edges selected by $M$ (see step 7 of Algorithm~\ref{alg:(1+1) EA}). Thus in the following discussion, we assume that $e^* \in I$.
Case (1). $\sigma(e^*) \leq Y^{\ddagger}(e^*) - Y(e^*)$.
If $M$ is accepted by the (1+1) EA, then the step size of $e^*$ is increased from $\sigma(e^*)$ to $\alpha \cdot \sigma(e^*)$;
otherwise, the analysis on $M$ is divided into the two subcases given below.
Case (1.1). An endpoint $v_1$ of $e^*$ violates the dual-LP constraint with respect to $Y'$. Since $\sigma(e^*) \leq Y^{\ddagger}(e^*) - Y(e^*)$, the edge-subset $(E_{G^*}(v_1) \cap I) \setminus \{e^*\}$ cannot be empty, and the increments on the LP values of the edges in $E_{G^*}(v_1) \cap I$ result in the dual-LP constraint violation on $v_1$ with respect to $Y'$.
According to the definition of the edge-set $I'$ (see step 14 of Algorithm~\ref{alg:(1+1) EA}), we have that $e^* \notin I'$, and $M$ cannot influence $\sigma(e^*)$.
Case (1.2). No endpoint of $e^*$ violates the dual-LP constraint with respect to $Y'$. According to the definition of the edge-set $I'$, we also have that $e^* \notin I'$, and $M$ cannot influence $\sigma(e^*)$.
By the above analysis, no mutation of the (1+1) EA can cause a decrement in the step size of $e^*$ under Case (1).
If the mutation $M$ only selects the edge $e^*$, then it is a valid mutation on $e^*$, and can be accepted by the algorithm.
The (1+1) EA generates such a valid mutation on $e^*$ with probability $\mathrm \Omega (1 / m)$. Thus under Case (1), the algorithm takes expected runtime $\mathrm O (m)$ to increase the LP value of edge $e^*$ from $Y(e^*)$ to $Y(e^*) + \sigma(e^*)$, and increase the step size of $e^*$ from $\sigma(e^*)$ to $\alpha \cdot \sigma(e^*)$.
Case (2). $\sigma(e^*) > Y^{\ddagger}(e^*) - Y(e^*)$. In this case, the mutation $M$ would be rejected by the (1+1) EA since $e^* \in I$.
The analysis on $M$ can be divided into the following two subcases.
Case (2.1). There is no edge in $I \setminus \{e^*\}$ sharing the endpoint of $e^*$ that violates the dual-LP constraint with respect to $Y'$. For this subcase, $e^* \in I'$, and the step size of $e^*$ is decreased from $\sigma(e^*)$ to $\sigma(e^*) / \alpha$.
Case (2.2). There is an edge $e^*_1 \in I \setminus \{e^*\}$ sharing the endpoint of $e^*$ that violates the dual-LP constraint with respect to $Y'$. Because of the existence of $e^*_1$, $e^* \notin I'$ and $M$ does not influence the step size of $e^*$.
If the mutation $M$ only selects the edge $e^*$, then it is a valid mutation on $e^*$, and it belongs to Case (2.1).
The (1+1) EA generates such a valid mutation on $e^*$ with probability $\mathrm \Omega(1 / m)$.
Thus under Case (2), the (1+1) EA takes expected runtime $\mathrm O (m)$ to decrease the step size of $e^*$ from $\sigma(e^*)$ to $\sigma(e^*)/ \alpha$.
Now we are ready to analyze the expected runtime of the (1+1) EA to increase the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$, using the above obtained results. Since $Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \geq \sigma_1$, the whole process can be divided into Phase (I) and Phase (II).
Phase (I) contains all steps of the algorithm until the step size of $e^*$ is decreased for the first time, i.e., the step size of $e^*$ can only increase during the phase. More specifically, the condition of Case (1) is always met with respect to the maintained solution $Y$ during Phase (I).
Phase (II) follows Phase (I), during which the step size of $e^*$ may increase or decrease, but the general trend is decreasing.
W.l.o.g., assume that the initial value $\sigma_1$ of the step size of $e^*$ is equal to $\alpha^p$, where $p$ is a non-negative integer.
{\bf Phase (I)}.
Let $q$ be the integer such that
$$\sum_{i=p}^{q} {\alpha}^i \leq Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \ \ \textrm{and} \ \ \sum_{i=p}^{q+1} {\alpha}^i > Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \ .$$
Now it is easy to see that the step size of $e^*$ can be increased from $\alpha^p$ to ${\alpha}^{q+1}$ during the phase. Thus the number of valid mutations on $e^*$ required during Phase (I) is $q-p+1$, where
$$q-p+1 = \left\lfloor \log_{\alpha} \left(\frac{\big( Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \! \big) \left(\alpha-1 \right)}{\alpha^p} +1 \right) \right\rfloor \ .$$
Combining the expected runtime of the algorithm to generate a valid mutation on $e^*$,
Phase (I) takes expected runtime $\mathrm O \! \left(m \log_{\alpha} \left(Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \! \right) \! \right)$ (because $p$ may be 0).
{\bf Phase (II)}. During the phase, the LP value of $e^*$ is increased from $Y^{\dagger}(e^*) + \sum_{i=p}^{q} {\alpha}^i$ to $Y^{\ddagger}(e^*)$, and the step size of $e^*$ is decreased from ${\alpha}^{q+1}$ to 1. Similar to the analysis for Phase (I), we analyze the number $T$ of valid mutations on $e^*$ during Phase (II).
However, to simplify the analysis, we separately consider the number $t_i$ of valid mutations on $e^*$ with step size $\alpha^i$ among the $T$ valid mutations on $e^*$, where $0 \leq i \leq q+1$ (since the step size of $e^*$ can increase or decrease during Phase (II), there may be more than one valid mutation on $e^*$ with step size $\alpha^i$). Obviously $T = \sum_{i=0}^{q+1} t_i$.
We start with the analysis for $t_{q+1}$. Since the valid mutation on $e^*$ with step size $\alpha^{q+1}$ cannot be accepted, the step size will be decreased to $\alpha^{q}$. However, if a valid mutation on $e^*$ with step size $\alpha^{q}$ is accepted, then the step size will be increased to $\alpha^{q+1}$ again. Thus $t_{q+1} \leq 1 + (\alpha - 1) = \alpha$, because there are at most $\alpha - 1$ valid mutations on $e^*$ with step size $\alpha^{q}$ among the $T$ valid mutations on $e^*$ that can be accepted by the algorithm.
Now we consider $t_{i}$ for any $1 \leq i \leq q$, under the assumption that the mutation on $e^*$ with step size $\alpha^{i+1}$ cannot be accepted.
Using reasoning similar to that given above for the mutation on $e^*$ with step size $\alpha^{q+1}$, we get that at most $\alpha$ of the $T$ valid mutations on $e^*$ with step size $\alpha^{i}$ can be rejected by the algorithm.
Combining this with the observation that at most $\alpha - 1$ valid mutations on $e^*$ with step size $\alpha^{i}$ can be accepted, we derive that $t_i \leq 2 \alpha -1$.
Once the step size of $e^*$ has been decreased to 1, the LP value of $e^*$ is between $Y^{\ddagger}(e^*) - \alpha + 1$ and $Y^{\ddagger}(e^*)$. If the LP value of $e^*$ equals $Y^{\ddagger}(e^*)$, then Phase (II) is over, and $t_0 = 0$. If the LP value of $e^*$ is between $Y^{\ddagger}(e^*) - \alpha + 1$ and $Y^{\ddagger}(e^*) -1$, then $t_0 \leq \alpha - 1$. The above analysis gives
$$T = \sum_{i=0}^{q+1} t_i \leq (\alpha - 1) + q \cdot (2 \alpha - 1) + \alpha = (2 \alpha -1) \cdot (q+1) \ .$$
By the analysis for Case (1-2), Phase (II) takes expected runtime $\mathrm O \! \left(\alpha m \log_{\alpha} \left(Y^{\ddagger} \left(e^* \right) - Y^{\dagger}\left(e^* \right) \! \right) \! \right)$.
Summarizing the above analysis for the two phases, there are at most $2 \alpha (q+1)$ valid mutations on $e^*$ during the process from $Y^{\dagger}$ to $Y^{\ddagger}$, for which the (1+1) EA takes expected runtime
$\mathrm O \! \left(\alpha m \log_{\alpha} \left(Y^{\ddagger} \left(e^* \right) - Y^{\dagger} \left(e^* \right) \! \right) \! \right)$.
Since the RLS chooses exactly one edge in each iteration, any mutation of the RLS on $e^*$ is valid. Using reasoning similar to that given above, we obtain the same expected runtime for the RLS.
\end{proof}
Now we analyze the expected runtime of the two algorithms to make the edge $e^*$ satisfy the dual-LP constraint, if they start with an infeasible dual-solution with respect to which $e^*$ violates the dual-LP constraint.
\begin{lemma}
\label{lem:(1+1) EA for decrement of G1}
Consider an infeasible dual-solution $Y^{\dagger}$ of $G^*$, with respect to which the edge $e^*$ violates the dual-LP constraint. For the first feasible dual-solution $Y^{\ddagger}$ obtained by the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) starting with $Y^{\dagger}$, the algorithm takes expected runtime
$\mathrm O \! \left(m \log_{\alpha} \left(Y^{\dagger}(e^*) - Y^{\ddagger}(e^*) \! \right) \! \right)$ to decrease the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$.
\end{lemma}
\begin{proof}
We start with the analysis for the (1+1) EA. Since $Y^{\ddagger}$ is the first feasible dual-solution obtained by the (1+1) EA starting with $Y^{\dagger}$, the LP value of $e^*$ is monotonically decreased from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$.
Assume that the step size of $e^*$ is initialized as $\alpha^p$, where $p$ is a nonnegative integer. Observe that the step size of $e^*$ cannot decrease during the process from $Y^{\dagger}$ to $Y^{\ddagger}$ because the sign function remains at $-1$.
Hence if $Y^{\ddagger}(e^*) > 0$, then there exists an integer $q$ such that $\sum_{i=p}^{q} \alpha^i = Y^{\dagger}(e^*) - Y^{\ddagger}(e^*)$, and the step size of $e^*$ is increased from $\alpha^p$ to $\alpha^{q+1}$ during the process.
Consequently, the process contains $q-p+1$ valid mutations on $e^*$, where
$$q-p+1 = \log_{\alpha} \left(\frac{\big(Y^{\dagger}(e^*) - Y^{\ddagger}(e^*) \big) \cdot \left(\alpha -1 \right)}{\alpha^p} +1 \right) \ .$$
If $Y^{\ddagger}(e^*) = 0$, then there exists an integer $q$ such that $\sum_{i=p}^{q-1} \alpha^i < Y^{\dagger}(e^*) $, $\sum_{i=p}^{q} \alpha^i \ge Y^{\dagger}(e^*)$, and the step size of $e^*$ is increased from $\alpha^p$ to $\alpha^{q+1}$ during the process.
Similarly, the process contains $q-p+1$ valid mutations on $e^*$, where
$$q-p+1 = \left\lceil \log_{\alpha} \left(\frac{\left(\alpha -1 \right) \cdot Y^{\dagger}(e^*) }{\alpha^p} +1 \right) \right\rceil \ .$$
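Again by the geometric sum, the two conditions on $q$ are equivalent to
$$\alpha^{q-p} < \frac{\left(\alpha -1 \right) \cdot Y^{\dagger}(e^*)}{\alpha^p} + 1 \leq \alpha^{q-p+1} \ ,$$
and taking $\log_{\alpha}$ yields the stated ceiling expression.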
The mutation that only selects the edge $e^*$ is a valid mutation on $e^*$, which can be generated by the (1+1) EA with probability $\mathrm \Omega(1 / m)$. Thus the (1+1) EA takes expected runtime
$\mathrm O \big(m(q+1) \! \big) = \mathrm O \! \left(m \log_{\alpha} \left(Y^{\dagger}(e^*) - Y^{\ddagger}(e^*)\! \right) \! \right)$ to get $Y^{\ddagger}$ (because $p$ may be 0). The above conclusions for the (1+1) EA also apply to the RLS.
\end{proof}
\subsection{Analysis for DWVC with Edge Modification}
We start the subsection with the analysis of the algorithms (1+1) EA and RLS for the two special variants of DWVC-E, namely, DWVC-E$^+$ and DWVC-E$^-$.
Denote by $E^+ = E^* \setminus E$ the set containing all the newly added edges, and by $E^- = E \setminus E^*$ the set containing all the removed edges.
The variant DWVC-E$^+$ considers the case that $E^- = \emptyset$, and DWVC-E$^-$ considers the case that $E^+ = \emptyset$.
The following theorem analyzes the performances of the two algorithms for DWVC-E$^+$, from two different perspectives.
We remark that for an instance $\{G = (V,E,W),Y_{\textup{orig}},E^+\}$ of \mbox{\textup{DWVC-E}}$^+$, $|E^+| = D$, and for each edge $e \in E^+$, $Y_{\textup{orig}}(e)$ and $\sigma(e)$ are initialized as 0 and 1, respectively.
\begin{theorem}
\label{theo:(1+1) EA for DWVC-E+}
The expected runtime of the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) for \mbox{\textup{DWVC-E}}$^+$ is
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{D,\log (\alpha D \cdot \log_{\alpha} W_{\textup{max}})\} \! \big)$.
\end{theorem}
\begin{proof}
We first consider the expected runtime of the (1+1) EA to obtain an MFDS of $G^* = (V,E \cup E^+,W)$, starting with the given MFDS $Y_{\textup{orig}}$ of $G = (V,E,W)$.
Observe that $Y_{\textup{orig}}$ is a feasible dual-solution of $G^*$. Thus combining Lemma 1 and the general idea of the algorithm, we have that all mutations accepted by the algorithm increase the LP values of the edges in $G^*$.
If $Y_{\textup{orig}}$ is an MFDS of $G^*$, then any mutation on $Y_{\textup{orig}}$ results in an infeasible solution that would be rejected, i.e., the algorithm keeps the dual-solution $Y_{\textup{orig}}$ forever.
In the following discussion, we assume that $Y_{\textup{orig}}$ is not an MFDS of $G^*$. Observe that any increment on the LP values of the edges in $E$ would result in an infeasible solution that cannot be accepted by the algorithm. Thus we have that $Y^*(e) = Y_{\textup{orig}}(e)$ for each edge $e \in E$, and $Y^*(e) \geq Y_{\textup{orig}}(e)$ for each edge $e \in E^+$, where $Y^*$ is an MFDS of $G^*$ obtained by the (1+1) EA starting with $Y_{\textup{orig}}$.
To study the expected runtime of the (1+1) EA to get $Y^*$, two analyses from different perspectives are given below: one considers the edges in $E^+$ sequentially; the other considers them in an interleaved way.
We start with the analysis from the perspective that considers the edges in $E^+$ sequentially.
Let $e^* = [v_1,v_2]$ be an arbitrary edge in $E^+$ with $Y^*(e^*) - Y_{\textup{orig}}(e^*) > 0$. Since $Y^*(e^*) - Y_{\textup{orig}}(e^*) \leq W_{\textup{max}}$ and the step size of $e^*$ is initialized with value 1, Lemma~\ref{lem:(1+1) EA for increment of G1} gives that the (1+1) EA takes expected runtime
$$\mathrm O \! \left(\alpha m \log_{\alpha} \left(Y^*(e^*) - Y_{\textup{orig}}(e^*)\! \right) \! \right) = \mathrm O (\alpha m \log_{\alpha} W_{\textup{max}})$$
to increase the LP value of $e^*$ from $Y_{\textup{orig}}(e^*)$ to $Y^*(e^*)$.
Since the number of edges in $E^+$ is bounded by $D$, the (1+1) EA takes expected runtime $\mathrm O (\alpha m D \log_{\alpha} W_{\textup{max}})$ to get $Y^*$.
Now we analyze the expected runtime of the (1+1) EA to get $Y^*$ from the other perspective, which considers the edges in $E^+$ as a whole.
For each edge $e \in E^+$, denote $Y^*(e) - Y_{\textup{orig}}(e)$ by $\Delta(e)$, and denote by $\beta(e)$ the number of valid mutations on $e$ that the algorithm requires to increase the LP value of $e$ from $Y_{\textup{orig}}(e)$ to $Y^*(e)$.
Let $E_{\Delta} = \{e \in E^+ | \Delta(e) \neq 0\}$, and let the potential of the dual-solution $Y_{\textup{orig}}$ be
\begin{eqnarray}
\label{eqn:61}
g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e) \ .
\end{eqnarray}
Observe that $E_{\Delta} \subset E^+$.
Since a mutation may be a valid mutation on both $e_1 \in E^+$ and $e_2 \in E^+ \setminus \{e_1\}$ simultaneously, $g(Y_{\textup{orig}})$ is an upper bound on the number of valid mutations on the edges in $E_{\Delta}$ that the algorithm requires to get $Y^*$ starting from $Y_{\textup{orig}}$.
Moreover, the analysis of Lemma~\ref{lem:(1+1) EA for increment of G1} gives that no mutation on $Y_{\textup{orig}}$ can increase its potential.
To obtain the expected drift of $g$, we first consider the relation between $|E_{\Delta}|$ and $g(Y_{\textup{orig}})$.
For each edge $e \in E_{\Delta}$, Lemma~\ref{lem:(1+1) EA for increment of G1} gives that
\begin{eqnarray}
\label{eqn:62}
\beta(e) \leq 2 \alpha \left\lfloor \log_{\alpha} \big( \! (\alpha-1) \cdot \Delta(e) +1 \big)\right\rfloor
\leq 2 \alpha \log_{\alpha} \big(\alpha \cdot \Delta(e) \! \big)
\leq 2 \alpha \left(\log_{\alpha} W_{\textup{max}} + 1 \right) \ .
\end{eqnarray}
By Equations~\ref{eqn:61} and~\ref{eqn:62}, we have that
$$|E_{\Delta}| \ge \frac{g(Y_{\textup{orig}})}{2 \alpha \left(\log_{\alpha} W_{\textup{max}} + 1 \right)} \ \ \textup{and} \ \ g(Y_{\textup{orig}}) \le D \cdot 2 \alpha \left(\log_{\alpha} W_{\textup{max}} + 1 \right) \ .$$
A valid mutation that chooses exactly one of the edges in $E_{\Delta}$ can be generated by the algorithm with probability $\mathrm \Omega(|E_{\Delta}|/(e \cdot m))$, which results in a new solution $Y'$ with $g(Y') = g(Y_{\textup{orig}}) -1$.
Thus the expected drift of $g$ can be lower bounded by
$$\frac{|E_{\Delta}|}{e \cdot m} \ge \frac{g(Y_{\textup{orig}})}{e \cdot 2 \alpha m \cdot \left(\log_{\alpha} W_{\textup{max}} + 1 \right)} \ .$$
As mentioned above, the maximum value that $g(Y_{\textup{orig}})$ can take is $2 \alpha D \left(\log_{\alpha} W_{\textup{max}} + 1 \right)$.
Combining this with the minimum value 1 that $g(Y_{\textup{orig}})$ can take and the expected drift of $g$, the Multiplicative Drift Theorem~\cite{algorithmica/DoerrJW12} gives that the (1+1) EA takes expected runtime
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\alpha D \cdot \log_{\alpha} W_{\textup{max}}) \! \big)$ to get $Y^*$.
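Concretely, the drift bound above gives $\delta = 1/\big(e \cdot 2 \alpha m \left(\log_{\alpha} W_{\textup{max}} + 1 \right) \! \big)$, so the Multiplicative Drift Theorem bounds the expected number of iterations by
$$\frac{1 + \ln \big(2 \alpha D \left(\log_{\alpha} W_{\textup{max}} + 1 \right) \! \big)}{\delta} = \mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\alpha D \cdot \log_{\alpha} W_{\textup{max}}) \! \big) \ .$$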
Summarizing the above analysis, we can conclude that the (1+1) EA takes expected runtime $\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{D,\log (\alpha D \cdot \log_{\alpha} W_{\textup{max}})\} \! \big)$ to find an MFDS of $G^*$ starting with $Y_{\textup{orig}}$.
Since we only consider the mutations selecting exactly one edge in the analysis for the (1+1) EA, the above conclusions also apply to the RLS.
\end{proof}
Given an instance $\{G = (V,E,W),Y_{\textup{orig}},E^-\}$ of \mbox{\textup{DWVC-E}}$^-$, the following theorem considers the expected runtime of the (1+1) EA and RLS to obtain an \mbox{\textup{MFDS}} of $G^* = (V,E \setminus E^-,W)$, starting with the MFDS $Y_{\textup{orig}}$ of $G$.
Note that the domains of definition of $Y_{\textup{orig}}$ and the weight function $W$ are modified to $E \setminus E^-$ after the removal, and $|E^-| = D$.
\begin{theorem}
\label{theo:(1+1) EA for DWVC-E-}
The expected runtime of the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) for \mbox{\textup{DWVC-E}}$^-$ is
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$.
\end{theorem}
\begin{proof}
Observe that $Y_{\textup{orig}}$ is a feasible dual-solution of $G^*$, and the endpoints of the edges in $E^-$ may not be tight with respect to $Y_{\textup{orig}}$ once the edges in $E^-$ are removed.
Thus the LP values of the edges in $E_{G}(E^-)$ may have room to be increased.
If $Y_{\textup{orig}}$ is an MFDS of $G^*$, then any mutation of the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) on $Y_{\textup{orig}}$ would be rejected, and the algorithm keeps the dual-solution $Y_{\textup{orig}}$ forever.
In the following discussion, we assume that $Y_{\textup{orig}}$ is not an MFDS of $G^*$.
Let $Y^*$ be an arbitrary MFDS of $G^*$ obtained by the (1+1) EA (or RLS) starting with $Y_{\textup{orig}}$.
The above analysis gives that $Y^*(e) = Y_{\textup{orig}}(e)$ for each edge $e \in E \setminus \big( E^- \cup E_{G}(E^-) \! \big)$, and $Y^*(e) \geq Y_{\textup{orig}}(e)$ for each edge $e \in E_{G}(E^-)$.
Observe that all the edges in $E_{G}(E^-)$ are incident to the endpoints of the edges in $E^-$, and the number of endpoints of the edges in $E^-$ is upper bounded by $2D$. Combining the observation with the fact that the sum of the LP values of the edges sharing an endpoint cannot be larger than the weight of the endpoint under the dual-LP constraint, we have that $\sum_{e \in E_{G}(E^-)} Y^*(e)$ can be upper bounded by $2 D \cdot W_{\textup{max}}$.
For each edge $e \in E_{G}(E^-)$, denote $Y^*(e) - Y_{\textup{orig}}(e)$ by $\Delta(e)$, and denote by $\beta(e)$ the number of valid mutations on $e$ that the algorithm requires to increase the LP value of $e$ from $Y_{\textup{orig}}(e)$ to $Y^*(e)$.
Let $E_{\Delta} = \{e \in E_{G}(E^-) | \Delta(e) \neq 0\}$. Then we have
$$\sum_{e \in E_{\Delta}} \Delta(e) \ = \sum_{e \in E_{G}(E^-)}\left(Y^*(e) - Y_{\textup{orig}}(e) \! \right) \ \leq \sum_{e \in E_{G}(E^-)} Y^*(e) \le 2 D \cdot W_{\textup{max}} \ .$$
Let the potential of the solution $Y_{\textup{orig}}$ be
$$g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e) \ .$$
Similar to the analysis given in Theorem~\ref{theo:(1+1) EA for DWVC-E+}, $g(Y_{\textup{orig}})$ is an upper bound on the number of valid mutations on the edges in $E_{\Delta}$ that the algorithm requires to get $Y^*$ starting from $Y_{\textup{orig}}$, and no mutation on $Y_{\textup{orig}}$ can increase its potential.
The analysis for the expected drift of $g$ is divided into two cases, based on the value of $\sum_{e \in E_{\Delta}} \Delta(e)/|E_{\Delta}|$.
Case (1). $\sum_{e \in E_{\Delta}} \Delta(e) < \alpha \cdot |E_{\Delta}|$.
Lemma~\ref{lem:(1+1) EA for increment of G1} gives that $
\beta(e) \leq 2 \alpha (\log_{\alpha} \Delta(e) + 1)$ for each edge $e \in E_{\Delta}$. Thus we have
\begin{eqnarray*}
g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e)
&\le& 2 \alpha \cdot |E_{\Delta}| + 2 \alpha \cdot \log_{\alpha} \left(\prod_{e \in E_{\Delta}} \Delta(e) \right) \\
&\le& 2 \alpha \cdot |E_{\Delta}| + 2 \alpha \cdot |E_{\Delta}| \cdot \log_{\alpha} \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \le 4 \alpha \cdot |E_{\Delta}| \le 4 \alpha m \ ,
\end{eqnarray*}
implying that $|E_{\Delta}| \ge g(Y_{\textup{orig}}) / (4 \alpha)$.
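The second inequality in the chain above is an instance of the AM--GM inequality (equivalently, the concavity of $\log_{\alpha}$):
$$\log_{\alpha} \left(\prod_{e \in E_{\Delta}} \Delta(e) \right) = \sum_{e \in E_{\Delta}} \log_{\alpha} \Delta(e) \leq |E_{\Delta}| \cdot \log_{\alpha} \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \ ,$$
and the third inequality uses the case assumption $\sum_{e \in E_{\Delta}} \Delta(e) < \alpha \cdot |E_{\Delta}|$, which makes the logarithmic factor less than 1.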
A valid mutation that chooses exactly one of the edges in $E_{\Delta}$ can be generated by the algorithm with probability $\mathrm \Omega(|E_{\Delta}|/(e \cdot m))$, which results in a new solution $Y'$ with $g(Y') = g(Y_{\textup{orig}}) -1$.
Thus the expected drift of $g$ can be lower bounded by
$$\frac{|E_{\Delta}|}{e \cdot m} \ge \frac{g(Y_{\textup{orig}})}{e \cdot 4 \alpha m} \ .$$
Case (2). $\sum_{e \in E_{\Delta}} \Delta(e) \ge \alpha \cdot |E_{\Delta}|$. By Lemma~\ref{lem:(1+1) EA for increment of G1}, we can get that
\begin{eqnarray*}
g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e) \le \sum_{e \in E_{\Delta}} 2 \alpha (\log_{\alpha} \Delta(e) + 1)
\le |E_{\Delta}| \cdot 2 \alpha (\log_{\alpha} W_{\textup{max}} +1) \ ,
\end{eqnarray*}
implying that
$|E_{\Delta}| \ge \frac{g(Y_{\textup{orig}})}{2 \alpha \left(\log_{\alpha} W_{\textup{max}} + 1 \right)}$.
Using reasoning similar to that given for Case (1), we have that the expected drift of $g$ can be lower bounded by
$$\frac{|E_{\Delta}|}{e \cdot m} \ge \frac{g(Y_{\textup{orig}})}{e \cdot 2 \alpha m \cdot \left( \log_{\alpha} W_{\textup{max}} + 1 \right)} \ .$$
Now we consider the maximum value that $g(Y_{\textup{orig}})$ can take,
\begin{eqnarray}
\label{ee1}
g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e)
&\le& 2 \alpha \cdot |E_{\Delta}| + 2 \alpha \cdot \log_{\alpha} \left(\prod_{e \in E_{\Delta}} \Delta(e) \right) \\
\label{ee2}
&\le& 2 \alpha \cdot |E_{\Delta}| + 2 \alpha \cdot |E_{\Delta}| \cdot \log_{\alpha} \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \\
\label{ee3}
&\le& 4 \alpha \cdot |E_{\Delta}| \cdot \log_{\alpha} \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \\
\label{ee4}
&\le& 4 \alpha \cdot |E_{\Delta}| \cdot \log_{\alpha} \frac{2D \cdot W_{\textup{max}}}{|E_{\Delta}|} \\
\label{ee5}
&\le& 4 \alpha \cdot \frac{2D \cdot W_{\textup{max}}}{e} \cdot \log_{\alpha} e \\
\label{ee6}
&\le& 8 \alpha D \cdot W_{\textup{max}} \ ,
\end{eqnarray}
where the factor $\log_{\alpha} e$ is not greater than $e$ as $\alpha \in [2, W_{\textup{max}}]$, and Inequality~\ref{ee5} can be derived from the observation that $f(x) = x \cdot \log_{\alpha} (2D \cdot W_{\textup{max}}/x)$ ($x > 0$) attains its maximum value at $x = 2D \cdot W_{\textup{max}}/e$.
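The maximizer of $f$ can be verified by differentiation:
$$f'(x) = \log_{\alpha} \frac{2D \cdot W_{\textup{max}}}{x} - \frac{1}{\ln \alpha} \ , \ \ \textrm{so} \ \ f'(x) = 0 \iff \ln \frac{2D \cdot W_{\textup{max}}}{x} = 1 \iff x = \frac{2D \cdot W_{\textup{max}}}{e} \ .$$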
Summarizing the analysis for Cases (1-2), we have that the expected drift of $g$ can be lower bounded by
$$\frac{|E_{\Delta}|}{e \cdot m} \ge \frac{g(Y_{\textup{orig}})}{e \cdot 2 \alpha m \cdot \max\{2, \log_{\alpha} W_{\textup{max}} + 1\} } = \frac{g(Y_{\textup{orig}})}{e \cdot 2 \alpha m \cdot (\log_{\alpha} W_{\textup{max}} + 1 )} \ ,$$
and the maximum value of $g(Y_{\textup{orig}})$ can be bounded by
$$\max \{4 \alpha m, 8 \alpha D \cdot W_{\textup{max}} \} \ .$$
The Multiplicative Drift Theorem~\cite{algorithmica/DoerrJW12} implies that the (1+1) EA takes expected runtime
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ to find an MFDS of $G^*$ starting with $Y_{\textup{orig}}$.
Since we only consider the mutations selecting exactly one edge in the analysis for the (1+1) EA, the above conclusions also apply to the RLS.
\end{proof}
Consider an instance $\{G = (V,E,W),Y_{\textup{orig}},E^*\}$ of \mbox{\textup{DWVC-E}}. By the analysis for \mbox{\textup{DWVC-E}}$^+$ and \mbox{\textup{DWVC-E}}$^-$, we have that $Y_{\textup{orig}}$ is a feasible dual-solution of $G^* = (V,E^*,W)$, and the LP values of the edges in $E^+ \cup E_{G}(E^-)$ may have room to be increased, where
$E^+ = E^* \setminus E$ and $E^- = E \setminus E^*$.
Since $|E^+ \cup E^-|$ is bounded by $D$, we can derive the following theorem for \mbox{\textup{DWVC-E}} using reasoning similar to that for Theorems~\ref{theo:(1+1) EA for DWVC-E+} and~\ref{theo:(1+1) EA for DWVC-E-}.
\begin{theorem}
\label{theo:(1+1) EA for DWVC-E}
The expected runtime of the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) for \mbox{\textup{DWVC-E}} is
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$.
\end{theorem}
\subsection{Analysis for DWVC with Weight Modification}
We start the subsection with the analysis of the (1+1) EA and RLS for the two special variants of DWVC-W, namely, DWVC-W$^+$ and DWVC-W$^-$.
Denote by $V^+$ the set containing all the vertices $v$ with $W^*(v) > W(v)$, and by $V^-$ the set containing all the vertices $v$ with $W^*(v) < W(v)$.
The variant DWVC-W$^+$ considers the case that $V^- = \emptyset$, and DWVC-W$^-$ considers the case that $V^+ = \emptyset$.
Consider an instance $\{G = (V,E,W),Y_{\textup{orig}},W^+,V^+\}$ of \mbox{\textup{DWVC-W}}$^+$.
Observe that $Y_{\textup{orig}}$ is obviously a feasible dual-solution of $G^* = (V,E,W^+)$, and the LP values of the edges in $E_{G^*}(V^+)$ may have room to be increased if $Y_{\textup{orig}}$ is not an MFDS of $G^*$.
The following lemma shows that the sum of the feasible LP value increments on the edges in $E_{G^*}(V^+)$ can be upper bounded, as these edges are all incident to the vertices in $V^+$.
\begin{lemma}
\label{lem:weight bound}
For any \mbox{\textup{MFDS}} $Y^*$ obtained by the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) for the instance $\{G = (V,E,W),Y_{\textup{orig}},W^+,V^+\}$ of \mbox{\textup{DWVC-W}}$^+$,
\begin{equation*}
\sum_{e \in E} \left( Y^*(e) - Y_{\textup{orig}}(e) \! \right) \leq \sum_{v \in V^+} \left ( W^+(v) - W(v) \! \right) \leq D \cdot W_{\textup{max}} \ .
\end{equation*}
\end{lemma}
\begin{proof}
Since $Y_{\textup{orig}}$ is a feasible dual-solution of $G^*$, by Lemma~\ref{lem:sign function}, $Y^*(e) \geq Y_{\textup{orig}}(e)$ for each edge $e \in E$. Let $E_{W^+}$ be the set containing all the edges $e \in E$ with $Y^*(e) > Y_{\textup{orig}}(e)$. Then we have
\begin{equation}
\label{e1}
\sum_{e \in E} \left( Y^*(e) - Y_{\textup{orig}}(e) \! \right) = \sum_{e \in E_{W^+}} \left(Y^*(e) - Y_{\textup{orig}}(e) \! \right) \ .
\end{equation}
Note that the LP values of the edges in $E \setminus E_{G^*}(V^+)$ cannot be increased; thus $E_{W^+} \subseteq E_{G^*}(V^+)$.
For each edge $e \in E_{W^+}$, let $\tau(e)$ be the endpoint of $e$ that is tight with respect to $Y_{\textup{orig}}$ (if both endpoints of $e$ are tight, then arbitrarily choose one as $\tau(e)$). Observe that $\tau(e) \in V^+$ for each edge $e \in E_{W^+}$; otherwise, the LP value of the edge cannot be increased under the dual-LP constraint. Thus for any vertex $v \in V^+$, we have
\begin{equation}
\label{e2}
\sum_{e \in E_{W^+}| \tau(e) = v} \left( Y^*(e) - Y_{\textup{orig}}(e) \! \right) \leq W^+(v) - W(v) \ .
\end{equation}
Then summing Inequality~(\ref{e2}) over all vertices in $V^+$, we can get
\begin{equation}
\label{e33}
\sum_{e \in E_{W^+}} \left( Y^*(e) - Y_{\textup{orig}}(e) \! \right) \leq \sum_{v \in V^+} \left ( W^+(v) - W(v) \! \right) \leq D \cdot W_{\textup{max}} \ .
\end{equation}
Combining Equality~(\ref{e1}) and Inequality~(\ref{e33}) gives the claimed inequality.
\end{proof}
Using reasoning similar to that for Theorem~\ref{theo:(1+1) EA for DWVC-E-} and the upper bound given by Lemma~\ref{lem:weight bound}, we can get the following theorem for \mbox{\textup{DWVC-W}}$^+$.
\begin{theorem}
\label{theo:(1+1) EA for DWVC-W+}
The expected runtime of the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) for \mbox{\textup{DWVC-W}}$^+$ is $\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$.
\end{theorem}
Given an instance $\{G = (V,E,W),Y_{\textup{orig}},W^-,V^-\}$ of \mbox{\textup{DWVC-W}}$^-$, if $Y_{\textup{orig}}$ is a feasible dual-solution of $G^* = (V,E,W^-)$, then $Y_{\textup{orig}}$ is still an MFDS of $G^*$.
Otherwise, we have to first decrease the LP values of the edges violating the dual-LP constraint with respect to the maintained dual-solution, following the general idea of the algorithms given in Section~\ref{sec:algorithms}, to get the first feasible dual-solution as soon as possible. The remaining analysis to get an MFDS of $G^*$ based on the first feasible dual-solution is similar to that given for DWVC-E$^+$.
\begin{theorem}
\label{theo:(1+1) EA for DWVC-W-}
The expected runtime of the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) for \mbox{\textup{DWVC-W}}$^-$ is
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$.
\end{theorem}
\begin{proof}
We first analyze the expected runtime of the (1+1) EA to obtain an \mbox{\textup{MFDS}} $Y^*$ of $G^* = (V,E,W^-)$, starting with the MFDS $Y_{\textup{orig}}$ of $G$. In the following, we assume that $Y_{\textup{orig}}$ is an infeasible dual-solution of $G^*$ (otherwise, as observed above, $Y_{\textup{orig}}$ is already an MFDS of $G^*$).
Then the whole process can be divided into Phase (I) and Phase (II). Phase (I) contains all steps of the algorithm until it finds the first feasible dual-solution $Y_t$ of $G^*$; Phase (II) follows Phase (I) and contains all steps of the algorithm until it obtains the MFDS $Y^*$ of $G^*$.
{\bf Phase (I)}.
By Lemma~\ref{lem:infeasible to MFDS}, to get the first feasible dual-solution $Y_t$ of $G^*$, the (1+1) EA only can decrease the LP values of the edges in $E_{G^*}(Y_{\textup{orig}})$, where $E_{G^*}(Y_{\textup{orig}}) \subseteq E_{G^*}(V^-)$. Thus $Y_t(e) \leq Y_{\textup{orig}}(e)$ for each edge $e \in E_{G^*}(Y_{\textup{orig}})$, and $Y_t(e) = Y_{\textup{orig}}(e)$ for each edge $e \in E \setminus E_{G^*}(Y_{\textup{orig}})$.
Denote $Y_{\textup{orig}}(e) - Y_t(e)$ by $\Delta(e)$ for each edge $e \in E_{G^*}(Y_{\textup{orig}})$, and denote by $\beta(e)$ the number of valid mutations on $e$ that the (1+1) EA requires to decrease the LP value of $e$ from $Y_{\textup{orig}}(e)$ to $Y_t(e)$. Let $E_{\Delta} = \{e \in E_{G^*}(Y_{\textup{orig}}) \ | \ \Delta(e) \neq 0\}$. Since each edge in $E_{\Delta}$ has an endpoint that is in $V^-$,
$\sum_{e \in E_{\Delta}} \Delta(e) \le \sum_{e \in E_{\Delta}} Y_{\textup{orig}}(e) \le D \cdot W_{\textup{max}}$.
Let the potential of the solution $Y_{\textup{orig}}$ be
$$g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e) \ .$$
Similar to the analysis given in Theorem~\ref{theo:(1+1) EA for DWVC-E+}, $g(Y_{\textup{orig}})$ is an upper bound on the number of valid mutations on the edges in $E_{\Delta}$ that the algorithm requires to get $Y_t$ starting from $Y_{\textup{orig}}$, and no mutation on $Y_{\textup{orig}}$ can increase its potential.
The analysis for the expected drift of $g$ is divided into two cases, based on the value of $\sum_{e \in E_{\Delta}} \Delta(e)/|E_{\Delta}|$.
Case (1). $\sum_{e \in E_{\Delta}} \Delta(e) < \alpha \cdot |E_{\Delta}|$.
By Lemma~\ref{lem:(1+1) EA for decrement of G1}, we have
$$\beta(e) \leq \lceil \log_{\alpha} \big( (\alpha-1) \Delta(e) +1 \big) \rceil
\leq \lceil \log_{\alpha} \big( \alpha \Delta(e) \! \big) \rceil
\leq \lceil \log_{\alpha} \Delta(e) + 1 \rceil
\leq \log_{\alpha} \Delta(e) + 2$$
for each edge $e \in E_{\Delta}$. Thus
\begin{eqnarray*}
g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e)
&\le& 2 |E_{\Delta}| + \log_{\alpha} \left(\prod_{e \in E_{\Delta}} \Delta(e) \right) \\
&\le& 2 |E_{\Delta}| + |E_{\Delta}| \cdot \log_{\alpha} \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \le 3 |E_{\Delta}| \ ,
\end{eqnarray*}
implying that $|E_{\Delta}| \ge g(Y_{\textup{orig}}) / 3$, and the maximum value that $g(Y_{\textup{orig}})$ can take is $3 m$ (as $|E_{\Delta}| \le m$).
A valid mutation that chooses exactly one of the edges in $E_{\Delta}$ can be generated by the algorithm with probability $\mathrm \Omega(|E_{\Delta}|/(e \cdot m))$, which results in a new solution $Y'$ with $g(Y') = g(Y_{\textup{orig}}) -1$.
Consequently, the expected drift of $g$ can be lower bounded by
$$\frac{|E_{\Delta}|}{e \cdot m} \ge \frac{g(Y_{\textup{orig}})}{3e \cdot m} \ .$$
Case (2). $\sum_{e \in E_{\Delta}} \Delta(e) \ge \alpha \cdot |E_{\Delta}|$.
By Lemma~\ref{lem:(1+1) EA for decrement of G1}, we have that
\begin{eqnarray*}
g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e) \le \sum_{e \in E_{\Delta}} (\log_{\alpha} \Delta(e) + 2)
\le |E_{\Delta}| \cdot (\log_{\alpha} W_{\textup{max}} +2) \ ,
\end{eqnarray*}
implying that
$|E_{\Delta}| \ge \frac{g(Y_{\textup{orig}})}{\log_{\alpha} W_{\textup{max}} + 2}$.
Using reasoning similar to that given for Case (1), we have that the expected drift of $g$ can be lower bounded by
$$\frac{|E_{\Delta}|}{e \cdot m} \ge \frac{g(Y_{\textup{orig}})}{e \cdot m \cdot \left( \log_{\alpha} W_{\textup{max}} + 2 \right)} \ .$$
Now we consider the maximum value that $g(Y_{\textup{orig}})$ can take,
\begin{eqnarray}
\label{ee7}
g(Y_{\textup{orig}}) = \sum_{e \in E_{\Delta}} \beta(e)
&\le& 2 |E_{\Delta}| + \log_{\alpha} \left(\prod_{e \in E_{\Delta}} \Delta(e) \right) \\
\label{ee8}
&\le& 2 |E_{\Delta}| + |E_{\Delta}| \cdot \log_{\alpha} \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \\
\label{ee9}
&\le& 3 |E_{\Delta}| \cdot \log_{\alpha} \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \\
\label{ee10}
&\le& 3 |E_{\Delta}| \cdot \log_{\alpha} \frac{D \cdot W_{\textup{max}}}{|E_{\Delta}|} \\
\label{ee11}
&\le& 3 \cdot \frac{D \cdot W_{\textup{max}}}{e} \cdot \log_{\alpha} e \\
\label{ee12}
&\le& 3 D \cdot W_{\textup{max}} \ ,
\end{eqnarray}
where the factor $\log_{\alpha} e$ is not greater than $e$ as $\alpha \in [2, W_{\textup{max}}]$, and Inequality~\ref{ee11} can be derived from the observation that $f(x) = x \cdot \log_{\alpha} (D \cdot W_{\textup{max}}/x)$ ($x > 0$) attains its maximum value at $x = D \cdot W_{\textup{max}}/e$.
Summarizing the analysis for Cases (1-2) and using the fact that the value of $\alpha$ is between $2$ and $W_{\textup{max}}$, we have that the expected drift of $g$ can be lower bounded by
$$\frac{|E_{\Delta}|}{e \cdot m} \ge \frac{g(Y_{\textup{orig}})}{e \cdot m \cdot \max\{3, \log_{\alpha} W_{\textup{max}} + 2 \} } \ge \frac{g(Y_{\textup{orig}})}{e \cdot m \cdot (\log_{\alpha} W_{\textup{max}} + 2)} \ ,$$
and the maximum value that $g(Y_{\textup{orig}})$ can take is upper bounded by
$$\max\{3m, 3D \cdot W_{\textup{max}}\}\ .$$
The Multiplicative Drift Theorem~\cite{algorithmica/DoerrJW12} implies that the (1+1) EA takes expected runtime
$\mathrm O \big(m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{m, D \cdot W_{\textup{max}} \}) \! \big)$ to obtain the first feasible dual-solution $Y_t$ starting with $Y_{\textup{orig}}$.
{\bf Phase (II)}.
Obviously $Y_t$ may not be an MFDS of $G^* = (V,E,W^-)$. Thus we also need to consider the process of the (1+1) EA to get an MFDS of $G^*$ starting with $Y_t$. To simplify the analysis, we transform Phase (II) into an execution of the (1+1) EA for an instance $\{G_t = (V,E,W_t), Y_t, W^-, V_t\}$ of DWVC-W$^+$.
Thus in the following discussion, we first specify the weight function $W_t$ such that $Y_t$ is an MFDS of $G_t$, and $W_t(v) \le W^-(v)$ for each vertex $v \in V$.
Let $V_{\Delta}$ contain all endpoints of the edges in $E_{\Delta}$. For each vertex $v \in V \setminus V_{\Delta}$, let $W'(v) = W(v)$, and for each vertex $v \in V_{\Delta}$, let
$$W'(v) = W(v) - \sum_{e \in E_{\Delta}| e \cap v \neq \emptyset}\Delta(e) \ .$$
As $Y_{\textup{orig}}$ is an MFDS of $G$, $Y_t$ is obviously an MFDS of $G' = (V,E,W')$.
Note that there may exist some vertex $v \in V$ with $W'(v) > W^-(v)$.
Thus for each vertex $v \in V$, we let
\begin{eqnarray}
\label{eqn:11-1}
W_t(v) = \min\{W'(v), W^-(v)\} \ .
\end{eqnarray}
Because of Equality~\ref{eqn:11-1} and the fact that $Y_t$ is a feasible dual-solution of both $G'$ and $G^*$, $Y_t$ is a feasible dual-solution of $G_t$. Furthermore, since $Y_t$ is an MFDS of $G'$, $Y_t$ is an MFDS of $G_t$.
Now let $V_t$ contain all vertices $v \in V$ with $W_t(v) < W^-(v)$.
Then the instance $\{G_t = (V,E,W_t), Y_t, W^-, V_t\}$ of DWVC-W$^+$ is completely constructed.
We remark that for an edge $e$ in $E_{\Delta}$, the step size of $e$ may be larger than $Y^*(e) - Y_t(e)$ at the beginning of Phase (II), in which case Lemma~\ref{lem:(1+1) EA for increment of G1} does not apply. Fortunately, the step size of $e$ is at most $\alpha \cdot (Y_{\textup{orig}}(e) - Y_t(e)) = \alpha \cdot \Delta(e)$. Thus using a multiplicative drift analysis similar to that given above, we can get that the expected runtime of the (1+1) EA to decrease the step sizes of the edges in $E_{\Delta}$ to feasible values is bounded by $\mathrm O \big(m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{m, D \cdot W_{\textup{max}} \}) \! \big)$.
Now we assume that Lemma~\ref{lem:(1+1) EA for increment of G1} is valid for each edge in $E_{\Delta}$.
Similar to Lemma~\ref{lem:weight bound}, we consider the upper bound on the sum of the feasible LP value increments on the edges with respect to $W^-$, where
\begin{eqnarray*}
\sum_{v \in V_t} \big( W^-(v) - W_t(v) \! \big) &\leq& \sum_{v \in V_t} \big( W(v) - W_t(v) \! \big) \\
&\leq& \sum_{v \in V} \Big( \! \left(W(v) - W^-(v) \! \right) + \left(W(v) - W'(v) \right) \! \Big) \\
&\leq& \sum_{v \in V} \left( \left(W(v) - W^-(v) \! \right) + \sum_{e \in E_{\Delta}| e \cap v \neq \emptyset}\Delta(e) \right) \\
&\leq& \sum_{v \in V} \left(W(v) - W^-(v) \! \right) + 2 \sum_{e \in E_{\Delta}} \Delta(e) \\
&\leq& D \cdot W_{\textup{max}} + 2 D \cdot W_{\textup{max}} = 3 D \cdot W_{\textup{max}} \ .
\end{eqnarray*}
By Lemma~\ref{lem:weight bound} and reasoning similar to that for Theorem~\ref{theo:(1+1) EA for DWVC-E-}, we have that the (1+1) EA takes expected runtime $\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ to get an MFDS of $G^*$ starting with $Y_t$.
Summarizing the above discussion, the (1+1) EA takes expected runtime $\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$ to get an MFDS of $G^*$. The above expected runtime also applies to the RLS.
\end{proof}
\begin{theorem}
\label{theo:(1+1) EA for DWVC-E}
The expected runtime of the \mbox{\textup{(1+1) EA}} (or \mbox{\textup{RLS}}) for \mbox{\textup{DWVC-W}} is
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log (\max\{\alpha m, \alpha D \cdot W_{\textup{max}} \}) \! \big)$.
\end{theorem}
\begin{proof}
Consider an instance $\{G = (V,E,W),Y_{\textup{orig}},W^*,V^+,V^-\}$ of \mbox{\textup{DWVC-W}}. By the discussion for \mbox{\textup{DWVC-W}}$^+$ and \mbox{\textup{DWVC-W}}$^-$, if $Y_{\textup{orig}}$ is a feasible dual-solution with respect to $G^*$, then only the LP values of the edges in $E_{G^*}(V^+)$ may have room to be increased, and the sum of the increments can be upper bounded by $D \cdot W_{\textup{max}}$.
If $Y_{\textup{orig}}$ is an infeasible dual-solution with respect to $G^*$, then we have that the sum of the decrements on the LP values of the edges in $E_{G^*}(V^-)$ and the sum of the increments on the LP values of the edges incident to the vertices in $N_{G^*}(V^-) \cup V^- \cup V^+$ can be upper bounded by $D \cdot W_{\textup{max}}$ and $3 D \cdot W_{\textup{max}}$, respectively.
Using reasoning similar to that for Theorems~\ref{theo:(1+1) EA for DWVC-E-} and~\ref{theo:(1+1) EA for DWVC-W-}, we have the theorem for \mbox{\textup{DWVC-W}}.
\end{proof}
\section{Runtime Analysis for the \mbox{\textup{RLS with 1/5-th Rule}} and \mbox{\textup{(1+1) EA with 1/5-th Rule}}}
Given an MFDS $Y^*$ that is obtained by the \mbox{\textup{RLS with 1/5-th Rule}} starting with a feasible dual-solution $Y^{\dagger}$ of $G^*$, the following two lemmata consider the behavior of the algorithm on a specific edge $e^*$ in $G^*$ during the process from $Y^{\dagger}$ to $Y^*$.
Denote $Y^*(e^*) - Y^{\dagger}(e^*)$ by $\Delta$.
\begin{lemma}
\label{lem:RLS with 1/5-th Rule increase weight partly}
If the step size of the edge $e^*$ has an initial value $\sigma_1 > 0$, then the \mbox{\textup{RLS with 1/5-th Rule}} takes expected runtime $\mathrm O \left(m \left(\log_{\alpha} \sigma_1 + \log_{\alpha} \Delta \right) \! \right)$ to find a feasible dual-solution $Y^{\ddagger}$ such that $Y^{\ddagger}(e^*) - Y^{\dagger}(e^*) \geq \Delta /(\alpha + 1)$, during the process from $Y^{\dagger}$ to $Y^*$.
\end{lemma}
\begin{proof}
Since $Y^{\dagger}$ is a feasible dual-solution of $G^*$, any mutation that decreases the LP values of the edges would be rejected.
Thus for any dual-solution $Y$ accepted by the algorithm during the process from $Y^{\dagger}$ to $Y^*$, we have $Y(e^*) \geq Y^{\dagger}(e^*)$.
The following analysis divides the process from $Y^{\dagger}$ to $Y^{\ddagger}$ into two phases: Phase (I) and Phase (II).
As the initial value $\sigma_1$ of the step size of $e^*$ may be larger than $\Delta$, Phase (I) contains all steps of the algorithm until the step size of $e^*$ is not greater than $\Delta$, where $Y_1$ denotes the dual-solution maintained by the algorithm at that moment.
We remark that for any dual-solution $Y$ (including $Y_1$) obtained during Phase (I), $Y(e^*) = Y^{\dagger}(e^*)$.
Phase (II) follows Phase (I), and ends when the step size of $e^*$ is greater than $Y^*(e^*) - Y_2(e^*)$, where $Y_2$ denotes the dual-solution maintained by the algorithm at that moment.
We will show that $Y_2$ satisfies the claimed condition given for $Y^{\ddagger}$.
If $\sigma_1 \leq \Delta$, then we are already at Phase (II).
For the soundness and completeness of the proof, we assume that $\sigma_1 > \Delta$ in the following discussion.
Let $\sigma_1 = \alpha^p$ and $\alpha^q \leq \Delta < \alpha^{q+1/4}$, where $p, q \in \{l/4 | l \in \mathds{N} \}$.
Now we analyze the expected runtime that Phase (I) takes to decrease the step size of $e^*$ from $\alpha^p$ to $\alpha^q$.
Since $\sigma(e^*) > Y^*(e^*) -Y(e^*)$ always holds for any maintained dual-solution $Y$ during Phase (I), and $Y^{\dagger}$ is feasible, no mutation on $e^*$ can be accepted, and each such mutation decreases the step size of $e^*$ by a factor of $\alpha^{1/4}$.
Thus Phase (I) needs $\mathrm O(p-q) = \mathrm O(p) = \mathrm O(\log_{\alpha} \sigma_1)$ mutations on $e^*$ to decrease the step size to $\alpha^q$.
Now we assume that the step size of $e^*$ is decreased to $\alpha^q$, i.e., we are at Phase (II) now.
If a mutation on $e^*$ increases its LP value, then the mutation would be accepted, and the exponent of the step size would be increased to $q+1$; otherwise, the mutation would be rejected, and the exponent of the step size would be decreased to $q -1/4$.
The mutation on $e^*$ increases or decreases its LP value with the same probability 1/2, hence the value of the exponent increases by 1 or decreases by 1/4 with the same probability 1/2.
Observe that the drift on the exponent is $(1- 1/4)/2 = 3/8$.
If the step size of $e^*$ is increased to over $\alpha^{\lceil \log_{\alpha} \Delta \rceil}$ (to at most $\alpha^{\lceil \log_{\alpha} \Delta \rceil + 3/4}$), then Phase (II) obviously ends.
In fact, the step size might never be increased to over $\alpha^{\lceil \log_{\alpha} \Delta \rceil}$ during Phase (II).
Using the Additive Drift Theorem~\cite{he2004study}, the algorithm needs $\mathrm O(\log_{\alpha} \Delta - q) = \mathrm O(\log_{\alpha} \Delta)$ mutations on $e^*$ to increase the exponent to over $\lceil \log_{\alpha} \Delta \rceil$.
Thus Phase (II) contains $\mathrm O(\log_{\alpha} \Delta)$ mutations on $e^*$.
For the dual-solution $Y_2$ obtained by Phase (II), $\sigma(e^*) > Y^*(e^*) - Y_2(e^*)$.
Since Phase (II) contains at least one accepted mutation that increases the LP value of $e^*$, the gap $\Delta = Y^*(e^*) - Y^{\dagger}(e^*)$ is decreased by at least $\sigma(e^*)/\alpha$, and we have
$$Y^*(e^*) - Y_2(e^*)
\leq \Delta - \sigma(e^*)/\alpha
\leq \Delta - \left(Y^*(e^*) - Y_2(e^*) \! \right)/\alpha \ .$$
By the above inequality, it is easy to get that $Y_2(e^*) - Y^{\dagger}(e^*) \geq \Delta / (\alpha +1)$.
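For completeness, the rearrangement behind this bound can be spelled out as follows:

```latex
\begin{eqnarray*}
Y^*(e^*) - Y_2(e^*) \leq \Delta - \left(Y^*(e^*) - Y_2(e^*) \right)/\alpha
&\Longrightarrow& \left(1 + 1/\alpha \right) \left(Y^*(e^*) - Y_2(e^*) \right) \leq \Delta \\
&\Longrightarrow& Y_2(e^*) - Y^{\dagger}(e^*) = \Delta - \left(Y^*(e^*) - Y_2(e^*) \right)
\geq \Delta - \frac{\alpha}{\alpha + 1} \Delta = \frac{\Delta}{\alpha + 1} \ .
\end{eqnarray*}
```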
Summarizing the above analysis, the algorithm takes expected runtime $\mathrm O \left(m (\log_{\alpha} \sigma_1 + \log_{\alpha} \Delta) \! \right)$ to get a feasible dual-solution satisfying the claimed condition.
\end{proof}
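The two-phase argument above can be illustrated with a small simulation. The sketch below is our own, not one of the paper's algorithms: it models a single edge in the Phase (II) setting (initial step size at most $\Delta$), flips a fair coin for the direction of each mutation on $e^*$, accepts exactly the feasible increases, and applies the step-size updates of the 1/5-th rule (multiply by $\alpha$ on acceptance, divide by $\alpha^{1/4}$ on rejection); the function name and parameter values are arbitrary.

```python
import random

def simulate_edge(delta, sigma, alpha=2.0, seed=0):
    """Simulate the step-size dynamics of a single edge e* under the
    1/5-th rule until Phase (II) ends, i.e., until the step size
    exceeds the remaining gap Y*(e*) - Y(e*).

    delta : initial gap Y*(e*) - Y^dagger(e*)
    sigma : initial step size of e* (assumed <= delta, the Phase (II) setting)
    Returns the total increase of the LP value of e*.
    """
    assert sigma <= delta
    rng = random.Random(seed)
    gain = 0.0
    # Loop while the step size does not exceed the remaining gap.
    while sigma <= delta - gain:
        if rng.random() < 0.5:
            # Mutation increases the LP value: feasible here, so accepted;
            # the exponent of the step size grows by 1.
            gain += sigma
            sigma *= alpha
        else:
            # Mutation decreases the LP value: rejected for a feasible
            # dual-solution; the exponent shrinks by 1/4.
            sigma /= alpha ** 0.25
    return gain

# The lemma's guarantee: when Phase (II) ends, the gained LP value is
# at least delta / (alpha + 1), regardless of the coin flips.
delta, alpha = 1000.0, 2.0
gain = simulate_edge(delta, sigma=8.0, alpha=alpha, seed=1)
assert delta / (alpha + 1) <= gain <= delta
```

Note that the loop can only terminate right after an accepted step (rejections shrink the step size, so they keep it below the remaining gap), which is exactly why the final gain is bounded below as in the lemma.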
\begin{lemma}
\label{lem:RLS with 1/5-th Rule increase weight}
If the step size of the edge $e^*$ has an initial value not greater than $\Delta$, then the \mbox{\textup{RLS with 1/5-th Rule}} takes expected runtime $\mathrm O \left(\alpha m \log_{\alpha} \Delta \cdot \log \Delta \right)$ to increase the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^*(e^*)$, during the process from $Y^{\dagger}$ to $Y^*$.
\end{lemma}
\begin{proof}
Let $Y$ be an arbitrary dual-solution obtained during the process from $Y^{\dagger}$ to $Y^*$. In the following discussion, we first analyze the expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} to obtain a solution $Y'$ starting with $Y$ such that $Y'(e^*) - Y(e^*) \geq \left(Y^*(e^*) - Y(e^*) \! \right) /(\alpha + 1)$.
As the initial value of the step size of $e^*$ is not greater than $\Delta$, we observe that the maximum value of the step size of $e^*$ during the process from $Y^{\dagger}$ to $Y^*$ is at most $\alpha^{p + 1}$, where $p = \lceil \log_{\alpha} \Delta \rceil$.
For the moment that the algorithm maintains the dual-solution $Y$, the corresponding step size of $e^*$ may be greater than $Y^*(e^*) - Y(e^*)$, thus we have to consider Phase (I) (defined in the proof of Lemma~\ref{lem:RLS with 1/5-th Rule increase weight partly}).
Since the step size of $e^*$ is bounded by $\alpha^{p + 1}$, Phase (I) needs $\mathrm O(\log_{\alpha} \Delta)$ mutations on $e^*$.
Furthermore, since $Y(e^*) \geq Y^{\dagger}(e^*)$, Phase (II) needs $\mathrm O \left(\log_{\alpha} \left(Y^*(e^*) - Y(e^*) \! \right) \! \right) = \mathrm O(\log_{\alpha} \Delta)$ mutations on $e^*$. Consequently, the \mbox{\textup{RLS with 1/5-th Rule}} takes expected runtime $\mathrm O(m \log_{\alpha} \Delta)$ to get $Y'$ starting with $Y$.
Using the above conclusion, the Multiplicative Drift Theorem~\cite{algorithmica/DoerrJW12} implies the claimed runtime.
\end{proof}
Now we analyze the expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} to make the edge $e^*$ satisfy the dual-LP constraint, if it starts with an infeasible dual-solution with respect to which $e^*$ violates the dual-LP constraint.
\begin{lemma}
\label{lem:RLS with 1/5-th Rule decrease weight}
Consider an infeasible dual-solution $Y^{\dagger}$ of $G^*$, with respect to which the edge $e^*$ violates the dual-LP constraint. For the first feasible dual-solution $Y^{\ddagger}$ obtained by the \mbox{\textup{RLS with 1/5-th Rule}} starting with $Y^{\dagger}$, the algorithm takes expected runtime $\mathrm O \! \left(m \log_{\alpha} \left(Y^{\dagger}(e^*) - Y^{\ddagger}(e^*) \! \right) \! \right)$
to decrease the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$.
\end{lemma}
\begin{proof}
Assume that the step size of $e^*$ has an initial value $\alpha^q$, where $q \geq 0$.
Since $Y^{\dagger}$ is an infeasible dual-solution of $G^*$, if a mutation on $e^*$ decreases its LP value, then the mutation is accepted, and the exponent of the step size of $e^*$ is increased by $1$; otherwise, the mutation is rejected, and the exponent of the step size is decreased by $1/4$.
Observe that the mutation on $e^*$ increases or decreases its LP value with the same probability 1/2.
Thus we have that the drift on the exponent of the step size of $e^*$ is $(1- 1/4)/2 = 3/8$, during the process that decreases the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$.
If the step size of $e^*$ is increased to not less than $\alpha^{\lceil \log_{\alpha} (Y^{\dagger}(e^*) - Y^{\ddagger}(e^*)) \rceil +1}$ during the process that decreases the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$, then the LP value of $e^*$ is decreased to 0.
In fact, the step size might never be increased to over $\alpha^{\lceil \log_{\alpha} (Y^{\dagger}(e^*) - Y^{\ddagger}(e^*)) \rceil +1}$ during the process, because $Y^{\ddagger}(e^*)$ may be greater than 0.
Hence, the maximum value of the step size of $e^*$ during the process is at most $\alpha^{\lceil \log_{\alpha} (Y^{\dagger}(e^*) - Y^{\ddagger}(e^*)) \rceil +1}$.
Using the Additive Drift Theorem~\cite{he2004study} and the drift on the exponent of the step size of $e^*$ obtained above, the process that decreases the LP value of $e^*$ from $Y^{\dagger}(e^*)$ to $Y^{\ddagger}(e^*)$ needs at most
$\mathrm O \left(\log_{\alpha} \left(Y^{\dagger}(e^*) - Y^{\ddagger}(e^*) \! \right) - q \right) = \mathrm O \left(\log_{\alpha} \left(Y^{\dagger}(e^*) - Y^{\ddagger}(e^*) \! \right) \! \right)$ mutations on $e^*$. That is, the process takes expected runtime $\mathrm O \left(m \log_{\alpha} \left(Y^{\dagger}(e^*) - Y^{\ddagger}(e^*) \! \right) \! \right)$.
\end{proof}
The following theorem can be derived based on the conclusions of Lemma~\ref{lem:RLS with 1/5-th Rule increase weight}.
\begin{theorem}
\label{theo:RLS with 1/5-th Rule for DWVC-E+}
The expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} for \mbox{\textup{DWVC-E}}$^+$ is
$\mathrm O \big(\alpha m D \log_{\alpha} W_{\textup{max}} \cdot \log W_{\textup{max}} \! \big)$.
\end{theorem}
\begin{proof}
We study the expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} to obtain an MFDS $Y^*$ of $G^*=(V,E \cup E^+,W)$ starting with the MFDS $Y_{\textup{orig}}$ of $G = (V,E,W)$.
Observe that only the LP values of the edges in $E^+$ may have room to be increased.
Moreover, Lemma~\ref{lem:RLS with 1/5-th Rule increase weight} gives that for each edge $e \in E^+$ with $Y^*(e) > Y_{\textup{orig}}(e)$, the algorithm takes expected runtime $\mathrm O \left(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log W_{\textup{max}} \! \right)$ to increase its LP value from $Y_{\textup{orig}}(e)$ to $Y^*(e)$.
Therefore, combining this with the fact that $|E^+| = D$ directly gives the expected runtime
$\mathrm O \left(\alpha m D \log_{\alpha} W_{\textup{max}} \cdot \log W_{\textup{max}} \! \right)$.
\end{proof}
\begin{theorem}
\label{theo:RLS with 1/5-th Rule for DWVC-E-}
The expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} for \mbox{\textup{DWVC-E}}$^-$ is
$\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{m \log W_{\textup{max}}, D \cdot W_{\textup{max}} \} \! \big)$.
\end{theorem}
\begin{proof}
We study the expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} to obtain an MFDS $Y^*$ of $G^*=(V,E \setminus E^-,W)$ starting with the given MFDS $Y_{\textup{orig}}$ of $G = (V,E,W)$.
Let $E_{\Delta}$ be the edges $e \in E^* = E \setminus E^-$ with $Y^*(e) > Y_{\textup{orig}}(e)$, and let $\Delta(e) = Y^*(e) - Y_{\textup{orig}}(e)$ for each edge $e \in E_{\Delta}$.
The reasoning given in Theorem~\ref{theo:(1+1) EA for DWVC-E-} shows that $\sum_{e \in E_{\Delta}} \Delta(e) \le D \cdot W_{\textup{max}}$.
Lemma~\ref{lem:RLS with 1/5-th Rule increase weight} gives that for each edge $e \in E_{\Delta}$, the algorithm takes expected runtime $\mathrm O \left(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \log \Delta(e) \! \right)$ to increase the LP value of $e$ from $Y_{\textup{orig}}(e)$ to $Y^*(e)$.
Summing the expected runtime over all the edges in $E_{\Delta}$ gives the expected runtime
$\mathrm O \left(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \sum_{e \in E_{\Delta}}\log \Delta(e) \! \right)$.
For the upper bound of $\sum_{e \in E_{\Delta}}\log \Delta(e)$, we have
\begin{eqnarray}
\sum_{e \in E_{\Delta}} \log \Delta(e) &=& \log \left(\prod_{e \in E_{\Delta}} \Delta(e) \right) \\
&\le& |E_{\Delta}| \cdot \log \frac{\sum_{e \in E_{\Delta}} \Delta(e)}{|E_{\Delta}|} \\
&\le& |E_{\Delta}| \cdot \log \frac{D \cdot W_{\textup{max}}}{|E_{\Delta}|} \\
\label{ee13}
&\le& \frac{D \cdot W_{\textup{max}}}{e} \cdot \log e \ .
\end{eqnarray}
Inequality~\ref{ee13} follows from the observation that $f(x) = x \cdot \log (D \cdot W_{\textup{max}}/x)$ ($x > 0$) attains its maximum value at $x = D \cdot W_{\textup{max}}/e$.
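For the record, this maximization can be verified by differentiation; writing $C = D \cdot W_{\textup{max}}$ for brevity,

```latex
$$f(x) = x \cdot \log \frac{C}{x} \,, \qquad
f'(x) = \log \frac{C}{x} - \log e \,, \qquad
f''(x) = - \frac{\log e}{x} < 0 \ ,$$
```

so $f$ is concave on $x > 0$ and maximized where $f'(x) = 0$, i.e., at $x = C/e$, with $f(C/e) = (C/e) \cdot \log e$.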
Thus the expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} to obtain an MFDS $Y^*$ of $G^*$ starting with $Y_{\textup{orig}}$ can be bounded by $\mathrm O \left(\alpha m D \log_{\alpha} W_{\textup{max}} \cdot W_{\textup{max}} \! \right)$.
Additionally, as $|E_{\Delta}| \le m$, and $\Delta(e) \le W_{\textup{max}}$ for each edge $e \in E_{\Delta}$, we have that the expected runtime of the algorithm can also be bounded by $\mathrm O \left(\alpha m^2 \log_{\alpha} W_{\textup{max}} \cdot \log W_{\textup{max}} \! \right)$. Therefore, combining the two expected runtimes given above, we have the claimed result.
\end{proof}
The following theorem can be derived using the conclusions obtained by Lemmata~\ref{lem:RLS with 1/5-th Rule increase weight} and~\ref{lem:RLS with 1/5-th Rule decrease weight}, and the reasoning similar to that given in Theorems~\ref{theo:RLS with 1/5-th Rule for DWVC-E-},~\ref{theo:(1+1) EA for DWVC-E-}, and~\ref{theo:(1+1) EA for DWVC-W-}.
\begin{theorem}
\label{theo:RLS with 1/5-th Rule for DWVC}
The expected runtime of the \mbox{\textup{RLS with 1/5-th Rule}} for \mbox{\textup{DWVC-X}}, where \textup{X} $\in \{E,W^+,W^-,W\}$, is $\mathrm O \big(\alpha m \log_{\alpha} W_{\textup{max}} \cdot \min\{m \log W_{\textup{max}}, D \cdot W_{\textup{max}} \} \! \big)$.
\end{theorem}
In the following, we analyze the performance of the \mbox{\textup{(1+1) EA with 1/5-th Rule}} for DWVC.
Firstly, we give a specific graph $G_s$ (see Figure~\ref{fig:special graph}(a)) that is the same as the graph considered in~\cite{pourhassan2017use}. W.l.o.g., we assume that the maximum weight $W_{\textup{max}}$ that the vertices in $G_s$ have is $\alpha^m$.
Then we show that in a special situation, the \mbox{\textup{(1+1) EA with 1/5-th Rule}} requires pseudo-polynomial runtime to obtain the unique MFDS $Y^*$ of $G_s$, where $Y^*(e_1) = W_{\textup{max}}$ and $Y^*(e_i) = 1$ for all $2 \leq i \leq m$.
The Chernoff-Hoeffding Bound given below is used in the proof for the main result stated later.
\medskip
\noindent{\bf Chernoff-Hoeffding Bound}~\cite{phillips2012chernoff}. Let $x_1,\dots,x_n$ be independent random variables such that $a_i \leq x_i \leq b_i$ for all $1 \leq i \leq n$. Denote $X = \sum_{i=1}^{n} x_i$. Then for any $\delta \geq 0$, the following inequality holds.
$${\rm Prob}(X \geq E[X] + \delta) \leq e^{-2 \delta^2 / \sum_{i=1}^{n} (b_i - a_i)^2} \ .$$
\medskip
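As an illustrative sanity check (our own sketch; the parameter values are arbitrary), the bound can be compared against a Monte-Carlo estimate for two-valued increments of the kind used later in the proof of Lemma~\ref{lem:(1+1) EA with 1/5-th Rule increase weight}, where each $x_i$ equals $1$ or $-1/4$ with probability $1/2$, so that $a_i = -1/4$ and $b_i = 1$:

```python
import math
import random

def chernoff_hoeffding_bound(n, a, b, delta):
    # Prob(X >= E[X] + delta) <= exp(-2 delta^2 / sum_i (b_i - a_i)^2),
    # with identical ranges [a, b] for all n variables.
    return math.exp(-2.0 * delta ** 2 / (n * (b - a) ** 2))

def empirical_tail(n, delta, trials=2000, seed=0):
    """Estimate Prob(X >= E[X] + delta) for x_i in {1, -1/4} w.p. 1/2 each."""
    rng = random.Random(seed)
    a, b = -0.25, 1.0
    mean = n * (b + a) / 2.0          # E[x_i] = 3/8 when p = 1/2
    hits = 0
    for _ in range(trials):
        x = sum(b if rng.random() < 0.5 else a for _ in range(n))
        if x >= mean + delta:
            hits += 1
    return hits / trials

n, delta = 400, 15.0
bound = chernoff_hoeffding_bound(n, -0.25, 1.0, delta)
est = empirical_tail(n, delta)
assert est <= bound   # the empirical tail respects the bound
```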
\begin{figure}
\begin{center}
\vspace*{.25cm}
\begin{picture}(300,70)
\put(-20,10){\begin{picture}(0,0)
\put(2,50){$W_{\textup{max}}$}
\put(2,-5){$W_{\textup{max}}$}
\put(-1,22){$e_1$}
\put(10,5){\circle*{3}}
\put(10,5){\line(0,1){40}}
\put(10,45){\circle*{3}}
\put(32,49){$1$}
\put(32,-6){$1$}
\put(24,22){$e_2$}
\put(35,5){\circle*{3}}
\put(35,5){\line(0,1){40}}
\put(35,45){\circle*{3}}
\put(57,49){$1$}
\put(57,-6){$1$}
\put(49,22){$e_3$}
\put(60,5){\circle*{3}}
\put(60,5){\line(0,1){40}}
\put(60,45){\circle*{3}}
\put(80,22){$\ldots$}
\put(107,49){$1$}
\put(107,-6){$1$}
\put(98,22){$e_m$}
\put(110,5){\circle*{3}}
\put(110,5){\line(0,1){40}}
\put(110,45){\circle*{3}}
\end{picture}}
\put(35,-7){\small (a)}
\put(210,10){\begin{picture}(0,0)
\put(-18,50){$W_{\textup{max}}$}
\put(2,-5){$W_{\textup{max}}$}
\put(-38,-5){$W_{\textup{max}}$}
\put(4,22){$e_1$}
\put(-32,22){$e'_1$}
\put(10,5){\circle*{3}}
\put(-30,5){\circle*{3}}
\put(-10,45){\circle*{3}}
\put(-10,45){\line(-1,-2){20}}
\put(-10,45){\line(1,-2){20}}
\put(32,49){$1$}
\put(32,-6){$1$}
\put(24,22){$e_2$}
\put(35,5){\circle*{3}}
\put(35,5){\line(0,1){40}}
\put(35,45){\circle*{3}}
\put(57,49){$1$}
\put(57,-6){$1$}
\put(49,22){$e_3$}
\put(60,5){\circle*{3}}
\put(60,5){\line(0,1){40}}
\put(60,45){\circle*{3}}
\put(80,22){$\ldots$}
\put(107,49){$1$}
\put(107,-6){$1$}
\put(98,22){$e_m$}
\put(110,5){\circle*{3}}
\put(110,5){\line(0,1){40}}
\put(110,45){\circle*{3}}
\end{picture}}
\put(265,-7){\small (b)}
\end{picture}
\end{center}
\vspace*{-1mm}
\caption{(a). The special graph $G_s$ contains $m$ edges, each of which is independent (i.e., each edge constitutes a connected component of $G_s$). Except for the two endpoints of edge $e_1$ in $G_s$, which have weight $W_{\textup{max}}$, all other vertices have weight 1. (b). The graph $G'_s$ is a variant of $G_s$ with an additional vertex and an additional edge $e'_1$.}
\label{fig:special graph}
\vspace*{-3mm}
\end{figure}
\begin{lemma}
\label{lem:(1+1) EA with 1/5-th Rule increase weight}
Consider a feasible dual-solution $Y^{\dagger}$ of $G_s$ with $Y^{\dagger}(e_i) = 1$ for all $1 \leq i \leq m$, and the step size of the edge $e_1$ with an initial value 1.
The expected runtime of the \mbox{\textup{(1+1) EA with 1/5-th Rule}} to obtain the unique MFDS $Y^*$ of $G_s$ starting with $Y^{\dagger}$ is lower bounded by $2^{m^{\epsilon /2}}$ ($0 < \epsilon \leq 1/2$) with probability $1- e^{-{\rm \Omega}(m^{\epsilon})}$.
\end{lemma}
\begin{proof}
Let $M$ be a mutation of the \mbox{\textup{(1+1) EA with 1/5-th Rule}} that selects the edge $e_1$ (note that $M$ may also select some edges in addition to $e_1$).
If $M$ is accepted, then the exponent $q$ of the step size $\sigma(e_1) = \alpha^q$ ($q \ge 0$) of $e_1$ is increased by $1$; otherwise, decreased by $1/4$.
Observe that $M$ can be accepted only if $M$ selects only the edge $e_1$ and increases its LP value.
Thus the probability $P_{inc}$ that the mutation $M$ is accepted is at most $\frac{1}{2e}$, and the probability $P_{dec}$ that the mutation $M$ is rejected is at least $1 - \frac{1}{2e}$.
As the drift of the exponent $q$ is $1 \cdot P_{inc} + (-1/4) \cdot P_{dec} \leq (5-2e)/8e < 0$, $q$ will gradually decrease to 0 if $q > 0$, i.e., the step size of $e_1$ will gradually decrease to 1 if it is greater than 1.
The step size of $e_1$ has an initial value 1; hence if, with high probability, it cannot increase to a sufficiently large value during the whole process of $2^{m^{\epsilon /2}}$ steps, then we can show that, with high probability, the LP value of $e_1$ after the $2^{m^{\epsilon /2}}$ steps cannot reach $W_{\textup{max}} = \alpha^{m}$.
In the following discussion, we assume that the step size of $e_1$ is increased to $\alpha$ at some point, i.e., $q = 1$.
Then we use the Chernoff-Hoeffding Bound to show that $T_1$ is upper bounded by $m^{\epsilon}$ with probability $1 - e^{-{\rm \Omega}(m^{\epsilon})}$, where $T_1$ denotes the number of steps that the algorithm requires to decrease the step size of $e_1$ from $\alpha$ to 1.
Here $x_i$ denotes the increment on the exponent $q$ of the step size of $e_1$, which equals $1$ or $-1/4$; thus $a_i = -1/4$ and $b_i = 1$ for all $1 \leq i \leq n$, where $n = m^{\epsilon}$ (the notations $x_i$, $b_i$, and $a_i$ follow those given in the definition of the Chernoff-Hoeffding Bound).
As the exponent $q$ has values 1 and 0 before and after $T_1$ steps, respectively, we have $X = \sum_{i = 1}^{n} x_i = -1$.
Furthermore, considering the equality $X = E[X] + \delta$, where $E[X] \leq \frac{5-2e}{8e} \cdot m^{\epsilon}$, the Chernoff-Hoeffding Bound gives that the probability that $T_1 > m^{\epsilon}$ is upper bounded by
$$e^{-2 \delta^2 / \sum_{i=1}^{n} (b_i - a_i)^2} = e ^{-2 [\frac{2e-5}{8e} \cdot m^{ \epsilon} - 1]^2 / (\frac{25}{16} m^{\epsilon})} = e^{-{\rm \Omega}(m^{\epsilon})} \ .$$
Meanwhile, we also have that the maximum value of the step size of $e_1$ during the $T_1$ steps is upper bounded by $\alpha^{m^{\epsilon}}$, with probability $1- e^{-{\rm \Omega}(m^{\epsilon})}$.
Now we consider the whole process of $2^{m^{\epsilon /2}}$ steps. A phase of the whole process is {\it non-trivial} if it starts at a point where the step size of $e_1$ is increased to $\alpha$, and ends at the first point where the step size of $e_1$ is decreased to 1 (i.e., the step sizes of $e_1$ at all internal points of the phase are greater than 1).
Thus the whole process consists of $N_1$ non-trivial phases and $N_2$ steps where the step size of $e_1$ is 1 (both $N_1$ and $N_2$ are nonnegative integers).
For a non-trivial phase $P$, by the analysis given above, the number of steps in $P$ is upper bounded by $m^{\epsilon}$ with probability $1 - e^{-{\rm \Omega}(m^{\epsilon})}$, and lower bounded by $5$ (one step increases the step size to $\alpha$, and four steps decrease the step size to 1).
Thus the number $N_1$ of non-trivial phases is upper bounded by $2^{m^{\epsilon /2}} /5$.
Combining this with the conclusion obtained above that, during each non-trivial phase, the step size is increased to over $\alpha^{m^{\epsilon}}$ with probability $e^{-{\rm \Omega}(m^{\epsilon})}$, we have for the $N_1$ non-trivial phases that the step size of $e_1$ is increased to over $\alpha^{m^{\epsilon}}$ with probability
$$ e^{-{\rm \Omega}(m^{\epsilon})} \cdot N_1 \leq e^{-{\rm \Omega}(m^{\epsilon})} \cdot 2^{m^{\epsilon /2}} /5 = e^{-{\rm \Omega}(m^{\epsilon})} \ .$$
That is, during the whole process of $2^{m^{\epsilon /2}}$ steps, the step size of $e_1$ is increased to over $\alpha^{m^{\epsilon}}$ with probability $e^{-{\rm \Omega}(m^{\epsilon})}$.
Thus, by the end of the whole process of $2^{m^{\epsilon/2}}$ steps, the increment on the LP value of $e_1$ is upper bounded by $2^{m^{\epsilon /2}} \cdot \alpha^{m^{\epsilon}} \leq \alpha^{2 m^{\epsilon}}$ (as $\alpha \ge 2$) with probability $1 - e^{-{\rm \Omega}(m^{\epsilon})}$, where $\alpha^{2 m^{\epsilon}}$ is less than $W_{\textup{max}} -1 = \alpha^m -1$ since $0 < \epsilon \leq 1/2$ and $m$ is sufficiently large.
Therefore, with probability $1- e^{-{\rm \Omega}(m^{\epsilon})}$, the \mbox{\textup{(1+1) EA with 1/5-th Rule}} cannot find the unique MFDS of $G_s$ within runtime $2^{m^{\epsilon /2}}$.
\end{proof}
There are two reasons for the pseudo-polynomial runtime of the \mbox{\textup{(1+1) EA with 1/5-th Rule}} for DWVC-E$^+$: (1). the small probability that a mutation is accepted by the algorithm; (2). the ``radical'' strategy that decreases the step sizes of all the edges selected by the mutation if it is rejected.
Under the combined impact of the two factors, the step size of $e_1$ cannot be increased to a sufficiently large value to overcome the exponentially large weight $W_{\textup{max}}$.
An obvious workaround is incorporating the ``conservative'' strategy (adopted by Algorithm~\ref{alg:(1+1) EA}) into the \mbox{\textup{(1+1) EA with 1/5-th Rule}}, which only decreases the step sizes of the edges that satisfy a strict condition. Then the probability of a mutation that decreases the step size of $e_1$ would be smaller.
Another possible workaround is the $1/i$-th rule, where $i > 5$, which slows down the rate at which the step size of $e_1$ decreases.
Both workarounds aim to make the expected drift of the step size of $e_1$ positive.
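The second workaround can be quantified with a small back-of-the-envelope computation (our own sketch, under the assumption that the $1/i$-th rule increases the exponent by $1$ on acceptance and decreases it by $1/(i-1)$ on rejection, taking the acceptance probability at its upper bound $1/(2e)$ from the proof of Lemma~\ref{lem:(1+1) EA with 1/5-th Rule increase weight}):

```python
import math

def exponent_drift(i, p_acc):
    """Expected one-step change of the exponent q under a 1/i-th rule:
    +1 with probability p_acc, -1/(i - 1) with probability 1 - p_acc."""
    return p_acc * 1.0 - (1.0 - p_acc) / (i - 1.0)

p = 1.0 / (2.0 * math.e)           # upper bound on the acceptance probability
assert exponent_drift(5, p) < 0     # the 1/5-th rule: negative drift
assert exponent_drift(6, p) > 0     # i = 6 already makes the drift positive
```

Under these assumptions the drift is positive exactly when $i > 1/p_{acc} = 2e \approx 5.44$, which is consistent with the suggestion $i > 5$ above.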
Considering the instance $\{G_s \setminus \{e_1\},Y_{\textup{orig}},E^+ = \{e_1\} \! \}$ of DWVC-E$^+$, where $Y_{\textup{orig}}(e_i) = 1$ for all $2 \leq i \leq m$, we can get that Theorem~\ref{theo:(1+1) EA with 1/5-th Rule for DWVC-X} holds for \mbox{\textup{DWVC-E}}$^+$ by Lemma~\ref{lem:(1+1) EA with 1/5-th Rule increase weight}.
Similarly, considering the instance $\{G'_s,Y_{\textup{orig}},E^- = \{e'_1\} \! \}$ of DWVC-E$^-$, where $Y_{\textup{orig}}(e_i) = 1$ for all $1 \leq i \leq m$ and $Y_{\textup{orig}}(e'_1) = W_{\textup{max}}-1$ (graph $G'_s$ is given in Figure~\ref{fig:special graph}(b)), we can get that Theorem~\ref{theo:(1+1) EA with 1/5-th Rule for DWVC-X} holds for \mbox{\textup{DWVC-E}}$^-$ by Lemma~\ref{lem:(1+1) EA with 1/5-th Rule increase weight}.
Considering that the weights of the two endpoints of $e_1$ in $G_s$ are increased from 1 to $W_{\textup{max}}$, and $Y_{\textup{orig}}(e_i) = 1$ for all $1 \leq i \leq m$, we can get that Theorem~\ref{theo:(1+1) EA with 1/5-th Rule for DWVC-X} holds for \mbox{\textup{DWVC-W}}$^+$ by Lemma~\ref{lem:(1+1) EA with 1/5-th Rule increase weight}.
Considering that the weight of the endpoint of $e'_1$ that is not shared with $e_1$ in $G'_s$ is decreased from $W_{\textup{max}}$ to 1, and $Y_{\textup{orig}}(e'_1) = W_{\textup{max}}-1$ and $Y_{\textup{orig}}(e_i) = 1$ for all $1 \leq i \leq m$, we can get that Theorem~\ref{theo:(1+1) EA with 1/5-th Rule for DWVC-X} holds for \mbox{\textup{DWVC-W}}$^-$ by Lemma~\ref{lem:(1+1) EA with 1/5-th Rule increase weight}.
Combining the conclusions for \mbox{\textup{DWVC-E}}$^+$, \mbox{\textup{DWVC-E}}$^-$, \mbox{\textup{DWVC-W}}$^+$, and \mbox{\textup{DWVC-W}}$^-$, we have that Theorem~\ref{theo:(1+1) EA with 1/5-th Rule for DWVC-X} holds for \mbox{\textup{DWVC-E}} and \mbox{\textup{DWVC-W}}.
\begin{theorem}
\label{theo:(1+1) EA with 1/5-th Rule for DWVC-X}
If the maximum weight $W_{\textup{max}}$ of the vertices in the considered weighted graph is $\alpha^{m}$, then the expected runtime of the \mbox{\textup{(1+1) EA with 1/5-th Rule}} for \mbox{\textup{DWVC-X}}, where \textup{X} $\in \{E^+, E^-,E,W^+, W^-, W\}$, is lower bounded by $2^{m^{\epsilon /2}}$ with probability $1- e^{-{\rm \Omega}(m^{\epsilon})}$ ($0 < \epsilon \leq 1/2$).
\end{theorem}
\section{Conclusion}
In this paper, we contributed to the theoretical understanding of evolutionary computing for the Dynamic Weighted Vertex Cover problem, generalizing the results obtained by Pourhassan et al.~\cite{pourhassan2015maintaining} for the Dynamic Vertex Cover problem. Two graph-editing operations were studied for the dynamic changes on the given weighted graph, leading to two versions: the Dynamic Weighted Vertex Cover problem with Edge Modification and the Dynamic Weighted Vertex Cover problem with Weight Modification, each with two special variants.
We first introduced two algorithms (1+1) EA and RLS with the step size adaption strategy, and analyzed their performances for the two versions (including their four special variants) separately.
Our analysis shows that the quality of the solutions can be maintained efficiently under these studied dynamic changes.
As mentioned in Section~\ref{sec:intro}, Pourhassan et al.~\cite{pourhassan2017use} studied the Weighted Vertex Cover problem using the dual form of the LP formulation, and showed that their (1+1) EA with step size adaption strategy cannot get a 2-approximate solution in polynomial expected runtime with a high probability.
It is easy to see that our (1+1) EA can be extended to solve the Weighted Vertex Cover problem efficiently (i.e., to construct a 2-approximate solution): each instance $G' = (V',E',W')$ of the problem can be transformed into an instance of DWVC-E$^+$ with $E = \emptyset$ and $E^+ = E'$.
There are two main differences between their (1+1) EA and our (1+1) EA that cause the large performance gap:
(1). for the mutation $M$ of their (1+1) EA, the adjustment directions of the LP values of the edges selected by $M$ are random, i.e., there may exist two edges selected by $M$ whose LP values are increased and decreased respectively; for the mutation $M$ of our (1+1) EA, the LP values of the edges selected by $M$ are either all increased or all decreased, and the adjustment direction depends on the feasibility of the maintained solution;
(2). for the mutation $M$ that is rejected by their (1+1) EA, the step sizes of all the edges selected by $M$ are decreased; for the mutation $M$ that is rejected by our (1+1) EA, only the step sizes of the edges satisfying a specific condition can be decreased.
To eliminate the artificial influences on the behaviors of the two algorithms mentioned above, we also incorporated the 1/5-th (success) rule to control the increasing/decreasing rate of the step size, and presented two algorithms named \mbox{\textup{(1+1) EA with 1/5-th Rule}} and \mbox{\textup{RLS with 1/5-th Rule}}.
The \mbox{\textup{RLS with 1/5-th Rule}} was shown to be able to maintain the quality of the solutions efficiently as well. However, the performance of the \mbox{\textup{(1+1) EA with 1/5-th Rule}} was shown to be unsatisfactory: with high probability, its runtime to maintain the quality of the solutions is lower bounded by a pseudo-polynomial if the maximum vertex weight is exponential in the number of edges of the graph.
The result matches that given by Pourhassan et al.~\cite{pourhassan2017use}, and indicates that the 1/5-th rule cannot overcome the negative impact caused by the standard mutation operator, when considering the special instances.
However, the 1/$i$-th rule with a sufficiently large value of $i$ seems to be a promising way to overcome such an impact.
This is the first work that incorporates the 1/5-th rule with the step size adaption strategy to solve a dynamic combinatorial optimization problem. We leave extending these insights to the analysis of more (dynamic) combinatorial optimization problems for future research.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a multi-task deep learning model, namely FEDAR, for the problem of document-level multi-aspect sentiment classification.
Different from previous studies, our model does not require hand-crafted aspect-specific keywords to guide the attention and boost model performance for the task of sentiment classification.
Instead, our model relies on (a) a highway word embedding layer to transfer knowledge from pre-trained word vectors on a large corpus, (b) a sequential encoder layer whose output features are enriched by pooling and feature factorization techniques, and (c) a deliberate self-attention layer which maintains the interpretability of our model.
Experiments on various DMSC datasets have demonstrated the superior performance of our model. In addition, we also developed an Attention-driven Keywords Ranking (AKR) method, which can automatically discover aspect and opinion keywords from the review corpus based on attention weights.
Attention weights visualization and aspect/opinion keywords word-cloud visualization results have demonstrated the interpretability of our model and effectiveness of our AKR method.
Finally, we also proposed a LEcture-AuDience (LEAD) method to measure the uncertainty of deep neural networks, including our FEDAR model, in the context of multi-task learning.
Our experimental results on multiple real-world datasets demonstrate the effectiveness of the proposed work.
\section{Experimental Results}
\label{sec:experiments}
In this section, we present the results from an extensive set of experiments and demonstrate the effectiveness of our proposed FEDAR model, AKR and LEAD methods.
\subsection{Research Questions}
Our empirical analysis aims at the following Research Questions (RQs):
\begin{itemize}[leftmargin=*,topsep=1pt,itemsep=1pt, partopsep=1pt, parsep=1pt]
\item[$\bullet$] \textbf{RQ1}: What is the overall performance of FEDAR? Does it outperform state-of-the-art baselines?
\item[$\bullet$] \textbf{RQ2}: What is the overall performance of LEAD method compared with uncertainty estimation baselines?
\item[$\bullet$] \textbf{RQ3}: How does each component in FEDAR contribute to the overall performance?
\item[$\bullet$] \textbf{RQ4}: Is the deliberate self-attention module interpretable? Does it learn meaningful aspect and opinion terms from a review corpus?
\end{itemize}
\subsection{Datasets}
\begin{table}[!t]
\centering
\caption{Statistics of different DMSC datasets. $\dagger$ indicates the datasets collected and prepared by us.}
\begin{tabular}{|l|c|c|c|}
\hline
\bf Dataset & \bf \# docs & \bf \# aspects & \bf Scale
\\\hline
TripAdvisor-R & 29,391 & 7 & 1-5 \\\hline
TripAdvisor-U & 58,632 & 7 & 1-5 \\\hline
TripAdvisor-B & 28,543 & 7 &1-2 \\\hline
BeerAdvocate-R & 50,000 & 4 & 1-10\\\hline
BeerAdvocate-B & 27,583 & 4 & 1-2\\\hline
RateMDs-R$\dagger$ & 155,995 & 4 & 1-5\\\hline
RateMDs-B$\dagger$ & 120,303 & 4 & 1-2
\\
\hline
\end{tabular}
\label{tab:dataset}
\end{table}
We first conduct our experiments on
five benchmark datasets, which are obtained from TripAdvisor and BeerAdvocate review platforms.
TripAdvisor based datasets have seven aspects (\textit{value, room, location, cleanliness, check in/front desk, service,
and business service}), while BeerAdvocate based datasets have four aspects (\textit{feel, look,
smell, and taste}).
TripAdvisor-R \cite{yin2017document}, TripAdvisor-U \cite{li2018document} and BeerAdvocate-R \cite{yin2017document,lei2016rationalizing} use the original rating scores as sentiment class labels.
In TripAdvisor-B and BeerAdvocate-B \cite{zeng2019variational}, the original scale is converted to a binary scale, where $1$ and $2$ correspond to negative and positive sentiment, respectively.
Reviews with neutral ratings have been ignored in both datasets.
All datasets have been tokenized and split into train/development/test sets with a proportion of 8:1:1.
In our experiments, we use the same datasets that are provided by the previous studies in the literature \cite{yin2017document,li2018document,zeng2019variational}.
Statistics of the datasets are summarized in Table~\ref{tab:dataset}.
In addition to the aforementioned five datasets, we also propose two new datasets, i.e., RateMDs-R and RateMDs-B, and benchmarked our models on them.
The RateMDs dataset was collected from \url{https://www.ratemds.com}, a website that hosts textual reviews along with numeric ratings for medical experts, primarily in the North America region.
Each review comes with ratings of four different aspects, i.e., \textit{staff, punctuality, helpfulness, and knowledge}.
The overall rating is the average of these aspect ratings.
To obtain a more refined dataset for our experiments, we removed reviews with missing aspect ratings and kept only the reviews whose lengths are between 72 and 250 tokens\footnote{The average review length is 72 tokens, and there are very few reviews with more than 250 tokens, which we treat as outliers.}, since short reviews may not contain information on all four aspects.
The original data has a rating-imbalance problem: $60\%$ and $17\%$ of reviews are rated as 5 and 1, respectively, and more than $50\%$ of reviews have identical aspect ratings.
Therefore, similar to \cite{lei2016rationalizing}, we kept only reviews with diverse aspect ratings, i.e., those in which at least three of the four aspect ratings are different.
The statistics of our dataset are shown in Table~\ref{tab:dataset}.
For RateMDs-R, we tokenized reviews with Stanford CoreNLP\footnote{\url{https://stanfordnlp.github.io/CoreNLP/}} and randomly split the dataset into training, development and testing sets with a proportion of 135,995/10,000/10,000. For RateMDs-B, we followed the process in \cite{zeng2019variational} by converting the original scales to binary and sampling data according to the overall polarities to avoid the imbalance issue.
The statistics of the RateMDs-B dataset are also shown in Table~\ref{tab:dataset}; we split it into training, development and testing sets with a proportion of 100,303/10,000/10,000.
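The filtering steps above can be sketched as follows (a simplified illustration, not the actual preprocessing code; `reviews` is assumed to be a list of `(tokens, aspect_ratings)` pairs with a `None` rating where an aspect was left unrated):

```python
def filter_reviews(reviews, min_len=72, max_len=250):
    """Keep reviews that (a) have no missing aspect rating,
    (b) are between min_len and max_len tokens long, and
    (c) have at least three distinct aspect ratings."""
    kept = []
    for tokens, ratings in reviews:
        if any(r is None for r in ratings):
            continue                      # missing aspect rating
        if not (min_len <= len(tokens) <= max_len):
            continue                      # too short, or a length outlier
        if len(set(ratings)) < 3:
            continue                      # aspect ratings too uniform
        kept.append((tokens, ratings))
    return kept
```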
\subsection{Comparison Methods}
To demonstrate the effectiveness of our methods, we compare the proposed models with the following baseline methods:
\begin{itemize}[leftmargin=*,topsep=1pt,itemsep=1pt, partopsep=1pt, parsep=1pt]
\item[$\bullet$] \textbf{MAJOR} simply uses the majority sentiment labels or polarities in training data as predictions.
\item[$\bullet$] {\bf GLVL} first calculates the document representation by averaging the word vectors of all keywords in a review, where pre-trained word vectors are obtained from GloVe \cite{pennington2014glove}. Then, a LIBLINEAR package \cite{fan2008liblinear} is used for the classification task.
\item[$\bullet$] {\bf BOWL} feeds normalized Bag-of-Words (BOW) representation of reviews into the LIBLINEAR package for the sentiment classification. In our experiments, stop-words and punctuation are removed in order to enable the model to capture the keywords more efficiently.
\item[$\bullet$] \textbf{MCNN} is an extension of the CNN model in the multi-task learning framework. For each task, CNN \cite{kim2014convolutional} extracts key features from a review by applying convolution and max-over-time pooling \cite{collobert2011natural} operations over the shared word embeddings layer.
\item[$\bullet$] \textbf{MLSTM} extends a multi-layer Bi-LSTM model \cite{hochreiter1997long}, which captures both forward and backward semantic information, with the multi-task learning framework, where different tasks have their own classifiers and share the same Bi-LSTM encoder.
\item[$\bullet$] \textbf{MBERT} is a multi-task version of the BERT classification model \cite{devlin2019bert}. Different tasks share the same BERT encoder \cite{Wolf2019HuggingFacesTS}.
\item[$\bullet$] \textbf{MATTN} is a multi-task version of self-attention based models. Similar to MLSTM, different tasks share the same Bi-LSTM encoder.
For each task, we first apply a self-attention layer, and then pass the document representations to a sentiment classifier.
\item[$\bullet$] \textbf{DMSCMC} \cite{yin2017document} introduces a hierarchical iterative attention model to build aspect-specific document representations by frequent and repeated interactions between documents and aspect questions.
\item[$\bullet$] \textbf{HRAN} \cite{li2018document} incorporates hand-crafted aspect keywords and the overall rating into a hierarchical network to build sentence and document representations.
\item[$\bullet$] \textbf{AMN} \cite{zhang2019attentive} first uses attention-based memory networks to incorporate hand-crafted aspect keywords information into the aspect and sentence memories. Then, recurrent attention operation and multi-hop attention memory networks are employed to build document representations.
\item[$\bullet$] \textbf{FEDAR} We name our model as {FEDAR}, where FE, DA and R represent Feature Enrichment, Deliberate self-Attention, and overall Rating, respectively.
\end{itemize}
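For instance, GLVL's document representation is simply the average of pre-trained word vectors over the tokens of a review; a toy sketch (hypothetical 3-dimensional vectors stand in for the 300-dimensional GloVe embeddings):

```python
def average_embedding(tokens, word_vectors, dim):
    """GLVL-style document representation: the mean of the
    pre-trained vectors of all in-vocabulary tokens."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return [0.0] * dim  # no known token: fall back to a zero vector
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```

The resulting fixed-length vector is then fed to the LIBLINEAR classifier.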
\vspace{0.1in}
We compare our LEAD method with the following uncertainty estimation approaches:
\begin{itemize}[leftmargin=*,topsep=1pt,itemsep=1pt, partopsep=1pt, parsep=1pt]
\item[$\bullet$] \textbf{Max-Margin} is the maximal activation of the sentiment classification layer (after softmax normalization).
\item[$\bullet$] \textbf{PL-Variance} (Penultimate Layer Variance) \cite{zaragoza1998confidence} uses the variance of the output of the sentiment classification layer (before softmax normalization) as the uncertainty score.
\item[$\bullet$] \textbf{Dropout} \cite{gal2015dropout} applies dropout to deep neural networks during both training and testing. Dropout can be used as an approximation of Bayesian inference in deep Gaussian processes, which makes it possible to identify low-confidence regions of the input space.
\end{itemize}
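The first two baselines can be computed directly from the output of the sentiment classification layer; a minimal pure-Python sketch (how the variance is turned into a ranking is left to the caller, since only the raw score is defined here):

```python
import math

def max_margin_score(logits):
    """Uncertainty as 1 minus the maximal softmax probability:
    a flat distribution means an uncertain prediction."""
    m = max(logits)                      # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    probs = [e / sum(exps) for e in exps]
    return 1.0 - max(probs)

def pl_variance_score(logits):
    """Penultimate-layer variance: the variance of the raw logits
    (before softmax), used as the uncertainty score."""
    mean = sum(logits) / len(logits)
    return sum((z - mean) ** 2 for z in logits) / len(logits)
```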
All methods are based on our FEDAR model.
\subsection{Implementation Details}
We implemented all deep learning models using Pytorch \cite{paszke2017automatic} and the best set of parameters are selected based on the development set.
Word embeddings are pre-loaded with 300-dimensional GloVe embeddings \cite{pennington2014glove} and fixed during training.
For MCNN, filter sizes are chosen to be 3, 4, 5 and the number of filters are 400 for each size.
For all LSTM based models, the dimension of hidden states is set to 600 and the number of layers is 4.
All parameters are trained using ADAM optimizer \cite{kingma2014adam} with an initial learning rate of 0.0005.
The learning rate decays by 0.8 every 2 epochs.
Dropout with a dropout-rate 0.2 is applied to the classifiers.
Gradient clipping with a threshold of 2 is also applied to prevent gradient explosion.
For MBERT, we leveraged the pre-trained BERT encoder from HuggingFace's Transformers package \cite{Wolf2019HuggingFacesTS} and fixed its weights during training.
We also adopted the learning rate warmup heuristic \cite{liu2019variance} and set the warmup step to 2000.
For dropout-based uncertainty estimation methods, we set the dropout-rate to 0.5.
The number of samples for Dropout is 50.
The number of audiences is 20 for our LEAD model.
$\zeta$ is set to 1.0.
Our codes and datasets are available at \url{https://github.com/tshi04/DMSC_FEDA}.
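The step-decay learning-rate schedule described above (initial rate 0.0005, decayed by 0.8 every 2 epochs) can be written out as follows; treating epochs as 0-indexed is an assumption of this sketch:

```python
def learning_rate(epoch, base_lr=0.0005, gamma=0.8, step=2):
    """Step decay: multiply the base learning rate by `gamma`
    once every `step` epochs."""
    return base_lr * gamma ** (epoch // step)
```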
\subsection{Prediction Performance}
\begin{table}[!t]
\centering
\caption{Averaged Accuracy (ACC) and MSE of different models on TripAdvisor-R (Trip-R), TripAdvisor-U (Trip-U), TripAdvisor-B (Trip-B), BeerAdvocate-R (Beer-R), and BeerAdvocate-B (Beer-B) testing sets. For MSE, smaller is better.
$\dagger$ indicates that results are obtained from previously published papers and NA indicates that results are not available in those papers. We use bold font to highlight the best performance values and underline to highlight the second-best values.}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bfseries Method}
& \multicolumn{2}{c|}{\bf Trip-R}
& \multicolumn{2}{c|}{\bf Trip-U}
& \bf Trip-B
& \multicolumn{2}{c|}{\bf Beer-R}
& \bf Beer-B
\\\cline{2-9}
& \bf ACC & \bf MSE
& \bf ACC & \bf MSE
& \bf ACC
& \bf ACC & \bf MSE
& \bf ACC
\\\hline
MAJOR
& 29.12 & 2.115 & 39.73 & 1.222 & 62.42
& 26.29 & 4.252 & 67.26
\\\hline
GLVL
& 38.94 & 1.795 & 48.04 & 0.879 & 78.15
& 30.59 & 2.774 & 79.73
\\\hline
BOWL
& 40.14 & 1.708 & 48.68 & 0.888 & 78.38
& 31.02 & 2.715 & 79.14
\\\hline
MCNN
& 41.75 & 1.458 & 51.21 & 0.714 & 81.31
& 34.11 & 2.016 & 82.37
\\\hline
MLSTM
& 42.74 & 1.401 & 48.64 & 0.791 & 80.56
& 34.48 & 2.167 & 82.07
\\\hline
MATTN
& 42.13 & 1.427 & 50.53 & 0.679 & 80.82
& 35.78 & 1.962 & 84.86
\\\hline
MBERT
& 44.41 & 1.250
& 54.50 & 0.617
& 82.84
& 35.94 & 1.963 & 84.73
\\\hline
DMSCMC$\dagger$
& 46.56 & \underline{1.083}
& 55.49 & 0.583
& \underline{83.34}
& 38.06 & 1.755 & \underline{86.35}
\\\hline
HRAN$\dagger$
& 47.43 & 1.169
& \underline{58.15} & \underline{0.528}
& NA
& 39.11 & 1.700 & NA
\\\hline
AMN$\dagger$
& \underline{48.66} & 1.109
& NA & NA
& NA
& \underline{40.19} & \underline{1.686} & NA
\\\hline
FEDAR (Ours)
& \bf 48.92 & \bf 1.072 & \bf 58.50 & \bf 0.522 & \bf 85.50 & \bf 40.62 & \bf 1.530 & \bf 87.40
\\\hline
\end{tabular}
\label{tab:prediction_performance}
\end{table}
\begin{table}[!t]
\centering
\caption{Averaged accuracy (ACC) and MSE of different models on RateMDs-R (RMD-R) and RateMDs-B (RMD-B) testing sets. For MSE, smaller is better.}
\begin{tabular}{|l|c|c|c|}
\hline
\multirow{2}{*}{\bfseries Method}
& \multicolumn{2}{c|}{\bf RMD-R}
& {\bf RMD-B}
\\\cline{2-4}
& \bf ACC & \bf MSE
& \bf ACC
\\\hline
MAJOR
& 31.42 & 3.393 & 57.18
\\\hline
GLVL
& 43.11 & 1.882 & 76.93
\\\hline
BOWL
& 44.78 & 1.704 & 78.68 \\\hline
MCNN
& 46.19 & 1.333 & 81.60
\\\hline
MLSTM
& 48.37 & 1.148 & 82.40
\\\hline
MATTN
& 49.08 & 1.157 & 82.66
\\\hline
MBERT
& 48.65 & 1.160 & 83.39
\\\hline
FEDAR (Ours)
& \bf 55.57 & \bf 0.794 & \bf 88.63
\\\hline
\end{tabular}
\label{tab:expRatemds}
\end{table}
For research question \textbf{RQ1}, we use accuracy (ACC) and mean squared error (MSE) as our evaluation metrics to measure the prediction performance of different models.
All results are shown in Tables~\ref{tab:prediction_performance} and \ref{tab:expRatemds}, where we use bold font to highlight the best performance values and underline to highlight the second best values.
For the DMSC problem, it has been demonstrated that deep neural network (DNN) based models perform much better than conventional machine learning methods that rely on $n$-gram or embedding features \cite{yin2017document,li2018document}.
In our experiments, we have also demonstrated this by comparing different DNN models with MAJOR, GLVL and BOWL.
Compared to simple DNN classification models, multi-task learning DNN models (MDNN) can achieve better results with fewer parameters and training time \cite{yin2017document}.
Therefore, we focused on comparing the performance of our model with different MDNN models.
As shown in Table~\ref{tab:prediction_performance}, DMSCMC achieves better results on all five datasets than the baselines MCNN, MLSTM, MBERT, and MATTN.
HRAN and AMN leverage the power of the overall rating and obtain significantly better results than the other compared methods.
From both tables, we observe that our FEDAR model achieves the best performance on all seven datasets.
These results demonstrate the effectiveness of our methods.
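Both evaluation metrics average over all (document, aspect) pairs; a minimal sketch (when every document is rated on all aspects, this coincides with averaging the per-aspect metrics):

```python
def averaged_metrics(y_true, y_pred):
    """Averaged accuracy and MSE over all (document, aspect) pairs.
    y_true / y_pred: lists of per-document rating lists,
    one integer rating per aspect."""
    pairs = [(t, p) for ts, ps in zip(y_true, y_pred)
             for t, p in zip(ts, ps)]
    acc = sum(t == p for t, p in pairs) / len(pairs)
    mse = sum((t - p) ** 2 for t, p in pairs) / len(pairs)
    return acc, mse
```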
\subsection{Uncertainty Performance}
\begin{table}[!t]
\centering
\caption{
Performance of various uncertainty methods on different datasets.}
\begin{tabular}{|l|c|c|c|c|c|}
\multicolumn{6}{c}{\bf TripAdvisor-R}\\
\hline
\bf Method & \bf top-5\% & \bf top-10\% & \bf top-15\% & \bf top-20\% & \bf top-25\% \\
\hline
Max-Margin &
35.40 & 36.00 & 37.47 & 39.15 & 40.68
\\\hline
PL-Variance &
40.20 & 42.40 & 43.00 & 44.25 & 44.84
\\\hline
Dropout &
53.80 & 53.50 & 53.33 & 53.35 & 53.60
\\\hline
LEAD &
\bf 65.40 & \bf 62.60 & \bf 60.93 & \bf 60.85 & \bf 60.20
\\\hline
\multicolumn{6}{c}{\bf BeerAdvocate-R}\\\hline
\bf Method & \bf top-5\% & \bf top-10\% & \bf top-15\% & \bf top-20\% & \bf top-25\% \\
\hline
Max-Margin &
38.80 & 43.80 & 46.53 & 48.30 & 49.44
\\\hline
PL-Variance &
44.00 & 47.00 & 48.33 & 49.65 & 50.68
\\\hline
Dropout &
57.00 & 57.90 & 58.60 & 58.70 & 59.28
\\\hline
LEAD &
\bf 71.60 & \bf 69.50 & \bf 67.93 & \bf 67.55 & \bf 67.20
\\\hline
\multicolumn{6}{c}{\bf RateMDs-R}\\\hline
\bf Method & \bf top-5\% & \bf top-10\% & \bf top-15\% & \bf top-20\% & \bf top-25\% \\
\hline
Max-Margin & 20.20 & 23.80 & 26.80 & 28.15 & 29.40
\\\hline
PL-Variance & 28.60 & 29.70 & 30.53 & 30.85 & 31.60
\\\hline
Dropout & 51.00 & 50.70 & 50.60 & 49.60 & 48.88
\\\hline
LEAD & \bf 66.00 & \bf 62.70 & \bf 60.40 & \bf 59.05 & \bf 58.32
\\\hline
\end{tabular}
\label{tab:uncertainty_performance}
\end{table}
Uncertainty estimation can help users identify reviews for which the models are not confident of their predictions.
More intuitively, prediction models are prone to mistakes on the reviews that they are uncertain about.
In Table \ref{tab:uncertainty_performance}, we first selected the most uncertain predictions (denoted by \textbf{top-n\%}) based on uncertainty scores from the testing sets of the TripAdvisor-R, BeerAdvocate-R and RateMDs-R datasets.
Then, we evaluated the uncertainty performance by comparing the mis-classification rate (i.e., error rate) of our FEDAR model for the selected reviews.
The more incorrect predictions that can be captured, the better the uncertainty method will be.
From these results, we can observe that the Dropout method achieves significantly better results than Max-Margin and PL-Variance.
Our LEAD method outperforms all of these baseline methods on the three datasets, which shows that our method is superior in identifying less confident predictions and answers research question \textbf{RQ2}.
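The top-n\% evaluation protocol can be sketched as follows (breaking ties by list order is an assumption of this illustration):

```python
def top_uncertain_error_rate(uncertainty, correct, fraction):
    """Error rate of the model on the `fraction` most uncertain
    test examples: rank by uncertainty score (descending) and
    compute the share of wrong predictions in the top slice.
    A higher value means the method captures more true errors."""
    order = sorted(range(len(uncertainty)),
                   key=lambda i: -uncertainty[i])
    k = max(1, int(len(order) * fraction))
    return sum(not correct[i] for i in order[:k]) / k
```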
\subsection{Ablation Study of FEDAR}
\begin{table}[!t]
\centering
\caption{Ablation study results. Different models are evaluated by Averaged Accuracy (ACC) and MSE metrics on five public DMSC testing sets. For MSE, smaller is better. \textbf{FE}, \textbf{DA} and \textbf{OR} represent Feature Enrichment, Deliberate self-Attention, and Overall Rating, respectively.
}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bfseries Method}
& \multicolumn{2}{c|}{\bf Trip-R}
& \multicolumn{2}{c|}{\bf Trip-U}
& \bf Trip-B
& \multicolumn{2}{c|}{\bf Beer-R}
& \bf Beer-B
\\\cline{2-9}
& \bf ACC & \bf MSE
& \bf ACC & \bf MSE
& \bf ACC
& \bf ACC & \bf MSE
& \bf ACC
\\\hline
FEDAR
& \bf 48.92 & \bf 1.072 & \bf 58.50 & \bf 0.522 & \bf 85.50 & \bf 40.62 & \bf 1.530 & \bf 87.40
\\\hline
w/o OR
& 46.72 & 1.178 & 55.82 & 0.574 & {84.23}
& 39.66 & {1.617} & {86.52}
\\\hline
w/o OR, DA
& 45.70 & 1.224 & 55.39 & 0.584 & 83.43
& 38.85 & 1.633 & 85.99
\\\hline
w/o OR, FE
& 44.50 & 1.300 & 53.41 & 0.632 & 82.39
& 38.92 & 1.714 & 84.99
\\\hline
w/o OR, DA, FE
& 42.13 & 1.427 & 50.53 & 0.679 & 80.82
& 35.78 & 1.962 & 84.86
\\\hline
\end{tabular}
\label{tab:ablation1}
\end{table}
\begin{table}[!t]
\centering
\caption{Ablation study results. Different models are evaluated by Averaged Accuracy (ACC) and MSE metrics on RateMDs-R (RMD-R) and RateMDs-B (RMD-B) testing sets.
}
\begin{tabular}{|l|c|c|c|}
\hline
\multirow{2}{*}{\bfseries Method}
& \multicolumn{2}{c|}{\bf RMD-R}
& \bf RMD-B
\\\cline{2-4}
& \bf ACC & \bf MSE
& \bf ACC
\\\hline
FEDAR
& \bf 55.82 & \bf 0.786 & \bf 88.63
\\\hline
w/o OR
& 49.80 & 1.106 & 83.89
\\\hline
w/o OR, DA
& 49.68 & 1.108 & 83.62
\\\hline
w/o OR, FE
& 49.28 & 1.123 & 83.47
\\\hline
w/o OR, DA, FE
& 49.08 & 1.157 & 82.66
\\\hline
\end{tabular}
\label{tab:ablation2}
\end{table}
\begin{figure}[!tp]
\centering
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{ablation_accuracy-eps-converted-to.pdf}
\caption{Accuracy vs. Epochs}
\end{subfigure}
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{ablation_mse-eps-converted-to.pdf}
\caption{MSE vs. Epochs}
\end{subfigure}
\caption{This figure shows (a) Averaged Accuracy and (b) MSE for FEDAR and its variants on the TripAdvisor-R dataset during the training process.}
\label{fig:ablation}
\end{figure}
For research question \textbf{RQ3}, we attribute the performance improvement of our FEDAR model to: 1) Better review encoder, including a highway word embedding layer and a feature enriched encoder.
2) Deliberate self-attention mechanism.
3) Overall rating.
Therefore, we systematically conducted ablation studies to demonstrate the effectiveness of these components, and provided the results in Table~\ref{tab:ablation1}, Table~\ref{tab:ablation2} and Fig.~\ref{fig:ablation}.
We first observe that FEDAR significantly outperforms model-OR (FEDAR w/o OR), which indicates that overall rating can help the model make better predictions.
Secondly, we compare model-OR with model-ORFE (FEDAR w/o OR, FE), which is equipped with a regular word embedding layer and a multi-layer Bi-LSTM encoder.
As expected, model-OR obtains better results than model-ORFE.
Similarly, we also compare model-ORDA (FEDAR w/o OR, DA) with model-BASE (FEDAR w/o OR, DA, FE), since model-ORDA adopts the same self-attention mechanism as model-BASE.
It can be observed that model-ORDA performs significantly better than model-BASE on all the datasets.
This experiment shows that we can improve the performance by using highway word embedding layer and feature enrichment technique.
Furthermore, we compared model-OR
with model-ORDA, which does not have a deliberate self-attention layer.
It can be seen that model-OR outperforms model-ORDA in all the experiments.
In addition, we have also compared the results of model-ORFE and model-BASE, which are equipped with a deliberate self-attention layer and a regular self-attention layer.
We observed that model-ORFE has a better performance compared to model-BASE.
This experiment indicates the effectiveness of deliberate self-attention mechanism.
In Fig.~\ref{fig:ablation}, we show the accuracy and MSE of different models during training to demonstrate that, after training for several epochs, FEDAR consistently attains higher accuracy and lower MSE than its basic variants.
\subsection{Attention Visualization}
The attention mechanism enables a model to selectively focus on important parts of the reviews; hence, visualization of the attention weights can help in interpreting our model and analyzing the experimental results \cite{yin2017document,xu2015show}.
To answer research question \textbf{RQ4}, we need to investigate whether our model attends to relevant keywords when it is making aspect-specific rating predictions for the DMSC problem.
\begin{figure}[!tp]
\centering
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{1301.png}
\caption{BeerAdvocate-R}
\end{subfigure}
\\
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{rmd_194.png}
\caption{RateMDs-R}
\end{subfigure}
\caption{Visualization of attention weights. In parentheses, the first and second numbers represent ground-truth and predicted ratings, respectively.
Different aspects are labeled with different colors. The figure is best viewed in color.}
\label{fig:fine_example}
\end{figure}
In Fig.~\ref{fig:fine_example} (a), we show a review example from the BeerAdvocate-R testing set, for which our model has successfully predicted all aspect-specific ratings.
In this Figure, we highlighted the review with deliberate attention weights.
The review contains keywords of all four aspects, thus, we only need to verify whether our model can successfully detect those aspect-specific keywords.
We observed that the deliberate self-attention attends to ``{\it creamy and luscious mouthfeel}" for {\bf feel}.
For the {\bf look} aspect, it captures ``{\it dark murky brown with a ..., leave some lacing on the glass}'', which is quite relevant to the appearance of the beer.
Our model also successfully detects ``{\it very rich and spicy}" for {\bf smell}.
For {\bf taste}, it attends to ``{\it taste is a bit disappointing, ... too prominent}", which yields a slightly lower rating.
Similarly, we show an example from the RateMDs-R testing set in Fig.~\ref{fig:fine_example} (b).
Our model detects ``\textit{unfortunately, the office staff is very lousy! I do think ...}'' for \textbf{staff}, which expresses a negative opinion of the office staff.
For \textbf{punctuality}, it captures ``\textit{true that you have to wait a long time for her}'', which is also negative.
Finally, it attends to ``\textit{is by far the best doctor, she does get a lot of patient and may get overwhelmed. but when it comes to knowledge, communicating, the best}'' for the \textbf{knowledge}, and ``\textit{she is patient and caring, patience and caring attitude}'' for the \textbf{helpfulness} of the doctor.
Both aspects have positive sentiment.
Therefore, these two examples show good interpretability of our model.
\subsection{Aspect and Opinion Keywords}
\begin{figure}[!t]
\centering
\resizebox{1\linewidth}{!}{
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{value-eps-converted-to.pdf}
\caption{Value}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{room-eps-converted-to.pdf}
\caption{Room}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{cleanliness-eps-converted-to.pdf}
\caption{Cleanliness}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{service-eps-converted-to.pdf}
\caption{Service}
\end{subfigure}}
\resizebox{1\linewidth}{!}{
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{feel-eps-converted-to.pdf}
\caption{Feel}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{look-eps-converted-to.pdf}
\caption{Look}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{smell-eps-converted-to.pdf}
\caption{Smell}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{taste-eps-converted-to.pdf}
\caption{Taste}
\end{subfigure}}
\resizebox{1\linewidth}{!}{
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{staff-eps-converted-to.pdf}
\caption{Staff}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{punctuality-eps-converted-to.pdf}
\caption{Punctuality}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{helpfulness-eps-converted-to.pdf}
\caption{Helpfulness}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{knowledge-eps-converted-to.pdf}
\caption{Knowledge}
\end{subfigure}}
\caption{Word-cloud visualization of aspect keywords for TripAdvisor-B (Top row), BeerAdvocate-B (Middle row) and RateMDs-B datasets (Bottom row).}
\label{fig:word_cloud_aspect_keywords}
\end{figure}
In Fig.~\ref{fig:word_cloud_aspect_keywords}, we first show aspect keywords detected by our AKR method for TripAdivsor-B, BeerAdvocate-B, and RateMDs-B corpus.
From Fig.~\ref{fig:word_cloud_aspect_keywords} (Top row), we observe that \textbf{value} related keywords include ``\textit{price, money, rate, overprice}''.
Keywords related to a \textbf{room} are ``\textit{air conditioning, comfy, leak, mattress, bathroom, modern, ceiling}'' and others.
For \textbf{cleanliness}, people are interested in ``\textit{housekeeping, spotless, cleaning, hair, stain, smell}'' and so on. \textbf{Service} is related with ``\textit{staff, service, employee, receptionist, personnel}''.
From Fig.~\ref{fig:word_cloud_aspect_keywords} (Middle row), we observe that {\bf feel} is usually related with keywords, like ``{\it mouthfeel, mouth, smooth, watery}'', which describe feel of beers in mouth.
{\bf Look} is the appearance of beers, thus, the model captures ``{\it appearance, retention, white, head, foam, color}'' and others.
{\bf Smell} related aspect keywords include ``\textit{smell, aroma, scent, fruity}'' and more.
Finally, representative keywords for {\bf taste} are ``{\it taste, balance, complex, flavor}'' and so on.
From Fig.~\ref{fig:word_cloud_aspect_keywords} (Bottom row), we observe that \textbf{staff} related keywords are ``\textit{staff, assistant, secretary, receptionist}'' and so on.
For \textbf{punctuality}, people are usually concerned about ``\textit{waits, hour, hours, retard}''.
The \textbf{helpfulness} of a doctor is related to ``\textit{compassion, manner, empathy, attitude, condescending}'' and so on.
Finally, \textbf{knowledge} related keywords are ``\textit{knowledge, expertise, surgeon, skill}'' and others.
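A simplified version of this attention-driven ranking can be sketched as follows; this covers only the aggregation-and-ranking step of AKR, with an assumed input format of per-word attention weights for a single aspect, not the full method:

```python
from collections import defaultdict

def rank_keywords(attended_docs, top_k=5):
    """Aggregate attention mass per word across a corpus and return
    the top_k words. attended_docs: one list of (word, weight)
    pairs per document, for a single aspect."""
    score = defaultdict(float)
    for doc in attended_docs:
        for word, weight in doc:
            score[word] += weight  # accumulate attention mass per word
    return sorted(score, key=score.get, reverse=True)[:top_k]
```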
\begin{figure}[!t]
\centering
\resizebox{1\linewidth}{!}{
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{room_pos-eps-converted-to.pdf}
\caption{Room Positive}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{room_neg-eps-converted-to.pdf}
\caption{Room Negative}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{service_pos-eps-converted-to.pdf}
\caption{Service Positive}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{service_neg-eps-converted-to.pdf}
\caption{Service Negative}
\end{subfigure}}
\resizebox{1\linewidth}{!}{
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{smell_pos-eps-converted-to.pdf}
\caption{Smell Positive}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{smell_neg-eps-converted-to.pdf}
\caption{Smell Negative}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{taste_pos-eps-converted-to.pdf}
\caption{Taste Positive}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{taste_neg-eps-converted-to.pdf}
\caption{Taste Negative}
\end{subfigure}}
\resizebox{1\linewidth}{!}{
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{staff_pos-eps-converted-to.pdf}
\caption{Staff Positive}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{staff_neg-eps-converted-to.pdf}
\caption{Staff Negative}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{knowledge_pos-eps-converted-to.pdf}
\caption{Knowledge Positive}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{knowledge_neg-eps-converted-to.pdf}
\caption{Knowledge Negative}
\end{subfigure}}
\caption{Word-cloud visualization of aspect-level opinion keywords for TripAdvisor-B (Top row), BeerAdvocate-B (Middle row) and RateMDs-B datasets (Bottom row).}
\label{fig:word_cloud_sentiment_keywords}
\end{figure}
We also obtain aspect-specific opinion keywords from Trip-B, Beer-B, and RMD-B datasets, and show them in Fig.~\ref{fig:word_cloud_sentiment_keywords}.
From this figure (Top row), we observe that reviewers with positive experiences usually stay in ``\textit{comfortable, beautiful, spacious, lovely and gorgeous}'' rooms, where the staff are ``\textit{helpful, friendly, courteous and attentive}''.
Reviewers with negative experiences, on the other hand, may stay in ``\textit{uncomfortable, small, cramped and tiny}'' rooms, where something may ``\textit{leak}'' and there are problems with the ``\textit{air conditioning}''.
The staff are ``\textit{rude, unhelpful and unfriendly}'' and the service is ``\textit{poor}''.
From Fig.~\ref{fig:word_cloud_sentiment_keywords} (Middle row), we learn that good beers should have ``\textit{great, amazing, wonderful, pleasant, aromatic, fresh, rich, and incredible}'' smell, and the taste may be ``\textit{tasty, great, balanced, enjoyable, and flavorful}''.
The smell of low-rated beers is ``\textit{faint, weak, pungent, odd, funky, and rotten}'', and the taste may be ``\textit{bland, unbalanced, disappointed, and sour}''.
From Fig.~\ref{fig:word_cloud_sentiment_keywords} (Bottom row), we find that good doctors usually have ``\textit{sincerely, friendly, helpful, and wonderful}'' staff and are ``\textit{knowledgeable, competent, intelligent, and excellent}''.
In a low-rated clinic, the staff may be ``\textit{incompetent, rude, horrible, terrible, and unfriendly}'', and doctors may ``\textit{misdiagnose}'' the conditions of patients and may not be ``\textit{competent, knowledgeable, or trusted}''.
From these figures, we can conclude that our deliberate self-attention mechanism is interpretable, and by leveraging our AKR method, it is a powerful knowledge discovery tool for online multi-aspect reviews, which answers research question \textbf{RQ4}.
\section{Introduction}
\label{sec:introduction}
Sentiment analysis plays an important role in many business applications \cite{pang2008opinion}.
It is used to identify customers' opinions and emotions toward a particular product/service via identifying polarity (i.e., positive, neutral or negative) of given textual reviews \cite{liu2012sentiment,pang2002thumbs}.
In the past few years, with the rapid growth of online reviews, the topic of fine-grained aspect-based sentiment analysis (ABSA) \cite{pontiki2016semeval} has attracted significant attention since it allows models to predict opinion polarities with respect to aspect-specific terms in a sentence.
Different from sentence-level ABSA,
document-level multi-aspect sentiment classification (DMSC) aims to predict the sentiment polarity of documents, which are composed of several sentences, with respect to a given aspect \cite{yin2017document,li2018document,zeng2019variational}.
DMSC has become an important task since many websites provide platforms for users to give aspect-level feedback and ratings, such as TripAdvisor\footnote{\url{https://www.tripadvisor.com}} and BeerAdvocate\footnote{\url{https://www.beeradvocate.com}}.
Fig.~\ref{fig:review-example} shows a review example from the BeerAdvocate website.
In this example, a beer is rated with four different aspects, i.e., feel, look, smell and taste.
The review also describes the beer with four different aspects.
There is an overall rating associated with this review.
Recent studies have found that users are less motivated to give aspect-level ratings \cite{yin2017document,zeng2019variational}, which makes it difficult to analyze their preferences, and manually annotating these ratings takes human experts substantial time and effort.
There are several recent studies that aim to predict aspect ratings or opinion polarities using deep neural network based models within a multi-task learning framework \cite{yin2017document,li2018document,zhang2019attentive,zeng2019variational}.
In this setting, rating predictions for different aspects, which are highly correlated and can share the same review encoder, are treated as different tasks. However, these models rely on hand-crafted aspect keywords to aid rating/sentiment predictions \cite{yin2017document,li2018document,zhang2019attentive}.
Thus, their results, especially case studies of reviews, are biased towards pre-defined aspect keywords.
In addition, these models focus only on improving prediction accuracy;
knowledge discovery (such as aspect and opinion related keywords) from review corpora still relies on unsupervised \cite{mcauley2012learning} and rule-based methods \cite{zeng2019variational}, which limits the applications of current DMSC models \cite{yin2017document,li2018document,zhang2019attentive}.
In the past few years, model uncertainty of deep neural network classifiers has received increasing attention \cite{gal2016dropout,gal2016uncertainty}, because
it can identify low-confidence regions of input space and give more reliable predictions.
Uncertainty models have also been applied to deep neural networks for text classification \cite{zhang2019mitigating}.
However, few existing uncertainty methods have been used to improve the overall prediction accuracy of multi-task learning models when crowd-sourcing annotation is involved in the DMSC task.
In this paper, we attempt to tackle the above mentioned issues.
The primary contributions of this paper are as follows:
\begin{figure}[!t]
\centering
\includegraphics[width=0.6\textwidth]{review_example.png}
\caption{An example of an online review from BeerAdvocate platform. Keywords corresponding to different aspects are highlighted with different colors.}
\label{fig:review-example}
\end{figure}
\begin{itemize}[leftmargin=*,topsep=1pt,itemsep=1pt, partopsep=1pt, parsep=1pt]
\item
Develop a FEDAR model that achieves competitive results on five benchmark datasets without using hand-crafted aspect keywords.
The proposed model is equipped with a highway word embedding layer, a sequential encoder layer whose output features are enriched by pooling and factorization techniques, and a deliberate self-attention layer.
The deliberate self-attention layer can boost performance as well as provide interpretability for our FEDAR model.
Here, the name FEDAR reflects key components of our model: Feature Enrichment, Deliberate self-Attention, and overall Rating.
\item
Introduce two new datasets obtained from the RateMDs website \url{https://www.ratemds.com}, which is a platform for patients to review the performance of their doctors.
We benchmark different models on them.
\item
Propose an Attention-driven Keywords Ranking (AKR) method to automatically discover aspect and opinion keywords from review corpus based on attention weights, which also provides a new research direction for interpreting self-attention mechanism.
The extracted keywords are significant to ratings/polarities predicted by FEDAR.
\item
Propose a LEcture-Audience (LEAD) method to measure the uncertainty of our FEDAR model for given reviews.
This method can also be generally applied to other deep neural networks.
\end{itemize}
The rest of this paper is organized as follows:
In Section \ref{sec:related_work}, we introduce related work of the DMSC task and uncertainty estimation methods.
In Section \ref{sec:proposed_methods}, we present details of our proposed FEDAR model, AKR method and LEAD uncertainty estimation approach.
In Section \ref{sec:experiments}, we introduce different DMSC datasets, baseline methods and implementation details, as well as analyze experimental results.
Our discussion concludes in Section~\ref{sec:conclusion}.
\section{Proposed Methods}
\label{sec:proposed_methods}
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{task_framework.png}
\caption{An overview of our multi-task learning framework with uncertainty estimation for accurate and reliable sentiment classification in DMSC task.
Here, sentiment classification for each aspect is treated as a task and different tasks share the same review encoder.}
\label{fig:mtArche}
\end{figure*}
In this section, we first introduce our FEDAR model (See Fig.~\ref{fig:model_struct}) for the DMSC task.
Then, we describe our AKR method to automatically discover aspect and aspect-level sentiment terms based on the FEDAR model.
Finally, we discuss our LEAD method (See Fig.~\ref{fig:lead_method}) for measuring the uncertainty of the FEDAR model.
\subsection{The Proposed FEDAR Model}
\subsubsection{Problem Formulation}
The DMSC problem can be formulated as a multi-task classification problem, where the sentiment classification for each aspect is viewed as a task (See Fig.~\ref{fig:mtArche}). More formally, the DMSC problem is described as follows:
Given a textual review $X=(x_1, x_2,...,x_T)$, our goal is to predict class labels, i.e., integer ratings/sentiment polarity of the review $y=(y^1, y^2, ..., y^K)$, where $T$ and $K$ are the number of tokens in the review and the number of aspects/tasks, respectively.
$x_t$ and $y^k$ are the one-hot vector representations of word $t$ and the class label of aspect $k$, respectively.
The challenge in this problem is to build a model that can achieve competitive accuracy without losing model interpretability or obtaining biased results.
Therefore, we propose improving word embedding, review encoder and self-attention layers to accomplish this goal.
We will now introduce our model and provide more details of our architecture in a layer-by-layer manner.
\subsubsection{Highway Word Embedding Layer}
This layer aims to learn word vectors based on pre-trained word embeddings.
We first use word embedding technique \cite{mikolov2013distributed} to map one-hot representations of tokens $x_1, x_2,...,x_T$ to a continuous vector space, thus, they are represented as $E_{x_1}, E_{x_2}, ..., E_{x_T}$, where $E_{x_t}$ is the word vector of $x_t$, pre-trained on a large corpus and fixed during parameter inference.
In our experiments, we adopted GloVe word vectors \cite{pennington2014glove}, so that the embeddings do not need to be trained from random initializations, which may result in poor embeddings when word co-occurrence statistics are scarce.
Then, a single layer highway network \cite{srivastava2015highway} is used to adapt the knowledge, i.e., semantic information from pre-trained word embeddings, to target DMSC datasets.
Formally, the highway network is defined as follows:
\begin{equation}
E'_{x_t} = f(E_{x_t})\odot g(E_{x_t}) + E_{x_t}\odot (1-g(E_{x_t}))
\end{equation}
where $f(\cdot)$ and $g(\cdot)$ are affine transformations with ReLU and Sigmoid activation functions, respectively. $\odot$ represents element-wise product.
$g(\cdot)$ is also known as gate, which is used to control the information that is being carried to the next layer.
Intuitively, the highway network aims at transferring knowledge from pre-trained word embeddings to the target review corpus.
$E'_{x_t}$ can be viewed as a perturbation of $E_{x_t}$, and $f(\cdot)$ and $g(\cdot)$ have significantly fewer parameters than $E_{x_t}$. Therefore, training a highway network is more efficient than training a word embedding layer from random parameters.
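To make the highway transform concrete, the following is a minimal NumPy sketch with toy dimensions; the function and weight names (\texttt{highway}, \texttt{Wf}, \texttt{Wg}) are illustrative and not taken from our actual implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway(E, Wf, bf, Wg, bg):
    """Single-layer highway transform of pre-trained embeddings E (T x d).

    f(.) is an affine map with ReLU and g(.) an affine map with sigmoid
    (the gate); the output mixes transformed and original embeddings
    element-wise, as in the highway equation above.
    """
    f = relu(E @ Wf + bf)       # f(E_{x_t})
    g = sigmoid(E @ Wg + bg)    # gate g(E_{x_t}) in (0, 1)
    return f * g + E * (1.0 - g)

rng = np.random.default_rng(0)
T, d = 3, 4                                    # 3 tokens, 4-dim embeddings
E = rng.normal(size=(T, d))                    # pre-trained (GloVe-style) vectors
Wf, bf = rng.normal(size=(d, d)), np.zeros(d)
Wg, bg = rng.normal(size=(d, d)), np.zeros(d)
E_prime = highway(E, Wf, bf, Wg, bg)           # adapted embeddings E'
```

Note that when the gate saturates at zero, the layer passes the pre-trained embeddings through unchanged, which is what makes it a mild perturbation of $E_{x_t}$.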
\subsubsection{Review Encoder Layer}
This layer describes the review encoder and feature enrichment techniques proposed in our model.
\vspace{2mm}
\noindent\textbf{Sequential Encoder Layer:}
The output of highway word embedding layer ($E'_{x_1}, E'_{x_2}, ..., E'_{x_T}$) is fed into a sequential encoder layer.
Here, we adopt a multi-layer bi-directional LSTM encoder \cite{hochreiter1997long},
which encodes a review into a sequence of hidden states in forward direction $\vec{H}=(\vec{h_1}, \vec{h_2},...,\vec{h_T})$ and backward direction $\cev{H}=(\cev{h_1}, \cev{h_2},...,\cev{h_T})$.
\vspace{2mm}
\noindent\textbf{Representative Features:}
For each hidden state $\vec{h_t}$ (or $\cev{h_t}$), we generate three representative features, which will be later used to assist the attention mechanism to learn the overall review representation.
The first and second features, denoted by $\vec{h_t^\text{max}}$ and $\vec{h_t^\text{avg}}$, are the max-pooling and average-pooling of $\vec{h_t}$, respectively.
The third one is obtained using factorization machine \cite{rendle2010factorization}, where the factorization operation is defined as
\begin{equation}
\label{eq:fm}
\mathcal{F}(z)=w_0+\sum_{i=1}^N w_iz_i+\sum_{i=1}^{N}\sum_{j=i+1}^{N}\left<V_i,V_j\right>z_iz_j.
\end{equation}
Here, the model parameters are $w_i\in\mathbb{R}$ and $V\in\mathbb{R}^{N\times F}$. $N$ and $F$ are the dimensions of the input vector $z$ and factorization, respectively.
$\left<\cdot,\cdot\right>$ is the dot product between two vectors.
$w_0$ in Eq. (\ref{eq:fm}) is a global bias,
$w_i$ is the strength of the $i$-th variable, and
$\left<V_i,V_j\right>$ captures the pairwise interaction between $z_i$ and $z_j$.
Intuitively, the max-pooling and average-pooling provide the approximate location (bound and mean) of the hidden state $\vec{h_t}$ in the $N$-dimensional space, while the factorization captures all single and pairwise interactions.
Together, they provide high-level knowledge of that hidden state.
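As an illustration of Eq.~(\ref{eq:fm}), the following NumPy sketch computes $\mathcal{F}(z)$ with the standard $O(NF)$ reformulation of the pairwise term; all names and dimensions are illustrative.

```python
import numpy as np

def factorization_machine(z, w0, w, V):
    """Second-order factorization machine F(z).

    z: (N,) input vector; w0: scalar bias; w: (N,) linear weights;
    V: (N, F) factor matrix. The pairwise term uses the identity
        sum_{i<j} <V_i, V_j> z_i z_j
            = 0.5 * (||z @ V||^2 - (z**2) @ (V**2).sum(axis=1)),
    which costs O(N * F) instead of O(N^2 * F).
    """
    zv = z @ V                                               # (F,) = sum_i z_i V_i
    pairwise = 0.5 * (zv @ zv - (z ** 2) @ (V ** 2).sum(axis=1))
    return w0 + w @ z + pairwise

rng = np.random.default_rng(1)
N, F = 5, 3
z, w0 = rng.normal(size=N), 0.1
w, V = rng.normal(size=N), rng.normal(size=(N, F))
score = factorization_machine(z, w0, w, V)
```

The reformulated pairwise term agrees exactly with the naive double sum over all pairs $i<j$, so the cheap form can be used for every hidden state without changing the feature.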
\noindent\textbf{Feature Augmentation:}
Finally, the aggregated hidden state $h_t$ at time step $t$ is obtained by concatenating hidden states in both directions and all representative features, i.e.,
\begin{equation}
\aligned
\vec{h_t}&=\vec{h_t}\oplus\vec{h_t^\text{max}}\oplus\vec{h_t^\text{avg}}\oplus\mathcal{F}(\vec{h_t}),
\\
\cev{h_t}&=\cev{h_t}\oplus\cev{h_t^\text{max}}\oplus\cev{h_t^\text{avg}}\oplus\mathcal{F}(\cev{h_t}),\\
h_t&=\vec{h_t}\oplus\cev{h_t}.
\endaligned
\end{equation}
Thus, the review is encoded into a sequence of aggregated hidden states $H=(h_1,h_2,\dots,h_T)$.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{model_details.png}
\caption{Review encoder and deliberate self-attention for aspect $k$. Each hidden state is enriched by three features, i.e., max-pooling, average-pooling and factorization.}
\label{fig:model_struct}
\end{figure}
\subsubsection{Deliberate Self-Attention Layer}
Once the aggregated hidden states for each review are obtained, we apply a self-attention layer for each task to learn an overall review representation for that task.
Compared with pooling and convolution operations, self-attention mechanism is more interpretable, since it can capture relatively important words for a given task.
However, a standard self-attention layer merely relies on a single global alignment vector across different reviews, which results in sub-optimal representations.
Therefore, we propose a deliberate self-attention alignment method to refine the review representations while maintaining the network interpretability.
In this section, we will first introduce the self-attention mechanism, and then provide the details of the deliberation counterpart.
\vspace{2mm}
\noindent\textbf{Global Self-Attention:}
For each aspect $k$, the self-attention mechanism \cite{yang2016hierarchical} is used to learn the relative importance of tokens in a review to the sentiment classification task.
Formally, given the aggregated hidden states $H$ for a review, the alignment score $u_{t,G}^k$ and attention weight $\alpha_{t,G}^k$ are calculated as follows:
\begin{equation}
u_{t,G}^k=(v^k_G)^\top\tanh(W_G^kh_t+b_G^k),\
\alpha_{t,G}^k=\frac{\exp(u_{t,G}^k)}{\sum_{\tau=1}^T\exp(u_{\tau,G}^k)},
\label{eqn:self_attn}
\end{equation}
where $W_G^k$, $v_G^k$ and $b_G^k$ are model parameters.
$G$ represents global, as the above attention mechanism is also known as global attention \cite{luong2015effective}.
$v_G^k$ is viewed as a global aspect-specific base-vector in this paper, since it is used in calculating the alignment with different hidden states across different reviews.
It can also be viewed as a global aspect-specific filter that is designed to capture important information for a certain aspect from different reviews.
Therefore, we also refer to a regular self-attention layer as a global self-attention layer.
With attention weights, the global review representation is calculated by taking the weighted sum of all aggregated hidden states, i.e.,
$s^k_G=\sum_{t=1}^T\alpha_{t,G}^k h_t$.
Traditionally, $s^k_G$ is used for the sentiment classification task.
\vspace{2mm}
\noindent\textbf{Deliberate Attention:}
As we can see from Eq.~(\ref{eqn:self_attn}), the importance of a token $t$ is measured by the similarity between $\tanh(W_G^kh_t+b_G^k)$ and the base-vector $v_G^k$.
However, it is difficult for a single base-vector $v_G^k$ to capture the variability in the reviews, and hence such an alignment results in sub-optimal review representations.
In this paper, we attempt to alleviate this problem by reusing the output of the global self-attention, i.e., $s^k_G$, as a document-level aspect-specific base-vector to produce better review representations.
Notably, $s^k_G$ already incorporates the knowledge of the review content and aspect $k$.
We refer to this step as deliberation.
Given the hidden states $H$ and review representation $s^k_G$, we first calculate the alignment scores and attention weights as follows:
\begin{equation}
u_{t,D}^{k}=(s^{k}_G)^\top\tanh(W_D^k h_t+b_D^k),\
\alpha_{t,D}^k=\frac{\exp(u_{t,D}^k)}{\sum_{\tau=1}^T\exp(u_{\tau,D}^k)},
\label{eqn:deli_attn}
\end{equation}
where $W_D^k$ and $b_D^k$ are parameters. $D$ represents deliberation.
Similarly, we can calculate the aspect-specific review representation by deliberation as
$s^k_D=\sum_{t=1}^T\alpha_{t,D}^k h_t$.
\vspace{2mm}
\noindent\textbf{Review Representation:}
Finally, the review representation for aspect $k$ can be obtained as follows\footnote{In this paper, we also consider models that repeat the deliberation multiple times. However, we did not observe significant performance improvement.}:
\begin{equation}
s^k=s^k_G + s^k_D
= \sum_{t=1}^T \big(\alpha^k_{t,G} + \alpha^k_{t,D}\big)h_t.
\label{eqn:deli-context}
\end{equation}
From the above equation, we not only obtain refined review representations but also maintain the interpretability of our model.
Here, we use the sum rather than the concatenation of the two vectors because the latter would sacrifice this interpretability.
Notably, we can use the accumulated attention weights, i.e., $\frac{1}{2}(\alpha^k_{t,G} + \alpha^k_{t,D})$, to interpret our experimental results.
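The two attention passes in Eqs.~(\ref{eqn:self_attn}), (\ref{eqn:deli_attn}) and their combination in Eq.~(\ref{eqn:deli-context}) can be sketched as follows in NumPy; dimensions and weight shapes are toy choices for illustration.

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

def deliberate_attention(H, vG, WG, bG, WD, bD):
    """Global self-attention followed by one deliberation pass.

    H: (T, d) aggregated hidden states of a review for aspect k.
    Returns the refined representation s^k = s_G^k + s_D^k and the
    accumulated weights 0.5 * (alpha_G + alpha_D) used for interpretation.
    """
    # Global pass: align each hidden state with the aspect base-vector v_G^k.
    uG = np.tanh(H @ WG + bG) @ vG
    aG = softmax(uG)
    sG = aG @ H
    # Deliberation pass: reuse s_G^k as a review-specific base-vector.
    uD = np.tanh(H @ WD + bD) @ sG
    aD = softmax(uD)
    sD = aD @ H
    return sG + sD, 0.5 * (aG + aD)

rng = np.random.default_rng(2)
T, d = 6, 4
H = rng.normal(size=(T, d))
vG = rng.normal(size=d)
WG, WD = rng.normal(size=(d, d)), rng.normal(size=(d, d))
s_k, alpha = deliberate_attention(H, vG, WG, np.zeros(d), WD, np.zeros(d))
```

Because both passes produce proper softmax distributions over tokens, the accumulated weights remain non-negative and sum to one, which is what makes them usable for interpreting the model.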
\subsubsection{Sentiment Classification Layer}
Finally, we pass the representation of each review for aspect $k$ into an aspect-specific classifier to get the probability distribution over different class labels.
Here, the classifier is defined as a two layer feed-forward network with a ReLU activation followed by a softmax layer, i.e.,
\begin{equation}
\aligned
y^k_\text{out}&=\text{ReLU}(W^k_\text{out}s^k+b_\text{out}^k), \\
y^k_\text{pred}&=\text{softmax}(W_\text{pred}^ky^k_\text{out}+b_\text{pred}^k),
\endaligned
\label{eqn:class_label_ditribution}
\end{equation}
where $W^k_\text{out}$, $W_\text{pred}^k$, $b_\text{out}^k$, and $b_\text{pred}^k$ are learnable parameters.
Given the ground-truth label $\hat{y}^k$ for each aspect, which is a one-hot vector, our goal is to minimize the cross-entropy error between $y^k_\text{pred}$ and $\hat{y}^k$ summed across all aspects, i.e.,
\begin{equation}
\mathcal{L}_\theta=-\sum_{k=1}^K\sum_{i=1}^N \hat{y}^k_i\log(y^k_{\text{pred},i}),
\end{equation}
where $K$ and $N$ represent the number of aspects and class labels, respectively.
The model is trained in an end-to-end manner using back-propagation.
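A minimal NumPy sketch of Eq.~(\ref{eqn:class_label_ditribution}) and the loss above, with toy sizes; the helper names are illustrative rather than those of our implementation.

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

def classify(s_k, W_out, b_out, W_pred, b_pred):
    """Aspect-specific classifier: ReLU feed-forward layer + softmax."""
    y_out = np.maximum(W_out @ s_k + b_out, 0.0)
    return softmax(W_pred @ y_out + b_pred)

def multi_task_loss(y_pred_all, y_true_all):
    """Cross-entropy summed over all K aspects; y_true are one-hot."""
    eps = 1e-12
    return -sum(float(yt @ np.log(yp + eps))
                for yp, yt in zip(y_pred_all, y_true_all))

rng = np.random.default_rng(3)
d, h, N, K = 4, 8, 5, 2                      # toy sizes: N labels, K aspects
preds, trues = [], []
for k in range(K):
    s_k = rng.normal(size=d)                 # review representation for aspect k
    y_pred = classify(s_k, rng.normal(size=(h, d)), np.zeros(h),
                      rng.normal(size=(N, h)), np.zeros(N))
    y_true = np.eye(N)[rng.integers(N)]      # one-hot ground-truth label
    preds.append(y_pred)
    trues.append(y_true)
loss = multi_task_loss(preds, trues)
```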
\subsection{Aspect and Sentiment Keywords}
Traditionally, aspect and sentiment keywords are obtained using unsupervised clustering methods, such as topic models \cite{mcauley2012learning,shi2019document}.
However, these methods cannot automatically build correlations between keywords and aspects or sentiment due to the lack of supervision.
Aspect and opinion term extractions in fine-grained aspect-based sentiment analysis tasks \cite{pontiki2014semeval,pontiki2016semeval,fan2019target,wang2017coupled} focus on extracting terms and phrases from sentences.
However, they require a number of labeled reviews to train deep learning models.
In this paper, we propose a fully automatic Attention-driven Keywords Ranking (AKR) method to discover aspect and opinion keywords, which are important to predicted ratings, from a review corpus based on self-attention (or deliberate self-attention) mechanism in the context of DMSC.
\subsubsection{Aspect Keywords Ranking}
The significance of a word $w$ to an aspect $k$ can be described by a conditional probability $p_\mathcal{C}(w|k)$ on a review corpus~$\mathcal{C}$.
Intuitively, given an aspect $k$, if a word $w_1$ is more frequent than $w_2$ across the corpus, then, $w_1$ is more significant to aspect $k$.
We can further expand this probability as follows:
\begin{equation}
p_\mathcal{C}(w|k)= \sum_{\xi\in\mathcal{C}} p_\mathcal{C}(w,\xi|k),
\label{eqn:keywords_expansion}
\end{equation}
where $\xi$ is a review in corpus $\mathcal{C}$.
For each $\xi\in\mathcal{C}$, probability $p_\mathcal{C}(w,\xi|k)$ indicates the importance of word $w$ to the aspect $k$, which can be defined using attention weights, i.e.,
\begin{equation}
p_\mathcal{C}(w,\xi|k)=\frac{\sum_{t=1}^T\alpha^\xi_t\cdot\delta(w_t,w)}{\sum_{\xi'\in\mathcal{C}}f_{\xi'}(w)+\gamma},
\label{eqn:keywords_attention_def}
\end{equation}
where $f_{\xi'}(w)$ is the frequency of $w$ in document $\xi'$ and $\gamma$ is a smoothing factor.
$\delta(w_t,w)=\begin{cases}1& \text{if } w_t=w\\ 0& \text{otherwise}\end{cases}$
is a delta function.
Attention weight $\alpha_t^\xi$ is defined as $\alpha_t^\xi=\frac{1}{2}(\alpha^k_{t,G} + \alpha^k_{t,D})$ for the deliberate self-attention mechanism.
In Eq.~(\ref{eqn:keywords_attention_def}), the denominator is applied to reduce the noise from stop-words and punctuation.
After obtaining the score $p_\mathcal{C}(w|k)$ for every member in the vocabulary, we collect top-ranked words (with part-of-speech tags: NOUN and PROPN) as aspect keywords.
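The ranking in Eqs.~(\ref{eqn:keywords_expansion}) and (\ref{eqn:keywords_attention_def}) can be sketched in plain Python as follows; the tiny corpus and attention weights are fabricated for illustration, and part-of-speech filtering is omitted.

```python
from collections import defaultdict

def akr_scores(corpus, attention, gamma=1.0):
    """Attention-driven Keywords Ranking scores for one aspect k.

    corpus: list of reviews, each a list of tokens.
    attention: parallel per-token attention weights, e.g. the accumulated
               0.5 * (alpha_G + alpha_D) from the deliberate self-attention.
    Returns {word: p_C(w | k)} before part-of-speech filtering.
    """
    freq = defaultdict(float)   # corpus frequency: sum_{xi'} f_{xi'}(w)
    mass = defaultdict(float)   # attention mass: sum_xi sum_t alpha_t * delta(w_t, w)
    for tokens, alphas in zip(corpus, attention):
        for w, a in zip(tokens, alphas):
            freq[w] += 1.0
            mass[w] += a
    return {w: mass[w] / (freq[w] + gamma) for w in mass}

corpus = [["great", "room"], ["room", "was", "tiny"]]
attention = [[0.7, 0.2], [0.5, 0.1, 0.3]]
scores = akr_scores(corpus, attention)
```

The frequency-based denominator is what keeps very frequent stop-words from dominating the ranking even when they collect some attention mass across many reviews.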
\subsubsection{Aspect-level Opinion Keywords}
Similarly, we can estimate the significance of a word $w$ to an aspect-level opinion label/rating $\hat{y}^k$ by a conditional probability $p_\mathcal{C}(w|\hat{y}^k)$.
Let us use $\mathcal{C}_{\hat{y}^k}$ to denote reviews with rating $\hat{y}^k$ for aspect $k$, then, the following equivalence holds, i.e.,
\begin{equation}
p_\mathcal{C}(w|\hat{y}^k)=p_{\mathcal{C}_{\hat{y}^k}}(w|k),
\label{eqn:sentiment_keywords}
\end{equation}
which can be further calculated by Eqs.~(\ref{eqn:keywords_expansion}) and (\ref{eqn:keywords_attention_def}).
Intuitively, we first construct a subset $\mathcal{C}_{\hat{y}^k}\subset\mathcal{C}$ of the review corpus, then, we use attention weights of aspect $k$ to calculate the significance of word $w$ to that aspect.
Finally, we collect top-ranked words (with part-of-speech tags: ADJ, ADV and VERB) as aspect-level opinion keywords.
\begin{figure}[!tp]
\centering
\includegraphics[width=0.5 \textwidth]{uncertainty_model.png}
\caption{The LEcture-AuDience (LEAD) model for uncertainty estimation. `R', `K', `C', `U' represent rating, knowledge, probability distribution of different class labels (see Eq.~(\ref{eqn:class_label_ditribution})), and uncertainty score, respectively.}
\label{fig:lead_method}
\end{figure}
\subsection{The Proposed Uncertainty Model}
Although our FEDAR model has achieved competitive prediction accuracy and our AKR method allows us to explore aspect and sentiment keywords, it is still difficult to deploy such a model in real-world applications.
In DMSC datasets, we find many typos and abbreviations in reviews, and many reviews describe the product or service from only one aspect.
Deep learning models cannot detect these problems in the data; therefore, their predictions are not always reliable.
One way to tackle this challenge is by estimating the uncertainty of model predictions.
If a model returns ratings with high uncertainty, we can pass the review to human experts for annotation.
In this section, we propose a LEcture-AuDience (LEAD) method (See Fig.~\ref{fig:lead_method}) to measure the uncertainty of our FEDAR model in the context of multi-task learning.
\subsubsection{Lecturer and Audiences}
We use a lecturer (denoted by $\mathcal{M}^L$) to represent any well-trained deep learning model, e.g., FEDAR model.
Audiences are models (denoted by $\mathcal{M}^A$) with partial knowledge of the lecturer, where \textit{knowledge can be interpreted as relationships between an input review and output ratings} which are inferred by $\mathcal{M}^L$.
Here, $\mathcal{M}^A=\{\mathcal{M}^{A_1}, \mathcal{M}^{A_2},...,\mathcal{M}^{A_{|A|}}\}$, where $|A|$ is the number of audiences.
\textit{Partial knowledge determines the eligibility of audiences to provide uncertainty scores.}
For example, eligible audiences can be:
(1) Models obtained by pruning some edges (e.g., dropout with small dropout rate) of the lecturer model.
(2) Models obtained by continuing training the lecturer model with very small learning rate for a few batches.
Ineligible audiences include:
(1) Random models trained on the same or a different review corpus.
(2) Models with the same or similar structure as lecturer but initialized with different parameters and trained on a different corpus.
\subsubsection{Uncertainty Scores}
Given a review, suppose the lecturer $\mathcal{M}^L$ predicts the class label as $\tilde{y}^{L,k}$ for aspect $k$, where $\tilde{y}^{L,k}$ is a one-hot vector.
An audience $\mathcal{M}^{A_\mu}$ obtains the probability distribution over different class labels as $y^{A_\mu,k}_\text{pred}$ (See Eq.~(\ref{eqn:class_label_ditribution})).
Then, the uncertainty score is defined as the cross entropy between $\tilde{y}^{L,k}$ and $y^{A_\mu,k}_\text{pred}$, which is calculated by
\begin{equation}
\psi^{A_\mu, k}=-\sum_{i=1}^{N}\tilde{y}^{L,k}_i\log(y^{A_\mu,k}_{\text{pred},i}).
\end{equation}
\textit{Intuitively, the audience is more uncertain about the lecturer's prediction if it gets lower probability for that prediction.}
For example, in Fig.~\ref{fig:lead_method}, the lecturer model predicts rating/label as 4.
Three audiences obtain probability 0.1, 0.8, 0.5 for that label, respectively.
Then, their uncertainty scores are $\psi^{A_1,k}=2.30$, $\psi^{A_2,k}=0.22$, and $\psi^{A_3,k}=0.69$.
Given the uncertainty scores from each audience for each aspect, we can calculate the final uncertainty score as
\begin{equation}
\psi=\exp\sum_{\mu=1}^{|A|}\zeta\log\left(\exp{\sum_{k=1}^K\log\big(\psi^{A_\mu,k}+\lambda\big)}+\eta\right),
\end{equation}
where $\lambda$ and $\eta\geq 1$ are smoothing factors, both set to $1$ in our experiments.
$\zeta$ is an empirical factor reflecting the audiences' knowledge.
If audience networks are obtained by applying dropout to the lecturer network, then the higher the dropout rate, the lower the factor $\zeta$,
since the audiences have less knowledge of the lecturer.
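The scoring above can be sketched in a few lines of Python; \texttt{lead\_uncertainty} and its inputs are illustrative names, where each entry of \texttt{audience\_probs} is the probability an audience assigns to the lecturer's predicted label.

```python
import math

def lead_uncertainty(audience_probs, zeta=1.0, lam=1.0, eta=1.0):
    """LEAD uncertainty score.

    audience_probs[mu][k]: probability audience mu assigns to the
    lecturer's predicted label for aspect k, so the per-audience,
    per-aspect score is the cross-entropy psi^{A_mu,k} = -log p
    (the lecturer's prediction is one-hot).
    """
    log_outer = 0.0
    for probs_mu in audience_probs:                        # over audiences mu
        inner = math.exp(sum(math.log(-math.log(p) + lam)  # over aspects k
                             for p in probs_mu))
        log_outer += zeta * math.log(inner + eta)
    return math.exp(log_outer)

# Three audiences and one aspect, matching the example in Fig. 4:
# probabilities 0.1, 0.8 and 0.5 for the lecturer's predicted label.
score = lead_uncertainty([[0.1], [0.8], [0.5]])
```

As expected, the score grows as the audiences become less confident in the lecturer's prediction, which is the property used to select reviews for annotation.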
After obtaining uncertainty scores for all reviews in the testing set, we can select either a certain percentage of reviews with the highest scores, or all reviews with scores above a threshold, for crowdsourcing annotation.
Human experts are expected to analyze the reviews and decide the aspect ratings for them.
\section{Related Work}
\label{sec:related_work}
Document-level Multi-aspect Sentiment Classification (DMSC) aims to predict ratings/sentiment of reviews with respect to given aspects.
It originated from online review systems that ask users to provide aspect-level ratings for a product or service.
Most of the early studies in DMSC solved this problem by first extracting features (e.g., $n$-grams) for each aspect and then predicting aspect-level ratings \cite{mcauley2012learning,lu2011multi} using regression techniques (e.g., Support Vector Regression \cite{smola2004tutorial}).
More recently, deep learning models formulate DMSC as a multi-task classification problem \cite{yin2017document,li2018document,zhang2019attentive}.
In these models, reviews are first encoded to their corresponding vector representation using recurrent neural networks.
Then, aspect-specific attention modules and classifiers are built upon the review encoders to predict the sentiment. For example, Yin~\textit{et~al. } \cite{yin2017document} formulated this task as a machine comprehension problem.
Li~\textit{et~al. } \cite{li2018document} proposed incorporating users' information, overall ratings, and hand-crafted aspect keywords into their model to predict ratings, instead of merely using textual reviews.
Zeng~\textit{et~al. } \cite{zeng2019variational}
introduced a variational approach to weakly supervised sentiment analysis.
Aspect-based sentiment classification (ABSA) \cite{pontiki2016semeval} is another research direction that is related to our work.
It consists of several fine-grained sentiment classification tasks, including aspect category detection and polarity, and aspect term extraction and polarity.
However, these tasks primarily focus on sentence-level sentiment classification \cite{wang2016attention,tang2016aspect}, and typically need human experts to annotate aspect terms, categories, and entities.
In this paper, we focus on the DMSC problem and our model is also based on a multi-task learning framework.
In addition, we place more emphasis on the model interpretability, automatic aspect and opinion keywords discovery, and uncertainty estimation.
Model uncertainty of deep neural networks (NNs) is another research topic related to this work.
Bayesian NNs, which learn a distribution over weights, have been studied extensively and achieved competitive results for measuring uncertainty \cite{blundell2015weight,neal2012bayesian,louizos2016structured}.
However, they are often difficult to implement and computationally expensive compared with standard deep NNs.
Gal and Ghahramani \cite{gal2015dropout} proposed using Monte Carlo dropout to estimate uncertainty by applying dropout \cite{srivastava2014dropout} at testing time, which can be interpreted as a Bayesian approximation of the Gaussian process \cite{rasmussen2003gaussian}.
This method has gained popularity in practice \cite{kendall2018multi,mcallister2017concrete} since it is simple to implement and computationally more efficient.
Recently, Zhang \textit{et~al. } \cite{zhang2019mitigating} applied dropout-based uncertainty estimation methods to text classification.
Our paper proposes a new method for estimating uncertainty for deep NNs and we use it to measure the uncertainty of our models.
\section{Overview}
A central program of the theory of infinite structures
is to find which structures have partition properties resembling Ramsey's Theorem.
In this context, one colors the copies of a finite structure $\A$ inside the infinite structure $\mathcal{S}$ into finitely many colors and looks for an
infinite substructure $\mathcal{S}'$, isomorphic to $\mathcal{S}$,
in which the copies of $\A$ have the same color.
A wide collection of infinite structures have the Ramsey property for colorings of singletons.
However,
even the rationals as a linearly ordered structure do not have the Ramsey property for colorings of pairsets, as seen by Sierpi\'{n}ski's
two-coloring of pairs of rationals where each subcopy of the rationals retains both colors on its pairsets.
This leads to the following
question:
\begin{question}
Given an infinite structure $\mathcal{S}$ and
a finite substructure $\mathrm{A}$,
is there a positive integer $T(\mathrm{A}, \mathcal{S})$ such that for any coloring of all copies of $\mathrm{A}$ in $\mathcal{S}$ into finitely many colors,
there is a substructure $\mathcal{S}'$ of $\mathcal{S}$, isomorphic to $\mathcal{S}$,
in which all copies of $\mathrm{A}$ take no more than $T(\mathrm{A}, \mathcal{S})$ colors?
\end{question}
This number $T(\mathrm{A}, \mathcal{S})$, when it exists, is called the
{\em big Ramsey degree} of $\A$ in $\mathcal{S}$.
Research in this area
has gained recent momentum, as it was
highlighted by
Kechris, Pestov, and Todorcevic
in \cite{Kechris/Pestov/Todorcevic05}.
Big Ramsey degrees have implications for topological dynamics, as shown in \cite{Kechris/Pestov/Todorcevic05} and further developed in Zucker's work \cite{Zucker19}.
In contrast to finite structural Ramsey theory,
the development of Ramsey theory for infinite structures has progressed quite slowly.
After Sierpi\'{n}ski's coloring for pairs of rationals, work of Laver
and Devlin (see \cite{DevlinThesis}) established the exact big Ramsey degrees for finite sets of rationals by 1979.
In the mid 1970's, \Erdos, Hajnal, and Pos\'{a}
began work on the big Ramsey degrees of the Rado graph, establishing
an analogue of Sierpi\'{n}ski's coloring for edges in \cite{Erdos/Hajnal/Posa75}.
Building on work of Pouzet and Sauer in \cite{Pouzet/Sauer96}, the full Ramsey theory of the Rado graph for colorings of copies of any finite graph
was finally established
in 2006 in the two papers
\cite{Sauer06} by Sauer and \cite{Laflamme/Sauer/Vuksanovic06} by Laflamme, Sauer, and Vuksanovic.
Around that time, driven by the interest generated by
\cite{Kechris/Pestov/Todorcevic05}, the Ramsey theory of other
ultrahomogeneous structures was established in
\cite{NVT08} and \cite{Laflamme/NVT/Sauer10}.
A principal component in the work in \cite{DevlinThesis} and \cite{Sauer06} is a Ramsey theorem for strong trees due to Milliken \cite{Milliken79}, while
\cite{Laflamme/NVT/Sauer10} depended on the authors' development of a colored
version of this theorem.
The lack of similar means for coding infinite structures and the lack of Ramsey theorems for such coded structures
have been the largest obstacles in the further development of this area, especially for ultrahomogeneous structures with forbidden configurations.
As stated in Nguyen Van Th\'{e}'s habilitation \cite{NVTHabil},
``so far, the lack of tools to represent ultrahomogeneous structures is the major obstacle towards a better understanding of their infinite partition properties.''
In this paper, we prove that for each $k\ge 4$, the $k$-clique-free Henson graph $\mathcal{H}_k$ has finite big Ramsey degrees, extending work of the author in \cite{DobrinenJML20} for the triangle-free Henson graph.
Given $k\ge 3$,
the Henson graph $\mathcal{H}_k$ is the universal ultrahomogeneous $K_k$-free graph; that is, the $k$-clique-free analogue of the Rado graph.
The only prior work on the big Ramsey degrees of $\mathcal{H}_k$ for $k\ge 4$ was work of
El-Zahar and Sauer
in \cite{El-Zahar/Sauer89} for vertex colorings in 1989.
In \cite{DobrinenJML20}, we proved that the triangle-free Henson graph has finite big Ramsey degrees.
The work in this paper follows the general outline in \cite{DobrinenJML20}, but
the extension of Ramsey theory to all Henson graphs
required expanded ideas, a better understanding of the nature of coding structures with forbidden configurations, and many new lemmas.
This article
presents a unified framework for the Ramsey theory of Henson graphs.
We develop new
techniques for coding copies of $\mathcal{H}_k$ via {\em strong $\mathcal{H}_k$-coding trees} and prove Ramsey
theorems for these trees, forming a family of Milliken-style theorems.
The approach here streamlines the one in \cite{DobrinenJML20} for $\mathcal{H}_3$ and provides a general methodology opening
further study of big Ramsey degrees for ultrahomogeneous structures with and without forbidden configurations.
\section{Introduction}
The field of
Ramsey theory
was established by
the following celebrated result.
\begin{thm}[Infinite Ramsey Theorem, \cite{Ramsey30}]\label{thm.RamseyInfinite}
Given positive integers $m$ and $j$,
suppose the collection of all $m$-element subsets of $\mathbb{N}$
is colored by $j$ colors.
Then there is an infinite set $N$ of natural numbers
such that all $m$-element subsets of $N$ have the same color.
\end{thm}
From this, Ramsey deduced the following finite version, which also can be proved directly.
\begin{thm}[Finite Ramsey Theorem, \cite{Ramsey30}]\label{thm.RamseyFinite}
Given positive integers $m,n,j$ with
$m\le n$, there is an integer $r>n$ such that for any coloring of the $m$-element subsets of $r$ into $j$ colors,
there is a subset $N\sse r$ of cardinality $n$ such that all $m$-element subsets of $N$ have the same color.
\end{thm}
In both cases, we say that the coloring is {\em monochromatic} on
$N$.
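For instance, for $m=2$, $n=3$, and $j=2$, the least such $r$ is $6$: however the pairs from a six-element set are colored with two colors, some three-element subset has all of its pairs in one color, whereas the pairs from the five-element set $\{0,1,2,3,4\}$ may be colored according to whether the difference of the pair is $\pm 1 \pmod 5$, leaving no monochromatic triple.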
Interestingly, Theorem \ref{thm.RamseyFinite} was motivated by Hilbert's Entscheidungsproblem: to find a decision procedure deciding which formulas in first order logic are valid.
Ramsey applied Theorem \ref{thm.RamseyFinite} to prove that the validity, or lack of it, for
certain types of formulas in first order logic (those with no existential quantifiers) can be ascertained algorithmically.
Later, Church and Turing each showed that a general solution to Hilbert's problem is impossible, so Ramsey's success for this restricted class of formulas is quite remarkable.
Ever since the inception of Ramsey theory, its connections with
logic have continually spurred progress in both fields.
This phenomenon occurs once again
in Sections \ref{sec.5} and \ref{sec.1SPOC}, where methods of logic are used to deduce Ramsey theorems.
Structural Ramsey theory investigates which structures satisfy versions of Ramsey's Theorem.
In this setting, one tries to find a substructure isomorphic to some fixed structure on which the coloring is monochromatic.
Given structures $\A$ and $\B$,
we write $\A\le \B$ if and only if there is an embedding of $\A$ into $\B$.
A substructure $\A'$ of $\B$ is called a {\em copy} of $\A$ if and only if $\A'$ is the image of some embedding of $\A$ into $\B$.
The collection of all copies of $\A$ in $\B$ is denoted by ${\B \choose \A}$.
Given structures $\A,\B,\C$ with $\A\le \B\le \C$
and an integer $j\ge 1$, we write
\begin{equation}
\C\ra (\B)^{\A}_j
\end{equation}
to mean that for each $c:{\C\choose \A}\ra j$, there is a $\B'\in {\C\choose \B}$ for which $c$ takes only one color on ${\B'\choose \A}$.
A class $\mathcal{K}$ of finite structures is said to have the {\em Ramsey property}
if given $\A,\B\in\mathcal{K}$ with $\A\le \B$,
for any integer $j\ge 1$,
there is some $\C\in\mathcal{K}$ for which $\B\le \C$ and
$\C\ra (\B)^{\A}_j$.
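In this terminology, the Finite Ramsey Theorem says precisely that the class of finite linear orders has the Ramsey property: given finite linear orders $\A\le\B$ of cardinalities $m\le n$ and any $j\ge 1$, a linear order $\C$ of the cardinality $r$ provided by Theorem \ref{thm.RamseyFinite} satisfies
\begin{equation}
\C\ra (\B)^{\A}_j,
\end{equation}
since each $m$-element subset of $\C$ carries exactly one copy of $\A$.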
Some classic examples of classes of structures
with the Ramsey property include
finite Boolean algebras (Graham and Rothschild \cite{Graham/Rothschild71}),
finite vector spaces over a finite field (Graham, Leeb, and Rothschild \cite{Graham/Leeb/Rothschild72} and
\cite{Graham/Leeb/Rothschild73}),
finite ordered relational structures (independently, Abramson and Harrington, \cite{Abramson/Harringon78} and \Nesetril\ and \Rodl, \cite{Nesetril/Rodl77}, \cite{Nesetril/Rodl83}),
in particular, the class of finite ordered graphs.
The papers \cite{Nesetril/Rodl77} and \cite{Nesetril/Rodl83} further proved
that all set-systems of finite ordered relational structures omitting some irreducible substructure have the Ramsey property.
This includes
the classes of finite ordered graphs omitting $k$-cliques, denoted $\mathcal{G}_k^{<}$, for each $k\ge 3$.
\Fraisse\ classes are natural objects for structural Ramsey theory investigations, for as shown by \Nesetril, any class with the Ramsey property must satisfy the amalgamation property.
Since \Fraisse\ theory is not central to the proofs in this article, we refer the interested reader to \cite{Fraisse54} and Section 2 of the more recent \cite{Kechris/Pestov/Todorcevic05} for background; the properties of the specific examples contained in this article will be clear.
Most classes of finite unordered structures do not have
the Ramsey property.
However, if equipping
the class with an additional linear order produces the Ramsey property, then some
remnant of it remains in the unordered reduct.
This is the idea behind {\em small Ramsey degrees}.
Following notation in \cite{Kechris/Pestov/Todorcevic05},
given any \Fraisse\ class $\mathcal{K}$ of finite structures,
for $A\in\mathcal{K}$,
$t(A,\mathcal{K})$ denotes the smallest number $t$, if it exists, such that
for each $B\in \mathcal{K}$ with $A\le B$ and for each $j\ge 2$,
there is some $C\in\mathcal{K}$ into which $B$ embeds such that for
any coloring $c:{C \choose A}\ra j$,
there is a $B'\in {C \choose B}$ such that the restriction of $c$ to ${B'\choose A}$ takes no more than $t$ colors.
In the arrow notation, this is written as
\begin{equation}
C\ra (B)^A_{j,t(A,\mathcal{K})}.
\end{equation}
A class $\mathcal{K}$ has {\em finite (small) Ramsey degrees} if
for each $\A\in\mathcal{K}$ the number
$t(\A,\mathcal{K})$ exists.
The number $t(\A,\mathcal{K})$ is called the {\em Ramsey degree of $\A$} in $\mathcal{K}$ \cite{Fouche98}.
Notice that $\mathcal{K}$ has the Ramsey property if and only if $t(\A,\mathcal{K})=1$ for each $\A\in\mathcal{K}$.
The connection between \Fraisse\ classes with finite small Ramsey degrees and ordered expansions is made explicit in Section 10 of \cite{Kechris/Pestov/Todorcevic05},
where it is shown that if an ordered expansion $\mathcal{K}^{<}$ of a \Fraisse\ class $\mathcal{K}$ has the Ramsey property,
then $\mathcal{K}$ has finite small Ramsey degrees. Furthermore, the degree of $\A\in\mathcal{K}$ can be computed from the number of non-isomorphic order expansions it has in $\mathcal{K}^{<}$.
Nguyen Van Th\'{e} has extended this to the more general notion of
pre-compact expansions (see \cite{NVTHabil}).
In particular, the
classes of finite (unordered) graphs and finite (unordered) graphs omitting $k$-cliques have finite small Ramsey degrees.
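To illustrate, since the class of finite ordered graphs has the Ramsey property, the Ramsey degree of a graph $A$ in the class $\mathcal{G}$ of finite graphs equals the number of non-isomorphic order expansions of $A$, namely
\begin{equation}
t(A,\mathcal{G})=\frac{|A|!}{|\mathrm{Aut}(A)|}.
\end{equation}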
Continuing this expansion of Ramsey theory leads to investigations of which infinite structures have properties similar to Theorem \ref{thm.RamseyInfinite}.
Notice that the infinite homogeneous subset $N\sse\mathbb{N}$ in
Theorem \ref{thm.RamseyInfinite}
is actually isomorphic to
$\mathbb{N}$ as a linearly ordered structure.
Ramsey theory on infinite structures is concerned with finding substructures isomorphic to the original infinite structure in which a given coloring is as simple as possible.
Many infinite structures have been proved to be
{\em indivisible}: given a coloring of its single-element substructures
into finitely many colors, there is an infinite substructure isomorphic to the original structure in which all single-element substructures have the same color.
The natural numbers and the rational numbers as linearly ordered structures are indivisible, the proofs being straightforward.
Similarly, it is folklore that the Rado graph is indivisible, the proof following naturally from the defining properties of this graph.
In contrast, it took much more effort to prove the indivisibility of the Henson graphs, and this was achieved first for the triangle-free Henson graph in
\cite{Komjath/Rodl86}, and for all other Henson graphs in \cite{El-Zahar/Sauer89}.
When one
considers colorings of structures of two or more elements, more complexity begins to emerge.
Even for the simple structure of the rationals, there is a coloring of pairsets into two colors such that each subset isomorphic to the rationals has pairsets in both colors.
This is the infamous example of Sierpi\'{n}ski,
and it immediately leads to the notion of big Ramsey degree.
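Concretely, Sierpi\'{n}ski's coloring may be presented as follows. Fix an enumeration $\lgl q_n:n\in\mathbb{N}\rgl$ of the rationals and, for $m<n$, color the pair $\{q_m,q_n\}$ according to whether the enumeration agrees with the usual order of the rationals:
\begin{equation}
c(\{q_m,q_n\})=
\begin{cases}
0 & \mathrm{if\ } q_m<q_n,\\
1 & \mathrm{if\ } q_m>q_n.
\end{cases}
\end{equation}
Given any $X\sse\mathbb{Q}$ order-isomorphic to $\mathbb{Q}$, let $q_m$ be the member of $X$ appearing earliest in the enumeration; as $X$ has no greatest or least element, $X$ contains later-enumerated rationals on both sides of $q_m$, so pairs of both colors appear in $X$.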
We take the definition from \cite{Kechris/Pestov/Todorcevic05}, slightly changing some notation.
\begin{defn}[\cite{Kechris/Pestov/Todorcevic05}]\label{defn.bRd}
Given an infinite structure $\mathcal{S}$ and a finite substructure $\A\le \mathcal{S}$,
let $T(\A,\mathcal{S})$ denote the least integer $T\ge 1$, if it exists, such that
given any coloring of ${\mathcal{S}\choose \A}$ into finitely many colors, there is a
substructure $\mathcal{S}'$ of $\mathcal{S}$, isomorphic to $\mathcal{S}$, such that ${\mathcal{S}'\choose \A}$ takes no more than $T$ colors.
This may be written succinctly as
\begin{equation}\label{eq.bRd}
\forall j\ge 1,\ \ {\mathcal{S}}\ra ({\mathcal{S}})^{\A}_{j,T(\A,\mathcal{S})}.
\end{equation}
We say that
$\mathcal{S}$ has {\em finite big Ramsey degrees} if for each finite substructure
$\A\le\mathcal{S}$,
there is an integer $T(\A,\mathcal{S})\ge 1$
such that (\ref{eq.bRd}) holds.
\end{defn}
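For example, taking $\mathcal{S}$ to be the rationals as a linear order and $\A$ a two-element linear order, Sierpi\'{n}ski's coloring shows that $T(\A,\mathbb{Q})\ge 2$, and Devlin's work \cite{DevlinThesis}, discussed below, gives $T(\A,\mathbb{Q})=2$: every finite coloring of pairs of rationals takes at most two colors on some copy of $\mathbb{Q}$, but no fewer.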
Infinite structures which have been investigated in this light include the rationals
(\cite{DevlinThesis}),
the Rado graph (\cite{Erdos/Hajnal/Posa75},
\cite{Pouzet/Sauer96}, \cite{Sauer06}, \cite{Laflamme/Sauer/Vuksanovic06}),
ultrametric spaces (\cite{NVT08}), the
rationals with a fixed finite number of equivalence relations, and
the tournaments $\bf{S}(2)$ and $\bf{S}(3)$
(\cite{Laflamme/NVT/Sauer10}),
and recently, the triangle-free Henson graph (\cite{DobrinenJML20}).
These results will be discussed below.
See \cite{NVTHabil} for an overview of results on big Ramsey degrees obtained prior to 2013.
Each of these structures is {\em ultrahomogeneous}: any isomorphism between two finitely generated substructures can be extended to an automorphism of the infinite structure.
Recently, \Masulovic\ has
widened the investigation of big Ramsey degrees to universal structures, regardless of ultrahomogeneity, and proved transfer principles in \cite{Masulovic18} from which big Ramsey degrees for one structure may be transferred to other categorically related structures.
More background on the development of Ramsey theory on infinite structures will be given below, but first, we present some recent motivation
from topological dynamics for further exploration of big Ramsey degrees.
Connections between
topological dynamics and Ramsey theory have been known for some time.
The work of
Kechris, Pestov, and Todorcevic in \cite{Kechris/Pestov/Todorcevic05}
subsumed many
previously known phenomena by proving
several general correspondences between Ramsey theory and topological dynamics.
A \Fraisse\ class in which
at least one of the relations is a linear order is called a {\em \Fraisse\ order class}.
One of the main theorems in \cite{Kechris/Pestov/Todorcevic05} (Theorem 4.7)
shows that the extremely amenable
closed subgroups of the infinite symmetric group $S_{\infty}$ are exactly those of the form Aut$(\mathbf{F})$, where $\mathbf{F}$
is the \Fraisse\ limit (and hence an ultrahomogeneous structure) of some \Fraisse\ order class satisfying the Ramsey property.
Another significant theorem (Theorem 10.8)
provides a way to compute the universal minimal flow of
topological groups which arise as the automorphism groups of \Fraisse\ limits of \Fraisse\ classes with the Ramsey property and the ordering property.
That the ordering property can be relaxed to the expansion property was proved by Nguyen Van Th\'{e} in \cite{NVT13}.
In \cite{Kechris/Pestov/Todorcevic05}, Kechris, Pestov, and Todorcevic also demonstrated how big Ramsey degrees for \Fraisse\ structures $\mathbf{F}$ are related to
big oscillation degrees for their automorphism groups, Aut$(\mathbf{F})$.
Recently,
Zucker proved in \cite{Zucker19} that
if a \Fraisse\ structure $\mathbf{F}$ has finite big Ramsey degrees and moreover, $\mathbf{F}$ admits a big Ramsey structure,
then
any big Ramsey flow of Aut$(\mathbf{F})$ is a universal completion flow, and further, any two universal completion flows are isomorphic.
\subsection{A brief history of big Ramsey degrees and main obstacles to its development}
In contrast to the robust development for finite structures,
results on the Ramsey theory of infinite structures have been meager and the development quite slow.
Motivated by Sierpi\'{n}ski's coloring for pairs of rationals which admits no isomorphic copy in one color,
Laver investigated the more general problem of determining whether there are bounds for colorings of $m$-sized subsets of the rationals, for each positive integer $m$.
In the 1970's,
Laver showed that the rationals have finite big Ramsey degrees, finding good upper bounds.
Guided by Laver's results and methods,
Devlin found
the exact bounds in
\cite{DevlinThesis}.
Interestingly, these numbers turn out to be coefficients of the Taylor series for the tangent function.
Around the same time,
\Erdos, Hajnal, and P\'{o}sa initiated investigations of the Rado graph, the universal ultrahomogeneous graph on countably many vertices.
In 1975, they proved in \cite{Erdos/Hajnal/Posa75}
that there is a coloring of edges into two colors in which
each subcopy of the Rado graph has edges in both colors.
That the upper bound for the big Ramsey degree of edges in the Rado graph is exactly two was proved much later (1996)
by
Pouzet and Sauer in \cite{Pouzet/Sauer96}.
The full Ramsey theory of the Rado graph
was finally established a decade later by
Sauer in \cite{Sauer06} and by Laflamme, Sauer, and Vuksanovic in \cite{Laflamme/Sauer/Vuksanovic06}.
Together, these two papers gave a full description of the big Ramsey degrees of the Rado graph in terms of types of certain trees.
A recursive procedure for computing these numbers was given by Larson in \cite{Larson08} soon after.
It is notable that while the big Ramsey degrees for the rationals are computed by a closed formula,
there is no closed formula producing the big Ramsey degrees for the Rado graph.
The successful completion of the Ramsey theory of the Rado graph so soon after the work
of Kechris, Pestov, and Todorcevic stimulated more interest in Ramsey theory of infinite structures, especially ultrahomogeneous structures, which are obtained as limits of \Fraisse\ classes.
In 2008,
Nguyen Van Th\'{e}
investigated big Ramsey degrees for ultrahomogeneous ultrametric spaces.
Given $S$ a set of positive real numbers,
$\mathcal{U}_S$ denotes the class of all finite ultrametric spaces with strictly positive distances in $S$.
Its \Fraisse\ limit, denoted
$\mathbf{Q}_S$, is called the {\em Urysohn space associated with} $\mathcal{U}_S$.
In \cite{NVT08},
Nguyen Van Th\'{e} proved that
$\mathbf{Q}_S$ has finite big Ramsey degrees whenever $S$ is finite.
Moreover, if $S$ is infinite, then any member of $\mathcal{U}_S$ of size greater than or equal to $2$ does not have a big Ramsey degree.
Soon after this,
Laflamme, Nguyen Van Th\'{e}, and Sauer proved
in
\cite{Laflamme/NVT/Sauer10}
that enriched structures of the rationals, and two related directed graphs, have finite big Ramsey degrees.
For each $n\ge 1$,
$\mathbb{Q}_n$ denotes the structure $(\mathbb{Q}, Q_1,\dots, Q_n,<)$, where $Q_1,\dots, Q_n$ are disjoint dense subsets of $\mathbb{Q}$ whose union is $\mathbb{Q}$.
This is the \Fraisse\ limit of the class $\mathcal{P}_n$ of all finite linear orders equipped with an equivalence relation with $n$ many equivalence classes.
Laflamme, Nguyen Van Th\'{e}, and Sauer proved that
each member of $\mathcal{P}_n$ has a finite big Ramsey degree in $\mathbb{Q}_n$.
Further,
using the bi-definability between $\mathbb{Q}_n$ and the circular directed graphs $\mathbf{S}(n)$, for $n=2,3$,
they proved that
$\mathbf{S}(2)$ and $\mathbf{S}(3)$
have finite big Ramsey degrees.
Central to these results is a colored version of Milliken's theorem which they proved in order to deduce the big Ramsey degrees.
For more details,
we recommend the paper \cite{NVTHabil} containing a good overview
of these results.
A common theme emerges when one looks at the proofs in \cite{DevlinThesis}, \cite{Sauer06}, and
\cite{Laflamme/NVT/Sauer10}.
The first two rely in an essential way on Milliken's Theorem,
(see Theorem \ref{thm.Milliken} in Section \ref{sec.2}).
The third proves a new colored version of Milliken's Theorem and uses it to deduce the results.
The results in \cite{NVT08} use Ramsey's theorem.
This would lead one to conclude or at least conjecture that, aside from Ramsey's Theorem itself, Milliken's Theorem contains the core combinatorial content of big Ramsey degree results, at least for binary relational structures.
The lack of useful representations and lack of Milliken-style
theorems for infinite structures in general
pose the two main obstacles to broader investigations of big Ramsey degrees.
Upon the author's initial interest in the Ramsey theory of the triangle-free Henson graph, these two challenges
were pointed out to the author by Todorcevic in 2012 and by Sauer in 2013;
this idea is also expressed in \cite{NVTHabil}, quoted in the Overview.
\subsection{Big Ramsey degrees for Henson graphs: Main theorem and prior results}
For $k\ge 3$, the Henson graph $\mathcal{H}_k$ is the universal ultrahomogeneous $k$-clique free graph.
These graphs were first constructed by
Henson in 1971
in \cite{Henson71}.
It was later noticed that $\mathcal{H}_k$ is isomorphic to the \Fraisse\ limit of the \Fraisse\ class of finite $k$-clique free graphs, $\mathcal{G}_k$.
Henson proved in \cite{Henson71} that
these graphs are {\em weakly indivisible}: given a coloring of the vertices into two colors, either there is a subgraph isomorphic to $\mathcal{H}_k$ in which all vertices have the first color, or else every finite $k$-clique free graph has a copy whose vertices all have the second color.
However, the indivisibility of $\mathcal{H}_k$ took longer to prove.
In 1986,
Komj\'{a}th and \Rodl\ proved
in \cite{Komjath/Rodl86}
that
given a coloring of the vertices of $\mathcal{H}_3$ into finitely many colors, there is an induced subgraph isomorphic to $\mathcal{H}_3$ in which all vertices have the same color.
A few years later,
El-Zahar and Sauer proved
more generally that
$\mathcal{H}_k$ is indivisible for each $k\ge 4$ in \cite{El-Zahar/Sauer89}.
Prior to the author's work in \cite{DobrinenJML20},
the only further progress on big Ramsey degrees for Henson graphs was for edge relations on the triangle-free Henson graph.
In 1998,
Sauer proved in \cite{Sauer98} that the big Ramsey degree for edges in $\mathcal{H}_3$ is two.
There, progress stalled for lack of techniques.
This intrigued the author for several reasons.
Sauer and Todorcevic each stated to the author that the heart of the problem was to
find the correct analogue of Milliken's Theorem applicable to Henson graphs.
This would of course
help solve the problem of whether Henson graphs have finite big Ramsey degrees, but it would moreover have broader repercussions, as Ramsey theorems for trees are combinatorially strong objects and Milliken's Theorem has already found numerous applications.
The problem was that it was unclear to experts what such an analogue of Milliken's Theorem should be.
In \cite{DobrinenJML20}, the author developed
an analogue of Milliken's Theorem applicable to the
triangle-free Henson graph, and used it to
prove that $\mathcal{H}_3$ has finite big Ramsey degrees.
In this paper, we provide a unified approach to the Ramsey theory of the $k$-clique-free Henson graphs $\mathcal{H}_k$, for each $k\ge 3$.
This includes the
development of new types of trees which code Henson graphs and new Milliken-style theorems for these classes of trees which are applied to determine upper bounds for the big Ramsey degrees.
Our presentation encompasses and streamlines work in \cite{DobrinenJML20} for $\mathcal{H}_3$.
New obstacles arose for $k\ge 4$;
these challenges and their solutions are discussed as the sections of the paper are delineated below.
\subsection{Outline of paper}
In Section \ref{sec.2}, we introduce basic definitions
and notation, and review strong trees and
the
Halpern-\Lauchli\ and Milliken Theorems
(Theorems \ref{thm.HL} and \ref{thm.Milliken})
and their use in obtaining upper bounds for the finite big Ramsey degrees of the Rado graph.
In Subsection \ref{subsec.HLHarrington}, we include a version of Harrington's forcing proof of the Halpern-\Lauchli\ Theorem.
Many new issues arise due to $k$-cliques being forbidden in Henson graphs,
but the proof of Theorem \ref{thm.HL} will at least provide the reader with a toehold into
the
proof strategies for Theorem \ref{thm.matrixHL} and Lemma \ref{lem.Case(c)}, which lead to Theorem \ref{thm.MillikenSWP}, an analogue of Milliken's Theorem.
The article consists of three main phases.
Phase I occurs in Sections
\ref{sec.3}--\ref{sec.ExtLem}, where we define the tree structures and prove extension lemmas.
In Section \ref{sec.3},
we introduce
the notion of {\em strong $K_k$-free trees},
analogues of Milliken's strong trees
capable of coding $k$-clique free graphs.
These trees contain certain distinguished nodes, called {\em coding nodes},
which code the vertices of a given graph.
These trees branch maximally, subject to the constraint of the coding nodes not coding any $k$-cliques, and thus are the analogues of strong trees for the $K_k$-free setting.
Model-theoretically, such trees are simply coding all (incomplete) $1$-types over initial segments of a Henson graph, where the vertices are indexed by the natural numbers.
Although it is not possible to fully develop Ramsey theory on
strong $K_k$-free trees (see Example \ref{ex.badcoloring}),
they have the main structural aspects of the trees for which we will prove analogues of Halpern-\Lauchli\ and Milliken Theorems,
defined in Section \ref{sec.4}.
Section \ref{sec.3} is given for the sole purpose of building the reader's understanding of strong $\mathcal{H}_k$-coding trees.
Section \ref{sec.4} presents the definition of {\em strong $\mathcal{H}_k$-coding trees} as subtrees of a given tree $\mathbb{T}_k$ which are strongly isomorphic
(Definition \ref{defn.stable})
to $\mathbb{T}_k$.
The class of these trees is denoted $\mathcal{T}_k$, and these trees are best thought of as skew versions of the trees presented in
Section \ref{sec.3}.
Secondarily, an internal description of the trees in $\mathcal{T}_k$ is given.
This is a new and simpler approach than the one we took in \cite{DobrinenJML20} for the triangle-free Henson graph.
An important property of
strong $\mathcal{H}_k$-coding
trees is
the Witnessing Property
(Definition \ref{defn.PrekCrit}).
This means that certain configurations which can give rise to codings of pre-cliques (Definition \ref{defn.newll1s})
are witnessed by coding nodes.
The effect is a type of book-keeping to guarantee when finite trees can be extended within a given tree $T\in\mathcal{T}_k$ to another tree in $\mathcal{T}_k$.
Section \ref{sec.ExtLem} presents Extension Lemmas,
guaranteeing when a given finite tree can be extended to a desired configuration.
For $k\ge 4$, some new difficulties arise which did not exist for $k=3$.
The lemmas in this section extend work in
\cite{DobrinenJML20}, while addressing new complexities.
Further, this section includes some new extension lemmas not in \cite{DobrinenJML20}.
These have the
added benefit of streamlining proofs in Section \ref{sec.5}
in which analogues of the Halpern-\Lauchli\ Theorem are proved.
Phase II of the paper takes place in Sections \ref{sec.5} and \ref{sec.1SPOC},
the goal being to
prove the Milliken-style Theorem \ref{thm.MillikenSWP} for colorings of certain finite trees, namely those with the Strict Witnessing Property (see Definition \ref{defn.SWP}).
First, we prove analogues of
the Halpern-\Lauchli\ Theorem for strong $\mathcal{H}_k$-coding trees
in Theorem
\ref{thm.matrixHL}.
The proof builds on ideas from Harrington's forcing proof of the Halpern-\Lauchli\ Theorem,
but now we must use
distinct forcings for two
cases: the level sets being colored have either a coding node or else a splitting node.
The Extension Lemmas from Section \ref{sec.ExtLem} and the Witnessing Property are essential to these proofs.
A new ingredient for $k\ge 4$ is that all pre-cliques need to be considered and witnessed, not just pre-$k$-cliques.
It is important to note that
the technique of forcing is used to conduct unbounded searches for finite objects;
the proof takes place entirely within the standard axioms of set theory and does not involve passing to a generic model.
In Section \ref{sec.1SPOC},
we apply induction and fusion lemmas and
a third Harrington-style forcing argument to obtain our first Ramsey Theorem for colorings of finite trees.
\begin{thm7.3}
Let $k\ge 3$ be given and let $T\in\mathcal{T}_k$ be a strong $\mathcal{H}_k$-coding tree and let $A$ be a finite subtree of $T$ satisfying the Strict Witnessing Property.
Then for any coloring of the copies of $A$ in $T$ into finitely many colors,
there is a strong $\mathcal{H}_k$-coding tree $S$ contained in $T$
such that all copies of $A$ in $S$ have the same color.
\end{thm7.3}
Phase III of the article takes place in Sections \ref{sec.MainThm} and \ref{sec.7}.
There,
we prove a Ramsey theorem for finite antichains of coding nodes (Theorem \ref{thm.mainRamsey}), which is then applied to deduce that each Henson graph has finite big Ramsey degrees.
To do this, we must first develop a way to transform
antichains of coding nodes into finite trees with the Strict Witnessing Property.
This is accomplished
in Subsections
\ref{sec.squiggle}
and
\ref{sec.1color}, where
we develop the notions of incremental new pre-cliques
and envelopes.
Given any finite $K_k$-free graph $\G$, there are only finitely many strict similarity types (Definition
\ref{defn.ssimtype})
of antichains coding $\G$.
Given a
coloring $c$ of all copies of $\G$ in $\mathcal{H}_k$ into finitely many colors,
we
transfer the coloring to the envelopes of copies of $\G$ in a given strong coding tree $T$.
Then we
apply the results in previous sections to obtain a strong $\mathcal{H}_k$-coding tree $T'\le T$ in which all envelopes encompassing the same strict similarity type have the same color.
Upon thinning
to an incremental strong subtree $S\le T'$ while simultaneously choosing a set $W\sse T'$ of {\em witnessing coding nodes},
each finite
antichain $X$ of nodes in $S$ is incremental and
has an envelope comprised of nodes from $W$ satisfying the Strict Witnessing Property.
Applying Theorem \ref{thm.MillikenSWP} finitely many times, once for each strict similarity type of envelope,
we obtain our second Ramsey theorem for strong $\mathcal{H}_k$-coding trees, extending the first one.
\begin{thmR2}
[Ramsey Theorem for Strict Similarity Types]
Fix $k\ge 3$.
Let $Z$ be a finite antichain of coding nodes in a strong $\mathcal{H}_k$-coding tree $T$,
and let $h$ be a coloring of all subsets of $T$ which are strictly similar to $Z$ into finitely many colors.
Then there is an incremental strong $\mathcal{H}_k$-coding tree $S$ contained in $T$ such that all subsets of $S$
strictly similar to $Z$ have the same $h$ color.
\end{thmR2}
Upon taking an antichain of coding nodes $D\sse S$ coding $\mathcal{H}_k$,
the only sets of coding nodes in $D$ coding a given finite $K_k$-free graph $\G$ are automatically antichains which are incremental.
Applying Theorem \ref{thm.mainRamsey} to the finitely many strict similarity types of antichains coding $\G$, we arrive at the main theorem.
\begin{thmfinalthm}
The universal homogeneous $k$-clique free graph has finite big Ramsey degrees.
\end{thmfinalthm}
For each $\G\in\mathcal{G}_k$,
the number $T(\G,\mathcal{G}_k)$
is bounded by the number of strict similarity types of antichains of coding nodes coding $\G$.
It is presently open whether this number is also the lower bound.
If so, then recent work of Zucker in \cite{Zucker19} would provide an interesting connection with topological dynamics, as the colorings obtainable from our structures cohere in the manner necessary to apply Zucker's work.
\vskip.1in
\it Acknowledgements. \rm The author would like to thank
Andy Zucker for careful reading of a previous version of this paper, pointing out an oversight which led the author to consider singleton pre-cliques when $k\ge 4$, an issue that does not arise in the triangle-free Henson graph.
Much gratitude also goes to
Dana Barto\v{s}ov\'a and Jean Larson for listening to early and later stages of these results and for useful feedback;
Norbert Sauer for discussing key aspects of the homogeneous triangle-free graph with me during a research visit in Calgary in 2014; Stevo Todorcevic for pointing out to me in 2012 that any proof of finite Ramsey degrees for $\mathcal{H}_3$ would likely involve a new type of Halpern-\Lauchli\ Theorem;
and to the organizers and participants of the
Ramsey Theory Doccourse at Charles University, Prague, 2016, for their encouragement.
Most of all, I am grateful for and much indebted to
Richard Laver for
providing for me in 2011 the main points of
Harrington's forcing proof of the Halpern-\Lauchli\ Theorem,
opening the path of applying forcing methods to develop Ramsey theory on infinite structures.
\section{Background and the Milliken and Halpern-\Lauchli\ Theorems}\label{sec.2}
This section provides background and sets some notation and terminology for the paper.
We review
Milliken's Ramsey Theorem for trees and its application to proving that the Rado graph has finite big Ramsey degrees.
Then we discuss why this theorem is not sufficient for handling Henson graphs.
In Subsection \ref{subsec.HLHarrington}, we present
the Halpern-\Lauchli\ Theorem, which is a Ramsey theorem on
products of trees.
This theorem forms the basis for
Milliken's Theorem.
We present a version of
Harrington's forcing proof of the Halpern-\Lauchli\ Theorem in order to offer the reader an introduction to
the tack we take in later sections toward proving Theorem \ref{thm.MillikenSWP}.
\subsection{Coding graphs via finite binary sequences}\label{subsection.treescodinggraphs}
The following notation shall be used throughout.
The set of all natural numbers $\{0,1,2,\dots\}$ is denoted by $\mathbb{N}$.
Each natural number $n$
is equated with the set of all natural numbers strictly less than $n$; thus, $n=\{0,\dots,n-1\}$ and, in particular,
$0$ is the empty set.
It follows that for $m,n\in \mathbb{N}$, $m<n$ if and only if $m\in n$.
For $n\in \mathbb{N}$, we shall let $\Seq_n$ denote the collection of
all binary sequences of length $n$.
Thus, each $s\in \Seq_n$ is a
function from $n$ into $\{0,1\}$, and
we often write $s$ as $\lgl s(0), \dots, s(n-1)\rgl$.
For $i<n$, $s(i)$ denotes the $i$-th {\em value} or {\em entry} of the sequence $s$.
The {\em length} of $s$, denoted $|s|$, is the domain of $s$.
Note that $\Seq_0$ contains just the empty sequence, denoted $\lgl \rgl$.
\begin{notation}\label{notn.Seq}
We shall let $\Seq$ denote $\bigcup_{n\in \mathbb{N}}\Seq_n$,
the collection of all binary sequences of finite length.
\end{notation}
For nodes $s,t\in \Seq$, we write
$s\sse t$ if and only if $|s|\le |t|$ and for each $i<|s|$, $s(i)=t(i)$.
In this case, we say that $s$ is an {\em initial segment} of $t$, or that $t$ {\em extends} $s$.
If $s$ is an initial segment of $t$ and $|s|<|t|$, then we write $s\subset t$ and say that $s$ is a {\em proper initial segment} of $t$.
For $i\in \mathbb{N}$, we let $s\re i$ denote
the sequence $s$ restricted to domain $i$.
Thus, if $i< |s|$, then $s\re i$ is
the proper initial segment of $s$ of length $i$, $s\re i=\lgl s(0),\dots, s(i-1)\rgl$;
if $i\ge |s|$, then $s\re i$ equals $s$.
Let $\Seq_{\le n}$ denote $\bigcup_{i\le n}\Seq_i$, the set of all binary sequences of length at most $n$.
In \cite{Erdos/Hajnal/Posa75},
\Erdos, Hajnal and P\'{o}sa
used the edge relation on a given ordered graph to induce a
lexicographic order
on the vertices, which
they employed
to solve problems regarding strong embeddings of graphs.
With this lexicographic order,
vertices in a given graph can
be represented by
finite sequences of $0$'s and $1$'s, a
view made explicit in \cite{Sauer98}, which we review now using terminology from \cite{Sauer06}.
\begin{defn}[\cite{Sauer06}]\label{defn.pn}
Given nodes $s,t\in \Seq$ with $|s|<|t|$,
we call the integer $t(|s|)$
the {\em passing number} of $t$ at $s$.
Passing numbers represent the edge relation in a graph as follows:
Two vertices $v,w$ in a graph can be
{\em represented} by nodes $s,t\in \Seq$ with $|s|<|t|$, respectively, if
\begin{equation}
v\, E\, w \Longleftrightarrow t(|s|)=1.
\end{equation}
\end{defn}
Using this correspondence between passing numbers and the edge relation $E$, any graph can be coded by nodes in a binary tree as follows.
Let $\G$ be a graph with $N$ vertices,
where either $N\in \mathbb{N}$ or $N=\mathbb{N}$,
and
let $\lgl v_n:n\in N\rgl$ be any enumeration of
the vertices of $\G$.
Choose any node $t_0\in \Seq$ to represent the vertex $v_0$.
For $n>0$,
given nodes $t_0,\dots,t_{n-1}$ in $\Seq$ coding the vertices $v_0,\dots,v_{n-1}$,
take $t_n$ to be any node in $\Seq$ such that
$|t_n|>|t_{n-1}|$ and
for all $i< n$,
$v_n$ and $ v_i$ have an edge between them if and only if $t_n(|t_i|)=1$.
Then the set of nodes $\{t_n:n\in N\}$ codes the graph $\G$.
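This construction is effectively computable. The following sketch is our own illustration (the function name and the representation of nodes as $0$/$1$ tuples are not from the literature); it produces coding nodes of strictly increasing lengths for any finite graph given by adjacency lists:

```python
def code_graph(adj):
    """Given adjacency lists for vertices v_0,...,v_{N-1} (adj[n] is the set
    of i < n joined to v_n by an edge), return coding nodes t_0,...,t_{N-1}
    of strictly increasing lengths with t_n(|t_i|) = 1 iff v_n E v_i."""
    nodes = []  # nodes[i] is the tuple of 0/1 entries of t_i
    for n, nbrs in enumerate(adj):
        t = [0] * n  # any strictly increasing lengths work; take |t_n| = n
        for i in range(n):
            # the passing number of t_n at t_i records the edge relation
            t[len(nodes[i])] = 1 if i in nbrs else 0
        nodes.append(tuple(t))
    return nodes

# Example: the four-cycle v_0 E v_1 E v_2 E v_3 E v_0 of Figure 1
cycle = [set(), {0}, {1}, {0, 2}]
ts = code_graph(cycle)
```

The choice $|t_n|=n$ is one convenient option; the construction in the text allows any strictly increasing sequence of lengths.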
For the purposes of developing the Ramsey theory of Henson graphs, we make the convention that
the nodes in a tree used to code the vertices in a
graph have different lengths.
Figure 1.\ shows a set of nodes $\{t_0,t_1,t_2,t_3\}$ from $\Seq$ coding the four-cycle $\{v_0,v_1,v_2,v_3\}$.
\begin{figure}
\begin{tikzpicture}[grow'=up,scale=.5]
\tikzstyle{level 1}=[sibling distance=4in]
\tikzstyle{level 2}=[sibling distance=2in]
\tikzstyle{level 3}=[sibling distance=1in]
\tikzstyle{level 4}=[sibling distance=1in]
\node {}
child{
child{
child{edge from parent[draw=none]
child{edge from parent[draw=none]}
child{edge from parent[draw=none]}}
child{ coordinate (t2)
child{edge from parent[draw=none]}
child {edge from parent[draw=none]}}
}
child{ coordinate (t1)
child{
child{edge from parent[draw=none]}
child{coordinate (t3)}}
child {edge from parent[draw=none]} }}
child{coordinate (t0)
child{edge from parent[draw=none]
child{edge from parent[draw=none]
child{edge from parent[draw=none]}
child{edge from parent[draw=none]}
}
child{edge from parent[draw=none]
child{edge from parent[draw=none]}
child {edge from parent[draw=none]}}
}
child {edge from parent[draw=none]}};
\node[right] at (t0) {$t_{0}$};
\node[right] at (t1) {$t_{1}$};
\node[right] at (t2) {$t_{2}$};
\node[right] at (t3) {$t_{3}$};
\node[circle, fill=black,inner sep=0pt, minimum size=7pt] at (t0) {};
\node[circle, fill=black,inner sep=0pt, minimum size=7pt] at (t1) {};
\node[circle, fill=black,inner sep=0pt, minimum size=7pt] at (t2) {};
\node[circle, fill=black,inner sep=0pt, minimum size=7pt] at (t3) {};
\draw[thick, dotted] let \p1=(t1) in (-12,\y1) node (v1) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t2) in (-12,\y1) node (v2) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t3) in (-12,\y1) node (v3) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t0) in (-12,\y1) node (v0) {$\bullet$} -- (7,\y1);
\node[left] at (v1) {$v_1$};
\node[left] at (v2) {$v_2$};
\node[left] at (v3) {$v_3$};
\node[left] at (v0) {$v_0$};
\draw[thick] (v0.center) to (v1.center) to (v2.center) to (v3.center) to [bend left] (v0.center);
\end{tikzpicture}
\caption{A tree with nodes $\{t_0,t_1,t_2,t_3\}$ coding the 4-cycle $\{v_0,v_1,v_2,v_3\}$}
\end{figure}
\subsection{Trees and related notation}
In Ramsey theory on trees, it is standard to use the slightly relaxed definition of tree below.
(See for instance Chapter 6 of \cite{TodorcevicBK10}.)
Recall that
the {\em meet} of two nodes $s$ and $t$ in $\Seq$, denoted $s\wedge t$,
is the maximal initial segment of both $s$ and $t$.
In particular, if $s\sse t$ then $s\wedge t=s$.
A set of nodes $A\sse \Seq$ is {\em closed under meets}
if $s\wedge t$ is in $A$, for every pair $s,t\in A$.
\begin{defn}[Tree]\label{defn.tree}
A subset $T\sse \Seq$ is a {\em tree}
if $T$
is closed under meets and for each pair $s,t\in T$
with $|s|\le |t|$,
$t\re |s|$ is also in $T$.
\end{defn}
Thus, in our context, a {\em tree} $T$ is not necessarily closed under initial segments in $\Seq$, but rather is closed under initial segments having length in $L(T)=\{|s|:s\in T\}$.
Given a tree $T\sse \Seq$, let
\begin{equation}
\widehat{T}=\{
s\in \Seq: \exists t\in T\ \exists n\le |t|\, (s=t\re n)\}.
\end{equation}
Thus, $\widehat{T}$ is closed under initial segments in
the usual sense.
Given a tree $T\sse \Seq$, for each node $t\in T$, define
\begin{equation}\label{eq.ht_T}
\mbox{ht}_T(t)
=|\{s\in T:s\subset t\}|,
\end{equation}
where we recall that $s\subset t$ denotes that $s$ is a proper initial segment of $t$.
For
$n\in\bN$, define
\begin{equation}
T(n)=\{t\in T:\mbox{ht}_T(t)=n\}.
\end{equation}
A set $X\sse T$ is called a {\em level set}
if $X\sse T(n)$ for some $n\in\bN$.
Given $l\in \bN$, let
\begin{equation}
T\re l=\{t\in\widehat{T}:|t|=l\}.
\end{equation}
If $l$ is in $L(T)$, then $T\re l$ is a level subset of $T$.
In this case, there is some $n$ such that $T\re l=T(n)$.
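For finite trees in this relaxed sense, the height function and the level sets are straightforward to compute. A small sketch of our own (nodes again represented as $0$/$1$ tuples):

```python
def levels(T):
    """Group the nodes of a finite tree T (a set of 0/1 tuples) by height,
    where ht_T(t) counts the proper initial segments of t lying in T."""
    def ht(t):
        return sum(1 for s in T if len(s) < len(t) and t[:len(s)] == s)
    lv = {}
    for t in T:
        lv.setdefault(ht(t), set()).add(t)
    return lv  # lv[n] is the level set T(n)

# Example: a tree with levels T(0), T(1), T(2)
T = {(), (0,), (1,), (0, 0), (1, 1)}
lv = levels(T)
```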
\subsection{Milliken's Theorem, its use in proving upper bounds for big Ramsey degrees of the Rado graph, and its insufficiency for Henson graphs}\label{subsection.HLM}
A Ramsey theorem for colorings of finite subtrees of a given tree was proved by Milliken in 1979 (Theorem \ref{thm.Milliken} below).
This theorem has turned out to be central to proving upper bounds for
big Ramsey degrees of several types of ultrahomogeneous structures,
including the rationals as a linearly ordered structure in \cite{DevlinThesis},
the Rado graph and other simple binary relational structures in \cite{Sauer06} and \cite{Laflamme/Sauer/Vuksanovic06},
and
circular directed graphs and rationals with finitely many equivalence relations $\mathbb{Q}_n$ in \cite{Laflamme/NVT/Sauer10}.
A good review of these results appears in \cite{NVTHabil}.
Chapter 6 of \cite{TodorcevicBK10} provides a solid foundation for understanding how Milliken's theorem is used to deduce upper bounds on big Ramsey degrees for both the rationals and the Rado graph.
Given a tree $T\sse\Seq$,
a node $t\in T$ {\em splits} in $T$ if and only if there are $u,v\in T$ properly extending $t$ with
$u(|t|)=0$ and $v(|t|)=1$.
\begin{defn}[Strong tree]\label{defn.strongtree}
Given a tree
$T\sse \Seq$, recall that $L(T)=\{|s|:s\in T\}$, the set of lengths of nodes in $T$.
We say that
$T$ is a
{\em strong tree} if and only if
for each $t\in T$ with $|t|\ne\sup L(T)$,
$t$ splits in $T$.
We say that $T$ is
a {\em strong tree of height $k$}, or simply a {\em $k$-strong tree}, if and only if $L(T)$ has exactly $k$ members.
\end{defn}
Note that a strong tree $T$ has no maximal nodes if and only if $L(T)$ is infinite.
Further,
each $k$-strong subtree of $\Seq$
is isomorphic as a tree to some binary tree of height $k$, where the isomorphism preserves relative lengths of nodes.
In particular, a $1$-strong tree is simply a node in $\Seq$.
See Figure 2.\ for an example of a $3$-strong tree $T$ with $L(T)=\{0,2,5\}$.
\begin{figure}
\begin{tikzpicture}[grow'=up,scale=.6]
\tikzstyle{level 1}=[sibling distance=4in]
\tikzstyle{level 2}=[sibling distance=2in]
\tikzstyle{level 3}=[sibling distance=1in]
\tikzstyle{level 4}=[sibling distance=0.5in]
\tikzstyle{level 5}=[sibling distance=0.2in]
\node {} coordinate (t9)
child{coordinate (t0) edge from parent[thick]
child{coordinate (t00) edge from parent[thin]
child{coordinate (t000)
child {coordinate(t0000)
child{coordinate(t00000) edge from parent[color=black] }
child{coordinate(t00001)}}
child {coordinate(t0001) edge from parent[color=black]
child {coordinate(t00010)}
child{coordinate(t00011)}}}
child{ coordinate(t001)
child{ coordinate(t0010)
child{ coordinate(t00100)}
child{ coordinate(t00101) edge from parent[color=black] }}
child{ coordinate(t0011) edge from parent[color=black]
child{ coordinate(t00110)}
child{ coordinate(t00111)}}}}
child{ coordinate(t01) edge from parent[color=black]
child{ coordinate(t010)
child{ coordinate(t0100) edge from parent[thin]
child{ coordinate(t01000)}
child{ coordinate(t01001)}}
child{ coordinate(t0101)
child{ coordinate(t01010) edge from parent[thin]}
child{ coordinate(t01011)}}}
child{ coordinate(t011)
child{ coordinate(t0110)
child{ coordinate(t01100) edge from parent[thin]}
child{ coordinate(t01101)}}
child{ coordinate(t0111) edge from parent[thin]
child { coordinate(t01110)}
child{ coordinate(t01111)}}}}}
child{ coordinate(t1) edge from parent[thick]
child{ coordinate(t10)
child{ coordinate(t100)
child{ coordinate(t1000) edge from parent[color=black, thin]
child{ coordinate(t10000)}
child{ coordinate(t10001)}}
child{ coordinate(t1001)
child{ coordinate(t10010)}
child{ coordinate(t10011) edge from parent[color=black, thin] }}}
child{ coordinate(t101)
child{ coordinate(t1010) edge from parent[color=black, thin]
child{ coordinate(t10100) }
child{ coordinate(t10101)}}
child{ coordinate(t1011)
child{ coordinate(t10110) edge from parent[color=black, thin] }
child{ coordinate(t10111)}}}}
child{ coordinate(t11) edge from parent[color=black, thin]
child{ coordinate(t110)
child{ coordinate(t1100)
child{ coordinate(t11000)}
child{ coordinate(t11001)}}
child{ coordinate(t1101)
child{ coordinate(t11010)}
child{ coordinate(t11011)}}}
child{ coordinate(t111)
child{ coordinate(t1110)
child{ coordinate(t11100)}
child{ coordinate(t11101)}}
child{ coordinate(t1111)
child{ coordinate(t11110)}
child{ coordinate(t11111)}}}} };
\node[left] at (t0) {$0$};
\node[left] at (t00) {$00$};
\node[left] at (t000) {$000$};
\node[left] at (t001) {$001$};
\node[left] at (t01) {$01$};
\node[left] at (t010) {$010$};
\node[left] at (t011) {$011$};
\node[right] at (t1) {$1$};
\node[right] at (t10) {$10$};
\node[right] at (t100) {$100$};
\node[right] at (t101) {$101$};
\node[right] at (t11) {$11$};
\node[right] at (t110) {$110$};
\node[right] at (t111) {$111$};
\node[circle, fill=black,inner sep=0pt, minimum size=6pt] at (t9) {};
\node[circle, fill=black,inner sep=0pt, minimum size=6pt] at (t01) {};
\node[circle, fill=black,inner sep=0pt, minimum size=6pt] at (t01011) {};
\node[circle, fill=black,inner sep=0pt, minimum size=6pt] at (t01101) {};
\node[circle, fill=black,inner sep=0pt, minimum size=6pt] at (t10010) {};
\node[circle, fill=black,inner sep=0pt, minimum size=6pt] at (t10) {};
\node[circle, fill=black,inner sep=0pt, minimum size=6pt] at (t10111) {};
\end{tikzpicture}
\caption{A strong subtree of $\Seq$ of height $3$}
\end{figure}
In \cite{Milliken79}, Milliken proved a Ramsey theorem for finite strong subtrees of finitely branching trees with no maximal nodes.
Here, we present a restricted version of that theorem relevant to this paper.
\begin{thm}[Milliken, \cite{Milliken79}]\label{thm.Milliken}
Let $k\ge 1$ be given and let all $k$-strong subtrees of $\Seq$ be colored by finitely many colors.
Then there is an infinite strong subtree $T$ of $\Seq$ such that all $k$-strong subtrees of $T$ have the same color.
\end{thm}
\begin{rem}
A theorem stronger than Theorem \ref{thm.Milliken}, also due to Milliken in \cite{Milliken81},
shows that the collection of all infinite strong subtrees of
an infinite finitely branching tree
forms a topological Ramsey space, meaning that it satisfies an infinite-dimensional Ramsey theorem for Baire sets when equipped with its version of the Ellentuck topology (see \cite{TodorcevicBK10}).
This fact informed some of our intuition when approaching the present work.
\end{rem}
Now, we present the basic ideas behind the upper bounds for the big Ramsey degrees of the Rado graph.
Upper bounds for the rationals are similar.
The work in \cite{Laflamme/NVT/Sauer10} relied on a stronger variation of Milliken's theorem proved in that paper; given that theorem, the basic idea behind the upper bounds is similar to what we now present.
The Rado graph, denoted by $\mathcal{R}$, is universal for all countable graphs.
However, it is not the only universal countable graph.
The following graph $\mathcal{G}$ is also universal.
Let $\mathcal{G}$ denote the graph with $\Seq$ as its countable vertex set and the edge relation $E_{\mathcal{G}}$ defined as follows:
For $s,t\in \Seq$,
\begin{equation}
s\, E_{\mathcal{G}}\, t \ \Llra (|s|>|t|\mathrm{\ and\ } s(|t|)=1)\mathrm{\ or \ } (|t|>|s|\mathrm{\ and\ }t(|s|)=1).
\end{equation}
Since both $\mathcal{G}$ and $\mathcal{R}$ are universal, $\mathcal{G}$ embeds into $\mathcal{R}$ and vice versa.
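Concretely, adjacency in $\mathcal{G}$ is decided by a single passing number. A minimal sketch of our own (nodes of $\Seq$ represented as $0$/$1$ tuples):

```python
def edge_G(s, t):
    """Edge relation of the graph G on Seq: two nodes are adjacent iff they
    have different lengths and the longer node has passing number 1 at the
    length of the shorter node."""
    if len(s) == len(t):
        return False  # neither disjunct of the definition can hold
    if len(s) > len(t):
        s, t = t, s   # arrange len(s) < len(t)
    return t[len(s)] == 1
```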
Suppose that
$\mathrm{A}$ is a finite graph and
all copies of $\mathrm{A}$ in $\mathcal{R}$ are colored into finitely many colors.
Take a copy of $\mathcal{G}$ in $\mathcal{R}$ and restrict our attention to those copies of $\mathrm{A}$ in $\mathcal{G}$.
Each copy of $\mathrm{A}$ in $\mathcal{G}$
has vertices which are nodes in $\Seq$.
There are only finitely many {\em strong similarity types} of embeddings of $\mathrm{A}$ into $\Seq$
(Definition 3.1 in \cite{Sauer06}), which we now review.
\begin{defn}[Strong Similarity Map, \cite{Sauer06}]\label{def.3.1.Sauer}
Let $S,T\sse \Seq$ be meet-closed subsets.
A function $f:S\ra T$ is a {\em strong similarity map} from $S$ to $T$ if for all nodes $s,t,u,v\in S$, the following hold:
\begin{enumerate}
\item
$f$ is a bijection.
\item
$f$ preserves lexicographic order: $s<_{\mathrm{lex}}t$ if and only if $f(s)<_{\mathrm{lex}}f(t)$.
\item
$f$ preserves meets, and hence splitting nodes:
$f(s\wedge t)=f(s)\wedge f(t)$.
\item
$f$ preserves relative lengths:
$|s\wedge t|<|u\wedge v|$ if and only if
$|f(s)\wedge f(t)|<|f(u)\wedge f(v)|$.
\item
$f$ preserves initial segments:
$s\wedge t\sse u\wedge v$ if and only if $f(s)\wedge f(t)\sse f(u)\wedge f(v)$.
\item
$f$ preserves passing numbers:
If $|s|<|t|$,
then $f(t)(|f(s)|)=t(|s|)$.
\end{enumerate}
We say that $S$ and $T$ are {\em strongly similar}, and write $S\ssim T$, exactly when there is a strong similarity map between $S$ and $T$.
\end{defn}
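On finite sets, conditions (1)--(6) can be checked mechanically. The following sketch is our own (it assumes nodes are $0$/$1$ tuples, and it uses the fact that Python's tuple comparison agrees with the lexicographic order in which an initial segment precedes its extensions):

```python
from itertools import product

def meet(s, t):
    """Maximal common initial segment of two 0/1 tuples."""
    i = 0
    while i < min(len(s), len(t)) and s[i] == t[i]:
        i += 1
    return s[:i]

def is_strong_similarity(f):
    """Check conditions (1)-(6) for a map f, given as a dict from a finite
    meet-closed set S of 0/1 tuples to 0/1 tuples."""
    S = list(f)
    if len(set(f.values())) != len(S):
        return False                                     # (1) bijection
    pairs = [(s, t) for s in S for t in S]
    for s, t in pairs:
        if (s < t) != (f[s] < f[t]):
            return False                                 # (2) lexicographic order
        if f.get(meet(s, t)) != meet(f[s], f[t]):
            return False                                 # (3) meets
        if len(s) < len(t):
            if len(f[s]) >= len(f[t]) or f[t][len(f[s])] != t[len(s)]:
                return False                             # (6) passing numbers
    for (s, t), (u, v) in product(pairs, repeat=2):
        a, b = meet(s, t), meet(u, v)
        fa, fb = meet(f[s], f[t]), meet(f[u], f[v])
        if (len(a) < len(b)) != (len(fa) < len(fb)):
            return False                                 # (4) relative lengths
        if (b[:len(a)] == a) != (fb[:len(fa)] == fa):
            return False                                 # (5) initial segments
    return True
```

For instance, any "shift" map $s\mapsto 0^\frown s$ is a strong similarity, while exchanging two nodes with different passing numbers is not.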
The relation $\ssim$ is an equivalence relation, and given a fixed finite graph $\mathrm{A}$, there are only finitely many different equivalence classes of strongly similar copies of $\mathrm{A}$ in $\mathcal{G}$.
Each equivalence class is called a {\em strong similarity type}.
Thus, each copy of $\mathrm{A}$ in $\mathcal{G}$
is in exactly one of finitely many strong similarity types.
Fix one strong similarity type for $\mathrm{A}$, call it $\tau$.
For each copy $\mathrm{B}$ of $\mathrm{A}$ in
$\mathcal{G}$ of type $\tau$, as the vertices of
$\mathrm{B}$ are nodes in the tree $\Seq$, we
let $T_{\mathrm{B}}$ denote the tree induced by the meet-closure in $\Seq$ of the vertices in $\mathrm{B}$;
let $k$ be the number of levels of
$T_{\mathrm{B}}$.
There are finitely many $k$-strong subtrees of $\Seq$ which contain $T_{\mathrm{B}}$.
Moreover, each $k$-strong subtree of $\Seq$ contains $T_{\mathrm{B}}$ for exactly one $\mathrm{B}$ in $\tau$.
(A proof of this fact can be found in Section 6 of \cite{TodorcevicBK10}.)
Define a coloring $h$ on the $k$-strong subtrees of $\Seq$ as follows:
Given a $k$-strong subtree $T\sse\Seq$, let
$\mathrm{B}$ be the unique copy of $\mathrm{A}$ in $\tau$
such that $T_{\mathrm{B}}$ is contained in $T$.
Let $h(T)$ be the color of $\mathrm{B}$.
Applying Milliken's Theorem \ref{thm.Milliken}, we obtain an infinite strong subtree $S_{\tau}$ of $\Seq$ with all of its $k$-strong subtrees having the same color.
Thus, all copies of $\mathrm{A}$ in $\tau$ with vertices in $S_{\tau}$ have the same color.
Repeating this argument for each strong similarity type,
after finitely many applications of Milliken's Theorem, we obtain an infinite strong subtree $S\sse\Seq$ with the following property:
For each strong similarity type $\tau$ for $\mathrm{A}$,
all copies of $\mathrm{A}$ of type $\tau$ with vertices in $S$ have the same color.
Since $S$ is an infinite strong subtree of $\Seq$, the subgraph $\mathcal{G}'$ of
$\mathcal{G}$ coded by the nodes in $S$ is isomorphic to $\mathcal{G}$.
Since $\mathcal{R}$ embeds into $\mathcal{G}$,
we may take a copy of $\mathcal{R}$ whose nodes come from $S$; call this copy $\mathcal{R}'$.
Then every copy of $\mathrm{A}$ in $\mathcal{R}'$ has the color of its strong similarity type in $S$.
The number of strong similarity types of copies of $\mathrm{A}$ is thus an upper bound for the number of colors of copies of $\mathrm{A}$ in $\mathcal{R}'$.
To find the exact big Ramsey degrees, Laflamme, Sauer and Vuksanovic use an additional argument in \cite{Laflamme/Sauer/Vuksanovic06}.
We do not reproduce their argument here, as the lower bounds for big Ramsey degrees of the Henson graphs are not the subject of this article.
However,
we do point out that at the end of the article
\cite{Sauer06},
Sauer takes
an antichain
$\mathcal{A}$ of nodes in $S$ coding a copy of $\mathcal{R}$
with the further properties:
(a) The tree induced by the meet-closure of the nodes in
$\mathcal{A}$ has at each level at most one splitting node or one
maximal node, but never both.
(b)
Passing numbers at splitting nodes are always zero, except of course for the right extension of the splitting node itself.
These crucial properties (a) and (b) were used
to reduce the upper bounds found in
\cite{Sauer06} to the number of strong similarity types of copies of a finite graph $\mathrm{A}$
occurring in the copy $\mathcal{A}$ of $\mathcal{R}$.
This number was
then proved to be the exact lower bound for the big Ramsey degrees
in \cite{Laflamme/Sauer/Vuksanovic06}.
Milliken's Theorem is not able to handle big Ramsey degrees of Henson graphs for the following reasons:
First, there is no natural way to code a Henson graph using all nodes in a strong subtree of $\Seq$, nor is there a nicely defined graph which is bi-embeddable with a Henson graph.
Second, even if there were, there is no way to guarantee that the strong subtree obtained by Milliken's Theorem would contain a Henson graph.
Thus, a new Milliken-style theorem able to handle $k$-clique-free graphs is needed.
We begin with the properties (a) and (b) in mind when we construct trees with special distinguished nodes which code Henson graphs.
The reader will notice that our strong $\mathcal{H}_k$-coding trees in Section
\ref{subsec.T_k}
have the property that each level of the tree has at most one splitting node or one {\em coding node}
(Definition \ref{defn.treewcodingnodes}).
While our $\mathcal{H}_k$-coding trees are certainly not antichains (there are no maximal nodes),
they set the stage for taking an antichain of coding nodes which code $\mathcal{H}_k$
and have the properties (a) and (b)
(Lemma \ref{lem.bD}).
This serves to reduce the upper bound on the number of colors.
We conjecture that the upper bounds found in this article, namely the number of strict similarity types of incremental antichains of coding nodes, are in fact the big Ramsey degrees.
\subsection{Halpern-\Lauchli\ Theorem and Harrington's forcing proof}\label{subsec.HLHarrington}
An important Ramsey theorem for trees is Theorem
\ref{thm.HL},
due to
Halpern and \Lauchli.
This theorem emerged as a key step in the celebrated result of Halpern and L\'{e}vy in \cite{Halpern/Levy71}, proving
that
the Boolean Prime Ideal Theorem (the statement that every filter on a Boolean algebra can be extended to an ultrafilter) is strictly weaker than the Axiom of Choice, assuming the Zermelo-Fraenkel axioms of set theory.
The Halpern-\Lauchli\ Theorem
is a Ramsey theorem for colorings of products of level sets of finitely many trees, forming
the basis for
Milliken's
Theorem \ref{thm.Milliken}, discussed in the previous subsection.
In-depth presentations and proofs of various versions of the Halpern-\Lauchli\ Theorem as well as Milliken's Theorem can be found in \cite{Farah/TodorcevicBK}, \cite{TodorcevicBK10}, and \cite{DodosKBK}.
The book \cite{Farah/TodorcevicBK} contains
the first published
version of a proof due to Harrington using the method of forcing to produce a result inside the standard axioms of set theory, Zermelo-Fraenkel + Axiom of Choice.
Harrington's novel approach is central to the methods we developed in \cite{DobrinenJML20} for the triangle-free Henson graph and the more general approach developed in this paper for all Henson graphs.
To provide the reader with a warm-up for our proof of Theorem
\ref{thm.MillikenSWP},
we reproduce here
a forcing proof from \cite{DobrinenRIMS17}.
This proof was outlined for us in 2011 by Richard Laver.
It is simpler than the one given in \cite{Farah/TodorcevicBK} (at the expense of using $\kappa=\beth_{2d-1}(\aleph_0)^+$ instead of the $\kappa=\beth_{d-1}(\aleph_0)^+$
used in \cite{Farah/TodorcevicBK})
and provides the starting point towards obtaining Theorem \ref{thm.MillikenSWP}.
The Halpern-\Lauchli\ Theorem holds for finitely many finitely branching trees with no maximal nodes;
here, we restrict attention to binary trees since they are sufficient for applications to graphs.
The following is the simplest version of the Halpern-\Lauchli\ Theorem for strong trees, which provides the reader with a basic understanding of the starting point for our Ramsey theorems in Sections
\ref{sec.5} and \ref{sec.1SPOC}.
Recall that for a tree $T\sse\Seq$, $L(T)$ denotes the set of lengths of nodes in $T$.
\begin{thm}[Halpern-\Lauchli, \cite{Halpern/Lauchli66}]\label{thm.HL}
Let $d\ge 1$ be fixed, and for each $i<d$, let $T_i$ denote
$\Seq$, the tree of all binary sequences of finite length.
Suppose
\begin{equation}
h:\bigcup_{n\in \mathbb{N}}\prod_{i<d} T_i(n)\ra r
\end{equation}
is a given coloring, where $r$ is any positive integer.
Then there are infinite strong subtrees $S_i\sse T_i$, where $L(S_i)=L(S_j)$ for all $i<j<d$,
such that $h$ is monochromatic on
\begin{equation}
\bigcup_{n\in \mathbb{N} }\prod_{i<d} S_i(n).
\end{equation}
\end{thm}
Harrington's proof uses a cardinal $\kappa$ large enough to satisfy a partition relation guaranteed by the following theorem.
Recall that
given cardinals $d,\sigma,\kappa,\lambda$,
\begin{equation}
\lambda\ra(\kappa)^d_{\sigma}
\end{equation}
means that for each coloring of $[\lambda]^d$ into $\sigma$ many colors,
there is a subset $X$ of $\lambda$ such that $|X|=\kappa$ and all members of $[X]^d$ have the same color.
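The same arrow notation makes sense for finite cardinals, where it can be tested by exhaustive search. For instance, $6\ra(3)^2_2$ is Ramsey's classical theorem that every $2$-coloring of the edges of the complete graph on $6$ vertices admits a monochromatic triangle. A brute-force sketch of our own, feasible only for tiny parameters:

```python
from itertools import combinations, product

def arrow(lam, kappa, d, sigma):
    """Decide lam -> (kappa)^d_sigma for finite values: every sigma-coloring
    of the d-element subsets of lam admits a homogeneous set of size kappa."""
    dsets = list(combinations(range(lam), d))
    for values in product(range(sigma), repeat=len(dsets)):
        color = dict(zip(dsets, values))
        if not any(len({color[e] for e in combinations(X, d)}) == 1
                   for X in combinations(range(lam), kappa)):
            return False  # found a coloring with no homogeneous set
    return True
```

Here $5\not\ra(3)^2_2$ is witnessed by coloring the edges of a $5$-cycle one color and its diagonals the other.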
The following is a ZFC result guaranteeing cardinals large enough to have the Ramsey property for colorings into infinitely many colors.
\begin{thm}[\Erdos-Rado]\label{thm.ER}
For any non-negative integer $d$ and infinite cardinal $\mu$,
$$
\beth_d(\mu)^+\ra(\mu^+)_{\mu}^{d+1}.
$$
\end{thm}
\vskip.1in
\noindent \bf Proof of Theorem \ref{thm.HL}. \rm
It is sufficient to consider the case $r=2$.
Let
$h:\bigcup_{n\in\mathbb{N}}\prod_{i<d} T_i(n) \ra 2$
be given.
Notice that since each $T_i=\Seq$, it follows that each $T_i(n)=T_i\re n=\Seq_n$.
Let $\kappa=\beth_{2d-1}(\aleph_0)^+$.
(Recall that
$\beth_1(\aleph_0)=2^{\aleph_0}$ and in general, $\beth_{n+1}(\aleph_0)=2^{\beth_n(\aleph_0)}$.)
Define $\bP$ to be the set of functions
$p$
of the form
\begin{equation}
p: d\times\vec{\delta}_p\ra \bigcup_{i<d} T_i\re l_p
\end{equation}
where $l_p$ is some positive integer and
\begin{enumerate}
\item[(i)]
$\vec{\delta}_p$ is a finite subset of $\kappa$;
\item [(ii)]
for each $i<d$,
$\{p(i,\delta) : \delta\in \vec{\delta}_p\}\sse T_i\re l_p$.
\end{enumerate}
The partial ordering on $\bP$ is inclusion:
$q\le p$ if and only if
$l_q\ge l_p$, $\vec\delta_q\contains \vec\delta_p$,
and for each $(i,\delta)\in d\times \vec\delta_p$,
$q(i,\delta)\contains p(i,\delta)$.
Forcing with
$\bP$ adds $\kappa$ branches through the tree $T_i$, for each $i<d$.
For $\al<\kappa$,
let $\dot{b}_{i,\al}$ denote the $\al$-th generic branch through $T_i$.
Thus,
\begin{equation}
\dot{b}_{i,\al}=\{\lgl p(i,\al),p\rgl :p\in \bP,\ \mathrm{and}\ (i,\al)\in\dom(p)\}.
\end{equation}
Note that for each $p\in \bP$ with $(i,\al)\in\dom(p)$, $p$ forces that $\dot{b}_{i,\al}\re l_p= p(i,\al)$.
Let $\dot{\mathcal{U}}$ be a $\bP$-name for a non-principal ultrafilter on $\mathbb{N}$.
To simplify notation, we write sets $\{\al_i:i< d\}$ in
$[\kappa]^d$ as vectors $\vec{\al}=\lgl \al_0,\dots,\al_{d-1}\rgl$ in strictly increasing order.
For $\vec{\al}=\lgl\al_0,\dots,\al_{d-1}\rgl\in[\kappa]^d$,
we let
\begin{equation}
\dot{b}_{\vec{\al}}\mathrm{\ \ denote\ \ }
\lgl \dot{b}_{0,\al_0},\dots, \dot{b}_{d-1,\al_{d-1}}\rgl,
\end{equation}
and for any $l\in \mathbb{N}$, let
\begin{equation}
\dot{b}_{\vec\al}\re l
\mathrm{\ \ denote \ \ }
\{\dot{b}_{i,\al_i}\re l:i<d\}.
\end{equation}
The goal now is to find disjoint
infinite sets $K_i\sse \kappa$, for $i<d$,
and a set of conditions $\{p_{\vec\al}:\vec\al\in \prod_{i<d}K_i\}$ which are compatible,
have the same images in the trees $T_i$,
and such that for some fixed $\varepsilon^*$,
for each $\vec\al\in\prod_{i<d}K_i$,
$p_{\vec\al}$ forces
$h(\dot{b}_{\vec\al}\re l)=\varepsilon^*$ for $\dot{\mathcal{U}}$-many $l$.
Moreover, we will find nodes $t^*_i$, $i< d$, such that for each $\vec\al\in\prod_{i<d}K_i$,
$p_{\vec\al}(i,\al_i)=t^*_i$.
These will serve as the basis for the process of building the strong subtrees $S_i\sse T_i$ on which $h$ is monochromatic.
For each $\vec\al\in[\kappa]^d$,
choose a condition $p_{\vec{\al}}\in\bP$ such that
\begin{enumerate}
\item
$\vec{\al}\sse\vec{\delta}_{p_{\vec\al}}$;
\item
$p_{\vec{\al}}\forces$ ``There is an $\varepsilon\in 2$ such that
$h(\dot{b}_{\vec{\al}}\re l)=\varepsilon$
for $\dot{\mathcal{U}}$ many $l$'';
\item
$p_{\vec{\al}}$ decides a value for $\varepsilon$, label it $\varepsilon_{\vec{\al}}$; and
\item
$h(\{p_{\vec\al}(i,\al_i):i< d\})=\varepsilon_{\vec{\al}}$.
\end{enumerate}
Such conditions $p_{\vec\al}$ may be obtained as follows.
Given $\vec\al\in[\kappa]^d$,
take $p_{\vec\al}^1$ to be any condition such that $\vec\al\sse\vec{\delta}_{p_{\vec\al}^1}$.
Since $\bP$ forces $\dot{\mathcal{U}}$ to be an ultrafilter on $\mathbb{N}$, there is a condition
$p_{\vec\al}^2\le p_{\vec\al}^1$ such that
$p_{\vec\al}^2$ forces that $h(\dot{b}_{\vec\al}\re l)$ is the same color for $\dot{\mathcal{U}}$ many $l$.
Furthermore, there must be a stronger
condition deciding which of the colors
$h(\dot{b}_{\vec\al}\re l)$ takes on $\dot{\mathcal{U}}$ many levels $l$.
Let $p_{\vec\al}^3\le p_{\vec\al}^2$ be a condition which decides this color, and let $\varepsilon_{\vec\al}$ denote that color.
Finally, since $p_{\vec\al}^3$ forces that for $\dot{\mathcal{U}}$ many $l$ the color
$h(\dot{b}_{\vec\al}\re l)$
will equal $\varepsilon_{\vec{\al}}$,
there is some $p_{\vec\al}^4\le p_{\vec\al}^3$ which decides some level $l$ so that
$h(\dot{b}_{\vec\al}\re l)=\varepsilon_{\vec{\al}}$.
If $l_{p_{\vec\al}^4}<l$,
let $p_{\vec\al}$ be any member of $\bP$ such that
$p_{\vec\al}\le p_{\vec\al}^4$ and $l_{p_{\vec\al}}=l$.
If $l_{p_{\vec\al}^4}\ge l$,
let $p_{\vec\al}=\{((i,\delta), p_{\vec\al}^4(i,\delta)\re l):(i,\delta)\in d\times\vec\delta_{p_{\vec\al}^4}\}$,
the truncation of $p_{\vec\al}^4$ to images that have length $l$.
Then $p_{\vec\al}$ forces that $\dot{b}_{\vec\al}\re l=\{p_{\vec\al}(i,\al_i):i< d\}$, and hence
$p_{\vec\al}$ forces that
$h(\{p_{\vec\al}(i,\al_i):i< d\})=\varepsilon_{\vec{\al}}$.
Recall that we chose $\kappa$ large enough so that $\kappa\ra(\aleph_1)^{2d}_{\aleph_0}$ holds.
Now we prepare for an application of the \Erdos-Rado Theorem.
Given two sets of ordinals $J,K$ we shall write $J<K$ if and only if every member of $J$ is less than every member of $K$.
Let $D_e=\{0,2,\dots,2d-2\}$ and $D_o=\{1,3,\dots,2d-1\}$, the sets of even and odd integers less than $2d$, respectively.
Let $\mathcal{I}$ denote the collection of all functions $\iota: 2d\ra 2d$ such that
\begin{equation}
\{\iota(0),\iota(1)\}<\{\iota(2),\iota(3)\}<\dots<\{\iota(2d-2),\iota(2d-1)\}.
\end{equation}
Thus, each $\iota$ codes two strictly increasing sequences $\iota\re D_e$ and $\iota\re D_o$, each of length $d$.
For $\vec{\theta}\in[\kappa]^{2d}$,
$\iota(\vec{\theta}\,)$ determines the pair of sequences of ordinals
\begin{equation}
(\theta_{\iota(0)},\theta_{\iota(2)},\dots,\theta_{\iota(2d-2)}), (\theta_{\iota(1)},\theta_{\iota(3)},\dots,\theta_{\iota(2d-1)}),
\end{equation}
both of which are members of $[\kappa]^d$.
Denote these as $\iota_e(\vec\theta\,)$ and $\iota_o(\vec\theta\,)$, respectively.
To ease notation, let $\vec{\delta}_{\vec\al}$ denote
$\vec\delta_{p_{\vec\al}}$,
$k_{\vec{\al}}$ denote $|\vec{\delta}_{\vec\al}|$,
and let $l_{\vec{\al}}$ denote $l_{p_{\vec\al}}$.
Let $\lgl \delta_{\vec{\al}}(j):j<k_{\vec{\al}}\rgl$
denote the enumeration of $\vec{\delta}_{\vec\al}$
in increasing order.
Define a coloring $f$ on $[\kappa]^{2d}$ into countably many colors as follows:
Given $\vec\theta\in[\kappa]^{2d}$ and
$\iota\in\mathcal{I}$, to reduce the number of subscripts, letting
$\vec\al$ denote $\iota_e(\vec\theta\,)$ and $\vec\beta$ denote $\iota_o(\vec\theta\,)$,
define
\begin{align}\label{eq.fseq}
f(\iota,\vec\theta\,)= \,
&\lgl \iota, \varepsilon_{\vec{\al}}, k_{\vec{\al}},
\lgl \lgl p_{\vec{\al}}(i,\delta_{\vec{\al}}(j)):j<k_{\vec{\al}}\rgl:i< d\rgl,\cr
& \lgl \lgl i,j \rgl: i< d,\ j<k_{\vec{\al}},\ \delta_{\vec{\al}}(j)=\al_i \rgl,\cr
&\lgl \lgl j,k\rgl:j<k_{\vec{\al}},\ k<k_{\vec{\beta}},\ \delta_{\vec{\al}}(j)=\delta_{\vec{\beta}}(k)\rgl\rgl.
\end{align}
Let $f(\vec{\theta}\,)$ be the sequence $\lgl f(\iota,\vec\theta\,):\iota\in\mathcal{I}\rgl$, where $\mathcal{I}$ is given some fixed ordering.
Since the range of $f$ is countable,
applying the \Erdos-Rado Theorem \ref{thm.ER},
there is a subset $K\sse\kappa$ of cardinality $\aleph_1$
which is homogeneous for $f$.
Take $K'\sse K$ such that between each two members of $K'$ there is a member of $K$.
Take subsets $K_i\sse K'$ such that $K_0<\dots<K_{d-1}$
and each $|K_i|=\aleph_0$.
\begin{lem}\label{lem.HLonetypes}
There are $\varepsilon^*\in 2$, $k^*\in\mathbb{N}$,
and $ \lgl t_{i,j}: j<k^*\rgl$, $i< d$,
such that
$\varepsilon_{\vec{\al}}=\varepsilon^*$,
$k_{\vec\al}=k^*$, and
$\lgl p_{\vec\al}(i,\delta_{\vec\al}(j)):j<k_{\vec\al}\rgl
=
\lgl t_{i,j}: j<k^*\rgl$,
for each $i< d$,
for all $\vec{\al}\in \prod_{i<d} K_i$.
\end{lem}
\begin{proof}
Let $\iota$ be the member in $\mathcal{I}$
which is the identity function on $2d$.
For any pair $\vec{\al},\vec{\beta}\in \prod_{i<d}K_i$, there are $\vec\theta,\vec\theta'\in [K]^{2d}$
such that
$\vec\al=\iota_e(\vec\theta\,)$ and $\vec\beta=\iota_e(\vec\theta'\,)$.
Since $f(\iota,\vec\theta\,)=f(\iota,\vec\theta'\,)$,
it follows that $\varepsilon_{\vec\al}=\varepsilon_{\vec\beta}$, $k_{\vec{\al}}=k_{\vec{\beta}}$,
and $\lgl \lgl p_{\vec{\al}}(i,\delta_{\vec{\al}}(j)):j<k_{\vec{\al}}\rgl:i< d\rgl
=
\lgl \lgl p_{\vec{\beta}}(i,\delta_{\vec{\beta}}(j)):j<k_{\vec{\beta}}\rgl:i< d\rgl$.
\end{proof}
Let $l^*$ denote the length of the nodes $t_{i,j}$.
\begin{lem}\label{lem.HLj=j'}
Given any $\vec\al,\vec\beta\in \prod_{i<d}K_i$,
if $j,j'<k^*$ and $\delta_{\vec\al}(j)=\delta_{\vec\beta}(j')$,
then $j=j'$.
\end{lem}
\begin{proof}
Let $\vec\al,\vec\beta$ be members of $\prod_{i<d}K_i$ and suppose that
$\delta_{\vec\al}(j)=\delta_{\vec\beta}(j')$ for some $j,j'<k^*$.
For each $i<d$, let $\rho_i$ be the relation from among $\{<,=,>\}$ such that
$\al_i\,\rho_i\,\beta_i$.
Let $\iota$ be a member of $\mathcal{I}$ such that for each $\vec\zeta\in[K]^{2d}$ and each $i<d$,
$\zeta_{\iota(2i)}\ \rho_i \ \zeta_{\iota(2i+1)}$.
Take
$\vec\theta\in[K']^{2d}$ satisfying
$\iota_e(\vec\theta)=\vec\al$ and $\iota_o(\vec\theta)= \vec\beta$.
Since between any two members of $K'$ there is a member of $K$, there is a
$\vec\gamma\in[K]^{d}$ such that for each $i< d$,
$\al_i\,\rho_i\,\gamma_i$ and $\gamma_i\,\rho_i\, \beta_i$.
Given that $\al_i\,\rho_i\,\gamma_i$ and $\gamma_i\,\rho_i\, \beta_i$ for each $i<d$,
there are $\vec\mu,\vec\nu\in[K]^{2d}$ such that $\iota_e(\vec\mu)=\vec\al$,
$\iota_o(\vec\mu)=\vec\gamma$,
$\iota_e(\vec\nu)=\vec\gamma$, and $\iota_o(\vec\nu)=\vec\beta$.
Since $\delta_{\vec\al}(j)=\delta_{\vec\beta}(j')$,
the pair $\lgl j,j'\rgl$ is in the last sequence in $f(\iota,\vec\theta)$.
Since $f(\iota,\vec\mu)=f(\iota,\vec\nu)=f(\iota,\vec\theta)$,
also $\lgl j,j'\rgl$ is in the last sequence in $f(\iota,\vec\mu)$ and $f(\iota,\vec\nu)$.
It follows that $\delta_{\vec\al}(j)=\delta_{\vec\gamma}(j')$ and $\delta_{\vec\gamma}(j)=\delta_{\vec\beta}(j')$.
Hence, $\delta_{\vec\gamma}(j)=\delta_{\vec\gamma}(j')$,
and therefore $j$ must equal $j'$.
\end{proof}
For any $\vec\al\in \prod_{i<d}K_i$ and any $\iota\in\mathcal{I}$, there is a $\vec\theta\in[K]^{2d}$ such that $\vec\al=\iota_e(\vec\theta)$.
By homogeneity of $f$ and by the first sequence in the second line of equation (\ref{eq.fseq}), there is a strictly increasing sequence
$\lgl j_i:i< d\rgl$ of members of $k^*$ such that for each $\vec\al\in \prod_{i<d}K_i$,
$\delta_{\vec\al}(j_i)=\al_i$.
For each $i< d$, let $t^*_i$ denote $t_{i,j_i}$.
Then for each $i<d$ and each $\vec\al\in \prod_{i<d}K_i$,
\begin{equation}
p_{\vec\al}(i,\al_i)=p_{\vec{\al}}(i, \delta_{\vec\al}(j_i))=t_{i,j_i}=t^*_i.
\end{equation}
\begin{lem}\label{lem.HLcompat}
The set of conditions $\{p_{\vec{\al}}:\vec{\al}\in \prod_{i<d}K_i\}$ is compatible.
\end{lem}
\begin{proof}
Suppose toward a contradiction that there are $\vec\al,\vec\beta\in\prod_{i<d}K_i$ such that $p_{\vec\al}$ and
$p_{\vec\beta}$ are incompatible.
By Lemma \ref{lem.HLonetypes},
for each $i<d$ and $j<k^*$,
\begin{equation}
p_{\vec{\al}}(i,\delta_{\vec{\al}}(j))
=t_{i,j}
=p_{\vec{\beta}}(i,\delta_{\vec{\beta}}(j)).
\end{equation}
Thus,
the only way $p_{\vec\al}$ and $p_{\vec\beta}$ can be incompatible is if
there are $i< d$ and $j,j'<k^*$ such that
$\delta_{\vec\al}(j)=\delta_{\vec\beta}(j')$
but
$p_{\vec\al}(i,\delta_{\vec\al}(j))\ne p_{\vec\beta}(i,\delta_{\vec\beta}(j'))$.
Since
$p_{\vec\al}(i,\delta_{\vec\al}(j))=t_{i,j}$ and
$p_{\vec\beta}(i,\delta_{\vec\beta}(j'))= t_{i,j'}$,
this would imply
$j\ne j'$.
But by Lemma \ref{lem.HLj=j'},
$j\ne j'$ implies that $\delta_{\vec\al}(j)\ne\delta_{\vec\beta}(j')$, a contradiction.
Therefore,
$p_{\vec\al}$ and $p_{\vec\beta}$ must be compatible.
\end{proof}
We now construct the strong subtrees $S_i\sse T_i$,
for each $i<d$, by induction on the number of levels in the trees.
For each $i<d$, let $S_i(0)=\{t_i^*\}$ and
let $l_0=|t_i^*|$, the common length of the nodes $t_i^*$,
which is well defined since all nodes in the range of any $p\in\mathbb{P}$ have the same length.
Assume now that $n\ge 1$,
there are lengths $l_0<\dots< l_{n-1}$,
and
we have constructed finite strong subtrees
$\bigcup_{m<n}S_i(m)$ of $T_i$, $i<d$,
such that
for each $m<n$,
$h$ takes color $\varepsilon^*$ on each member of
$\prod_{i<d} S_i(m)$.
For each $i<d$, let $X_i$ denote the set of immediate extensions in $T_i$ of the nodes in $S_i(n-1)$.
For each $i<d$,
let $J_i$ be a subset of $K_i$ with the same size as $X_i$.
For each $i< d$, label the nodes in $X_i$ as
$\{q(i,\delta):\delta\in J_i\}$.
Let $\vec{J}$ denote $\prod_{i< d}J_i$.
Notice that for each
$\vec\al\in \vec{J}$ and $i<d$, $q(i,\al_i)\contains t^*_i=p_{\vec{\al}}(i,\al_i)$.
We now construct a condition $q\in\bP$ such that
for each $\vec\al\in\vec{J}$,
$q\le p_{\vec\al}$.
Let
$\vec{\delta}_q=\bigcup\{\vec{\delta}_{\vec\al}:\vec\al\in \vec{J}\}$.
For each pair $(i,\gamma)$ with $i<d$ and $\gamma\in\vec{\delta}_q\setminus
J_i$,
there is at least one $\vec{\al}\in\vec{J}$ and some $j'<k^*$ such that $\delta_{\vec\al}(j')=\gamma$.
For any other $\vec\beta\in\vec{J}$ for which $\gamma\in\vec{\delta}_{\vec\beta}$,
since the set $\{p_{\vec{\al}}:\vec{\al}\in\vec{J}\}$ is pairwise compatible by Lemma \ref{lem.HLcompat},
it follows
that $p_{\vec\beta}(i,\gamma)$ must equal $p_{\vec{\al}}(i,\gamma)$, which is exactly $t_{i,j'}$.
Let $q(i,\gamma)$ be the leftmost extension
of $t_{i,j'}$ in $T_i$ of the same length as the nodes in $X_i$.
Thus, $q(i,\gamma)$ is defined for each pair $(i,\gamma)\in d\times \vec{\delta}_q$.
Define
\begin{equation}
q= \{\lgl (i,\delta),q(i,\delta)\rgl: i<d,\ \delta\in \vec{\delta}_q\}.
\end{equation}
\begin{lem}\label{lem.HLqbelowpal}
For each $\vec\al\in \vec{J}$,
$q\le p_{\vec\al}$.
\end{lem}
\begin{proof}
Given $\vec\al\in\vec{J}$,
by our construction
for each pair $(i,\gamma)\in d\times\vec{\delta}_{\vec\al}$, we have
$q(i,\gamma)\contains p_{\vec{\al}}(i,\gamma)$.
\end{proof}
To construct the $n$-th level of the strong trees $S_i$,
take an $r\le q$ in $\bP$ which decides some $l_n\ge l_q$ for which $h(\dot{b}_{\vec\al}\re l_n)=\varepsilon^*$, for all $\vec\al\in\vec{J}$.
By extending or truncating $r$, we may assume without
loss of generality that $l_n$ is equal to the length of the nodes in the image of $r$.
Notice that since
$r$ forces $\dot{b}_{\vec{\al}}\re l_n=\{r(i,\al_i):i<d\}$ for each $\vec\al\in \vec{J}$,
and since the coloring $h$ is defined in the ground model,
it is simply true in the ground model that
$h(\{r(i,\al_i):i<d\})=\varepsilon^*$ for each $\vec\al\in \vec{J}$.
For each $i<d$ and $\delta\in J_i$,
extend the node $q(i,\delta)\in X_i$ to the node $r(i,\delta)$, which has length $l_n$.
Thus, for each $i<d$,
we define $S_i(n)=\{r(i,\delta):\delta\in J_i\}$.
It follows that $h$ takes value $\varepsilon^*$ on each member of $\prod_{i<d} S_i(n)$.
For each $i<d$, let
$S_i=\bigcup_{n\in\mathbb{N}} S_i(n)$.
Then each $S_i$ is a strong subtree of $T_i$
with the same set of lengths
$L(S_i)=\{l_n:n\in\bN\}$, and
$h$ takes value $\varepsilon^*$ on $\bigcup_{n\in \bN}\prod_{i<d}S_i(n)$.
\hfill$\square$
\begin{rem}
This theorem of Halpern and \Lauchli\ was applied by Laver in
\cite{Laver84}
to prove that
given $k\ge 2$ and given
any coloring of the product of $k$ many copies of the rationals $\mathbb{Q}^k$
into finitely many colors,
there are subsets $X_i$ of the rationals which again are dense linear orders without endpoints such that
$X_0\times\dots\times X_{k-1}$ has at most $k!$ colors.
Laver further proved that $k!$ is the lower bound.
Thus, the big Ramsey degree for the simplest object ($k$-length sequences) in the \Fraisse\ class of products of $k$-many finite linear orders
has been found.
Shelah extended the arguments above, applying forcing methods to prove consistent
versions of the Halpern-\Lauchli\ Theorem
at a measurable cardinal in \cite{Shelah91}.
Modifications were used to prove big Ramsey degrees for the $\kappa$-rationals and $\kappa$-Rado graph by D\v{z}amonja, Larson, and Mitchell in
\cite{Dzamonja/Larson/MitchellQ09} and \cite{Dzamonja/Larson/MitchellRado09}.
Further work on the Halpern-\Lauchli\ Theorem at uncountable cardinals has been continued in \cite{Dobrinen/Hathaway16}, \cite{Dobrinen/Hathaway18} and by
Zhang who proved the analogue of Laver's result
\cite{Laver84} for measurable cardinals in \cite{Zhang17}.
\end{rem}
\section{Trees coding Henson graphs}\label{sec.3}
This section introduces a unified approach for coding the Henson graphs via trees with special distinguished nodes.
We call these trees {\em strong $K_k$-free trees} (Definition \ref{defn.stft}), since they branch as fully as
possible
(like strong trees) subject to never coding $k$-cliques.
The constructions extend and streamline the construction of
strong triangle-free trees in \cite{DobrinenJML20}.
The distinguished nodes will code the vertices of a Henson graph.
The nodes in a given level of a strong $K_k$-free tree will
code all possible types over the
finite graph coded by the lower levels of the tree.
Example 3.18 of \cite{DobrinenJML20} exhibited a bad coloring for strong $K_3$-free trees which thwarts the development of a Ramsey theory for them.
This will be overcome by using skewed versions of strong $K_k$-free trees, called strong $\mathcal{H}_k$-coding trees, developed in Section \ref{sec.4}.
The work in the current section
provides the reader with the essential structure of and intuition behind strong coding trees
utilized in the remainder of the paper.
\subsection{Henson's Criterion}\label{subsection.HC}
Recall that $K_k$ denotes a {\em $k$-clique}, a
complete graph on $k$ vertices.
In \cite{Henson71}, for each $k\ge 3$, Henson constructed an ultrahomogeneous $K_k$-free graph which is universal for all $K_k$-free graphs on countably many vertices.
We denote these graphs by $\mathcal{H}_k$.
It was later seen that $\mathcal{H}_k$ is isomorphic to the \Fraisse\ limit of the \Fraisse\ class of finite $K_k$-free graphs.
Given a graph $\HH$ and a subset $V_0$ of the vertices of $\HH$,
let $\HH|V_0$ denote the induced subgraph of $\HH$ on the vertices in $V_0$.
In \cite{Henson71},
Henson proved that a countable graph $\HH$ is ultrahomogeneous and universal for countable $K_k$-free graphs if and only if $\HH$ satisfies the following property ($A_k$).
\begin{enumerate}
\item[($A_k$)]
\begin{enumerate}
\item[(i)]
$\HH$ does not admit any $k$-cliques.
\item[(ii)]
If $V_0,V_1$ are disjoint finite sets of vertices of $\HH$, and $\HH|V_0$ has no copy of $K_{k-1}$,
then there is another vertex which is connected in $\HH$ to every member of $V_0$ and to no member of $V_1$.
\end{enumerate}
\end{enumerate}
The following equivalent modification will be useful for our constructions.
\begin{enumerate}
\item[$(A_k)'$]
\begin{enumerate}
\item[(i)]
$\HH$ does not admit any $k$-cliques.
\item[(ii)]
Let $\lgl v_n:n\in\mathbb{N}\rgl$ enumerate the vertices of $\HH$, and
let $\lgl F_i:i\in\mathbb{N}\rgl$ be any enumeration of the finite subsets of $\mathbb{N}$ such that for each $i \in\mathbb{N}$, $\max(F_i)<i$,
and each finite set appears infinitely many times in the enumeration.
Then there is a strictly increasing sequence $\lgl n_i: i \in\mathbb{N}\rgl$
such that for each $i \in\mathbb{N}$, if $\HH|\{v_m:m\in F_i\}$
has no copy of $K_{k-1}$,
then for all $m<i$, $v_{n_i} \, E\, v_m\longleftrightarrow m\in F_i$.
\end{enumerate}
\end{enumerate}
It is routine to check that any countably infinite graph
$\HH$
is ultrahomogeneous and universal for countable $K_k$-free graphs if and only if
$(A_k)'$ holds.
\subsection{Trees with coding nodes and strong $K_k$-free trees}\label{subsection.K_kfree}
As seen for the case of triangle-free graphs in \cite{DobrinenJML20},
enriching trees with a collection of distinguished nodes allows for coding graphs with forbidden configurations into trees which have properties similar to strong trees.
Recall that $\Seq$ denotes the set of all finite sequences of $0$'s and $1$'s.
\begin{defn}[\cite{DobrinenJML20}]\label{defn.treewcodingnodes}
A {\em tree with coding nodes}
is a structure $(T,N;\sse,<,c^T)$ in the language
$\mathcal{L}=\{\sse,<,c\}$,
where $\sse$ and $<$ are binary relation symbols and $c$ is a unary function symbol,
satisfying the following:
$T$ is a subset of $\Seq$ satisfying that $(T,\sse)$ is a tree (recall Definition \ref{defn.tree}),
either $N\in \mathbb{N}$ or $N=\mathbb{N}$, $<$ is the usual linear order on $N$, and $c^T:N\ra T$ is an injective function such that whenever $m<n$ in $N$, then $|c^T(m)|<|c^T(n)|$.
\end{defn}
\begin{notation}
The {\em $n$-th coding node} in $T$, $c^T(n)$, will usually be denoted as
$c^T_n$.
The length $|c^T_n|$ of the $n$-th coding node in $T$ shall be denoted by $l^T_n$.
Whenever no ambiguity arises, we shall drop the superscript $T$.
Throughout this paper,
we use
$N$ to denote either a member of $\bN=\{0,1,2,\dots\}$, or $\bN$ itself.
We shall treat the natural numbers as von Neumann ordinals.
Thus,
for $N\in\mathbb{N}$, $N$ denotes the set of natural numbers less than $N$; that is, $N=\{0,\dots,N-1\}$.
Hence, in either case that $N\in\bN$ or $N=\bN$,
it makes sense to write $n\in N$.
\end{notation}
\begin{defn}[\cite{DobrinenJML20}]\label{def.rep}
A graph $\G$ with vertices enumerated as $\lgl v_n:n\in N\rgl$ is {\em represented} by a tree $T$ with coding nodes $\lgl c_n:n\in N\rgl$
if and only if
for each pair $i<n$ in $N$,
$v_n\, \E\, v_i\Longleftrightarrow c_n(l_i)=1$.
We will often simply say that $T$ {\em codes} $\G$.
\end{defn}
For each copy of $\mathcal{H}_k$ with vertices indexed by $\mathbb{N}$, there is a tree with coding nodes representing the graph.
In fact, this is true more generally for any graph, finite or infinite.
\begin{defn}[The tree $T_{\mathrm{G}}$ coding $\mathrm{G}$]\label{defn.T_H}
Let $\mathrm{G}$ be any graph with vertices ordered as $\{v_i:i\in N\}$.
Define $T_{\mathrm{G}}$
as follows:
Let $c_0=\lgl\rgl$, the empty sequence.
For $n\ge 1$ with $n\in N$, given
coding nodes $\{c_i:i<n\}$ coding $\mathrm{G}|\{v_i:i<n\}$, with each $|c_i|=i$,
define $c_n$ to be the node in $\Seq$ of length $n$ such that for each $i<n$, $c_n(i)=1\ \llra \ v_n\, E \, v_i$.
Let
\begin{equation}
T_{\mathrm{G}}=\{t\in\Seq:\exists n\in N,\ \exists l\le n\, (t=c_n\re l)\}.
\end{equation}
\end{defn}
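To make Definition \ref{defn.T_H} concrete, the passage from a graph to its coding nodes can be sketched computationally. The following is a minimal illustration only; the function names are ours, not the paper's. Each coding node $c_n$ records the adjacencies of $v_n$ to the earlier vertices, and $T_{\mathrm{G}}$ is the downward closure of the coding nodes.

```python
# Illustrative sketch (names are ours): the n-th coding node c_n is the
# 0/1 sequence of length n whose i-th entry is 1 exactly when v_n E v_i.

def coding_nodes(edges, num_vertices):
    """edges: set of frozensets {i, j}; returns the list of coding nodes."""
    return [tuple(1 if frozenset((n, i)) in edges else 0 for i in range(n))
            for n in range(num_vertices)]

def tree_T_G(coding):
    """T_G: the set of all initial segments of the coding nodes."""
    return {c[:l] for c in coding for l in range(len(c) + 1)}

# Example: the path v_0 - v_1 - v_2.
edges = {frozenset((0, 1)), frozenset((1, 2))}
cs = coding_nodes(edges, 3)
# c_0 = (), c_1 = (1,), c_2 = (0, 1): v_2 is joined to v_1 but not to v_0.
```

Here $l_i=i$, matching the requirement $|c_i|=i$ in the definition.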
\begin{observation}\label{obs.T_H0}
Let $\mathrm{G}$ and $T_{\mathrm{G}}$ be as in Definition \ref{defn.T_H}.
Notice that
for each $n\ge 1$ with $n\in N$, each node in $T_{\mathrm{G}}\re n$ (the nodes in $T_{\mathrm{G}}$ of length $n$)
represents a model-theoretic (incomplete) $1$-type over the graph
$\mathrm{G}|\{v_i:i<n\}$.
Moreover, each such $1$-type is represented by a unique node in $T_{\mathrm{G}}\re n$.
In particular, if $\mathrm{G}$ is a Rado graph or a Henson graph, then
$T_{\mathrm{G}}$ has no maximal nodes and
the coding nodes in
$T_{\mathrm{G}}$ are dense.
\end{observation}
Our goal is to develop a means for working with subtrees of trees like $T_{\mathcal{H}_k}$, where $\mathcal{H}_k$ is a $k$-clique-free Henson graph, for which we can prove Ramsey theorems like the Halpern-\Lauchli\ Theorem \ref{thm.HL} and the Milliken Theorem \ref{thm.Milliken}.
There are several reasons why the most na\"{i}ve approach does not work; these will be pointed out as they arise.
In this and the next two sections, we develop tools for
recognizing which trees coding $\mathcal{H}_k$ and which of their subtrees
are able to carry a robust Ramsey theory.
These can be interpreted model-theoretically in terms of types over finite subgraphs, but the language of trees will be simpler and easier to visualize.
In Definition \ref{defn.T_H}, we showed how to make a tree with coding nodes coding a particular
copy of $\mathcal{H}_k$; this is a ``top-down'' approach.
To develop Ramsey theory for colorings of finite trees,
we will need to consider all subtrees of a given tree $T$ coding $\mathcal{H}_k$ which are ``similar'' enough to $T$ to make a Ramsey theorem possible.
In order to prove the Ramsey theorems,
we will further need criteria for how and when we can
extend a finite subtree $A$ of a given tree $S$, which is a subtree of some $T$, where $T$ codes a
copy of $\mathcal{H}_k$, to a subtree of $S$ coding another
copy of $\mathcal{H}_k$.
This will provide a ``bottom-up'' approach for constructing trees coding $\mathcal{H}_k$.
The potential obstacles are cliques coded by coding nodes in $T$ which are not coded by coding nodes in $S$.
To begin, we observe exactly how cliques are coded.
\begin{observation}\label{obs.k-cliquecoding}
For $a\ge 2$,
given an index set $I$ of size $a$,
a collection of coding nodes
$\{c_{i}:i\in I\}$
{\em codes an $a$-clique} if and only if for each pair $i<j$ in $I$,
$c_{j}(l_{i})=1$.
\end{observation}
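The observation is finitely checkable. The sketch below (our notation, with $l_i=|c_i|$) tests whether a given set of coding nodes codes a clique.

```python
# Sketch of the observation: {c_i : i in I} codes an |I|-clique iff
# c_j(l_i) = 1 for all i < j in I, where l_i = len(coding[i]).

def codes_clique(coding, I):
    I = sorted(I)
    return all(coding[j][len(coding[i])] == 1
               for a, i in enumerate(I) for j in I[a + 1:])

triangle = [(), (1,), (1, 1)]  # v_0, v_1, v_2 pairwise joined
path = [(), (1,), (0, 1)]      # v_0 - v_1 - v_2, no edge v_0 v_2
```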
\begin{defn}[$K_k$-Free Criterion]\label{defn.trianglefreeextcrit}
Let $T\sse \Seq$ be a tree with coding nodes $\lgl c_n:n<N\rgl$.
We say that
$T$
{\em satisfies the $K_k$-Free Criterion}
if the following holds:
For each $n\ge k-2$,
for all increasing sequences
$i_0<i_1<\dots<i_{k-2}=n$ such that $\{c_{i_j}:j<k-1\}$ codes a $(k-1)$-clique,
for each $t\in T$ such that $|t|>l_n$,
\begin{equation}
(\forall j<k-2)\ t(l_{i_j})=1\ \ \Longrightarrow \ \ t(l_n)=0.
\end{equation}
\end{defn}
In words, a tree $T$ with coding nodes $\lgl c_n:n\in N\rgl$ satisfies the $K_k$-Free Criterion if for each $n\in N$, whenever a node $t$ in $T$ has the same length as the coding node $c_n$, the following holds:
If $t$ and $c_n$ both
code edges with
some collection of $k-2$ many coding nodes in $T$ which themselves code a $(k-2)$-clique,
then $t$ does not split in $T$; its only allowable extension in $\widehat{T}$ is
$t^{\frown}0$.
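The criterion itself can be verified mechanically on a finite tree. The following sketch (our own helper, with $l_i=|c_i|$) checks it directly from the definition, illustrated on a toy $k=3$ example.

```python
from itertools import combinations

def satisfies_kfree_criterion(coding, tree, k):
    """Check the K_k-Free Criterion: whenever {c_{i_0},...,c_{i_{k-2}}}
    codes a (k-1)-clique and a node t with len(t) > l_{i_{k-2}} has
    t(l_{i_j}) = 1 for all j < k-2, then t(l_{i_{k-2}}) must be 0."""
    def clique(I):
        return all(coding[j][len(coding[i])] == 1
                   for i, j in combinations(I, 2))
    for I in combinations(range(len(coding)), k - 1):
        if not clique(I):
            continue
        l_last = len(coding[I[-1]])
        for t in tree:
            if (len(t) > l_last
                    and all(t[len(coding[i])] == 1 for i in I[:-1])
                    and t[l_last] != 0):
                return False
    return True

# k = 3: c_0 = (1,) and c_1 = (0, 1) code an edge, so no node above them
# may pass through 1 at both lengths l_0 = 1 and l_1 = 2.
coding = [(1,), (0, 1)]
bad_tree = {(), (0,), (1,), (0, 1), (0, 1, 1)}
good_tree = {(), (0,), (1,), (0, 1), (0, 1, 0)}
```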
The next lemma characterizes tree representations of $K_k$-free graphs.
We say that the coding nodes in $T$ are {\em dense} in $T$ if
for each $t\in T$, there is some coding node $c_n\in T$ such that $t\sse c_n$.
Note that a finite tree $T$ in which the coding nodes are dense will necessarily have coding nodes (of differing lengths) as its maximal nodes.
\begin{lem}\label{lem.trianglefreerep}
Let $T\sse\Seq$ be a tree
with coding nodes $\lgl c_n:n\in N\rgl$
coding a countable graph $\G$ with vertices $\lgl v_n:n\in N\rgl$.
Assume that the coding nodes in $T$ are dense in $T$.
Then
$\G$ is a $K_k$-free graph if and only if the tree
$T$ satisfies the $K_k$-Free Criterion.
\end{lem}
\begin{proof}
If $T$ does not satisfy the $K_k$-Free Criterion,
then there are $i_0<\dots<i_{k-2}$ in $N$
and $t\in T$ with $|t|>l_{i_{k-2}}$ such that
$\{c_{i_j}:j<k-1\}$ codes a $(k-1)$-clique
and $t(l_{i_j})=1$ for all $j<k-1$.
Since the coding nodes are dense in $T$,
there is an $n>i_{k-2}$ such that $c_n\contains t$.
Then $\{c_{i_j}:j<k-1\}\cup\{c_n\}$ codes a $k$-clique.
On the other hand, if
$\G$ contains a $k$-clique,
then there are $i_0<\dots<i_{k-1}$ such that the coding nodes $\{c_{i_j}:j<k\}$ in $T$ code a $k$-clique, and these coding nodes witness the failure of
the $K_k$-Free Criterion in $T$.
\end{proof}
\begin{defn}[$K_k$-Free Branching Criterion]\label{defn.kFSC}
A tree $T$ with coding nodes $\lgl c_n:n\in N\rgl$
satisfies the {\em $K_k$-Free Branching Criterion ($k$-FBC)}
if $T$ is maximally branching, subject to satisfying the $K_k$-Free Criterion.
\end{defn}
Thus, $T$ satisfies the $K_k$-Free Branching Criterion if and only if
$T$ satisfies the $K_k$-Free Criterion, and
given any $n\in N$ and non-maximal node $t\in T$ of length $l_n$,
(a)
there is a node $t_0\in T$ such that $t_0\contains t^{\frown}0$, and (b)
there is a $t_1\in T$ such that
$t_1\contains t^{\frown}1$ if and only
if for all sequences $i_0<\dots<i_{k-2}=n$ such that $\{c_{i_j}:j<k-1\}$ codes a copy of $K_{k-1}$,
$t(l_{i_j})=0$ for at least one $j<k-2$.
As we move toward defining strong $K_k$-free trees in Definition \ref{defn.stft},
we recall that
the
modified Henson criterion $(A_k)'$ is satisfied by an infinite $K_k$-free graph if and only if it is ultrahomogeneous and universal for all countable $K_k$-free graphs.
The following reformulation translates $(A_k)'$ in terms of trees with coding nodes.
We say that a tree
$T\sse \Seq$ with coding nodes $\lgl c_n:n\in \mathbb{N}\rgl$ {\em satisfies property $(A_k)^{\tt tree}$} if the following hold:\vskip.1in
\begin{enumerate}
\item[$(A_k)^{\tt{tree}}$]
\begin{enumerate}
\item[(i)]
$T$ satisfies the $K_k$-Free Criterion.
\item[(ii)]
Let $\lgl F_n:n \in \mathbb{N}\rgl$ be any enumeration of finite subsets of $\mathbb{N}$ such that
for each $n \in \mathbb{N}$, $\max(F_n)<n-1$, and
each finite subset of $\mathbb{N}$ appears as $F_n$ for infinitely many indices $n$.
Given $n \in \mathbb{N}$,
if for each subset $J\sse F_n$ of size $k-1$,
$\{c_j:j\in J\}$ does not code a $(k-1)$-clique,
then
there is some $m\ge n$ such that
for all $j<n-1$,
$c_m(l_j)=1$ if and only if $j\in F_n$.
\end{enumerate}
\end{enumerate}
\begin{observation}\label{obs.ttimplieshomog}
If $T$ satisfies $(A_k)^{\tt tree}$, then the coding nodes in $T$ code $\mathcal{H}_k$.
\end{observation}
To see this,
suppose that $T$ satisfies $(A_k)^{\tt tree}$, and
let $\mathcal{H}$ be the graph with vertices $\lgl v_n:n \in \mathbb{N}\rgl$ where for $m<n$, $v_n\ E\ v_m$ if and only if $c_n(l_m)=1$.
Then
$\mathcal{H}$ satisfies Henson's property $(A_k)$, and hence is ultrahomogeneous and universal for countable $k$-clique-free graphs.
\begin{observation}\label{obs.anyHcoded}
Let $\mathcal{H}$ be a copy of $\mathcal{H}_k$
with vertices ordered as $\lgl v_n:n\in\mathbb{N}\rgl$.
Then $T_{\mathcal{H}}$ (recall Definition \ref{defn.T_H})
satisfies the $K_k$-Free Branching Criterion.
\end{observation}
The next lemma
shows that any finite tree with coding nodes satisfying the $k$-FBC, where all maximal nodes have length $l_{N-1}$,
has the property that every $1$-type over the graph represented by $\{c_i:i<N-1\}$ is represented by a maximal node in the tree.
This
is the vital step toward proving
Theorem \ref{thm.A_3treeimpliestrianglefreegraph}:
Any tree satisfying the $k$-FBC
with no maximal nodes and with
coding nodes forming a dense subset
codes the $k$-clique-free Henson graph.
\begin{lem}\label{lem.stftextension}
Let $T$ be a finite tree with coding nodes $\lgl c_n:n\in N\rgl$,
where $N\ge 1$,
satisfying the $K_k$-Free Branching Criterion
with all maximal nodes of length $l_{N-1}$.
Given any $F\sse N-1$ for which the set $\{c_n:n\in F\}$ codes no $(k-1)$-cliques,
there is a maximal node $t\in T$ such that
for all $n\in N-1$,
\begin{equation}
t(l_n)=1\ \ \Longleftrightarrow \ \ n\in F.
\end{equation}
\end{lem}
\begin{proof}
The proof is by induction on $N\ge 1$ over all such trees with $N$ coding nodes.
For $N=1$, $N-1=0$, so
the lemma trivially holds.
Now suppose $N\ge 2$ and that the lemma holds for all such trees with fewer than $N$ coding nodes.
Let $T$ be a tree with coding nodes $\lgl c_n:n\in N \rgl$ satisfying the $k$-FBC.
Let $F$ be a subset of $N-1$ such that $\{c_n:n\in F\}$ codes no $(k-1)$-cliques.
By the induction hypothesis,
$\{t\in T:|t|\le l_{N-2}\}$
satisfies the lemma.
So
there is a node $t$ in $T$ of length $l_{N-2}$ such that
for all $n\in N-2$, $t(l_n)=1$ if and only if $n\in F\setminus \{N-2\}$.
If $N-2\not\in F$,
then
any maximal node in $T$ extending $t^{\frown}0$
satisfies the lemma; such a node is guaranteed to exist
by the $k$-FBC.
Now suppose $N-2\in F$.
We claim that there is a maximal node $t'$ in $T$ which extends $t^{\frown}1$.
If not, then $t^{\frown}1$ is not in $\widehat{T}$.
By the $k$-FBC, this implies that
there is some sequence
$i_0<\dots<i_{k-2}= N-2$ such that
$\{c_{i_j}:j<k-1\}$ codes a $(k-1)$-clique
and $t(l_{i_j})=1$ for each $j<k-2$.
Since for all $i<N-2$, $t(l_i)=1$ if and only if $i\in F\setminus\{N-2\}$,
it follows that $\{i_j:j<k-2\}\sse F$.
But then $F\contains \{i_j:j<k-1\}$, which contradicts the assumption that $\{c_n:n\in F\}$ codes no $(k-1)$-cliques.
Therefore, $t^{\frown}1$ is in $\widehat{T}$.
Any maximal node $t'$ in $T$ extending $t^{\frown}1$ satisfies the lemma.
\end{proof}
\begin{rem}
Lemma \ref{lem.stftextension} says the following:
Suppose
$T$ is a tree with coding nodes $\lgl c_n:n\in N\rgl$ satisfying the $k$-FBC, $m+1\in N$,
and $\mathrm{G}$ is the graph represented by $\lgl c_n:n\in m\rgl$.
Let $\mathrm{G}'$ be any $K_k$-free graph on $m+1$ vertices such that $\mathrm{G}'\re m=\mathrm{G}$.
Then
there is a node $t\in T\re l_m$
such that letting $c'_{m}=t$,
the graph represented by $\{c_n:n\in m\}\cup\{c'_{m}\}$
is isomorphic to $\mathrm{G}'$.
\end{rem}
\begin{thm}\label{thm.A_3treeimpliestrianglefreegraph}
Let $T$ be a tree with infinitely many coding nodes
satisfying the $K_k$-Free Branching Criterion.
If $T$ has no maximal nodes and the coding nodes are dense in $T$,
then
$T$ satisfies $(A_k)^{\tt tree}$, and hence codes $\mathcal{H}_k$.
\end{thm}
\begin{proof}
Since $T$ satisfies the $k$-FBC, it automatically satisfies (i) of $(A_k)^{\tt tree}$.
Let $\lgl F_n:n\in\mathbb{N}\rgl$ be an enumeration of finite subsets of $\mathbb{N}$
where each set is repeated infinitely many times, and each $\max(F_n)<n-1$.
For $n=0$, $F_n$ is the empty set, so every coding node in $T$ fulfills (ii) of $(A_k)^{\tt tree}$.
Let $n\ge 1$ be given and suppose that for each subset
$J\sse F_n$ of size $k-1$, $\{c_j:j\in J\}$ does not code a
$(k-1)$-clique.
By Lemma \ref{lem.stftextension},
there is some node $t\in T$ of length $l_{n-1}$
such that for all $i<n-1$, $t(l_i)=1$ if and only if $i\in F_n$.
Since the coding nodes are dense in $T$, there is some
$m\ge n$ such that $c_m$ extends $t$.
This coding node $c_m$ fulfills (ii) of $(A_k)^{\tt tree}$.
\end{proof}
At this point, we have developed enough ideas and terminology to define strong $K_k$-free trees.
These will be special types of trees coding copies of $\mathcal{H}_k$ with additional properties which set the stage for their skew versions in Section \ref{sec.4} on which we will be able to develop a viable Ramsey theory.
We shall use {\em ghost coding nodes} of lengths $0$ through $k-3$.
Coding nodes will start at length $k-2$, and all coding nodes of length at least $k-2$ will end in a sequence of $(k-2)$ many $1$'s.
The effect is that coding nodes will only be extendible by $0$; coding nodes will never split.
This will serve to reduce the upper bound on the big Ramsey degrees for $\mathcal{H}_k$.
Recall that $\Seq_{\le n}$ is the set of all sequences of $0$'s and $1$'s of length $\le n$.
\begin{notation}
Throughout this paper, we use the notation $0^{(i)}$ and $1^{(i)}$ to denote sequences of length $i$ where all entries are $0$, or all entries are $1$, respectively.
\end{notation}
\begin{defn}[Strong $K_k$-Free Tree]\label{defn.stft}
A {\em strong $K_k$-free tree} is a tree with coding nodes, $(T,\mathbb{N};\sse,<,c)$, satisfying the following:
\begin{enumerate}
\item
$T$ has no maximal nodes, the coding nodes are dense in $T$, and no coding node splits in $T$.
\item
The first $k-2$ levels of $T$ are exactly $\Seq_{\le k-2}$,
and the least coding node $c_0$ is exactly $1^{(k-2)}$.
\item
For each $n\in \mathbb{N}$,
the $n$-th coding node $c_n$ has length $n+k-2$, and has as final segment a sequence of $k-2$ many $1$'s.
\item
$T$
satisfies the $K_k$-free Branching Criterion.
\end{enumerate}
Moreover, $T$ has ghost coding nodes $c_{-k+2},\dots,c_{-1}$ defined by $c_n=1^{(k+n-2)}$ for $n\in[-k+2,-1]$, where $1^{(0)}$ denotes the empty sequence.
A
{\em finite strong $K_k$-free tree} is the restriction
of a strong $K_k$-free tree to some finite level.
\end{defn}
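The ghost coding nodes of Definition \ref{defn.stft} are easy to tabulate. The following sketch (our notation, for illustration only) lists $c_{-k+2},\dots,c_0$ for a given $k$.

```python
# Sketch: the ghost coding nodes c_{-k+2},...,c_{-1} together with the
# least genuine coding node c_0 = 1^(k-2), following the rule
# c_n = 1^(k+n-2) for -k+2 <= n <= 0.

def initial_coding_nodes(k):
    return {n: (1,) * (k + n - 2) for n in range(-k + 2, 1)}
```

For $k=3$ this gives the single ghost node $c_{-1}=\lgl\rgl$ and $c_0=\lgl 1\rgl$, matching Example \ref{ex.stft}.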
By Theorem
\ref{thm.A_3treeimpliestrianglefreegraph},
each strong $K_k$-free tree codes $\mathcal{H}_k$.
\begin{rem}
Items (1) and (4) in Definition
\ref{defn.stft} ensure that the tree represents a $K_k$-free Henson graph.
Items (2) and (3) serve to reduce our bounds on the big Ramsey degrees by ensuring that coding nodes never split.
For any node $t$ in $T$ with all entries being $0$, the subtree $S$ of all nodes in $T$ extending $t$ codes a copy of $\mathcal{H}_k$, by Theorem
\ref{thm.A_3treeimpliestrianglefreegraph}.
Moreover,
the structure of the first $k-2$ levels of such a subtree $S$ is tree isomorphic to $\Seq_{\le k-2}$.
Thus, it makes sense to require (2) in Definition
\ref{defn.stft}.
The ghost coding nodes provide the structure which subtrees coding $\mathcal{H}_k$ in the same order as $T$ automatically inherit.
This will enable us to build the collection of all subtrees of a given tree $T$ which are isomorphic to $T$ in a strong way, to be made precise in the next section.
\end{rem}
We now present a method for constructing strong $K_k$-free trees.
For $k=3$,
this construction method simplifies the construction of a strong triangle-free tree coding $\mathcal{H}_3$ in
Theorem 3.14 of
\cite{DobrinenJML20} and accomplishes the same goals.
The aim of Example \ref{thm.stftree}
is to
build the reader's understanding of the principal structural properties
of the trees on which we will develop Ramsey theory, before defining their skew versions in the next section.
\begin{example}[Construction Method for a Strong $K_k$-Free Tree, $\bS_k$]\label{thm.stftree}
Recall that by Theorem
\ref{thm.A_3treeimpliestrianglefreegraph},
each strong $K_k$-free tree codes $\mathcal{H}_k$.
Let
$\lgl u_i: i\in \mathbb{N}\rgl$ be any enumeration of $\Seq$
such that $|u_i|\le |u_j|$ whenever $i<j$.
Notice that in particular, $|u_i|\le i$.
We will build a strong $K_k$-free tree $\bS_k\sse \Seq$
with the $n$-th coding node $c_n$ of length $l_n=n+k-2$ and
satisfying the following additional conditions for $n\ge k-2$:
\begin{enumerate}
\item[(i)]
If $n\equiv 0 \ (\mathrm{mod\ } k-1)$, $i=n/(k-1)$,
and
$u_i $ is in $\bS_k\cap \Seq_{\le i+1}$,
then $c_n\contains u_i$.
\item[(ii)]
Otherwise,
$c_n={0^{(n)}}^{\frown}1^{(k-2)}$.
\end{enumerate}
The first $k-2$ levels of $\bS_k$ are exactly $\Seq_{\le k-2}$.
The ghost coding nodes are defined as in Definition \ref{defn.stft}, with $c_{-k+2}$ being the empty sequence and $c_{-1}$ being $1^{(k-3)}$.
The shortest coding node is $c_0=1^{(k-2)}$.
Notice that since $u_0=\lgl\rgl$, $c_0$ extends $u_0$.
$\bS_k$ will have nodes of every finite length, so
$\bS_k(n)=\bS_k \re n$ for each $n\in \bN$.
We shall let $r_n(\bS_k)$ denote $\bigcup_{m<n}\bS_k(m)$;
this notation comes from topological Ramsey space theory in \cite{TodorcevicBK10} and will be used again in the next section.
Then $r_{k-1}(\bS_k)=\Seq_{\le k-2}$.
Extend the nodes of $\bS_k(k-2)$ according to the $k$-FBC with respect to
$\{c_{-k+2},\dots, c_0\}$
to form the next level
$\bS_k(k-1)$.
Let $c_{1}= \lgl 0\rgl^{\frown}1^{(k-2)}$.
This node is in
$\bS_k(k-1)$, since it codes no $k$-cliques with
$\{c_{-k+2},\dots, c_0\}$.
Extend the nodes of $\bS_k(k-1)$ according to the $k$-FBC with respect to
$\{c_{-k+2},\dots, c_1\}$
to form the next level $\bS_k(k)$.
So far,
(1)--(4) of Definition \ref{defn.stft} and (i) and (ii) above are satisfied.
Given $n\ge 2$, suppose we
have defined
$r_{l_n+1}(\bS_k)$ and
$\{c_{-k+2},\dots,c_{n-1}\}$ so that (1)--(4) and (i) and (ii) hold so far.
If $n\not\equiv 0 \ (\mathrm{mod\ } k-1)$,
or $n\equiv 0 \ (\mathrm{mod\ } k-1)$ but $u_i\not\in r_{i+1}(\bS_k)$, where $i=n/(k-1)$,
then define
$c_n={0^{(n)}}^{\frown}1^{(k-2)}$.
This node is in $\bS_k(l_n)$ by the $k$-FBC, since
the only nodes
$c_n$ codes edges with are
exactly the coding nodes $c_{n-k+2},\dots, c_{n-1}$.
Now suppose that $n\equiv 0 \ (\mathrm{mod\ } k-1)$ and
$u_i$ is in
$r_{i+1}(\bS_k)$, where $i=n/(k-1)$.
Let $q$ denote $n- |u_i|$
and
define $v={u_i}^{\frown} {0^{(q)}}^{\frown}1^{(k-2)}$.
We claim that $v$ is in
$\bS_k(l_n)$.
Let $-k+2\le m_0<\dots<m_{k-2}\le n-1$ and suppose that
$\{c_{m_j}: j\in k-1\}$ is a set of coding nodes in $r_{l_n}(\bS_k)$ coding a $(k-1)$-clique.
If $l_{m_{k-2}}< |u_i|$, then $v(l_{m_j})=0$ for at least one $j\in k-1$, since $u_i$ is in $r_{ |u_i|+1 }(\bS_k)$ which satisfies the $k$-FBC.
Let $w=
{u_i}^{\frown} {0^{(q)}}$.
If for some $j\in k-1$, $|u_i|\le l_{m_j}<|w|$,
then $v(l_{m_j})=0$.
For the following, it is important to notice that
$q\ge k-2$.
If for some $j<j'$, $ |c_{m_j}|<|u_i|$ and $|w|\le |c_{m_{j'}}|$,
then $c_{m_{j'}}(l_{m_j})=0$, contrary to our assumption that $\{c_{m_j}: j\in k-1\}$ codes a $(k-1)$-clique.
Lastly, suppose $l_{m_0}\ge |w|-1$.
Then the nodes $c_{m_j}$, $j\in k-1$, are exactly the coding nodes
$c_{n- k+1},\dots, c_{n-1}$.
Thus, $v(l_{m_0})=0$.
Therefore, by the $k$-FBC, $v$ is in $\bS_k(n+1)$.
Let $c_n=v$ and split according to the $k$-FBC to construct $\bS_k(l_n+1)$.
This satisfies (1)--(4) and (i) and (ii).
This inductive process constructs a tree $\bS_k=\bigcup_{n<\om}\bS_k(n)$ which is a strong $K_k$-free tree satisfying (i) and (ii).
\end{example}
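The computation behind the leftmost coding nodes $c_n={0^{(n)}}^{\frown}1^{(k-2)}$ can be checked mechanically. The following Python sketch is an illustration we add here, not part of the construction; indices are shifted to start at $0$ rather than $-k+2$. It reads off edges from passing numbers and confirms that $c_n$ codes edges with exactly the previous $k-2$ coding nodes, so the coded graph contains $(k-1)$-cliques but no $k$-clique.

```python
from itertools import combinations

def leftmost_coding_node(n, k):
    """The node c_n = 0^(n) followed by 1^(k-2), as a 0-1 tuple."""
    return (0,) * n + (1,) * (k - 2)

def codes_edge(c_m, c_n):
    """c_n codes an edge with an earlier coding node c_m iff the
    passing number of c_n at level |c_m| is 1."""
    return c_n[len(c_m)] == 1

for k in (3, 4, 5):
    N = 12
    nodes = [leftmost_coding_node(n, k) for n in range(N)]
    edges = {(m, n) for m in range(N) for n in range(m + 1, N)
             if codes_edge(nodes[m], nodes[n])}
    # c_n codes edges with exactly c_{n-k+2}, ..., c_{n-1}:
    assert edges == {(m, n) for m in range(N) for n in range(m + 1, N)
                     if n - m <= k - 2}
    def is_clique(vs):
        return all((a, b) in edges for a, b in combinations(vs, 2))
    # The coded graph contains (k-1)-cliques but no k-clique.
    assert is_clique(range(k - 1))
    assert not any(is_clique(vs) for vs in combinations(range(N), k))
```

The check runs over $k=3,4,5$; the same argument works for every $k\ge 3$, since two vertices are adjacent exactly when their indices differ by at most $k-2$.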
\begin{rem}
For $k=3$, the construction above produces a strong triangle-free tree in the sense of \cite{DobrinenJML20}, albeit in a more streamlined fashion.
\end{rem}
\begin{example}[A Strong $K_3$-Free Tree, $\bS_3$]\label{ex.stft}
In keeping with the construction method above,
we present the first several levels of the construction of
$\bS_3$.
Let $u_0$ denote the empty sequence, and suppose $u_1=\lgl 1\rgl$ and $u_2=\lgl 0\rgl$.
The ghost coding node is $c_{-1}=\lgl\rgl$, the empty sequence.
The coding nodes $c_n$ where $n$ is odd will be $c_n={0^{(n)}}^{\frown}1$.
In particular, $c_1=\lgl 0,1\rgl$, $c_3=\lgl 0,0,0,1\rgl$, $c_5=\lgl 0,0,0,0,0,1\rgl$, etc.
These nodes are in every tree satisfying the $K_3$-Free Branching Criterion.
Let $c_0=\lgl 1\rgl$;
this node extends
$u_0=\lgl\rgl$.
Let $c_2=\lgl 1,0,1\rgl$; this node extends $u_1=\lgl 1\rgl$.
Let $c_4=\lgl 0,1,0,0,1\rgl$; this node extends $u_2=\lgl 0\rgl$.
If $u_3=\lgl 1,1\rgl$, then since this node has no extension in $\bS_3(7)$, we let $c_6={0^{(6)}}^{\frown}1$.
If $u_3=\lgl 1,0\rgl$, we
can let $c_6$ be any node in $\bS_3(7)$ extending $u_3$.
For instance, if we care about making $\bS_3$ recursively definable with respect to the sequence $\lgl u_i:i\in \mathbb{N}\rgl$, we can let $c_6$ be the rightmost extension of $u_3$ in $\bS_3(7)$ which has last entry $1$, namely
$c_6= \lgl 1,0,1,0,1,0,1\rgl$.
In this manner, one constructs a tree such as in Figure 3.
\begin{figure}\label{fig.bS3}
\begin{tikzpicture}[grow'=up,scale=.45]
\tikzstyle{level 1}=[sibling distance=5in]
\tikzstyle{level 2}=[sibling distance=3in]
\tikzstyle{level 3}=[sibling distance=1.8in]
\tikzstyle{level 4}=[sibling distance=1.1in]
\tikzstyle{level 5}=[sibling distance=.7in]
\tikzstyle{level 6}=[sibling distance=0.4in]
\tikzstyle{level 7}=[sibling distance=0.2in]
\node {} coordinate(t)
child{coordinate (t0)
child{coordinate (t00)
child{coordinate (t000)
child {coordinate(t0000)
child{coordinate(t00000)
child{coordinate(t000000)
child{coordinate(t0000000)}
child{coordinate(t0000001)}
}
child{coordinate(t000001)
child{coordinate(t0000010)}
child{ edge from parent[draw=none] coordinate(t0000011)}
}
}
child{coordinate(t00001)
child{coordinate(t000010)
child{coordinate(t0000100)}
child{coordinate(t0000101)}
}
child{ edge from parent[draw=none] coordinate(t000011)
}
}}
child {coordinate(t0001)
child {coordinate(t00010)
child {coordinate(t000100)
child {coordinate(t0001000)}
child {coordinate(t0001001)}
}
child {coordinate(t000101)
child {coordinate(t0001010)}
child { edge from parent[draw=none] coordinate(t0001011)}
}
}
child{coordinate(t00011) edge from parent[draw=none] }}}
child{ coordinate(t001)
child{ coordinate(t0010)
child{ coordinate(t00100)
child{ coordinate(t001000)
child{ coordinate(t0010000)}
child{ coordinate(t0010001)}
}
child{ coordinate(t001001)
child{ coordinate(t0010010)}
child{ edge from parent[draw=none] coordinate(t0010011)}
}
}
child{ coordinate(t00101)
child{ coordinate(t001010)
child{ coordinate(t0010100)}
child{ coordinate(t0010101)}
}
child{ edge from parent[draw=none]coordinate(t001011)
}
}}
child{ edge from parent[draw=none] coordinate(t0011)}}}
child{ coordinate(t01)
child{ coordinate(t010)
child{ coordinate(t0100)
child{ coordinate(t01000)
child{ coordinate(t010000)
child{ coordinate(t0100000)}
child{ coordinate(t0100001)}
}
child{ edge from parent[draw=none] coordinate(t010001)
}
}
child{ coordinate(t01001)
child{ coordinate(t010010)
child{ coordinate(t0100100)}
child{ coordinate(t0100101)}
}
child{edge from parent[draw=none] coordinate(t010011)}
}}
child{ coordinate(t0101)
child{ coordinate(t01010)
child{ coordinate(t010100)
child{ coordinate(t0101000)}
child{coordinate(t0101001)}
}
child{ edge from parent[draw=none] coordinate(t010101)
}
}
child{ edge from parent[draw=none] coordinate(t01011)
}}}
child{ edge from parent[draw=none] coordinate(t011)}}}
child{ coordinate(t1)
child{ coordinate(t10)
child{ coordinate(t100)
child{ coordinate(t1000)
child{ coordinate(t10000)
child{ coordinate(t100000)
child{ coordinate(t1000000)}
child{ coordinate(t1000001)}
}
child{ coordinate(t100001)
child{ coordinate(t1000010)}
child{ edge from parent[draw=none] coordinate(t1000011)}
}
}
child{ coordinate(t10001)
child{coordinate(t100010)
child{coordinate(t1000100)}
child{coordinate(t1000101)}}
child{edge from parent[draw=none] coordinate(t100011)}
}}
child{edge from parent[draw=none] coordinate(t1001)}}
child{ coordinate(t101)
child{ coordinate(t1010)
child{ coordinate(t10100)
child{ coordinate(t101000)
child{ coordinate(t1010000)}
child{ coordinate(t1010001)}
}
child{ coordinate(t101001)
child{ coordinate(t1010010)}
child{ edge from parent[draw=none] coordinate(t1010011)}
}
}
child{ coordinate(t10101)
child{ coordinate(t101010)
child{ coordinate(t1010100)}
child{ coordinate(t1010101)}
}
child{ edge from parent[draw=none] coordinate(t101011)}
}}
child{ edge from parent[draw=none] coordinate(t1011)}}}
child{ edge from parent[draw=none] coordinate(t11)} };
\node[below] at (t) {${\color{gray}c_{-1}}$};
\node[right] at (t1) {$c_0$};
\node[left] at (t01) {$c_1$};
\node[right] at (t101) {$c_2$};
\node[right] at (t0001) {$c_3$};
\node[left] at (t01001) {$c_4$};
\node[left] at (t000001) {$c_5$};
\node[right] at (t1010101) {$c_6$};
\node[circle, fill=gray,inner sep=0pt, minimum size=5pt] at (t) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t1) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t01) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t101) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t0001) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t01001) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t000001) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t1010101) {};
\draw[dotted] let \p1=(t) in (-18,\y1) node (v00) {${\color{gray}\bullet}$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t1) in (-18,\y1) node (v0) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t01) in (-18,\y1) node (v1) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t001) in (-18,\y1) node (v2) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t0001) in (-18,\y1) node (v3) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t01001) in (-18,\y1) node (v4) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t000001) in (-18,\y1) node (v5) {$\bullet$} -- (7,\y1);
\draw[thick, dotted] let \p1=(t1010101) in (-18,\y1) node (v6) {$\bullet$} -- (7,\y1);
\node[left] at (v00) {${\color{gray}v_{-1}}$};
\node[left] at (v0) {$v_0$};
\node[left] at (v1) {$v_1$};
\node[left] at (v2) {$v_2$};
\node[left] at (v3) {$v_3$};
\node[left] at (v4) {$v_4$};
\node[left] at (v5) {$v_5$};
\node[left] at (v6) {$v_6$};
\draw[thick] (v0.center) to (v1.center) to (v2.center) to (v3.center);
\draw[thick] (v3.center) to (v4.center) to (v5.center) to (v6.center) ;
\draw[thick] (v6.center) to [bend left] (v3.center);
\draw[thick] (v4.center) to [bend left] (v0.center);
\draw[thick] (v6.center) to [bend left] (v1.center);
\draw[gray] (v00.center) to [bend left] (v2.center);
\draw[gray] (v00.center) to [bend left] (v6.center);
\draw[gray] (v00.center) to (v0.center);
\end{tikzpicture}
\caption{A strong triangle-free tree $\bS_3$ densely coding $\mathcal{H}_3$}
\end{figure}
\end{example}
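As a sanity check of our own (not part of the example), one can verify by machine that the coding nodes listed above code the triangle-free graph drawn in Figure 3: for $m<n$, the vertices $v_m$ and $v_n$ are adjacent exactly when $c_n(|c_m|)=1$.

```python
from itertools import combinations

# Coding nodes of S_3 as listed above; the ghost node c_{-1} is the empty tuple.
coding = {
    -1: (),
     0: (1,),
     1: (0, 1),
     2: (1, 0, 1),
     3: (0, 0, 0, 1),
     4: (0, 1, 0, 0, 1),
     5: (0, 0, 0, 0, 0, 1),
     6: (1, 0, 1, 0, 1, 0, 1),
}

# v_m ~ v_n (m < n) iff the passing number of c_n at level |c_m| is 1.
edges = {(m, n) for m, n in combinations(sorted(coding), 2)
         if coding[n][len(coding[m])] == 1}

# The coded graph is triangle-free.
for a, b, c in combinations(sorted(coding), 3):
    assert not ((a, b) in edges and (a, c) in edges and (b, c) in edges)

# The edges among v_0, ..., v_6 match those drawn in Figure 3.
assert {e for e in edges if e[0] >= 0} == {
    (0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (4, 5), (5, 6), (3, 6), (1, 6)}
```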
\begin{example}[A Strong $K_4$-Free Tree, $\bS_4$]\label{ex.S4}
The tree $\bS_4$ shown in Figure 4 is an example of a strong $K_4$-free tree.
Let $u_0$ denote the empty sequence, and suppose $u_1=\lgl 1\rgl$ and $u_2=\lgl 0\rgl$.
The ghost coding nodes are $c_{-2}=\lgl \rgl$ and $c_{-1}=\lgl 1\rgl$.
According to the construction in Example \ref{thm.stftree},
the first three coding nodes of $\bS_4$ are
$c_0=\lgl 1,1\rgl$, which extends $u_0$,
$c_1=\lgl 0,1,1\rgl$, and $c_2=\lgl 0,0,1,1\rgl$,
each time splitting according to the $K_4$-Free Branching Criterion ($4$-FBC) to construct the tree $r_5(\bS_4)$.
Split again according to the $4$-FBC to construct the next level of the tree, $\bS_4(5)$.
Since $u_1$ is in $r_3(\bS_4)$,
letting $c_3=\lgl 1,1,0,1,1\rgl$ satisfies requirement (i) in Example \ref{thm.stftree}.
Let $c_4=\lgl 0,0,0,0,1,1\rgl$ and $c_5=\lgl 0,0,0,0,0,1,1\rgl$.
Since $u_2=\lgl 0\rgl$ is in $r_4(\bS_4)$, taking $c_6$ to be
$\lgl 0,1,1,0,0,0,1,1\rgl$ satisfies our requirements.
One can check that this node is in $\bS_4(8)$.
(One could also simply let $c_6$ be ${0^{(6)}}^{\frown}\lgl 1,1\rgl$.)
Continue the construction according to Example \ref{thm.stftree}.
\begin{figure}\label{fig.bS4}
\begin{tikzpicture}[grow'=up,scale=.4]
\tikzstyle{level 1}=[sibling distance=6in]
\tikzstyle{level 2}=[sibling distance=3.2in]
\tikzstyle{level 3}=[sibling distance=1.6in]
\tikzstyle{level 4}=[sibling distance=.8in]
\tikzstyle{level 5}=[sibling distance=.4in]
\tikzstyle{level 6}=[sibling distance=0.2in]
\tikzstyle{level 7}=[sibling distance=0.1in]
\node {} coordinate(t)
child{coordinate (t0)
child{coordinate (t00)
child{coordinate (t000)
child {coordinate(t0000)
child{coordinate(t00000)
child{coordinate(t000000)
child{coordinate(t0000000)}
child{coordinate(t0000001)}
}
child{coordinate(t000001)
child{coordinate(t0000010)}
child{ coordinate(t0000011)}
}
}
child{coordinate(t00001)
child{coordinate(t000010)
child{coordinate(t0000100)}
child{coordinate(t0000101)}
}
child{ coordinate(t000011)
child{ coordinate(t0000110)}
child{ coordinate(t0000111) edge from parent[draw=none] }
}
}}
child {coordinate(t0001)
child {coordinate(t00010)
child {coordinate(t000100)
child {coordinate(t0001000)}
child { coordinate(t0001001)}
}
child {coordinate(t000101)
child {coordinate(t0001010)}
child { coordinate(t0001011)}
}
}
child{coordinate(t00011)
child{coordinate(t000110)
child{coordinate(t0001100)}
child{coordinate(t0001101)}
}
child{coordinate(t000111)edge from parent[draw=none] }
}}}
child{ coordinate(t001)
child{ coordinate(t0010)
child{ coordinate(t00100)
child{ coordinate(t001000)
child{ coordinate(t0010000)}
child{ coordinate(t0010001)}
}
child{ coordinate(t001001)
child{ coordinate(t0010010)}
child{ coordinate(t0010011)}
}
}
child{ coordinate(t00101)
child{ coordinate(t001010)
child{ coordinate(t0010100)}
child{ coordinate(t0010101)}
}
child{ coordinate(t001011)
child{ coordinate(t0010110)}
child{ coordinate(t0010111) edge from parent[draw=none] }
}
}}
child{ coordinate(t0011)
child{ coordinate(t00110)
child{ coordinate(t001100)
child{ coordinate(t0011000)}
child{ coordinate(t0011001)}
}
child{ coordinate(t001101)
child{ coordinate(t0011010)}
child{ coordinate(t0011011)}
}
}
child{ coordinate(t00111)edge from parent[draw=none]}
}}}
child{ coordinate(t01)
child{ coordinate(t010)
child{ coordinate(t0100)
child{ coordinate(t01000)
child{ coordinate(t010000)
child{ coordinate(t0100000)}
child{ coordinate(t0100001)}
}
child{ coordinate(t010001)
child{ coordinate(t0100010)}
child{ coordinate(t0100011)}
}
}
child{ coordinate(t01001)
child{ coordinate(t010010)
child{ coordinate(t0100100)}
child{ coordinate(t0100101)}
}
child{ coordinate(t010011)
child{ coordinate(t0100110)}
child{ coordinate(t0100111)edge from parent[draw=none] }
}
}}
child{ coordinate(t0101)
child{ coordinate(t01010)
child{ coordinate(t010100)
child{ coordinate(t0101000)}
child{ coordinate(t0101001)}
}
child{ coordinate(t010101)edge from parent[draw=none]}
}
child{ coordinate(t01011)
child{ coordinate(t010110)
child{ coordinate(t0101100)}
child{ coordinate(t0101101)}
}
child{ coordinate(t010111)edge from parent[draw=none] }
}}}
child{ coordinate(t011)
child{ coordinate(t0110)
child{ coordinate(t01100)
child{ coordinate(t011000)
child{ coordinate(t0110000)}
child{ coordinate(t0110001)}
}
child{ coordinate(t011001)
child{ coordinate(t0110010)}
child{ coordinate(t0110011)}
}
}
child{ coordinate(t01101)
child{ coordinate(t011010)
child{ coordinate(t0110100)}
child{ coordinate(t0110101)}
}
child{ coordinate(t011011)
child{ coordinate(t0110110)}
child{ coordinate(t0110111)edge from parent[draw=none] }}
}
}
child{ coordinate(t0111)edge from parent[draw=none]
}
}}}
child{ coordinate(t1)
child{ coordinate(t10)
child{ coordinate(t100)
child{ coordinate(t1000)
child{ coordinate(t10000)
child{ coordinate(t100000)
child{ coordinate(t1000000)}
child{ coordinate(t1000001)}
}
child{ coordinate(t100001)
child{ coordinate(t1000010)}
child{ coordinate(t1000011)}
}
}
child{ coordinate(t10001)
child{ coordinate(t100010)
child{ coordinate(t1000100)}
child{ coordinate(t1000101)}
}
child{ coordinate(t100011)
child{ coordinate(t1000110)}
child{ coordinate(t1000111)edge from parent[draw=none] }
}
}}
child{ coordinate(t1001)
child{ coordinate(t10010)
child{ coordinate(t100100)
child{ coordinate(t1001000)}
child{ coordinate(t1001001)}
}
child{ coordinate(t100101)
child{ coordinate(t1001010)}
child{ coordinate(t1001011)}
}
}
child{ coordinate(t10011)
child{ coordinate(t100110)
child{ coordinate(t1001100)}
child{ coordinate(t1001101)}
}
child{ coordinate(t100111)edge from parent[draw=none] }
}}}
child{ coordinate(t101)
child{ coordinate(t1010)
child{ coordinate(t10100)
child{ coordinate(t101000)
child{ coordinate(t1010000)}
child{ coordinate(t1010001)}
}
child{ coordinate(t101001)
child{ coordinate(t1010010)}
child{ coordinate(t1010011)}
}
}
child{ coordinate(t10101)
child{ coordinate(t101010)
child{ coordinate(t1010100)}
child{ coordinate(t1010101)}
}
child{ coordinate(t101011)
child{ coordinate(t1010110)}
child{ coordinate(t1010111)edge from parent[draw=none]}
}
}}
child{ coordinate(t1011)
child{ coordinate(t10110)
child{ coordinate(t101100)
child{ coordinate(t1011000)}
child{ coordinate(t1011001)}
}
child{ coordinate(t101101)
child{ coordinate(t1011010)}
child{ coordinate(t1011011)}
}
}
child{ coordinate(t10111)
edge from parent[draw=none] }
}}}
child{ coordinate(t11)
child{coordinate (t110)
child{coordinate (t1100)
child{coordinate (t11000)
child{coordinate(t110000)
child{coordinate(t1100000)}
child{coordinate(t1100001)}
}
child{coordinate(t110001) edge from parent[draw=none] }}
child{coordinate(t11001)
child{coordinate(t110010)
child{coordinate(t1100100)}
child{coordinate(t1100101)}}
child{coordinate(t110011) edge from parent[draw=none] }
}}
child{coordinate (t1101)
child{coordinate(t11010)
child{coordinate(t110100)
child{coordinate(t1101000)}
child{coordinate(t1101001)}}
child{coordinate(t110101) edge from parent[draw=none] }
}
child{coordinate(t11011)
child{coordinate(t110110)
child{coordinate(t1101100)}
child{coordinate(t1101101)}
}
child{coordinate(t110111) edge from parent[draw=none] }
}}}
child{coordinate(t111) edge from parent[draw=none] }} };
\node[below] at (t) {${\color{gray} c_{-2}}$};
\node[right] at (t1) {$\ \ {\color{gray}c_{-1}}$};
\node[right] at (t11) {$c_0$};
\node[left] at (t011) {$c_1$};
\node[right] at (t0011) {$c_2$};
\node[right] at (t11011) {$c_3$};
\node[left] at (t000011) {$c_4$};
\node[circle, fill=gray,inner sep=0pt, minimum size=5pt] at (t) {};
\node[circle, fill=gray,inner sep=0pt, minimum size=5pt] at (t1) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t11) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t011) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t0011) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t11011) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t000011) {};
\draw[dotted] let \p1=(t) in (-17,\y1) node (v02) {${\color{gray}\bullet}$} -- (13,\y1);
\draw[dotted] let \p1=(t1) in (-17,\y1) node (v01) {${\color{gray}\bullet}$} -- (13,\y1);
\draw[thick, dotted] let \p1=(t11) in (-17,\y1) node (v0) {$\bullet$} -- (13,\y1);
\draw[thick, dotted] let \p1=(t011) in (-17,\y1) node (v1) {$\bullet$} -- (13,\y1);
\draw[thick, dotted] let \p1=(t0011) in (-17,\y1) node (v2) {$\bullet$} -- (13,\y1);
\draw[thick, dotted] let \p1=(t11011) in (-17,\y1) node (v3) {$\bullet$} -- (13,\y1);
\draw[thick, dotted] let \p1=(t000011) in (-17,\y1) node (v4) {$\bullet$} -- (13,\y1);
\node[left, gray] at (v02) {$v_{-2}$};
\node[left, gray] at (v01) {$v_{-1}$};
\node[left] at (v0) {$v_0$};
\node[left] at (v1) {$v_1$};
\node[left] at (v2) {$v_2$};
\node[left] at (v3) {$v_3$};
\node[left] at (v4) {$v_4$};
\draw[gray] (v02.center) to (v01.center) to (v0.center) to [bend right] (v02.center);
\draw[thick] (v0.center) to (v1.center) ;
\draw[gray] (v1.center) to [bend right] (v01.center);
\draw[thick] (v1.center) to (v2.center)to [bend left] (v0.center) ;
\draw[thick] (v2.center) to (v3.center)to [bend left] (v1.center) ;
\draw[gray] (v3.center) to [bend right] (v01.center);
\draw[gray] (v3.center) to [bend right] (v02.center);
\draw[thick] (v3.center) to (v4.center)to [bend left] (v2.center) ;
\end{tikzpicture}
\caption{A strong $K_4$-free tree $\bS_4$ densely coding $\mathcal{H}_4$}
\end{figure}
\end{example}
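The same passing-number computation applies to $\bS_4$. The following sketch (again ours, for illustration only) confirms that the coding nodes listed above, together with the ghost coding nodes, code a graph containing triangles but no $4$-clique, in agreement with Figure 4.

```python
from itertools import combinations

# Coding nodes of S_4 as listed above, including the ghost nodes.
coding = {
    -2: (),
    -1: (1,),
     0: (1, 1),
     1: (0, 1, 1),
     2: (0, 0, 1, 1),
     3: (1, 1, 0, 1, 1),
     4: (0, 0, 0, 0, 1, 1),
     5: (0, 0, 0, 0, 0, 1, 1),
     6: (0, 1, 1, 0, 0, 0, 1, 1),
}

# v_m ~ v_n (m < n) iff the passing number of c_n at level |c_m| is 1.
edges = {(m, n) for m, n in combinations(sorted(coding), 2)
         if coding[n][len(coding[m])] == 1}

def is_clique(vs):
    return all((a, b) in edges for a, b in combinations(vs, 2))

# The coded graph contains triangles but no 4-clique.
assert is_clique((1, 2, 3))
assert not any(is_clique(vs) for vs in combinations(sorted(coding), 4))
```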
As in the case of $\mathcal{H}_3$ in \cite{DobrinenJML20},
the purpose of not allowing coding nodes to split is to
reduce the number of different types of trees coding a given finite $K_k$-free graph.
Having the coding nodes be dense in the tree is necessary for the development of the Ramsey theory.
However,
the same example of a bad coloring as given in Example 3.18 of \cite{DobrinenJML20}
provides
a bad coloring for any strong $K_k$-free tree, for any $k\ge 3$.
\begin{example}[A bad coloring of vertices in $\mathcal{H}_k$]\label{ex.badcoloring}
Let $k\ge 3$ be fixed.
Color all coding nodes in $\bS_k$ extending $\lgl 1\rgl$ red.
In particular, $c_0$, which extends $\lgl 1\rgl$, is colored red.
Given $n\ge 0$, suppose that for each $i\le n$, all coding nodes in $\bS_k$ extending ${0^{(i)}}^{\frown}1$ have been colored either red or blue.
Look at the coding node $c_{n}$.
This node has length $n+k-2$ and has already been assigned a color.
If $c_{n}$ is red, then color every coding node extending
${0^{(n+1)}}^{\frown}1$ blue;
if $c_{n}$ is blue, then color every coding node extending
${0^{(n+1)}}^{\frown}1$ red.
Notice that any subtree of $\bS_k$ which is strongly similar to $\bS_k$ in the sense of Definition \ref{def.3.1.Sauer},
where additionally coding nodes are sent to coding nodes, has coding nodes of both colors.
(See Definition \ref{def.3.1.likeSauer} for the precise definition of {\em strongly similar} for trees with coding nodes.)
Equivalently, any subtree of $\bS_k$ which is again a strong $K_k$-free tree representing a copy of $\mathcal{H}_k$ has coding nodes of both colors.
\end{example}
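Since the color of a coding node in this scheme depends only on the position of its first entry equal to $1$, the coloring is easy to simulate. The sketch below (our interpretation of the scheme, run on the coding nodes $c_0,\dots,c_6$ of $\bS_3$ from Example \ref{ex.stft}) shows that both colors indeed occur among the coding nodes.

```python
def first_one(node):
    """Index of the first entry equal to 1; this determines the color class."""
    return node.index(1)

def color_coding_nodes(coding_nodes):
    """Colors per the inductive scheme: nodes extending <1> are red, and
    nodes extending 0^(n+1) 1 get the color opposite to that of c_n."""
    class_color = {0: 'red'}   # color of the class of nodes extending 0^(i) 1
    colors = []
    for n, c in enumerate(coding_nodes):
        colors.append(class_color[first_one(c)])
        class_color[n + 1] = 'blue' if colors[n] == 'red' else 'red'
    return colors

# Coding nodes c_0, ..., c_6 of S_3.
s3 = [(1,), (0, 1), (1, 0, 1), (0, 0, 0, 1), (0, 1, 0, 0, 1),
      (0, 0, 0, 0, 0, 1), (1, 0, 1, 0, 1, 0, 1)]
colors = color_coding_nodes(s3)
assert colors[0] == 'red'
assert set(colors) == {'red', 'blue'}
```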
The coloring in Example \ref{ex.badcoloring} is equivalent to a coloring of the vertices of $\mathcal{H}_k$.
Recall that the work of Komj\'{a}th and \Rodl\ in \cite{Komjath/Rodl86} for $k=3$, and of El-Zahar and Sauer in \cite{El-Zahar/Sauer89} for $k\ge 4$, shows that for any coloring of the vertices of
$\mathcal{H}_k$ into two colors, there is a subgraph $\mathcal{H}'$ which is again a $K_k$-free Henson graph and in which all vertices have the same color.
However, the previous example shows that we cannot expect the subgraph $\mathcal{H}'$ to have an induced tree with coding nodes,
$T_{\mathcal{H}'}$, strongly similar to $\bS_k$.
Since we are aiming to prove Ramsey theorems on collections of trees with coding nodes which are all strongly similar to each other,
we immediately turn to the next section where we present the skewed version of these trees on which the relevant Ramsey theory can be developed.
\section{Strong $\mathcal{H}_k$-coding trees}\label{sec.4}
The classes of trees coding Henson graphs on which we develop Ramsey theory are presented in this section.
For each $k\ge 3$,
fixing a tree $\bS_k$ constructed as in Example \ref{thm.stftree}, we construct its skew version, denoted $\bT_k$, the skewing being necessary to avoid the bad colorings seen in Example
\ref{ex.badcoloring}.
The coding nodes in $\bT_k$ code a $k$-clique-free Henson graph in the same way as the coding nodes of $\bS_k$.
In Definition \ref{defn.T_pspace}, we present the space of strong $\mathcal{H}_k$-coding subtrees of $\bT_k$.
These are subtrees of $\bT_k$ which are isomorphic to $\bT_k$ in a strong way, and which consequently code a copy of $\mathcal{H}_k$ in the same way that $\bT_k$ does.
By the end of Section \ref{sec.1SPOC},
these spaces of strong $\mathcal{H}_k$-coding trees will be shown to
satisfy Ramsey theorems similar to those for the Milliken space of strong trees \cite{Milliken79}.
The added
difficulty for $k>3$
will be seen and addressed from here throughout the rest of the paper.
This section extends results of Section 4 in \cite{DobrinenJML20} to $\mathcal{H}_k$ for all $k\ge 4$, while providing a new, more streamlined approach for the $k=3$ case.
\subsection{Definitions, notation, and maximal strong $\mathcal{H}_k$-coding trees, $\bT_k$}\label{subsection.T_p}
The following terminology and notation will be used throughout, some of which is recalled from previous sections for ease of reading.
A subset $X\sse \Seq$ is a {\em level set} if all nodes in $X$ have the same length.
We continue to use the notions of {\em tree}
and {\em tree with coding nodes}
given in Definitions \ref{defn.tree}
and
\ref{defn.treewcodingnodes}, respectively, augmented to include {\em ghost coding nodes}, as was the case in the definition of $\bS_k$ in Example \ref{thm.stftree}.
Let $T\sse \Seq$ be a finite or infinite tree with coding nodes
$\lgl c^T_n:n\in N\rgl$, where either $N\in\mathbb{N}$ or $N=\mathbb{N}$.
If $T$ is to be a strong $\mathcal{H}_k$-coding tree, then $T$ will also have
ghost coding nodes $\lgl c^T_{-k+2},\dots,c^T_{-1}\rgl$.
We let $l^T_n$ denote $|c^T_n|$, the length of $c^T_n$.
Recall that $|c^T_n|$ is the domain of $c^T_n$, as the sequence $c^T_n$ is a function from some natural number into $\{0,1\}$.
We sometimes drop the superscript $T$ when it is clear from the context.
A node $s\in T$ is called a {\em splitting node} if
both $s^{\frown}0$ and $s^{\frown}1$ are in $\widehat{T}$;
equivalently, $s$ is a splitting node in $T$ if there are nodes $s_0,s_1\in T$ such that
$s_0\contains s^{\frown}0$ and $s_1\contains s^{\frown}1$.
The set of {\em critical nodes} of $T$ consists of all splitting nodes and coding nodes, as well as any ghost coding nodes of $T$.
Given $t$ in $T$,
let $T\re |t|$ denote
the set of all $s\in T$ such that $|s|=|t|$, and
call $T\re |t|$ a {\em level} of $T$.
We will say that a tree
$T$ is {\em skew} if
each level of $T$ contains exactly one coding node or exactly one splitting node, but not both.
The set of {\em levels} of a skew tree $T\sse \Seq$, denoted $L(T)$, is the set of those $l\in\mathbb{N}$ such that $T$ has either a splitting or a coding node
of length $l$.
A skew tree
$T$ is {\em strongly skew} if additionally
for each splitting node $s\in T$,
every
$t\in T$ such that $|t|>|s|$ and $t\not\supset s$ also
satisfies
$t(|s|)=0$;
that is, the passing number of any node passing by, but not extending, a splitting node
is $0$.
Given a skew tree $T$,
we let $\lgl d^T_m:m\in M\rgl$ denote
the enumeration of all critical nodes of $T$ in increasing order;
$d^T_0$ will be the stem of $T$, that is, the first splitting node of $T$.
Appropriating the standard notation for Milliken's strong trees,
for each $m\in M$,
the {\em $m$-th level of $T$} is
\begin{equation}\label{eq.T(m)}
T(m)=\{s\in T:|s|=|d^T_m|\}.
\end{equation}
Then for any skew tree $T$,
\begin{equation}
T=\bigcup_{m\in M} T(m).
\end{equation}
For $m\in M$, the {\em $m$-th approximation of $T$} is defined to be
\begin{equation}\label{eq.r_m(T)}
r_m(T)=\bigcup_{j<m}T(j).
\end{equation}
Let $m_n$ denote the integer
such that $c^T_n\in T(m_n)$; thus,
$d^T_{m_n}=c^T_n$.
We call a skew tree $T$ {\em regular} if for each $n\in N$,
the lengths of the splitting nodes in the $n$-th interval of $T$ increase as their lexicographic order decreases.
In contrast to our approach in \cite{DobrinenJML20} where we defined strong $\mathcal{H}_3$-coding trees via several structural properties,
in this paper we shall construct a particular strong
$\mathcal{H}_k$-coding tree $\bT_k$ and then define a subtree to be a strong $\mathcal{H}_k$-coding tree if it is isomorphic to $\bT_k$ in a strong sense, to be made precise in Definition
\ref{defn.T_pspace}.
The coding structure of $\bT_k$ is the same as that of the strong $K_k$-free tree $\bS_k$ given in Example \ref{thm.stftree}. The best way to think about $\bT_k$ is that it is simply the strongly skew, regular version of $\bS_k$.
\begin{figure}\label{fig.bT}
\begin{tikzpicture}[grow'=up,scale=.32]
\tikzstyle{level 1}=[sibling distance=5in]
\tikzstyle{level 2}=[sibling distance=2in]
\tikzstyle{level 3}=[sibling distance=1.3in]
\tikzstyle{level 4}=[sibling distance=1.2in]
\tikzstyle{level 5}=[sibling distance=1in]
\tikzstyle{level 6}=[sibling distance=1in]
\tikzstyle{level 7}=[sibling distance=.9in]
\tikzstyle{level 8}=[sibling distance=.8in]
\tikzstyle{level 9}=[sibling distance=.7in]
\tikzstyle{level 10}=[sibling distance=.6in]
\tikzstyle{level 11}=[sibling distance=.5in]
\tikzstyle{level 12}=[sibling distance=.4in]
\tikzstyle{level 13}=[sibling distance=.3in]
\tikzstyle{level 14}=[sibling distance=.3in]
\node {} coordinate(t)
child{coordinate (t0)
child{coordinate (t00)
child{coordinate (t000)
child{coordinate (t0000)
child{coordinate (t00000)
child{coordinate (t000000)
child{coordinate (t0000000)
child{coordinate (t00000000)
child{coordinate (t000000000)
child{coordinate (t0000000000)
child{coordinate (t00000000000)
child{coordinate (t000000000000)
child{coordinate (t0000000000000)
child{coordinate (t00000000000000)}
child{edge from parent[draw=none] coordinate (t00000000000001)}
}
child{coordinate (t0000000000001)
child{edge from parent[draw=none] coordinate (t00000000000010)}
child{coordinate (t00000000000011)}
}
}
child{edge from parent[draw=none] coordinate (t000000000001)}
}
child{edge from parent[draw=none] coordinate (t00000000001)}
}
child{edge from parent[draw=none] coordinate (t0000000001)}
}
child{coordinate (t000000001)
child{edge from parent[draw=none] coordinate (t0000000010)}
child{coordinate (t0000000011)
child{coordinate (t00000000110)
child{coordinate (t000000001100)
child{coordinate (t0000000011000)
child{coordinate (t00000000110000)}
child{edge from parent[draw=none] coordinate (t00000000110001)}
}
child{edge from parent[draw=none] coordinate (t0000000011001)}
}
child{edge from parent[draw=none] coordinate (t000000001101)}
}
child{edge from parent[draw=none] coordinate (t00000000111)}
}
}
}
child{edge from parent[draw=none] coordinate (t00000001)}
}
child{edge from parent[draw=none] coordinate (t0000001)}
}
child{edge from parent[draw=none] coordinate (t000001)}
}
child{coordinate (t00001)
child{edge from parent[draw=none] coordinate (t000010)}
child{coordinate (t000011)
child{coordinate (t0000110)
child{coordinate (t00001100)
child{coordinate (t000011000)
child{coordinate (t0000110000)
child{coordinate (t00001100000)
child{coordinate (t000011000000)
child{coordinate (t0000110000000)
child{coordinate (t00001100000000)}
child{edge from parent[draw=none] coordinate (t00001100000001)}
}
child{edge from parent[draw=none] coordinate (t0000110000001)}
}
child{coordinate (t000011000001)
child{coordinate (t0000110000010)
child{edge from parent[draw=none] coordinate (t00001100000100)}
child{coordinate (t00001100000101)}
}
child{edge from parent[draw=none] coordinate (t0000110000011)}
}
}
child{edge from parent[draw=none] coordinate (t00001100001)}
}
child{edge from parent[draw=none] coordinate (t0000110001)}
}
child{edge from parent[draw=none] coordinate (t000011001)}
}
child{edge from parent[draw=none] coordinate (t00001101)}
}
child{edge from parent[draw=none] coordinate (t0000111)}
}
}
}
child{edge from parent[draw=none] coordinate (t0001)}
}
child{ edge from parent[draw=none]coordinate(t001)}}
child{ coordinate(t01)
child{ edge from parent[draw=none] coordinate(t010)}
child{coordinate(t011)
child{coordinate(t0110)
child{coordinate(t01100)
child{coordinate(t011000)
child{coordinate(t0110000)
child{coordinate(t01100000)
child{coordinate(t011000000)
child{coordinate(t0110000000)
child{coordinate(t01100000000)
child{coordinate(t011000000000)
child{coordinate(t0110000000000)
child{coordinate(t01100000000000)}
child{edge from parent[draw=none] coordinate(t01100000000001)}
}
child{ edge from parent[draw=none]coordinate(t0110000000001)}
}
child{ edge from parent[draw=none] coordinate(t011000000001)}
}
child{coordinate(t01100000001)
child{coordinate(t011000000010)
child{coordinate(t0110000000100)
child{edge from parent[draw=none] coordinate(t01100000001000)}
child{coordinate(t01100000001001)}
}
child{edge from parent[draw=none] coordinate(t0110000000101)}
}
child{edge from parent[draw=none] coordinate(t011000000011)}
}
}
child{ edge from parent[draw=none] coordinate(t0110000001)}
}
child{ edge from parent[draw=none] coordinate(t011000001)}
}
child{coordinate(t01100001)
child{coordinate(t011000010)
child{edge from parent[draw=none] coordinate(t0110000100)}
child{coordinate(t0110000101)
child{coordinate(t01100001010)
child{coordinate(t011000010100)
child{coordinate(t0110000101000)
child{coordinate(t01100001010000)}
child{edge from parent[draw=none] coordinate(t01100001010001)}
}
child{edge from parent[draw=none] coordinate(t0110000101001)}
}
child{edge from parent[draw=none] coordinate(t011000010101)}
}
child{edge from parent[draw=none] coordinate(t01100001011)}
}
}
child{edge from parent[draw=none] coordinate(t011000011)}
}
}
child{ edge from parent[draw=none] coordinate(t0110001)}
}
child{ edge from parent[draw=none] coordinate(t011001)}
}
child{ edge from parent[draw=none] coordinate(t01101)}
}
child{ edge from parent[draw=none] coordinate(t0111)}
}}}
child{ coordinate(t1)
child{ coordinate(t10)
child{ coordinate(t100)
child{ coordinate(t1000)
child{ coordinate(t10000)
child{ coordinate(t100000)
child{ coordinate(t1000000)
child{ coordinate(t10000000)
child{ coordinate(t100000000)
child{ coordinate(t1000000000)
child{ coordinate(t10000000000)
child{ coordinate(t100000000000)
child{ coordinate(t1000000000000)
child{ coordinate(t10000000000000)}
child{ edge from parent[draw=none] coordinate(t10000000000001)}
}
child{edge from parent[draw=none] coordinate(t1000000000001)}
}
child{ edge from parent[draw=none] coordinate(t100000000001)}
}
child{edge from parent[draw=none] coordinate(t10000000001)}
}
child{ edge from parent[draw=none] coordinate(t1000000001)}
}
child{edge from parent[draw=none] coordinate(t100000001)}
}
child{ edge from parent[draw=none] coordinate(t10000001)}
}
child{ coordinate(t1000001)
child{ coordinate(t10000010)
child{ coordinate(t100000100)
child{ edge from parent[draw=none] coordinate(t1000001000)}
child{ coordinate(t1000001001)
child{ coordinate(t10000010010)
child{ coordinate(t100000100100)
child{ coordinate(t1000001001000)
child{ coordinate(t10000010010000)}
child{ edge from parent[draw=none] coordinate(t10000010010001)}
}
child{edge from parent[draw=none] coordinate(t1000001001001)}
}
child{edge from parent[draw=none] coordinate(t100000100101)}
}
child{ edge from parent[draw=none] coordinate(t10000010011)}
}
}
child{ edge from parent[draw=none] coordinate(t100000101)}
}
child{ edge from parent[draw=none] coordinate(t10000011)}
}
}
child{ edge from parent[draw=none] coordinate(t100001)}
}
child{ edge from parent[draw=none] coordinate(t10001)}
}
child{ coordinate(t1001)
child{ coordinate(t10010)
child{ edge from parent[draw=none] coordinate(t100100)}
child{ coordinate(t100101)
child{ coordinate(t1001010)
child{ coordinate(t10010100)
child{ coordinate(t100101000)
child{ coordinate(t1001010000)
child{ coordinate(t10010100000)
child{ coordinate(t100101000000)
child{ coordinate(t1001010000000)
child{ coordinate(t10010100000000)}
child{edge from parent[draw=none] coordinate(t10010100000001)}
}
child{ edge from parent[draw=none]coordinate(t1001010000001)}
}
child{edge from parent[draw=none] coordinate(t100101000001)}
}
child{edge from parent[draw=none] coordinate(t10010100001)}
}
child{ edge from parent[draw=none] coordinate(t1001010001)}
}
child{ edge from parent[draw=none] coordinate(t100101001)}
}
child{edge from parent[draw=none] coordinate(t10010101)}
}
child{ edge from parent[draw=none] coordinate(t1001011)}
}
}
child{ edge from parent[draw=none] coordinate(t10011)}
}
}
child{edge from parent[draw=none] coordinate(t101)}}
child{edge from parent[draw=none] coordinate(t11)
} };
\node[left] at (t0) {$d^3_1$};
\node[right] at (t10) {$c^3_0$};
\node[left] at (t10) {$d^3_2$};
\node[left] at (t100) {$d^3_3$};
\node[left] at (t0000) {$d^3_4$};
\node[left] at (t01100) {$d^3_5$};
\node[right] at (t01100) {$c^3_1$};
\node[left] at (t100000) {$d^3_6$};
\node[left] at (t0110000) {$d^3_7$};
\node[left] at (t00000000) {$d^3_8$};
\node[right] at (t000011000) {$c^3_2$};
\node[left] at (t000011000) {$d^3_9$};
\node[left] at (t0110000000) {$d^3_{10}$};
\node[left] at (t00001100000) {$d^3_{11}$};
\node[left] at (t000000000000) {$d^3_{12}$};
\node[left] at (t1000001001000) {$d^3_{13}$};
\node[right] at (t1000001001000) {$c^3_3$};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t0) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t100) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t0000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t100000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t0110000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t00000000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t0110000000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t00001100000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t000000000000) {};
\node[below] at (t) {$d^3_0\ \ \ {\color{gray}c^3_{-1}}$};
\node[circle, fill=gray,inner sep=0pt, minimum size=5pt] at (t) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t10) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t01100) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t000011000) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t1000001001000) {};
\draw[dotted] let \p1=(t) in (-24,\y1) node (v01) {$\color{gray}{\bullet}$} -- (8,\y1);
\draw[thick, dotted] let \p1=(t10) in (-24,\y1) node (v0) {$\bullet$} -- (8,\y1);
\draw[thick, dotted] let \p1=(t01100) in (-24,\y1) node (v1) {$\bullet$} -- (8,\y1);
\draw[thick, dotted] let \p1= (t000011000) in (-24,\y1) node (v2) {$\bullet$} -- (8,\y1);
\draw[thick, dotted] let \p1= (t1000001001000) in (-24,\y1) node (v3) {$\bullet$} -- (8,\y1);
\node[left, gray] at (v01) {$v_{-1}$};
\node[left] at (v0) {$v_0$};
\node[left] at (v1) {$v_1$};
\node[left] at (v2) {$v_2$};
\node[left] at (v3) {$v_3$};
\draw[gray] (v0.center) to (v01.center) to [bend left] (v3.center);
\draw[thick] (v0.center) to (v1.center) to (v2.center) to (v3.center);
\end{tikzpicture}
\caption{Strong $\mathcal{H}_3$-coding tree $\bT_3$}
\end{figure}
\begin{figure}\label{fig.bT4}
\begin{tikzpicture}[grow'=up,scale=.32]
\tikzstyle{level 1}=[sibling distance=5in]
\tikzstyle{level 2}=[sibling distance=2in]
\tikzstyle{level 3}=[sibling distance=1.3in]
\tikzstyle{level 4}=[sibling distance=1.2in]
\tikzstyle{level 5}=[sibling distance=1in]
\tikzstyle{level 6}=[sibling distance=1in]
\tikzstyle{level 7}=[sibling distance=.9in]
\tikzstyle{level 8}=[sibling distance=.8in]
\tikzstyle{level 9}=[sibling distance=.7in]
\tikzstyle{level 10}=[sibling distance=.6in]
\tikzstyle{level 11}=[sibling distance=.5in]
\tikzstyle{level 12}=[sibling distance=.4in]
\tikzstyle{level 13}=[sibling distance=.3in]
\tikzstyle{level 14}=[sibling distance=.3in]
\tikzstyle{level 15}=[sibling distance=.3in]
\node {} coordinate(t)
child{coordinate(t0)
child{coordinate(t00)
child{coordinate(t000)
child{coordinate(t0000)
child{coordinate(t00000)
child{coordinate(t000000)
child{coordinate(t0000000)
child{coordinate(t00000000)
child{coordinate(t000000000)
child{coordinate(t0000000000)
child{coordinate(t00000000000)
child{coordinate(t000000000000)
child{coordinate(t0000000000000)
child{coordinate(t00000000000000)
child{coordinate(t000000000000000)}child{coordinate(t000000000000001)edge from parent[draw=none]}
}
child{coordinate(t00000000000001)
child{coordinate(t000000000000010)edge from parent[draw=none]}
child{coordinate(t000000000000011)}
}}
child{coordinate(t0000000000001)edge from parent[draw=none]}
}
child{coordinate(t000000000001)edge from parent[draw=none]}
}
child{coordinate(t00000000001)
edge from parent[draw=none]}
}
child{coordinate(t0000000001)edge from parent[draw=none]}
}
child{coordinate(t000000001)edge from parent[draw=none]}
}
child{coordinate(t00000001)edge from parent[draw=none]}}
child{coordinate(t0000001)
child{coordinate(t00000010)edge from parent[draw=none]}
child{coordinate(t00000011)
child{coordinate(t000000110)
child{coordinate(t0000001100)
child{coordinate(t00000011000)
child{coordinate(t000000110000)
child{coordinate(t0000001100000)
child{coordinate(t00000011000000)
child{coordinate(t000000110000000)}
child{coordinate(t000000110000001)edge from parent[draw=none]}
}
child{coordinate(t00000011000001)edge from parent[draw=none]}
}
child{coordinate(t0000001100001)
child{coordinate(t00000011000010)
child{coordinate(t000000110000100)edge from parent[draw=none]}
child{coordinate(t000000110000101)}
}
child{coordinate(t00000011000011)edge from parent[draw=none]}
}}
child{coordinate(t000000110001)edge from parent[draw=none]}
}
child{coordinate(t00000011001)edge from parent[draw=none]}
}
child{coordinate(t0000001101)
edge from parent[draw=none]}
}
child{coordinate(t000000111)edge from parent[draw=none]}
}}}
child{coordinate(t000001)edge from parent[draw=none]}}
child{coordinate(t00001)edge from parent[draw=none]}
}
child{coordinate(t0001)edge from parent[draw=none]}
}
child{coordinate(t001)
child{coordinate(t0010) edge from parent[draw=none]}
child{coordinate(t0011)
child{coordinate(t00110)
child{coordinate(t001100)
child{coordinate(t0011000)
child{coordinate(t00110000)
child{coordinate(t001100000)
child{coordinate(t0011000000)
child{coordinate(t00110000000)
child{coordinate(t001100000000)
child{coordinate(t0011000000000)
child{coordinate(t00110000000000)
child{coordinate(t001100000000000)}
child{coordinate(t001100000000001)edge from parent[draw=none]}
}
child{coordinate(t00110000000001)edge from parent[draw=none]}
}
child{coordinate(t0011000000001)edge from parent[draw=none]}
}
child{coordinate(t001100000001)
child{coordinate(t0011000000010)
child{coordinate(t00110000000100)
child{coordinate(t001100000001000)edge from parent[draw=none]}
child{coordinate(t001100000001001)}
}
child{coordinate(t00110000000101)edge from parent[draw=none]}
}
child{coordinate(t0011000000011)edge from parent[draw=none]}
}}
child{coordinate(t00110000001)edge from parent[draw=none]}
}
child{coordinate(t0011000001)edge from parent[draw=none]}
}
child{coordinate(t001100001)edge from parent[draw=none]}
}
child{coordinate(t00110001)edge from parent[draw=none]}
}
child{coordinate(t0011001)edge from parent[draw=none]}
}
child{coordinate(t001101)
child{coordinate(t0011010)
child{coordinate(t00110100)edge from parent[draw=none]}
child{coordinate(t00110101)
child{coordinate(t001101010)
child{coordinate(t0011010100)
child{coordinate(t00110101000)
child{coordinate(t001101010000)
child{coordinate(t0011010100000)
child{coordinate(t00110101000000)
child{coordinate(t001101010000000)}
child{coordinate(t001101010000001)edge from parent[draw=none]}
}
child{coordinate(t00110101000001)edge from parent[draw=none]}
}
child{coordinate(t0011010100001)edge from parent[draw=none]}
}
child{coordinate(t001101010001)edge from parent[draw=none]}
}
child{coordinate(t00110101001)edge from parent[draw=none]}
}
child{coordinate(t0011010101)edge from parent[draw=none]}
}
child{coordinate(t001101011)edge from parent[draw=none]}
}}
child{coordinate(t0011011)edge from parent[draw=none]}
}}
child{coordinate(t00111)edge from parent[draw=none]}
}}}
child{coordinate(t01) edge from parent[draw=none]}
}
child{coordinate(t1)
child{coordinate(t10)
child{coordinate(t100)
child{coordinate(t1000)
child{coordinate(t10000)
child{coordinate(t100000)
child{coordinate(t1000000)
child{coordinate(t10000000)
child{coordinate(t100000000)
child{coordinate(t1000000000)
child{coordinate(t10000000000)
child{coordinate(t100000000000)
child{coordinate(t1000000000000)
child{coordinate(t10000000000000)
child{coordinate(t100000000000000)}
child{coordinate(t100000000000001)edge from parent[draw=none]}
}
child{coordinate(t10000000000001)edge from parent[draw=none]}
}
child{coordinate(t1000000000001)edge from parent[draw=none]}
}
child{coordinate(t100000000001)edge from parent[draw=none]}
}
child{coordinate(t10000000001)
child{coordinate(t100000000010)
child{coordinate(t1000000000100)
child{coordinate(t10000000001000)
child{coordinate(t100000000010000)edge from parent[draw=none]}
child{coordinate(t100000000010001)}
}
child{coordinate(t10000000001001)edge from parent[draw=none]}
}
child{coordinate(t1000000000101)edge from parent[draw=none]}
}
child{coordinate(t100000000011)edge from parent[draw=none]}
}}
child{coordinate(t1000000001)edge from parent[draw=none]}
}
child{coordinate(t100000001)edge from parent[draw=none]}
}
child{coordinate(t10000001)edge from parent[draw=none]}
}
child{coordinate(t1000001)edge from parent[draw=none]}
}
child{coordinate(t100001)edge from parent[draw=none]}
}
child{coordinate(t10001)
child{coordinate(t100010)
child{coordinate(t1000100)
child{coordinate(t10001000)edge from parent[draw=none]}
child{coordinate(t10001001)
child{coordinate(t100010010)
child{coordinate(t1000100100)
child{coordinate(t10001001000)
child{coordinate(t100010010000)
child{coordinate(t1000100100000)
child{coordinate(t10001001000000)
child{coordinate(t100010010000000)}
child{coordinate(t100010010000001)edge from parent[draw=none]}
}
child{coordinate(t10001001000001)edge from parent[draw=none]}
}
child{coordinate(t1000100100001)edge from parent[draw=none]}
}
child{coordinate(t100010010001)edge from parent[draw=none]}
}
child{coordinate(t10001001001)edge from parent[draw=none]}
}
child{coordinate(t1000100101)
child{coordinate(t10001001010)
child{coordinate(t100010010100)
child{coordinate(t1000100101000)
child{coordinate(t10001001010000)
child{coordinate(t100010010100000)edge from parent[draw=none]}
child{coordinate(t100010010100001)}
}
child{coordinate(t10001001010001)edge from parent[draw=none]}
}
child{coordinate(t1000100101001)edge from parent[draw=none]}
}
child{coordinate(t100010010101)edge from parent[draw=none]}
}
child{coordinate(t10001001011)edge from parent[draw=none]}
}}
child{coordinate(t100010011)edge from parent[draw=none]}
}}
child{coordinate(t1000101)edge from parent[draw=none]}
}
child{coordinate(t100011)edge from parent[draw=none]}
}}
child{coordinate(t1001)edge from parent[draw=none]}
}
child{coordinate(t101)edge from parent[draw=none]}
}
child{coordinate(t11)
child{coordinate(t110)
child{coordinate(t1100)edge from parent[draw=none]}
child{coordinate(t1101)
child{coordinate(t11010)
child{coordinate(t110100)
child{coordinate(t1101000)
child{coordinate(t11010000)
child{coordinate(t110100000)
child{coordinate(t1101000000)
child{coordinate(t11010000000)
child{coordinate(t110100000000)
child{coordinate(t1101000000000)
child{coordinate(t11010000000000)
child{coordinate(t110100000000000)}
child{coordinate(t110100000000001)edge from parent[draw=none]}
}
child{coordinate(t11010000000001)edge from parent[draw=none]}
}
child{coordinate(t1101000000001)edge from parent[draw=none]}
}
child{coordinate(t110100000001)edge from parent[draw=none]}
}
child{coordinate(t11010000001)edge from parent[draw=none]}
}
child{coordinate(t1101000001)edge from parent[draw=none]}
}
child{coordinate(t110100001)
child{coordinate(t1101000010)
child{coordinate(t11010000100)
child{coordinate(t110100001000)
child{coordinate(t1101000010000)
child{coordinate(t11010000100000)
child{coordinate(t110100001000000)edge from parent[draw=none]}
child{coordinate(t110100001000001)}
}
child{coordinate(t11010000100001)edge from parent[draw=none]}
}
child{coordinate(t1101000010001)edge from parent[draw=none]}
}
child{coordinate(t110100001001)edge from parent[draw=none]}
}
child{coordinate(t11010000101)edge from parent[draw=none]}
}
child{coordinate(t1101000011)edge from parent[draw=none]}
}}
child{coordinate(t11010001)edge from parent[draw=none]}
}
child{coordinate(t1101001)edge from parent[draw=none]}
}
child{coordinate(t110101)edge from parent[draw=none]}
}
child{coordinate(t11011)edge from parent[draw=none]}
}}
child{coordinate(t111)edge from parent[draw=none]}
}}
;
\node[below] at (t) {$d^4_0\ \ \ {\color{gray}c^4_{-2}}$};
\node[left] at (t1) {$d^4_1$};
\node[left] at (t00) {$d^4_2$};
\node[left] at (t100) {$d^4_3$};
\node[right] at (t100) {${\color{gray}c^4_{-1}}$};
\node[left] at (t1000) {$d^4_4$};
\node[left] at (t00110) {$d^4_5$};
\node[left] at (t000000) {$d^4_6$};
\node[left] at (t1101000) {$d^4_7$};
\node[right] at (t1101000) {$c^4_0$};
\node[left] at (t11010000) {$d^4_8$};
\node[left] at (t100010010) {$d^4_9$};
\node[left] at (t1000000000) {$d^4_{10}$};
\node[left] at (t00110000000) {$d^4_{11}$};
\node[left] at (t000000110000) {$d^4_{12}$};
\node[left] at (t0000000000000) {$d^4_{13}$};
\node[right] at (t00110101000000) {$c^4_1$};
\node[left] at (t00110101000000) {$d^4_{14}$};
\node[circle, fill=gray,inner sep=0pt, minimum size=5pt] at (t) {};
\node[circle, fill=gray,inner sep=0pt, minimum size=5pt] at (t100) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t1101000) {};
\node[circle, fill=black,inner sep=0pt, minimum size=5pt] at (t00110101000000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t1) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t00) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t1000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t00110) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t000000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t11010000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t100010010) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t1000000000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t00110000000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t000000110000) {};
\node[circle, draw,inner sep=0pt, minimum size=5pt] at (t0000000000000) {};
\draw[dotted] let \p1=(t) in (-25,\y1) node (v01) {$\color{gray}{\bullet}$} -- (9,\y1);
\draw[dotted] let \p1=(t100) in (-25,\y1) node (v0) {$\color{gray}{\bullet}$} -- (9,\y1);
\draw[thick, dotted] let \p1=(t1101000) in (-25,\y1) node (v1) {$\bullet$} -- (9,\y1);
\draw[thick, dotted] let \p1=(t00110101000000) in (-25,\y1) node (v2) {$\bullet$} -- (9,\y1);
\node[left, gray] at (v01) {$v_{-2}$};
\node[left, gray] at (v0) {$v_{-1}$};
\node[left] at (v1) {$v_0$};
\node[left] at (v2) {$v_1$};
\draw[gray] (v0.center) to (v01.center) to [bend left] (v1.center);
\draw[thick] (v0.center) to (v1.center) to (v2.center) to [bend left] (v0.center) ;
\end{tikzpicture}
\caption{Strong $\mathcal{H}_4$-coding tree $\bT_4$}
\end{figure}
\begin{example}[Construction Method for Maximal Strong $\mathcal{H}_k$-Coding Trees, $\bT_k$]\label{ex.bTp}
Fix $k\ge 3$, and let
$\lgl u_i:i\in \mathbb{N}\rgl$ be any enumeration of the nodes in $\Seq$ such that $|u_i|\le|u_j|$ whenever $i<j$.
Let $\bS_k$ be a strong $K_k$-free tree constructed in Example \ref{thm.stftree}.
Recall that the graph represented by the coding nodes $\lgl c^{\bS_k}_n:n\in\bN\rgl$ in
$\bS_k$ is the $k$-clique-free Henson graph.
(The graph represented by the coding nodes in $\bS_k$ along with the ghost coding nodes in $\bS_k$ is also a $k$-clique-free Henson graph, since $(A)^{\tt tree}_k$ holds.)
Define $\bT_k$ to be the tree
obtained by stretching $\bS_k$ to make it strongly skew and regular while preserving the passing numbers, so that $\bT_k$ represents the same copy of the Henson graph $\mathcal{H}_k$ as $\bS_k$ does.
$\bT_k$ will have
coding nodes $\lgl c^k_n:n\in\mathbb{N}\rgl$
and ghost coding nodes $\lgl c^k_{-k+2},\dots, c^k_{-1}\rgl$.
For all pairs $j<n$, we will have
$c^k_n(|c^k_j|)=c^{\bS_k}_n(|c^{\bS_k}_j|)$.
The $m$-th critical node $d^k_m$ in $\bT_k$ will be a node of length $m$, so that
$\bT_k$ will have nodes of every length in $\mathbb{N}$.
The critical nodes consist of the splitting nodes, coding nodes, and ghost coding nodes.
In particular, we will have
$\widehat{\bT}_k=\bT_k$, and $\bT_k(m)=\bT_k\re m$ for each $m\in\bN$.
We now show precisely how to construct $\bT_k$, given $\bS_k$.
Set $m_{-k+2}=0$ and $\bT_k(0)=\bS_k(0)$, which is the singleton $\{\lgl\rgl\}$.
Let the ghost coding node of $\bT_k$ be $c^k_{-k+2}=\lgl \rgl$.
This node splits so that $\bT_k(1)=\bS_k(1)$, which has exactly two nodes,
so
there is a bijection between these level sets of nodes.
As in the previous section,
let $r_m(\bT_k)$ denote $\bigcup_{j<m}\bT_k(j)$.
Then
$r_2(\bT_k)=\bT_k(0)\cup\bT_k(1)$.
The $0$-th critical node $d^k_0$ equals the ghost coding node $c_{-k+2}^k$.
Suppose that $n\ge -k+2$ and that $r_{m_n+2}(\bT_k)$ has been constructed,
$c^k_n\in \bT_k(m_n)$ has been fixed, and there is a bijection between
$\bT_k(m_n+1)$ and $\bS_k(n+k-1)$.
Let $J$ be the number of nodes in
$\bS_k(n+k-1)$ which split into two extensions in $\bS_k(n+k)$.
Let $m_{n+1}=m_n+J+1$.
(Notice that
for $k=3$, $m_0=2$, and
for $k=4$, $m_{-1}=3$;
more generally, for $k\ge 4$, $m_{-k+3}=3$.)
Let $\{\tilde{s}_j : m_n<j<m_{n+1}\}$ enumerate in reverse lexicographic order those members of
$\bS_k(n+k-1)$ which split into two extensions in $\bS_k(n+k)$.
Let $\varphi$ be the lexicographic preserving bijection from
$\bS_k(n+k-1)$ onto
$\bT_k(m_n+1)$.
For each $m_n<j<m_{n+1}$, define $s_j=\varphi(\tilde{s}_j)$, and set
\begin{equation}
d^k_j={s_j}^{\frown}0^{(i_j)},
\end{equation}
where $i_j=j-m_n-1$.
Thus, $ d^k_{m_n+1}=s_{m_n+1}$, $d^k_{m_n+2}={s_{m_n+2}}^{\frown}\lgl 0\rgl$, $d^k_{m_n+3}={s_{m_n+3}}^{\frown}\lgl 0,0\rgl$,
and finally, $d^k_{m_{n+1}-1}={s_{m_{n+1}-1}}^{\frown} 0^{(J-1)}$.
These are the splitting nodes in the {\em interval} of $\bT_k$ between $c^k_{n}$ and $c^k_{n+1}$.
For each $i\in\{0,1\}$,
define
$t_j^i$
to be the binary sequence of length
$m_{n+1}$
which extends
${d^k_j}^{\frown}i$ by all $0$'s.
Define
\begin{equation}
\bT_k(m_{n+1})=
\{t_j^i: m_n<j<m_{n+1},\ i\in\{0,1\}\}\cup
\{t_{\tilde{s}}:\tilde{s}\in S_{n+1}'\}
\end{equation}
where
$S_{n+1}'=\bS_k(n+k)\setminus \{\tilde{s}_j:m_n<j<m_{n+1}\}$
and for each $\tilde{s}\in S_{n+1}'$,
$t_{\tilde{s}}$ is the
extension
of $\varphi(\tilde{s})$ by $0$'s to length $m_{n+1}$.
Let $c^k_{n+1}$ be the leftmost extension in $\bT_k(m_{n+1})$ of $\varphi(\tilde{s}_c)$,
where $\tilde{s}_c$
is the leftmost immediate successor of
$c_{n+1}^{\bS_k}$
in $\bS_k(n+k-1)$.
Define
\begin{equation}
\bT_k(m_{n+1}+1)=
\{{t_j^i}^{\frown}i: m_n<j<m_{n+1},\ i\in\{0,1\}\}\cup
\{{t_{\tilde{s}}}^{\frown}0:\tilde{s}\in S_{n+1}'\}.
\end{equation}
Notice that
the level sets
$\bS_k(n+k)$, $ \bT_k(m_{n+1})$, and
$ \bT_k(m_{n+1}+1)$ have the same cardinality.
Moreover,
the lexicographic preserving bijection from
$\bS_k(n+k)$ onto
$ \bT_k(m_{n+1})$
preserves passing numbers.
By our construction, for each $m\in\mathbb{N}$, $|d^k_m|=m$.
Let $\bT_k=\bigcup_{m\in\bN}\bT_k(m)$
with coding nodes $\lgl c^k_n:n\in\bN\rgl$ and ghost coding nodes $\lgl c^k_{-k+2},\dots, c^k_{-1}\rgl$.
$\bT_k$ is a strongly skew, regular tree with coding nodes representing a copy of $\mathcal{H}_k$ and is {\em maximal} in the sense that
$\widehat{\bT}_k=\bT_k$.
\end{example}
\begin{rem}\label{rem.awesome!}
Notice that the
coding nodes (along with the ghost coding nodes) in $\bT_k$ represent the same ordered copy of $\mathcal{H}_k$ as the one represented by the coding nodes (along with the ghost coding nodes) in $\bS_k$.
That is, given $j<n$, $c^k_n(|c^k_j|)=c^{\bS_k}_n(|c^{\bS_k}_j|)$.
A simple way to think about $\bT_k$ is that it is the skew tree such that if one ``zips up'' the splitting nodes in the $n$-th interval of $\bT_k$ to the length of $c^k_n$, then one recovers a tree which is strongly similar (in fact, strongly isomorphic; see Definition \ref{defn.stable}) to $\bS_k$, in the sense of the upcoming Definition \ref{def.3.1.likeSauer}.
\end{rem}
\subsection{The space of strong $\mathcal{H}_k$-coding trees}\label{subsec.T_k}
Let $k\ge 3$ be given, and fix $\bT_k$ as constructed as in Example \ref{ex.bTp}.
In preparation for defining the space of strong $\mathcal{H}_k$-coding subtrees of $\bT_k$, we provide
the following definitions.
A subset $A$ of $\bT_k$ is an {\em antichain} if
for all $s,t\in A$,
$s\sse t$ implies $s=t$.
Recall that a subset $X$ of $\bT_k$ is a {\em level set} if all members of $X$ have the same length.
Thus, each level set is an antichain.
Given a subset $S\sse \bT_k$,
recall that
the {\em meet closure of $S$}, denoted $S^{\wedge}$,
is the set of all meets of (not necessarily distinct) pairs of nodes in $S$; in particular, $S^{\wedge}$ contains $S$.
We say that $S$ is {\em meet-closed} if $S=S^{\wedge}$.
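For example, recalling that the meet $s\wedge t$ of two nodes in $\Seq$ is their longest common initial segment, for $S=\{\lgl 0,1,0\rgl,\lgl 0,1,1\rgl\}$ we have $S^{\wedge}=S\cup\{\lgl 0,1\rgl\}$; thus, this $S$ is not meet-closed.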
Recall that for $s\in\Seq$ and $l\in\mathbb{N}$ with $|s|\ge l$,
$s\re l$ denotes the initial segment of $s$ of length $l$.
Given a subset $S\sse \bT_k$ and any $l\in\mathbb{N}$,
we define
\begin{equation}
S\re l=\{s\re l:s\in S\mathrm{\ and\ }|s|\ge l\}.
\end{equation}
Thus,
$S\re l= \widehat{S}\re l$, whether or not $S$ has nodes of length $l$.
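For example, if $S=\{\lgl 0,1,1\rgl,\lgl 1\rgl\}$ and $l=2$, then $S\re 2=\{\lgl 0,1\rgl\}$: the node $\lgl 1\rgl$ is too short to contribute an initial segment of length $2$, while $\widehat{S}$ contains $\lgl 0,1\rgl$, so indeed $S\re 2=\widehat{S}\re 2$.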
If
$s\in\bT_k$ has the same length as some (ghost) coding node $c^k_n$ in $\bT_k$, then $s$ has a unique immediate successor in $\bT_k$; denote this as $s^+$.
We say that
$s$
{\em has passing number
$i$ at $c^k_n$} exactly when
$s^+(|c^k_n|)=i$.
The following is Definition 4.9 in \cite{DobrinenJML20} augmented to include ghost coding nodes; it extends Definition \ref{def.3.1.Sauer} of Sauer.
It is important to note that a ghost coding node in some subset $T\sse\bT_k$ is never a coding node in $\bT_k$.
\begin{defn}[Strong Similarity Map]\label{def.3.1.likeSauer}
Let $k\ge 3$ be fixed, and let $S,T\sse \bT_k$ be meet-closed subsets.
A function $f:S\ra T$ is a {\em strong similarity map} of $S$ to $T$ if for all nodes $s,t,u,v\in S$, the following hold:
\begin{enumerate}
\item
$f$ is a bijection.
\item
$f$ preserves lexicographic order: $s<_{\mathrm{lex}}t$ if and only if $f(s)<_{\mathrm{lex}}f(t)$.
\item
$f$ preserves meets, and hence splitting nodes:
$f(s\wedge t)=f(s)\wedge f(t)$.
\item
$f$ preserves relative lengths:
$|s\wedge t|<|u\wedge v|$ if and only if
$|f(s)\wedge f(t)|<|f(u)\wedge f(v)|$.
\item
$f$ preserves initial segments:
$s\wedge t\sse u\wedge v$ if and only if $f(s)\wedge f(t)\sse f(u)\wedge f(v)$.
\item $f$ preserves (ghost) coding nodes:
$s$ is a coding node in $S$ if and only if $f(s)$ is a coding node in $T$.
If $S$ also has ghost coding nodes, then
$s$ is a ghost coding node in $S$ if and only if $f(s)$ is a ghost coding node in $T$.
\item
$f$ preserves passing numbers at (ghost) coding nodes:
If $c$ is a (ghost) coding node in $S$ and $s$ is a node in $S$ with $|s|=|c|$,
then $(f(s))^+(|f(c)|)=s^+(|c|)$.
In words, the passing number of $f(s)$ at $f(c)$ equals the passing number of $s$ at $c$.
\end{enumerate}
We say that $S$ and $T$ are {\em strongly similar}, and write $S\ssim T$, exactly when there is a strong similarity map between $S$ and $T$.
\end{defn}
It follows from (3) that
$s\in S$ is a splitting node in $S$ if and only if $f(s)$ is a splitting node in $T$.
In all cases above, it may be that $s=t$ and $u=v$,
so in particular,
(5) implies that $s\sse u$ if and only if $f(s)\sse f(u)$.
Notice that strong similarity is an equivalence relation, since the inverse of a strong similarity map is a strong similarity map, and the composition of two strong similarity maps is a strong similarity map.
If $T'\sse T$ and $f$ is a strong similarity of $S$ to $T'$, then we say that $f$ is a {\em strong similarity embedding} of $S$ into $T$.
Our goal in this subsection is to define a space of subtrees of
$\bT_k$ for which the development of a Ramsey theory akin to the Halpern-\Lauchli\ and Milliken Theorems is possible.
The first potential obstacle arises due to the fact that $k$-cliques are forbidden in $\mathcal{H}_k$.
This manifests in terms of trees in the following way:
There are finite subtrees of $\bT_k$ which are strongly similar to an initial segment of $\bT_k$, and yet cannot be extended within $\bT_k$ to a subtree which is strongly similar to $\bT_k$.
In this subsection we
make precise what the possible obstructions are.
We then define the set of strong $\mathcal{H}_k$-coding trees to be those trees which avoid the possible obstructions.
The next definitions are new for $k$-clique-free graphs with $k>3$,
and are necessary for the work in this paper.
When $k=3$, the rest of this section simply reproduces the concepts of sets of parallel $1$'s and the Parallel $1$'s Criterion used throughout \cite{DobrinenJML20}, though in a new and more streamlined manner.
Fix $k\ge 3$ throughout the rest of Section \ref{sec.4}.
If $X$ is a level subset of some meet-closed $S\sse \bT_k$, let $l_X$ denote the length of the members of $X$.
If the nodes in $X$ are not maximal in $S$,
let
the set of immediate successors of $X$ in $\widehat{S}$ be denoted by $X^+$.
Thus, when $S$ is understood,
\begin{equation}
X^+=\{s^{\frown}i:s\in X,\ i\in 2,\mathrm{\ and \ } s^{\frown}i\in \widehat{S}\}.
\end{equation}
Note that if $l_X$ is the length of a (ghost) coding node in $\bT_k$, then each node in $X$ has a unique extension in $X^+$ which is determined by $\bT_k$, regardless of $S$.
\begin{defn}[Pre-$a$-Clique and Witnesses]\label{defn.prepclique}
Let $a\in [3,k]$, and
let $X\sse \bT_k$ be a level subset.
($X$ is allowed to consist of a single node.)
Let $l_X$ denote the length of the nodes in $X$.
We say that $X$
{\em has a pre-$a$-clique}
if there is an index set $I\sse[-k+2,-1]\cup\mathbb{N}$ of size $a-2$ such that,
letting $i_*=\max(I)$ and $l=
|c^k_{i_*}|$,
the following hold:
\begin{enumerate}
\item
The set
$\{c^k_{i}:i\in I\}$ codes an $(a-2)$-clique; that is,
for each pair $i<j$ in $I$,
$c^k_{j}(|c^k_{i}|)=1$;
\item
For each node $x\in X$ and each $i \in I$,
$x^+(|c^k_{i}|)=1$.
\end{enumerate}
In this case, we say that
$X$ has a pre-$a$-clique {\em at $l$}, and that $X\re l$ {\em is} a pre-$a$-clique.
The set of nodes $\{c^k_{i}:i\in I\}$ is said to {\em witness} that $X$ has a pre-$a$-clique at $l$.
\end{defn}
\begin{notation}\label{notn.WP_a}
We write $\mbox{P}_a(X)$ exactly when $X$ has a pre-$a$-clique.
\end{notation}
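For instance, in the simplest case $a=3$, the index set $I$ is a singleton $\{i\}$, condition (1) holds vacuously, and $\mbox{P}_3(X)$ holds exactly when there is some (ghost) coding node $c^k_i$ such that $x^+(|c^k_i|)=1$ for every $x\in X$. When $k=3$, this is the notion of a set of parallel $1$'s from \cite{DobrinenJML20}.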
\begin{rem}\label{rem.central}
Whenever a level set $X$ has a pre-$k$-clique, then
for any set $C$ of coding (and/or ghost coding) nodes of $\bT_k$ witnessing that
$\mbox{P}_k(X)$,
the set $C$ codes a $(k-2)$-clique,
and each $x\in X$ has passing number $1$ at each $c\in C$.
It follows that, if $X$ has more than one node, then for any coding node $c$ in $\bT_k$ extending some $x\in X$,
any extension of any $y\in X\setminus\{x\}$ to some node $y'$ of length greater than $|c|$ must satisfy $y'(|c|)=0$.
Thus, the nodes in a pre-$k$-clique are `entangled':
The splitting possibilities in the cone above one of these nodes depends on the cones above the other nodes in the pre-$k$-clique.
If $X$ is contained in some finite subtree $A$ of $\bT_k$ and
$\mbox{P}_k(X)$ is not witnessed by coding nodes in $A$,
then the graph coded by $A$ has no knowledge
that the cones above $X$ in $\bT_k$ are entangled.
Then no
extension of $A$ into $\bT_k$ can be strongly similar to $\bT_k$.
This is one of the main reasons that developing Ramsey theory for Henson graphs is more complex than for the Rado graph.
Even if $a<k$, pre-$a$-cliques are also entangled.
In the set-up to the space of strong coding trees, we must consider pre-$a$-cliques for each $a\in [3,k]$;
it is necessary to witness them in order to guarantee the existence of extensions
within a given strong $\mathcal{H}_k$-coding tree $T$
which are strongly similar to some
tree which should exist according to the $K_k$-Free Criterion.
The guarantee of such extensions is the heart of the extension lemmas in Section \ref{sec.ExtLem}.
\end{rem}
\begin{defn}[New Pre-$a$-Clique]\label{defn.newll1s}
Let $a\in [3,k]$, and let $X\sse \bT_k$
be a level set. ($X$ can consist of a single coding node.)
We say that $X$
{\em has a new pre-$a$-clique at $l$} if
$X\re l$ is
a pre-$a$-clique and
for each $l'< l$ for which $X\re l$ and $X\re l'$ have the same number of nodes,
$X\re l'$
is not a pre-$a$-clique.
\end{defn}
The reasoning behind the requirement that $X\re l$ and $X\re l'$ have the same number of nodes will become more apparent in later sections, when we want to color finite antichains of coding nodes coding a copy of some finite $K_k$-free graph.
\begin{defn}\label{defn.nonewpkc}
Let
$a\in[3, k]$, and let $X, Y\sse \bT_k$ be level sets with $l_Y>l_X$.
We say that $Y$
{\em has no new pre-$a$-cliques over $X$}
if and only if the following holds:
For each $j\in(l_X,l_Y]$
and each $Z\sse Y$,
if $Z\re j$ is a pre-$a$-clique,
then
$Z$ end-extends $Z\re l_X$,
and
$Z\re l_X$ already has a pre-$a$-clique.
We say that $Y$ {\em has no new pre-cliques over $X$} if $Y$ has no new pre-$a$-cliques over $X$ for any $a\in[3,k]$.
\end{defn}
For example, suppose $X$ is a singleton which is a new pre-$a$-clique for some $a\in[3,k]$ and $Z$ is a level set with at least two members such that $Z\re l_X=X$.
If for some
$l_X<j\le l_Z$, $Z\re j$ has at least two distinct nodes and for each $z\in Z$, $z(j)=1$, then $Z$
has at least a new pre-$3$-clique over $X$.
The next definition
gives precise conditions under which a new pre-$a$-clique at $l$ in a subtree $T$ of $\bT_k$ is maximal in the interval
of $T$ containing $l$.
\begin{defn}[Maximal New Pre-$a$-Clique]\label{defn.newmpkc}
Let $T$ be a subtree of $\bT_k$,
let $X\sse T$
be a level set,
and let $a\in[3,k]$.
We say that $X$
has a {\em maximal new pre-$a$-clique in $T$ at $l$}
if
$X\re l$
is a new pre-$a$-clique in $T$ which is also maximal in $T$ in the following sense:
Let
$d$ denote the critical node in $T$ of maximum length satisfying $|d|<l$.
If $m$ is the index so that $d=d^T_m$,
let $e$ denote $d^T_{m+1}$ and
note that
$l\le |e|$.
Then for any $l'\in (l,|e|]$ and any new pre-$a$-clique $Y\sse T\re l'$,
if $Y\re l$ contains $X\re l$
then these sets are equal; hence $l'=l$, since $T$ has no splitting nodes in the interval $(|d|,|e|)$.
We write $\mbox{MP}_a(X;T)$ if
$X$ has a maximal new pre-$a$-clique in $T$ in the interval of $T$ containing the length of the nodes in $X$.
Thus, if $l_X=|d^T_m|$,
then $\mbox{MP}_a(X;T)$ means that for some $l\in(l^T_{m-1},l^T_m]$,
$X$ has a maximal
new pre-$a$-clique at $l$.
\end{defn}
In the setting of Definition \ref{defn.newmpkc}, given any level set $Z$ end-extending $X$, we also say that $Z$ has a maximal new pre-$a$-clique in $T$ at $l$.
We will say that a set $Y\sse T$ {\em contains} a maximal new pre-$a$-clique at $l$ if
$\mbox{MP}_a(X;T)$ for some
subset $X\sse Y\re l$.
\begin{defn}[Strong Isomorphism]\label{defn.stable}
Let $S$ and $T$ be strongly similar subtrees of $\bT_k$ with $M$ many critical nodes.
The strong similarity map $f:T\ra S$ is a
{\em strong isomorphism} if for each positive
$m\in M$, the following holds:
For each $a\in [3,k]$,
a level subset $X\sse T\re |d^T_{m}|$ has a maximal new
pre-$a$-clique
in $T$ in the interval $(|d^T_{m-1}|,|d^T_{m}|]$
if and only if $f[X]$ has a
maximal new pre-$a$-clique
in $S$ in the interval $(|d^S_{m-1}|,|d^S_{m}|]$.
When there is a strong isomorphism between $S$ and $T$,
we say that $S$ and $T$ are {\em strongly isomorphic} and
write $S\cong T$.
\end{defn}
\begin{observation}\label{obs.urcool}
The relation $\cong$ is an equivalence relation, since the
inverse of a strong isomorphism is a strong isomorphism, and
composition of two strong isomorphisms is again a strong isomorphism.
Moreover, since strong similarity maps preserve (ghost) coding nodes and passing numbers,
any strong isomorphism
$f :T\ra S$ will map a set of coding (and/or ghost coding) nodes in $T$ witnessing that $X$ is a pre-$a$-clique in $T$ to a set of coding (and/or ghost coding) nodes in $S$ witnessing that $f[X]$ is a pre-$a$-clique in $S$.
\end{observation}
Strong isomorphisms preserve all relevant structure: the shape of the tree, the (ghost) coding nodes and passing numbers, and the maximal new pre-$a$-cliques and their witnesses.
They provide the essential structure of our strong
$\mathcal{H}_k$-coding trees.
\begin{defn}[The Space of Strong $\mathcal{H}_k$-Coding Trees $(\mathcal{T}_k,\le, r)$]\label{defn.T_pspace}
A tree $T\sse\bT_k$
(with designated ghost coding nodes) is a member of $\mathcal{T}_k$ if and only if there is a strong isomorphism, denoted $f_T$, from $\bT_k$ onto $T$.
Thus, $\mathcal{T}_k$ consists of all subtrees of $\bT_k$ which are strongly isomorphic to $\bT_k$.
Call
the members of $\mathcal{T}_k$ {\em strong $\mathcal{H}_k$-coding trees}, or
just {\em strong coding trees} when $k$ is clear.
The partial ordering $\le$ on $\mathcal{T}_k$ is simply inclusion:
For $S,T\in\mathcal{T}_k$, we write
$S\le T$ if and only if $S$ is a subtree of $T$.
If $T\in\mathcal{T}_k$, the notation $S\le T$ implies that $S$ is also a member of $\mathcal{T}_k$.
Given $T\in\mathcal{T}_k$, for $n\in[-k+2,-1]\cup\bN$,
we let $c^T_n$ denote $ f_T(c^k_n)$; hence,
$\lgl c^T_{-k+2},\dots,c^T_{-1}\rgl$ enumerates the ghost coding nodes and
$\lgl c^T_n:n\in\mathbb{N}\rgl$ enumerates the coding nodes in $T$ in increasing order of length.
Letting
$\lgl d^T_m:m\in\mathbb{N}\rgl$ enumerate the critical nodes of $T$ in order of increasing length,
note that the $m$-th critical node in $T$, $d^T_m$, equals $f_T(d^k_m)$.
In particular, the {\em ghost coding node} $c^T_{-k+2}$ equals $d^T_0$, the minimum splitting node in $T$.
It follows from the definition of strong isomorphism and the structure of $\bT_k$ that
each coding node in $T$ is a singleton pre-$k$-clique, while each ghost coding node is not.
Precisely, $c_{-k+2}^T$ is not a pre-clique at all.
If $k\ge 4$, then $c_{-k+3}^T$ is a pre-$3$-clique, and in general, for $3\le i<k$,
the ghost coding node $c^T_{-k+i}$ is a
pre-$i$-clique.
Thus, the ghost coding nodes in $T$ cannot be coding nodes in $\bT_k$; hence the name.
It is the case, though, that each ghost coding node $c^T_n$ with $n>-k+2$ in $T$ has the same length as some ghost or coding node in $\bT_k$.
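In summary (merely restating the facts just established), the pre-clique status of the (ghost) coding nodes of $T$ is as follows:
\begin{equation*}
c^T_{-k+2}\ \text{is not a pre-clique};\qquad
c^T_{-k+i}\ \text{is a pre-}i\text{-clique, for }3\le i<k;\qquad
c^T_{n}\ \text{is a singleton pre-}k\text{-clique, for }n\in\mathbb{N}.
\end{equation*}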
Let $T(m)$ denote the set of nodes in $T$ of length $|d^T_m|$.
Recalling the notation in (\ref{eq.T(m)}) and (\ref{eq.r_m(T)}) in Subsection \ref{subsection.T_p}, the $m$-th
{\em finite approximation} of $T$ is
\begin{equation}\label{eq.r_m}
r_m(T)= \bigcup_{j<m} T(j),
\end{equation}
for $m\in\mathbb{N}$.
This is equal to $\{t\in T: |t|< |d^T_m|\}$, since $T$ is a tree.
Thus for $m<n$, $r_n(T)$ end-extends $r_m(T)$,
and $T=\bigcup_{m\in\mathbb{N}}r_m(T)$.
Notice that for any tree $T$, $r_0(T)$ is the empty set and $r_1(T)=\{d^T_0\}$, where $d^T_0$ is the splitting node of smallest length in $T$.
For each $m\in\mathbb{N}$, define
\begin{equation}
\mathcal{AT}^k_m=\{r_m(T):T\in\mathcal{T}_k\},
\end{equation}
and let
\begin{equation}
\mathcal{AT}^k=\bigcup_{m\in\mathbb{N}}\mathcal{AT}^k_m.
\end{equation}
Given
$A\in\mathcal{AT}^k$ and
$T\in\mathcal{T}_k$,
define
\begin{equation}
[A,T]=\{S\in \mathcal{T}_k: \exists m\, (r_m(S)=A)\mathrm{\ and\ }S\le T\}.
\end{equation}
Given $j<m$, $A\in\mathcal{AT}^k_j$ and $T\in\mathcal{T}_k$, define
\begin{equation}
r_m[A,T]=\{r_m(S):S\in [A,T]\}.
\end{equation}
For $A\in \mathcal{AT}^k$ and $B\in \mathcal{AT}^k\cup\mathcal{T}_k$,
if
for some $m$, $r_m(B)=A$, then we write $A\sqsubseteq B$ and say that $A$ is an {\em initial segment} of $B$.
If $A\sqsubseteq B$ and $A\ne B$, then we write $A\sqsubset B$ and say that $A$ is a {\em proper initial segment} of $B$.
\end{defn}
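Recalling that $S\le T$ implies $S\in\mathcal{T}_k$ and that $A\sqsubseteq S$ means $r_m(S)=A$ for some $m$, the sets defined above may be rewritten compactly (an equivalent restatement, not a new definition):
\begin{equation*}
[A,T]=\{S\le T: A\sqsubseteq S\}\qquad\text{and}\qquad
r_m[A,T]=\{r_m(S): S\le T\ \text{and}\ A\sqsubseteq S\}.
\end{equation*}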
\begin{rem}
If a subset $A\sse \bT_k$ does not contain sequences of $0$'s of unbounded length, then
there is an $n\ge 0$ such that each node in $A$ has passing number $1$ at $c^k_i$, for some $i\in [-k+2,n]$.
Such an $A$ cannot satisfy property $(A_k)^{\tt tree}$ so it does not code $\mathcal{H}_k$;
hence it is not strongly similar to $\bT_k$.
Thus, the leftmost path through any member of $\mathcal{T}_k$ is the infinite sequence of $0$'s.
It follows that
for $T\in\mathcal{T}_k$,
the strong isomorphism $f_T:\bT_k\ra T$
must take each splitting node in $\bT_k$ which is a sequence of $0$'s to a splitting node in $T$ which is a sequence of $0$'s.
In particular, $f_T$ takes
$c_{-k+2}^k$ to the ghost coding node $c_{-k+2}^T$ of $T$, which is a splitting node in $\bT_k$ consisting of a sequence of $0$'s.
\end{rem}
Next, we define what it means for a pre-clique to be witnessed inside a given subtree of $\bT_k$.
This sets the stage for the various {\em Witnessing Properties} to follow.
The Strong Witnessing Property will be a structural characteristic of strong $\mathcal{H}_k$-coding trees.
The Witnessing Property
will be useful for extending a given finite subtree of some $T\in\mathcal{T}_k$ as should be possible according to the $K_k$-free Branching Criterion.
Both will be utilized in the sections that follow.
\begin{defn}[Witnessing new pre-cliques in $T$]\label{def.WinT}
Let $T$ be a finite or infinite subtree of $\bT_k$ with
some
coding and/or ghost coding nodes.
Suppose that $X$ is a level subset of $T$ which has
a new
pre-$a$-clique at $l$, for some $a\in [3,k]$.
We say that a set of coding and/or ghost coding nodes $\{c^T_{i}: i\in I\}$ in $T$,
$|I|=a-2$, {\em witnesses in $T$}
that $X$ has
a new
pre-$a$-clique at $l$
if, letting $i_*=\max(I)$,
\begin{enumerate}
\item
$\{c^T_{i}: i\in I\}$ codes an $(a-2)$-clique;
\item
$|c^T_{i_*}|\ge l$ and
$T$ has no critical nodes in
the interval $[l,|c^T_{i_*}|)$;
\item
For each $x\in X$,
the node $y_x$ in
$T\re |c^T_{i_*}|$ extending $x\re l$
has passing number $1$ at
$c^T_{i}$, for each $i\in I$.
\end{enumerate}
In this case, we write $\mbox{WP}_a(X;T)$.
If $X\re l$ is a maximal new pre-$a$-clique and is witnessed by a set of coding nodes in $T$, we write $\mbox{WMP}_a(X;T)$.
\end{defn}
\begin{rem}
Note that the set $\{y_x:x\in X\}$ in (3) above is well-defined, since by (2), $T$ has no critical nodes
(in particular no splitting nodes) in
the interval $[l,|c^T_{i_*}|)$.
The passing number of $y_x$ at $c_{i_*}^T$
is uniquely determined by $\bT_k$ to be the passing number of $(y_x)^+$ at $c_{i_*}^T$.
We point out that there may be other sets of coding and/or ghost coding nodes in $\bT_k$ witnessing $WP_a(X;T)$.
However,
any such set of witnesses in $T$ must contain $c^T_{i_*}$.
\end{rem}
In the following, given a finite or infinite subtree $T$ of $\bT_k$,
recall that $\lgl d^T_m:m\in M\rgl$ is the enumeration of the critical nodes in $T$ in order of increasing length.
Here, either $M\in\bN$ or $M=\bN$.
Enumerate the coding nodes in $T$ as $\lgl c^T_n:n\in N\rgl$, where $N\in\bN$ or $N=\bN$.
If $T$ has ghost coding nodes, enumerate these as $c^T_n$, where $n\in [-k+2,j]$ for some $j\le -1$.
The number $m_n$ is the index such that $d^T_{m_n}=c^T_n$.
\begin{defn}[Strong Witnessing Property]\label{defn.PrekCrit}
A subtree $T$ of $\bT_k$
has the {\em Strong Witnessing Property} if the following hold:
\begin{enumerate}
\item
Each new pre-clique in $T$
takes place in some interval in $T$ of the form
$(|d^T_{m_n-1}|,|c^T_n|]$.
\item
Each new pre-clique in $T$ is witnessed in $T$.
\end{enumerate}
\end{defn}
Notice that the coding node $c^T_n$ in Definition
\ref{defn.PrekCrit}
is obligated (by Definition \ref{def.WinT}) to be among any set of coding nodes witnessing a new pre-$a$-clique taking place in the interval $(|d^T_{m_n-1}|,|c^T_n|]$.
Further, in order to satisfy Definition \ref{defn.PrekCrit}, it suffices that the maximal new pre-$a$-cliques are witnessed in $T$,
as this automatically guarantees that every new pre-$a$-clique is witnessed in $T$.
\begin{defn}[Witnessing Property]\label{defn.WP}
A subtree $T$ of $\bT_k$ has the {\em Witnessing Property (WP)} if
each new pre-clique in $T$ of size at least two
takes place in some interval in $T$ of the form
$(|d^T_{m_n-1}|,|c^T_n|]$ and
is witnessed by a set of coding nodes in $T$.
\end{defn}
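The two witnessing properties may be contrasted as follows (a summary of Definitions \ref{defn.PrekCrit} and \ref{defn.WP}): both require each relevant new pre-clique to take place in an interval of the form $(|d^T_{m_n-1}|,|c^T_n|]$; they differ in which pre-cliques must be witnessed:
\begin{equation*}
\text{Strong WP: every new pre-clique in }T\text{ is witnessed in }T;\qquad
\text{WP: every new pre-clique in }T\text{ of size at least two is witnessed by coding nodes in }T.
\end{equation*}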
The idea behind the Witnessing Property is that we want a property strong enough to guarantee that a finite subtree $T$ of $\bT_k$ can be extended within $\bT_k$ according to the $k$-FBC.
However,
each coding node is a singleton pre-$k$-clique, and if $T$ is an antichain of coding nodes in $\bT_k$ coding some finite $K_k$-free graph, we cannot require that all singleton pre-$k$-cliques be witnessed in $T$.
The Witnessing Property will
allow just the right amount of flexibility to achieve
our Ramsey Theorem for antichains of coding nodes.
\begin{lem}\label{lem.concpresWP}
If $A$ is a subtree of $\bT_k$ which has the (Strong) Witnessing Property and $A\cong B$,
then $B$ has the (Strong) Witnessing Property.
\end{lem}
\begin{proof}
Given the hypotheses,
let $f:B\ra A$ be a strong isomorphism from $B$ to $A$.
Suppose $X\sse B$ is a level set which has a new pre-$a$-clique, for some $a\in [3,k]$.
Let $m$ be the index such that the new pre-$a$-clique in $X$ takes place in the interval
$(|d^B_{m-1}|,|d^B_m|]$.
Without loss of generality, assume that $X$ has a maximal new pre-$a$-clique in $B$ in this interval.
In the case for WP, assume that $X$ has at least two members.
Since $f$ is a strong isomorphism,
$f[X]$ has a maximal new pre-$a$-clique in $A$ in the interval $(|d^A_{m-1}|,|d^A_m|]$.
Since $A$ has the (Strong) Witnessing Property, $d^A_m$ must be a coding node in $A$;
moreover,
this coding node must be among the set of coding nodes in $A$ witnessing that $f[X]$ has a new pre-$a$-clique.
Therefore, $d^B_m$ is a coding node,
and $d^B_m$
is among the set of coding nodes witnessing that $X$ has
a new pre-$a$-clique,
since $f$ being a strong similarity map implies $f$ preserves coding nodes and passing numbers.
Furthermore,
each new pre-$a$-clique in $B$ (of size at least two in the case of WP)
is witnessed in $B$.
Hence, $B$ has the (Strong) Witnessing Property.
\end{proof}
\begin{lem}\label{lem.need}
Suppose $A,B$ are subtrees of $\bT_k$ and that $A$ has the Strong Witnessing Property.
Then $A\cong B$ if and only if
$A\ssim B$ and $B$ also has the Strong Witnessing Property.
\end{lem}
\begin{proof}
For the forward direction,
note that $A\cong B$ implies $A\ssim B$, by the definition of strongly isomorphic.
If moreover, $A$ has the Strong Witnessing Property then Lemma
\ref{lem.concpresWP} implies $B$ also has the Strong Witnessing Property.
Now suppose that $A\ssim B$ and both $A$ and $B$ have the Strong Witnessing Property.
Let $f:A\ra B$ be the strong similarity map.
Suppose
$X$ is a level set in $A$ which has a maximal new
pre-$a$-clique, for some $a\in [3,k]$.
Since $A$ has the Strong Witnessing Property,
there is a set of coding nodes $C\sse A$ witnessing that $X$ has a new
pre-$a$-clique.
Furthermore,
$l_X$ must be the length of some coding node in the set $C$.
Since
$f$ preserves coding nodes and passing numbers,
it follows that $f[C]$ is a set of coding nodes in $B$ witnessing that $f[X]$ has a pre-$a$-clique.
It remains to show that $f[X]$ has a new pre-$a$-clique and is maximal in $B$.
If $f[X]$ does not have a new pre-$a$-clique in $B$,
then, letting $c$ denote the longest coding node in $C$, there is some critical node $d$ in $B$ below $f(c)$ such that $f[X]\re |d|$ has a new pre-$a$-clique in $B$.
Since $B$ satisfies the Strong Witnessing Property, this new pre-$a$-clique appears at some coding node in $B$ below $d$,
and it must be witnessed by some set of coding nodes $D$ in $B$.
But then $f^{-1}[D]$ is a set of coding nodes in $A$ witnessing a pre-$a$-clique in $X$.
Since the longest length of a coding node in $f^{-1}[D]$ is less than $|c|$,
the pre-$a$-clique in $X$ first occurs at some coding node below $c$, contradicting the assumption that $X$ has a new pre-$a$-clique.
Therefore, $f[X]$ is a new pre-$a$-clique in $B$.
If $f[X]$ is not maximal in $B$, then
there is some level set $Z$ of nodes of length $l_X$ properly containing $f[X]$ which has a new pre-$a$-clique in $B$.
Since $B$ has the Strong Witnessing Property,
there is some set of coding nodes $D\sse B$ witnessing that $Z$ has a new pre-$a$-clique.
Then $f^{-1}[D]$ witnesses that $f^{-1}[Z]$ is a pre-$a$-clique in $A$ properly containing $X$, contradicting the maximality of $X$ in $A$.
Therefore, $f$ preserves maximal new pre-$a$-cliques, and hence is a strong isomorphism.
Hence, $A\cong B$.
\end{proof}
The $K_k$-Free Branching Criterion from Definition \ref{defn.kFSC}
naturally gives rise to a version for skew trees.
A tree $T\sse\bT_k$ with (ghost) coding nodes $c_n$ for $n\in [-k+2,N)$, where $N\in\bN$ or $N=\bN$ and $c_{-k+2}$ is the stem of $T$,
satisfies the {\em $K_k$-Free Branching Criterion ($k$-FBC)}
if and only if the following holds:
For $n\ge -k+2$,
letting $u$ denote the node $c_{n+1}\re |c_n|$,
each node $t\in T$ of length $|c_n|$ branches in $T$ before reaching length $|c_{n+1}|$ if and only if for each $I\sse [-k+2,n-1]$ of size $k-2$ for which the set $C=\{c_i:i\in I\}$ codes a $(k-2)$-clique
and such that $u(|c_i|)=1$ for each $i\in I$,
$t(|c_i|)=0$ for at least one $i\in I$.
Notice that if a skew tree satisfies the $k$-FBC, then
Theorem \ref{thm.A_3treeimpliestrianglefreegraph}
implies the following fact:
\begin{observation}\label{obs.kfbcH_k}
Any skew tree satisfying the $K_k$-Free Branching Criterion in which the coding nodes are dense codes a copy of $\mathcal{H}_k$.
\end{observation}
\begin{lem}\label{lem.internalcharacterization}
(1) If $T\sse \bT_k$ is strongly similar to $\bT_k$,
then $T$ satisfies the $K_k$-Free Branching Criterion.
(2) If $T\sse \bT_k$ is strongly similar to $\bT_k$ and
has the Strong Witnessing Property,
then the strong similarity map from $\bT_k$ to $T$ is a strong isomorphism, and hence $T$ is a member of $\mathcal{T}_k$.
\end{lem}
\begin{proof}
(1) follows in a straightforward manner from the definitions of $k$-FBC and strong similarity map, along with the structure of $\bT_k$, as we now show.
Suppose $T\sse \bT_k$ is strongly similar to $\bT_k$, and let $f:\bT_k\ra T$ be the strong similarity map.
Note that for each integer $n\ge -k+2$, $c^T_n=f(c^k_n)$.
Fix $n\ge -k+2$ and a node
$t\in T\re l^T_n$ which does not extend to $c^T_{n+1}$.
Then $s:=f^{-1}(t)$ is in $\bT_k\re l^k_n$.
Since $f$ is a strong similarity map, $s$ does not extend to the coding node $c^k_{n+1}$ in $\bT_k$.
Since $\bT_k$ satisfies the $k$-FBC,
$s$ splits in $\bT_k$ before reaching the level of $c^k_{n+1}$ if and only if,
letting $u=c^k_{n+1}\re (l_n^k+1)$,
for each subset $I\sse [-k+2,n]$ of size $k-2$ such that
$C=\{c^k_i:i\in I\}$ codes a $(k-2)$-clique
and $u$ has passing number $1$ at each
$c\in C$,
there is some $c\in C$ at which $s^+$ has passing number $0$.
Since $t=f(s)$ and $f$ is a strong similarity map,
$t$ splits in $T$ before reaching the level of $c^T_{n+1}$ if and only if,
letting $v=c^T_{n+1}\re (l^T_n+1)$,
for each subset $I\sse [-k+2,n]$ of size $k-2$ for which
$D=\{c^T_i:i\in I\}$ codes a $(k-2)$-clique
and $v$ has passing number $1$ at each
$c\in D$,
there is some $c\in D$ at which $t^+$ has passing number $0$.
Thus, $T$ satisfies the $k$-FBC.
For
(2), if $T\sse\bT_k$ is strongly similar to $\bT_k$ and has the Strong Witnessing Property, then
it follows from Lemma \ref{lem.need}
that $T\cong \bT_k$
since $\bT_k$ has the Strong Witnessing Property.
\end{proof}
\begin{lem}\label{lem.psim.properties}
Every $T\in\mathcal{T}_k$ has
the following properties:
\begin{enumerate}
\item
$T\ssim \bT_k$.
\item
$T$ satisfies the $K_k$-Free Branching Criterion.
\item
$T$ has the Strong Witnessing Property.
\end{enumerate}
\end{lem}
\begin{proof}
(1) is immediate from the definition of $\mathcal{T}_k$.
(2) follows from Lemma \ref{lem.internalcharacterization} part (1).
(3) follows from (1) and Lemma \ref{lem.need}.
\end{proof}
\section{Extension Lemmas}\label{sec.ExtLem}
Unlike Milliken's strong trees,
not every finite subtree of a strong $\mathcal{H}_k$-coding tree can be extended within that ambient tree
to another member of $\mathcal{T}_k$, nor necessarily even to another finite tree of a desired configuration.
This section provides structural properties of finite subtrees
which are necessary and sufficient to extend to a larger tree of a particular strong similarity type.
The first subsection lays the groundwork for these properties and the second subsection proves extension lemmas which are fundamental to developing Ramsey theory on strong $\mathcal{H}_k$-coding trees.
The extension lemmas extend and streamline similar lemmas in \cite{DobrinenJML20}, taking care of new issues that arise when $k\ge 4$.
Furthermore, these lemmas lay new groundwork for general extension principles, with the benefit of a simpler
proof of Theorem \ref{thm.matrixHL} than the proof of its instance for $\mathcal{H}_3$ in \cite{DobrinenJML20}.
\subsection{Free level sets}\label{subsec.valid}
In this subsection, we provide criteria which will aid in the extension lemmas in Subsection \ref{subsec.extlemmas}.
These requirements will guarantee that a finite subtree of a strong coding tree $T$ can be extended {\em within $T$} to another strong coding tree.
Recall that given a tree $T\in\mathcal{T}_k$ and $m\in\bN$, $T(m)$ denotes the set of nodes in $T$ of length equal to $|d^T_m|$, the length of the $m$-th critical node in $T$.
The length of a (ghost) coding node $c^T_n$ is denoted $l^T_n$, which equals $|d^T_{m_n}|$.
Thus, $T(m_n)=T\re l^T_n$.
Throughout this section, $n \ge -k+2$, and when we write ``coding node'' we are including ghost coding nodes.
\begin{defn}[Free]\label{defn.nonewppc}
Let $T\in\mathcal{T}_k$ be fixed.
We say that a level set $X\sse \widehat{T}$ of length $l$
is {\em free} in $T$
if, letting $n$ be least such that $l^T_n\ge l$ and letting $Y$ consist of the leftmost extensions of the members of $X$ in $T\re l^T_n$,
$Y$ has no new pre-$a$-cliques over $X$, for any $a\in [3,k]$.
\end{defn}
In particular, any level set $X\sse T$ whose length is the length of some coding node in $T$ is free in $T$.
\begin{rem}
For $k=3$, this is equivalent to the concept of ``$X$ has no pre-determined new parallel $1$'s in $T$" in \cite{DobrinenJML20}.
\end{rem}
\begin{term}\label{term.pc}
For a level set $Y$ end-extending a level set $X$,
we say that
$Y$ has {\em no new pre-cliques over $X$}
if $Y$ has no new pre-$a$-cliques over $X$, for any $a\in [3,k]$.
\end{term}
\begin{lem}\label{lem.leftmostfree}
Let $T\in\mathcal{T}_k$ be fixed,
and let $X\sse \widehat{T}$ be a level set
which is free in $T$.
Then for any $l>l_X$,
the set of leftmost extensions in $T$ of the nodes in $X$ to $T\re l$ contains no new pre-cliques over $X$.
Furthermore, for any $n$ such that $l^T_n> l_X$,
the leftmost extensions of $X$ in $T\re l^T_n$
have passing numbers $0$ at $c^T_n$.
It follows that any set of leftmost extensions of $X$ is free in $T$.
\end{lem}
\begin{proof}
This lemma follows from the fact that $T\cong \bT_k$.
To see this,
let $f:\bT_k\ra T$ be the strong isomorphism
witnessing that $T\in\mathcal{T}_k$,
and let $j$ be least such that $l^T_j\ge l_X$.
Let $n\ge j$ and $a\in[3,k]$ be given, and let $Y$ be the
end-extension of $X$ in
$T\re l^T_n$
consisting of the nodes
which are leftmost extensions in $T$ of the nodes in $X$.
Since $X$ is free in $T$, $Y\re l^T_j$ has no new pre-$a$-cliques over $X$.
Since
$f^{-1}$ is a strong similarity map, $f^{-1}[Y]$
is the collection of leftmost extensions in $\bT_k\re l^k_n$ of the level set $f^{-1}[X]$.
In particular, $f^{-1}[Y]$ has no new pre-$a$-cliques in the interval $[l^k_j,l^k_n]$;
moreover, the passing numbers of the members of $f^{-1}[Y]$ at coding nodes in this interval are all $0$.
Since $f$ is a strong isomorphism,
$Y=f\circ f^{-1}[Y]$ has no new pre-$a$-cliques over $X$, and all passing numbers of the leftmost extensions of $X$ in $T$ are $0$.
In particular, any set of leftmost extensions of $X$ in $T$ is free in $T$.
\end{proof}
An important property of $\mathcal{T}_k$ is that
all of its members contain
unbounded sequences of $0$'s.
\begin{lem}
Suppose $T\in\mathcal{T}_k$ and $s$ is a node in the leftmost branch of $T$.
Then $s$ is a sequence of $0$'s.
\end{lem}
\begin{proof}
Suppose there is a $T\in\mathcal{T}_k$ such that
for some $n$, no node of $T$ extends $0^{(n)}$.
Then there is a finite set $C$ of coding nodes in $\bT_k$ such that each node in $T$ has passing number $1$
at some member of $C$.
Let $l_C$ be the longest length of the coding nodes in $C$.
If for some $l$, for each $c\in C$ there is some coding node $e_c\in T$ so that
$\{t\in T\re l: t(|e_c|)=1\}=\{t\in T\re l:t(|c|)=1\}$,
then
every vertex of the graph $G$ coded by $T$ has an edge with some vertex coded by a node in the finite set $\{e_c:c\in C\}$.
In this case, $G$ is not a copy of $\mathcal{H}_k$.
Otherwise, for some $c\in C$, there is a $t\in T\re l$ such that $t(|c|)=1$, while $t$ has passing number $0$ at all coding nodes in $T$ of length less than $l$.
Then by the $K_k$-Free Branching Criterion of $T$,
$t$ extends in $T$ to $k-1$ many coding nodes coding a $(k-1)$-clique.
However, these nodes along with $c$ code a $k$-clique in $\bT_k$, which is a contradiction.
\end{proof}
\begin{notation}\label{notn.maxl_A}
Let $A$ be a finite subtree of $\bT_k$.
We let $l_A$ denote the maximum of the lengths of the nodes in $A$, and let
\begin{equation}
\max(A)=\{t\in A:|t|=l_A\}.
\end{equation}
Thus, $\max(A)$ is a level set.
If the maximal nodes in $A$ do not all have the same length, then $\max(A)$ will not contain all the maximal nodes in $A$.
\end{notation}
\begin{defn}[Finite strong coding tree]\label{defn.fsct}
A finite subtree $A\sse\bT_k$ is a
{\em finite strong coding tree} if and only if
$A\in\mathcal{AT}^k_{m+1}$
for $m$ such that either $d^k_m$ is a coding node or else $m=0$.
\end{defn}
\begin{lem}\label{lem.fsctvalid}
Given $T\in\mathcal{T}_k$, each finite strong coding tree $A$ contained in $T$
has the Strong Witnessing Property, and $\max(A)$ is free
in $T$.
\end{lem}
\begin{proof}
Fix $T\in \mathcal{T}_k$ and
let $A$ be a finite strong coding tree contained in $T$.
If $A\in \mathcal{AT}_0^k$, then
$A$ is the empty set, so the lemma vacuously holds.
Otherwise,
by Definition \ref{defn.fsct},
$\max(A)$ contains a coding node, so
$\max(A)$ is free in $T$.
Further, $A\cong r_{m+1}(\bT_k)$ for some $m$ such that $d^k_m$ is a coding node.
As $r_{m+1}(\bT_k)$ has the Strong Witnessing Property, Lemma \ref{lem.need} implies
that $A$ also has the Strong Witnessing Property.
\end{proof}
\subsection{Extension Lemmas}\label{subsec.extlemmas}
The next series of lemmas
will be used extensively throughout the rest of the paper.
As consequences,
these lemmas ensure that every tree in $\mathcal{T}_k$ contains infinitely many subtrees which are also members of $\mathcal{T}_k$,
and that for any $A\in \mathcal{AT}_m^k$
with
$A\sse T\in\mathcal{T}_k$ and $\max(A)$ free in $T$,
the set $r_j[A,T]$, defined in Definition \ref{defn.T_pspace}, is infinite for each $j> m$.
\begin{lem}\label{lem.poc}
Suppose $T\in \mathcal{T}_k$ and
$X\sse T $ is a level set with $X$ free in $T$.
Fix a subset $X'\sse X$.
Let
$Y'\sse T$
be any level set
end-extending $X'$ such that $Y'$ is free in $T$.
Let $Y''$ denote the set of leftmost extensions of $X\setminus X'$ in $T \re l_{Y'}$.
Then $Y=Y'\cup Y''$ is free in $T$, and any new pre-cliques in $Y$ occur in $Y'$.
In particular, if
$Y'$ has no new pre-cliques over $X'$,
then $Y$ has no new pre-cliques over $X$.
\end{lem}
\begin{proof}
Since $Y'$ is free in $T$ and
Lemma \ref{lem.leftmostfree} implies that $Y''$ is free in $T$, it follows that $Y$ is free in $T$.
Suppose that for some $a\in[3,k]$ there is a
$Z\sse Y$
such that $Z$ has a new pre-$a$-clique in the interval $(l_X,l_Y]$.
Let $X''=X\setminus X'$, $Z'=Z\cap Y'$, and $Z''=Z\cap Y''$.
By Lemma \ref{lem.leftmostfree},
$Y''$ has no new pre-cliques over $X''$,
so $Z'$ must be non-empty.
Suppose toward a contradiction that also $Z''$ is non-empty.
Let $l$ be the minimal length at which
this new pre-$a$-clique occurs, and
let $m$ be the least integer such that
$|d^T_m|<l \le |d^T_{m+1}|$.
Since $T$ has the Strong Witnessing Property (recall Lemma \ref{lem.psim.properties}),
$d^T_{m+1}$ must be a coding node in $T$, say $c^T_j$.
In the case that $l_Y<l^T_j$,
by Lemma \ref{lem.leftmostfree} we may extend the nodes in $Z$ leftmost to nodes in $T\re l^T_j$ without adding any new pre-cliques.
Thus,
without loss of generality, assume that $l_Y\ge l^T_j$.
Since the new pre-$a$-clique $Z\re l$ must be witnessed in $T$ at the level of $c^T_j$,
it follows that all nodes in $Z$ have passing number $1$ at $c^T_j$.
Let $f$ be the strong isomorphism from $T$ to $\bT_k$ and
let $V$ denote $f[Z\re l^T_j]$.
Then $V$ is a level set in $\bT_k$ of length $l^k_j$.
Since $f$ is a strong similarity map,
it preserves passing numbers.
Hence,
all nodes in $V$ have passing number $1$ at $c^k_j$ in $\bT_k$.
However,
since
$Y''$ consists of the leftmost extensions in $T$
of the nodes in $X''$,
it follows that
$f[Y'']$ is the set of
leftmost extensions in $\bT_k$ of $f[X'']$.
Thus, each member of $f[Y'']$ has passing number $0$ at $c_j^k$.
Since $Z''\ne\emptyset$,
also
$f[Z'']\ne\emptyset$, so $V$ has at least one node with passing number $0$ at $c^k_j$, contradicting the previous paragraph.
Therefore, $Z''$ must be empty, so $Z$ resides entirely within $Y'$.
In particular, if $Y'$ has no new pre-cliques over $X'$, then $Y$ has no new pre-cliques over $X$.
\end{proof}
\begin{lem}\label{lem.perfect}
Suppose $s$ is a node in a strong coding tree $T\in\mathcal{T}_k$.
If $n$ is least such that $|s|\le l^T_n$,
then there is a splitting node $t$ in $T$ extending $s$ such that
$|t|\le l^T_{n+k}$.
In particular,
every strong coding tree is perfect.
\end{lem}
\begin{proof}
It suffices to work with $\bT_k$, since each member of $\mathcal{T}_k$ is strongly isomorphic to
$\bT_k$.
We make use here of the
particular construction of $\bT_k$ from Example
\ref{ex.bTp}.
Let $s$ be a node in $\bT_k$, and let $n$ be least such that $l^k_n\ge |s|$.
Let $p> n$ be least such that $p=i(k-1)$ for some $i\ge 1$,
and let $s'$ be the leftmost extension of $s$ in
$\bT_k\re (l^k_p +1)$.
Note that $p< n+k$ and
that $s'$ has passing number $0$ at
$c^k_p$.
By the construction of $\bT_k$,
$c^k_{p+1}$ in $\bT_k$
will have passing number $1$ at precisely $c^k_{p-k+3},\dots, c^k_p$, and at no others.
Let $v$ denote the truncation of $c^k_{p+1}$ to length $l^k_p+1$.
The number of coding nodes in $\bT_k$ at which both
$s'$ and $v$ have passing number $1$ is at most $k-3$.
Therefore, $s'$ and $v$ do not code a pre-$k$-clique.
So by the $k$-FBC,
$s'$ extends to a splitting node $t$ in $\bT_k$ before reaching the level of $c^k_{p+1}$.
Since $p+1\le n+k$, it follows that $|t|\le l^k_{n+k}$.
\end{proof}
Given a set of nodes $Z\sse \bT_k$, by
the {\em tree induced by $Z$} we mean
the set of nodes
$\{t\re |v|:t\in Z,\ v\in Z^{\wedge}\}$.
\begin{lem}\label{lem.factssplit}
Suppose $A$ is a finite subtree of some strong coding tree $T\in \mathcal{T}_k$ with $\max(A)$ free in $T$.
Let $X$ be any nonempty subset of $ \max(A)$, and
let $Z$ be any subset of
$\max(A)\setminus X$.
Let
$\{s_i:i<\tilde{i}\}$
be any enumeration
of $X$ and suppose
$l\ge l_A$ is given.
Then
there exist $l_*>l$
and extensions
$t_i^0,t_i^1\supset s_i$ ($i<\tilde{i}$)
and $t_z\supset z$ ($z\in Z$), each in $T\re l_*$,
such that letting
\begin{equation}
Y=\{t_i^j:i<\tilde{i},\ j<2\}\cup\{t_z:z\in Z\},
\end{equation}
and letting $B$ denote the tree induced by $A\cup Y$,
the following hold:
\begin{enumerate}
\item
The splitting in $B$ above level $l_A$ occurs in the order of the enumeration of $ X$.
Thus, for $i<i'<\tilde{i}$,
$|t_i^0\wedge t_i^1|<|t_{i'}^0\wedge t_{i'}^1|$.
\item
$Y$ has no new pre-cliques over $\max(A)$ and is free in $T$.
\end{enumerate}
\end{lem}
\begin{proof}
If $l_A$ is not the level of some coding node in
$T$, begin by extending each member of $X$ leftmost in $T$ to the level of the
very next coding node in $T$.
In this case, abuse notation and
let $X$ denote this set of extensions.
Since $\max(A)$ is free in $T$,
this adds no new pre-cliques over $\max(A)$.
By Lemma \ref{lem.perfect},
every node in $X$ extends to a splitting node in $T$.
Let $s_0^*$ be the splitting node of least length in $T$ extending $s_0$,
and let $c^T_{n_0}$ be the coding node in $T$ of least length above $|s_0^*|$.
Extend all nodes in $\{s_i:1\le i<\tilde{i}\}$ leftmost in $T$ to length $l^T_{n_0}$, and label their extensions $\{s^1_i:1\le i<\tilde{i}\}$.
Given $1\le p<\tilde{i}$ and the nodes $\{s^p_i:p\le i<\tilde{i}\}$,
let $s^*_p$ be the splitting node of least length in $T$ extending $s_p^p$,
and let $c^T_{n_p}$ be the coding node in $T$ of least length above $|s_p^*|$.
If $p<\tilde{i}-1$, then
extend all nodes in $\{s^p_i:p+1\le i<\tilde{i}\}$ leftmost in $T$ to length $l^T_{n_p}$, and label these $\{s^{p+1}_i:p+1\le i<\tilde{i}\}$.
When $p=\tilde{i}-1$, let $n=n_{\tilde{i}-1}$ and
for each $i<\tilde{i}$ and $j<2$, let $t_i^j$ be the leftmost extension in $T$
of ${s^*_i}^{\frown}j$ to length $l^T_n$.
For each $z\in Z$, let $t_z$ be the leftmost extension in $T$ of $z$ to length
$l^T_n$.
This collection of nodes composes the desired set $Y$.
By Lemma \ref{lem.poc},
$Y$ has no new pre-cliques over $\max(A)$.
$Y$ is free in $T$ since the nodes in $Y$ have the length of a coding node in $T$.
\end{proof}
\begin{conv}\label{conv.POC}
Recall that when working within a strong coding tree $T\in\mathcal{T}_k$,
the passing numbers at coding nodes in $T$ are completely determined by $T$; in fact, they are determined by $\bT_k$.
For a finite subset $A$ of $T$ such that
$l_A$ equals $l_n^T$ for some $n<\om$,
we shall say that
$A$ {\em has the (Strong) Witnessing Property} if and only if the extension
$A\cup\{s^+:s\in\max(A)\}$ has the (Strong) Witnessing Property.
\end{conv}
The notion of a valid subtree is central to the constructions in this paper.
\begin{defn}[Valid]\label{defn.valid}
Suppose $T\in \mathcal{T}_k$ and let $A$ be a finite subtree of $T$.
We say that $A$
is {\em valid} in $T$ if and only if
$A$ has the Witnessing Property and
$\max(A)$ is free in $T$.
\end{defn}
Lemma \ref{lem.pnc} below shows that, given a valid subtree of a strong coding tree $T$,
any of its maximal nodes can be extended to some coding node $c_n^T$ in $T$ while the rest of the maximal nodes are extended to length $l_n^T$ so that their passing numbers are anything desired, subject only to the $K_k$-Free Branching Criterion.
\begin{lem}[Passing Number Choice]\label{lem.pnc}
Fix $T\in\mathcal{T}_k$ and
a finite subtree $A$ with $\max(A)$ free in $T$.
Let $\{s_i:i<\tilde{i}\}$ be any enumeration of
$\max(A)$,
and
fix some $d<\tilde{i}$.
To each $i\in \tilde{i}\setminus\{d\}$ associate an $\varepsilon_i\in\{0,1\}$, with the stipulation that $\varepsilon_i$ must equal $0$ if $\{s_i,s_d\}$ has a pre-$k$-clique.
In particular, $\varepsilon_d=0$.
Then given any $j$,
there is an $n\ge j$ such that the
coding node $c^T_n$
extends $s_d$,
and
there are
extensions $u_i\contains s_i$, $i\in\tilde{i}\setminus \{d\}$, in $T\re l_n^T$
such that,
letting $u_d$ denote $ c_n^T$ and letting $Y=\{u_i:i<\tilde{i}\}$,
the following hold:
\begin{enumerate}
\item
Each $u_i$ has
passing number $\varepsilon_i$ at $u_d$.
\item
Any new pre-cliques among subsets of $Y$ (except possibly for the singleton $\{s_d\}$)
have their first instances occurring in the interval
$(|d^T_{m_n-1}|,l^T_n]$.
\item
If $A$ has the Witnessing Property, then so does
$A\cup Y$.
Thus, if $A$ is valid, then $A\cup Y$ is also valid.
\item
If $A$ has the Strong Witnessing Property and $s_d$ has a pre-$(k-1)$-clique witnessed by coding nodes in $A$,
then $A\cup Y$ has the Strong Witnessing Property.
\end{enumerate}
\end{lem}
\begin{proof}
Assume the hypotheses in the first paragraph of the lemma.
Let $m$ be least such that
$l^T_m\ge l_A$, and
for each $i<\tilde{i}$, let $s'_i$ be the leftmost extension of $s_i$ in $T$ of length $l_m^T$.
Since $\max(A)$ is free in $T$, the set $\{s'_i:i< \tilde{i}\}$ has
no new pre-cliques over
$A$.
Given
$j$,
take $n$ minimal above $\max(j,m+1)$ such that
$c_n^T\contains s'_d$, and let $u_d=c_n^T$.
Such an $n$ exists, as the coding nodes in
any strong coding tree are dense in that tree, by its strong similarity to $\bT_k$.
For $i\ne d$,
extend $s'_i$ via its leftmost extension in $T$ to length
$l_{n-1}^T$ and label it $t_i$.
By Lemma \ref{lem.poc},
$\{t_i:i\in\tilde{i}\setminus\{d\}\}\cup\{u_d\re l^T_{n-1}\}$ has
no new pre-cliques over $\{s'_i:i< \tilde{i}\}$, with the possible exception of the singleton $u_d\re l^T_{n-1}$, so (2) of the Lemma holds.
For $i\in \tilde{i}\setminus\{d\}$ with $\varepsilon_i=0$,
let $u_i$ be the leftmost extension of $t_i$ of length $l_n^T$.
For $i\in\tilde{i}\setminus\{d\}$ with
$\varepsilon_i=1$, our assumption implies that
$\{s_i,s_d\}$ has no pre-$k$-cliques, and since the extensions of these nodes to length
$l^T_{n-1}$ have no new pre-cliques by Lemma \ref{lem.poc}, the pair $\{t_i,u_d\re l^T_{n-1}\}$ has no pre-$k$-cliques.
By the $K_k$-Free Branching Criterion of $T$,
$t_i$ splits in $T$ before reaching the level of $c^T_n$.
Let $u_i$ be the rightmost extension of $t_i$ to length
$l_n^T$.
Then for each $i<\tilde{i}$,
the passing number
of $u_i$ at $u_d$ is
$\varepsilon_i$.
Thus, (1) holds.
Suppose now that $A$ has the Witnessing Property.
Let $Y=\{u_i:i<\tilde{i}\}$.
Since the nodes in $Y$ have the length of the coding node $u_d$, $Y$ is free in $T$.
By construction, any new pre-cliques in $Y$ over $A$ occur in the interval $(l^T_{n-1},l^T_n]$.
Since $T$ has the Strong Witnessing Property, any new pre-cliques of $Y$ in the interval
$(l^T_{n-1},l^T_n]$ must actually occur in the interval
$(|d^T_{m_n-1}|,l^T_n]$.
It remains to show that any new pre-cliques of size at least two in $Y$ over $A$ in the interval $(|d^T_{m_n-1}|,l^T_n]$ are witnessed by coding nodes in $A\cup Y$.
We now show slightly more.
Suppose
$I\sse\tilde{i}$, where $d\not\in I$, and
$\{u_i:i\in I\}$
has a new pre-$a$-clique over $A$, for
some $a\in[3,k]$.
Let $Z$ denote $\{u_i:i\in I\}$ and let $l$ be
least
such that $Z\re l$ is a pre-$a$-clique, and note that
$l$ must be in the interval $(|d^T_{m_n-1}|,l_n^T]$.
Since $T$ has the Strong Witnessing Property,
there is some set of coding nodes
$C\sse T$ witnessing $Z\re l$.
As
$u_d$ is the least coding node in $T$ above $Z\re l$, $u_d$ must be in $C$, again by the Strong Witnessing Property of $T$.
It follows that each node in $Z$ must have passing number $1$ at $u_d$.
If $a=3$, then the coding node $u_d$, which is contained in $ Y$, witnesses the pre-$3$-clique in $Z$.
Now suppose that $a\ge 4$.
Then $C\setminus \{u_d\}$ witnesses that
$Z'=Z\cup\{u_d\}$ has a pre-$(a-1)$-clique.
The $l'$ at which $Z'\re l'$ is a new
pre-$(a-1)$-clique must be below $|d^T_{m_n-1}|$, since
$T$ cannot witness it at the level of $u_d$.
Since $Y\setminus\{u_d\}$ has no new pre-cliques over $A$ in the interval $(l_A, |d^T_{m_n-1}|]$,
it must be that $l'\le l_A$.
Since $Z'\re l_A$ has size at least two and is contained in $A$, the Witnessing Property of $A$ implies that
there is a set of coding nodes $C'$ contained in $A$ witnessing the pre-$(a-1)$-clique $Z'\re l'$.
Then $C'\cup\{u_d\}$ is contained in $A\cup Y$ and witnesses the pre-$a$-clique $Z$.
Now, suppose that $d\in I\sse\tilde{i}$, $I$ has size at least two, and $\{u_i:i\in I\}$ has a new pre-clique over $A$.
We will show that this is impossible.
We point out that in the interval
$(l^T_{n-1} +1, l_n^T)$,
the coding node $u_d$ has no new pre-cliques with any other node in $T$ of length $l^T_n$
(for $T$ has the Strong Witnessing Property, and such a new pre-clique could not be witnessed in $T$).
Since for each
$i\ne d$, the node $u_i\re l^T_{n-1}$ is the leftmost extension of $s_i$ in $T$, it follows that
for each $l\in (l_A, l^T_{n-1}]$, the set
$\{u_i\re l:i\in I\}$ has
no new pre-cliques of size two or more.
Thus, any new pre-clique occurring among $\{u_i:i\in I\}$ above $A$ must exclude $u_d$.
It follows that $A\cup Y$ has the Witnessing Property, so (3) holds.
We have already shown that, assuming that $A$ has the Witnessing Property, $u_d$ is not
in any subset of $Y$ of size at least two which has a new pre-clique over $A$, and that for $i\ne d$, any new pre-clique in $\{u_i\}$ over $A$ is witnessed in $A\cup Y$.
Assuming the premise of (4), $s_d$ has a pre-$(k-1)$-clique witnessed by coding nodes in $A$.
Thus, the Strong Witnessing Property of $A$ carries over to $A\cup Y$.
\end{proof}
The next lemma provides conditions under which a
subtree of a strong coding tree can be extended to another subtree with a prescribed strong similarity type.
This will be central to the constructions involved in proving the Ramsey theorems for strong coding trees as well as in further sections.
\begin{lem}\label{lem.facts}
Suppose $A$ is a subtree of a strong coding tree $T\in\mathcal{T}_k$ with $\max(A)$ free in $T$.
Fix any member $u\in\max(A)$.
Let $X$ be any subset of $\max(A)$ such that for each $s\in X$,
the pair $\{s,u\}$ has no pre-$k$-cliques,
and let $Z$ denote $\max(A)\setminus(X\cup\{u\})$.
Let $l\ge l_A$ be given.
Then there is an $l_*>l$
and there are extensions $u_*\supset u$,
$s_*^0,s_*^1\supset s$ for all $s\in X$,
and $s_*\supset s$ for all $s\in Z$, each of length $l_*$,
such that, letting
\begin{equation}
Y=\{u_*\}\cup\{s_*^i:s\in X,\ i \le 1\}\cup\{s_*:s\in Z\},
\end{equation}
and letting $B$ be the tree induced by $A\cup Y$,
the following hold:
\begin{enumerate}
\item
$u_*$ is a coding node.
\item
For each $s\in X$ and $i\le 1$, the passing number of $s_*^i$ at $u_*$ is $i$.
\item
For each $s\in Z$,
the passing number of
$s_*$ at $u_*$ is $0$.
\item
Splitting among the extensions of the $s\in X$ occurs in reverse lexicographic order:
For $s$ and $t$ in $X$,
$|s_*^0\wedge s_*^1|<|t^0_*\wedge t^1_*|$
if and only if $s>_{\mathrm{lex}}t$.
\item
There are no new pre-cliques
among the nodes in $X$
below the length of the longest splitting node in $B$ below $u_*$.
\end{enumerate}
If moreover $A$ has the (Strong) Witnessing Property, then $B$ also has the (Strong) Witnessing Property.
\end{lem}
\begin{proof}
Since $\max(A)$ is free in $T$,
apply Lemma \ref{lem.factssplit}
to extend $\max(A)$ to have splitting nodes in the desired order
without adding any new pre-cliques and so that this extension is free in $T$.
Then apply Lemma
\ref{lem.pnc} to extend to a level with a coding node and passing numbers as prescribed, with the extension being valid in $T$.
Lemma
\ref{lem.pnc} also guarantees that we can construct such a $B$ with the (Strong) Witnessing Property, provided that $A$ has the (Strong) Witnessing Property.
\end{proof}
This immediately yields the main extension theorem of this section.
\begin{thm}\label{thm.GOODnonempty}
Suppose $T\in\mathcal{T}_k$, $m<\om$,
and $A$ is a member of $\mathcal{AT}_m$ with $\max(A)$ free in $T$.
Then the set
$r_{m+1}[A,T]$ is infinite.
In particular,
for each $l<\om$, there is a member $B\in r_{m+1}[A,T]$ with $\max(B)$ free in $T$ and $l_B\ge l$.
Furthermore, $[A,T]$ is infinite, and
for each strictly increasing sequence of integers $(l_j)_{j>m}$, there is a member $S\in [A,T]$ such that
$|d^S_j|>l_j$ and $\max(r_j(S))$ is free in $T$, for each $j>m$.
\end{thm}
\begin{proof}
Recall that every member of $\mathcal{AT}_m$ has the Strong Witnessing Property.
The first part of the theorem follows from Lemmas \ref{lem.factssplit} and \ref{lem.pnc}.
The second part
follows from Lemma \ref{lem.facts}.
\end{proof}
The final lemmas of this section set up for constructions in the main theorem of Section \ref{sec.5}.
\begin{lem}\label{lem.HLCasebtruncate}
Suppose $T\in \mathcal{T}_k$,
$X\sse T$ is a level set containing a coding node $c^T_j$, and $X'\cup X''$ is a partition of $X$ with
$c^T_j\in X'$.
Suppose further that $j<n$, $c^T_n$ extends $c^T_j$, and $c_n^T\in Y'\sse T\re l^T_n$
end-extends $X'$ with the following properties:
$Y'$ has no new pre-cliques over $X'$,
and each node in $Y'$ has the same passing number at $c^T_n$ as it does at $c^T_j$.
Then there is a level set $Y''\sse T\re l^T_n$ end-extending $X''$ such that each node in $Y''$ has the same passing number at $c^T_n$
as it does at $c^T_j$,
and $Y=Y'\cup Y''$ has no new pre-cliques over $X$.
\end{lem}
\begin{proof}
If $n>j+1$,
first extend the nodes in $X''$ leftmost in $T$ to length $l^T_{n-1}$, and label this set of nodes $Y''\re l^T_{n-1}$.
By
Lemma \ref{lem.poc},
the set $Y''\re l^T_{n-1} \cup Y'\re l^T_{n-1}$ has no new pre-cliques over $X$.
Apply Lemma \ref{lem.pnc} to extend the nodes in $Y''\re l^T_{n-1}$ to a level set $Y''\sse T\re l^T_n$ such that
for each node $t\in Y''$,
$t$ has the same passing number at $c^T_n$ as it does at $c^T_j$.
Let $Y=Y'\cup Y''$.
Suppose towards a contradiction that for some $a\in [3,k]$, there is a new pre-$a$-clique
$Z\sse Y$
above $X$.
If $a=3$, then $c^T_n$ witnesses this pre-$3$-clique.
Since each node in $Y$ has the same passing number at $c^T_n$ as it does at $c^T_j$,
it follows that $Z\re l^T_j$ has a pre-$3$-clique which is witnessed by $c^T_j$.
Thus, $Z$ was not new over $X$.
Now suppose that $a\ge 4$.
Then $Z\cup \{c^T_n\}$ has a pre-$b$-clique, where $b=a-1$.
Since $Z$ is a new pre-$a$-clique and $T\cong \bT_k$,
the level at which the pre-$b$-clique in $Z\cup \{c^T_n\}$ is new must be some $l\le l^T_{n-1}$.
Since $Y$ has no new pre-cliques in the interval
$(l^T_j,l^T_{n-1}]$,
this $l$ must be less than or equal to $l^T_j$.
Since the passing numbers of members in $Y$ are the same at $c^T_n$ as they are at $c^T_j$,
it follows that $Z\cup\{c^T_j\}$ has a pre-$b$-clique.
This pre-$b$-clique must occur at some level strictly below $l^T_j$, since the passing number of the coding node $c^T_j$ at itself is $0$.
Hence, $Z\re l^T_j\cup\{c^T_j\}$ is a pre-$a$-clique.
Therefore, $Z$ is not a new pre-$a$-clique over $X$, a contradiction.
\end{proof}
\begin{lem}\label{lem.HLconstruction}
Suppose $T\in \mathcal{T}_k$, $m\in\bN$, and $B\in r_{m+1}[0,T]$ with $\max(B)$ free in $T$.
Let $x_*$ be the critical node of $\max(B)$,
let $X\sse \max(B)$ with $x_*\in X$,
and let $X'=\max(B) \setminus X$.
Suppose that $Y$ end-extends $X$ into $T$ so that $Y$ has no new pre-cliques over $X$, $Y$ is free in $T$,
and the critical node $x_*$ is extended to the same type of critical node $y_*$ in $Y$.
If $x_*$ is a coding node, assume that for each
$y\in Y$, the passing number of $y$ at $y_*$ is the same as the passing number of $y$ at $x_*$.
Then there is a level set $Y'$ end-extending $X'$ in $T$ to length $l_Y$ such that
$r_m(B)\cup (Y\cup Y')$
is a member of $r_{m+1}[r_m(B),T]$.
\end{lem}
\begin{proof}
Suppose first that $x_*$ is a splitting node.
By Lemma \ref{lem.poc},
letting $Y'$ be the level set of leftmost nodes in $T\re |x_*|$,
we see that $Y\cup Y'$ is free in $T$ and has no new pre-cliques over $\max(B)$.
In particular, $r_m(B)\cup(Y\cup Y')$ is a member of $r_{m+1}[r_m(B),T]$.
Now suppose that $x_*$ is a coding node.
Let $n$ be the integer such that $y_*=c^T_n$.
Then $l_X\le l^T_{n-1}$.
Let $W'$ denote the leftmost extensions of the nodes in $X'$ in $T\re l^T_{n-1}$.
Again by Lemma \ref{lem.poc},
the set $W'\cup (Y\re l^T_{n-1})$ has no new pre-cliques over $B$.
For $i\in 2$, let $W'_i$ be the set of those $w\in W'$ which have passing number $i$ at $x_*$.
Note that for each $w\in W'_1$,
the set $\{w\re l_X,x_*\}$
has no pre-$k$-clique, and since no new pre-cliques are added between the levels of $l_X$ and $l^T_{n-1}$,
the set $\{w,y_*\re l^T_{n-1}\}$ has no pre-$k$-clique.
Since $T$ satisfies the $K_k$-Free Branching Criterion,
each $w\in W'_1$ can be extended to a node $y\in T\re l_Y$ with passing number $1$ at $y_*$.
Extend each node in $W'_0$ leftmost in $T$ to length $l_Y$.
Let $Y'=W'_0\cup W'_1$.
Then $Y'$ end-extends $W'$ which end-extends $X'$,
and each $y\in Y'$ has the same passing number at $y_*$ as it does at $x_*$.
We claim that $Y\cup Y'$ has no new pre-cliques over $B$.
Suppose towards a contradiction that $Z\sse Y\cup Y'$ is a new pre-$a$-clique above $B$, for some $a\in [3,k]$.
Since $Z\re l^T_{n-1}$ has no new pre-cliques over $B$,
this new pre-$a$-clique must take place at some level
$l\in (l^T_{n-1}, l_Y]$.
Since $T$ has the Strong Witnessing Property, $l$ must be in the interval $(|d|,l_Y]$, where $d$ is the longest splitting node in $T$ of length less than $l_Y$.
If $a=3$, then $y_*$ witnesses the pre-$3$-clique $Z$.
But then $Z\re l_X$ must also be a pre-$3$-clique, since the passing numbers at $y_*$ are the same as at $x_*$, and $x_*$ witnesses the pre-$3$-clique $Z\re l_X$.
Hence, $Z$ is not new over $B$.
Now suppose that $a\ge 4$.
Then $Z\cup \{x_*\}$ has a pre-$(a-1)$-clique at some level $l'<l$.
Since $T$ has the Strong Witnessing Property, $Z\cup\{x_*\}$ can have at most one new pre-clique in the interval $(|d|,l_Y]$,
and $T$ has no new pre-cliques in the interval $(l^T_{n-1},|d|]$.
Thus, it must be that $l'\le l_X$.
Therefore, the minimal level of a pre-$(a-1)$-clique in $Z\cup\{x_*\}$ occurs at some level in $B$.
Since $B$ has the Strong Witnessing Property, this pre-clique is witnessed in $B$.
Since $y_*\contains x_*$ and each $z\in Z$ has the same passing number at $y_*$ as at $x_*$,
$x_*$ cannot be a witness of the
pre-$(a-1)$-clique in $Z\cup\{x_*\}$.
Therefore, $Z\cup\{x_*\}$ must be witnessed in $r_m(B)$, say by coding nodes $\{c^B_{i_j}:j<a-3\}$, where $i_{a-4}<m$.
But then $\{x_*\}\cup \{c^B_{i_j}:j<a-3\}$ witnesses the pre-$a$-clique $Z$.
Hence, $Z$ is not new over $B$.
Now we will show that $Y\cup Y'$ has no new pre-cliques over $r_m(B)$.
Suppose $Z\sse Y\cup Y'$ has a pre-$a$-clique, for some $a\in[3,k]$.
Since this pre-$a$-clique is not new over $B$, there is some $l\le l_X$ where $Z\re l$ is a new pre-$a$-clique in $B$.
Since $B$ has the Strong Witnessing Property, there are some coding nodes $c_{i_0}^B,\dots, c_{i_{a-3}}^B$ in $B$ witnessing $Z\re l$.
If $i_{a-3}<m$, then these witnesses are in $r_m(B)$.
Now suppose that $i_{a-3}=m$.
Note that $y_*\contains x_*=c^B_m$.
Thus, $\{y_*\}\cup\{c^B_{i_j}:j<a-3\}$ forms a
pre-$(a-1)$-clique which witnesses
$Z$.
Therefore, $r_m(B)\cup Y'\cup Y$ has the Strong Witnessing Property.
Since it is strongly similar to $B$,
$r_m(B)\cup Y'\cup Y$ is a member of $r_{m+1}[r_m(B),T]$ by Lemma \ref{lem.need}.
\end{proof}
\begin{rem}
As was remarked for $\mathcal{T}_3$ in \cite{DobrinenJML20},
each
space $(\mathcal{T}_k,\le, r)$, $k\ge 3$,
satisfies Axioms \bf A.1\rm, \bf A.2\rm, and \bf A.3(1)\rm\ of Todorcevic's axioms guaranteeing a topological Ramsey space (Chapter 5 of \cite{TodorcevicBK10}); this is routine to check.
However,
Axiom
\bf A.3(2) \rm does not hold.
The pigeonhole principle, Axiom \bf A.4\rm, holds exactly when
the finite subtree is valid inside the given strong coding tree; this will follow from theorems in Sections \ref{sec.5} and \ref{sec.1SPOC}.
\end{rem}
\section{Halpern-\Lauchli-style Theorems for strong coding trees}\label{sec.5}
The
Ramsey theory content for strong coding trees
begins in this section.
The ultimate goal
is to obtain Theorem \ref{thm.mainRamsey}.
This is a
Ramsey theorem for colorings of strictly similar (Definition \ref{defn.ssimtype}) copies of any given finite antichain of coding nodes, as these are the structures which will code finite triangle-free graphs.
Phase II of the paper takes place in this and the next section.
Theorem \ref{thm.matrixHL} is a Halpern-\Lauchli-style theorem for colorings of level sets extending a finite valid subtree of some strong coding tree.
Its proof begins with Harrington's forcing proof of Theorem \ref{thm.HL} as a rough template, but involves new forcings and new arguments necessitated by the $k$-clique-free
nature of Henson graphs.
This is a major step toward proving a Milliken-style theorem for strong coding trees, but it is not enough: in the case when there is a coding node in the level sets being colored,
Theorem \ref{thm.matrixHL} proves
homogeneity on level sets extending some fixed set of nodes, but does not obtain homogeneity overall.
We will prove a third Halpern-\Lauchli-style theorem
in Section \ref{sec.1SPOC}, Lemma \ref{lem.Case(c)}, which involves ideas that are, to our knowledge, unprecedented; it will use a third type of forcing.
Then, using repeated inductive applications of Theorem \ref{thm.matrixHL}
and
Lemma \ref{lem.Case(c)},
we will prove the Main Ramsey Theorem for Strictly Witnessed (see Definition \ref {defn.SWP}) finite subtrees of a given strong coding tree.
This Theorem
\ref{thm.MillikenSWP}
is the main theorem of Phase II of the paper.
Theorem \ref{thm.matrixHL} encompasses colorings of two different types of level set extensions of a fixed finite tree: The level set either contains a splitting node (Case (a)) or a coding node (Case (b)).
In Case (a), we obtain a direct analogue of the
Halpern-\Lauchli\ Theorem.
In Case (b), we obtain a weaker version of the
Halpern-\Lauchli\ Theorem, which is later strengthened to
the direct analogue in Lemma \ref{lem.Case(c)}.
The proof given here essentially follows the outline
of the proof in \cite{DobrinenJML20}, but our argument is now more streamlined, due to having proved more general extension lemmas
in Section \ref{sec.4}.
Let $k\ge 3$ be fixed, and
fix the following terminology and notation.
Given subtrees $U,V$ of $\bT_k$ with $U$ finite,
we write $U\sqsubseteq V$ if and only if
$U=\{v\in V:|v|\le l_U\}$;
in this case we say that $V$ {\em extends} $U$, or that $U$ is an {\em initial subtree of} $V$.
We write $U\sqsubset V$ if $U$ is a proper initial subtree of $V$.
Recall the following notation from Definition \ref{defn.T_pspace} of the space $(\mathcal{T}_k,\le, r)$:
$S\le T$ means that $S$ and $T$ are members of
$\mathcal{T}_k$ and $S$ is a subtree of $T$.
Given $A\in\mathcal{AR}_m$ for some $m$,
$[A,T]$ denotes the set of all $S\le T$ such that $S$ extends $A$.
We now begin setting up for the two possible cases before stating the theorem.
\vskip.1in
\noindent\underline{\bf{The Set-up for Theorem \ref{thm.matrixHL}.}}
Let
$T\in\mathcal{T}_k$ be given,
and
let
$A$ be a finite valid subtree of $T$ with the Witnessing Property.
$A$ is allowed to have terminal nodes at different levels.
In order to simplify notation in the proof, without loss of generality, we
assume that $0^{(l_A)}$ is in $A$.
Let $A^+$ denote the set of immediate extensions in $\widehat{T}$ of the members of $\max(A)$; thus,
\begin{equation}
A^+=\{s^{\frown}i : s\in \max(A),\ i\in\{0,1\},\mathrm{\ and\ } s^{\frown}i\in \widehat{T}\}.
\end{equation}
Note that $A^+$ is a level set of nodes of length $l_A+1$.
Let $A_e$ be a subset of $A^+$
containing $0^{(l_A+1)}$ and of size at least two.
(If $A^+$ has only one member, then
$A$ consists of one non-splitting node of the form $0^{(l)}$ for some $l$, and
the theorem in this section does not apply.)
Suppose that $\tilde{X}$ is a level set of nodes in $T$ extending $A_e$ so that $A\cup\tilde{X}$
is a finite valid subtree of $T$ satisfying the Witnessing Property.
Assume moreover that $0^{(l_{\tilde{X}})}$ is a member of $\tilde{X}$, so that the node $0^{(l_A+1)}$ in $A_e$ is extended by $0^{(l_{\tilde{X}})}$ in $\tilde{X}$.
There are two possibilities:
\begin{enumerate}
\item[]
\begin{enumerate}
\item[\bf{Case (a).}]
$\tilde{X}$ contains a splitting node.
\end{enumerate}
\end{enumerate}
\begin{enumerate}
\item[]
\begin{enumerate}
\item[\bf{Case (b).}]
$\tilde{X}$ contains a coding node.
\end{enumerate}
\end{enumerate}
In both cases,
define
\begin{align}\label{eq.ExtTAC}
\Ext_T(A,\tilde{X})= \{X\sse T: \ &
X\sqsupseteq \tilde{X} \mathrm{\ is\ a\ level\ set}, \
A\cup X\cong A\cup\tilde{X}, \cr
&\mathrm{\ and\ } A\cup X \mathrm{\ is\ valid\ in\ } T\}.
\end{align}
The next lemma
shows that seemingly weaker properties suffice to guarantee that a level set is in $\Ext_T(A,
\tilde{X})$.
\begin{lem}\label{lem.alternate}
Let $X$ be a level set in $T$ extending $\tilde{X}$. Then
$X\in\Ext_T(A,\tilde{X})$ if and only if
$X$ is free in $T$,
$A\cup X$ is strongly similar to $A\cup \tilde{X}$, and
$X$ has no new pre-cliques over $A\cup \tilde{X}$.
Moreover, $X\in\Ext_T(A,\tilde{X})$ implies that $A\cup X$ has the Witnessing Property.
\end{lem}
\begin{proof}
The forward direction follows from the definition of
$\Ext_T(A,\tilde{X})$.
If $X\in \Ext_T(A,\tilde{X})$, then there is
a strong isomorphism, say
$f:A\cup \tilde{X}\ra A\cup X$.
Then
$f$ is a strong similarity map and moreover,
$f$ takes $\tilde{X}$ to $X$.
Since $X$ extends $\tilde{X}$ and $f$ maps the new pre-cliques of $\tilde{X}$ over $A$ to the new pre-cliques of $X$ over $A$,
$X$ must have no new pre-cliques over $A\cup \tilde{X}$.
Note that $X$ is free in $T$, since $A\cup X$ is valid in $T$.
Now suppose that
$X\sqsupseteq \tilde{X}$ is as in the second part of the statement.
Since $X$ is free in $T$,
$A\cup X$ is valid in $T$.
$A\cup X$ being strongly similar to $A\cup \tilde{X}$
implies that the strong similarity map $g:A\cup\tilde{X}\ra A\cup X$ takes $\tilde{X}$ to $X$.
Since $X$ has no new pre-cliques over $A\cup\tilde{X}$,
any new pre-cliques in $X$ over $A$ are already in $A\cup \tilde{X}$ and hence witnessed by coding nodes in $A$, possibly along with the coding node $c_*$ in $\tilde{X}$ (in Case (b)).
If this is the case, then the coding node $g(c_*)$ in $X$ along with those same coding nodes in $A$ witnesses the new pre-clique in $X$ over $A$. Therefore, $g$ is a strong isomorphism.
It follows that since $A\cup\tilde{X}$ has the Witnessing Property, so does $A\cup X$.
Also note that if moreover $A\cup\tilde{X}$ has the Strong Witnessing Property, then $A\cup X$ does as well.
\end{proof}
In the following, for a finite subtree $A$ of some $T\in\mathcal{T}_k$, recall that $\max(A)$ denotes the set $\{t\in A:|t|=l_A\}$,
the set of all nodes in $A$ of the maximum length, and
$A^+$ denotes the set of all immediate successors of $\max(A)$ in $\widehat{T}$.
We now prove the analogue of the Halpern-\Lauchli\ Theorem for strong coding trees.
\begin{thm}\label{thm.matrixHL}
Fix $T\in\mathcal{T}_k$ and $B$ a finite valid subtree of $T$ such that $B\in \mathcal{AT}_m$, for some $m\ge 1$.
Let $A$ be a subtree of $B$ with $l_A=l_B$ and $0^{(l_A)}\in A$
such that
$A$ has the Witnessing Property and is valid in $T$.
Let $A_e$ be a subset of $A^+$ of size at least two such that
$0^{(l_A+1)}$ is in $A_e$.
Let $\tilde{X}$ be a level set in $T$ end-extending $A_e$
with at least two members, one of which is the node $0^{(l_{\tilde{X}})}$,
such that
$A\cup\tilde{X}$ is a finite valid subtree of $T$
with the Witnessing Property.
Given any coloring
$h:\Ext_T(A,\tilde{X})\ra 2$,
there is a strong coding tree $S\in [B,T]$ such that
$h$ is monochromatic on $\Ext_S(A,\tilde{X})$.
If $\tilde{X}$ has a coding node, then
the strong coding tree $S$ is, moreover, taken to be in $[r_{m_0-1}(B'),T]$,
where
$m_0$ is the integer
for which there is a $B'\in r_{m_0}[B,T]$ with
$\tilde{X}\sse\max(B')$.
\end{thm}
\begin{proof}
Let $T,A,A_e,B,\tilde{X}$ be given satisfying the hypotheses,
and
let $h$ be a coloring of the members of $\Ext_T(A,\tilde{X})$ into two colors, $\{0,1\}$.
Fix the following notation:
Let $d+1$ equal the number of nodes in $\tilde{X}$,
and enumerate the nodes in $\tilde{X}$ as
$s_0,\dots, s_d$
so that $s_d$ is the critical node in $\tilde{X}$.
Let $i_0$ denote the integer such that
$s_{i_0}$ is the node which is a sequence of $0$'s.
Notice that $i_0$ can equal $d$ only if we are in Case (a) and
the splitting node in $\tilde{X}$ is a sequence of $0$'s.
In Case (b), let $I_{0}$ denote the set of all $i<d$ such that $s^+_i(l_{\tilde{X}})=0$
and let
$I_{1}$ denote the set of all $i<d$ such that $s^+_i(l_{\tilde{X}})=1$.
Let $L$ denote the collection of all $l\in\bN$ such that there is a member of
$\Ext_T(A,\tilde{X})$ with nodes of length $l$.
In Case (a),
since $B$ is valid in $T$,
$L$ consists of those $l\in\bN$ for which
there is a splitting node of length $l$ extending $s_d$,
and $L$ is infinite by Lemma \ref{lem.poc}.
In Case (b),
since $\tilde{X}$ contains a coding node, it follows from the proof of Lemma \ref{lem.pnc} that
$L$ is exactly the set of all $l\in\bN$ for which
there is a coding node of length $l$ extending $s_d$.
For each
$i\in (d+1)\setminus\{i_0\}$, let $T_i=\{t\in T:t\contains s_i\}$.
Let $\Seq[0]$ denote the set of all sequences of $0$'s of finite length.
Let $T_{i_0}=\{t\in T:t\contains s_{i_0}$ and $t\in \Seq[0]\}$, the collection of all leftmost nodes in $T$ extending $s_{i_0}$.
Let $\kappa=\beth_{2d}$.
The following forcing notion $\bP$ adds $\kappa$ many paths through $T_i$, for each $i\in d\setminus\{i_0\}$,
and one path through $T_d$.
If $i_0\ne d$, then $\bP$ will add one
path through $T_{i_0}$, but with $\kappa$ many ordinals labeling this path.
We allow this in order to simplify notation.
\vskip.1in
$\bP$ is the set of conditions $p$ such that
$p$ is a function
of the form
$$
p:(d\times\vec{\delta}_p)\cup\{d\}\ra T\re l_p,
$$
where $\vec{\delta}_p$ is a finite subset of $\kappa$,
$l_p\in L$,
$\{p(i,\delta) : \delta\in \vec{\delta}_p\}\sse T_i\re l_p$ for each $i<d$, and the following hold:
\vskip.1in
\noindent \underline{Case (a)}.
(i) $p(d)$ is {\em the} splitting node extending $s_d$ of length $l_p$;
\begin{enumerate}
\item [(ii)]
$\{p(i,\delta):(i,\delta)\in d\times\vec{\delta}_p\}\cup \{p(d)\}$ is free in $T$.
\end{enumerate}
\vskip.1in
\noindent \underline{Case (b)}. (i)
$p(d)$ is {\em the} coding node extending $s_d$ of length $l_p$;
\begin{enumerate}
\item [(ii)]
For each $\delta\in\vec{\delta}_p$,
$j\in \{0,1\}$,
and $i\in I_j$, the passing number of $p(i,\delta)$ at $p(d)$ is $j$.
\end{enumerate}
\vskip.1in
Given $p\in\bP$,
the {\em range of $p$} is defined as
$$
\ran(p)=\{p(i,\delta):(i,\delta)\in d\times \vec\delta_p\}\cup \{p(d)\}.
$$
If also $q\in \bP$ and $\vec{\delta}_p\sse \vec{\delta}_q$, then we let $\ran(q\re \vec{\delta}_p)$ denote
$\{q(i,\delta):(i,\delta)\in d\times \vec{\delta}_p\}\cup \{q(d)\}$.
In both Cases (a) and (b), the partial
ordering on $\bP$ is defined as follows:
$q\le p$ if and only if
$l_q\ge l_p$, $\vec{\delta}_q\contains \vec{\delta}_p$, and the following hold:
\begin{enumerate}
\item[(i)]
$q(d)\contains p(d)$,
and
$q(i,\delta)\contains p(i,\delta)$ for each $(i,\delta)\in d\times \vec{\delta}_p$, and
\item[(ii)]
$\ran(q\re\vec{\delta}_p)$ has no new
pre-cliques over $\ran(p)$.
\end{enumerate}
Since all conditions in $\bP$ have ranges which are free in $T$,
we shall say that {\em $q$ is valid over $p$}
to mean that (ii) holds.
The theorem will be proved in two main parts.
In Part I,
we check that $\bP$ is an atomless partial order and then prove the main Lemma \ref{lem.compat}.
In Part II, we apply Lemma \ref{lem.compat} to build
the tree $S$
such that
$h$ is monochromatic on
$\Ext_S(A,\tilde{X})$.
\vskip.1in
\noindent \underline{\bf {Part I.}}
\begin{lem}\label{lem.atomlesspo}
$(\bP,\le)$ is an atomless partial ordering.
\end{lem}
\begin{proof}
The order $\le$ on $\bP$ is clearly reflexive and antisymmetric.
Transitivity follows from the fact that the requirement (ii) in the definition of the partial order on $\bP$ is a transitive property.
To see this, suppose that
$p\ge q$ and $q\ge r$.
Then $\vec{\delta}_p\sse\vec{\delta}_q\sse \vec{\delta}_r$, $l_p\le l_q\le l_r$,
$r$ is valid over $q$, and $q$ is valid over $p$.
Since
$\ran(r\re \vec{\delta}_p)$ is contained in $\ran(r\re \vec{\delta}_q)$ which
has no new pre-cliques over $\ran(q)$,
it follows that
$\ran(r\re \vec{\delta}_p)$ has no new pre-cliques over $\ran(q\re \vec{\delta}_p)$.
Since
$\ran(q\re \vec{\delta}_p)$ has no new pre-cliques over $\ran(p)$,
it follows that $\ran(r\re \vec{\delta}_p)$
has no new pre-cliques over $\ran(p)$.
Therefore, $r$ is valid over $p$, so
$p\ge r$.
\begin{claim}\label{claim.densehigh}
For each $p\in\bP$ and $l>l_p$, there
are $q,r\in\bP$ with $l_q,l_r> l$ such that $q,r<p$ and $q$ and $r$ are incompatible.
\end{claim}
\begin{proof}
Let $p\in \bP$ and $l>l_p$ be given, and
let
$\vec{\delta}$ denote $\vec{\delta}_p$ and let
$\vec{\delta}_r=\vec{\delta}_q=\vec{\delta}$.
In Case (a),
take $q(d)$ and $r(d)$ to be incomparable splitting nodes in $T$ extending $p(d)$ to some lengths greater than $l$.
Such splitting nodes exist by Lemma \ref{lem.perfect},
showing that strong coding trees are perfect.
Let $l_q=|q(d)|$ and $l_r=|r(d)|$.
For each $(i,\delta)\in d\times\vec{\delta}$,
let $q(i,\delta)$ be the leftmost extension in $T$ of $p(i,\delta)$ to length $l_q$,
and let
$r(i,\delta)$ be the leftmost extension of $p(i,\delta)$ to length $l_r$.
Then $q$ and $r$ are members of $\bP$.
Since $\ran(p)$ is free in $T$,
both $\ran(q)$ and $\ran(r)$ are free in $T$ and
$\ran(q\re \vec{\delta}_p)$ and $\ran(r\re \vec{\delta}_p)$
have no new pre-cliques over $\ran(p)$, by Lemma \ref{lem.poc}.
It follows
that $q$ and $r$ are both valid over $p$.
Since neither of $q(d)$ and $r(d)$ extends the other,
$q$ and $r$ are incompatible.
In Case (b),
let $s$ be a splitting node in $T$ of length greater than $l$ extending $p(d)$.
Let $n$ be minimal such that $|c^T_n|\ge |s|$.
Let $u,v$ extend $s^{\frown}0,s^{\frown}1$, respectively, leftmost in $T\re l^T_n$.
For each $(i,\delta)\in d\times\vec\delta_p$,
let $p'(i,\delta)$ be the leftmost extension
of $p(i,\delta)$
in $T\re l^T_n$.
By Lemma \ref{lem.pnc},
there are $q(d)\contains u$ and $q(i,\delta)\contains p'(i,\delta)$, $(i,\delta)\in d\times \vec{\delta}_p$,
such that
\begin{enumerate}
\item
$q(d)$ is a coding node;
\item
$q$ is valid over $p$;
\item
For each $j<2$ and each $(i,\delta)\in d\times\vec{\delta}_p$,
$i\in I_j$ if and only if the passing number of $q(i,\delta)$ at $q(d)$ is $j$.
\end{enumerate}
Then $q\in \bP$ and $q\le p$.
Likewise by Lemma \ref{lem.pnc},
there is a condition $r\in\bP$ which extends
$\{p'(i,\delta): (i,\delta)\in d\times \vec{\delta}_p\}\cup\{v\}$ such that
$r\le p$.
Since the coding nodes $q(d)$ and $r(d)$ are incomparable, $q$ and $r$ are incompatible conditions in $\bP$.
\end{proof}
It follows from Claim \ref{claim.densehigh} that $\bP$ is atomless.
\end{proof}
From now on, whenever
ambiguity will not arise by doing so, for a condition $p\in\bP$,
we will use the terminology
{\em critical node} of $p$ to
refer to $p(d)$, which is
a splitting node in Case (a) and a coding node in Case (b).
Let $\dot{b}_d$ be a $\bP$-name for the generic path through $T_d$;
that is, $\dot{b}_d=\{\lgl p(d),p\rgl:p\in\bP\}$.
Note that for each $p\in \bP$, $p$ forces that $\dot{b}_d\re l_p= p(d)$.
By Claim \ref{claim.densehigh}, it is dense to force a critical node in $\dot{b}_d$ above any given level in $T$, so $\mathbf{1}_{\bP}$ forces that the set of levels of critical nodes in $\dot{b}_d$ is infinite.
Thus, given any generic filter $G$ for $\bP$, $\dot{b}_d^G=\{p(d):p\in G\}$ is a cofinal path of critical nodes in $T_d$.
Let $\dot{L}_d$ be a $\bP$-name for the set of lengths of critical nodes in $\dot{b}_d$.
Note that $\mathbf{1}_{\bP}\forces \dot{L}_d\sse L$.
Let $\dot{\mathcal{U}}$ be a $\bP$-name for a non-principal ultrafilter on $\dot{L}_d$.
For $i<d$ and $\al<\kappa$, let $\dot{b}_{i,\al}$ be a $\bP$-name for the $\al$-th generic branch through $T_i$;
that is, $\dot{b}_{i,\al}=\{\lgl p(i,\al),p\rgl:p\in \bP$ and $\al\in\vec{\delta}_p\}$.
Then
for any $p\in \bP$,
\begin{equation}
p\forces (\forall i<d\ \forall \al\in \vec\delta_p\, (\dot{b}_{i,\al}\re l_p= p(i,\al)) )\wedge
( \dot{b}_d\re l_p=p(d)).
\end{equation}
For $j\in\bN$, we let $[\kappa]^j$ denote the collection of all $j$-element subsets of $\kappa$.
We shall write sets $\{\al_i:i< d\}$ in $[\kappa]^d$ as vectors $\vec{\al}=\lgl \al_0,\dots,\al_{d-1}\rgl$ in strictly increasing order.
For $\vec{\al}\in[\kappa]^d$,
we use the following abbreviation:
\begin{equation}
\dot{b}_{\vec{\al}}\mathrm{\ \ denotes \ \ }
\lgl \dot{b}_{0,\al_0},\dots, \dot{b}_{d-1,\al_{d-1}},\dot{b}_d\rgl.
\end{equation}
Since the branch $\dot{b}_d$ is unique, this abbreviation introduces no ambiguity.
For any $l<\om$,
\begin{equation}
\mathrm{\ let\ \ }\dot{b}_{\vec\al}\re l
\mathrm{\ \ denote \ \ }
\lgl \dot{b}_{0,\al_0}\re l,\dots, \dot{b}_{d-1,\al_{d-1}}\re l,\dot{b}_d\re l\rgl.
\end{equation}
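For instance, purely to fix the notation: when $d=2$ and $\vec{\al}=\lgl \al_0,\al_1\rgl$,
\begin{equation*}
\dot{b}_{\vec\al}\re l \mathrm{\ \ denotes\ \ } \lgl \dot{b}_{0,\al_0}\re l,\ \dot{b}_{1,\al_1}\re l,\ \dot{b}_d\re l\rgl,
\end{equation*}
a level set of $d+1=3$ nodes, each of length $l$.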
Using the abbreviations just defined,
$h$ is a coloring on sets of nodes of the form $\dot{b}_{\vec\al}\re l$
whenever this is
forced to be a member of $\Ext_T(A,\tilde{X})$.
Given $\vec{\al}\in [\kappa]^d$ and a condition $p\in \bP$ with $\vec\al\sse\vec{\delta}_p$,
let
\begin{equation}
X(p,\vec{\al})=\{p(i,\al_i):i<d\}\cup\{p(d)\}.
\end{equation}
We now set up to prove
Lemma \ref{lem.compat}.
For each $\vec\al\in[\kappa]^d$,
choose a condition $p_{\vec{\al}}\in\bP$ such that
\begin{enumerate}
\item
$\vec{\al}\sse\vec{\delta}_{p_{\vec\al}}$.
\item
$X(p_{\vec\al},\vec{\al})\in\Ext_T(A,\tilde{X})$.
\item
There is an $\varepsilon_{\vec{\al}}\in 2$
such that
$p_{\vec{\al}}\forces$
``$h(\dot{b}_{\vec{\al}}\re l)=\varepsilon_{\vec{\al}}$
for $\dot{\mathcal{U}}$ many $l$ in $\dot{L}_d$''.
\item
$h(X(p_{\vec\al},\vec{\al}))=\varepsilon_{\vec{\al}}$.
\end{enumerate}
Properties (1)--(4) can be guaranteed as follows.
Recall that $\{s_i:i\le d\}$ enumerates $\tilde{X}$ and that $s_d$ is the critical node in $\tilde{X}$.
For each $\vec{\al}\in[\kappa]^d$, define
$$
p^0_{\vec{\al}}=\{\lgl (i,\delta), t_i\rgl: i< d, \ \delta\in\vec{\al} \}\cup\{\lgl d,t_d\rgl\}.
$$
Then $p^0_{\vec{\al}}$ is a condition in $\bP$ with
$\ran(p^0_{\vec{\al}})=\tilde{X}$, and
$\vec\delta_{p_{\vec\al}^0}= \vec\al$, which implies that (1) holds for any $p\le p^0_{\vec{\al}}$.
The following fact will be used many times.
\begin{claim}\label{claim.extensiongood}
Given $\vec{\al}\in[\kappa]^d$,
for any $p\le p_{\vec\al}^0$, the set of nodes
$X(p,\vec{\al})$
is a member of
$\Ext_T(A,\tilde{X})$.
\end{claim}
\begin{proof}
Suppose $p\le p_{\vec\al}^0$.
Then
$p$ is valid over $ p_{\vec\al}^0$, so
$X(p,\vec{\al})$ has no new pre-cliques over $\tilde{X}$.
Since $p$ is a condition of $\bP$, $X(p,\vec{\al})$ is free in $T$ and
$A\cup X(p,\vec{\al})$ is strongly similar to $A\cup\tilde{X}$.
It follows from Lemma \ref{lem.alternate}
that $X(p,\vec{\al})$ is in
$\Ext_T(A,\tilde{X})$.
\end{proof}
Thus, (2) holds for any $p\le p_{\vec\al}^0$.
Take an extension $p^1_{\vec{\al}}\le p^0_{\vec{\al}}$ which
forces $h(\dot{b}_{\vec{\al}}\re l)$ to be the same value for
$\dot{\mathcal{U}}$ many $l\in \dot{L}_d$.
Since $\bP$ is a forcing notion, there is a $p^2_{\vec{\al}}\le p_{\vec{\al}}^1$ deciding a value $\varepsilon_{\vec{\al}}$ for which $p^2_{\vec{\al}}$ forces that $h(\dot{b}_{\vec{\al}}\re l)=\varepsilon_{\vec{\al}}$
for $\dot{\mathcal{U}}$ many $l$ in $\dot{L}_d$.
Then (3) holds for any $p\le p_{\vec\al}^2$.
If $ p_{\vec\al}^2$ satisfies (4), then let $p_{\vec\al}=p_{\vec\al}^2$.
Otherwise,
take some $p^3_{\vec\al}\le p^2_{\vec\al}$
which decides
some $l\in\dot{L}_d$
such that
$l_{p^2_{\vec\al}}< l_n^T< l\le l_{p^3_{\vec\al}}$
for some $n$,
and such that $p^3_{\vec\al}$ forces
$h(\dot{b}_{\vec\al}\re l)=\varepsilon_{\vec\al}$.
Since $p^3_{\vec\al}$ forces ``$\dot{b}_{\vec\al}\re l=
\{p^3_{\vec\al}(i,\al_i)\re l:i<d\} \cup\{p^3_{\vec\al}(d)\re l\}$'' and $h$ is defined in the ground model,
this means that $p^3_{\vec\al}(d)\re l$ is a splitting node in Case (a) and a coding node in Case (b), and
\begin{equation}\label{eq.hrest}
h(X(p^3_{\vec\al},\vec\al)\re l)
=\varepsilon_{\vec\al},
\end{equation}
where
$X(p^3_{\vec\al},\vec\al)\re l$ denotes
$\{p^3_{\vec\al}(i,\al_i)\re l:i<d\} \cup\{p^3_{\vec\al}(d)\re l\}$.
If $l=l_{p^3_{\vec\al}}$, let $p_{\vec\al}=p_{\vec\al}^3$, and note that $p_{\vec\al}$ satisfies (1)--(4).
Otherwise, $l<l_{p^3_{\vec\al}}$.
In Case (a), let
$p_{\vec\al}$ be defined as follows:
Let $\vec\delta_{\vec\al}=\vec\delta_{p_{\vec\al}^2}$ and
\begin{equation}
\forall (i,\delta)\in d\times\vec\delta_{\vec\al}, \mathrm{\ let\ }
p_{\vec\al}(i,\delta)=p^3_{\vec\al}(i,\delta)\re l\mathrm{\ \ and\ let\ }
p_{\vec\al}(d)=p^3_{\vec\al}(d)\re l.
\end{equation}
Since $p^3_{\vec\al}$ is a condition in $\bP$,
$\ran(p^3_{\vec\al})$ is free in $T$.
Furthermore, $p^3_{\vec\al}\le p^2_{\vec\al}$
implies that $\ran(p_{\vec\al}^3\re \vec\delta_{p^2_{\vec\al}})$ has no new pre-cliques over $\ran(p^2_{\vec\al})$.
Therefore, leftmost extensions of $\ran(p_{\vec\al})$ have no new pre-cliques, so $\ran(p_{\vec\al})$ is free in $T$.
Therefore,
$p_{\vec\al}$ is a condition in $\bP$ and $p_{\vec\al}\le p_{\vec\al}^2$.
Thus, $p_{\vec\al}$ satisfies (1)--(3), and (4) holds by equation (\ref{eq.hrest}).
In Case (b),
we construct $p_{\vec\al}\le p^2_{\vec\al}$ as follows:
As in Case (a), let
$\vec{\delta}_{\vec\al}=\vec\delta_{p^2_{\vec\al}}$.
For each $i<d$, define
$p_{\vec\al}(i,\al_i)=p^3_{\vec\al}(i,\al_i)\re l$,
and let
$p_{\vec\al}(d)=p^3_{\vec\al}(d)\re l$.
Then $X(p_{\vec\al},\vec\al)=\{p^3_{\vec\al}(i,\al_i)\re l:i<d\}\cup\{p^3_{\vec\al}(d)\re l\}$,
so $h(X(p_{\vec\al},\vec\al))=\varepsilon_{\vec\al}$.
Let $U$ denote $X( p_{\vec\al}^2,\vec\al)$ and
let $U'=\ran (p_{\vec\al}^2)\setminus U$.
Let $X$ denote $X(p_{\vec\al},\vec\al)$ and note that $X$ end-extends $U$, and $X$ is free in $T$ and has no new pre-cliques over $U$.
By Lemma
\ref {lem.HLCasebtruncate},
there is an $X'$ end-extending $U'$ to nodes in $T\re l$ so that the following hold:
$X\cup X'$ is free in $T$ and has no new pre-cliques over $U\cup U'$;
furthermore, each node in $X'$ has the same passing number at $l$ as it does at $l_{p_{\vec\al}^2}$.
Let $\ran(p_{\vec\al})$ be this set of nodes $X\cup X'$,
where
for each pair $(i,\delta)\in d\times\vec{\delta}_{p_{\vec\al}^3}$ with $\delta\ne\al_i$,
we
let $p_{\vec\al}(i,\delta)$ be the node in $X'$ extending
$p_{\vec\al}^3(i,\delta)$.
This defines a condition $p_{\vec\al}\le p_{\vec\al}^2$
satisfying (1)--(4).
The rest of Part I follows by arguments in \cite{DobrinenJML20} for the case $k=3$, with no modifications.
It is included here for the reader's convenience.
We are assuming $\kappa=\beth_{2d}$ so that $\kappa\ra(\aleph_1)^{2d}_{\aleph_0}$, by the \Erdos-Rado Theorem (Theorem \ref{thm.ER}).
Given two sets of ordinals $J,K$ we shall write $J<K$ if every member of $J$ is less than every member of $K$.
Let $D_e=\{0,2,\dots,2d-2\}$ and $D_o=\{1,3,\dots,2d-1\}$, the sets of even and odd integers less than $2d$, respectively.
Let $\mathcal{I}$ denote the collection of all functions $\iota: 2d\ra 2d$ such that
$\iota\re D_e$
and $\iota\re D_o$ are strictly increasing sequences
and $\{\iota(0),\iota(1)\}<\{\iota(2),\iota(3)\}<\dots<\{\iota(2d-2),\iota(2d-1)\}$.
Thus, each $\iota$ codes two strictly increasing sequences $\iota\re D_e$ and $\iota\re D_o$, each of length $d$.
For $\vec{\theta}\in[\kappa]^{2d}$,
$\iota(\vec{\theta}\,)$ determines the pair of sequences of ordinals $(\theta_{\iota(0)},\theta_{\iota(2)},\dots,\theta_{\iota(2d-2)})$, $(\theta_{\iota(1)},\theta_{\iota(3)},\dots,\theta_{\iota(2d-1)})$,
both of which are members of $[\kappa]^d$.
Denote these as $\iota_e(\vec\theta\,)$ and $\iota_o(\vec\theta\,)$, respectively.
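For orientation, consider the case $d=2$: here $D_e=\{0,2\}$, $D_o=\{1,3\}$, and the identity function $\iota$ on $4$ is a member of $\mathcal{I}$, since $\iota\re D_e$ and $\iota\re D_o$ are strictly increasing and $\{0,1\}<\{2,3\}$. For $\vec\theta=\lgl\theta_0,\theta_1,\theta_2,\theta_3\rgl\in[\kappa]^{4}$, this $\iota$ yields
\begin{equation}
\iota_e(\vec\theta\,)=\lgl\theta_0,\theta_2\rgl
\quad\mathrm{and}\quad
\iota_o(\vec\theta\,)=\lgl\theta_1,\theta_3\rgl.
\end{equation}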
To ease notation, let $\vec{\delta}_{\vec\al}$ denote $\vec\delta_{p_{\vec\al}}$,
$k_{\vec{\al}}$ denote $|\vec{\delta}_{\vec\al}|$,
and let $l_{\vec{\al}}$ denote $l_{p_{\vec\al}}$.
Let $\lgl \delta_{\vec{\al}}(j):j<k_{\vec{\al}}\rgl$
denote the enumeration of $\vec{\delta}_{\vec\al}$
in increasing order.
Define a coloring $f$ on $[\kappa]^{2d}$ into countably many colors as follows:
Given $\vec\theta\in[\kappa]^{2d}$ and
$\iota\in\mathcal{I}$, to reduce the number of subscripts, letting
$\vec\al$ denote $\iota_e(\vec\theta\,)$ and $\vec\beta$ denote $\iota_o(\vec\theta\,)$,
define
\begin{align}\label{eq.fiotatheta}
f(\iota,\vec\theta\,)= \, &
\lgl \iota, \varepsilon_{\vec{\al}}, k_{\vec{\al}}, p_{\vec{\al}}(d),
\lgl \lgl p_{\vec{\al}}(i,\delta_{\vec{\al}}(j)):j<k_{\vec{\al}}\rgl:i< d\rgl,\cr
& \lgl \lgl i,j \rgl: i< d,\ j<k_{\vec{\al}},\ \mathrm{and\ } \delta_{\vec{\al}}(j)=\al_i \rgl, \cr
& \lgl \lgl j,k\rgl:j<k_{\vec{\al}},\ k<k_{\vec{\beta}},\ \delta_{\vec{\al}}(j)=\delta_{\vec{\beta}}(k)\rgl\rgl.
\end{align}
Let $f(\vec{\theta}\,)$ be the sequence $\lgl f(\iota,\vec\theta\,):\iota\in\mathcal{I}\rgl$, where $\mathcal{I}$ is given some fixed ordering.
Since the range of $f$ is countable,
apply the \Erdos-Rado Theorem
to obtain a subset $K\sse\kappa$ of cardinality $\aleph_1$
which is homogeneous for $f$.
Take $K'\sse K$ such that between each two members of $K'$ there is a member of $K$.
Take subsets $K_i\sse K'$ such that $K_0<\dots<K_{d-1}$
and each $|K_i|=\aleph_0$.
\begin{lem}\label{lem.onetypes}
There are $\varepsilon^*\in 2$, $k^*\in\om$, $t_d$,
and $ \lgl t_{i,j}: j<k^*\rgl$, $i< d$,
such that
for all $\vec{\al}\in \prod_{i<d}K_i$ and each $i< d$,
$\varepsilon_{\vec{\al}}=\varepsilon^*$,
$k_{\vec\al}=k^*$, $p_{\vec{\al}}(d)=t_d$, and
$\lgl p_{\vec\al}(i,\delta_{\vec\al}(j)):j<k_{\vec\al}\rgl
=
\lgl t_{i,j}: j<k^*\rgl$.
\end{lem}
\begin{proof}
Let $\iota$ be the member in $\mathcal{I}$
which is the identity function on $2d$.
For any pair $\vec{\al},\vec{\beta}\in \prod_{i<d}K_i$, there are $\vec\theta,\vec\theta'\in [K]^{2d}$
such that
$\vec\al=\iota_e(\vec\theta\,)$ and $\vec\beta=\iota_e(\vec\theta'\,)$.
Since $f(\iota,\vec\theta\,)=f(\iota,\vec\theta'\,)$,
it follows that $\varepsilon_{\vec\al}=\varepsilon_{\vec\beta}$, $k_{\vec{\al}}=k_{\vec{\beta}}$, $p_{\vec{\al}}(d)=p_{\vec{\beta}}(d)$,
and $\lgl \lgl p_{\vec{\al}}(i,\delta_{\vec{\al}}(j)):j<k_{\vec{\al}}\rgl:i< d\rgl
=
\lgl \lgl p_{\vec{\beta}}(i,\delta_{\vec{\beta}}(j)):j<k_{\vec{\beta}}\rgl:i< d\rgl$.
Thus, define $\varepsilon^*$, $k^*$, $t_d$, $\lgl \lgl t_{i,j}:j<k^*\rgl:i<d\rgl$ to be
$\varepsilon_{\vec\al}$, $k_{\vec\al}$,
$p_{\vec\al}(d)$,
$\lgl \lgl p_{\vec{\al}}(i,\delta_{\vec{\al}}(j)):j<k_{\vec{\al}}\rgl:i< d\rgl$
for any $\vec\al\in \prod_{i<d}K_i$.
\end{proof}
Let $l^*$ denote the length of $t_d$.
Then all the nodes $t_{i,j}$, $i< d$, $j<k^*$, also have length $l^*$.
\begin{lem}\label{lem.j=j'}
Given any $\vec\al,\vec\beta\in \prod_{i<d}K_i$,
if $j,k<k^*$ and $\delta_{\vec\al}(j)=\delta_{\vec\beta}(k)$,
then $j=k$.
\end{lem}
\begin{proof}
Let $\vec\al,\vec\beta$ be members of $\prod_{i<d}K_i$ and suppose that
$\delta_{\vec\al}(j)=\delta_{\vec\beta}(k)$ for some $j,k<k^*$.
For each $i<d$, let $\rho_i$ be the relation from among $\{<,=,>\}$ such that
$\al_i\,\rho_i\,\beta_i$.
Let $\iota$ be the member of $\mathcal{I}$ such that for each $\vec\theta\in[K]^{2d}$ and each $i<d$,
$\theta_{\iota(2i)}\ \rho_i \ \theta_{\iota(2i+1)}$.
Then there is a
$\vec\theta\in[K']^{2d}$ such that
$\iota_e(\vec\theta)=\vec\al$ and $\iota_o(\vec\theta)= \vec\beta$.
Since between any two members of $K'$ there is a member of $K$, there is a
$\vec\gamma\in[K]^{d}$ such that for each $i< d$,
$\al_i\,\rho_i\,\gamma_i$ and $\gamma_i\,\rho_i\, \beta_i$,
and furthermore, for each $i<d-1$,
$\{\al_i,\beta_i,\gamma_i\}<\{\al_{i+1},\beta_{i+1},\gamma_{i+1}\}$.
Given that $\al_i\,\rho_i\,\gamma_i$ and $\gamma_i\,\rho_i\, \beta_i$ for each $i<d$,
there are $\vec\mu,\vec\nu\in[K]^{2d}$ such that $\iota_e(\vec\mu)=\vec\al$,
$\iota_o(\vec\mu)=\vec\gamma$,
$\iota_e(\vec\nu)=\vec\gamma$, and $\iota_o(\vec\nu)=\vec\beta$.
Since $\delta_{\vec\al}(j)=\delta_{\vec\beta}(k)$,
the pair $\lgl j,k\rgl$ is in the last sequence in $f(\iota,\vec\theta)$.
Since $f(\iota,\vec\mu)=f(\iota,\vec\nu)=f(\iota,\vec\theta)$,
also $\lgl j,k\rgl$ is in the last sequence in $f(\iota,\vec\mu)$ and $f(\iota,\vec\nu)$.
It follows that $\delta_{\vec\al}(j)=\delta_{\vec\gamma}(k)$ and $\delta_{\vec\gamma}(j)=\delta_{\vec\beta}(k)$.
Hence, $\delta_{\vec\gamma}(j)=\delta_{\vec\gamma}(k)$,
and therefore $j$ must equal $k$.
\end{proof}
For any $\vec\al\in \prod_{i<d}K_i$ and any $\iota\in\mathcal{I}$, there is a $\vec\theta\in[K]^{2d}$ such that $\vec\al=\iota_o(\vec\theta)$.
By homogeneity of $f$ and by the first sequence in the second line of equation (\ref{eq.fiotatheta}), there is a strictly increasing sequence
$\lgl j_i:i< d\rgl$ of members of $k^*$ such that for each $\vec\al\in \prod_{i<d}K_i$,
$\delta_{\vec\al}(j_i)=\al_i$.
For each $i< d$, let $t^*_i$ denote $t_{i,j_i}$.
Then for each $i<d$ and each $\vec\al\in \prod_{i<d}K_i$,
\begin{equation}
p_{\vec\al}(i,\al_i)=p_{\vec{\al}}(i, \delta_{\vec\al}(j_i))=t_{i,j_i}=t^*_i.
\end{equation}
Let $t_d^*$ denote $t_d$.
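We point out a consequence of Lemma \ref{lem.onetypes} that will be used below: the conditions $p_{\vec\al}$, for $\vec\al\in\prod_{i<d}K_i$, all have the same range, namely
\begin{equation}
\ran(p_{\vec\al})=\{t_{i,j}:i<d,\ j<k^*\}\cup\{t_d\}.
\end{equation}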
\begin{lem}\label{lem.compat}
For any finite subset $\vec{J}\sse \prod_{i<d}K_i$,
the set of conditions $\{p_{\vec{\al}}:\vec{\al}\in \vec{J}\,\}$ is compatible.
Moreover,
$p_{\vec{J}}:=\bigcup\{p_{\vec{\al}}:\vec{\al}\in \vec{J}\,\}$
is a member of $\bP$ which is below each
$p_{\vec{\al}}$, $\vec\al\in\vec{J}$.
\end{lem}
\begin{proof}
For any $\vec\al,\vec\beta\in \prod_{i<d}K_i$,
whenever
$j,k<k^*$ and
$\delta_{\vec\al}(j)=\delta_{\vec\beta}(k)$, then $j=k$, by Lemma \ref{lem.j=j'}.
It then follows from Lemma \ref{lem.onetypes}
that for each $i<d$,
\begin{equation}
p_{\vec\al}(i,\delta_{\vec\al}(j))=t_{i,j}=p_{\vec\beta}(i,\delta_{\vec\beta}(j))
=p_{\vec\beta}(i,\delta_{\vec\beta}(k)).
\end{equation}
Thus, for each $\vec\al,\vec\beta\in\vec{J}$ and each
$\delta\in\vec{\delta}_{\vec\al}\cap
\vec{\delta}_{\vec\beta}$,
for all $i<d$,
\begin{equation}
p_{\vec\al}(i,\delta)=p_{\vec\beta}(i,\delta).
\end{equation}
Thus,
$p_{\vec{J}}:=
\bigcup \{p_{\vec{\al}}:\vec\al\in\vec{J}\}$
is a function.
Let $\vec\delta_{\vec{J}}=
\bigcup\{
\vec{\delta}_{\vec\al}:
\vec\al\in\vec{J}\,\}$.
For each $\delta\in
\vec{\delta}_{\vec{J}}$ and $i<d$,
$p_{\vec{J}}(i,\delta)$ is defined,
and it is exactly $p_{\vec\al}(i,\delta)$, for any $\vec\al\in\vec{J}$ such that $\delta\in \vec\delta_{\vec\al}$.
Thus, $p_{\vec{J}}$ is a member of $\bP$, and $p_{\vec{J}}\le p_{\vec\al}$ for each $\vec\al\in\vec{J}$.
\end{proof}
The final lemma of Part I will be used in the next section.
\begin{lem}\label{lem.subclaimA}
If $\beta\in \bigcup_{i<d}K_i$,
$\vec{\al}\in\prod_{i<d}K_i$,
and $\beta\not\in\vec\al$,
then
$\beta$ is not a member of $\vec{\delta}_{\vec{\al}}$.
\end{lem}
\begin{proof}
Suppose toward a contradiction that $\beta\in\vec{\delta}_{\vec{\al}}$.
Then there is a $j<k^*$ such that $\beta=\delta_{\vec{\al}}(j)$.
Let $i$ be such that $\beta\in K_i$.
Since $\beta\ne\al_i=\delta_{\vec{\al}}(j_i)$, it must be that $j\ne j_i$.
However,
letting $\vec\beta$ be any member of $\prod_{i<d}K_i$ with $\beta_i=\beta$,
then
$\beta=\delta_{\vec{\beta}}(j_i)=\delta_{\vec{\al}}(j)$, so Lemma \ref{lem.j=j'}
implies that $j_i=j$, a contradiction.
\end{proof}
\vskip.1in
\noindent \underline{\bf{Part II.}}
In this last part of the proof,
we build a strong coding tree $S$ valid in $T$ on which the coloring $h$ is homogeneous.
Cases (a) and (b) must be handled separately.
\vskip.1in
\noindent\underline{\bf{Part II Case (a).}}
Recall that $\{s_i:i\le d\}$ enumerates the members of $A_e$, which is a subset of $B^+$.
Let $m'$ be the integer such that
$B\in\mathcal{AT}_{m'}^k$.
Let $M=\{ m_j:j\in\bN\}$ be the strictly increasing enumeration of those $m> m'$
such that the splitting node in $\max(r_m(T))$ extends $s_d$.
We will find $U_{m_0}\in r_{m_0}[B,T]$
and in general, $U_{m_{j+1}}\in r_{m_{j+1}}[U_{m_j},T]$
so that for each $j\in\bN$,
$h$ takes color $\varepsilon^*$ on $\Ext_{U_{m_j}}(A,\tilde{X})$.
Then setting $S=\bigcup_{j\in\bN} U_{m_j}$ will yield $S$ to be a member of $[B,T]$ for which $\Ext_S(A,\tilde{X})$ is homogeneous for $h$, with color $\varepsilon^*$.
First extend each node in $B^+$ to level $l^*$ as follows.
The set $\{t^*_i:i\le d\}$ end-extends
$A_e$,
has no new pre-cliques
over $A_e$, and is free in $T$.
For each node $u$ in $B^+\setminus A_e$, let $u^*$ denote
its leftmost extension in $T\re l^*$.
Then the set
\begin{equation}
U^*=\{t^*_i:i\le d\}\cup\{u^*:u\in B^+\setminus A_e\}
\end{equation}
end-extends $B^+$, is free in $T$, and has no new pre-cliques over $B$, by
Lemma \ref{lem.poc}.
Thus,
$U^*$ is free in $T$, and
$B\cup U^*$
satisfies the Witnessing Property
so is valid in $T$.
If $m_0=m'+1$,
then $B\cup U^*$ is a member of $r_{m_0}[B,T]$, by Lemma \ref{lem.HLconstruction}.
In this case, let $U_{m'+1}=B\cup U^*$ and
extend $U_{m'+1}$ to a member $U_{m_1-1}\in r_{m_1-1}[U_{m'+1},T]$, using Theorem \ref{thm.GOODnonempty}.
If $m_0>m'+1$,
apply Lemma \ref{lem.HLconstruction} and Theorem \ref{thm.GOODnonempty} to
extend above $U^*$ to construct
a member $U_{m_0-1}\in r_{m_0-1}[B,T]$ which is valid in $T$.
In this case, note that $\max(r_{m'+1}(U_{m_0-1}))$ is not
$U^*$, but rather
end-extends $U^*$.
Assume $j<\om$ and
we have constructed $U_{m_j-1}$, valid in $T$, so that every member of $\Ext_{U_{m_j-1}}(A,\tilde{X})$ is colored $\varepsilon^*$ by $h$.
Fix some $C\in r_{m_j}[U_{m_j -1} ,T]$ with $C$ valid in $T$, and let $Z=\max(C)$.
The nodes in $Z$ will not be in the tree $S$ we are constructing;
rather,
we will extend the nodes in $Z$ to construct
$U_{m_j}\in r_{m_j}[U_{m_j-1},T]$.
We now start to construct a condition $q$ which will satisfy
Lemma \ref{lem.qbelowpal}, below.
Let $q(d)$ denote the splitting node in $Z$ and let $l_q=|q(d)|$.
For each $i<d$,
let $Z_i$ denote the set of those $z\in T_i\cap Z$
such that $z\in X$ for some $X\in\Ext_{Z}(A,\tilde{X})$.
For each $i<d$,
take a set $J_i\sse K_i$ of cardinality $|Z_i|$
and label the members of $Z_i$ as
$\{z_{\al}:\al\in J_i\}$.
Notice that each member of $\Ext_T(A,\tilde{X})$ above $Z$ extends some set $\{z_{\al_i}:i<d\}\cup\{q(d)\}$, where each $\al_i\in J_i$.
Let $\vec{J}$ denote the set of those $\lgl \al_0,\dots,\al_{d-1}\rgl\in \prod_{i< d}J_i$ such that the set $\{z_{\al_i}:i< d\}\cup\{q(d)\}$ is in $\Ext_T(A,\tilde{X})$.
Then for each $i<d$,
$J_i=\{\al_i:\vec\al\in\vec{J}\}$.
It follows from Lemma \ref{lem.compat} that
the set $\{p_{\vec\al}:\vec\al\in\vec{J}\}$ is compatible.
The fact that
$p_{\vec{J}}$ is a condition in $\bP$ will be used
to make the construction of $q$ very precise.
Let
$\vec{\delta}_q=\bigcup\{\vec{\delta}_{\vec\al}:\vec\al\in \vec{J}\}$.
For each $i<d$ and $\al\in J_i$,
define $q(i,\al)=z_{\al}$.
Notice that for each
$\vec\al\in \vec{J}$ and $i<d$,
\begin{equation}
q(i,\al_i)\contains t^*_i=p_{\vec\al}(i,\al_i)=p_{\vec{J}}(i,\al_i),
\end{equation}
and
\begin{equation}
q(d)\contains t^*_d=p_{\vec\al}(d)=p_{\vec{J}}(d).
\end{equation}
For each $i<d$ and $\gamma\in\vec{\delta}_q\setminus
J_i$,
there is at least one $\vec{\al}\in\vec{J}$ and some $k<k^*$ such that $\delta_{\vec\al}(k)=\gamma$.
Let $q(i,\gamma)$ be the leftmost extension
of $p_{\vec{J}}(i,\gamma)$ in $T$ of length $l_q$.
Define
\begin{equation}
q=\{q(d)\}\cup \{\lgl (i,\delta),q(i,\delta)\rgl: i<d,\ \delta\in \vec{\delta}_q\}.
\end{equation}
Since $C$ is valid in $T$ and $Z=\max(C)$, it follows that $Z$ is free in $T$.
Since $\ran(q)$ consists of $Z$ together with leftmost extensions of nodes in $\ran(p_{\vec{J}})$,
all of which are free,
$\ran(q)$ is free.
Therefore, $q$ is a condition in $\bP$.
\begin{lem}\label{lem.qbelowpal}
For all $\vec\al\in\vec{J}$,
$q\le p_{\vec{\al}}$.
\end{lem}
\begin{proof}
Given $\vec\al\in\vec{J}$,
it follows from the definition of $q$ that
$\vec{\delta}_q\contains \vec{\delta}_{\vec{\al}}$,
$q(d)\contains p_{\vec{\al}}(d)$,
and
for each pair $(i,\gamma)\in d\times \vec{\delta}_{\vec\al}$,
$q(i,\gamma)\contains p_{\vec{\al}}(i,\gamma)$.
So it only remains to show that $q$
is valid over $p_{\vec{\al}}$.
It follows from Lemma \ref{lem.subclaimA}
that
$\vec{\delta}_{\vec\al}\cap
\bigcup_{i<d}K_i=\vec\al$; so
for each $i<d$ and $\gamma\in\vec{\delta}_{\vec\al}\setminus \{\al_i\}$,
$q(i,\gamma)$ is the leftmost extension of $p_{\vec\al}(i,\gamma)$.
Since $\vec\al$ is in $\vec{J}$,
$ X(q,\vec\al)$ is in $\Ext_T(A,\tilde{X})$.
This implies that $ X(q,\vec\al)$
has no new pre-cliques
over $A$, and hence, none over $X(p_{\vec\al},\vec\al)$.
It follows that
$\ran(q\re \vec{\delta}_{\vec\al})$
is valid over $\ran(p_{\vec\al})$, by Lemma \ref{lem.poc}.
Therefore, $q\le p_{\vec\al}$.
\end{proof}
\begin{rem}
Notice that
we did not prove that $q\le p_{\vec{J}}$; in fact, that is generally false.
\end{rem}
To construct $U_{m_j}$,
take an $r\le q$ in $\bP$ which decides some $l_j$ in $\dot{L}_d$ for which $h(\dot{b}_{\vec\al}\re l_j)=\varepsilon^*$, for all $\vec\al\in\vec{J}$.
This is possible since for all $\vec\al\in\vec{J}$,
$p_{\vec\al}$ forces $h(\dot{b}_{\vec\al}\re l)=\varepsilon^*$ for $\dot{\mathcal{U}}$ many $l\in \dot{L}_d$.
By the same argument as in creating the conditions $p_{\vec\al}\le p_{\vec\al}^2$ to satisfy (4)
in Part I,
we may assume that
the nodes in the image of $r$ have length $l_j$.
Since
$r$ forces $\dot{b}_{\vec{\al}}\re l_j=X(r,\vec\al)$
for each $\vec\al\in \vec{J}$,
and since the coloring $h$ is defined in the ground model,
it follows that
$h(X(r,\vec\al))=\varepsilon^*$ for each $\vec\al\in \vec{J}$.
Extend the splitting node $q(d)$ in $Z$
to $r(d)$.
For each $i<d$ and $\al_i\in J_i$,
extend $q(i,\al_i)$ to $r(i,\al_i)$.
Let
\begin{equation}
Z_0=\{q(i,\al_i):i<d,\ \al_i\in J_i\}\cup \{q(d)\}
\end{equation}
and let $Z_1= Z\setminus Z_0$.
Let
\begin{equation}
Y=\{r(i,\al_i):i<d,\ \al_i\in J_i\}\cup \{r(d)\}.
\end{equation}
Then $Y$ extends $Z_0$ and has no new pre-cliques over $Z_0$, since $r\le q$.
By Lemma \ref{lem.HLconstruction},
there is a $U_{m_j}\in r_{m_j}[U_{m_j-1},T]$ which is valid in $T$
such that
$\max(U_{m_j})$ end-extends $Z$ and in particular,
$Y\sse\max(U_{m_j})$.
Notice that every $X\in\Ext_{U_{m_j}}(A,\tilde{X})$
with $X\sse\max(U_{m_j})$
satisfies $h(X)=\varepsilon^*$.
This holds since for each such $X$,
the truncation $ X \re l_q$
is a member of $\Ext_{Z}(A,\tilde{X})$.
So there corresponds a sequence $\vec\al\in\vec{J}$ such that
$X\re l_q= X(q,\vec\al)$.
Then
$X= X(r,\vec\al)$,
which has $h$-color $\varepsilon^*$.
Let $S=\bigcup_{j\in\bN}U_{m_j}$.
Then for each $X\in\Ext_{S}(A,\tilde{X})$, there corresponds a $j$ such that $X\in\Ext_{U_{m_j}}(A,\tilde{X})$, and hence,
$h(X)=\varepsilon^*$.
Thus, $S\in [B,T]$ and satisfies the theorem.
This concludes the proof of the theorem for Case (a).
\vskip.1in
\noindent\underline{\bf{Part II Case (b).}}
Let $m_0$ be the integer such that there is a $B'\in r_{m_0}[B,T]$ with
$\tilde{X}\sse \max(B')$.
Let $U_{m_0-1}$ denote $r_{m_0-1}(B')$.
Since $\tilde{X}\sse \max(B')$, it follows that
$l^*\ge l_{B'}$.
Let $V=\{t^*_i:i\le d\}$, and recall that
this set has no new pre-cliques over $\tilde{X}$.
By Lemma \ref{lem.HLconstruction}
there is a set of nodes $V'$ end-extending $\max(B')\setminus V$ such that
$U_{m_0-1}\cup V\cup V'$ is a member of $r_{m_0}[U_{m_0-1},T]$;
label this $U_{m_0}$.
Since $\max(U_{m_0})$ is at the level of the coding node $t^*_d$, $\max(U_{m_0})$ is free in $T$.
Moreover, $U_{m_0}\in r_{m_0}[U_{m_0-1},T]$ implies that $U_{m_0}$
satisfies the Strong Witnessing Property.
Therefore, $U_{m_0}$ is valid in $T$.
Notice that $\{t^*_i:i\le d\}$ is the only member of
$\Ext_{U_{m_0}}(A,\tilde{X})$,
and it has $h$-color $\varepsilon^*$.
Let $M=\{m_j:j\in\bN\}$ enumerate the set of $m\ge m_0$
such that the coding node $c^T_{m}\contains c^T_{m_0}$.
Assume that $j\ge 1$ and
we have constructed $U_{m_{j-1}}\in \mathcal{AT}^k_{m_{j-1}}$ valid in $T$ so that every member of $\Ext_{U_{m_{j-1}}}(A,\tilde{X})$ is colored $\varepsilon^*$ by $h$.
By Theorem \ref{thm.GOODnonempty},
we may fix some
$U_{m_j-1}\in r_{m_j-1}[U_{m_{j-1}},T]$ which is valid in $T$.
Take some $C\in r_{m_j}[U_{m_j-1} ,T]$, and
let $Z$ denote $\max(C)$.
The nodes in $Z$ will not be in the tree $S$ we are constructing;
rather,
we will construct
$U_{m_j}\in r_{m_j}[U_{m_j-1},T]$
so that
$\max(U_{m_j})$ extends
$Z$.
Let $q(d)$ denote the coding node in $Z$ and let $l_q=|q(d)|$.
Recall that for $e\in\{0,1\}$, $I_e$ denotes the set of
$i<d$ for which $t^*_i$ has passing number $e$ at $t^*_d$.
For each pair $e\in\{0,1\}$ and
$i\in I_e$, let
$Z_i$ be the set
of nodes $z$ in $T_i\cap Z$ such that $z$ has passing number $e$ at $q(d)$.
We now construct a condition $q$ similarly to, but not exactly as in, Case (a).
For each $i<d$,
let $J_i$ be a subset of $K_i$ with the same size as $Z_i$.
For each $i< d$, label the nodes in $Z_i$ as
$\{z_{\al}:\al\in J_i\}$.
Let $\vec{J}$ denote the set of those $\lgl \al_0,\dots,\al_{d-1}\rgl\in \prod_{i< d}J_i$ such that the set
$\{z_{\al_i}:i< d\}\cup\{q(d)\}$ is in $\Ext_T(A,\tilde{X})$.
Notice that for each $i<d$ and
$\vec\al\in \vec{J}$, $z_{\al_i}\contains t^*_i=p_{\vec{\al}}(i,\al_i)$, and $q(d)\contains t^*_d=p_{\vec{\al}}(d)$.
Furthermore, for each $i<d$ and $\delta\in J_i$,
there is an $\vec\al\in\vec{J}$ such that $\al_i=\delta$.
Let
$\vec{\delta}_q=\bigcup\{\vec{\delta}_{\vec\al}:\vec\al\in \vec{J}\,\}$.
For each pair $(i,\gamma)\in d\times\vec{\delta}_q$ with $\gamma\in J_i$,
define $q(i,\gamma)=z_{\gamma}$.
Let $\mathcal{J}=\{(i,\gamma)\in
d\times\vec{\delta}_q : i<d$ and $
\gamma\in\vec{\delta}_q\setminus J_i\}$.
For each pair $(i,\gamma)\in \mathcal{J}$,
there is at least one $\vec{\al}\in\vec{J}$ and some $k<k^*$ such that $\delta_{\vec\al}(k)=\gamma$.
By Lemma \ref{lem.compat},
$p_{\vec\beta}(i,\gamma)=p_{\vec{\al}}(i,\gamma)=t_{i,k}$,
for any $\vec\beta\in\vec{J}$ for which $\gamma\in\vec{\delta}_{\vec\beta}$.
For each pair $(i,\gamma)\in\mathcal{J}$ with $i\in I_0$,
take $q(i,\gamma)$ to be the leftmost extension of $t_{i,k}$ in $T\re l_q$.
For each pair $(i,\gamma)\in\mathcal{J}$ with $i\in I_1$,
let $q(i,\gamma)$ be the node which extends $t_{i,k}$ leftmost to the length of the longest coding node in $T$ strictly below $q(d)$, and then takes the rightmost path to length $l_q$.
Note that $q(i,\gamma)$ has passing number $e$, where $e\in\{0,1\}$ is the number such that $i\in I_e$.
By similar arguments to those in
Lemma \ref{lem.HLCasebtruncate},
the set $\{q(i,\gamma):(i,\gamma)\in\mathcal{J}\}$ has no new pre-cliques over
$\ran(p_{\vec{\al}})$ for $\vec\al\in\vec{J}$ (recall, these all have the same range);
moreover,
any new pre-cliques in the set
$\{q(i,\gamma):i< d,\ \gamma\in\vec{\delta}_q\}\cup\{q(d)\}$ over $\ran(p_{\vec{\al}})$ (for any $\vec\al\in\vec{J}$)
must occur among
$\{q(i,\gamma): i<d,\ \gamma\in J_i\}\cup\{q(d)\}$.
Define
\begin{equation}
q=\{q(d)\}\cup \{\lgl (i,\delta),q(i,\delta)\rgl: i<d,\ \delta\in \vec{\delta}_q\}.
\end{equation}
By the construction, $q$ is a member of $\bP$.
\begin{claim}\label{claim.qbelowpal}
For each $\vec\al\in \vec{J}$,
$q\le p_{\vec\al}$.
\end{claim}
\begin{proof}
By construction,
$q(i,\delta)\contains p_{\vec{\al}}(i,\delta)$ for all $(i,\delta)\in d\times \vec{\delta}_{\vec\al}$; so
it suffices to show that for each $\vec\al\in\vec{J}$,
$\ran(q\re \vec\delta_{\vec\al})$ has no new pre-cliques over $\ran(p_{\vec\al})$.
Let $\vec\al\in\vec{J}$ be given.
Then
\begin{equation}
\ran(q\re \vec\delta_{\vec\al})\sse
\{q(i,\gamma):(i,\gamma)\in\mathcal{J}\}\cup
X(q,\vec\al),
\end{equation}
recalling that $X(q,\vec\al)=\{q(i,\al_i):i<d\}\cup\{q(d)\}$.
By definition of $\vec{J}$,
$\vec{\al}\in\vec{J}$ implies that
$X(q,\vec\al)$ is a member of $\Ext_T(A,\tilde{X})$.
Thus,
$X(q,\vec\al)$
has no new pre-cliques over $A\cup\tilde{X}$, by
Lemma \ref{lem.alternate}.
Since $\{t^*_i:i\le d\}$ end-extends $\tilde{X}$,
it follows that $X(q,\vec\al)$ has no new pre-cliques over $\{t^*_i:i\le d\}$.
Since the set $\{q(i,\gamma):i\le d,\ \gamma\in\vec{\delta}_{\vec{\al}}, \ \gamma\ne \al_i\}$ has no new pre-cliques with $X(q,\vec\al)$ over
$\ran(p_{\vec\al})$,
it follows that
$q\le p_{\vec\al}$.
\end{proof}
To construct $U_{m_j}$,
take an $r\le q$ in $\bP$ which decides $l_r\in \dot{L}_d$ such that
$h(\dot{b}_{\vec\al}\re l_r)=\varepsilon^*$ for all $\vec\al\in\vec{J}$,
using
the same ideas as in the construction of the $p_{\vec\al}$'s.
Let $Y=\bigcup\{ X(r,\vec\al):\vec\al\in\vec{J}\}$, and let $Z^*=\{r(d)\}\cup\bigcup_{i<d}Z_i$.
Since $\ran(r\re \vec\delta_q)$ has no new pre-cliques over $\ran(q)$,
it follows that $Y$ has no new pre-cliques over $Z^*$.
Apply Lemma \ref{lem.HLCasebtruncate} to
extend the nodes in $Z\setminus Z^*$ to a set $Y'\sse T\re l_r$ so that
each node in $Y'$ has the same passing number at $r(d)$ as it does at $q(d)$, and such that $Y\cup Y'$ has no new pre-cliques over $Z$.
Then $U_{m_j-1}\cup Y\cup Y'$ is a member of
$r_{m_j}[U_{m_j-1},T]$ which is valid in $T$.
To finish the proof of the theorem for Case (b),
define $S=\bigcup_{j\in\bN}U_{m_j}$.
Then $S\in [B',T]$, and
for each $Z\in\Ext_{S}(A,\tilde{X})$, there is a $j\in\bN$ such that $Z\in\Ext_{U_{m_j}}(A,\tilde{X})$, so $h(Z)=
\varepsilon^*$.
This concludes the proof of the theorem.
\end{proof}
\section{Ramsey Theorem for finite trees with the Strict Witnessing Property}\label{sec.1SPOC}
The main theorem of this section is
Theorem \ref{thm.MillikenSWP}, which is
an analogue of Milliken's Theorem \ref{thm.Milliken}
for
colorings of finite trees with the following strong version of the Witnessing Property.
\begin{defn}[Strict Witnessing Property]\label{defn.SWP}
A subtree $A$ of a strong coding tree satisfies the {\em Strict Witnessing Property (SWP)}
if $A$
satisfies the Witnessing Property and
the following hold:
\begin{enumerate}
\item
For each interval $(|d_m^A|,|d^A_{m+1}|]$,
$A$ has at most one new pre-clique of size at least two, or a singleton with some new pre-cliques in $(|d_m^A|,|d^A_{m+1}|]$, but not both.
\item
If $X$ is a new pre-$a$-clique of size at least three
in $(|d_m^A|,|d^A_{m+1}|]$,
then every proper subset of $X$ has a new pre-$a$-clique in an interval $(|d_j^A|,|d^A_{j+1}|]$, for some $j<m$.
\end{enumerate}
\end{defn}
\begin{lem}\label{lem.copy}
If $A\sse\bT_k$ has the Strict Witnessing Property and
$B\cong A$,
then $B$ also has the Strict Witnessing Property.
\end{lem}
\begin{proof}
If $B\cong A$ and $A$ has the WP,
then $B$ also has the WP
by Lemma \ref{lem.concpresWP}.
Let $f:A\ra B$ be the strong isomorphism between them.
Since $A$ has the SWP, each new pre-clique of size at least two in $A$ is the only new pre-clique occurring in that interval of $A$; hence it is maximal in that interval.
By (2) of Definition \ref{defn.SWP},
each proper subset of a new pre-clique in a given interval of $A$ occurs as a maximal new pre-clique and is witnessed in some lower interval of $A$.
Since $f$ preserves maximal new pre-cliques,
each new pre-clique of size at least two in $B$ is a maximal new pre-clique in $B$, and is the only new pre-clique of $B$ in the interval in which it occurs.
Thus, $B$ satisfies (2).
Furthermore, for any $t\in A$, $f(t)$ is a new singleton pre-$a$-clique in $B$ if and only if $t$ is a new singleton pre-$a$-clique in $A$.
Therefore, $B$ has the SWP.
\end{proof}
Given a finite tree $A$ with the SWP,
we say that $B$ is a {\em copy} of $A$ if $A\cong B$.
The main theorem of this section,
Theorem \ref{thm.MillikenSWP},
will guarantee a Ramsey Theorem for colorings of copies of a finite tree with the SWP inside a strong coding tree.
\begin{thm}\label{thm.MillikenSWP}
Let $T\in\mathcal{T}_k$
be a strong coding tree
and
let $A$ be a finite subtree of $T$ satisfying the Strict Witnessing Property.
Then for any coloring of the copies of $A$ in $T$ into finitely many colors,
there is a strong coding subtree $S\le T$ such that all copies of $A$ in $S$ have the same color.
\end{thm}
Theorem \ref{thm.MillikenSWP} will be proved via four lemmas and an induction argument.
The main difficulty is that
Case (b) of Theorem \ref{thm.matrixHL}
provides homogeneity for
$\Ext_S(A,\tilde{X})$ for some strong coding tree $S$; in particular, homogeneity only holds for level sets $X$
end-extending $\tilde{X}$.
The issue of new singleton pre-$a$-cliques will be handled similarly to how we handle the case when $\tilde{X}$ has a coding node.
We need a strong coding tree in which {\em every} $X$ satisfying $A\cup X\cong A\cup\tilde{X}$ has the same color.
This will be addressed by the following:
Lemma \ref{lem.endhomog} will build a fusion sequence to obtain an $S\le T$ which is homogeneous
on $\Ext_S(A,Y)$ for each
minimal level set $Y$ extending $A_e$ such that $A\cup Y\cong A\cup\tilde{X}$.
Lemma \ref{lem.Case(c)} will use a new forcing and arguments from the proof of Theorem \ref{thm.matrixHL} to
obtain a strong coding tree $S\in [B,T]$ in which every $X$
satisfying
$A\cup X\cong A\cup\tilde{X}$ has the same color.
The last two lemmas involve fusion to construct a strong coding subtree which is homogeneous for the induced color on
copies of $A$.
The theorem then follows by induction and an application of Ramsey's Theorem.
The following basic assumption, similar to but stricter than Case (b) of Theorem \ref{thm.matrixHL}, will be used in much of this section.
\begin{assumption}\label{assumption.6}
Let $A$ and $C$ be fixed non-empty finite valid subtrees
of a strong coding tree $T\in\mathcal{T}_k$ such that
\begin{enumerate}
\item
$A$ and $C$ both satisfy the Strict Witnessing Property; and
\item
$C\setminus A$ is a level set containing both a coding node and the sequence $0^{(l_C)}$.
\end{enumerate}
Let $\tilde{X}$ denote $C\setminus A$, and
let $A_e$ be the subset of
$A^+$ which is extended to $\tilde{X}$.
Let $d+1$ be the number of nodes
in $\tilde{X}$.
List the nodes
in $A_e$ as $\lgl s_i:i\le d\rgl$
and the nodes of $\tilde{X}$ as $\lgl t_i:i\le d\rgl$ so that each $t_i$ extends $s_i$ and $t_d$ is the coding node in $\tilde{X}$.
For $j\in\{0,1\}$,
let
$I_j$ denote the set of $i\le d$ such that $t_i$
has passing number $j$ at $t_d$.
If $\tilde{X}$ has a new pre-clique over $A$,
let $I_*$ denote the set of those $i\le d$ such that
$\{t_i:i\in I_*\}$ is the new pre-clique in $\tilde{X}$ over $A$.
Note that $I_*\sse I_1$ and $t_d$ must be among the coding nodes in $C$ witnessing this new pre-clique.
\end{assumption}
For
any $X$ such that $A\cup X\cong C$,
let
$\Ext_T(A,X)$
be defined as in equation (\ref{eq.ExtTAC}) of Section \ref{sec.5}.
Thus,
$\Ext_T(A,X)$ is the collection of level sets $Y\sse T$ such that $Y$ end-extends $X$ and $A\cup Y\cong A\cup X$
(equivalently, $A\cup Y\cong C$),
and $A\cup Y$ is valid in $T$.
Recall that, since $\tilde{X}$ contains a coding node,
$A\cup X\cong A\cup \tilde{X}$ implicitly includes that
the strong isomorphism from $A\cup \tilde{X}$ to $A\cup X$ preserves passing numbers between $\tilde{X}^+$ and $X^+$.
We hold to the convention that given $Y$ such that $A\cup Y\cong C$,
the nodes in $Y$
are labeled $y_i$, $i\le d$, where each $y_i\contains s_i$.
In particular, $y_d$ is the coding node in $Y$.
In this section, we want to consider all copies of $C$ extending $A$.
To that end let
\begin{equation}
\Ext_T(A,C)=\bigcup\{\Ext_T(A,X): A\cup X\cong C\}.
\end{equation}
Now we define the notion of minimal pre-extension, which
will be used in the next lemma.
For $x\in T$,
define $\splitpred_T(x)$
to be $x\re l$ where $l< |x|$ is maximal such that $x\re l$ is a splitting node in $T$.
\begin{defn}[Minimal pre-extension of $A$ to a copy of $C$]\label{defn.mpe}
Given $A$, $\tilde{X}$, and $C$ as in Assumption \ref{assumption.6},
for
$X=\{x_i:i\le d\}$ a level set extending $A_e$ such that $x_i\contains s_i$ for each $i\le d$
and such that
$l_X$
is the length of some coding node in $T$,
we say that $X$ is a
{\em minimal pre-extension in $T$ of $A$ to a copy of $C$} if the following hold:
\begin{enumerate}
\item[(i)]
$\{i\le d: $ the passing number of $x_i$ at $x_d$ is $1\}=I_1$.
\item[(ii)]
$A\cup \SP_T(X)$
satisfies the Strict Witnessing Property,
where
\begin{equation}\label{eq.SP}
\SP_T(X)=\{\splitpred_T(x_i):i\in I_1\}\cup\{x_i:i\in I_0\}.
\end{equation}
\item[(iii)]
If $X$ has a new pre-clique over $A$,
then $X$ has only one new maximal pre-clique over $A$ which is exactly $\{x_i:i\in I_*\}\re l$, for some $l\in (l_A, l_X]$.
\end{enumerate}
\end{defn}
Notice that
for (ii) to hold, $\SP_T(X)$ must have no new pre-cliques over $A$.
Let $\MPE_T(A,C)$ denote the set of minimal pre-extensions in $T$ of $A$ to a copy of $C$.
When $A$ and $C$ are clear,
we call members of $\MPE_T(A,C)$ simply
{\em minimal pre-extensions}.
Minimal pre-extensions
are exactly the level sets in $T$ which
can be extended to a member of $\Ext_T(A,\tilde{X})$.
For $X\in\MPE_T(A,C)$,
define
\begin{equation}
\Ext_T(A,C;X)=\{Y\sse T: A\cup Y\cong C\mathrm{\ and\ }Y \mathrm{\ extends\ } X\}.
\end{equation}
Then
\begin{equation}
\Ext_T(A,C)=\bigcup\{\Ext_T(A,C;X): X\in \MPE_T(A,C)\}.
\end{equation}
\begin{defn}\label{defn.endhomog}
A coloring on $\Ext_T(A,C)$ is {\em end-homogeneous} if for each minimal pre-extension $X$,
every member of $\Ext_T(A,C;X)$ has the same color.
\end{defn}
The following lemma is a slightly modified version of Lemma 6.7 in \cite{DobrinenJML20}.
\begin{lem}[End-homogeneity]\label{lem.endhomog}
Assume \ref{assumption.6}, and
let $m$ be the integer such that $\max(A)\sse r_{m}(T)$.
Then for any coloring $h$
of $\Ext_T(A,C)$ into two colors,
there is a $T'\in[r_{m}(T),T]$ such that $h$ is
end-homogeneous on $\Ext_{T'}(A,C)$.
\end{lem}
\begin{proof}
Let $(m_j)_{j\in\bN}$ enumerate those integers greater than $m$ such that there is a minimal pre-extension of $A$ to a copy of $C$ from among the maximal nodes in
$r_{m_j}(T)$.
Notice that for each $j\in\bN$,
$\max(r_{m_j}(T))$ contains a coding node,
although there can be members of $\MPE_T(A,C)$
contained in $\max(r_{m_j}(T))$ not containing that coding node.
Let $T_{-1}$ denote $T$.
Suppose that $j\in\bN$ and $T_{j-1}$ is given so that
the coloring $h$ is homogeneous on
$\Ext_{T_{j-1}}(A,C;X)$ for each minimal pre-extension $X$ in $r_{m_j-1}(T_{j-1})$.
Let $U_{j-1}$ denote $r_{m_j-1}(T_{j-1})$.
Enumerate the minimal pre-extensions contained in
$\max(r_{m_j}(T_{j-1}))$ as $X_0,\dots, X_n$.
By induction on $i\le n$, we will obtain
$T_j\in [U_{j-1},T_{j-1}]$ such that $\max(r_{m_j}(T_j))$ end-extends $\max(r_{m_j}(T_{j-1}))$
and $\Ext_{T_j}(A,C;Z)$ is homogeneous
for each minimal pre-extension $Z$ in $\max(r_{m_j}(T_{j-1}))$.
Let $l$ denote the length of the nodes in $\max(r_{m_j}(T_{j-1}))$, and
let $S_{-1}=T_{j-1}$.
Suppose $0\le i\le n$ and
we have
strong coding trees
$S_{-1},\dots, S_{i-1}$ such that
for each $0\le i'\le i-1$,
$S_{i'}\in [U_{j-1},S_{i'-1}]$
and
$h$ is homogeneous on $\Ext_{S_{i'}}(A,C;X_{i'})$.
Note that
$X_i$ is contained in $r_{m_j}(S_{i-1})\re l$,
though $l$ does not have to be the length of any node in $S_{i-1}$.
The point is that the set of nodes $Y_i$ in $\max(r_{m_j}(S_{i-1}))$ end-extending $X_i$
is again a minimal pre-extension.
Extend the nodes in $Y_i$ to some $Z_i\in \Ext_{ S_{i-1}}(A,C;Y_i)$,
and let $l'$ denote the length of the nodes in $Z_i$.
Note that $Z_i$ has no new pre-cliques over $Y_i$.
Let $W_i$ consist of the nodes in $Z_i$ along with the leftmost extensions of
the nodes in $\max(r_{m_j}(S_{i-1}))\setminus Y_i$
to the length $l'$ in $S_{i-1}$.
Let $S'_{i-1}$ be a strong coding tree in $[U_{j-1},S_{i-1}]$ such that $\max(r_{m_j}(S'_{i-1}))$ extends $W_i$.
Such an $S'_{i-1}$ exists by
Lemmas \ref{lem.poc} and \ref{lem.pnc} and
Theorem \ref{thm.GOODnonempty}.
Apply
Case (b) of Theorem \ref{thm.matrixHL}
to obtain a strong coding tree
$S_i\in [U_{j-1},S'_{i-1}]$ such that the coloring on $\Ext_{S_i}(A,C;Z_i)$ is homogeneous.
At the end of this process, let $T_j=S_n$.
Note that for each minimal pre-extension $Z\sse\max(r_{m_j}(T_j))$,
there is a unique $i\le n$ such that
$Z$ extends $X_i$,
since each node in $\max(r_{m_j}(T_j))$ extends a unique node in $\max(r_{m_j}(T_{j-1}))$,
and hence
$\Ext_{T_j}(A,C;Z)$ is homogeneous.
Having chosen each $T_j$ as above,
let $T'=\bigcup_{j\in\bN}r_{m_j}(T_j)$.
Then $T'$ is a strong coding tree which is a member of
$[r_{m}(T),T]$,
and for each minimal pre-extension $Z$ in $T'$,
$\Ext_{T'}(A,C;Z)$ is homogeneous for $h$.
Therefore, $h$ is end-homogeneous on $\Ext_{T'}(A,C)$.
\end{proof}
The next lemma provides a means for uniformizing the end-homogeneity from the previous lemma
to obtain one color for all
members of $\Ext_S(A,C)$.
The arguments are often similar to those of
Case (a) of
Theorem \ref{thm.matrixHL}, but sufficiently different to warrant a proof.
\begin{lem}\label{lem.Case(c)}
Assume \ref{assumption.6}, and suppose that $B$ is a finite strong coding tree valid in $T$ and $A$ is a subtree of $B$
such that $\max(A)\sse\max(B)$.
Suppose that $h$ is end-homogeneous on $\Ext_{T}(A,C)$.
Then there is an $S\in[B,T]$ such that $h$ is homogeneous on
$\Ext_S(A,C)$.
\end{lem}
\begin{proof}
Given any $U\in[B,T]$,
recall that $\MPE_U(A,C)$ denotes the set of all minimal pre-extensions of $A$ to a copy of $C$ in $U$.
We are under Assumption \ref{assumption.6}.
Let $i_0\le d$ be such that $t_{i_0}= 0^{(l_C)}$, and note that $i_0$ is a member of $I_0$.
Each member $Y$ of $\MPE_T(A,C)$
will be enumerated as $\{y_i:i\le d\}$ so that $y_i\contains s_i$ for each $i\le d$.
Recall notation (\ref{eq.SP})
of
$\SP_T(Y)$.
Since $C$ satisfies the SWP,
$\tilde{X}$ is in $\MPE_T(A,C)$.
Let $P$ denote $\SP_T(\tilde{X})$.
Since $\tilde{X}$ is contained in an interval of $T$ above the interval containing $\max(A)$,
each node of $P$ extends exactly one node of $A_e$.
For any $U\in[B,T]$, define
\begin{equation}
X\in\Ext_{U}(A,P)\Longleftrightarrow
X=\SP_U(Y) \mathrm{\ for\ some\ }
Y\in\MPE_U(A,C).
\end{equation}
By assumption,
the coloring $h$ on $\Ext_{T}(A,C)$ is end-homogeneous.
This
induces a coloring, also denoted $h$, on $\MPE_T(A,C)$
by defining, for $Y\in\MPE_T(A,C)$,
$h(Y)$ to be
{\em the} $h$-color that all members of
$\Ext_T(A,C;Y)$ have.
This further
induces a coloring $h'$ on $\Ext_{T}(A,P)$
as follows:
For $Q\in \Ext_{T}(A,P)$, for the $Y\in \MPE_T(A,C)$ such that $\SP_T(Y)=Q$,
let
$h'(Q)=h(Y)$.
Given $Q\in \Ext_{T}(A,P)$, extending each $q_i\in Q$ with $i\in I_1$ to the level of the next coding node in $T$, with passing number $1$ at that coding node, recovers
$Y$.
Thus, $h'$ is well-defined.
Let $L$ denote the collection of all $l\in\bN$ such that there is a member of
$\Ext_{T}(A,P)$ with maximal nodes of length $l$.
For each
$i\in (d+1)\setminus\{i_0\}$, let $T_i=\{t\in T:t\contains s_i\}$.
Let $T_{i_0}$ be the collection of all leftmost nodes in $T$ extending $s_{i_0}$.
Let $\kappa=\beth_{2d+2}$.
The following forcing notion $\bQ$ will add $\kappa$ many paths through each $T_i$, $i\in (d+1)\setminus\{i_0\}$ and
one path through $T_{i_0}$, though with $\kappa$ many labels.
The present case is handled similarly to Case (a) of Theorem \ref{thm.matrixHL}.
Let
$\bQ$ be the set of conditions $p$ such that
$p$ is a function
of the form
$$
p:(d+1)\times\vec{\delta}_p\ra T,
$$
where $\vec{\delta}_p$ is a finite subset of $\kappa$,
$l_p\in L$,
$\{p(i,\delta):\delta\in\vec\delta_p\}\sse T_i$ for each $i<d$,
and
\begin{enumerate}
\item[(i)]
There is some coding node
$c^{T}_{n(p)}$ in $T$
such that $l^{T}_{n(p)}=l_p$,
and
$l^{T}_{n(p)-1}< |p(i,\delta)|\le l_p$ for each
$(i,\delta)\in (d+1)\times
\vec{\delta}_p$.
\item [(ii)]
\begin{enumerate}
\item[($\al$)]
If $i\in I_1$,
then
$p(i,\delta)=\splitpred_{T}(y)$ for some $y\in T_i\re l_p$.
\item[$(\beta)$]
If $i\in I_0$, then
$p(i,\delta)\in T_i\re l_p$ and has immediate extension $0$ in $T$.
\end{enumerate}
\end{enumerate}
It follows from the definition that for $p\in \bQ$,
$\ran(p):=\{p(i,\delta):(i,\delta)\in (d+1)\times\vec{\delta}_p\}$ is free in $T$: leftmost extensions add no new pre-cliques.
Furthermore, all nodes in $\ran(p)$ are contained in the $n(p)$-th interval of $T$.
We point out that $\ran(p)$ may or may not contain a coding node.
If it does, then that coding node must appear as $p(i,\delta)$ for some $i\in I_0$; this $i$ may or may not equal $d$.
The partial ordering on $\bQ$ is defined as follows:
$q\le p$ if and only if
$l_q\ge l_p$, $\vec{\delta}_q\contains \vec{\delta}_p$,
\begin{enumerate}
\item[(i)]
$q(i,\delta)\contains p(i,\delta)$ for each $(i,\delta)\in (d+1)\times\vec{\delta}_p$; and
\item[(ii)]
$\ran(q\re \vec\delta_p):=\{q(i,\delta):(i,\delta)\in (d+1)\times\vec{\delta}_p\}$
has no new pre-cliques over
$\ran(p)$.
\end{enumerate}
By arguments similar to those in the proof of Theorem \ref{thm.matrixHL},
$(\bQ,\le)$ is an atomless partial order, and any condition in $\bQ$ can be extended by two incompatible conditions of length greater than any given $l\in\bN$.
Let $\dot{\mathcal{U}}$ be a $\bQ$-name for a non-principal ultrafilter on $L$.
For each $i\le d$ and $\al<\kappa$, let
$\dot{b}_{i,\al}=\{\lgl p(i,\al),p\rgl:p\in \bQ$ and $\al\in\vec{\delta}_p\}$,
a $\bQ$-name for the $\al$-th generic branch through $T_i$.
For
any condition $p\in \bQ$, for
$(i,\al)\in
I_0\times \vec\delta_p$, $p$ forces that $\dot{b}_{i,\al}\re l_p= p(i,\al)$.
For $(i,\al)\in I_1\times\vec\delta_p$, $p$ forces that $\splitpred_{T}(\dot{b}_{i,\al}\re l_p)= p(i,\al)$.
For $\vec{\al}=\lgl\al_0,\dots,\al_{d}\rgl\in[\kappa]^{d+1}$,
\begin{equation}
\mathrm{let\ \ }\dot{b}_{\vec{\al}}\mathrm{\ \ denote\ \ }
\lgl \dot{b}_{0,\al_0},\dots,\dot{b}_{d,\al_d}\rgl.
\end{equation}
For $l\in L$, we shall use the abbreviation
\begin{equation}
\dot{b}_{\vec\al}\re l
\mathrm{\ \ to\ denote \ \ }
\SP_T(\dot{b}_{\vec\al}\re l),
\end{equation}
which is exactly
$\{\dot{b}_{i,\al_i}\re l :i\in I_0\}\cup \{\splitpred_T(\dot{b}_{i,\al_i}\re l):i\in I_1\}$.
Similarly to the proof of Theorem \ref{thm.matrixHL},
we will find infinite pairwise disjoint sets $K_i\sse \kappa$, $i\le d$, such that $K_0<K_1<\dots <K_d$,
and conditions $p_{\vec\al}$, $\vec\al\in \prod_{i\le d}K_i$,
such that these conditions are pairwise compatible,
have the same images in $T$, and force the same color $\varepsilon^*$ for $h'(\dot{b}_{\vec\al}\re l)$ for $\dot{\mathcal{U}}$ many levels $l$ in $L$.
Moreover, the nodes $\{t^*_i:i\le d\}$ obtained from the application of the \Erdos-Rado Theorem for this setting
will extend
$\{s_i:i\le d\}$
and form a member of $\Ext_{T}(A,P)$.
The arguments are quite similar to those in Theorem \ref{thm.matrixHL}, so we only fill in the details for arguments which are necessarily different.
\vskip.1in
\noindent{\bf \underline{Part I}.}
Given $p\in\bQ$ and $\vec\al\in [\vec\delta_p]^{d+1}$,
let
\begin{equation}
P(p,\vec\al)=\{p(i,\al_i):i\le d\}.
\end{equation}
For each $\vec\al\in[\kappa]^{d+1}$,
choose a condition $p_{\vec{\al}}\in\bQ$ such that
\begin{enumerate}
\item
$\vec{\al}\sse\vec{\delta}_{p_{\vec\al}}$.
\item
$P(p_{\vec\al},\vec\al) \in\Ext_T(A,P)$.
\item
There is a $\varepsilon_{\vec{\al}}\in 2$ such that
$p_{\vec{\al}}\forces$ ``$h'(\dot{b}_{\vec{\al}}\re l)=\varepsilon_{\vec{\al}}$
for $\dot{\mathcal{U}}$ many $l$ in $L$."
\item
$h'(P(p_{\vec\al},\vec\al))=\varepsilon_{\vec{\al}}$.
\end{enumerate}
Properties (1) - (4) can be guaranteed as follows.
For each $i\le d$, let $u_i$ denote the member of $P$ which extends $s_i$.
For each $\vec{\al}\in[\kappa]^{d+1}$, let
$$
p^0_{\vec{\al}}=\{\lgl (i,\delta), u_i\rgl: i\le d, \ \delta\in\vec{\al} \}.
$$
Then $p^0_{\vec{\al}}$ is a condition in $\bQ$ and
$\vec\delta_{p_{\vec\al}^0}= \vec\al$, so (1) holds for every $p\le p^0_{\vec{\al}}$.
Further,
$\ran(p^0_{\vec\al})$ is a member of $\Ext_T(A,P)$ since it equals $P$.
For any $p\le p_{\vec\al}^0$,
(ii) of the definition of the partial ordering on $\bQ$ guarantees that
$P(p,\vec\al)$
has no new pre-cliques over $\ran(p)$, and hence
is also a member of $\Ext_T(A,P)$.
Thus, (2) holds for any $p\le p_{\vec\al}^0$.
Take an extension $p^1_{\vec{\al}}\le p^0_{\vec{\al}}$ which
forces $h'(\dot{b}_{\vec{\al}}\re l)$ to be the same value for
$\dot{\mathcal{U}}$ many $l\in L$,
and which decides that value, denoted by $\varepsilon_{\vec{\al}}$.
Then any $p\le p^1_{\vec{\al}}$ satisfies (3).
Take $p_{\vec\al}^2\le p^1_{\vec\al}$ which decides $h'(\dot{b}_{\vec\al} \re l)=\varepsilon_{\vec\al}$, for some $l$ such that
$l_{p^1_{\vec\al}}<l \le l_{p^2_{\vec\al}}$.
If $l= l_{p^2_{\vec\al}}$, let $p_{\vec\al}=p_{\vec\al}^2$.
Otherwise,
let $\vec\delta_{\vec\al}=\vec\delta_{p^1_{\vec\al}}$ and define
$p_{\vec\al}$ as follows:
For each $i\in I_0$,
for $\delta\in \vec\delta_{\vec\al}$,
let
$p_{\vec\al}(i,\delta)=p_{\vec\al}^2(i,\delta)\re l$.
For each $i\in I_1$,
for $\delta\in \vec\delta_{\vec\al}$,
let
$p_{\vec\al}(i,\delta)=\splitpred_T (p_{\vec\al}^2(i,\delta)\re l)$.
Then $p_{\vec\al}$ is a condition in $\bQ$,
and $p_{\vec\al}\le p_{\vec\al}^1$, so it satisfies (1) - (3).
Furthermore,
$h'(P(p_{\vec\al},\vec\al))=\varepsilon_{\vec\al}$, so $p_{\vec\al}$ satisfies (4).
We are assuming $\kappa=\beth_{2d+2}$.
Let $D_e=\{0,2,\dots,2d\}$ and $D_o=\{1,3,\dots,2d+1\}$, the sets of even and odd integers less than $2d+2$, respectively.
Let $\mathcal{I}$ denote the collection of all functions $\iota: (2d+2)\ra (2d+2)$ such that
$\iota\re D_e$
and $\iota\re D_o$ are strictly increasing sequences
and $\{\iota(0),\iota(1)\}<\{\iota(2),\iota(3)\}<\dots<\{\iota(2d),\iota(2d+1)\}$.
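To illustrate, in the case $d=1$ (so that $2d+2=4$), the last requirement forces $\{\iota(0),\iota(1)\}=\{0,1\}$ and $\{\iota(2),\iota(3)\}=\{2,3\}$, so $\mathcal{I}$ consists of exactly the four functions given by
\begin{equation*}
\lgl\iota(0),\iota(1),\iota(2),\iota(3)\rgl\in\{\lgl 0,1,2,3\rgl,\ \lgl 0,1,3,2\rgl,\ \lgl 1,0,2,3\rgl,\ \lgl 1,0,3,2\rgl\}.
\end{equation*}
In general, $|\mathcal{I}|=2^{d+1}$, since the requirements determine the pairs $\{\iota(2j),\iota(2j+1)\}=\{2j,2j+1\}$, leaving two choices within each of the $d+1$ pairs.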
For $\vec{\theta}\in[\kappa]^{2d+2}$,
$\iota(\vec{\theta}\,)$ determines the pair of sequences of ordinals $(\theta_{\iota(0)},\theta_{\iota(2)},\dots,\theta_{\iota(2d)}), (\theta_{\iota(1)},\theta_{\iota(3)},\dots,\theta_{\iota(2d+1)})$,
both of which are members of $[\kappa]^{d+1}$.
Denote these as $\iota_e(\vec\theta\,)$ and $\iota_o(\vec\theta\,)$, respectively.
Let $\vec{\delta}_{\vec\al}$ denote $\vec\delta_{p_{\vec\al}}$,
$k_{\vec{\al}}$ denote $|\vec{\delta}_{\vec\al}|$,
and let $l_{\vec{\al}}$ denote $l_{p_{\vec\al}}$.
Let $\lgl \delta_{\vec{\al}}(j):j<k_{\vec{\al}}\rgl$
denote the enumeration of $\vec{\delta}_{\vec\al}$
in increasing order.
Define a coloring $f$ on $[\kappa]^{2d+2}$ into countably many colors as follows:
Given $\vec\theta\in[\kappa]^{2d+2}$ and
$\iota\in\mathcal{I}$, to reduce the number of subscripts, letting
$\vec\al$ denote $\iota_e(\vec\theta\,)$ and $\vec\beta$ denote $\iota_o(\vec\theta\,)$,
define
\begin{align}\label{eq.fiotatheta(c)}
f(\iota,\vec\theta\,)= \,
&\lgl \iota, \varepsilon_{\vec{\al}}, k_{\vec{\al}},
\lgl \lgl p_{\vec{\al}}(i,\delta_{\vec{\al}}(j)):j<k_{\vec{\al}}\rgl:i\le d\rgl,\cr
& \lgl \lgl i,j \rgl: i\le d,\ j<k_{\vec{\al}},\ \mathrm{and\ } \delta_{\vec{\al}}(j)=\al_i \rgl, \cr
&\lgl \lgl j,k\rgl:j<k_{\vec{\al}},\ k<k_{\vec{\beta}},\ \delta_{\vec{\al}}(j)=\delta_{\vec{\beta}}(k)\rgl\rgl.
\end{align}
Let $f(\vec{\theta}\,)$ be the sequence $\lgl f(\iota,\vec\theta\,):\iota\in\mathcal{I}\rgl$, where $\mathcal{I}$ is given some fixed ordering.
By the \Erdos-Rado Theorem,
there is a subset $K\sse\kappa$ of cardinality $\aleph_1$
which is homogeneous for $f$.
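For concreteness, the instance of the \Erdos-Rado Theorem being applied here is the standard partition relation
\begin{equation*}
\beth_{2d+1}^+\longrightarrow(\aleph_1)^{2d+2}_{\aleph_0},
\end{equation*}
which applies to $f$ since $f$ takes only countably many values and $\kappa=\beth_{2d+2}\ge\beth_{2d+1}^+$.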
Take $K'\sse K$ such that between each two members of $K'$ there is a member of $K$.
Then take subsets $K_i\sse K'$ such that $K_0<\dots<K_{d}$
and each $|K_i|=\aleph_0$.
The following four lemmas are direct analogues of
Lemmas \ref{lem.onetypes}, \ref{lem.j=j'}, \ref{lem.compat}, and \ref{lem.subclaimA}.
Their proofs follow by simply making the correct notational substitutions, and so are omitted.
\begin{lem}\label{lem.onetypes(c)}
There are $\varepsilon^*\in 2$, $k^*\in\om$,
and $ \lgl t_{i,j}: j<k^*\rgl$, $i\le d$,
such that
for all $\vec{\al}\in \prod_{i\le d}K_i$ and each $i\le d$,
$\varepsilon_{\vec{\al}}=\varepsilon^*$,
$k_{\vec\al}=k^*$, and
$\lgl p_{\vec\al}(i,\delta_{\vec\al}(j)):j<k_{\vec\al}\rgl
=
\lgl t_{i,j}: j<k^*\rgl$.
\end{lem}
Let $l^*$ denote the common length of the nodes $t_{i_0,j}$, $j<k^*$.
Then for each $i\in I_0$,
the nodes $t_{i,j}$, $j<k^*$, have length $l^*$;
and for each $i\in I_1$,
the nodes $t_{i,j}$, $j<k^*$, have length in the interval $(l^T_{n-1},l^T_n)$,
where $n$ is the index of the coding node in $T$ of length $l^*$.
\begin{lem}\label{lem.j=j'(c)}
Given any $\vec\al,\vec\beta\in \prod_{i\le d}K_i$,
if $j,k<k^*$ and $\delta_{\vec\al}(j)=\delta_{\vec\beta}(k)$,
then $j=k$.
\end{lem}
For any $\vec\al\in \prod_{i\le d}K_i$ and any $\iota\in\mathcal{I}$, there is a $\vec\theta\in[K]^{2d+2}$ such that $\vec\al=\iota_o(\vec\theta)$.
By homogeneity of $f$, there is a strictly increasing sequence
$\lgl j_i:i\le d\rgl$ of members of $k^*$ such that for each $\vec\al\in \prod_{i\le d}K_i$,
$\delta_{\vec\al}(j_i)=\al_i$.
For each $i\le d$, let $t^*_i$ denote $t_{i,j_i}$.
Then for each $i\le d$ and each $\vec\al\in \prod_{i\le d}K_i$,
\begin{equation}
p_{\vec\al}(i,\al_i)=p_{\vec{\al}}(i, \delta_{\vec\al}(j_i))=t_{i,j_i}=t^*_i.
\end{equation}
\begin{lem}\label{lem.compat(c)}
For any finite subset $\vec{J}\sse \prod_{i\le d}K_i$,
the set of conditions $\{p_{\vec{\al}}:\vec{\al}\in \vec{J}\,\}$ is compatible.
Moreover,
$p_{\vec{J}}:=\bigcup\{p_{\vec{\al}}:\vec{\al}\in \vec{J}\,\}$
is a member of $\bQ$ which is below each
$p_{\vec{\al}}$, $\vec\al\in\vec{J}$.
\end{lem}
\begin{lem}\label{lem.subclaimA(c)}
If $\beta\in \bigcup_{i\le d}K_i$,
$\vec{\al}\in\prod_{i\le d}K_i$,
and $\beta\not\in\vec\al$,
then
$\beta$ is not a member of $\vec{\delta}_{\vec{\al}}$.
\end{lem}
\noindent{\bf \underline{Part II}.}
Let $(n_j)_{j\in \bN}$ denote the set of indices for which there is an
$X\in \MPE_T(A,C)$
with $X=\max(V)$ for some
$V\in r_{n_j}[B,T]$.
For $i\in I_0$,
let $u^*_i=t^*_i$.
For $i\in I_1$,
let $u_i^*$ be the leftmost extension of $t^*_i$ in $T\re l^*$.
Note that $\{u_i^*:i\le d\}$ has no new pre-cliques over $A_e$, since leftmost extensions of splitting nodes with no new pre-cliques add no new pre-cliques; this follows from the Witnessing Property of $T$.
Extend each node $u$ in $B^+\setminus A_e$ to its leftmost extension in $T\re l^*$ and label that extension $u^*$.
Let
\begin{equation}
U^*=\{u^*_i:i\le d\}\cup\{u^*:u\in B^+ \setminus A_e\}.
\end{equation}
Then $U^*$ extends $B^+$,
and $U^*$ has no new pre-cliques over $B$.
Let $n_{-1}$ be the integer such that
$B=r_{n_{-1}}(B)$.
Take
$S_0\in r_{n_0}[B,T]$
such that
the nodes in $\max(r_{n_{-1}+1}(S_0))$ extend the nodes in $U^*$.
This is possible by Lemma \ref{lem.HLconstruction}.
Suppose that $j\in\bN$, and for all $i<j$, we have chosen $S_i\in r_{n_i}[B,T]$ such that
$i<i'<j$ implies
$S_i\sqsubset S_{i'}$, and
$h'$ is constant with value $\varepsilon^*$ on $\Ext_{S_{i}}(A,P)$.
Take $V_j\in r_{n_j}[S_{j-1},T]$, and
let $X$ denote $\max(V_j)$.
Notice that
each member of
$\Ext_X(A,P)$
extends the nodes in $U^*$.
By the definition of $n_j$, the set of nodes $X$ contains a coding node.
For each $i\in I_0$,
let $Y_i$ denote the set of all $t\in T_i\cap X$
which have immediate extension $0$ in $T$.
Let $n$ be such that $l_X=l^T_n$.
For each $i\in I_1$,
let $Y_i$ denote the set of all
splitting predecessors of nodes in
$T_i\cap X$ which split in the interval
$(l^T_{n-1},l^T_n]$ of $T$.
For each $i\le d$,
let $J_i$ be a subset of $K_i$ of size $|Y_i|$,
and enumerate the members of $Y_i$ as $q(i,\delta)$, $\delta\in J_i$.
Let $\vec{J}$ denote the set of $\vec\al\in\prod_{i\le d}J_i$ such that the set $\{q(i,\al_i):i\le d\}$
has no new pre-cliques over $A$.
Thus,
the collection of sets $\{q(i,\al_i):i\le d\}$, $\vec\al\in \vec{J}$, is exactly the collection of sets of nodes in the interval $(l^T_{n-1},l^T_n]$ of $T$
which are members of
$\Ext_{T}(A,P)$.
Moreover, for
$\vec\al\in \vec{J}$
and $i\le d$,
\begin{equation}
q(i,\al_i)\contains t^*_i=p_{\vec{\al}}(i,\al_i).
\end{equation}
To complete the construction of the desired $q\in \bQ$ for which $q\le p_{\vec\al}$ for all $\vec\al\in \vec{J}$,
let $\vec{\delta}_q=\bigcup\{\vec{\delta}_{\vec\al}:\vec\al\in \vec{J}\}$.
For each pair $(i,\gamma)$ with $\gamma\in\vec{\delta}_q\setminus
J_i$,
there is at least one $\vec{\al}\in\vec{J}$ and some $m<k^*$ such that $\gamma=\delta_{\vec\al}(m)$.
As in Case (a) of Theorem \ref{thm.matrixHL},
for any other $\vec\beta\in\vec{J}$ for which $\gamma\in\vec{\delta}_{\vec\beta}$,
it follows
that
$p_{\vec\beta}(i,\gamma)=p_{\vec{\al}}(i,\gamma)=t_{i,m}$ and $\delta_{\vec\beta}(m)=\gamma$.
If $i\in I_0$,
let $q(i,\gamma)$ be the leftmost extension
of $t_{i,m}$ in $T\re l^{V_j}_{n_j}$.
If $i\in I_1$, let $q(i,\gamma)$ be the leftmost extension of $t_{i,m}$ to a splitting node in $T$
in the interval
$(l^{V_j}_{n_j-1}, l^{V_j}_{n_j}]$.
Such a splitting node must exist, because
the coding node in $X$ must have no pre-cliques
with $t_{i,m}$ (since $i\in I_1$).
Thus, by Lemma \ref{lem.poc},
the leftmost extension of $t_{i,m}$ in $T$ to length $l_X$ has no pre-cliques with the coding node in $X$, so it has a splitting predecessor in the interval
$(l^{V_j}_{n_j-1}, l^{V_j}_{n_j}]$.
Define
\begin{equation}
q=\bigcup_{i\le d}\{\lgl (i,\al),q(i,\al)\rgl: \al\in \vec{\delta}_q\}.
\end{equation}
By a proof similar to that of
Claim \ref{claim.qbelowpal},
it follows that
$q\le p_{\vec\al}$,
for each $\vec\al\in \vec{J}$.
Take an $r\le q$ in $\bQ$ which decides some $l_j$ in $L$,
and such that
for all $\vec\al\in\vec{J}$, $h'(\dot{b}_{\vec\al}\re l_j)=\varepsilon^*$.
Without loss of generality, we may assume that the maximal nodes in $r$ have length $l_j$.
If $q(i',\al')$ is a coding node for some $i'\in I_0$ and $\al'\in J_{i'}$,
then let $c_r$ denote $r(i',\al')$;
otherwise, let $c_r$ denote the leftmost extension in $T$ of the coding node in $X$ to length $l_j$.
Let $c_X$ denote the coding node in $X$.
Let $Z_0$ denote those nodes in $\splitpred_T(X)$ which
have length equal to
$l_X$
and
are not in $\bigcup_{i\in I_0} Y_i$.
For each $z\in Z_0$,
let $s_z$ denote the leftmost extension of $z$ in $T$ to length $l_j$.
Let $Z_1$ denote the set of all
nodes in $\splitpred_T(X)$ which are not in
$Z_0\cup \bigcup_{i\le d} Y_i$.
For each $z\in Z_1$, let $s_z$ denote the splitting predecessor of the leftmost extension of $z$ in $T$ to length $l_j$.
This splitting node $s_z$ exists in $T$ for the following reason:
If $z$ is a splitting predecessor of a node in $X$,
then $z$ has no pre-$k$-clique with $c_X$, so the leftmost extension of $z$ to any length has no pre-$k$-cliques with any extension of $c_X$.
In particular, the set $\{s_z:z\in Z_0\cup Z_1\}$ has no new pre-cliques over $X$.
Let
\begin{equation}
Z^-=\{r(i,\al):i\le d,\ \al\in J_i\}\cup\{s_z:z\in Z_0\cup Z_1\}.
\end{equation}
Let $Z^*$ denote all extensions in $T$ of the members of $Z^-$ to length $l_j$.
Note that $Z^-=\splitpred_T(Z^*)$, which end-extends $\splitpred_T(X)$.
Let $m$ denote the index such that the maximal coding node in $V_j$ below $c_X$ is $c^{V_j}_{n_m}$.
Note that $Z^*$ has no new pre-cliques over $\splitpred_T(X)$;
furthermore, the tree induced by
$r_{n_m}(V_j)\cup Z^*$ is strongly similar to $V_j$, except that the coding node may be in the wrong place.
Using Lemma \ref{lem.HLconstruction},
there is an $S_j\in r_{n_j}[r_{n_m}(V_j),T]$ with $\max(S_j)$ extending $Z^*$.
Then every member of $\Ext_{S_j}(A,P)$ has the same $h'$ color $\varepsilon^*$,
by the choice of $r$,
since each minimal pre-extension in $\MPE_{S_j}(A,C)$ extends some member of $\Ext_{S_j}(A,P)$
which extends members in $\ran(r)$ and so has $h'$-color $\varepsilon^*$.
Let $S=\bigcup_{j\in\bN} S_j$.
Then $S$ is a strong coding tree in $[B,T]$.
Given any $Y\in\Ext_S(A,C)$,
there is some
$X\in\MPE_S(A,C)$
such that $Y$ extends $X$.
Since
$\SP_S(X)$ is in $\Ext_{S_j}(A,P)$ for some $j\in\bN$,
$\SP_S(X)$ has
$h'$ color $\varepsilon^*$.
Thus,
$Y$ has
$h$-color $\varepsilon^*$.
\end{proof}
\begin{lem}\label{lem.Case(b)}
Assume \ref{assumption.6}.
Then there is a strong coding subtree $S\le T$ such that for each copy $A'$ of $A$ in $S$,
$h$ is homogeneous on $\Ext_S(A',C)$.
\end{lem}
\begin{proof}
Let $(n_i)_{i\in\bN}$ be the sequence of integers such that $r_{n_i}(T)$ contains a copy $A'$ of $A$
which is valid in
$r_{n_i}(T)$ and such that $\max(A')\sse\max(r_{n_i}(T))$.
Let $n_{-1}=0$, $T_{-1}=T$,
and
$U_{-1}=r_{0}(T)$.
Suppose $i\in\bN$, and $U_{i-1}\cong r_{n_{i-1}}(T)$
and $T_{i-1}$ are given satisfying that for each copy $A'$ of $A$
valid in $U_{i-1}$ with $\max(A')\sse \max(U_{i-1})$,
$h$ is homogeneous on $\Ext_{U_{i-1}}(A',C)$.
Let $U_i$ be in $r_{n_i}[U_{i-1},T_{i-1}]$.
Enumerate all
copies
$A'$ of $A$ which are valid in $U_i$ and have $\max(A')\sse \max(U_i)$ as $\lgl A_0,\dots,A_m\rgl$.
Apply Lemma \ref{lem.endhomog} to obtain $R_{0}\in [U_i,T_{i-1}]$ which is end-homogeneous for $\Ext_{R_0}(A_0,C)$.
Then
apply Lemma \ref{lem.Case(c)} to
obtain $R'_{0}\in [U_i,R_0]$
such that $\Ext_{R'_{0}}(A_0,C)$ is homogeneous for $h$.
Given $R'_{j}$ for $j<m$,
apply Lemma \ref{lem.endhomog} to obtain a $R_{j+1}\in [U_i,R'_{j}]$ which is end-homogeneous for $\Ext_{R_{j+1}}(A_{j+1},C)$.
Then
apply Lemma \ref{lem.Case(c)} to
obtain $R'_{j+1}\in [U_i,R_{j+1}]$
such that $\Ext_{R'_{j+1}}(A_{j+1},C)$ is homogeneous for $h$.
Let $T_i=R'_m$.
Let $U=\bigcup_{i\in\bN}U_i$.
Then $U\le T$ and $h$ has the same color on $\Ext_U(A',C)$
for each copy
$A'$ of $A$ which is valid in $U$.
Finally, take $S\le U$ such that for each $m\in\bN$,
$r_m(S)$ is valid in $U$.
Then
each copy $A'$ of $A$ in $S$ is valid in $U$.
Hence,
$h$ is homogeneous on $\Ext_S(A',C)$, for each copy $A'$ of $A$ in $S$.
\end{proof}
For the setting of Case (a) in Theorem \ref{thm.matrixHL}, a similar lemma holds.
The proof is omitted, as it is almost identical, making the obvious changes.
\begin{lem}\label{lem.fusionsplit}
Let $T$ be a member of $\mathcal{T}_k$ and let $A,C,h$ be as in Case (a) of Theorem \ref{thm.matrixHL}.
Then there is a strong coding tree $S\le T$ such that for each $A'\sse S$ with $A'\cong A$,
$\Ext_S(A',C)$ is homogeneous for $h$.
\end{lem}
Finally, for the case of $k\ge 4$, a new phenomenon appears: new singleton pre-$a$-cliques for $a\ge 4$ must be dealt with.
The steps are very similar to the case when the level set $\tilde{X}$ contains a coding node.
\vskip.1in
\noindent\bf Case (b${}'$). \rm
Assume $k\ge 4$.
Let $A$ and $C$ be fixed non-empty finite valid subtrees of a strong coding tree $T\in\mathcal{T}_k$ such that
\begin{enumerate}
\item
$A$ and $C$ both satisfy the Strict Witnessing Property; and
\item
$C\setminus A$ is a level set containing exactly one new singleton pre-$a$-clique, for some $a\in [4,k]$.
\end{enumerate}
Given a copy $A'$ of $A$ in a strong $\mathcal{H}_k$-coding tree $T$, let $\Ext_T(A',C)$ denote the set of all $C'$ contained in $T$ which end-extend $A'$ and such that $C'\cong C$.
\begin{lem}[Case (b${}'$)]\label{lem.Case(bb)}
Given $A$ and $C$ as above, there is a strong coding subtree $S\le T$ such that for each copy $A'$ of $A$ in $S$,
$h$ is homogeneous on $\Ext_S(A',C)$.
\end{lem}
\begin{proof}
The proof follows from simple modifications of the proofs of
Case (b) in Theorem
\ref{thm.matrixHL} and
Lemmas \ref{lem.endhomog}, \ref{lem.Case(c)}, and \ref{lem.Case(b)}.
Just replace the coding node in Case (b) with the new singleton
pre-$a$-clique in Case (b$'$).
Splitting predecessors work as before.
\end{proof}
\noindent {\bf Proof of Theorem \ref{thm.MillikenSWP}}.
The proof is by induction on the number of critical nodes, where in this proof, by {\em critical node} we mean a coding node, a splitting node, or
a new singleton pre-$a$-clique for some $a\in[4,k]$.
Suppose first that $A$ consists of a single node.
Then $A$ is a single splitting node in $T$ on the leftmost branch of $T$,
so the strongly isomorphic copies of
$A$ are exactly the leftmost splitting nodes
in $T$.
Recall that $\Seq[0]$ denotes the set of all finite length sequences of $0$'s.
Thus, the copies of $A$ in $T$ are exactly those
splitting nodes in $T$ which are members of $\Seq[0]$.
Let $h$ be any finite coloring on the splitting nodes in the leftmost branch of $T$.
By Ramsey's Theorem,
infinitely many splitting nodes in the leftmost branch of $T$ must have the same $h$ color.
By the Extension Lemmas in Section \ref{sec.ExtLem},
there is a subtree $S\le T$
in which all splitting nodes in the leftmost branch of $S$ have the same $h$ color.
Now assume that $n\ge 1$ and the theorem holds
for each finite tree $B$ with $n$ or fewer critical nodes
such that $B$ satisfies the SWP and $\max(B)$ contains a node which is a sequence of all $0$'s.
Let $C$ be a finite tree with $n+1$ critical nodes
containing a maximal node in $\Seq[0]$, and suppose
$h$ maps the copies of $C$ in $T$ into finitely many colors.
Let $d$ denote the maximal critical node in $C$ and let
$B=\{t\in C: |t|<|d|\}$.
Apply
Lemma \ref{lem.Case(b)}, \ref{lem.fusionsplit} or \ref{lem.Case(bb)}, as appropriate,
to obtain $T'\le T$ so that for each copy $V$ of $B$ in $T'$, the set $\Ext_{T'}(V,C)$ is homogeneous for $h$.
Define $g$ on the copies of $B$ in $T'$ by letting $g(V)$ be the value of $h$ on $V\cup X$ for any $X\in\Ext_{T'}(V,C)$.
By the induction hypothesis,
there is an $S\le T'$ such that $g$ is homogeneous on all copies of $B$ in $S$.
It follows that $h$ is homogeneous on the copies of $C$ in $S$.
To finish, let $A$ be any finite tree satisfying the SWP.
If $\max(A)$ does not contain a member of $\Seq[0]$,
let $l_A$ denote the longest length of nodes in $A$,
and let $\tilde{A}$ be the tree induced by $A\cup\{0^{(l_A)}\}$.
Otherwise, let $\tilde{A}=A$.
Let $g$ be a finite coloring of the copies of $A$ in $T$.
To each copy $B$ of $\tilde{A}$ in $T$ there corresponds a unique copy of $A$ in $T$, denoted $\varphi(B)$:
If $\tilde{A}=A$, then $\varphi(B)=B$;
if $\tilde{A}\ne A$, then $\varphi(B)$ is $B$ with the leftmost node in $\max(B)$ removed.
For each copy $B$ of $\tilde{A}$, define
$h(B)=g(\varphi(B))$.
Take $S\le T$ homogeneous for $h$.
Then $S$ is homogeneous for $g$ on the copies of $A$ in $S$.
\hfill $\square$
\section{Main Ramsey Theorem for strong $\mathcal{H}_k$-coding trees}\label{sec.MainThm}
The third phase of this article takes place in this and the next section.
Subsection \ref{sec.squiggle} develops the notion of incremental trees, which sets the stage for envelopes for incremental antichains.
These envelopes
transform finite antichains of coding nodes to finite trees with the Strict Witnessing Property,
enabling applications of
Theorem \ref{thm.MillikenSWP} to deduce
Theorem \ref{thm.mainRamsey}.
This theorem takes a finite coloring of
all antichains of coding nodes strictly similar to a given finite antichain of coding nodes
and finds a strong coding tree in which the coloring has one color.
After showing in Lemma \ref{lem.bD} that any strong coding tree contains an antichain of coding nodes coding a Henson graph,
we will apply Theorem \ref{thm.mainRamsey} to prove that each Henson graph has finite big Ramsey degrees, thus obtaining the main result of this paper in Theorem \ref{finalthm}.
\subsection{Incremental trees}\label{sec.squiggle}
The new notions of {\em incremental new pre-cliques}, {\em incrementally witnessed pre-cliques}, and {\em incremental trees} are now defined.
The main lemma of this subsection, Lemma \ref{lem.squiggletree}, shows that given a strong coding tree $T$,
there is an incremental strong coding subtree $S\le T$ and
a set $W\sse T$ of coding nodes disjoint from $S$ such that
all pre-cliques in $S$ are incrementally witnessed by coding nodes in $W$.
This sets the stage for the development of {\em envelopes} with the Strict Witnessing Property in the next subsection.
\begin{defn}[Incremental Pre-Cliques]\label{defn.incrementalpo}
Let $S$ be a subtree of $\bT_k$,
and let $\lgl l_j:j<\tilde{j}\rgl$
list in increasing order the minimal lengths
of new pre-cliques in $S$, except for singleton new pre-$3$-cliques.
We say that
$S$ has {\em incremental new pre-cliques},
or simply $S$ is
{\em incremental},
if
letting
\begin{equation}
S_{l_j,1}:=\{t\re l_j:t\in S, \ |t|>l_j,\mathrm{\ and \ } t(l_j)=1\},
\end{equation}
the following hold
for each $j<\tilde{j}$:
\begin{enumerate}
\item
$S_{l_j,1}$
is a new pre-$a$-clique for some $a\in [3,k]$, and no proper subset of $S_{l_j,1}$ is a new pre-$b$-clique for any $b\in[3,k]$;
\item
If $a=3$ and
$S_{l_j,1}$ has more than two members, then for each
proper subset $X\subsetneq S_{l_j,1}$ of size at least $2$,
for some $i<j$, $X\re l_i=S_{l_i,1}$ and is also a pre-$3$-clique;
\item
If $a>3$ and
$S_{l_j,1}$ has at least two members, then for each
proper subset $X\subsetneq S_{l_j,1}$,
for some $i<j$, $X\re l_i=S_{l_i,1}$ and is also a pre-$a$-clique;
\item
If $a>3$,
then there are $l_{j-1}<l^3<\dots<l^a=l_j$ such that for each $3\le b\le a$,
$S_{l_j,1}\re l^b$ is a pre-$b$-clique.
Furthermore, for some $m$, $|d^S_m|<l^3<l^a=l_j<|d^S_{m+1}|$.
\end{enumerate}
A tree $S\in\mathcal{T}_k$ is called an {\em incremental strong coding tree}
if $S$ is incremental and moreover,
the node
$d^S_{m+1}$ in (4) is a coding node in $S$.
\end{defn}
Note that every subtree of an incremental strong coding tree is incremental, but a strong coding subtree of an incremental strong coding tree need not be an incremental strong coding tree.
Note also that in (4), the pre-$b$-cliques
for $3\le b<a$
at the levels $l^3$ through $l^a$ are not new, but they build up to the new pre-$a$-clique in the interval $(l_{j-1},l_j]$.
This redundancy will actually make the definition of the envelopes simpler.
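For instance, if $k\ge 5$ and $S_{l_j,1}$ is a new pre-$5$-clique, then condition (4) requires levels
\begin{equation}
l_{j-1}<l^3<l^4<l^5=l_j,
\end{equation}
where $S_{l_j,1}\re l^3$ is a pre-$3$-clique and $S_{l_j,1}\re l^4$ is a pre-$4$-clique, and all of these levels lie in the interval $(|d^S_m|,|d^S_{m+1}|)$ for some $m$.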
\begin{defn}[Incrementally Witnessed Pre-Cliques]\label{defn.incremental}
Let $S,T\in \mathcal{T}_k$ be such that $S$ is incremental and
$S\le T$.
We say that the pre-cliques in $S$ are {\em incrementally witnessed} by a set of witnessing coding nodes $W\sse T$
if the following hold.
Given that $\lgl l_j:j\in\bN\rgl$ is the increasing enumeration of the minimal lengths of new pre-cliques in $S$,
for each $j\in\bN$ the following hold:
\begin{enumerate}
\item
$|d^S_{m_n-1}|<l_j< l^S_n$ for some $n\in\bN$.
\item
If $S_{l_j,1}$ is a new pre-$a_j$-clique of size at least two, where
$a_j\in [3,k]$,
then there exist coding nodes
$w_j^3,\dots, w_j^{a_j}$ in $T$ such that,
letting $W$ denote
$\bigcup_{j\in\bN}\{ w_j^3,\dots, w_j^{a_j}\}$,
the set of all these witnessing coding nodes,
\begin{enumerate}
\item[(a)]
The set of nodes
$\{ w_j^3,\dots ,w_j^{a_j}\}$ forms a pre-$(a_j-2)$-clique which witnesses the pre-$a_j$-clique in $S_{l_j,1}$.
\item[(b)]
The nodes in $\{ w_j^3,\dots ,w_j^{a_j}\}$ do not form pre-cliques with any nodes in
$(W\setminus \{ w_j^3,\dots ,w_j^{a_j} \})\cup (S\re |w^{a_j}_j|\setminus S_{l_j,1}')$
where $S_{l_j,1}'$ denotes the set of nodes in $S\re |w^{a_j}_j|$ which end-extend $S_{l_j,1}$.
\item[(c)]
If $Z\sse \{ w_j^3,\dots ,w_j^{a_j} \}\cup (S\re |w^{a_j}_j|)$ forms a pre-clique, then
$Z\re l_j\cap S$ must be contained in $S_{l_j,1}$.
\end{enumerate}
Recalling that
$w^{\wedge}$ denotes $w\re l$, where $l$ is least such that $w(l)\ne 0$, we have
\begin{enumerate}
\item[(d)]
$|d^S_{m_n-1}|<|(w_j^3)^{\wedge}|<\dots<|(w_j^{a_j})^{\wedge}|<
|w_j^3|<\dots<|w_j^{a_j}|$.
\item[(e)]
If $|d^S_{m_n-1}|<l_{j+1}< l^S_n$,
then
$\max(l_{j},|w^{a_j}_j|)<
|(w_{j+1}^3)^{\wedge}|$.
\end{enumerate}
\end{enumerate}
\end{defn}
For $k=3$, in the terminology of \cite{DobrinenJML20}, (c) says that the only nodes in $S$ with which
$\{w_j^3,\dots, w_j^{a_j} \}$ has parallel $1$'s (pre-$3$-cliques) are in $S_{l_j,1}$.
In what follows, we shall say that a strong coding tree $S$ such that $S\le T$ is {\em valid} in $T$ if for each $m\in\bN$, $r_m(S)$ is valid in $T$.
Since $S$ is a strong coding tree, this is equivalent to $\max(r_m(S))$ being free in $T$ for each $m\in\bN$.
\begin{lem}\label{lem.squiggletree}
Let $T\in\mathcal{T}_k$ be a strong coding tree.
Then there is an incremental strong coding tree $S\le T$ and a set of coding nodes $W\sse T$ such that each
new pre-clique in $S$ is incrementally
witnessed in $T$ by coding nodes in $W$.
\end{lem}
\begin{proof}
Recall that for any tree $T\in\mathcal{T}_k$,
the sequence
$\lgl m_n:n\in\bN\rgl$ denotes the indices
such that $d^T_{m_n}=c^T_n$; that is, the $m_n$-th critical node in $T$ is the $n$-th coding node in $T$.
If $k=3$, fix some $S_0\in r_{m_0+1}[0,T]$ which is
valid in $T$.
Then $S_0$ has exactly one coding node, $c^{S_0}_0$, and it has ghost coding node $c^{S_0}_{-1}$, which is the shortest splitting node in $S_0$.
There are no pre-cliques in $S_0$.
If $k\ge 4$, let $S_{-1}$ consist of the stem of $T$, that is, $d^T_0$.
For $k=3$ and $n\ge 1$, or for $k\ge 4$ and $n\ge 0$, proceed as follows:
Suppose that we have chosen $S_{n-1}\in r_{m_{n-1}+1}[0,T]$
valid in $T$ and $W_{n-1}\sse T$ so that $S_{n-1}$ is incremental and
each new pre-clique in $S_{n-1}$ is
incrementally witnessed by some coding nodes in $W_{n-1}$.
Take some $U_n\in r_{m_n+1}[S_{n-1},T]$ such that $r_{m_n}(U_n)$ is valid in $T$.
Let $V=\max(r_{m_n}(U_n))$.
Let $\lgl X_j:j<\tilde{j}\rgl$ enumerate those subsets of $\max(U_n)$ which have new
pre-cliques over $r_{m_n}(U_n)$ so that
for each pair $j<j'<\tilde{j}$,
if $X_j$ is a new pre-$a$-clique
and $X_{j'}$ is a new pre-$a'$-clique,
then
\begin{enumerate}
\item
$a\le a'$;
\item
If $a=a'$, then $X_j\not\contains X_{j'}$.
\end{enumerate}
Note that (1) implies that, in the case that $k\ge 4$, for each $a\in [3,k-1]$, all new pre-$a$-cliques are enumerated before any new pre-$(a+1)$-clique is enumerated.
Furthermore, every new pre-clique in $\max(U_n)$ over $r_{m_n}(U_n)$ is enumerated in $\lgl X_j:j<\tilde{j}\rgl$ whether or not it is maximal.
By (2), all new pre-$a$-cliques composed of two nodes are listed before any new pre-$a$-clique consisting of three nodes, etc.
For each $j<\tilde{j}$,
let $Y_j=X_j\re (l_V+1)$.
By properties (1) and (2), $X_0$ must be a pre-$3$-clique consisting of two nodes.
The construction process in this case is similar to the construction above for $S_0$ when $k>3$.
By Lemma \ref{lem.perfect},
there is a splitting node $s\in T$ such that $s$ is a sequence of $0$'s and $|s|> l_V+1$.
Extend all nodes in $V$ leftmost in $T$ to the length $|s|$, and call this set of nodes $Z$.
Apply Lemma \ref{lem.pnc} to obtain $V_0$ end-extending $Z$ so that the following hold:
The
node in $V_0$ extending $s^{\frown}1$ is a coding node, call it $w_{n,0}$;
the two nodes in $V_0$ extending the nodes in $Y_0$ both have passing number $1$ at $w_{n,0}$;
all other nodes in $V_0$ are leftmost extensions of the nodes in $V^+\setminus Y_0$;
and the only new pre-clique in $V_0$ is the nodes in $V_0$ extending $Y_0$.
Let $W_{n,0}=\{w_{n,0}\}$.
Given $j<\tilde{j}-1$ and $V_j$,
let $Y'_{j+1}$ be the set of those nodes in $V_j$ which extend the nodes in $Y_{j+1}$.
Let $a\in[3,k]$ be such that $X_{j+1}$ is a new pre-$a$-clique.
Applying Lemma \ref{lem.perfect} $a-2$ times,
obtain
splitting nodes $s_i$, $i<a-2$, in $T$ which are sequences of $0$'s such that
$l_{V_j}<|s_0|<\dots<|s_{a-3}|$.
Extend all nodes ${s_i}^{\frown}1$, $i<a-2$,
leftmost in $T$ to length $|s_{a-3}|+1$;
and extend
the nodes in $V_j$ leftmost in $T$ to length $|s_{a-3}|+1$ and denote this set of nodes as $Z$.
By Lemma \ref{lem.poc}, this adds no new pre-cliques over $V_j$.
Next apply Lemma \ref{lem.pnc} $a-2$ times to obtain $V_{j+1}$ end-extending $Z$ and coding nodes $w_{n,j+1,i}\in T$, $i<a-2$,
such that
letting $Y''_{j+1}$ be those nodes in $V_{j+1}$ extending nodes in $Y'_{j+1}$,
the following hold:
\begin{enumerate}
\item
$|w_{n,j+1,0}|<\dots<|w_{n,j+1,a-3}|$;
\item
The nodes in $V_{j+1}$ all have length
$|w_{n,j+1,a-3}|$;
\item
For each $i<a-2$,
all nodes in
$\{w_{n,j+1,i'}:i<i'<a-2\}
\cup Y''_{j+1}$ have passing number $1$ at
$w_{n,j+1,i}$.
\item
All nodes in $V_{j+1}\setminus Y''_{j+1}$ are leftmost extensions of nodes in $V_j\setminus Y'_{j+1}$.
\item
The only new pre-clique
in $V_{j+1}$ above $V^+$ is the set of nodes in
$Y''_{j+1}$.
\end{enumerate}
Let $W_{n,j+1}=\{w_{n,j+1,i}:i<a-2\}$.
After $V_{\tilde{j}-1}$ has been constructed,
take some $S_n\in r_{m_n+1}[r_{m_n}(U_n),T]$ such that $\max(S_n)$ end-extends $V_{\tilde{j}-1}$, by Lemma \ref{lem.HLconstruction}.
Let $W_n=\bigcup_{j<\tilde{j}}W_{n,j}$.
To finish, let $S=\bigcup_{n\in\bN} S_n$ and $W=\bigcup_{n\in\bN}W_n$.
Then $S\le T$, $S$ is incremental, and
the pre-cliques in $S$ are incrementally witnessed by coding nodes in $W$.
\end{proof}
\subsection{Ramsey theorem for strict similarity types}\label{sec.1color}
The main Ramsey theorem for strong coding trees
is Theorem \ref{thm.mainRamsey}:
Given a finite coloring of all strictly similar copies (Definition \ref{defn.ssimtype})
of a fixed finite antichain in an incremental strong coding tree,
there is a subtree which is again a strong coding tree in which all strictly similar copies of the antichain have the same color.
Such antichains will have envelopes which have the Strict Witnessing Property.
Moreover, all envelopes of a fixed incremental antichain of coding nodes will be strongly isomorphic to each other.
This will allow for an application of
Theorem \ref{thm.MillikenSWP} to obtain
the same color for all copies of a given envelope, in some subtree in $\mathcal{T}_k$.
From this, we will deduce Theorem \ref{thm.mainRamsey}.
Recall that a set of nodes $A$ is an {\em antichain} if no node in $A$ extends any other node in $A$.
In what follows, by {\em antichain}, we mean an antichain of coding nodes.
If $Z$ is an antichain,
then the {\em tree induced by $Z$}
is the set of nodes
\begin{equation}
\{z\re |u|:z\in Z\mathrm{\ and\ } u\in Z^{\wedge}\}.
\end{equation}
We say that an antichain satisfies the Witnessing Property (Strict Witnessing Property) if and only if the tree it induces satisfies the Witnessing Property (Strict Witnessing Property).
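For concreteness, consider a hypothetical antichain $Z=\{z_0,z_1\}$ of two coding nodes with $|z_0|<|z_1|$ and meet $u=z_0\wedge z_1$, under the convention that $t\re l=t$ whenever $l\ge |t|$. Then $Z^{\wedge}=\{u,z_0,z_1\}$ and the tree induced by $Z$ is
\begin{equation}
\{u,\ z_0,\ z_1\re |z_0|,\ z_1\},
\end{equation}
the restrictions of the members of $Z$ to the lengths of the members of $Z^{\wedge}$.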
Fix, for the rest of this section, an incremental strong coding tree
$T\in\mathcal{T}_k$, as in
Lemma \ref{lem.squiggletree}.
Notice that any strong coding subtree of $T$ will also be incremental.
Furthermore, any antichain in $T$ must be incremental.
\begin{defn}[Strict similarity type]\label{defn.ssimtype}
Suppose $Z\sse T$ is a finite antichain of coding nodes.
Enumerate the nodes of $Z$ in increasing order of length as $\lgl z_i:i<\tilde{i}\rgl$.
Enumerate all nodes in $Z^{\wedge}$ as $\lgl u^Z_m:m< \tilde{m}\rgl$ in order of increasing length.
Thus, each $u^Z_m$ is either a splitting node in $Z^{\wedge}$ or else a coding node in $Z$.
List the minimal levels of new pre-cliques in $Z$ (excluding, as usual, new singleton pre-$3$-cliques)
in increasing order as $\lgl l_j: j<\tilde{j}\rgl$.
For each $j<\tilde{j}$, let $I^Z_{l_j}$ denote the set of those
$i<\tilde{i}$ such that
$z_i\re l_j$ belongs to the new pre-clique in $Z\re l_j$; thus,
$\{z_i\re l_j :i\in I^Z_{l_j} \}$ is the new pre-clique in $Z\re l_j$.
The sequence
\begin{equation}
\lgl \lgl l_j:j<\tilde{j}\rgl,
\lgl I^Z_{l_j}:j< \tilde{j}\rgl,
\lgl |u^Z_m|:m<\tilde{m}\rgl\rgl
\end{equation}
is {\em the strict similarity sequence of $Z$}.
Let $Y$ be another finite antichain in $T$, and
let
\begin{equation}
\lgl
\lgl p_j:j<\tilde{k}\rgl,
\lgl I^Y_{p_j}:j< \tilde{k}\rgl,
\lgl |u^Y_m|:m<\tilde{q}\rgl\rgl
\end{equation}
be its strict similarity sequence.
We say that $Y$ and $Z$ have the same {\em strict similarity type} or are {\em strictly similar}, and write $Y\sssim Z$, if
\begin{enumerate}
\item
The tree induced by
$Y$ is strongly isomorphic to the tree induced by $Z$, so in particular, $\tilde{m}=\tilde{q}$;
\item
$\tilde{j}=\tilde{k}$;
\item
For each $j<\tilde{j}$, $I^Y_{p_j}=I^Z_{l_j}$;
and
\item
The function $\varphi:\{p_{j}:j<\tilde{j}\}\cup\{|u^Y_m|:m<\tilde{m}\}\ra \{l_{j}:j<\tilde{j}\}\cup\{|u^Z_m|:m<\tilde{m}\}$,
defined by $\varphi(p_{j})=l_{j}$ and
$\varphi(|u^Y_m|)=|u^Z_m|$,
is an order preserving bijection between these two linearly ordered sets of natural numbers.
\end{enumerate}
Define
\begin{equation}
\Sim^{ss}_T(Z)=\{Y\sse T:Y\sssim Z\}.
\end{equation}
\end{defn}
Note that if $Y\sssim Z$, then
the map $f:Y\ra Z$ defined by $f(y_i)=z_i$, for each $i<\tilde{i}$, induces the strong similarity map from the tree induced by $Y$ onto the tree induced by $Z$.
Then $f(u^Y_m)=u^Z_m$, for each $m<\tilde{m}$.
Further,
by (3) and (4) of Definition \ref{defn.ssimtype},
this map preserves the order in which new pre-cliques appear, relative to all other new pre-cliques in $Y$ and $Z$ and the nodes in $Y^{\wedge}$ and $Z^{\wedge}$.
The following notion of envelope is defined in terms of structure without regard to an ambient strong coding tree.
Given a fixed incremental strong coding tree $T$,
in any given strong coding subtree $U\le T$, there will
certainly be finite subtrees of $U$ which have no envelope in $U$.
The point of Lemma \ref{lem.squiggletree} is that
there will be
a strong coding subtree $S\le U$ along with a set of witnessing coding nodes $W\sse U$ so that each finite antichain in $S$ has an envelope consisting of nodes from $W$.
Thus, envelopes of antichains in $S$ will exist in $U$.
Moreover, $S$ must be incremental, since $S\le U\le T$.
\begin{defn}[Envelopes]\label{defn.envelope}
Let $Z$ be a finite incremental antichain of coding nodes.
An envelope of $Z$, denoted $E(Z)$, consists of $Z$ along with a set of coding nodes $W$ such that $Z\cup W$ satisfies Definition \ref{defn.incremental}.
\end{defn}
Thus, all new pre-cliques in an envelope
$E(Z)=Z\cup W$ are incrementally witnessed by coding nodes in $W$.
The set $W$ is called the set of {\em witnessing coding nodes} in the envelope.
The next fact follows immediately from the definitions.
\begin{fact}\label{fact.Claim1}
Let
$Z$ be any antichain in an incremental strong coding tree.
Then any envelope of $Z$ has incrementally witnessed pre-cliques, which implies that the envelope has
the Strict Witnessing Property.
\end{fact}
\begin{lem}\label{lem.envelopesbehave}
Let $Y$ and $Z$ be strictly similar incremental antichains of coding nodes.
Then any envelope of $Y$ is strongly isomorphic to any envelope of $Z$, and both envelopes have the Strict Witnessing Property.
\end{lem}
\begin{proof}
Let $Y=\{y_i:i<\tilde{i}\}$ and $Z=\{z_i:i<\tilde{i}\}$ be the enumerations of $Y$ and $Z$ in order of increasing length,
and let
\begin{equation}
\lgl \lgl l_j:j<\tilde{j}\rgl,\lgl I^Y_{l_j}:j<\tilde{j}\rgl, \lgl |u^Y_m|:m<\tilde{m}\rgl\rgl
\end{equation}
and
\begin{equation}
\lgl \lgl p_j:j<\tilde{j}\rgl,\lgl I^Z_{p_j}:j<\tilde{j}\rgl, \lgl |u^Z_m|:m<\tilde{m}\rgl\rgl
\end{equation}
be their strict similarity sequences, respectively.
Let $E=Y\cup V$ and $F=Z\cup W$ be any envelopes of $Y$ and $Z$, respectively.
For each $j<\tilde{j}$, let $a_j\ge 3$ be such that
$I_{l_j}^Y$ is a new pre-$a_j$-clique.
Then the members of $V$ may be labeled as
$\{v_j^3,\dots,v_j^{a_j}:j<\tilde{j}\}$ with the property that for each $j<\tilde{j}$,
given the least $m<\tilde{m}$ such that
$|v_j^{a_j}|<|u^Y_m|$, we have
$|u^Y_{m-1}|<|(v_j^3)^{\wedge}|$.
This follows from Definition \ref{defn.incremental}.
Since $Y$ and $Z$ have the same strict similarity type,
it follows that for each $j<\tilde{j}$, $I_{p_j}^Z$ is also a new pre-$a_j$-clique.
Furthermore, $W=\{w_j^3,\dots,w_j^{a_j}:j<\tilde{j}\}$,
where for each $j<\tilde{j}$,
given the least $m<\tilde{m}$ such that
$|w_j^{a_j}|<|u^Z_m|$, we have that $|u^Z_{m-1}|<|(w_j^3)^{\wedge}|$.
Thus, $V$ and $W$ have the same size; label it $J$.
Let $\tilde{n}=\tilde{i}+J$,
and let $\{e_n:n<\tilde{n}\}$ and $\{f_n:n<\tilde{n}\}$ be the enumerations of $E$ and $F$ in order of increasing length, respectively.
For each $j<\tilde{j}$,
let $n_j$ be the index in $\tilde{n}$ such that
$e_{n_j}=v_j$ and $f_{n_j}=w_j$.
For $n<\tilde{n}$,
let $E(n)$ denote
the tree induced by $E$ restricted to those nodes of length less than or equal to $|e_n|$; precisely,
$E(n)=\{e\re |t|: e\in E$, $t\in E^{\wedge}$, and $ |t|\le |e_n|\}$.
Define $F(n)$ similarly.
We prove that $E\cong F$ by induction on $\tilde{j}$.
If $\tilde{j}=0$, then $E=Y$ and $F=Z$, so $E\cong F$ follows from $Y\sssim Z$.
Suppose now that $\tilde{j}\ge 1$ and that, letting $j=\tilde{j}-1$,
the induction hypothesis gives that
$E(n)\cong F(n)$ for the maximal $n<\tilde{n}$ such that
$e_n\in Y^{\wedge}$ and $|e_n|<l_j$.
Let $m$ be the least integer below $\tilde{m}$ such that
$|u^Y_{m}|>l_j$.
Then $e_n=u^Y_{m-1}$ and
the only nodes in $E^{\wedge}$
in the interval $(|u^Y_{m-1}|,|u^Y_{m}|)$
are
$(v^3_j)^{\wedge},\dots, (v^{a_j}_j)^{\wedge}, v^3_j,\dots, v^{a_j}_j$.
Likewise, the only nodes in $F^{\wedge}$ in the interval
$(|u^Z_{m-1}|,|u^Z_{m}|)$
are $(w^3_j)^{\wedge},\dots, (w^{a_j}_j)^{\wedge}, w^3_j,\dots, w^{a_j}_j$.
By the induction hypothesis, there is a strong isomorphism $g:E(n)\ra F (n)$.
Extend it
to a strong isomorphism
$g^*:E(n')\ra F(n')$, where $n'=\tilde{n}-1$
as follows:
Define
$g^*=g$ on $E(n)$.
For each $i\in [3,a_j]$, let $g^*((v^i_j)^{\wedge})=
(w^i_j)^{\wedge}$ and $g^*(v^i_j)=
w^i_j$.
Recall that the nodes $\{v^3_j,\dots, v^{a_j}_j\}$ form a pre-$(a_j-1)$-clique and only have mutual pre-cliques with nodes in $\{y_i:i\in I^Y_{l_j}\}$, witnessing this set, and no other members of $E$.
Likewise for
$\{w^3_j,\dots, w^{a_j}_j\}$ and $\{z_i:i\in I^Z_{p_j}\}$.
Thus, $g^*$ from $E(n'')$ to $F(n'')$ is a strict similarity map, where $n''<\tilde{n}$ is the index such that $v^{a_j}_j=e_{n''}$.
If $n''<\tilde{n}-1$,
then $\{e_q:n''<q<\tilde{n}\}\sse Y^{\wedge}$
and
$\{f_q:n''<q<\tilde{n}\}\sse Z^{\wedge}$.
Since these sets have no new pre-cliques and are strictly similar, the map $g^*(e_q)=f_q$, $n''<q<\tilde{n}$,
is a strong isomorphism.
Thus, we have constructed a strong isomorphism $g^*:E\ra F$.
It follows from the definitions that envelopes satisfy the Strict Witnessing Property.
\end{proof}
\begin{lem}\label{lem.lastpiece}
Suppose
$Z$ is a finite antichain of coding nodes and $E$ is an envelope of $Z$ in $T$.
Enumerate the nodes
in $Z$ and $E$ in
order of increasing length as
$\lgl z_i:i<\tilde{i}\rgl$ and
$\lgl e_k:k<\tilde{k}\rgl$,
respectively.
Given any $F\sse T$ with $F\cong E$,
let $F\re Z:=\{f_{k_i}:i<\tilde{i}\}$, where
$\lgl f_k:k<\tilde{k}\rgl$
enumerates the nodes in $F$ in order of increasing length
and for each $i<\tilde{i}$,
$k_i$ is the index such that
$e_{k_i}=z_i$.
Then
$F\re Z$
is strictly similar to $Z$.
\end{lem}
\begin{proof}
Recall that $E$ has incrementally witnessed new pre-cliques and $F\cong E$ implies that $F$ also has this property, and hence has the SWP.
Let $\iota_{Z,F}:Z\ra F$ be the injective map defined via $\iota_{Z,F}(z_i)=f_{k_i}$, $i<\tilde{i}$,
and let
$F\re Z$ denote $\{f_{k_i}:i<\tilde{i}\}$, the image of $\iota_{Z,F}$.
Then $F\re Z$ is a subset of $F$ which we claim is strictly similar to $Z$.
Since $F$ and $E$ each have incrementally witnessed new pre-cliques,
the strong similarity map $g:E\ra F$
satisfies that for each $j<\tilde{k}$, the indices of the new pre-cliques at the level of the $j$-th coding node are the same:
\begin{equation}
\{k<\tilde{k}:e_k(|e_j|)=1\}=
\{k<\tilde{k}: g(e_k)(|g(e_j)|)=1\}
=\{k<\tilde{k}:f_k(|f_j|)=1\}.
\end{equation}
Since $\iota_{Z,F}$ is the restriction of $g$ to $Z$,
$\iota_{Z,F}$ also
takes each
new pre-clique in $Z$
to the corresponding new pre-clique in $F\re Z$, with the same set of indices.
Thus, $\iota_{Z,F}$ witnesses that $F\re Z$ is
strictly similar to $Z$.
\end{proof}
\begin{thm}[Ramsey Theorem for Strict Similarity Types]\label{thm.mainRamsey}
Let $Z$ be a finite antichain of coding nodes in an incremental strong coding tree $T$,
and suppose $h$ colors all subsets of $T$ which are strictly similar to $Z$ into finitely many colors.
Then there is an incremental strong coding tree $S\le T$ such that
all subsets of $S$ strictly similar to $Z$ have the same $h$ color.
\end{thm}
\begin{proof}
First, note that there is an envelope $E$
of a copy of $Z$ in $T$:
By Lemma \ref{lem.squiggletree},
there is an incremental strong coding tree $U\le T$ and a set of coding nodes $V\sse T$
such that each $Y\sse U$ which is strictly similar to $Z$
has an envelope in $T$ by adding nodes from $V$.
Since $U$ is strongly isomorphic to $T$,
there is a subset $Y$ of $U$ which is strictly similar to $Z$.
Let
$E$ be any envelope of $Y$ in $T$, using witnessing coding nodes from $V$.
By Lemma \ref{lem.envelopesbehave}, all envelopes of copies of $Z$ are strongly isomorphic and have the SWP.
For each $F\cong E$,
define
$h^*(F)=h(F\re Z)$,
where
$F\re Z$ is the subset of $F$ provided by Lemma \ref{lem.lastpiece}.
The set $F\re Z$
is strictly similar to $Z$,
so the coloring $h^*$ is well-defined.
By
Theorem \ref{thm.MillikenSWP}, there is
a strong coding tree $T'\le T$ such that
$h^*$ is monochromatic on all strongly isomorphic copies of $E$ in $T'$.
Lemma \ref{lem.squiggletree} implies there is an incremental strong coding tree $S\le T'$ and a set of coding nodes $W\sse T'$
such that each $Y\sse S$ which is strictly similar to $Z$
has an envelope $F$ in $T'$, so that
$h(Y)=h^*(F)$.
Therefore, $h$ takes only one color on
all strictly similar copies of $Z$ in $S$.
\end{proof}
\section{The Henson graphs have finite big Ramsey degrees}\label{sec.7}
From the results in previous sections, we now prove the
main theorem of this paper, Theorem \ref{finalthm}.
This result follows from
Ramsey Theorem \ref{thm.mainRamsey} for strict similarity types
along with Lemma \ref{lem.bD} below.
For a strong coding tree $T$, let $(T,\sse)$
be the reduct of $(T,\mathbb{N};\sse,<,c)$.
Then $(T,\sse)$
is simply the tree structure of $T$, disregarding the difference between coding nodes and non-coding nodes.
We say that two trees $(T,\sse)$ and $(S,\sse)$ are {\em strongly similar trees}
if they
satisfy
Definition 3.1 in \cite{Sauer06}.
This is the same as modifying Definition \ref{def.3.1.likeSauer} by deleting
(6) and changing (7) to apply to passing numbers of {\em all} nodes in the trees.
By saying that two finite trees are strongly similar trees, we implicitly assume that
their extensions to the immediate successors of their
maximal nodes are still strongly similar.
Thus, strong similarity of finite trees implies passing numbers of their immediate extensions are preserved.
Given an antichain $D$ of coding nodes from a strong coding tree, let $L_D$ denote the set of all
lengths of nodes $t\in D^{\wedge}$
such that
$t$ is not the splitting predecessor of any coding node in $D$.
Define
\begin{equation}\label{eq.D^*}
D^*=\bigcup\{t \re l:t\in D^{\wedge}\setminus D
\mathrm{\ and\ }l\in L_D\}.
\end{equation}
Then $(D^*,\sse)$ is a tree.
\begin{lem}\label{lem.bD}
Let $T\in\mathcal{T}_k$ be a strong coding tree.
Then there is an infinite antichain of coding nodes $D\sse T$ which code
$\mathcal{H}_k$ in the same way as $\bT_k$:
$c^{D}_n(l^{D}_i)=c^k_n(l^k_i)$,
for all $i<n$.
Moreover,
$(D^*,\sse)$ and $(\bT_k,\sse)$ are strongly similar as trees.
\end{lem}
\begin{proof}
We will construct a subtree
$\bD\sse\bT_k$
such that the set of coding nodes in $\bD$ forms
an antichain
satisfying the lemma.
Then, since $T\in\mathcal{T}_k$ implies $T\cong \bT_k$,
letting $\varphi:\bT_k\ra T$ be the strong similarity map between $\bT_k$ and $T$,
the image of $\varphi$ on the coding nodes of $\bD$ will yield an antichain of coding nodes $D\sse T$ satisfying the lemma.
We will construct $\bD$ so that for each $n$,
the node of length $l^{\bD}_{n}$ which is going to be extended to the next coding node $c^{\bD}_{n+1}$ will extend to a splitting node in $\bD$
of length smaller than that of any other splitting node in the $(n+1)$-st interval of $\bD$.
Above that splitting node, the splitting will be regular in the interval until the next coding node.
Recall that for each $i\in\mathbb{N}$,
$\bT_k$
has either a coding node or else a splitting node of length $i$.
To avoid some superscripts, let
$l_n=|c^k_n|$
and $p_n=|c^{\bD}_n|$.
Let
$j_n$ be the index such that $c^{\bD}_n=c^k_{j_n}$, so that
$p_n$ equals $l_{j_n}$.
The set of nodes in $\bD\setminus\{c^{\bD}_n\}$ of length $p_n$
shall be indexed as $\{d_t:t\in \bT_k\cap\Seq_{l_n}\}$.
Recall that $m_n$ is the index such that the $m_n$-th critical node $d^k_{m_n}$ of $\bT_k$ is the $n$-th coding node $c^k_n$ of $\bT_k$.
We define inductively on $n\ge -1$ finite trees with coding nodes,
$r_{m_n+1}(\bD):= \bD\cap\Seq_{ \le p_{n}}$,
and
strong similarity maps of the trees
$\varphi :
r_{m_n+1}(\bT_k)\ra r_{m_n+1}(\bD^*)$,
where $|\varphi(c^k_n)|=p_n$.
Recall that the node $\lgl\rgl$ is the ghost coding node $c^k_{-1}$ in $\bT_k$.
Define $d_{\lgl\rgl}=\varphi(\lgl\rgl)=\lgl\rgl$.
The node $\lgl\rgl$ splits in $\bT_k$,
so the node $d_{\lgl\rgl}$ will split in $\bD$.
Suppose that $n\in \mathbb{N}$ and we have constructed
$r_{m_{n-1}+1}(\bD)$
satisfying the lemma.
By the induction hypothesis,
there is a strong similarity map of the trees
$\varphi :
r_{m_{n-1}+1}(\bT_k)\ra r_{m_{n-1}+1}(\bD^*)$.
For $t\in \bT_k(m_{n-1})$, let
$d_t$ denote $\varphi(t)$.
Let $s$ denote the node in $\bT_k (m_{n-1})$ which extends to the coding node $c^k_n$.
Let $v_s$ be a splitting node in $\bT_k$ extending $d_s$.
Let $u_s={v_s}^{\frown}1$
and extend all nodes $d_t$,
$t\in \bT_k(m_{n-1})
\setminus\{s\}$,
leftmost to length $|u_s|$ and label these $d_t'$.
Extend ${v_s}^{\frown}0$ leftmost to length $|u_s|$ and label it $d'_s$.
Let $X=\{d'_t:t\in \bT_k(m_{n-1})\}\setminus\{u_s\}$
and let
\begin{equation}
\Spl(n)=\{t\in \bT_k(m_{n-1}): t\mathrm{\ extends\ to\ a\ splitting\ node\ in\ the\ } n\mathrm{-th\ interval\ of\ } \bT_k\}.
\end{equation}
Apply Lemma \ref{lem.facts} to obtain a coding node $c^{\bD}_n$ extending $u_s$
and nodes $d_w$, $w\in \bT_k(m_n)$,
so that,
letting $p_n=|c^{\bD}_n|$
and
\begin{equation}
\bD(m_n)=
\{d_t:t\in \bT_k(m_n)\}\cup\{c^{\bD}_n\},
\end{equation}
and
for $m\in (m_{n-1},m_n)$, defining $\bD(m)=\{ d_t\re |s_m|:t\in \bD(m_n)\}$,
where $s_m$ is the $m$-th splitting node in $\bD(m_n)^{\wedge}$,
the following hold:
$r_{m_n+1}(\bD)$ satisfies the Witnessing Property
and
$r_{m_n+1}(\bD^*)$ is strongly similar as a tree
to
$r_{m_n+1}(\bT_k)$.
Thus, the coding nodes in $r_{m_n+1}(\bD)$
code exactly the same graph as the coding nodes in $r_{m_n+1}(\bT_k)$.
Let $\bD=\bigcup_{m\in\mathbb{N}}\bD(m)$.
Then the set of coding nodes in $\bD$ forms an antichain of maximal nodes in $\bD$.
Further,
the tree generated by
the meet closure of the set $\{c^{\bD}_n:n<\om\}$
is exactly $\bD$,
and
$\bD^*$ and $\bT_k$
are strongly similar as trees.
By the construction,
for each pair $i<n<\om$, $c^{\bD}_n(p_i)=c^k_n(l_i)$;
hence they code $\mathcal{H}_k$ in the same order.
To finish, let $f_T$ be the strong isomorphism from $\bT_k$ to $T$.
Letting $D$ be the $f_T$-image of $\{c^{\bD}_n:n<\om\}$,
we see that $D$ is an antichain of coding nodes in $T$
such that $D^*$ and $\bD^*$ are strongly similar trees,
and hence $D^*$ is strongly similar as a tree to $\bT_k$.
Thus, the antichain of coding nodes $D$
codes $\mathcal{H}_k$ and satisfies the lemma.
\end{proof}
Recall that the Henson graph $\mathcal{H}_k$ is, up to isomorphism, the homogeneous $k$-clique-free graph on countably many vertices which is universal for all $k$-clique-free graphs on countably many vertices.
\begin{mainthm}\label{finalthm}
For each $k\ge 3$,
the Henson graph $\mathcal{H}_k$ has finite big Ramsey degrees.
\end{mainthm}
\begin{proof}
Fix $k\ge 3$ and let $\G$ be a finite $K_k$-free graph.
Suppose $f$ colors all the copies of $\G$ in
$\mathcal{H}_k$ into finitely many colors.
By Example
\ref{ex.bTp},
there is a
strong coding tree $\bT_k$ such that the coding nodes in $\bT_k$ code a copy of $\mathcal{H}_k$.
Let $\mathcal{A}$ denote the set of all
antichains of coding nodes
of $\bT_k$ which code a copy of $\G$.
For each $Y\in\mathcal{A}$,
let
$h(Y)=f(\G')$,
where
$\G'$ is the copy of $\G$
coded by the coding nodes in $Y$.
Then $h$ is a finite coloring on $\mathcal{A}$.
Let $n(\G)$ be the number of different strict similarity types
of incremental antichains of coding nodes in $\bT_k$ coding $\G$,
and let $\{Z_i:i<n(\G)\}$ be a set of one representative from each of these strict similarity types.
Successively
apply Theorem \ref{thm.mainRamsey}
to obtain incremental strong coding trees $\bT_k\ge T_0\ge\dots\ge T_{n(\G)-1}$ so that for each $i<n(\G)$,
$h$ takes only one color on
the set of
incremental antichains of coding nodes $A\sse T_i$ such that $A$ is strictly similar to $Z_i$.
Let $S=T_{n(\G)-1}$.
By Lemma \ref{lem.bD}
there is an antichain of coding nodes $D\sse S$ which codes $\mathcal{H}_k$ in the same way as $\bT_k$.
Every set of coding nodes in $D$ coding $\G$ is automatically incremental, since $S$ is incremental.
Therefore, every copy of $\G$ in the copy of $\mathcal{H}_k$ coded by the coding nodes in $D$
is coded by an incremental antichain of coding nodes.
Thus, the number of
strict similarity types of incremental antichains in $\bT_k$ coding $\G$
provides an upper bound for
the big Ramsey degree of $\G$ in $\mathcal{H}_k$.
\end{proof}
Thus, each Henson graph has finite big Ramsey degrees.
Moreover, given a finite $k$-clique-free graph $\G$,
the big Ramsey degree $T(\G,\mathcal{H}_k)$
is bounded by the number of strict similarity types of incremental antichains coding copies of $\G$ in $\bT_k$.
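In the notation of the proof of Theorem \ref{finalthm}, this bound may be summarized as
\begin{equation}
T(\G,\mathcal{H}_k)\le n(\G),
\end{equation}
where $n(\G)$ denotes the number of strict similarity types of incremental antichains of coding nodes in $\bT_k$ coding a copy of $\G$.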
\section{Future Directions}
This article developed a unified approach to proving upper bounds for big Ramsey degrees of all Henson graphs.
The main phases of the proof were as follows:
I. Find the correct structures to code $\mathcal{H}_k$ and prove Extension Lemmas.
II. Prove an analogue of Milliken's Theorem for finite trees with certain structure.
In the case of the Henson graphs, this is the Strict Witnessing Property.
III. Find a means for turning finite antichains into finite trees with the Strict Witnessing Property so as to deduce a Ramsey Theorem for finite antichains from the previous Milliken-style theorem.
This general approach should apply to a large class of ultrahomogeneous structures with forbidden configurations.
It will be interesting to see where the dividing line is between those structures for which this methodology works and those for which it does not.
The author conjectures that similar approaches will work for forbidden configurations which are irreducible in the sense of \cite{Nesetril/Rodl77} and \cite{Nesetril/Rodl83}.
Although we have not yet proved the lower bounds
to obtain the precise
big Ramsey degrees $T(\G,\mathcal{H}_k)$, we conjecture that they will be exactly the number of strict similarity
types of
incremental
antichains coding $\G$.
We further conjecture that once found, the lower bounds will satisfy the conditions needed for Zucker's work in \cite{Zucker19} to apply.
If so, then each Henson graph would admit a big Ramsey structure, any big Ramsey flow would be a universal completion flow, and any two universal completion flows would be isomorphic.
We mention some bounds on big Ramsey degrees, found by computing the number of distinct strict similarity types of incremental antichains of coding nodes.
Let $\bar{K}_2$ denote two vertices with no edge between them.
$T(K_2,\mathcal{H}_3)=2$ was proved by Sauer in \cite{Sauer98}.
This is in fact the number of strict similarity types of two coding nodes coding an edge in $\bT_3$.
The number of strict similarity types of incremental antichains coding a non-edge in $\bT_3$ is seven, so
\begin{equation}
T(\bar{K}_2,\mathcal{H}_3) \le 7.
\end{equation}
The number of strict similarity types of incremental antichains coding an edge in $\bT_4$ is 44, so
\begin{equation}
T(K_2,\mathcal{H}_4)\le 44.
\end{equation}
The number of strict similarity types of incremental antichains coding a non-edge in $\bT_4$ is quite large.
These numbers grow quickly as more pre-cliques are allowed.
Since any copy of $\mathcal{H}_k$ can be enumerated and coded by a strong coding tree, which without loss of generality can be assumed to be incremental, it seems that these strict similarity types should persist.
We point out that by a compactness argument,
one can obtain finite versions of the two main Ramsey theorems in this article.
In particular, the finite version of Theorem \ref{thm.mainRamsey} may well produce better bounds for the sizes of finite $K_k$-free graphs witnessing that the \Fraisse\ class $\mathcal{G}_k^{<}$
has the Ramsey property.
Curiously, the methodology in this paper and \cite{DobrinenJML20} is also having an impact on \Fraisse\ structures without forbidden configurations.
In \cite{DobrinenRado19}, the author recently developed trees with coding nodes to code copies of the Rado graph
and used forcing arguments similar to, but much simpler than, those in Section \ref{sec.5}
to answer
a question of \cite{Kechris/Pestov/Todorcevic05} regarding infinite dimensional Ramsey theory of copies of the Rado graph.
These methods work also for the rationals, and ongoing work is to discover all aspects of Ramsey and anti-Ramsey theorems for colorings of definable sets of spaces of such \Fraisse\ structures, the aim being to extend theorems of Galvin-Prikry in \cite{Galvin/Prikry73} and Ellentuck in \cite{Ellentuck74} to a wide collection of \Fraisse\ classes.
Lastly,
modifications and generalizations of this approach seem likely to
produce a general theorem
for big Ramsey degrees for a large collection of relational \Fraisse\ structures without forbidden configurations.
\bibliographystyle{amsplain}
\section{Introduction}
Perhaps the most important correlation-related problem is the \emph{famous} Gaussian Correlation Conjecture. The standard Gaussian measure (denoted by $\gamma_n$) of any measurable subset $A\subseteq \mathbb{R}^n$ is defined by
\begin{eqnarray*}
\gamma_n(A)=\frac{1}{(2\pi)^{n/2}}\int_{A}e^{-\vert x\vert^2/2} dx.
\end{eqnarray*}
A general mean zero Gaussian measure, $\mu_n$, defined on $\mathbb{R}^n$ is a linear image of the standard Gaussian measure. The Gaussian Correlation Conjecture is formulated as follows:
\begin{conj} \label{1}
For any $n\geq 1$, if $\mu$ is a mean zero, Gaussian measure on $\mathbb{R}^n$, then for $K, M$, convex closed subsets of $\mathbb{R}^n$ which are symmetric about the origin, we have
\begin{eqnarray*}
\mu_n(K\cap M)\geq \mu_n(K)\mu_n(M).
\end{eqnarray*}
\end{conj}
For some background on the above conjecture: a less general form of the Gaussian Correlation Conjecture first appeared in $1955$ in \cite{dunnet}. The general setting appeared a few years later, in $1972$, in the work of S. Das Gupta, M.L. Eaton, I. Olkin, M. Perlman, L.J. Savage and M. Sobel \cite{dasgu}. I will not go into details about what is known or unknown about Conjecture \ref{1}; in this paper, I am more interested in a correlation inequality with respect to another measure. In the past few years, there has been some research on correlation inequalities with respect to measures other than the Gaussian (see for example \cite{lewi1} and \cite{lewi2}). Since it is hard to prove (or disprove) Conjecture \ref{1}, people began to wonder whether dealing with other measures could be easier. For example, in \cite{figa}, the authors prove some sharp correlation type inequalities for rotationally invariant measures (with a decay condition on the density function) in $\mathbb{R}^2$. One should recall that Conjecture \ref{1} is proved in the two dimensional case. This paper concerns the following:
\begin{conj} \label{pri}
For any $n\geq 1$, for every two symmetric convex sets $K,M\subset \mathbb{R}^n$, we have
\begin{eqnarray*}
\nu_n(K\cap M)\geq \nu_n(K)\nu_n(M),
\end{eqnarray*}
where
\begin{eqnarray*}
d\nu_n(x)=C.(1+\vert x\vert^2)^{-\frac{n+1}{2}}\, dx,
\end{eqnarray*}
for $C$ the normalisation constant.
\end{conj}
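The normalisation constant $C$ is not computed explicitly above; the standard closed form for the multivariate Cauchy distribution is $C=\Gamma(\frac{n+1}{2})/\pi^{\frac{n+1}{2}}$. The following Python sketch, which is only a numerical sanity check and not part of any argument in this paper, verifies this by radial integration (the substitution $u=1/r$ handles the tail of the integral):

```python
from math import pi, gamma

def simpson(f, a, b, m=4000):
    """Composite Simpson rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def omega(m):
    """Riemannian volume of the unit m-sphere S^m."""
    return 2 * pi ** ((m + 1) / 2) / gamma((m + 1) / 2)

def cauchy_norm(n):
    """Normalisation constant C of nu_n, computed numerically.

    1/C = omega(n-1) * int_0^infty r^{n-1} (1+r^2)^{-(n+1)/2} dr;
    the substitution u = 1/r turns the piece on [1, infty) into
    int_0^1 (1+u^2)^{-(n+1)/2} du, so both pieces live on [0, 1].
    """
    part1 = simpson(lambda r: r ** (n - 1) * (1 + r * r) ** (-(n + 1) / 2), 0.0, 1.0)
    part2 = simpson(lambda u: (1 + u * u) ** (-(n + 1) / 2), 0.0, 1.0)
    return 1.0 / (omega(n - 1) * (part1 + part2))

for n in [1, 2, 3, 4, 6]:
    C_closed = gamma((n + 1) / 2) / pi ** ((n + 1) / 2)
    assert abs(cauchy_norm(n) - C_closed) < 1e-8 * C_closed
```

For $n=1$ this recovers the classical Cauchy constant $C=1/\pi$.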
In dimension $2$, the results proven in \cite{figa} give a positive answer to Conjecture \ref{pri}, so there is no need to discuss this case. Here we discuss several particular cases in higher dimensions for which Conjecture \ref{pri} holds. We also give some ideas on possible proofs of this conjecture. The method used to approach Conjecture \ref{pri} is purely geometric. We first announce a spherical correlation conjecture on the canonical Riemannian sphere. Later on, we show that the spherical correlation conjecture is equivalent to Conjecture \ref{pri}, using a projective mapping of the Euclidean space to a ball of radius $\pi/2$ of the sphere. Under the projective mapping, every straight line of the Euclidean space is mapped to a geodesic of the sphere; therefore any convex set of the Euclidean space is mapped to a convex set of the sphere. Moreover, the Cauchy measure $\nu_n$, with density $C.(1+\vert x\vert^2)^{-\frac{n+1}{2}}$ (where $C>0$ is a normalisation constant) with respect to the Lebesgue measure, is pushed forward to the normalised canonical Riemannian measure of the sphere (or, better said, of the ball $B(o,\pi/2)$). The spherical correlation conjecture will be the subject of Section $3$ of this paper. In Section $5$, we define the projective mapping connecting Conjecture \ref{pri} to the spherical one. In the final section, we discuss possible ways of proving the spherical correlation conjecture.
\section{Acknowledgement}
I am grateful to Michel Ledoux and Franck Barthe for their useful remarks concerning this project.
\section{A Correlation Conjecture on the Sphere}
In this section, we present a correlation conjecture on the canonical Riemannian sphere. Let $\mathbb{S}^n$ be the canonical Riemannian sphere. Fix a hemi-sphere $\mathbb{S}^n_{+} \subset \mathbb{S}^n$. We denote the center of the hemi-sphere by $o$ and recall that $\mathbb{S}^n_{+}=B(o,\pi/2)$ where $B(o,\pi/2)$ is a spherical ball of radius $\pi/2$ centered at the point $o$. We denote the Riemannian volume of the sphere by $vol_n$.
\begin{de}
An open set $S\subset B(o,\pi/2)\subset \mathbb{S}^n$ is convex if it is geodesically convex with respect to the canonical Riemannian geometry of the sphere. A convex set $S$ is centrally symmetric around a point $x\in Int(S)$ if for any geodesic segment $\sigma$ containing the point $x$
\begin{eqnarray*}
l([x,x^{+}_b])=l([x,x^{-}_b]).
\end{eqnarray*}
where $\sigma\cap \partial S=\{x^{+}_b,x^{-}_b\}$ and $l(\cdot)$ stands for the length (understood with respect to the Riemannian structure of the sphere).
\end{de}
For our future purposes, it is best to fix the following notation:
\begin{nott}
Let $X$ be a general metric-measure space and $Y\subset X$. Then
\begin{eqnarray*}
Y+\varepsilon =\{x \in X \vert \hspace{0.5mm} d(x,Y)\leq \varepsilon\},
\end{eqnarray*}
where $d(\cdot,\cdot)$ stands for the metric of $X$ and $d(x,Y)=\inf_{y\in Y} d(x,y)$.
\end{nott}
We shall need to recall (and define) a fairly well-known operation on subsets of the sphere which will come in useful in the next section:
\begin{de}[Double Suspension] \label{cone}
Let $X_k\subset\mathbb{S}^k_{o}$ be a $k$-dimensional symmetric convex set containing the point $o$. Let $B_{n-k}^{\perp}(o,\pi/2)$ be the $(n-k)$-dimensional ball \emph{orthogonal} to $\mathbb{S}^k_{o}$ containing $X_k$. Let $\mathbb{S}^{n-k-1}=\partial B_{n-k}^{\perp}(o,\pi/2)$. Define $X_n=X_k *\mathbb{S}^{n-k-1}$ to be the $n$-dimensional symmetric convex set which contains $X_k$ and all the geodesics orthogonal to $X_k$ joining $X_k$ to $\mathbb{S}^{n-k-1}$.
\end{de}
\emph{Remark}:
For $k=n-1$, this operation defines a double cone over an $(n-1)$-dimensional symmetric convex set $X_{n-1}$ (here $\mathbb{S}^{0}$ consists of the two cone points); the general construction extends this to all codimensions.
We are now ready to announce the spherical correlation conjecture :
\begin{conj} \label{princ}
Let $K_1$ and $K_2$ be two geodesically convex (spherical) bodies contained in the hemi-sphere $B(o,\pi/2)\subset \mathbb{S}^n$. Additionaly, $K_1$ and $K_2$ are both centrally symmetric around the point $o$. Then
\begin{eqnarray*}
vol_n(K_1\cap K_2).vol_n(B(o,\pi/2))\geq vol_n(K_1).vol_n(K_2).
\end{eqnarray*}
\end{conj}
Conjecture \ref{princ} (in its most general form) is open. However, in the next section we examine a few important specific cases for which there is a positive answer to this conjecture.
\section{A Few Special Cases in Conjecture \ref{princ}}
The aim of this section is to prove the following :
\begin{theo} \label{dmain}
In each of the following cases, the conclusion of Conjecture \ref{princ} holds:
\begin{itemize}
\item One set is a spherical ball, i.e.\ $K_i=B(o,r)$ for some $r>0$.
\item Let $0<\varepsilon\leq \pi/2$ and $1\leq k\leq (n-1)$. Let $Y$ be a spherical tube of width $\varepsilon$, i.e.\ $Y=\mathbb{S}^k+\varepsilon$, and let $X_n=X_k*\mathbb{S}^{n-k-1}$ be as in Definition \ref{cone}. Then $X_n$ and $Y$ satisfy the equality case in Conjecture \ref{princ}.
\item If an integer $N$ exists such that Conjecture \ref{princ} is true for every $n\geq N$, then the conjecture is true in every dimension.
\item If $X$ is an arbitrary symmetric spherical convex set and $Y$ is any tube, i.e.\ $Y=\mathbb{S}^k+\varepsilon$, then Conjecture \ref{princ} holds for $X$ and $Y$.
\end{itemize}
\end{theo}
The next subsections are devoted to the proof of Theorem \ref{dmain}.
\subsection{One Set Is a Spherical Ball}
Let $K_1=B(o,r)$ for $0<r\leq \pi/2$ and let $K_2$ be an arbitrary convex set (containing $o$).
In this case, the proof of Conjecture \ref{princ} follows immediately by applying the following version of the Bishop-Gromov Inequality:
\begin{lem} \label{bg}
For all open convex sets $S\subset B(o,\pi/2)$ and all $x\in S$, the function
\begin{eqnarray*}
\frac{vol_n(S\cap B(x,r))}{vol_n(B(x,r))}
\end{eqnarray*}
is a non-increasing function of $r$.
\end{lem}
\begin{flushright}
$\Box$
\end{flushright}
\emph{Remark}:
\begin{itemize}
\item Note that $S$ is an arbitrary convex set, not necessarily symmetric.
\item Lemma \ref{bg} is not sharp: there exist non-convex sets for which this lemma still holds. Indeed, it still holds when $S$ is a waist of width $r$ around a hypersphere, i.e.\ $\mathbb{S}^{n-1}+r$, and more generally for all the tubes $\mathbb{S}^{k}+r$ ($1\leq k\leq n-2$).
\end{itemize}
\subsection{The Equality Case in Conjecture \ref{princ}}
The second natural question to ask is whether it is possible to classify the sets (or at least some class of sets) for which equality holds in Conjecture \ref{princ}. Normally, in a correlation type problem, studying the equality cases is as hard as solving the original general inequality. But, as we shall see in this section, for the spherical correlation inequality one has a nice characterisation of the equality cases. Of course, one obvious equality case is when one set is the whole half-sphere, i.e.\ $B(o,\pi/2)$. We shall examine a less obvious class of examples for which equality holds in Conjecture \ref{princ}. Let $X_k$ and $X_n$ be defined as in Definition \ref{cone}. One can verify the following in a straightforward way:
\begin{lem} \label{formul}
Let $X_k$ and $X_n=X_k *\mathbb{S}^{n-k-1}$ be as defined above. Then
\begin{eqnarray*}
\frac{vol_n(X_n)}{vol_n(\mathbb{S}^n_{+})}=\frac{vol_k(X_k)}{vol_k(\mathbb{S}^k_{+})}.
\end{eqnarray*}
\end{lem}
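Granting the Fermi-coordinate formula $vol_n(X_n)=vol_k(X_k)\cdot vol_{n-k-1}(\mathbb{S}^{n-k-1})\cdot\int_0^{\pi/2}\cos^k t\,\sin^{n-k-1}t\,dt$ (an assumption of this check, not stated in the text), Lemma \ref{formul} reduces to the scalar identity $\omega_k\,\omega_{n-k-1}\int_0^{\pi/2}\cos^k t\,\sin^{n-k-1}t\,dt=\omega_n$, where $\omega_m=vol_m(\mathbb{S}^m)$. A quick Python sketch, offered only as a numerical sanity check, confirms it:

```python
from math import pi, sin, cos, gamma

def simpson(f, a, b, m=2000):
    # composite Simpson rule on [a, b] with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def omega(m):
    # volume of the unit m-sphere S^m
    return 2 * pi ** ((m + 1) / 2) / gamma((m + 1) / 2)

# omega_k * omega_{n-k-1} * int_0^{pi/2} cos^k sin^{n-k-1} = omega_n
for n, k in [(3, 1), (4, 1), (4, 2), (5, 3), (7, 2)]:
    J = simpson(lambda t: cos(t) ** k * sin(t) ** (n - k - 1), 0.0, pi / 2)
    assert abs(omega(k) * omega(n - k - 1) * J - omega(n)) < 1e-8 * omega(n)
```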
Let $\varepsilon>0$ and let $Y=\mathbb{S}^k+\varepsilon$ be a tube of radius $\varepsilon$. Choose $\mathbb{S}^k$ such that it contains $X_k$. Then
\begin{prop} \label{equal}
For $X_n$ and $Y$ defined as above, we have
\begin{eqnarray*}
vol_n(\mathbb{S}^n_{+}).vol_n(X_n\cap Y)=vol_n(X_n).vol_n(Y).
\end{eqnarray*}
\end{prop}
We point out that the set $Y$ is not convex, and it is unclear whether there exist two symmetric convex sets, both different from $B(o,\pi/2)$, satisfying the equality case of Conjecture \ref{princ}.
\emph{Proof of Proposition \ref{equal}}:
Remark that
\begin{eqnarray*}
vol_n(X_n\cap Y)=(\int_{0}^{\varepsilon}\cos(t)^{k}\sin(t)^{n-k-1}dt).vol_k(X_k).
\end{eqnarray*}
Hence
\begin{eqnarray*}
vol_n(\mathbb{S}^n_{+}).vol_n(X_n\cap Y)&=& vol_n(\mathbb{S}^n_{+}).(\int_{0}^{\varepsilon}\cos(t)^{k}\sin(t)^{n-k-1}dt).vol_k(X_k)\\
&=& (\int_{0}^{\varepsilon}\cos(t)^{k}\sin(t)^{n-k-1}dt). vol_k(\mathbb{S}^k_{+}).vol_n(X_n)\\
&=& vol_n(Y).vol_n(X_n).
\end{eqnarray*}
This ends the proof of Proposition \ref{equal}.
\begin{flushright}
$\Box$
\end{flushright}
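As a concrete illustration, the equality of Proposition \ref{equal} can be checked numerically. The conventions below are my own, chosen for this check only: all volumes are computed in Fermi coordinates around $\mathbb{S}^k$, the factor $vol_{n-k-1}(\mathbb{S}^{n-k-1})$ (suppressed in the displayed formula above) is included, and the tube $Y$ is measured inside the hemisphere $B(o,\pi/2)$. Under these conventions the identity is exact:

```python
from math import pi, sin, cos, gamma

def simpson(f, a, b, m=2000):
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def omega(m):
    # volume of the unit m-sphere S^m
    return 2 * pi ** ((m + 1) / 2) / gamma((m + 1) / 2)

# hypothetical parameters: X_k a geodesic ball of radius rho in S^k,
# Y = S^k + eps restricted to the hemisphere B(o, pi/2)
n, k, eps, rho = 5, 2, 0.4, 0.7
vol_Xk = omega(k - 1) * simpson(lambda t: sin(t) ** (k - 1), 0.0, rho)
I_eps = simpson(lambda t: cos(t) ** k * sin(t) ** (n - k - 1), 0.0, eps)
J = simpson(lambda t: cos(t) ** k * sin(t) ** (n - k - 1), 0.0, pi / 2)

vol_Xn = vol_Xk * omega(n - k - 1) * J           # double suspension X_n = X_k * S^{n-k-1}
vol_XnY = vol_Xk * omega(n - k - 1) * I_eps      # X_n intersected with the tube Y
vol_Y = 0.5 * omega(k) * omega(n - k - 1) * I_eps  # tube, restricted to the hemisphere

lhs = (omega(n) / 2) * vol_XnY   # vol(S^n_+) * vol(X_n cap Y)
rhs = vol_Xn * vol_Y
assert abs(lhs - rhs) < 1e-9 * lhs
```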
\subsection{High Dimensions Imply All Dimensions}
Here we show a simple yet useful result regarding Conjecture \ref{princ}. Roughly speaking, if one can prove the spherical correlation Conjecture for high dimensional spheres, then the conjecture holds in all dimensions. More precisely:
\begin{lem} \label{high}
Suppose there exists an integer $N\in \mathbb{N}$ such that Conjecture \ref{princ} holds on the $n$-dimensional sphere for every $n\geq N$. Then Conjecture \ref{princ} holds for all $n$.
\end{lem}
\emph{Proof of Lemma \ref{high}}:
Let $k\leq N$ and let $K_1, K_2\subset \mathbb{S}^k_{+}$ be two symmetric convex sets around $o \in \mathbb{S}^k_{+}$. View $\mathbb{S}^k_{+}\subset \mathbb{S}^{N}_{+}$ as a hemisphere of a $k$-dimensional totally geodesic sub-sphere. Let $X_i=K_i*\mathbb{S}^{N-k-1}$ for $i=1,2$ and $Y=(K_1\cap K_2)*\mathbb{S}^{N-k-1}$. Remark that $X_1, X_2, Y$ are symmetric convex sets in $\mathbb{S}^N$. By the assumptions of Lemma \ref{high} and by the result of Lemma \ref{formul}, we have
\begin{eqnarray*}
\frac{vol_k(K_1\cap K_2)}{vol_k(\mathbb{S}^{k}_{+})} &=& \frac{vol_N(Y)}{vol_N(\mathbb{S}^{N}_{+})}\\
&\geq& \frac{vol_N(X_1\cap X_2)}{vol_N(\mathbb{S}^{N}_{+})}\\
&\geq& \frac{vol_N(X_1)}{vol_N(\mathbb{S}^{N}_{+})}.\frac{vol_N(X_2)}{vol_N(\mathbb{S}^{N}_{+})}\\
&=& \frac{vol_k(K_1)}{vol_k(\mathbb{S}^{k}_{+})}.\frac{vol_k(K_2)}{vol_k(\mathbb{S}^{k}_{+})}.
\end{eqnarray*}
This ends the proof of Lemma \ref{high}.
\begin{flushright}
$\Box$
\end{flushright}
Lemma \ref{high} is very useful, since it now suffices to prove the spherical correlation conjecture for sufficiently high dimensional spheres.
\subsection{General Symmetric Convex Body and Tubes}
Here we shall examine a harder case of Conjecture \ref{princ}. We assume that neither of the convex sets contains the other; the proof of Conjecture \ref{princ} in the opposite case is trivial.
We first need a bit of background on the metric invariant \emph{waist} and an important class of measures on the sphere called $\sin^k$-concave measures.
The waist of a general mm-space is defined in \cite{grwst}. Let $X$ be an mm (metric-measure) space of dimension $n$. For every $1\leq k\leq n$, the $k$-waist of $X$, denoted by $wst_k(X)$, is the infimum of numbers $r\geq 0$ for which there exists a family of $k$-cycles (or relative cycles), parametrised by an $(n-k)$-dimensional $\mathbb{Z}_2$-topological manifold and generating the fundamental $\mathbb{Z}_2$-homology class of the space of $k$-cycles, such that the $k$-volume of every cycle is at most equal to $r$. The waist of the canonical sphere is sharply estimated in \cite{grwst} and \cite{memwst}.
A convex subset $X$ of the canonical Riemannian sphere has sectional curvature (on its regular part) everywhere at least equal to $1$, i.e.\ $sec(X)\geq 1$. In his recent paper \cite{groexp}, Gromov, by revisiting in depth the ideas of F. Almgren in \cite{almg} and the Heintze-Karcher Volume Comparison Theorem, gives a sharp estimate of the waist of Riemannian manifolds with sectional curvature at least equal to $1$. (The constant $1$ can also be relaxed to any $\kappa>0$.) More precisely:
\begin{theo} \label{wt}
Let $X$ be a compact connected Riemannian manifold (with a possibly non-empty quasi-regular convex boundary) such that $sec(X)\geq 1$. Let $f:X\to \mathbb{R}^k$ be a smooth map. Then there exists a $z\in \mathbb{R}^k$ such that:
\begin{eqnarray*}
\frac{vol_{n-k}(f^{-1}(z))}{vol_n(X)}\geq \frac{vol_{n-k}(\mathbb{S}^{n-k})}{vol_n(\mathbb{S}^n)}.
\end{eqnarray*}
where $vol_{n-k}$ stands for the Riemannian volume (or, equivalently, the Hausdorff measure) in dimension $(n-k)$.
\end{theo}
For a proof of this theorem one can see \cite{groexp}, or \cite{mempos} where the present author provides a detailed proof of Theorem \ref{wt}.
We recall that a compact $n$-dimensional rectifiable set $X_1\subset X_2$ is called \emph{quasi-regular} if the complement of the set of regular points of $X_1$ has measure zero in $X_1$, and if for almost all points $x$ of $X_2$ the distance function $d_x(y)=d(x,y)$ (where the distance is with respect to $X_2$) attains its minimum on $X_1$ at a regular point of $X_1$.
Here, since we are dealing with convex subsets of the sphere, we wish to apply Theorem \ref{wt}. Our convex sets may very well have some singularities on the boundary, but since it is enough to work with spherical polytopes (interiors of intersections of finitely many half-spheres), we fall under the assumptions of Theorem \ref{wt}. This gives us the desired lower bound for the waist of the convex sets $K_1$, $K_2$ and $K_1\cap K_2$.
Take a parallel family of hyperspheres $\{\mathbb{S}^{n-1}_t\}_{t\in I}$, where $I$ is an interval of $\mathbb{R}$, which sweeps out the convex set $K_1$ (the notion of \emph{parallel} is well defined on symmetric Riemannian manifolds: two hypersurfaces are parallel if they have parallel second fundamental forms). Since $K_1$ is centrally symmetric, for every $t\in I$ we have
\begin{eqnarray*}
vol_{n-1}(\mathbb{S}^{n-1}_{o}\cap K_1)\geq vol_{n-1}(\mathbb{S}^{n-1}_t\cap K_1),
\end{eqnarray*}
where $\mathbb{S}^{n-1}_{o}$ is the only hypersphere in this family that contains the point $o$.
Indeed if this is not the case, then there is a $t_0\in I$ such that for every $t\in I$
\begin{eqnarray*}
vol_{n-1}(\mathbb{S}^{n-1}_{t_0}\cap K_1)\geq vol_{n-1}(\mathbb{S}^{n-1}_t\cap K_1),
\end{eqnarray*}
and $\mathbb{S}^{n-1}_{t_0}$ does not contain the point $o$. Then by the symmetry of $K_1$ there is another $t'_0\neq t_0$ such that
\begin{eqnarray*}
vol_{n-1}(\mathbb{S}^{n-1}_{t_0}\cap K_1)=vol_{n-1}(\mathbb{S}^{n-1}_{t'_0}\cap K_1)
\end{eqnarray*}
and by convexity of $K_1$ this means that there is a hypersphere $\mathbb{S}^{n-1}_t$ between $\mathbb{S}^{n-1}_{t_{0}}$ and $\mathbb{S}^{n-1}_{t'_0}$ such that
\begin{eqnarray*}
vol_{n-1}(\mathbb{S}^{n-1}_{t}\cap K_1)\geq vol_{n-1}(\mathbb{S}^{n-1}_{t_0}\cap K_1)=vol_{n-1}(\mathbb{S}^{n-1}_{t'_0}\cap K_1),
\end{eqnarray*}
and this is a contradiction. This argument, combined with Theorem \ref{wt}, shows that for any hypersphere $\mathbb{S}^{n-1}_{o}$ which contains the point $o$ we have:
\begin{eqnarray*}
\frac{vol_{n-1}(\mathbb{S}^{n-1}_{o}\cap K_1)}{vol_n(K_1)}\geq \frac{vol_{n-1}(\mathbb{S}^{n-1}_{o}\cap B(o,\pi/2))}{vol_n(B(o,\pi/2))}.
\end{eqnarray*}
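For $K_1=B(o,\rho)$ a geodesic ball, both sides of the last inequality can be computed in closed radial form, which gives a quick numerical sanity check. The cap-volume formula used below (a geodesic ball of radius $r$ in $\mathbb{S}^m$ has volume $\omega_{m-1}\int_0^r\sin^{m-1}t\,dt$) is standard and is my input, not the text's; the equatorial section of $B(o,\rho)$ is a ball of the same radius in $\mathbb{S}^{n-1}$:

```python
from math import pi, sin, gamma

def simpson(f, a, b, m=2000):
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def omega(m):
    # volume of the unit m-sphere S^m
    return 2 * pi ** ((m + 1) / 2) / gamma((m + 1) / 2)

def cap_vol(m, r):
    # volume of a geodesic ball of radius r in the unit sphere S^m
    return omega(m - 1) * simpson(lambda t: sin(t) ** (m - 1), 0.0, r)

n = 4
# right-hand side: vol(S^{n-1}_o cap B(o,pi/2)) / vol(B(o,pi/2))
#                = (omega(n-1)/2) / (omega(n)/2)
rhs = omega(n - 1) / omega(n)
for rho in [0.3, 0.6, 0.9, 1.2, pi / 2]:
    lhs = cap_vol(n - 1, rho) / cap_vol(n, rho)   # K_1 = B(o, rho)
    assert lhs >= rhs - 1e-9   # equality when rho = pi/2
```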
Unfortunately, the above waist inequality is not enough for our purpose. We need to extend the waist inequality to the volume of the $\varepsilon$-neighborhood of the sections $\mathbb{S}^{n-1}_{o}\cap K_1$. To do this, we require some information and tools about $\sin^k$-concave measures and functions:
\begin{de}[$\sin$-concave functions]
A real non-negative function $f$ defined on an interval of length less than $2\pi$ is called $\sin$-concave, if, when transported by a unit speed parametrisation of the unit circle, it can be extended to a $1$-homogeneous and concave function on a convex cone of $\mathbb{R}^2$.
\end{de}
\begin{de}[$\sin^k$-concave functions]
A non-negative real function $f$ is called $\sin^k$-concave if the function $f^{\frac{1}{k}}$ is $\sin$-concave.
\end{de}
One can use the following lemma as a definition for $\sin^k$-concave functions:
\begin{lem}
A real non-negative function defined on an interval of length less than $\pi$ is $\sin^k$-concave if for every $0<\alpha<1$ and for all $x_1,x_2\in I$ we have
\begin{eqnarray*}
f^{1/k}(\alpha x_1+(1-\alpha)x_2)\geq \left(\frac{\sin(\alpha\vert x_2-x_1\vert)}{\sin(\vert x_2-x_1\vert)}\right)f^{1/k}(x_1)+\left(\frac{\sin((1-\alpha)\vert x_2-x_1\vert)}{\sin(\vert x_2-x_1\vert)}\right)f^{1/k}(x_2).
\end{eqnarray*}
Particularly if $\alpha=\frac{1}{2}$ we have
\begin{eqnarray*}
f^{1/k}(\frac{x_1+x_2}{2})\geq \frac{f^{1/k}(x_1)+f^{1/k}(x_2)}{2\cos(\frac{\vert x_2-x_1\vert}{2})}.
\end{eqnarray*}
\end{lem}
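Two quick instances of the half-point inequality, checked numerically below (the examples are mine, for illustration only): $g=\cos$ satisfies $g''+g=0$ and gives equality, while $g(t)=\cos(2t)$ on $(-\pi/4,\pi/4)$ satisfies $g''+g=-3\cos(2t)\leq 0$ there and gives a strict inequality.

```python
from math import cos

def half_point_rhs(g, x1, x2):
    # right-hand side of the alpha = 1/2 inequality, with g playing f^{1/k}
    return (g(x1) + g(x2)) / (2 * cos(abs(x2 - x1) / 2))

# g = cos: sin-concave with equality (cos'' + cos = 0)
for x1, x2 in [(-0.5, 0.3), (0.1, 0.6), (-0.2, -0.05)]:
    assert abs(cos((x1 + x2) / 2) - half_point_rhs(cos, x1, x2)) < 1e-12

# g(t) = cos(2t) on (-pi/4, pi/4): strictly sin-concave there
g = lambda t: cos(2 * t)
for x1, x2 in [(-0.3, 0.2), (0.0, 0.5), (-0.6, 0.1)]:
    assert g((x1 + x2) / 2) > half_point_rhs(g, x1, x2)
```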
The following important lemma is proved in \cite{memwst}:
\begin{lem} \label{grmem}
Let $\mu=f.dvol_{k}$ be a measure with a $\sin^{n-k}$-concave density with respect to the $k$-dimensional Riemannian volume of $\mathbb{S}^k$. Let the measure $\mu$ be supported on a $k$-dimensional convex subset $S\subseteq \mathbb{S}^k$, and let $o\in S$ be the point where $f$ attains its maximum. Then:
\begin{eqnarray*}
\frac{\int_{B(o,\varepsilon)\cap S}f(x) dvol(x)}{\int_{S}f(x) dvol(x)} &\geq& \frac{\int_{0}^{\varepsilon}\cos^{n-k}(t)\sin^{k-1}(t)dt}{\int_{0}^{\pi/2}\cos^{n-k}(t)\sin^{k-1}(t) dt} \\
&=&\frac{vol_n(\mathbb{S}^{n-k}+\varepsilon)}{vol_n(\mathbb{S}^n)}.
\end{eqnarray*}
\end{lem}
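Here is a minimal instance of the lemma, checked numerically: take $k=1$, $S=[-R,R]\subset\mathbb{S}^1$ with $R<\pi/2$, and density $f(t)=\cos^{n-1}(t)$, so that $f^{1/(n-1)}=\cos$ is $\sin$-concave and $f$ is maximal at $o=0$. The parameters are mine, for illustration only; in this case the inequality holds simply because the denominator on the left is smaller:

```python
from math import pi, cos

def simpson(f, a, b, m=2000):
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

n, R = 4, 1.2   # density cos^{n-1} on the arc S = [-R, R], k = 1
den_S = simpson(lambda t: cos(t) ** (n - 1), 0.0, R)        # half of mu(S)
den_H = simpson(lambda t: cos(t) ** (n - 1), 0.0, pi / 2)   # hemisphere normalisation
for eps in [0.2, 0.5, 0.8, 1.1]:
    num = simpson(lambda t: cos(t) ** (n - 1), 0.0, eps)    # half of mu(B(o, eps))
    # Lemma: mu(B(o,eps) cap S)/mu(S) >= int_0^eps cos^{n-1} / int_0^{pi/2} cos^{n-1}
    assert num / den_S >= num / den_H
```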
The following lemma is a simplified version of a spherical Brunn Theorem:
\begin{lem} \label{brun}
Let $S\subset \mathbb{S}^n$ be a symmetric convex set with respect to $o \in S$. Let $\sigma\subset \mathbb{S}^n$ be a geodesic segment containing the point $o$, and let $p:S\to \sigma$ be the orthogonal projection of $S$ onto $\sigma$. Then the push-forward under $p$ of the Riemannian volume has a $\sin^{n-1}$-concave density with respect to $dt$, the canonical measure of the segment $\sigma$.
\end{lem}
We are ready to prove a special case for which Conjecture \ref{princ} holds:
\begin{prop} \label{import}
For every $\varepsilon>0$ and every $\mathbb{S}^{n-1}_{o}$ we have :
\begin{eqnarray*}
\frac{vol_n((\mathbb{S}^{n-1}_{o}+\varepsilon)\cap K_1)}{vol_n(K_1)}\geq \frac{vol_n(\mathbb{S}^{n-1}_{o}+\varepsilon)}{vol_n(\mathbb{S}^n)}.
\end{eqnarray*}
\end{prop}
\emph{Proof of Proposition \ref{import}}:
Let $\varepsilon>0$ and $\mathbb{S}^{n-1}_{o}$ be fixed. Let $\sigma$ be a geodesic orthogonal to $\mathbb{S}^{n-1}_{o}$. Project $K_1$ orthogonally onto $\sigma$. Applying Lemma \ref{brun}, the density of the push-forward measure, denoted by $f$, is a $\sin^{n-1}$-concave function which attains its maximum at the point $o$, thanks to the central symmetry of $K_1$. Then we have:
\begin{eqnarray*}
\frac{vol_n((\mathbb{S}^{n-1}_{o}+\varepsilon)\cap K_1)}{vol_n(K_1)} &=& \frac{\int_{[-\varepsilon,\varepsilon]\subset \sigma}f(t)dt}{\int_{\sigma}f(t)dt}\\
&\geq& \frac{\int_{0}^{\varepsilon}\cos^{n-1}(t)dt}{\int_{0}^{\pi/2}\cos^{n-1}(t)dt} \\
&=& \frac{vol_n(\mathbb{S}^{n-1}+\varepsilon)}{vol_n(\mathbb{S}^n)},
\end{eqnarray*}
where the last inequality and the final equality are obtained by applying Lemma \ref{grmem}.
This completes the proof of Proposition \ref{import}.
\begin{flushright}
$\Box$
\end{flushright}
\emph{Remark}: Generalising Lemma \ref{brun} to higher dimensions and using Lemma \ref{grmem}, we easily obtain the following:
\begin{prop} \label{importt}
For every $\varepsilon>0$ and every $\mathbb{S}^{n-k}_{o}$ we have :
\begin{eqnarray*}
\frac{vol_n((\mathbb{S}^{n-k}_{o}+\varepsilon)\cap K_1)}{vol_n(K_1)}\geq \frac{vol_n(\mathbb{S}^{n-k}_{o}+\varepsilon)}{vol_n(\mathbb{S}^n)}.
\end{eqnarray*}
\end{prop}
This proves that every symmetric convex set and every tube of the form $\mathbb{S}^k+\varepsilon$ satisfy the correlation inequality of Conjecture \ref{princ}.
Lemmas \ref{bg} and \ref{high}, combined with Propositions \ref{equal} and \ref{importt}, complete the proof of Theorem \ref{dmain}.
\begin{flushright}
$\Box$
\end{flushright}
\section{From the Euclidean Space to the Sphere and the Cauchy Correlation Conjecture}
In this section, we shall prove that Conjecture \ref{pri} is equivalent to Conjecture \ref{princ}. For this we shall recall an important map between the Euclidean space to the open hemi-sphere:
\subsection{The Projective (or Gnomonic Projection) of the Euclidean Space onto the Sphere}
Let $\mathbb{R}^n=\mathbb{R}^n\times\{0\}\subset \mathbb{R}^{n+1}$. Let $\mathbb{S}^n_{-1}$ be the unit sphere centered at $c=(0,\dots,0,-1)$.
\begin{de}[Gnomonic Projection] \label{gno}
The Gnomonic map is the map $q: \mathbb{R}^n\rightarrow \mathbb{S}^n_{-1}$ defined such that for every $x\in \mathbb{R}^n$
\begin{eqnarray*}
q(x)=[x,c]\cap \mathbb{S}^n_{-1},
\end{eqnarray*}
where $[x,c]$ is the \emph{line} segment (in $\mathbb{R}^{n+1}$) joining the center of the sphere to the point $x$.
\end{de}
$q$ is an isomorphism between the Euclidean space and a (fixed) ball of radius $\pi/2$ of the sphere. By the previous definition, one can clearly see that $q$ maps every straight line to a geodesic segment. This particularity of the gnomonic projection is essential for us, since $q$ maps convex sets of the Euclidean space to convex subsets of a hemisphere of the round sphere. Note that the map $q$ is neither the exponential map (for which only the lines passing through the origin are mapped to geodesics) nor the stereographic projection (for which the image of a point of the sphere is the intersection with the Euclidean space of a segment joining this point to a fixed point of the sphere). This specific map was used in \cite{gromil} and \cite{grwst} to transport measures supported by convex subsets of the Euclidean space to measures supported by convex sets of the sphere, and is used to prove some isoperimetric type inequalities on the sphere and the Gaussian space.
\begin{lem} \label{2}
The push-forward of the Cauchy measure $\nu_n$ with density
\begin{eqnarray*}
C.(1+\vert x\vert^2)^{-\frac{n+1}{2}}
\end{eqnarray*}
(with respect to the Lebesgue measure) under the gnomonic projection $q$ is the normalised canonical Riemannian measure $\frac{1}{vol_n(B(o,\pi/2))}dv_{\mathbb{S}^n_{+}}$.
\end{lem}
\emph{Proof of Lemma \ref{2}}:
It is sufficient to calculate the Jacobian of the map of Definition \ref{gno}.
\begin{flushright}
$\Box$
\end{flushright}
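One way to see Lemma \ref{2} concretely is radial: the gnomonic projection sends the Euclidean ball $B(0,R)$ to the geodesic ball $B(o,\arctan R)$, so the Cauchy mass of $B(0,R)$ must equal the normalised spherical volume of the corresponding cap. The Python sketch below checks this numerically, assuming the closed-form normalisation $C=\Gamma(\frac{n+1}{2})/\pi^{\frac{n+1}{2}}$ (standard for the multivariate Cauchy, but an input of this check rather than a statement of the text):

```python
from math import pi, sin, atan, gamma

def simpson(f, a, b, m=4000):
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def omega(m):
    # volume of the unit m-sphere S^m
    return 2 * pi ** ((m + 1) / 2) / gamma((m + 1) / 2)

def nu_ball(n, R):
    # Cauchy measure of the Euclidean ball B(0, R) in R^n
    C = gamma((n + 1) / 2) / pi ** ((n + 1) / 2)
    return C * omega(n - 1) * simpson(
        lambda r: r ** (n - 1) * (1 + r * r) ** (-(n + 1) / 2), 0.0, R)

def cap_fraction(n, rho):
    # normalised volume of the geodesic ball B(o, rho) inside B(o, pi/2)
    num = simpson(lambda t: sin(t) ** (n - 1), 0.0, rho)
    den = simpson(lambda t: sin(t) ** (n - 1), 0.0, pi / 2)
    return num / den

for n in [2, 3, 5]:
    for R in [0.5, 1.0, 3.0]:
        assert abs(nu_ball(n, R) - cap_fraction(n, atan(R))) < 1e-8
```

The agreement reflects the substitution $r=\tan\theta$, under which $r^{n-1}(1+r^2)^{-\frac{n+1}{2}}\,dr=\sin^{n-1}\theta\,d\theta$.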
\subsection{Equivalence Between Conjecture \ref{princ} and Conjecture \ref{pri}}
\begin{prop} \label{eqi}
Conjecture \ref{pri} is equivalent to Conjecture \ref{princ}.
\end{prop}
\emph{Proof of Proposition \ref{eqi}}
Applying the isomorphism $q$ of Definition \ref{gno} and Lemma \ref{2}, we can transport the setting of Conjecture \ref{pri} to a hemisphere of $\mathbb{S}^n$:
Let $K$ and $M$ be the two convex bodies of the assumption of Conjecture \ref{pri}. Let $K_1=q(K)$ and $K_2=q(M)$. Applying Lemma \ref{2} one has
\begin{eqnarray*}
\nu_n(K)&=&q_{*}(\nu_n)(K_1)=vol_n(K_1)/vol_n(B(o,\pi/2)) \\
\nu_n(M)&=&q_{*}(\nu_n)(K_2)=vol_n(K_2)/vol_n(B(o,\pi/2)) \\
\nu_n(K\cap M)&=&q_{*}(\nu_n)(K_1\cap K_2)=vol_n(K_1\cap K_2)/vol_n(B(o,\pi/2)).
\end{eqnarray*}
Note that $q(K\cap M)=q(K)\cap q(M)$, since $q$ is a bijection. If Conjecture \ref{princ} holds, we get
\begin{eqnarray*}
\nu_n(K\cap M)&=&\frac{vol_n(K_1\cap K_2).vol_n(B(o,\pi/2))}{(vol_n(B(o,\pi/2)))^2} \\
&\geq& \frac{vol_n(K_1)}{vol_n(B(o,\pi/2))}.\frac{vol_n(K_2)}{vol_n(B(o,\pi/2))} \\
&=&\nu_n(K).\nu_n(M).
\end{eqnarray*}
This proves the correlation inequality for the Cauchy measure.
Conversely, if one supposes that Conjecture \ref{pri} holds, then by applying the above argument through the gnomonic projection in the other direction, we obtain the spherical correlation inequality and hence Conjecture \ref{princ}.
This ends the proof of Proposition \ref{eqi}.
\begin{flushright}
$\Box$
\end{flushright}
\section{Remarks and Questions}
\begin{itemize}
\item Is there an $O(n)$-invariant map from $\mathbb{R}^n$ to $\mathbb{S}^n_{+}$ which transports the Gaussian measure to the normalised Riemannian measure of the hemisphere (such that, given two (bounded) convex subsets $K_1$ and $K_2$ of $\mathbb{R}^n$, their images remain convex)?
If such a map exists, then the Gaussian Correlation Conjecture can be proved directly from Conjecture \ref{princ} using this map.
\item Following the proof of Conjecture \ref{princ} for the special cases discussed in Section $4$, it seems that the spherical correlation conjecture should hold for a wider class of sets. My first guess is that Conjecture \ref{princ} should hold for a pair consisting of a symmetric convex set and a symmetric \emph{mean-convex} set (a set whose boundary has positive mean curvature). I did not follow up on this problem, but it could be interesting to characterise all pairs of subsets (not necessarily convex) of the sphere which satisfy the correlation property.
\item One possible way of proving Conjecture \ref{princ} (according to Section $4$) would be, for example, to choose a \emph{good} $k$-dimensional section of one set $K_1$ and replace it by a tube $\mathbb{S}^k+\varepsilon$ of the same volume, and to deform the other convex set $K_2$ into a generalised double cone of the appropriate (co)dimension and the same volume. Then, according to Proposition \ref{equal}, we have the equality case for the spherical correlation, and it would remain to prove that the volume of the intersection after this deformation is smaller than the original volume of the intersection. I attempted this method by trying to find the \emph{good} section by generalising the Dvoretzky Theorem to two symmetric convex sets, but was still unable to prove Conjecture \ref{princ} in its most general form.
\item In the past few years, localisation methods have been used to prove very interesting geometric inequalities. In \cite{lova} and \cite{kann} the authors prove integral formulae using localisation, and apply their methods to conclude a few isoperimetric type inequalities concerning convex sets in the Euclidean space. In \cite{guedon} the authors study a functional analysis version of localisation, used again on the Euclidean space. Localisation on more general spaces was studied in \cite{gromil}, \cite{grwst}, \cite{memwst}, \cite{memusphere} and \cite{membru}. It may seem hard to believe that one could prove Conjecture \ref{princ} using localisation methods, but an easier version of Conjecture \ref{princ}, where we replace $vol_n(B(o,\pi/2))$ by $vol_n(\mathbb{S}^n)$, should be possible to prove using the following proposition proved in \cite{membru}:
\begin{prop} \label{fonda}
Let $f_1$, $f_2$ be two upper semi-continuous nonnegative functions on $\mathbb{S}^n$ and $f_3$, $f_4$ be two lower semi-continuous nonnegative functions on $\mathbb{S}^n$. Let $-\infty\leq s\leq 1/2$ and $\alpha$, $\beta>0$. Suppose that $f_1^{\alpha}f_2^{\beta}\leq f_3^{\alpha}f_4^{\beta}$ and that for every $a,b\in \mathbb{S}^n$ and every $\sin^s$-affine probability measure $\nu$ supported by the geodesic segment $[a,b]$,
\begin{eqnarray*}
(\int f_1 d\nu)^{\alpha}(\int f_2 d\nu)^{\beta}\leq (\int f_3 d\nu)^{\alpha}(\int f_4 d\nu)^{\beta}.
\end{eqnarray*}
Then for every $\sin^s$-concave probability measure $\mu$ on $\mathbb{S}^n$,
\begin{eqnarray*}
(\int f_1 d\mu)^{\alpha}(\int f_2 d\mu)^{\beta}\leq (\int f_3 d\mu)^{\alpha}(\int f_4 d\mu)^{\beta}.
\end{eqnarray*}
\end{prop}
To prove Conjecture \ref{princ} applying Proposition \ref{fonda}, one can fix $K_1$, translate (without rotating) $K_2$ to the point $-o$, and distribute the mass of $K_1\cap K_2$ around the waist of the sphere (i.e.\ around $\partial B(o,\pi/2)=\mathbb{S}^{n-1}$). I believe this is a suitable geometric configuration in which to apply Proposition \ref{fonda} and prove a weak version of Conjecture \ref{princ}.
\item The Bishop-Gromov Inequality asserts that for any convex set $X\subset \mathbb{S}^n$ and for every $x\in X$, the function
\begin{eqnarray*}
\frac{vol_n(X\cap B(x,r))}{vol_n(B(x,r))}
\end{eqnarray*}
is a non-increasing function of $r$ (See Lemma \ref{bg}). If the following conjecture (which is a generalisation of the Bishop-Gromov Inequality) holds, then the proof of Conjecture \ref{princ} becomes straightforward.
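As a quick sanity check of the Bishop-Gromov monotonicity just stated, consider the simplest instance on $\mathbb{S}^2$, where $X$ is itself a spherical cap centred at $x$: here $vol_2(B(x,r))=2\pi(1-\cos r)$ and everything is explicit. The following Python sketch (my own illustration, not part of the cited results) verifies that the ratio is non-increasing in $r$ in this case:

```python
import math

def cap_area(r):
    # Area of a spherical cap of geodesic radius r on the unit sphere S^2.
    return 2.0 * math.pi * (1.0 - math.cos(r))

def bg_ratio(rho, r):
    # The Bishop-Gromov ratio when X is itself the cap of radius rho centred at x.
    return min(cap_area(rho), cap_area(r)) / cap_area(r)

rho = 0.8                                   # radius of the convex set X (a cap)
ratios = [bg_ratio(rho, 0.1 + 0.05 * k) for k in range(50)]
# The ratio equals 1 for r <= rho and then decreases, so it is non-increasing in r.
assert all(a >= b - 1e-12 for a, b in zip(ratios, ratios[1:]))
```

The ratio equals $1$ for $r\le\rho$ and equals $(1-\cos\rho)/(1-\cos r)$ afterwards, which is clearly decreasing.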
\begin{conj} \label{bgr}
Let $X$ be a convex subset of the sphere having $o\in\mathbb{S}^n$ in its interior. Let $F_1\subset F_2$ be two symmetric convex subsets of the sphere, both containing $o$ in their interiors. Then
\begin{eqnarray*}
\frac{vol_n(X\cap F_1)}{vol_n(F_1)}\geq \frac{vol_n(X\cap F_2)}{vol_n(F_2)}
\end{eqnarray*}
\end{conj}
I attempted to prove this conjecture by putting a density on the sphere and using the Bishop-Gromov Inequality for Riemannian manifolds with density having positive Ricci curvature in the sense of Lott-Villani (see \cite{lotvil}) but I was not successful. A possible way to prove Conjecture \ref{bgr} could be the use of the theory of mean-curvature flow (see \cite{mf}).
\item The theory of Ricci curvature for general metric-measure spaces is well developed. For example, see \cite{lotvil}, where the authors prove a Bishop-Gromov type inequality for metric-measure spaces having $Ricci>0$ in the sense of displacement convexity. By using their definition of Ricci curvature and combining it with their Bishop-Gromov type inequality for the Gaussian space, can one directly (without passing through the sphere) prove Conjecture \ref{pri}?
\item What type of correlation theorem (like the one we proposed in Conjecture \ref{princ}) can one prove for Riemannian manifolds with a lower bound on the Ricci curvature?
\end{itemize}
\bibliographystyle{plain}
\section{Introduction}
\label{zero}
The Kneser-Poulsen conjecture raises an important fundamental problem on volume measure. This paper surveys the major developments regarding this problem and directs the reader's attention to a number of open questions in order to generate further progress. Anyone interested in the history of the Kneser-Poulsen conjecture, as well as in the many related references, is referred to the recent paper of Bezdek and Connelly~\cite{bc} and to the elegant book of Klee and Wagon \cite{kw}.
\section{The Kneser-Poulsen conjecture}
\label{one}
Let $\|\dots \|$ denote the standard Euclidean norm of the $n$-dimensional Euclidean space ${\bf E}^n$. So, if ${\bf p}_i, {\bf p}_j$ are two points in ${\bf E}^n$, then $\|{\bf p}_i- {\bf p}_j \|$ denotes the Euclidean distance between them. It will be convenient to denote the (finite) point configuration consisting of the points ${\bf p}_1, {\bf p}_2, \dots, {\bf p}_N$ in ${\bf E}^n$ by ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$. Now, if ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ and ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ are two configurations of $N$ points in ${\bf E}^n$ such that for all $1\le i<j\le N$ the inequality $\|{\bf q}_i- {\bf q}_j \|\le \|{\bf p}_i- {\bf p}_j \|$ holds, then we say that ${\bf q}$ is a {\it contraction} of ${\bf p}$. If ${\bf q}$ is a contraction of ${\bf p}$, then there may or may not be a continuous motion ${\bf p}(t)=({\bf p}_1(t), {\bf p}_2(t), \dots, {\bf p}_N(t))$, with ${\bf p}_i(t)\in {\bf E}^n$ for all $0\le t\le 1$ and $1\le i\le N$ such that ${\bf p}(0)={\bf p}$ and ${\bf p}(1)={\bf q}$, and $\|{\bf p}_i(t)- {\bf p}_j(t)\|$ is monotone decreasing for all $1\le i<j\le N$. When there is such a motion, we say that ${\bf q}$ is a {\it continuous contraction} of ${\bf p}$. Finally, let $B^n({\bf p}_i, r_i)$ denote the closed $n$-dimensional ball centered at ${\bf p}_i$ with radius $r_i$ in ${\bf E}^n$ and let ${\rm Vol}_n(\dots)$ represent the $n$-dimensional volume (Lebesgue measure) in ${\bf E}^n$. In 1954 Poulsen \cite{p} and in 1955 Kneser \cite{k} independently conjectured the following for the case when $r_1=\dots=r_N$:
\medskip
\begin{con} \label{elso} If ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ is a contraction of ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ in ${\bf E}^n$, then
$${\rm Vol}_n[\cup_{i=1}^{N}B^n({\bf p}_i, r_i)]\ge {\rm Vol}_n[\cup_{i=1}^{N}B^n({\bf q}_i, r_i)].$$
\end{con}
\medskip
\begin{con} \label{masodik} If ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ is a contraction of ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ in ${\bf E}^n$, then
$${\rm Vol}_n[\cap_{i=1}^{N}B^n({\bf p}_i, r_i)]\le {\rm Vol}_n[\cap_{i=1}^{N}B^n({\bf q}_i, r_i)].$$
\end{con}
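The contraction condition itself is a finite check over pairs of centers. A minimal Python sketch (illustrative only; the tolerance is an arbitrary choice of mine):

```python
import math
from itertools import combinations

def is_contraction(q, p, tol=1e-12):
    """True iff ||q_i - q_j|| <= ||p_i - p_j|| for all pairs i < j."""
    assert len(p) == len(q)
    return all(math.dist(q[i], q[j]) <= math.dist(p[i], p[j]) + tol
               for i, j in combinations(range(len(p)), 2))

p = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
q = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # every pairwise distance shrinks
assert is_contraction(q, p)
assert not is_contraction(p, q)
```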
Actually, M. Kneser seems to be the one who generated a great deal of interest in the above conjectures, partly through private letters written to a number of mathematicians. For more details on this see for example \cite{kw}.
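For $N=2$ congruent disks both conjectures reduce to an explicit computation: the intersection of two disks of radius $r$ at center distance $d\le 2r$ is a lens of area $2r^2\arccos(d/2r)-\frac{d}{2}\sqrt{4r^2-d^2}$, which decreases in $d$, while the union area, $2\pi r^2$ minus this lens area, increases in $d$. The following Python check (my own illustration) confirms the monotonicity:

```python
import math

def lens_area(d, r=1.0):
    # Area of the intersection of two disks of radius r at centre distance d.
    if d >= 2.0 * r:
        return 0.0
    return 2.0 * r * r * math.acos(d / (2.0 * r)) - 0.5 * d * math.sqrt(4.0 * r * r - d * d)

def union_area(d, r=1.0):
    # Inclusion-exclusion for the union of the two disks.
    return 2.0 * math.pi * r * r - lens_area(d, r)

ds = [0.1 * k for k in range(21)]           # centre distance d from 0 to 2
inter = [lens_area(d) for d in ds]
union = [union_area(d) for d in ds]
# Contracting the centres (decreasing d) increases the intersection
# and decreases the union -- the N = 2 case of both conjectures.
assert all(a >= b for a, b in zip(inter, inter[1:]))
assert all(a <= b for a, b in zip(union, union[1:]))
```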
\section{Nearest and farthest point Voronoi diagrams}
\label{two}
For a given point configuration ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ in ${\bf E}^n$ and radii $r_1, r_2, \dots , r_N$ consider the following sets:
$$V_i=\{{\bf x}\in {\bf E}^n\ |\ {\rm for\ all\ {\it j},}\ \|{\bf x}-{\bf p}_i\|^2-r_i^2\le \|{\bf x}-{\bf p}_j\|^2-r_j^2\},$$
$$V^i=\{{\bf x}\in {\bf E}^n\ |\ {\rm for\ all\ {\it j},}\ \|{\bf x}-{\bf p}_i\|^2-r_i^2\ge \|{\bf x}-{\bf p}_j\|^2-r_j^2\}.$$
The set $V_i$ (resp., $V^i$) is called the {\it nearest (resp., farthest) point Voronoi cell} of the point ${\bf p}_i$. (For a detailed discussion on nearest as well as farthest point Voronoi cells we refer the interested reader to \cite{e} and \cite{s}.) We now restrict each of these sets as follows:
$$V_i(r_i)=V_i\cap B^n({\bf p}_i, r_i),$$
$$V^i(r_i)=V^i\cap B^n({\bf p}_i, r_i).$$
We call the set $V_i(r_i)$ (resp., $V^i(r_i)$) the {\it nearest (resp., farthest) point truncated Voronoi cell} of the point ${\bf p}_i$. For each $i\neq j$ let $W_{ij}=V_i\cap V_j$ and $W^{ij}=V^i\cap V^j$. The sets $W_{ij}$ and $W^{ij}$ are the {\it walls} between the nearest point and farthest point Voronoi cells. Finally, it is natural to define the relevant {\it truncated walls} as follows:
$$W_{ij}({\bf p}_i, r_i)=W_{ij}\cap B^n({\bf p}_i, r_i)=$$
$$W_{ij}({\bf p}_j, r_j)=W_{ij}\cap B^n({\bf p}_j, r_j),$$
$$W^{ij}({\bf p}_i, r_i)=W^{ij}\cap B^n({\bf p}_i, r_i)=$$
$$W^{ij}({\bf p}_j, r_j)=W^{ij}\cap B^n({\bf p}_j, r_j).$$
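The cells $V_i$ and $V^i$ above are decided by comparing the power $\|{\bf x}-{\bf p}_i\|^2-r_i^2$ of the point with respect to each ball, so membership tests are immediate; the truncated cells only add the condition $\|{\bf x}-{\bf p}_i\|\le r_i$. A minimal Python sketch (illustrative only; the tolerance is an arbitrary choice of mine):

```python
import math

def power(x, p, r):
    # Power of the point x with respect to the ball B(p, r).
    return math.dist(x, p) ** 2 - r ** 2

def in_nearest_cell(x, i, centers, radii, tol=1e-12):
    # x lies in V_i iff its power w.r.t. ball i is minimal among all balls.
    pi = power(x, centers[i], radii[i])
    return all(pi <= power(x, p, r) + tol for p, r in zip(centers, radii))

def in_farthest_cell(x, i, centers, radii, tol=1e-12):
    # x lies in V^i iff its power w.r.t. ball i is maximal among all balls.
    pi = power(x, centers[i], radii[i])
    return all(pi >= power(x, p, r) - tol for p, r in zip(centers, radii))

centers = [(0.0, 0.0), (3.0, 0.0)]
radii = [1.0, 1.0]
# With equal radii the wall W_12 is the perpendicular bisector x = 1.5.
assert in_nearest_cell((1.0, 0.5), 0, centers, radii)
assert in_nearest_cell((2.0, 0.5), 1, centers, radii)
assert in_farthest_cell((1.0, 0.5), 1, centers, radii)
```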
\section{Csik\'os's formula}
\label{three}
The following formula, discovered by Csik\'os \cite{cs1}, proves Conjecture~\ref{elso} as well as Conjecture~\ref{masodik} for continuous contractions in a straightforward way in any dimension. (Actually, the planar case of the Kneser-Poulsen conjecture under continuous contractions has been proved independently in \cite{b}, \cite{cs0}, \cite{c} and \cite{bs}.)
\medskip
\begin{theorem} \label{csikos} {\it Let $n\ge 2$ and let ${\bf p}(t), 0\le t\le 1$ be a smooth motion of a point configuration in ${\bf E}^n$ such that for each $t$, the points of the configuration are pairwise distinct.
Then
$$\frac{d}{dt}{\rm Vol}_n[\cup_{i=1}^{N}B^n({\bf p}_i(t), r_i)]=$$
$$\sum_{1\le i< j\le N} (\frac{d}{dt}d_{ij}(t))\cdot{\rm Vol}_{n-1}[W_{ij}({\bf p}_i(t), r_i)],$$
$$\frac{d}{dt}{\rm Vol}_n[\cap_{i=1}^{N}B^n({\bf p}_i(t), r_i)]= $$
$$\sum_{1\le i< j\le N} -(\frac{d}{dt}d_{ij}(t))\cdot{\rm Vol}_{n-1}[W^{ij}({\bf p}_i(t), r_i)],$$
where $d_{ij}(t)=\|{\bf p}_i(t)-{\bf p}_j(t)\|$.}
\end{theorem}
\medskip
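For two congruent disks in the plane, Theorem~\ref{csikos} can be checked by hand: the wall $W_{12}$ is the common chord, of length $\sqrt{4r^2-d^2}$ at center distance $d$, and differentiating the explicit union area indeed reproduces it. A finite-difference check in Python (my own illustration, with unit radii):

```python
import math

r = 1.0

def lens(d):
    # Intersection area of two disks of radius r at centre distance d < 2r.
    return 2 * r * r * math.acos(d / (2 * r)) - 0.5 * d * math.sqrt(4 * r * r - d * d)

def union(d):
    return 2 * math.pi * r * r - lens(d)

def wall(d):
    # Length of the common chord, i.e. Vol_1 of the truncated wall W_12.
    return math.sqrt(4 * r * r - d * d)

d, h = 1.0, 1e-6
num_derivative = (union(d + h) - union(d - h)) / (2 * h)
# Csikos's formula with a single pair: d/dt Vol_2(union) = d_12'(t) * |W_12|.
assert abs(num_derivative - wall(d)) < 1e-6
```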
On the one hand, Csik\'os \cite{cs2} managed to generalize his formula to configurations of balls called flowers, which are sets obtained from balls with the help of the operations $\cap$ and $\cup$. This work extends to hyperbolic as well as spherical space. On the other hand, Csik\'os \cite{cs3} succeeded in proving a Schl\"afli-type formula for polytopes with curved faces lying in pseudo-Riemannian Einstein manifolds, which can be used to provide another proof of Conjecture~\ref{elso} as well as Conjecture~\ref{masodik} for continuous contractions (for more details see \cite{cs3}).
\section{A short outline of the proof of Bezdek and Connelly of Conjectures~\ref{elso} and \ref{masodik} in ${\bf E}^2$}
\label{four}
In the recent paper \cite{bc} Bezdek and Connelly proved Conjecture~\ref{elso} as well as Conjecture~\ref{masodik} in the Euclidean plane. In fact, the paper contains a proof of an extension of these conjectures to flowers as well. In what follows we give an outline of the three step proof published in \cite{bc} by phrasing it through a sequence of theorems each being higher dimensional. The proofs of these results are based on the underlying Voronoi diagrams.
\medskip
\begin{theorem} \label{I} Consider $N$ moving closed $n$-dimensional balls $B^n({\bf p}_i(t), r_i)$ with $1\le i\le N, 0\le t\le 1$ in ${\bf E}^n$. If $F_i(t)$ is the contribution of the $i$th ball to the boundary of the union $\cup_{i=1}^{N}B^n({\bf p}_i(t), r_i)$ (resp., of the intersection $\cap_{i=1}^{N}B^n({\bf p}_i(t), r_i)$), then
$$\sum_{1\le i\le N} \frac{1}{r_i}\cdot {\rm Vol}_{n-1}(F_i(t))$$
decreases (resp., increases) in $t$ under any analytic contraction ${\bf p}(t)$ of the center points, where $0\le t\le 1$.
\end{theorem}
\medskip
\begin{theorem} \label{II} Let the centers of the closed $n$-dimensional balls $B^n({\bf p}_i, r_i)$, $1\le i\le N$ lie in the $(n-2)$-dimensional (affine) subspace $L$ of ${\bf E}^n$. If $F_i$ stands for the contribution of the $i$th ball to the boundary of the union $\cup_{i=1}^{N}B^n({\bf p}_i, r_i)$ (resp., of the intersection $\cap_{i=1}^{N}B^n({\bf p}_i, r_i)$), then
$$\frac{1}{2\pi}\sum_{1\le i\le N} \frac{1}{r_i}\cdot {\rm Vol}_{n-1}(F_i)$$
is equal to the volume of $\cup_{i=1}^{N}B^{n-2}({\bf p}_i, r_i)$ (resp., $\cap_{i=1}^{N}B^{n-2}({\bf p}_i, r_i)$) lying in $L$.
\end{theorem}
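Theorem~\ref{II} can be checked by hand in the first nontrivial case $n=3$, $N=2$, with two congruent balls of radius $r$ centred on a line $L$ at distance $d\le 2r$: each sphere contributes a cap of height $r+d/2$ to the boundary of the union, and the weighted sum collapses to $2r+d$, the length of the union of the two intervals in $L$. A small Python check of this computation (my own illustration, equal radii assumed):

```python
import math

def weighted_boundary_sum(d, r):
    # Two spheres of radius r in E^3 with centres on a line at distance d <= 2r.
    # Each contributes to the union's boundary a spherical cap of height r + d/2,
    # of area 2*pi*r*(r + d/2).
    cap = 2 * math.pi * r * (r + d / 2)
    return (1 / (2 * math.pi)) * (1 / r) * (2 * cap)

def interval_union_length(d, r):
    # Length of [-r, r] union [d - r, d + r] on the line L, for d <= 2r.
    return 2 * r + d

for d in [0.0, 0.5, 1.0, 1.5]:
    assert abs(weighted_boundary_sum(d, 1.0) - interval_union_length(d, 1.0)) < 1e-12
```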
\medskip
\begin{theorem} \label{III} If ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ is a contraction of ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ in ${\bf E}^n$, then there is an analytic contraction of ${\bf p}$ onto ${\bf q}$ in ${\bf E}^{2n}$.
\end{theorem}
\medskip
Note that Theorems~\ref{I}, \ref{II} and \ref{III} imply in a straightforward way that Conjecture~\ref{elso} as well as Conjecture~\ref{masodik} hold in the Euclidean plane. Also, it is worth mentioning that, somewhat surprisingly, Theorem~\ref{III} (also called the leapfrog lemma) cannot be improved: namely, it has been proved in \cite{mbc} that there exist point configurations
${\bf q}$ and ${\bf p}$ in ${\bf E}^n$, constructed in the way suggested in \cite{bc}, such that ${\bf q}$ is a contraction of ${\bf p}$ in ${\bf E}^n$ and there is no continuous contraction from ${\bf p}$ to ${\bf q}$ in ${\bf E}^{2n-1}$.
\section{Further results obtained from the proof of Bezdek and Connelly}
\label{five}
It is worth listing two additional results obtained from the proof published in \cite{bc} in order to describe a more complete picture of the status of the Kneser-Poulsen conjecture. For more details see \cite{bc}.
\medskip
\begin{theorem} Let ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ and ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ be two point configurations in ${\bf E}^n$ such that ${\bf q}$ is a piecewise-analytic contraction of ${\bf p}$ in ${\bf E}^{n+2}$. Then the conclusions of Conjecture~\ref{elso} as well as Conjecture~\ref{masodik} hold in ${\bf E}^n$.
\end{theorem}
The following generalizes a result of Gromov in \cite{g}, who proved it in the case $N\le n+1$.
\begin{theorem} If ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ is an arbitrary contraction of
${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ in ${\bf E}^n$ and $N\le n+3$, then both Conjecture~\ref{elso} and Conjecture~\ref{masodik} hold.
\end{theorem}
As a next step it would be natural to investigate the case $N=n+4$.
\medskip
\section{Kneser-Poulsen-type results for spherical and hyperbolic convex polytopes}
\label{six}
It is somewhat surprising that in spherical space, for a specific radius of balls (i.e.\ spherical caps), one can find a proof of both Conjecture~\ref{elso} and Conjecture~\ref{masodik} in all dimensions. The magic radius is $\frac{\pi}{2}$ and the following theorem describes the desired result in detail.
\medskip
\begin{theorem} \label{gombi} If a finite set of closed $n$-dimensional balls of radius $\frac{\pi}{2}$ (i.e. of closed hemispheres) in the $n$-dimensional spherical space is rearranged so that the (spherical) distance between each pair of centers does not increase, then the (spherical) $n$-dimensional volume of the intersection does not decrease and the (spherical) $n$-dimensional volume of the union does not increase.
\end{theorem}
\medskip
The method of the proof published by Bezdek and Connelly in \cite{bc04} can be described as follows. First, one can use a leapfrog lemma to move one configuration to the other in an analytic and monotone way, but only in higher dimensions. Then the higher-dimensional balls have their combined volume (their intersections or unions) change monotonically, a fact that one can prove using Schl\"afli's differential formula. Then one can apply an integral formula to relate the volume of the higher dimensional object to the volume of the lower-dimensional object, obtaining the volume inequality for the more general discrete motions.
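For $N=2$ the spherical case can be checked in closed form on ${\bf S}^2$: the intersection of the hemispheres $\{x\cdot a\ge 0\}$ and $\{x\cdot b\ge 0\}$ is a lune of dihedral angle $\pi-\theta$, where $\theta$ is the angle between $a$ and $b$, so its area is $2(\pi-\theta)$ and, by inclusion-exclusion, the union has area $2\pi+2\theta$. A quick Python check (my own illustration):

```python
import math

def inter_area(theta):
    # Intersection of two hemispheres of S^2 whose centres subtend angle theta:
    # a lune of dihedral angle pi - theta, hence of area 2*(pi - theta).
    return 2 * (math.pi - theta)

def union_area(theta):
    # Inclusion-exclusion: each hemisphere has area 2*pi, so the union
    # has area 4*pi - inter_area(theta) = 2*pi + 2*theta.
    return 4 * math.pi - inter_area(theta)

thetas = [0.1 * k for k in range(31)]       # 0 <= theta <= 3 < pi
for t1, t2 in zip(thetas, thetas[1:]):
    # Contracting the centres from angle t2 to t1 < t2 increases the
    # intersection and decreases the union, as the theorem asserts.
    assert inter_area(t1) > inter_area(t2)
    assert union_area(t1) < union_area(t2)
```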
The following statement is a corollary of Theorem~\ref{gombi} (for details see \cite{bc04}), the Euclidean part of which has been proved independently by Alexander \cite{a85}, Capoyleas and Pach \cite{cp} and Sudakov \cite{su}.
\begin{theorem}
Let ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ be $N$ points on a hemisphere of the $2$-dimensional spherical space ${\bf S}^2$ (resp., points in ${\bf E}^2$), and let ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ be a contraction of ${\bf p}$ in ${\bf S}^2$ (resp., in ${\bf E}^2$). Then the perimeter of the convex hull of ${\bf q}$ is less than or equal to the perimeter of the convex hull of ${\bf p}$.
\end{theorem}
We remark that Theorem~\ref{gombi} extends to flowers as well; moreover, a positive answer to the following problem would imply that both Conjecture~\ref{elso} and Conjecture~\ref{masodik} hold for circles in ${\bf S}^2$ (for more details on this see \cite{bc04}).
\begin{prob}
Suppose that ${\bf p}=({\bf p}_1, {\bf p}_2, \dots, {\bf p}_N)$ and ${\bf q}=({\bf q}_1, {\bf q}_2, \dots, {\bf q}_N)$ are two configurations in ${\bf S}^2$ such that ${\bf q}$ is a contraction of ${\bf p}$. Prove or disprove that there is a monotone piecewise-analytic motion from
${\bf p}$ to ${\bf q}$ in ${\bf S}^4$.
\end{prob}
Note that, in fact, Theorem~\ref{gombi} states a volume inequality between two spherically convex polytopes satisfying some metric conditions. The following problem searches for a natural analogue of that in hyperbolic $3$-space. In order to state it properly we recall the following. Let $A$ and $B$ be two planes in the hyperbolic $3$-space and let $A^+$ (resp., $B^+$) denote one of the two closed halfspaces bounded by $A$ (resp., $B$) such that the set $A^+\cap B^+$ is nonempty. Recall that either $A$ and $B$ intersect, or $A$ is parallel to $B$, or $A$ and $B$ have a line perpendicular to both of them. Now, ``the dihedral angle $A^+\cap B^+$'' refers not only to the set in question but also, in the first case, to the standard angular measure of the corresponding angle between $A$ and $B$; in the second case it refers to $0$; and in the third case it refers to the negative of the distance between $A$ and $B$.
\begin{prob}
Let $P$ and $Q$ be compact convex polyhedra of the $3$-dimensional hyperbolic space with $P$ (resp., $Q$) being the intersection of the closed halfspaces $H_1^P, H_2^P, \dots, H_N^P$ (resp., $H_1^Q, H_2^Q, \dots, H_N^Q$). Assume that the dihedral angle $H_i^Q\cap H_j^Q$ is at least as large as the corresponding dihedral angle $H_i^P\cap H_j^P$ for all $1\le i<j\le N$. Then prove or disprove that the volume of $P$ is at least as large as the volume of $Q$.
\end{prob}
Using Andreev's version \cite{an} of the Koebe-Andreev-Thurston theorem and Schl\"afli's differential formula Bezdek \cite{b05} proved the following partial analogue of Theorem~\ref{gombi} in hyperbolic $3$-space.
\begin{theorem}
Let $P$ and $Q$ be nonobtuse-angled compact convex polyhedra of the same simple combinatorial type in hyperbolic $3$-space. If each inner dihedral angle of $Q$ is at least as large as the corresponding inner dihedral angle of $P$, then the volume of $P$ is at least as large as the volume of $Q$.
\end{theorem}
\medskip
\section{Alexander's conjecture}
\label{seven}
\medskip
It seems that in the Euclidean plane, for the case of the intersection of congruent disks, one can sharpen the results proved by Bezdek and Connelly \cite{bc}. Namely, Alexander \cite{a85} conjectures the following.
\begin{con} \label{alexander}
Under arbitrary contraction of the center points of finitely many congruent disks in the Euclidean plane, the perimeter of the intersection of the disks cannot decrease.
\end{con}
The analogous question for the union of congruent disks has a negative answer, as was observed by Habicht and Kneser long ago (for details see \cite{bc}). In \cite{bcc} some supporting evidence for the above conjecture of Alexander has been collected; in particular, the following theorem was proved.
\begin{theorem}
Alexander's conjecture holds for continuous contractions of the center points and it holds up to $4$ congruent disks under arbitrary contractions of the center points.
\end{theorem}
We note that Alexander's conjecture does not hold for incongruent disks (even under continuous contractions of their center points), as is shown in \cite{bcc}. Last but not least, we remark that if Alexander's conjecture were true, then it would be a rare instance of an asymmetry between intersections and unions for Kneser-Poulsen-type questions.
\medskip
\section{Disk-polygons and ball-polyhedra}
\label{eight}
The previous sections indicate a good deal of geometry on unions and intersections of balls that is worth studying. In particular, when we restrict our attention to intersections of balls, the underlying convexity suggests a broad spectrum of new analytic and combinatorial results. To make the setup ideal for discrete geometry, from now on we will look at intersections of finitely many congruent closed $n$-dimensional balls with non-empty interior in ${\bf E}^n$. Also, it is natural to assume that removing any of the balls defining the ball-polyhedron in question yields a larger intersection of the remaining balls. If $n=2$, then we will call the sets in question {\it disk-polygons} and for $n\ge 3$ they will be called {\it ball-polyhedra}. This definition along with some basic properties of ball-polyhedra (resp., disk-polygons) were introduced by Bezdek in a sequence of talks at the University of Calgary in the fall of 2004. Based on that, the paper \cite{blnp} written by Bezdek, L\'angi, Nasz\'odi and Papez systematically extended those investigations to get a better understanding of the geometry of ball-polyhedra (resp., disk-polygons) by proving quite a number of theorems, which one can regard as the analogues of the classical theorems on convex polytopes.
\medskip
\section{Finding the shortest billiard trajectories in disk-polygons}
\label{nine}
Billiards have been around for quite some time in mathematics and generated a great deal of research. (See for example the recent elegant book \cite{t} of Tabachnikov.)
For our purposes it seems natural to define billiard trajectories in the following way. This introduces a larger class of polygons as billiard trajectories than the traditional definition widely used in the literature. So, let $C$ be an arbitrary convex domain, that is, a compact convex set with non-empty interior in the Euclidean plane. Then we say that the closed polygonal path $P$ (possibly with self-intersections) is a {\it generalized billiard trajectory} of $C$ if all the vertices of $P$ lie on the boundary of $C$ and if all the inner angle bisectors of $P$ are perpendicular to a supporting line of $C$ passing through the corresponding vertex of $P$. If $P$ has $N$ sides, then we say that $P$ is an {\it $N$-periodic generalized billiard trajectory} in $C$. Note that our definition of generalized billiard trajectories coincides with the traditional definition of billiard trajectories whenever the billiard table has no corner points. According to Birkhoff's well-known theorem, if $B$ is a strictly convex billiard table with smooth boundary (that is, if the boundary of $B$ is a simple, closed, smooth and strictly convex curve) in the Euclidean plane, then for every positive integer $N>1$ there exist (at least two) $N$-periodic billiard trajectories in $B$. This motivates the following theorem that has just been proved in \cite{bb}. In order to state that theorem in as short a form as possible, it seems natural to introduce the following concept. Let $D$ be a disk-polygon in the Euclidean plane having the property that the pairwise distances between the centers of its generating disks of radii $r$ are at most $r$. In short, we say that $D$ is a {\it fat disk-polygon} with parameter $r>0$.
In fact, it is easy to see that the disk-polygon $D$ with parameter $r$ is a fat disk-polygon if and only if the centers of the generating disks of $D$ belong to $D$ or, putting it somewhat differently, if and only if the center of any (closed) circular disk of radius $r$ containing $D$ belongs to $D$.
\begin{theorem}\label{altalanos}
Let $D$ be a fat disk-polygon in the Euclidean plane. Then any of the shortest generalized billiard trajectories in $D$ is a $2$-periodic one.
\end{theorem}
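The simplest fat disk-polygon is a lens, the intersection of two disks of radius $r$ whose centers are at distance $x\le r$; here the shortest generalized billiard trajectory bounces back and forth along the line of centers, and its length $2(2r-x)$ equals $2\cdot{\rm width}(D)$. The following Python sketch (my own illustration) checks numerically that ${\rm width}(D)=2r-x$ by sampling the boundary:

```python
import math

def lens_boundary(x, r, m=1000):
    # Boundary of the lens B((-x/2, 0), r) intersected with B((x/2, 0), r),
    # sampled along both circular arcs.
    yv = math.sqrt(r * r - x * x / 4.0)     # vertices of the lens: (0, +-yv)
    alpha = math.atan2(yv, x / 2.0)         # half-angle of each arc
    pts = []
    for k in range(m + 1):
        t = -alpha + 2.0 * alpha * k / m
        pts.append((-x / 2 + r * math.cos(t), r * math.sin(t)))  # right arc
        pts.append((x / 2 - r * math.cos(t), r * math.sin(t)))   # left arc
    return pts

def width(pts, m=500):
    # Minimal width: minimum over directions of the extent of the projections.
    w = float("inf")
    for k in range(m):
        c, s = math.cos(math.pi * k / m), math.sin(math.pi * k / m)
        proj = [c * px + s * py for px, py in pts]
        w = min(w, max(proj) - min(proj))
    return w

r, x = 1.0, 0.8
# The minimal width is attained along the line of centers: width(D) = 2r - x.
assert abs(width(lens_boundary(x, r)) - (2 * r - x)) < 1e-4
```

The shortest generalized billiard trajectory then has length $2\cdot{\rm width}(D)=2(2r-x)$.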
Take a disk-polygon $D$ with generating disks of radii $r>0$. Then choose a positive $\varepsilon$ not larger than the inradius of $D$ (which is the radius of the largest circular disk contained in $D$) and take the union of all circular disks of radius $\varepsilon$ that lie in $D$. The set obtained in this way is called the {\it $\varepsilon$-rounded disk-polygon} of $D$ and is denoted by $D(\varepsilon)$. The proof of the following theorem, published in \cite{bb}, is based on Theorem~\ref{altalanos}.
\begin{theorem}
Let $D$ be a fat disk-polygon in the Euclidean plane. Then any of the shortest (generalized) billiard trajectories in the $\varepsilon$-rounded disk-polygon $D(\varepsilon)$ is a $2$-periodic one for all $\varepsilon>0$ being sufficiently small.
\end{theorem}
Actually, we believe that the following even stronger statement holds (see also \cite{bb}).
\begin{con}
Let $D$ be a fat disk-polygon in the Euclidean plane. Then any of the shortest (generalized) billiard trajectories in the $\varepsilon$-rounded disk-polygon $D(\varepsilon)$ is a $2$-periodic one for all $\varepsilon$ being at most as large as the inradius of $D$.
\end{con}
Last but not least we mention the following result, obtained as a corollary of Theorem~\ref{altalanos}. It might be of independent interest, in particular because it generalizes the result proved in \cite{bc89} that any closed curve of length at most $1$ can be covered by a translate of any convex domain of constant width $\frac{1}{2}$ in the Euclidean plane. As usual, if $C$ is a convex domain of the Euclidean plane, then let ${\rm width}(C)$ denote the minimal width of $C$ (that is, the smallest distance between two parallel supporting lines of $C$).
\begin{cor}
Let $D$ be a fat disk-polygon in the Euclidean plane. Then any closed curve of length at most $2\cdot{\rm width}(D)$ of the Euclidean plane can be covered by a translate of $D$.
\end{cor}
\medskip
It would be natural and important to look for higher dimensional analogues of these theorems.
\medskip
\section{Searching for an analogue of Steinitz theorem for standard ball-polyhedra in ${\bf E}^3$}
\label{ten}
\medskip
One can represent the boundary of a ball-polyhedron in ${\bf E}^3$ as the union of {\it vertices, edges} and {\it faces} defined in a rather natural way as follows. A boundary point is called a {\it vertex} if
it belongs to at least three of the closed balls defining the ball-polyhedron.
A {\it face} of the ball-polyhedron is the intersection of
one of the generating closed balls with the boundary of the ball-polyhedron.
Finally, if the intersection of two faces is non-empty, then it is the
union of (possibly degenerate) circular arcs. The non-degenerate
arcs are called {\it edges} of the ball-polyhedron.
Obviously, if a ball-polyhedron in ${\bf E}^3$ is generated by at least three balls, then it
possesses vertices, edges and faces. Finally, a ball-polyhedron is called a
{\it standard ball-polyhedron} if its vertices, edges and faces (together with the empty set and the ball-polyhedron itself) form an algebraic lattice with respect to containment. We note that not every ball-polyhedron of ${\bf E}^3$ is a standard one, a fact that is somewhat surprising and is responsible for some of the difficulties arising in the study of ball-polyhedra in general (for more details see \cite{bn} as well as \cite{blnp}).
In this survey paper, a {\it graph} is always a non-oriented one and has finitely many
vertices and edges. Recall that a graph is {\it $3$-connected} if it has at least four vertices
and deleting any two vertices yields a connected graph. Also, a graph is called
{\it simple} if it contains no loops (edges with identical end-points)
and no parallel edges (edges with the same two end-points). Finally, a graph is {\it planar} if it can be drawn in the Euclidean plane without crossing edges. Now, recall that according to the well-known theorem of Steinitz a graph is the edge-graph of some convex polyhedron in ${\bf E}^3$ if, and
only if, it is simple, planar and $3$-connected. As a partial analogue of Steinitz's theorem for ball-polyhedra, the following theorem is proved in \cite{blnp}.
\begin{theorem}
The edge-graph of any standard ball-polyhedron in ${\bf E}^3$ is a
simple, planar and $3$-connected graph.
\end{theorem}
Based on that it would be highly interesting to find an answer to the following question raised in \cite{blnp}.
\begin{prob}
Prove or disprove that every simple, planar and $3$-connected graph is
the edge-graph of some standard ball-polyhedron in ${\bf E}^3$.
\end{prob}
\medskip
\section{On global rigidity of ball-po\-ly\-hed\-ra in ${\bf E}^3$}
\label{eleven}
\medskip
One of the best known
results on the geometry of convex polyhedra is Cauchy's
rigidity theorem: If two convex polyhedra $P$ and $Q$ in ${\bf E}^3$ are combinatorially
equivalent with the corresponding facets being congruent, then also the angles
between the corresponding pairs of adjacent facets are equal and thus, $P$
is congruent to $Q$. For more details on Cauchy's rigidity theorem and on its extensions
we refer the interested reader to \cite{C}. In order to phrase properly the main theorem of this section we need to recall the following terminology. To each edge of a ball-polyhedron in ${\bf E}^3$ we can assign an
{\it inner dihedral angle}. Namely, take any point ${\bf p}$ in the
relative interior of the edge and take the two balls that contain
the two faces of the ball-polyhedron meeting along that edge. Now, the
inner dihedral angle along this edge is the angle of the two half-spaces
supporting the two balls at ${\bf p}$.
The angle in question is obviously independent of the choice of ${\bf p}$.
Finally, at each vertex of a face of a ball-polyhedron there is a
{\it face angle} formed by the two edges meeting at the given vertex
(which is in fact, the angle between the two tangent half-lines of the two edges
meeting at the given vertex). We say that the standard ball-polyhedron $P$ in ${\bf E}^3$ is {\it globally rigid with respect
to its face angles} (resp. {\it its inner dihedral angles}) if the following holds:
If $Q$ is another standard ball-polyhedron in ${\bf E}^3$ whose face-lattice is isomorphic
to that of $P$ and whose face angles (resp. inner dihedral angles)
are equal to the corresponding face angles (resp. inner dihedral angles)
of $P$, then $Q$ is similar to $P$. (Note that in case the family of ball-polyhedra is defined with the additional restriction that the radii of the generating balls are all equal to say, $1$, then in the above definition of global rigidity ``similar'' should be replaced by ``congruent'' as in \cite{bn}.) A ball-polyhedron of ${\bf E}^3$ is called {\it triangulated} if all its faces are bounded by three edges. It is easy to see that any triangulated ball-polyhedron is, in fact, a standard one. The following theorem has been proved in \cite{bn}.
\begin{theorem}
Let $P$ be a triangulated ball-polyhedron in ${\bf E}^3$.
Then $P$ is globally rigid with respect to its face angles.
\end{theorem}
It remains a challenging problem to answer the following related question.
\begin{prob}
Let $P$ be a triangulated ball-polyhedron in ${\bf E}^3$.
Prove or disprove that $P$ is globally rigid with respect to its dihedral angles.
\end{prob}
Finally, we mention that one can regard the above problem as an analogue of Stoker's conjecture \cite{st}, according to which, for convex polyhedra, the face-lattice and the dihedral angles determine the face angles.
\section{Illumination of ball-polyhedra}
\label{twelve}
As we have mentioned before, \cite{blnp} lays a broad ground for the future study of ball-polyhedra by proving several new properties of them and raising open research problems as well. This list includes, among many other things, analogues of the classical separation theorems for convex polytopes, a Kirchberger-type theorem, and analogues of the Caratheodory theorem and the Euler-Poincare formula for ball-polyhedra. Here we want to focus on another possible direction for research. Let $K$ be a convex body (i.e.\ a compact convex set with nonempty interior) in the $n$-dimensional Euclidean space ${\bf E}^{n}, n\geq 2$. According to Hadwiger (see \cite{b06}) an exterior point ${\bf p}\in {\bf E}^{n}\setminus K$ of $K$ illuminates the boundary point ${\bf q}$ of $K$ if the half line emanating from ${\bf p}$ passing through ${\bf q}$ intersects the interior of $K$ (at a point not between ${\bf p}$ and ${\bf q}$). Furthermore, a family of exterior points of $K$, say ${\bf p}_1, {\bf p}_2, \dots , {\bf p}_N$, illuminates $K$ if each boundary point of $K$ is illuminated by at least one of the point sources ${\bf p}_1, {\bf p}_2, \dots , {\bf p}_N$. Finally, the smallest $N$ for which there exist $N$ exterior points of $K$ that illuminate $K$ is called the {\it illumination number} of $K$, denoted by $I(K)$. In 1960, Hadwiger (see \cite{b06}) raised the following amazingly elementary but very fundamental question. An equivalent but somewhat different-looking concept of illumination was introduced by Boltyanski in the same year. There he proposed to use directions (i.e.\ unit vectors) instead of point sources for the illumination of convex bodies (for more details see \cite{b06}). Based on these circumstances, we call the following conjecture the Boltyanski-Hadwiger illumination conjecture: the illumination number $I(K)$ of any convex body $K$ in ${\bf E}^{n}, n\geq 2$, is at most $2^n$, and $I(K)=2^n$ if and only if $K$ is an affine $n$-cube.
This conjecture is easy to prove for $n=2$ but it is open for all $n\ge 3$.
The following statement follows from the Separation Lemma of Bezdek \cite{b06}. In order to state it properly we need to recall two basic notions. Let $K$ be a convex body in ${\bf E}^n$ and let $F$ be a face of
$K$, that is, the intersection of $K$ with some of its supporting hyperplanes.
The {\it Gauss image} $\nu (F)$ of the face $F$ is the set of
all points (i.e. unit vectors) ${\bf u}$ of the $(n-1)$-dimensional unit sphere ${\bf S}^{n-1}\subset
{\bf E}^n$ centered at the origin ${\bf o}$ of ${\bf E}^n$ for which the supporting
hyperplane of $K$ with outer normal vector ${\bf u}$ contains $F.$
It is easy to see that the Gauss images of distinct faces of $K$
have disjoint relative interiors in ${\bf S}^{n-1}$ and $\nu (F)$ is compact and spherically
convex for any face $F$. Let $C\subset {\bf S}^{n-1}$ be a set of finitely many points. Then the {\it covering radius} of $C$ is the smallest positive real number $r$ with the property that the family of spherical balls of radii $r$ centered at the points of $C$ cover ${\bf S}^{n-1}$.
\begin{theorem}\label{becsles}
Let $K\subset {\bf E}^n$, $n\geq 3$ be a convex body and let $r$ be a positive real number with the property that the Gauss image $\nu (F)$ of any face $F$ of $K$ can be covered by a spherical ball of radius $r$ in ${\bf S}^{n-1}$. Moreover, assume that there exist $N$ points of ${\bf S}^{n-1}$ with covering radius $R$ satisfying the inequality $r+R\le\frac{\pi}{2}$. Then $I(K)\le N$.
\end{theorem}
Using Theorem~\ref{becsles} as well as the optimal codes for the covering radii of four and five points on ${\bf S}^2$ (\cite{ft}), one can easily prove the result stated below. (In fact, weaker but still reasonable estimates can be proved for larger values of $x$ (relative to $r$) by taking into account additional (optimal) codes from \cite{ft}.)
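For instance (our computation, consistent with the optimal four-point code of \cite{ft}), the vertex set of a regular tetrahedron realizes the optimal covering radius for $N=4$:

```latex
% Covering radius of the regular tetrahedron inscribed in S^2 (sketch).
Let $C=\{{\bf v}_1,{\bf v}_2,{\bf v}_3,{\bf v}_4\}\subset{\bf S}^2$ with
${\bf v}_1+{\bf v}_2+{\bf v}_3+{\bf v}_4={\bf o}$, so that
$\langle{\bf v}_i,{\bf v}_j\rangle=-\frac{1}{3}$ for $i\neq j$. The points
of ${\bf S}^2$ farthest from $C$ are the antipodes $-{\bf v}_i$, at angular
distance
\[
\arccos\langle -{\bf v}_i,{\bf v}_j\rangle=\arccos\frac{1}{3}
\approx 70.53^{\circ}
\]
from the three nearest points of $C$. Hence the covering radius is
$R=\arccos\frac{1}{3}$, and the hypothesis $r+R\le\frac{\pi}{2}$ of
Theorem~\ref{becsles} is satisfied whenever every Gauss image fits in a cap
of radius $r\le\frac{\pi}{2}-\arccos\frac{1}{3}\approx 19.47^{\circ}$.
```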
\medskip
\begin{theorem} Let $B(x, r)$ be a ball-polyhedron in ${\bf E}^{3}$ having the property that the diameter of the set of centers of its generating balls is at most $x$, where $0<x< 2r$ with $r$ standing for the common radius of the generating balls of $B(x, r)$. Then for $0<x\le 0.57r$ we have that $I(B(x, r))=4$, and for $0.57r<x\le 0.77r$ we get that $I(B(x, r))\le 5$.
\end{theorem}
\medskip
Based on this it is tempting to raise the following question.
\begin{prob}
Prove or disprove that if $B$ is an arbitrary ball-polyhedron of ${\bf E}^{3}$, then $I(B)\le 5$. More generally, prove or disprove that there exists a universal constant $c>0$ such that the illumination number of any $n$-dimensional ball-polyhedron in ${\bf E}^{n}$ is smaller than $(2-c)^n$ for all $n\ge 3$.
\end{prob}
\medskip
\section{Introduction}
Let $D$ be an integral domain. Semistar operations on $D$ are a class of closure operations on the set of $D$-submodules of the quotient field $K$ of $D$, defined by Okabe and Matsuda \cite{okabe-matsuda} as a generalization of the concept of star operation, originally introduced by Krull \cite{krull_breitage_I-II} and Gilmer \cite[Chapter 32]{gilmer}. Semistar operations enjoy greater flexibility than star operations, making them a good tool to use in order to study several topics relative to the properties of ideals of $D$, as well as the properties of overrings of $D$. There are several subclasses of semistar operations that are particularly of interest, among which we cite \emph{finite type} operations, \emph{eab} operations (that are related to the valuation overrings of $D$, cf. \cite{fontana_loper-eab} and \cite[Section 4]{fifolo_transactions}) and \emph{spectral} operations (related to the spectrum of $D$, cf. \cite{anderson_two_2000,anderson_intersections_2005,localizing-semistar}). See Section \ref{sect:prelim} for a precise definition.
A semistar operation $\star$ is \emph{stable} if it distributes over finite intersections, i.e., if $(I\cap J)^\star=I^\star\cap J^\star$ for all $D$-submodules $I,J$; in particular, every spectral semistar operation is stable. Stable operations are naturally connected to \emph{localizing systems} \cite{localizing-semistar} and \emph{singular length functions} \cite[Theorem 6.5 and subsequent discussion]{length-funct}, meaning that there are canonical order-preserving bijections between the sets of stable operations, localizing systems and singular length functions on the same domain $D$ (see Section \ref{sect:other}); in particular, any classification of stable semistar operations classifies, as well, localizing systems and singular length functions. However, while spectral semistar operations can be easily classified through subsets of the spectrum of $D$ (see \cite[Remark 4.5]{localizing-semistar} and \cite[Corollary 4.4]{topological-cons}), the same does not hold for stable operations; indeed, a standard representation and a classification of stable operations have only been obtained in very specific situations, like for one-dimensional domains with scattered maximal space (see \cite{jaff-derived}) and for Pr\"ufer domains such that every ideal has only finitely many minimal primes (see \cite{stable_prufer} and \cite{length-funct}).
In this paper, we study \emph{radical} semistar operations, i.e., stable semistar operations such that, for every ideal $I$, $1\in I^\star$ if and only if $1\in\rad(I)^\star$. This notion arose in the study of almost Dedekind domains generalizing SP-domains: indeed, it can be proved that, for the class of \emph{SP-scattered domains}, every stable operation is radical \cite{almded-radfact}. We systematize the study of this class of stable operations, showing that their set $\insrad(D)$ is a complete lattice (Theorem \ref{teor:completeness}) that is, furthermore, the join-completion of the set $\insspectral(D)$ of spectral operations inside the set $\inssemistar(D)$ of all semistar operations. For \emph{rad-colon coherent domains} (a large class of domains that includes domains with Noetherian spectrum as well as Pr\"ufer and coherent domains), it follows that the set $\insrad(D)$ depends uniquely on the spectrum of $D$ (in the sense that any two such domains with homeomorphic spectra have isomorphic sets of radical operations; Theorem \ref{teor:insrad-iso}).
In Section \ref{sect:scattered}, we connect the study of radical operations with the use of the derived set and of scattered topological spaces (following \cite{jaff-derived,almded-radfact,PicInt}) to show that (under the hypothesis that $D$ is rad-colon coherent) the two sets $\insrad(D)$ and $\insspectral(D)$ coincide if and only if the space $\Min(I)$ of minimal primes of $I$ is scattered for every ideal $I$. Specializing further to the case of Pr\"ufer domains, we show that this property is enough to obtain a full classification of all stable operations of $D$ by means of a standard representation (Theorem \ref{teor:prufer-MinI}), generalizing the results obtained in \cite{stable_prufer} and \cite{length-funct} for the case where each $\Min(I)$ is finite; in particular, we show that for these Pr\"ufer domains the set $\insstable(D)$ depends only on the spectrum of $D$ (as a topological space) and on which prime ideals are locally principal (Theorem \ref{teor:prufer-iso}). In particular, these results hold when the spectrum of $D$ is countable.
In Section \ref{sect:other}, we define the concepts analogous to radical semistar operations in the context of localizing systems and length functions.
\section{Preliminaries}\label{sect:prelim}
Throughout the paper, $D$ will denote an integral domain with quotient field $K$, and $\inssubmod(D)$ will denote the set of $D$-submodules of $K$. An \emph{overring} of $D$ is a ring between $D$ and $K$.
\subsection{Semistar operations}
A \emph{semistar operation} on $D$ is a map $\star:\inssubmod(D)\longrightarrow\inssubmod(D)$, $I\mapsto I^\star$, such that, for every $I,J\in\inssubmod(D)$, $x\in K$:
\begin{itemize}
\item $I\subseteq I^\star$;
\item if $I\subseteq J$, then $I^\star\subseteq J^\star$;
\item $(I^\star)^\star=I^\star$;
\item $(xI)^\star=x\cdot I^\star$.
\end{itemize}
A submodule $I$ is said to be \emph{$\star$-closed} if $I=I^\star$. The set of $\star$-closed submodules uniquely determines $\star$.
The set $\inssemistar(D)$ of the semistar operations on $D$ has a natural partial order, where $\star_1\leq\star_2$ if and only if $I^{\star_1}\subseteq I^{\star_2}$ for every $I\in\inssubmod(D)$, or equivalently if every $\star_2$-closed submodule is $\star_1$-closed. Under this order, $\inssemistar(D)$ is a complete lattice: the infimum of a family $\{\star_\alpha\}_{\alpha\in A}$ is the map
\begin{equation*}
I\mapsto\bigcap_{\alpha\in A}I^{\star_\alpha},
\end{equation*}
while its supremum is the semistar operation $\sharp$ such that a submodule $I$ is $\sharp$-closed if and only if it is $\star_\alpha$-closed for every $\alpha\in A$.
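As a standard first example (a direct check, and a special case of the operations $\star_\Lambda$ induced by overrings recalled below), extension to a fixed overring satisfies all four axioms:

```latex
% Extension to a fixed overring is a semistar operation (sketch).
Let $T$ be an overring of $D$ and let $\star_T:I\mapsto IT$. For all
$I,J\in\inssubmod(D)$ and $x\in K$:
\begin{itemize}
\item $I\subseteq IT$, since $1\in T$;
\item if $I\subseteq J$, then $IT\subseteq JT$;
\item $(IT)T=I(TT)=IT$, so $\star_T$ is idempotent;
\item $(xI)T=x(IT)$.
\end{itemize}
For $T=D$ this is the identity operation, while for $T=K$ one gets
$I^{\star_T}=K$ for every nonzero $I$.
```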
An ideal $I$ of $D$ is said to be a \emph{quasi-$\star$-ideal} if $I=I^\star\cap D$; if $I$ is a prime quasi-$\star$-ideal, we say that $I$ is a \emph{quasi-$\star$-prime}. The set of quasi-$\star$-primes is called the \emph{quasi-spectrum} of $\star$, and is denoted by $\qspec^\star(D)$.
A semistar operation $\star$ is said to be \emph{of finite type} if $I^\star=\bigcup\{J^\star\mid J\subseteq I$ is finitely generated$\}$, for every $I\in\inssubmod(D)$. It is \emph{semi-finite} (or \emph{quasi-spectral}) if every quasi-$\star$-ideal is contained in a quasi-$\star$-prime; every semistar operation of finite type is semi-finite.
A very general way to define semistar operations is through overrings: any family $\Lambda$ of overrings induces the semistar operation
\begin{equation*}
\star_\Lambda:I\mapsto\bigcap_{T\in\Lambda}IT.
\end{equation*}
When $\Lambda$ is a family of localizations of $D$, we say that $\star$ is a \emph{spectral} semistar operation. A spectral semistar operation can also be defined through a subset of the spectrum $\Spec(D)$ of $D$: given a family $\Delta\subseteq\Spec(D)$, we denote by $s_\Delta$ the semistar operation
\begin{equation*}
s_\Delta:I\mapsto\bigcap_{P\in\Delta}ID_P.
\end{equation*}
Setting $\Delta^\downarrow:=\{Q\in\Spec(D)\mid Q\subseteq P$ for some $P\in\Delta\}$, we have that $\qspec^{s_\Delta}(D)=\Delta^\downarrow$, and that $s_\Delta=s_{\Delta^\downarrow}$; moreover, $s_\Delta=s_\Lambda$ if and only if $\Delta^\downarrow=\Lambda^\downarrow$ \cite[Remark 4.5]{localizing-semistar}. A spectral operation $s_\Delta$ is of finite type if and only if $\Delta$ is compact with respect to the Zariski topology \cite[Corollary 4.4]{topological-cons}.
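A concrete instance (our computation): take $D=\insZ$ and $\Delta=\{(0),2\insZ\}=\{2\insZ\}^\downarrow$. Then

```latex
\[
(6\insZ)^{s_\Delta}=6\insQ\cap 6\insZ_{(2)}=2\insZ_{(2)},
\qquad
(6\insZ)^{s_\Delta}\cap\insZ=2\insZ,
\]
since $3$ is a unit in $\insZ_{(2)}$. In particular, $6\insZ$ is not a
quasi-$s_\Delta$-ideal, while $2\insZ$ is, in accordance with
$\qspec^{s_\Delta}(\insZ)=\Delta^\downarrow=\Delta$.
```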
A semistar operation is \emph{stable} if $(I\cap J)^\star=I^\star\cap J^\star$ for every $I,J\in\inssubmod(D)$. Every spectral semistar operation is stable, while every semi-finite stable operation is spectral \cite[Theorem 4]{anderson_overrings_1988}. There exist stable operations that are not spectral: an example is the \emph{$v$-operation} $I\mapsto(D:(D:I))$ when $D$ is a valuation domain with non-principal maximal ideal. The \emph{localizing system associated} to $\star$ is \cite[Section 2]{localizing-semistar}
\begin{equation*}
\mathcal{F}^\star:=\{I\text{~ideal of~}D\mid I^\star\cap D=D\}=\{I\text{~ideal of~}D\mid 1\in I^\star\};
\end{equation*}
this set uniquely determines $\star$, in the sense that if $\star_1,\star_2$ are stable, then $\star_1=\star_2$ if and only if $\mathcal{F}^{\star_1}=\mathcal{F}^{\star_2}$. More precisely, $\star_1\leq\star_2$ if and only if $\mathcal{F}^{\star_1}\subseteq\mathcal{F}^{\star_2}$.
The set $\insspectral(D)$ of spectral semistar operations of $D$ is closed under infima, but not under suprema (see \cite[Example 4.5]{spettrali-eab} and Example \ref{ex:supnonrad} below); note, however, that $\insspectral(D)$ is a complete lattice (see the discussion following Corollary \ref{cor:MinIfinito}). On the other hand, the set $\insstable(D)$ of stable operations is closed under both infima and suprema \cite[Proposition 5.3]{non-ft}.
\subsection{Topologies on the spectrum}
Let $\Spec(D)$ denote the spectrum of $D$, i.e., the set of all prime ideals of $D$. We denote by $\V(I)$ and $\D(I)$, respectively, the closed and the open sets of the Zariski topology associated to an ideal $I$; i.e., $\V(I):=\{P\in\Spec(D)\mid I\subseteq P\}$, while $\D(I):=\Spec(D)\setminus\V(I)$.
The spectrum of a ring can also be endowed with two other topologies. The \emph{inverse} topology is the topology whose subbasic open sets are those in the form $\V(I)$, as $I$ ranges among the finitely generated ideals of $D$; the \emph{constructible} topology is the topology whose subbasic open sets are the $\V(I)$ and the $\D(I)$, for $I$ ranging among the finitely generated ideals of $D$. In particular, the constructible topology is finer than both the Zariski and the inverse topology, and, furthermore, it is Hausdorff.
If $I$ is an ideal of $D$ and $\Min(I)$ denotes the set of minimal primes of $I$, the Zariski and the constructible topology agree on $\Min(I)$ (by \cite[Corollary 4.4.6(i)]{spectralspaces-libro}, applied to the spectral space $\V(I)$).
\subsection{Derived sets and scattered spaces}
Let $X$ be a topological space. A point $x\in X$ is \emph{isolated} if $\{x\}$ is an open set; the set of non-isolated points of $X$ is called the \emph{derived set} of $X$, and is denoted by $\deriv(X)$. Given an ordinal $\alpha$, we define the \emph{$\alpha$-th derived set} as
\begin{equation*}
\deriv^\alpha(X):=\begin{cases}
\deriv(\deriv^\gamma(X)) & \text{if~}\alpha=\gamma+1;\\
\bigcap_{\beta<\alpha}\deriv^\beta(X) & \text{if~}\alpha\text{~is a limit ordinal}.
\end{cases}
\end{equation*}
If $\deriv^\alpha(X)=\emptyset$ for some $\alpha$, the space $X$ is said to be \emph{scattered}. On the other hand, if $\deriv(X)=X$, then $X$ is said to be \emph{perfect}.
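A minimal illustration (our computation) of the two extremes:

```latex
Let $X=\{0\}\cup\{1/n\mid n\geq 1\}$, endowed with the Euclidean topology.
Every point $1/n$ is isolated, while $0$ is not; hence
\[
\deriv(X)=\{0\},\qquad
\deriv^2(X)=\deriv(\{0\})=\emptyset,
\]
so $X$ is scattered. At the opposite extreme, the Cantor set $C$ has no
isolated points, so $\deriv(C)=C$ and $C$ is perfect (hence not scattered).
```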
\section{Radical semistar operations}
\begin{defin}
We say that a semistar operation $\star$ on $D$ is \emph{quasi-radical} if, whenever $1\notin I^\star$ for some ideal $I$ of $D$, then $1\notin\rad(I)^\star$.
\end{defin}
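To see that stability alone does not imply quasi-radicality, one can make explicit the $v$-operation example mentioned in Section \ref{sect:prelim} (a sketch, our computation):

```latex
% The v-operation on a rank-one non-discrete valuation domain is not
% quasi-radical (sketch).
Let $V$ be a valuation domain of rank one, not discrete, with maximal ideal
$M$, and let $v:I\mapsto(V:(V:I))$. Since the value group is dense, the
positive values have infimum $0$, so $xM\subseteq V$ forces $x\in V$; hence
$(V:M)=V$ and $M^v=(V:V)=V$, i.e., $1\in M^v$. On the other hand, every
nonzero principal ideal $tV$ is divisorial, so $(tV)^v=tV$ and
$1\notin(tV)^v$, while $\rad(tV)=M$ because $V$ has rank one. Thus
$1\notin(tV)^v$ but $1\in\rad(tV)^v$, and $v$ is not quasi-radical.
```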
We collect in the next few propositions the main properties of quasi-radical semistar operations.
\begin{prop}\label{prop:psrad-Dstar}
Let $D$ be an integral domain and $\star$ be a semistar operation on $D$. If $\star|_{\inssubmod(D^\star)}$ is quasi-radical as a semistar operation on $D^\star$, then $\star$ is quasi-radical.
\end{prop}
\begin{proof}
Let $I$ be an ideal of $D$ such that $1\notin I^\star$. Then, $1\notin(ID)^\star=(ID^\star)^\star$, and thus, by hypothesis, $1\notin\rad(ID^\star)^\star$. However, $\rad(I)\subseteq\rad(ID^\star)$; hence, $1\notin\rad(I)^\star$. It follows that $\star$ is quasi-radical.
\end{proof}
\begin{prop}\label{prop:pseudrad-ex}
Let $D$ be an integral domain and $\star$ be a semistar operation on $D$.
\begin{enumerate}[(a)]
\item If $\star$ is semi-finite, then it is quasi-radical.
\item If $\star$ is of finite type, then it is quasi-radical.
\item If $\star$ is induced by overrings, then it is quasi-radical.
\item If $\star$ is spectral, then it is quasi-radical.
\end{enumerate}
\end{prop}
\begin{proof}
Suppose $\star$ is semi-finite, and let $I$ be an ideal of $D$ such that $1\notin I^\star$. Then, $J:=I^\star\cap D$ is a quasi-$\star$-ideal such that $1\notin J^\star$. Since $\star$ is semi-finite, there is a quasi-$\star$-prime ideal $P$ containing $J$; thus, $1\notin P^\star\supseteq\rad(J)^\star\supseteq\rad(I)^\star$. Therefore, $\star$ is quasi-radical.
The next three points follow from the facts that every semistar operation of finite type is semi-finite, that the same holds for any semistar operation induced by overrings, and that any spectral semistar operation is induced by overrings.
\end{proof}
\begin{prop}\label{prop:quasirad-inf}
Let $\{\star_\alpha\}_{\alpha\in A}$ be a set of quasi-radical semistar operations on $D$. Then, $\inf_{\alpha\in A}\star_\alpha$ is quasi-radical.
\end{prop}
\begin{proof}
Let $\star:=\inf_{\alpha\in A}\star_\alpha$, and let $I$ be an ideal of $D$ such that $1\notin I^\star$. Since $I^\star=\bigcap_{\alpha\in A} I^{\star_\alpha}$, it follows that there is a $\beta\in A$ such that $1\notin I^{\star_\beta}$. Since $\star_\beta$ is quasi-radical, then $1\notin\rad(I)^{\star_\beta}$, and thus also $1\notin\rad(I)^\star$. Hence $\star$ is quasi-radical.
\end{proof}
The previous proposition does not extend to the supremum of a family of quasi-radical operations, as the next example shows.
\begin{ex}
Let $D$ be a Pr\"ufer domain of dimension $1$ such that $\Max(D)=\{P,Q_0,Q_1,\ldots,Q_n,\ldots,\}$ is countable and with a single non-isolated point, $P$; suppose also that $D_P$ is not discrete. For every $n\inN$, let $T_n:=\bigcap_{i\geq n}D_{Q_i}$; then, $T_n$ is a Pr\"ufer domain whose maximal ideals are the extensions of $Q_i$ (for $i\geq n$) and of $P$; in particular, $\bigcup_nT_n=D_P$.
Recall that a \emph{fractional ideal} of a domain $T$ is an $I\in\inssubmod(T)$ such that $dI\subseteq T$ for some $d\in K$, $d\neq 0$. For every $n$, let $\sharp_n$ and $\star_n$ be the semistar operations defined by
\begin{equation*}
I^{\sharp_n}:=\begin{cases}
IT_n & \text{if~}IT_n\text{~is a fractional ideal over~}T_n\\
K & \text{otherwise},
\end{cases}
\end{equation*}
and
\begin{equation*}
I^{\star_n}:=I^{\sharp_n}\cap(ID_P)^{v_P},
\end{equation*}
where $v_P$ is the $v$-operation on $D_P$. Since $T_n\subseteq D_P$, for every ideal $I$ of $D$ we have $I^{\star_n}=I^{\sharp_n}$: hence, if $1\notin I^{\star_n}$ then $IT_n\neq T_n$ and so $\rad(I)T_n\neq T_n$. Thus, every $\star_n$ is quasi-radical.
Let now $\star$ be the supremum of all $\star_n$. Then, $D_P^\star=D_P$ since $D_P^{\star_n}\subseteq(D_P)^{v_P}=D_P$. Moreover, if $t\in D_P$, then $t\in T_n$ for some $n$, and thus $t\in D^{\star_n}\subseteq D^\star$. Hence, $D^\star=D_P$. For every $n$, $PD_P$ is not a fractional ideal over $T_n$, and thus
\begin{equation*}
(PD_P)^{\star_n}=K\cap(PD_P)^{v_P}=K\cap D_P=D_P.
\end{equation*}
Hence,
\begin{equation*}
P^\star=(PD)^\star=(PD^\star)^\star=(PD_P)^\star=D_P.
\end{equation*}
On the other hand, if $L\neq P$ is a $P$-primary ideal, then $(LD_P)^{v_P}=LD_P$; hence, $LD_P$ is $\star$-closed and thus $L^\star\subseteq LD_P\cap D$, so that $1\notin L^\star$ while $1\in P^\star=\rad(L)^\star$. Therefore, $\star$ is not quasi-radical.
\end{ex}
\begin{comment}
\begin{ex}\label{ex:sup-quasiradical}
Let $(D,\mm)$ be a local domain, and suppose that there are:
\begin{itemize}
\item a valuation domain $V$ of rank $1$, not discrete, such that $\mm V$ is the maximal ideal of $V$;
\item a chain $T_1\subsetneq T_2\subsetneq T_3\subsetneq\cdots$ of overrings of $D$ such that $\bigcup_iT_i=K$ and such that $T_iV=K$ and $\mm T_i\neq T_i$ for every $i$.
\end{itemize}
Let $\star_i$ be the semistar operation
\begin{equation*}
\star_i:I\mapsto IT_i\cap(IV)^v,
\end{equation*}
where $v$ is the $v$-operation on $V$. Then, $\mm^{\star_i}\cap D=\mm$ for every $i$, and thus $1\notin I^{\star_i}$ for every proper ideal $I$; in particular, each $\star_i$ is quasi-radical.
Let now $\star$ be the supremum of the $\star_i$. We claim that $D^\star=V$. Since $V^{\star_i}=V$ for every $i$, $D^\star\subseteq V$. Moreover, if $t\in V$, then there is an $i$ such that $t\in T_i$; then, $t\in D^{\star_i}=T_i\cap V$, and thus $t\in D^\star$. Hence $D^\star=V$. In particular, $\mm^\star=(\mm V)^\star$ can only be $\mm V$ or $V$. In the former case, $\mm V$ should be $\star_i$-closed for every $i$; however,
\begin{equation*}
(\mm V)^{\star_i}=\mm VT_i\cap(\mm V)^v=\mm K\cap V=V,
\end{equation*}
a contradiction.
Let now $J:=\{x\in D\mid v(x)\geq1\}$, where $v$ is a valuation relative to $V$ such that $1$ belongs to the value group $\Gamma_v$. Then, $J$ is an ideal of $V$ of radical $\mm$, since if $m\in\mm$ then $v(m)>0$, so that $v(m^k)=kv(m)>1$ for some $k$. However, $JV=\{x\in V\mid v(x)\geq 1\}$ is a principal ideal of $V$, and thus it is closed by every $\star_i$; hence, $J^\star=JV$ and $1\notin J^\star$. By the previous part of the proof, however, $1\in\rad(J)^\star=\mm^\star$; therefore $\star$ is not quasi-radical.
For an explicit example, let $A$ be a Pr\"ufer domain of dimension $1$ with maximal ideals $P,Q_1,Q_2,\ldots$ and with Jacobson radical $J\neq(0)$; suppose also that $P$ is isolated in $\Max(A)$, with respect to the constructible topology. Suppose that every residue field is $K$ and that $D_P$ is not discrete. Let $\pi:D\longrightarrow A/J\simeq A/P\prod_i A/Q_i$ be the quotient map, and let $\iota:K\longrightarrow A/J$ be the diagonal embedding. Then, $D:=\pi{-1}(\iota(K))$ is a local subring of $A$, such that the extension of its maximal ideal $\mm$ to $A$ is $J$. Hence, if $V:=A_P$, then $\mm V$ is the maximal ideal of $V$. Choose a decreasing family $\Delta_n$ of
Set $T_n:=\bigcap_{i\geq n}A_{Q_i}$; then, $T_n$ is a Pr\"ufer domain whose maximal ideals are the extensions of $Q_i$, for $i\geq n$, and $P$. Hence $\bigcup_{i\geq n}T_i=V$.
\end{ex}
\end{comment}
The main problem in the previous example is that the restriction of a quasi-radical operation on $D$ to an overring of $D$ need not be quasi-radical (as happens for $\star_n|_{\inssubmod(D_P)}$); this in turn is due to the fact that the property of being quasi-radical depends only on the ideals of $D$, rather than on all $D$-submodules of $K$. For this reason, we are only interested in the following subclass of semistar operations.
\begin{defin}
We say that a semistar operation $\star$ on $D$ is \emph{radical} if it is quasi-radical and stable.
\end{defin}
\begin{lemma}\label{lemma:overring-radical}
Let $\star$ be a radical stable operation, and suppose that $T$ is an overring of $D$. Then $\star|_{\inssubmod(T)}$ is radical.
\end{lemma}
\begin{proof}
Let $I$ be a $T$-ideal such that $1\notin I^\star$. Then, $1\notin(I\cap D)^\star$, and since $\star$ is radical we have $1\notin(\rad(I\cap D))^\star$. However, $\rad(I\cap D)=\rad(I)\cap D$; hence
\begin{equation*}
1\notin\rad(I\cap D)^\star=(\rad(I)\cap D)^\star=\rad(I)^\star\cap D^\star.
\end{equation*}
Thus $1\notin\rad(I)^\star$ and so $\star|_{\inssubmod(T)}$ is radical, as claimed.
\end{proof}
\begin{prop}\label{prop:radJchiuso}
Let $D$ be an integral domain and let $\star$ be a radical semistar operation on $D$ such that $D=D^\star$. Let $J$ be an ideal of $D$ such that $J=J^\star$. Then, $\rad(J)^\star=\rad(J)$.
\end{prop}
\begin{proof}
Let $s\in\rad(J)^\star$, and let $t\in s^{-1}\rad(J)\cap D$. Then, $st\in\rad(J)$, and thus there is an $n$ such that $s^nt^n\in J$, i.e., $t^n\in s^{-n}J\cap D$. Hence $t\in\rad(s^{-n}J\cap D)$ and so $s^{-1}\rad(J)\cap D\subseteq\rad(s^{-n}J\cap D)$.
Since $s\in\rad(J)^\star$, we have $1\in s^{-1}\rad(J)^\star$; hence also $1\in \rad(s^{-n}J\cap D)^\star$. Since $\star$ is radical, it follows that $1\in (s^{-n}J\cap D)^\star$; thus $1\in s^{-n}J^\star$ and $s^n\in J^\star=J$. Therefore, $s\in\rad(J)$, and $\rad(J)^\star=\rad(J)$.
\end{proof}
\begin{teor}\label{teor:radical-compllattice}
Let $D$ be an integral domain. Then, the set $\insrad(D)$ of radical stable semistar operations is a complete sublattice of $\inssemistar(D)$.
\end{teor}
\begin{proof}
Let $\{\star_\alpha\}_{\alpha\in A}$ be a family of radical semistar operations. Then, its infimum is quasi-radical by Proposition \ref{prop:quasirad-inf} and stable since every $\star_\alpha$ is stable, and thus $\insrad(D)$ is closed by infima. Let $\star$ be the supremum of $\{\star_\alpha\}_{\alpha\in A}$.
Let $T:=D^\star$: then, $T$ is $\star_\alpha$-closed for every $\alpha$. By Proposition \ref{prop:psrad-Dstar}, it suffices to show that $\star|_{\inssubmod(T)}$ is radical; furthermore, by Lemma \ref{lemma:overring-radical}, each $\star_\alpha|_{\inssubmod(T)}$ is radical. Therefore, without loss of generality, we can actually suppose that $T=D$, i.e., that $D$ is $\star_\alpha$-closed for every $\alpha$.
Let $J$ be an ideal of $D$ such that $1\notin J^\star$. Let $L:=J^\star$; then, $L$ is an ideal of $D$ that is $\star_\alpha$-closed for every $\alpha$, and thus by Proposition \ref{prop:radJchiuso} also $\rad(L)$ is $\star_\alpha$-closed for every $\alpha$; thus, $\rad(L)=\rad(L)^\star$. In particular, $1\notin\rad(L)^\star$; the claim now follows from the fact that $\rad(J)\subseteq\rad(L)$.
\end{proof}
\section{Radical operations as a completion}
By Proposition \ref{prop:pseudrad-ex}, each spectral semistar operation $s_\Delta$ is radical; in this section, we explore the link between these two classes of semistar operations. Following \cite[Example 4.5]{spettrali-eab}, we first give an example of a radical operation that is not spectral.
\begin{ex}\label{ex:supnonrad}
Let $\ins{A}$ be the ring of all algebraic integers, i.e., the integral closure of $\insZ$ in $\overline{\insQ}$. Then, $\ins{A}$ is a B\'ezout domain (every finitely generated ideal is principal) and, for every maximal ideal $P$, we have that $\ins{A}=\bigcap\{\ins{A}_Q\mid Q\in\Max(\ins{A})\setminus\{P\}\}$. Hence, for each $P$ the spectral operation $\sharp(P):=s_{\Max(\ins{A})\setminus\{P\}}$ closes $\ins{A}$; thus the supremum $\star$ of all the $\sharp(P)$ also closes $\ins{A}$, and therefore every principal ideal (since $(x\ins{A})^\star=x\cdot\ins{A}^\star=x\cdot\ins{A}$).
As the supremum of a family of radical operations, $\star$ is itself radical. However, for every $P$-primary ideal $Q$, we have $Q^{\sharp(P)}=\ins{A}$; therefore, $\qspec^\star(\ins{A})$ contains only the zero ideal. In particular, were $\star$ spectral, it would be equal to $s_{(0)}$, and in particular $1$ would belong to $I^\star$ for every nonzero ideal $I$, contradicting the fact that principal ideals are closed. Hence $\star$ is radical, but not spectral.
\end{ex}
The following proposition characterizes which radical operations are spectral.
\begin{prop}\label{prop:caratt-spectral}
Let $\star$ be a radical stable operation on $D$. Then, $\star$ is spectral if and only if, for every radical ideal $I$,
\begin{equation*}
I^\star\cap D=\bigcap\{P\mid P\in \V(I)\cap\qspec^\star(D)\}.
\end{equation*}
\end{prop}
\begin{proof}
Suppose first that $\star$ is spectral, say $\star=s_\Delta$ with $\Delta=\Delta^\downarrow$. For every $P\in\Delta$, the ideal $ID_P$ is radical, and its minimal primes are the minimal primes of $I$ contained in $P$; all of them belong to $\Delta$, and thus they are all in $\V(I)\cap\qspec^\star(D)$. Hence,
\begin{equation*}
I^\star\cap D=\bigcap_{P\in\Delta}\bigcap\{QD_P\cap D\mid Q\in\Min(I),\ Q\subseteq P\}=\bigcap_{Q\in\Min(I)\cap\Delta}Q.
\end{equation*}
The claim follows.
Conversely, suppose that the equality holds, and let $\Delta:=\qspec^\star(D)$. For every $P\in\Delta$, $PD_P$ is $\star$-closed, and thus $\star$ is the identity on $\inssubmod(D_P)$; it follows that $I^\star\subseteq ID_P$ for every $P\in\Delta$, and thus $\star\leq s_\Delta$.
Suppose that $\star<s_\Delta$: then, there is an ideal $I$ of $D$ such that $I^\star\subsetneq I^{s_\Delta}$. Let $x\in I^{s_\Delta}\setminus I^\star$ and let $J:=(I:_Dx)$. Since $\star$ is stable, we have $1\in J^{s_\Delta}$ while $1\notin J^\star$; since both $s_\Delta$ and $\star$ are radical, it follows that $1\in\rad(J)^{s_\Delta}$ while $1\notin\rad(J)^\star$. However, by the hypothesis and the first part of the proof, $\rad(J)^{s_\Delta}\cap D=\rad(J)^\star\cap D$; this is a contradiction, and thus $\star$ must be equal to $s_\Delta$. In particular, $\star$ is spectral, as claimed.
\end{proof}
\begin{cor}\label{cor:MinIfinito}
Let $D$ be an integral domain such that every ideal has only finitely many minimal primes. Then, every radical stable operation is spectral.
\end{cor}
\begin{proof}
Let $I$ be a radical ideal and $P_1,\ldots,P_n$ be its minimal primes. Then, $I=P_1\cap\cdots\cap P_n$, and thus $I^\star=P_1^\star\cap\cdots\cap P_n^\star$. Since $\star$ is stable, for each $i$ the ideal $P_i^\star\cap D$ is either equal to $P_i$ or to $D$ \cite[Lemma 3.1]{stable_prufer}; hence, $I^\star\cap D$ is equal to the intersection of the minimal primes that are quasi-$\star$-primes. By Proposition \ref{prop:caratt-spectral}, $\star$ is spectral.
\end{proof}
The following result is a variant of \cite[Lemma 3.1]{stable_prufer}.
\begin{prop}\label{prop:chius-rad}
Let $\star$ be a stable semistar operation, and let $J$ be a radical ideal of $D$. Then, $J^\star\cap D$ is either $D$ or a radical ideal.
\end{prop}
\begin{proof}
Suppose $J^\star\cap D\neq D$. Let $s\in D$ be such that $s^n\in J^\star$ for some integer $n$. Let $L:=s^{-n}J\cap D$: since $\star$ is stable, $1\in L^\star$. We claim that $s^{-1}J\cap D=\rad(L)$. Indeed, if $x\in s^{-1}J\cap D$ then $sx\in J$ and thus also $s^nx\in J$, i.e., $x\in s^{-n}J\cap D=L\subseteq\rad(L)$. On the other hand, if $x\in\rad(L)$, then $x^k\in s^{-n}J$ for some $k$, and thus $x^ks^n\in J$. Since $x,s\in D$, we have $x^Ns^N\in J$, where $N:=\max\{n,k\}$; since $J$ is radical, it follows that $xs\in J$, that is, $x\in s^{-1}J\cap D$. Thus $s^{-1}J\cap D=L=\rad(L)$.
Since $1\in L^\star$, it follows that $1\in(s^{-1}J)^\star=s^{-1}J^\star$, that is, $s\in J^\star$. Hence $J^\star\cap D$ is radical, as claimed.
\end{proof}
\begin{prop}\label{prop:dense-closed}
Let $D$ be a domain, let $I$ be a radical ideal of $D$ and $\Delta=\Delta^\downarrow\subseteq\Spec(D)$. Then, the following are equivalent:
\begin{enumerate}[(i)]
\item $I=I^{s_\Delta}\cap D$;
\item $\V(I)\cap\Delta$ is dense in $\V(I)$;
\item $\Min(I)\cap\Delta$ is dense in $\Min(I)$.
\end{enumerate}
\end{prop}
\begin{proof}
By Proposition \ref{prop:chius-rad}, the ideal $J:=I^{s_\Delta}\cap D$ is a radical ideal of $D$ containing $I$; therefore, $\V(J)\subseteq \V(I)$ is a closed set, and $\V(J)$ contains $\V(I)\cap\Delta$ since, if $P\in\V(I)\cap\Delta$, then $J=I^{s_\Delta}\cap D\subseteq P^{s_\Delta}\cap D=P$. In particular, if $\V(I)\cap\Delta$ is dense in $\V(I)$, then $\V(J)=\V(I)$ and thus $I=J$. On the other hand, if $\V(I)\cap\Delta$ is not dense, then there is a radical ideal $L\supsetneq I$ such that $\V(I)\cap\Delta\subseteq\V(L)\subsetneq\V(I)$. Since $\Delta=\Delta^\downarrow$, every minimal prime of $I$ contained in some $P\in\Delta$ belongs to $\V(I)\cap\Delta\subseteq\V(L)$, and thus contains $L$; since $ID_P$ is the intersection of the extensions of these minimal primes (while $ID_P=D_P$ if $P\notin\V(I)$), it follows that $L\subseteq ID_P$ for every $P\in\Delta$. Hence $J\supseteq L\supsetneq I$, so that $I\neq J$. Thus, the first two conditions are equivalent.
If $\Min(I)\cap\Delta$ is dense in $\Min(I)$, then $\Min(I)$ is contained in the closure of $\V(I)\cap\Delta$; then, $\V(I)\cap\Delta$ is dense since $\Min(I)$ is dense in $\V(I)$. Conversely, suppose $\V(I)\cap\Delta$ is dense in $\V(I)$ and take $P\in\Min(I)$. For every open set $\Omega$ meeting $\V(I)$, $\Omega\cap\Delta\cap\V(I)$ is nonempty; if $Q$ belongs to the intersection, then $\Omega\cap\Delta\cap\Min(I)$ contains the minimal primes of $I$ contained in $Q$. Hence, $\Delta\cap\Min(I)$ is dense in $\Min(I)$. Thus also the last two conditions are equivalent.
\end{proof}
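In $D=\insZ$, Proposition~\ref{prop:dense-closed} can be tested directly (our computation):

```latex
Let $I=6\insZ$, so that $\Min(I)=\{2\insZ,3\insZ\}$ carries the discrete
topology. For $\Delta=\{2\insZ\}^\downarrow$ we have
\[
I^{s_\Delta}\cap\insZ=6\insZ_{(2)}\cap\insZ=2\insZ\neq I,
\]
and indeed $\Min(I)\cap\Delta=\{2\insZ\}$ is not dense in $\Min(I)$. Taking
instead $\Delta=\{2\insZ,3\insZ\}^\downarrow$, the set $\Min(I)\cap\Delta$
is all of $\Min(I)$, and
\[
I^{s_\Delta}\cap\insZ=2\insZ_{(2)}\cap 3\insZ_{(3)}\cap\insZ=6\insZ=I.
\]
```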
Let $D$ be an integral domain. The space $\insspectral(D)$ of spectral semistar operations on $D$ is a complete lattice: indeed, let $X:=\{s_{\Delta_\alpha}\mid\alpha\in A\}$ be a subset of $\insspectral(D)$ with $\Delta_\alpha=\Delta_\alpha^\downarrow$. Then, setting $\Delta^\cup:=\bigcup_\alpha\Delta_\alpha$ and $\Delta^\cap:=\bigcap_\alpha\Delta_\alpha$, it is easy to see that the infimum of $X$ in $\insspectral(D)$ is $s_{\Delta^\cup}$ and that its supremum is $s_{\Delta^\cap}$.
However, while $s_{\Delta^\cup}$ is also the infimum of $X$ as a subset of $\inssemistar(D)$, the same does not hold for $s_{\Delta^\cap}$ (see Example \ref{ex:supnonrad}). We now want to prove that the set $\insrad(D)$ of radical semistar operations is the join-completion of $\insspectral(D)$ in $\inssemistar(D)$. In particular, the construction of Example \ref{ex:supnonrad} is the only way to obtain non-spectral radical semistar operations.
\begin{teor}\label{teor:completeness}
Let $D$ be an integral domain. Then:
\begin{enumerate}[(a)]
\item $\insspectral(D)$ is join-dense in $\insrad(D)$;
\item $\insrad(D)$ is the completion of $\insspectral(D)$ in $\inssemistar(D)$.
\end{enumerate}
\end{teor}
\begin{proof}
Since $\insrad(D)$ is a complete sublattice of $\inssemistar(D)$ (Theorem \ref{teor:radical-compllattice}), we only need to prove that every radical stable operation is the supremum of a family of spectral operations.
Fix thus $\star\in\insrad(D)$. Let $\Delta\subseteq\Spec(D)$ be such that $\Delta\cap \V(I)$ is dense in $\V(I)$ for every radical ideal $I$ such that $I=I^\star\cap D$. Then, $s_\Delta\leq\star$: indeed, if $J$ is an ideal such that $1\in J^{s_\Delta}$ and $1\notin J^\star$, then $\Delta\cap \V(J^\star\cap D)$ would be dense in $\V(J^\star\cap D)$, and thus by Proposition \ref{prop:dense-closed} $J^\star\cap D$ would be quasi-$s_\Delta$-closed, against the fact that $1\in J^{s_\Delta}$. Hence, $s_\Delta\leq\star$. Let $\sharp$ be the supremum of all such $s_\Delta$: by construction, $\sharp\leq\star$.
We claim that $\star=\sharp$. Let $J$ be a proper radical ideal: if $1\in J^\sharp$, then $1\in J^\star$ since $\sharp\leq\star$. Suppose that $1\in J^\star$. We claim that $\D(J)\cap \V(I)$ is dense in $\V(I)$ for every radical ideal $I$ such that $I=I^\star\cap D$. If not, there is a $P\in \V(I)$ that is not in the closure of $\D(J)\cap \V(I)$; hence, there is a radical ideal $L$ such that $P\in \D(L)$ and $\D(L)\cap \D(J)\cap \V(I)=\emptyset$. Since $\D(L)\cap \D(J)=\D(L\cap J)$, it follows that $\D(L\cap J)\cap \V(I)=\emptyset$, and thus $\V(I)\subseteq \V(L\cap J)$. Thus, $L\cap J\subseteq I$, and $L^\star\cap J^\star=(L\cap J)^\star\subseteq I^\star$. Hence
\begin{equation*}
(J^\star\cap D)\cap(L^\star\cap D)\subseteq I^\star\cap D=I.
\end{equation*}
By hypothesis, $J^\star$ contains $1$; hence, $J^\star\cap D=D$ and $L^\star\cap D\subseteq I$, so that $L\subseteq I$. In particular, $\D(L)\subseteq \D(I)$; it follows that $\D(L)\cap \V(I)=\emptyset$, against the hypothesis that $P\in \D(L)\cap \V(I)$. Therefore, $\D(J)\cap \V(I)$ is dense in $\V(I)$ for every radical ideal $I$ such that $I=I^\star\cap D$; thus, $s_{\D(J)}$ is one of the spectral operations used to define $\sharp$; hence, $s_{\D(J)}\leq\sharp$. It follows that $1\in J^{s_{\D(J)}}\subseteq J^\sharp$. Therefore, $1\in J^\star$ if and only if $1\in J^\sharp$; since $\star$ and $\sharp$ are stable, it follows that $\star=\sharp$, as claimed, and $\star$ is in the completion of $\insspectral(D)$.
\end{proof}
\section{Isomorphic sets of radical operations}
Let $D_1,D_2$ be two integral domains. If $\phi:\Spec(D_1)\longrightarrow\Spec(D_2)$ is an order isomorphism, then $\phi$ induces an order isomorphism $\Phi:\insspectral(D_1)\longrightarrow\insspectral(D_2)$ by setting $\Phi(s_\Delta)=s_{\phi(\Delta)}$ for every $\Delta\subseteq\Spec(D_1)$. However, $\Phi$ does not, in general, extend to a similar isomorphism between the sets of radical semistar operations: for example, it may happen that $\insspectral(D_1)=\insrad(D_1)$ while $\insspectral(D_2)\neq\insrad(D_2)$ (take for example $D_1:=K[X]$ and $D_2:=\ins{A}$, where $K$ is a field of the same cardinality as $\Max(\ins{A})$).
In this section, we show that such an isomorphism does extend to radical operations when $\phi$ is a homeomorphism with respect to the Zariski topology. We work in a particular class of domains: we say that a domain is \emph{rad-colon coherent} if, for every $x\in K$, the radical of the ideal $(D:_Dx)$ is the radical of a finitely generated ideal. This property is linked with the relationship between the Zariski, inverse and constructible topologies of $\Spec(D)$ and of $\Over(D)$. Every Noetherian domain (or, more generally, every domain with Noetherian spectrum) is rad-colon coherent; likewise, every Pr\"ufer domain and every coherent domain are rad-colon coherent, as is every polynomial ring in finitely many variables over a Pr\"ufer domain. See \cite{localizzazioni} for applications of this property and for an example of a domain that is not rad-colon coherent.
In our context, the reason why we use this notion is essentially the following lemma.
\begin{lemma}\label{lemma:rcc-Tstar}
Let $D$ be a rad-colon coherent domain and let $I$ be a radical ideal. Define $T:=\bigcap\{D_P\mid P\in\Min(I)\}$. If $\star$ is a radical semistar operation such that $I=I^\star\cap D$, then $T^\star=T$ and $(IT)^\star=IT$.
\end{lemma}
\begin{proof}
Suppose first that $\star=s_\Delta$ is spectral, with $\Delta=\Delta^\downarrow$. Then, by Proposition \ref{prop:dense-closed}, $\Delta\cap \V(I)$ is dense in $\V(I)$ and $\Delta\cap\Min(I)$ is dense in $\Min(I)$, with respect to the Zariski topology. By \cite[Corollary 4.4.6(i)]{spectralspaces-libro}, the Zariski and the constructible topology agree on $\Min(I)$; hence, $\Delta\cap\Min(I)$ is dense in $\Min(I)$ also with respect to the constructible topology.
Let $x\in T^\star$, and let $J:=(D:_Dx)=x^{-1}D\cap D$. We claim that $\V(J)\cap\Min(I)\cap\Delta=\emptyset$. Indeed, let $P\in\Min(I)\cap\Delta$. Since $x\in T^\star\subseteq D_P^\star$, we have $1\in(x^{-1}D_P)^\star$, and thus $1\in(JD_P)^\star$; however, if $P\in \V(J)$ then $(JD_P)^\star\subseteq(PD_P)^\star=PD_P$ since $P\in\Delta$. Therefore, $\V(J)\cap\Min(I)\cap\Delta=\emptyset$, and thus $\Min(I)\cap\Delta\subseteq \D(J)$. Since $D$ is rad-colon coherent, $\rad(J)$ is the radical of a finitely generated ideal, and thus $\D(J)$ is a closed subset, with respect to the constructible topology; thus $\D(J)\cap\Min(I)$ is closed in $\Min(I)$. Since $\Min(I)\cap\Delta$ is dense in $\Min(I)$, it follows that $\D(J)\cap\Min(I)$ must be equal to the whole of $\Min(I)$, that is, $\V(J)\cap\Min(I)=\emptyset$. Thus, $JD_P=D_P$ for every $P\in\Min(I)$, and $x\in T$. Hence, $T^\star=T$.
This also implies that $(IT)^\star$ is a radical ideal of $T$ contained in $PT$ for every $P\in\Min(I)$. Hence $(IT)^\star=IT$, as claimed.
Suppose now that $\star$ is any radical operation. By Theorem \ref{teor:completeness}, $\star$ is the supremum of a family $Y$ of spectral semistar operations. For each $\sharp\in Y$, we have $\sharp\leq\star$, and thus $I=I^\sharp\cap D$; by the previous part of the proof, $T^\sharp=T$ and $(IT)^\sharp=IT$. Hence, also $T^\star=T$ and $(IT)^\star=IT$, as claimed.
\end{proof}
\begin{prop}\label{prop:sup-rcc}
Let $D$ be a rad-colon coherent domain and let $I$ be a radical ideal. Let $Y$ be a family of radical semistar operations and let $\sharp:=\sup Y$. If $I=I^\star\cap D$ for every $\star\in Y$, then $I=I^\sharp\cap D$.
\end{prop}
\begin{proof}
Let $T:=\bigcap\{D_P\mid P\in\Min(I)\}$. By Lemma \ref{lemma:rcc-Tstar}, $(IT)^\star=IT$ for every $\star\in Y$, and thus also $(IT)^\sharp=IT$. Then, $I^\sharp\cap D\subseteq(IT)^\sharp\cap D=IT\cap D=I$, and thus $I=I^\sharp\cap D$.
\end{proof}
We are ready to prove the main result of this section.
\begin{teor}\label{teor:insrad-iso}
Let $D_1,D_2$ be rad-colon coherent integral domains, and suppose that there is a homeomorphism $\phi:\Spec(D_1)\longrightarrow\Spec(D_2)$. Then, there is an order isomorphism
\begin{equation*}
\Phi:\insrad(D_1)\longrightarrow\insrad(D_2)
\end{equation*}
such that $\Phi(s_\Delta)=s_{\phi(\Delta)}$ for every $\Delta\subseteq\Spec(D_1)$.
\end{teor}
\begin{proof}
Let $X_i:=\insspectral(D_i)$ and $Y_i:=\insrad(D_i)$ for $i=1,2$.
By Theorem \ref{teor:radical-compllattice}, $Y_1$ is a join-completion of $X_1$; hence, we can consider $Y_1$ as a sublattice of the set $\mathcal{L}(X_1)$ of lower sets of $X_1$ through the map $\epsilon_1$, defined by $\epsilon_1(y)=\{x\in X_1\mid x\leq y\}$ for every $y\in Y_1$. In particular, $\epsilon_1(x)=\{x\}^\downarrow$ for every $x\in X_1$. Likewise, we can consider $Y_2$ as a sublattice of $\mathcal{L}(X_2)$ through a map $\epsilon_2$ defined analogously.
The map
\begin{equation*}
\begin{aligned}
\Phi\colon X_1 & \longrightarrow X_2,\\
s_\Delta& \longmapsto s_{\phi(\Delta)}
\end{aligned}
\end{equation*}
is an order isomorphism; thus, it can be extended to a map $\widetilde{\Phi}$ between $\mathcal{L}(X_1)$ and $\mathcal{L}(X_2)$, which is again an order isomorphism. We claim that $\widetilde{\Phi}(\epsilon_1(Y_1))=\epsilon_2(Y_2)$; to do so, it is enough to prove that, if $A\subseteq X_1$, then the supremum $\sup_{Y_1}A$ of $A$ in $Y_1$ (that is, the supremum of $A$ as a semistar operation) is spectral if and only if $\sup_{Y_2}\Phi(A)$ is spectral.
Suppose first that $\star:=\sup_{Y_1}A$ is not spectral, and let $\sharp=s_\Delta$ be the supremum of $A$ in $X_1$. Let $\star'$ and $\sharp'$ be, respectively, the suprema of $\Phi(A)$ in $Y_2$ and in $X_2$. By construction, $\star<\sharp$, and thus there is a radical ideal $I$ such that $I=I^\star\cap D_1$ while $I^\sharp=D_1^\sharp$. Let now $J$ be the radical ideal such that $\V(J)=\phi(\V(I))$; we claim that $J=J^{\star'}\cap D_2$ while $J^{\sharp'}=D_2^{\sharp'}$.
Indeed, if $s_\Lambda\in\Phi(A)$, then $\phi^{-1}(\Lambda)\cap\V(I)$ is dense in $\V(I)$, and thus $\Lambda\cap\V(J)$ is dense in $\V(J)$; since $D_2$ is rad-colon coherent, by Proposition \ref{prop:sup-rcc} $J=J^{\sup\Phi(A)}\cap D_2$, i.e., $J=J^{\star'}\cap D_2$. On the other hand, $\sharp=s_\Delta$ for some $\Delta$ such that $\Delta\cap\V(I)=\emptyset$; hence, $\sharp'=\Phi(\sharp)=\Phi(s_\Delta)=s_{\phi(\Delta)}$, where $\phi(\Delta)\cap\V(J)$ is empty. Hence, $J^{\sharp'}=D_2^{\sharp'}$. Thus, $\star'\neq\sharp'$, and $\sup_{Y_2}\Phi(A)$ is not spectral.
The opposite implication follows by applying the same reasoning to the homeomorphism $\phi^{-1}$ (which induces the map $\Phi^{-1}$ on the sets of spectral semistar operations).
Therefore, $\widetilde{\Phi}$ restricts to an isomorphism between $\epsilon_1(Y_1)$ and $\epsilon_2(Y_2)$; since $Y_i\simeq\epsilon_i(Y_i)$ for $i=1,2$, it follows that $Y_1=\insrad(D_1)$ and $Y_2=\insrad(D_2)$ are isomorphic, as claimed.
\end{proof}
\section{When every spectral operation is radical}\label{sect:scattered}
We have seen that, in general, not every radical semistar operation is spectral, although the two sets are equal when every ideal has only finitely many minimal primes (Corollary \ref{cor:MinIfinito}). In this section, we characterize when the two sets are equal for rad-colon coherent domains; specializing to Pr\"ufer domains, we also show that under this hypothesis we can obtain a standard representation of all stable operations.
We start with two topological lemmas.
\begin{lemma}
Let $D$ be an integral domain and $I$ a radical ideal that is not prime. Then, $\Min(I)$ is not perfect if and only if there are a prime ideal $Q$ and a radical ideal $J\neq I$ such that $I=Q\cap J$.
\end{lemma}
\begin{proof}
If $\Min(I)$ is not perfect, there is an isolated point $Q$ of $\Min(I)$, and $\Min(I)\setminus\{Q\}=\Min(I)\cap\V(J)$ for some radical ideal $J$. By construction, $J\supsetneq I$ and $\V(J)\cup\V(Q)=\V(I)$, so that $I=Q\cap J$. Conversely, if $I=Q\cap J$, then $\V(I)=\V(Q)\cup\V(J)$. Since $I\neq J$, $\V(J)$ cannot contain all minimal primes of $I$; therefore, $Q$ must belong to $\Min(I)$. Hence, $\{Q\}=\Min(I)\setminus\V(J)$ is open in $\Min(I)$ and $Q$ is isolated; thus $\Min(I)$ is not perfect.
\end{proof}
\begin{lemma}\label{lemma:Min-scat-perf}
Let $D$ be an integral domain. Then, the following are equivalent:
\begin{enumerate}[(i)]
\item\label{lemma:Min-scat-perf:scat} $\Min(I)$ is scattered for every ideal $I$;
\item\label{lemma:Min-scat-perf:perf} $\Min(I)$ is not perfect for every ideal $I$.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{lemma:Min-scat-perf:scat} $\Longrightarrow$ \ref{lemma:Min-scat-perf:perf} is obvious. To show \ref{lemma:Min-scat-perf:perf} $\Longrightarrow$ \ref{lemma:Min-scat-perf:scat}, let $I$ be a radical ideal, and suppose that $\Min(I)$ is not scattered; let $X:=\bigcap_\alpha\deriv^\alpha(\Min(I))$: then, $X$ is nonempty and perfect. Let $J:=\bigcap\{Q\mid Q\in X\}$; then, $I\subseteq J\subseteq P$ for all $P\in X$, and thus $X\subseteq\Min(J)$. We claim that $\Min(J)$ is perfect. Indeed, suppose not: then, it has an isolated point $P$, and $P$ cannot belong to $X$, since $X$ is perfect. Since $P$ is isolated, there is a finitely generated ideal $L$ such that $\D(L)\cap\Min(J)=\{P\}$; therefore, $L\nsubseteq P$ while
\begin{equation*}
L\subseteq\bigcap_{Q\in\Min(J)\setminus\{P\}}Q\subseteq\bigcap_{Q\in X}Q=J,
\end{equation*}
a contradiction. Thus $\Min(J)$ is perfect, against \ref{lemma:Min-scat-perf:perf}. Therefore, $\Min(I)$ must be scattered.
\end{proof}
\begin{defin}
We say that $D$ is \emph{min-scattered} if $\Min(I)$ is a scattered space for every ideal $I$.
\end{defin}
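For instance, every domain with Noetherian spectrum is min-scattered; the following one-line argument (standard, and consistent with Corollary \ref{cor:MinIfinito}) shows why.

```latex
% If $\Spec(D)$ is Noetherian, every ideal $I$ has only finitely many
% minimal primes; since distinct elements of $\Min(I)$ are
% incomparable, for every $P\in\Min(I)$ the set
\begin{equation*}
\{P\}=\Min(I)\cap\bigcap_{Q\in\Min(I)\setminus\{P\}}\D(Q)
\end{equation*}
% is open in $\Min(I)$ (the intersection is finite). Hence $\Min(I)$
% is discrete, and in particular scattered, so $D$ is min-scattered.
```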
\begin{prop}\label{prop:countable}
Let $D$ be a domain such that $\Spec(D)$ is countable. Then, $D$ is min-scattered.
\end{prop}
\begin{proof}
The space $\Spec(D)$, endowed with the constructible topology, is Hausdorff, compact and countable, and thus scattered \cite{mazur-sierp-numerabili}. Therefore, for every ideal $I$, also $\Min(I)$ is scattered with respect to the constructible topology; moreover, on each $\Min(I)$ the constructible and the Zariski topologies coincide. Hence $D$ is min-scattered.
\end{proof}
\begin{teor}\label{teor:rcc-radspectral}
Let $D$ be a rad-colon coherent domain. Then, the following are equivalent:
\begin{enumerate}[(i)]
\item\label{teor:rcc-radspectral:scat} $D$ is min-scattered;
\item\label{teor:rcc-radspectral:radical} every radical semistar operation is spectral.
\end{enumerate}
\end{teor}
\begin{proof}
\ref{teor:rcc-radspectral:radical} $\Longrightarrow$ \ref{teor:rcc-radspectral:scat} Suppose that there is a radical ideal $I$ such that $\Min(I)$ is perfect. For every $P\in\Min(I)$, let $\sharp(P):=s_{\Min(I)\setminus\{P\}}$, and let $\star$ be the supremum of all these $\sharp(P)$. Then, $\star$ is a radical semistar operation; we claim that $\star$ is not spectral.
Indeed, since $\Min(I)$ is perfect, each $\Min(I)\setminus\{P\}$ is dense in $\Min(I)$, and thus by Proposition \ref{prop:dense-closed} $I=I^{\sharp(P)}\cap D$ for every $P$; since $D$ is rad-colon coherent, by Proposition \ref{prop:sup-rcc} we have $I=I^\star\cap D$. However, $1\in P^{\sharp(P)}$ for every $P\in\Min(I)$; hence, $1\in P^\star$ for every $P\in \V(I)$. By Proposition \ref{prop:caratt-spectral}, $\star$ cannot be spectral.
\ref{teor:rcc-radspectral:scat} $\Longrightarrow$ \ref{teor:rcc-radspectral:radical} Suppose that there is a radical operation $\star$ that is not spectral. By Proposition \ref{prop:caratt-spectral}, there is an ideal $I$ such that $I^\star\cap D\subsetneq\bigcap\{P\mid P\in \V(I)\cap\qspec^\star(D)\}$; without loss of generality, we can suppose that $I=I^\star\cap D$. Let $J$ be this intersection, and let $\Gamma:=\Min(I)\setminus \V(J)$. By construction, $\Gamma$ is nonempty.
The set $\Gamma$ does not contain isolated points of $\Min(I)$: if $Q\in\Gamma$ is isolated, then $I=Q\cap I_0$ for some $I_0\supsetneq I$, and thus $I^\star\cap D=(Q\cap I_0)^\star\cap D=Q^\star\cap I_0^\star\cap D$ can only be equal to $I$ if $Q=Q^\star\cap D$, i.e., $Q\in\qspec^\star(D)$; but then $J\subseteq Q$, against the fact that $Q\in\Gamma$.
For every $P\in\Gamma$, let $\gamma(P)$ be the minimal ordinal number $\alpha$ such that $P\notin\deriv^\alpha(\Min(I))$. Note that $\gamma(P)$ exists since $\Min(I)$ is scattered; moreover, since no element of $\Gamma$ is isolated, $\gamma(P)>1$, and $\gamma(P)$ is a successor ordinal. Let $\gamma$ be the minimal element of the set of all $\gamma(P)$, and let $Q\in\Gamma$ be such that $\gamma(Q)=\gamma$. Let also $\beta$ be such that $\gamma=\beta+1$.
By construction, $Q$ is a limit point of $\Min(I)$, while $Q$ is isolated in $\deriv^\beta(\Min(I))$. Hence, $Q$ is a limit point of $\Min(I)\setminus\deriv^\beta(\Min(I))$. By the minimality of $\gamma$, every element of $\Gamma$ belongs to $\deriv^\beta(\Min(I))$; hence, the latter set is contained in $\V(J)$, and since $\V(J)$ is closed, it follows that also $Q\in \V(J)$. This is a contradiction, and thus $\Gamma$ must be empty, i.e., there cannot be a radical non-spectral semistar operation. The claim is proved.
\end{proof}
We now restrict to the case of Pr\"ufer domains, extending results proved in \cite{stable_prufer} and mostly following the general method of that paper. Given a semistar operation $\star$ on the Pr\"ufer domain $D$, we define the \emph{pseudo-spectrum} $\psspec^\star(D)$ as the set of those prime ideals $Q$ such that $1\in Q^\star$, but there is a $Q$-primary ideal $L$ such that $L=L^\star\cap D$. Using the quasi-spectrum and the pseudo-spectrum, we can define from $\star$ a new semistar operation $\stdstable{\star}$, called the \emph{normalized stable version} of $\star$, as
\begin{equation*}
\stdstable{\star}:I\mapsto\bigcap_{P\in\qspec^\star(D)}ID_P\cap\bigcap_{Q\in\psspec^\star(D)}(ID_Q)^{v_Q},
\end{equation*}
where $v_Q$ is the $v$-operation on the valuation domain $D_Q$. Note that $v_P$ is different from the identity on $D_P$ if and only if $P$ is idempotent.
\begin{lemma}\label{lemma:radical-stdstable}
Let $\star$ be a radical semistar operation. Then, $\star=\stdstable{\star}$ if and only if $\star$ is spectral.
\end{lemma}
\begin{proof}
If $\star$ is spectral, then $\star=s_{\qspec^\star(D)}=\stdstable{\star}$. Conversely, if $\star=\stdstable{\star}$ but $\star$ is not spectral, there is a $Q\in\psspec^\star(D)$. By definition, $1\in Q^\star$, while $1\notin L^\star$ for some $Q$-primary ideal $L$; since $\rad(L)=Q$, this contradicts the fact that $\star$ is radical. Hence $\star$ must be spectral.
\end{proof}
\begin{comment}
\begin{lemma}
Let $D$ be an integral domain and let $I$ be an ideal of $D$; let $X$ be the set of all flat overrings $T$ of $D$ such that $I=IT\cap D$. Then, $X$ has a unique maximal element.
\end{lemma}
\begin{proof}
Let $\{T_\alpha\mid\alpha\in A\}$ be a chain in $X$: then, the union $T:=\bigcup_\alpha T_\alpha$ is again flat. Moreover, if $I\neq IT\cap D$, then there are $x\in D\setminus I$, $i_1,\ldots,i_k\in I$ and $t_1,\ldots,t_k\in T$ such that $x=i_1t_1+\cdots+i_kt_k$: taking a $\beta\in A$ such that $t_1,\ldots,t_k\in T_\beta$, we obtain $x\in IT_\beta\cap D$, against the fact that $T_\beta\in X$. By Zorn's lemma, $X$ has maximal elements.
To show that it is unique, let $S,T\in X$. Then, $ST\in X$: indeed, $ST$ is flat, and
\begin{equation*}
IST\cap D=IST\cap S\cap D=(IT\cap D)S\cap D=IS\cap D=I.
\end{equation*}
Hence, $X$ can have at most one maximal element.
\end{proof}
We denote the maximal element of the set $X$ of the lemma by $\Omega(I)$. When $D$ is a Pr\"ufer domain, every overring is flat; hence, $\Omega(I)$ is the largest \emph{overring} $T$ of $D$ with the property that $I=IT\cap D$.
The proof of the following lemma follows closely the second part of the proof of \cite[Theorem 4.5]{stable_prufer}.
\begin{lemma}\label{lemma:IDQ-OmegaQ}
Let $D$ be a Pr\"ufer domain and let $I$ be an ideal of $D$. Let $Q$ be a minimal prime of $I$ and $\star$ be a stable semistar operation. Then, $1\in(ID_Q)^\star$ if and only if $1\in(I\Omega(Q))^\star$.
\end{lemma}
\begin{proof}
If $1\in(I\Omega(Q))^\star$, then $1\in(ID_Q)^\star$ since $\Omega(Q)\subseteq D_Q$.
Suppose $1\in(ID_Q)^\star$ and let $\sharp:=\star|_{\inssubmod(\Omega(Q))}$. Then, $\sharp$ is a stable semistar operation on $\Omega(Q)$ such that the elements of $\qspec^\sharp(\Omega(Q))$ and $\psspec^\sharp(\Omega(Q))$ are the extensions of the elements of $\qspec^\star(D)$ and $\psspec^\star(D)$ \cite[Lemma 3.8]{stable_prufer}; hence, $\stdstable{\sharp}=\stdstable{\star}|_{\inssubmod(\Omega(Q))}$, and thus we can suppose without loss of generality that $\Omega(Q)=D$ (and so $\sharp=\star$).
Since $1\in(ID_Q)^\star$, we cannot have $Q\in\qspec^\star(D)$. If $Q\in\psspec^\star(D)$, then $ID_Q=QD_Q$; if $q\in QD_Q$, then $qQD_Q\subsetneq ID_Q$ and thus $qQ\subseteq I$ by \cite[Lemma 4.4(a)]{stable_prufer}. Thus,
\begin{equation*}
q=q\cdot 1\in qQ^\star=(qQ)^\star\subseteq I^\star.
\end{equation*}
Hence $Q\subseteq I^\star$, so that $Q^\star\subseteq I^\star$, and in particular $1\in I^\star$. Finally, if $Q\notin\qspec^\star(D)\cup\psspec^\star(D)$, then $1\in L^\star$ for every $Q$-primary ideal $L$; since $\rad(I)=Q$, by \cite[Lemma 4.4(b)]{stable_prufer} we can find such a primary ideal $L\subseteq I$, and thus $1\in I^\star$.
Therefore, $1\in I^\star$ in every case, as claimed.
\end{proof}
\end{comment}
\begin{teor}\label{teor:prufer-MinI}
Let $D$ be a Pr\"ufer domain. Then, the following are equivalent:
\begin{enumerate}[(i)]
\item\label{teor:prufer-MinI:scat} $D$ is min-scattered;
\item\label{teor:prufer-MinI:radical} every radical semistar operation is spectral;
\item\label{teor:prufer-MinI:stdstable} $\star=\stdstable{\star}$ for every stable semistar operation $\star$.
\end{enumerate}
\end{teor}
\begin{proof}
\ref{teor:prufer-MinI:scat} $\iff$ \ref{teor:prufer-MinI:radical} follows from Theorem \ref{teor:rcc-radspectral}, since a Pr\"ufer domain is rad-colon coherent, while \ref{teor:prufer-MinI:stdstable} $\Longrightarrow$ \ref{teor:prufer-MinI:radical} follows from Lemma \ref{lemma:radical-stdstable}.
To prove \ref{teor:prufer-MinI:scat} $\Longrightarrow$ \ref{teor:prufer-MinI:stdstable}, fix a stable semistar operation $\star$. By \cite[Theorem 3.9]{stable_prufer}, $\star\leq\stdstable{\star}$, and thus if $1\in I^\star$ then also $1\in I^{\stdstable{\star}}$. Suppose that $1\in I^{\stdstable{\star}}$ while $1\notin I^\star$. Then, $J:=I^\star\cap D$ is a proper ideal of $D$ that is quasi-$\star$-closed. Changing notation from $I$ to $J$, we can suppose without loss of generality that $I=I^\star\cap D$.
Since $\Min(I)$ is not perfect, there is an isolated point $Q$. Since $\Min(I)\setminus\{Q\}$ is closed in $\Min(I)$, it is equal to $\V(I_1)\cap\Min(I)$ for some radical ideal $I_1$. Let $T:=\bigcap\{D_P\mid P\in\V(Q)\}$ and $S:=\bigcap\{D_P\mid P\in\V(I_1)\}$: then, $I=IS\cap IT$, and in particular
\begin{equation*}
I^\star\cap D=(IS)^\star\cap D\cap(IT)^\star\cap D.
\end{equation*}
The radical of $IT\cap D$ is $Q$, which is a prime ideal. By the proof of \cite[Theorem 4.5]{stable_prufer}, since $1\in (IT\cap D)^{\stdstable{\star}}$, we also have $1\in(IT\cap D)^\star$, and thus $(IT)^\star\cap D=D$. On the other hand, $IS\cap D$ is not contained in $Q$; hence, neither is $(IS)^\star\cap D$. By construction, $I=I^\star\cap D$, while $I\subseteq Q$; this is a contradiction, and thus $\star$ and $\stdstable{\star}$ must be equal, as claimed.
\end{proof}
\begin{cor}
Let $D$ be a domain such that $\Spec(D)$ is countable. Then, every radical semistar operation is spectral. If $D$ is Pr\"ufer, moreover, $\star=\stdstable{\star}$ for every stable semistar operation $\star$.
\end{cor}
\begin{proof}
If $\Spec(D)$ is countable, then $D$ is min-scattered by Proposition \ref{prop:countable}. The claims now follow from Theorems \ref{teor:rcc-radspectral} and \ref{teor:prufer-MinI}.
\end{proof}
The following is a version of Theorem \ref{teor:insrad-iso} for stable operations on a Pr\"ufer domain; it can also be seen as a variant of \cite[Theorem 5.12]{length-funct} (in view of \cite[Section 6]{length-funct}).
\begin{teor}\label{teor:prufer-iso}
Let $D_1,D_2$ be Pr\"ufer domains. Suppose that there is a homeomoprhism $\phi:\Spec(D_1)\longrightarrow\Spec(D_2)$ such that a prime ideal $P$ is idempotent if and only if $\phi(P)$ is idempotent. If $D_1$ is min-scattered, then there is an isomorphism $\Phi:\insstable(D_1)\longrightarrow\insstable(D_2)$.
\end{teor}
\begin{proof}
We first note that if $J$ is an ideal of $D_2$, then $\Min(J)$ is the set of minimal elements of the closed set $\V(J)$; then, $\phi^{-1}(\V(J))$ is a closed set of $\Spec(D_1)$, and thus it is equal to $\V(J')$ for some ideal $J'$ of $D_1$. By hypothesis, $\Min(J')$ is scattered, and thus also $\phi(\Min(J'))=\Min(J)$ is scattered. Hence also $D_2$ is min-scattered.
Given a stable semistar operation $\star$ on $D_1$, we define $\Phi(\star)$ as the map
\begin{equation*}
\Phi(\star):I\mapsto\bigcap_{P\in\phi(\qspec^\star(D_1))}I(D_2)_P\cap\bigcap_{Q\in\phi(\psspec^\star(D_1))}(I(D_2)_Q)^{v_Q}.
\end{equation*}
We claim that $\qspec^{\Phi(\star)}(D_2)=\phi(\qspec^\star(D_1))$ and $\psspec^{\Phi(\star)}(D_2)=\phi(\psspec^\star(D_1))$.
Indeed, let $P\in\Spec(D_1)$ and let $Q:=\phi(P)$. If $P\in\qspec^\star(D_1)$, then
\begin{equation*}
Q^{\Phi(\star)}\cap D_2\subseteq Q(D_2)_Q\cap D_2=Q,
\end{equation*}
and thus $Q\in\qspec^{\Phi(\star)}(D_2)$; conversely, if $Q\in\qspec^{\Phi(\star)}(D_2)$, then either $Q(D_2)_A\neq (D_2)_A$ for some $A\in\phi(\qspec^\star(D_1))$ or $(Q(D_2)_B)^{v_B}\neq Q(D_2)_B$ for some $B\in\phi(\psspec^\star(D_1))$. In the former case, $Q\subseteq A$, and thus $P\subseteq\phi^{-1}(A)$; since the quasi-spectrum is closed under generizations \cite[Proposition 3.4(a)]{stable_prufer}, $P\in\qspec^\star(D_1)$, and thus $Q\in\phi(\qspec^\star(D_1))$. In the latter case, we must have $Q\subsetneq B$, and thus $P\subsetneq\phi^{-1}(B)$; since every prime properly contained in an element of the pseudo-spectrum belongs to the quasi-spectrum \cite[Proposition 3.4(b)]{stable_prufer}, again $P\in\qspec^\star(D_1)$ and $Q\in\phi(\qspec^\star(D_1))$. Therefore, $\qspec^{\Phi(\star)}(D_2)=\phi(\qspec^\star(D_1))$.
Suppose now that $P\in\psspec^\star(D_1)$. By the previous paragraph, $Q\notin\qspec^{\Phi(\star)}(D_2)$. There is a $P$-primary ideal $L\subsetneq P$ such that $L=L^\star\cap D_1$; in particular, $P$ is branched in the valuation domain $(D_1)_P$. The map $\phi$ induces a homeomorphism between $\Spec((D_1)_P)$ and $\Spec((D_2)_Q)$; therefore, $Q$ is branched as well, and thus there exists a $Q$-primary ideal $L'\subsetneq Q$. By definition,
\begin{equation*}
(L')^{\Phi(\star)}\cap D_2\subseteq L'(D_2)_Q\cap D_2=L',
\end{equation*}
and thus $Q\in\psspec^{\Phi(\star)}(D_2)$. Conversely, if $Q\in\psspec^{\Phi(\star)}(D_2)$, then there is a $Q$-primary ideal $L\subsetneq Q$ such that $L^{\Phi(\star)}\cap D_2=L$. By the previous part of the proof, $P\notin\qspec^\star(D_1)$; if $P$ were not even in $\psspec^\star(D_1)$, then $L^{\Phi(\star)}$ would be equal to $D_2^{\Phi(\star)}$, a contradiction. Hence, $Q=\phi(P)\in\phi(\psspec^\star(D_1))$. Therefore, $\psspec^{\Phi(\star)}(D_2)=\phi(\psspec^\star(D_1))$.
Consider now the map $\Psi:\insstable(D_2)\longrightarrow\insstable(D_1)$ defined by
\begin{equation*}
\Psi(\sharp):I\mapsto\bigcap_{P\in\phi^{-1}(\qspec^\sharp(D_2))}I(D_1)_P\cap\bigcap_{Q\in\phi^{-1}(\psspec^\sharp(D_2))}(I(D_1)_Q)^{v_Q}
\end{equation*}
for every ideal $I$ of $D_1$ and every $\sharp\in\insstable(D_2)$. Then, $\Psi$ is the map associated to the homeomorphism $\phi^{-1}$ by the previous construction; hence,
\begin{align*}
\Psi\circ\Phi(\star):I& \mapsto \bigcap_{P\in\phi^{-1}(\qspec^{\Phi(\star)}(D_2))}I(D_1)_P\cap\bigcap_{Q\in\phi^{-1}(\psspec^{\Phi(\star)}(D_2))}(I(D_1)_Q)^{v_Q}=\\
&= \bigcap_{P\in\qspec^\star(D_1)}I(D_1)_P\cap\bigcap_{Q\in\psspec^\star(D_1)}(I(D_1)_Q)^{v_Q}=I^{\stdstable{\star}}
\end{align*}
since $\phi$ is a homeomorphism. By Theorem \ref{teor:prufer-MinI}, $\stdstable{\star}=\star$, and thus $\Psi\circ\Phi(\star)=\star$, i.e., $\Psi\circ\Phi$ is the identity on $\insstable(D_1)$. By symmetry, also $\Phi\circ\Psi$ is the identity on $\insstable(D_2)$; hence, $\Phi$ and $\Psi$ are inverses of each other. It is straightforward to see that they are also order-preserving; thus, they establish an isomorphism between $\insstable(D_1)$ and $\insstable(D_2)$, as claimed.
\end{proof}
\begin{cor}
Let $D_1,D_2$ be Pr\"ufer domains with countable spectrum. Suppose that there is a homeomoprhism $\phi:\Spec(D_1)\longrightarrow\Spec(D_2)$ such that a prime ideal $P$ is idempotent if and only if $\phi(P)$ is idempotent. Then, there is an isomorphism $\Phi:\insstable(D_1)\longrightarrow\insstable(D_2)$.
\end{cor}
\begin{proof}
If $\Spec(D_1),\Spec(D_2)$ are countable, then $D_1,D_2$ are min-scattered by Proposition \ref{prop:countable}. The claim now follows from Theorem \ref{teor:prufer-iso}.
\end{proof}
\section{Other versions}\label{sect:other}
Stable semistar operations are linked to two other structures on a ring: localizing systems and length functions.
A \emph{localizing system} on a domain $D$ is a set of ideals $\mathcal{F}$ such that:
\begin{itemize}
\item if $I\in\mathcal{F}$ and $I\subseteq J$, then $J\in\mathcal{F}$;
\item if $I\in\mathcal{F}$ and $(J:_DiD)\in\mathcal{F}$ for all $i\in I$, then $J\in\mathcal{F}$.
\end{itemize}
The map $\star\mapsto\mathcal{F}^\star:=\{I\mid 1\in I^\star\}$ establishes a bijective correspondence between the set of stable semistar operations and the set of all localizing systems, whose inverse is given by the map associating to $\mathcal{F}$ the semistar operation \cite[Section 2]{localizing-semistar}
\begin{equation*}
\star_\mathcal{F}:I\mapsto\bigcup_{J\in\mathcal{F}}(I:_DJ).
\end{equation*}
We say that a localizing system is \emph{radical} if, for every ideal $I$ such that $\rad(I)\in\mathcal{F}$, we have $I\in\mathcal{F}$. This notion corresponds exactly to radical semistar operations.
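A standard illustration (not taken from the results of this paper): every multiplicative set gives rise to a radical localizing system.

```latex
% If $S\subseteq D$ is a multiplicative set, then
\begin{equation*}
\mathcal{F}_S:=\{I\mid I\cap S\neq\emptyset\}
\end{equation*}
% is a localizing system, and it is always radical: if
% $s\in\rad(I)\cap S$, then $s^n\in I\cap S$ for some $n\geq 1$,
% because $S$ is multiplicatively closed. The associated stable
% operation $\star_{\mathcal{F}_S}$ is the localization
% $I\mapsto S^{-1}I$, i.e., the spectral operation $s_\Delta$ with
% $\Delta:=\{P\in\Spec(D)\mid P\cap S=\emptyset\}$.
```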
\begin{prop}
Let $\star$ be a stable semistar operation. Then, $\star$ is radical if and only if $\mathcal{F}^\star$ is a radical localizing system.
\end{prop}
\begin{proof}
If $\star$ is radical and $\rad(I)\in\mathcal{F}^\star$, then $1\in\rad(I)^\star$ and thus $1\in I^\star$ by definition, so that $I\in\mathcal{F}^\star$ and $\mathcal{F}^\star$ is radical. Conversely, if $\star$ is not radical, there is an ideal $I$ such that $1\notin I^\star$ while $1\in\rad(I)^\star$: then, $\rad(I)\in\mathcal{F}^\star$ while $I\notin\mathcal{F}^\star$, so that $\mathcal{F}^\star$ is not radical.
\end{proof}
Therefore, the bijection between stable operations and localizing systems restricts to a bijection between $\insrad(D)$ and the set $\mathrm{LS}_{\mathrm{rad}}(D)$ of radical localizing systems; it follows that Theorems \ref{teor:insrad-iso} and \ref{teor:rcc-radspectral} can be expressed also in the terminology of localizing systems.
\medskip
A \emph{singular length function} on $D$ is a map $\ell$ from the set of all $D$-modules to $\{0,\infty\}$ such that:
\begin{itemize}
\item $\ell$ is additive on exact sequences;
\item for each module $M$, $\ell(M)$ is the supremum of $\ell(N)$, as $N$ ranges among the finitely generated submodules of $M$.
\end{itemize}
A singular length function is uniquely determined by its \emph{ideal colength} $\tau$, where $\tau$ is the map associating to each ideal $I$ the length $\ell(D/I)$ \cite[Proposition 3.3]{zanardo_length}. We denote by $\mathcal{L}_{\mathrm{sing}}(D)$ the set of singular length functions on $D$.
There is a bijective correspondence between localizing systems and singular length functions, where we associate to a localizing system $\mathcal{F}$ the colength \cite[Section 6]{length-funct}
\begin{equation*}
\tau_\mathcal{F}(I)=\begin{cases}
0 & \text{if~}I\in\mathcal{F},\\
\infty & \text{if~}I\notin\mathcal{F}.\\
\end{cases}
\end{equation*}
We say that a length function $\ell$ with associated colength $\tau$ is \emph{radical} if $\tau(I)=\tau(\rad(I))$ for every ideal $I$. This definition corresponds to radical semistar operations and radical localizing systems.
\begin{prop}
Let $\mathcal{F}$ be a localizing system. Then, $\mathcal{F}$ is radical if and only if the associated length function $\ell_\mathcal{F}$ is radical.
\end{prop}
\begin{proof}
If $I$ is an ideal, by definition $\tau(I)=0$ if and only if $I\in\mathcal{F}$. Therefore, $\tau(I)=\tau(\rad(I))=0$ if and only if $I,\rad(I)\in\mathcal{F}$, while $\tau(I)=\tau(\rad(I))=\infty$ if and only if $I,\rad(I)\notin\mathcal{F}$. The claim follows.
\end{proof}
Let $D$ be a Pr\"ufer domain. To every singular length function $\ell$ with associated colength $\tau$ we can associate the space
\begin{equation*}
\Sigma(\ell):=\{P\in\Spec(D)\mid \tau(Q)>0\text{~for some~}P\text{-primary ideal~}Q\},
\end{equation*}
and the length function
\begin{equation*}
\ell^\sharp:=\sum_{P\in\Sigma(\ell)}\ell\otimes D_P
\end{equation*}
(where $(\ell\otimes D_P)(M):=\ell(M\otimes D_P)$). Then, we get an analogue of Theorem \ref{teor:prufer-MinI}.
\begin{prop}\label{prop:prufer-length-MinI}
Let $D$ be a Pr\"ufer domain. The following are equivalent:
\begin{enumerate}[(i)]
\item\label{prop:prufer-length-Min:scat} $D$ is min-scattered;
\item\label{prop:prufer-length-Min:stdstable} $\ell=\ell^\sharp$ for every singular length function $\ell$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $\Phi$ be the isomorphism between $\insstable(D)$ and $\mathcal{L}_{\mathrm{sing}}(D)$ obtained by composing the bijections of these two sets with the set of localizing systems. The claim follows from Theorem \ref{teor:prufer-MinI} and the fact that $\Phi(\stdstable{\star})=\Phi(\star)^\sharp$ \cite[Proposition 6.8]{length-funct}.
\end{proof}
As a consequence, we also obtain a version of Theorem \ref{teor:prufer-iso} (compare with \cite[Theorem 5.12]{length-funct}).
\begin{teor}\label{teor:prufer-iso-length}
Let $D_1,D_2$ be Pr\"ufer domains. Suppose that there is a homeomorphism $\phi:\Spec(D_1)\longrightarrow\Spec(D_2)$ such that a prime ideal $P$ is idempotent if and only if $\phi(P)$ is idempotent. If $D_1$ is min-scattered, then there is an isomorphism $\Phi:\mathcal{L}_{\mathrm{sing}}(D_1)\longrightarrow\mathcal{L}_{\mathrm{sing}}(D_2)$.
\end{teor}
To conclude, we express \cite[Corollary 7.5]{almded-radfact} in the terminology of this paper; see \cite{almded-radfact} for the definition of SP-scattered domain.
\begin{prop}
Let $D$ be an SP-scattered domain. Then, every stable semistar operation on $D$ is radical.
\end{prop}
\bibliographystyle{plain}
\section{Introduction}
\label{subsec:intro}
\subsection{Motivation}
\input{sec_introduction}
\subsection{Statement of Contribution}
\label{sec:contrib}
\input{sec_contribution}
\section{Related Work}
\input{sec_related}
\label{subsec:related}
\section{MLNav Framework}
\label{sec:approach}
\input{sec_approach}
\section{Deployment for Mars Rover Navigation}
\label{sec:enav}
\input{sec_enav}
\section{Performance Evaluation}
\label{sec:exp}
\input{sec_experiments}
\section{MLNav on Real Mars Data}
\label{sec:mars_exp}
\input{sec_mars_experiment}
\section{Conclusion}
\input{sec_conclusion}
\section{Acknowledgement}
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, and California Institute of Technology under a contract with the National Aeronautics and Space Administration, and supported in part by Raytheon. The authors would like to thank Olivier Toupet, Mitch Ingham and Ravi Lanka for valuable discussions and problem formulation, and the JPL Research and Technology Development (R\&TD) program for supporting this research.
\bibliographystyle{IEEEtran}
\subsection{Overview and Problem Formulation}
\label{subsec:problem}
In this section, we provide a general formulation of our proposed MLNav framework (as shown in Figure \ref{fig:mlnav-overview}). Traditionally, the motion planning pipeline for goal-directed real-time autonomous navigation is organized in a hierarchical receding-horizon manner, with a global planner at the top level driving the robot in the general direction of the goal, while a local planner uses the immediately perceived environment (up to a finite sensing horizon) to make sure that the robot avoids obstacles while making progress towards the goal.
For dynamically constrained systems operating in high-dimensional complex environments, a library of candidate trajectories is usually computed offline by sampling from a much larger (possibly infinite) set of feasible trajectories. Such libraries effectively discretize a large control space and enable tasks to be completed with reasonable performance while still respecting computational constraints. Such library-based model predictive approaches have been widely used in state-of-the-art robotic systems \cite{dey2016vision}. We follow a similar paradigm for rovers on Mars \cite{carsten2007global}, and use it for our MLNav framework. Note that the proposed architecture also generalizes to any sampling-based motion planning problem where edge evaluation is expensive.
Let us define a robot at time $t$ with state $\phi(x_t, m)$, where $x_t \in R$ is the pose of the rover operating in a 2.5D static local heightmap $m \in \mathcal{M}$, sampled from a distribution of terrains $p(m)$. Also, let us assume a library $\mathcal{L}$ of $N$ trajectories is given, such that $\mathcal{L} = \{\xi_j\}_{j=1}^{N}$, $\xi_j \in \Xi$, where $\Xi$ spans the space of all possible trajectories. At each planning cycle, the local planner picks the trajectory that yields the least cost $C(\cdot)$ for traversal. We formulate this as an optimization problem:
\begin{equation}
\phi_{o} \mapsto \overset{*}{\xi} = \argmin_{\xi_j \in \mathcal{L}} C(\xi_j)
\label{eq:pathplanning}
\end{equation}
\noindent
where $\phi_{o}$ represents the initial state of the robot in the map. The cost function being optimized is then defined as:
\begin{equation}
C(\xi_j) = \alpha \cdot C_{goal}(\xi_j, \phi_{o}) + \beta \cdot \sum_{t = 1}^{T} C_{collision}(\xi_j, \phi(x_t, m))
\label{eq:cost}
\end{equation}
\noindent
where $C_{goal}$ is the cost for path execution, such as the time to get to the final goal, consisting of the cost within the planning horizon and the cost-to-go beyond the horizon, which usually comes from a separate global planner. This part is fast to compute but does not account for vehicle safety. $C_{collision}$ is the expected collision cost for each trajectory, computed by estimating the clearance of the robot with the local terrain features over the planning horizon of $T$ time steps. In a deterministic environment $C_{collision} \in \{0, \infty\}$; in an uncertain environment, it represents the probability of collision.
The collision cost is typically computed by repeatedly running a collision checking algorithm at a certain interval over the candidate trajectories.
The computation time of collision cost grows proportionally with both the number of trajectory options $N$ and the length of the horizon $T$.
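The library-based selection rule above can be sketched as follows. The names `goal_cost` and `collision_cost` are hypothetical placeholders for the planner-specific $C_{goal}$ and $C_{collision}$ terms; this is an illustration of the formulation, not the flight implementation.

```python
def select_trajectory(library, goal_cost, collision_cost, alpha=1.0, beta=1.0):
    """Pick the least-cost trajectory from a precomputed library.

    Implements C(xi) = alpha * C_goal(xi) + beta * sum_t C_collision(xi, t),
    with goal_cost and collision_cost as placeholder callables.
    """
    best, best_cost = None, float("inf")
    for xi in library:
        c = alpha * goal_cost(xi) + beta * sum(
            collision_cost(xi, t) for t in range(len(xi)))
        if c < best_cost:
            best, best_cost = xi, c
    return best, best_cost

# Toy usage: trajectories as waypoint lists, goal cost = Manhattan distance
# from the trajectory endpoint to the goal, obstacle-free map.
lib = [[(0, 0), (1, 0)], [(0, 0), (0, 1)], [(0, 0), (1, 1)]]
goal = (1, 1)
gc = lambda xi: abs(goal[0] - xi[-1][0]) + abs(goal[1] - xi[-1][1])
cc = lambda xi, t: 0.0  # deterministic, collision-free toy environment
best, cost = select_trajectory(lib, gc, cc)  # picks the arc ending at (1, 1)
```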
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/mlnav-overview-v2.pdf}
\caption{MLNav Framework: A classical search-based planner is augmented with a learning-based heuristic to accelerate the search, where the safety of the selected path is guaranteed by a model-based collision checker.}
\label{fig:mlnav-overview}
\vspace{-4mm}
\end{figure}
\subsection{Learned Proxy Collision Heuristics}
We propose to alleviate this bottleneck by leveraging the planner's past experience to learn a proxy collision heuristic:
\begin{equation}
C_{proxy\_collision}(\mathcal{L}) = f_h(\phi(x, m))
\end{equation}
where the function $f_h$ maps the rover's state and local terrain features, $\phi(x, m)$, to a probability of collision. While any ML model can be used to learn $f_h$, we propose to use a Convolutional Neural Network (CNN) based image-to-image translation model. This has two key advantages. First, since the heightmap $m$ is usually encoded as an image, it allows us to estimate $f_h$ using a single-shot inference on the entire map. The subroutine that estimates this heuristic could, in principle, be queried each time we want to estimate $C_{collision}$ in Equation \ref{eq:cost} using only local features and robot pose. However, that would be helpful only if the cost of invoking $f_h$ each time were significantly lower than the cost of computing the actual collision cost. Second, it allows us to leverage the representation power of CNNs without handcrafted feature engineering. Once the heuristic map is estimated, computing the $C_{collision}(\xi_j, \phi(x_t, m))$ values in Equation \ref{eq:cost} gets reduced to a trivial constant-time look-up.
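As a minimal illustration of the constant-time look-up, assume the predicted heuristic map is stored as an $(H, W, K)$ array with one channel per discretized heading; the function name and array layout here are assumptions of this sketch.

```python
import numpy as np

def proxy_collision(heuristic_map, x, y, heading_rad):
    """Constant-time lookup of the predicted collision probability.

    heuristic_map: (H, W, K) array, one channel per discretized heading
    (K = 8 at 45-degree intervals in this sketch); (x, y) are cell indices.
    """
    K = heuristic_map.shape[2]
    k = int(round(heading_rad / (2 * np.pi / K))) % K  # nearest heading bin
    return float(heuristic_map[y, x, k])

# Toy map: one risky cell at (x=1, y=2) for the 90-degree heading channel.
hmap = np.zeros((4, 4, 8))
hmap[2, 1, 2] = 0.9
p = proxy_collision(hmap, x=1, y=2, heading_rad=np.pi / 2)  # channel k = 2
```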
The proxy collision heuristic is then used to select the optimal trajectory $\overset{*}{\xi}$ for execution. Importantly, the model-based collision checking algorithm is then run only on the selected path; if it is not feasible, the planner evaluates the next best path until a feasible one is found.
In an ideal case where $f_h$ makes a perfect prediction of $C_{collision}$, the planner finds the optimal path by running collision checking only on a single path. Regardless of the performance of $f_h$, the safety of the chosen path is always guaranteed because collision checking must pass for a path to be executed. This allows us to leverage the benefits of ML based models while maintaining the same safety guarantees of model-based planners.
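The select-then-verify loop described above can be sketched as follows; `total_cost_proxy` and `exact_feasible` stand in for the learned heuristic cost and the model-based collision checker, and are assumptions of this sketch.

```python
def mlnav_select(library, total_cost_proxy, exact_feasible):
    """Rank paths by the proxy cost, then verify with the exact checker.

    Safety is preserved regardless of proxy quality: a path is returned
    only after exact_feasible() (the model-based check) passes.
    """
    checks = 0
    for xi in sorted(library, key=total_cost_proxy):
        checks += 1
        if exact_feasible(xi):
            return xi, checks
    return None, checks  # no feasible path in the library

# Toy usage: the proxy ranks path "b" best, and the exact check agrees,
# so only one exact collision check is needed.
lib = ["a", "b", "c"]
proxy = {"a": 2.0, "b": 1.0, "c": 3.0}
feasible = {"a": True, "b": True, "c": False}
path, n_checks = mlnav_select(lib, proxy.__getitem__, feasible.__getitem__)
```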
\subsection{Overview of ENav}
ENav is a tree-based planner that considers a parametrized tree of candidate trajectories, represented by $\mathcal{L}$ in Eq. \ref{eq:pathplanning}. At each planning cycle, it generates
a 2.5D terrain height map $m$ from stereo imagery.
$C_{goal}$ is the estimated time to reach the final goal, with an additional penalty based on terrain roughness.
$C_{collision}$ is computed by a collision checking algorithm called Approximate Clearance Evaluation (ACE) \cite{otsu2020fast}.
The computation of $C_{collision}$ is far more complex than that of $C_{goal}$. Therefore, to save precious on-board computational resources, ENav gives up finding the optimal path and instead makes the following modifications to the optimal path planning in Eq. \ref{eq:pathplanning}.
First, before running ACE, ENav computes $C_{goal}$ for all candidate trajectories in $\mathcal{L}$ and sorts them by $C_{goal}$.
Then, ENav greedily evaluates $C_{collision}$ of the trajectories from the top of the sorted list.
It cuts off the search if at least one feasible path is found \textit{and} a pre-specified threshold on the number of ACE executions is reached, even if some trajectories in $\mathcal{L}$ remain unevaluated.
Finally, ENav chooses the best trajectory among the ones evaluated before the cutoff.
The vast majority of the computation time of ENav is for computing $C_{collision}$ with ACE.
In the worst case where a sole feasible path is ranked at the bottom of the sorted list, ENav needs to run ACE on all candidates in $\mathcal{L}$;
if there are multiple feasible paths but the optimal one is located below the cut off threshold, a suboptimal path may be chosen.
This is why ENav performance could be enhanced by introducing a heuristic to better sort $\mathcal{L}$.
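The baseline behavior described above can be summarized in a short sketch; `exact_feasible` is a generic stand-in for the ACE check and `max_checks` for the cutoff threshold. This is an illustration, not the flight code.

```python
def enav_select(library, goal_cost, exact_feasible, max_checks=275):
    """Baseline ENav-style search: sort by C_goal only, run the exact
    collision check greedily down the list, and cut off once a feasible
    path exists *and* the check budget is exhausted."""
    ranked = sorted(library, key=goal_cost)
    best, checks = None, 0
    for xi in ranked:
        checks += 1
        if exact_feasible(xi) and best is None:
            best = xi  # first feasible path is cheapest among feasible ones
        if best is not None and checks >= max_checks:
            break  # stop searching: budget spent and a path exists
    return best, checks

# Toy usage: "paths" identified by their goal cost; the cheapest path
# is infeasible, so the second-ranked path is chosen at the budget limit.
lib = [3.0, 1.0, 2.0]
feas = lambda xi: xi >= 2.0
path, n = enav_select(lib, lambda xi: xi, feas, max_checks=2)
```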
\subsection{MLNav implementation on ENav}
In order to more effectively sort ENav's list of candidate rover paths, such that the most highly ranked paths are feasible and near-optimal, we use the learned proxy collision heuristics. The updated cost function then is as follows:
\begin{equation}
C_{total}(\xi_j) = \alpha \cdot C_{goal}(\xi_j, \phi_{o}) + \beta \cdot C_{proxy\_ACE}(\mathcal{L})
\label{eq:enav-cost}
\end{equation}
where $C_{proxy\_ACE}$ is computed as a look-up using the predicted heuristic map.
In our particular implementation for MLNav, we use a model based on a U-Net \cite{ronneberger2015u}, and learn it in a supervised manner. The input is a height-map $m$ and the output is a proxy collision heuristic map, such that the value for each pixel in the heuristic map corresponds to the predicted collision probability. The collision cost at any given spatial point in the height-map depends not only on the terrain features but also on the rover heading. We encode the rover heading as part of the learning problem itself by extending the model output to have a multi-channel representation such that each channel represents a cardinal heading angle (see Figure \ref{fig:train-data}). Note that the granularity of the discretization depends on the specific instantiation of the MLNav framework. For this implementation, we found a discretization of 8 heading angles (at 45 degree intervals) to be sufficient. Sigmoid activation is then applied to each channel to give a value in the range [0, 1]. Training data is collected by running ACE on synthetic terrains that are representative of Martian terrain. Note that the training set does \textit{not} include real Mars terrains although our test does, as described in Section \ref{sec:mars_exp}.
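Generating the multi-channel training target described above can be sketched as follows, with `ace_probability` a hypothetical stand-in for running the model-based checker at a given cell and heading.

```python
import math

def ace_map_target(heightmap, ace_probability, n_headings=8):
    """Build the multi-channel training target: one channel per cardinal
    heading (45-degree bins for n_headings = 8).

    ace_probability(x, y, heading) is a placeholder for evaluating the
    model-based collision checker at a cell for one heading."""
    H, W = len(heightmap), len(heightmap[0])
    target = [[[0.0] * n_headings for _ in range(W)] for _ in range(H)]
    for y in range(H):
        for x in range(W):
            for k in range(n_headings):
                heading = k * 2 * math.pi / n_headings
                target[y][x][k] = ace_probability(x, y, heading)
    return target

# Toy 2x2 heightmap where only cell (1, 1) is risky at every heading.
t = ace_map_target([[0, 0], [0, 1]],
                   lambda x, y, h: 1.0 if (x, y) == (1, 1) else 0.0)
```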
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/enav-overview.pdf}\\
\vspace{2mm}
\includegraphics[width=\linewidth]{figures/mlace-data-for-training.pdf}
\vspace{-7mm}
\caption{ENav overview and examples of training data for learning the heuristic function. For each heightmap, a set of 8 output ACE maps was generated, corresponding to different rover headings.}
\label{fig:train-data}
\end{figure}
\subsection{Experimental Setup}
For all our experiments presented in this section, we used a ROS-based, high-fidelity simulation environment called ENav Sim \cite{toupet2020ros} that was originally developed for prototyping and testing of Perseverance's ENav algorithm. ENav Sim is capable of generating a rich set of varied 2.5D terrains representative of the candidate Mars landing sites, producing synthetic stereo images for simulating onboard heightmap generation, and simulating the rover's motion. At its core, the simulator wraps a flight software implementation of ENav and a software library called HyperDrive Sim (HDSim), which has been used for terrain simulation for multiple Mars rover missions. HDSim provides an image rendering capability based on the rover's navigation cameras, rover-terrain settling and a realistic slip model. Furthermore, ENav Sim provides a method for running large-scale Monte Carlo simulations in parallel and automatically generating reports that capture the key ENav performance metrics. The learned heuristic model was implemented in TensorFlow and set up as a separate subroutine that could be invoked by the ENav algorithm on demand as a ROS service call.
\begin{table*}[!t]
\caption{Performance Evaluation of MLNav for Mars Rover Navigation. We compare MLNav to the baseline ENav, and study its sensitivity to key parameters (MLNav$^{\dagger}$) and tree design (MLNav$^{\dagger}$ (BT), MLNav$^{\dagger}$ (DT) and MLNav$^{\dagger}$ (VLT)). See \ref{sec:exp-sim}(C) for more details.}
\label{tab:enav-ablation}
\centering
\begin{tabular}{c c c c c c c c c}
\toprule
\\[-2mm]
\textbf{Metric} & \textbf{Terrain} & \textbf{Baseline} & \textbf{MLNav} & \textbf{Baseline$^{\dagger}$} & \textbf{MLNav$^{\dagger}$} & \textbf{MLNav$^{\dagger}$ (BT)} & \textbf{MLNav$^{\dagger}$ (DT)} & \textbf{MLNav$^{\dagger}$ (VLT)}\\
\\[-2mm]
\midrule
\midrule
\multirow{2}{*}{Success rate (\%)} & Benign & 100.0 & 99.5 & 99.4 & 99.9 & 99.9 & 98.5 & 99.7\\
& Complex & 69.9 & 72.5 & 69.0 & 69.3 & 73.9 & 59.9 & 78.8\\
\midrule
\multirow{2}{*}{Path Inefficiency (\%)} & Benign & 4.4 & 3.9 & 3.95 & 3.3 & 3.2 & 3.7 & 3.0\\
& Complex & 25.4 & 20.4 & 22.1 & 19.5 & 17.6 & 19.1 & 17.6\\
\midrule
\multirow{2}{*}{Number of Collision Checks} & Benign & 275 & 262 & 74 & 39 & 58 & 78 & 70\\
& Complex & 377 & 283 & 216 & 90 & 142 & 164 & 317\\
\midrule
\multirow{2}{*}{Overthink Rate (\%)} & Benign & 5.3 & 2.2 & 4.7 & 1.2 & 2.7 & 4.4 & 2.98\\
& Complex & 20 & 7.1 & 19 & 6.5 & 10.3 & 15.6 & 12.6\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table}[t]
\centering
\caption{Performance Evaluation vs Model Size}
\begin{tabular}{c c c c c}
\toprule
\\ [-2.5mm]
\textbf{Models} & U-Net & SegNet & DeeplabV3+ & PSPNet \\
\\ [-2.5mm]
\midrule
\\ [-2.5mm]
Model Size & 130 MB & 110 MB & 190MB & 280MB \\
\\ [-2.5mm]
Accuracy & 95.1\% & 81.7\% & 82.8\% & 85.2\% \\
\\ [-2.5mm]
\bottomrule
\end{tabular}
\label{tab:mlace-models}
\end{table}
\subsection{Training and Validation of Learned Heuristics}
We first evaluated the fidelity of our proposed model to learn the proxy collision heuristics. Training data was gathered by running Monte Carlo simulations of the baseline ENav algorithm on 1500 terrains and randomly sampling 8 heightmaps from each trial. For each cell in each sampled height-map, the ACE costs were estimated for eight fixed rover heading values at $45\degree$ intervals, resulting in an 8-channel ``ACE map'' such that each channel corresponded to one of the eight heading-specific ACE values. This ACE map was then used as the training signal for our modified U-Net model. A total of 12000 height-map and ACE-map pairs were generated; 9500 were used for training, and 2500 were used for validation. The learned heuristic model was able to achieve $97.8\%$ training accuracy and $95.1\%$ validation accuracy, demonstrating that the model was able to accurately learn the mapping to collision probabilities directly.
Next, we performed an ablation study to justify the design choice of the U-Net model architecture. In particular, the MLNav framework was designed with computational efficiency as the primary focus. However, one might ask: do larger state-of-the-art models from semantic segmentation help? Table \ref{tab:mlace-models} shows the performance of different models on learning the proxy collision heuristics. We observe that U-Net provides the best performance even though it is the smallest model. Our hypothesis is that while bigger models have a much larger capacity to learn the generic representations required for visual recognition tasks, they also require larger training sets. In contrast, U-Net was specifically designed for biomedical applications, where it could be trained with very few images.
\subsection{Benefits to Mars Rover Navigation}
\label{sec:exp-sim}
In this section, we evaluate the potential benefits to the overall pipeline, in the context of rover navigation. Monte Carlo simulations were run for both the baseline ENav and MLNav, using the same set of terrains. Note that the terrains used for these experiments are a separate set from the set of terrains used to train the learned model, to ensure that the observed performance is not biased. The Monte Carlo simulation consisted of 1500 terrains with various slope and rock density, which is quantified by CFA (cumulative fraction of area) \cite{golombek2003rock}. We report our results in two categories: (1) Benign - terrains with slope less than $15^{\circ}$ and CFA value of $7\%$ or less, and (2) Complex - terrains with greater slope or CFA. For ENav, the default tree of paths used in our experiments is composed of 14 candidate turns-in-place, followed by 11 3-meter arcs of various curvatures, followed by another set of 11 3-meter arcs. Thus, at each step the rover is planning about 6 meters ahead of its current position, and considering 1694 potential paths. Note that the hardware of the Perseverance rover does not allow steering while driving, and thus the rover can only move in fixed-curvature arcs.
For evaluation, the following performance metrics were used:
\begin{itemize}
\item \textbf{Success Rate:} Defined as the percentage of trials that result in the rover reaching the goal without timing out or violating safety constraints. Higher values are better.
\item \textbf{Path Inefficiency:} Defined as the average excess length of the path taken by the rover, as compared to a straight-line path from start to goal, expressed as a percentage. For example, if the rover drives 110 m to reach a goal that is 100 m away in straight-line distance, the path inefficiency is 10 \%. Lower values are better.
\item \textbf{Number of Collision Checks:} Defined as the average ACE runs per planning cycle.
Lower values are better.
\item \textbf{Overthink Rate:} Defined as the percentage of planning cycles in which the number of ACE checks exceeds a minimum threshold (default: 275). When the number of ACE checks exceeds this threshold, it indicates that the highest-ranked paths were all deemed unsafe, and the rover may need to stop and ``overthink'' until a solution is found. Lower values are better.
\end{itemize}
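For concreteness, the two derived metrics can be computed as in this small sketch, which reuses the 110 m vs. 100 m example from above; the function names are ours, not part of the ENav Sim report tooling.

```python
def path_inefficiency(driven_m, straight_m):
    """Excess driven length as a percentage of the straight-line distance."""
    return 100.0 * (driven_m - straight_m) / straight_m

def overthink_rate(ace_checks_per_cycle, threshold=275):
    """Percentage of planning cycles whose ACE-check count exceeds the threshold."""
    over = sum(1 for n in ace_checks_per_cycle if n > threshold)
    return 100.0 * over / len(ace_checks_per_cycle)

ineff = path_inefficiency(110.0, 100.0)      # the 110 m vs 100 m example -> 10 %
rate = overthink_rate([100, 300, 200, 400])  # 2 of 4 cycles exceed 275 -> 50 %
```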
Table \ref{tab:enav-ablation} shows the results from the MC simulations. We observe that using the learned heuristics yields a significant improvement in the overall efficiency of the ENav algorithm. In particular, for complex terrain, there was a $20\%$ reduction in path inefficiency, $25\%$ reduction in the number of collision checks and a $65\%$ reduction in overthink rate using the MLNav framework, as compared to the baseline ENav algorithm. This improvement in computational and performance efficiency comes at almost no cost; the success rate of MLNav is comparable to the baseline, and within the experimental margin of error. Furthermore, one of the primary objectives of this work was to design a framework that could leverage the benefits of ML without compromising the safety guarantees of the vehicle. Based on our extensive MC simulations, our assertion holds. In all our experiments, no trial failures were reported due to violated safety constraints; all failures are due to either timeouts or failures to find paths to the goal. This is the most significant result of this paper.
\subsection{Sensitivity \& Ablation Analyses}
Next, we performed ablation studies on the sensitivity of the MLNav framework to two key parameters:
\subsubsection{Performance vs. minimum ACE threshold} For all our experimental trials above, we define a threshold value of 275 for the minimum number of ACE checks. In practice, this number can be thought of as the computational budget allocated to the planner; it also protects against choosing high-risk paths with finite but poor ACE cost. However, if the learned heuristic is working optimally, one could imagine that further evaluation beyond the first safe trajectory is no longer needed and this computation might be better used elsewhere. In order to study this trade-off, we set the minimum ACE threshold to 0 and repeated the above experiments. We call this version MLNav$^{\dagger}$, and tabulate the results in Table \ref{tab:enav-ablation}.
For comparison, we also run ENav with the minimum ACE threshold set to 0, shown as Baseline$^{\dagger}$ in the table.
This intuitively means that MLNav$^{\dagger}$ and Baseline$^{\dagger}$ always choose the first feasible path found and do not search any further.
Interestingly, the difference in path inefficiency between MLNav and MLNav$^{\dagger}$ (also between Baseline and Baseline$^{\dagger}$) is within the error margin.
Furthermore, MLNav$^{\dagger}$ substantially improved on the number of collision checks and overthink rate. Since ACE is run every 25 cm, evaluating a single 6 m path requires 24 ACE runs. Therefore, the average number of ACE checks being 39 and 90 for benign and complex terrains respectively means that MLNav$^{\dagger}$ finds a feasible path only after evaluating 1.6 and 3.7 options.
These results imply that the learned heuristics almost always ranks the optimal path near the top of the list.
\subsubsection{Performance vs. Tree Design} The above experiments demonstrated that MLNav can significantly reduce computation time in the ENav planning. We next ask: can this surplus computation be put back to use in the planner to improve the success rate? We evaluate three different approaches to increasing the complexity of the tree. First, we increase the number of candidate trajectories in the library, which can increase the probability of finding a safe and efficient path towards the goal. Here, we increase the branching factor of possible actions at each tree depth by $4$, leading to 4050 candidate trajectories. Note that this could increase the computation time by 2-3x for the baseline ENav algorithm. The results in Table \ref{tab:enav-ablation}, MLNav$^{\dagger}$ (BT), show a small improvement in overall success rate with a further reduction in computation. Second, we use a deeper tree by adding a set of 11 arcs to the previous leaf nodes, extending the planning horizon to 9 m. This led to poor results (tabulated as MLNav$^{\dagger}$ (DT)), where the success rate dropped to $59.9\%$ without any improvement in other metrics. We believe this is due to other system-level uncertainties such as stereo and slip, which may be significantly worse that far out. Finally, we use a more complex tree design with variable-length arcs at different depth layers: the first layer was 11 turns-in-place, followed by 15 1-meter arcs, 11 2-meter arcs and 7 3-meter arcs. The results using this tree design (tabulated as MLNav$^{\dagger}$ (VLT)) show an increase in success rate to $78.8\%$. As the average number of collision checks for MLNav$^{\dagger}$ (VLT) was still lower compared to the baseline, we can conclude that with MLNav we can indeed use a more complex tree design to achieve a higher success rate in complex terrains without increasing the computational budget.
\subsection{Hardware-in-the-loop (HIL) Benchmarks}
\label{sec:exp-hils}
We first benchmark using the RAD750 processor, which is similar to the one running on the Perseverance rover. We find that each ACE check takes 12 ms on average. For the baseline ENav algorithm, this translates to a total ACE-checking cost of 4.5 s ($377 \times 12$ ms). In comparison, for the MLNav$^{\dagger}$ case, the total cost of ACE checking would be only 1.1 s, saving 3.4 s of computation time per planning cycle.
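The per-cycle savings quoted above follow directly from the average check counts and the 12 ms-per-check benchmark; a quick sketch:

```python
def ace_time_per_cycle(avg_checks, ms_per_check=12):
    """Average wall-clock cost of collision checking per planning cycle, in seconds."""
    return avg_checks * ms_per_check / 1000.0

baseline = ace_time_per_cycle(377)  # baseline ENav, complex terrain: ~4.5 s
mlnav = ace_time_per_cycle(90)      # MLNav with heuristic ranking: ~1.1 s
saved = baseline - mlnav            # ~3.4 s saved per planning cycle
```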
To quantify the computational cost of running our learned model, we benchmarked its inference time using a Nvidia Jetson TX2, which serves as a reliable analog for future High-performance Space Computing (HPSC) \cite{doyle2013high}, and found that forward inference for predicting the collision heuristic map takes only 125ms, even without model optimization. We thus expect that a real-world deployment of MLNav will lead to significant and tangible acceleration of traditional navigation pipelines.
\subsection{Experimental Setup}
We now validate MLNav on ENav Sim using real data from Mars collected by the Perseverance rover. A challenge to this approach is that we do not usually have the complete set of onboard images for reconstructing the terrain due to the limitation in communication data volume between the two planets. In particular, only the last and the penultimate images are typically transmitted for manual drives and, as of the writing of this paper, there have only been 17 occasions where the rover drove fully autonomously. Within these limitations, we chose two data sets acquired on the following Sols (Martian days since landing) for our test venues:
\paragraph{Sol 122} The drive on Sol 122 was the last of a series of first-time activities (FTAs) for commissioning ENav on Perseverance. On this Sol, the rover was commanded to drive fully autonomously over $\sim$30 m northwards. As the ``final test" for ENav, the goal was intentionally set behind a rock, seen in Figure \ref{fig:hafiq}-left, such that the rover had to deviate from the straight-line path to get to the goal. Due to the need for detailed assessment of this particular drive, all of the on-board stereo image pairs were transmitted to Earth, which allowed us to reconstruct the terrain completely. ENav completed the drive successfully.
\paragraph{Sol 178} It was one of the most challenging drives of Perseverance to date. The $\sim$85 m drive started with $\sim$15 m of manual driving, followed by fully autonomous driving along a ridge that concluded by climbing a slope to reach a science target. There were large rocks and exposed bedrock along the ridge and on each side of the slope as seen in Figure \ref{fig:hafiq}-right. Image pairs and onboard heightmaps were downlinked from the last $\sim15$ m segment of this drive, allowing us to recreate the corresponding portion of the terrain in simulation. ENav completed the drive successfully.
For each Sol, we first ran stereo processing to produce DEMs (digital elevation models). The DEMs were then mosaiced using the onboard pose updates to reconstruct the 3D terrain with a 0.05 m resolution. In simulation, the start and goal were chosen to closely match Perseverance’s actual drive path. We used the MLNav$^{\dagger}$ (VLT) setting. Other parameters were identical to the ones we used in the actual Sol 122 and 178 drives.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figures/perseverance_terrains.pdf}
\vspace{-0.27in}
\caption{Terrains used for experiments. Images taken by Perseverance on Sol 122 (left) and 178 (right). Image: NASA/JPL-Caltech}
\label{fig:hafiq}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figures/Sol122-Baseline.pdf}\\
\vspace{-.27in}
\includegraphics[width=\linewidth]{figures/Sol122-MLNav.pdf}
\vspace{-5mm}
\caption{Paths planning on real Martian terrain (Perseverance Sol 122) with Baseline (ENav; top) and MLNav (bottom).}
\vspace{-2mm}
\label{fig:sol122-visualization}
\end{figure}
\subsection{Results}
The results on both Sols agree with the results on synthetic terrains presented in the previous section.
Both ENav and MLNav found feasible paths to the goal as expected, and the paths are qualitatively similar.
This is consistent with the result in Table \ref{tab:enav-ablation} that MLNav gives relatively minor improvement in path inefficiency, particularly on benign terrains.
Since both the Sol 122 and Sol 178 terrains fall under the ``benign'' category in Table \ref{tab:enav-ablation}, it is expected that the baseline algorithm can find a path as good as MLNav's.
Notable differences were observed when the rover avoided obstacles.
For example, Figure \ref{fig:sol122-visualization} is the visualization of Sol 122 drives by ENav and MLNav when the rover was avoiding a rock.
The blue and pink lines seen in the ENav visualization are the paths on which ENav ran ACE evaluations but ended up not choosing.
Observe that there is only one blue line in the MLNav drive at the same location.
This means that the second-ranked path based on ML heuristics turned out to be feasible, hence MLNav ran ACE only on two paths in this planning cycle.
Of course, MLNav occasionally needed to run ACE on many path options until finding a feasible one, but overall MLNav ran a substantially smaller number of ACE collision checks and, in the majority of the planning cycles, it evaluated only a single path.
The complete movies of Sol 122 and 178 drives are attached as supplemental materials.
Table \ref{tab:mars-data-evaluation} shows the quantitative results. On both sols, there were only trivial changes in path inefficiency, indicating that MLNav resulted in a qualitatively similar path, as discussed above.
In contrast, a substantial improvement in the number of ACE checks is observed. Again, this is because the top-ranked path evaluated by the ML-based heuristics was often feasible even in the presence of obstacles.
While the number of test cases with real Martian terrain was limited for practical reasons mentioned above, this experiment demonstrated the ability of MLNav to improve the performance of path planning in a real environment.
This result is particularly remarkable because, as explained in Section \ref{sec:enav}, the training data that we used for this experiment was produced solely with synthetic terrains \textit{before} the landing of the rover.
\begin{table}[!t]
\caption{Path Planning Results on real Mars data}
\label{tab:mars-data-evaluation}
\centering
\begin{tabular}{c c c c c}
\toprule
\\[-2mm]
& \multicolumn{2}{c}{Sol 122} & \multicolumn{2}{c}{Sol 178} \\
\textbf{Metric} & \textbf{Baseline} & \textbf{MLNav} & \textbf{Baseline} & \textbf{MLNav}\\
\\[-2mm]
\midrule
\midrule
Path Inefficiency (\%) &3.1 & 2.0 & 0.13 & 0.66 \\
\midrule
Number of ACE Checks & 284 & 42.7 & 271 & 28.3 \\
\midrule
Overthink Rate (\%) & 8.3 & 4.2 & 5.6 & 0 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{ML in Planning and Navigation}
The success of deep learning has made the use of ML in planning approaches attractive, as it opens up the possibility of learning high-capacity models to reason about complex, real-world environments. Several prior works proposed systems that use end-to-end navigation approaches, attempting either to replace the entire navigation pipeline from perception to control with black-box image-to-action mappings \cite{levine2016end,daftry2016learning,pfeiffer2017perception} or to learn an end-to-end cost mapping from large amounts of expert demonstrations \cite{abbeel2004apprenticeship,ziebart2008maximum,wulfmeier2017large,wulfmeier2016watch}. While considerable progress is being made in this direction, there are several challenges to adoption in real-world safety-critical robotic applications like Mars rovers. In contrast, a few works have attempted to replace only particular navigation subsystems such as global planning \cite{yao2019following} or local planning \cite{gao2017intention,faust2018prm}, or to improve individual components such as the world representation \cite{richter2017safe}. Our work falls in this latter category: we systematically ``unwrap'' our Mars rover system and identify collision checking as a bottleneck component that can be improved using ML. We also refer the reader to \cite{xiao2020motion} for a comprehensive review on this topic.
\subsection{ML for Collision Checking}
Several related works have also tried to use machine learning to overcome the computational bottleneck of collision checking. Examples include learning a distribution of promising regions \cite{ichter2018learning}, learning heuristics for collision distance metrics such as swept volume \cite{chiang2018fast}, employing a lazy approach to evaluate only the most promising edges based on predicted energy costs \cite{choudhury2018data}, and learning to accelerate collision checking itself by modeling the configuration space of the robot \cite{das2020learning,han2020configuration,huh2016learning,kew2019neural}.
While similar in theme, in the sense that they all use learning to alleviate the bottleneck of collision checking, the above approaches vary considerably in system design.
The key design choices are: (1) to operate directly from raw sensing data or a processed representation (e.g., a configuration space); (2) to directly predict the probability of collision or some other measure such as which paths are more ``promising''; and (3) the interplay with the model-based collision checker and overall system which ultimately guarantees safety. Which design choices work best (including safety guarantees and computational efficiency) depends on the system and its goals. To the best of our knowledge, our approach is the first to be validated on a mature system with real-world safety-critical needs.
\section{Introduction}
When strongly interacting matter is heated, it is expected to go over to the Quark
Gluon Plasma (QGP) phase. Lattice Quantum Chromodynamics (QCD) is extensively used
to study the properties of matter across this transition. Even though the thermodynamic
quantities and the equation of state are known in sufficient detail, more information about the
plasma phase is needed to distinguish it from the hadronic phase.
The study of static correlation lengths constitutes one such important clue. The low lying
degrees of excitations of the plasma can be studied using the spatial correlation
function of hadronic operators \cite{DeTarKogut}. The screening mass is obtained from the
exponential decay of the screening correlators at large spatial separations. Under certain
analyticity assumptions, this screening mass can be related to the
pole of the real time propagator of the corresponding operator as argued in \cite{DeTarKogut}.
These correlation functions, and the screening masses encode
information about the interactions in the medium, indicating for example, whether
certain symmetries broken at low temperatures are restored at high temperatures.
It is known that at very high temperatures the
screening masses of the mesons approach that of a free ideal gas, $2\pi T$ \cite{bornetal}.
Moreover, finite volume corrections for the thermodynamic quantities can also be
studied using the screening masses.
We investigate the spatial correlation functions for hadrons in 2-flavour QCD with
staggered fermions in both the hadronic and the plasma phases. We study the pattern
of chiral symmetry restoration across the cross-over in the screening
spectrum. We study the finite volume effects on the correlation functions at
low temperature and the possibility of the decay of the scalar meson.
\section{Technical Details}
The generation of the configurations used in the present study was reported in
\cite{rvgsgNt6}. The lattices had a temporal extent of $N_t=6$ and spatial extent
$N_s=24$, except in the finite volume study where spatial lattice sizes from 8 to
30 were used. Two flavours of light staggered quarks were used, with the bare quark
masses tuned such that the $T=0$ pion mass $m_{\pi} \approx 230$ MeV at all temperatures $T$.
\newline
Point-to-point correlation functions for local meson operators
in the pseudo-scalar (PS), scalar (S), vector (V) and
axial-vector (AV) channels were analyzed in detail.
Denoting by $G(x,0)$ the staggered propagator,
the correlation functions projected to zero momentum are:
\begin{equation}
C(z) = \frac{1}{N_x^2 N_t} \sum_{x,y,t} \textmd{Tr} [G(x,0) G^{\dagger}(x,0)] g_S(x)
\end{equation}
where $g_S(x)$ \cite{rvgsgpm}
are the staggered phase factors specific to the mesons in
given quantum number channels. We have calculated only
the connected part of the correlation functions, which correspond to
the flavour non-singlet mesons. These correlation functions
contain contributions from the respective parity
partner as well, and hence are parameterized as
\begin{equation}
\label{2masfit}
C(z) = A_1(~\mathrm{e}^{-m_1 z} + \mathrm{e}^{-m_1(N_z-z)}~)
+ (-1)^z A_2(~\mathrm{e}^{-m_2 z} + \mathrm{e}^{-m_2(N_z-z)}~)
\end{equation}
Here $m_1$ and $m_2$ are the screening masses of the lightest
natural parity meson appropriate for the operator used and its opposite
parity partner. $A_1$ and $A_2$ are the respective normalizations.
In this convention, the Goldstone pion is the non-oscillating
part of the correlator and has positive $A_1$.
The tolerance of the conjugate gradient (CG) algorithm was chosen to be
$\epsilon = 10^{-5}$. This ensured that the systematic error in the
meson correlation functions arising due to the matrix inversion
was much smaller than the statistical errors.
The screening masses were extracted by fitting the correlation functions to the
2-mass fit form in Eq.~(\ref{2masfit}). As usual, the correlations between the different
z-slices are incorporated through the covariance matrix and the fit values
are obtained by minimizing the correlated-$\chi^2$.
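As an illustrative sketch of this fitting procedure (not taken from the original analysis), the correlated $\chi^2$ for the 2-mass form of Eq.~(\ref{2masfit}) can be evaluated in a few lines of Python; all parameter values and the toy diagonal covariance below are assumptions for illustration only.

```python
import numpy as np

def two_mass_model(z, Nz, A1, m1, A2, m2):
    # Non-oscillating state plus (-1)^z oscillating opposite-parity partner.
    direct = A1 * (np.exp(-m1 * z) + np.exp(-m1 * (Nz - z)))
    oscill = A2 * (np.exp(-m2 * z) + np.exp(-m2 * (Nz - z)))
    return direct + (-1.0) ** z * oscill

def correlated_chi2(params, z, C_mean, cov, Nz):
    # Residuals weighted by the inverse covariance matrix of the z-slices.
    r = C_mean - two_mass_model(z, Nz, *params)
    return float(r @ np.linalg.solve(cov, r))

Nz = 24
z = np.arange(1, Nz // 2 + 1)            # integer z-slices
true = (1.0, 0.8, 0.3, 1.1)              # (A1, m1, A2, m2), illustrative
C_mean = two_mass_model(z, Nz, *true)    # synthetic "data"
cov = np.diag((0.01 * np.abs(C_mean)) ** 2)   # toy diagonal covariance

chi2_true = correlated_chi2(true, z, C_mean, cov, Nz)   # vanishes at the truth
chi2_off = correlated_chi2((1.0, 0.9, 0.3, 1.1), z, C_mean, cov, Nz)
```

In practice the covariance matrix is estimated from the ensemble of configurations and the $\chi^2$ is minimized over all four parameters; the sketch only shows that the correlated $\chi^2$ is smallest at the generating parameters.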
\section{Results}
While our main results are on the temperature dependence of the screening
mass spectrum on $N_t=6$ lattices, and the consequent study of the chiral
symmetry restoration pattern as a function of temperature, we also present
results for the spatial volume (in-)dependence of the screening masses of
all the mesons near the critical end-point temperature.
\begin{figure}[!tbh]
\begin{center}
\hbox{
\includegraphics[width=0.5\textwidth]{intmes1b5.415.eps}
\includegraphics[width=0.5\textwidth]{intmes1b5.75.eps}
}
\end{center}
\caption{The correlators for all the mesons
at T=1.92 $T_c$ (left) and T=0.97 $T_c$ (right).
Note the prominent oscillations for the V-AV
mesons. We have plotted the absolute
value of the correlators for the V and AV mesons.}
\label{fig:corrl}
\end{figure}
\subsection{Screening spectrum}
We display our results for all the meson correlators in fig \ref{fig:corrl} at two
different temperatures: 1.92 $T_c$ and 0.97 $T_c$. The V/AV correlators
show prominent oscillations at both temperatures. For the same statistics, the
correlators of the other mesons are noisier at large $z$ and at the lower temperature
than that of the pion, which is equally well determined at both temperatures. The pion correlation functions could be
fit well to a single mass form at all temperatures, except the highest temperature
(1.92 $T_c$) where we had to use the 2-mass fit. A full list of the
screening masses will be given elsewhere\cite{dbrvgsg}.
To check the consistency of fitted masses, we also calculated the local masses. Due to the
oscillating nature of the staggered fermions, the local masses were defined using
correlators separated by 2 z-slices as done in \cite{rvgsgpm}:
\begin{equation}
\frac{C(z+1)}{C(z-1)} =
\frac{\cosh[-m(z)(z + 1 - N_z/2)]} {\cosh[-m(z)(z - 1 - N_z/2)]}~.~
\end{equation}
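For concreteness, the local-mass definition above can be solved numerically. The Python sketch below (illustrative names, synthetic single-state correlator) recovers a known mass by bisection; it is not the analysis code used here.

```python
import numpy as np

def corr(z, m, Nz):
    # Single-state periodic correlator, C(z) ~ cosh[m (z - Nz/2)].
    return np.cosh(m * (z - Nz / 2.0))

def local_mass(C, z, Nz, lo=1e-6, hi=5.0, iters=200):
    # Solve C(z+1)/C(z-1) = cosh[m(z+1-Nz/2)] / cosh[m(z-1-Nz/2)] for m by
    # bisection; slices two apart are compared because of the staggered
    # oscillating contribution.
    target = C[z + 1] / C[z - 1]
    f = lambda m: (np.cosh(m * (z + 1 - Nz / 2.0))
                   / np.cosh(m * (z - 1 - Nz / 2.0)) - target)
    a, b = lo, hi
    for _ in range(iters):
        mid = 0.5 * (a + b)
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

Nz, m_true = 24, 0.7
C = corr(np.arange(Nz), m_true, Nz)
m_est = local_mass(C, z=5, Nz=Nz)      # recovers m_true
```

A plateau of $m(z)$ over a range of $z$ then signals dominance of a single state, which is what is compared against the fitted masses.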
The local masses so extracted are shown in
Fig. \ref{fig:localm} for the same two temperatures, 1.92 $T_c$ and 0.97
$T_c$. The pion masses are the best determined in both the phases with small
error bars and agree very well with the fitted values. Such agreement is
also found for the other mesons, although with larger error bars.
\begin{figure}[!tbh]
\begin{center}
\hbox{
\includegraphics[width=0.5\textwidth]{localmb5.415.eps}
\includegraphics[width=0.5\textwidth]{localmb5.75.eps}
}
\end{center}
\caption{The extracted local masses for the mesons at
T=0.97 $T_c$ (left) and T=1.92 $T_c$ (right). For T=1.92 $T_c$,
only the PS and V meson masses are shown since the S and AV
exactly coincide with them. The black line coinciding with the
lower blue line is the ideal gas limit.
}
\label{fig:localm}
\end{figure}
The behaviour we discussed so far for both the hadronic and QGP phase
is generic for other temperatures as well, with gradual changes in the
fit parameters. Fig. \ref{fig:PSfits} shows the fits to the PS and V
correlators using the fitted values of the screening masses and corresponding
amplitudes. The correlators have been scaled to depict the entire range in
temperature on the same plot. The 2-mass nature of the fit used for the V
correlators is very evident.
\begin{figure}[!tbh]
\begin{center}
\hbox{
\includegraphics[width=0.5\textwidth]{PSfit.eps}
\includegraphics[width=0.5\textwidth]{Vfit.eps}
}
\end{center}
\caption{Single mass fits to the pseudo-scalar correlators (left)
and the 2-mass fits to the vector correlators (right) for the entire temperature range
studied here. Correlators have been appropriately scaled to make
them fit in the same figure. The lines and symbols in each plot
represent temperatures in units of $T_c$ in decreasing order
from bottom: 1.92, 1.48, 1.33, 1.21, 1.01, 1.00, 0.99, 0.97, 0.94, 0.92,
0.89. (The lowest curve is for 1.92 $T_c$ and the topmost is
for 0.89 $T_c$.)}
\label{fig:PSfits}
\end{figure}
Finally, we plot the screening masses of the S, PS, V and AV mesons as a function
of $T/T_c$ in Fig. \ref{fig:allmass}. The chiral symmetry, broken at
low temperatures, gets restored sufficiently away from the cross-over temperature
$T_c$. While the PS and S mesons masses become degenerate only around 1.33 $T_c$,
the V and AV mesons are degenerate even at temperatures $\simeq T_c$.
We note from Fig.\ref{fig:localm} that while the V/AV screening masses appear to be the same as the
free theory continuum limit at about 2 $T_c$, the screening masses of the
PS/S mesons still differ from the ideal gas value, and from the V/AV
mesons, by $\sim$20\%.
Our results confirm similar results obtained by other collaborations\cite{bornetal,rvgsgpm,swagato}.
\begin{figure}[!tbh]
\begin{center}
\includegraphics[width=0.6\textwidth]{massplt.eps}
\end{center}
\caption{Masses of the PS, S, V and AV mesons as a function of $T/T_c$.
The free continuum value of $2\pi$ is indicated with an arrow.}
\label{fig:allmass}
\end{figure}
\begin{figure}[!tbh]
\begin{center}
\includegraphics[width=0.6\textwidth]{masspltV.eps}
\end{center}
\caption{Masses of the PS, S, V and AV mesons as a function of $LT$ at $T=0.94 T_c$.}
\label{fig:masV}
\end{figure}
\subsection{Finite Volume Study}
A finite volume study was done at 0.94 $T_c$, since this is the phenomenologically
interesting temperature where the QCD CEP is expected to lie at $\mu_E=1.6 T_c$ \cite{rvgsgNt6}.
A range of lattices with $N_s$=8,12,18,24 and 30 were considered, where
the smallest two had anisotropic spatial extents. From fig. \ref{fig:masV}, it is clear
that the screening masses do not have significant volume dependence. All the lattices
in our study are much bigger than the corresponding screening lengths of the mesons,
which explains the absence of the finite volume effect associated with stable states.
Further, given that the mass of the scalar meson is more than twice the mass of the pion,
we can discuss whether the decay of the scalar is possible. In the continuum, this decay is
forbidden by parity. On the lattice however, unphysical decays are known to occur \cite{clb}.
In our case, parity and kinematical considerations imply that the scalar could only decay into
the pion and one of its taste partners. A finite volume study could potentially answer this
interesting question. Our results indicate that this decay does not occur.
Fig.~\ref{fig:decay} shows that the scalar correlation functions are quite featureless
as a function of volume. Further support comes from the fitted and the measured correlator
normalization, $C_S$(0), which are both independent of volume.
\begin{figure}[!tbh]
\begin{center}
\hbox{
\includegraphics[scale=0.6]{ScorrlV.eps}
\includegraphics[scale=0.6]{corrnorm8.eps}
}
\end{center}
\caption{The left panel shows the scalar meson correlation
functions; they are quite featureless as a function of the volume.
The right panel shows that both the measured and fitted values of
$C_S(0)$ are independent of volume.}
\label{fig:decay}
\end{figure}
It is known that decays can be
arrested if the volume is too small. In this context, we
point out that we have worked with reasonably large volumes.
The $LT$ values in our study range from 1.33 to 5, while $m_{PS}L$ ranges from
2.4 to 12.
\section{Summary}
We studied point-to-point correlation functions for mesons in
the PS, S, V and the AV channels for two flavour
QCD in the staggered fermion formulation. We extracted the corresponding
screening masses and showed that while the screening masses of the
V/AV mesons become degenerate by $T_c$, one needs to go up to 1.33 $T_c$ to see the corresponding
degeneracy of the PS/S mesons.
Moreover, while by 2 $T_c$ the V/AV screening masses
are already seen to reach their continuum value,
the PS/S screening masses are still 15-20\% away
from the free theory value. Using a finite volume study, we find that the scalar
correlation functions are insensitive to the change in spatial volume, which leads us
to suspect that the scalar meson decay does not occur at
finite lattice spacings at temperatures less than the cross-over temperature.
\section*{Acknowledgments}
The computations were performed on the CRAY X1 of the
Indian Lattice Gauge Theory Initiative (ILGTI) in TIFR, Mumbai. D.B.
wishes to acknowledge useful discussions with Saumen Datta,
Nilmani Mathur and Jyotirmoy Maiti.
\section{Introduction}
Topological $K$-theory was first introduced in the 1960's as a generalized cohomology theory \cite{atiyah;k-theory}, and it was soon extended to the equivariant world by Segal \cite{segal;equivariant-k-theory}. In 1970, the notion of twisted $K$-theory was first introduced by Donovan and Karoubi \cite{donovan-karoubi;local-coefficients}. They twisted the $K$-theory of a space $X$ by a torsion element of $H^3(X,\mathbb{Z})$, and it wasn't until the 1980's that Rosenberg \cite{rosenberg;continuous-trace-algebras-bundle-theoretic} showed how to twist by a non-torsion element of $H^3(X,\mathbb{Z})$. He used the connections between $K$-theory and operator algebras to define the twisting.
In the 1990's, twisted $K$-theory gained notice when it came to the attention of physicists studying string theory. They found that certain charges can be described using the twisted $K$-theory of spacetime. More recently Freed, Hopkins, and Teleman proved that the twisted equivariant $K$-theory of a compact simple simply connected Lie group $G$ is the Verlinde algebra of the loop group of $G$ \cite{freed;twisted-verlinde-algebra,fht1,fht2}.
Atiyah and Segal \cite{atiyah-segal;twisted} have recently written a paper which collects the definitions and constructions for twisted $K$-theory and twisted equivariant $K$-theory that are used in \cite{fht1,fht2} and \cite{bunke-schick;T-duality}. For the construction of twisted equivariant $K$-theory, they require that the group $G$ be compact. Motivated by orbifolds, Adem and Ruan \cite{adem-ruan} constructed twisted equivariant $K$-theory in a different manner for a finite group $G$. More general constructions can describe both notions, and work has been done by Jean-Louis Tu and Ping Xu \cite{tu-xu;chern-orbifolds} and also in their collaboration with Camille Laurent-Gengoux \cite{tu-xu-lg;differentiable-stacks} on this.
In this paper we will extend the construction of Adem and Ruan by following the example of L\"{u}ck and Oliver \cite{oliver-completion-theorem-2001} to define twisted equivariant $K$-theory for proper actions of general discrete groups. For simplicity, this paper is meant to be self-contained, but in the interest of brevity, many of the elementary statements are left unproven. The beginning of every section contains references for a deeper treatment of the elementary materials. The paper is organized as follows:
Section 2 introduces projective representations of finite groups and some of the basic results involving them. Particular attention is given to the similarity between the theory of $\alpha$-twisted representations and complex representations. The subgroup theorem is particularly important as it will play a role in section 4.
In section 3, twisted equivariant $K$-theory is constructed in the case of a torsion cocycle $\alpha$. The first part of the section is devoted to background and Theorem \ref{bundle}, which is a key component of the construction. The main theorem (Theorem \ref{main}) is stated and proved in the second part.
Section 4 is devoted to the definition of twisted Bredon cohomology and the description of a spectral sequence relating it to twisted equivariant $K$-theory. First, we define the coefficient functor along with a twisted product which is analogous to the one defined in section 3. Some properties of this twisted product are proved in section 3. Then, we will sketch the construction of the spectral sequence and use a result of L\"{u}ck \cite{luck} to show that it collapses at the $E_2$ page after tensoring with $\mathbb{C}$. A consequence is that this spectral sequence is a module over the non-twisted equivariant Atiyah-Hirzebruch spectral sequence in a natural way.
Section 5 talks about how this fits into the general theory of twisted equivariant $K$-theory coming from \cite{atiyah-segal;twisted}, \cite{tu-xu-lg;differentiable-stacks}, \cite{bunke-schick;T-duality}, and \cite{tu-xu;chern-orbifolds}. I owe great thanks to many people for helping me understand this material including but not limited to Alejandro Adem, Yongbin Ruan, my father, Wolfgang L\"{u}ck, Bob Oliver, Ian Leary, and Johann Leida. I also owe great thanks to the referee for suggesting a more elegant formulation of the results.
\section{Projective Representations}
This section will introduce some of the basic properties of projective representations. In particular, we will focus on the similarities of the theory of projective representations of a finite group with the theory of complex representations of a finite group. For this section, let $G$ be a finite group. More information about projective representations and proofs of the statements can be found in \cite{karpilovsky;projective-representations-finite-groups}.
\subsection{$\alpha$-twisted representations}
A projective representation of $G$ is a map which is not quite a complex representation but is only off by multiplication by a complex number.
\begin{Def}
Let $V$ be a complex vector space. A \emph{projective representation} of $G$ is a map $\rho: G \rightarrow GL(V)$ such that there exists $\alpha: G \times G \rightarrow \mathbb{C}^*$ with
\begin{itemize}
\item[(i)] $\rho(x)\rho(y) = \alpha(x,y)\rho(xy)$ for all $x,y \in G$, and
\item[(ii)] $\rho(1)=I$.
\end{itemize}
\end{Def}
It is not hard to see that $\alpha$ is going to be a 2-cocycle of $G$, i.e. $\alpha \in Z^2(G, \mathbb{C}^*)$. The associativity of multiplication in $GL(V)$ combined with condition $(i)$ gives the cocycle condition, and $\alpha(g,1) = \alpha(1,g) = 1$ for all $g \in G$ by condition $(ii)$.
From now on we will focus on projective representations which are associated to the same cocycle $\alpha$. Any projective representation associated to $\alpha$ is called an \emph{$\alpha$-twisted} representation of $G$. In the case of complex representations of $G$, we have a notion of when two representations are isomorphic. We have a similar idea for $\alpha$-twisted representations.
\begin{Def}
Two $\alpha$-twisted representations $\rho_i:G \rightarrow GL(V_i)$, $i=1,2$, are called
linearly equivalent if there is a vector space isomorphism $f:V_1 \rightarrow V_2$ with
$$\rho_2(g) = f \rho_1(g) f^{-1} \, \, \, \mbox{ for all } g \in G$$
\end{Def}
Clearly the direct sum of two $\alpha$-twisted representations is again an $\alpha$-twisted representation. So we can form the monoid of linear isomorphism classes of $\alpha$-twisted representations of $G$, denoted by $M_\alpha(G)$. Let $R_\alpha(G)$ be its associated Grothendieck group, the $\alpha$-twisted representation group. Note that if $\alpha$ is the trivial cocycle, then $R_\alpha(G)$ is $R(G)$, the complex representation ring of $G$.
It is not hard to see that the tensor product of two $\alpha$-twisted representations is not an $\alpha$-twisted representation, but that doesn't mean that tensor products are uninteresting in this context. If $\alpha$ and $\beta$ are two cocycles, then the tensor product of an $\alpha$-twisted representation $\rho:G \rightarrow GL(V)$ and a $\beta$-twisted representation $\tau:G \rightarrow GL(W)$ is an $\alpha+\beta$-twisted representation. This can easily be seen by forming $\rho \otimes \tau: G \rightarrow GL(V \otimes W)$. Note that this can be extended to a pairing
$$R_\alpha(G) \otimes R_\beta(G) \rightarrow R_{\alpha + \beta}(G).$$
In order to study $R_\alpha(G)$, we introduce the $\alpha$-\emph{twisted group algebra} $\mathbb{C}^{\alpha}G$. We denote by $\mathbb{C}^{\alpha}G$ the vector space over $\mathbb{C}$ with basis \{$\bar{g}\}_{g \in G}$ with product $\bar{g} \cdot \bar{h} = \alpha(g, h)\overline{gh}$ extended by linearity. This makes $\mathbb{C}^{\alpha}G$ into a $\mathbb{C}$-algebra with $\bar{1}$ as the identity.
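As a small machine-checkable illustration (not part of the exposition above), one can verify the cocycle condition and the associativity of $\mathbb{C}^{\alpha}G$ on a concrete example. The Python sketch below uses the illustrative choice $G = \mathbb{Z}/2 \times \mathbb{Z}/2$ with a standard nontrivial cocycle; all names are hypothetical.

```python
from itertools import product

# Klein four-group G = Z/2 x Z/2, written additively.
G = [(a, b) for a in (0, 1) for b in (0, 1)]
add = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# An illustrative nontrivial normalized 2-cocycle on G.
alpha = lambda g, h: (-1) ** (g[1] * h[0])

# Cocycle condition: alpha(g,h) alpha(gh,k) = alpha(h,k) alpha(g,hk).
cocycle_ok = all(
    alpha(g, h) * alpha(add(g, h), k) == alpha(h, k) * alpha(g, add(h, k))
    for g, h, k in product(G, repeat=3)
)

# Twisted group algebra: basis multiplication g_bar * h_bar = alpha(g,h) (gh)_bar,
# with algebra elements stored as {group element: coefficient}.
def mult(x, y):
    out = {}
    for g, cg in x.items():
        for h, ch in y.items():
            gh = add(g, h)
            out[gh] = out.get(gh, 0) + cg * ch * alpha(g, h)
    return out

basis = lambda g: {g: 1}
assoc_ok = all(
    mult(mult(basis(g), basis(h)), basis(k)) == mult(basis(g), mult(basis(h), basis(k)))
    for g, h, k in product(G, repeat=3)
)

# The twist is visible: the basis elements for (0,1) and (1,0) anticommute.
g, h = (0, 1), (1, 0)
anticommute = mult(basis(g), basis(h)) == {k: -v for k, v in mult(basis(h), basis(g)).items()}
```

The anticommutation shows that this $\mathbb{C}^{\alpha}G$ is noncommutative even though $G$ is abelian, so the cocycle cannot be a coboundary.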
\begin{Def}
If $\alpha$ and $\beta$ are cocycles of $G$, then $\mathbb{C}^{\alpha}G$ and $\mathbb{C}^{\beta}G$ are \emph{equivalent} if there exists a $\mathbb{C}$-algebra isomorphism $\phi:\mathbb{C}^{\alpha}G \rightarrow \mathbb{C}^{\beta}G$ and a mapping $t:G \rightarrow \mathbb{C}^*$ such that $\phi(\bar{g}) = t(g)\tilde{g}$ for all $g \in G$, where {$\bar{g}$} and {$\tilde{g}$} are the bases for the two twisted group algebras.
\end{Def}
This defines an equivalence relation on such twisted algebras, and we have the following classification result:
\begin{Lem}
There is an equivalence of twisted group algebras, $\mathbb{C}^{\alpha}G \simeq \mathbb{C}^{\beta}G$, if and only if $\alpha$ is cohomologous to $\beta$. In fact, $\alpha \longmapsto \mathbb{C}^{\alpha}G$ induces a bijective correspondence between $H^2(G, \mathbb{C}^*)$ and the set of equivalence classes of twisted group algebras of $G$ over $\mathbb{C}$.
\end{Lem}
In \cite[Theorem 2.2.5]{karpilovsky;projective-representations-finite-groups}, it is shown that these twisted group algebras play the same role in determining $R_\alpha(G)$ that $\mathbb{C} G$ plays in determining $R(G)$.
\begin{Thm}
There is a bijective correspondence between $\alpha$-twisted representations of $G$ and $\mathbb{C}^{\alpha}G$-modules. This correspondence preserves sums and bijectively maps linearly equivalent (irreducible, completely reducible) representations into isomorphic (irreducible, completely reducible) modules.
\end{Thm}
$\mathbb{C}^{\alpha}G$ is also called the \emph{$\alpha$-twisted regular representation} of $G$. In \cite{karpilovsky;projective-representations-finite-groups}, it is shown that every $\mathbb{C}^{\alpha} G$-module is projective.
\subsection{Induced $\alpha$-Representations}
Now let $\alpha$ be a fixed cocycle of $G$. We will use $\alpha$ to denote both the cocycle and its restriction to any subgroup as long as it is clear from the context which is meant. With this convention, for a subgroup $H$ of $G$ we can identify $\mathbb{C}^{\alpha}H$ with the subalgebra of $\mathbb{C}^{\alpha}G$ consisting of all $\mathbb{C}$ linear combinations of the elements \{$\bar{h}|h \in H$\}. If $V$ is a $\mathbb{C}^{\alpha}G$-module, then we shall denote by $V_H$ the $\mathbb{C}^{\alpha}H$-module obtained by the restriction of the algebra. So $V_H$ equals $V$, but only the action of $\mathbb{C}^{\alpha}H$ is defined on $V_H$. This process is called \emph{restriction} and it takes any $\mathbb{C}^{\alpha}G$-module $V$ to a uniquely determined $\mathbb{C}^{\alpha}H$-module $V_H$.
There is a dual process of \emph{induction}. Let $W$ be a $\mathbb{C}^{\alpha}H$-module. We are considering $\mathbb{C}^{\alpha}H$ as a subalgebra of $\mathbb{C}^{\alpha}G$, and so we can define a $\mathbb{C}^{\alpha}G$-module structure on the tensor product $\mathbb{C}^{\alpha}G \otimes_{\mathbb{C}^{\alpha}H} W$. We will denote this induced module $W^G$.
Here are some formal properties of induced modules
\begin{Lem}
Let $V_1$ and $V_2$ be submodules of a $\mathbb{C}^{\alpha}H$-module $V$.
\begin{itemize}
\item[(i)] $V^G_1 \subseteq V^G_2$ if and only if $V_1 \subseteq V_2$.
\item[(ii)] $V^G_1 = V^G_2$ if and only if $V_1 = V_2$.
\item[(iii)] $(V_1 + V_2)^G = V^G_1 + V^G_2$.
\item[(iv)] $(V_1 \bigcap V_2)^G = V^G_1 \bigcap V^G_2$.
\item[(v)] If $V = V_1 \oplus V_2$, then $V^G = V^G_1 \oplus V^G_2$.
\end{itemize}
\end{Lem}
\begin{Lem}
Let $H$ be a subgroup of $G$ and let $\xymatrix@1{U \ar[r]^-{\lambda} & V \ar[r]^-{\mu} & W}$ be a sequence of homomorphisms of $\mathbb{C}^{\alpha}H$-modules. Then the following properties hold:
\begin{itemize}
\item[(i)] The sequence
$$\xymatrix@1{
0 \ar[r] & U \ar[r]^-{\lambda} & V \ar[r]^-{\mu} & W \ar[r] & 0 & & (*)
}$$
is exact if and only if the corresponding sequence of $\mathbb{C}^{\alpha}G$-modules
$$\xymatrix@1{
0 \ar[r] & U^G \ar[r]^-{1 \otimes \lambda} & V^G \ar[r]^-{1 \otimes \mu} & W^G \ar[r] & 0 & & (**)
}$$
is exact.
\item[(ii)] Suppose that the sequence \emph{(*)} is exact. Then \emph{(*)} splits if and only if \emph{(**)} splits.
\item[(iii)] If $U$ is a submodule of $V$, then $V^G/U^G \cong (V/U)^G$.
\end{itemize}
\end{Lem}
Some notation is needed for the following theorem. Let $H$ be a subgroup of $G$, and let $V$ be a $\mathbb{C}^{\alpha}H$-module. We can always choose a $\mathbb{C}^{\alpha}G$-module $W$ so that $V$ is a submodule of $W_H$. Then, for any $g \in G$, $\bar{g}V = \{\bar{g}v|v \in V \}$ is a $\mathbb{C}$-subspace of $W$. In fact, $\bar{g}V$ is a $\mathbb{C}^{\alpha}(gHg^{-1})$-module whose isomorphism class is independent of the choice of $W$. Now define $V^{(g)}$ to be the $\mathbb{C}^{\alpha}(gHg^{-1})$-module whose underlying space is $V$ and with $\bar{x}$, for $x \in gHg^{-1}$, acting as
$$\begin{array}{rcl}
\bar{x} \cdot v & = & (\bar{g}^{-1} \bar{x} \bar{g})v \\
& = &\alpha^{-1}(xg, g^{-1})\alpha^{-1}(g, g^{-1}xg)\alpha(g, g^{-1})\overline{g^{-1}xg}v. \\
\end{array}$$
Note that if $g \in C_G(H)$, then $V^{(g)}$ is isomorphic to $V$ as $\mathbb{C}^{\alpha}H$-modules since $\overline{g^{-1}xg} = \bar{x}$.
\begin{Thm}[Subgroup Theorem] \label{sub}
Let $H$ and $K$ be subgroups of the finite group $G$. Let $T$ be a set of double coset representatives for $(K, H)$ in $G$, and for $\alpha \in Z^2(G, \mathbb{C}^*)$, let $V$ be a $\mathbb{C}^{\alpha}H$-module. For each $t \in T$, denote by $V_t$ the restriction of $V^{(t)}$ to $\mathbb{C}^{\alpha}(tHt^{-1} \bigcap K)$. Then
$$(V^G)_K \cong \bigoplus_{t \in T} (V_t)^K$$
as $\mathbb{C}^{\alpha}K$-modules.
\end{Thm}
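Taking dimensions in the subgroup theorem yields the Mackey-type count $[G:H]\dim V = \sum_{t \in T} [K : tHt^{-1} \cap K]\dim V$, which can be checked on a small example. The sketch below uses the illustrative choice $G = S_3$ with $H = K$ generated by a transposition; it only verifies the dimension bookkeeping, not the module isomorphism.

```python
from itertools import permutations

# S3 as permutations of {0,1,2}, stored as tuples; (p*q)(i) = p(q(i)).
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
e = (0, 1, 2)
H = [e, (1, 0, 2)]        # generated by the transposition (0 1)
K = [e, (1, 0, 2)]

# (K, H) double coset representatives in G.
reps, seen = [], set()
for g in G:
    if g not in seen:
        reps.append(g)
        seen |= {compose(compose(k, g), h) for k in K for h in H}

# Dimension count from the subgroup theorem, with dim V = d:
# dim (V^G)_K = [G:H] d  must equal  sum_t [K : tHt^{-1} cap K] d.
d = 1
lhs = (len(G) // len(H)) * d
rhs = sum(
    (len(K) // len([k for k in K
                    if k in {compose(compose(t, h), inverse(t)) for h in H}])) * d
    for t in reps
)
```

Here there are two $(K,H)$ double cosets, contributing $1 + 2 = 3 = [G:H]$, as the theorem requires.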
\section{Twisted Equivariant $K$-theory}
We will construct twisted equivariant $K$-theory out of finite rank equivariant bundles, similarly to \cite{segal;equivariant-k-theory}. However, we only consider equivariant bundles whose underlying structure is a complex vector bundle over $X$. Another concern is that in \cite{segal;equivariant-k-theory} the group must be compact. Since we are interested in actions of general discrete groups, we can no longer assume this. However, we will assume that the group is acting properly. This means that all the subgroups that fix a point are compact and therefore finite. We will use the work of L\"{u}ck and Oliver \cite{oliver-completion-theorem-2001} to construct the \emph{$\alpha$-twisted} equivariant $K$-theory of a finite proper $G$-CW pair for a discrete group $G$.
First we need some background for the construction. Let $\alpha \in Z^2(G, U(1))$ be a torsion cocycle of order $n$. Since
$\alpha$ is torsion of order $n$, $\alpha^n$ is cohomologous to the
trivial cocycle, i.e., there exists a cocycle $t \in Z^1(G, U(1))$ with
$\alpha^n = (\delta t)$. Define a cocycle $u \in Z^1(G, U(1))$ by
$u(g) = (t(g))^{-1/n}$. We can define such a $u$ since $U(1)$ is a divisible group. Using multiplicative notation, this means we can always take $n^{th}$ roots. $\beta = \alpha (\delta u)$ is again torsion of
order $n$, and $\beta$ takes values in $\mathbb{Z}/n\mathbb{Z}$. So by restricting to a
cohomologous cocycle if necessary, we may assume that $\alpha$ takes
values in $\mathbb{Z}/n\mathbb{Z}$. We call such a cocycle \emph{normalized}.
So, we can take $\alpha$ to represent a central extension of $G$ by $\mathbb{Z}/n\mathbb{Z}$ as below.
$$1 \longrightarrow \mathbb{Z}/n\mathbb{Z} \longrightarrow G_{\alpha} \stackrel{\rho}{\longrightarrow} G \longrightarrow 1$$
Recall that elements of $G_{\alpha}$ are of the form $(g, \sigma^k)$ for $g \in G$, $\sigma$ the generator of $\mathbb{Z}/n\mathbb{Z}$ and $0 \leq k \leq n-1$; where multiplication is given by $(g, \sigma^j) \cdot (h, \sigma^k) = (gh, \alpha(g,h) \sigma^{j+k})$. We can form $\mathbb{C} G_{\alpha}$, the algebra over $\mathbb{C}$ with basis $\{ \overline{(g, \sigma^j)} | g \in G, \, 0 \leq j \leq n-1 \}$. We also have $\mathbb{C}^{\alpha} G$, the $\alpha$-twisted group algebra of $G$, and $\mathbb{C}^{\alpha} G$ has basis $\{ \tilde{g} | g \in G \}$ as an algebra over $\mathbb{C}$.
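As a small sanity check (illustrative only, with the hypothetical choice $G = \mathbb{Z}/2$ and $n = 2$), the multiplication rule above can be verified to define a group in which $\mathbb{Z}/n\mathbb{Z}$ is central; for this cocycle the extension is $\mathbb{Z}/4$, not the split product.

```python
from itertools import product

n = 2                      # order of the cocycle / of Z/nZ
G = [0, 1]                 # G = Z/2, written additively

# Normalized cocycle recorded as an exponent of sigma: a(1,1) = 1, else 0.
# (This class generates H^2(Z/2, Z/2).)
a = lambda g, h: 1 if (g == 1 and h == 1) else 0

# Elements of G_alpha are pairs (g, sigma^k); multiplication as in the text:
# (g, sigma^j)(h, sigma^k) = (gh, alpha(g,h) sigma^{j+k}).
Galpha = list(product(G, range(n)))
mul = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1] + a(x[0], y[0])) % n)

e = (0, 0)
assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
            for x, y, z in product(Galpha, repeat=3))
central = all(mul((0, 1), x) == mul(x, (0, 1)) for x in Galpha)

# (1, sigma^0) has order 4, so G_alpha is Z/4: a nontrivial central extension.
x = (1, 0)
order4 = mul(x, x) != e and mul(mul(x, x), mul(x, x)) == e
```

Associativity of the pair multiplication is exactly the cocycle condition, so this check also exercises the cocycle.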
For the following let $\zeta = e^{2 \pi i/n}$. There is a natural $\mathbb{C}$-algebra monomorphism $\phi:\mathbb{C}^{\alpha}G \rightarrow \mathbb{C} G_{\alpha}$ with
$$\phi(\tilde{g}) = \displaystyle\frac{1}{n} \displaystyle\sum_{j=0}^{n-1} \zeta^{n-j} \overline{(g, \sigma^j)}$$
extended by linearity. This is a $\mathbb{C}$-algebra homomorphism because if $\alpha(g, h) = \sigma^j$, then
$$\begin{array}{rcl}
\phi(\tilde{g})\phi(\tilde{h}) & = & \displaystyle\frac{1}{n^2}\left(\displaystyle\sum_{0 \leq k \leq n-1} \zeta^{n-k}\overline{(g, \sigma^k)}\right) \left(\displaystyle\sum_{0 \leq m \leq n-1} \zeta^{n-m}\overline{(h, \sigma^m)}\right) \\
& & \\
& = & \displaystyle\frac{1}{n^2}\left(n \overline{(gh, \sigma^j)} + n\zeta^{n-1}\overline{(gh, \sigma^{j+1})} + \cdots + n\zeta^1\overline{(gh, \sigma^{j-1})}\right)\\
& & \\
& = & \alpha(g,h)\phi(\widetilde{gh}) \\
\end{array}$$
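The homomorphism property of $\phi$ can also be checked numerically. The Python sketch below (with the illustrative choice $G = \mathbb{Z}/2$, $n = 2$, and the cocycle class generating $H^2(\mathbb{Z}/2,\mathbb{Z}/2)$) verifies $\phi(\tilde{g})\phi(\tilde{h}) = \alpha(g,h)\phi(\widetilde{gh})$ for all $g, h$, with the scalar $\sigma^j$ acting as $\zeta^j$; all names are hypothetical.

```python
from itertools import product
import cmath

n = 2
zeta = cmath.exp(2j * cmath.pi / n)                 # primitive n-th root of unity
G = [0, 1]
a = lambda g, h: 1 if (g == 1 and h == 1) else 0    # cocycle exponent
mul = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1] + a(x[0], y[0])) % n)

# Multiplication in the (untwisted) group algebra C G_alpha,
# elements stored as {(g, k): complex coefficient}.
def algmul(u, v):
    out = {}
    for x, cx in u.items():
        for y, cy in v.items():
            z = mul(x, y)
            out[z] = out.get(z, 0) + cx * cy
    return out

# phi(g-tilde) = (1/n) sum_j zeta^{n-j} (g, sigma^j)
phi = lambda g: {(g, j): zeta ** (n - j) / n for j in range(n)}

def close(u, v, tol=1e-12):
    keys = set(u) | set(v)
    return all(abs(u.get(k, 0) - v.get(k, 0)) < tol for k in keys)

# Check phi(g)phi(h) = zeta^{a(g,h)} phi(gh) for all g, h in G.
ok = all(
    close(algmul(phi(g), phi(h)),
          {k: zeta ** a(g, h) * c for k, c in phi((g + h) % 2).items()})
    for g, h in product(G, repeat=2)
)
```

The image of $\phi$ is the $\zeta$-eigenspace of the central element $(1,\sigma)$ acting on $\mathbb{C} G_{\alpha}$, which is the idea behind the subbundle construction in Theorem \ref{bundle}.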
Note that $\phi(\zeta^{k} v) = \overline{(1,
\sigma^k)}\cdot\phi(v)$ for any $v \in \mathbb{C}^{\alpha} G$. Since $\phi$
takes the basis elements of $\mathbb{C}^\alpha G$ to linearly independent
vectors in $\mathbb{C} G_{\alpha}$, $\phi$ is clearly a monomorphism. This
gives a way to identify $\mathbb{C}^{\alpha} G$ with a subspace of $\mathbb{C}
G_{\alpha}$. This identification is natural with respect to restriction
to subgroups as follows. For any group monomorphism $f:H \rightarrow G$, we have a commutative diagram
$$\begin{array}{rcl}
\mathbb{C}^{\alpha} H & \stackrel{\tilde{f}}{\longrightarrow} & \mathbb{C}^{\alpha} G \\
\downarrow \phi & & \phi \downarrow \\
\mathbb{C} H_{\alpha} & \stackrel{\bar{f}}{\longrightarrow} & \mathbb{C} G_{\alpha} \\
\end{array}$$
It is well known that if $[G_{\alpha}:H_{\alpha}] = k$, then $\mathbb{C} G_{\alpha} \cong
\oplus_{i=1}^{k} \mathbb{C} H_{\alpha}$ as right $\mathbb{C} H_\alpha$-modules by the
relationship
$$
\mathbb{C} G_{\alpha} = \overline {t_1} (\mathbb{C} H_{\alpha}) \oplus \overline{t_2}
(\mathbb{C} H_{\alpha}) \oplus \overline{t_3} (\mathbb{C} H_{\alpha}) \oplus \cdots
\oplus \overline{t_k} (\mathbb{C} H_{\alpha})
$$
where $\{ t_1, t_2, t_3, \ldots, t_k \}$ is a left transversal for
$H_{\alpha}$ in $G_{\alpha}$. We may assume that $t_1$ is the
identity element of $G_{\alpha}$. Note that $\{ \rho(t_1), \rho(t_2),
\rho(t_3), \ldots, \rho(t_k) \}$ is a left transversal for $H$ in $G$, where $\rho:G_\alpha \rightarrow G$ is the projection. So we have
$$ \mathbb{C}^{\alpha}G = \bigoplus_{i=1}^{k} \widetilde{\rho(t_i)}(\mathbb{C}^{\alpha}H)$$
This fact combined with the canonical inclusion above gives the
following commutative diagram of $H_{\alpha}$-modules
$$\begin{array}{rcl}
\bigoplus_{i=1}^{k} \widetilde{\rho(t_i)}(\mathbb{C}^{\alpha}H) & \stackrel{=}{\longrightarrow} &
\mathbb{C}^{\alpha}G \\
\downarrow \oplus \phi & & \downarrow \phi \\
\bigoplus_{i=1}^k \overline{t_i}(\mathbb{C} H_{\alpha}) & \stackrel{=}{\longrightarrow} & \mathbb{C}
G_{\alpha} \\
\end{array}$$
This will be important in the proof of the following theorem.
\begin{Thm} \label{bundle}
Let $G$ be a discrete group, and let $\alpha$ be a normalized torsion cocycle of $G$ as above. Then for any finite proper $G$-CW complex $X$, there is a $G_{\alpha}$-bundle $E$ over $X$ with fiber $E|_x \cong (V_{\alpha, G_x})^{\oplus k}$ for some $k$, where $V_{\alpha, G_x} = \mathbb{C}^{\alpha}G_x$. That is, the fiber of $E$ is a direct sum of copies of the twisted regular representation of $G_x$.
\end{Thm}
\noindent \emph{Proof.}
First, we need to note that since $G_{\alpha}$ is a finite extension of $G$, we can think of $X$ as a finite, proper $G_\alpha$-CW complex as follows. For any subgroup $H_i$ of $G$, $\alpha$ restricts to the subgroup $H_i$ to define an extension $H_{i, \alpha}$ of $H_i$ by $\mathbb{Z}/n\mathbb{Z}$. Then for every $G$-equivariant $n$-cell of $X$, say $(G/H_i \times D^n, \, G/H_i \times S^{n-1})$, define a $G_\alpha$-equivariant $n$-cell of $X$, $(G_{\alpha}/H_{i, \alpha} \times D^n, \, G_{\alpha}/H_{i, \alpha} \times S^{n-1})$. Gluing these cells together in the obvious way gives a $G_\alpha$-CW structure for $X$.
Another fact we will need is that there is a $G_\alpha$-bundle $E^\prime$ over $X$ with fiber $E'|_x$ the direct sum of a number of copies of the regular representation of $G_{x,\alpha}$ \cite[Corollary 2.7]{oliver-completion-theorem-2001}. The bundle we want will be a subbundle of $E^\prime$.
Using the notation above, we will write elements of $G_\alpha$ in the form $(g, \sigma^i)$ with $g \in G$ and $\sigma$ the generator of $\mathbb{Z}/n\mathbb{Z}$, and let $\zeta = e^{2\pi i /n}$. Note that since $(1,\sigma^j)$ fixes $X$, the action of $(1,\sigma^j)$ on $E^\prime$ must take each fiber to itself. Define $E$ to be the subbundle of $E^\prime$ consisting of all vectors $v$ with $(1,\sigma^j) \cdot v = \zeta^j v$ for $0 \leq j \leq n-1$.
It remains to be shown that $E$ is the bundle we want. The fiber $E|_x$ is a subspace of the fiber $E^\prime |_x = (\mathbb{C} G_{x,\alpha})^{\oplus k}$, so $E|_x$ contains $(\mathbb{C}^\alpha G_x)^{\oplus k}$ in an obvious way. We need to show that there is nothing else. So let $v$ be a vector in $E|_x$. We can view $v$ as a vector in $E^\prime |_x = (\mathbb{C} G_{x,\alpha})^{\oplus k}$, and we can express $v$ in the following way,
$$v = \displaystyle\sum_{j=1}^{k} \displaystyle\sum_{h^j \in G_x^j} \displaystyle\sum_{l =0}^{n-1} a_{h,l}^j\overline{(h^j, \sigma^l)}
$$
where $G_x^j$ is the $j^{th}$ copy of $G_x$ in the fiber. A simple calculation shows that since $(1,\sigma^j) \cdot v = \zeta^j v$, $v \in (\mathbb{C}^\alpha G_x)^{\oplus k}$. So $E$ is the bundle we want.
\hfill \qed
\vspace{0.2cm}
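The eigenspace computation at the end of the proof can be illustrated with a small numerical sketch (purely illustrative, and not part of the argument): taking $G_x$ trivial and $\alpha$ trivial, so that $\mathbb{C} G_{x,\alpha} = \mathbb{C}[\mathbb{Z}/n\mathbb{Z}]$, each $\zeta^j$-eigenspace of the central generator acting on the regular representation is one-dimensional, matching $\dim \mathbb{C}^{\alpha}G_x = 1$.

```python
import cmath

n = 6
zeta = cmath.exp(2j * cmath.pi / n)

# The regular representation of Z/nZ: sigma acts by cyclically
# shifting coordinates, (sigma^k . v)[l] = v[l - k].
def sigma_pow(v, k):
    return [v[(l - k) % n] for l in range(n)]

# Isotypic projector onto the zeta^j-eigenspace:
#   P_j v = (1/n) * sum_l zeta^{-jl} sigma^l v.
def project(v, j):
    out = [0j] * n
    for l in range(n):
        w = sigma_pow(v, l)
        for i in range(n):
            out[i] += zeta ** (-j * l) * w[i] / n
    return out

# Dimension of each eigenspace = trace of P_j = sum_k (P_j e_k)[k].
dims = []
for j in range(n):
    tr = sum(project([1 if i == k else 0 for i in range(n)], j)[k].real
             for k in range(n))
    dims.append(round(tr))
print(dims)  # [1, 1, 1, 1, 1, 1]
```

The $n$ one-dimensional eigenspaces together exhaust $\mathbb{C}[\mathbb{Z}/n\mathbb{Z}]$, in line with the splitting used above.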
Let $G$ be a discrete group and let $\alpha \in Z^2(G, U(1))$ be a normalized
cocycle of finite order $n$, as above. We know that $\alpha$ defines a central
extension of $G$ by $U(1)$, and this extension factors through a central extension of $G$ by $\mathbb{Z}/n\mathbb{Z}$ in the following sense. In the diagram below, $G_\alpha$ is included as a subgroup of $\widetilde{G_\alpha}$, and the diagram commutes.
$$\begin{array}{rcccccccl}
1 & \rightarrow & \mathbb{Z}/n\mathbb{Z} & \rightarrow & G_\alpha & \rightarrow & G & \rightarrow & 1 \\
\left| \right| & & \downarrow & & \downarrow & & \left| \right| & & \left| \right| \\
1 & \rightarrow & U(1) & \rightarrow & \widetilde{G_\alpha} & \rightarrow & G & \rightarrow & 1 \\
\end{array}$$
From the above diagram, it is not hard to see that any $\widetilde{G_{\alpha}}$-equivariant bundle in which the central $U(1)$ acts by complex multiplication on the fibers restricts to a $G_{\alpha}$-equivariant bundle in which the central $\mathbb{Z}/n\mathbb{Z}$ acts by multiplication by powers of $e^{2\pi i/n}$ on the fibers. Equivalently, it is clear that for any $G_{\alpha}$-bundle in which the central $\mathbb{Z}/n\mathbb{Z}$ acts by complex multiplication on the fibers, the action can be extended to a $\widetilde{G_{\alpha}}$ action with the central $U(1)$ acting by complex multiplication on the fibers. So it suffices to consider $G_{\alpha}$-equivariant bundles with this action.
\begin{Def} Let $G$ be a discrete group, $X$ a finite, proper $G$-CW
complex, and $\alpha \in Z^2(G, U(1))$ a normalized torsion cocycle of
order $n$. An \emph{$\alpha$-twisted $G$-bundle} over $X$ is a complex vector
bundle $E \rightarrow X$ with an action of $\widetilde{G_\alpha}$ on $E$ which covers the action of $G$ on $X$ and which restricts to the complex multiplication action of $U(1) \subset \widetilde{G_\alpha}$ on the fibers.
\end{Def}
So far, we have only been considering \emph{torsion} cocycles. It turns out that for our purposes, it suffices to consider only torsion cocycles, as the following lemma will show.
\begin{Lem}
Let $X$ and $G$ be as in the definition, and let $E$ be an $\alpha$-twisted $G$-bundle of rank $n$. Then the order of $\alpha$ divides $n$.
\end{Lem}
\noindent \emph{Proof.}
Let $E$ be an $\alpha$-twisted $G$-bundle of rank $n$ over $X$. Since $E$ is a $(\widetilde{G_\alpha}, GL_n(\mathbb{C}))$-bundle, we will get a homomorphism $\tilde{f}:\widetilde{G_\alpha} \rightarrow GL_n(\mathbb{C})$ by restricting to an orbit and trivializing the bundle there. $\tilde{f}$ descends to a homomorphism $f:G \rightarrow PGL_n(\mathbb{C})$ after dividing out by $U(1)$. The map $f$ represents an element of $H^1(G; PGL_n(\mathbb{C}))$ in an obvious way.
There is a long exact sequence in group cohomology coming from the short exact sequence of coefficients
$$\begin{array}{rcccccccl}
1 & \rightarrow & U(1) & \rightarrow & GL_n(\mathbb{C}) & \rightarrow & PGL_n(\mathbb{C}) & \rightarrow & 1.
\end{array}$$
This gives a map $\delta: H^1(G;PGL_n(\mathbb{C})) \rightarrow H^2(G; U(1))$ which, it is easy to see, takes the cohomology class of $f$ to the cohomology class of $\alpha$. Because of the following commuting diagram of coefficients
$$\begin{array}{rcccccccl}
1 & \rightarrow & \mathbb{Z}/n\mathbb{Z} & \rightarrow & SL_n(\mathbb{C}) & \rightarrow & PGL_n(\mathbb{C}) & \rightarrow & 1 \\
& & \downarrow & & \downarrow & & \left| \right| & & \\
1 & \rightarrow & U(1) & \rightarrow & GL_n(\mathbb{C}) & \rightarrow & PGL_n(\mathbb{C}) & \rightarrow & 1, \\
\end{array}$$
we have the following commutative diagram in cohomology
$$\xymatrix{
H^1(G; PGL_n(\mathbb{C})) \ar[r]^{\delta^\prime} \ar[d]_{\cong} & H^2(G; \mathbb{Z}/n\mathbb{Z}) \ar[d] \\
H^1(G; PGL_n(\mathbb{C})) \ar[r]^\delta & H^2(G; U(1)). }
$$
Therefore, $\alpha$ lies in the image of $H^2(G; \mathbb{Z}/n\mathbb{Z}) \rightarrow H^2(G; U(1))$ and so it must be $n$-torsion. This proves the statement.\hfill \qed
\vspace{0.2cm}
In other words, if $\alpha$ is not a torsion cocycle, there are no finite rank $\alpha$-twisted $G$-bundles. If $\alpha$ is torsion, then we know there is at least one $\alpha$-twisted bundle by Theorem \ref{bundle}.
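For a concrete instance of the lemma (a standard example, not taken from the present construction), consider $G = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ acting projectively on $\mathbb{C}^2$ through the Pauli matrices. The associated cocycle, recovered numerically below, takes values in $\{\pm 1\}$ and so is $2$-torsion, consistent with the rank being $2$.

```python
# Pauli matrices as 2x2 integer matrices (tuples of rows).
I2 = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))
Z = ((1, 0), (0, -1))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Projective representation f of G = Z/2 x Z/2:  f(a, b) = X^a Z^b.
def f(a, b):
    M = I2
    if a:
        M = mul(M, X)
    if b:
        M = mul(M, Z)
    return M

# Recover the cocycle from  f(g) f(h) = alpha(g, h) f(g + h).
def alpha(g, h):
    gh = ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
    M, N = mul(f(*g), f(*h)), f(*gh)
    # M and N agree up to one scalar; read it off a nonzero entry of N.
    for i in range(2):
        for j in range(2):
            if N[i][j]:
                return M[i][j] / N[i][j]

G = [(a, b) for a in (0, 1) for b in (0, 1)]
values = {alpha(g, h) for g in G for h in G}
print(values)  # the cocycle takes values in {+1, -1}, so it is 2-torsion
```

Since $ZX = -XZ$, the value $-1$ actually occurs, so the cocycle is nontrivial of order exactly $2$.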
Now if $E$ and $F$ are both $\alpha$-twisted $G$-bundles, then so is $E \oplus F$. This means that isomorphism classes of $\alpha$-twisted bundles over $X$ form a monoid. Let $\,^{\alpha}K_G(X)$ be the associated Grothendieck group. It is important to note that if $E$ and $F$ are both $\alpha$-twisted $G$-bundles, $E \otimes F$ is {\bf not} an $\alpha$-twisted $G$-bundle. So $\,^{\alpha}K_G(X)$ doesn't have a ring structure.
The following observation will allow us to extend the construction of $\,^\alpha K_G(X)$ to a $\mathbb{Z}/2\mathbb{Z}$-graded equivariant cohomology theory. Any $G_\alpha$-equivariant bundle $E$ over $X$ will determine a representation of $\mathbb{Z}/n\mathbb{Z}$ for each $G$-connected component of $X$. This comes from the fact that $\mathbb{Z}/n\mathbb{Z}$ acts trivially on $X$. So each fiber of $E$ gives a representation of $\mathbb{Z}/n\mathbb{Z}$, and because $E$ is a bundle, the representation is fixed over every $G$-connected component of $X$. Let $\mathbb{K}_{G_\alpha}(X)$ be the Grothendieck group of the monoid of isomorphism classes of $G_\alpha$-equivariant bundles over $X$. Then, if $X$ is $G$-connected, we have a splitting
\begin{equation}
\label{splitting} \mathbb{K}_{G_\alpha}(X) = \bigoplus_{V \in Irr(\mathbb{Z}/n\mathbb{Z})} \mathbb{K}_{G_\alpha,V}(X)
\end{equation}
where $Irr(\mathbb{Z}/n\mathbb{Z})$ is the set of isomorphism classes of irreducible complex representations of $\mathbb{Z}/n\mathbb{Z}$. Here $\mathbb{K}_{G_\alpha,V}(X)$ consists of the $G_\alpha$-equivariant vector bundles which restrict to a direct sum of copies of the representation $V$ in each fiber. The splitting comes from the fact that every complex representation of $\mathbb{Z}/n\mathbb{Z}$ splits into a direct sum of irreducibles.
So we have a splitting according to the irreducible complex characters of $\mathbb{Z}/n\mathbb{Z}$, and it is clear that $\,^\alpha K_G(X)$ is the direct summand corresponding to the representation given by taking the generator of $\mathbb{Z}/n\mathbb{Z}$ to multiplication by $e^{2 \pi i /n}$. Also, the splitting is functorial in the category of finite, proper $G$-CW complexes and $G$-maps via pullback. Given any $G$-map $f:X \longrightarrow Y$ between $G$-CW complexes $X$ and $Y$, $f$ is trivially $G_\alpha$-equivariant, and the induced map $f^*:\mathbb{K}_{G_\alpha}(Y) \longrightarrow \mathbb{K}_{G_\alpha}(X)$ will respect the splitting (\ref{splitting}). So $f^*$ will take $\mathbb{K}_{G_\alpha, V}(Y)$ to $\mathbb{K}_{G_\alpha,V}(X)$ for each $V \in Irr(\mathbb{Z}/n\mathbb{Z})$. Also, as noted for $\,^\alpha K_G(X)$, the Whitney sum of bundles will restrict to an operation on each summand in the splitting (\ref{splitting}). So each summand is an abelian group with respect to Whitney sum. Tensor product of bundles will not restrict to an operation on each summand, however.
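The observation that Whitney sum preserves each summand of the splitting while tensor product does not can be checked directly on the central action (an elementary sketch, with hypothetical ranks $2$ and $3$ chosen only for illustration):

```python
import cmath

n = 5
zeta = cmath.exp(2j * cmath.pi / n)

# On the summand corresponding to alpha-twisted bundles, the central
# generator of Z/nZ acts on every fiber by multiplication by zeta.
E_action = [zeta] * 2   # diagonal action on a rank-2 fiber
F_action = [zeta] * 3   # diagonal action on a rank-3 fiber

# Whitney sum: the generator still acts by zeta on every coordinate,
# so E + F stays in the same summand of the splitting.
sum_action = E_action + F_action
assert all(abs(c - zeta) < 1e-12 for c in sum_action)

# Tensor product: the generator acts by zeta * zeta = zeta^2, so
# E (x) F lands in a different summand.
tensor_action = [a * b for a in E_action for b in F_action]
assert all(abs(c - zeta ** 2) < 1e-12 for c in tensor_action)
print("Whitney sum preserves the summand; tensor product does not")
```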
Since $G_\alpha$ is a finite extension of $G$, it is again a discrete group, and $X$ is a proper $G_\alpha$-CW complex if it is a proper $G$-CW complex as seen in Theorem \ref{bundle}. Therefore we can describe the $G_\alpha$-equivariant $K$-theory of $X$ using the results of L\"{u}ck and Oliver \cite{oliver-completion-theorem-2001}. In particular, if we again let $\mathbb{K}_{G_{\alpha}}(X)$ denote the Grothendieck group of the monoid of isomorphism classes of $G_{\alpha}$-equivariant vector bundles on $X$, then we define the $G_\alpha$-equivariant $K$ groups of $X$ by
$$K_{G_\alpha}^{-n}(X) = Ker[\mathbb{K}_{G_\alpha}(X \times S^n) \stackrel{incl^*}{\longrightarrow} \mathbb{K}_{G_\alpha}(X)].$$
For a finite, proper $G$-CW pair $(X,A)$, define
$$K_{G_\alpha}^{-n}(X, A) = Ker[K_{G_\alpha}^{-n}(X \cup_A X) \stackrel{i^*_2}{\longrightarrow} K_{G_\alpha}^{-n}(X)].$$
L\"{u}ck and Oliver \cite{oliver-completion-theorem-2001} show that these definitions give a $\mathbb{Z}/2\mathbb{Z}$-graded equivariant cohomology theory and that Bott periodicity holds in this context.
We can now state the main theorem of the paper.
\begin{Thm}[Main Theorem] \label{main}
Given a discrete group $G$ and a normalized torsion cocycle $\alpha \in Z^2(G, U(1))$, the groups $\,^{\alpha}K_G^{-n}(X,A)$ extend to a $\mathbb{Z}/2\mathbb{Z}$-graded equivariant cohomology theory called $\alpha$-twisted equivariant $K$-theory on the category of finite proper $G$-CW pairs. If $\alpha$ is the trivial cocycle, then $\,^{\alpha}K^*_G(X,A)$ is just the equivariant $K$-theory defined by L\"{u}ck and Oliver \cite{oliver-completion-theorem-2001}. For two normalized torsion cocycles $\alpha$ and $\beta$, there is a twisted product
$$\,^{\alpha}K_G^*(X, A) \bigotimes \,^{\beta}K_G^*(X, A) \rightarrow \,^{\alpha + \beta}K_G^*(X, A)$$
which gives $\,^{\alpha}K_G^*(X,A)$ a natural graded $K^*_G(X,A)$-module structure. Also, for finite $H \subseteq G$, there are natural isomorphisms $\,^{\alpha}K_G^0(G/H) \cong R_{res^G_H \alpha}(H)$ and $\,^{\alpha}K_G^1(G/H) = 0$. If $G$ is finite, then this construction agrees with the construction of Adem and Ruan \cite{adem-ruan}.
\end{Thm}
The proof of Theorem \ref{main} will be the focus of the rest of this
section. The main issue will be proving the existence of the twisted product. The fact that we have an equivariant cohomology theory will follow from the fact that $\,^{\alpha}K_G(X)$ is a direct summand of $K_{G_{\alpha}}(X)$ and all the maps in \cite{oliver-completion-theorem-2001} respect the splitting in (\ref{splitting}).
First, we see from the definition that each $K^{-n}_{G_\alpha}(X)$ splits as in (\ref{splitting}). Because the splitting is functorial in the category of finite, proper $G$-CW complexes and $G$-maps, we see that for any $G$-CW complex $X$, we can define $\,^\alpha K^{-n}_G(X)$ as the direct summand of $K^{-n}_{G_\alpha}(X)$ corresponding to the representation which takes the generator of $\mathbb{Z}/n\mathbb{Z}$ to multiplication by $e^{2 \pi i /n}$. With this definition, the extension of the groups $\,^{\alpha} K_G(X)$ to a $\mathbb{Z}/2\mathbb{Z}$-graded $G$-equivariant cohomology theory follows from the fact that the groups $K^{-n}_{G_\alpha}(X)$ extend to a $\mathbb{Z}/2\mathbb{Z}$-graded cohomology theory on the category of finite, proper $G$-CW pairs \cite{oliver-completion-theorem-2001}.
The properties of excision, homotopy invariance, disjoint union, and the long exact and Mayer-Vietoris sequences for $\,^{\alpha}K^{-n}_G(X)$ all follow directly from the same properties of $K^{-n}_{G_\alpha}(X)$ proven in \cite{oliver-completion-theorem-2001} by restricting all the maps to the appropriate direct summands. All of the maps in the constructions which are induced from $G_\alpha$-equivariant maps automatically respect the splitting (\ref{splitting}), and so restrict to the direct summands. The only map which needs to be checked is the connecting homomorphism in the Mayer-Vietoris sequence
$$d^{-n}:K^{-n}_{G_\alpha}(A) \rightarrow K^{-n+1}_{G_\alpha}(X).$$
However, it is easily seen to respect the splitting by its definition. It is the restriction of the induced map
$$\mathbb{K}_{G_\alpha}((X_1 \times D^n) \bigcup_{A \times S^{n-1}} (X_2 \times D^n)) \stackrel{incl^*}{\longrightarrow} \mathbb{K}_{G_\alpha}(X \times S^n)$$
to a subgroup of $\mathbb{K}_{G_\alpha}((X_1 \times D^n) \bigcup_{A \times S^{n-1}} (X_2 \times D^n))$. Here $X_1$ and $X_2$ are part of a pushout diagram of finite proper $G$-CW complexes
$$\xymatrix{
A \ar[d]_{i_2} \ar[r]^{i_1} & X_1 \ar[d]^{j_1}\\
X_2 \ar[r]^{j_2} & X }
$$
where $i_1$ and $j_2$ are inclusions of subcomplexes.
We should say a little about induction. Let $(X,A)$ be a finite proper $G$-CW pair, and let $H$ be a subgroup of $G$. From \cite[Lemma 3.4]{oliver-completion-theorem-2001}, we have
$$i^{G_\alpha}_{H_\alpha}: K^{-n}_{H_\alpha}(X,A) \stackrel{\cong}{\longrightarrow} K^{-n}_{G_\alpha}(G_\alpha \times_{H_\alpha}(X,A)).$$
Here we are using the abuse of notation that $H_\alpha$ corresponds to the extension of $H$ by $\mathbb{Z}/n\mathbb{Z}$ which comes from restricting the cocycle $\alpha$ to the subgroup $H$. This isomorphism respects the splittings of $K^{-n}_{H_\alpha}(X,A)$ and $K^{-n}_{G_\alpha}(G_\alpha \times_{H_\alpha} (X,A))$. This follows from the construction of the isomorphism: given an $H_\alpha$-equivariant vector bundle $E$ and the corresponding $G_\alpha$-equivariant vector bundle $G_\alpha \times_{H_\alpha} E$, both clearly restrict to the same representation of $\mathbb{Z}/n\mathbb{Z}$ in the fibers over the points $x$ and $G_\alpha \times_{H_\alpha} x$. So the isomorphism will restrict to the appropriate summand, and induction will hold for $\,^{\alpha}K^{-n}_G(X,A)$.
The Bott periodicity map will also respect the splitting (\ref{splitting}), since it involves the external tensor product with a bundle over $S^2$. So for any finite proper $G$-CW pair $(X,A)$ and normalized torsion cocycle $\alpha$ of $G$, we can define $\,^{\alpha} K^0_G(X,A)$ as the direct summand of $K^0_{G_\alpha}(X,A)$ corresponding to the representation of $\mathbb{Z}/n\mathbb{Z}$ which takes the generator to multiplication by $e^{2 \pi i/n}$. We can define $\,^\alpha K^1_G(X,A)$ as the corresponding direct summand of $K^1_{G_\alpha}(X,A)$. This gives a $\mathbb{Z}/2\mathbb{Z}$-graded equivariant cohomology theory on the category of finite proper $G$-CW pairs which has the following exact rectangle
$$\xymatrix{
\,^{\alpha}K^0_G(X,A) \ar[r] & \,^{\alpha}K^0_G(X) \ar[r] &
\,^{\alpha}K^0_G(A) \ar[d]^{\delta^0}\\
\,^{\alpha}K^1_G(A) \ar[u]^{\delta^{-1}} & \ar[l] \,^{\alpha}K^1_G(X) & \ar[l] \,^{\alpha}K^1_G(X,A) }
$$
for any finite proper $G$-CW pair $(X,A)$. The rectangle again just comes from the rectangle we get from the $G_\alpha$-equivariant $K$-theory of the pair $(X,A)$ \cite[Theorem 3.2]{oliver-completion-theorem-2001}.
The only part of Theorem \ref{main} left to prove is the existence of an associative, graded commutative twisted product. This will come from the graded commutative product constructed by L\"{u}ck and Oliver in \cite{oliver-completion-theorem-2001}. Given two normalized torsion cocycles $\alpha$ and $\beta$ with values in $\mathbb{Z}/n\mathbb{Z}$ and $\mathbb{Z}/m\mathbb{Z}$ respectively, we can form an exterior product $\alpha \oplus \beta$, which is a cocycle with values in $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$. This defines an extension of $G$ by $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$ which we can call $G_{\alpha \oplus \beta}$.
There are obvious projection maps $p_\alpha: G_{\alpha \oplus \beta} \rightarrow G_\alpha$ and $p_\beta: G_{\alpha \oplus \beta} \rightarrow G_\beta$, and these give rise to maps in equivariant $K$-theory
$$\phi_\alpha:K_{G_\alpha}(X) \rightarrow K_{G_{\alpha \oplus \beta}}(X), \,\,\, \phi_\beta:K_{G_\beta}(X) \rightarrow K_{G_{\alpha \oplus \beta}}(X)$$
for any $G$-space $X$. Note that if $E$ is a $G_{\alpha}$-bundle, then the central subgroup $1 \times \mathbb{Z}/m\mathbb{Z} \subset G_{\alpha \oplus \beta}$ acts trivially on $\phi_\alpha(E)$, and similarly if $F$ is a $G_\beta$-bundle, then the central subgroup $\mathbb{Z}/n\mathbb{Z} \times 1 \subset G_{\alpha \oplus \beta}$ acts trivially on $\phi_\beta(F)$.
We can form the composition
$$\xymatrix@1{
K_{G_\alpha}(X) \bigotimes K_{G_\beta}(X) \ar[r]^-{\phi_\alpha \otimes \phi_\beta} & K_{G_{\alpha \oplus \beta}}(X) \bigotimes K_{G_{\alpha \oplus \beta}}(X) \ar[r]^-{\mu} & K_{G_{\alpha \oplus \beta}}(X) \,\,\,\,\,\,\,\,\,\,\,(3)
} \label{comp}$$
where $\mu$ is the product defined in \cite{oliver-completion-theorem-2001}. This will give us the associative graded commutative twisted product we want. Let us call this composition $\eta$.
Notice that $K_{G_{\alpha \oplus \beta}}(X)$ has a splitting just as in (\ref{splitting}). In this case, the direct summands will correspond to irreducible characters of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$. However, any irreducible character of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$ can be seen as the product of an irreducible character of $\mathbb{Z}/n\mathbb{Z}$ with an irreducible character of $\mathbb{Z}/m\mathbb{Z}$.
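That the irreducible characters of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$ are exactly the products of irreducible characters of the two factors can be verified numerically for small $n$ and $m$ (an illustrative check only, with sample values $n = 3$, $m = 4$):

```python
import cmath

n, m = 3, 4
zn = cmath.exp(2j * cmath.pi / n)
zm = cmath.exp(2j * cmath.pi / m)

# chi_{a,b}(s, t) = zn^(a s) * zm^(b t): the product of an irreducible
# character of Z/nZ with an irreducible character of Z/mZ.
def chi(a, b):
    return [zn ** (a * s) * zm ** (b * t)
            for s in range(n) for t in range(m)]

chars = [chi(a, b) for a in range(n) for b in range(m)]

# These n*m functions are pairwise orthonormal under the standard
# inner product on class functions, hence they are exactly the
# irreducible characters of the abelian group Z/nZ x Z/mZ.
def inner(u, v):
    return sum(x * y.conjugate() for x, y in zip(u, v)) / (n * m)

ok = all(abs(inner(chars[i], chars[j]) - (1 if i == j else 0)) < 1e-9
         for i in range(n * m) for j in range(n * m))
print(len(chars), ok)  # 12 True
```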
If we restrict $\eta$ to $\,^\alpha K_G(X) \bigotimes \,^\beta K_G(X)$, the image lands in the direct summand of $K_{G_{\alpha \oplus \beta}}(X)$ which corresponds to the representation of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$ which takes the generator of $\mathbb{Z}/n\mathbb{Z} \times 1$ to multiplication by $e^{2 \pi i / n}$ and the generator of $1 \times \mathbb{Z}/m\mathbb{Z}$ to multiplication by $e^{2 \pi i/m}$. We will denote this image by $\,^{\alpha,\beta} K_{G_{\alpha \oplus \beta}}(X)$.
\begin{Lem}
Let $\alpha, \, \beta \in Z^2(G, U(1))$ be two normalized torsion cocycles of the discrete group $G$. For any finite $G$-CW complex $X$, there is a canonical map
$$\varphi:\mathbb{K}_{G_{\alpha + \beta}}(X) \longrightarrow \mathbb{K}_{G_{\alpha \oplus \beta}}(X)$$
such that the restriction
$$\overline{\varphi}: \,^{\alpha + \beta} K_G(X) \longrightarrow \,^{\alpha,\beta} K_{G_{\alpha \oplus \beta}}(X)$$
is an isomorphism. Here $\alpha + \beta \in Z^2(G,U(1))$ represents the normalized torsion cocycle which is the sum of the cocycles $\alpha$ and $\beta$.
\end{Lem}
\noindent \emph{Proof. }
Since $\alpha$ and $\beta$ are normalized torsion cocycles, let $\alpha$ take values in $\mathbb{Z}/n\mathbb{Z} \subset U(1)$ and let $\beta$ take values in $\mathbb{Z}/m\mathbb{Z} \subset U(1)$. Then $\alpha + \beta$ will be normalized as well, and it will take values in $\mathbb{Z}/k\mathbb{Z}$, where $k$ is the least common multiple of $n$ and $m$; indeed, $\mathbb{Z}/k\mathbb{Z}$ is the image of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$ under the multiplication map
$$U(1) \times U(1) \rightarrow U(1).$$
There is an obvious projection map $p:G_{\alpha \oplus \beta} \rightarrow G_{\alpha + \beta}$ given by $(g,a,b) \mapsto (g,ab)$. Let $\varphi$ be the map induced by $p$ on equivariant $K$-theory. Under this construction, if $E$ is an $(\alpha+\beta)$-twisted $G$-bundle, then $\varphi(E)$ will be the equivariant $G_{\alpha \oplus \beta}$-bundle with the same total space $E$, where $G_{\alpha \oplus \beta}$ acts on $\varphi(E)$ by the projection $p$ and the action of $G_{\alpha+\beta}$ on $E$. This action shows that $\varphi(E)$ is an element of $\,^{\alpha,\beta}K_{G_{\alpha \oplus \beta}}(X)$ from the definition, and it makes sense to talk about the restriction $\overline{\varphi}$. It is clear from this construction that at the level of bundles, $\overline{\varphi}$ will take two non-isomorphic $\alpha+\beta$-twisted bundles $E$ and $F$ to non-isomorphic $G_{\alpha \oplus \beta}$-equivariant bundles.
It remains to be shown that $\overline{\varphi}$ is a surjection. However, this is simple to see at the level of bundles. If we take a $G_{\alpha \oplus \beta}$-equivariant bundle $E$ with the generator of the central $\mathbb{Z}/n\mathbb{Z} \times 1$ (resp.\ the central $1 \times \mathbb{Z}/m\mathbb{Z}$) acting by multiplication by $e^{2 \pi i/n}$ (resp.\ $e^{2 \pi i/m}$) on the fibers, then we can define a bundle $F$ with the same total space and an action of $G_{\alpha + \beta}$ as follows. The generator of the central $\mathbb{Z}/k\mathbb{Z}$ acts by multiplication by $e^{2 \pi i/k}$ on the fibers. Given an element $g \in G$, we will let $(g,1) \in G_{\alpha+\beta}$ act on $F$ the same way $(g,1,1) \in G_{\alpha \oplus \beta}$ acts on $E$. This action is well defined, and it makes $F$ into an $(\alpha + \beta)$-twisted $G$-bundle. This construction means that every $G_{\alpha \oplus \beta}$-bundle with the given action corresponds to an $(\alpha+\beta)$-twisted $G$-bundle, and so $\overline{\varphi}$ is an isomorphism.
\hfill \qed
\vspace{0.2cm}
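The identification of the image of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$ under the multiplication map, used in the proof above, can be checked numerically (an illustration for the sample values $n = 4$, $m = 6$, working with exponents of roots of unity):

```python
from math import gcd

n, m = 4, 6
L = n * m // gcd(n, m)  # lcm(n, m)

# The product of an n-th and an m-th root of unity is
#   e^{2*pi*i*(a/n + b/m)} = e^{2*pi*i*k/L},  k = a*(L/n) + b*(L/m) mod L.
image = {(a * (L // n) + b * (L // m)) % L
         for a in range(n) for b in range(m)}

# The image is the full group of L-th roots of unity, so the cocycle
# alpha + beta takes values in Z/lcm(n,m)Z.
print(sorted(image) == list(range(L)))  # True
```

Since $\gcd(L/n, L/m) = 1$ in general, the same conclusion holds for any $n$ and $m$.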
Using the above lemma, we can construct the product
$$\overline{\varphi}^{-1} \circ \eta: \,^{\alpha} K_G(X) \bigotimes \,^{\beta}K_G(X) \longrightarrow \,^{\alpha + \beta}K_G(X).$$
This product will extend to a product
$$\tau: \,^{\alpha}K^i_G(X) \bigotimes \,^{\beta}K^j_G(X) \longrightarrow \,^{\alpha + \beta}K^{i+j}_G(X)$$
in the same way that the product in \cite{oliver-completion-theorem-2001} extends. In fact, using the splitting (\ref{splitting})
$$K_{G_{\alpha \oplus \beta}}(X) = \bigoplus_{V \in Irr(\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z})} K_{G_{\alpha \oplus \beta}, V}(X)$$
for the $G_{\alpha \oplus \beta}$-equivariant $K$-theory of a finite $G$-CW complex $X$, the product
$$K_{G_{\alpha \oplus \beta}}(X) \bigotimes K_{G_{\alpha \oplus \beta}}(X) \longrightarrow K_{G_{\alpha \oplus \beta}}(X)$$
will restrict to the direct summands to give a product
$$K_{G_{\alpha \oplus \beta},V}(X) \bigotimes K_{G_{\alpha \oplus \beta},W}(X) \longrightarrow K_{G_{\alpha \oplus \beta},V \otimes W}(X)$$
where $V$ and $W$ are irreducible complex $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z}$-modules. Therefore, for a finite $G$-CW complex $X$, the twisted product is a restriction of the standard product for the $G_{\alpha \oplus \beta}$-equivariant $K$-theory of $X$. The graded commutativity comes from the fact that $\alpha+\beta = \beta+\alpha$ as cocycles and the graded commutativity of the product in \cite{oliver-completion-theorem-2001}.
\section{Twisted Bredon Cohomology and the Spectral Sequence}
This section will be devoted to the construction of a coefficient system for Bredon cohomology on the category of finite proper $G$-CW complexes. This gives \emph{twisted} Bredon cohomology which is related to twisted equivariant $K$-theory by a spectral sequence.
\subsection{Twisted Bredon Cohomology}
Bredon cohomology was first introduced by Bredon \cite{bredon;equivariant-cohomology-theories} as a way to formulate equivariant obstruction theory. Let $\mathcal{O}_G$ be the \emph{orbit category} of $G$: the category with one object $G/H$ for each subgroup $H \subseteq G$, and with morphisms all $G$-maps between any two objects. Notice that there are morphisms $\phi:G/H \rightarrow G/K$ if and only if $H$ is subconjugate to $K$, since if $\phi(eH) = gK$, then $g^{-1}Hg \subseteq K$.
A \emph{coefficient system} for Bredon cohomology is a contravariant functor $M:\mathcal{O}_G^{op} \rightarrow \mathcal{A}b$ into the category of abelian groups. Given a $G$-CW complex $X$, there are some standard coefficient systems given by
$$\underline{C}_n(X)(G/H) = H_n( (X^{(n)})^H, (X^{(n-1)})^H; \mathbb{Z}).$$
The connecting homomorphisms of the triple $((X^{(n)})^H, (X^{(n-1)})^H, (X^{(n-2)})^H)$ give rise to a boundary map
$$ \underline{ \partial}: \underline{C}_n(X) \rightarrow \underline{C}_{n-1}(X)$$
with $\underline{\partial}^2 = 0$. For two coefficient systems $M$ and $M'$, we denote by $Hom_{\mathcal{C}_G}(M, M')$ the abelian group of natural transformations between the two systems. This is an abelian group since the category $\mathcal{C}_G$ of coefficient systems is abelian \cite{may;equivariant-homotopy-cohomology}.
Now for any coefficient system $M$ and any $G$-CW complex $X$, we can define a cochain complex of abelian groups $C^*_G(X;M)$ as follows:
$$C^n_G(X;M) = Hom_{\mathcal{C}_G}(\underline{C}_n(X), M), \mbox{ with } \delta=Hom_{\mathcal{C}_G}(\underline{\partial}, id).$$
This makes $C^*_G(X;M)$ a cochain complex of abelian groups. The cohomology of this complex is the \emph{Bredon cohomology} of $X$ with coefficient system $M$, denoted $H^*_G(X;M)$.
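For orientation, here is the standard computation of Bredon cohomology on a single orbit (a routine check, stated under the assumption that $H$ is finite so that $G/H$ is a finite proper $G$-CW complex): the degree-zero chain system on $G/H$ is free, so the cohomology simply evaluates the coefficient system.

```latex
% For X = G/H a single orbit, the fixed-point set (G/H)^K is in
% bijection with the G-maps G/K -> G/H, so
%   \underline{C}_0(G/H) \cong \mathbb{Z}[Hom_{\mathcal{O}_G}(-, G/H)]
% is a free coefficient system, while \underline{C}_n(G/H) = 0 for n > 0.
% The Yoneda lemma then gives
H^0_G(G/H; M) \;\cong\; Hom_{\mathcal{C}_G}(\underline{C}_0(G/H), M)
\;\cong\; M(G/H),
\qquad
H^n_G(G/H; M) = 0 \quad (n > 0).
```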
Now, let $G$ be an arbitrary discrete group and $\alpha \in Z^2(G, U(1))$ be a normalized torsion cocycle. As before, to avoid cumbersome notation we will use $\alpha$ to denote both the cocycle and its restriction to any subgroup $H \subseteq G$ if it is clear from the context which is meant. We can define a coefficient system for Bredon cohomology of finite, proper $G$-CW complexes by using the twisted representation group functor $\mathcal{R_\alpha}(-)$. Define $\mathcal{R_\alpha}$ on objects by $\mathcal{R_\alpha}(G/H) = R_\alpha(H)$. Note that this is well defined if $H$ is a finite subgroup of $G$. Since we are considering proper actions of discrete groups, the only orbits we are concerned with are of the form $G/H$ with $H$ finite. Therefore, this coefficient system is well defined in our case.
The coefficient system is defined on the morphisms of $\mathcal{O}_G$ as
follows. If $H \subseteq K$ are finite subgroups of $G$, then there is an
obvious map $\phi: G/H \rightarrow G/K$ in the orbit category. We define
$\mathcal{R}_\alpha(\phi)$ to be the map $R_{\alpha}(K) \rightarrow
R_{\alpha}(H)$ induced by restricting every representation to the
subgroup. A map in the orbit category induced by $gHg^{-1} \subseteq K$ for
some $g \in G$ is given by $g'H \rightarrow g'gK$, and it induces a map
$R_{\alpha}(K) \rightarrow R_{\alpha}(H)$ by restricting every
representation to the subgroup $gHg^{-1}$ and then using the isomorphism
induced by conjugation.
\begin{Def} The $\alpha$-twisted Bredon cohomology of a finite, proper
$G$-CW complex $X$ is defined to be $H^*_G(X; \mathcal{R_\alpha})$, where $\alpha \in Z^2(G, U(1))$ is a normalized torsion cocycle and $\mathcal{R_\alpha}$
is the coefficient system described above.
\end{Def}
It is worth mentioning that the name $\alpha$-twisted Bredon cohomology is a bit misleading. This is {\bf not} Bredon cohomology with twisted (local) coefficients. There is a theory of Bredon cohomology with local coefficients which generalizes the nonequivariant case \cite{mukherjee-pandey}. We use the name $\alpha$-twisted Bredon cohomology to emphasize the connection with $\alpha$-twisted equivariant $K$-theory which will be explored more deeply later.
Many common coefficient systems for Bredon cohomology extend to a quotient of $\mathcal{O}_G$ denoted by $Sub(G)$. $Sub(G)$ is the category of subgroups of $G$ and monomorphisms induced by subconjugation. $Sub(G)$ is a quotient of $\mathcal{O}_G$ because all of the morphisms $G/H \rightarrow G/H$ coming from conjugation by an element of $C_G(H)$ are collapsed to the identity. So for a coefficient system to extend to $Sub(G)$, it must take any map $G/H \rightarrow G/H$ induced by conjugation by $g \in C_G(H)$ to the identity. It is clear that the coefficient system $\mathcal{R_\alpha}$ extends to $Sub(G)$, and so it defines a contravariant functor from $Sub(G)$ to $\mathcal{A}b$.
For the following important result we need to use induced modules.
\begin{Thm} \label{mackey}
The coefficient system $\mathcal{R_\alpha}$ has the structure of a Mackey functor to the category of $\mathbb{Z}$-modules.
\end{Thm}
Before proving this, recall the definition of a Mackey functor. We will use the notation of \cite{luck}. Let $M:FGIN\mathcal{J}_G \rightarrow R-MOD$ be a \emph{bifunctor}. This means it is a pair
$(M_*, M^*)$ consisting of a covariant functor $M_*$ and a contravariant functor $M^*$ from $FGIN\mathcal{J}_G$ to $R-MOD$ which agree on objects. Here $FGIN\mathcal{J}_G$ is the category of finite subgroups of $G$ and injective homomorphisms induced by (sub)conjugation. For an injective group homomorphism $f:H \rightarrow K$ we will denote the map $M_*(f)$ by $ind_f$ and the map $M^*(f)$ by $res_f$. If $f$ is an inclusion of groups we will write $ind^{K}_H = ind_f$ and $res^H_K = res_f$. Such a bifunctor $M$ is a \emph{Mackey functor} if
\begin{itemize}
\item[(1)] For an inner automorphism $c(g):H \rightarrow H$, we have $M_*(c(g)) = id:M(H) \rightarrow M(H)$;
\item[(2)] For an isomorphism of groups $f:K \stackrel{\cong}{\longrightarrow} H$, the composites $res_f \circ ind_f$ and $ind_f \circ res_f$ are the identity;
\item[(3)] Double coset formula \\
We have for two subgroups $H, K \subseteq L$
$$res^K_L \circ ind^L_H = \sum_{KgH \in K \backslash L/H} ind_{c(g):H \cap g^{-1}Kg \rightarrow K} \circ res^{H \cap g^{-1}Kg}_H$$
where $c(g)$ is conjugation with $g$.
\end{itemize}
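The double coset formula can be sanity-checked at the level of dimensions in a small example (illustrative only, and not part of any proof; here $H = K = \langle (1\,2) \rangle$ inside $L = S_3$, with $\dim V = 1$):

```python
from itertools import permutations

# Permutations of {0, 1, 2} as tuples; (p o q)(i) = p[q[i]].
def comp(p, q):
    return tuple(p[i] for i in q)

def inv(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

L = list(permutations(range(3)))      # the group S_3
t = (1, 0, 2)                         # the transposition (1 2)
H = K = {(0, 1, 2), t}                # H = K = <(1 2)>

# The double cosets K g H partition L.
double_cosets = {frozenset(comp(comp(k, g), h) for k in K for h in H)
                 for g in L}

# Dimension count for the double coset formula with dim V = 1:
# dim res^K_L ind^L_H V = [L : H], while each double coset K g H
# contributes [K : K intersect gHg^-1] on the right-hand side.
lhs = len(L) // len(H)
rhs = 0
for C in double_cosets:
    g = next(iter(C))  # the count is independent of the representative
    gHginv = {comp(comp(g, h), inv(g)) for h in H}
    rhs += len(K) // len(K & gHginv)

print(lhs, rhs)  # 3 3
```

Here there are two double cosets, contributing $1 + 2 = 3 = [S_3 : H]$, as the formula predicts.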
\noindent \emph{Proof of Theorem \ref{mackey}.}
First, recall that when considering $\alpha$-twisted representations of a finite group $G$ it suffices to consider $\mathbb{C}^{\alpha}G$-modules. We will use the notation of \cite{luck} as above. We make $\mathcal{R_\alpha}$ into a bifunctor in the following way. Let its value on objects be the obvious one, $\mathcal{R_\alpha}(H) = R_\alpha(H)$. For the contravariant functor, let $(\mathcal{R_\alpha})^*(H \rightarrow K) = R_\alpha(K) \rightarrow R_\alpha(H)$ be the usual restriction induced by inclusion (and possibly conjugation). For the covariant functor, let $(\mathcal{R_\alpha})_*(H \rightarrow K)$ be the map taking the $\mathbb{C}^{\alpha}H$-module $V$ to the $\mathbb{C}^{\alpha}K$-module $\mathbb{C}^{\alpha}K \otimes_{\mathbb{C}^{\alpha}H} V^{(g)}$ where $gHg^{-1} \subseteq K$. Now the first two properties follow from the definitions, and the third follows from the Subgroup Theorem (Theorem \ref{sub}). So $\mathcal{R_\alpha}$ is a Mackey functor.
\hfill \qed
\vspace{0.2cm}
Since $R_\alpha(H)$ is not a ring, we cannot expect to have a multiplicative structure on $H^*_G(X, \mathcal{R_\alpha})$. However, we do have a sort of twisted multiplication
$$R_\alpha(H) \otimes R_\beta(H) \rightarrow R_{\alpha + \beta}(H)$$
as noted previously. This gives hope that there might be an analogous
product structure on twisted Bredon cohomology, and in fact there is.
\begin{Thm} There is a pairing relating the $\alpha$-twisted and
$\beta$-twisted Bredon cohomologies of a proper $G$-CW complex to its
($\alpha + \beta$)-twisted Bredon cohomology. The pairing takes the form
$$\cup : H^n_G(X,\mathcal{R_\alpha}) \otimes H^m_G(X,\mathcal{R_\beta})
\rightarrow H^{n+m}_G(X,\mathcal{R_{\alpha + \beta}})$$
for $\alpha, \beta \in Z^2(G, U(1))$. This pairing is natural in $X$.
\end{Thm}
The proof of this theorem is exactly analogous to the proof of the existence of the cup product in singular cohomology. Also, since $\alpha + \beta = \beta + \alpha$ as cocycles, it is easy to show that the twisted product is graded commutative, i.e.
$$s \cup t = (-1)^{(nm)}t \cup s \,\, \mbox{ for } s \in H^n_G(X, \mathcal{R_\alpha}), t \in H^m_G(X,\mathcal{R_\beta}).$$
It is clear that if $\alpha$ is the trivial cocycle, we recover Bredon cohomology with coefficients in the representation ring functor; for simplicity, call it untwisted Bredon cohomology. Notice that the product just defined makes $\alpha$-twisted Bredon cohomology a module over untwisted Bredon cohomology in a natural way. This is completely analogous to what happened with $\alpha$-twisted equivariant $K$-theory, which is not surprising, since we can connect the two theories by a twisted equivariant Atiyah-Hirzebruch spectral sequence.
\subsection{The Spectral Sequence}
Next we see that there is a spectral sequence connecting the $\alpha$-twisted Bredon cohomology and the $\alpha$-twisted equivariant $K$-theory of finite proper $G$-CW complexes. This spectral sequence is a special case of the more general spectral sequence constructed by Davis and L\"{u}ck \cite{davis-l*;spaces-category-spaces-assembly-isomorphism-conjectures}.
\begin{Thm}
Let $X$ be a finite proper $G$-CW complex for a discrete group $G$,
and let $\alpha \in Z^2(G, U(1))$ be a normalized torsion cocycle.
Then there is a spectral sequence with
$$E_2^{p,q} = \left\{ \begin{array}{ll} H^p_G(X,\mathcal{R}_\alpha) & \mbox{ if } q \mbox{ is even} \\
0 & \mbox{ if } q \mbox{ is odd} \\
\end{array} \right.$$
so that $E^{p,q}_{\infty} \Longrightarrow
\,^{\alpha}K_G^{p+q}(X)$.
\end{Thm}
The construction of the spectral sequence is completely analogous to the construction of the standard Atiyah-Hirzebruch spectral sequence \cite{atiyah-hirzebruch;vector-bundles-homogeneous-spaces} using the skeletal filtration of $X$. In fact, this spectral sequence is a direct summand of the spectral sequence in \cite{l*-oliver;chern-proper-equivariant-characters}. We can see this using the fact that $R_\alpha(G)$ is a direct summand of $R(G_\alpha)$ and $\,^\alpha K^*_G(X)$ is a direct summand of $K^*_{G_\alpha}(X)$ because of the splitting (\ref{splitting}). The maps in the spectral sequence considered in \cite{l*-oliver;chern-proper-equivariant-characters} clearly respect the splitting since they are induced by the skeletal filtration, and so our spectral sequence comes out as a direct summand.
Note that the $E_2$ page of the spectral sequence for $\alpha$-twisted equivariant $K$-theory can be seen as a module over the $E_2$ page of the spectral sequence for untwisted equivariant $K$-theory using the product described in Section 4.1. Also, it has been shown \cite{l*-oliver;chern-proper-equivariant-characters} that the spectral sequence for untwisted equivariant $K$-theory is a multiplicative spectral sequence. This can be used to make the spectral sequence for $\alpha$-twisted $K$-theory into a module over the spectral sequence for untwisted equivariant $K$-theory since all of the differentials respect the module structure.
We will use a result of L\"{u}ck to show that this spectral sequence collapses modulo torsion. In \cite{luck}, he constructs a Chern character for any proper equivariant cohomology theory which maps from the theory to its associated Bredon cohomology. In our case, $\alpha$-twisted equivariant $K$-theory is the proper equivariant cohomology theory and its associated Bredon cohomology is $\alpha$-twisted Bredon cohomology. In fact, since our spectral sequence is a summand of the spectral sequence from \cite{l*-oliver;chern-proper-equivariant-characters} for $G_\alpha$-equivariant $K$-theory of $X$, the Chern character for $\alpha$-twisted $K$-theory can be seen as a restriction of the Chern character for $G_\alpha$-equivariant $K$-theory.
To apply \cite[Theorem 5.5]{luck} which gives that the Chern character is an isomorphism, it suffices to show that $\mathcal{R}_\alpha(-) \otimes \mathbb{Q}$ has a Mackey structure. By Theorem \ref{mackey}, $R_\alpha(-)$ has a Mackey structure as a functor into $\mathbb{Z}-MOD$. Since $\mathbb{Q}$ is a flat $\mathbb{Z}$-module, tensoring with $\mathbb{Q}$ preserves exact sequences; in particular it preserves isomorphisms. This means that the three conditions of a Mackey functor are still preserved. So $R_\alpha(-) \otimes \mathbb{Q}$ does have a Mackey structure.
After tensoring with $\mathbb{Q}$, the equivariant Chern character gives an isomorphism between the $E_2$ page and the $E_\infty$ page of the spectral sequence. This tells us that the spectral sequence collapses at the $E_2$ page after tensoring with $\mathbb{Q}$.
\section{Connections to other constructions}
There has recently been a great deal of activity in the study of twisted equivariant $K$-theory, motivated by orbifolds \cite{adem-ruan}, \cite{bunke-schick;T-duality}, \cite{tu-xu-lg;differentiable-stacks}. Generally, as in \cite{atiyah-segal;twisted}, if $X$ is a $G$-space, the twistings considered are classified by $H^3(X \times_G EG;\mathbb{Z})$. In fact, the situation is a little more subtle than that, because the twistings actually depend on cocycles, not cohomology classes. Two cohomologous cocycles do give isomorphic twisted equivariant $K$-groups; however, the isomorphism is not canonical: it depends on the coboundary connecting the two cocycles.
The twistings discussed here fit into the more general setting in a very natural way. Given a discrete group $G$ and a normalized torsion cocycle $\alpha \in Z^2(G,U(1))$, $\alpha$ defines a cohomology class $[\alpha] \in H^2(G,U(1))$. The short exact sequence of coefficients
$$ 1 \rightarrow \mathbb{Z} \rightarrow \mathbb{R} \rightarrow U(1) \rightarrow 1$$
gives rise to a long exact sequence in the cohomology of $G$. In particular, it gives a connecting homomorphism
$$
H^2(G,U(1)) \stackrel{\delta}{\longrightarrow} H^3(G,\mathbb{Z}).
$$
Now given a finite $G$-CW complex $X$, we have the fibration
$$\xymatrix@1{
X \ar[r] & X \times_G EG \ar[r]^p & BG \\
}$$
which induces a map $p^*:H^3(BG;\mathbb{Z}) \rightarrow H^3(X \times_G EG; \mathbb{Z})$. So twistings of the type we have discussed are a subclass of the more general twistings.
Morally, the $G$-equivariant $K$-theory of a space $X$ is a combination of the $K$-theory of the space, $K(X)$, and the complex representation ring of the group, $R(G)$. General twistings involve both pieces; however, the twistings we have considered depend only on the group. They can be interpreted as leaving $K(X)$ alone and twisting only the ingredient $R(G)$.
Another approach to twisted $K$-theory for orbifolds comes from the groupoid description of an orbifold. In \cite{tu-xu-lg;differentiable-stacks}, given an orbifold groupoid $\mathfrak{X}$, the twisted orbifold $K$-theory of $\mathfrak{X}$ is described in terms of the $K$-theory of a particular $C^*$-algebra. In the case where the orbifold is a quotient orbifold given by a proper action of a Lie group $G$ on a $G$-compact space $X$ and the twisting is trivial, this restricts to the $G$-equivariant $K$-theory of $X$. For quotient orbifolds the twistings are again classified by $H^3(X \times_G EG; \mathbb{Z})$, and twisted orbifold $K$-theory is just the twisted equivariant $K$-theory of $X$. One important question raised in \cite{tu-xu-lg;differentiable-stacks} is: when can the twisted orbifold $K$-theory be realized in terms of finite rank bundles? In \cite{tu-xu-lg;differentiable-stacks}, they prove that if the twisting is not torsion, then it is impossible to describe twisted orbifold $K$-theory using only finite rank bundles. However, if the twisting is torsion, it is unknown whether this is possible.
They prove a general result in this direction \cite[Theorem 5.28]{tu-xu-lg;differentiable-stacks}: for a compact quotient orbifold and a torsion twisting, if there is a single finite rank twisted bundle, then the twisted orbifold $K$-theory is representable in terms of finite rank bundles. They conjecture that for any torsion twisting there always exists at least one finite rank twisted bundle. Our construction gives a partial result toward their conjecture: if $G$ is a discrete group and $X$ a finite proper $G$-CW complex, then for any torsion twisting $[\alpha] \in H^3(X \times_G EG; \mathbb{Z})$ with $[\alpha]$ in the image of $p^* \circ \delta$, there is a finite rank $\alpha$-twisted bundle (Theorem \ref{bundle}).
Tu and Xu have also constructed a Chern character for twisted orbifold $K$-theory which lands in a twisted version of de Rham cohomology \cite{tu-xu;chern-orbifolds}. Since their definition of twisted $K$-theory involves operator algebras, their Chern character factors through the periodic cyclic homology groups of the operator algebra. In the case of a discrete group acting properly and a torsion twisting coming from the group, the Chern character discussed in this paper gives an alternative method of computation for the twisted $K$-theory groups.
\section{Introduction}
In 1986, Bednorz and M\"uller unveiled a new class of ceramic superconducting materials composed of a layered structure of a two-dimensional CuO$_2$ plane\cite{BM}.
Subsequently, the superconducting transition temperature ($T_{\rm c}$) in cuprates rose to $\sim$133 K in HgBa$_2$Ca$_2$Cu$_3$O$_{8+\delta}$ (Hg1223)\cite{Schilling}, reaching $T_{\rm c}$=164 K under pressure~\cite{Gao,Chu,Ihara}.
The discovery of a remarkably high $T_{\rm c}$ in the cuprate family has not only opened many possibilities for potential technical applications, but has also provided a challenging research subject for condensed-matter physics and material sciences.
Despite more than a quarter of a century of research, there is still no universally accepted theory for the mechanism of high-$T_{\rm c}$ superconductivity (HTSC) in cuprates.
The main controversy concerns the origin of the attractive force for the formation of Cooper pairs, which leads to such a remarkably high SC transition temperature.
In conventional superconductors with a relatively low $T_{\rm c}$, the phonon-mediated electron-electron interaction is the attractive force for the formation of Cooper pairs in the Bardeen-Cooper-Schrieffer (BCS) theory, which was established half a century ago~\cite{BCS}.
The HTSC in cuprates emerges on a layered structure of a two dimensional CuO$_2$ lattice when an antiferromagnetic Mott insulator is doped with hole carriers.
The strong hybridization of Cu-$3d_{x^2-y^2}$ and O-$2p\sigma$ orbitals brings a large superexchange interaction $J_{\rm in}\sim 0.12$ eV ($\sim$1300 K) in the CuO$_2$ plane for nondoped cuprates~\cite{JinLSCO,JinYBCO1,JinYBCO2,TokuraJ}.
Therefore, an intimate relationship between antiferromagnetism (AFM) and HTSC is believed to be a key for understanding the origin of the remarkably high SC transition in cuprate superconductors\cite{Anderson,Anderson1,Anderson2,Chen,Giamarchi,Inaba,Lee1,Zhang,Himeda,Kotliar,Paramekanti1,Lee2,Demler,Yamase1,Yamase2,Paramekanti2,Shih1,Shih2,Senechal,Capone,Ogata,Pathak,Moriya}.
Experimentally, however, in prototype high-$T_{\rm c}$ cuprates La$_{2-x}$Sr$_x$CuO$_4$ (LSCO), AFM and HTSC are separated by the spin-glass phase~\cite{Keimer}.
In LSCO, the chemical substitution of Sr$^{2+}$ for La$^{3+}$ is necessary to increase the planar CuO$_2$ hole density, but it simultaneously introduces local disorder into the nearly two-dimensional CuO$_2$ planes and buckling of the CuO$_{6}$ octahedral units, which cause the doped hole carriers to localize via the Anderson localization mechanism.
As a result, the intrinsic electronic characteristics of LSCO are inevitably masked, so that AFM and HTSC are separated by the spin-glass phase.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=15cm]{81002Fig1.eps}
\end{center}
\caption{(Color online) Crystal structure of $n$-layered cuprates of (a) $M$12($n$-1)$n$ and (b) 02($n$-1)$n$F. Copper oxides with more than three layers comprise inequivalent types of CuO$_2$ layers: an outer plane (OP) in a five-fold pyramidal coordination and an inner plane (IP) in a four-fold square coordination.
Although the disorder may be introduced along with the chemical substitution in charge reservoir layers (CRLs), it is effectively shielded out of OPs, and hence homogeneously hole-doped CuO$_2$ planes with ideal flatness are realized, especially at IP, which is ensured by the narrow NMR linewidths (see Fig.~\ref{fig:CuNMR_comparison}).}
\label{fig:structure}
\end{figure*}
Multilayered cuprates provide us with the opportunity to investigate the characteristics of a disorder-free CuO$_2$ plane in which hole carriers are homogeneously doped.
Figures \ref{fig:structure}(a) and \ref{fig:structure}(b) respectively show the crystal structures of $n$-layered cuprates in the series of $M$Ba$_2$Ca$_{n-1}$Cu$_n$O$_{2n+2+\delta}$ ($M$=Hg, Tl, and Cu) and Ba$_2$Ca$_{n-1}$Cu$_n$O$_{2n}$(F$_y$O$_{1-y}$)$_2$, denoted as $M$12($n$-1)$n$ and 02($n$-1)$n$F.
Here, $n$ is the number of CuO$_2$ planes within a unit cell.
Copper oxides with more than three layers comprise inequivalent types of CuO$_2$ layers: an outer CuO$_2$ plane (OP) in a five-fold pyramidal coordination and an inner CuO$_2$ plane (IP) in a four-fold square coordination.
Site-selective nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR) studies are unique tools for differentiating layer-dependent electronic characteristics microscopically~\cite{TokunagaJLTP,Tokunaga,Kotegawa2001,Kotegawa2004,MukudaPRL2006,MukudaJPSJ2006,Shimizu2007,Mukuda2007PhysC,Mukuda2008,Shimizu2009PRB,Shimizu2009JPSJ,Mukuda2010,Itohara,Shimizu2011JPSJ,Shimizu2011PRB,Shimizu2011_n3,Tabata,KitaokaJPCS2011,KitaokaIOP}.
One of the remarkable features and advantages of multilayered cuprates is that the CuO$_2$ layers are very flat and homogeneously doped, as ensured by the narrowest NMR linewidths observed to date among high-quality cuprates~(for example, see Fig.~\ref{fig:CuNMR_comparison}).
In multilayered cuprates, the carrier densities are inequivalent between OP and IP owing to an imbalance in the Madelung potential at each CuO$_2$ plane.
Namely, since IP is farther from the charge reservoir layer (CRL) than OP, the carrier density at IPs is always lower than that at OPs.
Carrier density can be tuned by the oxygen deficiency in CRLs ($M$O$_\delta$) for $M$12($n$-1)$n$ or by the chemical substitution of F at apical oxygen sites for 02($n$-1)$n$F.
Such chemical substitutions introduce some disorder in CRLs, which may be partially transmitted to the OPs, but the disorder potential at IP is effectively shielded owing to the presence of the conducting OPs, as deduced from the narrower NMR linewidth at IPs than at OPs (for example, see Figs.~\ref{fig:Cuspectra} and \ref{fig:CuNMR_comparison}).
In this context, ideally flat CuO$_2$ planes are realized especially at underdoped IP, differentiating multilayered cuprates from monolayered cuprate LSCO.
In this paper, we review a decade of extensive NMR investigations of $n$-layered cuprates with $n$=3, 4, and 5, which have revealed the intimate relationship between AFM and HTSC for a disorder-free CuO$_2$ plane with hole carriers homogeneously doped~\cite{Kotegawa2004,MukudaPRL2006,MukudaJPSJ2006,Shimizu2007,Mukuda2007PhysC,Mukuda2008,Shimizu2009PRB,Shimizu2009JPSJ,Mukuda2010,Itohara,Shimizu2011JPSJ,Shimizu2011PRB,Shimizu2011_n3,Tabata,KitaokaJPCS2011,KitaokaIOP,Shimizu_n5}.
The intrinsic phase diagram possesses the following features: The AFM metallic state is robust and coexists uniformly with the HTSC at a single CuO$_2$ plane in a region extending up to the optimally doped one.
The critical carrier density $p_{c}$ at which the AFM order collapses decreases from 0.10 to 0.08 to 0.075 as the interlayer magnetic coupling weakens on going from $n$=5 to 4 to 3, respectively.
This provides a reasonable explanation of why the AFM order in $n$=1:LSCO and $n$=2:YBa$_2$Cu$_3$O$_{6+x}$(YBCO$_{6+x}$) collapses at the carrier densities $p_{c}$=0.02 and 0.055, respectively.
We reveal that the SC gap and $T_{\rm c}$ exhibit a maximum irrespective of $n$ at $p\sim$~0.16 just outside $p_c(M_{\rm AFM}$=$0)\sim0.14$, where the AFM moment ($M_{\rm AFM}$) inherent in the CuO$_2$ plane totally disappears in the ground state.
We highlight that the ground-state phase diagram of AFM and HTSC (see Fig.~\ref{Mvsp}) is in good agreement with the ground-state phase diagrams in terms of either the $t$-$J$ model~\cite{Chen,Giamarchi,Inaba,Anderson,Anderson1,Anderson2,Lee1,Himeda,Kotliar,Paramekanti1,Lee2,Yamase1,Yamase2,Paramekanti2,Shih1,Shih2,Ogata,Pathak}, or the Hubbard model in the strong-correlation regime~\cite{Senechal,Capone}.
The results presented here demonstrate that the in-plane superexchange interaction $J_{\rm in}$ plays a vital role as the glue for Cooper pairs or mobile spin-singlet pairs, leading us to a coherent understanding of why $T_{\rm c}$ is so high in hole-doped cuprates.
\section{Experimental}
\subsection{Sample preparation and characterization}
Polycrystalline powder samples of $n$-layered cuprates, i.e., $M$Ba$_2$Ca$_{n-1}$Cu$_n$O$_{2n+2+\delta}$ ($M$=Hg, Tl, and Cu) and apical F-substituted Ba$_2$Ca$_{n-1}$Cu$_n$O$_{2n}$(F$_y$O$_{1-y}$)$_2$, were prepared by the high-pressure synthesis technique, as described in the literature\cite{Tokiwa1,Tokiwa2,Iyo_TcVsn,Iyo1,Iyo2,Iyo_Hg_F,Shirage,Hirai}.
To obtain more underdoped samples of the Hg1245 system, as-prepared samples in the nearly optimally doped region [Hg1245(OPT)] were annealed in a quartz tube with Cu powder or in an Ar gas atmosphere for more than several hundred hours\cite{Hirai,MukudaPRL2006,Mukuda2010}.
In Ba$_2$Ca$_{n-1}$Cu$_n$O$_{2n}$(F$_y$O$_{1-y}$)$_2$, the substitution of oxygen (O$^{2-}$) for apical fluorine (F$^{-}$), i.e., a decrease in the nominal fluorine content ($y$), results in the doping of holes into the CuO$_2$ layers, increasing $T_{\rm c}$\cite{Iyo_TcVsn,Iyo1,Iyo2,Iyo_Hg_F,Shirage}.
Although the real apical fluorine content may deviate slightly from the nominal one, $T_{\rm c}$ and the carrier density ($p$) can be tuned by changing the nominal content $y$ in this series, which provides an opportunity to systematically investigate the characteristics of CuO$_2$ layers over a wide $p$ range in the homologous series of $n$-layered cuprates \cite{Shimizu2009JPSJ,Shimizu2011_n3}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7cm]{81002Fig2.eps}
\end{center}
\caption{(Color online) (a) Lattice parameters $a$ and $c$ plotted against $n$ for the Hg12($n$-1)$n$ and 02($n$-1)$n$F systems\cite{Iyo_TcVsn,Iyo_Hg_F}. Note that the $c/2$ of 02($n$-1)$n$F is compared with the $c$ of Hg12($n$-1)$n$, because of the difference in the unit cell.
(b) Relationship between $T_{\rm c}$ and $n$ for the homologous series of $n$-layered cuprates $M$12($n$-1)$n$ and 02($n$-1)$n$F.
[cited from refs.\cite{Iyo_TcVsn,Iyo_Hg_F}] }
\label{fig:Tc_n}
\end{figure}
Powder X-ray diffraction measurements indicate that the samples used for NMR/NQR measurements are almost entirely composed of a single phase. As shown in Fig. \ref{fig:Tc_n}(a), the $c$-axis length monotonically increases with increasing $n$, which can be fitted using linear functions, $c(n) \simeq 9.451+3.171\times(n-1)$[\AA] for Hg12($n$-1)$n$ and $c(n)/2 \simeq 7.205+3.191\times(n-1)$[\AA] for 02($n$-1)$n$F \cite{Iyo_TcVsn,Iyo_Hg_F}.
The first term corresponds to the distance between OPs through CRL averaged in $n$-layered cuprates; the distances are 9.451[\AA] for Hg12($n$-1)$n$ and 7.205[\AA] for 02($n$-1)$n$F, originating from the difference in the structure of CRL between the two systems.
The coefficient of the second term corresponds to the average distance between adjacent CuO$_2$ planes, which is almost equal to the $c$-axis length of the infinite layer CaCuO$_2$(IL) ($c(\infty)$= 3.179 \AA).
The $a$-axis length of two systems also approaches that of the CaCuO$_2$(IL) ($a(\infty)$= 3.856 \AA) as $n$ increases, as shown in the lower panel of Fig. \ref{fig:Tc_n}(a).
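The linear fits quoted above can be encoded directly. The following is a minimal numerical sketch (fit coefficients, in \AA, taken from the text); it simply tabulates both fit functions and notes that the slopes are close to the infinite-layer $c$-axis length.

```python
# Minimal sketch of the linear c-axis fits quoted above (lengths in angstroms).
def c_Hg(n):
    """c-axis length of Hg12(n-1)n: CRL spacing + (n-1) interlayer spacings."""
    return 9.451 + 3.171 * (n - 1)

def c_half_F(n):
    """Half c-axis length of 02(n-1)nF (half, because of the doubled unit cell)."""
    return 7.205 + 3.191 * (n - 1)

# The slopes are close to the c-axis length of infinite-layer CaCuO2, 3.179 A.
for n in range(1, 6):
    print(n, round(c_Hg(n), 3), round(c_half_F(n), 3))
```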
The values of $T_{\rm c}$ of all the samples were uniquely determined by susceptibility measurement using a SQUID magnetometer, which exhibits a marked decrease due to the onset of SC diamagnetism.
The $T_{\rm c}$ of nearly optimally doped samples exhibits a striking dependence on $n$, as shown in Fig. \ref{fig:Tc_n}(b), which has been confirmed in a homologous series\cite{Scott,Iyo_TcVsn,Iyo_Hg_F,Antipov}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7cm]{81002Fig3.eps}
\end{center}
\caption{(Color online) $^{63}$Cu-NMR spectrum of $n$=5:Hg1245(OPT)$\sharp1$. The $^{63}$Cu-NMR spectra at a pyramid-type outer CuO$_2$ plane (OP) and a square-type inner one (IP) are separately observed owing to the differences in Knight shift~\cite{Kotegawa2004}. The linewidths in the $^{63}$Cu-NMR spectra are as narrow as 50 Oe at IP and 110 Oe at OP at $\sim$15 T($B \| c$), indicating that IPs are ideally flat and homogeneously hole-doped. [cited from ref.\cite{Kotegawa2004}]
}
\label{fig:Cuspectra}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7cm]{81002Fig4.eps}
\end{center}
\caption{(Color online) $^{63}$Cu-NMR spectra of (a) $n$=1: LSCO ($x$=0.24, $T_{\rm c}$=18 K)\cite{Ohsugi} and Hg1201 ($T_{\rm c}$=96 K)\cite{ItohHg1201_98}, (b) $n$=2: YBCO$_7$ ($T_{\rm c}$=92 K) and Bi2212 ($T_{\rm c}$=86 K)\cite{IshidaBi2212SG}, (c) $n$=3: Hg1223 ($T_{\rm c}$=133 K)\cite{MagishiPRB} and Cu1223 ($T_{\rm c}$=71 K)\cite{Kotegawa2001}, (d) $n$=4: Cu1234 ($T_{\rm c}$=117 K)\cite{Tokunaga,Kotegawa2001}, and (e) $n$=5: Hg1245(OPT)$\sharp1$ ($T_{\rm c}$=108 K)\cite{Kotegawa2004}.
Here, the spectra are displayed against $(B-B_{\rm res})/B_{\rm res}$ for the normalization of differences in field conditions, where $B_{\rm res}$ is a resonance field for $^{63}$Cu. These spectra were measured under the following experimental conditions; LSCO ($B_{\rm res}\perp c$=10.79 T, $T$=20 K, aligned polycrystal(APC))\cite{Ohsugi}, Hg1201 ($B_{\rm res}\parallel c$=7.46 T, $T$=100 K, APC)\cite{ItohHg1201_98}, YBCO$_7$ ($B_{\rm res}\parallel c$=15.08 T, $T$=90 K, single crystal), Bi2212 ($B_{\rm res}\parallel c$=10.95 T, $T$=77 K, single crystal)\cite{IshidaBi2212SG}, Hg1223 ($B_{\rm res}\parallel c$=10.95 T, $T$=140 K, APC)\cite{MagishiPRB}, Cu1223 ($B_{\rm res}\parallel c$=15.24 T, $T$=160 K, APC)\cite{Kotegawa2001}, Cu1234 ($B_{\rm res}\parallel c$=15.25 T, $T$=90 K, APC)\cite{Tokunaga,Kotegawa2001}, and Hg1245 ($B_{\rm res}\parallel c$=15.25 T, $T$=280 K, APC)\cite{Kotegawa2004}.
}
\label{fig:CuNMR_comparison}
\end{figure}
\subsection{NMR/NQR measurements}
For NMR/NQR measurements, the powder samples were aligned along the $c$-axis in an external field ($B$) of $\sim$16 T and fixed using Stycast 1266 epoxy.
Figure \ref{fig:Cuspectra} shows a typical $^{63}$Cu-NMR spectrum of $n$=5:Hg1245(OPT)$\sharp1$, in which the spectra from OP and IP are separately observed owing to the difference in Knight shift, which enables us to study multilayered compounds site-selectively \cite{Julien,Magishi,MagishiPRB,
TokunagaJLTP,Tokunaga,Kotegawa2001,Kotegawa2004,MukudaPRL2006,MukudaJPSJ2006,Shimizu2007,Mukuda2007PhysC,Mukuda2008,Shimizu2009PRB,Shimizu2009JPSJ,Mukuda2010,Itohara,Shimizu2011JPSJ,Shimizu2011PRB,Shimizu2011_n3,Tabata,KitaokaJPCS2011,KitaokaIOP}.
Owing to the difference in local structure between IP and OP, the $^{63}$Cu-NQR frequencies ($^{63}\nu_{Q}$) at OP and IP are typically $\sim$16 and $\sim$8.4 MHz, respectively, for $n$=5:Hg1245(OPT)$\sharp$1\cite{Kotegawa2004}; the presence of apical oxygen at the OP generally makes the $^{63}\nu_{Q}$ of the OP larger.
Moreover, it is remarkable that the linewidths in the $^{63}$Cu-NMR spectra are particularly narrow in multilayered cuprates, which are as narrow as 50 Oe for IP and 110 Oe for OP in $n$=5:Hg1245(OPT)$\sharp$1 even at $\sim$15 T($B \| c$).
For comparison, the $^{63}$Cu-NMR spectra of typical $n$-layered cuprates are presented in Fig. \ref{fig:CuNMR_comparison}: (a) $n$=1 : LSCO ($x$=0.24, $T_{\rm c}$=18 K)\cite{Ohsugi} and Hg1201 ($T_{\rm c}$=96 K)\cite{ItohHg1201_98}, (b) $n$=2: YBCO$_7$ ($T_{\rm c}$=92 K) and Bi2212 ($T_{\rm c}$=86 K)\cite{IshidaBi2212SG}, (c) $n$=3: Hg1223 ($T_{\rm c}$=133 K)\cite{MagishiPRB} and Cu1223 ($T_{\rm c}$=71 K)\cite{Kotegawa2001}, (d) $n$=4: Cu1234 ($T_{\rm c}$=117 K)\cite{Tokunaga,Kotegawa2001}, and (e) $n$=5: Hg1245(OPT)$\sharp1$ ($T_{\rm c}$=108 K)\cite{Kotegawa2004}.
Here, the spectra are displayed against $(B-B_{\rm res})/B_{\rm res}$ for the normalization of differences in field conditions, where $B_{\rm res}$ is the resonance field for $^{63}$Cu.
The broadening of the $^{63}$Cu-NMR linewidth originates from the inhomogeneity of the Knight shift and from the distribution of the quadrupole shift when the CuO$_2$ planes are buckled.
Thus, the figure indicates that the $^{63}$Cu-NMR linewidth becomes narrower as $n$ increases, suggesting that multilayered cuprates possess very flat CuO$_2$ layers from a microscopic point of view as well as electronic states that are homogeneous over the hole-doped CuO$_2$ planes.
Moreover, we note that SC in the Hg1201($n$=1) and YBCO$_7$($n$=2) compounds also occurs on CuO$_2$ planes with little disorder, which is one of the key factors for their relatively high transition temperatures of $T_{\rm c}$=96 and 92 K among the $n$=1 and 2 compounds, respectively\cite{Eisaki}.
The fact that the CuO$_2$ planes in multilayered cuprates are ideally flat and homogeneously hole-doped enables us to investigate the intrinsic properties of an ideal CuO$_2$ plane.
The Knight shift $K$ generally comprises the temperature ($T$)-dependent spin part $K_{\rm s}$ and the $T$-independent orbital part $K_{\rm orb}$ as follows:
\begin{equation}
K^\alpha = K_{\rm s}^\alpha(T) + K_{\rm orb}^\alpha ~~~ (\alpha = c,ab),
\end{equation}
where $\alpha$ is the direction of $B$.
The spin part of the Knight shift for the $B\parallel$ $ab$-plane ($K_{\rm s}^{ab}$) is obtained by subtracting $K_{\rm orb}^{ab}$, which is approximately 0.23($\pm$0.02)\% assuming $K_{\rm s}^{ab}\approx 0$ in the $T$=0 limit.
For multilayered cuprates that exhibit an AFM order, $K^{ab}_s(T)$ at nonmagnetic OP shows an upturn below $T_{\rm N}$ due to the transferred hyperfine magnetic field arising from AFM IP, as shown in the lower panel of Fig.~\ref{fig:summary}: $K_{\rm orb}^{ab}\simeq$0.23($\pm$0.02)\% was assumed to be the same as that of multilayered cuprates in which all CuO$_2$ planes are in a paramagnetic state.
Note that $K_{\rm orb}^{ab}$ does not differ much among high-$T_{\rm c}$ cuprates, whether at IP or OP. \cite{Julien,Magishi,MagishiPRB,Barrett,ItohHg1201_98,IshidaBi2212SG}
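As a minimal numerical sketch of the decomposition in Eq. (1) for $B \parallel ab$: the spin part is obtained by subtracting the $T$-independent orbital part, taken here as the quoted $K_{\rm orb}^{ab} \simeq 0.23\%$. The example input of 0.60\% is chosen purely for illustration.

```python
# Minimal sketch of Eq. (1) for B || ab: the spin part K_s^ab(T) is the
# measured shift minus the T-independent orbital part, taken here as the
# quoted K_orb^ab = 0.23% (all values in percent).
K_ORB_AB = 0.23

def spin_shift(K_ab_measured):
    """K_s^ab(T) = K^ab(T) - K_orb^ab."""
    return K_ab_measured - K_ORB_AB

print(round(spin_shift(0.60), 2))  # -> 0.37 for an illustrative 0.60% shift
```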
\subsection{Hyperfine magnetic field in CuO$_2$ plane}
According to the Mila-Rice Hamiltonian \cite{MilaRice}, the spin Knight shift of Cu in the CuO$_2$ plane is expressed as
\begin{equation}
K_{s}^{\alpha}(T) = (A_{\alpha}+4B)\chi_s(T) ~~~(\alpha =c,ab),
\label{eq:Ks}
\end{equation}
where $A_{\alpha}$ and $B$ are the on-site and supertransferred hyperfine fields of Cu, respectively. Here, $A_{\alpha}$ consists of contributions induced by the on-site Cu $3d_{x^2-y^2}$ spins (anisotropic dipole, spin-orbit, and isotropic core polarization), and the $B$ term originates from the isotropic $4s$ spin polarization produced by the four neighboring Cu spins through the Cu($3d_{x^2-y^2}$)-O($2p\sigma$)-Cu($4s$) hybridization.
Since the spin susceptibility $\chi_s(T)$ is assumed to be isotropic, the anisotropy $\Delta$ of $K_{s}^{\alpha}(T)$ is given by
\begin{equation}
\Delta \equiv \frac{K_{s}^{c}(T)}{K_{s}^{ab}(T)}=\frac{A_{c}+4B}{A_{ab}+4B}.
\label{eq:B}
\end{equation}
The on-site hyperfine fields $A_{ab}$ $\approx$ 3.7 T/$\mu_{\rm B}$ and $A_c$ $\approx$ $-$17 T/$\mu_{\rm B}$ \cite{Monien,Millis,Imai} are assumed as material-independent in hole-doped high-$T_{\rm c}$ cuprates.
In multilayered compounds [Hg1245(OPT)$\sharp1$], $B({\rm IP})\approx$ 6.1 T/$\mu_{\rm B}$ and $B({\rm OP})\approx$ 7.4 T/$\mu_{\rm B}$ are estimated~\cite{Kotegawa2004}, which are larger than $B\sim$ 4 T/$\mu_{\rm B}$ for LSCO \cite{Ohsugi}, YBCO$_7$~\cite{Walstedt,Barrett,Takigawa}, and YBa$_2$Cu$_4$O$_8$(Y1248)~\cite{Zimmermann} compounds.
In the AFM ordered state where spins align antiferromagnetically among nearest-neighbor Cu sites in a two-dimensional lattice of the CuO$_2$ plane, the internal field at the Cu site is generally given by $B_{\rm int}$=$|A_{ab}-4B|M_{\rm AFM}$ from the AFM moment ($M_{\rm AFM}$) through those hyperfine interactions.
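Equations (2) and (3) and the internal-field relation can be sketched numerically as follows, using the hyperfine couplings quoted above (in T/$\mu_{\rm B}$); the value of $M_{\rm AFM}$ used in the example is illustrative only.

```python
# Numerical sketch of Eqs. (2)-(3) and the internal-field relation, with the
# hyperfine couplings quoted in the text (units of T per Bohr magneton). The
# value of M_AFM below is illustrative only.
A_AB, A_C = 3.7, -17.0

def anisotropy(B):
    """Delta = (A_c + 4B) / (A_ab + 4B), Eq. (3)."""
    return (A_C + 4 * B) / (A_AB + 4 * B)

def internal_field(B, M_afm):
    """B_int = |A_ab - 4B| * M_AFM at an AFM-ordered Cu site."""
    return abs(A_AB - 4 * B) * M_afm

print(round(anisotropy(6.1), 2))           # -> 0.26 for B(IP) of Hg1245
print(round(internal_field(6.1, 0.1), 2))  # -> 2.07 T for M_AFM = 0.1 mu_B
```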
\subsection{Evaluation of planar CuO$_2$ hole density}
In hole-doped high-$T_{\rm c}$ cuprates, the hole density $p$ in the CuO$_2$ planes has been determined by various methods: indirect chemical methods such as solid solutions \cite{Shafer,Torrance,Kishino,Tokura}, bond valence sums determined from structural bond lengths \cite{Brown,TallonBV,Cava}, or the Fermi surface topology \cite{Kordyuk}. In addition, the thermoelectric power is a universal function of $p$ \cite{Obertelli,Tallon}, and the phase diagram for hole-doped cuprates is well described by $T_{\rm c}(p)$=$T_{\rm c}^{\rm max}[1-82.6(p-0.16)^2]$ \cite{Presland}; both are applicable to the estimation of $p$ when no suitable structural data are available.
These methods are, however, inapplicable to multilayered cuprates composed of more than two inequivalent CuO$_2$ planes in a unit cell, because they evaluate the {\it total} hole density rather than the hole density of {\it each} CuO$_2$ plane.
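The empirical dome $T_{\rm c}(p)$ and its inversion can be sketched as follows; since the parabola is two-valued, the underdoped branch ($p < 0.16$) is taken here by convention, and the $T_{\rm c}^{\rm max}$ value in the check is illustrative.

```python
import math

# Sketch of the empirical dome Tc(p) = Tc_max [1 - 82.6 (p - 0.16)^2] and its
# inversion for estimating p from a measured Tc; the underdoped branch
# (p < 0.16) is taken here, since the parabola is two-valued.
def Tc(p, Tc_max):
    return Tc_max * (1.0 - 82.6 * (p - 0.16) ** 2)

def p_underdoped(Tc_val, Tc_max):
    return 0.16 - math.sqrt((1.0 - Tc_val / Tc_max) / 82.6)

# A round trip on the underdoped side recovers the hole density.
assert abs(p_underdoped(Tc(0.10, 100.0), 100.0) - 0.10) < 1e-9
```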
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig5.eps}
\end{center}
\caption{\footnotesize (Color online) Plots of $K_{\rm s}^{ab}$(RT)s for 0212F\cite{Shimizu2011PRB}, Bi$_2$Sr$_2$CaCu$_2$O$_8$~(Bi2212),\cite{IshidaBi2212} Tl$_2$Ba$_2$CuO$_{6+\delta}$~(Tl2201),\cite{KitaokaTl2201} and TlSr$_2$CaCu$_2$O$_{7-\delta}$~(Tl1212),\cite{MagishiTl1212} as functions of $p$ evaluated using $T_{\rm c}$=$T_{\rm c}^{\rm max}[1-82.6(p-0.16)^2]$.\cite{Presland}
Here, the above cuprates are selected because their local structures are homologous to those of the multilayered 02($n$-1)$n$F and $M$12($n$-1)$n$ series, which guarantees that the plot of $K_{\rm s}^{ab}$(RT) vs $p$ is material-independent among the homologous compounds.
The relationship F2: ($p$=0.492$K_{\rm s}^{ab}$(RT)-0.023)~(solid line) between $K_{\rm s}^{ab}$(RT) and $p$ obtained by fitting with the data for $K_{\rm s}^{ab}$(RT)$<$ 0.5\% allows us to separately estimate $p$ inherent in IP and OP in multilayered cuprates.
More details of the validity of this relation are described in the literature.\cite{Shimizu2011PRB}
Note that the relation F1:$p'$=0.502$K_{\rm s}^{ab}$(RT)+0.0462 used in the previous studies had overestimated the hole density by 0.06$\sim$0.07.\cite{TokunagaJLTP,Tokunaga,Kotegawa2001,Kotegawa2004,MukudaPRL2006,MukudaJPSJ2006,Shimizu2007,Mukuda2007PhysC,Mukuda2008,Shimizu2009PRB,Shimizu2009JPSJ,Mukuda2010,Itohara} [cited from ref. \cite{Shimizu2011PRB}]
}
\label{fig:Ks_p}
\end{figure}
In NMR experiments, the spin part of the Knight shift at room temperature ($K_{\rm s}^{ab}$(RT)) increases with $p$ from the underdoped region to the overdoped region in hole-doped cuprates \cite{Kotegawa2001,Walstedt1990,Ohsugi,IshidaBi2212,FujiwaraJPSJ,MagishiTl1212,Storey}, suggesting that $K_{\rm s}^{ab}$(RT) is available to determine planar CuO$_2$ hole densities ($p$).
The linear equation $p'$=0.502$K_{\rm s}^{ab}$(RT)+0.0462 has been reported, as indicated by the dotted line (F1) in Fig. \ref{fig:Ks_p} \cite{TokunagaJLTP,Kotegawa2001}, where $p'$ is derived from the NQR frequencies of Cu and O in CuO$_2$ planes~\cite{Zheng}.
However, given that $K_{\rm s}$(RT) for optimally doped cuprates is empirically 0.35$-$0.39\%, this relation yields an optimal doping level of $p'$=0.22$\sim$0.24\cite{MukudaPRL2006,Mukuda2008,Shimizu2009JPSJ}, considerably larger than the widely accepted optimal doping level in hole-doped cuprates, i.e., $p$ $\sim$ 0.16, as shown in the figure.
This inconsistency is, in part, due to the calculation that connects NQR frequency to hole density\cite{Haase}.
Recently, we investigated the bilayered ($n$=2) apical-fluorine compound Ba$_2$CaCu$_2$O$_4$(F,O)$_2$ ($n$=2:0212F) over a wide carrier density range\cite{Shimizu2011PRB}, which provides an opportunity to reexamine $p$ by comparison with values well established in other bilayered compounds, which exhibit the maximum $T_{\rm c}$ at $p(T_{\rm c}^{\rm max})\sim$~0.16. Figure \ref{fig:Ks_p} shows plots of $K_{\rm s}^{ab}$(RT) for $n$=2:0212F, Bi$_2$Sr$_2$CaCu$_2$O$_8$ ($n$=2:Bi2212) \cite{IshidaBi2212}, Tl$_2$Ba$_2$CuO$_{6+\delta}$ ($n$=1:Tl2201) \cite{KitaokaTl2201}, and TlSr$_2$CaCu$_2$O$_{7-\delta}$ ($n$=2:Tl1212) \cite{MagishiTl1212} as functions of $p$ evaluated using $T_{\rm c}(p)$=$T_{\rm c}^{\rm max}[1-82.6(p-0.16)^2]$ \cite{Presland}; $K_{\rm s}^{ab}$(RT) increases monotonically with $p$ from the underdoped region to the overdoped region.
Here, the above cuprates are selected because their local structures are homologous to those of the multilayered 02($n$-1)$n$F and $M$12($n$-1)$n$ series, which guarantees that the plot of $K_{\rm s}^{ab}$(RT) vs $p$ is material-independent among these homologous compounds.
In this review, we use the linear relation $p$=0.492$K_{\rm s}^{ab}$(RT)$-$0.023, shown by the solid line (F2) in Fig. \ref{fig:Ks_p}, which was obtained by fitting the data for $K_{\rm s}^{ab}$(RT)$<$ 0.5\%; the $K_{\rm s}^{ab}$(RT) values of the samples treated in this study are all below 0.5\%.
This renewed relation, based on the Knight shift, enables us to separately estimate $p$ for each CuO$_2$ plane in multilayered cuprates.
More details on the validity of this relation are given in the literature\cite{Shimizu2011PRB}.
Note that the relation (F1) used in previous studies overestimated the hole density by 0.06$\sim$0.07.\cite{TokunagaJLTP,Tokunaga,Kotegawa2001,Kotegawa2004,MukudaPRL2006,MukudaJPSJ2006,Shimizu2007,Mukuda2007PhysC,Mukuda2008,Shimizu2009PRB,Shimizu2009JPSJ,Mukuda2010,Itohara}
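As a quick numerical cross-check (our sketch, not part of the original analysis), the following snippet encodes the F1 and F2 calibrations and the inversion of the empirical $T_{\rm c}(p)$ parabola; the coefficients are those quoted above, and $K_{\rm s}^{ab}$(RT)=0.37\% is an assumed representative value for optimal doping.

```python
# Our sketch (not from the cited papers): the two Knight-shift calibrations
# F1/F2 and the inversion of the empirical Tc(p) parabola of Presland et al.
import math

def p_from_Ks_F2(Ks_RT):
    """Renewed relation F2 (valid for Ks(RT) < 0.5%, Ks in %)."""
    return 0.492 * Ks_RT - 0.023

def p_from_Ks_F1(Ks_RT):
    """Earlier relation F1, which overestimates p by ~0.06-0.07."""
    return 0.502 * Ks_RT + 0.0462

def p_from_Tc(Tc, Tc_max, underdoped=True):
    """Invert Tc = Tc_max * [1 - 82.6 (p - 0.16)^2] on one branch."""
    root = math.sqrt((1.0 - Tc / Tc_max) / 82.6)
    return 0.16 - root if underdoped else 0.16 + root

Ks = 0.37  # %, an assumed representative Ks(RT) of an optimally doped cuprate
print(round(p_from_Ks_F2(Ks), 3))                      # ~0.159, close to p ~ 0.16
print(round(p_from_Ks_F1(Ks) - p_from_Ks_F2(Ks), 3))   # ~0.073, the F1 overestimate
```

For $K_{\rm s}^{ab}$(RT)=0.37\%, F2 gives $p\approx$0.16, whereas F1 gives $p'\approx$0.23, reproducing the $\sim$0.07 overestimate noted above.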
According to eq.~(\ref{eq:Ks}), the $p$ dependence of $K_{s}^{ab}$(RT) is derived from those of the $B$ term in the hyperfine coupling constant and $\chi_s$(RT). In multilayered cuprates, the $B$ term increases moderately with $p$, showing a steep increase at $p$=0.18$\sim$0.20\cite{Shimizu2011PRB}.
The $B$ term arises from Cu($3d_{x^2-y^2}$)-O($2p\sigma$)-Cu($4s$) covalent bonds with the four nearest-neighbor Cu sites; therefore, a large $B$ suggests strong hybridization between the Cu($3d_{x^2-y^2}$) and O($2p\sigma$) orbitals.
This result is consistent with the fact that the metallic state is more stable in the overdoped regime.
For layers with tetragonal symmetry in cuprates homologous to the multilayered series, the $p$-dependent $B$ terms are {\it material-independent}\cite{Shimizu2011PRB}, being larger than the $B$ terms for LSCO, YBCO$_{6+x}$, and Y1248, $\sim$ 4 T/$\mu_{\rm B}$\cite{IshidaBi2212SG}.
This difference is also seen in the variation in the nuclear quadrupole frequency $^{63}\nu_Q$.
$^{63}\nu_Q$ increases with $p$ for all the materials\cite{Ohsugi,Zheng,Haase}, but, for a given $p$, the absolute values for LSCO and YBCO$_{6+x}$ are about 2 to 3 times larger than those for the others\cite{Shimizu2011PRB}.
$^{63}\nu_Q$ depends on the hole number $n_d$ in the Cu($3d_{x^2-y^2}$) orbital and on $n_p$ in the O($2p\sigma$) orbital.
Therefore, $n_d$ and $n_p$ in LSCO and YBCO$_{6+x}$ are expected to differ from those in the other compounds treated here, even at the same $p$ ($p$=$n_d$+2$n_p-$1).
Indeed, it has been reported that $n_d$ is large in LSCO and YBCO$_{6+x}$, which explains their large $\nu_Q$ \cite{Zheng,Haase}.
In this context, the deviation of the $B$ terms in LSCO, YBCO$_{6+x}$, and Y1248 from those in the other materials reflects a different partitioning of holes between the Cu($3d_{x^2-y^2}$) and O($2p\sigma$) orbitals, which is probably related to the crystal structures: LSCO, YBCO$_{6+x}$, and Y1248 have orthorhombic crystal structures in the superconducting region, whereas 0212F, Bi2212, Tl2201, and Tl1212 have tetragonal ones.
Thus, we conclude that the $p$ dependence of $B$ holds in CuO$_2$ planes with tetragonal symmetry homologous to the multilayered series, which guarantees the estimation of $p$ for OP and IP independently in multilayered cuprates using the renewed relation between $K_{s}^{ab}$(RT) and $p$.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=15cm]{81002Fig6.eps}
\end{center}
\caption[]{\footnotesize (Color online)
Illustration of layer-dependent physical properties for $n$=5 compounds: (a) Hg1245(UD)$\sharp1$ \cite{MukudaPRL2006,Tabata}, (b) Hg1245(UD)$\sharp2$\cite{Mukuda2010,Tabata}, (c) Hg1245(UD)$\sharp3$ \cite{Mukuda2010,Tabata}, (d) Hg1245(OPT)$\sharp1$\cite{Kotegawa2004,Mukuda2008}, (e) Hg1245(OPT)$\sharp2$\cite{Mukuda2008}, (f) Tl1245(OVD)\cite{Kotegawa2004,Mukuda2008}, and (g) Cu1245(OVD)\cite{Kotegawa2001}. The middle panels present tables of the hole densities of $p$(IP) and $p$(OP), and the AFM ordered moments of $M_{\rm AFM}$(IP) and $M_{\rm AFM}$(OP). The lower panels show the $T$ dependences of $K_{\rm s}^{ab}(T)$s, which enable us to separately estimate $p$s for IP and OP, and to probe the onset of AFM and HTSC at IP and OP.
}
\label{fig:summary}
\end{figure*}
\section{Results}
\subsection{Five-layered ($n$=5) compounds}
Figures \ref{fig:summary}(a) - \ref{fig:summary}(g) respectively show the layer-dependent physical properties unraveled by site-selective NMR studies of the following $n$=5 compounds: Hg1245(UD)$\sharp$1 ($T_{\rm c}$=72 K) \cite{MukudaPRL2006,Tabata}, Hg1245(UD)$\sharp$2 ($T_{\rm c}$=82 K) \cite{Mukuda2010,Tabata}, Hg1245(UD)$\sharp$3 ($T_{\rm c}$=92 K) \cite{Mukuda2010,Tabata}, Hg1245(OPT)$\sharp$1 ($T_{\rm c}$=108 K) \cite{Kotegawa2004,Mukuda2008}, Hg1245(OPT)$\sharp$2 ($T_{\rm c}$=110 K) \cite{Mukuda2008}, Tl1245(OVD) ($T_{\rm c}$=100 K) \cite{Kotegawa2004,Mukuda2008}, and Cu1245(OVD) ($T_{\rm c}$=90 K) \cite{Kotegawa2001}.
The temperature ($T$) dependences of $K^{ab}_s(T)$ are shown in the lower panel of the figures.
The values of $K^{ab}_s$(RT) decrease with decreasing $p$, which enables us to evaluate $p$ for each CuO$_2$ plane using the relationship between $K^{ab}_s$(RT) and $p$~\cite{Shimizu2011PRB}.
\subsubsection{Superconducting characteristics}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig7.eps}
\end{center}
\caption{(Color online)
$T$ dependences of $K_{\rm s}^{ab}$s (solid circles) and its $T$ derivatives (empty circles) at OP and IP of (a) Hg1245(OPT)$\sharp2$ with $T_{\rm c}$=110 K and (b) Cu1245(OVD) with $T_{\rm c}$=90 K. Owing to a large imbalance in $p$s , $T_{\rm c}$s inherent in OP and IP are assigned from a peak in the $T$ derivatives of $K_{\rm s}^{ab}$ (see text). (a) A distinct peak in the $T$ derivatives of $K_{\rm s}^{ab}$ at OP coincides with the bulk $T_{\rm c}$= 110 K, demonstrating that the SC is driven by OPs. Another peak at $T$=85 K in the $T$ derivatives of $K_{\rm s}^{ab}$ at IP was assigned to $T_{\rm c}'$ inherent in IP.\cite{Mukuda2008} (b) A distinct peak in the $T$ derivatives of $K_{\rm s}^{ab}$ at IP coincides with the bulk $T_{\rm c}$= 90 K, demonstrating that the SC is driven by IPs. A distinct peak at $T$=65 K in the $T$ derivatives of $K_{\rm s}^{ab}$ at OP was assigned to $T_{\rm c}'$ inherent in overdoped OP.\cite{TokunagaJLTP,Tokunaga,Kotegawa2001} (c,d) The SC gaps at IP (OP) for Hg1245(OPT)$\sharp2$~(Cu1245(OVD)) with $T_{\rm c}'$ lower than bulk $T_{\rm c}$ are anticipated to develop linearly between $T_{\rm c}$ and $T_{\rm c}'$ owing to the proximity effect.\cite{Tokunaga} [cited from refs.\cite{Kotegawa2001,Mukuda2008}]
}
\label{fig:Knightshift}
\end{figure}
In multilayered cuprates, $T_{\rm c}$s inherent in OP and IP are estimated from the $T$ dependences of $K_{\rm s}^{ab}$ and its $T$ derivatives~\cite{TokunagaJLTP,Tokunaga,Kotegawa2001,Mukuda2008}, which are shown in Figs.~\ref{fig:Knightshift}(a) and \ref{fig:Knightshift}(b).
Figure~\ref{fig:Knightshift}(a) shows that a distinct peak in the $T$ derivatives of $K_{\rm s}^{ab}$ at OP coincides with the bulk $T_{\rm c}$= 110 K in optimally doped Hg1245(OPT)$\sharp2$, demonstrating that the SC is driven by OPs with an optimum $p$, while another peak at $T$=85 K in the $T$ derivatives of $K_{\rm s}^{ab}$ at IP is assigned to $T_{\rm c}'$ inherent to IP with a hole density smaller than that at OP~\cite{Mukuda2008}.
Figure~\ref{fig:Knightshift}(b) shows that a distinct peak in the $T$ derivatives of $K_{\rm s}^{ab}$ at IP coincides with the bulk $T_{\rm c}=$ 90 K in overdoped Cu1245(OVD), demonstrating that the SC is driven by IPs, while a distinct peak at $T=65$ K in the $T$ derivatives of $K_{\rm s}^{ab}$ at OP was assigned to $T_{\rm c}'$ inherent in overdoped OP.~\cite{TokunagaJLTP,Tokunaga,Kotegawa2001}
Noting that OP and IP are alternately stacked along the $c$-axis in multilayered cuprates, the SC gaps at IP (OP) for Hg1245(OPT)$\sharp2$ [Cu1245(OVD)] with $T_{\rm c}'$ lower than bulk $T_{\rm c}$ are anticipated to develop linearly between $T_{\rm c}$ and $T_{\rm c}'$, as shown in Fig.~\ref{fig:Knightshift}(c) [Fig.~\ref{fig:Knightshift}(d)], respectively, owing to the proximity effect.
Thus, it is now widely established that $T_{\rm c}$ in multilayered cuprates is determined by the carrier density of each CuO$_2$ plane~\cite{TokunagaJLTP,Tokunaga,Kotegawa2001,Mukuda2008,Shimizu2009JPSJ,Shimizu2011_n3}, and it has been proposed that such two-gap SC causes anomalous behaviors~\cite{Hirai,YTanaka,Crisan,YTanaka_soliton}.
\subsubsection{Estimation of AFM moments $-$ zero-field Cu-NMR/NQR studies}
The observation of a zero-field (ZF) Cu-NMR spectrum at low $T$ allows us to estimate the AFM ordered moment $M_{\rm AFM}$.
In general, the Hamiltonian for Cu nuclear spins ($I=3/2$) in crystal lattices with an axial symmetry is described by the Zeeman interaction due to the magnetic field ${\bm B}$, ${\cal H}_{Z}$, and the nuclear-quadrupole interaction ${\cal H}_{Q}$ as
\begin{eqnarray}
{\cal H}&=&{\cal H}_Z+{\cal H}_Q \notag \\
&=&-\gamma_N \hbar {\bm I} \cdot {\bm B}+\frac{e^{2}qQ}{4I(2I-1)}(3I_{z^{\prime}}^2-I(I+1)),
\label{eq:hamiltonian}
\end{eqnarray}
where $\gamma_{N}$ is the Cu nuclear gyromagnetic ratio, $eQ$ is the nuclear quadrupole moment, and $eq$ is the electric field gradient at the Cu nuclear site. In ${\cal H}_{Q}$, the nuclear quadrupole resonance (NQR) frequency is defined as $\nu_{Q}=e^{2}qQ/2h$.
In nonmagnetic substances, the NQR spectrum originates from the second term in eq.~(\ref{eq:hamiltonian}) at zero external field ($B_{\rm ext}$=0 T). In magnetically ordered substances, on the other hand, the ZF-NMR spectrum is observed owing to the internal magnetic field $B_{\rm int}$ at the Cu sites entering eq.~(\ref{eq:hamiltonian}), despite $B_{\rm ext}$=0 T.
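As an illustration (our sketch, assuming an axially symmetric electric field gradient), eq.~(\ref{eq:hamiltonian}) for $I$=3/2 can be diagonalized numerically to see how the single NQR line at $\nu_Q$ evolves into split ZF-NMR lines once $B_{\rm int}$ develops; the values $\nu_Q$=16 MHz and $B_{\rm int}$=2.5 T below are representative numbers, not fits.

```python
# Our illustration: diagonalize the Zeeman + quadrupole Hamiltonian for a Cu
# nuclear spin I = 3/2 (axially symmetric EFG, frequency units of MHz) to see
# how the single NQR line at nu_Q evolves into split ZF-NMR lines once an
# internal field B_int appears in the magnetically ordered state.
import numpy as np

I = 1.5
m = np.arange(I, -I - 1, -1)            # m = 3/2, 1/2, -1/2, -3/2
Iz = np.diag(m)
# raising operator: <m+1|I+|m> = sqrt(I(I+1) - m(m+1))
Ip = np.diag(np.sqrt(I * (I + 1) - m[1:] * (m[1:] + 1)), 1)
Ix = (Ip + Ip.T) / 2.0

def levels(nu_Q, gamma_N, B, theta=0.0):
    """Energy levels in MHz; theta = angle between B and the EFG principal axis."""
    H = (-gamma_N * B * (np.cos(theta) * Iz + np.sin(theta) * Ix)
         + (nu_Q / 6.0) * (3.0 * Iz @ Iz - I * (I + 1) * np.eye(4)))
    return np.linalg.eigvalsh(H)        # ascending order

gamma63 = 11.285                        # MHz/T, 63Cu gyromagnetic ratio
print(np.diff(levels(16.0, gamma63, 0.0)))  # B=0: the only nonzero spacing is nu_Q = 16 MHz
print(np.diff(levels(16.0, gamma63, 2.5)))  # B=2.5 T: lines at gamma*B and gamma*B -/+ nu_Q
```

For $B_{\rm int}$ along the principal axis, the three $\Delta m$=1 transitions sit at $\gamma_N B_{\rm int}$ and $\gamma_N B_{\rm int}\mp\nu_Q$, which is how the solid bars in Fig.~\ref{fig:ZFNMR} are computed in spirit.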
Figure~\ref{fig:ZFNMR}(a) shows a typical Cu-NQR spectrum of $n$=4:Hg1234(OPT) with $T_{\rm c}$=123 K at 1.5 K\cite{Itohara}. The respective $^{63}$Cu-NQR frequencies ($^{63}\nu_{Q}$) at IP and OP are 9.6 and 17.8 MHz, which are comparable to the typical $^{63}\nu_{Q}$ values at IP and OP in a paramagnetic regime\cite{Julien,MagishiPRB}.
\begin{figure}[tbp]
\centering
\includegraphics[width=7.5cm]{81002Fig8.eps}
\caption[]{\footnotesize (Color online) Cu-NQR/zero-field NMR spectra at 1.5 K for (b) Tl1245(OVD),\cite{Kotegawa2004,Mukuda2008} (c) Hg1245(OPT)$\sharp2$,\cite{Mukuda2008} (d) Hg1245(OPT)$\sharp1$,\cite{Kotegawa2004,Mukuda2008} (e) Hg1245(UD)$\sharp3$,\cite{Tabata} (f) Hg1245(UD)$\sharp2$,\cite{Tabata} and (g) Hg1245(UD)$\sharp1$\cite{MukudaPRL2006,Tabata,IP_UD}, along with (a) $n$=4:Hg1234(OPT) with $T_{\rm c}$=123 K in the paramagnetic state.\cite{Itohara} Dotted lines represent the NQR frequencies $^{63}\nu_{Q}$(IP)=8.4 and $^{63}\nu_{Q}$(OP)=16 MHz for Hg1245(OPT)$\sharp1$.\cite{Kotegawa2004} The solid bars represent resonance frequencies and intensities for two components of isotopes, $^{63}$Cu and $^{65}$Cu, which were calculated based on eq.~(\ref{eq:hamiltonian}). [cited from refs.\cite{Itohara,Kotegawa2004,MukudaPRL2006,Mukuda2008,Tabata}]
}
\label{fig:ZFNMR}
\end{figure}
Figures \ref{fig:ZFNMR}(b)-\ref{fig:ZFNMR}(g) respectively show the Cu-NQR/ZF-NMR spectra at 1.5 K for Tl1245(OVD)\cite{Kotegawa2004,Mukuda2008}, Hg1245(OPT)$\sharp2$\cite{Mukuda2008}, Hg1245(OPT)$\sharp1$\cite{Kotegawa2004,Mukuda2008}, Hg1245(UD)$\sharp3$\cite{Tabata}, Hg1245(UD)$\sharp2$\cite{Tabata}, and Hg1245(UD)$\sharp1$\cite{MukudaPRL2006,Tabata}. In these spectra, no NQR spectrum at IP is observed, pointing to an onset of AFM order at IP, whereas the NQR spectrum at OP is observed for (b) Tl1245(OVD)\cite{Kotegawa2004,Mukuda2008}, (c) Hg1245(OPT)$\sharp2$\cite{Mukuda2008}, (d) Hg1245(OPT)$\sharp1$\cite{Kotegawa2004,Mukuda2008}, (e) Hg1245(UD)$\sharp3$, and (f) Hg1245(UD)$\sharp2$.
The ZF-NMR spectra at IP, which are observed in the range of 20$\sim$50 MHz, are reproduced by assuming an internal field $B_{\rm int}$ at IP, which is generally given by $B_{\rm int}$=$|A_{\rm hf}|M_{\rm AFM}$=$|A_{ab}-4B|M_{\rm AFM}$.
Here, $A_{ab}\approx$ 3.7 T/$\mu_{\rm B}$, $B({\rm IP})\approx$ 6.1 T/$\mu_{\rm B}$, and $B({\rm OP})\approx$ 7.4 T/$\mu_{\rm B}$ are assumed for multilayered compounds\cite{Kotegawa2004}, and $M_{\rm AFM}$ is the spontaneous AFM moment at the Cu sites. Using these values, the $M_{\rm AFM}({\rm IP})$s are estimated to be in the range of 0.1$\sim$0.18 $\mu_{\rm B}$ at $T$=1.5 K, as listed in the middle panel of Fig.~\ref{fig:summary}. Note that the Cu-NMR spectra of undoped Mott insulators are observed in higher-frequency ranges, such as 75$\sim$110 MHz for La$_2$CuO$_4$\cite{Tsuda} and YBCO$_6$\cite{YasuokaZF,Tsuda} or 125$\sim$150 MHz for CaCuO$_2$(IL) (see Fig.\ref{fig:n8}(f)), reflecting $M_{\rm AFM}$=0.5$\sim$0.7$\mu_{\rm B}$\cite{Vaknin1987,Vaknin1989}.
Thus, the mobile holes at IP uniformly reduce the ordered moment to $M_{\rm AFM}$(IP)=0.1$\sim$0.18 $\mu_{\rm B}$, indicating that a static AFM {\it metallic} state is uniformly realized at IP.
Here, note that the presence of the spin-glass phase is excluded because the possible distribution in $M_{\rm AFM}({\rm IP})$ is less than $\pm$ 0.02$\mu_{\rm B}$.
In the most underdoped Hg1245(UD)$\sharp$1, the spectrum at OP is observed at approximately 30 MHz (see Fig.~\ref{fig:ZFNMR}(g)), indicating that $M_{\rm AFM}$(OP)$\sim$ 0.092$\mu_{\rm B}$ even at the OP, which is responsible for the onset of HTSC with $T_{\rm c}$= 72 K.
Both the phase separation and the spin-glass phase are excluded since no NQR spectra are observed.
These facts indicate that both AFM and HTSC uniformly coexist at the microscopic level\cite{MukudaPRL2006}.
Here, note that the broadening of these spectra at OP and IP points to a possible distribution of $M_{\rm AFM}$(OP) and $M_{\rm AFM}$(IP) of approximately $\pm$ 0.02 $\mu_{\rm B}$ across the samples. This may be partly because a subtle inhomogeneity in carrier density arises from the deoxidization process.
Note that $M_{\rm AFM}$ and $T_{\rm c}$ at OP in Hg1245(UD)$\sharp$1 are comparable to those at IP in Hg1245(OPT)$\sharp$2, since the $p$ values are almost the same for these layers.
This means that $M_{\rm AFM}$ and $T_{\rm c}$ are primarily determined by $p$ at each layer, not by the electronic states of adjacent layers, whereas $T_{\rm N}$ depends on the number $n$ of CuO$_2$ layers, i.e., on the interlayer magnetic coupling, as discussed in $\S$4.1 and $\S$4.2.
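A back-of-the-envelope check (ours) confirms that the quoted couplings and moments indeed place the IP ZF-NMR lines inside the observed 20$\sim$50 MHz window; the quadrupole contribution to the line positions is neglected here for simplicity.

```python
# Our back-of-the-envelope check: with the quoted couplings, the moments
# M_AFM(IP) = 0.1-0.18 mu_B give ZF-NMR frequencies inside the observed
# 20-50 MHz window (quadrupole contribution to line positions neglected).
A_ab = 3.7        # T/mu_B, on-site hyperfine coupling
B_IP = 6.1        # T/mu_B, transferred hyperfine coupling at IP
gamma63 = 11.285  # MHz/T, 63Cu gyromagnetic ratio

def zf_freq(M_AFM, B_transfer):
    B_int = abs(A_ab - 4.0 * B_transfer) * M_AFM  # B_int = |A_ab - 4B| * M_AFM (T)
    return gamma63 * B_int                        # Zeeman frequency (MHz)

for M in (0.10, 0.18):       # quoted range of M_AFM(IP) at 1.5 K
    print(M, round(zf_freq(M, B_IP), 1))   # -> 23.4 and 42.0 MHz
```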
\subsubsection{Determination of N\'eel temperature $T_{\rm N}$ for AFM order}
The N\'eel temperature $T_{\rm N}$ is determined by the measurement of nuclear-spin-relaxation rate $1/T_{\rm 1}$,
which exhibits a peak at $T_{\rm N}$. Generally, $1/T_{\rm 1}$ is described as
\begin{equation}
\frac{1}{T_{1}}=\frac{2\gamma_{\rm N}^{2}k_{\rm B}T}{(\gamma_{\rm e} \hbar )^{2}}\sum_{\bm q} |A_{\bm q}|^{2}\frac{{\rm Im}[\chi(\bm q, \omega_{0})]}{\omega_{0}},
\label{eq:T1}
\end{equation}
where $A_{\bm q}$ is a wave-vector (${\bm q}$)-dependent hyperfine-coupling constant, $\chi({\bm q},\omega)$ is the dynamical spin susceptibility, and $\omega_0$ is the NMR frequency.
The $T$ dependence of $1/T_1$ exhibits a peak at $T_{\rm N}$ because the low-energy spectral weight in $\chi({\bm q}={\bm Q},\omega)$ is strongly enhanced at $\omega_0 \sim 0$ in association with the divergence of the magnetic correlation length at $T \sim T_{\rm N}$. Here, ${\bm Q}$ is the AFM wave vector ($\pi$, $\pi$). In the case of Hg1245(OPT)$\sharp$1, for instance, the AFM order at IP was detected by the $^{63}$Cu-NMR $1/T_1$ at OP, which shows a peak at $T_{\rm N}$=60 K, as shown in Fig. \ref{fig:spectraHg}(b)\cite{Kotegawa2004}. Likewise, the $T_{\rm N}$s at IP were 55 and 45 K for Hg1245(OPT)$\sharp2$\cite{Mukuda2008} and Tl1245(OVD)\cite{Kotegawa2004,Mukuda2008,Mukuda2007PhysC}, respectively. Note that the AFM order below $T_{\rm N}=60$ K in Hg1245(OPT)$\sharp$1 was also confirmed by $\mu$SR measurements\cite{muSR,muSR2}. Furthermore, the onset of AFM order at IP was corroborated by an upturn in $K^{ab}_s(T)$ at OP upon cooling below the $T_{\rm N}$s in Hg1245(OPT)$\sharp1$, Hg1245(OPT)$\sharp2$, and Tl1245(OVD), as marked by upward arrows in the lower panel of Fig.~\ref{fig:summary}.
As a result, the $T_{\rm N}$s for the more underdoped compounds Hg1245(UD)$\sharp$1, Hg1245(UD)$\sharp$2, and Hg1245(UD)$\sharp$3 were evaluated from the temperature below which $K^{ab}_s(T)$ at OP exhibits an upturn. As expected, $T_{\rm N}$ at IP increases as $p$ decreases: $T_{\rm N}\sim$110, 150, and 180 K for Hg1245(UD)$\sharp$3, Hg1245(UD)$\sharp$2~\cite{Mukuda2010,Tabata}, and Hg1245(UD)$\sharp$1~\cite{Tabata}, respectively.
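The peak in $1/T_1$ at $T_{\rm N}$ can be illustrated with a purely phenomenological toy model (ours, not a fit to the data), in which the low-energy weight of $\chi({\bm Q},\omega_0)$ entering eq.~(\ref{eq:T1}) is enhanced as $1/(|T-T_{\rm N}|+\Gamma)$ near the transition; $T_{\rm N}$=60 K and $\Gamma$=5 K are arbitrary illustrative numbers.

```python
# Purely phenomenological toy (ours, not a fit): near T_N the low-energy weight
# of chi(Q, omega_0) is enhanced as the correlation length diverges, so 1/T1
# passes through a peak at T_N; model the enhancement as 1/(|T - T_N| + Gamma).
import numpy as np

def one_over_T1(T, T_N=60.0, Gamma=5.0):
    return T / (np.abs(T - T_N) + Gamma)   # arbitrary units

T = np.arange(1.0, 201.0, 1.0)
print(T[np.argmax(one_over_T1(T))])        # peak at T = T_N = 60 K
```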
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig9.eps}
\end{center}
\caption{(Color online) (a) $T$ dependences of $^{199}$Hg-NMR spectrum and its full-width at half-maximum (FWHM) when the $B \perp$ $c$-axis for $n$=5:Hg1245(OPT)$\sharp$1. The solid line indicates the fitting of FWHM to the relation $M_{\rm AFM}(T)\propto (1-(T/T_N)^{3/2})^{1/2}$, which is the theoretical prediction for weak itinerant AFM metals\cite{MoriyaAFM}.
(b) $T$ dependences of short (triangles) and long (squares) components in $^{63}$Cu-NMR $1/T_1$ at OP. A peak in $1/T_1$ points to $T_{\rm N}$=60 K, below which the FWHM at the Hg site increases owing to the development of $M_{\rm AFM}$(IP).\cite{Kotegawa2004}
[cited from ref.\cite{Kotegawa2004}]
}
\label{fig:spectraHg}
\end{figure}
The $T$ variation of $M_{\rm AFM}$ below $T_{\rm N}$ was indirectly probed on the basis of the $T$ dependence of the internal field at nuclear sites in charge reservoir layers.
Figure \ref{fig:spectraHg}(a) shows the $T$ dependence of the $^{199}$Hg-NMR spectrum for $B \perp c$-axis and its full width at half maximum (FWHM) for Hg1245(OPT)$\sharp$1 with $T_{\rm c}$=108 K and $T_{\rm N}$=60 K.
The FWHM increases rapidly below $T_{\rm N}$= 60 K, probing the development of the internal field at the $^{199}$Hg site induced by the onset of $M_{\rm AFM}$(IP). The $T$ dependence of $M_{\rm AFM}$(IP) was close to the theoretical prediction for weak itinerant AFM metals \cite{MoriyaAFM}, as indicated by the solid line in Fig.~\ref{fig:spectraHg}(a).
Note that the FWHM of the $^{199}$Hg-NMR spectrum increases markedly below $T_{\rm A}$= 25 K; the origin of this anomaly has not yet been identified\cite{Kotegawa2004}.
Recently, in the apical-F multilayered compound $n$=5:0245F with $T_{\rm c}$=52 K, which is more underdoped than $n$=5:Hg1245(UD)$\sharp$1 with $T_{\rm c}$=72 K, clear evidence of the $T$ evolution of $M_{\rm AFM}$ has been presented below $T_{\rm N}$=175 K, along with an SC diamagnetic shift below $T_{\rm c}$=52 K through $^{19}$F-NMR studies\cite{Shimizu2011JPSJ}.
The ZF Cu-NMR study revealed that $M_{\rm AFM}$(IP)=0.20 $\mu_{\rm B}$ and $M_{\rm AFM}$(OP)=0.14 $\mu_{\rm B}$ at 1.5 K.
As shown in Fig.~\ref{fig:Fdata}, the internal field $|B_{\rm int}^c({\rm F})|$ at the apical-F site for the $B\parallel$ $c$-axis, which was evaluated from the splitting of the $^{19}$F-NMR spectra, increases significantly below $T_{\rm N}=$ 175 K \cite{HF_Fsite}, at which the $^{19}$F-NMR $1/T_1$ also exhibits a peak.
The $T$ dependence of $|B_{\rm int}^c({\rm F})|$ at the apical-F site for $B\parallel$ $c$-axis was roughly reproduced down to $T_{\rm c}$= 52 K by either $B_{\rm int}^c$(F)$\propto$ $M_{\rm AFM}(T)$=$M_{\rm AFM}(0)(1-(T/T_N)^{3/2})^{1/2}$ (solid line)\cite{MoriyaAFM} or $\propto(1-(T/T_N))^{1/2}$ (dotted line) used for slightly doped LSCO compounds~\cite{Borsa}.
These results establish the three-dimensional long-range AFM order below $T_{\rm N}$=175 K in $n$=5:0245F, which undergoes the SC transition below $T_{\rm c}$=52 K.
Here, we note that $B_{\rm int}^c$(F) shows an additional increase as $T$ decreases below $T_{\rm c}$; the increase in $B_{\rm int}^c$(F) below $T_{\rm c}$ implies that the onset of an SC order parameter is actually coupled with $M_{\rm AFM}$(OP) in the HTSC-AFM coexisting state.\cite{Shimizu2011JPSJ}
This result gives convincing evidence that the HTSC below $T_{\rm c}$=52 K emerges against the background of the AFM order that sets in below $T_{\rm N}$=175 K; hence, the two orders coexist.
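For reference, the two order-parameter forms used for the fits of $|B_{\rm int}^c({\rm F})|$ in Fig.~\ref{fig:Fdata} can be compared numerically (our sketch; the temperatures are representative points, not data):

```python
# Our sketch: the two order-parameter forms used to fit B_int(T) at the apical-F
# site; Moriya's weak itinerant AFM form lies above the (1 - T/T_N)^(1/2) form
# everywhere below T_N = 175 K.
import math

T_N = 175.0

def moriya(T):    # M(T)/M(0) = (1 - (T/T_N)^(3/2))^(1/2)
    return math.sqrt(max(0.0, 1.0 - (T / T_N) ** 1.5))

def mf_like(T):   # M(T)/M(0) = (1 - T/T_N)^(1/2)
    return math.sqrt(max(0.0, 1.0 - T / T_N))

for T in (52.0, 120.0, 170.0):   # representative temperatures (Tc, mid, near T_N)
    print(T, round(moriya(T), 3), round(mf_like(T), 3))
```

The two forms differ most just below $T_{\rm N}$, which is where the data in Fig.~\ref{fig:Fdata} discriminate between them.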
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig10.eps}
\end{center}
\caption{(Color online) (a) $T$ dependence of the internal field at apical F site ($|B_{\rm int}^c({\rm F})|$) for $n$=5:0245F with $T_{\rm c}$=52 K and $T_{\rm N}$=175 K.\cite{Shimizu2011JPSJ} $|B_{\rm int}^c({\rm F})|$ was estimated from the splitting of the $^{19}$F-NMR spectra in the $B\parallel$ $c$-axis. (b) $T_{\rm N}$=175 K was determined by a peak of $^{19}$F-NMR $1/T_1$. The solid and dotted lines represent $B_{\rm int}^c$(F)$\propto (1-(T/T_N)^{3/2})^{(1/2)}$,\cite{MoriyaAFM} and $\propto(1-T/T_N)^{(1/2)}$,\cite{Borsa} respectively. [cited from ref.\cite{Shimizu2011JPSJ}]
}
\label{fig:Fdata}
\end{figure}
\subsubsection{Pseudogap behavior in $n$=5 compounds}
Since the discovery of high-$T_{\rm c}$ cuprates, anomalous normal states exhibiting a pseudogap behavior have been one of the most important subjects in the research on HTSC. Initially, a gaplike behavior was reported as a gradual suppression of $1/T_1T$ below $T^{*}$~\cite{Yasuoka}, which has been called the {\it spin gap}.
Neutron scattering experiments in underdoped regions also showed that spin excitations at low energies are suppressed in the normal state~\cite{Rossat}. Then, angle-resolved-photoemission spectroscopy (ARPES) experiments directly identified the existence of an energy gap in electronic spectra even above $T_{\rm c}$~\cite{Loeser,Ding}.
This gap observed in ARPES below $T^{*}$ turned out to have the same angular dependence as the $d$-wave SC gap in the Brillouin zone~\cite{Ding,Harris}.
After these experimental observations of single-particle spectra, the gap has been called the {\it pseudogap}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig11.eps}
\end{center}
\caption{(Color online) $T$ dependence of $^{63}$Cu-NMR $1/T_1T$ at IP and OP in $n$=5 compounds\cite{Kotegawa2001,Kotegawa2004,Mukuda2007PhysC,Mukuda2008}. The data are presented from the most underdoped IP (top) to the heavily overdoped OP (bottom). The $T^{*}$ of the $n$=5 compounds increases as $p$ decreases, but the underdoped IPs, which show the AFM order below 60 K (Hg1245(OPT)$\sharp1$) and 45 K (Tl1245(OVD)), show no indication of a pseudogap above 140 K. This suggests that the {\it spin gap} collapses in the AFM-HTSC mixed state for $p < p_c(n)$. [cited from refs. \cite{Kotegawa2001,Kotegawa2004,Mukuda2007PhysC,Mukuda2008}]
}
\label{fig:n5_PG}
\end{figure}
Figure \ref{fig:n5_PG} indicates the $T$ dependence of $^{63}$Cu-NMR $1/T_1T$ at IP and OP of the $n$=5 compounds for the $B\parallel$ $c$-axis\cite{Kotegawa2001,Kotegawa2004,Mukuda2007PhysC,Mukuda2008}. The $1/T_1T$ at OP for Hg1245(OPT)$\sharp1$ starts to decrease upon cooling below $T^*$=160 K \cite{Kotegawa2004}. As shown in Fig.~\ref{fig:PhaseDiagram_n5}, $T^{*}$ decreases as $p$ increases as is also observed in single- and bilayered compounds~\cite{Yasuoka,REbook,IshidaBi2212,ZhengPG}.
Note, however, that at the underdoped IPs of Hg1245(OPT)$\sharp1$ and Tl1245(OVD), which exhibit AFM order below $T_{\rm N}$=60 and 45 K, respectively, $1/T_1T$ shows no indication of pseudogap behavior above 140 K.
These results reveal that low-energy spectral weights in $\chi(Q,\omega)$ at IP of Hg1245(OPT)$\sharp1$ and Tl1245(OVD) are critically enhanced around $\omega\rightarrow0$ toward the AFM order. As a result, the NMR spectra at IP are lost below 140 K owing to the extremely short nuclear relaxation times. By contrast, $1/T_1T$ at the slightly overdoped IP for Cu1245(OVD) shows the pseudogap state below $T^*$=145 K~\cite{Kotegawa2001,Kotegawa2004,Mukuda2007PhysC}.
We thus highlight that the {\it spin gap} collapses in the underdoped region where the AFM order emerges.
Recently, a similar $p$ dependence of $T^*$ has been observed in $n$=3 compounds \cite{Shimizu2011_n3}.
This pseudogap behavior is also extensively discussed in $\S$ 4.4.
\subsubsection{Phase diagram in $n$=5 compounds}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig12.eps}
\end{center}
\caption{(Color online) Phase diagram of AFM and HTSC at homogeneously doped CuO$_2$ plane in $n$=5 compounds. $T_{\rm N}$, $T_{\rm c}$, and $T^*$ are plotted against the hole density $p$\cite{Shimizu2011PRB}. The solid and empty circles correspond to the data for IP and OP, respectively.
The AFM metallic phase is robust and uniformly coexists with the HTSC state up to a quantum critical density $p_c(5)\sim$ 0.1 at which AFM order collapses. $T_{\rm c}$ exhibits a maximum at $p(T_{\rm c}^{\rm max})\sim$ 0.16. Here, PM and PG denote the paramagnetic phase and pseudogap state, respectively. The star denotes the pseudogap temperature $T^*$. [cited from refs.\cite{MukudaPRL2006,Mukuda2008,Shimizu2011PRB}]
}
\label{fig:PhaseDiagram_n5}
\end{figure}
Figure \ref{fig:PhaseDiagram_n5} shows a novel phase diagram for $n$=5 compounds. Here, $T_{\rm N}$ and $T_{\rm c}$ are plotted as functions of $p$, which is estimated from the relationship of $K_{\rm s}^{ab}$(RT) vs $p$ discussed in $\S$ 2.4.
The characteristic features are summarized as follows: (i) The AFM {\it metallic} phase (AFMM) is robust up to $p \sim$0.1 and uniformly coexists with the HTSC state up to a quantum critical density $p_c(n$=$5)\sim$ 0.1 at which the AFM order collapses. (ii) $T_{\rm c}$ has a peak at $p\sim$0.16 after the AFM order collapses.
These findings suggest the intimate relationship between HTSC and AFM.
(iii) The $p$ dependence of $T^*$, below which $1/T_1T$ starts to decrease, indicates that the {\it spin gap} collapses in the AFM-HTSC mixed state. These features of the phase diagram differ significantly from the well-established phase diagram of LSCO \cite{Keimer}, in which the AFM and HTSC phases are separated by a spin-glass phase associated with the Anderson localization mechanism (see Fig. \ref{fig:PhaseDiagram}).
\subsection{Phase diagram in $n$=4 compounds}
Since it was difficult to vary the carrier density widely in $n$=4:Hg1234~\cite{Itohara}, we used Ba$_2$Ca$_3$Cu$_4$O$_8$(F$_y$O$_{1-y}$)$_2$ ($n$=4:0234F)~\cite{Iyo1} as the $n$=4 compounds. The substitution of O$^{2-}$ for F$^{-}$ at the apical site increases $p$ and raises $T_{\rm c}$ from 55 K up to 102 K~\cite{Iyo1,Shimizu2009JPSJ}. As shown in Figs.~\ref{fig:summary_n4}(a)-\ref{fig:summary_n4}(d), systematic $^{63}$Cu- and $^{19}$F-NMR studies have revealed the SC and AFM characteristics of $n$=4:0234F samples denoted as 0234F($\sharp$1), 0234F($\sharp$2), 0234F($\sharp$3), and 0234F($\sharp$4), with nominal contents $y$= 0.6, 0.7, 0.8, and 1.0, respectively. The measurements of $K_{\rm s}^{ab}$ shown in Figs.~\ref{fig:summary_n4}(e)-\ref{fig:summary_n4}(h) enable us to estimate $p$ at IP and OP for these samples\cite{Shimizu2011PRB}. In fact, $p$ increases progressively with decreasing $y$.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig13.eps}
\end{center}
\caption{(Color online)
Illustration of layer-dependent physical properties for $n$=4:0234F: (a) $\sharp4(y\sim$1.0), (b) $\sharp3(y\sim$0.8), (c) $\sharp2(y\sim$0.7), and (d) $\sharp1(y\sim$0.6). The middle panels present tables of the hole densities of $p$(IP) and $p$(OP), and the AFM ordered moments of $M_{\rm AFM}$(IP) and $M_{\rm AFM}$(OP) (see text). The lower panels show the $T$ dependences of $K_{\rm s}^{ab}(T)$s, which enable us to separately estimate $p$s for IP and OP, and to probe the onset of AFM and HTSC at IP and OP.
[cited from refs.\cite{Shimizu2009JPSJ,Shimizu2011PRB}]
}
\label{fig:summary_n4}
\end{figure}
Figures~\ref{fig:zero}(a)-\ref{fig:zero}(d) show the Cu-NQR/ZF-NMR spectra at $T$=1.5 K for $n$=4:0234F: (a) $\sharp1(y\sim$0.6), (b) $\sharp2(y\sim$0.7), (c) $\sharp3(y\sim$0.8), and (d) $\sharp4(y\sim$1.0).
For $\sharp1$, two NQR spectra revealed that $^{63}\nu_{\rm Q}$(IP)=9.7 MHz and $^{63}\nu_{\rm Q}$(OP)=15 MHz, which are comparable to those of other paramagnetic multilayered cuprates\cite{Julien,MagishiPRB,Itohara}. For $\sharp2$, the NQR spectrum at OP is observed at $^{63}\nu_{\rm Q}$(OP)=15 MHz. By contrast, the NQR spectral intensity at IP(i), observed at 9.1 MHz, is significantly smaller than the intensity of the ZF-NMR spectrum at IP(ii). The latter spectrum probes an internal field $B_{\rm int}$=1.5 T associated with the onset of AFM order with $M_{\rm AFM}$(IP)$\sim$ 0.08 $\mu_{\rm B}$. These results indicate that the IPs of $\sharp2$ undergo a phase separation into paramagnetic and AFM phases owing to the closeness to $p_c$, at which the AFM order collapses.
For $\sharp3$, the observation of the NQR spectrum at OP indicates that a spontaneous AFM moment is absent, whereas the NMR spectrum at approximately 28 MHz at IP probes $B_{\rm int}\sim$ 2.4 T and hence $M_{\rm AFM}$(IP)$\sim0.12$ $\mu_{\rm B}$.
Figure~\ref{fig:zero}(d) shows that the Cu-ZF-NMR spectra at IP and OP are observed at approximately 45 and 30 MHz for $\sharp4$ with $T_{\rm c}$=55 K, which allows us to estimate $B_{\rm int}\sim$ 3.8 T and 2.7 T and hence $M_{\rm AFM}$(IP)$\sim$ 0.18$\mu_{\rm B}$ and $M_{\rm AFM}$(OP)$\sim$ 0.11$\mu_{\rm B}$, respectively.
Note that the absence of any trace of NQR spectra excludes the possibility of phase separation into paramagnetic and AFM phases.
Therefore, the OP, which is mainly responsible for the HTSC with $T_{\rm c}$= 55 K, also manifests the AFM order, indicating that AFM with a spontaneous moment $M_{\rm AFM}$=$0.11 \mu_{\rm B}$ and HTSC with $T_{\rm c}$= 55 K uniformly mix in the OP.
Taking into account the ARPES experiment on this compound ($\sharp4$), which observed SC gaps on the Fermi sheets of both IP and OP\cite{YChen}, we deduce that the AFM order coexists uniformly with HTSC at both IP and OP for $\sharp4$\cite{KitaokaIOP}.
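As a consistency check (ours), converting the internal fields read off the ZF-NMR spectra of $\sharp4$ back into ordered moments with $B_{\rm int}$=$|A_{ab}-4B|M_{\rm AFM}$ and the couplings quoted earlier ($A_{ab}\approx$3.7, $B$(IP)$\approx$6.1, $B$(OP)$\approx$7.4 T/$\mu_{\rm B}$) reproduces the values listed above:

```python
# Our consistency check: convert the internal fields read off the ZF-NMR
# spectra of 0234F(#4) back into ordered moments via B_int = |A_ab - 4B|*M_AFM,
# with the hyperfine couplings quoted earlier for multilayered compounds.
A_ab = 3.7   # T/mu_B, on-site hyperfine coupling
B_IP = 6.1   # T/mu_B, transferred coupling at IP
B_OP = 7.4   # T/mu_B, transferred coupling at OP

def M_from_Bint(B_int, B_transfer):
    return B_int / abs(A_ab - 4.0 * B_transfer)   # moment in mu_B

print(round(M_from_Bint(3.8, B_IP), 2))   # IP: ~0.18 mu_B
print(round(M_from_Bint(2.7, B_OP), 2))   # OP: ~0.1 mu_B (cf. the quoted ~0.11)
```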
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig14.eps}
\end{center}
\caption{\footnotesize (Color online) Cu-NQR/ZFNMR spectra at $B_{\rm ext}$=0 T and $T$=1.5 K for $n$=4:0234F: (a) $\sharp1$, (b) $\sharp2$, (c) $\sharp3$, and (d) $\sharp4$.
The $T$ dependences of $^{19}$F-NMR ($1/T_1$)s for (e) $\sharp1$, (f) $\sharp2$, (g) $\sharp3$, and (h) $\sharp4$ at $f$=174.2 MHz and $B \parallel$ $c$-axis. The AFM moments and $T_N$s for OPs and IPs are evaluated from these results (see text). [cited from ref.\cite{Shimizu2009JPSJ}]
}
\label{fig:zero}
\end{figure}
The $T_{\rm N}$s of $\sharp2$, $\sharp3$, and $\sharp4$ were determined by $^{19}$F-NMR $1/T_1$ measurements with $B \parallel$ $c$-axis, as presented for all the samples in Figs.~\ref{fig:zero}(e)-\ref{fig:zero}(h).
In the present case, $^{19}(1/T_1)$ is dominated by magnetic fluctuations~\cite{Shimizu2009JPSJ}.
Figures~\ref{fig:zero}(f) and \ref{fig:zero}(g) show the $T$ dependences of $^{19}(1/T_{1})$ for $\sharp2$ and $\sharp3$, which exhibit peaks at approximately 30 and 50 K, respectively. These peaks are associated with the onset of AFM order at $T_{\rm N}$=30 K at IP(ii) for $\sharp2$ and at $T_{\rm N}$=50 K at IP for $\sharp3$, accompanied by $M_{\rm AFM}$(IP(ii))$\sim$0.08 $\mu_{\rm B}$ and $M_{\rm AFM}$(IP)$\sim$0.12 $\mu_{\rm B}$, respectively. Unexpectedly, the $1/T_1$ of $\sharp4$ shows two peaks, at $T_{\rm N} \sim$ 80 K and $T'_{\rm N}\sim$ 30 K, as shown in Fig.~\ref{fig:zero}(h). Since $p$(IP) $<$ $p$(OP) and $M_{\rm AFM}$(IP) $>$ $M_{\rm AFM}$(OP), the $T_{\rm N}$ inherent to the IPs is likely to be higher than that at the OP.
The AFM order inherent to the OP may develop below $T'_{\rm N}\sim$ 30 K. Note that $T'_{\rm N}\sim$ 30 K at the OP of $\sharp4$ is comparable to $T_{\rm N}\sim$ 30 K at IP(ii) of $\sharp2$, because the $p$s are almost the same in both cases.
Figure~\ref{fig:PD} shows the phase diagram of AFM and HTSC for $n$=4:0234F. Here, $T_{\rm c}$ and $T_{\rm N}$ at OP and IP are plotted against $p$\cite{Shimizu2011PRB}. This phase diagram resembles that of the $n$=5 compounds. In particular, the uniformly mixed phase of AFM with $T_{\rm N}$=30 K and HTSC with $T_{\rm c}$=55 K was observed at the OP of $\sharp4$, demonstrating that this is a universal phenomenon inherent to a single CuO$_2$ plane in the underdoped regime.
In the phase diagram of the $n$=4 compounds presented in Fig.~\ref{fig:PD}, $p_c(4)$ at which the AFM order collapses is extrapolated to 0.08, which is smaller than $p_c(5)\simeq$0.1 for the $n$=5 compounds. As discussed in $\S$ 4.1, this is because the interlayer magnetic coupling becomes weaker as the number $n$ of CuO$_2$ layers decreases.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig15.eps}
\end{center}
\caption{\footnotesize (Color online) Phase diagram of AFM and HTSC at homogeneously doped CuO$_2$ plane in $n$=4 compounds. $T_{\rm N}$ and $T_{\rm c}$ are plotted against the hole density $p$.
The solid and empty circles correspond to the data for IP and OP, respectively.
Note that the uniform mixing of AFM with $T_{\rm N}$=30 K and HTSC with $T_{\rm c}$=55 K takes place at OP of $\sharp4$.
A critical hole density for the AFM order in $n$=4 compounds is extrapolated to $p_c(4)\sim$ 0.08, which is lower than that for the $n$=5 compounds. The pseudogap phase of the $n$=4 compounds has not been determined yet. [cited from refs.\cite{Shimizu2009JPSJ,Shimizu2011PRB}]
}
\label{fig:PD}
\end{figure}
Here, we comment on 0234F($\sharp4$), which attracted much attention in the initial stage of studies~\cite{YChen,OK,Shimizu2007} because a {\it self-doping} mechanism had been proposed, in which charge carriers are transferred between IP and OP within a unit cell. In this scenario, the formal Cu valence was assumed to be exactly 2+ in the ideal case of the nominal fluorine content (F$_2$), where the apical sites are fully occupied by F$^{-1}$. However, extensive investigations on 02($n$-1)$n$F with $n$=2, 3, and 4 have ruled out the possibility of a {\it self-doping} mechanism in these compounds~\cite{Shimizu2009PRB}.
The point was that an inevitable deviation from the nominal apical-fluorine F$^{-1}$ content results in the doping of hole carriers into both OP and IP.
\subsection{Phase diagram of $n$=3 compounds}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig16.eps}
\end{center}
\caption{\footnotesize (Color online) Illustration of layer-dependent physical properties for n=3:0223F: (a) $\sharp4$($y\sim$1.0), (b) $\sharp3$($y\sim$0.9), (c) $\sharp2$($y\sim$0.8), and (d) $\sharp1$($y\sim$0.6). (e) $T$ dependences of $K_{\rm s}^{ab}$ and $dK_{\rm s}^{ab}/dT$ for $\sharp4$. (f) Cu-NQR/ZFNMR spectrum for $\sharp4$ at 1.5 K, along with that for $\sharp1$ with no AFM order, revealing $M_{\rm AFM}$(IP)$\sim$ 0.12 $\mu_{\rm B}$ and $M_{\rm AFM}$(OP)=0 for $\sharp4$.
The middle panels present tables of the hole densities of $p$(IP) and $p$(OP), and the AFM ordered moments at IPs (see text).[cited from refs.\cite{Shimizu2009PRB,Shimizu2011_n3}]
}
\label{fig:summary_n3}
\end{figure}
The $^{63}$Cu- and $^{19}$F-NMR studies of three-layered Ba$_2$Ca$_2$Cu$_3$O$_6$(F$_y$O$_{1-y}$)$_2$ compounds with a nominal F content $y$ denoted as $n$=3:0223F have revealed the SC and AFM characteristics shown in Figs.~\ref{fig:summary_n3}(a)-\ref{fig:summary_n3}(d)~\cite{Shimizu2009PRB,Shimizu2011_n3}.
The $p$s at IP and OP determined by the relationship of $K_{\rm s}^{ab}$(RT) vs $p$ are listed in the middle panel of Fig.~\ref{fig:summary_n3}.
Figure \ref{fig:summary_n3}(e) shows the $T$ dependences of $K_{\rm s}^{ab}$ and $dK_{\rm s}^{ab}/dT$ for $\sharp4$.
Besides the bulk $T_{\rm c}$=76 K, $T_{\rm c}'$=60 K inherent in IP is tentatively deduced from a secondary peak in the $T$ dependence of $dK_{\rm s}^{ab}/dT$.
Figure~\ref{fig:summary_n3}(f) shows the Cu-NQR/ZFNMR spectrum of $\sharp4$, along with that of $\sharp1$; the spectra are observed at each NQR frequency, revealing no AFM order at either layer of $\sharp1$.
In $\sharp4$, the spectrum observed at 13.6 MHz corresponds to the $\nu_{Q}$ at OP, which is slightly lower than that at OP of $\sharp1$ because of the lower doping level\cite{Ohsugi,Zheng,Haase}.
On the other hand, the spectrum of IP is observed at a frequency much higher than the NQR frequency, which probes $B_{\rm int}$ $\sim$ 2.4 T and hence $M_{\rm AFM}$(IP)$\sim$ 0.12 $\mu_{\rm B}$.
We note that a phase separation into the AFM and paramagnetic phases is excluded in the IP of $\sharp4$, because no paramagnetic NQR spectrum for IP was observed.
The $T_{\rm N}$=23 K at IP was determined from the peak in $^{19}$F-NMR $1/T_1$ (see ref.\cite{Shimizu2009PRB}).
Details of systematic NMR experiments on $\sharp1$, $\sharp2$, and $\sharp3$ were published elsewhere \cite{Shimizu2011_n3}.
As a consequence, we have unraveled a phase diagram of the $n$=3:0223F compounds\cite{Shimizu2011_n3}, as shown in Fig. \ref{fig:PD_n3}.
The AFM order emerges at IPs of $\sharp3$ and $\sharp4$ with $p\le$ 0.075, suggesting that the critical hole density of AFM order for $n$=3 compounds $p_c(3)$ is close to 0.075, which is lower than $p_c(4)\simeq$0.08 for the $n$=4 compounds and $p_c(5)\simeq$0.10 for the $n$=5 compounds.
These AFM ordered states at the IPs appear on the background of HTSC with $T_{\rm c}'$= 60 K, indicating that the uniformly mixed state of AFM and HTSC emerges universally in a single CuO$_2$ layer in the underdoped region when the interlayer magnetic coupling is strong enough to stabilize an AFM order.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig17.eps}
\end{center}
\caption{\footnotesize (Color online) Phase diagram of $n$=3 compounds. The solid and empty circles correspond to the data for IP and OP, respectively. The critical hole density of AFM order for the $n$=3 compounds is extrapolated to $p_c(3)\sim$ 0.075, which is lower than that for the $n$=4 and $n$=5 compounds.
The filled and open stars represent the pseudogap temperature $T^{*}$ determined from the peak of $^{63}$Cu-$(1/T_1T)$ for n=3:0223F\cite{Shimizu2011_n3} and $M$1223\cite{Julien,Kotegawa2002,Shimizu2011_n3}, respectively. This suggests that the {\it spin gap} collapses in the AFM-HTSC mixed state for $p < p_c(3)$.
[cited from refs.\cite{Shimizu2009PRB,Shimizu2011_n3}]
}
\label{fig:PD_n3}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig18.eps}
\end{center}
\caption{\footnotesize (Color online) $T$ dependences of $^{63}$Cu-NMR 1/$T_1T$ at OP and IP for $\sharp$3 at $B\perp c$ and $f$=174.2 MHz\cite{Shimizu2011_n3}. $1/T_1T$ at IP continues to increase upon cooling towards the AFM order at $T_{\rm N}\sim$ 10 K, whereas $1/T_1T$ at OP exhibits a pseudogap behavior with $T^* \sim$150 K. The $p$ dependence of $T^*$ for $n$=3 compounds is shown in Fig. \ref{fig:PD_n3}.
The inset shows a plot of $T_1T$ vs $T$ to reveal the Curie-Weiss behavior at high temperatures as $1/T_1T$=$C/(T+\theta)$. As for IP, $\theta$=-10 K was obtained, which corresponds to the peak of $1/T_1T$ at OP, that is, $T_N\sim$10 K at IP \cite{Shimizu2011_n3}.
[cited from ref.\cite{Shimizu2011_n3}]
}
\label{fig:n3_PG}
\end{figure}
To probe the pseudogap behavior of the $n$=3 compounds, $^{63}$Cu-NMR $1/T_1T$ measurements were performed on $\sharp$1, $\sharp$2, and $\sharp$3 \cite{Shimizu2011_n3}.
Figure \ref{fig:n3_PG} shows the $T$ dependence of $1/T_1T$ for $\sharp$3, which starts to decrease upon cooling below $T^* \sim$ 150 K for OP. On the other hand, the $1/T_1T$ at IP in the same sample continues to increase upon cooling down to 200 K, and the NMR spectrum at IP is lost below 200 K because of extremely short nuclear relaxation times due to the critical enhancement of AFM spin fluctuations.
Taking into account the doping level ($p\sim$0.073), one would expect $T^*$ to be approximately 200 $\sim$ 250 K if no AFM order occurred, as deduced from the single- and bilayered cuprates presented in Fig.~\ref{fig:PG}; however, no peak of $1/T_1T$ was observed above 200 K.
Instead, the inset of Fig. \ref{fig:n3_PG} indicates that the $1/T_1T$ for IP at high temperatures follows the Curie-Weiss law as $1/T_1T$=$C/(T+\theta)$ with $\theta \sim -$ 10 K.
This indicates that the {\it out-of-plane} magnetic interaction, which is responsible for the onset of the AFM order at $T_{\rm N}\sim$ 10 K, causes the {\it in-plane} AFM correlation to develop further without opening a gap in the low-energy spectral weight of AFM excitations.
Although $1/T_1T$ is expected to decrease owing to the opening of the SC gap, we consider that the low-energy part of the AFM excitations at IP continues to grow below $T_c$ down to $T_{\rm N}\sim$ 10 K. Indeed, the peak at $T\sim 10$ K observed in the $1/T_1T$ at OP points to a divergence of the $1/T_1T$ at IP toward the AFM order at $T_{\rm N}\sim$ 10 K through the supertransferred hyperfine coupling between the $^{63}$Cu nuclei at OP and the Cu-derived spins at IP. Noting that $\theta$ coincides with $T_{\rm N}$, we suppose that a {\it spin gap} does not open between 10 K and 200 K, as far as the low-energy part of the AFM excitations in an energy region smaller than the SC gap is concerned.
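The Curie-Weiss analysis in the inset of Fig.~\ref{fig:n3_PG} can be made concrete: since $1/T_1T$=$C/(T+\theta)$, the product $T_1T$ is linear in $T$, so a straight-line fit of $T_1T$ vs $T$ yields $\theta$ as (intercept)/(slope). A minimal sketch with synthetic, noise-free data (the value of $C$ and the temperature grid are arbitrary illustrations, not the measured data):

```python
# Curie-Weiss analysis of 1/T1T = C/(T + theta): T1T is linear in T,
# T1T = T/C + theta/C, so theta = intercept/slope of a straight-line fit.
# The data below are synthetic and noise-free, for illustration only.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def weiss_theta(temps, inv_t1t):
    """Weiss temperature theta (K) from 1/T1T data."""
    t1t = [1.0 / v for v in inv_t1t]
    slope, intercept = fit_line(temps, t1t)
    return intercept / slope

C, THETA = 50.0, -10.0  # synthetic parameters (theta = -10 K as in the inset)
temps = [100.0, 150.0, 200.0, 250.0, 300.0]
data = [C / (T + THETA) for T in temps]
```

With noise-free input, the fit recovers $\theta$=$-$10 K exactly; on real data, the same linearization is applied to the high-temperature part of $1/T_1T$.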
\subsection{AFM and HTSC in $n$=8 compound}
To gain further insight into the $n$ dependence of $p_c(n)$ at which the AFM order collapses, we have investigated the magnetic and SC properties of $n$=8:Hg1278(OPT) with $T_{\rm c}$=103 K~\cite{Yamaguchi}. Here, note that the interlayer magnetic coupling is expected to be larger than in the other multilayered cuprates.
As indicated in Figs.~\ref{fig:Tc_n}(a) and \ref{fig:Tc_n}(b), the $c$-axis length increases progressively with $n$, whereas $T_{\rm c}$ is almost constant for $n > 6$. The sample used for the NMR study comprises an almost single phase of $n$=8:Hg1278, although a small number of stacking faults along the $c$-axis are inevitably contained in layered cuprates with $n > 6$~\cite{Iyo_TcVsn,Iyo_Hg_F}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig19.eps}
\end{center}
\caption{\footnotesize (Color online) $T$ dependence of $K_{\rm s}^{ab}$ at OP in nearly optimally doped Hg12($n$-1)$n$ series: (a) $n$=4:Hg1234(OPT) with $T_{\rm c}$=123 K\cite{Kotegawa2001}, (b) $n$=5:Hg1245(OPT)$\sharp$1 with $T_{\rm c}$=108 K\cite{Kotegawa2004}, and (c) $n$=8:Hg1278(OPT) with $T_{\rm c}$=103 K.
The broken curves in (b) and (c) show the $T$ dependence of $K_{\rm s}^{ab}$ at OP of Hg1234(OPT), which shows no AFM order.
The Cu-NQR/ZFNMR spectra at $B_{\rm ext}$=0 and $T$=1.5 K for (d) $n$=4:Hg1234(OPT), (e) $n$=5:Hg1245(OPT)$\sharp$1\cite{Kotegawa2004}, and (f) $n$=8:Hg1278(OPT). The spectra indicated by the symbol ($\ast$) arise from the AFM-Mott insulator CaCuO$_2$(IL), which is contaminated during the high-pressure synthesis process~\cite{Takano}.
The inset at the bottom illustrates the physical properties of $n$=8:Hg1278(OPT) with $T_{\rm c}$=103 K. [cited from ref.\cite{Yamaguchi}]
}
\label{fig:n8}
\end{figure}
Figures~\ref{fig:n8}(a)-\ref{fig:n8}(c) show the $T$ dependence of $K_{\rm s}^{ab}$ at OP of (c) $n$=8:Hg1278(OPT), together with (a) $n$=4:Hg1234(OPT) and (b) $n$=5:Hg1245(OPT)$\sharp$1, which are as-prepared samples in an optimally doped regime.
The $K_{\rm s}^{ab}$(RT) at OP is comparable to those for $n$=4:Hg1234(OPT) and $n$=5:Hg1245(OPT)$\sharp$1, pointing to $p($OP$)\sim$ 0.14 in these compounds.
This is also corroborated by the similar NQR frequencies at the OPs of these compounds, as discussed later.
Therefore, if all layers were in the paramagnetic state, the $T$ dependence of $K_{\rm s}^{ab}$ should be similar to that of $n$=4:Hg1234(OPT) as indicated by the broken curves in Figs.~\ref{fig:n8}(b) and \ref{fig:n8}(c).
By contrast, note that the $K_{\rm s}^{ab}$ at OP for $n$=5:Hg1245(OPT)$\sharp$1 deviates from this broken curve below $T_{\rm N}$=60 K at which the AFM order sets in at IPs.
Accordingly, the $T_{\rm N}$ of $n$=8:Hg1278(OPT) is tentatively deduced to be about 300 K, since $K_{\rm s}^{ab}$ deviates from the broken curve below 300 K, as shown in Fig.~\ref{fig:n8}(c).
Although $n$=8:Hg1278(OPT) comprises three crystallographically inequivalent IPs, no Cu-NMR spectra at IPs were observed up to 300 K, which suggests that AFM order emerges at all IPs.
To evaluate AFM moments of IPs, we present the Cu-NQR/ZF-NMR spectra of Hg1278(OPT) at $T$=1.5 K in Fig.~\ref{fig:n8}(f), which are compared with the spectra of (d) $n$=4:Hg1234(OPT) and (e) $n$=5:Hg1245(OPT)$\sharp$1. Since $\nu_Q$(OP)$\sim$ 16 MHz at OP for $n$=8:Hg1278(OPT) coincides with those for $n$=4 and $n$=5 compounds, the OP is paramagnetic.
The Cu-ZF-NMR spectrum at approximately 50$\sim$70 MHz can be assigned to IPs, although we cannot resolve three inequivalent IPs precisely from the present ZF-NMR spectra.
The $M_{\rm AFM}$ at those IPs may be tentatively estimated as $M_{\rm AFM}$(IPs)=0.24 $\sim$ 0.28$\mu_{\rm B}$.
The ZF-NMR spectra at approximately 120 $\sim$ 150 MHz arise from the AFM-Mott insulator CaCuO$_2$(IL), which contaminates the sample during the synthesis process under high-pressure and high-temperature conditions~\cite{Takano}.
Further extensive NMR measurements are desired for cuprates with more than six layers over a wide doping level.
Nevertheless, note that $T_{\rm c}$=103 K for $n$=8:Hg1278(OPT) remains high, even though the OPs that are responsible for HTSC are separated by a thick AFM block consisting of six IPs.
Given that SC Cooper pairs can tunnel between OPs through the AFM-ordered IPs by virtue of Josephson coupling, as predicted by a theoretical study of multilayered cuprates~\cite{Mori}, HTSC is expected to maintain relatively high and constant $T_{\rm c}$ values, as shown in Fig.~\ref{fig:Tc_n}(b).
More importantly, since the OP with $p$(OP)$\sim$ 0.14 remains paramagnetic, we remark that the critical hole density of the $n$=8 compounds, $p_c(8)$, does not reach 0.14 even when the interlayer magnetic coupling is sufficiently enhanced.
This result is consistent with the fact that $M_{\rm AFM}(p)$ at 1.5 K decreases monotonically as $p$ increases and is extrapolated to zero in the range of $p_c(M_{\rm AFM}$=$0) \le $ 0.14, as discussed in $\S$ 4.2 (see Fig.~\ref{Mvsp}).
\section{Discussion}
\subsection{Number $n$ of CuO$_2$ layer dependence of phase diagram of AFM and HTSC}
\begin{figure*}[t]
\centering
\includegraphics[width=12cm]{81002Fig20.eps}
\caption[]{\footnotesize (Color online) Phase diagrams of AFM and HTSC for $n$=1:LSCO~\cite{Keimer,JulienSG}, $n$=2:0212F~\cite{Shimizu2011PRB} and YBCO$_{6+x}$~\cite{Sanna,Coneri}, $n$=3:0223F~\cite{Shimizu2009PRB,Shimizu2011_n3}, $n$=4:0234F~\cite{Shimizu2007,Shimizu2009PRB,Shimizu2009JPSJ}, and $n$=5:$M$1245~\cite{Kotegawa2004,MukudaPRL2006,Mukuda2008,Mukuda2010,Tabata}. $p_{c}(n)$, at which an AFM order collapses, increases from $p_{c}(n)\sim$0.075 to 0.08 to 0.10 as $n$ increases from 3 to 4 to 5, respectively. The result on $n$=8 compound has revealed that $p_c(8)$ does not reach 0.14 even when an interlayer magnetic coupling is enhanced, which is also deduced from the fact that $M_{\rm AFM}$ decreases to zero in the range of $p_c(M_{\rm AFM}$=$0) \le $ 0.14 (see Fig. \ref{Mvsp}).
[cited from refs.\cite{Mukuda2010,KitaokaJPCS2011,KitaokaIOP}]
}
\label{fig:PhaseDiagram}
\end{figure*}
Figure~\ref{fig:PhaseDiagram} shows the phase diagrams of AFM and HTSC for $n$=1: LSCO~\cite{Keimer,JulienSG}, $n$=2: 0212F\cite{Shimizu2011PRB} and YBCO$_{6+x}$~\cite{Sanna,Coneri}, $n$=3:0223F~\cite{Shimizu2009PRB,Shimizu2011_n3}, $n$=4:0234F~\cite{Shimizu2007,Shimizu2009PRB,Shimizu2009JPSJ}, and $n$=5:$M$1245~\cite{Kotegawa2004,MukudaPRL2006,Mukuda2008,Mukuda2010,Tabata}.
The phase diagrams of the $n$-layered cuprates with $n$=3, 4, and 5 are characterized as follows:
(i) The AFM metallic phase is robust up to the optimally hole-doped region: the quantum critical point ($p_{c}$) at which the AFM order collapses increases from $p_{c}(n)\sim$0.075 to 0.08 to 0.10 as $n$ increases from 3 to 4 to 5, respectively.
(ii) The uniform coexistence of AFM and HTSC is the universal phenomenon inherent in a single CuO$_2$ plane in an underdoped region.
(iii) The maximum $T_{\rm c}$ takes place at approximately $p(T_{\rm c}^{\rm max})\sim$~0.16 irrespective of $n$.
\begin{figure}[tbp]
\centering
\includegraphics[width=7.5cm]{81002Fig21.eps}
\caption[]{\footnotesize (Color online) Schematic illustrations of magnetic couplings in $n$-layered cuprates: (a) interlayer magnetic coupling along $c$-axis and (b) in-plane superexchange interaction $J_{\rm in}$ among spins at nearest-neighbor Cu sites in two-dimensional lattice of CuO$_2$ plane. Here, $J_{\rm c}$ is the magnetic coupling between OPs through CRL, which is independent of $n$, and $J_{\rm out}(n)$ is the magnetic coupling between OPs through IPs, which increases with $n$. Since $J_{\rm in}$ is as large as 1300~K in undoped AFM-Mott insulators\cite{JinLSCO,JinYBCO1,JinYBCO2,TokuraJ}, an AFM order is stabilized when the interlayer magnetic coupling ($\sqrt{J_cJ_{\rm out}(n)}$) becomes stronger with increasing $n$.
}
\label{fig:interlayer}
\end{figure}
The phase diagrams of $n$-layered cuprates with $n$=3, 4, and 5 differ significantly from the well-established phase diagrams of $n$=1:LSCO and $n$=2:YBCO$_{6+x}$, where $p_c(1)\sim$ 0.02~\cite{Keimer} and $p_c(2)\sim$ 0.055~\cite{Sanna,Coneri}, respectively, as shown in Fig.~\ref{fig:PhaseDiagram}.
In fact, from the present NMR measurement, the phase diagram of $n$=2:0212F does not reveal an AFM order in the range of $p\ge 0.083$.\cite{Shimizu2011PRB} It is apparent that as $n$ increases from 1 to 5, $p_c(n)$ increases from 0.02 to 0.10.
The mother compounds for HTSC are characterized by a large in-plane superexchange interaction $J_{\rm in}\sim$1300 K among spins at nearest-neighbor Cu sites~\cite{JinLSCO,JinYBCO1,JinYBCO2,TokuraJ}.
However, since no long-range AFM order occurs at a finite temperature for an isolated two-dimensional (2D) system, the interlayer magnetic coupling along the $c$-axis, which depends on $n$, plays a crucial role in stabilizing an AFM order.
An effective interlayer magnetic coupling of $n$-layered cuprates is given as $\sqrt{J_cJ_{\rm out}(n)}$, where $J_{\rm c}$ is the magnetic coupling between OPs through CRL and $J_{\rm out}(n)$ is the magnetic coupling in a unit cell, as illustrated in Fig. \ref{fig:interlayer}(a).
Here, $J_{\rm c}$ is independent of $n$, but $J_{\rm out}(n)$ increases with increasing $n$.
In this context, it is the weak interlayer magnetic coupling that suppresses the static long-range AFM order in LSCO and YBCO$_{6+x}$ at such small carrier densities.
In contrast, the result for the $n$=8 compound has revealed that $p_c(8)$ does not reach 0.14 even when the interlayer magnetic coupling becomes sufficiently large.
It is likely that $p_c$ saturates at approximately 0.14 even in the strong limit of interlayer magnetic coupling expected for the $n$=$\infty$ compound, since $M_{\rm AFM}$ in the ground state is extrapolated to zero at $p_c(M_{\rm AFM}$=$0)\le$ 0.14, as discussed in the next section (see Fig. \ref{Mvsp}).
\subsection{Ground-state phase diagram of AFM and HTSC}
\begin{figure*}[t]
\centering
\includegraphics[width=14cm]{81002Fig22.eps}
\caption[]{\footnotesize (Color online) Ground-state phase diagram: $p$-dependence of $M_{\rm AFM}$ at $T$=1.5 K and SC gap ($\Delta_{\rm SC}$) for $n$=2:0212F\cite{Shimizu2011PRB}, $n$=3:0223F\cite{Shimizu2009PRB}, $n$=4:0234F\cite{Shimizu2009JPSJ}, and $n$=5:$M$1245\cite{Mukuda2008,Mukuda2010,Tabata}. Data of $M_{\rm AFM}$s for nondoped and slightly doped Mott insulating states are cited from $n$=$\infty$:Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$\cite{Vaknin1989} and $n$=2:YBCO$_{6+x}$\cite{Coneri}, respectively.
The AFM moment at the CuO$_2$ plane totally disappears in the ground state when $p$= 0.12$\sim$0.14 ($\equiv p_c(M_{\rm AFM}$=0)), which is extrapolated from the $p$ dependence of $M_{\rm AFM}$(solid curve).
$\Delta_{\rm SC}$ shows a maximum just outside of $p_c(M_{\rm AFM}$=0)$\sim$0.14 irrespective of $n$. Here, the $\Delta_{\rm SC}$ is estimated from the $T_{\rm c}$ values with the relation 2$\Delta_{\rm SC}$=$8k_{\rm B}T_{\rm c}$. [cited from refs.\cite{Mukuda2010,KitaokaJPCS2011,KitaokaIOP,Shimizu_n5}]
}
\label{Mvsp}
\end{figure*}
Since $T_{\rm N}$ depends on the strength of interlayer magnetic coupling, i.e., the number of CuO$_2$ layers, the temperature phase diagram against $p$ depends on $n$, as shown in Fig.~\ref{fig:PhaseDiagram}.
Thus, we discuss here a ground-state phase diagram of a CuO$_2$ plane by plotting the AFM moment~($M_{\rm AFM}$) and SC gap ($\Delta_{\rm SC}$) against $p$, which also gives us an opportunity to compare the experimental outcomes with theoretical ones.
Figure~\ref{Mvsp} shows the $p$ dependence of the $M_{\rm AFM}$ at $T$=1.5~K and $\Delta_{\rm SC}$ evaluated using the relation 2$\Delta_{\rm SC}$=$8k_{\rm B}T_{\rm c}$~\cite{Mukuda2008,Shimizu2009PRB,Shimizu2009JPSJ,Mukuda2010,KitaokaJPCS2011,KitaokaIOP,Shimizu_n5}.
The $M_{\rm AFM}$ on the CuO$_2$ plane decreases monotonically as $p$ increases and is extrapolated to zero in the range of 0.12$<p_c(M_{\rm AFM}$=$0) \le$0.14, as shown by the solid curve in Fig.~\ref{Mvsp}.
Here, the data of $M_{\rm AFM}$s for nondoped and slightly doped Mott insulating states are cited from $n$=$\infty$:Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$\cite{Vaknin1989} and $n$=2:YBCO$_x$\cite{Coneri}, respectively.
Figures \ref{fig:PhaseDiagram} and \ref{Mvsp} reveal that $p_c(M_{\rm AFM}$=$0)\sim$0.14 is an intrinsic quantum critical hole density at the CuO$_2$ plane, at which the AFM moment totally disappears in the ground state.
It is noteworthy that the maxima of $\Delta_{\rm SC}$ and $T_{\rm c}$ are at $p(T_{\rm c}^{\rm max})\sim$ 0.16 irrespective of $n$, which are close to $p_c(M_{\rm AFM}$=$0)\sim$0.14, revealing an intimate relationship between the SC order parameter and the AFM ordered moment.
The phase diagram of $M_{\rm AFM}$ vs $p$ presented here is totally consistent with the ground-state phase diagram at a single CuO$_2$ plane, which has been theoretically addressed thus far in terms of either the $t$-$J$ model~\cite{Chen,Giamarchi,Inaba,Lee1,Himeda,Kotliar,Paramekanti1,Paramekanti2,Lee2,Shih1,Shih2,Yamase1,Yamase2,Ogata,Pathak}, or the Hubbard model in a strong correlation regime~\cite{Senechal,Capone}. Note that $p_c(M_{\rm AFM}$=$0)\sim$ 0.14 determined experimentally is also in good agreement with the theoretical calculation at $T$=0.
We also point out that no static AFM order with a 3D long-range character is observed in the range of $p_c(n) < p \le p_c(M_{\rm AFM}$=$0)$ owing to the strong 2D fluctuations and weak interlayer magnetic coupling. Hence, the AFM moments at the CuO$_2$ planes may fluctuate on a time scale faster than the nanosecond order probed by NMR, which may be related to the various anomalous magnetic behaviors in the underdoped region, as addressed in $\S$ 4.6.
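The $\Delta_{\rm SC}$ values plotted in Fig.~\ref{Mvsp} are obtained from $T_{\rm c}$ via 2$\Delta_{\rm SC}$=$8k_{\rm B}T_{\rm c}$, i.e., $\Delta_{\rm SC}$=$4k_{\rm B}T_{\rm c}$. A minimal numerical sketch (the conversion factor in meV/K is the standard Boltzmann constant; the sample $T_{\rm c}$ values are taken from the text):

```python
# SC gap from Tc via 2*Delta_SC = 8*kB*Tc, i.e., Delta_SC = 4*kB*Tc.

K_B = 8.617e-2  # Boltzmann constant in meV/K

def sc_gap_meV(tc_kelvin):
    """Delta_SC (meV) from Tc (K)."""
    return 4.0 * K_B * tc_kelvin

gap_55 = sc_gap_meV(55.0)    # ~19 meV for Tc = 55 K (n=4:0234F, sample #4)
gap_103 = sc_gap_meV(103.0)  # ~35.5 meV for Tc = 103 K (n=8:Hg1278(OPT))
```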
\subsection{Superexchange interaction in hole-doped CuO$_2$ plane}
\begin{figure}[tbp]
\centering
\includegraphics[width=7.5cm]{81002Fig23.eps}
\caption[]{\footnotesize (Color online) Plot of $T_{\rm N}$ vs $M_{\rm AFM}$~\cite{Shimizu2009PRB,Shimizu2009JPSJ,Mukuda2008,Mukuda2010}. The solid curve shows $T_{\rm N}\propto M_{\rm AFM}^2$ with $T_{\rm N}$=537~K at $M_{\rm AFM}$=0.51$\mu_{\rm B}$ in $n$=$\infty$:Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$\cite{Vaknin1989}. The arrow points to $M_{\rm AFM}\sim$~0.18$\mu_{\rm B}$.[cited from refs.\cite{Mukuda2010,KitaokaJPCS2011,KitaokaIOP}]
}
\label{MvsTN}
\end{figure}
Figure~\ref{MvsTN} shows a plot of $T_{\rm N}$ vs $M_{\rm AFM}$ to gain insight into the $p$ dependence of the in-plane superexchange interaction $J_{\rm in}(p)$. In this figure, data are presented with respect to $n$=3:0223F~\cite{Shimizu2009PRB}, $n$=4:0234F~\cite{Shimizu2009JPSJ}, and $n$=5:$M$1245~\cite{Mukuda2008,Mukuda2010}, along with the data of $n$=$\infty$:Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$ with $p$=0~\cite{Vaknin1989}.
On the basis of a mean-field approximation for localized spins, we assume that $T_N\propto$~$M_{\rm AFM}^2$ and that $J_{\rm out}(n$=$\infty)$ and $J_{\rm in}(p$=$0)$ for Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$ remain constant regardless of $p$.
The solid curve in Fig.~\ref{MvsTN} shows a graph of $T_N\propto$~$M_{\rm AFM}^2$ plotted using $T_N(\infty)$=537~K at $M_{\rm AFM}$=0.51~$\mu_{\rm B}$ for Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$.
First, as shown by the arrow in Fig.~\ref{MvsTN}, $T_N(n)$ increases as $n$ increases from 4 to 5, even though $M_{\rm AFM}\sim$~0.18~$\mu_{\rm B}$ remains constant for the $n$=4 and 5 compounds; this is attributed to the increase in $J_{\rm out}(n)$, namely, $J_{\rm out}(4)<J_{\rm out}(5)$.
The effective interlayer coupling of the $n$=4 and 5 compounds, given by $\sqrt{J_c J_{\rm out}(n)}$, is always smaller than $J_{\rm out}(\infty)$ in Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$. Nevertheless, as shown by the solid curve in Fig.~\ref{MvsTN}, the $T_N(n)$s for the $n$=4 and 5 compounds, which are given by $T_N(n)\sim M_{\rm AFM}^2(p)[J_{\rm in}(p)\sqrt{J_cJ_{\rm out}(n)}]^{1/2}$, lie above the values expected at the same $M_{\rm AFM}$ from $T_N(\infty)\sim M_{\rm AFM}^2(p)[J_{\rm in}(0)J_{\rm out}(\infty)]^{1/2}$ for $n$=$\infty$:Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$, anchored by $T_N$=537 K at $M_{\rm AFM}$=0.51$\mu_{\rm B}$ and $p$=0.
Thus, we also obtain an unexpected relation, i.e., $J_{\rm in}(p)>J_{\rm in}(0)\sim$~1300~K.
The two experimental relationships, the plot of $M_{\rm AFM}$ vs $p$ in Fig.~\ref{Mvsp} and the plot of $T_{\rm N}$ vs $M_{\rm AFM}$ in Fig.~\ref{MvsTN}, suggest that the AFM ground state in the homogeneously hole-doped CuO$_2$ planes is determined by $p$, $\sqrt{J_cJ_{\rm out}(n)}$, and $J_{\rm in}(p)$, the last of which is larger than $J_{\rm in}(0)\sim$~1300~K. It is surprising that $J_{\rm in}(p)$ is stronger in the doped CuO$_2$ planes with AFM order than in the undoped AFM-Mott insulators. Mean-field theories of HTSC have considered the superexchange interaction $J_{\rm in}$ as the source of an instantaneous attraction leading to pairing in a $d$-wave state~\cite{Anderson2}.
The present outcomes may experimentally support such a scenario as far as the underdoped region is concerned, where AFM and HTSC uniformly coexist at a CuO$_2$ plane.
Further theoretical study is desired to address whether $J_{\rm in}$($p$) becomes larger than that of the AFM-Mott insulators even when mobile holes are doped.
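The argument for $J_{\rm in}(p)>J_{\rm in}(0)$ can be checked with a back-of-envelope calculation: scaling $T_{\rm N}\propto M_{\rm AFM}^2$ from the $n$=$\infty$ reference point ($T_{\rm N}$=537 K at $M_{\rm AFM}$=0.51 $\mu_{\rm B}$) predicts the $T_{\rm N}$ expected if the exchange couplings were unchanged from their $p$=0 values. A minimal sketch using the values quoted in the text:

```python
# Solid curve of Fig. 23: T_N scaled as M_AFM^2 from the n=infinity reference
# (T_N = 537 K at M_AFM = 0.51 mu_B in Ca0.85Sr0.15CuO2), i.e., the T_N expected
# if the exchange couplings were unchanged from their p = 0 values.

T_N_REF, M_REF = 537.0, 0.51  # K, mu_B

def t_n_curve(m_afm):
    """T_N (K) from the mean-field scaling T_N ~ M_AFM^2."""
    return T_N_REF * (m_afm / M_REF) ** 2

predicted = t_n_curve(0.18)  # ~67 K at M_AFM ~ 0.18 mu_B
# The observed T_N ~ 80 K at the IPs of the n=4 compound exceeds this value,
# which is the sense in which J_in(p) must exceed J_in(0).
```

The prediction of $\sim$67 K at $M_{\rm AFM}\sim$0.18 $\mu_{\rm B}$ falls below the observed $T_{\rm N}\sim$80 K for the $n$=4 compound, despite its weaker interlayer coupling, which illustrates why $J_{\rm in}(p)$ must exceed $J_{\rm in}(0)$.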
\subsection{Pseudogap behavior in underdoped region}
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=7.5cm]{81002Fig24.eps}
\end{center}
\caption{\footnotesize (Color online) $p$ dependences of $T^{*}$ for $n$=5:$M$1245, along with the $T^{*}$ for $n$=1:Bi2201\cite{Kondo,ZhengPG} and Hg1201\cite{ItohHg1201_96}, $n$=2:Bi2212\cite{Campzano,IshidaBi2212,Walstedt}, and $n$=3:0223F\cite{Shimizu2011_n3} and $M$1223\cite{Julien,Kotegawa2002}. Those data were obtained by the measurements of ARPES~\cite{Kondo,Campzano} and NMR~\cite{ZhengPG,ItohHg1201_96,IshidaBi2212,Walstedt,Julien,Kotegawa2002,Shimizu2011_n3}. The broken curve represents the $p$ dependence of $T_{\rm c}$, $T_{\rm c}$=$T_{\rm c}^{\rm max}$[1-82.6($p$-0.16)$^2$].\cite{Groen,Presland}
As far as no AFM order takes place, the $p$ dependences of $T^{*}_{spin}$ and $T^{*}_{charge}$ resemble each other irrespective of $n$, suggesting that $T^{*}$ is determined by {\it in-plane} magnetic and charge correlations. [cited from ref. \cite{Shimizu2011_n3}]
}
\label{fig:PG}
\end{figure}
The pseudogap behavior emerging above $T_{\rm c}$ is the underlying issue in cuprate superconductors.
NMR and ARPES studies have observed pseudogap behaviors in spin and charge excitations below $T^{*}_{spin}$ and $T^{*}_{charge}$, respectively. Figure~\ref{fig:PG} shows the $p$ dependence of $T^{*}$ for $n$=5:$M$1245, along with $T^{*}$ for $n$=1:Bi2201~\cite{Kondo,ZhengPG} and Hg1201~\cite{ItohHg1201_96}, $n$=2:Bi2212~\cite{Campzano,IshidaBi2212,Walstedt}, and $n$=3:0223F\cite{Shimizu2011_n3} and $M$1223~\cite{Julien,Kotegawa2002}.
As shown in Fig.~\ref{fig:PG}, the $p$ dependences of $T^{*}_{spin}$ and $T^{*}_{charge}$ resemble each other for the $n$=1 and $n$=2 compounds, as argued in the literature~\cite{Yoshida}.
Note that, as long as no AFM order takes place, the $p$ dependence of $T^{*}_{spin}$ in the $n$=5 compounds resembles those of the $n$=1, 2, and 3 compounds, suggesting that the magnetic and charge excitations are suppressed below the same temperature, that is, $T^{*}_{spin}$ coincides with $T^{*}_{charge}$.
Despite the stronger magnetic interlayer coupling in the $n$=5 compounds than in the other multilayered cuprates, the $p$ dependence of $T^{*}_{spin}$ resembles those of the $n$=1 and $n$=2 compounds in the optimally doped regime; hence, $T^{*}$ is expected to be determined by in-plane magnetic and charge correlations~\cite{Yoshida}. From a theoretical point of view, the $t$-$J$ model has explained $T^{*}$ as an onset temperature for the singlet pairing of spinons in the CuO$_2$ plane~\cite{Ogata}, which would be insensitive to the presence of magnetic interlayer coupling.
Here, we highlight that the spin-gap behavior disappears when the AFM order takes place in $p < p_c(n)$; $1/T_1T$ continues to increase toward $T_{\rm N}$ upon cooling for the $n$=3 and $n$=5 compounds that exhibit the AFM order at low temperatures, as shown in Figs.~\ref{fig:n5_PG} and \ref{fig:n3_PG}, respectively.
In underdoped regions where the ground state is characterized by the coexistence of HTSC and AFM, low-lying magnetic excitations develop upon cooling toward the AFM order without a gap opening.
In this context, unless an AFM order occurs in the underdoped region, it is natural for spin-singlet formation to develop above $T_{\rm c}$, leading to a pseudogap behavior in magnetic excitations. Recently, Yamase {\it et al.} have theoretically pointed out that a {\it spin gap} is strongly suppressed near a tetracritical point for phases of the AFM, HTSC, AFM+HTSC, and normal states~\cite{Yamase2}. As a result, it is likely that the pseudogap, which emerges around the antinode region at the wave vectors (0,$\pi$) and ($\pi$,0), evolves to a real {\it AFM gap} in the AFM-HTSC mixed phase. One underlying issue is to experimentally address whether or not a pseudogap survives even in the paramagnetic and normal states in the AFM-HTSC mixed region.
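The broken curve in Fig.~\ref{fig:PG} is the empirical dome $T_{\rm c}$=$T_{\rm c}^{\rm max}$[1-82.6($p$-0.16)$^2$]\cite{Groen,Presland}. A minimal sketch evaluating it in units of $T_{\rm c}^{\rm max}$ shows that the dome is centered at $p$=0.16 and closes at $p$=0.16$\pm\sqrt{1/82.6}$, i.e., at $p\simeq$0.05 and 0.27:

```python
# Empirical Tc dome: Tc = Tc_max * [1 - 82.6 * (p - 0.16)^2] (broken curve in Fig. 24).
import math

def tc_dome(p, tc_max=1.0):
    """Tc(p) in units of Tc_max; negative values mean no superconductivity."""
    return tc_max * (1.0 - 82.6 * (p - 0.16) ** 2)

p_half_width = math.sqrt(1.0 / 82.6)  # ~0.110: the dome closes at p ~ 0.05 and ~0.27
```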
\subsection{Momentum dependences of SC gap and pseudogap in HTSC}
Figure~\ref{ARPES} schematically shows the momentum dependence of the gap magnitude in HTSC, as observed in ARPES experiments on multilayered cuprates~\cite{YChen,Ideta}. The gap for overdoped HTSC is almost a simple $d$ wave, $\Delta_0|\cos(k_xa)-\cos(k_ya)|/2$, as shown by the straight line. On the other hand, the gap for underdoped HTSC deviates from the simple $d$ wave around the antinodes at $\sim(0,\pi$) and ($\pi,0$). The gap size is characterized by two parameters: $\Delta_0$, defined by the linear extrapolation of the near-nodal gap magnitude to the antinode [where $\Delta_0|\cos(k_xa)-\cos(k_ya)|/2=\Delta_0$], and $\Delta^*$ in the antinodal region, as shown in the figure. The deviation of the gap anisotropy from the simple $d$ wave is known to be prominent in underdoped cuprates and is called the {\it two-gap behavior}~\cite{Tanaka2gap}.
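As a quick numerical illustration of the gap form quoted above (a sketch only, with $\Delta_0$ normalized to unity and lattice constant $a$=1; not part of the original analysis), one can verify that the simple $d$-wave gap vanishes on the zone diagonal and is maximal at the antinodes (0,$\pi$) and ($\pi$,0):

```python
import numpy as np

def d_wave_gap(kx, ky, delta0=1.0, a=1.0):
    """Simple d-wave gap magnitude Delta0*|cos(kx*a) - cos(ky*a)|/2."""
    return 0.5 * delta0 * np.abs(np.cos(kx * a) - np.cos(ky * a))

# node on the zone diagonal (kx = ky): the gap vanishes
node = d_wave_gap(np.pi / 2, np.pi / 2)

# antinodes (0, pi) and (pi, 0): the gap reaches Delta0
antinode = d_wave_gap(0.0, np.pi)
```

Deviations of the measured antinodal gap $\Delta^*$ from this simple form are what define the two-gap behavior discussed in the text.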
ARPES on $n$=4:0234F($\sharp$4) with $T_{\rm c}$=55 K resolved the two Fermi sheets of IP and OP and revealed that the momentum dependences of their gap magnitudes exhibit a {\it two-gap behavior} at both IP and OP at 20 K~\cite{YChen}. Considering that the AFM order sets in at $T_{\rm N}$=80 K, well above $T_{\rm c}$=55 K, and that the AFM unit cell is double the crystallographic one, $\Delta^*$ around the antinodes $\sim(0,\pi$) and ($\pi,0$) is assigned to an {\it AFM gap}.
By contrast, $\Delta^*$, which is observed even in the absence of an AFM order in the underdoped regime for the $n$=1 and 2 compounds, may be ascribed to the formation of a {\it spin gap} due to the development of a singlet resonating valence bond (RVB) state~\cite{Ogata}.
\begin{figure}[tbp]
\centering
\includegraphics[width=7.5cm]{81002Fig25.eps}
\caption[]{\footnotesize (Color online) Illustration of the momentum dependences of the energy gap $\Delta_0$ and pseudogap $\Delta^*$~\cite{Ideta}. [Cited from Ref.~\cite{KitaokaIOP}.]
}
\label{ARPES}
\end{figure}
\subsection{Comments on underlying issues in underdoped cuprates}
The experimental results in Figs.~\ref{fig:PhaseDiagram} and \ref{Mvsp} suggest that an AFM moment exists at Cu sites up to $p_c(M_{\rm AFM}$=$0)\le 0.14$ even in the $n$=1:LSCO and $n$=2:YBCO$_{6+x}$ compounds. It cannot, however, be observed as a static AFM order in the range $p_c(n) < p \le p_c(M_{\rm AFM}$=$0)$ owing to strong two-dimensional fluctuations: the interlayer magnetic coupling is too weak to stabilize a completely static AFM order with a three-dimensional long-range character.
This instability of the AFM ground state was corroborated by the facts that the application of a high magnetic field brings about an AFM order with $M_{\rm AFM}\sim 0.1\mu_{\rm B}$ in the vortex state of LSCO with $p$=0.1~\cite{Lake} and of YBCO$_{6.5}$~\cite{Miller}, and that the spin-glass phase in LSCO survives up to a hole density close to $p_c(M_{\rm AFM}$=$0)\sim 0.14$~\cite{JulienSG}. In addition, a static AFM order was also observed when the charge-stripe order occurs at $x\sim$1/8 in LSCO~\cite{Tranquada}. Neutron scattering experiments suggested that an unusual AFM order with $M_{\rm AFM}\sim 0.1\mu_{\rm B}$ takes place in underdoped YBCO$_{6.5}$ with $T_{\rm c}$=60 K, although it fluctuates on a nanosecond time scale~\cite{Sidis}.
The phase diagrams of LSCO and YBCO$_{6+x}$ have thus far been regarded as the prototypes; however, we note that the underlying issues in the underdoped region mentioned above should be reconsidered on the basis of the following facts: (i) an AFM moment intrinsically exists up to $p_c(M_{\rm AFM}$=$0) \le 0.14$ in the ground state, (ii) the interlayer magnetic coupling in LSCO and YBCO$_{6+x}$ is not sufficient to stabilize a completely static AFM order, and (iii) the chemical substitution used for doping introduces inevitable disorder into the CuO$_2$ plane.
\section{Concluding Remarks}
Site-selective NMR studies on multilayered cuprates have unraveled the intrinsic phase diagram of the homogeneously hole-doped CuO$_2$ plane as follows:
\begin{enumerate}
\item[(i)] The AFM {\it metallic} phase is robust and uniformly coexists with the HTSC phase up to the quantum critical hole density ($p_{c}(n)$), at which the AFM order collapses.
\item[(ii)] $p_{c}(n)$ increases from 0.075 to 0.08 and 0.10 as the interlayer magnetic coupling becomes stronger on going from $n$=3 to 4 and 5, respectively.
\item[(iii)] The uniform coexistence of AFM and HTSC at a single CuO$_2$ plane is the universal phenomenon in the underdoped region for $p < p_c(n)$.
\item[(iv)] The maxima of $\Delta_{\rm SC}$ and $T_{\rm c}$ take place irrespective of $n$ at $p(T_{\rm c}^{\rm max})\sim$~0.16 just outside $p_c(M_{\rm AFM}$=$0)\sim 0.14$, at which the AFM moment inherent in the CuO$_2$ plane disappears in the ground state.
\item[(v)] The pseudogap in magnetic excitations collapses in the AFM state for $p < p_c(n)$, where low-lying magnetic excitations develop upon cooling toward the AFM order.
\item[(vi)] The ground-state phase diagram of AFM and HTSC (see Fig.~\ref{Mvsp}) is in good agreement with the ground-state phase diagrams in terms of either the $t$-$J$ model~\cite{Chen,Giamarchi,Inaba,Anderson,Anderson1,Anderson2,Lee1,Himeda,Kotliar,Paramekanti1,Lee2,Yamase1,Yamase2,Paramekanti2,Shih1,Shih2,Ogata,Pathak} or the Hubbard model in a strong correlation regime~\cite{Senechal,Capone}.
\item[(vii)] The in-plane superexchange interaction $J_{\rm in}(p)$ for the $n$=4 and 5 compounds is larger than $J_{\rm in}(0)\sim$~1300 K for the infinite-layered AFM-Mott insulator Ca$_{0.85}$Sr$_{0.15}$CuO$_{2}$.
\end{enumerate}
In particular, we emphasize that the uniformly mixed phase of AFM and HTSC for $p < p_c(n)$ and the emergence of $d$-wave SC with the maximum of $T_{\rm c}$ just outside $p_c(M_{\rm AFM}$=$0)$ can be accounted for by the {\it Mott physics} based on the $t$-$J$ model. Behind the high-$T_{\rm c}$ phenomena lies a very strong on-site Coulomb repulsion $U$~($> 6$ eV), which prohibits the double occupancy of an up-spin electron and a down-spin electron on the same site. Since $U$ is almost unchanged with doping, remaining nearly the same as in AFM-Mott insulators, the large $J_{\rm in}$ attracts electrons of opposite spins onto neighboring sites, raising the $T_{\rm c}$ of cuprates to as high as 160 K~\cite{Anderson1,Ogata,Anderson2}; no bosonic glue is required.
\section*{Acknowledgement}
These works have been carried out in collaboration with M. Abe, Y. Yamaguchi, T. Sakaguchi, K. Itohara, S.-i. Tabata, S. Iwai, K. Matoba, Y. Araki, H. Kotegawa, Y. Tokunaga, K. Magishi, K. Ishida, G.-q. Zheng, and K. Asayama for NMR studies of multilayered cuprates.
The samples were provided by P. M. Shirage, H. Kito, Y. Kodama, Y. Tanaka, H. Ihara (AIST), K. Tokiwa, and T. Watanabe (Tokyo University of Science). We thank M. Mori, T. Tohyama, S. Maekawa, H. Yamase, T. K. Lee, M. Ogata, and H. Fukuyama for their valuable discussions and comments. These works were supported by a Grant-in-Aid for Specially Promoted Research (20001004) and by the Global COE Program (Core Research and Engineering of Advanced Materials-Interdisciplinary Education Center for Materials Science) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
\section{Significance}
Water is vital to our everyday life, but its structure at a molecular level is still not fully understood from either experiment or theory.
The latter is hampered by our inability to construct a purely predictive, first-principles model.
The difficulty in modeling water lies in capturing the delicate interplay among the many strong and weak forces that govern its behavior and phase diagram.
Herein, molecular simulations with a recently proposed non-empirical quantum mechanical approach (the SCAN density functional) yield an excellent description of the structural, electronic, and dynamic properties of liquid water.
SCAN-based approaches, which describe diverse types of bonds in materials on an equal, accurate footing, will likely enable efficient and reliable modeling of aqueous phase chemistry.
\newpage
Water is arguably the most important molecule for life
and is involved in almost all biological processes.
Without water,
life, as we know it, would not exist,
earning water the pseudonym {\it matrix of life}, among others~\cite{Ball:2008}.
Despite the apparent simplicity of an H$_2$O molecule, water in the condensed phase
displays a variety of anomalous properties that originate from its complex
structure.
In an ideal arrangement, water molecules form a tetrahedral network of hydrogen (H) bonds
with each vertex being occupied by a water molecule.
This tetrahedral network is realized in the solid phase ice I{\it h}, but thermal fluctuations
disrupt the H-bond network in the liquid state, with the network fluctuating on picosecond to nanosecond timescales.
Due to the complexity of the H-bond network and its competition with thermal fluctuations,
a precise molecular-level understanding of the structure of liquid water remains elusive.
Major challenges lie in unambiguously capturing the atomic-scale fluctuations
in water experimentally.
Current approaches such as time-resolved spectroscopy~\cite{Fecko:Science:2003,04S-Wernet}
and diffraction measurements~\cite{08L-Soper,13JCP-Skinner} may be able to resolve changes on picosecond
timescales, but rely on interpretation through models, which often cannot describe all the details of liquid
water with quantitative accuracy.
Not surprisingly, the nature of the H-bond network in liquid water continues to
be at the center of scientific debate
and advances in both experiment and theory are needed, especially with regard
to quantitative modeling of aqueous phase chemistry.
{\it Ab initio} molecular dynamics (AIMD) simulation~\cite{85L-CPMD} is an ideal approach
for modeling the condensed phases of water
across the phase diagram and aqueous phase chemistry
using quantum mechanical principles~\cite{09JCTC-Kuhne,13JCP-Biswajit,14JCP-Rob,16-Gillan,15JPCL-Alex},
although for some applications, such as the study of liquid vapor phase equilibria~\cite{06JPCA-McGrath}, Monte Carlo methods are better suited.
In particular, Kohn-Sham density functional theory (DFT)~\cite{kohn65} --- used to model the system in its electronic ground state ---
provides an efficient framework that enables the simulation of the length and time scales needed to converge
many statistical mechanical averages in disordered, liquid state systems.
The DFT formalism is exact for the electronic ground-state energy and density,
but in practice approximations must be adopted to describe many-body effects,
included in the exchange-correlation (XC) functional.
XC functionals can be conceptually arranged, by accuracy and computational efficiency,
according to Jacob's ladder~\cite{perdew2001jacob},
with the simplest local density approximation (LDA)~\cite{80L-Ceperley,81B-Perdew} on the bottom rung of the ladder,
followed by generalized gradient approximations (GGAs)~\cite{88A-Becke,88B-Lee,96L-PBE},
meta-GGAs, hybrid functionals~\cite{96JCP-Perdew,99JCP-Adamo}, and so on.
The past three decades have witnessed widespread successes of DFT
in elucidating and predicting properties of materials.
However, water still presents a major challenge, with many DFT-based simulations yielding
results that are not even qualitatively consistent with experimental measurements.
The H-bonds formed between gas-phase water clusters were first treated within the LDA~\cite{la92,la93},
which overestimates H-bond strengths and yields inter-water distances that are too short.
This overbinding is largely corrected by GGA-level functionals,
which became a class of popular functionals to study liquid water within the last two decades
~\cite{16-Gillan}.
Despite the improvements over LDA that are provided by GGAs,
H-bond strengths are overestimated and, consequently,
the dynamical properties predicted by GGAs are generally much too slow.
Worse still, GGAs predict that ice sinks in water, that is, water has a lower density than ice~\cite{09JPCB-Schmidt,11JCP-Wang,15JCP-Miceli,15JPCL-Alex}.
These disagreements remain even after considering hybrid functionals~\cite{15JPCL-Alex}
and accounting for nuclear quantum effects (NQEs)~\cite{13PNAS-Ceriotti}, illustrating that the deficiencies
are a manifestation of errors within the underlying GGA to the XC functional.
The difficulty in modeling liquid water with DFT arises from the delicate nature of the H-bond network.
A H-bond is a directional attractive force between
the oxygen of one molecule and the protons of another.
While mainly electrostatic in nature,
H-bonds also exhibit a non-negligible covalency.
Notably, a covalent O-H bond binds an order of magnitude more strongly than an H-bond in water.
Therefore, a slightly misbalanced description of the covalent bond inevitably incurs a
non-negligible error in the predicted H-bond strength.
Moreover, water molecules interact with each other through van der Waals (vdW) dispersion forces at larger distances,
which are non-directional and in general weaker than H-bonds by roughly an order of magnitude.
Thus, one needs to capture the balance among interactions whose magnitudes vary
by orders of magnitude in water.
The short-ranged portion of the vdW interactions has been captured by local and semi-local XC functionals.
In contrast, the intermediate- and long-ranged parts of the vdW interactions have not been captured
by any general-purpose GGA.
Recent studies have identified vdW interactions as an important determinant of water structure;
vdW interactions often lead to more disordered water structures,
more accurate water densities, and improved dynamic properties
~\cite{09JPCB-Schmidt,11JCP-Wang,Baer:2011rz,11JSP-Remsing,15JCP-Miceli,15JPCL-Alex,15JCP-Ben}.
Thus, the H-bond network of liquid water is produced by a delicate competition among covalent bonds,
H-bonds, and vdW interactions, and describing this complex interplay of interactions
continues to be a highly challenging task.
In this regard, non-empirical, general purpose XC functionals that describe all types of interactions
on an equal footing are imperative but still largely absent in the literature.
To address the above issues,
we performed AIMD simulations of liquid water in the isothermal-isobaric ensemble~\cite{80L-Parrinello},
employing the strongly constrained and appropriately normed (SCAN)
meta-GGA functional~\cite{15L-Sun}.
SCAN is inherently non-empirical,
developed by satisfying all 17 known exact constraints on semi-local XC functionals.
Thus, the results obtained from SCAN are purely predictive and do not rely on training data.
SCAN was shown to predict the energetics of gas-phase water hexamers
and ice phases with quantitative accuracy, while other XC functionals,
even with vdW corrections, were unable to make even qualitative predictions~\cite{16NC-Sun}.
This suggests that SCAN possesses the ingredients necessary to describe liquid water.
Indeed, we demonstrate that SCAN predicts
structural, electronic, and dynamic properties of liquid water in excellent agreement with experimental measurements.
In particular, due to its ability to describe vdW interactions on intermediate length-scales,
SCAN yields the correct density ordering between liquid water and ice,
correctly predicting that ice floats on liquid water.
The dynamics of liquid water are also improved to near quantitative agreement with experiments.
We expect the computationally-efficient and accurate SCAN functional to serve as a major quantum mechanics-based tool
for studying chemical processes in aqueous media.
\section{Molecular and Electronic Structure of Liquid Water}
The pair structure of liquid water can be measured by X-ray diffraction~\cite{08L-Soper,13JCP-Skinner}
and neutron diffraction experiments~\cite{08L-Soper},
from which structural information is contained in the resulting radial distribution functions (RDFs).
We compare the RDFs
obtained from AIMD simulations with SCAN and the Perdew-Burke-Ernzerhof (PBE)~\cite{96L-PBE} GGA,
as well as the experimental data.
Here we compare two fully {\it ab initio} density functionals, without an empirical dispersion (D) correction to either.
While such a correction improves PBE for solids and liquids~\cite{14JPCC-Arindam}, it slightly worsens PBE's unacceptable overbinding of molecules, and thus PBE-D is not recommended for reactions in solvents.
Figs.~1(a) and~1(b) show the oxygen-oxygen and oxygen-hydrogen RDFs,
g$_{\mathrm{OO}}$(r) and g$_{\mathrm{OH}}$(r), respectively.
SCAN dramatically improves almost all
features in g$_{\mathrm{OO}}$(r) and g$_{\mathrm{OH}}$(r),
producing a pair structure in much better agreement with experimental measurements than PBE.
The first peak of g$_{\mathrm{OH}}$(r) contains all correlations within the covalent O-H bonds.
SCAN enhances the covalency of water molecules,
shortening the covalent bond length to 0.977~\AA~(first maximum in g$_{\mathrm{OH}}$(r)),
in comparison to the 0.989~\AA~from PBE.
The shorter O-H bond length indicates that the
oxygen and protons bind more strongly.
Consequently, the protons of water molecules are less easily donated to form H-bonds.
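For readers unfamiliar with how RDFs are extracted from simulation trajectories, the following minimal sketch (illustrative code, not the analysis pipeline used in this work) histograms O-O pair distances from one snapshot of a cubic periodic box and normalizes by the ideal-gas expectation; an uncorrelated configuration then gives g(r) close to 1 at large r:

```python
import numpy as np

def g_of_r(pos, box, nbins=50, rmax=None):
    """O-O radial distribution function from one snapshot of N particles
    in a cubic box of side `box`, using the minimum-image convention."""
    n = len(pos)
    rmax = box / 2.0 if rmax is None else rmax
    edges = np.linspace(0.0, rmax, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)      # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    rho = n / box**3                      # number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = hist / (0.5 * n * rho * shell)    # normalize by ideal-gas pair count
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, g

# toy check: an uncorrelated ("ideal gas") configuration gives g(r) ~ 1
rng = np.random.default_rng(0)
r_mid, g = g_of_r(rng.uniform(0.0, 12.4, size=(64, 3)), box=12.4)
```

In practice g$_{\mathrm{OO}}$(r) is averaged over many snapshots of the trajectory, and g$_{\mathrm{OH}}$(r) follows analogously from O-H pair distances.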
\begin{figure}[tb]
\label{fig1}
\begin{center}
\includegraphics[width=0.35\textwidth]{Figure1.pdf}
\end{center}
\caption
{Radial distribution functions (a) g$_{\mathrm{OO}}$(r) and (b) g$_{\mathrm{OH}}$(r)
of liquid water predicted by PBE and SCAN at 330 K, as well as
that from X-ray diffraction experiments~\cite{13JCP-Skinner} for g$_{\mathrm{OO}}$(r)
and joint X-ray/neutron diffraction experiments~\cite{08L-Soper} for g$_{\mathrm{OH}}$(r). A temperature
elevated by 30 K was used in the AIMD simulations to mimic NQEs~\cite{08L-Morrone}.
}
\end{figure}
Correlations between H-bonded neighbors are contained in the first peak of g$_{\mathrm{OO}}$(r)
and the second peak of g$_{\mathrm{OH}}$(r).
As evidenced by Fig.~1, SCAN captures these correlations with high accuracy
due to its ability to describe H-bonding.
%
The region between the first and second peaks of g$_{\mathrm{OO}}$(r) predominantly consists of
non-H-bonded water molecules that occupy the interstitial space between H-bonded neighbors;
the increased number of water molecules in the interstitial regions is due to vdW interactions,
as discussed further below.
Subsequent coordination shells are also captured by SCAN, evidenced by the good agreement between the second and third peaks in g$_{\mathrm{OO}}$(r).
We emphasize that the near perfect agreement between the SCAN g$_{\mathrm{OO}}$(r) and experiment
is non-trivial, because the structure of water is a manifestation of
the delicate interplay among covalent bonds, H-bonds, and vdW interactions.
\begin{table*}[tbh]
\caption{
Properties of water (330 K) and ice I{\it h} (273 K) predicted by SCAN and PBE functionals
in the isobaric-isothermal ensemble:
densities of water ($\rho_{w}$)
and ice I{\it h} ($\rho_{{\rm I}h}$),
density difference ($\Delta \rho$),
density ratio $\rho_{w}/\rho_{{\rm I}h}$,
dipole moments of water ($\mu_{w}$)
and ice I{\it h} ($\mu_{{\rm I}h}$),
band gap ($E_{g}$),
tetrahedral order parameter ($q$),
diffusion coefficient ($D$),
and rotational correlation time ($\tau_2$).
The temperatures for the experimental (EXP) data on $\rho_{w}$,
$\rho_{{\rm I}h}$, $\mu_{w}$, $D$, and $\tau_2$
are 300~\cite{NIST}, 273~\cite{CRC}, 298~\cite{00JCP-Badyal}, 298~\cite{73JPC-Mills}, and 300 K~\cite{01JACS-Ropp}, respectively.
The experimental $q$ value~\cite{08L-Soper}
was obtained by combining X-ray diffraction data at 296 K and
neutron diffraction data at 298 K in a structural model using empirical potential structure refinement.
No experimental value of $\mu_{Ih}$ is available, but an induction model
gives 3.09 D~\cite{98JCP-Batista}.
Experimental data for $q$, $D$, and $\tau_2$ are for D$_2$O
chosen for consistency with the masses used in simulations for the dynamic properties.
Error bars correspond to one standard deviation.}
\scalebox{1.00}{
\begin{tabular}{ccccccccccc}
\hline
\hline
Method & $\rho_{w}$ (g/mL) & $\rho_{{\rm I}h}$ (g/mL) & $\Delta\rho$ (g/mL) & $\rho_{w}/\rho_{{\rm I}h}$ &
$\mu_{w}$ (D) & $\mu_{Ih}$ (D) & $E_{g}$ (eV) & $q$ & $D$ (\AA$^2$/ps) & $\tau_2$ (ps) \\
\hline
SCAN & 1.050$\pm$0.027 & 0.964$\pm$0.023 & 0.086$\pm$0.035 & 1.089$\pm$0.038 & 2.97$\pm$0.29 & 3.29$\pm$0.21 & 4.92$\pm$0.14 & 0.68$\pm$0.18 & 0.190$\pm$0.025 & 2.9$\pm$0.4 \\
PBE & 0.850$\pm$0.016 & 0.936$\pm$0.013 & -0.086$\pm$0.021 & 0.908$\pm$0.021 & 3.12$\pm$0.28 & 3.35$\pm$0.21 & 4.43$\pm$0.13 & 0.83$\pm$0.11 & 0.018$\pm$0.002 & 7.1$\pm$0.5 \\
EXP & 0.99656~\cite{NIST} & 0.9167~\cite{CRC} & 0.080 & 1.087 & 2.9$\pm$0.6~\cite{00JCP-Badyal} &
 & 8.7$\pm$0.6~\cite{97CP-Bernas} & 0.593~\cite{08L-Soper} & 0.187~\cite{73JPC-Mills} & 2.4~\cite{01JACS-Ropp} \\
\hline
\hline
\end{tabular}
}
\end{table*}
The strength of directional H-bonds is largely determined by the electronic structure of water molecules.
The electronic density of states (DOS) of liquid water, averaged over trajectories, is shown in Fig.~2(a)
and compared to the DOS measured by full valence
band photoemission spectroscopy~\cite{04JPCA-Winter}.
The four peaks of the DOS are assigned to the
$2a_1$, $1b_2$, $3a_1$, and $1b_1$ orbitals based on the spatial symmetries of the water molecule.
The simulated DOS are aligned at the position of the $1b_1$ orbital peak~\cite{16L-WeiChen}.
The energy difference between the $2a_1$ peak predicted by PBE and experiment is 2.3 eV.
SCAN substantially lowers this energy difference to 0.9 eV, providing a much better
description of the strongly bound $2a_1$ orbital than the GGA-level description provided by PBE.
Note that the strongly bound $2a_1$ orbital is mainly composed of
the oxygen $2s$ orbital and is localized close to the oxygen atom.
\begin{figure}[tb]
\label{fig2}
\begin{center}
\includegraphics[width=0.35\textwidth]{Figure2.pdf}
\end{center}
\caption
{(a) Density of states (DOS) of liquid water, averaged over SCAN and PBE trajectories,
as well as from photoemission spectroscopy~\cite{04JPCA-Winter}.
The peaks are labeled according to the symmetric orbitals of a water molecule with $C_{2v}$ symmetry.
Data are aligned~\cite{16L-WeiChen} to the $1b_1$ peak of the experimental (EXP) data.
(b) Distributions of the centers of maximally localized Wannier functions (MLWFs) with respect to the oxygen position
for lone and bonding electron pairs.
The inset shows a representative snapshot of the MLWFs of a water molecule;
lone and bonding pair MLWFs are colored green and blue, respectively.
}
\end{figure}
The above four orbitals are related
to the two lone electron pairs and
two bonding electron pairs of a water molecule;
the lone electron pairs are closely connected to
the $2a_1$ and $1b_1$ orbitals while the bonding electron pairs
have a strong relation to the $1b_2$ and $3a_1$ orbitals.
Therefore, the improved DOS by SCAN implies that the lone and bonding electron
pairs are better captured than those from PBE.
We examine the lone and bonding electron pairs on an equal footing
through maximally localized Wannier functions (MLWFs)~\cite{12MLWF},
which are generated from a unitary transformation of the occupied Kohn-Sham eigenstates.
Fig.~2(b) shows the distributions of the centers of the MLWFs.
The lone electron pairs are closer to the oxygen atom in the SCAN description of water
than PBE, while the bonding electron pairs only differ slightly between the two XC functionals.
The smaller distance between lone electron pairs and oxygen in SCAN
leads to a less negative environment around the lone electron pairs and
explains the lower energy of the $2a_1$ orbital in comparison to that of PBE.
Meanwhile, the nearly unchanged description of bonded electron pairs in the two functionals
is consistent with the observation that $1b_2$ and $3a_1$ states are also similar.
Consequently, electrostatic attractions between oxygen nuclei and
protons of neighboring water molecules are weaker in SCAN than PBE,
weakening the directional H-bond strength.
In addition to improving the intermolecular structure, the reduced H-bond strength
in SCAN also improves the intramolecular structure of water.
The shorter distance between the lone electron pairs and the oxygen nucleus
weakens the molecule's ability to accept H-bonds, and water molecules become less polarizable.
The reduction in polarizability is expected to improve
other electronic properties of liquid water, moving them in closer agreement with experimental measurements.
Indeed, the dipole moment $\mu$ of liquid water,
computed via MLWFs, is reduced by SCAN.
Table~1 shows that $\mu=3.12$~D with PBE, while $\mu$ reduces to 2.97~D with SCAN,
in better agreement with experimental measurements of 2.9$\pm$0.6~D~\cite{00JCP-Badyal}.
This improvement indicates that the important dipole-dipole interactions in liquid water
are better described by SCAN.
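In the MLWF picture used for Table~1, each water molecule carries ionic point charges (+6e on O and +1e on each H in a valence-only pseudopotential description) and four doubly occupied Wannier centers carrying $-2$e each; the molecular dipole is the first moment of these charges. The sketch below illustrates this bookkeeping with an assumed gas-phase-like geometry and hypothetical Wannier-center positions (not SCAN or PBE output):

```python
import numpy as np

E_ANG_TO_DEBYE = 4.803  # 1 e*Angstrom expressed in debye (approx.)

def water_dipole(r_O, r_H1, r_H2, wannier_centers):
    """Dipole (in debye) from valence point charges: O +6e, H +1e each,
    and four doubly occupied MLWF centers carrying -2e each."""
    mu = 6.0 * r_O + r_H1 + r_H2 - 2.0 * wannier_centers.sum(axis=0)
    return mu * E_ANG_TO_DEBYE

# Illustrative gas-phase-like geometry (Angstrom); NOT a simulation result.
half_angle = np.deg2rad(104.52) / 2.0
r_OH = 0.9572
r_O = np.zeros(3)
r_H1 = r_OH * np.array([np.sin(half_angle), np.cos(half_angle), 0.0])
r_H2 = r_OH * np.array([-np.sin(half_angle), np.cos(half_angle), 0.0])

# Hypothetical MLWF centers: two on the O-H bonds, two lone pairs behind O.
wanniers = np.array([0.5 * r_H1,
                     0.5 * r_H2,
                     [0.0, -0.15, 0.26],
                     [0.0, -0.15, -0.26]])

mu = water_dipole(r_O, r_H1, r_H2, wanniers)
```

By symmetry the resulting dipole points along the molecular bisector; realistic liquid-phase values emerge only from MLWF centers computed on actual trajectory snapshots.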
We also estimate the band gap of water, $E_{g}$, by averaging over eight randomly selected configurations
from the trajectories.
SCAN and PBE predict $E_g=4.92$ and 4.43~eV, respectively.
While SCAN improves $E_g$ by about 0.5~eV,
it differs significantly from the experimental value of 8.7~eV~\cite{97CP-Bernas}.
We attribute this discrepancy to the well-known underestimation
of band gaps by GGAs and meta-GGAs.
\begin{figure*}[tb]
\label{fig4}
\begin{center}
\includegraphics[width=0.75\textwidth]{Figure3.pdf}
\end{center}
\caption
{(a) Distributions of the number of hydrogen bonds in liquid water from SCAN and PBE. The inset
illustrates an ideal tetrahedral H-bonding structure.
Oxygen and hydrogen atoms are respectively depicted in red and white; H-bonds
are shown with dashed lines.
(b,c) Bond angle distributions $P_{\mathrm{OOO}}(\theta)$ from (b) PBE and (c) SCAN.
$P_{\mathrm{OOO}}(\theta)$
is decomposed into contributions arising from waters with a fixed number of HBs (2, 1, and 0) between
a central oxygen and its two nearest neighbors.
The $P_{\mathrm{OOO}}$ of D$_2$O is inferred from experiments~\cite{08L-Soper},
and the area of $P_{\mathrm{OOO}}$ is normalized to unity.
(d,e) Free energies ($F$) as a function of $\theta$ and the oxygen-oxygen distance $r$ from (d) PBE and (e) SCAN.
The free energy minimum is identified by the red circle and referenced to zero.
The direction of change of the free energy minimum with increasing $r$ is shown with a red arrow.
The cutoff distance
used for computing the free energies is the same as that for $P_{\mathrm{OOO}}$ and is shown with a dashed red line.
}
\end{figure*}
The SCAN functional can describe the intermediate-ranged vdW interactions~\cite{16NC-Sun},
which shift the first minimum and the second maximum of $g_{\mathrm{OO}}$(r)
toward the first peak, with respect to that of PBE (without vdW interactions), as shown in Fig.~1(a).
Water molecules beyond the first coordination shell experience
non-directional attractions from surrounding water molecules in SCAN
and
are pulled into the interstitial spaces between H-bonded waters by these vdW forces.
Consequently, the peak position of the second coordination
shell shifts inward towards the central oxygen,
and the population of interstitial waters increases,
illustrated by the increase in the height of the first minimum in $g_{\mathrm{OO}}$(r).
Thus, the inclusion of non-directional vdW interactions on intermediate length-scales
leads to a more disordered and highly-packed water structure.
From the increased packing,
one expects the density of liquid water predicted by SCAN to be larger than that from PBE.
Moreover, the dominant effect of vdW interactions
is to provide cohesive interactions between molecules in condensed phases.
Within the vdW picture of liquids,
this leads to a cohesive pressure of magnitude $-a\rho_w^2$,
which ``squeezes'' water molecules closer together~\cite{11JSP-Remsing};
$a$ is the vdW constant and a measure of the strength of these attractive interactions,
and $\rho_w$ is the density of liquid water.
Indeed, the SCAN functional predicts $\rho_{w}$ to be
significantly higher than that predicted by PBE, as shown in Table~1.
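The magnitude of this cohesive "squeezing" can be estimated from the textbook vdW picture (an order-of-magnitude sketch using tabulated constants, not a result of the simulations):

```python
# Order-of-magnitude estimate of the vdW cohesive (internal) pressure
# a*rho^2 for liquid water, using tabulated constants (illustrative).
A_VDW = 0.5536       # Pa m^6 mol^-2, van der Waals constant of water
M_WATER = 18.015e-3  # kg/mol, molar mass
RHO_MASS = 997.0     # kg/m^3, liquid water near ambient conditions

rho_molar = RHO_MASS / M_WATER       # mol/m^3
p_cohesive = A_VDW * rho_molar**2    # Pa; comes out on the GPa scale
```

The resulting internal pressure of order 1 GPa dwarfs ambient pressure, which is why even weak vdW attractions measurably increase the liquid density.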
Another problem of paramount importance is that solid water,
ice I{\it h}, floats on liquid water near ambient conditions.
This is probably the most widely known anomalous property of water.
Yet, almost all DFT-based approaches,
except some of those relying on empirical parameters,
predict a solid phase that is denser than the liquid.
In this regard, we also carried out AIMD simulations of ice I{\it h} containing 96 water molecules
at 273~K.
SCAN predicts a $\rho_{w}$ that is
larger than the density of ice I{\it h} ($\rho_{{\rm I}h}$), Table~1,
while for PBE, $\rho_{{\rm I}h}>\rho_{w}$.
The water density from SCAN is $\simeq5\%$ larger than that determined experimentally,
which is a significant improvement over the 15\%, 25\%, and 39\% underestimation
by the PBE, BLYP~\cite{88A-Becke,88B-Lee,09JPCB-Schmidt}, and PBE0~\cite{15JPCL-Alex}
functionals, respectively.
Compared to PBE,
SCAN increases $\rho_{w}$ and $\rho_{{\rm I}h}$ by 21\% and 3\%, respectively.
The 21\% increase of $\rho_w$ by SCAN
is vital in correcting the density ordering between the two phases by other functionals.
Indeed, the experimental density difference
between liquid water and ice I{\it h},
$\Delta \rho=\rho_w-\rho_{{\rm I}h}$,
is correctly predicted by SCAN as 0.086~g/mL,
while the opposite sign
is predicted by PBE.
SCAN also correctly predicts $\rho_w/\rho_{{\rm I}h}\simeq1.089$,
in agreement with the experimental value of $\simeq1.087$,
and in contrast to the 0.908 ratio via PBE.
Note that the water density obtained with PBE is slightly lower
than in previous studies; this difference is discussed in the SI.
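The density ordering reported in Table~1 can be checked by elementary arithmetic (a trivial sanity-check sketch on the tabulated mean values):

```python
# Sanity check of the density ordering in Table 1 (values in g/mL).
densities = {"SCAN": (1.050, 0.964), "PBE": (0.850, 0.936)}  # (rho_w, rho_Ih)

ordering = {}
for xc, (rho_w, rho_ih) in densities.items():
    ordering[xc] = {
        "delta_rho": round(rho_w - rho_ih, 3),   # rho_w - rho_Ih
        "ratio": round(rho_w / rho_ih, 3),       # rho_w / rho_Ih
        "ice_floats": rho_w > rho_ih,            # SCAN: yes, PBE: no
    }
```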
\section{Tetrahedral Structure of the H-bond Network}
With the translational order encoded in RDFs
well captured by SCAN, we now focus on the tetrahedral orientational ordering of liquid water induced by the
H-bond network.
An ideal tetrahedral H-bonding structure shown in the inset of Fig.~3(a)
is formed because a water molecule can possess four optimal H-bonds:
two accepting and two donating.
Thermal fluctuations break and reform H-bonds, causing
the tetrahedral structures in liquid water to be distorted or broken by entropic effects.
This, combined with the increased packing due to vdW interactions,
leads to an average number of H-bonds per molecule slightly less than four in liquid water.
To illustrate the impact of SCAN on the H-bond network, distributions of the number of
H-bonds per molecule are presented in Fig.~3(a).
The percentage of water molecules participating in four H-bonds drops from 72\% in PBE to 56\% in SCAN.
This suggests that H-bonds are weaker with SCAN than with PBE.
SCAN predicts an average of 3.61 H-bonds per molecule,
smaller than the 3.77 obtained from PBE.
This reduction in the number of H-bonds
is consistent with the influences of the underlying SCAN functional on liquid water:
directional H-bonds are weakened and more easily broken by thermal fluctuations.
The increased disorder is further stabilized
by the inclusion of the intermediate-ranged vdW interactions naturally arising in SCAN.
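The text does not spell out the H-bond criterion behind Fig.~3(a); a widely used geometric definition (O-O distance and O-O-H angle cutoffs of roughly 3.5~\AA~and 30$^{\circ}$, in the spirit of Luzar-Chandler-type analyses) can be sketched as follows (illustrative code; the cutoff values are assumptions, not the authors' stated choice):

```python
import numpy as np

def is_h_bonded(r_O_donor, r_H, r_O_acceptor, r_cut=3.5, angle_cut=30.0):
    """Geometric H-bond criterion (a common Luzar-Chandler-type choice;
    cutoffs are assumptions, not the authors' stated definition):
    O-O distance < r_cut (Angstrom) and the angle between the O-O axis
    and the donor O-H bond < angle_cut (degrees)."""
    d_OO = r_O_acceptor - r_O_donor
    d_OH = r_H - r_O_donor
    if np.linalg.norm(d_OO) >= r_cut:
        return False
    cos_a = np.dot(d_OO, d_OH) / (np.linalg.norm(d_OO) * np.linalg.norm(d_OH))
    return bool(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) < angle_cut)

# nearly linear donor-H...acceptor arrangement: counted as an H-bond
linear = is_h_bonded(np.zeros(3), np.array([0.96, 0.0, 0.0]),
                     np.array([2.8, 0.1, 0.0]))
# H pointing away from the acceptor: rejected by the angular cutoff
bent = is_h_bonded(np.zeros(3), np.array([0.0, 0.96, 0.0]),
                   np.array([2.8, 0.1, 0.0]))
# acceptor too far away: rejected by the distance cutoff
far = is_h_bonded(np.zeros(3), np.array([0.96, 0.0, 0.0]),
                  np.array([4.0, 0.0, 0.0]))
```

Counting accepted donor-H-acceptor triples per molecule over a trajectory yields distributions like those in Fig.~3(a).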
The reduction of H-bonds produced by SCAN disrupts the tetrahedral structure of liquid water.
To quantify the amount of tetrahedral order, we adopt the tetrahedral order parameter
$q$~\cite{14JCP-Rob}.
A perfect tetrahedral local environment corresponds to $q=1$,
and $q$ decreases as the local structure becomes less tetrahedral.
Following experimental work~\cite{08L-Soper}, we evaluate $q$ using a cutoff radius
that yields an average coordination number of 4.
The resulting cutoffs
are 3.15 and 3.45 \AA~for SCAN and PBE, respectively, with SCAN in better agreement with
the cutoff of 3.18~\AA~inferred from experiment~\cite{08L-Soper}.
Despite the high first peak in the PBE $g_{\mathrm{OO}}$(r),
the shorter cutoff from SCAN suggests
a more compact first coordination shell,
consistent with the higher density of liquid water it predicts.
PBE results in an overly tetrahedral liquid (Table~1).
SCAN, however, yields $q$ in better agreement with experiments on heavy water~\cite{08L-Soper},
suggesting that SCAN provides a more accurate structural description of
the fluctuating H-bond network.
Three-body correlations in water can be quantified
by the bond angle distribution $P_{\mathrm{OOO}}(\theta)$,
where $\theta$ is the angle formed by an oxygen of a water molecule and two of its oxygen neighbors;
neighbors are defined using the same cutoff as above~\cite{08L-Soper}.
The PBE $P_{\mathrm{OOO}}$ in Fig.~3(b) displays a high peak centered around the tetrahedral angle, 109.5$^{\circ}$,
and is much narrower than the experimental distribution.
This indicates that PBE
overestimates the tetrahedral character of the liquid, consistent with the above-described overstructuring.
In stark contrast, the SCAN $P_{\mathrm{OOO}}$ is in excellent agreement with experiment,
with almost exactly the same widths and intensities of the two peaks close to 109.5$^{\circ}$ and 55$^{\circ}$ (Fig.~3(c)).
The peak located near 109.5$^{\circ}$ arises from tetrahedral structures.
The peak at $\theta\simeq55^{\circ}$ is related to broken H-bonds and interstitial, non-H-bonded water,
and major differences between SCAN and PBE are observed in this region of the distribution.
We decompose $P_{\mathrm{OOO}}(\theta)$ into three contributions
according to the number of H-bonds formed within each water triplet.
The $P_{\mathrm{OOO}}(\theta)$ of triplets formed with 2, 1, and 0 H-bonds
are plotted in Figs.~3(b) and~3(c).
Triplets involving 2 H-bonds dominate the PBE $P_{\mathrm{OOO}}(\theta)$,
while triplets with broken (0 or 1) H-bonds contribute much less.
In contrast, triplets with fewer than 2 H-bonds contribute significantly to the
$P_{\mathrm{OOO}}(\theta)$ predicted by SCAN, especially near $\theta\approx 55^{\circ}$.
The free energy as a function of $\theta$ and the distance $r$
between neighboring oxygen atoms in the triplet
reveals additional insights,
as shown in Figs.~3(d) and~3(e).
As expected, the minimum free energy corresponds to tetrahedral-like structures with $\theta\simeq109.5^{\circ}$
and $r\approx2.7$~\AA.
In contrast to PBE, SCAN predicts a significant fraction of triplets with
$\theta$ far from $109.5^{\circ}$, indicating that the SCAN liquid is more disordered.
The free energies suggest that
water molecules in the first coordination shell
experience a smaller free energy barrier to
adopt a broad range of $\theta$-values with SCAN than with PBE.
Importantly, there are substantial differences between the two functionals
in describing the dependence of the free energy on $r$.
With PBE, as $r$ is increased away from the free energy minimum,
$\theta$ hardly moves from 109.5$^{\circ}$, as depicted by the red arrow in Fig.~3(d).
This is consistent with the over-structuring of water by PBE
and implies that $\theta$ is weakly influenced by fluctuations of the first coordination shell.
In contrast, SCAN produces a stronger correlation between $r$ and $\theta$,
such that the free energy is lowered at larger $r$ by decreasing $\theta$,
illustrated by the red arrow in Fig.~3(e).
This is consistent with the higher population of non-H-bonded, interstitial waters in the SCAN prediction.
These non-tetrahedrally oriented water molecules contribute significantly to
$P_{\mathrm{OOO}}(\theta)$ below $109.5^{\circ}$ and highlight the reduced tetrahedrality
of the SCAN H-bond network.
\section{Dynamics}
Changes in the H-bond energy
alter the delicate enthalpy-entropy balance in liquid water that dictates its dynamic properties;
for example,
the breakage and formation of H-bonds through thermal fluctuations control diffusion.
Thus, stronger H-bonds tilt the enthalpy-entropy balance toward energetic contributions,
reducing the tendency to break H-bonds and consequently lowering the diffusion coefficient $D$.
We estimate $D$ from the long-time limit of the mean squared displacement,
averaged over the oxygen and hydrogen atoms (see SI).
Indeed, the $D$ value of PBE is an order of magnitude smaller than that of experiment,
while SCAN improves the estimate of $D$ to near agreement with experiment.
H-bond dynamics are more directly probed via the second-order
rotational correlation function of the O-H bond vector $\mathbf{r}_{\mathrm{OH}}$,
$C_2(t)=\langle P_2(\mathbf{r}_{\mathrm{OH}}(t)\cdot\mathbf{r}_{\mathrm{OH}}(0))\rangle/\langle P_2(\mathbf{r}_{\mathrm{OH}}(0)^2)\rangle$,
where $P_2(x)$ is a second-order Legendre polynomial.
The integral of $C_2(t)$ yields the rotational correlation time $\tau_2$ of the O-H bond;
correlation functions and details surrounding $\tau_2$ computation are given in the SI.
SCAN predicts a value of $\tau_2$ in agreement with
nuclear magnetic resonance spectroscopy~\cite{01JACS-Ropp}, Table~1,
while rotational dynamics are slowed in the PBE system.
The mechanism for rotational relaxation of the O-H bond vector is associated with breaking a H-bond.
In PBE, H-bonds are too strong, significantly hindering this pathway.
SCAN, by contrast, captures the relative weight of these relaxation pathways owing to its accurate description of H-bonding.
\section{Conclusions and Outlook}
The SCAN density functional provides
a genuinely predictive {\it ab initio} model of liquid water.
Importantly, SCAN is a long-awaited exchange-correlation functional
that can correctly predict liquid water that is denser than ice at ambient conditions.
SCAN accurately describes both covalent bonds and H-bonds due to an improved description of the electronic structure,
and captures intermediate-ranged vdW interactions
that further improve the structure and thermodynamics of liquid water.
These vdW forces can play a critical and active role at interfaces,
for example, underlying drying transitions~\cite{Baer:2011rz,11JSP-Remsing,13JPCB-Remsing},
instilling confidence that SCAN will enable predictive modeling of heterogeneous chemical environments.
However, there are still improvements to be made regarding the water structure.
SCAN predicts a slightly over-structured first peak of $g_{\mathrm{OO}}(r)$.
Previous studies have attributed the over-structuring
to self-interaction errors~\cite{81B-Perdew}, which can be mitigated by including
a fraction of exact exchange in hybrid functionals.
Moreover, the first peak in $g_{\mathrm{OH}}(r)$ is too narrow,
and the error is dominated by the lack of NQEs of hydrogen~\cite{08L-Morrone}.
The peaks in the computed DOS
are also narrower and more intense than those in the experimental DOS.
In fact, DFT is not rigorous for photoemission spectra, and
does not include lifetime broadening;
NQEs, however, can additionally broaden the DOS,
bringing the resulting widths and intensities into closer agreement with experiment~\cite{16L-WeiChen}.
NQEs can be accounted for within the Feynman discretized path-integral approach~\cite{08L-Morrone,13PNAS-Ceriotti,16L-WeiChen}.
In conclusion, the SCAN XC functional within DFT shows promising predictive power
and will likely enable confident {\it ab initio} predictions for complex systems
at the forefront of physics, chemistry, biology, and materials science~\cite{13PNAS-Klein}.
\section{Methods}
We performed Car-Parrinello molecular dynamics~\cite{85L-CPMD} in \textsc{Quantum ESPRESSO}~\cite{09JPCM-QE}.
We employed the Hamann-Schl\"{u}ter-Chiang-Vanderbilt pseudopotentials~\cite{79L-Hamann}
generated using PBE. The valence electrons, including
the $1s$ electron of H and the $2s^2p^4$ electrons of O, were treated explicitly.
The energy cutoff was 150 Ry.
Simulations were performed in the isothermal-isobaric ensemble (constant {\it NpT}) by using the Parrinello-Rahman barostat~\cite{80L-Parrinello}
and a single Nos\'{e}-Hoover thermostat~\cite{92JCP-Martyna} with a frequency of 60~THz
to maintain a constant pressure ($p$) and temperature ($T$), respectively.
$T=330$~K for liquid water and 273~K for ice I{\it h};
the 30~K increase above ambient conditions in the former mimics NQEs on the liquid structure~\cite{08L-Morrone}.
We adopted a cubic cell with $N$=64 water molecules.
The fictitious mass of the electrons was set to 100~au, and the corresponding
mass pre-conditioning with a kinetic energy cutoff of 25 Ry was applied
to all Fourier components of the wavefunctions~\cite{94B-Tassone}.
The deuterium mass was used instead of hydrogen to enable the use of a timestep of 2~au;
dynamics were compared to D$_{2}$O instead of H$_{2}$O.
SCAN and PBE trajectories for water were 30.0~and 20.0~ps in length, respectively.
Corresponding trajectories for ice I{\it h} were 11.1 and 13.8~ps, respectively.
The first 5~ps of each trajectory was used for equilibration and the remainder used for analysis.
We utilized a standard geometric criterion for hydrogen bonding;
covalently bonded O-H are associated with an O-H distance less than 1.24~\AA~and
H-bonds have an O-O distance less than 3.5~\AA~and a $\angle{\rm OOH}$ angle less than $30^{\circ}$~\cite{96N-Chandler}.
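As a concrete illustration, the geometric criterion above can be sketched as a simple test (minimal; periodic images and neighbor searching are omitted, and positions are assumed to be Cartesian coordinates in~\AA):

```python
import numpy as np

def is_hbond(d_O, h, a_O):
    """Geometric H-bond test between a donor oxygen d_O (carrying
    hydrogen h) and an acceptor oxygen a_O, following the criterion in
    the text: O-O distance < 3.5 A and OOH angle < 30 degrees."""
    oo = a_O - d_O
    oh = h - d_O
    r_oo = np.linalg.norm(oo)
    if r_oo >= 3.5:
        return False
    cos_a = np.dot(oo, oh) / (r_oo * np.linalg.norm(oh))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return angle < 30.0

# A nearly linear H-bond at an O-O separation of 2.8 A is accepted:
print(is_hbond(np.zeros(3), np.array([0.98, 0.0, 0.0]),
               np.array([2.8, 0.1, 0.0])))  # True
```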
Additional details are in the SI.
\newpage
\section{Author contributions}
X.W. and M.C. designed research. M.C. and Z.S. performed research.
H.-Y.K., M.F.C.A., and B.S. contributed new methods.
M.C. and R.C.R. analyzed data.
All authors contributed to writing the paper.
\section{Acknowledgements}
This work was supported by the U.S. Department of Energy (DOE) SciDac under Grant
\#DE-SC0008726. This research used resources of the National
Energy Research Scientific Computing Center, a DOE Office of Science User Facility
supported by the Office of Science of the U.S. DOE under
Contract \#DE-AC02-05CH11231. RCR, ZS, and JPP were supported as
part of the Center for the Computational Design of Functional Layered Materials, an Energy Frontier Research Center funded by the U.S. DOE, Office of Science, Basic Energy Sciences under Award \#DE-SC0012575.
MFCA is partially supported by the CNPq - Brazil.
XW is partially supported by the National Science Foundation (NSF), DMR
under Award \#DMR-1552287.
\newpage
\section{Supporting Information}
\subsection{{Computational Details}}
The {\it NpT} algorithm was implemented in the {\sc Quantum ESPRESSO}~\cite{09JPCM-QE} package.
In our water simulations, all of the plane waves $\{\mathbf{G}\}$ with kinetic energies below 150 Ry were included, and we followed Ref.~\cite{bernasconi}
to maintain a constant plane-wave
kinetic energy cutoff of $E_{0}=130$ Ry for a fluctuating cell
by adding a smooth step function with
height $A=200$ Ry and width $\sigma=15$ Ry to the plane-wave kinetic
factor as $\mathbf{G}^{2} \rightarrow \mathbf{G}^{2} + A[1+\mathrm{erf}(\frac{\mathbf{G}^2/2-E_0}{\sigma})]$,
where $\mathrm{erf}$ is the error function.
The reference cells were chosen to be cubes with side lengths of 14.3345 \AA~and
12.6579 \AA~for the PBE- and SCAN-based AIMD simulations, respectively.
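Taking the step function exactly as written above (all energies in Ry), its limiting behavior can be checked numerically: well below the target cutoff the kinetic factor is essentially unchanged, while well above it the penalty of $\sim$$2A$ effectively removes the plane wave from the basis as the cell fluctuates.

```python
from math import erf

def effective_g2(g2, E0=130.0, A=200.0, sigma=15.0):
    """Smoothly augmented plane-wave kinetic factor (energies in Ry):
    G^2 -> G^2 + A * [1 + erf((G^2/2 - E0)/sigma)]."""
    return g2 + A * (1.0 + erf((g2 / 2.0 - E0) / sigma))

# Far below the cutoff the correction vanishes...
print(round(effective_g2(100.0), 3))  # 100.0
# ...far above it the ~2A = 400 Ry penalty dominates.
print(round(effective_g2(400.0), 3))  # 800.0
```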
We ran parallel AIMD simulations using both the
strongly constrained and appropriately normed (SCAN)~\cite{15L-Sun}
and Perdew-Burke-Ernzerhof (PBE)~\cite{96L-PBE} exchange-correlation functionals
on 216 computer cores and recorded the wall times of 100 MD steps.
The system was bulk liquid water consisting of 64 water molecules, as utilized in this work.
Both simulations were carried out on nodes with 2$\times$ Intel Ivy Bridge @ 2.4 GHz and up to 64 GB RAM.
We obtained 6.44 and 3.89 seconds/step for the SCAN and PBE functionals, respectively,
with SCAN being only 1.66 times more costly than PBE.
We therefore conclude that studying liquid water with the SCAN functional is
not dramatically more expensive than with the PBE functional,
and the two can be considered comparable in cost.
\subsection{Bulk Densities}
Bulk densities as a function of time are shown in Fig. 4 for the
SCAN and PBE descriptions
of liquid water and ice I{\it h}.
The ice I{\it h} phase remained solid throughout and did not transform to the liquid phase in the AIMD trajectories.
The averaged densities are listed in Table 1 in the main text, with
the error bars corresponding to one standard deviation.
With PBE, the dynamics of ambient liquid water is very sluggish,
and we find that at 330 K the mean density of liquid water
computed with PBE (0.85 g/mL, as shown in Table 1 in the main text)
can vary within $\sim$0.01 g/mL (as estimated from another independent run of
more than 60 ps performed by members of Roberto Car's group with PBE and all the same parameters),
depending on the initial configuration and trajectory length.
Since the PBE liquid density is well below the PBE ice I{\it h} density,
we did not try to further reduce the statistical uncertainty in the PBE liquid density.
In addition, a recent study found that the structural, dynamical, and electronic properties of liquid water
obtained from AIMD simulations with the {\sc CP2K}~\cite{CP2K} and {\sc Quantum ESPRESSO} packages compare well~\cite{16JCTC-Miceli},
with the latter yielding an equilibrium density 0.02 g/mL lower, and a $g_{\rm OO}(r)$ first peak $\sim$0.1 higher, than the corresponding {\sc CP2K} values.
These differences are comparable to the statistical uncertainties.
Therefore, the small difference in the PBE liquid density found here (0.85 versus 0.865--0.887 g/mL in Ref.~\cite{09JPCB-Schmidt}),
as well as the slightly higher $g_{\rm OO}(r)$ first peak (3.61 versus 3.36--3.54~\cite{09JPCB-Schmidt}),
is attributed to statistical uncertainties and numerical differences between these approaches.
The densities clearly fluctuate around an average value for each trajectory, illustrating equilibration of the trajectories.
Moreover, the fact that water is denser than ice I{\it h} in the SCAN prediction is clearly observed,
while the opposite is found for PBE.
Finally, we note that fluctuations in the density of water are larger in the SCAN trajectory than in the PBE
trajectory.
This indicates that water is more compressible in the SCAN description than in PBE, which produces
a more rigid and ordered liquid structure.
\begin{figure}[tbh]
\label{figs1}
\begin{center}
\includegraphics[width=0.43\textwidth]{FigureS1.pdf}
\end{center}
\caption
{Density fluctuations of liquid water and ice I{\it h}
as obtained from both SCAN and PBE trajectories using the isobaric-isothermal ensemble ({\it NpT}).
A relatively shorter trajectory was generated for ice I{\it h}
because the density of the solid phase converges quickly.
The PBE functional incorrectly predicts that ice I{\it h} (green line) is denser than water (orange line),
while the SCAN functional successfully captures the larger density of water (blue line) than that of ice I{\it h} (pink line).
The black dashed and dotted lines represent the approximate experimental values of water density
at ambient conditions (300 K)
and ice density at 273 K under ambient pressure. An additional 30 K was applied to water to
mimic nuclear quantum effects~\cite{08L-Morrone}.
The averaged densities are listed in Table 1 in the main text.
}
\end{figure}
\subsection{van der Waals Interactions in SCAN and PBE}
As discussed extensively in the main text,
an accurate description of water and ice from first principles is challenging
because the H-bond network of water arises from a delicate balance of strong intra-molecular covalent bonds,
weak inter-molecular H-bonds, and even weaker vdW interactions.
GGA functionals exhibit delocalization problems and
the intermediate- and long-ranged vdW attraction is strongly underestimated.
To be specific, the exchange energy density obtained from a GGA
is much more negative than that of the LDA in regions with a large reduced density gradient.
Therefore, the attractive vdW interactions between water molecules are missing in GGAs,
which results in more ordered water molecules and a lower bulk density.
The SCAN functional captures this delicate balance between covalent bonds, H-bonds, and vdW interactions in water,
which is critical in accurately describing the water density.
By including a dimensionless variable $\alpha$ constructed from the kinetic energy density,
SCAN can interpolate between different limits by recognizing
covalent single bonds when $\alpha=0$,
slowly varying densities when $\alpha\simeq1$,
and non-covalent bonds when $\alpha>1$.
It was recently demonstrated that SCAN captures the intermediate-ranged vdW interactions
for a variety of materials~\cite{16NC-Sun}.
To further illustrate this point,
we applied the Tkatchenko-Scheffler (TS) vdW scheme~\cite{09vdW-TS} to both the PBE and SCAN functionals.
The TS scheme determines the ``turn-on'' radius for atom pairs based on the XC functional used,
and a larger TS scaling parameter $S_R$ implies that a larger radius is adopted to turn on the vdW interactions.
The scaling parameters $S_R$ were obtained by fitting to the S22 molecular database.
We find that in both water and ice structures, the scaling parameters are 0.94 and 1.17 for the PBE and SCAN functionals, respectively.
The 24.5\% larger scaling parameter in SCAN demonstrates that it captures the vdW interactions out to significantly larger distances than PBE.
\subsection{Mean Squared Displacements}
\begin{figure}[tb]
\label{figs2}
\begin{center}
\includegraphics[width=0.35\textwidth]{FigureS2.pdf}
\end{center}
\caption
{Mean squared displacements for the SCAN and PBE systems consisting of 64 water molecules.
Shaded regions indicate one standard error.
}
\end{figure}
The mean squared displacements (MSDs) computed from SCAN and PBE trajectories
are shown in Fig. 5, where the center-of-mass positions of water molecules
were used to compute MSDs.
Each trajectory was divided into sections to compute the MSDs, each 12~ps in length and separated by 3.0~ps.
We chose five and three sections for SCAN and PBE trajectories, respectively.
Next, a linear fitting of the long-time, linear region of the MSDs was performed to obtain the $D$ values. Finally, the obtained $D$ values were averaged and the result is listed in Table~1 of the main text; the slope of the linear region is equal to $6D$.
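The fit described above can be sketched as follows (a minimal illustration with synthetic data; the actual analysis uses the sectioned AIMD trajectories and window parameters described in the text):

```python
import numpy as np

def diffusion_from_msd(t, msd, t_min):
    """Fit the long-time (t >= t_min) linear region of a 3D mean
    squared displacement and return D = slope / 6."""
    mask = t >= t_min
    slope, _ = np.polyfit(t[mask], msd[mask], 1)
    return slope / 6.0

# Synthetic check: an ideal MSD = 6*D*t with D = 0.19 A^2/ps
# recovers the input diffusion coefficient.
t = np.linspace(0.0, 12.0, 200)
msd = 6.0 * 0.19 * t
print(round(diffusion_from_msd(t, msd, t_min=2.0), 3))  # 0.19
```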
Clearly, water diffuses much faster in the SCAN description than PBE,
due to the weaker H-bonds predicted by the SCAN functional.
The computed $D$ from SCAN and PBE are 0.190 and 0.018 \AA$^2$/ps, respectively.
The diffusion coefficient from PBE is close to the
0.020 \AA$^2$/ps reported in a previous work~\cite{14JCP-Rob}.
However, we note that correlations may exist in divided sections and
affect the accuracy of computed diffusion coefficients~\cite{04JPCB-Kuo,06JCTC-Kuo}.
More accurate diffusion coefficients require longer simulation times
and we leave this investigation for future work.
\subsection{Rotational Time Correlation Functions}
\begin{figure}[tbh]
\label{figs3}
\begin{center}
\includegraphics[width=0.49\textwidth]{FigureS3.pdf}
\end{center}
\caption
{Second-order rotational time correlation functions, $C_2(t)$,
for the O-H bond vector of water as described by SCAN and PBE.
Panel (b) is the same as (a), but zoomed in on the region from 0 to 0.5 ps.
}
\end{figure}
\begin{figure}[tb]
\label{figs4}
\begin{center}
\includegraphics[width=0.43\textwidth]{FigureS4.pdf}
\end{center}
\caption
{Oxygen-oxygen radial distribution functions
$g_{\mathrm{OO}}(r)$ for 32 water molecules in condensed phase
as obtained from
the SCAN functional implemented in VASP and {\sc Quantum ESPRESSO}
electronic structure packages.
}
\end{figure}
The second-order rotational correlation function for the O-H bond vector $\mathbf{r}_{\mathrm{OH}}$
was calculated according to
$C_2(t)=\langle P_2(\mathbf{r}_{\mathrm{OH}}(t)\cdot \mathbf{r}_{\mathrm{OH}}(0))\rangle/\langle P_{2}(\mathbf{r}_{\mathrm{OH}}(0)^2)\rangle$,
where $P_2(x)$ is a second-order Legendre polynomial.
These time correlation functions as predicted for water by PBE and SCAN are shown in Fig. 6.
Clearly, SCAN results in significantly faster rotational dynamics,
evidenced by the much faster decay of the SCAN $C_2(t)$ than that of PBE.
As discussed in the main text, this is because SCAN results in weaker and more physical hydrogen bonding interactions than PBE.
The short-time behavior of $C_2(t)$ is shown in Fig. 6(b).
We find that the initial short-time decay ($<$50 fs) of $C_2(t)$ is identical in the two models.
This initial decay is due to rapid inertial,
librational motions of water that do not require hydrogen bond breakage.
Thus, in both models, hydrogen bonds are still intact on this short timescale and no major differences between SCAN and PBE are found.
However, the oscillation in $C_2(t)$ occurs at different times in the two models.
A larger oscillation is found in PBE, which occurs prior to the slight oscillation in the SCAN $C_2(t)$.
Finally, to estimate the rotational correlation time $\tau_2$ of the O-H bond,
we must integrate $C_2(t)$. To do so, we first note that the long time decay of $C_2(t)$
is well described by an exponential.
Thus, we fit $C_2(t)$ to an exponential at long times (after the initial change in slope associated with librations).
The fitted exponential is then used to describe the decay of $C_2(t)$ for times longer than 7 and 9 ps in PBE and SCAN, respectively.
We then numerically integrate this composite $C_2(t)$ to obtain $\tau_2$; the resulting values are listed in Table 1 of the main text.
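The composite integration described above can be sketched as follows (illustrative only; the tail-fit window and crossover times here are placeholders, not those of the actual analysis):

```python
import numpy as np

def tau2_composite(t, c2, t_fit):
    """Rotational correlation time tau_2 = integral of C2(t) over time.

    C2 is integrated numerically (trapezoidal rule) up to t_fit; beyond
    t_fit the decay is replaced by an exponential a*exp(-t/tau) fitted
    to the tail, whose integral from t_fit to infinity is
    a * tau * exp(-t_fit/tau)."""
    # Fit ln C2 = ln(a) - t/tau on the tail (assumes C2 > 0 there).
    tail = t >= 0.5 * t_fit
    slope, ln_a = np.polyfit(t[tail], np.log(c2[tail]), 1)
    tau, a = -1.0 / slope, np.exp(ln_a)
    head = t <= t_fit
    th, ch = t[head], c2[head]
    direct = 0.5 * np.sum((ch[1:] + ch[:-1]) * np.diff(th))
    return direct + a * tau * np.exp(-t_fit / tau)

# Sanity check on a pure exponential with tau = 2 ps:
t = np.linspace(0.0, 10.0, 1001)
c2 = np.exp(-t / 2.0)
print(round(tau2_composite(t, c2, t_fit=8.0), 3))  # 2.0
```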
\subsection{Validation of SCAN in Different Packages}
To further validate our results employing the SCAN functional for liquid water,
we ran AIMD simulations on
a cell of 32 water molecules for 20 ps by employing both Vienna Ab initio Simulation Package
(VASP)~\cite{96B-Kresse} and
{\sc Quantum ESPRESSO}~\cite{09JPCM-QE} packages.
The cell was chosen to be a cube of side length 9.877 \AA.
We adopted the NVT ensemble with the Nos\'{e}-Hoover thermostat and
the temperature was set to 330 K. We used the mass of deuterium instead of hydrogen to speed up the convergence.
In VASP, we used projector-augmented-wave (PAW) potentials
with configurations of [O]2s$^2$2p$^4$ and [H]1s$^1$. In particular,
we chose the hard PAW potentials for oxygen and hydrogen atoms
and set the energy cutoff to 1200 eV in order to converge our results.
The Born-Oppenheimer molecular dynamics
was performed with a time step of 0.5 fs.
In {\sc Quantum ESPRESSO}, we carried out Car-Parrinello molecular dynamics~\cite{85L-CPMD}
and the settings were chosen to be the same as described in the main text.
In Fig. 7, the oxygen-oxygen radial distribution functions
$g_{\mathrm{OO}}(r)$ are shown for the above two calculations.
We find both electronic structure packages yield almost the same $g_{\mathrm{OO}}(r)$ features.
In particular, the first peaks from the two packages are almost identical.
The results suggest that the properties of liquid water
as predicted by the SCAN functional are reliable and are reproducible
with converged basis set and electron dynamics.
\subsection{Radial Distribution Function $g_{OH}$}
We plot in Fig. 8 the zoomed-in first peak of the radial distribution
function $g_{\mathrm{OH}}$ from both PBE- and SCAN-based AIMD simulations.
The first peak position represents the length of the O-H covalent bond, and
we can see that SCAN predicts a slightly shorter covalent bond (0.977 \AA)
than that from PBE (0.989 \AA).
\begin{figure}[tbh]
\label{fig5}
\begin{center}
\includegraphics[width=0.35\textwidth]{FigureS5.pdf}
\end{center}
\caption
{Zoomed-in radial distribution function $g_{\mathrm{OH}}$ (as shown in Fig. 1(b) in the main text)
as obtained from SCAN- and PBE-based AIMD simulations.
}
\end{figure}
\newpage
\section{Rotation numbers}
\label{app:rotation_numbers}
Let $\widetilde{\text{Sp}}(2)$ denote the universal cover of the group $\text{Sp}(2)$ of $2\times 2$ real symplectic matrices. Let $\operatorname{Diff}(S^1)$ denote the group of orientation-preserving $C^1$ diffeomorphisms\footnote{For the most part we could work more generally with orientation-preserving homeomorphisms.} of $S^1={\mathbb R}/{\mathbb Z}$, and let $\widetilde{\operatorname{Diff}}(S^1)$ denote its universal cover. In this appendix, we review two invariants of elements of $\widetilde{\text{Sp}}(2)$, and more generally $\widetilde{\operatorname{Diff}}(S^1)$: the rotation number $\rho$ and the ``minimum rotation number'' $r$. The former is a standard notion in dynamics and is a key ingredient in Theorem~\ref{thm:smoothtocomb}; and we use the latter to bound the former. We also explain how to use rotation numbers to efficiently compute certain products in $\widetilde{\text{Sp}}(2)$, which is needed for our algorithms.
\subsection{Rotation numbers of circle diffeomorphisms} \label{subsubsec:the_dynamical_rotation_number_of_circle_diffeomorphisms}
We can identify the universal cover $\widetilde{\text{Diff}}(S^1)$ with the group of $C^1$ diffeomorphisms $\Phi:{\mathbb R}\to{\mathbb R}$ which are ${\mathbb Z}$-equivariant in the sense that $\Phi(t+1)=\Phi(t)+1$ for all $t\in{\mathbb R}$. Such a diffeomorphism of ${\mathbb R}$ descends to an orientation-preserving diffeomorphism of $S^1$, and this defines the covering map $\widetilde{\text{Diff}}(S^1)\to\text{Diff}(S^1)$.
\begin{definition}
\label{def:dynamical_rotation_number_for_S1}
Given $\sigma\in S^1$, we define the {\bf rotation number with respect to $\sigma$\/}, denoted by
\[
r_\sigma:\widetilde{\text{Diff}}(S^1) \longrightarrow {\mathbb R},
\]
as follows. Let $\Phi$ be a ${\mathbb Z}$-equivariant diffeomorphism of ${\mathbb R}$ as above. Let $t\in{\mathbb R}$ be a lift of $\sigma\in{\mathbb R}/{\mathbb Z}$. We then define
\begin{equation}
\label{eqn:def_of_dynamical_rotation_wrts}
r_\sigma(\Phi) = \Phi(t) - t.
\end{equation}
\end{definition}
\begin{definition}
Given $\Phi\in\widetilde{\text{Diff}}(S^1)$, we define the {\bf rotation number\/}
\begin{equation}
\label{eqn:defrhophi}
\rho(\Phi) = \lim_{n\to\infty}\frac{r_\sigma(\Phi^n)}{n} \in {\mathbb R}
\end{equation}
where $\sigma\in S^1$. This limit does not depend on the choice of $\sigma$. Equivalently,
\begin{equation}
\label{eqn:defrhophi2}
\rho(\Phi) = \lim_{n\to\infty}\frac{\Phi^n(t)-t}{n}
\end{equation}
where $t\in{\mathbb R}$.
\end{definition}
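The limit \eqref{eqn:defrhophi2} suggests a straightforward finite-$n$ numerical estimate for concrete lifts; a sketch (the perturbed-rotation example is purely illustrative and not taken from the text):

```python
import math

def rotation_number(phi, t0=0.0, n=100000):
    """Estimate rho(Phi) = lim (Phi^n(t) - t)/n for a Z-equivariant
    lift phi: R -> R of a circle diffeomorphism.  Finite-n estimate,
    accurate to O(1/n) in general."""
    t = t0
    for _ in range(n):
        t = phi(t)
    return (t - t0) / n

# Lift of a rigid rotation: rho equals the translation amount exactly.
print(round(rotation_number(lambda t: t + 0.35), 6))  # 0.35

# A perturbed lift; the orbit starting at 0 is periodic with period 2,
# so the rotation number locks onto the rational value 1/2.
phi = lambda t: t + 0.5 + 0.05 * math.sin(2 * math.pi * t)
print(round(rotation_number(phi), 6))  # 0.5
```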
Note that we have the ${\mathbb Z}$-equivariance property
\begin{equation}
\label{eqn:rhoequivariant}
\rho(\Phi+1)=\rho(\Phi)+1.
\end{equation}
We can bound the rotation number as follows.
\begin{definition}
We define the {\bf minimum rotation number\/} $r:\widetilde{\text{Diff}}(S^1)\to{\mathbb R}$ by
\begin{equation}
\label{eqn:def_of_dynamical_rotation}
r\left(\Phi\right) = \min_{\sigma\in S^1} r_\sigma\left(\Phi\right).
\end{equation}
\end{definition}
Alternatively, if $\Phi\in\widetilde{\operatorname{Diff}}(S^1)$ is presented as a piecewise smooth path $\{\phi_t\}_{t\in[0,1]}$ in $\operatorname{Diff}(S^1)$ with $\phi_0=\operatorname{id}_{S^1}$, then
\[
r(\Phi) = \min_{\sigma\in S^1}\int_0^1\frac{d}{ds}\phi_s(\sigma)ds.
\]
In particular, it follows that
\begin{equation}
\label{eqn:rminbound}
r(\Phi) \ge \int_0^1\min_{\sigma\in S^1}\left(\frac{d}{ds}\phi_s(\sigma)\right)\,ds.
\end{equation}
It follows from the definitions that
\begin{equation}
\label{eqn:rhor}
\rho(\Phi) \ge r(\Phi).
\end{equation}
\subsection{A partial order}
\begin{definition}
\label{def:order_on_DiffS1}
We define a partial order $\ge$ on $\widetilde{\text{Diff}}(S^1)$ as follows:
\begin{equation}
\Phi \ge \Psi \text{ if and only if } r_\sigma(\Phi) \ge r_\sigma(\Psi) \text{ for all }\sigma \in S^1.
\end{equation}
Equivalently, $\Phi(t)\ge \Psi(t)$ for all $t\in{\mathbb R}$.
\end{definition}
\begin{lemma}
\label{lem:order_on_DiffS1_invariance}
The partial order $\ge$ on $\widetilde{\text{Diff}}(S^1)$ is left and right invariant.
\end{lemma}
\begin{proof}
Let $\Phi,\Psi,\Theta \in \widetilde{\text{Diff}}(S^1)$, and suppose that $\Phi\ge\Psi$, i.e.
\begin{equation}
\label{eqn:phigreaterthanpsi}
\Phi(t) \ge \Psi(t)
\end{equation}
for every $t\in{\mathbb R}$. We need to show that $\Phi\Theta\ge \Psi\Theta$ and $\Theta\Phi\ge\Theta\Psi$.
Since $\Theta:{\mathbb R}\to{\mathbb R}$ is an orientation preserving diffeomorphism, it preserves the order on ${\mathbb R}$, so it follows from \eqref{eqn:phigreaterthanpsi} that
\[
\Theta(\Phi(t)) \ge \Theta(\Psi(t))
\]
for every $t\in{\mathbb R}$, so $\Theta\Phi\ge\Theta\Psi$.
On the other hand, replacing $t$ by $\Theta(t)$ in the inequality \eqref{eqn:phigreaterthanpsi}, we deduce that
\[
\Phi(\Theta(t)) \ge \Psi(\Theta(t))
\]
for every $t\in{\mathbb R}$, so $\Phi\Theta\ge\Psi\Theta$.
\end{proof}
\begin{lemma}
\label{lem:rhoorder}
If $\Phi,\Psi\in\widetilde{\text{Diff}}(S^1)$ and $\Phi\ge\Psi$, then $\rho(\Phi)\ge \rho(\Psi)$.
\end{lemma}
\begin{proof}
By \eqref{eqn:defrhophi2}, it is enough to show that given $t\in{\mathbb R}$, we have $\Phi^n(t)\ge \Psi^n(t)$ for each positive integer $n$. This follows by induction on $n$, using the fact that $\Phi$ preserves the order on ${\mathbb R}$.
\end{proof}
\subsection{Rotation numbers of symplectic matrices}
\label{subsubsec:the_symplectic_rotation_number}
There is a natural homomorphism $\text{Sp}(2)\to\text{Diff}(S^1)$, sending a symplectic linear map $A:{\mathbb R}^2\to{\mathbb R}^2$ to its action on the set of positive rays (identified with ${\mathbb R}/{\mathbb Z}$ by the map sending $t\in{\mathbb R}/{\mathbb Z}$ to the ray through $e^{2\pi i t}$). This lifts to a canonical homomorphism $\widetilde{\text{Sp}}(2)\to\widetilde{\text{Diff}}(S^1)$. Under this homomorphism, the invariants $r_\sigma$, $r$, and $\rho$ defined above pull back to functions $\widetilde{\text{Sp}}(2)\to{\mathbb R}$, which by abuse of notation we denote using the same symbols.
We can describe the rotation number $\rho:\widetilde{\text{Sp}}(2)\to{\mathbb R}$ more explicitly in terms of the following classification of elements of the symplectic group $\text{Sp}(2)$.
\begin{definition}
\label{def:classifySp2}
Let $A \in \text{Sp}(2)$. We say that $A$ is
\begin{itemize}
\item {\bf positive hyperbolic} if $\op{Tr}(A) > 2$ and {\bf negative hyperbolic} if $\op{Tr}(A) < -2$.
\item a {\bf positive shear} if $\op{Tr}(A) = 2$ and a {\bf negative shear} if $\op{Tr}(A) = -2$.
\item {\bf positive elliptic} if $-2 < \op{Tr}(A) < 2$ and $\det([v,Av]) > 0$ for all $v \in {\mathbb R}^2\setminus\{0\}$.
\item {\bf negative elliptic} if $-2 < \op{Tr}(A) < 2$ and $\det([v,Av]) < 0$ for all $v \in {\mathbb R}^2\setminus\{0\}$.
\end{itemize}
\end{definition}
By the equivariance property \eqref{eqn:rhoequivariant}, the rotation number $\rho:\widetilde{\text{Sp}}(2)\to{\mathbb R}$ descends to a ``mod ${\mathbb Z}$ rotation number'' $\bar{\rho}:\text{Sp}(2)\to{\mathbb R}/{\mathbb Z}$.
\begin{lemma}
\label{lem:compute_rho_bar}
The mod $\mathbb{Z}$ rotation number $\bar{\rho}:\text{Sp}(2) \to {\mathbb R}/{\mathbb Z}$ can be computed as follows:
\[
\bar{\rho}(A) = \left\{
\begin{array}{ccc}
0 & \text{ if } & A \text{ is positive hyperbolic or a positive shear,}\\
\frac{1}{2} & \text{ if } & A \text{ is negative hyperbolic or a negative shear,}\\
\theta & \text{ if } & A \text{ is positive elliptic with eigenvalues }e^{\pm 2 \pi i \theta} \text{ for } \theta \in (0,\frac{1}{2}),\\
-\theta & \text{ if } & A \text{ is negative elliptic with eigenvalues }e^{\pm 2 \pi i \theta} \text{ for } \theta \in (0,\frac{1}{2}).\\
\end{array}
\right.
\]
\end{lemma}
\begin{proof}
In the first two cases, $A$ has $1$ or $-1$ as an eigenvalue. This means that there exists $\sigma\in S^1$ which is fixed or sent to its antipode, and one can use this $\sigma$ in the definition \eqref{eqn:defrhophi}.
In the third case, $A$ is conjugate to rotation by $2\pi\theta$. One can then lift $A$ to an element of $\widetilde{\text{Sp}}(2)$ whose image in $\widetilde{\text{Diff}}(S^1)$ is a ${\mathbb Z}$-equivariant diffeomorphism $\Phi:{\mathbb R}\to{\mathbb R}$ such that $|\Phi^n(t)-t-n\theta|<1$ for each $t\in{\mathbb R}$. It then follows from \eqref{eqn:defrhophi2} that $\rho(\Phi)=\theta$. The last case is analogous.
\end{proof}
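Lemma~\ref{lem:compute_rho_bar} together with Definition~\ref{def:classifySp2} gives a finite procedure for computing $\bar{\rho}$; a sketch (the branch test evaluates $\det([v,Av])$ at $v=e_1$, which equals the entry $c$ and suffices because this determinant has a constant sign for elliptic $A$):

```python
import math

def rho_bar(a, b, c, d):
    """Mod-Z rotation number of A = [[a, b], [c, d]] in Sp(2), i.e.
    with det A = 1, returned as a representative in [0, 1)."""
    assert abs(a * d - b * c - 1.0) < 1e-9
    tr = a + d
    if tr >= 2.0:
        return 0.0   # positive hyperbolic, or a positive shear
    if tr <= -2.0:
        return 0.5   # negative hyperbolic, or a negative shear
    # Elliptic: eigenvalues e^{+-2 pi i theta} with cos(2 pi theta) = tr/2;
    # the sign of det([e_1, A e_1]) = c selects the branch.
    theta = math.acos(tr / 2.0) / (2.0 * math.pi)  # theta in (0, 1/2)
    return theta if c > 0 else 1.0 - theta

# Rotation by angle 2*pi*theta is positive elliptic with rho_bar = theta:
th = 0.2
co, si = math.cos(2 * math.pi * th), math.sin(2 * math.pi * th)
print(round(rho_bar(co, -si, si, co), 6))  # 0.2
```

Note that for $-2<\operatorname{Tr}(A)<2$ the entry $c$ cannot vanish, since $c=0$ would force real eigenvalues $a,d$ with $ad=1$ and hence $|\operatorname{Tr}(A)|\ge 2$.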
\subsection{Computing products in $\widetilde{\text{Sp}}(2)$}
\label{subsec:computing_with_Sp2}
Observe that $\widetilde{\text{Sp}}(2)$ can be identified with the set of pairs $(A,r)$, where $A\in\text{Sp}(2)$ and $r\in{\mathbb R}$ is a lift of $\overline{\rho}(A)\in{\mathbb R}/{\mathbb Z}$. The identification sends a lift $\widetilde{A}$ to the pair $(A,\rho(\widetilde{A}))$.
For computational purposes, we can keep track of the lifts of $A$ using less information, which is useful when, for example, we do not want to compute $\overline{\rho}(A)$ exactly. Namely, we can identify a lift $\widetilde{A}$ with a pair $(A,r)$, where $r$ is either an integer (when $A$ has positive eigenvalues), an open interval $(n,n+1/2)$ for some integer $n$ (when $A$ is positive elliptic), a half-integer (when $A$ has negative eigenvalues), or an open interval $(n-1/2,n)$ for some integer $n$ (when $A$ is negative elliptic).
The following proposition allows us to compute products in the group $\widetilde{\text{Sp}}(2)$ in terms of the above data, in the cases that we need (see Remark~\ref{rem:ucmult}).
\begin{proposition}
\label{prop:ucmult}
Let $\widetilde{A},\widetilde{B} \in \widetilde{\text{Sp}}(2)$. Suppose that $\rho(\widetilde{A})\in(0,1/2)$. Then
\[
\rho(\widetilde{B}) \le \rho(\widetilde{A}\widetilde{B}) \le \rho(\widetilde{B}) + \frac{1}{2}.
\]
\end{proposition}
To apply this proposition, if for example $\widetilde{B}$ is described by the pair $(B,(m,m+1/2))$, then it follows that $\widetilde{A}\widetilde{B}$ is described by either $(AB,(m,m+1/2))$, $(AB,m+1/2)$, or $(AB,(m+1/2,m+1))$. To decide which of these three possibilities holds, by Lemma~\ref{lem:compute_rho_bar} it is enough to check whether $AB$ is positive elliptic, has negative eigenvalues, or is negative elliptic.
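This three-way decision can be sketched numerically as follows; this is an illustrative Python fragment using floating-point classification with a small tolerance in place of the exact checks. The positive-eigenvalue cases cannot occur here, since $\rho(\widetilde{A}\widetilde{B})$ lies in the open interval $(m,m+1)$.

```python
import math

def product_lift_data(AB, m, tol=1e-9):
    """Given rho(A~) in (0, 1/2) and B~ described by (B, (m, m+1/2)),
    return the data describing A~B~: one of (AB, (m, m+1/2)),
    (AB, m+1/2), or (AB, (m+1/2, m+1)).  AB is the 2x2 product matrix."""
    (a, b), (c, d) = AB
    tr = a + d
    if tr <= -2 + tol:                   # AB has negative eigenvalues
        return ('half-integer', m + 0.5)
    if abs(tr) < 2 - tol and c > 0:      # AB positive elliptic
        return ('interval', m, m + 0.5)
    if abs(tr) < 2 - tol and c < 0:      # AB negative elliptic
        return ('interval', m + 0.5, m + 1)
    # Positive eigenvalues would force rho(A~B~) to be an integer,
    # which is excluded here since rho(A~B~) lies in (m, m+1).
    raise ValueError("unexpected case")
```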
\begin{proof}[Proof of Proposition~\ref{prop:ucmult}.]
Let $\Phi$ and $\Psi$ denote the elements of $\widetilde{\text{Diff}}(S^1)$ determined by $\widetilde{A}$ and $\widetilde{B}$ respectively.
Let $\Theta:{\mathbb R}\to {\mathbb R}$ denote translation by $1/2$. By Lemma~\ref{lem:compute_rho_bar}, $\widetilde{A}$ projects to a positive elliptic element of $\text{Sp}(2)$. It follows that with respect to the partial order on $\widetilde{\text{Diff}}(S^1)$, we have
\[
\operatorname{id}_{\mathbb R} \le \Phi \le \Theta.
\]
By Lemma~\ref{lem:order_on_DiffS1_invariance}, we can multiply on the right by $\Psi$ to obtain
\[
\Psi \le \Phi\Psi \le \Theta\Psi.
\]
Using Lemma~\ref{lem:rhoorder}, we deduce that
\[
\rho(\Psi) \le \rho(\Phi\Psi) \le \rho(\Theta\Psi).
\]
Since $\Psi$ comes from a linear map, it commutes with $\Theta$, so we have
\[
\rho(\Theta\Psi) = \rho(\Psi) + \frac{1}{2}.
\]
Combining the above two lines completes the proof.
\end{proof}
\section{Introduction and main results}
\label{sec:introduction_and_main_results}
This paper is about computational methods for testing Viterbo's conjecture and related conjectures, via combinatorial Reeb dynamics.
\subsection{Review of Viterbo's conjecture}
\label{sec:reviewviterbo}
We first recall two different versions of Viterbo's conjecture.
Consider ${\mathbb R}^{2n}={\mathbb C}^n$ with coordinates $z_i=x_i+\sqrt{-1}y_i$ for $i=1,\ldots,n$. Define the standard Liouville form
\[
\lambda_0 = \frac{1}{2}\sum_{i=1}^n\left(x_i\,dy_i - y_i\,dx_i\right).
\]
Let $X$ be a compact domain in ${\mathbb R}^{2n}$ with smooth boundary $Y$. Assume that $X$ is ``star-shaped'', by which we mean that $Y$ is transverse to the radial vector field. Then the $1$-form $\lambda = \lambda_0|_Y$ is a contact form on $Y$. Associated to $\lambda$ are the contact structure $\xi=\op{Ker}(\lambda)\subset TY$ and the Reeb vector field $R$ on $Y$, characterized by $d\lambda(R,\cdot)=0$ and $\lambda(R)=1$. A {\bf Reeb orbit\/} is a periodic orbit of $R$, i.e. a map $\gamma:{\mathbb R}/T{\mathbb Z}\to Y$ for some $T>0$ such that $\gamma'(t)=R(\gamma(t))$, modulo reparametrization. The {\bf symplectic action\/} of a Reeb orbit $\gamma$, denoted by $\mc{A}(\gamma)$, is the period of $\gamma$, or equivalently
\begin{equation}
\label{eqn:symplecticaction}
\mc{A}(\gamma) = \int_{{\mathbb R}/T{\mathbb Z}}\gamma^*\lambda_0.
\end{equation}
Reeb orbits on $Y$ always exist. This was first proved by Rabinowitz \cite{rabinowitz} and is a special case of the Weinstein conjecture; see \cite{tw} for a survey. We are interested here in the minimal period of a Reeb orbit on $Y$, which we denote by $\mc{A}_{\operatorname{min}}(X)\in(0,\infty)$, and its relation to the volume $\operatorname{vol}(X)$ of $X$ with respect to the Lebesgue measure. For this purpose, define the {\bf systolic ratio\/}
\[
\operatorname{sys}(X) = \frac{\mc{A}_{\operatorname{min}}(X)^n}{n!\operatorname{vol}(X)}.
\]
The exponent ensures that the systolic ratio of $X$ is invariant under scaling of $X$; and the constant factor is chosen so that if $X$ is a ball then $\operatorname{sys}(X)=1$.
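As a sanity check, for the ellipsoid $E(a,b)=\{z\in{\mathbb C}^2 \mid \pi|z_1|^2/a+\pi|z_2|^2/b\le 1\}$ one has $\mc{A}_{\operatorname{min}}=\min(a,b)$ and $\operatorname{vol}=ab/2$, so $\operatorname{sys}(E(a,b))=\min(a,b)^2/(ab)\le 1$, with equality exactly for the ball $a=b$. A short Python illustration of this computation:

```python
import math

def sys_ellipsoid(a, b):
    """Systolic ratio of E(a,b) in R^4 (n = 2): the minimal Reeb action
    is min(a, b) and the Euclidean volume is a*b/2."""
    return min(a, b)**2 / (math.factorial(2) * (a * b / 2.0))
```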
\begin{conjecture}[weak Viterbo conjecture]
\label{conj:vweak}
Let $X\subset{\mathbb R}^{2n}$ be a compact convex domain with smooth boundary such that $0\in\operatorname{int}(X)$. Then $\operatorname{sys}(X)\le 1$.
\end{conjecture}
Conjecture~\ref{conj:vweak} asserts that among compact convex domains with the same volume, $\mc{A}_{\operatorname{min}}$ is largest for a ball. Although the role of the convexity hypothesis is somewhat mysterious, some hypothesis beyond the star-shaped condition is necessary: it is shown in \cite{abhs} that there exist star-shaped domains in ${\mathbb R}^4$ with arbitrarily large systolic ratio\footnote{It is further shown in \cite{abhs2} that there are star-shaped domains in ${\mathbb R}^4$ which are {\em dynamically convex\/} (meaning that every Reeb orbit on the boundary has rotation number greater than $1$, see Proposition~\ref{prop:ehwz}(a) below) and have systolic ratio $2-\epsilon$ for $\epsilon>0$ arbitrarily small.}.
One motivation for studying Conjecture~\ref{conj:vweak} is that it implies the Mahler conjecture in convex geometry \cite{ako}.
To put Conjecture~\ref{conj:vweak} in more context, recall\footnote{The precise definition of ``symplectic capacity'' varies in the literature. For an older but extensive survey of symplectic capacities see \cite{chls}.} that a {\bf symplectic capacity\/} is a function $c$ mapping some class of $2n$-dimensional symplectic manifolds to $[0,\infty]$, such that:
\begin{itemize}
\item (Monotonicity)
If there exists a symplectic embedding $\varphi:(X,\omega)\to(X',\omega')$, then $c(X,\omega)\le c(X',\omega')$.
\item (Conformality)
If $r>0$ then $c(X,r\omega)=rc(X,\omega)$.
\end{itemize}
Of course we can regard (open) domains in ${\mathbb R}^{2n}$ as symplectic manifolds with the restriction of the standard symplectic form $\omega=\sum_{i=1}^ndx_i\,dy_i$. Conformality for a domain $X\subset {\mathbb R}^{2n}$ means that $c(rX)=r^2c(X)$.
Following the usual convention in symplectic geometry, for $r>0$ define the ball
\[
B(r)=\left\{z\in{\mathbb C}^n\;\big|\; \pi|z|^2\le r\right\}
\]
and the cylinder
\[
Z(r)=\left\{z\in{\mathbb C}^n\;\big|\; \pi|z_1|^2\le r\right\}.
\]
We say that a symplectic capacity $c$ is {\bf normalized\/} if it is defined at least for all compact convex domains in ${\mathbb R}^{2n}$ and if
\[
c(B(r))=c(Z(r))=r.
\]
Note that here $c(Z(r))$ is defined as the limit of $c(E_i)$, where $E_i \subset {\mathbb R}^{2n}$ is a sequence of ellipsoids exhausting $Z(r)$.
An example of a normalized symplectic capacity is the {\bf Gromov width\/} $c_{\operatorname{Gr}}$, where $c_{\operatorname{Gr}}(X,\omega)$ is defined to be the supremum over $r$ such that there exists a symplectic embedding $B(r)\to (X,\omega)$. It is immediate from the definition that $c_{\operatorname{Gr}}$ is monotone and conformal. Since symplectomorphisms preserve volume, we have $c_{\operatorname{Gr}}(B(r))=r$; and the Gromov nonsqueezing theorem asserts that $c_{\operatorname{Gr}}(Z(r))=r$.
Another example of a normalized symplectic capacity is the {\bf Ekeland-Hofer-Zehnder capacity\/}, denoted by $c_{\operatorname{EHZ}}$. If $X$ is a compact convex domain with smooth boundary such that $0\in\operatorname{int}(X)$, then\footnote{Since translations act by symplectomorphism on ${\mathbb R}^{2n}$, the symplectic capacities of $X$ are invariant under translation. However, we will often assume that $0\in\operatorname{int}(X)$ so that we can sensibly discuss the Reeb flow on $\partial X$.}
\begin{equation}
\label{eqn:cehzamin}
c_{\operatorname{EHZ}}(X) = \mc{A}_{\operatorname{min}}(X).
\end{equation}
This is explained in \cite[Thm.\ 2.2]{artsteinostrover2014}, combining results from \cite{eh1,hz}.
Any symplectic capacity which is defined for compact convex domains in ${\mathbb R}^{2n}$ with smooth boundary is a $C^0$ continuous function of the domain (i.e., continuous with respect to the Hausdorff distance between compact sets), and thus extends uniquely to a $C^0$ continuous function of all compact convex sets in ${\mathbb R}^{2n}$.
\begin{conjecture}
[strong Viterbo conjecture\footnote{The original version of Viterbo's conjecture from \cite{viterbo} asserts that a normalized symplectic capacity, restricted to convex sets in ${\mathbb R}^{2n}$ of a given volume, takes its maximum on a ball. (This follows from what we are calling the ``strong Viterbo conjecture'' and implies what we are calling the ``weak Viterbo conjecture''.) Viterbo further conjectured that the maximum is achieved only if the interior of the convex set is symplectomorphic to an open ball; cf.\ Question~\ref{question:zoll_ball} below.}]
\label{conj:vstrong}
All normalized symplectic capacities agree on compact convex sets in ${\mathbb R}^{2n}$.
\end{conjecture}
\begin{remark} Convexity is a key hypothesis in both the weak and strong versions of the Viterbo conjecture. For star-shaped domains that are not convex, counterexamples to the conclusion of the strong Viterbo conjecture were given in \cite[Thm.\ 1.12]{hermann}, and counterexamples to the conclusion of the weak Viterbo conjecture were given later in \cite[Thm.\ 2]{abhs}. In \cite[Cor.\ 5.2]{ghr}, it is shown exactly where the conclusions of the strong and original Viterbo conjectures start to fail in a certain family of non-convex examples.
\end{remark}
Conjecture~\ref{conj:vstrong} implies Conjecture~\ref{conj:vweak}, because if Conjecture~\ref{conj:vstrong} holds, and if $X$ is a compact convex domain with smooth boundary and $0\in\operatorname{int}(X)$, then
\[
\mc{A}_{\operatorname{min}}(X)^n = c_{\operatorname{EHZ}}(X)^n = c_{\operatorname{Gr}}(X)^n \le n!\operatorname{vol}(X).
\]
Here the second equality holds by Conjecture~\ref{conj:vstrong}; and the inequality on the right holds because if there exists a symplectic embedding $B(r)\to X$, then $r^n/n! = \operatorname{vol}(B(r)) \le \operatorname{vol}(X)$.
There are also interesting families of non-normalized symplectic capacities.
For example, there are the Ekeland-Hofer capacities defined in \cite{eh}; more recently, and conjecturally equivalently, positive $S^1$-equivariant symplectic homology was used in \cite{gh} to define a symplectic capacity $c_k^{S^1}$ for each integer $k\ge 1$. Each equivariant capacity $c_k^{S^1}(X)$ is the symplectic action of some Reeb orbit, which when $X$ is generic (so that $\lambda$ is nondegenerate) has Conley-Zehnder index $n-1+2k$ (see \S\ref{sec:rotcz} below). Some other symplectic capacities give the total action of a finite set of Reeb orbits, such as the ECH capacities in the four-dimensional case \cite{qech}, or the symplectic capacities defined by Siegel using rational symplectic field theory \cite{siegel}.
Conjectures~\ref{conj:vweak} and \ref{conj:vstrong} are known for some special examples such as $S^1$-invariant convex domains \cite{ghr}, but they have not been well tested more generally. To test Conjecture~\ref{conj:vweak}, and as a first step towards computing other symplectic capacities and testing conjectures about them, we need good methods for computing Reeb orbits, their actions, and their Conley-Zehnder indices. The plan in this paper is to understand Reeb orbits on a smooth convex domain in terms of ``combinatorial Reeb orbits'' on convex polytopes approximating the domain.
\subsection{Combinatorial Reeb orbits}
\label{sec:cro}
Let $X$ be any compact convex set in ${\mathbb R}^{2n}$ with $0\in\operatorname{int}(X)$, and let $y\in\partial X$. The {\bf tangent cone\/}, which we denote by $T_y^+X$, is the closure of the set of vectors $v$ such that $y+\epsilon v\in X$ for some $\epsilon>0$. For example, if $\partial X$ is smooth at $y$, then $T_y^+X$ is a closed half-space whose boundary is the usual tangent space $T_y\partial X$.
Also define the {\bf positive normal cone\/}
\[
N_y^+X = \left\{v\in{\mathbb R}^{2n}\;\big|\;\langle x-y,v\rangle\le 0\;\;\forall x\in X\right\}.
\]
If $\partial X$ is smooth at $y$, then $N_y^+X$ is a one-dimensional ray and consists of the outward pointing normal vectors to $\partial X$ at $y$.
Finally, define the {\bf Reeb cone\/}
\[
R_y^+X = T_y^+X\cap {\mathbf i}N_y^+X
\]
where ${\mathbf i}$ denotes the standard complex structure on ${\mathbb C}^n={\mathbb R}^{2n}$. We show in Lemma \ref{lem:wp1} that $R^+_yX$ is nonempty in the cases of interest for this paper. If $\partial X$ is smooth near $y$, then $R_y^+X$ is the ray consisting of nonnegative multiples of the Reeb vector field on $\partial X$ at $y$. Indeed, in this case we can write
\[
T_y\partial X = \left\{v\in{\mathbb R}^{2n}\;\big|\;\langle \nu,v\rangle=0\right\}
\]
where $\nu$ is the outward unit normal vector to $\partial X$ at $y$; and the Reeb vector field at $y$ is given by
\begin{equation}
\label{eqn:Reebinu}
R_y = 2\frac{{\mathbf i}\nu}{\langle \nu,y\rangle}.
\end{equation}
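In coordinates $(x_1,y_1,x_2,y_2)$, where ${\mathbf i}$ acts by $(x,y)\mapsto(-y,x)$ in each ${\mathbb C}$ factor, formula \eqref{eqn:Reebinu} can be evaluated directly; the following Python fragment is an illustration of this. Note that $\langle\nu,{\mathbf i}\nu\rangle=0$, so $R_y$ is automatically tangent to $\partial X$.

```python
def reeb_vector(nu, y):
    """Evaluate R_y = 2 i nu / <nu, y> in coordinates (x1, y1, x2, y2),
    where nu is the outward unit normal to the boundary at y and the
    complex structure i acts by (x, y) -> (-y, x) in each C factor."""
    x1, y1, x2, y2 = nu
    i_nu = (-y1, x1, -y2, x2)
    s = 2.0 / sum(n * c for n, c in zip(nu, y))
    return tuple(s * v for v in i_nu)
```

On the unit sphere $\partial B(\pi)$ this gives $R_y = 2{\mathbf i}y$, whose flow $z\mapsto e^{2it}z$ has period $\pi = \mc{A}_{\operatorname{min}}(B(\pi))$.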
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.6\linewidth]{cones_on_polytopes.png}
\end{center}
\caption{We depict the tangent, normal, and Reeb cones at two points $p,q \in \partial X$ for a polytope $X \subset {\mathbb R}^2$.}
\label{fig:cones_on_polytopes}
\end{figure}
Suppose now that $X$ is a convex polytope (i.e.\ a compact set given by the intersection of a finite set of closed half-spaces) in ${\mathbb R}^{2n}$ with $0\in\operatorname{int}(X)$.
Our convention is that a {\bf $k$-face\/} of $X$ is a $k$-dimensional subset $F\subset \partial X$ which is the interior of the intersection with $\partial X$ of some set of the hyperplanes defining $X$. For a given $k$-face $F$, the tangent cone $T_y^+X$, the positive normal cone $N_y^+X$, and the Reeb cone $R_y^+X$ are the same for all $y\in F$. Thus we can denote these cones by $T_F^+X$, $N_F^+X$, and $R_F^+X$ respectively.
We will usually restrict attention to polytopes of the following type:
\begin{definition}
\label{def:symplectic_polytope}
A {\bf symplectic polytope\/} in ${\mathbb R}^4$ is a convex polytope $X$ in ${\mathbb R}^4$ such that $0\in\operatorname{int}(X)$ and no $2$-face of $X$ is Lagrangian, i.e., the standard symplectic form $\omega_0 = \sum_{i=1}^2dx_i\,dy_i$ restricts to a nonzero $2$-form on each $2$-face.
\end{definition}
Symplectic polytopes are generic, in the sense that in the space of polytopes in ${\mathbb R}^4$ with a given number of $3$-faces, the set of non-symplectic polytopes is a proper subvariety. Moreover, the boundary of a symplectic polytope in $\mathbb{R}^4$ has a well-posed ``combinatorial Reeb flow'' in the following sense\footnote{There is also a more general notion of ``generalized Reeb trajectory'' on the boundary of a compact convex set in ${\mathbb R}^{2n}$ whose interior contains the origin; see Definition~\ref{def:gro} below. We do not know whether the generalized Reeb flow on the boundary of a four-dimensional symplectic polytope is well posed.}.
\begin{proposition}[Lemma \ref{lem:wp1}]
\label{prop:well-posed}
If $X$ is a symplectic polytope in ${\mathbb R}^4$, then the Reeb cone $R_F^+X$ is one-dimensional for each face $F$.
\end{proposition}
\begin{definition}
\label{def:cro}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. A {\bf combinatorial Reeb orbit\/} for $X$ is a finite sequence $\gamma=(\Gamma_1,\ldots,\Gamma_k)$ of oriented line segments in $\partial X$, modulo cyclic permutations, such that for each $i=1,\ldots,k$:
\begin{itemize}
\item The final endpoint of $\Gamma_i$ agrees with the initial endpoint of $\Gamma_{i+1\mod k}$.
\item
There is a face $F$ of $X$ such that $\operatorname{int}(\Gamma_i)\subset F$, the endpoints of $\Gamma_i$ are on the boundary of (the closure of) $F$, and $\Gamma_i$ points in the direction of $R_F^+X$.
\end{itemize}
The {\bf combinatorial symplectic action\/} of a combinatorial Reeb orbit as above is defined by
\[
\mc{A}_{\operatorname{comb}}(\gamma)=\sum_{i=1}^k\int_{\Gamma_i}\lambda_0.
\]
\end{definition}
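Since $\lambda_0$ has linear coefficients, its integral over the oriented segment from $p$ to $q$ equals $\frac{1}{2}\omega_0(p,q)$, so $\mc{A}_{\operatorname{comb}}$ reduces to a finite sum of such terms. A minimal Python sketch of this computation (illustrative only, in coordinates $(x_1,y_1,x_2,y_2)$):

```python
def omega0(p, q):
    """Standard symplectic form on R^4 in coordinates (x1, y1, x2, y2)."""
    return (p[0]*q[1] - p[1]*q[0]) + (p[2]*q[3] - p[3]*q[2])

def comb_action(vertices):
    """Combinatorial symplectic action of the closed polygonal loop through
    the given vertices: the integral of lambda_0 over the segment from p
    to q equals omega0(p, q)/2."""
    k = len(vertices)
    return sum(omega0(vertices[i], vertices[(i + 1) % k])
               for i in range(k)) / 2.0
```

For a loop contained in the $z_1$-plane this recovers the enclosed signed area, consistent with Stokes' theorem and $d\lambda_0=\omega_0$.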
To give a better idea of what combinatorial Reeb orbits look like, we have the following lemma.
\begin{lemma}
\label{lem:Reebcone}
(proved in \S\ref{sec:drc})
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Then the Reeb cones of the faces of $X$ satisfy the following:
\begin{itemize}
\item
If $E$ is a 3-face, then $R_E^+X$ consists of all nonnegative multiples of the Reeb vector field on $E$.
\item If $F$ is a $2$-face, then $R_F^+X$ points into a 3-face $E$ adjacent to $F$, and agrees with $R_E^+X$.
\item If $L$ is a $1$-face, then one of the following possibilities holds:
\begin{itemize}
\item $R_L^+X$ points into a $3$-face $E$ adjacent to $L$ and agrees with $R_E^+X$. In this case we say that $L$ is a {\bf good\/} $1$-face.
\item $R_L^+X$ is tangent to $L$, and does not agree with $R_E^+X$ for any of the $3$-faces $E$ adjacent to $L$. In this case we say that $L$ is a {\bf bad\/} $1$-face.
\end{itemize}
\item If $P$ is a $0$-face, then $R_P^+X$ points into a $3$-face $E$ or bad $1$-face $L$ adjacent to $P$ and agrees with $R_E^+X$ or $R_L^+X$ respectively.
\end{itemize}
\end{lemma}
\begin{remark}
The reason we assume that $X$ has no Lagrangian $2$-faces in Definition~\ref{def:symplectic_polytope} is that if $F$ is a Lagrangian 2-face, then $R_F^+X$ is two-dimensional and tangent to $F$. In fact, $\partial R_F^+X = R_{E_1}^+X\cup R_{E_2}^+X$ where $E_1$ and $E_2$ are the two $3$-faces adjacent to $F$. In this case we do not have a well-posed ``combinatorial Reeb flow'' on $\partial X$.
\end{remark}
\begin{definition}
A combinatorial Reeb orbit as above is:
\begin{itemize}
\item {\bf Type 1\/} if it does not intersect the $1$-skeleton of $X$;
\item {\bf Type 2\/} if it intersects the $1$-skeleton of $X$, but only in finitely many points which are some of the endpoints of the line segments $\Gamma_i$;
\item {\bf Type 3\/} if it contains a bad $1$-face.
\end{itemize}
\end{definition}
\begin{figure}[h!]
\includegraphics[width=\linewidth]{types_of_orbits.png}
\caption{We depict sub-trajectories of the three types of orbits, in red. Each cube above represents a $3$-face of a hypothetical $4$-polytope.}
\label{fig:types_of_orbits}
\end{figure}
It follows from the definitions that each combinatorial Reeb orbit is of one of the above three types. Type 1 Reeb orbits are the most important for our computations. We expect that Type 2 combinatorial Reeb orbits do not exist for generic polytopes; see Conjecture~\ref{conj:genericity} below. Type 3 combinatorial Reeb orbits generally cannot be eliminated by perturbing the polytope; but we will see in Theorem~\ref{thm:smoothtocomb}(iii) below that they do not contribute to the symplectic capacities that we are interested in. See Remark~\ref{rem:spiral} for some intuition for this.
\subsection{Rotation numbers and the Conley-Zehnder index}
\label{sec:rotcz}
Let $X$ be a compact star-shaped domain in ${\mathbb R}^4$ with smooth boundary $Y$. Let $\Phi_t:Y\to Y$ denote the time $t$ flow of the Reeb vector field $R$. The derivative of $\Phi_t$ preserves the contact form $\lambda$ and so defines a map on the contact structure $\xi = \op{Ker}(\lambda)$, namely
\[
d\Phi_t:\xi_y \longrightarrow \xi_{\Phi_t(y)}
\]
for each $y\in Y$. The map $d\Phi_t$ is symplectic with respect to the symplectic form $d\lambda|_\xi$ on $\xi$.
We say that a Reeb orbit $\gamma:{\mathbb R}/T{\mathbb Z}\to Y$ is {\bf nondegenerate\/} if the ``linearized return map''
\begin{equation}
\label{eqn:dPhiT}
d\Phi_T:\xi_{\gamma(0)}\longrightarrow \xi_{\gamma(0)}
\end{equation}
does not have $1$ as an eigenvalue. The contact form $\lambda$ is called nondegenerate if all Reeb orbits are nondegenerate.
Now fix a symplectic trivialization $\tau:\xi\to Y\times{\mathbb R}^2$. If $\gamma$ is a Reeb orbit as above, then the trivialization $\tau$ allows us to regard the map \eqref{eqn:dPhiT} as an element of the $2$-dimensional symplectic group $\operatorname{Sp}(2)$. Moreover, the family of maps
\begin{equation}
\label{eqn:fom}
\left\{
{\mathbb R}^2 \stackrel{\tau^{-1}}{\longrightarrow} \xi_{\gamma(0)} \stackrel{d\Phi_t}{\longrightarrow} \xi_{\gamma(t)} \stackrel{\tau}{\longrightarrow} {\mathbb R}^2\right\}_{t\in[0,T]}
\end{equation}
defines a path $\phi_\tau$ in $\text{Sp}(2)$ from the identity to the map \eqref{eqn:dPhiT}, and thus an element of the universal cover $\widetilde{\text{Sp}}(2)$ of $\text{Sp}(2)$. As we review in Appendix~\ref{app:rotation_numbers}, any element of $\widetilde{\text{Sp}}(2)$ has a well-defined {\bf rotation number\/}. We denote the rotation number of $\phi_\tau$ by
\[
\rho(\gamma)\in{\mathbb R}.
\]
Note that the rotation number $\rho(\gamma)$ does not depend on the choice of symplectic trivialization $\tau$ of $\xi$. Since $Y\simeq S^3$, any two such trivializations are homotopic, giving rise to a homotopy of paths \eqref{eqn:fom} whose final endpoints are conjugate in $\operatorname{Sp}(2)$. Invariance of the rotation number then follows from Lemma~\ref{lem:compute_rho_bar}.
If $\gamma$ is nondegenerate (which holds automatically when $\rho(\gamma)$ is not an integer), then the {\bf Conley-Zehnder index\/} of $\gamma$ is defined by
\begin{equation}
\label{eqn:CZrot}
\operatorname{CZ}(\gamma) = \floor{\rho(\gamma)} + \ceil{\rho(\gamma)} \in {\mathbb Z}.
\end{equation}
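In particular, when $\rho(\gamma)\notin{\mathbb Z}$ this gives the odd integer $2\floor{\rho(\gamma)}+1$. As a one-line illustration in Python:

```python
import math

def cz(rho):
    """Conley-Zehnder index from the rotation number, as in eqn (CZrot);
    meaningful when the orbit is nondegenerate."""
    return math.floor(rho) + math.ceil(rho)
```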
\begin{proposition}
\label{prop:ehwz}
Let $X$ be a compact strictly convex domain in ${\mathbb R}^4$ with smooth boundary $Y$ and with $0\in\operatorname{int}(X)$. Then:
\begin{itemize}
\item[\emph{(a)}]
Every Reeb orbit $\gamma$ in $Y$ has $\rho(\gamma)>1$. In particular, if $\gamma$ is nondegenerate then $\operatorname{CZ}(\gamma)\ge 3$.
\item[\emph{(b)}]
There exists a Reeb orbit $\gamma$ which is action minimizing, i.e.\ $\mc{A}(\gamma) = \mc{A}_{\operatorname{min}}(X)$, with
\[
\rho(\gamma) \le 2.
\]
If $\gamma$ is also nondegenerate then the inequality is strict, so that $\operatorname{CZ}(\gamma)=3$.
\end{itemize}
\end{proposition}
\begin{proof}
(a) was proved by Hofer-Wysocki-Zehnder \cite{hwz}.
(b) follows from the construction of the Ekeland-Hofer-Zehnder capacity and an index calculation of Hu-Long \cite{hulong2002}. In fact, it was recently shown by Abbondandolo-Kang \cite{ak} and Irie \cite{irie} that $c_{\operatorname{EHZ}}(X)$ agrees with a capacity defined from symplectic homology, which by construction is the action of some Reeb orbit $\gamma$ with $\rho(\gamma)\le 2$, with equality only if $\gamma$ is degenerate.
\end{proof}
Suppose now that $X$ is a symplectic polytope in ${\mathbb R}^4$. As we explain in Definition~\ref{def:crn}, each Type 1 combinatorial Reeb orbit $\gamma$ has a well-defined {\bf combinatorial rotation number\/}, which we denote by $\rho_{\operatorname{comb}}(\gamma)\in{\mathbb R}$. There is also a combinatorial notion of nondegeneracy for $\gamma$, which automatically holds when $\rho_{\operatorname{comb}}(\gamma)\notin{\mathbb Z}$. When $\gamma$ is a nondegenerate Type 1 combinatorial Reeb orbit, we can then define its {\bf combinatorial Conley-Zehnder index\/} by analogy with \eqref{eqn:CZrot} as
\begin{equation}
\label{eqn:ccz}
\operatorname{CZ}_{\operatorname{comb}}(\gamma) = \floor{\rho_{\operatorname{comb}}(\gamma)} + \ceil{\rho_{\operatorname{comb}}(\gamma)}.
\end{equation}
The combinatorial rotation number and combinatorial Conley-Zehnder index of a Type 2 combinatorial Reeb orbit are not defined; and although we do not need this, it would be natural to define the combinatorial rotation number and combinatorial Conley-Zehnder index of a Type 3 combinatorial Reeb orbit to be $+\infty$.
\subsection{Smooth-combinatorial correspondence}
Let $X$ be a convex polytope in ${\mathbb R}^{2n}$. If $\epsilon>0$, define the {\bf $\epsilon$-smoothing\/} of $X$ by
\begin{equation}
\label{eqn:deltasmoothing}
X_\epsilon = \left\{z\in{\mathbb R}^{2n} \;\big|\; \operatorname{dist}(z,X)\le \epsilon \right\}.
\end{equation}
The domain $X_\epsilon$ is convex and has $C^1$-smooth boundary. The boundary is $C^\infty$ smooth except along strata arising from the boundaries of the faces of $X$; see \S\ref{sec:smoothings} for a detailed description.
Our main results are the following two theorems, giving a correspondence between combinatorial Reeb dynamics on a symplectic polytope in ${\mathbb R}^4$, and ordinary Reeb dynamics on $\epsilon$-smoothings of the polytope.
There is a slight technical issue here: since $\partial X_\epsilon$ is only $C^1$ smooth, the Reeb vector field on $\partial X_\epsilon$ is only $C^0$, so that for a Reeb orbit $\gamma$, the linearized Reeb flow \eqref{eqn:dPhiT} might not be defined. If $\gamma$ is transverse to the strata where $\partial X_\epsilon$ is not $C^\infty$ (which is presumably true for all $\gamma$ if $X$ and $\epsilon$ are generic), then the Reeb flow in a neighborhood of $\gamma$ has a well-defined linearization; we call such orbits {\bf linearizable\/}. It turns out that a non-linearizable Reeb orbit $\gamma$ on $\partial X_\epsilon$ still has a well-defined rotation number $\rho(\gamma)$, defined in \S\ref{sec:srn}.
The following theorem describes how combinatorial Reeb orbits give rise to Reeb orbits on smoothings. See Lemma~\ref{lem:combtosmooth} for a more precise statement.
\begin{theorem}
\label{thm:combtosmooth}
(proved in \S\ref{sec:combtosmooth})
Let $X$ be a symplectic polytope in ${\mathbb R}^4$, and let $\gamma$ be a nondegenerate Type 1 combinatorial Reeb orbit for $X$. Then for all $\epsilon>0$ sufficiently small, there is a distinguished Reeb orbit $\gamma_\epsilon$ on $\partial X_\epsilon$ such that:
\begin{itemize}
\item[\emph{(i)}] $\gamma_\epsilon$ converges in $C^0$ to $\gamma$ as $\epsilon\to0$.
\item[\emph{(ii)}] $\lim_{\epsilon\to 0}\mc{A}(\gamma_\epsilon) = \mc{A}_{\operatorname{comb}}(\gamma)$.
\item[\emph{(iii)}] $\gamma_\epsilon$ is linearizable and nondegenerate, $\rho(\gamma_\epsilon) = \rho_{\operatorname{comb}}(\gamma)$, and $\operatorname{CZ}(\gamma_\epsilon) = \operatorname{CZ}_{\operatorname{comb}}(\gamma)$.
\end{itemize}
\end{theorem}
The following theorem describes how Reeb orbits on smoothings give rise to combinatorial Reeb orbits.
\begin{theorem}
\label{thm:smoothtocomb}
(proved in \S\ref{sec:smoothtocomb})
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Then there are constants $c_F>0$ for each $0$-, $1$-, or $2$-face $F$ of $X$ with the following property.
Let $\{(\epsilon_i,\gamma_i)\}_{i=1,\ldots}$ be a sequence of pairs such that $\epsilon_i>0$; $\gamma_i$ is a Reeb orbit on $\partial X_{\epsilon_i}$; and $\epsilon_i\to 0$ as $i\to\infty$. Suppose that $\rho(\gamma_i)<R$ where $R$ does not depend on $i$. Then after passing to a subsequence, there is a combinatorial Reeb orbit $\gamma$ for $X$ such that:
\begin{itemize}
\item[\emph{(i)}] $\gamma_i$ converges in $C^0$ to $\gamma$ as $i\to\infty$.
\item[\emph{(ii)}] $\lim_{i\to\infty}\mc{A}(\gamma_i) = \mc{A}_{\operatorname{comb}}(\gamma)$.
\item[\emph{(iii)}] $\gamma$ is either Type 1 or Type 2.
\item[\emph{(iv)}] If $\gamma$ is Type 1, then for $i$ sufficiently large, $\gamma_i$ is linearizable and $\rho(\gamma_i) = \rho_{\operatorname{comb}}(\gamma)$. If $\gamma$ is also nondegenerate, then for $i$ sufficiently large, $\gamma_i$ is nondegenerate and $\operatorname{CZ}(\gamma_i) = \operatorname{CZ}_{\operatorname{comb}}(\gamma)$.
\item[\emph{(v)}] Let $F_1,\ldots,F_k$ denote the faces containing the endpoints of the segments of the combinatorial Reeb orbit $\gamma$. Then
\begin{equation}
\label{eqn:segmentbound}
\sum_{i=1}^kc_{F_i}\le R.
\end{equation}
\end{itemize}
\end{theorem}
\begin{remark}
One can compute explicit constants $c_F$ -- see \S\ref{sec:smoothtocomb} for the details -- and the resulting bound \eqref{eqn:segmentbound} is crucial in enabling finite computations. For example, combinatorial Reeb orbits with a given action bound could have arbitrarily many segments winding in a ``helix'' around a bad $1$-face. However the bound \eqref{eqn:segmentbound} ensures that combinatorial Reeb orbits with too many segments will not arise as limits of sequences of smooth Reeb orbits with bounded rotation number.
\end{remark}
\begin{remark}
The methods of this paper can be used to prove a version of Theorem \ref{thm:combtosmooth} (omitting the condition (iii) on the rotation number and Conley-Zehnder index) for polytopes $X \subset {\mathbb R}^{2n}$ for $2n > 4$, under the hypothesis that the $(2n-2)$-faces of $X$ are symplectic. Generalizing Theorem~\ref{thm:smoothtocomb} to higher dimensions would be less straightforward, as its proof in four dimensions depends crucially on estimates on the rotation number in \S\ref{sec:smoothingdynamics}. Higher dimensional analogues of these estimates are an interesting topic for future work.
\end{remark}
Theorem~\ref{thm:smoothtocomb} allows one to compute the EHZ capacity of a four-dimensional polytope as follows:
\begin{corollary}
\label{cor:computecehz}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Then
\begin{equation}
\label{eqn:corcehz}
c_{\operatorname{EHZ}}(X) = \operatorname{min}\{\mc{A}_{\operatorname{comb}}(\gamma)\}
\end{equation}
where the minimum is over combinatorial Reeb orbits $\gamma$ with $\sum_ic_{F_i}\le 2$ which are either Type 1 with $\rho_{\operatorname{comb}}(\gamma)\le 2$ or Type 2.
\end{corollary}
\begin{remark}
If the coordinates of the vertices of $X$ are rational, then the combinatorial action of every combinatorial Reeb orbit is rational. It follows from Theorem~\ref{thm:smoothtocomb} that in this case, $c_{\operatorname{EHZ}}(X)$, as well as the other symplectic capacities mentioned in \S\ref{sec:reviewviterbo} determined by actions of Reeb orbits, are all rational.
\end{remark}
To explain why Corollary~\ref{cor:computecehz} follows from Theorem~\ref{thm:smoothtocomb}, we need to recall a result of K\"unzle \cite{kunzle} as explained by Artstein-Avidan and Ostrover \cite{artsteinostrover2014}.
\begin{definition}
\label{def:gro}
If $X$ is any compact convex set in ${\mathbb R}^{2n}$ with $0\in\operatorname{int}(X)$, a {\bf generalized Reeb orbit\/} for $X$ is a map $\gamma:{\mathbb R}/T{\mathbb Z}\to\partial X$ for some $T>0$ such that $\gamma$ is continuous and has left and right derivatives at every point, which agree for almost every $t$, and the left and right derivatives at $t$ are in $R_{\gamma(t)}^+X$. If $\gamma$ is a generalized Reeb orbit, define its symplectic action by \eqref{eqn:symplecticaction}.
\end{definition}
\begin{proposition}
\cite[Prop.\ 2.7]{artsteinostrover2014}
\label{prop:aao}
If $X$ is a compact convex set in ${\mathbb R}^{2n}$ with $0\in\operatorname{int}(X)$, then
\[
c_{\operatorname{EHZ}}(X) = \operatorname{min}\{\mc{A}(\gamma)\}
\]
where the minimum is taken over all generalized Reeb orbits.
\end{proposition}
\begin{proof}[Proof of Corollary~\ref{cor:computecehz}.]
Pick a sequence of positive numbers $\epsilon_i$ with $\lim_{i\to\infty} \epsilon_i = 0$. For each $i$, by equation \eqref{eqn:cehzamin}, we can find a Reeb orbit $\gamma_i$ on $\partial X_{\epsilon_i}$ with $\mc{A}(\gamma_i) = c_{\operatorname{EHZ}}(X_{\epsilon_i})$. By Proposition~\ref{prop:ehwz}(b), we can assume that $\rho(\gamma_i)\le 2$. By Theorem~\ref{thm:smoothtocomb}, it follows that after passing to a subsequence, there is a combinatorial Reeb orbit $\gamma$ for $X$, satisfying the conditions in Corollary~\ref{cor:computecehz}, such that
\[
\mc{A}_{\operatorname{comb}}(\gamma) = \lim_{i\to\infty}\mc{A}(\gamma_i) = \lim_{i\to\infty}c_{\operatorname{EHZ}}(X_{\epsilon_i}) = c_{\operatorname{EHZ}}(X).
\]
Here the last equality holds by the $C^0$ continuity of $c_{\operatorname{EHZ}}$. We conclude that
\[
c_{\operatorname{EHZ}}(X)\ge \operatorname{min}\{\mc{A}_{\operatorname{comb}}(\gamma)\}
\]
where the minimum is over combinatorial Reeb orbits $\gamma$ satisfying the conditions in Corollary~\ref{cor:computecehz}.
The reverse inequality follows from Proposition~\ref{prop:aao}, because by Definitions~\ref{def:cro} and \ref{def:gro}, every combinatorial Reeb orbit is a generalized Reeb orbit. (For a symplectic polytope in ${\mathbb R}^4$, a ``generalized Reeb orbit'' is equivalent to a generalization of a ``combinatorial Reeb orbit'' in which there may be infinitely many line segments.)
\end{proof}
\begin{remark}
Haim-Kislev \cite[Thm.\ 1.1]{haim-kislev} gives a different formula for $c_{\operatorname{EHZ}}$ of a convex polytope, which is valid in ${\mathbb R}^{2n}$ for all $n$.
That formula implies that in the minimum \eqref{eqn:corcehz}, we can also assume that $\gamma$ has at most one segment in each $3$-face.
\end{remark}
\subsection{Experiments testing Viterbo's conjecture}
If $X$ is a convex polytope in ${\mathbb R}^{2n}$, define its systolic ratio by
\[
\operatorname{sys}(X) = \frac{c_{\operatorname{EHZ}}(X)^n}{n!\operatorname{vol}(X)}.
\]
Note that $c_{\operatorname{EHZ}}$ is translation invariant, so we can make this definition without assuming that $0\in\operatorname{int}(X)$.
Since every compact convex domain in ${\mathbb R}^{2n}$ can be $C^0$ approximated by convex polytopes, it follows that the weak version of Viterbo's conjecture, namely Conjecture~\ref{conj:vweak}, is true if and only if every convex polytope $X$ has systolic ratio $\operatorname{sys}(X)\le 1$. The combinatorial formula for the systolic ratio given by Corollary~\ref{cor:computecehz} allows us to test this conjecture by computer when $n=2$. In particular, we ran optimization algorithms over the space of $k$-vertex convex polytopes in ${\mathbb R}^4$ to find local maxima of the systolic ratio\footnote{This is a somewhat involved process; convergence to a local maximum becomes very slow once one is close. It helps to mod out the space of polytopes by the $15$-dimensional symmetry group generated by translations, linear symplectomorphisms, and scaling. To find exact local maxima, one can look at symplectic invariants, such as areas of $2$-faces, and guess what these are converging to.}. In the results below, when listing the vertices of specific polytopes, we use Lagrangian coordinates $(x_1,x_2,y_1,y_2)$.
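Given the capacity and the volume, evaluating the systolic ratio is immediate; all of the difficulty lies in computing $c_{\operatorname{EHZ}}(X)$ via Corollary~\ref{cor:computecehz}. The following minimal Python sketch (illustrative only; the function name is ours and not from our program) is checked against the 24-cell of \S\ref{sec:24_cell}, which has $c_{\operatorname{EHZ}}=2$ and volume $2$:

```python
from math import factorial

def systolic_ratio(c_ehz, vol, n=2):
    # sys(X) = c_EHZ(X)^n / (n! vol(X)); n = 2 for polytopes in R^4.
    return c_ehz ** n / (factorial(n) * vol)

# The 24-cell example of Section "Example: the 24-cell":
# c_EHZ = 2 and vol = 2, giving systolic ratio 1.
print(systolic_ratio(2.0, 2.0))  # -> 1.0
```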
\subsection*{5-vertex polytopes (4-simplices).} Experimentally\footnote{Perhaps this could be proved analytically using the formula in \cite[Thm.\ 1.1]{haim-kislev}.}, every $4$-simplex $X$ has systolic ratio
\[
\operatorname{sys}(X) \le 3/4.
\]
The apparent maximum of $3/4$ is achieved by the ``standard simplex'' with vertices
\[
(0,0,0,0), (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1).
\]
\begin{remark}
Corollary~\ref{cor:computecehz} does not directly apply to (a translate of) this polytope because it has some Lagrangian $2$-faces. For examples like these, we find numerically that a slight perturbation of the polytope to a symplectic polytope (to which Corollary~\ref{cor:computecehz} does apply) has systolic ratio very close to the claimed value. One can compute the systolic ratio of a polytope with Lagrangian $2$-faces rigorously using a generalization of Corollary~\ref{cor:computecehz}. For the particular example above, one can also compute the systolic ratio by hand using \cite[Thm.\ 1.1]{haim-kislev}.
\end{remark}
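As a consistency check on the standard simplex above: its volume is $1/4! = 1/24$, so the claimed systolic ratio of $3/4$ forces $c_{\operatorname{EHZ}} = 1/4$. The following Python sketch computes the volume exactly from the vertex coordinates; the capacity value is inferred from the claimed ratio, not computed independently here.

```python
from fractions import Fraction
from math import factorial

def det(m):
    # Laplace expansion along the first row (fine for 4x4 matrices).
    if len(m) == 1:
        return m[0][0]
    total = 0
    for col, entry in enumerate(m[0]):
        minor = [row[:col] + row[col + 1:] for row in m[1:]]
        total += (-1) ** col * entry * det(minor)
    return total

verts = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
edges = [[Fraction(v[m] - verts[0][m]) for m in range(4)] for v in verts[1:]]
vol = abs(det(edges)) / factorial(4)   # vol(simplex) = |det|/4! = 1/24

# sys(X) = c^2 / (2 vol) = 3/4 forces c_EHZ^2 = 1/16, i.e. c_EHZ = 1/4.
c_squared = Fraction(3, 4) * 2 * vol
assert c_squared == Fraction(1, 16)
```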
We have found families of other examples of 4-simplices with systolic ratio $3/4$, including some with no Lagrangian $2$-faces. An example is the simplex with vertices
\[
(0,0,0,0), (1,-1/3,0,0), (0,-1/3,1,0), (-2/3,-1,2/3,0), (0,0,0,1).
\]
\subsection*{6-vertex polytopes.} We found families of 6-vertex polytopes with systolic ratio equal to $1$. An example is the polytope with vertices
\[
(0,0,0,0), (1,0,0,0), (0,0,1,0), (0,0,0,1), (0,-1,1,0), (-1,-1,0,1).
\]
(Apparently the previous minimum number of vertices of a known example with systolic ratio $1$ was 12, given by the Lagrangian product of a triangle and a square \cite[Lem.\ 5.3.1]{schlenk}. Some more examples of Lagrangian products with systolic ratio 1 are presented in \cite{balitskiy}.)
\subsection*{7-vertex polytopes.} We also found families of $7$-vertex polytopes with systolic ratio $1$. One example has vertices
\begin{gather*}
(0,0,0,0), (1,0,0,0), (0,0,1,0), (0,0,0,1),\\
(1/3,-2/3,2/3,0), (-1,-1,0,1/2), (0,0,1/3,-1/3).
\end{gather*}
Presumably there exist $k$-vertex polytopes in ${\mathbb R}^4$ with systolic ratio equal to $1$ for every $k\ge 6$.
\subsection*{The 24-cell.} We also found a special example of a polytope with systolic ratio $1$: a rotation of the 24-cell (one of the six regular polytopes in four dimensions). See \S\ref{sec:24_cell} for details.
We have searched the spaces of polytopes with $7$ or fewer vertices extensively and have not found any counterexamples to Viterbo's conjecture. For polytopes with $8$ vertices, our computer program becomes significantly slower (taking seconds to minutes per polytope on a standard laptop), and we have not yet searched as extensively.
\subsection*{Towards a proof of the weak Viterbo conjecture?}
Let $X$ be a star-shaped domain in ${\mathbb R}^4$ with smooth boundary $Y$. Following \cite{abhs}, we say that $X$ is {\bf Zoll\/} if every point on $Y$ is contained in a Reeb orbit with minimal action. Note that:
\begin{itemize}
\item[\emph{(a)}] If $X$ is strictly convex and a local maximizer for the systolic ratio of convex domains in the $C^0$ topology, then $X$ is Zoll.
\item[\emph{(b)}] If $X$ is Zoll, then $X$ has systolic ratio $\operatorname{sys}(X)=1$.
\end{itemize}
Part (a) holds because if $X$ is strictly convex and if $y\in Y$ is not on an action-minimizing Reeb orbit, then one can shave some volume off of $X$ near $y$ without creating any new Reeb orbits of small action. Part (b) holds by a topological argument going back to \cite{weinstein}. (In fact one can further show that $X$ is symplectomorphic to a closed ball; see \cite[Prop.\ 4.3]{abhs}.) Of course, these observations are not enough to prove Conjecture~\ref{conj:vweak}, since we do not know that the systolic ratio for convex domains takes a maximum, let alone on a strictly convex domain. But this does suggest the following strategy for proving Conjecture~\ref{conj:vweak} via convex polytopes.
\begin{definition}
\label{def:combinatorially_Zoll}
Let $X$ be a convex polytope in ${\mathbb R}^4$ with $0\in\operatorname{int}(X)$. We say that $X$ is {\bf combinatorially Zoll\/} if there is an open dense subset $U$ of $\partial X$ such that every point in $U$ is contained in a combinatorial Reeb orbit (avoiding any Lagrangian $2$-faces of $X$) with combinatorial action equal to $c_{\operatorname{EHZ}}(X)$.
\end{definition}
We have checked by hand that the above examples of polytopes with systolic ratio equal to $1$ are combinatorially Zoll. This suggests:
\begin{conjecture}
Let $X$ be a convex polytope in ${\mathbb R}^4$ with $0\in\operatorname{int}(X)$. Then:
\begin{itemize}
\item[\emph{(a)}] If $X$ is combinatorially Zoll, then $\operatorname{sys}(X)=1$.
\item[\emph{(b)}] If $k$ is sufficiently large ($k\ge 6$ might suffice) and if $X$ maximizes systolic ratio over convex polytopes with $\le k$ vertices, then $X$ is combinatorially Zoll.
\end{itemize}
\end{conjecture}
Part (a) of this conjecture can probably be proved following the argument in the smooth case. Part (b) might be much harder. But both parts of the conjecture together would imply the weak Viterbo conjecture (using a compactness argument to show that for each $k$ the systolic ratio takes a maximum on the space of convex polytopes with $\le k$ vertices).
\begin{question}
\label{question:zoll_ball}
If a convex polytope $X$ in ${\mathbb R}^4$ is combinatorially Zoll, then is $\operatorname{int}(X)$ symplectomorphic to an open ball?
\end{question}
\subsection{Experiments testing other conjectures}
\label{sec:otherexp}
One can also use Theorems~\ref{thm:combtosmooth} and \ref{thm:smoothtocomb} to test conjectures about Reeb orbits that do not have minimal action. For example, if $X$ is a convex domain with smooth boundary and $0\in\operatorname{int}(X)$ such that ${\lambda_0}|_{\partial X}$ is nondegenerate, and if $k$ is a positive integer, define
\begin{equation}
\label{eqn:defak}
\mc{A}_k(X) = \operatorname{min}\{\mc{A}(\gamma)\mid\operatorname{CZ}(\gamma) = 2k+1\},
\end{equation}
where the minimum is over Reeb orbits $\gamma$ on $\partial X$. In particular $\mc{A}_1(X) = \mc{A}_{\operatorname{min}}(X)$ by Proposition~\ref{prop:ehwz}(b).
\begin{conjecture}
\label{conj:A2}
For $X$ as above we have $\mc{A}_2(X) \le 2\mc{A}_1(X)$.
\end{conjecture}
This conjecture has nontrivial content when every action-minimizing Reeb orbit has rotation number at least $3/2$. (If an action-minimizing Reeb orbit has rotation number less than $3/2$, then its double cover has Conley-Zehnder index $5$ and thus verifies the conjectured inequality.) To explain how to test this, we need the following definitions.
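To spell out the index count in the parenthetical remark: for a nondegenerate elliptic orbit with irrational rotation number $\rho$, the $m$-fold cover has Conley-Zehnder index $2\lfloor m\rho\rfloor + 1$ (a standard formula which this sketch takes as given). In Python:

```python
from math import floor

def cz_iterate(rho, m):
    # CZ index of the m-fold cover of a nondegenerate elliptic orbit with
    # irrational rotation number rho (standard formula; assumed here).
    return 2 * floor(m * rho) + 1

# An action-minimizing orbit has CZ index 3, i.e. rotation number in [1, 3/2).
# If rho < 3/2, the double cover has CZ index 5, so A_2 <= 2 A_1 automatically.
assert cz_iterate(1.4, 1) == 3
assert cz_iterate(1.4, 2) == 5
```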
\begin{definition}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Let $L>0$. We say that $X$ is {\bf $L$-nondegenerate\/} if:
\begin{itemize}
\item $X$ does not have any Type 2 combinatorial Reeb orbit $\gamma$ with $\mc{A}_{\operatorname{comb}}(\gamma)\le L$.
\item Every Type 1 combinatorial Reeb orbit $\gamma$ with $\mc{A}_{\operatorname{comb}}(\gamma)\le L$ is nondegenerate, see Definition~\ref{def:crn}.
\end{itemize}
\end{definition}
It follows from Theorem~\ref{thm:smoothtocomb} that if a symplectic polytope $X$ is $L$-nondegenerate, then for all $\epsilon>0$ sufficiently small, all Reeb orbits on $\partial X_\epsilon$ with action less than $L$ are nondegenerate.
\begin{conjecture}
\label{conj:genericity}
For any integer $k$ and any real number $L$, the set of $L$-nondegenerate symplectic polytopes with $k$ vertices is dense in the set of all $k$-vertex convex polytopes containing $0$, topologized as an open subset of ${\mathbb R}^{4k}$.
\end{conjecture}
\begin{definition}
Let $k$ be a positive integer and let $X$ be a symplectic polytope in ${\mathbb R}^4$. Suppose that $X$ is $L$-nondegenerate and has a combinatorial Reeb orbit $\gamma$ with $\mc{A}_{\operatorname{comb}}(\gamma)<L$ and $\operatorname{CZ}_{\operatorname{comb}}(\gamma)=2k+1$. By analogy with \eqref{eqn:defak}, define
\[
\mc{A}_k^{\operatorname{comb}}(X)=\operatorname{min}\left\{\mc{A}_{\operatorname{comb}}(\gamma) \mid \operatorname{CZ}_{\operatorname{comb}}(\gamma) = 2k+1\right\}
\]
where the minimum is over combinatorial Reeb orbits $\gamma$ with combinatorial action less than $L$.
\end{definition}
Conjecture~\ref{conj:A2} is now equivalent\footnote{More precisely, by Theorem~\ref{thm:combtosmooth}, if $X$ is a polytope as above for which $\mc{A}_1^{\operatorname{comb}}(X)$ and $\mc{A}_2^{\operatorname{comb}}(X)$ are defined, and if
$\mc{A}_2^{\operatorname{comb}}(X) > 2\mc{A}_1^{\operatorname{comb}}(X)$,
then Conjecture~\ref{conj:A2} fails for (nondegenerate $C^\infty$ perturbations of) $\epsilon$-smoothings of $X$ for $\epsilon$ sufficiently small. Thus Conjecture~\ref{conj:A2} implies Conjecture~\ref{conj:A2comb}. If Conjecture~\ref{conj:genericity} is true, then one can conversely show, by approximating smooth domains by $L$-nondegenerate symplectic polytopes, that Conjecture~\ref{conj:A2comb} implies Conjecture~\ref{conj:A2}.} to the following:
\begin{conjecture}
\label{conj:A2comb}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Assume that $\mc{A}_1^{\operatorname{comb}}(X)$ and $\mc{A}_2^{\operatorname{comb}}(X)$ are defined. Then
\[
\mc{A}_2^{\operatorname{comb}}(X) \le 2\mc{A}_1^{\operatorname{comb}}(X).
\]
\end{conjecture}
One can use Theorems~\ref{thm:combtosmooth} and \ref{thm:smoothtocomb} to compute $\mc{A}_k^{\operatorname{comb}}(X)$. One can then test Conjecture~\ref{conj:A2comb} by using optimization algorithms to try to maximize the ratio $\mc{A}_2^{\operatorname{comb}}(X)/(2\mc{A}_1^{\operatorname{comb}}(X))$. So far we have not found any example where this ratio is greater than $1$.
\subsection*{The rest of the paper}
In \S\ref{sec:type1}, we investigate Type 1 combinatorial Reeb orbits in detail, we define the combinatorial rotation number, and we work out the example of the 24-cell. In \S\ref{sec:rdsp}, we establish foundational facts about the combinatorial Reeb flow on a symplectic polytope. In \S\ref{sec:quaternionic} we review a symplectic trivialization of the contact structure on a star-shaped hypersurface in ${\mathbb R}^4$ defined using the quaternions. We explain a key curvature identity due to Hryniewicz and Salom\~ao which implies that in the convex case, the rotation number of a Reeb trajectory increases monotonically as it evolves. In \S\ref{sec:smoothingdynamics} we study the Reeb flow on a smoothing of a polytope. In \S\ref{sec:correspondence} we use this work to prove the smooth-combinatorial correspondence of Theorems~\ref{thm:combtosmooth} and \ref{thm:smoothtocomb}. In the appendix, we review basic facts about rotation numbers that we need throughout.
\subsection*{Acknowledgments.} We thank A.\ Abbondandolo, P.\ Haim-Kislev, U.\ Hryniewicz, and Y.\ Ostrover for helpful conversations, and A.\ Balitskiy for pointing out some additional references. JC was partially supported by an NSF Graduate Research Fellowship. MH was partially supported by NSF grant DMS-1708899, a Simons Fellowship, and a Humboldt Research Award.
\section{Type 1 combinatorial Reeb orbits}
\label{sec:type1}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. In this section we give what amounts to an algorithm for finding the Type 1 combinatorial Reeb orbits and their combinatorial symplectic actions, see Proposition~\ref{prop:orbitbijection}. (Our actual computer implementation uses various optimizations not discussed here.) We also define combinatorial rotation numbers and work out the example of the 24-cell.
\subsection{Symplectic flow graphs}
\label{subsec:symplectic_flow_graphs}
We start by defining ``symplectic flow graphs'' in any even dimension. In the next subsection (\S \ref{subsubsec:symplectic_polytopes}), we will specialize to certain $2$-dimensional flow graphs that keep track of the combinatorics needed to find Type 1 Reeb orbits on the boundary of a symplectic polytope in ${\mathbb R}^4$.
\begin{definition}
\label{def:linear_domain}
A {\bf linear domain\/} is an intersection of a finite number of open or closed half-spaces in an affine space, or an affine space itself.
\end{definition}
\begin{definition}
\label{def:tangent_space} The {\bf tangent space} $TA$ of a linear domain $A$ is the tangent space $T_xA$ for any $x\in A$; the tangent spaces for different $x$ are canonically isomorphic to each other via translations.
\end{definition}
\begin{definition}
\label{def:affine_map_of_linear_domains}
Let $A$ and $B$ be linear domains. An {\bf affine map} $\phi:A \to B$ is the restriction of an affine map between affine spaces containing $A$ and $B$. Such a map induces a map on tangent spaces which we denote by $T\phi: TA\to TB$.
\end{definition}
\begin{definition}
\label{def:linear_flow}
Let $A$ and $B$ be linear domains. A {\bf linear flow} from $A$ to $B$ is a triple $\Phi = (D,\phi,f)$ consisting of:
\begin{itemize}
\item the {\bf domain of definition}: a linear domain $D \subset A$.
\item the {\bf flow map}: an affine map $\phi:D \to B$.
\item the {\bf action function}: an affine function $f:D \to {\mathbb R}$.
\end{itemize}
We sometimes write $\Phi:A\to B$. In the examples of interest to us, $\phi$ is injective and $f\ge 0$.
\end{definition}
\begin{definition}
\label{def:linear_flow_composition}
Let $\Phi = (D,\phi,f)$ be a linear flow from $A$ to $B$ and let $\Psi = (E,\psi,g)$ be a linear flow from $B$ to $C$. Their {\bf composition} is the linear flow $\Psi \circ \Phi: A \to C$ defined by
\[
\Psi \circ \Phi = (\phi^{-1}(E),\psi \circ \phi, f + g \circ \phi).
\]
\end{definition}
\begin{remark} Composition of linear flows is associative, and there is an identity linear flow $\iota_A:A \to A$ given by $\iota_A = (A,\operatorname{id}_A,0)$. If $\Phi_i=(D_i,\phi_i,f_i)$ is a linear flow from $A_{i-1}$ to $A_i$ for $i=1,\ldots,k$, and if $\Phi=(D,\phi,f)$ is the composition $\Phi_k\circ\cdots\circ \Phi_1$, then for $x\in D$, we have
\begin{equation}
\label{eqn:actioncomposition}
f(x) = \sum_{i=1}^kf_i((\phi_{i-1}\circ\cdots\circ\phi_1)(x)).
\end{equation}
\end{remark}
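Definition~\ref{def:linear_flow_composition} and equation \eqref{eqn:actioncomposition} translate directly into code. The following Python sketch (an illustrative data structure of our own, not the representation used in our program) models a linear flow on ${\mathbb R}^2$ as an affine map $x\mapsto Ax+b$ together with an affine action $x\mapsto c\cdot x + d$, ignoring the domains of definition:

```python
def compose(flow2, flow1):
    """Composition (Psi o Phi): flow map psi(phi(x)), action f(x) + g(phi(x))."""
    A2, b2, c2, d2 = flow2
    A1, b1, c1, d1 = flow1
    A = [[sum(A2[r][m] * A1[m][s] for m in range(2)) for s in range(2)]
         for r in range(2)]
    b = [sum(A2[r][m] * b1[m] for m in range(2)) + b2[r] for r in range(2)]
    c = [c1[s] + sum(c2[m] * A1[m][s] for m in range(2)) for s in range(2)]
    d = d1 + d2 + sum(c2[m] * b1[m] for m in range(2))
    return (A, b, c, d)

def apply_flow(flow, x):
    A, b, c, d = flow
    y = [sum(A[r][m] * x[m] for m in range(2)) + b[r] for r in range(2)]
    f = sum(c[m] * x[m] for m in range(2)) + d
    return y, f

# Check f(x) = f1(x) + f2(phi1(x)) on a sample pair of flows.
Phi1 = ([[1, 1], [0, 1]], [0, 0], [1, 0], 0)
Phi2 = ([[1, 0], [1, 1]], [1, 0], [0, 1], 2)
x = [1, 2]
y1, f1 = apply_flow(Phi1, x)
y2, f2 = apply_flow(Phi2, y1)
y, f = apply_flow(compose(Phi2, Phi1), x)
assert y == y2 and f == f1 + f2
```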
\begin{definition}
\label{def:linear_flow_graph}
A {\bf linear flow graph} $G$ is a triple $G = (\Gamma,A,\Phi)$ consisting of:
\begin{itemize}
\item A directed graph $\Gamma$ with vertex set $V(\Gamma)$ and edge set $E(\Gamma)$.
\item For each vertex $v$ of $\Gamma$, an open linear domain $A_v$.
\item For each edge $e$ of $\Gamma$ from $u$ to $v$, a linear flow $\Phi_e = (D_e,\phi_e,f_e):A_u \to A_v$.
\end{itemize}
\end{definition}
\begin{figure}[h!]
\includegraphics[width=\linewidth]{flow_graph.png}
\caption{An example of a flow graph with 4 nodes and 4 edges. The linear domains and flows are depicted above their corresponding nodes and edges.}
\label{fig:flow_graph}
\end{figure}
Let $G=(\Gamma,A,\Phi)$ be a linear flow graph. If $p = e_1\dots e_k$ is a path in $\Gamma$ from $u$ to $v$, we define an associated linear flow
\[
\Phi_p = (D_p,\phi_p,f_p) : A_u \longrightarrow A_v
\]
by
\[
\Phi_p = \Phi_{e_k} \circ \dots \circ \Phi_{e_1}.
\]
\begin{definition}
\label{def:flow_graph_trajectory}
A {\bf trajectory} $\gamma$ of $G$ is a pair $\gamma = (p,x)$, where $p$ is a path in $\Gamma$ and $x \in D_p$.
\end{definition}
\begin{definition}
\label{def:flow_graph_periodic_orbit}
A {\bf periodic orbit} of $G$ is an equivalence class of trajectories $\gamma = (p,x)$ where $p$ is a cycle in $\Gamma$ and $x$ is a fixed point of $\phi_p$, i.e.\ $\phi_p(x) = x$. Two such trajectories $\gamma = (p,x)$ and $\eta = (q,y)$ are equivalent if there are paths $r$ and $s$ in $\Gamma$ such that $p = rs$, $q = sr$, and $\phi_r(x) = y$. We often abuse notation and denote the periodic orbit by $\gamma=(p,x)$, instead of by the equivalence class thereof.
\end{definition}
\begin{definition}
\label{def:action_of_periodic_orbit}
The {\bf action} of a periodic orbit $\gamma = (p,x)$ is defined by $f(\gamma) = f_p(x)$.
\end{definition}
\begin{definition} \label{def:degenerate_orbit} A periodic orbit $\gamma = (p,x)$, where $p$ is a cycle based at $u$, is {\bf degenerate} if the induced map on tangent spaces $T\phi_p:TA_u \to TA_u$ has $1$ as an eigenvalue. Otherwise we say that $\gamma$ is {\bf nondegenerate\/}.
\end{definition}
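Finding the periodic orbits carried by a given cycle $p$ thus reduces to linear algebra: if $\phi_p(x) = Ax + b$, then fixed points solve $(I-A)x = b$, and the orbit is nondegenerate exactly when $\det(I-A)\ne 0$. A small $2\times 2$ Python sketch (one must still check that the solution lies in the domain $D_p$, which is omitted here):

```python
def fixed_point_2d(A, b):
    # Solve (I - A) x = b by Cramer's rule; returns None in the degenerate
    # case, i.e. when 1 is an eigenvalue of the return map A.
    m00, m01 = 1 - A[0][0], -A[0][1]
    m10, m11 = -A[1][0], 1 - A[1][1]
    det = m00 * m11 - m01 * m10
    if det == 0:
        return None
    return ((m11 * b[0] - m01 * b[1]) / det,
            (m00 * b[1] - m10 * b[0]) / det)

# Example: rotation by 90 degrees followed by a translation.
A = [[0, -1], [1, 0]]
b = [1, 0]
x = fixed_point_2d(A, b)
assert x == (0.5, 0.5)
# Verify it really is a fixed point of x -> Ax + b.
assert (A[0][0]*x[0] + A[0][1]*x[1] + b[0],
        A[1][0]*x[0] + A[1][1]*x[1] + b[1]) == x
```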
\begin{definition}
\label{def:symplectic_flow_graph}
A $2n$-dimensional {\bf symplectic flow graph\/} $G$ is a quadruple $G = (\Gamma,A,\omega,\Phi)$ where:
\begin{itemize}
\item $(\Gamma,A,\Phi)$ is a linear flow graph in which each linear domain $A_v$ has dimension $2n$.
\item $\omega$ assigns to each vertex $v$ of $\Gamma$ a linear symplectic form $\omega_v$ on $TA_v$.
\end{itemize}
We require that if $e$ is an edge from $u$ to $v$, then $\phi_e^*\omega_v = \omega_u$.
\end{definition}
\subsection{The symplectic flow graph of a 4d symplectic polytope}
\label{subsubsec:symplectic_polytopes}
\begin{definition}
\label{def:sfgp}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. We associate to $X$ the two-dimensional symplectic flow graph $G(X)=(\Gamma,A,\omega,\Phi)$ defined as follows:
\begin{itemize}
\item The vertex set of $\Gamma$ is the set of $2$-faces of $X$. The linear domain associated to a vertex is simply the corresponding $2$-face, regarded as a linear domain in ${\mathbb R}^4$. If $F$ is a $2$-face, then the symplectic form $\omega_F$ on $TF$ is the restriction of the standard symplectic form $\omega_0$ on ${\mathbb R}^4$.
\item If $F_1$ and $F_2$ are $2$-faces, then there is an edge $e$ in $\Gamma$ from $F_1$ to $F_2$ if and only if there is a $3$-face $E$ adjacent to $F_1$ and $F_2$, and a trajectory of the Reeb vector field $R_E$ on $E$ from some point in $F_1$ to some point in $F_2$. In this case, the linear flow
\[
\Phi_e = (D_e,\phi_e,f_e):F_1\longrightarrow F_2
\]
is defined as follows:
\begin{itemize}
\item
The domain $D_e$ is the set of $x\in F_1$ such that there exists a trajectory of $R_E$ from $x$ to some point $y\in F_2$.
\item
For $x$ as above, $\phi_e(x)=y$, and $f_e(x)$ is the time it takes to flow along the vector field $R_E$ from $x$ to $y$, or equivalently the integral of $\lambda_0$ along the line segment from $x$ to $y$.
\end{itemize}
\end{itemize}
\end{definition}
In the above definition, note that $\phi_e$ and $f_e$ are affine, because the vector field $R_E$ on $E$ is constant by equation \eqref{eqn:Reebinu}. A simple calculation as in \cite[Eq.\ (5.10)]{hwz} shows that the map $\phi_e$ is symplectic.
\begin{proposition}
\label{prop:orbitbijection}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Then there is a canonical bijection
\[
\{\mbox{periodic orbits of $G(X)$}\} \longleftrightarrow \{\mbox{Type $1$ combinatorial Reeb orbits of $X$}\}.
\]
If $(p,x)$ is a periodic orbit of $G(X)$, and if $\gamma$ is the corresponding combinatorial Reeb orbit, then
\begin{equation}
\label{eqn:identifyactions}
f(p,x) = \mc{A}_{\operatorname{comb}}(\gamma).
\end{equation}
\end{proposition}
\begin{proof}
Suppose $(p=e_1\cdots e_k,x)$ is a periodic orbit of $G(X)$. Let $E_i$ denote the $3$-face of $X$ associated to $e_i$. There is then a combinatorial Reeb orbit $\gamma=(L_1,\ldots,L_k)$, where $L_i$ is the line segment in $E_i$ from $(\phi_{e_{i-1}}\circ\cdots\circ \phi_{e_1})(x)$ to $(\phi_{e_i}\circ\cdots\circ \phi_{e_1})(x)$. It follows from Definitions~\ref{def:cro} and \ref{def:sfgp} that this construction defines a bijection from periodic orbits of $G(X)$ to Type $1$ combinatorial Reeb orbits of $X$. The identification of actions \eqref{eqn:identifyactions} follows from equation \eqref{eqn:actioncomposition}.
\end{proof}
By Proposition~\ref{prop:orbitbijection}, to find the Type 1 Reeb orbits\footnote{When testing Viterbo's conjecture and related conjectures,
although all Type 1 orbits of $X$ are detected by the flow graph $G(X)$, in view of Corollary 1.13 we must also account for Type 2 orbits. One can do this by either (1) extending $G(X)$ to a flow graph that includes the lower-dimensional faces of $X$ or (2) working with a flow graph $G(X)$ whose linear domains $A_F$ are the closures of the $2$-faces, rather than $2$-faces themselves. We use the first strategy in our computer program.} of $X$, one can compute the symplectic flow graph $G(X)=(\Gamma,A,\omega,\Phi)$, enumerate the cycles in the graph $\Gamma$, and for each cycle $p$, compute the fixed points of the map $\phi_p$ in the domain $D_p$. In order to avoid searching for arbitrarily long cycles in the graph $\Gamma$ in the cases of interest, we now need to discuss combinatorial rotation numbers.
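The cycle-enumeration step can be carried out by a standard depth-first search. The following generic Python sketch (not our optimized implementation) lists the cycles of length at most a given bound in a directed graph, reporting each cycle once, rooted at its smallest vertex; this matches the equivalence of periodic orbits under cyclic rotation $rs\sim sr$ in Definition~\ref{def:flow_graph_periodic_orbit}:

```python
def cycles_up_to_length(adj, max_len):
    """Enumerate cycles of length <= max_len (as vertex lists) in a directed
    graph given by adjacency lists, each reported once, rooted at its
    smallest vertex."""
    found = []
    def extend(path):
        u = path[-1]
        for v in adj[u]:
            if v == path[0]:
                found.append(path[:])          # cycle closed up
            elif v > path[0] and v not in path and len(path) < max_len:
                extend(path + [v])             # only visit vertices > root
    for start in adj:
        extend([start])
    return found

# A 3-vertex example with two cycles: 0 -> 1 -> 2 -> 0 and 1 -> 2 -> 1.
assert sorted(cycles_up_to_length({0: [1], 1: [2], 2: [0, 1]}, 3)) \
    == [[0, 1, 2], [1, 2]]
```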
\subsection{Combinatorial rotation numbers}
\begin{definition}
\label{def:sfg_trivialization}
A {\bf trivialization} of a $2n$-dimensional symplectic flow graph $G=(\Gamma,A,\omega,\Phi)$ is a pair $(\tau,\widetilde{\phi})$ consisting of:
\begin{itemize}
\item For each vertex $u$ of $\Gamma$, an isomorphism of symplectic vector spaces
\[
\tau_u:(TA_u,\omega_u) \stackrel{\simeq}{\longrightarrow} ({\mathbb R}^{2n},\omega_0).
\]
\item For each edge $e$ in $\Gamma$ from $u$ to $v$, a lift $\widetilde{\phi}_{e,\tau} \in \widetilde{\operatorname{Sp}}(2n)$ of the symplectic matrix
\[
\tau_v \circ T\phi_e \circ \tau_u^{-1}\in\operatorname{Sp}(2n).
\]
\end{itemize}
Here $\omega_0$ denotes the standard symplectic form on ${\mathbb R}^{2n}$, and $\widetilde{\operatorname{Sp}}(2n)$ denotes the universal cover of the symplectic group $\operatorname{Sp}(2n)$. We sometimes abuse notation and denote the trivialization $(\tau,\widetilde{\phi})$ simply by $\tau$.
\end{definition}
If $p = e_1\cdots e_k$ is a path in $\Gamma$ from $u$ to $v$, we define
\[
\widetilde{\phi}_{p,\tau} = \widetilde{\phi}_{e_k,\tau} \circ\cdots\circ \widetilde{\phi}_{e_1,\tau} \in \widetilde{\operatorname{Sp}}(2n).
\]
\begin{definition}
Let $G=(\Gamma,A,\omega,\Phi)$ be a $2$-dimensional symplectic flow graph, let $\tau$ be a trivialization of $G$, and let $p$ be a path in $\Gamma$. Define the {\bf rotation number\/} of $p$ with respect to $\tau$ by
\[
\rho_\tau(p) = \rho(\widetilde{\phi}_{p,\tau})\in{\mathbb R},
\]
where the right hand side is the rotation number on $\widetilde{\operatorname{Sp}}(2)$ reviewed in Appendix~\ref{app:rotation_numbers}.
\end{definition}
Suppose now that $X$ is a symplectic polytope in ${\mathbb R}^4$. We define a canonical trivialization $\tau$ of the symplectic flow graph $G(X)$ with the useful property that if $(p,x)$ is a periodic orbit of $G(X)$, and if $\gamma$ is the corresponding combinatorial Reeb orbit on $X$ from Proposition~\ref{prop:orbitbijection}, then the rotation number $\rho_\tau(p)$ is the limit of the rotation numbers of Reeb orbits on smoothings of $X$ that converge to $\gamma$.
Fix matrices ${\mathbf i}, {\mathbf j}, {\mathbf k} \in \operatorname{SO}(4)$ which represent the quaternion algebra, such that ${\mathbf i}$ is the standard almost complex structure. It follows from the formula $\omega_0(V,W)=\langle {\mathbf i}V,W\rangle$, together with the quaternion relations, that the matrix ${\mathbf i}$ is symplectic, while ${\mathbf j}$ and ${\mathbf k}$ are anti-symplectic. In examples below, in the coordinates $x_1,x_2,y_1,y_2$, we use the choice
\[
{\mathbf i} = \begin{pmatrix} & & -1 & \\ & & & -1 \\ 1 & & & \\ & 1 & &\end{pmatrix}, \quad {\mathbf j} = \begin{pmatrix} & -1 & & \\ 1 & & & \\ & & & 1 \\ & & -1 & \\ \end{pmatrix}, \quad {\mathbf k} = \begin{pmatrix} & & & -1 \\ & & 1 & \\ & -1 & & \\ 1 & & & \end{pmatrix}.
\]
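One can verify the quaternion relations for these explicit matrices, along with the identities $\omega_0({\mathbf i}\nu,{\mathbf j}\nu)=0$ and $\omega_0({\mathbf j}\nu,{\mathbf k}\nu)=1$ for a unit vector $\nu$ used in Lemma~\ref{lem:qtfg} below, by direct computation (a plain-Python sketch, coordinates ordered $(x_1,x_2,y_1,y_2)$):

```python
I4 = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
i = [[0, 0, -1, 0], [0, 0, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0]]
j = [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
k = [[0, 0, 0, -1], [0, 0, 1, 0], [0, -1, 0, 0], [1, 0, 0, 0]]

def mat(A, B):  # matrix product
    return [[sum(A[r][m] * B[m][c] for m in range(4)) for c in range(4)]
            for r in range(4)]

def vec(A, v):  # matrix times vector
    return [sum(A[r][m] * v[m] for m in range(4)) for r in range(4)]

def omega0(v, w):  # omega_0(V, W) = <iV, W>
    return sum(vec(i, v)[m] * w[m] for m in range(4))

neg = [[-x for x in row] for row in I4]
# quaternion relations: i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j
assert mat(i, i) == neg and mat(j, j) == neg and mat(k, k) == neg
assert mat(i, j) == k and mat(j, k) == i and mat(k, i) == j

nu = [0.5, 0.5, 0.5, 0.5]  # any unit vector works
assert abs(omega0(vec(i, nu), vec(j, nu))) < 1e-12
assert abs(omega0(vec(j, nu), vec(k, nu)) - 1.0) < 1e-12
```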
\begin{definition}
\label{def:qtfg}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. We define the {\bf quaternionic trivialization\/} $(\tau,\widetilde{\phi})$ of the symplectic flow graph $G(X)$ as follows.
\begin{itemize}
\item Let $F$ be a $2$-face of $X$. We define the isomorphism
\[
\tau_F: TF \stackrel{\simeq}{\longrightarrow} {\mathbb R}^2
\]
as follows. By Lemma~\ref{lem:Reebcone}, there is a unique $3$-face $E$ adjacent to $F$ such that the Reeb cone $R_F^+$ consists of the nonnegative multiples of the Reeb vector field $R_E$, and the latter points into $E$ from $F$. Let $\nu$ denote the outward unit normal vector to $E$. If $V\in TF$, define
\begin{equation}
\label{eqn:tauF}
\tau_F(V) = (\langle V,{\mathbf j}\nu\rangle, \langle V,{\mathbf k}\nu\rangle).
\end{equation}
\item If $e$ is an edge from $F_1$ to $F_2$, define $\widetilde{\phi}_{e,\tau}\in\widetilde{\operatorname{Sp}}(2)$ to be the unique lift of the symplectic matrix
\begin{equation}
\label{eqn:transitionmap}
\tau_{F_2} \circ T\phi_e \circ \tau_{F_1}^{-1} \in \operatorname{Sp}(2)
\end{equation}
that has rotation number in the interval $(-1/2,1/2]$.
\end{itemize}
\end{definition}
The following lemma verifies that this is a legitimate trivialization.
\begin{lemma}
\label{lem:qtfg}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. If $F$ is a $2$-face of $X$, then the linear map $\tau_F$ in \eqref{eqn:tauF} is an isomorphism of symplectic vector spaces.
\end{lemma}
\begin{proof}
Let $E$ and $\nu$ be as in the definition of $\tau_F$. Then $\{{\mathbf i}\nu,{\mathbf j}\nu,{\mathbf k}\nu\}$ is an orthonormal basis for $TE$. We have $\omega_0({\mathbf i}\nu,{\mathbf j}\nu)=\omega_0({\mathbf i}\nu,{\mathbf k}\nu)=0$ and $\omega_0({\mathbf j}\nu,{\mathbf k}\nu)=1$. If $V$ and $W$ are any two vectors in $TF\subset TE$, then expanding them in this basis, we find that $\omega_0(V,W) = \omega_0(\tau_F(V),\tau_F(W))$.
\end{proof}
\begin{remark}
\label{rem:altcon}
An alternate convention for the quaternionic trivialization would be to define an isomorphism
\[
\tau'_F: TF \stackrel{\simeq}{\longrightarrow} {\mathbb R}^2
\]
as follows. Let $E'$ be the other $3$-face adjacent to $F$ (so that the Reeb vector field $R_{E'}$ points out of $E'$ along $F$), and let $\nu'$ denote the outward unit normal vector to $E'$. Define
\[
\tau'_F(V) = (\langle V,{\mathbf j}\nu'\rangle, \langle V,{\mathbf k}\nu'\rangle).
\]
This is also an isomorphism of symplectic vector spaces by the same argument as in Lemma~\ref{lem:qtfg}.
\end{remark}
\begin{definition}
\label{def:transitionmatrix}
If $X$ is a symplectic polytope in ${\mathbb R}^4$ and $F$ is a $2$-face of $X$, define the {\bf transition matrix\/}
\[
\psi_F = \tau_F\circ (\tau_F')^{-1}\in\operatorname{Sp}(2).
\]
\end{definition}
\begin{lemma}
\label{lem:transitionmatrix}
If $X$ is a symplectic polytope in ${\mathbb R}^4$ and $F$ is a $2$-face of $X$, then the transition matrix $\psi_F$ is positive elliptic (see Definition~\ref{def:classifySp2}).
\end{lemma}
\begin{proof}
We compute that
\begin{equation}
\label{eqn:tauFprimeinverse}
(\tau'_F)^{-1} = \left( {\mathbf j}\nu' - \frac{\langle {\mathbf j}\nu',\nu\rangle}{\langle {\mathbf i}\nu',\nu\rangle}{\mathbf i}\nu', {\mathbf k}\nu' - \frac{\langle {\mathbf k}\nu',\nu\rangle}{\langle {\mathbf i}\nu',\nu\rangle}{\mathbf i}\nu'\right).
\end{equation}
To simplify notation, write $a_1 = \langle \nu',\nu\rangle$, $a_2=\langle {\mathbf i}\nu',\nu\rangle$, $a_3=\langle {\mathbf j}\nu',\nu\rangle$, and $a_4=\langle {\mathbf k}\nu',\nu\rangle$. It then follows from \eqref{eqn:tauF} and \eqref{eqn:tauFprimeinverse} that
\[
\psi_F = \frac{1}{a_2} \begin{pmatrix}
a_1a_2 - a_3a_4 & -a_2^2 -a_4^2\\ a_2^2 + a_3^2 & a_1a_2 + a_3a_4
\end{pmatrix}.
\]
Then $\operatorname{Tr}(\psi_F) = 2\langle \nu',\nu\rangle \in (-2,2)$, so $\psi_F$ is elliptic. Moreover $a_2>0$ by Lemma~\ref{lem:EinEout} below, so $\psi_F$ is positive elliptic.
\end{proof}
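Since $\{\nu',{\mathbf i}\nu',{\mathbf j}\nu',{\mathbf k}\nu'\}$ is an orthonormal basis of ${\mathbb R}^4$, the coefficients satisfy $a_1^2+a_2^2+a_3^2+a_4^2=1$, and one can check from the displayed formula that $\det(\psi_F)=1$ and $\operatorname{Tr}(\psi_F)=2a_1$. A numerical spot check in Python:

```python
def transition_matrix(a1, a2, a3, a4):
    # psi_F in terms of a_1 = <nu', nu>, a_2 = <i nu', nu>, etc.; needs a2 > 0.
    return [[(a1*a2 - a3*a4) / a2, (-a2**2 - a4**2) / a2],
            [(a2**2 + a3**2) / a2, (a1*a2 + a3*a4) / a2]]

a = (0.5, 0.5, 0.5, 0.5)  # sample values with a1^2 + a2^2 + a3^2 + a4^2 = 1
P = transition_matrix(*a)
tr = P[0][0] + P[1][1]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
assert abs(det - 1.0) < 1e-12      # psi_F lies in Sp(2) = SL(2, R)
assert abs(tr - 2 * a[0]) < 1e-12  # Tr(psi_F) = 2 <nu', nu>
assert -2 < tr < 2                 # elliptic
```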
\begin{corollary}
\label{cor:rotrange}
If $e$ is an edge of $\Gamma$ from a $2$-face $F_1$ to a $2$-face $F_2$ (i.e.\ if there is a trajectory of the Reeb vector field on some $3$-face $E$ from a point in $F_1$ to a point in $F_2$), then $\widetilde{\phi}_{e,\tau}$ has rotation number in the interval $(0,1/2)$.
\end{corollary}
\begin{proof}
It follows from the definitions that the map \eqref{eqn:transitionmap} agrees with the transition matrix $\psi_{F_2}$. By Lemma~\ref{lem:transitionmatrix}, this matrix is positive elliptic. It then follows from Lemma~\ref{lem:compute_rho_bar} that its mod ${\mathbb Z}$ rotation number is in the interval $(0,1/2)$.
\end{proof}
\begin{definition}
\label{def:crn}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Let $\gamma$ be a Type 1 combinatorial Reeb orbit for $X$.
\begin{itemize}
\item
We define the {\bf combinatorial rotation number\/} of $\gamma$ by
\[
\rho_{\operatorname{comb}}(\gamma) = \rho_\tau(p),
\]
where $(p,x)$ is the periodic orbit of $G(X)$ corresponding to $\gamma$ in Proposition~\ref{prop:orbitbijection}, and $\tau$ is the quaternionic trivialization of $X$.
\item We say that $\gamma$ is {\bf nondegenerate\/} if the periodic orbit $(p,x)$ is nondegenerate as in Definition~\ref{def:degenerate_orbit}. In this case we define the {\bf combinatorial Conley-Zehnder index\/} of $\gamma$ by equation \eqref{eqn:ccz}.
\end{itemize}
\end{definition}
\begin{remark}
\label{rem:ucmult}
By Corollary~\ref{cor:rotrange}, the combinatorial rotation number is the rotation number of a product of elements of $\widetilde{\operatorname{Sp}}(2)$ each with rotation number in the interval $(0,1/2)$. A formula for computing the rotation number of such a product is given by Proposition~\ref{prop:ucmult}.
\end{remark}
\subsection{Example: the 24-cell}
\label{sec:24_cell}
We now compute the symplectic flow graph $G(X)=(\Gamma,A,\omega,\Phi)$ and the quaternionic trivialization $\tau$ for the example where $X$ is the $24$-cell with vertices
\[
(\pm1,0,0,0),(0,\pm1,0,0),(0,0,\pm1,0),(0,0,0,\pm1),(\pm1/2,\pm1/2,\pm1/2,\pm1/2).
\]
The polytope $X$ has $24$ three-faces, each of which is an octahedron. The $3$-faces are contained in the hyperplanes
\[
\pm x_1 \pm x_2 = 1,\; \pm x_1 \pm y_1 = 1,\; \pm x_1 \pm y_2 = 1, \; \pm x_2 \pm y_1 = 1, \; \pm x_2 \pm y_2 = 1, \; \pm y_1 \pm y_2 = 1.
\]
There are $96$ two-faces, each of which is a triangle; thus the graph $\Gamma$ has $96$ vertices. It follows from the calculations below that none of the $2$-faces is Lagrangian, so that $X$ is a symplectic polytope.
To understand the edges of the graph $\Gamma$, consider for example the $3$-face $E$ contained in the hyperplane $x_1+y_1=1$. The vertices of this $3$-face are
\[
(1,0,0,0), (1/2,\pm1/2,1/2,\pm1/2), (0,0,1,0).
\]
The unit normal vector to this face is
\[
\nu = \frac{1}{\sqrt{2}}(1,0,1,0).
\]
The Reeb vector field on $E$ is
\[
R_E = 2\left(-\frac{\partial}{\partial x_1} + \frac{\partial}{\partial y_1}\right).
\]
Thus the Reeb flow on $E$ flows from the vertex $(1,0,0,0)$ to the vertex $(0,0,1,0)$ in time $1/2$. Each of the four $2$-faces of $E$ adjacent to $(1,0,0,0)$ flows to one of the four $2$-faces of $E$ adjacent to $(0,0,1,0)$, by an affine linear isomorphism.
For example, let $F_1$ be the $2$-face with vertices $(1,0,0,0)$, $(1/2,1/2,1/2,\pm 1/2)$, and let $F_2$ be the $2$-face with vertices $(0,0,1,0)$, $(1/2,1/2,1/2,\pm 1/2)$. Then $F_1$ flows to $F_2$, so there is an edge $e$ in the graph $\Gamma$ from $F_1$ to $F_2$. More explicitly, we can parametrize $F_1$ as
\[
\left(1-\frac{t_1+t_2}{2}, \frac{t_1+t_2}{2}, \frac{t_1+t_2}{2}, \frac{t_1-t_2}{2}\right), \quad t_1,t_2>0, \; t_1+t_2<1,
\]
and we can parametrize $F_2$ as
\[
\left(\frac{t_1+t_2}{2}, \frac{t_1+t_2}{2}, 1 - \frac{t_1+t_2}{2}, \frac{t_1-t_2}{2}\right), \quad t_1,t_2>0, \; t_1+t_2<1.
\]
With respect to these parametrizations, the flow map $\phi_e$ is simply
\[
\phi_e(t_1,t_2) = (t_1,t_2).
\]
The domain $D_e$ of $\phi_e$ is all of $F_1$, and the action function is
\[
f_e(t_1,t_2) = \frac{1-t_1-t_2}{2}.
\]
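Indeed, in the coordinates $(x_1,x_2,y_1,y_2)$ one can check directly that flowing the point of $F_1$ with parameters $(t_1,t_2)$ along $R_E$ for time $f_e(t_1,t_2)$ recovers the point of $F_2$ with the same parameters:
\[
\left(1-\frac{t_1+t_2}{2}, \frac{t_1+t_2}{2}, \frac{t_1+t_2}{2}, \frac{t_1-t_2}{2}\right) + \frac{1-t_1-t_2}{2}\,(-2,0,2,0) = \left(\frac{t_1+t_2}{2}, \frac{t_1+t_2}{2}, 1-\frac{t_1+t_2}{2}, \frac{t_1-t_2}{2}\right).
\]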
It turns out that for every other $3$-face $E'$, there is a linear symplectomorphism $A$ of ${\mathbb R}^4$ such that $AX=X$ and $AE=E'$. In fact, we can take $A$ to be right multiplication by an appropriate unit quaternion. It follows from this symplectic symmetry that the Reeb flow on each $3$-face behaves analogously. Putting these Reeb flows together, one finds that the graph $\Gamma$ consists of $8$ disjoint $12$-cycles. (This example is highly non-generic!) Further calculations show that for each $12$-cycle $p$, the map $\phi_p$ is the identity, so that every point in the interior of a $2$-face is on a Type 1 combinatorial Reeb orbit. Moreover, the action of each such orbit is equal to $2$. In particular, $X$ is ``combinatorially Zoll'' in the sense of Definition~\ref{def:combinatorially_Zoll}. Also, the volume of $X$ is $2$, so $X$ has systolic ratio $1$.
To see how the quaternionic trivialization works, let us compute $\widetilde{\phi}_{e,\tau}$ for the edge $e$ above. For the $2$-face $F_1$ above, the isomorphism $\tau_{F_1}$ is given in terms of the unit normal vector $\nu$ to $E$. We compute that
\[
{\mathbf j}\nu = \frac{1}{\sqrt{2}}(0,1,0,-1),\quad\quad
{\mathbf k}\nu = \frac{1}{\sqrt{2}}(0,1,0,1).
\]
It follows that in terms of the basis $(\partial_{t_1},\partial_{t_2})$ for $TF_1$, we have
\[
\tau_{F_1} = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\]
For the $2$-face $F_2$ above, the isomorphism $\tau_{F_2}$ is given in terms of the unit normal vector to the {\em other\/} $3$-face adjacent to $F_2$. This other $3$-face is in the hyperplane $x_2+y_1=1$ and so has unit normal vector
\[
\nu' = \frac{1}{\sqrt{2}}(0,1,1,0).
\]
We then similarly compute that in terms of the basis $(\partial_{t_1},\partial_{t_2})$ for $TF_2$, we have
\[
\tau_{F_2} = \frac{1}{\sqrt{2}}\begin{pmatrix} -1 & 0 \\ 1 & 1 \end{pmatrix}
\]
Therefore the matrix \eqref{eqn:transitionmap} for the edge $e$ is
\[
\tau_{F_2} \circ T\phi_e\circ \tau_{F_1}^{-1} = \begin{pmatrix} -1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} 0 & -1 \\ 1 & 1 \end{pmatrix}.
\]
This matrix is positive elliptic and has eigenvalues $e^{\pm i\pi/3}$. It follows that its lift $\widetilde{\phi}_{e,\tau}$ in $\widetilde{\operatorname{Sp}}(2)$ has rotation number $1/6$.
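Explicitly, the matrix has trace $1$ and determinant $1$, so its characteristic polynomial is
\[
\lambda^2 - \lambda + 1 = 0, \qquad \lambda = \frac{1 \pm i\sqrt{3}}{2} = e^{\pm i\pi/3};
\]
since by Corollary~\ref{cor:rotrange} the rotation number of the lift lies in the interval $(0,1/2)$, it equals $\frac{\pi/3}{2\pi} = \frac{1}{6}$.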
For one of the other three edges associated to $E$, the matrix \eqref{eqn:transitionmap} is the same as above, and for the other two edges associated to $E$, the matrix is $\begin{pmatrix} 1 & -1 \\ 1 & 0\end{pmatrix}$, whose lift also has rotation number $1/6$. It then follows from the quaternionic symmetry of $X$ mentioned earlier that for every edge $e'$ of the graph $\Gamma$, the matrix \eqref{eqn:transitionmap} is one of the above two matrices, so the lift $\widetilde{\phi}_{e',\tau}$ has rotation number $1/6$. One can further check that for each $12$-cycle in the graph, one obtains just one of the above two matrices repeated $12$ times, so each corresponding Type 1 combinatorial Reeb orbit has rotation number equal to $2$.
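To see the last claim concretely, write $M = \begin{pmatrix} 0 & -1 \\ 1 & 1 \end{pmatrix}$. Since $M$ is conjugate in $\operatorname{Sp}(2)$ to the rotation by angle $\pi/3$, we have $M^6 = I$, and since the rotation number is homogeneous under powers, the lift associated to each $12$-cycle satisfies
\[
\rho\left(\widetilde{M}^{12}\right) = 12\cdot\frac{1}{6} = 2,
\]
with underlying matrix $M^{12} = I$, consistent with the fact that $\phi_p$ is the identity.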
\section{Reeb dynamics on symplectic polytopes}
\label{sec:rdsp}
The goal of this section is to prove Proposition~\ref{prop:well-posed} and Lemma~\ref{lem:Reebcone}, describing the Reeb dynamics on the boundary of a symplectic polytope in ${\mathbb R}^4$.
\subsection{Preliminaries on tangent and normal cones}
We now prove some lemmas about tangent and normal cones which we will need; see \S\ref{sec:cro} for the definitions.
Recall that if $C$ is a cone in ${\mathbb R}^m$, its {\bf polar dual\/} is defined by
\[
C^o=\{y\in{\mathbb R}^m\mid \langle x,y\rangle \le 0 \;\forall x\in C\}.
\]
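For example,
\[
\operatorname{Cone}(e_1,\ldots,e_m)^o = \operatorname{Cone}(-e_1,\ldots,-e_m), \qquad V^o = V^{\perp} \text{ for a linear subspace } V\subset{\mathbb R}^m,
\]
so the polar dual of the positive orthant is the negative orthant.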
\begin{lemma}
\label{lem:ntd}
Let $X$ be a convex set in ${\mathbb R}^m$ and let $y\in\partial X$. Then
\[
N_y^+X = (T_y^+X)^o, \quad\quad T_y^+X = (N_y^+X)^o.
\]
\end{lemma}
\begin{proof}
If $C$ is a closed cone then $(C^o)^o = C$, so it suffices to prove that $N_y^+X = (T_y^+X)^o$.
To show that $N_y^+X\subset (T_y^+X)^\circ$, let $v\in N_y^+X$ and $w\in T_y^+X$; we need to show that $\langle v,w\rangle \le 0$. By the definition of $T_y^+X$, there exist a sequence of vectors $\{w_i\}$ and a sequence of positive real numbers $\{\epsilon_i\}$ such that $y+\epsilon_iw_i\in X$ for each $i$ and $\lim_{i\to\infty}w_i=w$. By the definition of $N_y^+X$ we have $\langle v,w_i\rangle \le 0$, and so $\langle v,w\rangle \le 0$.
To prove the reverse inclusion, if $v\in (T_y^+X)^o$, then for any $x\in X$ we have $x-y\in T_y^+X$, so $\langle v,x-y\rangle \le 0$. It follows that $v\in N_y^+X$.
\end{proof}
If $X$ is a convex polytope in ${\mathbb R}^m$ and if $E$ is an $(m-1)$-face of $X$, let $\nu_E$ denote the outward unit normal vector to $E$.
\begin{lemma}
\label{lem:ncn}
Let $X$ be a convex polytope in ${\mathbb R}^m$ and let $F$ be a face of $X$. Let $E_1,\ldots, E_k$ denote the $(m-1)$-faces whose closures contain $F$. Then
\begin{align}
\label{eqn:TFcone}
T_F^+X &= \left\{w\in {\mathbb R}^m \mid \langle w,\nu_{E_i}\rangle \le 0 \;\; \forall i=1,\ldots,k \right\},
\\
\label{eqn:NFcone}
N_F^+X & = \operatorname{Cone}
\left(
\nu_{E_1},\ldots, \nu_{E_k}
\right).
\end{align}
\end{lemma}
\begin{proof}
Let $y\in F$, and let $B$ be a small ball around $y$. Then $B\cap X=\cap_i(B\cap H_i)$ where $\{H_i\}$ is the set of all defining half-spaces for $X$ whose boundaries contain $F$. The boundaries of the half-spaces $H_i$ are the hyperplanes that contain the $(m-1)$-faces $E_1,\ldots, E_k$. It follows that $B\cap X$ is the set of $x\in B$ such that $\langle x-y,\nu_{E_i}\rangle \le 0$ for each $i=1,\ldots, k$. Equation \eqref{eqn:TFcone} follows. Taking polar duals and using Lemma~\ref{lem:ntd} then proves \eqref{eqn:NFcone}.
\end{proof}
\begin{lemma}
\label{lem:pp1}
Let $X$ be a convex polytope in ${\mathbb R}^m$ and let $F$ be a face of $X$. Let $v\in N_F^+X\setminus\{0\}$ and let $w\in T_F^+X\setminus\{0\}$. Then $\langle v,w\rangle = 0$ if and only if there is a face $E$ of $X$ with $F\subset \overline{E}$ such that $v\in N_E^+X$ and $w\in T_F^+\overline{E}$.
\end{lemma}
Here if $E\neq F$ then $T_F^+\overline{E}$ denotes the tangent cone of the polytope $\overline{E}$ at the face $F$ of $\overline{E}$; if $E=F$, then we interpret $T_F^+\overline{E}=TF$.
\begin{proof}[Proof of Lemma~\ref{lem:pp1}.]
As in Lemma~\ref{lem:ncn}, let $E_1,\ldots, E_k$ denote the $(m-1)$-faces adjacent to $F$.
$(\Rightarrow)$
By the definitions of $N_F^+X$ and $T_F^+X$, if $v\in N_F^+X$ and $w\in T_F^+X$ then $\langle v,w\rangle \le 0$. Assume also that $v$ and $w$ are both nonzero and $\langle v,w\rangle = 0$. Then we must have $v\in\partial N_F^+X$ and $w\in\partial T_F^+X$; otherwise we could perturb $v$ or $w$ to make the inner product positive, which would be a contradiction.
Since $w\in\partial T_F^+X$, it follows from \eqref{eqn:TFcone} that $\langle w,\nu_{E_i}\rangle = 0$ for some $i$. By renumbering we can arrange that $\langle w,\nu_{E_i}\rangle = 0$ if and only if $i\le l$ where $1\le l\le k$. Let $E=\cap_{i=1}^l E_i$. Then $E$ is a face of $X$ adjacent to $F$, and $w\in T_F^+\overline{E}$.
We now want to show that $v\in N_E^+X$. By \eqref{eqn:NFcone}, we can write $v = \sum_{i=1}^k a_i\nu_{E_i}$ with $a_i\ge 0$. Since $\langle v,w\rangle = 0$ and $\langle w,\nu_{E_i}\rangle = 0$ for $i\le l$ and $\langle w,\nu_{E_i}\rangle < 0$ for $i>l$, we must have $a_i=0$ for $i>l$. Thus $v\in\operatorname{Cone}(\nu_{E_1},\ldots,\nu_{E_l})$, so by \eqref{eqn:NFcone} applied to the face $E$, $v\in N_E^+X$.
$(\Leftarrow)$ Assume that there is a face $E$ with $F\subset\overline{E}$ such that $v\in N_E^+X$ and $w\in T_F^+\overline{E}$. We can renumber so that $E=\cap_{i=1}^l E_i$ where $1\le l \le k$. Then $v\in\operatorname{Cone}(\nu_{E_1},\ldots,\nu_{E_l})$, and $\langle w,\nu_{E_i}\rangle = 0$ for $i\le l$, so $\langle v,w\rangle = 0$.
\end{proof}
\subsection{The combinatorial Reeb flow is locally well-posed}
\label{sec:wp}
We now prove Proposition~\ref{prop:well-posed}, asserting that the ``combinatorial Reeb flow'' on the boundary of a symplectic polytope in ${\mathbb R}^4$ is locally well-posed. This is a consequence of the following two lemmas:
\begin{lemma}
\label{lem:wp1}
Let $X$ be a convex polytope in ${\mathbb R}^4$, and let $F$ be a face of $X$. Then the Reeb cone
\[
R_F^+X = {\mathbf i}N_F^+X\cap T_F^+X
\]
has dimension at least $1$.
\end{lemma}
Note that there is no need to assume that $0\in\operatorname{int}(X)$ in the above lemma, because the Reeb cone is invariant under translation of $X$.
\begin{lemma}
\label{lem:wp2}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$ and let $F$ be a face of $X$. Then the Reeb cone $R_F^+X$ has dimension at most $1$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:wp1}.] The proof has four steps.
{\em Step 1.\/} We need to show that there exists a unit vector in $R_F^+X$. We first rephrase this statement in a way that can be studied topologically.
Define
\[
B = \left\{(v,w)\in N_F^+X\times T_F^+X \;\big|\; \|v\|=\|w\|=1,\; \langle v,w\rangle = 0\right\}.
\]
Define a fiber bundle $\pi:Z\to B$ with fiber $S^2$ by setting
\[
Z_{(v,w)}=\left\{u\in{\mathbb R}^4 \;\big|\; \|u\|=1, \; \langle u,v\rangle = 0 \right\}.
\]
Define two sections
\[
s_0,s_1: B \longrightarrow Z
\]
by
\[
\begin{split}
s_0(v,w) &= {\mathbf i}v,\\
s_1(v,w) &= w.
\end{split}
\]
To show that there exists a unit vector in $R_F^+X$, we need to show that there exists a point $(v,w)\in B$ with $s_0(v,w) = s_1(v,w)$.
{\em Step 2.\/}
Let
\[
B_0 = \left\{w\in\partial T_F^+X \; \big| \; \|w\|=1\right\}.
\]
The space $B_0$ is the set of unit vectors on the boundary of a nondegenerate cone, and thus is homeomorphic to $S^2$. Recall from the proof of Lemma~\ref{lem:pp1} that if $(v,w)\in B$ then $w\in B_0$. We now show that the projection $B\to B_0$ sending $(v,w)\mapsto w$ is a homotopy equivalence.
To do so, observe that by Lemma~\ref{lem:pp1}, we have
\begin{equation}
\label{eqn:Bunion}
B = \bigcup_{F\subset E} \left\{v\in N_E^+X\;\big|\; \|v\|=1\right\} \times \left\{w\in T_F^+\overline{E} \;\big|\; \|w\|=1\right\}.
\end{equation}
If $F$ is a $3$-face, then in the union \eqref{eqn:Bunion}, we only have $E=F$; there is a unique unit vector $v\in N_E^+X$, and so the projection $B\to B_0$ is a homeomorphism.
If $F$ is a $2$-face, then in \eqref{eqn:Bunion}, $E$ can be either $F$ itself, or one of the two $3$-faces adjacent to $F$, call them $E_1$ and $E_2$. The contribution from $E=F$ is a cylinder, while the contributions from $E=E_1$ and $E_2$ are disks which are glued to the cylinder along its boundary. The projection $B\to B_0$ collapses the cylinder to a circle, which again is a homotopy equivalence.
If $F$ is a $1$-face, with $k$ adjacent $3$-faces, then the contribution to \eqref{eqn:Bunion} from $E=F$ consists of two disjoint closed $k$-gons. Each $2$-face $E$ adjacent to $F$ contributes a square with opposite edges glued to one edge of each $k$-gon. Each $3$-face $E$ adjacent to $F$ contributes a bigon filling in the gap between two consecutive squares. The projection $B\to B_0$ collapses each $k$-gon to a point and each bigon to an interval, which again is a homotopy equivalence.
Finally, suppose that $F$ is a $0$-face. Then $E=F$ makes no contribution to \eqref{eqn:Bunion}, since $TF=\{0\}$ contains no unit vectors. Now $B_0$ has a cell decomposition consisting of a $k$-cell for each $(k+1)$-face adjacent to $F$. The space $B$ is obtained from $B_0$ by thickening each $0$-cell to a closed polygon, and thickening each $1$-cell to a square. Again, this is a homotopy equivalence.
{\em Step 3.\/} The $S^2$-bundle $Z\to B$ is trivial. To see this, observe that $Z$ is the pullback of a bundle over $N_F^+X\setminus\{0\}$, whose fiber over $v$ is the set of unit vectors orthogonal to $v$. Since $N_F^+X\setminus\{0\}$ is contractible, the latter bundle is trivial, and thus so is $Z$. In particular, the bundle $Z$ has two homotopy classes of trivialization, which differ only in the orientation of the fiber. We now show that, using a trivialization to regard $s_0$ and $s_1$ as maps $B\to S^2$, the mod $2$ degrees of these maps are given by $\operatorname{deg}(s_0)=0$ and $\operatorname{deg}(s_1)=1$.
The section $s_0$ factors through the set of unit vectors in the cone $N_F^+X$, which is contractible; combined with the triviality of the bundle $Z$, this implies that $s_0$ is nullhomotopic, so $\operatorname{deg}(s_0)=0$.
To prove that $\operatorname{deg}(s_1)=1$, we need to pick an explicit trivialization of $Z$. To do so, fix a vector $v_0\in\operatorname{int}(T_F^+X)$. Let $S$ denote the set of unit vectors in the orthogonal complement $v_0^\perp$. Let $P:{\mathbb R}^4\to v_0^\perp$ denote the orthogonal projection. We then have a trivialization
\[
Z \stackrel{\simeq}{\longrightarrow} B\times S
\]
sending
\[
((v,w),u) \longmapsto ((v,w),Pu/\|Pu\|).
\]
Note here that for every $(v,w)\in B$, the restriction of $P$ to $v^\perp$ is an isomorphism, because otherwise $v$ would be orthogonal to $v_0$, but in fact we have $\langle v,v_0\rangle < 0$.
With respect to this trivialization, the section $s_1$ is a map $B\to S$ which is the composition of the projection $B\to B_0$ with the map $B_0\to S$ sending
\[
w \longmapsto Pw/\|Pw\|.
\]
The former map is a homotopy equivalence by Step 2, and the latter map is a homeomorphism because $v_0$ is not parallel to any vector in $\partial T_F^+X$. Thus $\operatorname{deg}(s_1)=1$.
{\em Step 4.\/} We now complete the proof of the lemma. Suppose to get a contradiction that there does not exist a point $p\in B$ with $s_0(p)=s_1(p)$. It follows, using a trivialization of $Z$ to regard $s_0$ and $s_1$ as maps $B\to S^2$, that $s_1$ is homotopic to the composition of $s_0$ with the antipodal map. Then $\operatorname{deg}(s_1)=-\operatorname{deg}(s_0)$. This contradicts Step 3.
\end{proof}
\begin{remark}
It might be possible to generalize Lemma~\ref{lem:wp1} to show that if $X$ is any convex set in ${\mathbb R}^{2n}$ with nonempty interior and if $z\in\partial X$, then the Reeb cone $R_z^+X$ is at least one dimensional.
\end{remark}
We now prepare for the proof of Lemma~\ref{lem:wp2}.
\begin{lemma} \label{lem:weak_well_posedness_for_polytopes} Let $X$ be a convex polytope in ${\mathbb R}^{2n}$. Then for every face $F$ of $X$, there exists a face $E$ with $F\subset\overline{E}$ such that
\[
R_F^+X \subset T^+_F\bar{E}.
\]
\end{lemma}
\noindent
{\em Proof.\/}
Let $\{E_i\}_{i=1}^N$ denote the set of faces whose closures contain $F$. By Lemma~\ref{lem:pp1}, we have
\begin{equation}
\label{eqn:wwp}
R_F^+X \subset \bigcup_{i=1}^N T_F^+\bar{E}_i.
\end{equation}
Let $V$ denote the subspace of ${\mathbb R}^{2n}$ spanned by $R^+_FX$. Note that since the latter set is a cone, it has a nonempty interior in $V$. We claim now that $V\subset TE_i$ for some $i$. If not, then $V\cap TE_i$ is a proper subspace of $V$ for each $i$. But by \eqref{eqn:wwp}, we have
\[
R^+_FX = \left(\cup_i T^+_F\bar{E}_i\right) \cap R^+_FX \subset \left(\cup_i TE_i\right) \cap V.
\]
This is a contradiction, since the left hand side has a nonempty interior in $V$, while the right hand side is a union of proper subspaces of $V$.
Since $V \subset TE_i$, it follows that $R^+_FX \subset T^+_F\bar{E}_i$, because by \eqref{eqn:wwp} again,
\[
R^+_FX = R^+_FX \cap V = R^+_FX \cap TE_i
\]
\[
\hspace{1.8in}
\subset TE_i \cap \bigg(\bigcup_j T^+_F\bar{E}_j\bigg) = T^+_F\bar{E}_i. \hspace{1.8in}\Box
\]
\begin{lemma}
\label{lem:wwp2}
Let $X$ be a convex polytope in ${\mathbb R}^{2n}$, and let $F$ be a face of $X$. Let $v\in R_F^+X$. Suppose that $v\in\operatorname{int}(T_F^+\overline{E})$ for some $(2n-1)$-face $E$ whose closure contains $F$. Then $v$ is a positive multiple of ${\mathbf i}\nu_E$.
\end{lemma}
\begin{proof}
Let $E=E_1,\ldots,E_N$ denote the $(2n-1)$-faces whose closures contain $F$, and let $\nu_i$ denote the outward unit normal vector to $E_i$. Since $v\in\operatorname{int}(T_F^+\overline{E})$, we have $\langle v,\nu_1\rangle=0$ and $\langle v,\nu_i\rangle < 0$ for $i>1$. Since $-{\mathbf i}v\in N_F^+X$, it follows from Lemma~\ref{lem:ncn} that we can write
\[
-{\mathbf i}v = \sum_{i=1}^Na_i\nu_i
\]
with $a_i\ge 0$. Since $\langle v,{\mathbf i}v\rangle = 0$, we conclude that $a_i=0$ for $i>1$. Thus $-{\mathbf i}v=a_1\nu_1$, and $a_1>0$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:wp2}.]
Suppose $v_0,v_1$ are distinct unit vectors in $R_F^+X$. By Lemma~\ref{lem:weak_well_posedness_for_polytopes}, there is a $3$-face $E$ such that $v_0$ and $v_1$ are both in $T_F^+\bar{E}$. In particular, $v_0$ and $v_1$ are linearly independent: they cannot be antipodal, since $-{\mathbf i}v_0$ and $-{\mathbf i}v_1$ both lie in the cone $N_F^+X$, which contains no line because $X$ has nonempty interior.
Since $v_0$ and $v_1$ are both in the cone $R_F^+X$, it follows that if $t\in[0,1]$ then the affine linear combination $(1-t)v_0+tv_1$ is also in this cone. Since $v_0$ and $v_1$ are linearly independent, these affine linear combinations cannot be in the interior of $T_F^+\overline{E}$, or else this would contradict the projective uniqueness in Lemma~\ref{lem:wwp2}. Consequently $v_0$ and $v_1$ are both contained in $T_F^+\overline{E'}$ for some $2$-face $E'$ on the boundary of $\overline{E}$.
We now have
\[
\omega(v_0,v_1) = \langle v_0,-{\mathbf i}v_1\rangle \le 0,
\]
where the inequality holds since $v_0\in T_F^+X$ and $-{\mathbf i}v_1\in N_F^+X$. By a symmetric calculation, $\omega(v_1,v_0)\le 0$. It follows that $\omega(v_0,v_1)=0$. Since $v_0$ and $v_1$ are linearly independent vectors in $TE'$, this contradicts the hypothesis that $\omega|_{TE'}$ is nondegenerate.
\end{proof}
\subsection{Description of the Reeb cone}
\label{sec:drc}
We now prove Lemma~\ref{lem:Reebcone}, describing the possibilities for the Reeb cone of a face of a symplectic polytope in ${\mathbb R}^4$.
\begin{lemma}
\label{lem:EinEout}
Let $X$ be a convex polytope in ${\mathbb R}^4$ and let $F$ be a $2$-face of $X$. Let $E_1$ and $E_2$ denote the $3$-faces adjacent to $F$, and let $\nu_i$ denote the outward unit normal vector to $E_i$.
\begin{itemize}
\item[\emph{(a)}] If $\langle {\mathbf i}\nu_{1},\nu_{2}\rangle < 0$, then every nonzero vector $w$ in the Reeb cone $R_{E_1}^+X$ points into $E_1$ from $F$, that is $w\in \operatorname{int}(T_F^+\overline{E_1})$.
\item[\emph{(b)}] If $\langle {\mathbf i}\nu_{1},\nu_{2}\rangle > 0$, then every nonzero vector $w$ in the Reeb cone $R_{E_1}^+X$ points out of $E_1$ from $F$, that is $w\in \operatorname{int}(-T_F^+\overline{E_1})$.
\item[\emph{(c)}] If $\langle {\mathbf i}\nu_{1},\nu_{2}\rangle = 0$, then $F$ is Lagrangian.
\end{itemize}
\end{lemma}
\begin{proof}
Let $\eta$ denote the unit normal vector to $F$ in $T\overline{E_1}$ pointing into $E_1$. The vector $\eta$ must be a linear combination of $\nu_1$ and $\nu_2$ (since it is normal to $F$), it must be orthogonal to $\nu_1$ (since it is tangent to $E_1$), and it must have negative inner product with $\nu_2$ (since it points into $E_1$). It follows that
\begin{equation}
\label{eqn:eta}
\eta = \frac{-\nu_2 + \langle\nu_1,\nu_2\rangle\nu_1}{\|-\nu_2 + \langle\nu_1,\nu_2\rangle\nu_1\|}.
\end{equation}
The vector $w$ points into $E_1$ if and only if $\langle \eta,w\rangle >0$, and the vector $w$ points out of $E_1$ if and only if $\langle \eta,w\rangle < 0$. For $w$ in the Reeb cone of $E_1$, we know that $w$ is a positive multiple of ${\mathbf i}\nu_1$. By equation \eqref{eqn:eta}, we have
\[
\langle \eta,{\mathbf i}\nu_1\rangle = \frac{-\langle{\mathbf i}\nu_1,\nu_2\rangle}{\|-\nu_2 + \langle\nu_1,\nu_2\rangle\nu_1\|}.
\]
Thus if $\langle {\mathbf i}\nu_1,\nu_2\rangle$ is nonzero, then it has opposite sign from $\langle \eta,w\rangle$. This proves (a) and (b).
If $\langle {\mathbf i}\nu_1,\nu_2\rangle = 0$, then $\omega({\mathbf i}\nu_1,{\mathbf i}\nu_2) = 0$, but ${\mathbf i}\nu_1$ and ${\mathbf i}\nu_2$ are linearly independent tangent vectors to $F$, so $F$ is Lagrangian. This proves (c).
\end{proof}
\begin{lemma}
\label{lem:la}
Let $X$ be a convex polytope in ${\mathbb R}^4$ and let $F$ be a 2-face of $X$. If $TF\cap R_F^+X\neq\{0\}$, then $F$ is Lagrangian.
\end{lemma}
\begin{proof}
If $w\in TF\cap R_F^+X$, then for any other vector $u\in TF$, we have
\[
\omega(w,u) = \langle {\mathbf i}w,u\rangle = 0
\]
since $-{\mathbf i}w\in N_F^+X$. If we also have $w\neq 0$, then it follows that $F$ is Lagrangian.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:Reebcone}.]
If $F$ is a $3$-face, then by the definition of the Reeb cone, $R_F^+X$ consists of all nonnegative multiples of ${\mathbf i}\nu_F$; and ${\mathbf i}\nu_F$ is a positive multiple of the Reeb vector field on $F$ by equation \eqref{eqn:Reebinu}.
Suppose now that $F$ is a $k$-face with $k<3$, and that $w$ is a nonzero vector in the Reeb cone $R_F^+X$. Applying Lemma~\ref{lem:pp1} to $v=-{\mathbf i}w$ and $w$, we deduce that there is a face $E$ of $X$ with $F\subset \overline{E}$ such that $-{\mathbf i}w\in N_E^+X$ and $w\in T_F^+\overline{E}$. In particular,
\begin{equation}
\label{eqn:terex}
w\in TE\cap R_E^+X.
\end{equation}
By Lemma~\ref{lem:la} and our hypothesis that $X$ is a symplectic polytope, $E$ is not a $2$-face.
If $F$ is a $2$-face, we conclude that $w$ is in the Reeb cone $R_E^+X$ for one of the $3$-faces $E$ adjacent to $F$. By Lemma~\ref{lem:EinEout}, $w$ must point into $E$.
If $F$ is a $1$-face, then $E$ is either a $3$-face adjacent to $F$, or $F$ itself. In the case when $E=F$, the vector $w$ cannot be in the Reeb cone of any $3$-face $F_3$ adjacent to $F$. The reason is that if $F_2$ is one of the two $2$-faces with $F \subset \overline{F_2} \subset \overline{F_3}$, then by Lemma~\ref{lem:EinEout}, the Reeb cone of $F_3$ is not tangent to $F_2$, so it certainly cannot be tangent to $F$.
If $F$ is a $0$-face, then $E$ is adjacent to $F$ and is either a $3$-face or a $1$-face. If $E$ is a $1$-face, then it is a bad $1$-face by \eqref{eqn:terex}.
\end{proof}
\section{The quaternionic trivialization}
\label{sec:quaternionic}
In this section let $Y\subset{\mathbb R}^4$ be a smooth star-shaped hypersurface with the contact form $\lambda = \lambda_0|_Y$ and contact structure $\xi=\op{Ker}(\lambda)$. We now define a special trivialization $\tau$ of the contact structure $\xi$, and we prove a key property of this trivialization.
\subsection{Definition of the quaternionic trivialization}
The following definition is a smooth analogue of Definition~\ref{def:qtfg}.
\begin{definition}
Define the {\bf quaternionic trivialization}
\begin{equation}
\label{eqn:qt}
\tau: \xi \stackrel{\simeq}{\longrightarrow} Y\times {\mathbb R}^2
\end{equation}
as follows. If $y\in Y$ and $V\in T_yY$, let $\nu$ denote the outward unit normal to $Y$ at $y$, and define
\[
\tau(V) = \left(y,\langle V,{\mathbf j}\nu\rangle, \langle V,{\mathbf k}\nu\rangle\right).
\]
By abuse of notation, for fixed $y\in Y$ we write $\tau:\xi_y\stackrel{\simeq}{\longrightarrow} {\mathbb R}^2$ to denote the restriction of \eqref{eqn:qt} to $\xi_y$ followed by projection to ${\mathbb R}^2$.
\end{definition}
From now on we always use the quaternionic trivialization $\tau$ for smooth star-shaped hypersurfaces in ${\mathbb R}^4$.
\begin{lemma}
The quaternionic trivialization $\tau$ is a symplectic trivialization of $\xi$.
\end{lemma}
\begin{proof}
Same calculation as the proof of Lemma~\ref{lem:qtfg}(a).
\end{proof}
\begin{remark}
The inverse
\[
\tau^{-1}: Y\times {\mathbb R}^2\stackrel{\simeq}{\longrightarrow} \xi
\]
is described as follows. Recall from \eqref{eqn:Reebinu} that the Reeb vector field at $y$ is a positive multiple of ${\mathbf i}\nu$. Then $\tau^{-1}(y,(1,0))$ is obtained by projecting ${\mathbf j}\nu$ to $\xi_y$ along the Reeb vector field, while $\tau^{-1}(y,(0,1))$ is obtained by projecting ${\mathbf k}\nu$ to $\xi_y$ along the Reeb vector field.
\end{remark}
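Concretely, since $\lambda(R)=1$, the projection of $T_y{\mathbb R}^4$ onto $\xi_y$ along the Reeb vector field is $V\mapsto V-\lambda(V)R$, so
\[
\tau^{-1}(y,(1,0)) = {\mathbf j}\nu - \lambda({\mathbf j}\nu)R, \qquad \tau^{-1}(y,(0,1)) = {\mathbf k}\nu - \lambda({\mathbf k}\nu)R.
\]
Since $R$ is a multiple of ${\mathbf i}\nu$ and $\{\nu,{\mathbf i}\nu,{\mathbf j}\nu,{\mathbf k}\nu\}$ is an orthonormal basis, one checks that $\tau$ sends these two vectors to $(1,0)$ and $(0,1)$ respectively.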
\subsection{Linearized Reeb flow}
We now make some definitions which we will need in order to bound the rotation numbers of Reeb orbits and Reeb trajectories.
\begin{definition}
\label{def:linearized}
If $y\in Y$ and $t\ge 0$, define the {\bf linearized Reeb flow\/} $\phi(y,t)\in\operatorname{Sp}(2)$ to be the composition
\begin{equation}
\label{eqn:lrf}
{\mathbb R}^2 \stackrel{\tau^{-1}}{\longrightarrow} \xi_y \stackrel{d\Phi_t}{\longrightarrow} \xi_{\Phi_t(y)} \stackrel{\tau}{\longrightarrow} {\mathbb R}^2
\end{equation}
where $\Phi_t:Y\to Y$ denotes the time $t$ flow of the Reeb vector field, and $\tau$ is the quaternionic trivialization. Define the {\bf lifted linearized Reeb flow\/} $\widetilde{\phi}(y,t)\in\widetilde{\operatorname{Sp}}(2)$ to be the arc
\begin{equation}
\label{eqn:llrf}
\widetilde{\phi}(y,t) =
\{\phi(y,s)\}_{s\in [0,t]}.
\end{equation}
\end{definition}
Note that we have the composition property
\[
\widetilde{\phi}(y,t_2+t_1) = \widetilde{\phi}(\phi_{t_1}(y),t_2) \circ \widetilde{\phi}(y,t_1).
\]
Next, let ${\mathbb P}\xi$ denote the ``projectivized'' contact structure
\[
{\mathbb P}\xi = (\xi\setminus Z)/\sim
\]
where $Z$ denotes the zero section, and two vectors are declared equivalent if they differ by multiplication by a positive scalar. Thus ${\mathbb P}\xi$ is an $S^1$-bundle over $Y$. The Reeb vector field $R$ on $Y$ canonically lifts, via the linearized Reeb flow, to a vector field $\widetilde{R}$ on ${\mathbb P}\xi$.
The quaternionic trivialization $\tau$ defines a diffeomorphism
\[
\overline{\tau}: {\mathbb P}\xi \stackrel{\simeq}{\longrightarrow} Y\times S^1.
\]
Let
\[
\sigma: {\mathbb P}\xi\longrightarrow S^1
\]
denote the composition of $\overline{\tau}$ with the projection $Y\times S^1\to S^1$.
\begin{definition}
Define the {\bf rotation rate\/}
\[
r: {\mathbb P}\xi\longrightarrow {\mathbb R}
\]
to be the derivative of $\sigma$ with respect to the lifted linearized Reeb flow,
\[
r=\widetilde{R}\sigma.
\]
Define the {\bf minimum rotation rate\/}
\[
r_{\operatorname{min}}:Y\longrightarrow {\mathbb R}
\]
by
\[
r_{\operatorname{min}}(y) = \min_{\widetilde{y}\in{\mathbb P}\xi_y}r(\widetilde{y}).
\]
\end{definition}
It follows from \eqref{eqn:rminbound} and \eqref{eqn:rhor} that we have the following lower bound on the rotation number of the lifted linearized flow of a Reeb trajectory.
\begin{lemma}
\label{lem:minrot}
Let $Y$ be a smooth star-shaped hypersurface in ${\mathbb R}^4$, let $y\in Y$, and let $t\ge 0$. Then
\[
\rho(\widetilde{\phi}(y,t)) \ge \int_0^tr_{\operatorname{min}}(\Phi_s(y))ds.
\]
\end{lemma}
\subsection{The curvature identity}
We now prove a key identity which relates the linearized Reeb flow, with respect to the quaternionic trivialization $\tau$, to the curvature of $Y$. This identity (in different notation) is due to U.\ Hryniewicz and P.\ Salom\~ao \cite{umberto}. Below, let $S:TY\otimes TY\to{\mathbb R}$ denote the second fundamental form defined by
\[
S(u,w) = \langle \nabla_u\nu,w\rangle,
\]
where $\nu$ denotes the outward unit normal vector to $Y$, and $\nabla$ denotes the trivial connection on the restriction of $T{\mathbb R}^4$ to $Y$. Also write $S(u)=S(u,u)$.
\begin{proposition}
\label{prop:uj}
Let $Y$ be a smooth star-shaped hypersurface in ${\mathbb R}^4$, let $y\in Y$, let $\theta\in{\mathbb R}/2\pi{\mathbb Z}$, and write $\sigma = \theta/2\pi\in{\mathbb R}/{\mathbb Z}$. Then at the point $\overline{\tau}^{-1}(y,\sigma)\in{\mathbb P}\xi$, we have
\begin{equation}
\label{eqn:curvatureidentity}
\widetilde{R}\sigma = \frac{1}{\pi\langle\nu,y\rangle}\left(S({\mathbf i}\nu) + S(\cos(\theta){\mathbf j}\nu + \sin(\theta){\mathbf k}\nu)\right).
\end{equation}
\end{proposition}
\begin{proof}
It follows from the definitions that
\begin{equation}
\label{eqn:ffd}
\begin{split}
2\pi\widetilde{R}\sigma =& \left\langle \mc{L}_R((\cos\theta){\mathbf j}\nu + (\sin\theta){\mathbf k}\nu), (\sin\theta){\mathbf j}\nu - (\cos\theta){\mathbf k}\nu\right\rangle\\
=& -(\cos^2\theta)\langle \mc{L}_R{\mathbf j}\nu,{\mathbf k}\nu\rangle + (\sin^2\theta)\langle \mc{L}_R{\mathbf k}\nu,{\mathbf j}\nu\rangle\\
& +(\sin\theta\cos\theta)(\langle \mc{L}_R{\mathbf j}\nu,{\mathbf j}\nu\rangle - \langle \mc{L}_R{\mathbf k}\nu,{\mathbf k}\nu\rangle).
\end{split}
\end{equation}
We compute
\begin{align}
\nonumber
\langle \mc{L}_R{\mathbf j}\nu,{\mathbf k}\nu\rangle &= \langle \nabla_R{\mathbf j}\nu - \nabla_{{\mathbf j}\nu}R, {\mathbf k}\nu\rangle\\
\nonumber
&= \frac{2}{\langle \nu,y\rangle}\left(\langle \nabla_{{\mathbf i}\nu}{\mathbf j}\nu,{\mathbf k}\nu\rangle - \langle\nabla_{{\mathbf j}\nu}{\mathbf i}\nu, {\mathbf k}\nu\rangle\right)\\
\nonumber
&= \frac{2}{\langle \nu,y\rangle}\left(-\langle \nabla_{{\mathbf i}\nu}\nu,{\mathbf i}\nu\rangle -\langle \nabla_{{\mathbf j}\nu}\nu,{\mathbf j}\nu\rangle\right)\\
\label{eqn:Ljk}
&= \frac{2}{\langle \nu,y\rangle}\left(-S({\mathbf i}\nu) - S({\mathbf j}\nu)\right).
\end{align}
Here in the second to third lines we have used the fact that multiplication on the left by a constant unit quaternion is an isometry. Similar calculations show that
\begin{align}
\label{eqn:Lkj}
\langle \mc{L}_R{\mathbf k}\nu,{\mathbf j}\nu\rangle &= \frac{2}{\langle \nu,y\rangle}\left(S({\mathbf i}\nu) + S({\mathbf k}\nu)\right),\\
\label{eqn:Ljj}
\langle \mc{L}_R{\mathbf j}\nu,{\mathbf j}\nu\rangle = - \langle \mc{L}_R{\mathbf k}\nu,{\mathbf k}\nu\rangle &= \frac{2}{\langle\nu,y\rangle} S({\mathbf j}\nu,{\mathbf k}\nu).
\end{align}
Plugging \eqref{eqn:Ljk}, \eqref{eqn:Lkj} and \eqref{eqn:Ljj} into \eqref{eqn:ffd} proves the curvature identity \eqref{eqn:curvatureidentity}.
\end{proof}
\begin{remark}
Since the second fundamental form is positive definite when $Y$ is strictly convex, and positive semidefinite when $Y$ is convex, by Lemma~\ref{lem:minrot} we obtain the following corollary:
{\em If $Y$ is a convex star-shaped hypersurface in ${\mathbb R}^4$ then $\widetilde{R}\sigma\ge 0$ everywhere, so $\widetilde{\phi}(y,t)$ has nonnegative rotation number for all $y\in Y$ and $t\ge 0$.
If $Y$ is a strictly convex star-shaped hypersurface in ${\mathbb R}^4$ then $\widetilde{R}\sigma >0$ everywhere, so $\widetilde{\phi}(y,t)$ has positive rotation number for all $y\in Y$ and $t>0$.\/}
\end{remark}
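For example, suppose $Y$ is the round sphere of radius $r$ centered at the origin. Then $\nu = y/r$ and $S(u) = \|u\|^2/r$, so the curvature identity \eqref{eqn:curvatureidentity} gives the constant rotation rate
\[
\widetilde{R}\sigma = \frac{1}{\pi r}\left(\frac{1}{r} + \frac{1}{r}\right) = \frac{2}{\pi r^2}.
\]
Since each Reeb orbit on this sphere has period $\pi r^2$, every orbit has rotation number $2$ with respect to the quaternionic trivialization.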
\section{Reeb dynamics on smoothings of polytopes}
\label{sec:smoothingdynamics}
In \S\ref{sec:smoothings} and \S\ref{sec:Rfssp} we study the Reeb flow on the boundary of a smoothing of a symplectic polytope in ${\mathbb R}^4$. In \S\ref{sec:nss} and \S\ref{sec:srn} we explain some more technical issues arising from the fact that the smoothing is only $C^1$, and in particular how to make sense of the ``rotation number'' of Reeb trajectories. In \S\ref{sec:rnlb} we derive important lower bounds on this rotation number.
\subsection{Smoothings of polytopes}
\label{sec:smoothings}
If $X\subset{\mathbb R}^m$ is a compact convex set and $\epsilon>0$, define the $\epsilon$-smoothing $X_\epsilon$ of $X$ by equation \eqref{eqn:deltasmoothing}.
Observe that $X_\epsilon$ is convex. Denote its boundary by $Y_\epsilon = \partial X_\epsilon$. We now describe $Y_\epsilon$ more explicitly, in a way which mostly does not depend on $\epsilon$. We first have:
\begin{lemma}
\label{lem:Yepsilon}
If $X$ is a compact convex set then
\[
Y_\epsilon = \{y \in {\mathbb R}^m\mid \operatorname{dist}(y,X) = \epsilon\}.
\]
\end{lemma}
\begin{proof}
The left hand side is contained in the right hand side because the distance to $X$ is a continuous function on ${\mathbb R}^m$. For the reverse inclusion, let $y\in {\mathbb R}^m$ with $\operatorname{dist}(y,X)=\epsilon$. Since $X$ is compact and convex, there is a unique point $x\in X$ which is closest to $y$. By convexity again, $X$ is contained in the closed half-space $\{z\in {\mathbb R}^m\mid \langle z-x, y-x\rangle \le 0\}$. It follows that $\operatorname{dist}(x+t(y-x),X)=\epsilon t$ for $t>0$, so that $y\in\partial X_\epsilon$.
\end{proof}
\begin{definition}
If $X\subset{\mathbb R}^m$ is a compact convex set, define the ``blown-up boundary''
\[
Y_0 = \left\{(y,v) \;\big|\; y\in \partial X,\; v\in N_y^+X,\; |v|=1\right\}\subset\partial X \times S^{m-1}.
\]
\end{definition}
We then have the following lemma, which is proved by similar arguments to Lemma~\ref{lem:Yepsilon}:
\begin{lemma}
\label{lem:bub}
Let $X\subset{\mathbb R}^m$ be a compact convex set and let $\epsilon>0$. Then:
\begin{itemize}
\item[\emph{(a)}]
There is a homeomorphism
\[
Y_0\stackrel{\simeq}{\longrightarrow} Y_\epsilon
\]
sending $(y,v)\mapsto y+\epsilon v$.
\item[\emph{(b)}]
The inverse homeomorphism sends $y\mapsto (x,\epsilon^{-1}(y-x))$ where $x$ is the unique closest point in $X$ to $y$.
\item[\emph{(c)}]
For $y\in Y_\epsilon$, if $x$ is the closest point in $X$ to $y$, then the positive normal cone $N_y^+X_\epsilon$ is the ray consisting of nonnegative multiples of $y-x$.
\end{itemize}\end{lemma}
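Lemma~\ref{lem:bub} can be illustrated numerically in the simplest case where $X$ is an axis-aligned box, for which the closest-point projection is coordinatewise clamping. The following Python sketch (the box example and helper names are ours, not from the text) checks parts (a) and (b), together with Lemma~\ref{lem:Yepsilon}:

```python
import numpy as np

def closest_point_box(y, lo, hi):
    """Closest point of the box [lo, hi]^m to y (coordinatewise clamp)."""
    return np.clip(y, lo, hi)

def to_smoothed_boundary(x, v, eps):
    """Lemma bub(a): the map Y_0 -> Y_eps, (x, v) |-> x + eps*v."""
    return x + eps * v

def from_smoothed_boundary(y, lo, hi, eps):
    """Lemma bub(b): y |-> (x, (y - x)/eps), x the closest point of X to y."""
    x = closest_point_box(y, lo, hi)
    return x, (y - x) / eps

# Sanity check on X = [0,1]^3 with eps = 0.25.
eps = 0.25
x = np.array([1.0, 1.0, 0.5])                # a point on a 1-face (edge) of X
v = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # unit vector in the normal cone
y = to_smoothed_boundary(x, v, eps)
x2, v2 = from_smoothed_boundary(y, 0.0, 1.0, eps)
assert np.allclose(x, x2) and np.allclose(v, v2)   # the maps are inverse
# Lemma lem:Yepsilon: y lies at distance exactly eps from X.
assert np.isclose(np.linalg.norm(y - closest_point_box(y, 0.0, 1.0)), eps)
```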
Suppose now that $X\subset{\mathbb R}^m$ is a convex polytope and $\epsilon>0$.
\begin{definition}
If $F$ is a face of $X$, define the {\bf $\epsilon$-smoothed face\/}
\[
F_\epsilon = \{x \in Y_\epsilon \mid \operatorname{dist}(x,F)=\epsilon\}.
\]
\end{definition}
By Lemma~\ref{lem:bub}, we have
\[
Y_\epsilon = \bigsqcup_F F_\epsilon
\]
and
\[
F_\epsilon = F + \{v\in N_F^+X\mid |v|=\epsilon\}.
\]
Note that each $F_\epsilon$ is a $C^\infty$ smooth hypersurface, and where the closure of one $F_\epsilon$ meets another, the outward unit normal vectors agree. It follows that $Y_\epsilon$ is a $C^1$ smooth hypersurface, and it is $C^\infty$ except along strata\footnote{We do not also need to mention strata of the form $F+\partial \{v\in N_F^+X\mid |v|=\epsilon\}$, because any point in $\partial N_F^+X$ is contained in $N_E^+X$ where $E$ is a face with $F\subset \partial E$.} of the form $\partial F + \{v\in N_F^+X\mid |v|=\epsilon\}$.
\subsection{The Reeb flow on a smoothed symplectic polytope}
\label{sec:Rfssp}
Suppose now that $X$ is a symplectic polytope in ${\mathbb R}^4$ and $\epsilon>0$. As noted above, $Y_\epsilon = \partial X_\epsilon$ is a $C^1$ convex hypersurface, and as such it has a well-defined $C^0$ Reeb vector field, which is smooth except along the strata of $Y_\epsilon$ arising from the boundaries of the faces of $X$. We now investigate the Reeb flow on $Y_\epsilon$ in more detail, as well as the lifted linearized Reeb flow $\widetilde{\phi}$ from Definition~\ref{def:linearized}.
\subsection*{General remarks.}
By Lemma~\ref{lem:bub}, a point in $Y_\epsilon$ lives in an $\epsilon$-smoothed face $F_\epsilon$ for a unique face $F$ of $X$, and thus has the form $y+\epsilon v$ where $y\in F$ and $v\in N_F^+X$ is a unit vector. By equation \eqref{eqn:Reebinu} and Lemma~\ref{lem:bub}(c), the Reeb vector field at $y+\epsilon v$ is given by
\begin{equation}
\label{eqn:Reebsmoothed}
R_{y+\epsilon v} = \frac{2{\mathbf i}v}{\langle v,y\rangle + \epsilon}.
\end{equation}
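Equation \eqref{eqn:Reebsmoothed} admits a quick numerical sanity check: with the identification ${\mathbb R}^4\cong{\mathbb C}^2$ and the standard contact form $\lambda=\frac{1}{2}\sum_i(x_i\,dy_i-y_i\,dx_i)$, one has $\lambda_p(w)=\frac{1}{2}\langle {\mathbf i}p,w\rangle$, and the vector \eqref{eqn:Reebsmoothed} satisfies $\lambda(R)=1$ and is orthogonal to the outward normal $v$ from Lemma~\ref{lem:bub}(c). A Python sketch (the coordinate convention for ${\mathbf i}$ below is our assumption, and the point $(y,v)$ is a generic placeholder):

```python
import numpy as np

def quat_i(p):
    """The complex structure i on R^4 = C^2, in coordinates (x1,y1,x2,y2)."""
    x1, y1, x2, y2 = p
    return np.array([-y1, x1, -y2, x2])

def lam(p, w):
    """Standard contact form lambda = (1/2) sum(x dy - y dx) = (1/2)<i p, w>."""
    return 0.5 * np.dot(quat_i(p), w)

def reeb_smoothed(y, v, eps):
    """Equation (Reebsmoothed): R at y + eps*v equals 2 i v / (<v,y> + eps)."""
    return 2.0 * quat_i(v) / (np.dot(v, y) + eps)

y = np.array([1.0, 0.0, 0.5, 0.2])      # placeholder boundary point
v = np.array([0.6, 0.0, 0.8, 0.0])      # placeholder unit normal vector
eps = 0.1
R = reeb_smoothed(y, v, eps)
p = y + eps * v
assert np.isclose(lam(p, R), 1.0)       # R satisfies lambda(R) = 1
assert np.isclose(np.dot(R, v), 0.0)    # R is tangent to Y_eps (v is the normal)
```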
\begin{lemma}
\label{lem:transinv}
The Reeb vector field \eqref{eqn:Reebsmoothed} on the $\epsilon$-smoothed face $F_\epsilon$, regarded as a map $F_\epsilon\to {\mathbb R}^4$, depends only on $v\in N_F^+X$ and not on the choice of $y\in F$.
\end{lemma}
\begin{proof}
This follows from equation \eqref{eqn:Reebsmoothed}, because for fixed $v\in N_F^+X$ and for two points $y,y'\in F$, by the definition of positive normal cone we have $\langle v,y-y'\rangle = 0$.
\end{proof}
\subsection*{Smoothed 3-faces.} The Reeb flow on a smoothed $3$-face is very simple.
\begin{lemma}
\label{lem:smoothed3face}
Let $X\subset{\mathbb R}^4$ be a symplectic polytope, let $\epsilon>0$, and let $E$ be a $3$-face of $X$ with outward unit normal vector $\nu$.
\begin{itemize}
\item[\emph{(a)}]
The Reeb vector field on $E_\epsilon$, regarded as a map $E_\epsilon\to{\mathbb R}^4$, agrees with the Reeb vector field on $E$, up to rescaling by a positive constant which limits to $1$ as $\epsilon\to0$.
\item[\emph{(b)}]
If $\gamma:[0,t]\to E_\epsilon$ is a Reeb trajectory, then $\widetilde{\phi}(\gamma(0),t)=1\in\widetilde{\operatorname{Sp}}(2)$.
\item[\emph{(c)}]
If $y\in\partial E$, then at the point $y+\epsilon\nu\in Y_\epsilon$, the Reeb vector field on $Y_\epsilon$ is not tangent to $\partial E_\epsilon$.
\end{itemize}
\end{lemma}
\begin{proof}
(a)
This follows from equation \eqref{eqn:Reebsmoothed}.
(b) For $s\in[0,t]$, the Reeb flow $\Phi_s:Y_\epsilon\to Y_\epsilon$ is a translation on a neighborhood of $\gamma(0)$. Consequently the linearized Reeb flow $d\Phi_s:\xi_{\gamma(0)}\to \xi_{\gamma(s)}$ is the identity, if we regard $\xi_{\gamma(0)}$ and $\xi_{\gamma(s)}$ as (identical) two-dimensional subspaces of ${\mathbb R}^4$. The quaternionic trivialization $\tau:{\mathbb R}^2\to \xi_{\gamma(s)}$ likewise does not depend on $s\in[0,t]$. Consequently $\phi(\gamma(0),s)=1$ for all $s\in[0,t]$. Thus $\widetilde{\phi}(\gamma(0),t)$ is the constant path at the identity in $\operatorname{Sp}(2)$.
(c)
It is equivalent to show that the Reeb vector field on $E$ at $y$ is not tangent to $\partial E$. If it were, then it would be tangent to some $2$-face $F\subset \partial E$. By Lemma~\ref{lem:la}, the $2$-face $F$ would then be Lagrangian, contradicting our hypothesis that the polytope $X$ is symplectic.
\end{proof}
\subsection*{Smoothed 2-faces.}
Let $F$ be a $2$-face. Let $E_1$ and $E_2$ be the $3$-faces adjacent to $F$. By Lemma~\ref{lem:Reebcone}, we can choose these so that $R_{E_2}$ points out of $F$; and a similar argument shows that then $R_{E_1}$ points into $F$. Let $\nu_1$ and $\nu_2$ denote the outward unit normal vectors to $E_1$ and $E_2$ respectively. By Lemma~\ref{lem:ncn}, the normal cone $N_F^+$ consists of nonnegative linear combinations of $\nu_1$ and $\nu_2$. Let $\{v,w\}$ be an orthonormal basis for $F^\perp$, such that the orientation given by $(v,w)$ agrees with the orientation given by $(\nu_1,\nu_2)$. For $i=1,2$ we can write $\nu_i=(\cos\theta_i) v + (\sin\theta_i) w$ where $0<\theta_2-\theta_1 < \pi$. We then have a homeomorphism
\begin{equation}
\label{eqn:smoothed2face}
\begin{split}
F \times [\theta_1,\theta_2] &\stackrel{\simeq}{\longrightarrow} F_\epsilon,\\
(y,\theta) &\longmapsto
y+\epsilon((\cos\theta) v + (\sin\theta) w).
\end{split}
\end{equation}
In the coordinates $(y,\theta)$, the Reeb vector field $R$ on $F_\epsilon$ depends only on $\theta$ by Lemma~\ref{lem:transinv}, and has positive $\partial_\theta$ coordinate for both $\theta=\theta_1$ and $\theta=\theta_2$ by our choice of labeling of $E_1$ and $E_2$. By equation \eqref{eqn:Reebsmoothed}, Lemma~\ref{lem:la}, and our hypothesis that the polytope $X$ is symplectic, the $\partial_\theta$ component of the Reeb vector field is positive on all of $F_\epsilon$.
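The fact that the $\partial_\theta$ component has a fixed sign independent of $\theta$ can also be seen from the identity $\langle {\mathbf i}\nu(\theta),\nu'(\theta)\rangle = \langle {\mathbf i}v,w\rangle$ for $\nu(\theta)=(\cos\theta)v+(\sin\theta)w$, which holds because ${\mathbf i}$ is skew-adjoint and $\langle {\mathbf i}\nu,\nu\rangle=0$; the right hand side is nonzero exactly when the plane $F^\perp$ (equivalently $F$) is not Lagrangian. A numerical confirmation in Python (our coordinate convention for ${\mathbf i}$, with placeholder vectors $v,w$ spanning a non-Lagrangian plane):

```python
import numpy as np

def quat_i(p):
    """The complex structure i on R^4, in coordinates (x1,y1,x2,y2)."""
    x1, y1, x2, y2 = p
    return np.array([-y1, x1, -y2, x2])

v = np.array([1.0, 0.0, 0.0, 0.0])   # placeholder orthonormal basis of F^perp
w = np.array([0.0, 0.6, 0.8, 0.0])
vals = []
for theta in np.linspace(0.2, 1.3, 7):
    nu  = np.cos(theta) * v + np.sin(theta) * w      # unit normal nu(theta)
    dnu = -np.sin(theta) * v + np.cos(theta) * w     # direction of d/dtheta
    vals.append(np.dot(quat_i(nu), dnu))
# <i nu(theta), nu'(theta)> is independent of theta and equals <i v, w>:
assert np.allclose(vals, np.dot(quat_i(v), w))
assert np.dot(quat_i(v), w) != 0.0   # span{v,w} is not Lagrangian here
```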
Let $U_{F,\epsilon}\subset F$ denote the set of $y\in F$ such that the Reeb flow on $Y_\epsilon$ starting at $(y,\theta_1)\in F_\epsilon$ stays in $F_\epsilon$ until reaching a point in $F\times\{\theta_2\}$, which we denote by $(\phi_{F,\epsilon}(y),\theta_2)$. Thus we have a well-defined ``flow map'' $\phi_{F,\epsilon}: U_{F,\epsilon}\to F$.
\begin{lemma}
\label{lem:stf}
Let $F$ be a two-face of a symplectic polytope $X\subset {\mathbb R}^4$. Then:
\begin{itemize}
\item[\emph{(a)}] The flow map $\phi_{F,\epsilon}:U_{F,\epsilon}\to F$ above is translation by a vector $V_{F,\epsilon}\in TF$.
\item[\emph{(b)}]
$|V_{F,\epsilon}|=O(\epsilon)$ and $\lim_{\epsilon\to 0}U_{F,\epsilon}=F$.
\item[\emph{(c)}]
Let $y\in U_{F,\epsilon}$ and let $t$ be the Reeb flow time on $F_\epsilon$ from $y+\epsilon\nu_1$ to $\phi_{F,\epsilon}(y)+\epsilon\nu_2$. Then $\phi(y,t)\in\operatorname{Sp}(2)$ agrees with the transition matrix $\psi_F$ in Definition~\ref{def:transitionmatrix}, and $\widetilde{\phi}(y,t)\in\widetilde{\operatorname{Sp}}(2)$ is the unique lift of $\psi_F$ with rotation number in the interval $(0,1/2)$.
\end{itemize}
\end{lemma}
\begin{proof}
(a) If $y,y'\in U_{F,\epsilon}$, then it follows from the translation invariance in Lemma~\ref{lem:transinv} that $\phi_{F,\epsilon}(y)-y=\phi_{F,\epsilon}(y')-y'$, so $\phi_{F,\epsilon}$ is a translation.
(b) It follows from equation \eqref{eqn:Reebsmoothed} that for each $v$, the Reeb vector field $R_{y+\epsilon v}$, regarded as a vector in ${\mathbb R}^4$, has a well-defined limit as $\epsilon\to 0$, which by Lemma~\ref{lem:la} is not tangent to $F$. Since $\partial_\theta$, regarded as a vector in ${\mathbb R}^4$, has length $\epsilon$, it follows that the flow time of the Reeb vector field on $F_\epsilon$ from $F\times\{\theta_1\}$ to $F\times\{\theta_2\}$ is $O(\epsilon)$. Consequently the translation vector $V_{F,\epsilon}$ has length $O(\epsilon)$, and the complement $F\setminus U_{F,\epsilon}$ of the domain of the flow map is contained within distance $O(\epsilon)$ of $\partial F$.
(c) Write $y_1=y+\epsilon\nu_1$ and $y_2=\phi_{F,\epsilon}(y)+\epsilon\nu_2$. By part (a) and the translation invariance in Lemma~\ref{lem:transinv}, the time $t$ Reeb flow $\Phi_t$ on $Y_\epsilon$ restricted to $U_{F,\epsilon} + \epsilon \nu_1$ is a translation in ${\mathbb R}^4$.
Hence the derivative of $\Phi_t$ on the full tangent space of $Y_\epsilon$, namely
\[
d\Phi_t: T_{y_1}Y_\epsilon \longrightarrow T_{y_2}Y_\epsilon,
\]
restricts to the identity on $TF$. We now have a commutative diagram
\[
\begin{CD}
\xi_{y_1} @>>> TF @>{\tau_F'}>> {\mathbb R}^2 \\
@V{d\Phi_t}VV @V{1}VV @VV{\psi_F}V \\
\xi_{y_2} @>>> TF @>{\tau_F}>> {\mathbb R}^2.
\end{CD}
\]
Here the upper left horizontal arrow is projection along the Reeb vector field in $T_{y_1}Y_\epsilon$, and the lower left horizontal arrow is projection along the Reeb vector field in $T_{y_2}Y_\epsilon$. The right horizontal arrows were defined in Definition~\ref{def:qtfg} and Remark~\ref{rem:altcon}. The left square commutes because $d\Phi_t$ preserves the Reeb vector field. The right square commutes by Definition~\ref{def:transitionmatrix}. The composition of the arrows in the top row is the quaternionic trivialization $\tau$ on $\xi_{y_1}$, and the composition of the arrows in the bottom row is the quaternionic trivialization $\tau$ on $\xi_{y_2}$. Going around the outside of the diagram then shows that $\phi(y,t)=\psi_F$.
To determine the lift $\widetilde{\phi}(y,t)$, note that this is actually defined for, and depends continuously on, any $\epsilon>0$ and any pair of hyperplanes $E_1$ and $E_2$ that do not contain the origin and that intersect in a non-Lagrangian $2$-plane $F$. Thus we can denote this lift by $\widetilde{\phi}(E_1,E_2,\epsilon)\in\widetilde{\operatorname{Sp}}(2)$.
Now fixing $E_1$, $F$, and $\epsilon$, we can interpolate between $E_1$ and $E_2$ via a $1$-parameter family of hyperplanes $\{E_s\}_{s\in[1,2]}$ such that $0\notin E_s$ and $E_1\cap E_s=F$ for $1<s\le 2$. The rotation number $\rho:\widetilde{\operatorname{Sp}}(2)\to{\mathbb R}$ then gives us a continuous map
\[
\begin{split}
f: (1,2] &\longrightarrow {\mathbb R},\\
s &\longmapsto \rho\left(\widetilde{\phi}(E_1,E_s,\epsilon)\right)
\end{split}
\]
We have $\lim_{s\searrow 1}\widetilde{\phi}(E_1,E_s,\epsilon)=1$, so $\lim_{s\searrow 1}f(s) = 0$. On the other hand, for each $s\in(1,2]$, the fractional part of $f(s)$ is in the interval $(0,1/2)$ by Lemma~\ref{lem:transitionmatrix}. It follows by continuity that $f(s)\in(0,1/2)$ for all $s\in(1,2]$. Thus $f(2)\in(0,1/2)$, which is what we wanted to prove.
\end{proof}
\subsection*{Smoothed 1-faces.} The Reeb flow on a smoothed $1$-face is more complicated, but we will not need to analyze this in detail. We just remark that one can see the difference between good and bad $1$-faces in the Reeb dynamics on their smoothings. Namely:
\begin{remark}
\label{rem:spiral}
If $L$ is a bad $1$-face, then by definition, there is a unique unit vector $v\in N_L^+X$ such that ${\mathbf i}v$ is tangent to $L$. The line segment $L+\epsilon v\subset L_\epsilon$ is then a Reeb trajectory. On the complement of this line in $L_\epsilon$, the Reeb vector field spirals around the line, with the number of times that it spirals around going to infinity as $\epsilon\to 0$. This gives some intuition why Type 3 combinatorial Reeb orbits do not correspond to limits of sequences of Reeb orbits on smoothings with bounded rotation number.
\end{remark}
By contrast, if $L$ is a good $1$-face, then the Reeb vector field on $L_\epsilon$ always has a nonzero component in the $N_L^+X$ direction.
\subsection*{Smoothed 0-faces.} If $P$ is a $0$-face, then by Lemma~\ref{lem:bub}, $P_\epsilon$ is identified with a domain in $S^3$. By equation \eqref{eqn:Reebsmoothed}, the Reeb vector field on this domain agrees, up to reparametrization, with the standard Reeb vector field on the unit sphere in ${\mathbb R}^4$.
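For illustration: up to reparametrization, the flow on a smoothed $0$-face is $v'={\mathbf i}v$ on the unit sphere, whose orbits are the Hopf circles $v(t)=(\cos t)\,v_0+(\sin t)\,{\mathbf i}v_0$, since ${\mathbf i}^2=-1$ and ${\mathbf i}v_0\perp v_0$. A quick numerical check in Python (coordinate convention for ${\mathbf i}$ is our assumption):

```python
import numpy as np

def quat_i(p):
    """The complex structure i on R^4, in coordinates (x1,y1,x2,y2)."""
    x1, y1, x2, y2 = p
    return np.array([-y1, x1, -y2, x2])

v0 = np.array([0.5, 0.5, 0.5, 0.5])     # a unit vector on S^3
t = 1.3
v_closed = np.cos(t) * v0 + np.sin(t) * quat_i(v0)   # the Hopf circle at time t
assert np.isclose(np.linalg.norm(v_closed), 1.0)      # stays on the unit sphere
# The closed form solves v' = i v (finite-difference check of the ODE):
h = 1e-6
v_plus = np.cos(t + h) * v0 + np.sin(t + h) * quat_i(v0)
assert np.allclose((v_plus - v_closed) / h, quat_i(v_closed), atol=1e-4)
```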
\subsection{Non-smooth strata}
\label{sec:nss}
We now investigate in more detail how Reeb trajectories on $Y_\epsilon$ intersect the strata where $Y_\epsilon$ is not $C^\infty$.
Let $\Sigma$ denote the subset of $Y_\epsilon$ where $Y_\epsilon$ is not locally $C^\infty$. By the discussion at the end of \S\ref{sec:smoothings}, we can write
\[
\Sigma = \Sigma_1 \sqcup \Sigma_2 \sqcup \Sigma_3
\]
where:
\begin{itemize}
\item
$\Sigma_1$ is the disjoint union of sets
\begin{equation}
\label{eqn:Sigma1}
P+\{v\in N_L^+X\mid |v|=\epsilon\}
\end{equation}
where $P$ is a vertex of $X$, and $L$ is a $1$-face adjacent to $P$.
\item
$\Sigma_2$ is the disjoint union of sets
\begin{equation}
\label{eqn:Sigma2}
L+\{v\in N_F^+X\mid |v|=\epsilon\}
\end{equation}
where $L$ is a $1$-face, and $F$ is a $2$-face adjacent to $L$.
\item
$\Sigma_3$ is the disjoint union of sets
\[
F+\epsilon\nu
\]
where $F$ is a $2$-face, and $\nu$ is the outward unit normal vector to one of the two $3$-faces $E$ adjacent to $F$.
\end{itemize}
\begin{lemma}
\label{lem:nsrt}
Let $X\subset{\mathbb R}^4$ be a symplectic polytope, let $\epsilon>0$, and let $\gamma:[a,b]\to Y_\epsilon$ be a Reeb trajectory. Then there exist a nonnegative integer $k$ and real numbers $a\le t_1 < t_2 < \cdots < t_k \le b$ with the following properties:
\begin{itemize}
\item[\emph{(a)}]
$\gamma(t_i)\in\Sigma$ for each $i$.
\item[\emph{(b)}]
For each $i=0,\ldots,k$, one of the following possibilities holds:
\begin{itemize}
\item[\emph{(i)}] $\gamma$ maps $(t_i,t_{i+1})$ to $Y_\epsilon\setminus\Sigma$. (Here we interpret $t_0=a$ and $t_{k+1}=b$.)
\item[\emph{(ii)}] $\gamma$ maps $(t_i,t_{i+1})$ to a Reeb trajectory in a component of $\Sigma_1$. (Each component of $\Sigma_1$ contains at most one Reeb trajectory of positive length.)
\item[\emph{(iii)}] $\gamma$ maps $(t_i,t_{i+1})$ to a Reeb trajectory in a component of $\Sigma_2$. (This can only happen when the corresponding $2$-face $F$ is complex linear, and in this case the component of $\Sigma_2$ is foliated by Reeb trajectories.)
\end{itemize}
\end{itemize}
\end{lemma}
\begin{proof}
We need to show that a Reeb trajectory intersects $\Sigma$ in isolated points, or in Reeb trajectories of the types described in (ii) and (iii).
We have seen in \S\ref{sec:Rfssp} that the Reeb vector field is transverse to all of $\Sigma_3$. Thus the Reeb trajectory $\gamma$ intersects $\Sigma_3$ only in isolated points.
Next let us consider the Reeb vector field on a component of $\Sigma_2$ of the form \eqref{eqn:Sigma2}. As in \S\ref{sec:Rfssp}, let $E_1$ and $E_2$ denote the $3$-faces adjacent to $F$, with outward unit normal vectors $\nu_1$ and $\nu_2$ respectively. The smoothing $F_\epsilon$ is parametrized by \eqref{eqn:smoothed2face}. This parametrization extends by the same formula to a parametrization of $\overline{F_\epsilon}$ by $\overline{F}\times [\theta_1,\theta_2]$. The latter parametrization includes the component \eqref{eqn:Sigma2} of $\Sigma_2$ as the restriction to $L\times [\theta_1,\theta_2]$. By equation \eqref{eqn:Reebsmoothed}, at the point corresponding to $(y,\theta)$ in \eqref{eqn:smoothed2face}, the Reeb vector is given by
\begin{equation}
\label{eqn:RSigma2}
R = \frac{2}{\langle (\cos\theta)v+(\sin\theta)w,y\rangle + \epsilon}{\mathbf i}((\cos\theta) v + (\sin\theta) w).
\end{equation}
This vector is tangent to the component \eqref{eqn:Sigma2} if and only if the orthogonal projection of ${\mathbf i}((\cos\theta)v + (\sin\theta)w)$ to $F$ is parallel to $L$.
If the projections of ${\mathbf i}v$ and ${\mathbf i}w$ to $F$ are not parallel, then this tangency will only happen for isolated values of $\theta$, and since the Reeb vector field on $\overline{F_\epsilon}$ always has a positive $\partial_\theta$ component, a Reeb trajectory will only intersect the component \eqref{eqn:Sigma2} in isolated points.
If on the other hand the projections of ${\mathbf i}v$ and ${\mathbf i}w$ to $F$ are parallel, then there is a nontrivial linear combination of ${\mathbf i}v$ and ${\mathbf i}w$ whose projection to $F$ is zero. This means that there is a nonzero vector $\nu$ perpendicular to $F$ such that ${\mathbf i}\nu$ is also perpendicular to $F$. Hence $F^\perp$ is complex linear, and thus $F$ is also complex linear. Then ${\mathbf i}v$ and ${\mathbf i}w$ are both perpendicular to $F$, so in the parametrization \eqref{eqn:smoothed2face}, the Reeb vector field \eqref{eqn:RSigma2} is just a positive multiple of $\partial_\theta$.
The conclusion is that a Reeb trajectory will intersect each component \eqref{eqn:Sigma2} of $\Sigma_2$ either in isolated points, or (when $F$ is complex linear) in Reeb trajectories which, in the parametrization \eqref{eqn:smoothed2face}, start on $L\times \{\theta_1\}$ and end on $L\times\{\theta_2\}$, keeping the $L$ component constant.
Finally we consider the Reeb vector field on a component \eqref{eqn:Sigma1} of $\Sigma_1$. The set of vectors $v$ that arise in \eqref{eqn:Sigma1} is a domain $D$ in the intersection of the sphere $|v|=\epsilon$ with the hyperplane $L^\perp$. As we have seen at the end of \S\ref{sec:Rfssp}, the Reeb vector field on $Y_\epsilon$ at a point in \eqref{eqn:Sigma1} agrees, up to scaling, with the standard Reeb vector field on the sphere $|v|=\epsilon$, whose Reeb orbits are Hopf circles. There is a unique Hopf circle $C$ contained entirely in $L^\perp$. All other Hopf circles intersect $L^\perp$ transversely. Thus any Reeb trajectory in $Y_\epsilon$ intersects the component \eqref{eqn:Sigma1} in isolated points and/or the arc corresponding to $C\cap D$, if the latter intersection is nonempty.
\end{proof}
\subsection{Rotation number of Reeb trajectories}
\label{sec:srn}
Suppose $\gamma:[a,b]\to Y_\epsilon$ is a Reeb trajectory. Let $D\subset Y_\epsilon$ be a disk through $\gamma(a)$ transverse to $\gamma$, and let $D'\subset Y_\epsilon$ be a disk through $\gamma(b)$ transverse to $\gamma$. We can identify $D$ with a neighborhood of $0$ in $\xi_{\gamma(a)}$, and $D'$ with a neighborhood of $0$ in $\xi_{\gamma(b)}$, via orthogonal projection in ${\mathbb R}^4$. If $D$ is small enough, then there is a well-defined continuous map $\phi:D\to D'$ with $\phi(\gamma(a))=\gamma(b)$, such that for each $x\in D$, there is a unique Reeb trajectory near $\gamma$ starting at $x$ and ending at $\phi(x)$.
\begin{lemma}
\label{lem:plrm}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$, let $\epsilon>0$, and let $\gamma:[a,b]\to Y_\epsilon$ be a Reeb trajectory. Then there is a unique (independent of the choice of $D$ and $D'$) homeomorphism
\[
P_\gamma:\xi_{\gamma(a)} \longrightarrow \xi_{\gamma(b)}
\]
such that:
\begin{itemize}
\item[\emph{(a)}]
\begin{equation}
\label{eqn:uniqueP}
\lim_{x\to 0}\frac{\phi(x)-P_\gamma(x)}{\|x\|}=0.
\end{equation}
\item[\emph{(b)}] $P_\gamma$ is linear along rays, i.e. if $x\in \xi_{\gamma(a)}$ and $c>0$ then $P_\gamma(cx) = cP_\gamma(x)$.
\end{itemize}
This map $P_\gamma$ has the following additional properties:
\begin{itemize}
\item[\emph{(c)}]
If $\gamma$ does not include any arcs as in Lemma~\ref{lem:nsrt}(ii)-(iii), and in particular if $\gamma$ does not intersect any smoothed $0$-face or smoothed $1$-face, then $P_\gamma$ is linear.
\item[\emph{(d)}]
For $t\in(a,b)$ we have the composition property
\[
P_\gamma = P_{\gamma|_{[t,b]}} \circ P_{\gamma|_{[a,t]}}.
\]
\item[\emph{(e)}]
For $t\in [a,b]$, the homeomorphism ${\mathbb R}^2\to{\mathbb R}^2$ given by the composition
\[
{\mathbb R}^2 \stackrel{\tau^{-1}}{\longrightarrow} \xi_{\gamma(a)} \stackrel{P_{\gamma|_{[a,b]}}}{\longrightarrow} \xi_{\gamma(t)} \stackrel{\tau}{\longrightarrow} {\mathbb R}^2
\]
is a continuous, piecewise smooth function of $t$.
\end{itemize}
\end{lemma}
\begin{proof}
Uniqueness of the homeomorphism $P_\gamma$ follows from properties (a) and (b). Independence of the choice of $D$ and $D'$ follows from properties (a) and (b) together with continuity of the Reeb vector field. Assuming existence of the homeomorphism $P_\gamma$, the composition property (d) follows from uniqueness.
We now need to prove existence of the homeomorphism satisfying properties (a), (b), (c), and (e). Let $a\le t_1<t_2<\cdots <t_k\le b$ be the subdivision of the interval $[a,b]$ given by Lemma~\ref{lem:nsrt}. For $i=0,\ldots,k$, let $\gamma_i$ denote the restriction of $\gamma$ to $[t_i,t_{i+1}]$, where we interpret $t_0=a$ and $t_{k+1}=b$. It is enough to prove existence of a homeomorphism
\[
P_{\gamma_i}: \xi_{\gamma(t_i)} \longrightarrow \xi_{\gamma(t_{i+1})}
\]
with the required properties for each $i$. The desired homeomorphism $P_\gamma$ is then given by the composition $P_{\gamma_k}\circ\cdots\circ P_{\gamma_0}$.
For case (i) in Lemma~\ref{lem:nsrt}, a homeomorphism $P_{\gamma_i}$ with properties (a), (b), and (e) is given by the usual linearized return map on the smooth hypersurface $Y_\epsilon\setminus\Sigma$ from $t_i+\delta$ to $t_{i+1}-\delta$, in the limit as $\delta\to 0$. Since $P_{\gamma_i}$ is linear, we also obtain property (c).
For case (ii) or (iii) in Lemma~\ref{lem:nsrt}, the existence of $P_{\gamma_i}$ with the desired properties follows from the fact that $\gamma_i$ is on a smooth hypersurface separating two regions of $Y_\epsilon$, on each of which the Reeb vector field is $C^\infty$.
\end{proof}
\begin{remark}
\label{rem:avf}
In case (ii) or (iii) above, the description of the Reeb flow in \S\ref{sec:Rfssp} allows us to write down the map $P_{\gamma_i}$ quite explicitly. Namely, for a suitable trivialization, $P_{\gamma_i}$ is given by the flow for some positive time of a continuous, piecewise smooth vector field $V$ on ${\mathbb R}^2$, which is the derivative of a shear on one half of ${\mathbb R}^2$, and which is the derivative of a rotation or the identity on the other half of ${\mathbb R}^2$. For case (ii), the vector field has the form
\begin{equation}
\label{eqn:avf2}
V(x,y) = \left\{\begin{array}{cl} -y\partial_x, & x\ge 0,\\ x\partial_y - y\partial_x, & x\le 0. \end{array}
\right.
\end{equation}
For case (iii), the vector field has the form
\begin{equation}
\label{eqn:avf3}
V(x,y) = \left\{\begin{array}{cl} x\partial_y, & x\ge 0,\\ 0, & x\le 0. \end{array}\right.
\end{equation}
\end{remark}
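One can check numerically that these piecewise flows rotate rays weakly monotonically: along the flow of \eqref{eqn:avf2}, the polar angle satisfies $\dot\theta = y^2/r^2\ge 0$ on $x\ge 0$ and $\dot\theta=1$ on $x\le 0$. The following Python sketch, with a hand-rolled RK4 integrator (illustrative only, not part of the argument), confirms the monotonicity along a sample trajectory:

```python
import numpy as np

def V(p):
    """The piecewise vector field of equation (avf2): the derivative of a
    shear on x >= 0, and of a rotation on x <= 0 (continuous at x = 0)."""
    x, y = p
    if x >= 0:
        return np.array([-y, 0.0])
    return np.array([-y, x])

def rk4_step(f, p, h):
    k1 = f(p); k2 = f(p + 0.5*h*k1); k3 = f(p + 0.5*h*k2); k4 = f(p + h*k3)
    return p + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

# Integrate one trajectory and record the unwrapped polar angle.
p = np.array([1.0, 0.5])
angles = [np.arctan2(p[1], p[0])]
for _ in range(4000):
    p = rk4_step(V, p, 0.005)
    angles.append(np.arctan2(p[1], p[0]))
theta = np.unwrap(angles)
# The angle never decreases (up to numerical tolerance), consistent with
# the induced map on rays having nonnegative rotation.
assert np.all(np.diff(theta) >= -1e-6)
assert theta[-1] > theta[0]
```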
Since the map $P_\gamma:\xi_{\gamma(a)}\to\xi_{\gamma(b)}$ sends rays to rays, it induces a well-defined map ${\mathbb P}\xi_{\gamma(a)}\to{\mathbb P}\xi_{\gamma(b)}$. It follows from Lemma~\ref{lem:plrm}(c),(d) and equations \eqref{eqn:avf2} and \eqref{eqn:avf3} that the latter map is $C^1$. Similarly to \eqref{eqn:lrf}, we obtain a $C^1$ diffeomorphism of $S^1$ given by the composition
\[
S^1\stackrel{\tau^{-1}}{\longrightarrow} {\mathbb P}\xi_{\gamma(a)} \stackrel{P_\gamma}{\longrightarrow} {\mathbb P}\xi_{\gamma(b)} \stackrel{\tau}{\longrightarrow} S^1.
\]
Stealing the notation from Definition~\ref{def:linearized}, let us denote this map by $\phi(y,t)$ where $y=\gamma(a)$ and $t=b-a$. By analogy with \eqref{eqn:llrf}, we define
\[
\widetilde{\phi}(y,t) = \{\phi(y,s)\}_{s\in[0,t]}\in\widetilde{\operatorname{Diff}}(S^1).
\]
This then has a well-defined rotation number, see Appendix A, which we denote by
\[
\rho(\gamma) = \rho(\widetilde{\phi}(y,t))\in{\mathbb R}.
\]
\subsection{Lower bounds on the rotation number}
\label{sec:rnlb}
We now prove the following lower bound on the rotation number.
\begin{lemma}
\label{lem:rnlb1}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Then there exists a constant $C>0$, depending only on $X$, such that if $\epsilon>0$ is small, then the following holds. Let $\gamma:[a,b]\to Y_\epsilon$ be a Reeb trajectory, and assume that if $t\in(a,b)$ and $E$ is a $3$-face then $\gamma(t)\notin E_\epsilon$. Then
\[
\rho(\gamma)\ge C\epsilon^{-1}(b-a).
\]
\end{lemma}
\begin{proof}
Define a function
\[
r^{\min}_\epsilon:Y_\epsilon\longrightarrow{\mathbb R}
\]
as follows. A point of $Y_\epsilon$ can be uniquely written as $y+\epsilon v$ where $y\in \partial X$ and $v$ is a unit vector in $N_y^+X$. Then define
\begin{equation}
\label{eqn:remin}
r^{\min}_\epsilon(y+\epsilon v) = \min_{\theta\in{\mathbb R}/2\pi{\mathbb Z}}\frac{1}{\pi(\langle v,y\rangle + \epsilon)}(S({\mathbf i}v) + S(\cos(\theta){\mathbf j}v + \sin(\theta){\mathbf k}v)).
\end{equation}
Here $S:TY_\epsilon\to{\mathbb R}$ is the single-argument version of the second fundamental form, which is well-defined, even though along the non-smooth strata of $Y_\epsilon$ there is no corresponding bilinear form.
More explicitly, $T_{y+\epsilon v}Y_\epsilon$, regarded as a subspace of ${\mathbb R}^4$, does not depend on $\epsilon$. A tangent vector $V\in T_{y+\epsilon v}Y_\epsilon$ can be uniquely decomposed as
\begin{equation}
\label{eqn:vtn}
V = V_T + V_N
\end{equation}
where $V_T\in T_y\partial X$ is tangent to a face $F$ such that $y\in\overline{F}$ and $v\in N_F^+X$, and $V_N\in T_vN_y^+X$ is perpendicular to $v$. We then have
\begin{equation}
\label{eqn:svepsilon}
S(V) = \epsilon^{-1}|V_N|^2.
\end{equation}
Lemma~\ref{lem:minrot} and Proposition~\ref{prop:uj} carry over to the present situation to show that
\begin{equation}
\label{eqn:cops}
\rho(\gamma) \ge \int_a^b r_\epsilon^{\min}(\gamma(s))ds.
\end{equation}
In \eqref{eqn:remin}, by compactness, there is a uniform upper bound on $\langle v,y\rangle$ for $y\in\partial X$ and $v\in N_y^+X$ a unit vector. Thus by \eqref{eqn:svepsilon} and \eqref{eqn:cops}, to complete the proof of the lemma, it is enough to show that there is a constant $C>0$ such that
\begin{equation}
\label{eqn:annest}
\left|({\mathbf i}v)_N\right|^2 + \left|(\cos(\theta){\mathbf j}v + \sin(\theta){\mathbf k}v)_N\right|^2 \ge C
\end{equation}
whenever $y\in\partial X$, $v\in N_y^+X$ is a unit vector, $\theta\in{\mathbb R}/2\pi{\mathbb Z}$, and $y+\epsilon v$ is not in the closure of $E_\epsilon$ where $E$ is a $3$-face. To prove this, it is enough to show that for each $k$-face $F$ with $k<3$, there is a uniform positive lower bound on the left hand side of \eqref{eqn:annest} for all $y\in F$, all unit vectors $v$ in $N_F^+X$ that are not normal to a $3$-face adjacent to $F$, and all $\theta$.
If $k=2$, then we have a positive lower bound on $|({\mathbf i}v)_N|^2$ by the discussion of smoothed $2$-faces in \S\ref{sec:Rfssp}.
If $k=1$, denote the $1$-face $F$ by $L$. If $v$ is on the boundary of $N_L^+X$, then we have a positive lower bound on $|({\mathbf i}v)_N|^2$ as in the case $k=2$ above. Suppose now that $v$ is in the interior of $N_L^+X$. We have a positive lower bound on $|({\mathbf i}v)_N|^2$ when ${\mathbf i}v$ is away from the Reeb cone of $L$. This is sufficient when $L$ is a good $1$-face. If $L$ is a bad $1$-face, then we have to consider the case where ${\mathbf i}v$ is on or near the Reeb cone $R_L^+X$. If ${\mathbf i}v$ is in the Reeb cone, then all vectors $V\in T_{y+\epsilon v}Y_\epsilon$ that are not in the real span of the Reeb cone $R_L^+X$ have $V_N\neq 0$. Since the vectors $\cos(\theta){\mathbf j}v + \sin(\theta){\mathbf k}v$ are all unit length and orthogonal to ${\mathbf i}v$, we get a positive lower bound on $\left|(\cos(\theta){\mathbf j}v + \sin(\theta){\mathbf k}v)_N\right|^2$ for all $\theta$ when ${\mathbf i}v$ is on or near the Reeb cone.
Suppose now that $k=0$. If $v$ is on the boundary of $N_F^+X$, then the desired lower bound follows as in the cases $k=1$ and $k=2$ above. If $v$ is in the interior of $N_F^+X$, then we have $|({\mathbf i}v)_N|^2=1$.
\end{proof}
We now deduce a related rotation number bound. Let $\gamma:[a,b]\to Y_\epsilon$ be a Reeb trajectory. By Lemma~\ref{lem:bub}, we can write
\[
\gamma(t) = y(t) + \epsilon v(t)
\]
where $y(t)\in \partial X$ and $v(t)$ is a unit vector in $N_{y(t)}^+X$ for each $t$.
\begin{lemma}
\label{lem:rnlb2}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$. Then there exists a constant $C>0$, depending only on $X$, such that if $\epsilon>0$ is small and $\gamma:[a,b]\to Y_\epsilon$ is a Reeb trajectory as above, then
\[
\rho(\gamma) \ge C \int_a^b|v'(s)|ds.
\]
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:rnlb1}, it is enough to show that there is a constant $C$ such that
\[
|v'(s)|\le C\epsilon^{-1}.
\]
To prove this last statement, observe that by equation \eqref{eqn:Reebsmoothed}, in the notation \eqref{eqn:vtn} we have
\[
v'(s) = \frac{2\epsilon^{-1}}{\langle v(s),y(s)\rangle + \epsilon}({\mathbf i}v(s))_N.
\]
Thus
\[
|v'(s)|
\le \frac{2\epsilon^{-1}}{\langle v(s),y(s)\rangle + \epsilon}.
\]
If $y\in\partial X$ and $v\in N_y^+X$ is a unit vector, then $\langle v,y\rangle >0$ because $X$ is convex and $0\in\operatorname{int}(X)$. By compactness, there is then a uniform lower bound on $\langle v,y\rangle$ for all such pairs $(y,v)$.
\end{proof}
\section{The smooth-combinatorial correspondence}
\label{sec:correspondence}
We now prove Theorems~\ref{thm:combtosmooth} and \ref{thm:smoothtocomb}.
\subsection{From combinatorial to smooth Reeb orbits}
\label{sec:combtosmooth}
We first prove Theorem~\ref{thm:combtosmooth}. In fact we will prove a slightly more precise statement in Lemma~\ref{lem:combtosmooth} below.
Let $X$ be a symplectic polytope in ${\mathbb R}^4$ and let $\gamma=(L_1,\ldots,L_k)$ be a Type 1 combinatorial Reeb orbit. This means that there are $3$-faces $E_1,\ldots,E_k$ and $2$-faces $F_1,\ldots,F_k$ such that $F_i$ is adjacent to $E_{i-1}$ and $E_i$, and $L_i$ is an oriented line segment in $E_i$ from a point in $F_i$ to a point in $F_{i+1}$ which is parallel to the Reeb vector field on $E_i$. Here the subscripts $i-1$ and $i+1$ are understood to be mod $k$. Below we will regard $\gamma$ as a piecewise smooth parametrized loop $\gamma:{\mathbb R}/T{\mathbb Z}\to X$, where $T=\mc{A}_{\operatorname{comb}}(\gamma)$, which traverses the successive line segments $L_i$ as Reeb trajectories.
\begin{lemma}
\label{lem:combtosmooth}
Let $X$ be a symplectic polytope in ${\mathbb R}^4$, and let $\gamma=(L_1,\ldots,L_k)$ be a nondegenerate Type 1 combinatorial Reeb orbit. Then there exists $\delta>0$ such that for all $\epsilon>0$ sufficiently small:
\begin{itemize}
\item[\emph{(a)}]
There is a unique Reeb orbit $\gamma_\epsilon$ on the smoothed boundary $Y_\epsilon$ such that
\[
|\gamma_\epsilon - \gamma|_{C^0} < \delta.
\]
\item[\emph{(b)}]
$\gamma_\epsilon$ converges in $C^0$ to $\gamma$ as $\epsilon\to 0$.
\item[\emph{(c)}]
$\gamma_\epsilon$ does not intersect $F_\epsilon$ where $F$ is a $0$-face or $1$-face.
\item[\emph{(d)}]
$\gamma_\epsilon$ is linearizable, i.e. has a well-defined linearized return map.
\item[\emph{(e)}]
$\mc{A}(\gamma_\epsilon) - \mc{A}_{\operatorname{comb}}(\gamma) = O(\epsilon)$.
\item[\emph{(f)}]
$\gamma_\epsilon$ is nondegenerate, $\rho(\gamma_\epsilon)=\rho_{\operatorname{comb}}(\gamma)$, and $\operatorname{CZ}(\gamma_\epsilon)=\operatorname{CZ}_{\operatorname{comb}}(\gamma)$.
\end{itemize}
\end{lemma}
\begin{proof}
{\em Setup.\/} For $i=1,\ldots,k$, let $p_i$ denote the initial point of the segment $L_i$. Using the notation $E_i$, $F_i$ above, let $D_i$ denote the set of points $y\in F_i$ such that Reeb flow along $E_i$ starting at $y$ reaches a point in $F_{i+1}$, which we denote by $\phi_i(y)$. Thus we have a well-defined affine linear map
\[
\phi_i:D_i\longrightarrow F_{i+1},
\]
and by definition $\phi_i(p_i) = p_{i+1}$. In particular, the composition
\[
\phi_k\circ\cdots\circ \phi_1: F_1 \longrightarrow F_1
\]
is an affine linear map defined in a neighborhood of $p_1$ sending $p_1$ to itself. For $V\in TF_1$ small, this composition sends
\[
p_1 + V \longmapsto p_1 + AV,
\]
where $A$ is a linear map $TF_1\to TF_1$. Since the combinatorial Reeb orbit $\gamma$ is assumed nondegenerate, the linear map $A$ does not have $1$ as an eigenvalue.
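The fixed-point argument used repeatedly below can be illustrated by a small numerical sketch (the matrix $A$ and the perturbation $W$ are arbitrary stand-ins, not computed from any actual polytope): when $A-1$ is invertible, the affine map $V\mapsto AV+W$ has the unique fixed point $V=(1-A)^{-1}W$, whose size is $O(|W|)$.

```python
import numpy as np

# Hypothetical linearized return map A on TF_1; any matrix without
# eigenvalue 1 works.  W plays the role of the O(epsilon) perturbation.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # eigenvalues (3 +/- sqrt(5))/2, neither equal to 1
W = np.array([0.05, -0.02])

# Unique fixed point of V -> A V + W: solve (I - A) V = W.
V = np.linalg.solve(np.eye(2) - A, W)

assert np.allclose(A @ V + W, V)                    # V is indeed fixed
assert np.linalg.norm(V) < 10 * np.linalg.norm(W)   # |V| = O(|W|)
print(V)
```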
By Lemma~\ref{lem:stf}(a), the Reeb flow along the smoothed $2$-face $(F_i)_{\epsilon}$ induces a well-defined map
\begin{equation}
\label{eqn:stf}
\phi_{F_i,\epsilon}: U_{F_i,\epsilon} \longrightarrow F_i
\end{equation}
which is translation by a vector $V_{F_i,\epsilon}$.
{\em Proof of (a).\/} If $\epsilon>0$ is sufficiently small, then $p_i$ is in the domain $U_{F_i,\epsilon}$ for each $i$, and Reeb orbits on $Y_\epsilon$ that are $C^0$ close to $\gamma$ correspond to fixed points of the composition
\begin{equation}
\label{eqn:2kcomp}
\phi_{F_1,\epsilon} \circ \phi_k \circ \cdots \circ \phi_2 \circ \phi_{F_2,\epsilon} \circ \phi_1 : F_1 \longrightarrow F_1.
\end{equation}
It follows from the above that for $V\in TF_1$ small, the composition \eqref{eqn:2kcomp} sends
\begin{equation}
\label{eqn:p1V}
p_1 + V \longmapsto p_1 + AV + W_\epsilon
\end{equation}
where $W_\epsilon\in TF_1$ has length $O(\epsilon)$. Since the linear map $A-1$ is invertible, the affine linear map \eqref{eqn:p1V} has a unique fixed point $p_1+V$ for some $V\in TF_1$. If $\epsilon$ is sufficiently small, this fixed point will also be in the domain of the composition \eqref{eqn:2kcomp}, and thus will correspond to the desired Reeb orbit $\gamma_\epsilon$.
{\em Proof of (b).\/} This holds because for the above fixed point, $V$ has length $O(\epsilon)$.
{\em Proof of (c).\/} The Reeb orbit $\gamma_\epsilon$ does not intersect $F_\epsilon$ where $F$ is a $0$-face or $1$-face, by the definition of the domain of the map \eqref{eqn:stf}.
{\em Proof of (d).\/} This follows from Lemma~\ref{lem:plrm}(c).
{\em Proof of (e).\/} The symplectic action of the Reeb orbit $\gamma_\epsilon$ is the sum of its flow times over the smoothed $2$-faces $(F_i)_\epsilon$, plus the sum of its flow times over the smoothed $3$-faces $(E_i)_\epsilon$. The former sum is $O(\epsilon)$ as explained in the proof of Lemma~\ref{lem:stf}(b). The latter sum is $(1+O(\epsilon))$ times the sum of the corresponding flow times over the $3$-faces $E_i$, and this last sum differs from $\mc{A}_{\operatorname{comb}}(\gamma)$ by $O(\epsilon)$, because the fixed point of \eqref{eqn:p1V} has distance $O(\epsilon)$ from $p_1$.
{\em Proof of (f).\/} Let $T_\epsilon$ denote the period of $\gamma_\epsilon$, and let $y_\epsilon$ be a point on the image of $\gamma_\epsilon$ in $E_k$. If $F$ is a $2$-face, let $\widetilde{\psi}_F\in\widetilde{\operatorname{Sp}}(2)$ denote the lift of the transition matrix $\psi_F$ in Definition~\ref{def:transitionmatrix} with rotation number in the interval $(0,1/2)$. By Lemmas~\ref{lem:smoothed3face}(b) and \ref{lem:stf}(c), the lifted return map $\widetilde{\phi}(y_\epsilon,T_\epsilon)$ is given by
\begin{equation}
\label{eqn:liftedreturnmap}
\widetilde{\phi}(y_\epsilon,T_\epsilon) = \widetilde{\psi}_{F_k}\circ \cdots \circ \widetilde{\psi}_{F_1}.
\end{equation}
Nondegeneracy of the combinatorial Reeb orbit $\gamma$ means that the projection
\[
\phi(y_\epsilon,T_\epsilon) = \psi_{F_k}\circ \cdots \circ \psi_{F_1} \in \operatorname{Sp}(2)
\]
does not have $1$ as an eigenvalue, so $\gamma_\epsilon$ is nondegenerate. Moreover, it follows from \eqref{eqn:liftedreturnmap} and the definition of combinatorial rotation number in Definition~\ref{def:crn} that $\rho_{\operatorname{comb}}(\gamma) = \rho(\gamma_\epsilon)$. This implies that $\operatorname{CZ}_{\operatorname{comb}}(\gamma) = \operatorname{CZ}(\gamma_\epsilon)$.
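To illustrate this criterion with made-up data (the rotation angles below are arbitrary, not the actual transition matrices of Definition~\ref{def:transitionmatrix}), one can compose elements of $\operatorname{Sp}(2)=\operatorname{SL}(2,{\mathbb R})$ and test whether $1$ is an eigenvalue of the product:

```python
import numpy as np

def rot(theta):
    """A rotation in Sp(2) = SL(2,R); its rotation number is theta / (2 pi)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical transition matrices, each with rotation number in (0, 1/2).
psi_1 = rot(0.3 * np.pi)
psi_2 = rot(0.4 * np.pi)

# Return map psi_{F_2} o psi_{F_1}; rotations compose by adding angles.
phi = psi_2 @ psi_1            # equals rot(0.7 * pi)

# Nondegeneracy: 1 is not an eigenvalue of the return map.
nondegenerate = not np.any(np.isclose(np.linalg.eigvals(phi), 1.0))
print(nondegenerate)           # → True
```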
\end{proof}
\subsection{From smooth to combinatorial Reeb orbits}
\label{sec:smoothtocomb}
\begin{proof}[Proof of Theorem~\ref{thm:smoothtocomb}.]
We proceed in four steps.
{\em Step 1.\/}
We claim that for each $i$, the Reeb orbit $\gamma_i$ can be expressed as a concatenation of a finite number, $k_i$, of arcs such that:
\begin{itemize}
\item[(a)]
Each endpoint of an arc maps to the boundary of $E_{\epsilon_i}$ where $E$ is a $3$-face.
\item[(b)]
For each arc, either:
\begin{itemize}
\item[(i)] There is a $3$-face $E$ such that the interior of the arc maps to $E_{\epsilon_i}$, or
\item[(ii)] No point in the interior of the arc maps to $E_{\epsilon_i}$ where $E$ is a $3$-face.
\end{itemize}
\end{itemize}
The above decomposition follows from parts (a) and (b)(i) of Lemma~\ref{lem:nsrt}, because the boundary of $E_{\epsilon_i}$ where $E$ is a $3$-face is contained in the singular set $\Sigma$. (Note that the decomposition into arcs in Lemma~\ref{lem:nsrt} is a subdivision of the above decomposition into arcs. Moreover, if $k_i>1$, then $k_i$ is even and the arcs alternate between types (i) and (ii).)
{\em Step 2.\/}
We claim now that there is a constant $C>0$, not depending on $i$, such that the following holds: if $\gamma:[a,b]\to Y_{\epsilon_i}$ is an arc of type (ii) above, and we write $\gamma(t)=y(t)+\epsilon_iv(t)$ with $y(t)\in\partial X$ and $v(t)\in N_{y(t)}^+X$ a unit vector, then
\begin{equation}
\label{eqn:annint}
\int_a^b|v'(s)|\,ds\ge C.
\end{equation}
To see this, note that by (a) above, there are 3-faces $E$ and $E'$ such that $\gamma(a)\in\overline{E_{\epsilon_i}}$ and $\gamma(b)\in\overline{E'_{\epsilon_i}}$. Then $v(a)=\nu_E$, where $\nu_E$ denotes the outward unit normal vector to $E$, and likewise $v(b)=\nu_{E'}$. If $E\neq E'$, then the integral in \eqref{eqn:annint} is bounded from below by the distance in $S^3$ between $\nu_E$ and $\nu_{E'}$, and this distance has a uniform positive lower bound because $X$ has only finitely many $3$-faces, each with distinct outward unit normal vectors.
We now consider the case where $E=E'$. The proof of Lemma~\ref{lem:rnlb2} shows that there is a neighborhood $U$ of $\nu_E$ in $S^3$, and a constant $C>0$, such that for any point $y+\epsilon_i v\in Y_{\epsilon_i}\setminus E_{\epsilon_i}$ with $v\in U$, with respect to the decomposition \eqref{eqn:vtn}, we have $|({\mathbf i}v)_N|^2\ge C$. By shrinking the neighborhood $U$, we can replace this last inequality with $\langle ({\mathbf i}v)_N,\nu_E\rangle > 0$. Since $v'(t)$ is a positive multiple of $({\mathbf i}v(t))_N$, it follows that the path $[a,b]\to S^3$ sending $t\mapsto v(t)$ must initially exit the neighborhood $U$ before returning to $\nu_E$. So in this case, we can take the constant $C$ in \eqref{eqn:annint} to be twice the distance in $S^3$ from $\nu_E$ to $\partial U$.
{\em Step 3.\/} We now show that we can pass to a subsequence so that the sequence of Reeb orbits $\gamma_i$ on $Y_{\epsilon_i}$ converges in $C^0$ to a Type 1 or Type 2 combinatorial Reeb orbit $\gamma$ for $X$.
By Lemma~\ref{lem:rnlb1} and our hypothesis that $\rho(\gamma_i)<R$, we must have $k_i>1$ when $i$ is sufficiently large. Then, by Lemma~\ref{lem:rnlb2} and Step 2, there is an $i$-independent upper bound on $k_i$. We can then pass to a subsequence such that $k_i$ is equal to an even constant $k$.
By compactness, we can pass to a further subsequence such that the endpoints of the $k$ arcs from Step 1 for $\gamma_i$ converge to $k$ points in the $2$-skeleton of $X$. By Lemma~\ref{lem:smoothed3face}, the $k/2$ arcs of type (i) converge to Reeb trajectories on $3$-faces of $X$. On the other hand, by Lemma~\ref{lem:rnlb1}, for each arc of type (ii), the length of its parametrizing interval converges to $0$. A compactness argument also shows that there is an upper bound on the length of the Reeb vector field on $Y_{\epsilon_i}$. It follows that each arc of type (ii) is converging in $C^0$ to a point. Then $\gamma_i$ converges in $C^0$ to a Type 1 or Type 2 combinatorial Reeb orbit consisting of the line segments on $3$-faces given by the limits of the $k/2$ arcs of type (i).
{\em Step 4.\/} To complete the proof, we now prove that the subsequence and limiting orbit constructed above satisfy all of the requirements (i)-(v) of the theorem.
We have proved assertions (i) and (iii). Assertion (ii) follows from the proof of Lemma~\ref{lem:combtosmooth}(e). Assertion (iv) follows from the proof of Lemma~\ref{lem:combtosmooth}(d),(f). Assertion (v) follows from Lemma~\ref{lem:rnlb2} and Step 2. (To get explicit constants $C_F$, one only needs to consider the case $E\neq E'$ in Step 2.)
\end{proof}
\section{Introduction}
The mechanism that leads to high-temperature superconductivity in
the cuprates remains an open question despite intense study for the
past two decades. Although the field has been challenged by many
high-quality data from different types of measurement, there is no
uniformly-accepted theoretical picture that can offer a unified and
consistent description for these data. It is often believed that
underlying physics can be understood in terms of individual
particles interacting through appropriately-chosen interactions.
However, even with greatly-simplified Hamiltonians, describing
collective motion in these strongly-correlated, many-electron
systems has had only limited success. This has led some authors
\cite{Anderson} to conclude that the many-body correlations in
cuprates are so strong that dynamics may no longer be described
meaningfully in terms of electrons and must be described instead in
terms of new effective building blocks that fractionalize spin and
charge.
Simplicity is a kind of beauty in physics. Even if one could solve
the problem with the help of large-scale numerical calculations,
such a practice may not be illuminating, because the physics can be
completely buried in the numerous configurations used in the
calculation. On the other hand, alternative approaches to many-body
problems have been proposed. One is the method of fermion dynamical
symmetries \cite{FDSM}. This approach is based on the fact that
collective motions in strongly correlated many-body systems are
often governed by only a few collective degrees of freedom, and a
quantum system exhibiting dynamical symmetries usually contains two
or more competing collective modes. Once these degrees of freedom
are identified and properly incorporated into a model, the problem
may be considerably simplified and, most importantly, the physics in
such approaches may become transparent.
This is the philosophy on which the SU(4) model of high-temperature
superconductivity \cite{Gu01,Lawu03,Wu04,Sun06,Sun07} is based.
For cuprate systems we have proposed that the most relevant
collective degrees of freedom are $d$-wave superconductivity (SC)
and antiferromagnetism (AF), and that coherent pairs (not individual
particles), formed from two electrons (or holes) centered on
adjacent lattice sites, are appropriate dynamical building blocks of
the wave function. The choice of this space, which is small in size
but rich in physics, corresponds to a physically-motivated
truncation of the huge Hilbert space corresponding to the original
problem.
It has been found \cite{Gu01} that these spin-singlet ($D$) and the
spin-triplet ($\pi$) pair operators, when supplemented with the
particle-hole type operators for staggered magnetization ($Q$), spin
($S$), and charge, constitute a 16-element operator set that is
closed under a U(4) $\supset$ U(1) $\times$ SU(4) algebra if the
$d$-wave formfactor $g(\bm k)$ in $D$ and $\pi$ pair operators is
replaced by sgn$[g(\bm k)]$. The U(1) factor corresponds to a charge
density wave that is independent of the SU(4) subspace because of
the direct product. This implies that in the minimal U(4) model
charge density waves do not influence the AF--SC competition in
lowest order and our discussions have been focused in the SU(4)
subspace with its coherent-state approximation \cite{Gu01,Lawu03}.
It has been further discovered \cite{Wu04} that the SU(4) symmetry
is a consequence of non-double-occupancy---the constraint that each
lattice site cannot have more than one valence electron. This
suggests a fundamental relationship between SU(4) symmetry and
Mott-insulator normal states at half-filling for cuprate
superconductors.
Thus, the SU(4) model has ingredients of competing AF and SC modes,
$d$-wave $D$ and $\pi$ pairs (entering as ``preformed pairs" that
are mixtures of the two kinds of paired states under the SU(4)
constraint \cite{Sun06}) in wave functions, and non-double-occupancy
imposed by the symmetry \cite{Wu04}. All of these appear to be
relevant for the physics of cuprate superconductors. For data that
do not resolve an explicit $k$ dependence, the coherent-state
solutions of the SU(4) model (with properly adjusted parameters for
effective interaction strengths) can consistently describe SC gaps,
pseudogaps (PG), and the corresponding transition temperatures
$T\tsub c$ and $T^*$ in cuprates, as demonstrated in Ref.
\cite{Sun07}.
However, there are experimental indications for explicit
$k$-dependence that are observed by experiments such as
angle-resolved photoemission spectroscopy (ARPES)
\cite{Norman98,Kanigel06}. These data probe electrons near the Fermi
surface having particular $k$ directions. To describe $k$-dependence
in energy gaps, we must extend our original SU(4) model by
displaying explicit $k$-dependence in the gap equations and their
corresponding solutions. This is the goal of the present paper.
The paper is organized as follows. In Sec.\ II, we outline the SU(4)
background by pointing out the assumptions made when the original
$k$-independent SU(4) model was constructed. Sections III and IV are
respectively devoted to presentation of the new $k$-dependent
SU(4)$_k$ model and the $k$-dependent gap equations obtained using
the generalized coherent-state method. We solve these gap equations
in Sec.\ V and give analytical solutions for the superconducting gap
and the pseudogap. Finally, we discuss some immediate consequences
of the SU(4)$_k$ model in Sec.\ VI, and a short summary is given in
Sec.\ VII.
\section{Dynamical symmetries and the original SU(4) model}
Interactions in dynamical symmetry theories are determined by
symmetry groups \cite{Gu01}. A general SU(4) Hamiltonian with
pairing and AF interactions can be written as \cite{spinNote}
\begin{equation}
H = H_0 - V_d - V_\pi - V_q,
\label{H}
\end{equation}
where $H_0$ is the single particle (s.p.) energy, and $V_d$,
$V_\pi$, and $V_q$ are the two-body spin-singlet pairing,
spin-triplet pairing, and AF interactions, respectively:
\begin{subequations}
\label{interaction}
\begin{eqnarray}
H_0 &=& \sum_{\bm k} \varepsilon_{\bm k} n_{\bm k}
\label{inter:h0} \\
V_d &=& \sum_{\bm k,\bm k'} G^0_{\bm k \bm k'} D^\dag({\bm
k})D({\bm k'})
\label{inter:vd} \\
V_\pi &=& \sum_{\bm k,\bm k'} G^1_{\bm k \bm k'} \vec
\pi^\dag({\bm k})\cdot\vec \pi({\bm k'})
\label{inter:vpi} \\
V_q &=& \sum_{\bm k,\bm k'} \chi^0_{\bm k \bm k'} \vec Q({\bm
k})\cdot \vec Q({\bm k'}). \label{inte:vq}
\end{eqnarray}
\end{subequations}
The operators appearing in Eqs.\ (\ref{interaction}) can be
expressed as
\begin{subequations}
\label{operator}
\begin{eqnarray}
D^\dag(\bm k) &=& g(\bm k) c_{\bm k\uparrow}^\dag c_{- \bm
k\downarrow}^\dag
\label{Dk}\\
\pi^\dag_{ij}(\bm k) &=& g(\bm k) c_{\bm k+\bm q,i}^\dag c_{\bm
k,j}^\dag
\label{pik}\\
Q_{ij}(\bm k) &=& c_{\bm k+\bm q,i}^\dag c_{\bm k,j}, \label{Qk}
\end{eqnarray}
\end{subequations}
where $\pi^\dag_{ij}(\bm k)$ and $Q_{ij}(\bm k)$ are, respectively,
tensor forms of $\vec \pi^\dag({\bm k})$ and $\vec Q({\bm k})$. In
Eqs.\ (\ref{operator}), $c_{\bm k,i}^\dagger$ creates an electron of
momentum $\bm k$ and spin projection $i,j= 1 {\rm\ or\ }2 \ (\equiv$
$\uparrow$ or $\downarrow)$, and $\bm q = (\pi,\pi,\pi)$ is an AF
ordering vector. The $d$-wave formfactor,
\begin{equation}
g(\bm k) = g(k_x,k_y) = \cos k_x-\cos k_y, \label{gk}
\end{equation}
appears in (\ref{Dk}) and (\ref{pik}) because of strong
experimental evidence that in cuprates the coherent pairs exhibit
$d$-wave orbital symmetry \cite{Sc95}. Energy gaps thus generally
are $k$-dependent
\begin{subequations}
\label{gap}
\begin{eqnarray}
\Delta_d(\bm k) &=& \sum_{\bm k'} G^0_{\bm k\bm k'} \left< D^\dag(\bm k)\right>\\
\Delta_\pi(\bm k) &=& \sum_{\bm k'} G^1_{\bm k\bm k'} \left< \pi^\dag_z(\bm k')\right>\\
\Delta_q(\bm k) &=& \sum_{\bm k'} \chi^0_{\bm k\bm k'} \left<
Q_z(\bm k')\right>.
\end{eqnarray}
\end{subequations}
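As a quick numerical illustration of the form factor (sample momenta chosen arbitrarily), $g(\bm k)$ vanishes along the zone diagonals $|k_x|=|k_y|$ and attains its maximal magnitude $2$ at the antinodal points $(\pi,0)$ and $(0,\pi)$:

```python
import numpy as np

def g(kx, ky):
    """d-wave form factor g(k) = cos kx - cos ky."""
    return np.cos(kx) - np.cos(ky)

# Nodes along the zone diagonals |kx| = |ky| ...
assert np.isclose(g(0.7, 0.7), 0.0)
assert np.isclose(g(-1.2, 1.2), 0.0)
# ... and extremal magnitude |g| = 2 at the antinodal points.
assert np.isclose(abs(g(np.pi, 0.0)), 2.0)
assert np.isclose(abs(g(0.0, np.pi)), 2.0)
print(abs(g(np.pi, 0.0)))      # → 2.0
```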
The discussion to this point is general and no approximations have
been made. In our original $k$-independent SU(4) model
\cite{Gu01,Lawu03,Wu04,Sun06,Sun07}, we have introduced
approximations through the following assumptions
\begin{subequations}
\label{approximation}
\begin{eqnarray}
g(\bm k) & \approx & \mbox{sgn} [g(\bm k)] \label{gk1}\\
\varepsilon_{\bm k} & \approx & \varepsilon \label{ek}\\
G^i_{\bm k\bm k'} & \approx & G^i \quad (i=0,1), \qquad \chi^0_{\bm
k\bm k'} \approx \chi^0 \label{ggc}.
\end{eqnarray}
\end{subequations}
Assumption (\ref{gk1}) removes the $k$-dependence from formfactors
in the pair operators, and assumptions (\ref{ek}) and (\ref{ggc}),
respectively, replace the s.p.\ energy and interaction strengths
with $k$-independent constants. These approximations thus lead to
$k$-independent gaps
\begin{equation}
\Delta_d = G^0 \left< D^\dag \right> \quad
\Delta_\pi = G^1 \left< \pi^\dag_z \right> \quad
\Delta_q = \chi^0 \left< Q_z \right>,
\label{gap1}
\end{equation}
which are expressed in terms of the collective operators
\begin{subequations}
\label{operator1}
\begin{eqnarray}
D^\dag &=& \sum_{\bm k} \mbox{sgn} [g(\bm k)] c_{\bm
k\uparrow}^\dag c_{-\bm k\downarrow}^\dag
\\
\pi^\dag_{ij} &=& \sum_{\bm k} \mbox{sgn} [g(\bm k)]
c_{\bm k+\bm q,i}^\dag c_{\bm k,j}^\dag
\\
Q_{ij} &=& \sum_{\bm k} c_{\bm k+\bm q,i}^\dag c_{\bm
k,j}.
\end{eqnarray}
\end{subequations}
The preceding equations constitute the basis for discussions in
Refs.\ \cite{Gu01,Lawu03,Wu04,Sun06,Sun07}, and all our previous
SU(4) results are obtained within this framework. As most cuprate
data presumably represent weighted averages over contributions of
different $k$ components, the original SU(4) scheme works well. In
Ref.\ \cite{Sun06}, we derived and solved $k$-independent (but
temperature and hole-doping dependent) SU(4) gap equations, and
used the results to construct generic gap and phase diagrams. We
compared the results with some representative cuprate data in
Ref.\ \cite{Sun07} and found that, for data that do not resolve an
explicit $k$ dependence, the coherent-state solutions of the
original SU(4) model can consistently describe SC gaps,
pseudogaps, and the corresponding transition temperatures $T\tsub
c$ and $T^*$.
\section{The $\bm k$-dependent SU(4)$_k$ model}
As we have noted, there is experimental evidence for explicit
$k$-dependence of certain observables in the cuprates. Although
interpretation of some of the results remains somewhat
controversial, their momentum-dependent nature is clear. One example
is the observation of Fermi arcs in angle-resolved photoemission
spectroscopy (ARPES) data \cite{Norman98,Kanigel06}: ARPES
measurements suggest that the Fermi surface is gapped out arcwise in
the pseudogap region below $T^*$, indicating clear anisotropy of the
PG in the $k$-space.
In order to describe momentum dependence of energy gaps, we must
extend our original SU(4) model \cite{Gu01,Lawu03,Wu04,Sun06,Sun07}
in a way that restores the $k$-dependence that is washed out by the
assumptions in Eqs.\ (\ref{approximation}), but preserves the SU(4)
symmetry. The replacement (\ref{gk1}) for the pair operators is a
necessary condition for preserving the SU(4) algebra \cite{Gu01},
which is required physically because it imposes the
non-double-occupancy condition \cite{Wu04}. Therefore, the only way
to restore $k$-dependence in energy gaps but keep the SU(4) symmetry
(and its associated non-double-occupancy constraint) is to modify
(\ref{ek}) and (\ref{ggc}) to allow the s.p.\ energy $\varepsilon$
and the interaction strengths to carry $k$-dependence. The s.p.\
energy term $\varepsilon$ is less important in this regard because
it does not contribute to energy gaps and transition temperature in
our formalism \cite{Sun06}. We may thus employ it in the most
general form $\varepsilon_{\bm k}$.
\begin{figure}
\includegraphics[height=.20\textheight]{fig1.eps}
\caption{Geometry and definitions in the $k_x$--$k_y$ plane,
where $\tilde k$ is hole momentum.}
\end{figure}
\begin{figure}
\includegraphics[height=.20\textheight]{fig2.eps}
\caption{The curve $|g(k)|$ and its maximum value $g_{0k}=1-\cos \tilde k$
for a given momentum $\bm k=(k_x,k_y)$, under the constraint (\ref{ktilde}).}
\end{figure}
Without loss of generality, $g(\bm k)$ can be written as the
product of the absolute value and a sign
\begin{equation}
g(\bm k)=|g(\bm k)| \times \mbox{sgn}[g(\bm k)]. \label{gk2}
\end{equation}
Therefore, the approximation (\ref{gk1}) implies that in our
original SU(4) model we have assumed that the magnitude $|g(\bm k)|$
is unity, regardless of $\bm k$. (Note that this is also the
condition used to close the algebra of the SO(5) model
\cite{Kohno98,Hen98}.) Instead of $|g(\bm k)|=1$, we now introduce
\begin{equation}
|g(\bm k)| \approx \sigma_{\bm k} = g_{0k} \delta(\theta_k),
\label{gk3}
\end{equation}
where $g_{0k}$ is the maximum value of $|g(\bm k)|$. In Figs.~1
and 2, we illustrate the associated geometry and definitions. In
our notation, $\bm k = (k_x,k_y)$ is the electron momentum under
the constraint
\begin{equation}
(\pi-k_x)^2 + (\pi-k_y)^2 =
\tilde k^2 ,
\label{ktilde}
\end{equation}
where $\tilde k$ is the hole momentum with $\theta_k$ its azimuthal
angle, as shown in Fig.~1. In Eq.~(\ref{gk3}), $\delta(\theta_k)$
takes a value of unity except in a narrow region around the nodal
points (corresponding to $|k_x|=|k_y|$ or $\theta_k = \pm\pi/4$ for
the first Brillouin zone; see Fig.~1), where it quickly diminishes
and vanishes exactly at the nodal points. A possible mathematical
expression could be of the Gaussian type
\begin{equation}
\delta(\theta_k)=1-e^{-\left({{\theta_k -\pi/4}\over
{\Delta\theta}}\right)^2} ~~~{\rm with}~~~ \Delta\theta\ll
\pi/4,\nonumber
\end{equation}
where $\Delta\theta$ measures the width of the Gaussian. For very small
$\Delta\theta$, the exponential term makes a negligible contribution
to the average, which ensures that the average of $\delta(\theta_k)$
is equal to 1. The $\delta(\theta_k)$ so defined vanishes exactly at
$\theta_k = \pm\pi/4$. Therefore, our pairing gaps have nodes at
$\theta_k = \pm\pi/4$, in agreement with experiment.
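A minimal numerical sketch of this cutoff (the value of $\Delta\theta$ below is an arbitrary choice satisfying $\Delta\theta\ll\pi/4$) confirms that $\delta(\theta_k)$ vanishes at the node while its average remains essentially 1:

```python
import numpy as np

# Hypothetical Gaussian width; the text only requires dtheta << pi/4.
DTHETA = 0.02

def delta(theta):
    """Cutoff factor: ~1 away from the node, exactly 0 at theta = pi/4."""
    return 1.0 - np.exp(-((theta - np.pi / 4) / DTHETA) ** 2)

assert np.isclose(delta(np.pi / 4), 0.0)   # node of the pairing gap
assert delta(0.0) > 1.0 - 1e-12            # essentially 1 away from the node

# Average over [0, pi/2] stays close to 1: the Gaussian dip is very narrow.
theta = np.linspace(0.0, np.pi / 2, 200001)
avg = delta(theta).mean()
assert avg > 0.97
print(avg)
```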
The behavior of $|g(\bm k)|$ is illustrated in Fig.\ 2. It is easy
to show that
\begin{equation}
g_{0k}=\left| 1-\cos \tilde k \right|.
\label{gzerok}
\end{equation}
Thus, for the first Brillouin zone $k_x$ and $k_y$ can take values
from zero to $\pi$, while $g_{0k}$ changes from 0 to 2. (The
assumption $|g(\bm k)|=1$ in the original SU(4) model is thus
equivalent to taking an average of $g_{0k}$ over $\bm k$.) Equation
(\ref{gk3}), with its explicit dependence on $\bm k$, clearly
improves on the original SU(4) model for observables having a
possible $k$ dependence.
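The relation $g_{0k}=\left|1-\cos\tilde k\right|$ can be verified numerically by maximizing $|g(\bm k)|$ over the circle defined by Eq.\ (\ref{ktilde}) (the sample value of $\tilde k$ is arbitrary):

```python
import numpy as np

ktil = 2.0                              # sample hole momentum, 0 < ktil <= pi
theta = np.linspace(0.0, 2.0 * np.pi, 200001)

# Points on the circle (pi - kx)^2 + (pi - ky)^2 = ktil^2.
kx = np.pi - ktil * np.cos(theta)
ky = np.pi - ktil * np.sin(theta)

# Numerical maximum of |g(k)| = |cos kx - cos ky| on the circle.
g0k = np.abs(np.cos(kx) - np.cos(ky)).max()
assert np.isclose(g0k, abs(1.0 - np.cos(ktil)), atol=1e-8)
print(g0k)
```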
With the new approximation (\ref{gk3}) for $|g(\bm k)|$, the
pairing interaction strengths in Eq.\ (\ref{interaction}) are
\begin{equation}
G^i_{\bm k\bm k'} = G^0_i\sigma_{\bm k}\sigma_{\bm k'} \quad
(i=0,1). \label{newg}
\end{equation}
The factor $\mbox{sgn}[g(\bm k)]$ in the product (\ref{gk2}) remains
unchanged in the pair operators, which ensures preservation of the
SU(4) symmetry.
The $k$-dependence of the AF interaction $\chi_{\bm k\bm k'}$
follows from the nature of exchange interactions. The corresponding
matrix elements are proportional to the wavefunction overlap between
the states, which are one-particle, one-hole states with momenta
$(\bm k+\bm q, \bm k)$. The $d$-wave symmetry in the pair structure
implies that the amplitude of a pair wavefunction with two electrons
having momenta $(\bm k, -\bm k)$ in the background mean-field of the
SU(4) collective subspace is proportional to $g(\bm k)$. This means
physically that the two electrons in a pair favor aligning their
momenta $\bm k$ along the Cu--O bond direction (maximum of $|g(\bm
k)|$) rather than along the diagonal to the Cu--O bonds (nodal
direction of $g(\bm k)$; see Fig.\ 1). Since the wavefunction for
momentum $\bm k+\bm q$ differs from that for $\bm k$ only by a sign
\cite{Gu01}, and a hole wavefunction is the conjugate of a particle
wavefunction, the amplitude of a particle--hole wavefunction with
momenta $(\bm k+\bm q, \bm k)$ should be very similar to that of a
pair and thus proportional to $g(\bm k)$. Therefore, we can write
\begin{equation}
\chi_{\bm k\bm k'} = \chi^0 g_{\bm k} g_{\bm k'}, \label{newchi}
\end{equation}
where $g_{\bm k}\equiv |g(\bm k)|$. Note that for the $g_{\bm k}$
factor in Eq.\ (\ref{newchi}), no approximation like the one in Eq.\
(\ref{gk3}) is necessary. Explicitly, we mean here that $g_{\bm k} =
|g(\bm k)| = |\cos k_x -\cos k_y|$.
Inserting Eqs.\ (\ref{newg}) and (\ref{newchi}) into
(\ref{interaction}), we can rewrite the Hamiltonian (\ref{H}) as
\begin{eqnarray}
H &=& \sum_{\bm k} \varepsilon_{\bm k} n_{\bm k} - G^0_0 \sum_{\bm
k,\bm k'} \sigma_{\bm k}\sigma_{\bm k'} D^\dag(\bm k)D(\bm k')
\nonumber\\
&&- G^0_1 \sum_{\bm k,\bm k'} \sigma_{\bm k}\sigma_{\bm k'} \vec
\pi^\dag(\bm k)\cdot\vec \pi(\bm k')
\nonumber\\
&&- \chi^0 \sum_{\bm k,\bm k'} g_{\bm k} g_{\bm k'} \vec Q(\bm
k)\cdot \vec Q(\bm k'). \label{Hk}
\end{eqnarray}
The $k$-dependent Hamiltonian (\ref{Hk}) possesses a $\sum_k
\otimes$ SU(4)$_k$ symmetry, with 15 $k$-dependent generators
\begin{eqnarray}
{\bf D}^\dag (\bm k) &=& D^\dag (\bm k) + D^\dag (-\bm k) \nonumber\\
\vec{\bm \pi}^\dag (\bm k) &=& \vec\pi^\dag (\bm k) + \vec\pi^\dag (-\bm k) \nonumber\\
\vec {\bf Q} (\bm k) &=& \vec Q (\bm k) + \vec Q (-\bm k) \nonumber\\
{\bf M} (\bm k) &=& M (\bm k) + M (-\bm k) \nonumber\\
\vec {\bf S} (\bm k) &=& \vec S (\bm k) + \vec S (-\bm k),
\nonumber
\end{eqnarray}
where $\bf M (\bm k)$ and $\vec {\bf S} (\bm k)$ are, respectively,
the charge and the spin operators. For each $\bm k$, the commutation
relation among generators, the structure of subgroup chains, and
their corresponding properties are analogous to those of the
original SU(4) group structure \cite{Gu01}. We term this
$k$-dependent extension \cite{Dukelsky07} of the original SU(4)
model the SU(4)$_k$ model.
\section{$\bm k$-dependent gap equations}
In the preceding section, we demonstrated that with a better
approximation to the absolute value of the formfactor $g(\bm k)$ it
is possible to introduce explicit $k$ dependence through a symmetry
structure that corresponds to a product of SU(4) groups, each
labeled by $k$. Therefore, the following discussions for gap
equations and their solutions for a given $k$ are rather similar to
those in Ref.\ \cite{Sun06} for the $k$-independent SU(4) model.
By analogy with discussions in Appendix B of Ref.\ \cite{Sun06},
under the coherent-state (symmetry-constrained, generalized
Hartree--Fock--Bogoliubov) approximation, one obtains for the
$k$-dependent case
\begin{equation}
2u_{\bm k\pm}v_{\bm k\pm} (\varepsilon_{\bm k\pm}-\lambda) -
\Delta_{\bm k\pm} (u^2_{\bm k\pm} - v^2_{\bm k\pm}) = 0
\label{BCSext}
\end{equation}
with
$$
\varepsilon_{\bm k\pm} = \varepsilon_{\bm k} \mp \Delta_q(\bm k)
\qquad \Delta_{\bm k\pm} = \Delta_d(\bm k) \pm \Delta_\pi(\bm k)
\nonumber
$$
and
\begin{eqnarray}
\Delta_d(\bm k) &=& G^0_0 \sigma_{\bm k} \sum_{\bm k'>0} \sigma_{\bm
k'} \left< {\bf D}^\dag(\bm k')\right> \nonumber\\
\Delta_\pi(\bm k) &=& G^0_1 \sigma_{\bm k} \sum_{\bm k'>0}
\sigma_{\bm k'} \left< {\bm \pi}_z^\dag(\bm k')\right> \nonumber\\
\Delta_q(\bm k) &=& \chi^0 g_{\bm k} \sum_{\bm k'>0} g_{\bm k'}
\left< {\bf Q}_z (\bm k')\right>. \nonumber
\end{eqnarray}
In the above and following equations, ${\bm k'>0}$ means $k'_x>0$ or
$k'_y >0$. Solving Eq.\ (\ref{BCSext}) gives $k$-dependent
occupation probabilities
$$
u^2_{\bm k\pm} = \frac12 \left[ 1 + {{\varepsilon_{\bm
k\pm}-\lambda}\over {e_{\bm k\pm}}}\right] \qquad v^2_{\bm k\pm} =
\frac12 \left[ 1 - {{\varepsilon_{\bm k\pm}-\lambda}\over {e_{\bm
k\pm}}}\right]
$$
and a quasiparticle energy
\begin{equation}
e_{\bm k\pm} = \sqrt{(\varepsilon_{\bm k\pm}-\lambda)^2 +
\Delta_{\bm k\pm}^2}. \nonumber
\end{equation}
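As a sanity check on this quasiparticle structure (with arbitrary sample values for $\varepsilon_{\bm k\pm}-\lambda$ and $\Delta_{\bm k\pm}$), the occupation probabilities satisfy $u^2_{\bm k\pm}+v^2_{\bm k\pm}=1$ and solve Eq.\ (\ref{BCSext}):

```python
import numpy as np

# Arbitrary sample values (in some energy unit) for one k point.
eps_minus_lam = 0.15    # epsilon_{k+} - lambda
Delta = 0.20            # Delta_{k+}

e = np.sqrt(eps_minus_lam**2 + Delta**2)     # quasiparticle energy e_{k+}
u2 = 0.5 * (1.0 + eps_minus_lam / e)         # u^2_{k+}
v2 = 0.5 * (1.0 - eps_minus_lam / e)         # v^2_{k+}
u, v = np.sqrt(u2), np.sqrt(v2)

assert np.isclose(u2 + v2, 1.0)              # Bogoliubov normalization
# u, v solve 2 u v (eps - lambda) - Delta (u^2 - v^2) = 0:
assert np.isclose(2.0 * u * v * eps_minus_lam, Delta * (u2 - v2))
print(e)
```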
The gap equations in $k$-space can then be obtained:
\begin{subequations}
\label{kgapeqs}
\begin{eqnarray}
\Delta_d(\bm k) &=& {{G^0_0\sigma_{\bm k}}\over 2} \sum_{\bm k'
> 0} \sigma_{\bm k'} \left( w_{\bm k'+}\Delta_{\bm k'+}
+ w_{\bm k'-}\Delta_{\bm k'-} \right) \\
\Delta_\pi(\bm k) &=& {{G^0_1\sigma_{\bm k}}\over 2} \sum_{\bm k'
> 0}\sigma_{\bm k'}\left( w_{\bm k'+}\Delta_{\bm k'+}
- w_{\bm k'-}\Delta_{\bm k'-} \right) \\
\Delta_q(\bm k) &=& {{\chi^0 g_{\bm k}}\over 2} \sum_{\bm k' > 0}
g_{\bm k'}
\{ w_{\bm k'+} [\Delta_q(\bm k') + \lambda'_{\bm k'}] \nonumber\\
&&+ w_{\bm k'-} [\Delta_q(\bm k') - \lambda'_{\bm k'}] \} \\
-2x &=& {2\over \Omega} \sum_{\bm k' > 0} \{ w_{\bm k'+}
[\Delta_q(\bm k') + \lambda'_{\bm k'}] \nonumber\\
&&- w_{\bm k'-} [\Delta_q(\bm k') - \lambda'_{\bm k'} ] \}
\label{15d}
\end{eqnarray}
\end{subequations}
with
\begin{equation}
\begin{array}{c}
w_{\bm k\pm} = \displaystyle {{P_{\bm k\pm}(T)}\over {e_{\bm k\pm}}}
\qquad \lambda'_{\bm k} = \lambda - \varepsilon_{\bm k}
\\[5pt]
P_{\bm k\pm}(T) = \displaystyle \mbox{tanh}\left({{e_{\bm
k\pm}}\over{2k\tsub BT}}\right). \nonumber
\end{array}
\end{equation}
In Eq.\ (\ref{15d}), $\Omega=\sum_{\bm k>0}1$ is the maximum number
of doped holes (or doped electrons for electron-doped compounds)
that can form coherent pairs, assuming the normal state (at half
filling) to be the vacuum. $x$ is the relative doping fraction in
the model \cite{Sun06}. Positive $x$ represents the case of hole
doping, with $x=0$ corresponding to half filling (no doping) and
$x=1$ to maximal hole doping. The true doping $P$ is related to $x$
by $x \simeq 4P$ \cite{Sun06}.
The $k$-dependent gap equations (\ref{kgapeqs}) are coupled
algebraic equations. By solving these equations, one can in
principle obtain $k$-dependent (also temperature and hole-doping
dependent) energy gaps. However, general and exact solutions are
difficult because gaps for given $\bm k$ are related to all other
$\bm k$ points, which means that solutions at each $\bm k$ point are
not independent from the other $k$-components. In the next section,
we show that one can obtain analytical solutions by applying some
approximations.
\section{Solutions for $\bm k$-dependent gap equations}
We may greatly simplify the solution of Eqs.\ (\ref{kgapeqs})
through the following three steps. First, we replace the quantities
in the summations on the right hand side of (\ref{kgapeqs}) with
their corresponding mean values:
\begin{equation}
\begin{array}{c}
\Delta_{\bm k\tau} \Longrightarrow \overline{\Delta}_\tau
(\tau=+, - ,q) \qquad
\lambda'_{\bm k} \Longrightarrow \overline{\lambda'}
\\[5pt]
w_{\bm k\pm} \Longrightarrow \overline{w}_\pm \equiv \displaystyle
{{\overline P_\pm(T)}\over {\overline e_\pm}}, \nonumber
\end{array}
\end{equation}
with
\begin{equation}
\overline P_{\pm}(T) = \mbox{tanh}\left({{\overline
e_{\pm}}\over{2k\tsub BT}}\right) \qquad \overline e_{\pm} =
\sqrt{(\overline {\lambda'}\pm \overline \Delta_q)^2 + \overline
\Delta_\pm^2}. \nonumber
\end{equation}
The functions $\sigma_{\bm k}$ and $g_{\bm k}$ in the summations can
then be simplified as
\begin{eqnarray}
\sum_{\bm k'>0} \sigma_{\bm k'} &\Longrightarrow&
\sum_{\bm k'>0} \overline g_0 = {\Omega\over 2} \overline g_0 \label{intg0}\\
\sum_{\bm k'>0} g_{\bm k'} &\Longrightarrow& \sum_{\bm k'>0}
\overline g = {\Omega\over 2}\overline g. \label{intg}
\end{eqnarray}
The second level of simplification is based on physical
considerations. Experimentally-measured energy gaps are dominated by
contributions from near the Fermi surface. Therefore, we assume that
measured gaps may be approximated by their values at $\tilde
k=k\tsub f$. Using this approximation and the average values
introduced in the first approximation step, we can write for the gap
equations of (\ref{kgapeqs}) evaluated at $\tilde k=k\tsub f$:
\begin{subequations}
\label{kgapseqs2}
\begin{eqnarray}
\Delta_d(\bm k) &=& {\Omega\over 4} G^0_0 g_0 \overline g_0
\delta(\theta_{k\tsub f}) \left( \overline w_{+}\overline\Delta_{+}
+ \overline w_{-} \overline\Delta_{-} \right) \\
\Delta_\pi(\bm k) &=& {\Omega\over 4} G^0_1 g_0 \overline g_0
\delta(\theta_{k\tsub f}) \left( \overline w_{+} \overline\Delta_{+}
- \overline w_{-} \overline\Delta_{-} \right) \\
\Delta_q(\bm k) &=& {\Omega\over 4} \chi^0 g_0 \overline g
\gamma(\theta_{k\tsub f}) \{ \overline w_{+} [\overline\Delta_q +
\overline{\lambda'}]
\nonumber\\
&&+ \overline w_{-} [\overline\Delta_q - \overline{\lambda'} ]\} \\
-2x &=& \overline w_{+} [\overline\Delta_q + \overline{\lambda'}] -
\overline w_{-} [\overline\Delta_q - \overline{\lambda'} ] ,
\end{eqnarray}
\end{subequations}
where $g_0 \equiv g_{0k\tsub f}$, and
\begin{equation}
\gamma(\theta_{k\tsub f}) \equiv \left| {\frac{g(\bm k)}{g_0}}
\right|. \label{gammaOfTheta}
\end{equation}
Equations (\ref{kgapseqs2}) are $k$-dependent gap equations
constrained on the Fermi surface through
\begin{equation}
(\pi-k_x)^2 + (\pi-k_y)^2 = k\tsub f^2. \label{fSurface}
\end{equation}
It can be shown that $\gamma(\theta_{k\tsub f})$ is independent of
$|k\tsub f|$, to high accuracy, and therefore can be considered in
later discussions to be a function of azimuthal angle only.
In the third simplification step, we assume the average values
$\overline\Delta_\tau$ and $\overline{\lambda'}$ to be proportional
to the unknown quantities $\Delta_{\tau}(\bm k)$ and $\lambda'_{\bm
k}$, respectively, with a constant of proportionality $R$:
\begin{equation}
\overline \Delta_\tau = R \Delta_{\tau}(\bm k) \quad (\tau=+, -, q)
\qquad \overline {\lambda'} = R \lambda'_{\bm k}, \label{assum3}
\end{equation}
which implies that
\begin{equation}
\overline e_\pm = R e_{\bm k \pm}.
\end{equation}
The parameter $R$ serves as a renormalization factor that corrects
on average for the errors caused by the approximation and is
determined by fitting data. With (\ref{assum3}), Eqs.\
(\ref{kgapseqs2}) now become
\begin{subequations}
\label{kgapseqs3}
\begin{eqnarray}
\Delta_d(\bm k) &=& {\Omega\over 4} G_{0} \delta(\theta_{k\tsub f})
\left[ \tilde w_{+} \Delta_{+}(\bm k) + \tilde w_{-} \Delta_{-}(\bm
k) \right]
\\
\Delta_\pi(\bm k) &=& {\Omega\over 4} G_{1} \delta(\theta_{k\tsub
f}) \left[ \tilde w_{+} \Delta_{+}(\bm k) - \tilde w_{-}
\Delta_{-}(\bm k) \right]
\\
\Delta_q(\bm k) &=& {\Omega\over 4} \chi {{\gamma(\theta_{k\tsub
f})}\over{\overline\gamma}}
\{ \tilde w_{+} [\Delta_{q}(\bm k) + \lambda'_{\bm k}] \nonumber\\
&&+ \tilde w_{-} [\Delta_{q}(\bm k) - \lambda'_{\bm k} ]\}
\\
-2x &=& \tilde w_{+} [\Delta_{q}(\bm k) + \lambda'_{\bm k}]
\nonumber\\
&& -\tilde w_{-} [\Delta_{q}(\bm k) - \lambda'_{\bm k} ],
\end{eqnarray}
\end{subequations}
with
\begin{equation}
G_{i} = G^0_i g_0 \overline g_0 \qquad \chi = \chi^0 g_0
\overline\gamma \overline g \nonumber
\end{equation}
and
\begin{equation}
\tilde w_{\pm} = {{\tilde P_{\pm}(T)}\over {e_{\bm k\pm}}} \qquad
\tilde P_{\pm}(T) = \mbox{tanh}\left({{R e_{\bm k\pm}}\over{2k\tsub
BT}}\right).
\end{equation}
In the above equations, $\overline\gamma$ is the average value of
$\gamma(\theta_{k\tsub f})$.
The simplified gap equations (\ref{kgapseqs3}) can now be solved
analytically. They have the same structure as the gap equations
discussed in the $k$-independent SU(4) model \cite{Sun06}, except
that the interaction strengths in the present case are
$k$-anisotropic. Therefore, all the SU(4) formulas in Sections
III--VI of Ref.\ \cite{Sun06} remain valid, provided that the
following replacements are made for the singlet-pairing,
triplet-pairing, and antiferromagnetic coupling strengths,
respectively:
\begin{equation}
G_0 \rightarrow G_0 \delta(\theta_{k\tsub f}) \quad\ G_1 \rightarrow
G_1 \delta(\theta_{k\tsub f}) \quad\ \chi \rightarrow \chi
\gamma(\theta_{k\tsub f}) / \overline\gamma.
\end{equation}
For example, if we introduce the doping parameter $x$ defined in
\S II.c of Ref.\ \cite{Sun06}, the $k$-dependent critical
hole-doping fraction is [compare Eq.\ (23) of Ref.\ \cite{Sun06}]
\begin{equation}
x_{q\theta} \equiv x_q(\theta_{k\tsub f}) = \sqrt{{\chi
\gamma(\theta_{k\tsub f}) / \overline\gamma -
G_0\delta(\theta_{k\tsub f})} \over {\chi \gamma(\theta_{k\tsub
f}) / \overline\gamma - G_1\delta(\theta_{k\tsub f})}},
\label{xqf}
\end{equation}
and the $T=0$ energy gaps at the Fermi momentum $k\tsub f$ for $x\le
x_{q\theta}$ are obtained as
\begin{subequations}
\label{solution1}
\begin{eqnarray}
\Delta_d(\bm k) &=& {\Omega\over 2} G_0\delta(\theta_{k\tsub f})
\sqrt{x(x_{q\theta}^{-1}-x)} \\
\Delta_\pi(\bm k) &=& {\Omega\over 2} G_1\delta(\theta_{k\tsub f})
\sqrt{x(x_{q\theta}-x)} \\
\Delta_q(\bm k) &=& {\Omega\over 2} \chi {{\gamma(\theta_{k\tsub
f})}\over {\overline\gamma}}
\sqrt{(x_{q\theta}^{-1}-x)(x_{q\theta}-x)} \\
\lambda'_{\bm k} &=& - {\Omega\over 2} \left[\chi
{{\gamma(\theta_{k\tsub f})}\over {\overline\gamma}} -
G_1\delta(\theta_{k\tsub f})\right] x_{q\theta}
\left(1-x_{q\theta}x\right)
\nonumber\\
&& - {\Omega\over 2} G_1\delta(\theta_{k\tsub f}) x,
\end{eqnarray}
\end{subequations}
while for $x>x_{q\theta}$ we obtain $\Delta_{q}(\bm k) =
\Delta_\pi(\bm k)=0$ and
\begin{subequations}
\label{solution2}
\begin{eqnarray}
\Delta_d(\bm k) &=& {\Omega\over 2} G_0\delta(\theta_{k\tsub f}) \sqrt{1-x^2} \\
\lambda'_{\bm k} &=& -{\Omega\over 2} G_0\delta(\theta_{k\tsub f}) x
\end{eqnarray}
\end{subequations}
for the solutions. The energy gaps obtained in Eqs.\
(\ref{solution1})--(\ref{solution2}) are $k$-anisotropic. The
pairing gaps have nodal points at $k_x=k_y$, where
$\delta(\theta_{k\tsub f})=0$. The pseudogap $\Delta_{q}(\bm k)$ is
a function of $\bm k$ by virtue of the factor $\gamma(\theta_{k\tsub
f})$ defined in Eq.\ (\ref{gammaOfTheta}).
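As a quick numerical sanity check (Python; the couplings below are illustrative placeholders, not the fitted values of Ref.~\cite{Sun07}), one can verify that the two solution branches (\ref{solution1}) and (\ref{solution2}) join continuously at $x=x_{q\theta}$, where $\Delta_\pi$ and $\Delta_q$ close:

```python
import math

# Illustrative placeholder couplings at one fixed Fermi-surface angle:
# delta = delta(theta_kf), gfac = gamma(theta_kf)/gamma_bar.
Omega, delta, gfac = 1.0, 1.0, 1.2
chi, G0, G1 = 1.0, 0.7, 0.1
chi_eff = chi * gfac

# Critical doping fraction at this angle, Eq. (xqf).
xq = math.sqrt((chi_eff - G0 * delta) / (chi_eff - G1 * delta))

def gaps(x):
    """T=0 gaps on the Fermi surface: Eqs. (solution1) for x <= xq,
    Eqs. (solution2) for x > xq; returns (Delta_d, Delta_pi, Delta_q, lambda')."""
    if x <= xq:
        Dd = 0.5 * Omega * G0 * delta * math.sqrt(x * (1.0 / xq - x))
        Dpi = 0.5 * Omega * G1 * delta * math.sqrt(x * (xq - x))
        Dq = 0.5 * Omega * chi_eff * math.sqrt((1.0 / xq - x) * (xq - x))
        lam = (-0.5 * Omega * (chi_eff - G1 * delta) * xq * (1.0 - xq * x)
               - 0.5 * Omega * G1 * delta * x)
        return Dd, Dpi, Dq, lam
    Dd = 0.5 * Omega * G0 * delta * math.sqrt(1.0 - x * x)
    lam = -0.5 * Omega * G0 * delta * x
    return Dd, 0.0, 0.0, lam

below = gaps(xq - 1e-9)   # just inside the AF+SC regime
above = gaps(xq + 1e-9)   # just inside the pure SC regime
```

In particular, $\Delta_d$ and $\lambda'_{\bm k}$ are continuous across $x=x_{q\theta}$, while $\Delta_\pi$ and $\Delta_q$ vanish there.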
Because all SU(4) formulas in Sections III--VI of Ref.\ \cite{Sun06}
remain valid, it is easily proven that the PG closure temperature
$T^*$ acquires the same $g(\bm k)$ dependence as the pseudogap, and
we obtain for the PG closure temperature
\begin{equation}
T^*(\bm k) = \chi {{\gamma(\theta_{k\tsub f})}\over {
\overline\gamma}} \Omega {{R(1-x^2)} \over {4k\tsub B}} .
\label{Tstark}
\end{equation}
We do not expect a corresponding effect in the superconducting
region because below $T\tsub c$ the pairing gap opens and the entire
Fermi surface will be destroyed except at the nodal points. We find
a superconducting transition temperature
\begin{equation}
T\tsub c(\bm k) = G_0 \delta(\theta_{k\tsub f}) \Omega {{Rx} \over
{4k\tsub B \, {\mbox {atanh}}(x)}},
\end{equation}
which has no $g(\bm k)$ factor.
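Apart from the angular factor $\delta(\theta_{k\tsub f})$ and constant prefactors, the doping dependence of $T\tsub c$ is carried entirely by the factor $x/\mbox{atanh}(x)$. A minimal numerical sketch (Python, with all prefactors set to unity) confirms that this factor decreases monotonically from its $x\rightarrow 0$ limit of $1$:

```python
import math

# Doping dependence of Tc with the angular factor delta(theta) and the
# prefactors G0, Omega, R, kB all set to unity.
def tc_shape(x):
    return x / math.atanh(x)
```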
\section{Discussion and predictions}
Most experimental techniques do not resolve $k$, and for those we
expect transition temperatures to be dominated by contributions
from near the Fermi surface $(\tilde k = k\tsub f)$, averaged over
all $k$-directions. If one takes the average over $\theta_{k\tsub
f}$, then
$$
\delta(\theta_{k\tsub f})\rightarrow 1
\qquad \gamma(\theta_{k\tsub f})/\overline\gamma\rightarrow 1,
$$
and the gap equations of the $k$-dependent SU(4)$_k$ model and their
solutions become identical to those obtained for the original
$k$-independent SU(4) model \cite{Sun06}. Thus our original SU(4)
model predicts \cite{Sun06,Sun07} values of energy gaps and the
corresponding transition temperatures that are (perhaps weighted)
{\em averages over $k$.} These are relevant for comparison with
experiments that do not resolve $k$. However, the explicit
appearance of the anisotropic factor $\gamma(\theta_{k\tsub
f})/\overline\gamma$ in the gap solutions of the SU(4)$_k$ model
leads to some interesting new consequences. We note that although
the following discussions are framed in terms of the anisotropic factor
$\gamma(\theta_{k\tsub f})/\overline\gamma$, its relation to
$\delta(\theta_{k\tsub f})$ guarantees that the model still
preserves the $d$-wave nature and has nodes in the pairing gaps. In
this section we discuss three predictions following from the new
formalism that could have important implications for experiments
that detect explicitly $k$-dependent properties.
\subsection{Two pseudogap closure temperatures:
the maximum and the averaged}
The $k$-dependent PG closure temperature $T^*(\bm k)$ in
Eq.~(\ref{Tstark}) differs from the $k$-averaged one derived in
Eq.~(49) of Ref.~\cite{Sun06},
\begin{equation}
T^*_{\rm ave} = \chi \Omega {{R(1-x^2)} \over {4k\tsub B}} ,
\label{Tstar}
\end{equation}
by the factor ${\gamma(\theta_{k\tsub f})} / {\overline\gamma}$. We
know that ${\gamma(\theta_{k\tsub f})}$, and also $T^*(\bm k)$, take
their maximum values at the antinodal points; for example,
$$
{\gamma_{\rm
max}(\theta_{k\tsub f})}|_{\theta_{k\tsub f}=0,{\pi\over 2}} = 1
$$
(see Figs.~1 and 2). We can denote the maximum PG closure
temperature as $T^*_{\rm max}$. Thus, the SU(4)$_k$ model predicts
two PG closure temperatures that are related to each other through
\begin{equation}
T^*_{\rm max} = T^*_{\rm ave}/ {\overline\gamma} .
\label{TstarRatio}
\end{equation}
It is straightforward to evaluate the averaged $\gamma$ value by
integration. Noting that Eqs. (\ref{solution1}) are restricted to
$x\le x_{q\theta}$, we can define the maximum allowed azimuthal
angle $\theta\tsub c$ through the condition $x=
x_{q\theta}(\theta\tsub c)$. We then have
\begin{equation}
{\overline\gamma} = {2\over\pi} \int_0^{\theta\tsub c}
\left|{{g(k(\theta))}\over {g_0}}\right| d\theta .
\label{gammaBar}
\end{equation}
In the above calculation, we have used for the integrand the
expression (\ref{gammaOfTheta}) for ${\gamma(\theta_{k\tsub f})}$
and the constraint (\ref{fSurface}). The resulting
${\overline\gamma}$ depends on the size of the Fermi surface
$|k\tsub f|$. Assuming an isotropic hole Fermi surface, we have
\begin{equation}
{k\tsub f}^2 = 2\pi (1+P) .\nonumber
\end{equation}
Therefore, ${\overline\gamma}$ is essentially a
hole-doping-dependent quantity. In Fig.~3, we show the behavior of
$1 / {\overline\gamma}$ as a function of doping $P$ assuming
coupling-strength parameters characteristic of the cuprate
superconductors. As one can see, it has a nonlinear dependence on
doping, taking the maximum value 1.6 at very small dopings, falling
rapidly between $P=0.05$ and 0.08, and then continuing to decrease
at a smaller rate until it reaches unity at the critical doping
$P=0.18$.
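The angular factor itself is straightforward to evaluate numerically. The sketch below (Python) assumes the $d$-wave form factor $g(\bm k)=\cos k_x-\cos k_y$ of Eq.~(\ref{gk}) together with the isotropic Fermi surface (\ref{fSurface}); for simplicity it averages $\gamma$ over the full quadrant rather than over the doping-dependent window $[0,\theta\tsub c]$ entering Eq.~(\ref{gammaBar}):

```python
import math

def gamma_theta(theta, P):
    """gamma(theta) = |g(k)/g0| on the Fermi surface (pi-kx)^2+(pi-ky)^2 = kf^2,
    assuming the d-wave form factor g(k) = cos kx - cos ky."""
    kf = math.sqrt(2.0 * math.pi * (1.0 + P))
    kx = math.pi - kf * math.cos(theta)
    ky = math.pi - kf * math.sin(theta)
    g0 = math.cos(math.pi - kf) - math.cos(math.pi)   # antinodal value (theta=0)
    return abs((math.cos(kx) - math.cos(ky)) / g0)

def gamma_quadrant_avg(P, n=2000):
    """Plain midpoint-rule average of gamma over 0 <= theta <= pi/2; the
    gamma_bar of the text further restricts theta to the window [0, theta_c]."""
    h = (math.pi / 2.0) / n
    return sum(gamma_theta((i + 0.5) * h, P) for i in range(n)) * h / (math.pi / 2.0)
```

The sketch reproduces the qualitative features used in the text: $\gamma=1$ at the antinodal angles, $\gamma=0$ at the node $\theta=\pi/4$, and an average strictly between $0$ and $1$.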
\begin{figure}
\includegraphics[height=.23\textheight]{fig3.eps}
\caption{The doping-dependent $1 / {\overline\gamma}$ factor. The
calculation employs Eq.~(\ref{gammaBar}) and utilizes realistic
interaction strengths $\chi, G_0$ and $G_1$ taken from
Ref.~\cite{Sun07}, with the pairing onset at $P=0.05$. }
\end{figure}
\begin{figure}
\includegraphics[height=.27\textheight]{fig4.eps}
\caption{(Color online) SU(4) cuprate phase diagram compared with
data. Strengths of the AF and singlet pairing correlations were
determined in Ref. \cite{Sun07} by global fits to cuprate data. The
PG temperature is $T^*$ and the SC transition temperature is $T\tsub
c$. The AF correlations vanish, leaving a pure singlet $d$-wave
condensate, above the critical doping $P_q$. Dominant correlations
in each region are indicated by italic labels. Data in green (open
triangles) and blue (open squares) are taken from Ref.\
\cite{Dai99}, and those in red (open circles) from Ref.\
\cite{Camp99} (arrows indicate that the point is a lower limit).}
\end{figure}
\begin{figure*}
\includegraphics[height=.19\textheight]{fig5.eps}
\caption{(Color online) Construction of Fermi arcs for doping
$P=0.15$ and values of $T/T^*_{\rm max}$ decreasing left to right.
The ranges of $k_x$ and $k_y$ are $-\pi$ to $\pi$ and dashed lines
indicate nodes. Hole Fermi surfaces in the absence of gaps are
illustrated by full solid arcs in each corner. For $T \ge T^*_{\rm
max}$ a full Fermi surface exists; for $T<T^*_{\rm max}$, opening of
the pseudogap destroys Fermi surfaces in the shaded regions (dotted
lines), leaving arcs (solid lines) centered on the nodal lines.
These arcs have absolute lengths that depend on $P$ and $T$, but
relative lengths that depend only very weakly on $P$ and are
determined almost entirely by the ratio $T/T^*_{\rm max}$. The sizes
of the shaded regions grow with decreasing $T$, so at very low
temperature almost all of the Fermi surface becomes gapped and the
Fermi arcs shrink to the nodal points as $T/T^*_{\rm max}
\rightarrow 0$. }
\end{figure*}
We thus obtain two distinct PG closure temperatures, $T^*_{\rm ave}$
and $T^*_{\rm max}$, having the same microscopic origin
\cite{Sun06,Sun07}, but differing in the kinds of experimental
observables for which they are appropriate. The largest difference
between the two is found for small doping; they take similar values
at large dopings, becoming identical at the optimal doping point. In
Ref.~\cite{Sun07}, experimental values of $T\tsub c$ and $T^*$
\cite{Dai99} that do not resolve $k$ were compared to our
theoretical $T\tsub c$ and $T^*_{\rm ave}$. In Fig.~4, we re-plot
these values in green for $T\tsub c$ (with open triangles for data
and dotted curve for theory) and blue for $T^*_{\rm ave}$ (with open
squares for data and solid curve for theory). Above the blue (solid)
curve, we now add the maximum PG closure temperatures $T^*_{\rm
max}$ in red (with open circles for data and dashed curve for
theory). Because of the $1 / {\overline\gamma}$ factor shown in
Fig.~3, the red (dashed) curve lies well above the blue (solid)
curve at low dopings. In Ref.~\cite{Camp99}, Campuzano {\it et al.}
reported their ARPES data (plotted as red circles in Fig.~4). In the
underdoped regime it is clear that the results from
Refs.~\cite{Camp99} and \cite{Dai99} differ substantially. Because
the ARPES experiment typically detects $k$-dependent properties, we
suggest that the data from Ref.~\cite{Camp99} actually measure
$T^*_{\rm max}$, as predicted in the present paper, while the data
cited in Ref.~\cite{Dai99} measure the $k$-averaged $T^*_{\rm ave}$,
as described in our earlier paper (Ref.~\cite{Sun07}). We emphasize
that in this interpretation the two types of experiments are seeing
the {\em same underlying physics,} but the observations differ
because what is actually being measured differs in the two cases.
\subsection{Temperature-dependent Fermi arcs}
The pseudogap closure temperature $T^*(\bm k)$ is anisotropic in
$\bm k$. Combining Eqs.\ (\ref{Tstark}), (\ref{Tstar}) and
(\ref{TstarRatio}), we have
\begin{equation}
T^*(\bm k) = T^*_{\rm max} \gamma(\theta_{k\tsub f}) = {{T^*_{\rm
max}}\over {g_0}} |g({k\tsub f})|, \label{fg1.1}
\end{equation}
where the doping-dependent quantity $T^*_{\rm max}$ is the maximum
value of $T^*(\bm k)$ in the antinodal direction, $\theta_{k\tsub
f}=0$ or $\pi/2$. For an arbitrary temperature $T < T^*_{\rm max}$,
the $k$-dependent pseudogap closes when $T=T^*(\bm k)$, which is
equivalent to the requirement that
\begin{equation}
|\cos k_x -\cos k_y| = g_0 (T/T^*_{\rm max}), \label{fg1.2}
\end{equation}
upon substituting (\ref{gk}) and (\ref{fg1.1}). This equation says
that the magnitude of the $d$-wave formfactor (\ref{gk}) that
expresses the nodal structure \cite{Sc95} in cuprate superconductors
is proportional to the scaled quantity $T/T^*_{\rm max}$, with a
proportionality factor $g_0$ that is related to the size of the
Fermi surface.
Simultaneous solution of Eqs. (\ref{fSurface}) and (\ref{fg1.2})
gives values of $k_x$ and $k_y$ where the pseudogap closes at a
given $P$ and $T$. Figure 5 illustrates the solution of
(\ref{fSurface}) and (\ref{fg1.2}) graphically for several
temperatures at fixed doping. The solution of Eq.\ (\ref{fg1.2}) is
represented by the curves bounding the shaded regions and the
solution of Eq.\ (\ref{fSurface}) is represented by the Fermi
surface curves in each corner. The intersection of these curves
defines two simultaneous solutions in each of the four quadrants
that bound the surviving part of the Fermi surface (heavier portions
of the curves in Fig.\ 5). In the shaded portions of Fig.\ 5 the
Fermi surface has been destroyed by the pseudogap, leaving only a
vestigial Fermi arc between the shaded regions.
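A minimal numerical sketch of this construction (Python; assuming the $d$-wave form factor $g(\bm k)=\cos k_x-\cos k_y$ and the Fermi surface (\ref{fSurface})): the surviving arc around the node $\theta=\pi/4$ is the set where $\gamma(\theta)\le T/T^*_{\rm max}$, and its angular width shrinks as $T$ is lowered:

```python
import math

def gamma_theta(theta, P=0.15):
    # |cos kx - cos ky| / g0 on the Fermi surface (pi-kx)^2+(pi-ky)^2 = 2*pi*(1+P),
    # assuming the d-wave form factor for g(k).
    kf = math.sqrt(2.0 * math.pi * (1.0 + P))
    kx = math.pi - kf * math.cos(theta)
    ky = math.pi - kf * math.sin(theta)
    g0 = math.cos(math.pi - kf) + 1.0   # antinodal (theta = 0) value
    return abs(math.cos(kx) - math.cos(ky)) / g0

def arc_half_width(t, n=20000):
    """Angular half-width about the node theta = pi/4 of the surviving Fermi
    arc at reduced temperature t = T/T*_max (the region gamma(theta) <= t)."""
    w = 0.0
    for i in range(1, n + 1):
        dth = i * (math.pi / 4.0) / n
        if gamma_theta(math.pi / 4.0 + dth) <= t:
            w = dth
        else:
            break
    return w
```

Consistent with Fig.~5, the arc spans the full quadrant at $T=T^*_{\rm max}$ and contracts toward the nodal point as $T/T^*_{\rm max}\rightarrow 0$.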
The solution in Fig.\ 5 represents a derivation of Fermi-arc
structure expected in the underdoped regime above $T\tsub c$. Below
$T\tsub c$, the Fermi surface is completely destroyed by the opening
of the pairing gap except at the nodal point, since the pairing gap
has no $\gamma(\theta_{k\tsub f})$ dependence [see Eq.\
(\ref{solution1})]. We emphasize that the calculations presented in
Fig.\ 5 involve no new parameters once the PG closure
temperatures (presented in Fig.\ 4) have been calculated.
Kanigel {\it et al.} \cite{Kanigel06} have reported ARPES
measurements of arc lengths for slightly underdoped Bi2212. A direct
reading of the Fermi arc length from Fig.~5 permits us to compare it
quantitatively with the Kanigel data. Details will be published
elsewhere \cite{Gu07}.
\subsection{Complete suppression of antiferromagnetism: pure
superconducting states in underdoped compounds}
\begin{figure}
\includegraphics[height=.23\textheight]{fig6.eps}
\caption{The anisotropic factor $\gamma(\theta)$. In this figure,
$\theta_{{\scriptstyle\rm c}_{1}}=\theta\tsub c$,
$\theta_{{\scriptstyle\rm c}_{2}}=\pi/2-\theta\tsub c$, and
$\theta\tsub c$ is determined by $x_{q\theta}(\theta\tsub c)=x$.
The value of $\gamma(\theta)$ at the critical angle $\theta\tsub
c$ is denoted by $\gamma(\theta\tsub c) \equiv \gamma\tsub c(x)$,
where $\gamma\tsub c(x)$ is a monotonically increasing function of
doping $x$ that becomes equal to unity when $x=x_q$. }
\end{figure}
\begin{figure*}
\includegraphics[height=.28\textheight]{fig7.eps}
\caption{(Color online) Dependence of the energy gaps $\Delta_d$,
$\Delta_\pi$, and $\Delta_q$ on the momentum direction $\theta$ for
several representative dopings $P$. For each doping there is a
$\theta$-window (yellow shaded region), centered at the node
$\theta=\pi/4$ (indicated by the open circle), in which
$\Delta_q=\Delta_\pi=0$, and $\Delta_d\ne 0$. This window
encompasses only a few percent of the Brillouin zone at low doping
(tending to zero at zero doping), but rapidly expands to fill the
entire Brillouin zone near the critical doping $P\simeq 0.18$ and
beyond.}
\end{figure*}
As we have discussed extensively in Ref.~\cite{Sun06}, the
antiferromagnetic correlation that plays a key role in understanding
underdoped cuprates is completely suppressed at and beyond the
critical doping point $x_q$. A pure ($d$-wave) BCS superconducting
state occurs at zero temperature in the overdoped portion of the
phase diagram. We now show that a similar situation can also occur
in certain $k$-windows in the {\it underdoped} regime, in which AF
correlation is completely suppressed and a pure SC state emerges.
This is another interesting consequence of the SU(4)$_k$ model due
to the anisotropic factor $\gamma(\theta_{k\tsub f}) /
\overline\gamma$. The critical doping point defined in
Eq.~(\ref{xqf}), which is constant in the $k$-averaged SU(4) model
\cite{Sun06}, is now a function of momentum direction
$\theta_{k\tsub f}$ because of the anisotropic factor
$\gamma(\theta_{k\tsub f}) / \overline\gamma$. Consequently, for
each given doping $x$ there always exists a window in the momentum
azimuthal angle, $\theta\tsub c < \theta_{k\tsub f} <
(\pi/2-\theta\tsub c)$, centered at the nodal point $\pi/4$,
within which the AF correlation vanishes and only the pairing gap
$\Delta_d$ exists. This follows because inside the window
$x_{q\theta}<x$; therefore, the solution (\ref{solution1}) is not
permitted but the solution (\ref{solution2}) is. The critical angle
$\theta\tsub c$ is determined by the condition
$x_{q\theta}(\theta\tsub c)=x$. Figure 6 illustrates the situation.
Because $\gamma(\theta_{k\tsub f}) / \overline\gamma$, and thus
$x_{q\theta}$, is a doping-dependent quantity, the above phenomenon
depends on doping. The sequential figures, plotted for four
different dopings in Fig.~7, show the behavior of the energy gaps as
functions of the momentum direction. Whenever
$$
x=x_q=\sqrt{ \frac{\chi-G_0}{\chi-G_1} },
$$
$\theta\tsub c=0$, which means that there is no momentum space available
to $\Delta_q$ and $\Delta_\pi$, and the AF correlations and triplet
pairing states are completely suppressed. Therefore, for $x\ge x_q$
the system can only be in a pure superconducting state at zero
temperature. It can be seen that larger doping $x$ implies a smaller
critical angle $\theta\tsub c$, and thus a wider pairing window, and
that the width of the pairing window decreases rapidly toward zero
as the doping goes to zero.
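As a small worked example (Python; the coupling strengths are hypothetical, chosen only to satisfy $\chi>G_0>G_1$, and are not the fitted values of Ref.~\cite{Sun07}), the critical doping point above which the pure superconducting state fills the whole Brillouin zone is:

```python
import math

# Hypothetical coupling strengths chosen only so that chi > G0 > G1;
# the fitted cuprate values are those of Ref. [Sun07].
chi, G0, G1 = 1.0, 0.7, 0.1
xq = math.sqrt((chi - G0) / (chi - G1))   # critical doping fraction x_q
Pq = xq / 4.0                             # approximate true doping, using x ~= 4P
```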
In the original SU(4) model, we found that the critical doping point
defines a natural boundary (quantum phase transition) between
underdoped and overdoped regimes that have qualitatively different
wavefunctions \cite{Sun06}. We termed the underdoped superconducting
regime the AF+SC phase (antiferromagnetic superconducting phase); it
is characterized by having all gaps nonzero but is dominated by AF
and SC gaps. The present extension to the SU(4)$_k$ model reveals
the additional feature that in this AF+SC phase the gaps are {\it
highly anisotropic} in the momentum space, implying the possibility
of a pure SC window around the nodal points even in the underdoped
regime. The proposed existence of these pure superconducting
windows may have considerable implication for the nature of the
Fermi surface at low doping, for the Nernst effect, and for the
relationship of impurities to inhomogeneities in the underdoped
region. These deserve further investigation.
The above analysis has assumed $T=0$ for simplicity, but the basic
picture should be valid also in the case with nonzero temperature.
The formulation and solution of the gap equations described in this
paper can be extended to finite temperature using the methods
described in Ref.~\cite{Sun07}. While of considerable practical
importance, this extension does not involve conceptually new ideas
and will be deferred to a later paper.
\section{Summary}
In this paper, we have extended the SU(4) model for high-$T\tsub c$
superconductivity to include explicit momentum dependence in
observables. To do so, we have started from a general SU(4)
Hamiltonian and introduced a new approximation for the $d$-wave
formfactor in the pair operators. This leads to the new SU(4)$_k$
model, which retains explicit $k$-dependence while preserving SU(4)
symmetry. We have solved the gap equations derived from the
SU(4)$_k$ coherent states with some plausible approximations,
obtaining analytical solutions for $k$-dependent superconducting
gaps, pseudogaps, and their transition temperatures $T\tsub c$ and
$T^*$. The new SU(4)$_k$ model reduces to the original SU(4) model
for observables that are averaged over all possible $k$ directions.
Therefore we propose that the original SU(4) model describes the
averaged features and thermal properties of cuprates, while the new
SU(4)$_k$ model presented in this paper extends this description to
detailed anisotropic properties in the $k$-space. The present
results have been obtained for zero temperature, but the formalism
presented here may be extended to finite temperature in a manner
similar to the corresponding extension of the $k$-averaged SU(4) model.
Because of an anisotropic factor $\gamma(\theta_{k\tsub
f})/\overline\gamma$ in the analytical gap solutions, the cuprate
phase structure in the underdoped regime becomes even richer than
that for $k$-averaged observations. We have discussed three
immediate consequences that emerge in the new SU(4)$_k$ model:
\begin{enumerate}
\item We have suggested the possibility of two distinct, measurable,
pseudogap closure temperatures: the maximum and the averaged. In the
coherent-state SU(4) theory the pseudogap could be interpreted
either as arising from competing AF and SC degrees of freedom, or
alternatively as fluctuations of pairing subject to SU(4)
constraints \cite{Sun06,Sun07}. The proposed $T^*_{\rm ave}$ and
$T^*_{\rm max}$ share this same microscopic origin, but differ from
each other by a doping-dependent factor. The temperature $T^*_{\rm
ave}$ represents PG closure temperatures that are averages over $k$,
while $T^*_{\rm max}$, which is generally higher than $T^*_{\rm
ave}$, represents the pseudogap temperature expected if one retains
explicit $k$-dependence. Experimentally, then, we predict that
$T^*_{\rm ave}$ is the pseudogap temperature that should be measured
in experiments that do not resolve $k$ explicitly, but (the
generally higher) $T^*_{\rm max}$ is the expected measured pseudogap
temperature for experiments like ARPES that resolve $k$.
\item We have provided a theoretical framework to understand ARPES
Fermi-arc data. Using two analytical equations, we have obtained
solutions for the $T/T^*$-dependent Fermi-arc lengths in quantitative
agreement with existing measurements. The essence of this result is
the appearance of the new factor $\gamma(\theta)$ in the PG closure
temperature that has been derived in Eq.~(\ref{Tstark}).
\item We have predicted the existence of doping-dependent windows
in the momentum space where antiferromagnetic correlation is
completely suppressed in the underdoped regime. Without AF
competition, it is possible for pure superconducting states to
emerge in these windows. Thus, we find that pure BCS-type
superconducting states can exist, not only in conventional
superconductors or in overdoped cuprate high-$T\tsub c$
superconductors (where such behavior is well established), but in
localized islands even in underdoped cuprate superconductors. It is
of interest whether this prediction is related to the recent
observation of small pockets of well-defined Fermi surface in
underdoped cuprate superconductors.
\end{enumerate}
\noindent Some of these predictions (for example, the two
temperature scales for pseudogap behavior) have the potential to
reconcile apparent discrepancies in existing data. All of them can
be tested in experiments capable of resolving
$k$-dependent behavior.
Finally, we note that the recent discovery of superconductivity in
layered iron-based transition metal oxypnictides \cite{new1} has
generated a new wave of research interest. In place of copper and
oxygen, the new compounds contain iron and arsenic, and the highest
critical temperature for them has already reached 55 kelvin
\cite{new2}. It has been demonstrated in neutron-scattering
experiments \cite{Dai08} that, like high-$T_c$ copper oxides,
superconductivity in these iron-based materials is likely competing
strongly with antiferromagnetic degrees of freedom. It will be of
considerable interest to see whether approaches like the one
presented in this paper, or other models capable of handling
multiple competing degrees of freedom in strongly-correlated systems
on an equal footing, can explain these new high temperature
superconductors and their relationship to the old ones \cite{Sun08}.
In particular, we note that what is already known about the new
iron-based superconductors suggests that $k$-dependent phenomena of
the sort described in this paper should also be observable in these
new superconductors.
\bibliographystyle{unsrt}
\section{Introduction}
In \cite{vf} we generalized MacMahon's \cite{MacM} beautiful formula stating that the number of lozenge tilings of a hexagonal region of side-lengths $x$, $y$, $z$, $x$, $y$, $z$ (in cyclic order) on a triangular lattice is
\begin{equation}
\prod_{i=1}^x\prod_{j=1}^y\prod_{k=1}^z \frac{i+j+k-1}{i+j+k-2},
\label{eaa}
\end{equation}
by showing that the number of lozenge tilings of hexagonal regions with a 4-lobed structure (called a shamrock) removed from their center is given by a product formula generalizing \eqref{eaa}. Motivated by the singularly elegant situation that all symmetry classes of lozenge tilings of a hexagon are given by equally beautiful formulas (see \cite{And}\cite{Sta}\cite{Kup}\cite{Ste}\cite{KKZ} and the survey \cite{Bres} for more recent developments), it is natural to consider the problem of enumerating the symmetry classes of tilings of these more general regions. Six new questions arise in this way. In \cite{symffa} we solved the cyclically symmetric and the cyclically symmetric and transpose complementary cases (invariance under rotation by 120 degrees, resp. invariance under the same plus reflection across vertical), and in \cite{symffb} we presented the transpose complementary case (invariance under reflection across vertical).
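As a quick illustration (Python; not needed for anything that follows), the product \eqref{eaa} is easy to evaluate exactly with rational arithmetic; for instance, the unit hexagon has $2$ tilings and the $2,2,2,2,2,2$ hexagon has $20$:

```python
from fractions import Fraction

def macmahon(x, y, z):
    """Number of lozenge tilings of the x,y,z,x,y,z hexagon, Eq. (eaa)."""
    total = Fraction(1)
    for i in range(1, x + 1):
        for j in range(1, y + 1):
            for k in range(1, z + 1):
                total *= Fraction(i + j + k - 1, i + j + k - 2)
    return int(total)
```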
The purpose of this paper is to present the enumeration of a fourth case, that of symmetric and self complementary tilings (i.e., tilings that are horizontally symmetric and centrally symmetric). We achieve this by first generalizing the family of regions, and then proving product formulas for the number of tilings of the more general regions (see Theorem \ref{tbc} and Proposition~\ref{tca}).
Very useful for proving such formulas is Kuo's graphical condensation method \cite{KuoOne}\cite{KuoTwo}. However, the tilings we consider are tilings of regions for which part of the boundary is free (i.e., lozenges are allowed to protrude out halfway across those portions), and Kuo's original results did not deal with the case of a free boundary. We therefore first need to work out free boundary versions of Kuo's formulas.
We present three such free boundary analogs (see Theorem \ref{tba} and Corollary \ref{tbb}). The first one (which applies in a more general setting than the other two) is an eight-term recurrence. The other two are four-term recurrences (one being exactly Kuo's Pfaffian recurrence!) that can be deduced from the first one. One of the latter is what we use to prove our results.
Our results allow us to compute exactly the correlation in a sea of dimers of a macroscopic dent in a 90 degree wedge with mixed boundary conditions (see Theorem \ref{tda}). We use previous results to compute the correlation of the corresponding symmetrized system with no boundary (see Theorem \ref{tdb}), and show that its fourth root has the same log-asymptotics as the correlation of the dent in the 90 degree wedge (see Corollary \ref{tdd}). This is the first result of this kind involving a macroscopic defect. It suggests that the connections between dimer systems with gaps and 2D electrostatics may be deeper than previously thought.
\section{Statement of main results}
For a planar graph $G$ with weights on its edges and a distinguished subset $S$ of vertices on some face, we denote by $M_f(G)$ the sum of the weights\footnote{ The weight of a matching is the product of the weights of the edges in it.} of all the (not necessarily perfect) matchings of $G$ in which all the vertices that are not in $S$ are matched, but those in $S$ are free to be matched or not matched (the distinguished subset of vertices $S$ will be clear from context, so we do not need to include $S$ in the notation). Clearly, setting all the edge weights equal to 1 results in $M_f(G)$ simply counting all such matchings.
If $a,b,c,d\notin S$ are four vertices appearing in this cyclic order on the same face as the one containing the vertices in $S$, we say that $S$ is {\it $a,c$-separated} if there are no mutually disjoint paths $P_1,P_2,P_3$ in $G$ so that $P_1$ connects $a$ to $c$, $P_2$ connects $b$ to some vertex in $S$, and $P_3$ connects $d$ to some other vertex of $S$.
\begin{theo}
\label{tba}
Let $G$ be a weighted planar graph with the vertices $a$, $b$, $c$, $d$ appearing in that cyclic order on a face $F$ of $G$. Let $S$ be a subset of the vertices of $F$ that is disjoint from $\{a,b,c,d\}$, and assume that $S$ is $a,c$-separated.
Then we have
\begin{align}
&\!\!\!\!\!\!
\M_f(G)\M_f(G\setminus\{a,b,c,d\})
+
\M_f(G\setminus\{b,d\})\M_f(G\setminus\{a,c\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
+
\M_f(G\setminus\{b\})\M_f(G\setminus\{a,c,d\})
+
\M_f(G\setminus\{d\})\M_f(G\setminus\{a,b,c\})
\nonumber
\\
=
&
\M_f(G\setminus\{a,d\})\M_f(G\setminus\{b,c\})
+
\M_f(G\setminus\{a,b\})\M_f(G\setminus\{c,d\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
+
\M_f(G\setminus\{a\})\M_f(G\setminus\{b,c,d\})
+
\M_f(G\setminus\{a,b,d\})\M_f(G\setminus\{c\}).
\label{eba}
\end{align}
\end{theo}
\begin{proof} For any subgraph $H$ of $G$ containing the vertices in $S$, denote by $\mathcal M_f(H)$ the set of matchings of $H$ in which all vertices not in $S$ are matched, but the ones in $S$ are free to be matched or not matched. Patterned on the two sides of equation \eqref{eba}, consider the disjoint unions of Cartesian products
\begin{align}
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\mathcal M_f(G)\times \mathcal M_f(G\setminus\{a,b,c,d\})
\cup
\mathcal M_f(G\setminus\{b,d\})\times\mathcal M_f(G\setminus\{a,c\})
\nonumber
\\
&\ \ \ \ \ \
\cup
\mathcal M_f(G\setminus\{b\})\times\mathcal M_f(G\setminus\{a,c,d\})
\cup
\mathcal M_f(G\setminus\{d\})\times\mathcal M_f(G\setminus\{a,b,c\})
\label{ebb}
\end{align}
and
\begin{align}
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\mathcal M_f(G\setminus\{a,d\})\times \mathcal M_f(G\setminus\{b,c\})
\cup
\mathcal M_f(G\setminus\{a,b\})\times\mathcal M_f(G\setminus\{c,d\})
\nonumber
\\
&\ \ \ \ \ \
\cup
\mathcal M_f(G\setminus\{a\})\times\mathcal M_f(G\setminus\{b,c,d\})
\cup
\mathcal M_f(G\setminus\{a,b,d\})\times\mathcal M_f(G\setminus\{c\}).
\label{ebc}
\end{align}
For any element $(\mu,\nu)$ of \eqref{ebb} or \eqref{ebc}, think of the edges of $\mu$ as being marked by solid lines, and of the edges of $\nu$ as marked by dotted lines, {\it on the same copy of the graph $G$} (any edge common to $\mu$ and $\nu$ will be marked both solid and dotted, by two parallel arcs). Note that for all $(\mu,\nu)$ corresponding to Cartesian products in \eqref{ebb}, $a$ is matched by a solid edge (this is the reason for the chosen order of factors in the terms of \eqref{eba}).
Define the weight of $(\mu,\nu)$ to be the product of the weight of $\mu$ and the weight of $\nu$. Then the total weight of the elements of the set \eqref{ebb} is equal to the left hand side of equation~\eqref{eba}, while the total weight of the elements of the set \eqref{ebc} equals the right hand side of \eqref{eba}. Therefore, to prove \eqref{eba} it suffices to construct a weight-preserving bijection between the sets~\eqref{ebb} and~\eqref{ebc}.
We construct such a bijection as follows. Let $(\mu,\nu)$ be an element of \eqref{ebb}. Map $(\mu,\nu)$ to what we get from it by ``shifting along the path containing $a$.'' More precisely, note that when considering the edges of $\mu$ and $\nu$ together on the same copy of $G$, each of the vertices $a,b,c,d$ is incident to precisely one edge. All the other vertices of $G$ that are not in $S$ are incident to one solid edge and one dotted edge. Finally, each vertex in $S$ could be incident to no edge, to a single edge (solid or dotted), or to one solid and one dotted edge.
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.90\textwidth]{proof8a.eps}}
\hfill
}
\vskip-0.1in
\caption{Schematic representation of the bijection proving \eqref{eba}. Shifting along the path containing $a$ matches the partition classes according to the pattern \newline
$A_1\ \ A_2\ \ A_3\ \ B_1\ \ B_2\ \ B_3\ \ C_1\ \ C_2\ \ C_3\ \ D_1\ \ D_2\ \ D_3$\newline
$B'_1\ \ A'_2\ \ C'_3\ \ A'_1\ \ B'_2\ \ D'_3\ \ C'_1\ \ D'_2\ \ B'_3\ \ D'_1\ \ C'_2\ \ A'_3$
}
\vskip-0.1in
\label{fba}
\end{figure}
This implies that $\mu\cup\nu$ is the disjoint union of (1) paths connecting each of $a$, $b$, $c$ and $d$ either to another element of $\{a,b,c,d\}$ or to some vertex in $S$; (2) paths (if any) connecting in pairs some of the vertices of $S$ not connected to $\{a,b,c,d\}$; and (3) cycles covering all the remaining vertices of~$G\setminus S$ and possibly some of the remaining vertices of $S$. Consider the path containing $a$, and change each solid edge in it to dotted, and each dotted edge to solid. Denote the resulting pair of matchings by $(\mu',\nu')$.
Clearly, the weight of $(\mu',\nu')$ is the same as the weight of $(\mu,\nu)$. Therefore, it is enough to show that this map is a bijection.
To see this, we partition each of the four Cartesian products in \eqref{ebb} into three classes, according to the three connection possibilities for vertex $a$: We gather those $(\mu,\nu)$ for which, in the superposition of $\mu$ and $\nu$, $a$ is connected by a path to $b$, into one class; those for which $a$ is connected by a path to $d$ into another class; and those for which $a$ is connected by a path to some vertex in $S$ into a third class (these partitions are represented schematically in the top half of Figure \ref{fba}).
The key to our proof (and the reason the above description forms a partition) is that, due to our assumption that $S$ is $a,c$-separated, the situation of $a$ being connected by a path to $c$ does not arise.
Partition similarly each of the four Cartesian products in \eqref{ebc} into three classes, according to the same three possible connection types for $a$. These are illustrated in the bottom half of Figure \ref{fba}.
Under the above mapping $(\mu,\nu)\mapsto(\mu',\nu')$, each of the twelve classes of superpositions of matchings corresponding to the Cartesian products in \eqref{ebb} turns out to be mapped bijectively to a different class of superpositions of matchings corresponding to the Cartesian products in~\eqref{ebc}. The correspondence is indicated in Figure \ref{fba} (the top four groups of 3 ``balls'' are denoted from left to right by $A$, $B$, $C$, $D$, and the bottom ones by $A'$, $B'$, $C'$, $D'$; the subscript $i$ indicates that the $i$th ball from the group --- counting from the top --- is chosen).
Indeed, consider for instance from among the twelve classes of superpositions of matchings corresponding to the Cartesian products in \eqref{ebb}, the class consisting of those $(\mu,\nu)$ corresponding to the first Cartesian product in \eqref{ebb} for which $a$ is connected by a path to $b$ in the superposition of $\mu$ and $\nu$. Since both $a$ and $b$ are matched by solid edges, after we apply our construction to obtain $(\mu',\nu')$ (which, recall, consists in reversing the type of all edges in the path containing $a$ --- turning solid edges to dotted, and dotted to solid), both $a$ and $b$ will be matched by dotted edges. They will also clearly still be connected to one another. Therefore this class is mapped into the ``$a$ connected to $b$'' class of the second Cartesian product in \eqref{ebc}. Since our map is clearly an involution, it establishes a bijection between these two classes.
Similarly, the ``$a$ connected to $d$'' class of the first Cartesian product in \eqref{ebb} is mapped bijectively onto the ``$a$ connected to $d$'' class of the first Cartesian product in \eqref{ebc}.
The remaining class of the first Cartesian product in \eqref{ebb} consists of those $(\mu,\nu)$ for which, when superimposing $\mu$ and $\nu$, $a$ gets connected by a path to some vertex in $S$. As the edge matching $a$ in this path is solid, and we are reversing the types of the edges in this path to get $(\mu',\nu')$, the edge matching $a$ in the superposition of $(\mu',\nu')$ is dotted, and the edges matching $b$, $c$ and $d$ remain all solid. Thus this third class is mapped bijectively onto the ``$a$ connected to some vertex in $S$'' class of the third Cartesian product in \eqref{ebc}.
All remaining bijective mappings of classes indicated in Figure \ref{fba} are justified similarly. This completes the proof. \end{proof}
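Theorem \ref{tba} also lends itself to exhaustive machine verification on small examples. The Python sketch below is such an illustrative check (a toy example of our own choosing, not part of the argument): it verifies \eqref{eba} for the cycle $C_8$ with $a,b,c,d=0,2,4,6$ and $S=\{1,3,5,7\}$. On a cycle the separation hypothesis holds automatically, since every path from $a$ to $c$ passes through $b$ or through $d$, so one of $P_2$, $P_3$ cannot be disjoint from $P_1$.

```python
from itertools import combinations

# Toy example (ours): the cycle C_8, with a,b,c,d = 0,2,4,6 and S = {1,3,5,7}.
# On a cycle any a-to-c path passes through b or through d, so S is
# automatically a,c-separated and the eight-term identity (eba) applies.
V = set(range(8))
E = [(i, (i + 1) % 8) for i in range(8)]
a, b, c, d = 0, 2, 4, 6
S = {1, 3, 5, 7}

def Mf(removed):
    """M_f of the graph minus `removed`: count matchings in which every
    remaining vertex outside S is matched (vertices of S may stay free)."""
    verts = V - set(removed)
    edges = [e for e in E if e[0] in verts and e[1] in verts]
    need = verts - S
    count = 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            ends = [v for e in sub for v in e]
            # `sub` is a matching iff no endpoint repeats
            if len(ends) == len(set(ends)) and need <= set(ends):
                count += 1
    return count

lhs = (Mf(set()) * Mf({a, b, c, d}) + Mf({b, d}) * Mf({a, c})
       + Mf({b}) * Mf({a, c, d}) + Mf({d}) * Mf({a, b, c}))
rhs = (Mf({a, d}) * Mf({b, c}) + Mf({a, b}) * Mf({c, d})
       + Mf({a}) * Mf({b, c, d}) + Mf({a, b, d}) * Mf({c}))
```

For this example both sides of \eqref{eba} evaluate to 34.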
\begin{cor}
\label{tbb}
Suppose we have all the hypotheses of Theorem \ref{tba}, and assume in addition that $S$ is also $b,d$-separated. Then we have
\medskip
\begin{align}
&\!\!\!\!\!\!
\M_f(G)\M_f(G\setminus\{a,b,c,d\})
+
\M_f(G\setminus\{b,d\})\M_f(G\setminus\{a,c\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
=
\M_f(G\setminus\{a,d\})\M_f(G\setminus\{b,c\})
+
\M_f(G\setminus\{a,b\})\M_f(G\setminus\{c,d\})
\label{ebd}
\end{align}
\medskip
and
\medskip
\begin{align}
&\!\!\!\!\!\!
\M_f(G\setminus\{b\})\M_f(G\setminus\{a,c,d\})
+
\M_f(G\setminus\{d\})\M_f(G\setminus\{a,b,c\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
=
\M_f(G\setminus\{a\})\M_f(G\setminus\{b,c,d\})
+
\M_f(G\setminus\{a,b,d\})\M_f(G\setminus\{c\}).
\label{ebe}
\end{align}
\end{cor}
\begin{proof} By our assumptions and Theorem \ref{tba}, equation \eqref{eba} holds. In addition, applying Theorem \ref{tba} to the quadruple $b$, $c$, $d$, $a$ (which is legitimate, as $S$ is also $b,d$-separated), we obtain
\medskip
\begin{align}
&\!\!\!\!\!\!
\M_f(G)\M_f(G\setminus\{a,b,c,d\})
+
\M_f(G\setminus\{a,c\})\M_f(G\setminus\{b,d\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
+
\M_f(G\setminus\{a\})\M_f(G\setminus\{b,c,d\})
+
\M_f(G\setminus\{c\})\M_f(G\setminus\{a,b,d\})
\nonumber
\\
=
&
\M_f(G\setminus\{a,b\})\M_f(G\setminus\{c,d\})
+
\M_f(G\setminus\{b,c\})\M_f(G\setminus\{a,d\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
+
\M_f(G\setminus\{b\})\M_f(G\setminus\{a,c,d\})
+
\M_f(G\setminus\{a,b,c\})\M_f(G\setminus\{d\}).
\label{ebf}
\end{align}
Equations \eqref{eba} and \eqref{ebf} imply \eqref{ebd} and \eqref{ebe}. \end{proof}
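Both four-term identities can be checked by brute force in the same way as \eqref{eba}; the sketch below (again our own illustrative toy example) uses the cycle $C_8$, on which any $S$ is $b,d$-separated as well, since every path from $b$ to $d$ passes through $a$ or through $c$.

```python
from itertools import combinations

# The cycle C_8 with a,b,c,d = 0,2,4,6 and S = {1,3,5,7}: on a cycle any S
# is both a,c-separated and b,d-separated, so (ebd) and (ebe) apply.
V = set(range(8))
E = [(i, (i + 1) % 8) for i in range(8)]
a, b, c, d = 0, 2, 4, 6
S = {1, 3, 5, 7}

def Mf(removed):
    """Matchings of the graph minus `removed` in which every remaining
    vertex outside S is matched (vertices of S may stay free)."""
    verts = V - set(removed)
    edges = [e for e in E if e[0] in verts and e[1] in verts]
    need = verts - S
    count = 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            ends = [v for e in sub for v in e]
            if len(ends) == len(set(ends)) and need <= set(ends):
                count += 1
    return count

# equation (ebd)
lhs_ebd = Mf(set()) * Mf({a, b, c, d}) + Mf({b, d}) * Mf({a, c})
rhs_ebd = Mf({a, d}) * Mf({b, c}) + Mf({a, b}) * Mf({c, d})
# equation (ebe)
lhs_ebe = Mf({b}) * Mf({a, c, d}) + Mf({d}) * Mf({a, b, c})
rhs_ebe = Mf({a}) * Mf({b, c, d}) + Mf({a, b, d}) * Mf({c})
```

Here \eqref{ebd} gives $18=18$ and \eqref{ebe} gives $16=16$.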
\bigskip
\parindent=0pt
{\bf Remark 1.} When $S=\emptyset$, equation \eqref{ebd} becomes Kuo's Proposition 1.1 of \cite{KuoTwo}:
\medskip
\begin{align}
&\!\!\!\!\!\!
\M(G)\M(G\setminus\{a,b,c,d\})
+
\M(G\setminus\{b,d\})\M(G\setminus\{a,c\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
=
\M(G\setminus\{a,d\})\M(G\setminus\{b,c\})
+
\M(G\setminus\{a,b\})\M(G\setminus\{c,d\}),
\label{ebg}
\end{align}
\medskip
where $\M(G)$ stands for the number (or total weight, if $G$ is weighted) of {\it perfect} matchings of~$G$.
\parindent=15pt
If in addition $G$ is bipartite, has two more white vertices than black ones, and $a$, $b$, $c$, $d$ are all white, \eqref{ebd} becomes Kuo's Theorem 2.5 of \cite{KuoOne}:
\medskip
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.44\textwidth]{Scored1a.eps}}
\hfill
{\includegraphics[width=0.40\textwidth]{SSCff.eps}}
\hfill
}
\caption{\label{fbb} The $S$-cored hexagon $SC_{6,8,4}(3,1,2,2)$ (left; see \cite{vf} for details of its definition) and the region $H_{8,10}(4)$ (right).}
\end{figure}
\begin{align}
&\!\!\!\!\!\!
\M(G\setminus\{b,d\})\M(G\setminus\{a,c\})
\nonumber
=
\M(G\setminus\{a,d\})\M(G\setminus\{b,c\})
+
\M(G\setminus\{a,b\})\M(G\setminus\{c,d\}).
\nonumber
\end{align}
\medskip
Equation \eqref{ebe} also has a specialization that appeared before in Kuo's work: If $G$ is bipartite, has one more white vertex than black ones, $a$, $b$, $c$ are white and $d$ is black, then \eqref{ebe} becomes Kuo's Theorem 2.4 of \cite{KuoOne}:
\medskip
\begin{align}
&\!\!\!\!\!\!
\M(G\setminus\{b\})\M(G\setminus\{a,c,d\})
\nonumber
=
\M(G\setminus\{a\})\M(G\setminus\{b,c,d\})
+
\M(G\setminus\{a,b,d\})\M(G\setminus\{c\}).
\nonumber
\end{align}
\medskip
\parindent=0pt
More generally, if $G$ is not necessarily bipartite and $S=\emptyset$, \eqref{ebe} becomes
\medskip
\begin{align}
&\!\!\!\!\!\!
\M(G\setminus\{b\})\M(G\setminus\{a,c,d\})
+
\M(G\setminus\{d\})\M(G\setminus\{a,b,c\})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
=
\M(G\setminus\{a\})\M(G\setminus\{b,c,d\})
+
\M(G\setminus\{a,b,d\})\M(G\setminus\{c\}).
\label{ebh}
\end{align}
\medskip
This counterpart of Kuo's 2006 theorem \eqref{ebg} seems to have gone unnoticed until now.
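Identity \eqref{ebh} is easy to test on genuinely non-bipartite graphs. The following sketch (a small example of our own choosing, added for illustration) verifies it for a pentagon with two chords; here $S=\emptyset$, so the separation hypotheses hold vacuously and $\M_f$ reduces to the number of perfect matchings.

```python
from itertools import combinations

# A non-bipartite example (ours): the 5-cycle 0-1-2-3-4 with chords 0-2
# and 2-4. All vertices lie on the outer face; take a,b,c,d = 0,1,2,3 and
# S empty, so the separation hypotheses hold vacuously and M_f = M.
V = set(range(5))
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (2, 4)]
a, b, c, d = 0, 1, 2, 3

def M(removed):
    """Number of perfect matchings of the graph minus `removed`."""
    verts = V - set(removed)
    edges = [e for e in E if e[0] in verts and e[1] in verts]
    count = 0
    for r in range(len(verts) // 2 + 1):
        for sub in combinations(edges, r):
            ends = [v for e in sub for v in e]
            # perfect matching: no repeated endpoint, all vertices covered
            if len(ends) == len(set(ends)) and set(ends) == verts:
                count += 1
    return count

lhs = M({b}) * M({a, c, d}) + M({d}) * M({a, b, c})
rhs = M({a}) * M({b, c, d}) + M({a, b, d}) * M({c})
```

Both sides of \eqref{ebh} evaluate to 2 for this graph, with all four products on the right of the display contributing.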
\medskip
\parindent=15pt
We state now the other main result of our paper, which deals with enumerating the ``symmetric and self-complementary'' lozenge tilings of a hexagonal region with a shamrock removed from its center. This requires our tilings to be invariant under reflection across the horizontal axis, and under rotation by 180 degrees (i.e., central symmetry). Clearly, a necessary condition for the existence of such tilings is that the region itself is invariant under these symmetries. Central symmetry implies that the bottom lobes of the shamrock are empty, and that the top lobe is congruent to the central lobe. Thus the region is a hexagon $H_{x,y}(k)$ of side-lengths $x$, $y$, $y$, $x$, $y$, $y$ (clockwise from top) with a vertical ``bowtie'' consisting of two triangles of side $k$ removed from its center (see the picture on the right in Figure \ref{fbb}).
Since horizontal symmetry and central symmetry imply vertical symmetry, and because in any vertically symmetric tiling the vertical symmetry axis must be entirely covered by vertical lozenges (see the shaded lozenges in Figure \ref{fbb}), it follows that all of $x$, $y$ and $k$ must be even if a tiling with the required symmetries exists.
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.44\textwidth]{cb.eps}}
\hfill
{\includegraphics[width=0.30\textwidth]{Rxykpaa.eps}}
\hfill
}
\caption{The carpenter's butterfly region $H_{x,y}(k,p)$ for $x=8$, $y=10$, $k=4$, $p=1$ (left) and the flashlight region $F_{x/2,(y-k)/2,k/2,p}$ whose tilings can be identified with the horizontally and vertically symmetric tilings of the former (right).}
\label{fbc}
\end{figure}
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.25\textwidth]{Rxykpbb.eps}}
\hfill
{\includegraphics[width=0.30\textwidth]{Rxykpb.eps}}
\hfill
}
\caption{The flashlight region $F_{x,z,k,p}$ (left) and the reduced flashlight region $\hat{F}_{x,z,k,p}$ (right) for $x=4$, $z=3$, $k=2$, $p=1$.}
\label{fca}
\end{figure}
Therefore, the symmetry case corresponding to symmetric and self-complementary plane partitions amounts to enumerating horizontally and vertically symmetric lozenge tilings of the regions $H_{x,y}(k)$, where $x$, $y$ and $k$ are all even.
In our main result we actually enumerate tilings of a more general family of regions. This extension turns out to be crucial in order for our proof to work. The more general regions are obtained by ``thickening'' the removed bowtie: Translate its left boundary $p$ units to the left, and its right boundary $p$ units to the right, turning the removed portion into a region resembling a carpenter's butterfly (we are assuming that the latter still fits inside the outer hexagon). Denote by $H_{x,y}(k,p)$ the resulting region (see Figure \ref{fbc} for an example).
The main enumeration result of this paper is the following.
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.40\textwidth]{Fmon.eps}}
\hfill
}
\caption{Applying free boundary Kuo condensation to the region $\hat{F}_{x,z,k,p}$.}
\label{fcb}
\end{figure}
\begin{theo}
\label{tbc}
For any non-negative integers $x$, $y$, $k$ and $p$, we have
\begin{align}
&
\M_{\,-\,,\,|\,}(H_{2x,2y}(2k,p))=
\prod_{i=1}^{y-k-1}\frac{k+i}{i}
\prod_{i=0}^{p-1}\frac{(x+y+p-2i)_{y-k-1}}
{(x+k+p-2i)_{y-k-1}}
\prod_{i=1}^{y-k-1}\prod_{j=2}^i\frac{2k+i+j-1}{i+j-1}
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \
\times
\dfrac
{\prod_{j=1}^{k}(x-k-p+2j-1)_{2y+2k-4j+3}
\prod_{j=1}^{y-k}(x+k-p+j)_{2y-2k-2j+1}}
{\prod_{j=1}^{k}(2j-1)_{2y+2k-4j+3}
\prod_{j=1}^{y-k}(2k+j)_{2y-2k-2j+1}},
\label{ebi}
\end{align}
where all products for which the index limits are out of order are taken to be $1$.
\end{theo}
\section{Proof of Theorem \ref{tbc}}
Since, as we mentioned, each tiling with these symmetries must contain the vertical lozenges along the vertical symmetry axis (these are shaded in Figure \ref{fbb}), it follows that the horizontally and vertically symmetric lozenge tilings of $H_{x,y}(k,p)$ can be identified with tilings of the subregion of $H_{x,y}(k,p)$ that is to the right of the shaded vertical lozenges and above the horizontal symmetry axis, with the specification that the boundary along the horizontal symmetry axis is free, i.e.\ lozenges are allowed to protrude out halfway across it (the region obtained this way for the example on the left in Figure \ref{fbc} is shown on the right in the same figure). Denote by $F_{x,z,k,p}$ the region of this type which has the dimensions indicated in Figure \ref{fca}, and call it a {\it flashlight region}. It is defined for all non-negative integers $x,z,k,p$ with $x+z\geq k+p$ (so that the dent on the lower left does not go through the boundary on the right).
Then we have
\begin{equation}
\M_{\,-\,,\,|\,}(H_{2x,2y}(2k,p))=\M_f(F_{x,y-k,k,p}),
\label{eca}
\end{equation}
and Theorem \ref{tbc} will follow from the following result.
\begin{figure}[h]
\vskip-0.1in
\centerline{
\hfill
{\includegraphics[width=0.35\textwidth]{Frec1.eps}}
\hfill
{\includegraphics[width=0.35\textwidth]{Frec2.eps}}
\hfill
}
\vskip0.1in
\centerline{
\hfill
{\includegraphics[width=0.35\textwidth]{Frec3.eps}}
\hfill
{\includegraphics[width=0.35\textwidth]{Frec4.eps}}
\hfill
}
\caption{The $\hat{F}$-regions on the left in \eqref{ecc}.}
\label{fcc}
\end{figure}
\begin{prop}
\label{tca}
For non-negative integers $x,z,k,p$ we have
\begin{align}
&
\M_f(F_{x,z,k,p})=
\prod_{i=1}^{z-1}\frac{k+i}{i}
\prod_{i=0}^{p-1}\frac{(x+z+k+p-2i)_{z-1}}
{(x+k+p-2i)_{z-1}}
\prod_{i=1}^{z-1}\prod_{j=2}^i\frac{2k+i+j-1}{i+j-1}
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \
\times
\dfrac
{\prod_{j=1}^{k}(x-k-p+2j-1)_{2z+4k-4j+3}
\prod_{j=1}^{z}(x+k-p+j)_{2z-2j+1}}
{\prod_{j=1}^{k}(2j-1)_{2z+4k-4j+3}
\prod_{j=1}^{z}(2k+j)_{2z-2j+1}},
\label{ecb}
\end{align}
where all products for which the index limits are out of order are taken to be $1$.
\end{prop}
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.35\textwidth]{Frec8.eps}}
\hfill
{\includegraphics[width=0.35\textwidth]{Frec7.eps}}
\hfill
}
\vskip0.1in
\centerline{
\hfill
{\includegraphics[width=0.35\textwidth]{Frec6.eps}}
\hfill
{\includegraphics[width=0.35\textwidth]{Frec5.eps}}
\hfill
}
\caption{The $F$-regions on the right in \eqref{ecc}.}
\label{fcd}
\end{figure}
\begin{proof} Note that if $z>0$, the zig-zag boundary portion of $F_{x,z,k,p}$ is non-empty, and $x$ lozenges along the top as well as $k+p$ lozenges just above the horizontal portion of the notch are forced. Denote by $\hat{F}_{x,z,k,p}$ the region obtained from $F_{x,z,k,p}$ after removing these forced lozenges (see Figure \ref{fca}). Then we clearly have
\begin{equation}
\M_f(F_{x,z,k,p})=\M_f(\hat{F}_{x,z,k,p}).
\label{ecbb}
\end{equation}
Identify the region $\hat{F}_{x,z,k,p}$ with its planar dual graph. Note that if we choose $a,b,c,d$ as indicated in Figure \ref{fcb}, then the set $S$ of free vertices (which correspond to the up-pointing unit triangles resting on the bottom dotted boundary) is both $a,c$-separated and $b,d$-separated. Therefore we can apply Corollary \ref{tbb}. When we do so, all the subregions obtained by removing from $\hat{F}_{x,z,k,p}$ the subsets of $\{a,b,c,d\}$ that show up in \eqref{ebd} turn out to be, after removing forced lozenges, flashlight regions of various arguments (see Figures \ref{fcc} and \ref{fcd}). We therefore obtain from \eqref{ebd} that
\begin{align}
&
\M_f(\hat{F}_{x,z,k,p})\M_f(\hat{F}_{x,z-2,k+1,p+1})
+
\M_f(\hat{F}_{x-1,z-1,k+1,p})\M_f(\hat{F}_{x+1,z-1,k,p+1})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \
=
\M_f(\hat{F}_{x+1,z-2,k+1,p})\M_f(\hat{F}_{x-1,z,k,p+1})
+
\M_f(\hat{F}_{x,z-1,k,p})\M_f(\hat{F}_{x,z-1,k+1,p+1}),
\label{ecc}
\end{align}
which gives, using \eqref{ecbb}, that
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.15\textwidth]{Fxis0.eps}}
\hfill
{\includegraphics[width=0.35\textwidth]{Fzis0.eps}}
\hfill
}
\caption{The base cases $x=0$ (left) and $z=0$ (right).}
\label{fce}
\end{figure}
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.35\textwidth]{Fzis1.eps}}
\hfill
}
\caption{The base case $z=1$.}
\label{fcf}
\end{figure}
\begin{align}
&
\M_f(F_{x,z,k,p})\M_f(F_{x,z-2,k+1,p+1})
+
\M_f(F_{x-1,z-1,k+1,p})\M_f(F_{x+1,z-1,k,p+1})
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \
=
\M_f(F_{x+1,z-2,k+1,p})\M_f(F_{x-1,z,k,p+1})
+
\M_f(F_{x,z-1,k,p})\M_f(F_{x,z-1,k+1,p+1}).
\label{ecd}
\end{align}
Equation \eqref{ecd} holds for all integers $x\geq1$, $z\geq2$ and $k,p\geq0$ (strictly speaking, we assumed in Figures \ref{fcc} and \ref{fcd} that $z-1\geq2$; however, one readily sees by considering the analogous pictures for $z-1=1$ that the resulting regions in this case also lead to \eqref{ecd}).
We prove \eqref{ecb} by induction on $x+2z$. This works because for the eight $F$-regions in \eqref{ecd}, the value of this statistic is $x+2z$ for the first region, and strictly less for all the others. View therefore \eqref{ecd} as a recurrence giving the number of tilings of the first region in terms of the others.
For the $F$-regions in \eqref{ecd} other than $F_{x,z,k,p}$, the value of the statistic $x+2z$ is either one, two, three or four units less than the value for $F_{x,z,k,p}$. On the other hand, in order for all the regions involved in \eqref{ecc} to be defined, one needs $x\geq1$ and $z\geq2$. Therefore, the base cases of our induction are the situations when $x+2z=i$, $i\in\{0,1,2,3\}$, and the additional three cases $x=0$, $z=0$ and $z=1$. Since $x$ and $z$ are non-negative integers, $x+2z\leq3$ implies $z=0$ or $z=1$. Thus it is enough to check the base cases $x=0$, $z=0$ and $z=1$.
Before we address these base cases, note that due to the Pochhammer symbols in the numerator in the second line of \eqref{ecb}, the expression on the right hand side in \eqref{ecb} is equal to zero if $x<k+p$. This proves \eqref{ecb} in this case: when $x<k+p$, the $k+p$ paths of lozenges that would start upward from the side of length $k+p$ in any tiling of $F_{x,z,k,p}$ would not have enough room to end on the top side.
Assume now that $x=0$. If $k+p>0$, \eqref{ecb} follows from the above observation. In the remaining case of $x=k=p=0$, the region $F_{0,z,0,0}$ looks as shown on the left in Figure \ref{fce}. All tiles are forced, so $\M_f(F_{0,z,0,0})=1$, which agrees with the $x=k=p=0$ specialization of the expression on the right hand side of \eqref{ecb}.
Consider now the base case $z=0$. The region $F_{x,0,k,p}$ is as shown on the right in Figure \ref{fce}. As we saw above, we may assume that $x\geq k+p$. Then it follows from the figure that
\begin{equation}
\M_f(F_{x,0,k,p})=SPP(2k,2k,x-k-p),
\label{ece}
\end{equation}
where $SPP(a,a,b)$ is the number of symmetric plane partitions that fit in an $a\times a \times b$ box, given by MacMahon's formula (proved by Andrews \cite{AndSPP})
\begin{equation}
SPP(a,a,b)=\prod_{i=1}^a\left[\frac{b+2i-1}{2i-1}\prod_{j=i+1}^a\frac{b+i+j-1}{i+j-1}\right].
\label{ecee}
\end{equation}
Then \eqref{ecb} follows by the fact that the right hand side of \eqref{ece} (given by the above formula) agrees with the $z=0$ specialization of the expression on the right hand side of \eqref{ecb}.
For the last base case, $z=1$, the region $F_{x,1,k,p}$ looks as pictured in Figure \ref{fcf}. Upon removing the forced lozenges, the leftover region is a trapezoid of side-lengths $2k+1$, $x-k-p$, $2k+1$, with free boundary along its base. Thus,
\begin{equation}
\M_f(F_{x,1,k,p})=SPP(2k+1,2k+1,x-k-p),
\label{ecf}
\end{equation}
and \eqref{ecb} follows by the fact that the right hand side of \eqref{ecf} (given by \eqref{ecee}) agrees with the $z=1$ specialization of the expression on the right hand side of \eqref{ecb}.
For the induction step, assume that \eqref{ecb} holds for all $F$-regions for which the value of the $x$-parameter plus twice the value of the $z$-parameter is strictly less than $x+2z$. Use equation~\eqref{ecd} to express $\M_f(F_{x,z,k,p})$ in terms of $\M_f(F_{x',z',k',p'})$'s with $x'+2z'<x+2z$. By the induction hypothesis, all the involved $\M_f(F_{x',z',k',p'})$'s are given by formula \eqref{ecb}. It is routine to check that the resulting formula for $\M_f(F_{x,z,k,p})$ agrees with the expression on the right hand side of~\eqref{ecb}. This completes the proof. \end{proof}
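Both the base cases and the induction step can be tested numerically. The Python sketch below (an illustrative cross-check we add, using exact rational arithmetic; the function names are ours) implements the right hand side of \eqref{ecb} together with MacMahon's formula \eqref{ecee}, and verifies the $z=0$ base case \eqref{ece} as well as the recurrence \eqref{ecd} on a range of small parameters.

```python
from fractions import Fraction

def poch(a, n):
    # Pochhammer symbol (a)_n = a(a+1)...(a+n-1), with the empty-product
    # convention (value 1) when n <= 0, as in the statement of (ecb)
    r = 1
    for t in range(n):
        r *= a + t
    return r

def F(x, z, k, p):
    """Right hand side of (ecb), as an exact rational number."""
    r = Fraction(1)
    for i in range(1, z):                         # prod (k+i)/i
        r *= Fraction(k + i, i)
    for i in range(p):                            # ratio of Pochhammer symbols
        r *= Fraction(poch(x + z + k + p - 2 * i, z - 1),
                      poch(x + k + p - 2 * i, z - 1))
    for i in range(1, z):                         # double product
        for j in range(2, i + 1):
            r *= Fraction(2 * k + i + j - 1, i + j - 1)
    for j in range(1, k + 1):                     # second line of (ecb)
        n = 2 * z + 4 * k - 4 * j + 3
        r *= Fraction(poch(x - k - p + 2 * j - 1, n), poch(2 * j - 1, n))
    for j in range(1, z + 1):
        n = 2 * z - 2 * j + 1
        r *= Fraction(poch(x + k - p + j, n), poch(2 * k + j, n))
    return r

def SPP(a, b):
    """MacMahon's count (ecee) of symmetric plane partitions in an a x a x b box."""
    r = Fraction(1)
    for i in range(1, a + 1):
        r *= Fraction(b + 2 * i - 1, 2 * i - 1)
        for j in range(i + 1, a + 1):
            r *= Fraction(b + i + j - 1, i + j - 1)
    return r
```

For instance, the routine reproduces $\M_f(F_{2,0,1,0})=SPP(2,2,1)=4$ and $\M_f(F_{3,0,1,0})=SPP(2,2,2)=10$, and \eqref{ecd} checks out exactly for all small parameter values we tried.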
\section{Asymptotics --- promontory in constrained/free corner}
If $k$ and $p$ are fixed while $x$ and $z$ grow to infinity, the region $F_{x,z,k,p}$ becomes an infinite~90 degree wedge with constrained boundary along the vertical zig-zag boundary portion and free boundary along the horizontal lattice line boundary portion. Our formulas allow us to find the answer to the following natural question: What is the effect of the presence of the dent (``promontory in the sea of dimers'') in the corner?
In view of the relationship of the flashlight regions $F_{x,y-k,k,p}$ to the carpenter's butterfly regions $H_{2x,2y}(2k,p)$ (see Figure \ref{fbc}), so as to not distort the dimer statistics, we take $x=y$ as the boundary is sent to infinity. Therefore the question is to determine the correlation $\omega_c(k,p)$ of the dent with the corner, defined by
\begin{equation}
\omega_c(k,p):=\lim_{x\to\infty}\frac{\M_f(F_{x,x-k,k,p})}{\M_f(F_{x,x,0,0})}
\label{eda}
\end{equation}
(the regions in the numerator and denominator in the above limit are shown on the left in Figure \ref{fda} for $x=10$, $k=2$, $p=1$).
\begin{figure}[h]
\centerline{
\hfill
{\includegraphics[width=0.25\textwidth]{Fnum.eps}}
\hfill
{\includegraphics[width=0.50\textwidth]{Hnum.eps}}
\hfill
}
\vskip0.1in
\centerline{
\hfill
{\includegraphics[width=0.25\textwidth]{Fden.eps}}
\hfill
{\includegraphics[width=0.50\textwidth]{Hden.eps}}
\hfill
}
\caption{The regions $F_{10,8,2,1}$ and $F_{10,10,0,0}$ (left) and $H_{10,10}(2,1)$ and $H_{10,10}(0,0)$ (right).}
\label{fda}
\end{figure}
The set-up is very similar to the one in our previous work \cite{rangle}, where instead of a dent (of shape and size depending on $k$ and $p$) in the corner, we had a single triangular hole of side-length 2 in the interior of the wedge at given distances from the two sides of the wedge. We saw in \cite{rangle} that the limit analogous to \eqref{eda} had, up to a multiplicative constant, the same asymptotics as the fourth root of the correlation of the ``symmetrized system'' --- the system obtained by reflecting the 90 degree wedge in its two sides (thus ending up in that case with four triangular holes of side 2), covering the whole plane and eliminating the boundaries.
It would therefore be interesting to compare the $k,p\to\infty$ asymptotics of $\omega_c(k,p)$ with that of the fourth root of the bulk correlation $\omega(k,p)$ of the carpenter's butterfly, defined by
\begin{equation}
\omega(k,p):=\lim_{x\to\infty}\frac{\M(H_{2x,2x}(k,p))}{\M(H_{2x,2x}(0,0))}
\label{edb}
\end{equation}
(the regions in the numerator and denominator in the above limit are shown on the right in Figure \ref{fda} for $x=10$, $k=2$, $p=1$).
The results of this section give explicit expressions for the correlations $\omega_c(k,p)$ and $\omega(k,0)$.
\begin{theo}
\label{tda}
For non-negative integers $k$ and $p$ we have
\begin{equation}
\omega_c(k,p)=\frac
{3^{\frac12 k^2-kp+\frac12 p^2+\frac{k}{2}-\frac{p}{2}}}
{2^{2k^2+k+p^2}}.
\label{edc}
\end{equation}
\end{theo}
\begin{proof}
Using the formula from Proposition \ref{tca}, we get
\begin{equation}
\frac{\M_f(F_{x,x-k,k,p})}{\M_f(F_{x,x,0,0})}
=
P_1P_2P_3P_4\frac{P_5}{\left(P_5|_{k=0,p=0}\right)},
\label{edca}
\end{equation}
where
\begin{align}
P_1&=\prod_{i=1}^{x-k-1}\frac{k+i}{i}
\label{edcb}
\\
P_2&=\prod_{i=0}^{p-1}\frac{(2x+p-2i)_{x-k-1}}{(x+k+p-2i)_{x-k-1}}
\label{edcc}
\end{align}
\begin{align}
P_3&=\prod_{i=1}^{x-k-1}\prod_{j=2}^i\frac{2k+i+j-1}{i+j-1}
\label{edcd}
\\
P_4&=\prod_{j=1}^{k}\frac{(x-k-p+2j-1)_{2x+2k-4j+3}}{(2j-1)_{2x+2k-4j+3}}
\label{edce}
\\
P_5&=\prod_{j=1}^{x-k}\frac{(x+k-p+j)_{2x-2k-2j+1}}{(2k+j)_{2x-2k-2j+1}}.
\label{edcf}
\end{align}
We have
\begin{equation}
P_1=\frac{(x-1)!}{k!(x-k-1)!}=\frac{\Gamma(x)}{k!\Gamma(x-k)}~\sim\frac{1}{k!}x^k,\ \ \ x\to\infty,
\label{edcg}
\end{equation}
where we used the classical formula (see e.g. \cite{Olver}, (5.02)/p.\! 119)
\begin{equation}
\frac{\Gamma(z+a)}{\Gamma(z+b)}\sim z^{a-b},\ \ \ z\to\infty.
\label{edch}
\end{equation}
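As an aside, \eqref{edch} is easy to probe numerically; the following sketch is a quick illustrative check we add (not part of the argument), using Python's standard-library \texttt{math.lgamma}.

```python
from math import lgamma, log, exp

def gamma_ratio_over_power(z, a, b):
    """Gamma(z+a)/Gamma(z+b) divided by z^(a-b); tends to 1 as z grows."""
    return exp(lgamma(z + a) - lgamma(z + b) - (a - b) * log(z))
```

At $z=10^6$, for example, the ratio differs from 1 by roughly $10^{-6}$, consistent with the $O(1/z)$ correction in the full expansion.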
Expressing the Pochhammer symbols as ratios of Gamma functions using
\begin{equation}
(x)_k=\frac{\Gamma(x+k)}{\Gamma(x)},
\label{edci}
\end{equation}
the product $P_2$ can be written as
\begin{equation}
P_2=\prod_{i=0}^{p-1}
\frac{\Gamma(3x+p-k-2i-1)}{\Gamma(2x+p-2i)}
\frac{\Gamma(x+k+p-2i)}{\Gamma(2x+p-2i-1)}.
\label{edcj}
\end{equation}
Using Stirling's approximation (see e.g. \cite{Olver}, (8.16)/p.\! 88)
\begin{equation}
\Gamma(x)\sim e^{-x}x^x\left(\frac{2\pi}{x}\right)^{\frac12},\ \ \ x\to\infty
\label{edck}
\end{equation}
it follows that
\begin{equation}
P_2\sim\frac{3^{3xp-pk-\frac{p}{2}}}{2^{4xp}},\ \ \ x\to \infty.
\label{edcl}
\end{equation}
To find the asymptotics of $P_3$, write
\begin{align}
P_3
&=
\prod_{i=1}^{x-k-1}\frac{(2k+i+1)_{i-1}}{(i+1)_{i-1}}
=
\prod_{i=1}^{x-k-1}\frac{\Gamma(2k+2i)}{\Gamma(2k+i+1)}\frac{\Gamma(i+1)}{\Gamma(2i)}
\nonumber
\\
&=
\Gamma(3)\Gamma(5)\cdots\Gamma(2k+1)
\frac
{\Gamma(2x-2)\Gamma(2x-4)\cdots\Gamma(2x-2k)}
{\Gamma(x+k)\Gamma(x+k-1)\cdots\Gamma(x-k+1)}.
\label{edcm}
\end{align}
Using the identity (see e.g. \cite{Olver}, (1.08)/p.\! 35)
\begin{equation}
\Gamma(2z)=\frac{2^{2z-1}}{\pi^\frac12}\Gamma(z)\Gamma\left(z+\frac12\right)
\label{edcn}
\end{equation}
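Since the duplication formula is an exact identity (not an asymptotic relation), a numerical spot-check, added here purely as an illustration, should agree to near machine precision:

```python
from math import lgamma, log, pi

def duplication_defect(z):
    """|log Gamma(2z) - log(2^(2z-1)/sqrt(pi) * Gamma(z) * Gamma(z+1/2))|,
    which should vanish up to floating-point rounding for every z > 0."""
    rhs = (2 * z - 1) * log(2) - 0.5 * log(pi) + lgamma(z) + lgamma(z + 0.5)
    return abs(lgamma(2 * z) - rhs)
```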
and \eqref{edch}, we obtain from \eqref{edcm} that
\begin{align}
P_3
&=\Gamma(3)\Gamma(5)\cdots\Gamma(2k+1)
\frac{2^{(2x-3)+(2x-5)+\cdots+(2x-2k-1)}}{\pi^\frac{k}{2}}
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\times
\frac{\Gamma(x-1)\Gamma\left(x-\frac12\right)\Gamma(x-2)\Gamma\left(x-\frac32\right)\cdots
\Gamma(x-k)\Gamma\left(x-k+\frac12\right)}
{\Gamma(x+k)\Gamma(x+k-1)\cdots\Gamma(x-k+1)}
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\sim \Gamma(3)\Gamma(5)\cdots\Gamma(2k+1)
\frac{2^{2kx-k(k+2)}}{\pi^\frac{k}{2}}x^{-k\left(k+\frac32\right)},\ \ \ x\to\infty.
\label{edco}
\end{align}
The product $P_4$ can be handled in a similar fashion. We obtain
\begin{align}
P_4
&=
\prod_{j=1}^k
\frac{\Gamma(3x+k-p-2j+2)}{\Gamma(x-k-p+2j-1)}
\frac{\Gamma(2j-1)}{\Gamma(2x+2k-2j+2)}
\nonumber
\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\sim\frac{\Gamma(1)\Gamma(3)\cdots\Gamma(2k-1)}{(3\pi)^\frac{k}{2}}
\frac{3^{3kx+k(1-p)}}{2^{2kx+k(k+1)}}x^{-k\left(k-\frac12\right)},\ \ \ x\to\infty.
\label{edcp}
\end{align}
Using the notation
\begin{equation}
[\Gamma(x)]_k:=\Gamma(x)\Gamma(x+1)\cdots\Gamma(x+k-1),
\label{edcq}
\end{equation}
one can write $P_5$ as
\begin{equation}
P_5=
\frac
{
\dfrac{[\Gamma(2x-p+1)]_p}{[\Gamma(3x-k-p+1)]_{k+p}}[\Gamma(2x+1)]_x
\dfrac{[\Gamma(x+1)]_k}{[\Gamma(1)]_{2k}}[\Gamma(1)]_x
}
{
\dfrac{[\Gamma(x+k-p+1)]_{p-k}}{[\Gamma(2x-p+1)]_{p}}[\Gamma(x+1)]_x
\dfrac{1}{[\Gamma(x+1)]_{k}}[\Gamma(x+1)]_x
}.
\label{edcr}
\end{equation}
Using the recurrence relation $\Gamma(z+1)=z\Gamma(z)$, we can then express the asymptotics of $P_5$ in terms of Barnes' $G$-function\footnote{ It suffices for us to use that Barnes' $G$-function satisfies the recurrence relation $G(z+1)=\Gamma(z)G(z)$.} as
\begin{equation}
P_5
\sim
\frac{2^{4px+k-p^2}}{3^{(k+p)(6x-k-p)/2}}\frac{\pi^k}{\Gamma(1)\Gamma(2)\cdots\Gamma(2k)}
\frac{G(3x)G(x)^3}{G(2x)^3}x^{2k^2},\ \ \ x\to\infty.
\label{edcs}
\end{equation}
Substituting the above asymptotics relations into \eqref{edca} (and using that the asymptotics of $P_5|_{k=0,p=0}$ is given by the $k=p=0$ specialization of \eqref{edcs}), we obtain after simplifications the statement of the theorem. \end{proof}
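Theorem \ref{tda} can also be confirmed numerically: the sketch below (our own illustrative check; the helper names are ours) evaluates $\log\M_f(F_{x,z,k,p})$ from \eqref{ecb} via Python's \texttt{math.lgamma} and compares the finite-$x$ ratio in \eqref{eda} with the closed form \eqref{edc}.

```python
from math import lgamma, exp

def log_poch(a, n):
    # log of the Pochhammer symbol (a)_n = Gamma(a+n)/Gamma(a), a > 0, n >= 0
    return lgamma(a + n) - lgamma(a)

def logF(x, z, k, p):
    """Log of the right hand side of (ecb); the double product is evaluated
    through the closed form (2k+i+1)_{i-1}/(i+1)_{i-1} used in (edcm)."""
    s = lgamma(k + z) - lgamma(k + 1) - lgamma(z)     # prod (k+i)/i
    for i in range(p):
        s += log_poch(x + z + k + p - 2 * i, z - 1)
        s -= log_poch(x + k + p - 2 * i, z - 1)
    for i in range(1, z):                              # double product
        s += lgamma(2 * k + 2 * i) - lgamma(2 * k + i + 1)
        s += lgamma(i + 1) - lgamma(2 * i)
    for j in range(1, k + 1):
        n = 2 * z + 4 * k - 4 * j + 3
        s += log_poch(x - k - p + 2 * j - 1, n) - log_poch(2 * j - 1, n)
    for j in range(1, z + 1):
        n = 2 * z - 2 * j + 1
        s += log_poch(x + k - p + j, n) - log_poch(2 * k + j, n)
    return s

def omega_c(k, p):
    """Closed form (edc) for the corner correlation."""
    return 3 ** (0.5 * k * k - k * p + 0.5 * p * p + 0.5 * k - 0.5 * p) \
        / 2 ** (2 * k * k + k + p * p)

x = 20000
base = logF(x, x, 0, 0)
ratios = {(k, p): exp(logF(x, x - k, k, p) - base)
          for (k, p) in [(1, 0), (0, 1), (1, 1), (2, 0)]}
```

At $x=20000$ the computed finite-size ratios agree with \eqref{edc} to within a few percent (for $(k,p)=(0,1)$ the Pochhammer factors in fact telescope and the ratio equals $1/2$ exactly for every $x$).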
We only determine the bulk correlation $\omega(k,p)$ in the case $p=0$. What allows us to do this is the exact product formulas we found in earlier work \cite{symffb} for the enumeration of lozenge tilings of what we call axial shamrock regions. For $p>0$, $\M(H_{x,x}(k,p))$ does not seem to be given by a simple product formula, and there are no other currently known manageable expressions for it that would allow one to compute the bulk correlation \eqref{edb}.
\begin{theo}
\label{tdb}
For non-negative integers $k$ we have
\begin{align}
&
\omega(k,0)=\frac{1}{\pi^k}
\frac{\Gamma(2k+1)\Gamma\left(k+\frac12\right)}
{\Gamma(k+1)\Gamma\left(2k+\frac12\right)}
\left[
\prod_{i=1}^k
\frac
{\Gamma(i)\Gamma(2i-1)}
{\Gamma\left(k+i-\frac12\right)}
\right]^2
\frac
{3^{2k^2}}
{2^{6k^2+2k}}.
\label{edd}
\end{align}
\end{theo}
\begin{proof}
The starting point for the proof is the explicit product formula for the numerator in the $p=0$ specialization of \eqref{edb} provided by Corollary 2.3 and Theorems 2.1 and 2.2 of our earlier work \cite{symffb}. The asymptotics of this product formula can be analyzed in a manner similar to the one presented in the proof of Theorem \ref{tda}. One arrives at the asymptotic formula \eqref{edd}. \end{proof}
\begin{cor}
\label{tdc}
The asymptotics of the bulk correlation $\omega(k,0)$ is
\begin{equation}
\omega(k,0)\sim
\frac{e^{\frac14}}{A^3 2^\frac16 k^\frac14}
\frac{3^{2k^2}}{2^{8k^2}},\ \ \ k\to\infty,
\label{ede}
\end{equation}
where $A=1.2824271291...$ is the Glaisher-Kinkelin constant.\footnote{The Glaisher-Kinkelin constant (see \cite{Glaish}) is the value $A$ for which
$\lim_{n\to\infty}
\dfrac
{0!\,1!\,\cdots\,(n-1)!}
{n^{\frac{n^2}{2}-\frac{1}{12}}\,(2\pi)^{\frac{n}{2}}\,e^{-\frac{3n^2}{4}}}
=
\dfrac
{e^{\frac{1}{12}}}
{A}
$.}
\end{cor}
\begin{proof}
This follows using \eqref{edch} and the asymptotic relation that defines the Glaisher-Kinkelin constant $A$ (see footnote 3).
\end{proof}
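The defining limit in footnote 3 converges quickly and is easy to check numerically. The sketch below (our own script; working in logarithms via \texttt{math.lgamma} to avoid overflow, with the truncation $n=400$ an arbitrary choice) evaluates the ratio in the footnote:

```python
import math

def glaisher_ratio(n):
    """(0! 1! ... (n-1)!) / (n^(n^2/2 - 1/12) (2*pi)^(n/2) e^(-3n^2/4)), computed in logs."""
    log_num = sum(math.lgamma(k + 1) for k in range(n))
    log_den = (n*n/2 - 1/12)*math.log(n) + (n/2)*math.log(2*math.pi) - 3*n*n/4
    return math.exp(log_num - log_den)

A = 1.2824271291  # Glaisher-Kinkelin constant
print(glaisher_ratio(400), math.exp(1/12)/A)  # both ~0.8475
```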
It is clear from Theorem \ref{tda} and Corollary \ref{tdc} that the corner correlation $\omega_c(k,0)$ and the fourth root of the bulk correlation $\omega(k,0)$ do not have (up to a multiplicative constant) the same asymptotics as $k\to\infty$. The fact that this agreement in the asymptotics, which holds in the set-up of \cite{rangle} mentioned above, fails here, is not so surprising: In \cite{rangle}, as the arguments of the corner correlation approach infinity, the defects (a triangular hole of side 2 in that case) are removed infinitely far from the boundary; by contrast, as $k\to\infty$, the dent in the corner whose effect is recorded by $\omega_c(k,0)$ still starts at the corner of the boundary.
What is remarkable is that $\omega_c(k,0)$ and $\omega(k,0)^{\frac14}$ do have the same log-asymptotics.\footnote{ We say that $f(n)$ and $g(n)$ have the same log-asymptotics if $\ln f(n) \sim \ln g(n)$, $n\to\infty$.} Given the parallels between the correlation of gaps in dimer systems and 2D electrostatics we found in previous work (see \cite{sc,ec,ef,ov}), and in particular that the electrostatic potential corresponds to the logarithm of the correlation, we can view the next result as stating that the method of images from electrostatics still works in this new circumstance.
\begin{cor}
\label{tdd}
The corner correlation $\omega_c(k,0)$ and the fourth root of the bulk correlation $\omega(k,0)$ have the same log-asymptotics:
\begin{equation}
\ln \omega_c(k,0) \sim \ln \omega(k,0)^{\frac14} \sim k^2\ln\frac{\sqrt{3}}{4},\ \ \ k\to\infty.
\end{equation}
\end{cor}
\begin{proof} This follows directly from Theorem \ref{tda} and Corollary \ref{tdc}. \end{proof}
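As a numerical cross-check (our own script, relying only on Python's \texttt{math.lgamma}), the closed form \eqref{edd} can be evaluated in logarithms and compared with the log-asymptotics $4k^2\ln(\sqrt{3}/4)=k^2\ln(9/256)$ of Corollary \ref{tdd}:

```python
import math

def log_omega(k):
    """log of omega(k,0) as given by the product formula of Theorem 2 (eq. (edd))."""
    lg = math.lgamma
    s = -k*math.log(math.pi)
    s += lg(2*k + 1) + lg(k + 0.5) - lg(k + 1) - lg(2*k + 0.5)
    s += 2*sum(lg(i) + lg(2*i - 1) - lg(k + i - 0.5) for i in range(1, k + 1))
    s += 2*k*k*math.log(3) - (6*k*k + 2*k)*math.log(2)
    return s

print(log_omega(40) / (40**2 * math.log(9/256)))  # -> close to 1
```

The ratio tends to $1$ as $k$ grows, consistent with the corollary.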
\section{Concluding remarks}
In this paper we have enumerated the lozenge tilings of a hexagon with a shamrock removed from its center that are in a fourth symmetry class, the one extending the class of symmetric and self-complementary plane partitions. The remaining two cases (which correspond to self-complementary, resp. symmetric plane partitions) will be presented in subsequent work.
One interesting feature of our proof is that for it to work, we needed to generalize the regions under consideration, and we ended up proving a simple product formula for these more general regions. There are natural counterpart regions generalizing the base case, but they are not round.
We have also analyzed the asymptotics of the corner correlation of a macroscopic dent in a 90 degree wedge with mixed boundary conditions, and found that it has the same log-asymptotics as the fourth root of the bulk correlation of the region obtained by reflecting the dent in the two sides of the wedge. This represents an analog of the method of images from electrostatics which turns out to hold in this circumstance as well (in the presence of a macroscopic dent touching the boundary). The analogy to electrostatics may be deeper than previously thought.
\section{Introduction}
Many extra-dimensional models have four-dimensional (4D) brane-like defects
on the compact space, such as orbifold fixed points
or solitonic objects~\cite{ArkaniHamed:1998rs}-\cite{Randall:1999vf}.
We can freely introduce 4D terms localized
at the branes\footnote{
Here we do not consider branes spread over other dimensions.
The word~``brane'' is understood as the ``3-brane'' in this paper.
}~\cite{Goldberger:2001tn,Davoudiasl:2002ua,delAguila:2006atw}.
Such brane-localized terms are induced by quantum effect
even if they are absent at tree level~\cite{Mirabelli:1997aj,Georgi:2000ks}.
They change the Kaluza-Klein (KK) spectrum
and deform the profiles of the mode
functions~\cite{Cacciapaglia:2005da,Dudas:2005vn,Maru:2010ap}.
In particular, the brane-localized mass terms are often introduced
in order to remove unwanted modes from the 4D effective
theory~\cite{Agashe:2006at,Hosotani:2007qw,Hosotani:2008tx}.
In five-dimensional (5D) theories, the effects of such brane masses
can be translated into the change of the boundary conditions
for the bulk fields.
This is because the branes in 5D can be regarded as the boundaries of the extra dimension.
In this case,
large brane masses can raise the masses of the zero-modes of the bulk fields
up to half of the compactification scale.
In contrast, the branes are no longer the boundaries of the extra compact space
in higher-dimensional theories.
Since the effects of the brane terms are spread over the higher-dimensional space
and diluted, they are expected to be smaller than those in the 5D case.
Therefore, it is important to check whether the brane mass can make
unwanted modes heavy enough or not.
In this paper, we evaluate the lightest mass eigenvalue of
a six-dimensional (6D) theory in the presence of the brane-localized mass term.
The authors of Ref.~\cite{Dudas:2005vn} discussed a closely related issue
in the case of the $T^2/Z_2$ compactification
whose torus moduli parameter is $\tau=i$, and obtained
the result that the inverse of the lightest
mass eigenvalue has a logarithmic dependence on the cutoff scale.
Here we generalize their setup and consider a generic torus whose moduli parameter
is arbitrary.
Then we can explicitly see the relation to the well-known results in the 5D theories
by squashing or stretching the torus.
Besides, we are interested in a different parameter region
from that discussed in Ref.~\cite{Dudas:2005vn}.
We mainly focus on the limit of a large brane mass,
in which the dependence of the mass eigenvalues on the brane mass is negligible,
and evaluate the ratio of the lightest mass to the compactification scale
by numerical calculations.
The paper is organized as follows.
After explaining the setup in the next section,
we will see the dependences of the lightest mass eigenvalue
on the cutoff scale of the theory and on the brane mass in Sec.~\ref{MassMatrix}.
In Sec.~\ref{ap_expr}, we find an approximate expression of the lightest mass
as a function of the torus moduli parameter~$\tau$,
and estimate its ratio to the compactification scale.
Sec.~\ref{summary} is devoted to the summary.
We provide a brief review of the case of a 5D theory in Appendix~\ref{5Dcase},
and discuss theories with fermion or vector field in Appendix~\ref{other_cases}.
\section{Setup} \label{setup}
We consider a 6D theory of a complex scalar field~$\phi$ as a simple example.~\footnote{
Cases of fermion and vector fields are briefly discussed in Appendix~\ref{other_cases}.
}
The Lagrangian is given by
\be
\cL = -\der^M\phi^*\der_M\phi-\cm^2\abs{\phi}^2\dlt(x^4)\dlt(x^5)+\cdots,
\label{cL}
\ee
where $M=0,1,2,\cdots,5$, and the ellipsis denotes interaction terms,
which are irrelevant to the following discussion.
The brane mass parameter~$\cm$ is a real dimensionless constant.
The extra dimensions are compactified on a torus~$T^2$.\footnote{
The spectrum in the case of $T^2/Z_N$ compactification ($N=2,3,4,6$)
can easily be obtained by thinning out the spectrum on $T^2$.
}
The background metric is assumed to be flat, for simplicity.
For the coordinates of the extra dimensions, it is convenient to use a complex
(dimensionless) coordinate~$z\equiv\frac{1}{2\pi R}(x^4+ix^5)$,
where $R>0$ is one of the radii of $T^2$.
The torus is defined by identifying points in the extra dimensions as
\be
z \sim z+n_1+n_2\tau, \;\;\; (n_1, n_2 \in \mathbb{Z})
\ee
where $\tau$ is a complex constant that satisfies $\Im\tau>0$.
The Lagrangian~(\ref{cL}) is then rewritten as
\be
\cL = -\der^\mu\phi^*\der_\mu\phi-\frac{1}{2(\pi R)^2}\brc{
\abs{\der_z\phi}^2+\abs{\der_{\bar{z}}\phi}^2+\cm^2\abs{\phi}^2\dlt^{(2)}(z)}+\cdots,
\label{cL2}
\ee
where $\mu=0,1,2,3$, and we have used that
\be
\dlt(x^4)\dlt(x^5) = \frac{1}{2(\pi R)^2}\dlt^{(2)}(z).
\ee
We can expand $\phi$ as
\be
\phi(x^\mu,z) = \sum_{n,l=-\infty}^\infty f_{n,l}(z)\phi_{n,l}(x^\mu),
\label{KKexpand}
\ee
where
\be
f_{n,l}(z) = \frac{1}{2\pi R\sqrt{\Im\tau}}\exp\brc{
\frac{2\pi i}{\Im\tau}\Im\brc{(n+l\bar{\tau})z}}
\ee
are normalized as
\bea
&&\int_{T^2}dx^4dx^5\;\abs{f_{n,l}\brkt{\frac{x^4+ix^5}{2\pi R}}}^2
= 2(\pi R)^2\int\dr^2z\;\abs{f_{n,l}(z)}^2 \nonumber\\
\eql (2\pi R)^2\Im\tau\int_0^1 dw_1\int_0^1 dw_2\;
\abs{f_{n,l}(w_1+\tau w_2)}^2 = 1,
\eea
and satisfy
\be
\der_z\der_{\bar{z}}f_{n,l} = -\tl{\lmd}_{n,l}^2f_{n,l}, \;\;\;\;\;
\tl{\lmd}_{n,l} = \frac{\pi\abs{n+l\tau}}{\Im\tau}. \label{wocm}
\ee
This corresponds to the KK expansion
in the absence of the brane-localized mass term.
The KK masses are given by $\tl{m}_{n,l}\equiv \tl{\lmd}_{n,l}/(\pi R)$.
Since the 6D theory is non-renormalizable, it should be regarded as
an effective theory valid only below the cutoff scale~$\Lmd$.
Here we relabel the KK modes by using the KK label~$a=0,1,2,\cdots$
defined in such a way that
\be
0 = \tl{m}_0 < \tl{m}_1 \leq \tl{m}_2 \leq \cdots \leq \tl{m}_{N_\Lmd}
< \Lmd \leq \tl{m}_{N_\Lmd+1} \leq \cdots.
\ee
The correspondence of the labels~$(n,l)$ and $a$ depends on the value of $\tau$,
as shown in Tables~\ref{relabelKK:1} and \ref{relabelKK:2}.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline \rule[-2mm]{0mm}{7mm}
$a$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline
$(n,l)$ & (0,0) & $(1,0)$ & $(0,1)$ & $(0,-1)$ & $(-1,0)$ & $(1,1)$ & $(1,-1)$ &
$(-1,1)$ \\ \hline
$\tl{\lmd}_a$ & 0 & 3.14 & 3.14 & 3.14 & 3.14 & 4.44 & 4.44 & 4.44
\\\hline\hline
$a$ & 8 & 9 & 10 & 11 & 12 & 13 & 14 & $\cdots$ \\ \hline
$(n,l)$ & $(-1,-1)$ & (2,0) & (0,2) & $(0,-2)$ & $(-2,0)$ & (2,1) & $(2,-1)$ &
$\cdots$ \\ \hline
$\tl{\lmd}_a$ & 4.44 & 6.28 & 6.28 & 6.28 & 6.28 & 7.02 & 7.02 & $\cdots$ \\ \hline
\end{tabular}
\end{center}
\caption{Relabeling the KK modes in the case of $\tau=i$}
\label{relabelKK:1}
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline \rule[-2mm]{0mm}{7mm}
$a$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline
$(n,l)$ & (0,0) & $(2,-1)$ & $(-2,1)$ & $(1,0)$ & $(-1,0)$ & $(1,-1)$ & $(-1,1)$ &
$(3,-1)$ \\ \hline
$\tl{\lmd}_a$ & 0 & 3.20 & 3.20 & 4.10 & 4.10 & 4.69 & 4.69 & 5.68
\\\hline\hline
$a$ & 8 & 9 & 10 & 11 & 12 & 13 & 14 & $\cdots$ \\ \hline
$(n,l)$ & $(-3,1)$ & $(4,-2)$ & $(-4,2)$ & $(3,-2)$ & $(-3,2)$ & (2,0) & (0,1)
& $\cdots$ \\ \hline
$\tl{\lmd}_a$ & 5.68 & 6.41 & 6.41 & 6.90 & 6.90 & 8.21 & 8.21 & $\cdots$ \\ \hline
\end{tabular}
\end{center}
\caption{Relabeling the KK modes in the case of $\tau=2\exp(\pi i/8)$}
\label{relabelKK:2}
\end{table}
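The relabelling in the tables can be reproduced with a few lines of code. The sketch below (our own Python snippet; the enumeration window \texttt{nmax} is an ad hoc choice, large enough for the lowest levels) sorts $\tl{\lmd}_{n,l}=\pi\abs{n+l\tau}/\Im\tau$:

```python
import math

def kk_levels(tau, count=15, nmax=10):
    """Sorted brane-free eigenvalues lambda_{n,l} = pi*|n + l*tau| / Im(tau)."""
    im = tau.imag
    lam = sorted(math.pi*abs(n + l*tau)/im
                 for n in range(-nmax, nmax + 1) for l in range(-nmax, nmax + 1))
    return lam[:count]

print([round(x, 2) for x in kk_levels(1j)[:9]])
# -> [0.0, 3.14, 3.14, 3.14, 3.14, 4.44, 4.44, 4.44, 4.44]
```

For $\tau=2\exp(\pi i/8)$, i.e. \texttt{2*complex(math.cos(math.pi/8), math.sin(math.pi/8))}, the same snippet returns $0,\ 3.20,\ 3.20,\ 4.10,\dots$, matching Table~\ref{relabelKK:2}.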
The number of the KK excited modes below $\Lmd$, \ie, $N_\Lmd$, grows as
\be
N_\Lmd \propto \Lmd^2,
\ee
except for the regions~$\arg\tau\simeq 0,\pi$, $\abs{\tau}\ll 1$ and $\abs{\tau}\gg 1$,
in which the spacetime approaches 5D and thus $N_\Lmd\propto\Lmd$.
Then, (\ref{KKexpand}) is rewritten as
\be
\phi(x^\mu,z) = \sum_{a=0}^\infty f_a(z)\phi_a(x^\mu).
\label{KKexpand2}
\ee
Plugging (\ref{KKexpand2}) into (\ref{cL2}) and performing the $d^2z$-integral,
we obtain the 4D Lagrangian:
\be
\cL^{\rm (4D)} = -\sum_a\der^\mu\phi_a^*\der_\mu\phi_a
-\sum_{a,b}M_{ab}^2\phi_a^*\phi_b+\cdots,
\ee
where
\bea
M_{ab}^2 \defa \frac{\tl{\lmd}_a^2}{\pi^2R^2}\dlt_{ab}
+\cm^2f_a^*(0)f_b(0) \nonumber\\
\eql \tl{m}_a^2\dlt_{ab}+\frac{\cm^2}{4\pi^2R^2\Im\tau}
\label{def:M_ab}
\eea
is the mass matrix of our theory.
\section{Cutoff dependence} \label{MassMatrix}
Since the theory is valid below $\Lmd$, we only consider
the KK modes~$\phi_a$ ($a=0,1,\cdots,N_\Lmd$).
Then, the mass squared eigenvalues, which are denoted as
$\brc{m_0^2,m_1^2,\cdots,m_{N_\Lmd}^2}$, are obtained as
eigenvalues of the finite matrix~$M_{ab}^2$ ($a,b=0,1,\cdots,N_\Lmd$).
Since $\tl{m}_{n,l}^2=\tl{m}_{-n,-l}^2$, all the nonzero modes are degenerate in pairs
when the brane mass is absent.
In particular, $\tl{m}_1^2=\tl{m}_2^2$.
This means that $M_{ab}^2$ has the eigenvalue~$\tl{m}_1^2$
with the eigenvector~$(0,1,-1,0,0,\cdots,0)$.
In fact, this is the second smallest eigenvalue of $M_{ab}^2$.
Namely, the mass of the first KK excited mode~$m_1$ is independent of $\cm$
and $\Lmd$:
\be
m_1 = \tl{m}_1 = \frac{1}{R\Im\tau}\cdot\min_{(n,l)\neq (0,0)}\abs{n+l\tau}.
\label{expr:m1}
\ee
Thus we take $m_1$ as the compactification scale throughout the paper.
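Since $f_{n,l}(0)=1/(2\pi R\sqrt{\Im\tau})$ is the same for every mode, $M_{ab}^2$ is a diagonal matrix plus a constant matrix $\sigma J$, where $J$ is the all-ones matrix and $\sigma=\cm^2/(4\pi^2R^2\Im\tau)$, so its non-degenerate eigenvalues solve the secular equation $1+\sigma\sum_a(\tl{m}_a^2-\lambda)^{-1}=0$, with the smallest root bracketed in $(0,\tl{m}_1^2)$. The bisection sketch below (our own helper, not the authors' code; the square window \texttt{nmax} is a crude stand-in for the cutoff~$\Lmd$) finds $m_0$ without building the matrix:

```python
import math

def m0_lightest(tau, R, c, nmax):
    """Smallest eigenvalue of M^2 = diag(m_a^2) + sigma*J via its secular equation.

    The KK sum is truncated by the crude window |n|, |l| <= nmax.
    """
    im = tau.imag
    d = sorted((abs(n + l*tau) / (R*im))**2          # brane-free masses m_{n,l}^2
               for n in range(-nmax, nmax + 1) for l in range(-nmax, nmax + 1))
    sigma = c**2 / (4*math.pi**2 * R**2 * im)        # uniform element c^2 f*(0) f(0)
    g = lambda lam: 1.0 + sigma*sum(1.0/(di - lam) for di in d)
    lo, hi = 1e-12, d[1]*(1.0 - 1e-12)               # g is increasing; root in (0, m_1^2)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return math.sqrt(0.5*(lo + hi))

print(m0_lightest(1j, 1.0, 10.0, 10))  # lightest eigenvalue in units of m_1 (tau=i, R=1)
```

For small $\cm$ this reproduces the perturbative value $\cm/(2\pi R\sqrt{\Im\tau})$, for $\cm\gtrsim 5$ it saturates, and enlarging the window slowly lowers $m_0$, mirroring the cutoff dependence seen in Fig.~\ref{Lmd-dep}.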
Plots in Fig.~\ref{Lmd-dep} show the $\Lmd$-dependence of the lightest eigenvalue~$m_0$
in the cases of $\tau=\exp(\frac{\pi i}{120}),\exp(\frac{2\pi i}{3})$,
and $50\exp(\frac{2\pi i}{3})$ and $\cm=10.0$, in units of $m_1$.
\begin{figure}[th]
\begin{center}
\includegraphics[width=7.5cm]{Lmd-dep-1-2}
\includegraphics[width=7.5cm]{Lmd-dep-1-160}
\includegraphics[width=7.5cm]{Lmd-dep-50-160}
\end{center}
\caption{The lightest mass eigenvalue~$m_0$ as a function of $\Lmd$
in the case of $\tau=\exp\brkt{\pi i/120}$, $\exp\brkt{2\pi i/3}$
and $50\exp\brkt{2\pi i/3}$ and $\cm=10.0$.
The solid lines represent the function~(\ref{expr:lmd0})
with the parameters~$(\alp_1,\alp_2,\alp_3,\alp_4)=(11.9,4.01,0.0728,0.466)$,
$(1.82,3.69,0.270,0.142)$ and
$(12.4,4.72,0.0701,0.470)$, respectively. }
\label{Lmd-dep}
\end{figure}
The right end of the horizontal axis in each plot corresponds to
the value of $\Lmd$ such that $N_\Lmd\simeq 4000$.
For a given value of $\cm$, the ratio~$m_0/m_1$ can be approximated by
\be
\frac{m_0}{m_1} \simeq \brkt{\alp_1+\alp_2\ln\frac{\Lmd}{m_1}
+\alp_3\frac{\Lmd}{m_1}}^{-1}+\alp_4,
\label{expr:lmd0}
\ee
where $\alp_i$ ($i=1,2,3,4$) are real constants.
The solid lines in Fig.~\ref{Lmd-dep} represent the fitting functions
of the form~(\ref{expr:lmd0}).
The constant~$\alp_4$ is the asymptotic value of $m_0/m_1$ in the limit of $\Lmd\to\infty$:
\be
\lim_{\Lmd\to\infty}\frac{m_0(\Lmd)}{m_1} = \alp_4. \label{alp4}
\ee
The horizontal lines in Fig.~\ref{Lmd-dep} denote the asymptotic values
that the curves approach.
Typically, $m_0$ approaches the limit value
much more slowly than in the 5D case
(see Fig.~\ref{Lmd-dep:5D} in Appendix~\ref{5D:num}).
Thus the cutoff dependence of the spectrum cannot be neglected
even when $\Lmd/m_1=\cO(100)$.
This cutoff dependence becomes smaller
when $\arg\tau\simeq 0,\pi$ or $\abs{\tau}\ll 1$ or $\abs{\tau}\gg 1$.
This is because the torus is squashed or stretched in such cases,
and the spacetime approaches 5D.
In fact, as we can see from Fig.~\ref{Lmd-dep},
\be
\frac{m_0(15m_1)}{m_1} \simeq \lim_{\Lmd\to\infty}\frac{m_0(\Lmd)}{m_1} \times
\begin{cases} 1.40 & (\tau=e^{2\pi i/3}) \\ 1.07 &
(\tau=e^{\pi i/120},\; 50e^{2\pi i/3})
\end{cases}.
\ee
Note that the curves for $\Lmd<40m_1$ in the top-left and bottom plots
are almost the same
as that of the 5D case (shown in Fig.~\ref{Lmd-dep:5D}).
The cusp at $\Lmd=40m_1$ indicates that the field begins to feel the width of
the squashed torus or the smaller cycle of the long thin torus.
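For reference, the fitting form~(\ref{expr:lmd0}) is straightforward to evaluate; the sketch below (our own snippet) uses the parameter values quoted in the caption of Fig.~\ref{Lmd-dep} for $\tau=e^{2\pi i/3}$ and shows the approach to the asymptote~$\alp_4$:

```python
import math

def m0_fit(L, a1, a2, a3, a4):
    """Fitting form (expr:lmd0) for m0/m1 as a function of Lambda/m1."""
    return 1.0/(a1 + a2*math.log(L) + a3*L) + a4

pars = (1.82, 3.69, 0.270, 0.142)   # quoted fit for tau = exp(2*pi*i/3)
print(m0_fit(15.0, *pars))          # -> ~0.205, i.e. roughly 1.4 * alpha_4
print(m0_fit(1e6, *pars))           # -> approaches alpha_4 = 0.142
```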
In the following, we focus on the limit value~(\ref{alp4}).
Fig.~\ref{vsc} shows its dependence on the brane mass~$\cm$.
The unit here is taken as $1/(\pi R)$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{lmd0vsc-1-160.eps}
\end{center}
\caption{The lightest mass eigenvalue~$m_0$ as a function of $\cm$
in the case of $\tau=e^{2\pi i/3}$.
The solid line represents (\ref{lmd0:smallc}). }
\label{vsc}
\end{figure}
For small values of $\cm$, the lightest mass eigenvalue~$m_0$
is approximated as
\be
m_0 \simeq \sqrt{M_{00}^2} = \frac{\cm}{2\pi R\sqrt{\Im\tau}},
\label{lmd0:smallc}
\ee
which is plotted as the solid line in Fig.~\ref{vsc}.
This is because the brane mass can be treated as a perturbation in this region,
and the mixing among the KK modes induced by it is negligible.
As the brane mass grows, such mixing effects become significant,
and $m_0$ saturates, becoming almost independent of $\cm$ when $\cm\simgt 5$.
This situation is the same as the 5D case (see Fig.~\ref{vsc:5D} in Appendix~\ref{5D:anl}).
In the following discussion, we take $\cm=10.0$ as a representative of $\cm\gg 1$.
\section{Approximate expression} \label{ap_expr}
\begin{figure}[t]
\begin{center}
\includegraphics[width=7cm]{lmd0vstht-02.eps}
\includegraphics[width=7cm]{lmd0vstht-08.eps}
\includegraphics[width=7cm]{lmd0vstht-1.eps}
\includegraphics[width=7cm]{lmd0vstht-5.eps}
\end{center}
\caption{The lightest mass eigenvalue~$m_0$ as a function of $\arg\tau$
for various values of $\abs{\tau}$.
The solid lines represent the approximate expression~(\ref{ap:m_0}). }
\label{lmdvstht}
\end{figure}
Fig.~\ref{lmdvstht} shows the dependence of $m_0$ on $\arg\tau$
for various values of $\abs{\tau}$.
Here we will find an approximate expression of $m_0$ as a function of $\tau$.
First, we should note that the mass eigenvalues~$m_a$ are
functions of $\cm$ and $\tau$, and should satisfy
\be
m_a\brkt{\cm;-\frac{1}{\tau}} = \abs{\tau}m_a(\cm;\tau),
\label{cond1_for_lmd0}
\ee
since the theory is defined on the torus.
Besides, from (\ref{wocm}) and (\ref{def:M_ab}), we also find that
\be
m_a(\cm;-\bar{\tau}) = m_a(\cm;\tau).
\label{cond2_for_lmd0}
\ee
As mentioned in the previous section,
there are two limits in which the spacetime approaches 5D, \ie,
$\arg\tau\to 0,\pi$ (squashed torus) and $\abs{\tau}\to 0,\infty$ (stretched torus).
In these cases, the low-lying KK masses in the absence of the brane mass
are approximately expressed as follows.
\begin{description}
\item[$\bdm{\abs{\tau}\gg 1}$]
\be
\tl{m}_a = \tl{m}_{n(a),0}
\simeq \frac{\abs{n(a)}}{R\Im\tau}, \label{apex1:ma}
\ee
where $a\simlt 2\abs{\tau}$,
and $n(a)\equiv (-1)^a{\rm floor}\brkt{\frac{a+1}{2}}$.
\item[$\bdm{\tht\equiv\arg\tau\ll 1}$]
\be
m_{n,l} = \frac{1}{R}\brc{\frac{(n+l\abs{\tau}\cos\tht)^2}
{\abs{\tau}^2\sin^2\tht}+l^2}^{1/2}
\simeq \frac{1}{R}\brc{\frac{1}{\tht^2}\brkt{\frac{n}{\abs{\tau}}+l}^2+l^2}^{1/2}.
\ee
Especially when $\abs{\tau}$ is a rational number, \ie, $\abs{\tau}=p/q$
($p$ and $q$ are relatively prime integers and $q>0$),
the light masses are approximated as
\be
m_a = m_{n(a)p,-n(a)q} \simeq \frac{\abs{n(a)q}}{R},
\label{apex2:ma}
\ee
where $a\simlt\frac{2}{\tht}\min(1,\abs{\tau^{-1}\pm 1})$.
\end{description}
As for the cases of $\abs{\tau}\ll 1$ and of $\pi-\arg\tau\ll 1$,
approximate expressions of $m_a$ are obtained from (\ref{apex1:ma}) and (\ref{apex2:ma})
by using (\ref{cond1_for_lmd0}) and (\ref{cond2_for_lmd0}), respectively.
Then, we identify the effective radius of $S^1$ as
\be
R_{\rm eff} = \begin{cases} R\Im\tau & (\abs{\tau} \gg 1) \\
R\Im\tau/\abs{\tau} & (\abs{\tau} \ll 1) \\
R/q & (\arg\tau \ll 1 \;\;\mbox{or}\;\; \pi-\arg\tau\ll 1) \end{cases}.
\ee
Using this, the low-lying KK masses~$m_a$ can be expressed as
(see Appendix~\ref{5D:anl})
\be
m_a \simeq \frac{\abs{n(a)}}{R_{\rm eff}},
\ee
or solutions of
\be
m_a \simeq \frac{\hat{c}_{\rm eff}^2}{2}\cot(\pi R_{\rm eff}m_a),
\label{cond3_for_lmd0}
\ee
where the ``effective 5D brane mass''~$\hat{c}_{\rm eff}$ is defined as
\be
\hat{c}_{\rm eff}^2 \equiv \frac{R_{\rm eff}\cm^2}{2\pi R^2\Im\tau},
\ee
which is identified from the condition that (\ref{lmd0:smallc}) is reproduced.
When $\cm$ is sufficiently large, the solutions of (\ref{cond3_for_lmd0}) are
\be
m_a \simeq \frac{\abs{n(a)+\frac{1}{2}}}{R_{\rm eff}}.
\ee
Especially, the lightest mass eigenvalue is
\be
m_0 \simeq \frac{1}{2R_{\rm eff}} \simeq \frac{m_1}{2}.
\label{cond4_for_lmd0}
\ee
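The transcendental condition~(\ref{cond3_for_lmd0}) is easy to solve numerically: the difference between its two sides is increasing on $(0,1/R_{\rm eff})$, so the lightest root is bracketed there. The bisection sketch below (our own helper; \texttt{lightest\_5d} is not a name from the paper) reproduces both limits: for small $\hat{c}_{\rm eff}$ the root reduces to $\hat{c}_{\rm eff}/\sqrt{2\pi R_{\rm eff}}=\cm/(2\pi R\sqrt{\Im\tau})$, recovering~(\ref{lmd0:smallc}), while for large $\hat{c}_{\rm eff}$ it tends to $1/(2R_{\rm eff})$:

```python
import math

def lightest_5d(c_eff, R_eff):
    """Smallest positive root of m = (c_eff^2/2) cot(pi R_eff m) in (0, 1/R_eff)."""
    f = lambda m: m - 0.5*c_eff**2/math.tan(math.pi*R_eff*m)  # increasing on (0, 1/R_eff)
    lo, hi = 1e-9, 1.0/R_eff - 1e-9
    for _ in range(100):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5*(lo + hi)

print(lightest_5d(100.0, 1.0))  # -> ~0.5, i.e. 1/(2 R_eff)
```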
Taking into account the properties~(\ref{cond1_for_lmd0}),
(\ref{cond2_for_lmd0}) and (\ref{cond4_for_lmd0}),
we find an approximate expression of $m_0$ that fits Fig.~\ref{lmdvstht} as
\be
m_0^{\rm (ap)} = \frac{\sqrt{\sin\brc{\arcsin(\tl{\lmd}_1^2\Im\tau)}}}
{2\pi R\sqrt{\Im\tau}}. \label{ap:m_0}
\ee
This is plotted as solid lines in Fig.~\ref{lmdvstht}.
Finally, we evaluate the ratio of $m_0$
to the compactification scale~$m_1$.
As Fig.~\ref{rtovstht} shows, this ratio is much smaller than the value
of the 5D case, 1/2, except for the extreme cases
in which the spacetime is 5D-like.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7cm]{rtovstht-02.eps}
\includegraphics[width=7cm]{rtovstht-08.eps}
\includegraphics[width=7cm]{rtovstht-1.eps}
\end{center}
\caption{The ratio of the lightest mass eigenvalue~$m_0$
to the compactification scale~$m_1$.
The solid lines represent the ratio of (\ref{ap:m_0}) to (\ref{expr:m1}). }
\label{rtovstht}
\end{figure}
Typically, $m_0$ is lighter than $m_1$ by one order of magnitude.
Namely, the brane-localized mass cannot make the zero-modes
as heavy as the compactification scale.
This is an important fact in model building.
\section{Summary and comments} \label{summary}
\subsection{Summary}
We have evaluated the mass eigenvalues of a 6D theory compactified on a torus
in the presence of the brane-localized mass term.
In particular, we focus on the lightest mode, which becomes massless
in the zero brane-mass limit.
From the numerical calculations,
we confirmed that the lightest mass eigenvalue~$m_0$
has non-negligible dependence on the cutoff scale~$\Lmd$
even when $\Lmd$ is larger than the compactification scale
by two orders of magnitude.
This indicates that $m_0$ is sensitive to the internal structure of
the brane when the brane has a finite size.
This is consistent with the results in Ref.~\cite{Dudas:2005vn}.
We find an approximate expression of $m_0$ which is valid for a large brane mass.
It clarifies the dependence on the size and the shape of the torus,
and reduces to the known result in the 5D case when the torus is squashed or
stretched.
In contrast to the 5D case, $m_0$ is much smaller than
the compactification scale unless the torus is squashed or stretched.
Their ratio is typically $\cO(0.1)$.
This is because the effects of the brane term are spread out
over the codimension two compact space and diluted.
Hence we should be careful in model building
especially when we introduce the brane mass terms
in order to decouple unwanted modes.
Although we have not discussed it in this paper,
the brane mass also deforms the profiles of the mode functions.
They can be obtained by calculating the eigenvectors of $M_{ab}^2$
in (\ref{def:M_ab}).
The main effect of the brane mass on the mode functions is
to push them out from the position of the brane.
Namely, it reduces their absolute values at the brane to zero.
\subsection{On more general setups}
We have discussed a theory of a scalar field because it is the simplest case.
However, the properties of the spectrum clarified in the text are also found
in cases of fermion and vector fields, as shown in Appendix~\ref{other_cases}.
So our result is valid in a wider class of 6D theories.
Besides, we have assumed that the bulk mass is zero and the brane squared mass
is positive.
In the presence of the bulk mass~$M_{\rm bk}$,
the mass matrix~(\ref{def:M_ab}) becomes
\be
M_{ab}^2 = \brkt{M_{\rm bk}^2+\tl{m}_a^2}\dlt_{ab}+\frac{c^2}{4\pi^2R^2\Im\tau},
\ee
where $\tl{m}_{n,l}=\abs{n+l\tau}/(R\Im\tau)$.
Thus the bulk mass just raises the whole spectrum.
However, if we allow a tachyonic brane mass, \ie, $c^2<0$,
a light mode may appear below the compactification scale.
If $\abs{c^2}$ is large enough, $m_0$ becomes tachyonic,
and thus $\vev{\phi}=0$ is no longer the vacuum.
In such a case, $\phi$ has a nontrivial background that depends on
the extra-dimensional coordinates~$z$ and $\bar{z}$, and we have to expand $\phi$
around it in order to obtain the mass matrix~$M_{ab}^2$.
It is not easy to find such a nontrivial background.
Here we do not discuss this issue further, but give a comment on it.
Note that the smallest diagonal
element~$M_{00}^2=M_{\rm bk}^2+c^2/(4\pi^2R^2\Im\tau)$ provides the upper bound on $m_0$.
Thus, $2\pi R\sqrt{\Im\tau}M_{\rm bk}>\abs{c}$ must be satisfied
in order to avoid the vacuum instability for $\vev{\phi}=0$.
In other words, there is a value of $c$ that leads to a tachyonic mass eigenvalue
no matter how large $M_{\rm bk}$ is.
This indicates that the effect of the brane mass on the spectrum does not saturate,
which is in contrast to the non-tachyonic brane mass.
The mode function is attracted toward the brane by the tachyonic brane mass.
We considered the scalar field with the periodic boundary condition.
Twisted boundary conditions are also allowed, but they just raise the mass spectrum.
This can be understood from the fact that imposing the twisted boundary conditions is
equivalent to introducing a non-vanishing background gauge field coupled to
the scalar field with the periodic boundary conditions.
Such a background gauge field plays the same role as the bulk scalar mass $M_{\rm bk}$
mentioned above.
We have also assumed that the spacetime is flat,
no background magnetic fluxes exist,\footnote{
The introduction of the background fluxes leads to the multiplication
of the modes at each KK level.
Thus the size of the mass matrix~(\ref{def:M_ab}) becomes larger,
and it will take much more time to calculate the mass eigenvalues.
So we would need to develop a more efficient method to treat such a case.
}
and there is only one brane, for simplicity.
It is an interesting and useful extension to relax these assumptions.
This will be discussed in separate papers.
\subsection*{Acknowledgements}
The author would like to thank Yukihiro Fujimoto for valuable information.
This work was supported in part by
Grant-in-Aid for Scientific Research (C) No.~25400283
from Japan Society for the Promotion of Science (Y.S.).
\section{Introduction}
\label{section:Introduction}
The Galactic centre (GC) is a benchmark object for studying the galactic nuclei of present-day large spiral galaxies. It contains the super-massive black hole Sagittarius\,A* \citep[Sgr\,A*; e.g.][]{Eckart:1996tg,Eckart:1997jl,Ghez:2008fk}, the nearest nuclear star cluster (NSC), and the nearest nuclear stellar disc \citep[NSD; see][]{Launhardt:2002nx,Schodel:2014bn}. The GC is located only about 8\,kpc from Earth and is therefore the only nucleus of a galaxy in which we can observationally resolve a significant fraction of stars.
The NSD is a dense rotating stellar structure with a radius of roughly 200\,pc and a scale height of $\sim$50\,pc \citep{Launhardt:2002nx,Nishiyama:2013uq,Schoenrich:2015,gallego-cano:2020}. The star formation history of the NSD was thought until recently to be characterised by quasi-continuous, quasi-constant star formation \citep{Figer:2004fk}. However, \cite{nogueras:2020} showed that the luminosity functions based on data from the new GALACTICNUCLEUS survey \citep[GNS;][]{nogueras:2019cat, nogueras:2019ext} are inconsistent with this scenario. Instead, about 90\% of the stars in the NSD are older than 8 Gyr, and there was a pronounced star formation event about 1 Gyr ago that contributed 5\% of the mass of the NSD.
Based on the K-band Multi Object Spectrograph (KMOS)/Very Large Telescope (VLT) data of \citet{fritz:2020}, \cite{schultheis2021} recently demonstrated that the NSD is kinematically and chemically distinct from both the Galactic bar and the NSC. They propose that the metal-rich stars in the NSD may have formed from gas in the central molecular zone.
\begin{figure*}[!t]
\includegraphics[width=1.\textwidth]{GNS.png}
\caption{Pointings of the central region of the GNS used in this work overlaid on a 4.5 $\mu$m Spitzer/IRAC extinction-corrected image of the GC from \cite{schoedel2014}. Each white rectangle shows a field with a size of $\sim 7.95' \times 3.43'$ and is identified by a number. The NSC, the Quintuplet, and the Arches clusters are also marked.
}
\label{fig:GNS}
\end{figure*}
Averaged by volume, the GC has been the Milky Way's most active star-forming site over the past $\sim30$\,Myr. Three of the most massive young star clusters in the Milky Way are located in the NSD: the Arches, the Quintuplet, and the Central Parsec clusters. They have ages of 3-6\,Myr and total masses $\gtrsim10^{4}$\,M$_{\odot}$ \citep[e.g.][]{Figer:2004fk}.
The recent star formation history of the NSD implies that there should be at least ten more massive young clusters in the NSD that have still not been discovered \citep{Matsunaga:2011uq,nogueras:2020}. The most likely explanation for the missing clusters is that tidal disruption makes their density drop below the high background density of old stars within $<10$\,Myr \citep{Portegies-Zwart:2002fk, Kruijssen2014}. Due to the high and variable extinction towards the GC, these clusters cannot be discovered by means of colour-magnitude diagrams (CMDs) with currently available data \citep{nogueras:2021}.
A reasonably complete picture of the stellar population at the GC is difficult to obtain because of (1) the large and highly variable interstellar extinction towards the GC \citep{schoedel2010, Fritz:2011fk, nogueras:2019ext} that makes it extremely challenging to classify stars based on their near-infrared colours; and (2) overlapping and co-penetrating Galactic components (the Galactic disc and bulge, NSD, NSC, and young clusters) along the line-of-sight.
Proper motions can be a powerful tool for disentangling different kinematic components in complex regions of the Milky Way, such as the NSD and inner bulge. In particular, in the GC, proper motions can help us assign stellar populations to the bulge or the NSD \citep[see e.g. the discussion in][]{Matsunaga2018}. With proper motions we may also be able to identify stellar streams associated with past accretion events \citep[e.g.][]{Feldmeier-Krause:2020pi} and to find young stellar clusters in the form of compact co-moving groups \citep[e.g.][]{Stolte:2008uq,Hosek:2015sh,Rui:2019ch,Shahzamanian:2019}.
So far, precise proper motions in the GC have generally been measured on fields smaller than a few square arcminutes \citep[e.g.\ to study the Central Parsec, Arches, or Quintuplet clusters;][]{Hosek:2015sh,Rui:2019ch,Trippe:2008it,Schodel:2009zr}. The recent work by \cite{Libralato:2021} covers a significantly larger field of about 56 arcmin$^{2}$ within the NSD (and a similarly large area outside of it) with the Wide-Field Camera 3 (WFC3)/infrared (IR) and Advanced Camera for Surveys (ACS)/Wide-Field Channel (WFC) of the Hubble Space Telescope (HST). However, their work does not cover the innermost $\sim$100\,pc of the NSD.
Gaia is blind towards the GC but the Gaia Data Release 2 \citep[DR2;][]{gaia:2016, gaia:2018} catalogue provides precise proper motions of foreground stars that can be used to determine absolute astrometry of GC sources. The Vista Variables in the Via Lactea (VVV) survey \citep{Minniti:2010fk} covers the entire inner Galaxy but is severely limited by seeing-limited resolution and saturation in the GC. Therefore, precise proper motions from the VVV in the NSD are only available for stars in a very limited magnitude range \citep[approximately $12\lesssim K_{s}\lesssim 13.5$;][]{Smith:2018qf}.
The GNS has been designed specifically for the GC and stands out from other surveys due to its uniform $0.2"$ angular resolution, which minimises confusion and maximises astrometric precision. Confusion is more than a factor of 10 lower in the GNS, and its dynamic range is about five magnitudes higher than in seeing-limited surveys of the GC, such as the VVV \citep{nogueras:2019cat}. It is therefore the ideal basis for a proper motion survey of the NSD. Currently, the only other dataset that approximately matches the GNS in angular resolution and observed field is the HST Paschen-$\alpha$ survey \citep[H-P$\alpha$S;][]{Wang:2010fk, Dong:2011ff}. In this work we combine the 2008 H-P$\alpha$S with the 2015 GNS to obtain the first proper motion catalogue for the central $\sim 36' \times 16'$ of the NSD. We have already demonstrated this method on a small field and identified a new co-moving group in \citet{Shahzamanian:2019}. This work presents the proper motions for the entire overlapping field of the GNS and H-P$\alpha$S.
This paper is structured as follows. In Sect.~\ref{section:Data sets} we describe the data and methodology. Section~\ref{section:Proper motions} describes the proper motion catalogue and compares it with other data in overlapping fields. We analyse the kinematics of stellar populations at the GC in Sect.~\ref{section:Kinematics in the GC}. In Sect.~\ref{section:Finding co-moving groups} we describe a clustering search algorithm and its application. We summarise our results in Sect.~\ref{section:summary}.
\section{Data and methodology}
\label{section:Data sets}
We use two datasets, the GNS \citep{nogueras:2019cat}, epoch 2015/2016, and the H-P$\alpha$S \citep{Wang:2010fk, Dong:2011ff}, epoch 2008, with a time baseline of seven to eight years between them. Here we only briefly summarise our methodology. Further details are described in \citet{Shahzamanian:2019}.
\begin{figure}
\includegraphics[width=\columnwidth]{Measured_PMs_regions.png}
\caption{Positions of the stars in our proper motion catalogue. Three sub-regions of the catalogue from the vertical division are also shown (see Sect. 4.1).}
\label{fig:measured_pos}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{GNS_PMs_uncertainties_all_stars.png}
\caption{Uncertainties of proper motions in the catalogue as a function of H magnitude (see Sect.~\ref{subsection:Proper motions catalogue}).}
\label{fig:dpm}
\end{figure}
All relevant details of the GNS are described in \cite{Nogueras2018a,nogueras:2019cat}. \citet{Dong:2011ff} describes all relevant information about the H-P$\alpha$S. Both surveys have very similar angular resolutions of $\sim$$0.2"$. The GNS observations were taken in the near-infrared ($J$, $H$, and $K_{\rm s}$) with the High Acuity Wide-field K-band Imager (HAWK-I) at the VLT. HAWK-I has a detector composed of four different chips with a gap in between them \citep{Kissler-Patig:2008uq}. For the proper motion catalogue, we used only the $H$-band data of the GNS because the extreme extinction in $J$ ($A_{J} \gtrsim7$) implies a significantly smaller number of detections than at longer wavelengths and because saturation is a concern in $K_{\rm s}$ for stars brighter than $K_{\rm s} \sim 11$. We only selected GNS stars with a relative astrometric uncertainty of $<2$\,milliarcseconds (mas) along each axis \cite[see Fig.~A.1 of][]{Shahzamanian:2019}. The central region of the GNS includes 30 pointings and is centred on Sgr~A*, covering the GC area of about $36' \times 16'$. We analysed 24 pointings (labelled 1-24) of the central region that have full or partial overlap with the H-P$\alpha$S (see Fig.~\ref{fig:GNS}).
The H-P$\alpha$S maps an area of $\sim 36' \times 16'$ that has been acquired with the Near-Infrared Camera and MultiObject Spectrometer (NICMOS) Camera 3 (NIC3) on the HST. The field-of-view (FoV) of NIC3 is $51.2'' \times 51.2''$, and the final mosaic has an angular resolution of $0.2''$ and a pixel size of $0.1''$. Its FoV per pointing is about 37 times smaller than a single GNS pointing. The survey and the source list catalogue are described in \cite{Dong:2011ff}. Because of the mosaicing process, the astrometric positions given in this list have uncertainties of a few tens of mas and can therefore not be directly used to measure proper motions. We therefore carried out astro-photometry on each survey image with the \textit{StarFinder} point spread function (PSF) fitting package \citep{Diolaiti:2000fk}. As uncertainties we used the formal uncertainties provided by \textit{StarFinder}. We used the data acquired with the narrow-band filter F190N and only accepted stars with less than 3~mas astrometric uncertainty along each axis for our proper motion analysis \cite[see Fig.~A.2 of][]{Shahzamanian:2019}. All these stars have a significantly smaller positional uncertainty in the HAWK-I images.
The GNS data were reduced and calibrated for each chip and pointing individually. Each GNS chip overlaps (fully or partially) with a different number of NICMOS/HST images. Assuming net zero motion and rotation for the stars present in each image, we transformed the stellar positions of the H-P$\alpha$S images into the reference frame of the corresponding GNS pointing and chip via a third-order polynomial transformation, as described in \citet{Shahzamanian:2019} and \citet{schoedel2009}. We calculated the alignment uncertainties with the jackknife method,\ repeating the procedure multiple times and dropping different groups of reference stars in each repetition. The alignment uncertainties mainly depend on the number and distribution of reference stars for this procedure, which are determined primarily by the size of the overlapping region of H-P$\alpha$S and GNS pointings and by the stellar surface density.
After alignment, we identified common stars by the condition that they had to coincide within a $0.1''$ radius (i.e.\ half the resolution limit of the two surveys). Since a proper motion of 1\,mas\,yr$^{-1}$ corresponds to a physical velocity of about 40\,km\,s$^{-1}$ at the distance of the GC \citep[we assumed 8\,kpc; see e.g.][]{Contreras-Ramos:2018kf,Abuter:2019fk,Do:2019ha}, our criterion corresponds to a maximum detectable velocity of $100$\,mas$/7$\,yr\,$\approx14$\,mas\,yr$^{-1}$ or $600$\,km\,s$^{-1}$. This is roughly five times larger than the maximum velocity dispersion expected for the different populations in the target field, which is $\lesssim3$\,mas\,yr$^{-1}$ or 120\,km\,s$^{-1}$ \citep[e.g. Field Gaussian\,2 of the multi-component fit of][]{Rui:2019ch}.
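As a cross-check of the numbers above, the conversion between proper motion and transverse velocity can be sketched as follows (a minimal example using the exact conversion factor of 4.74, which gives $\sim$38\,km\,s$^{-1}$ per mas\,yr$^{-1}$ at 8\,kpc, rounded to 40 in the text):

```python
# Sanity check of the proper-motion-to-velocity conversion used in the text.
# Standard relation: v [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
#                             = 4.74 * mu [mas/yr]    * d [kpc]

D_GC_KPC = 8.0          # assumed distance to the Galactic Centre [kpc]

def pm_to_velocity(mu_mas_yr, d_kpc=D_GC_KPC):
    """Transverse velocity [km/s] for a proper motion [mas/yr] at distance d [kpc]."""
    return 4.74 * mu_mas_yr * d_kpc

# 1 mas/yr at 8 kpc -> ~38 km/s (quoted as "about 40 km/s" in the text)
print(pm_to_velocity(1.0))

# maximum detectable motion: 0.1" matching radius over a 7-yr baseline
mu_max = 100.0 / 7.0    # mas/yr, ~14.3 mas/yr
print(mu_max, pm_to_velocity(mu_max))  # ~540 km/s with the exact factor
```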
Finally, we obtained the proper motions for each star parallel and perpendicular to the Galactic plane, $\mu_{l}$ and $\mu_{b}$, by dividing their displacement by the time baseline of seven years (eight years for the two GNS fields observed in 2016). We obtained the uncertainties of the proper motions by quadratically combining the uncertainties of the astrometric positions in the two epochs with the alignment uncertainty between the epochs. The latter depends on the position of any given star within each NICMOS/HST image \citep[see][]{Shahzamanian:2019}.
Because of the overlap of NICMOS/HST images, there are multiple measurements for a fraction of stars. In cases of multiple measurements, we computed the mean proper motion for each star, with the corresponding uncertainty given by the error of the mean of the multiple measurements. A comparison between the uncertainties estimated in this way and the proper motion uncertainties computed as described in the previous paragraph showed that both methods provide similar uncertainties.
\section{Results}
\label{section:Proper motions}
\subsection{A proper motion catalogue for the GC}
\label{subsection:Proper motions catalogue}
Since the proper motions were determined for each of the HAWK-I chips independently, we combined the lists of the four chips corresponding to each GNS pointing. There are some stars with multiple measurements, which can happen when they lie near the inner edges of the chips,\ close to the detector gap that is covered in the observations through jittering. For these stars we computed the mean proper motion and its corresponding uncertainty from the multiple measurements.
Finally, we combined the proper motions of all the pointings of our study to obtain the proper motion catalogue. Again, for all stars with multiple measurements in the overlapping regions, we calculated their mean proper motions and corresponding uncertainties via averaging. To summarise, we determined the proper motion uncertainties from a combination of astrometric and epoch alignment uncertainties. For stars with multiple proper motion measurements (located in overlapping pointings in the HST and/or the GNS), we determined their proper motion uncertainties directly from the multiple measurements. Both methods result in similar uncertainties.
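The averaging of multiple measurements can be sketched as follows (an illustrative, unweighted scheme; the text does not state whether the individual measurements are weighted by their uncertainties):

```python
import numpy as np

def combine_measurements(mu):
    """Mean proper motion of a star with multiple measurements and the
    error of the mean (sample standard deviation / sqrt(N))."""
    mu = np.asarray(mu, dtype=float)
    return mu.mean(), mu.std(ddof=1) / np.sqrt(mu.size)

# e.g. three measurements of mu_l [mas/yr] from overlapping pointings
mean_mu, err_mu = combine_measurements([1.8, 2.1, 2.0])
```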
Figure~\ref{fig:measured_pos} shows the positions of the stars in our proper motion catalogue. The different size of the NICMOS images and their tiling pattern, the complex procedure of matching NICMOS data to GNS data, and the increased uncertainties near the edges of the images (both because of lower signal-to-noise ratios and because of larger astrometric transformation uncertainties) mean that the density of our measurements is not homogeneous across the studied field.
Figure~\ref{fig:dpm} shows the uncertainties of the measured proper motions as a function of stellar magnitude. Only a small fraction of stars are fainter than $H\approx17$; these stars are located in regions with very low stellar density (i.e. on dark clouds). As is clear from the figure, the proper motion uncertainty of the majority of stars is $<1$\,mas\,yr$^{-1}$, a direct consequence of the astrometric uncertainty cuts applied to each of the surveys in the proper motion analysis.
The proper motions of the Quintuplet cluster region in this work have higher uncertainties than in \cite{Shahzamanian:2019} because in the latter work we did not use the GNS catalogue data but instead optimised the point source detection specifically for the Quintuplet cluster. This also suggests that we may still be able to improve the quality of our proper motions by optimising the GNS point source detection pipeline (which goes far beyond the scope of the present work).
Figure~\ref{fig:CMD_hawki} presents CMDs for all stars with measured proper motions using GALACTICNUCLEUS photometry. A colour cut separates foreground stars (mostly Galactic disc and some low-extinction bulge or bar stars) from those in the GC (NSD and inner bulge or bar) via the criteria $H-K_{\rm s} < 1.3$ and $J-K_{\rm s} < 3.8$ \citep[for a justification of the criterion and the extinction curve used, see][]{Nogueras-Lara:2019zv, nogueras:2021}. About 12\% of the stars in our catalogue have $H-K_{\rm s} < 1.3$ and are foreground sources.
The proper motion data only reach down to the top of the red clump. This is a consequence of the relative shallowness of the HST narrow-band images.
Table~\ref{tab:data_cat} contains the first rows of our proper motion catalogue. We have made the complete catalogue, which contains 77414 objects, available in electronic form via the CDS. The proper motions reported here are relative, not absolute. We established our reference frame for the proper motions by assuming mean zero motion and rotation between the stellar positions extracted from each H-P$\alpha$S pointing and the ones in the corresponding chips and pointings of the GNS.
\begin{figure}
\includegraphics[width=\columnwidth]{GNS_PMs_CMD_all_stars.png}
\caption{CMDs of the stars in the proper motion catalogue. The dashed red lines illustrate the colour cuts used to remove foreground stars. The arrows mark the asymptotic giant branch (AGB) and the red clump (RC).}
\label{fig:CMD_hawki}
\end{figure}
\begin{figure}[!b]
\begin{center}
\subfigure{%
\includegraphics[width=0.45\textwidth]{diff_v_x.pdf}
}
\subfigure{%
\includegraphics[width=0.45\textwidth]{diff_mu_b.pdf}
}
\end{center}
\caption{Proper motion differences between our catalogue and the work of \citet{Libralato:2021} measured for each star and normalised by the quadratically combined uncertainties. The orange lines indicate Gaussians fitted to the histograms, and amp, $\mu$, and $\sigma$ are the amplitude, mean, and standard deviation of the Gaussians.
}
\label{fig:hist_lib}
\end{figure}
\subsection{Verification}
We performed two tests to verify the proper motion measurements of our catalogue.
\subsubsection{Comparison with \citet{Libralato:2021}}
\label{section:Comparison with Libralato work}
\citet{Libralato:2021} present proper motion measurements of stars in the GC acquired with WFC3/HST (F153M) in several non-contiguous areas towards the GC with a total surface area of $\sim144$\,arcmin$^{2}$. Some of their survey area overlaps with ours. We compared the proper motions of stars common to their and our work, using only stars that have an associated proper motion uncertainty in the catalogue of \citet{Libralato:2021}. The two works have consistent photometry, and the cross-identification of sources is straightforward.
The proper motions given by \citet{Libralato:2021} are absolute (referenced to the Gaia DR2 catalogue), while our proper motions are relative, meaning that net zero rotation and proper motion for each match between a GNS image and an H-P$\alpha$S image has been assumed. Therefore, in order to compare their proper motions with ours, we obtained the offset between both works and corrected the values for the offset.
The Libralato proper motions are given in equatorial coordinates, while ours are given in Galactic coordinates. We therefore rotated their proper motions to align them with ours. We then computed the differences between the proper motions measured for each star and normalised them by the quadratically combined uncertainties. The histograms of the normalised differences are shown in Fig.~\ref{fig:hist_lib}. Under the assumption of purely statistical (random) uncertainties, the expected outcome of this comparison is a Gaussian with a mean of zero and a standard deviation of one. Gaussians fitted to the histograms provide very good fits; they are centred on a mean difference of zero and have standard deviations of approximately 1 ($\mu_{b}$) and 2 ($\mu_{l}$), respectively.
We conclude that there is good, if not perfect, agreement between the data. A likely cause of systematic uncertainties is that the assumption of zero net motion underlying our catalogue is violated by the rotation of the NSD combined with differential extinction (see Sect.\,\ref{section:Kinematics in the GC}): we see more stars on the near side of the NSD than on its far side, so the net proper motion of the reference stars is not exactly zero. The small FoV of NICMOS may worsen this problem in some fields.
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{PMs_uncertainties_test.png}
\caption{Uncertainties of proper motions measured in HST/WFC3 images of the Quintuplet cluster.}
\label{fig:sigmaQuin}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{Quintuplet_PMs_test.png}
\caption{Vector point diagram for proper motions measured in the Quintuplet cluster obtained with HST data.}
\label{fig:VPD_quintuplet}
\end{figure}
\subsubsection{Comparison with HST data for the Quintuplet cluster}
\label{section:Comparison with WFC3/HST data}
Precision proper motions were measured with WFC3/HST imaging on fields centred on the Arches and Quintuplet clusters by \citet{Hosek:2015sh} and \citet{Rui:2019ch}. We downloaded the corresponding HST WFC3 F153M images for the Quintuplet cluster from the Hubble Legacy Archive, which provides fully reduced and astrometrically calibrated images. \citet{Rui:2019ch} provide a description of the observations (see their Sect. 2 and Table\,1). Specifically, we used images from 16 August 2010, 9 September 2011, 12 August 2012, and 22 October 2016.
We carried out PSF extraction and PSF fitting astrometry and photometry on these images with the \textit{StarFinder} package. We used the 2012 epoch as a reference frame and transformed all stellar positions into this frame via a linear fit (rotation, axis scales, and shear). Common stars were identified by the condition that they lie no more than $0.13"$ (one pixel) apart. The alignment procedure and identification of common stars were iterated five times and quickly (after one iteration) converged to a stable set of stars. Finally, proper motions were computed by linear fits to the time-position data. Uncertainties were estimated by assuming uniform weighting and scaling the uncertainties of the fit to a reduced $\chi^{2}$ of one. The uncertainties of the measured proper motions are shown in Fig.~\ref{fig:sigmaQuin}. The precision of our measurements is significantly worse than that achieved by \citet{Rui:2019ch} because we used a far less elaborate procedure, addressing fewer potential sources of systematics. Nevertheless, the measurements are good enough for comparison with our proper motion catalogue. For this purpose, we only used proper motions with an uncertainty of $<1.5$\,mas\,yr$^{-1}$ along each direction.
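The proper motion fit described above can be sketched as follows (a minimal version; with uniform weights, rescaling the formal fit errors to a reduced $\chi^{2}$ of one reduces to the standard ordinary least-squares slope uncertainty; the epoch spacings and positions below are ours, for illustration only):

```python
import numpy as np

def fit_proper_motion(t, x):
    """Linear fit x(t) = x0 + mu * t with uniform weights.
    Returns the slope (proper motion) and its uncertainty, with the
    residual variance scaled so that the reduced chi^2 equals one."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    A = np.vstack([t, np.ones_like(t)]).T
    (mu, x0), res, *_ = np.linalg.lstsq(A, x, rcond=None)
    dof = t.size - 2
    s2 = res[0] / dof if res.size else 0.0      # residual variance
    mu_err = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    return mu, mu_err

# epochs in years since the first observation (2010, 2011, 2012, 2016)
t = np.array([0.0, 1.1, 2.0, 6.2])
x = np.array([0.1, 3.3, 6.1, 18.7])             # positions [mas], slope ~3 mas/yr
mu, mu_err = fit_proper_motion(t, x)
```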
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{Quintuplet_GNS_HST.png}
\caption{Comparison of proper motions as given in our catalogue ($\mu_{l}, \mu_{b}$) and as measured from WFC3/HST data ($\mu_{l, HST}, \mu_{b, HST}$).}
\label{fig:Quintuplet_comparison}
\end{figure}
We show a vector point diagram of measured proper motions in the Quintuplet cluster in Fig.\,\ref{fig:VPD_quintuplet}. It appears very similar to the one presented in Fig.\,7 (top left) of \citet{Rui:2019ch}. We used a less elaborate method than they did, so the uncertainties and therefore also the scatter of the data points are probably larger. Also, we assumed zero mean motion and rotation for all the stars in the field, while \citet{Rui:2019ch} fixed their frame of reference on the probable Quintuplet cluster members. Therefore, there is an offset between their proper motions and ours.
In Fig.~\ref{fig:Quintuplet_comparison} we compare the proper motions from our catalogue with the proper motions measured on the WFC3/HST data in the Quintuplet field. For the comparison, we used stars that coincided within $0.05"$ in position and corrected for the mean offsets between the two lists ($0.51$\,mas\,yr$^{-1}$ parallel to and $0.24$\,mas\,yr$^{-1}$ perpendicular to the Galactic plane). We only considered stars with proper motion uncertainties of less than $0.8$\,mas\,yr$^{-1}$ in both datasets.
We obtain a Pearson correlation coefficient, which measures the linear relationship between the two data samples, of $0.652 \pm 0.016$ for the left plot and $0.624 \pm 0.025$ for the right plot of Fig.~\ref{fig:Quintuplet_comparison}. The uncertainties were obtained with a Monte Carlo method. The p-values for both correlations lie below the significance level of 0.05, indicating that both correlations are statistically significant. A small residual offset remains visible in the left panel of Fig.~\ref{fig:Quintuplet_comparison}.
We conclude that the agreement is good, considering the uncertainties of the individual measurements.
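The correlation measurement can be sketched on synthetic data as follows (an illustrative scheme; we assume the Monte Carlo uncertainty is obtained by perturbing each measurement within its Gaussian error and recomputing the coefficient, a detail the text does not specify):

```python
import numpy as np

rng = np.random.default_rng(42)

def pearson_mc(x, y, sx, sy, n_mc=1000):
    """Pearson correlation coefficient of two proper motion samples and a
    Monte Carlo estimate of its uncertainty: each measurement is perturbed
    within its Gaussian error and r is recomputed (illustrative scheme)."""
    x, y, sx, sy = map(np.asarray, (x, y, sx, sy))
    r = np.corrcoef(x, y)[0, 1]
    r_mc = [np.corrcoef(x + rng.normal(0.0, sx),
                        y + rng.normal(0.0, sy))[0, 1] for _ in range(n_mc)]
    return r, np.std(r_mc)

# synthetic correlated "catalogue vs HST" proper motions [mas/yr]
mu_true = rng.normal(0.0, 2.5, 300)
mu_cat = mu_true + rng.normal(0.0, 0.5, 300)
mu_hst = mu_true + rng.normal(0.0, 0.8, 300)
r, dr = pearson_mc(mu_cat, mu_hst, np.full(300, 0.5), np.full(300, 0.8))
```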
\begin{figure}[!th]
\begin{center}
\subfigure{%
\includegraphics[width=\columnwidth]{vx_triple_35_all_stars_1.3_newest.pdf}
}
\subfigure{%
\includegraphics[width=\columnwidth]{vy_double_35_all_stars_1.3_newest.pdf}
}
\\
\end{center}
\caption{Normalised velocity distributions of GC stars in the catalogue. The individual Gaussian components are indicated as dashed lines and the global solution as a thick solid line.
Top: Direction parallel to the Galactic plane.\ NSD stars moving eastwards are shown in red, NSD stars moving westwards in orange, and bulge stars in green. Bottom: Direction perpendicular to the Galactic plane.\ NSD stars are shown in orange and bulge stars in green.}
\label{fig:hist_cat}
\end{figure}
\section{Observed kinematics }
\label{section:Kinematics in the GC}
\subsection{Kinematics of stars in the GC}
In order to study the kinematics of stars inside the GC, we excluded all foreground stars, which we define as those stars with $H-K_{\rm s} < 1.3$ \citep[see Fig.~\ref{fig:CMD_hawki} and also][]{nogueras:2019cat}. These stars with low reddening are mostly located in the Galactic disc, but some stars from the near edges of the bulge or bar may also be counted among them.
\begin{table*}[!htb]
\caption{Best-fit values for the velocity distributions of GC stars.}
\centering
\begin{tabular}{c c c c}
\hline
Distribution of $\mu_{l}$ & & & \\
\hline
Component & amp & $\overline{\mu_{l}}$ & $\sigma_{\mu_{l}}$\\
& &mas\,yr$^{-1}$ & mas\,yr$^{-1}$ \\
\hline
\hline
east & $0.316\pm0.01$ & $1.991\pm0.13$ & $1.999\pm0.01$\\
west & $0.38\pm0.01$ & $-2.287\pm0.15$ & $2.49\pm0.0002$\\
broad & $0.282\pm0.01$ & $-0.065\pm0.06$ & $3\pm0.32$\\
\hline
\hline
Distribution of $\mu_{b}$ & & & \\
\hline
Component & amp & $\overline{\mu_{b}}$ & $\sigma_{\mu_{b}}$\\
& &mas\,yr$^{-1}$ & mas\,yr$^{-1}$ \\
\hline
narrow & $0.453\pm0.01$ & $-0.0004 \pm 0.0001$ & $1.499\pm0.0002$\\
broad & $0.542\pm0.01$ & $-0.0001\pm0.0001$ & $3.174\pm0.03$\\
\hline
\end{tabular}
\begin{tablenotes}
\small
\textbf{Notes:} amp: amplitudes of different Gaussians; $\overline{\mu_{l}}$: means of Gaussians fitted to the distributions of $\mu_{l}$; $\sigma_{\mu_{l}}$: standard deviations of the Gaussians fitted to the distributions of $\mu_{l}$; $\overline{\mu_{b}}$: means of Gaussians fitted to the distributions of $\mu_{b}$; $\sigma_{\mu_{b}}$: standard deviations of Gaussians fitted to the distributions of $\mu_{b}$; Gaussian function:
$\frac{\mathrm{amp}}{\sigma\sqrt{2\pi}}\,e^{-\left(x-\mu\right)^{2}/\left(2\sigma^{2}\right)}.$
\end{tablenotes}
\label{table:data}
\end{table*}
In Fig.~\ref{fig:hist_cat} we show histograms of the proper motions.
We fitted multiple Gaussians to the observed distributions. In order to constrain the number of Gaussians and to compare different models, we computed the Bayesian model log evidence ($\ln Z$) using the dynamic nested sampling provided by the \emph{dynesty} package \citep{speagle:2020}.
We regarded a model to be favoured over another when the difference in their Bayesian log evidence was larger than two \citep{Trotta:2008}.
The best fitting values and their uncertainties obtained by considering different bin sizes are listed in Table~\ref{table:data}.
A single Gaussian cannot fit the wings of the histogram of $\mu_{b}$ satisfactorily. However, a fit with two Gaussians provides an almost perfect fit (bottom panel of Fig.\,\ref{fig:hist_cat}). Both Gaussians are precisely centred on zero but have significantly different standard deviations, which are measurements of the velocity dispersion. We interpret the broader distribution, with a proper motion dispersion of $\sigma_{\mu_{b,\mathrm{broad}}}=3.17$\,mas\,yr$^{-1}$, as arising from stars from the Galactic bulge because we expect stars from the bulge to be present in our sample and because the velocity dispersion in the bulge was previously determined to have similar values \citep[][]{Clarkson:2008kx,kunder:2012,Soto:2014}. The narrower one, with $\sigma_{\mu_{b,\mathrm{narrow}}}= 1.50$\,mas\,yr$^{-1}$, then corresponds to stars in the NSD, which have a smaller velocity dispersion perpendicular to the Galactic plane than bulge stars.
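The two-component decomposition of the $\mu_{b}$ distribution can be sketched on synthetic data as follows (a simplified least-squares fit using the Gaussian definition from the notes of Table~\ref{table:data}; the actual analysis uses nested sampling with \emph{dynesty} for model selection, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    """Gaussian as defined in the notes of the best-fit table."""
    return amp / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def two_gauss(x, a1, s1, a2, s2):
    """Two zero-mean components: narrow (NSD-like) plus broad (bulge-like)."""
    return gauss(x, a1, 0.0, s1) + gauss(x, a2, 0.0, s2)

# synthetic mu_b sample mimicking the two catalogue populations [mas/yr]
rng = np.random.default_rng(1)
mu_b = np.concatenate([rng.normal(0.0, 1.5, 5000),   # NSD-like, sigma = 1.5
                       rng.normal(0.0, 3.2, 5000)])  # bulge-like, sigma = 3.2
hist, edges = np.histogram(mu_b, bins=35, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(two_gauss, centres, hist, p0=[0.5, 1.0, 0.5, 3.0])
# sorting by dispersion identifies the narrow and broad components
# regardless of the order in which the fit returns them
sig_narrow, sig_broad = sorted([abs(popt[1]), abs(popt[3])])
```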
\begin{figure*}[!htb]
\includegraphics[width=\textwidth]{rotation_all_stars.png}
\caption{Proper motion density plot for stars with low (left) and high (right) reddening.}
\label{fig:rotation}
\end{figure*}
This is consistent with previous work, which suggests that we should expect to distinguish two populations of stars by their velocity dispersion perpendicular to the Galactic plane: a broad distribution for bulge stars and a narrower one for NSD stars \citep{Schoenrich:2015}. The different velocity dispersions of the metal-rich and metal-poor stars in \cite{schultheis2021} show the mixture and co-penetration of kinematically cooler NSD stars and hotter stars from the bulge. High precision proper motion studies with WFC3/HST on fields containing the Arches and the Quintuplet clusters indicate that there are indeed stellar populations that may be identified from their kinematics as pertaining to the NSD and bulge or bar, with $\sigma_{b,\mathrm{NSD}}=0.6-0.8$\,mas\,yr$^{-1}$ and $\sigma_{b,\mathrm{Bulge}}=3.0-3.4$\,mas\,yr$^{-1}$, respectively \citep[Field Gaussians 1 and 2-3 in][]{Hosek:2015sh,Rui:2019ch}. These studies also indicate the potential existence of an additional population with intermediate properties.
The rotation of the nuclear disc broadens the velocity distribution along Galactic longitude, $\mu_{l}$. Although the histogram of the velocities parallel to the Galactic plane can be fitted satisfactorily with two Gaussians, we require a third component, corresponding to the bulge stars, which must also contribute to this histogram. The superposition of three Gaussians provides an excellent fit to the data (top panel of Fig.\,\ref{fig:hist_cat}). The broad Gaussian centred on a mean motion of $\overline{\mu_{l,\mathrm{broad}}}\approx0$\,mas\,yr$^{-1}$ and with a dispersion of $\sigma_{\mu_{l,\mathrm{broad}}}\approx3$\,mas\,yr$^{-1}$ represents the bulge stars.
\begin{figure*}[]
\begin{center}
\subfigure{%
\includegraphics[width=0.32\textwidth]{vx_double_35_vertic_1_1.3_all_stars_newest.pdf}
}
\subfigure{%
\includegraphics[width=0.32\textwidth]{vx_double_30_vertic_2_1.3_all_stars_newest.pdf}
}
\subfigure{%
\includegraphics[width=0.32\textwidth]{vx_double_30_vertic_3_1.3_all_stars_newest.pdf}
}
\\
\subfigure{%
\includegraphics[width=0.32\textwidth]{vy_double_35_vertic_1_1.3_all_stars.pdf}
}
\subfigure{%
\includegraphics[width=0.32\textwidth]{vy_double_35_vertic_2_1.3_all_stars.pdf}
}
\subfigure{%
\includegraphics[width=0.32\textwidth]{vy_double_35_vertic_3_1.3_all_stars.pdf}
}
\\
\end{center}
\caption{Normalised velocity distributions of GC stars for three sub-regions of the catalogue from the vertical division. The first, second, and third columns correspond to sub-regions 1, 2, and 3, respectively, progressing from south to north. The individual Gaussian components are presented as dashed lines and the global solution by a thick solid line. Upper panels: Direction parallel to the Galactic plane.\ NSD stars moving eastwards are shown in red, NSD stars moving westwards in orange, and bulge stars in green. Lower panels: Direction perpendicular to the Galactic plane. NSD stars are shown in orange and bulge stars in green.
}
\label{fig:hist_cat_vertic}
\end{figure*}
We interpret the other two Gaussians, with mean proper motions of $\overline{\mu_{l,\mathrm{east}}}=1.99$\,mas\,yr$^{-1}$ and
$\overline{\mu_{l,\mathrm{west}}}=-2.28$\,mas\,yr$^{-1}$ and velocity dispersions $\sigma_{\mu_{l,\mathrm{east}}}=1.99$\,mas\,yr$^{-1}$ and $\sigma_{\mu_{l,\mathrm{west}}}=2.49$\,mas\,yr$^{-1}$, as stars in the rotating NSD. Within the uncertainties, their mean velocities are of equal magnitude but opposite direction, so the rotation of the NSD is symmetric. The Gaussian centred at positive $\overline{\mu_{l}}$ corresponds to stars moving eastwards (towards positive Galactic longitude) and the one centred at negative $\overline{\mu_{l}}$ to stars moving westwards. Stars moving eastwards are located preferentially on the near side of the NSD and those moving westwards on its far side. This corresponds to the direction of Galactic rotation and to the rotation of the NSC and of the NSD \citep{Trippe:2008it,Schodel:2009zr,Feldmeier:2014kx,Chatzopoulos:2015uq,Lindqvist:1992fk,Schoenrich:2015,schultheis2021}.
The mean values of eastward and westward proper motions correspond to 80\,km\,s$^{-1}$, in good agreement with the rotation velocity of the NSD that has been derived via near-infrared spectroscopic observations of giants \citep{Schoenrich:2015} and radio observations of OH/IR stars \citep{Lindqvist:1992fk}.
Due to differential extinction along the line of sight through the NSD, we expect to detect more stars on the near side of the NSD, which should manifest itself as a higher amplitude of the Gaussian corresponding to the population with positive (eastward) $\overline{\mu_{l}}$. Although we do not detect significantly more eastward- than westward-moving stars, the mean motion does depend on the reddening selection. We can test this prediction by producing proper motion density plots for GC stars with low and high reddening. We made cuts in colour to define two groups, low-reddening ($1.3<H-K_{s}<1.7$) and high-reddening ($1.9<H-K_{s}$) stars. These plots are shown in Fig.\,\ref{fig:rotation} and confirm that stars with lower extinction move preferentially eastwards, while those with higher extinction move preferentially westwards.
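The reddening-selected comparison can be sketched as follows (a minimal illustration on synthetic data; all numbers apart from the colour cuts are ours):

```python
import numpy as np

def mean_mu_l_by_reddening(h_ks, mu_l):
    """Mean mu_l of the low-reddening (1.3 < H-Ks < 1.7) and
    high-reddening (H-Ks > 1.9) GC samples, using the colour cuts
    quoted in the text."""
    h_ks = np.asarray(h_ks, dtype=float)
    mu_l = np.asarray(mu_l, dtype=float)
    low = (h_ks > 1.3) & (h_ks < 1.7)
    high = h_ks > 1.9
    return mu_l[low].mean(), mu_l[high].mean()

rng = np.random.default_rng(0)
# near side: lower extinction, moving eastwards (positive mu_l);
# far side: higher extinction, moving westwards (negative mu_l)
h_ks = np.concatenate([rng.uniform(1.35, 1.65, 400), rng.uniform(1.95, 2.5, 400)])
mu_l = np.concatenate([rng.normal(2.0, 2.0, 400), rng.normal(-2.3, 2.5, 400)])
mean_low, mean_high = mean_mu_l_by_reddening(h_ks, mu_l)
```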
To study whether the velocity distributions change as a function of Galactic latitude, we divided our catalogue into three equal areas parallel to the Galactic plane, one on the plane and the other two above and below it (see Fig.~\ref{fig:measured_pos}). Again, we fitted Gaussians to the distributions with the {\it dynesty} package as described above. The distributions with their fits are shown in Fig.~\ref{fig:hist_cat_vertic}. In all cases, the $\mu_{l}$ distribution can be fitted with three Gaussians and the $\mu_{b}$ distribution with two. The velocity dispersions of the nuclear disc and bulge populations do not change in any significant way.
Also, the mean velocities of the eastward and westward moving populations in the nuclear disc remain equal within uncertainties in different regions.
\subsection{Kinematics of foreground stars}
Figure~\ref{fig:hist_cat_foreground} shows the velocity distribution of the foreground stars ($H-K_{s}<1.3$). The velocity distributions of these stars differ from those of the GC stars. In particular, there is a significant mean motion towards the east. This corresponds to the direction of Galactic rotation and therefore matches our expectation for the kinematics of stars in the Galactic disc. The $\mu_{l}$ distribution is asymmetric and can be fitted with two Gaussians. A single Gaussian does not appear to be sufficient to fit the $\mu_{b}$ distribution satisfactorily. The interpretation of the foreground population is complex, among other reasons because the foreground stars lie at significantly different distances from Earth. Since in this work we are concerned with the stellar populations in the GC, we do not discuss the kinematics of the foreground population further.
\begin{figure*}[]
\begin{center}
\subfigure{%
\includegraphics[width=0.45\textwidth]{vx_double_35_foreground.pdf}
}
\subfigure{%
\includegraphics[width=0.45\textwidth]{vy_double_35_foreground.pdf}
}
\\
\end{center}
\caption{Normalised velocity distributions of foreground stars.
Left: In the direction parallel to the Galactic plane. Right: In the direction perpendicular to the Galactic plane.}
\label{fig:hist_cat_foreground}
\end{figure*}
\section{Finding co-moving groups}
\label{section:Finding co-moving groups}
We applied the unsupervised clustering algorithm `density-based
spatial clustering of applications with noise' \citep[DBSCAN;][]{Ester96} to search for co-moving groups of stars in our data, using the Python implementation in scikit-learn\footnote{http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html}. This method has been successfully applied to Gaia DR2 data to find new open clusters \citep[e.g.][]{Castro-Ginard:2018, beccari:2018}. DBSCAN can handle noise because not all points are assigned to a cluster. It can also detect arbitrarily shaped clusters and does not need any prior information about the expected number of substructures in the data. Here, we show the application of this algorithm to the data from chip 2 of GNS pointing 10, described in \cite{Shahzamanian:2019}, because the Quintuplet cluster is located in this region.
DBSCAN has two parameters: a minimum number of points ($N_{min}$) and a length scale ($\epsilon$). A hypersphere with a radius of $\epsilon$ is centred on each star, and points are regarded as clustered if the number of stars inside the hypersphere is equal to or greater than the predetermined $N_{min}$. There are three types of points: core, border, and outlier. A point is a core point if at least $N_{min}$ points lie within a radius $\epsilon$ of it, while a border point is reachable from a core point but has fewer than $N_{min}$ points within radius $\epsilon$. If a point is not a core point and cannot be reached from any core point, it is labelled as an outlier. A cluster consists of the core points that are reachable from one another, together with all of their border points.
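A minimal run of the scikit-learn implementation on synthetic positions illustrates these definitions (the data and parameter values here are ours for illustration, not those used for pointing 10):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# synthetic field: one dense, compact group on top of a uniform background
group = rng.normal(0.0, 0.05, size=(80, 2))
background = rng.uniform(-1.0, 1.0, size=(200, 2))
xy = np.vstack([group, background])

# standardise to zero mean and unit standard deviation, then cluster;
# fit_predict returns one label per point, with -1 marking outliers (noise)
X = StandardScaler().fit_transform(xy)
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(X)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
```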
Here, in order to reduce the number of free input parameters of the algorithm, we obtained the $\epsilon$ value using the nearest-neighbours (KDTree) implementation in scikit-learn \citep{pedregosa2011}. A minimum threshold of ten neighbours per source was considered, and the distances to the neighbours of each source are returned by the algorithm. The optimal $\epsilon$ value is the knee of the sorted nearest-neighbour distance curve, determined with the \textit{kneed} Python package\footnote{https://pypi.org/project/kneed/}.
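The $\epsilon$ selection can be sketched as follows (a self-contained version: the distance to the tenth nearest neighbour is computed by brute force, and the knee is located as the point of maximum deviation from the chord joining the ends of the sorted distance curve, a simple stand-in for the \textit{kneed} package):

```python
import numpy as np

def kth_neighbour_distances(X, k=10):
    """Sorted distance from each point to its k-th nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)                 # column 0 is the distance to the point itself
    return np.sort(d[:, k])

def knee_index(y):
    """Index of maximum deviation from the straight line joining the
    curve's end points (a simple knee estimate)."""
    x = np.linspace(0.0, 1.0, y.size)
    yn = (y - y[0]) / (y[-1] - y[0])
    return int(np.argmax(np.abs(yn - x)))

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 0.05, size=(60, 2)),     # dense group
               rng.uniform(-1.0, 1.0, size=(150, 2))])  # background
dists = kth_neighbour_distances(X, k=10)
eps = dists[knee_index(dists)]
```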
We used stars that pass the astrometric uncertainty cut applied at the beginning of our proper motion analysis. We first ran the DBSCAN algorithm on the position space of the image (x, y), after standardising the data so that each coordinate has a mean of zero and a standard deviation of 1, using an $\epsilon$ of $\sim$0.2, obtained from the KDTree algorithm (see the left panel of Fig.~\ref{fig:epsilon}), and an $N_{min}$ of 10. We distinguish two groups in the position space (see the top panel of the left column of Fig.~\ref{fig:dbscan_plots}): the more populated one is the Quintuplet cluster, and the one with the lower density is a potential newly found group. In the bottom panel of the left column of Fig.~\ref{fig:dbscan_plots}, the proper motions of the stars belonging to the two groups found in the position space are shown. The Quintuplet cluster sources show a relatively large scatter in this panel; in spite of this, most of the stars of the new group fall outside the area covered by them, and only a few stars of the new group share the same kinematics as the Quintuplet cluster.
\begin{figure*}[]
\centering
\subfigure{%
\includegraphics[width=0.48\textwidth]{Distance_curve_xy_Quintuplet.pdf}
}
\subfigure{%
\includegraphics[width=0.48\textwidth]{Distance_curve_vxvy_Quintuplet.pdf}
}\\
\caption{Points sorted by distance to the tenth nearest neighbour in position space (left) and velocity space (right). The distance is based on the standardised data. The dashed line shows the knee value of the plot from which the best length scale is determined.
}
\label{fig:epsilon}
\end{figure*}
We also checked clustering in the velocity space ($\mu_{l}$, $\mu_{b}$) by running the DBSCAN algorithm with an $\epsilon$ of $\sim$0.2 (see the right panel of Fig.~\ref{fig:epsilon}) and an $N_{min}$ of 10. Here we identify the Quintuplet cluster; however, we cannot detect the potential new group in this space (see the top panel of the right column of Fig.~\ref{fig:dbscan_plots}). The positions of the stars selected in the velocity space are shown in the lower plot of the right column of Fig.~\ref{fig:dbscan_plots}, which includes not only the Quintuplet cluster sources but also some other sources that happen to have proper motions close to those of the cluster. Since the new group is detected in the position space but not in the velocity space, we needed to check these spaces separately rather than searching for clusters in the 4D space (positions and velocities) of our data.
The reason we cannot detect the new group of stars in the velocity space might be that it moves parallel to the Quintuplet cluster. The new group is probably part of the Quintuplet cluster, since it comprises only a few stars and does not lie very far from the cluster centre. The projected distance of this group from the Quintuplet cluster centre is about 1.3 times the half-light radius of the cluster.
Moreover, we investigated whether this new group of stars can also be found in the WFC3/HST data described in Sect.~\ref{section:Comparison with WFC3/HST data}. In Fig.~\ref{fig:quin-hst-overplot} we show the Quintuplet cluster proper motions together with the new co-moving group marked on the WFC3/HST image. Applying the DBSCAN algorithm to these data, we recover both the Quintuplet cluster and the new group (see Fig.~\ref{fig:dbscan_hst}). The proper motion uncertainties in these data are smaller than in our proper motion catalogue, and as a result there is less scatter in the vector point diagram shown in the right panel of Fig.~\ref{fig:dbscan_hst}.
We see clearly in this panel that five stars of the new group move with the Quintuplet cluster and that the remaining sources in this group are field stars that could be at any distance; therefore, they are not necessarily a group.
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{Quintuplet-hst-overplot.png}
\caption{Proper motion measurements of the Quintuplet cluster region. The new co-moving group of stars is shown with green arrows.}
\label{fig:quin-hst-overplot}
\end{figure}
Figure~\ref{fig:cmd_new_cluster} (left) shows the CMD of stars in the region using WFC3/HST data, with the new group found in both these data and our proper motion catalogue indicated.
In this figure the source with F127M-F153M < 0.5 is a foreground source. Comparing this plot with the CMD in Fig.~6 of \cite{rui:2019} shows that the members of the new group in the direction of the Quintuplet cluster (inside the green ellipse) are possibly four stars of the Quintuplet cluster that appear close in the sky. The remaining coloured sources (outside the green ellipse) are probably old giant interlopers that are field stars and do not necessarily move coherently. The CMD of the new group using the GALACTICNUCLEUS data is also shown in the right plot of Fig.~\ref{fig:cmd_new_cluster}. In this plot we also show a de-reddened CMD, produced using a dedicated extinction map following the methodology described in Appendix A of \cite{nogueras:2021}. The isochrone plotted over the CMD corresponds to a 5 Myr old stellar population from the PARSEC evolutionary models \citep[release v1.2S + COLIBRI S$\_$35 + PR16;][]{Bressan2012,Chen2014, Chen2015, Tang2014, marigo2017, Pastorelli2019}.
Our technique can detect small co-moving groups of stars, and in the future we want to apply it across the entire region of our study. Because this approach is density-based and the density can vary greatly from one region to the next, the entire region should be divided into smaller sub-regions, with DBSCAN run on each of them separately.
\begin{figure*}[]
\begin{center}
\subfigure{%
\includegraphics[width=0.48\textwidth]{DBSCAN_xy_Quintuplet.pdf}
}
\subfigure{%
\includegraphics[width=0.48\textwidth]{DBSCAN_mu_Quintuplet.pdf}
}\\
\subfigure{%
\includegraphics[width=0.48\textwidth]{mu_equivalent_xy_Quintuplet.pdf}
}
\subfigure{%
\includegraphics[width=0.48\textwidth]{xy_equivalent_vxvy_Quintuplet.pdf}
}
\\
\end{center}
\caption{Clustering search in our proper motion catalogue. Left column: Cluster selection in the position space (top) and the same sources shown in the velocity space (bottom). The Quintuplet cluster is shown in red and the new group in cyan. Right column: Cluster selection in the velocity space (top) and the same stars shown in the position space in red (bottom). In all panels, the larger coloured points are the core points and the smaller ones are the border points.
}
\label{fig:dbscan_plots}
\end{figure*}
\begin{figure*}[]
\begin{center}
\subfigure{%
\includegraphics[width=0.48\textwidth]{DBSCAN_xy_Quintuplet_new.pdf}
}
\subfigure{%
\includegraphics[width=0.48\textwidth]{vxvy_equivalent_xy_Quintuplet_new.pdf}
}
\\
\end{center}
\caption{Clustering search in WFC3/HST data. Left: Cluster selection in the position space. Right: Same sources shown in the velocity space. The Quintuplet cluster is shown in red and the new group in cyan. In all panels, the larger coloured points are the core points and the smaller ones are the border points.}
\label{fig:dbscan_hst}
\end{figure*}
\begin{figure*}[]
\begin{center}
\subfigure{%
\includegraphics[width=0.48\textwidth]{cmd_new.png}
}
\subfigure{%
\includegraphics[width=0.48\textwidth]{HKs.png}
}
\\
\end{center}
\caption{CMD of the new group. Left: CMD of stars using WFC3/HST data. The red crosses are the new co-moving group sources found in the WFC3/HST data, and the blue triangles are the ones found in our proper motion catalogue. The source with F127M-F153M < 0.5 is a foreground source. The coloured points inside the green ellipse are the new group sources moving in the direction of the Quintuplet cluster movement. Right: CMD of the new group using the GALACTICNUCLEUS data. The co-moving group of stars, excluding the foreground star, is marked in blue. The de-reddened CMD is shown by black points, and a 5 Myr isochrone is over-plotted.}
\label{fig:cmd_new_cluster}
\end{figure*}
\section{Discussion and conclusions}
\label{section:summary}
Stellar kinematics can allow us to disentangle the overlapping and co-penetrating components of the GC (the Galactic disc, bulge or bar, NSD, and NSC) and detect and characterise so far undiscovered young clusters as co-moving groups. This work presents our first step in providing precision proper motion measurements for a large fraction of the GC. We have previously demonstrated our methodology for proper motion measurements on a small GC field in \cite{Shahzamanian:2019} and discovered a new co-moving group of stars.
In this work we have used all overlapping fields between the epoch 2008 H-P$\alpha$S and the epoch 2015-2016 GNS to create the first proper motion catalogue for the central $\sim 36' \times 15'$ of the Milky Way's NSD. We have made this catalogue, which comprises $\sim$$80,000$ stars (Appendix\,\ref{app:catalogue}), publicly available.
Given that the polynomial fits used for image alignment can diverge at the edges of the images, the small FoV of NICMOS/HST images poses one of the main limitations of our work. The second most important limitation is the low signal-to-noise ratio of the narrow-band HST images, which limits the number of detected sources and their astrometric precision. Due to these limitations, the NICMOS data allow us to measure the proper motions of only a few percent of the stars detected in GNS, and we can only reach the brightest red clump stars. Nevertheless, this proper motion dataset for the GC is unprecedented in terms of the combination of its number of sources, the quality of the proper motions, and the FoV.
We clearly detect the presence of at least two GC stellar populations in the kinematic data: stars belonging to the NSD and stars belonging to the bulge. The bulge population has a net zero proper motion and a velocity dispersion of $3-3.17$\,mas\,yr$^{-1}$, in agreement with the literature. Stars in the NSD have a smaller velocity dispersion: $1.5$\,mas\,yr$^{-1}$ in the direction perpendicular to the Galactic plane and about
$2-2.5$\,mas\,yr$^{-1}$ parallel to the Galactic plane. The small FoV of the NICMOS images makes precision alignment between the two epochs difficult and impedes us from registering our proper motions in an absolute frame of reference (Gaia). This may give rise to systematic errors that increase the observed velocity dispersions. We therefore caution that the measured velocity dispersions of the nuclear disc population are probably biased towards higher values and should be interpreted as upper limits.
We clearly detect the rotation of the NSD and the effect of differential reddening along the line-of-sight through the GC.
The mean velocities of eastward and westward moving stars correspond to 80\,km\,s$^{-1}$, in agreement with literature values derived from line-of-sight velocities of stars in the nuclear disc.
We compared our NSD velocity distributions to those obtained by axisymmetric self-consistent dynamical models of the NSD from \cite{Sormani2021}. Their models are based on kinematics and do not take photometric information into account. We selected the NSD stars from the same region as our study (see Fig.~1) from their model-generated data and fitted Gaussians with the \textit{dynesty} package. As a result, we obtain a velocity dispersion of $1.49$\,mas\,yr$^{-1}$ for stars in the direction perpendicular to the Galactic plane, which agrees well with our observations (see Table~\ref{table:data}). However, the velocity dispersion obtained for the stars in the direction parallel to the Galactic plane is about $1.5$\,mas\,yr$^{-1}$, which is less than what we find from our data. The larger velocity dispersion of the nuclear disc that we obtain for $\mu_{l}$ compared to the \cite{Sormani2021} models is likely due to the systematic uncertainties caused by the small FoV of the NICMOS data, which prevents us from registering our proper motions in an absolute reference frame.
Furthermore, we demonstrate a technique, based on a density-based clustering algorithm, that shows how proper motions can be used to identify new potential young co-moving groups, which can be subsequently targeted with instruments with smaller FoVs but higher angular resolution for more detailed follow-up studies. We applied the method to the Quintuplet cluster region, which resulted in the detection of a new potential co-moving group of stars. However, further analysis shows that four sources of this group are possibly stars of the Quintuplet cluster and that the remaining sources are field stars.
In future work, we will expand this catalogue with a new HAWK-I imaging epoch. The higher sensitivity and larger FoV of HAWK-I mean that we will be able to measure the kinematics of about 100 times more stars, minimise the uncertainties of aligning data from different epochs, and tie the measured proper motions into the Gaia reference frame. We will then obtain a much clearer view of the kinematics of the stellar populations at the GC.
\begin{acknowledgements}
The authors would like to thank the anonymous referee
for the helpful comments on this paper. B.Sh, R.S, and A.T.G.C acknowledge financial support from the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709). B.Sh, A.T.G.C, A.A, and R.S acknowledge financial support from national project PGC2018-095049-B-C21 (MCIU/AEI/FEDER, UE). F.N.L and M.C.S gratefully acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 138713538 SFB\,881 (The Milky Way System, subproject B8). M.C.S furthermore acknowledges support from the ERC via the ERC Synergy Grant ``ECOGAL'' (grant 855130). F.N.L acknowledges the sponsorship provided by the Federal Ministry for Education and Research of Germany through the Alexander von Humboldt Foundation. This work is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 195.B-0283. We thank the staff of ESO for their great efforts and helpfulness. This research is based on observations made with the NASA/ESA Hubble Space Telescope
obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. These observations are associated with programs 11671, 12318, 12667, and 14613.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\IEEEPARstart{N}{umerous} applications involving Unmanned Aerial Vehicles (UAVs), and in
particular quadrotors, require them to move inside areas characterized by
physical boundaries, obstacles and even tight space constraints (as e.g., urban
environments) in order to accomplish their robotics tasks. Such applications
are, for example, structural inspections, transportation tasks, surveillance and
search and rescue missions. Trajectory generation, a core step for physical
task realization \cite{dadkhah2012survey}, becomes extremely challenging in this
scenario. A physically realizable trajectory must satisfy (i) the (nonlinear)
system dynamics, (ii) the physical limits of the vehicle, such as the maximum
thrust, and (iii) the position constraints.
Although safety (ensured by a feasible trajectory) is the primary requirement
for all applications, trajectory optimization
is becoming necessary in different application domains.
The cost to minimize can be, for example, the time to execute a
maneuver (in a search and rescue scenario), the energy consumption (during long
endurance missions), or the ``distance'' from a desired unfeasible state-input curve
(during inspections). The further requirement of performance optimization
poses an additional challenge in the trajectory generation problem.
The problem of computing optimal paths (or trajectories)
for UAVs (e.g., \cite{bottasso2008path} and \cite{ambrosino2009path}) has received significant attention and a number of
algorithms for quadrotors have been proposed to accomplish complex tasks, e.g., landing on a
moving target \cite{herisse2012landing} and blind navigation in unknown
populated environments \cite{naldi2015robust}. Focusing on collision avoidance,
two different approaches, namely reactive or planning, can be applied. The
reactive approach is based on navigation laws that prevent possible
collisions. It can be performed, e.g., by modulating the velocity reference
\cite{hou2016dynamic}, selecting ad-hoc reference way-points
\cite{furci2015plan}, or defining a harmonic potential field
\cite{masoud2015plan}. On the contrary, the planning approach deals with a
problem involving dynamics and state-input constraints with (possibly) a
performance criterion to optimize. The majority of the planning algorithms
regarding quadrotors, such as \cite{cowling2010direct},
\cite{mellinger2011minimum}, \cite{bouktir2008trajectory}, \cite{van2013time}, \cite{chen2016online},
takes advantage of the differential flatness property and relies on approximations via motion primitives.
When dealing with obstacle dense environments, trajectory generation is often
performed using a decoupled approach (\cite{bry2015aggressive},
\cite{koyuncu2008probabilistic},
\cite{bouffard2009hybrid}). In
a first stage, a collision-free path is generated by sampling-based path planning
algorithms, such as the Rapidly-exploring Random Tree (RRT) in
\cite{bry2015aggressive, koyuncu2008probabilistic} or the Probabilistic Roadmap
(PRM) in
\cite{bouffard2009hybrid}, and without the dynamics constraint. In a
second stage, an optimal trajectory (satisfying the system dynamics) is
generated from the collision-free path. Optimization techniques such as
\cite{cowling2010direct}, \cite{mellinger2011minimum},
\cite{bouktir2008trajectory} can be used at this stage.
In order to overcome the limitations due to the
decoupled approach,
a variant of the RRT algorithm is developed in \cite{devaurs2015optimal},
an approximated dynamics with an a-posteriori correction is used in \cite{allen2016real}
and a space-parameterized problem reformulation, suitable for modeling complex flight scenarios, is adopted in \cite{van2013time}.
Differently from the previous works, in \cite{hehn2015real} the structure of the minimum-time trajectories is found via Pontryagin's minimum principle. Nevertheless, position constraints are not considered.
Finally, in \cite{augugliaro2012generation} a discretized simple point-mass dynamics and
approximated convex constraints are considered.
The approximation of non-convex constraints into convex ones is also used in \cite{augugliaro2013dance}, in which a sequential convex programming approach is used to achieve a collision free motion for dancing quadrotors.
Our main contribution is the design of an optimization framework to generate
feasible minimum-time quadrotor trajectories in structured environments as,
e.g., rooms, corridors, passages, or urban areas. Our strategy computes
optimal trajectories that satisfy the quadrotor nonlinear dynamics.
The strategy can be applied to general models, which may be more complicated than
the differentially flat ones.
Instead of addressing the minimum-time problem in its standard free-horizon
formulation, we derive a fixed-horizon reformulation in which transverse
coordinates, expressing the ``transverse'' distance from a frame path, are used
to parameterize the vehicle position. The resulting problem, having a spatial
parameter as independent variable, is easier to solve than the
time-parametrized one. Position constraints can be easily added into the
reformulated problem by defining the constraint boundaries as a function of
the spatial parameter and shaping them according to the presence of obstacles.
Approximate solutions to the infinite-dimensional optimization problem are
numerically computed by combining the Projection Operator Newton method for
Trajectory Optimization (PRONTO) \cite{hauser2002projection} with a barrier
function approach \cite{hauser2006barrier}. This method generates
trajectories in a numerically stable manner and guarantees recursive
feasibility during the algorithm evolution, i.e., at each algorithm iteration
a system trajectory is available. Moreover the approximated solution always
satisfies the constraints since the barrier function approach is an interior
function method.
As an additional contribution, we present numerical computations to show the
effectiveness of the proposed strategy on two challenging scenarios. In the
first one, the moving space is delimited by rooms with obstacles of different shapes.
In the second scenario, the constrained
environment is a tubular region delimited by hula hoops.
The optimal minimum-time trajectory related to this second scenario is experimentally
performed on our nano-quadrotor testbed.
Our algorithm compares to the literature in the following way.
The majority of works, such as \cite{cowling2010direct, mellinger2011minimum, bouktir2008trajectory, van2013time, chen2016online}, uses the differential
flatness to avoid the integration of nonlinear differential equations, to reduce
the order of the problem and to simplify the definition of constraints
\cite{cowling2010direct}.
On the contrary, our strategy does not rely on the differential flatness hypothesis and thus it can be applied to more complex models.
In the previously cited works, the optimization problem is posed in the flat output space, where outputs are approximated using motion primitives, such as
polynomial functions \cite{cowling2010direct, mellinger2011minimum,
chen2016online}, B-splines \cite{bouktir2008trajectory}, or ``convex
combinations of feasible paths" \cite{van2013time}. The optimization variables are thus the parameters of the motion primitives.
Differently from these works, we do not rely on motion primitives: the state-input trajectory is the optimization variable in our problem formulation. Similarly to the problem formulation in \cite{hauser2006motorcycle}, our reformulated minimum-time problem has a spatial parameter, instead of time, as independent variable. While in \cite{hauser2006motorcycle} the maximum velocity profile (for a given path) is computed for a motorcycle model by using a quasi-static approximation of the dynamics, we optimize the whole state-input trajectory and we consider the full nonlinear dynamics of the quadrotor.
Finally, other optimization strategies using the PRONTO method are \cite{hauslerenergy} and \cite{rucco2015virtual}, which aim to compute respectively minimum-energy trajectories for two-wheeled mobile robots and minimum-distance trajectories (from an unfeasible desired maneuver) for UAVs.
Differently from these works, we consider a more general three-dimensional space with position constraints and we
reformulate the minimum-time problem by using the transverse coordinates.
The paper is organized as follows. In Section \ref{sec:problem_formulation} we
present the standard formulation of the optimization problem we aim to
solve. In Section \ref{sec:strategy} our trajectory generation strategy, based
on an appealing reformulation of the problem, is described. Finally, in
Section \ref{sec:computations}, we provide numerical computations and
experiments, and discuss interesting features of the computed minimum-time
trajectories.
\section{The quadrotor minimum-time problem}
\label{sec:problem_formulation}
We first briefly introduce the quadrotor model used in the paper and then recall
the standard problem formulation.
\subsection{Quadrotor model}
The quadrotor dynamics can be described by the so called vectored-thrust
dynamical model in \cite{hua2013introduction}, where the gravity is the only external force and the generated torque does not influence the translational dynamics,
i.e.,
\begin{align}
\dot{\boldsymbol{p}} &= {\text{\textbf{v}}}\label{eq:state_pos}\\
\boldsymbol{\dot{\text{\textbf{v}}}}& = g \boldsymbol{e}_3 - \frac{F}{m} R(\boldsymbol{\Phi}) \boldsymbol{e}_3 \label{eq:state_vel}\\
\dot{\boldsymbol{\Phi}}&=J(\boldsymbol{\Phi}) \boldsymbol{\omega} \label{eq:state_ang}\\
\boldsymbol{\dot{\omega}} &= -I^{-1}\hat{\boldsymbol{\omega}} I \boldsymbol{\omega} + I^{-1} \boldsymbol{\gamma}, \label{eq:state_omega}
\end{align}
with $\boldsymbol{p}=[p_1 \; p_2 \; p_3]^T$, $\text{\textbf{v}}=[\text{v}_1 \; \text{v}_{2} \; \text{v}_3]^T$, $\boldsymbol{\Phi} = [\varphi \; \theta \; \psi]^T$, where $\varphi$, $\theta$, $\psi$ are respectively the roll, pitch and yaw angles, and $\boldsymbol{\omega} = [p \; q \; r]^T$.
The symbols in equations (\ref{eq:state_pos})--(\ref{eq:state_omega}) are defined in Table~\ref{tb:symbols}, where $\mathcal{F}_i$ and $\mathcal{F}_b$ respectively denote the inertial and the body frame.
\begin{table}[hb]
\begin{center}
\caption{Nomenclature}\label{tb:symbols}
\begin{tabular}{cc}
\hline\\[-2ex]
$\boldsymbol{p} \in {\mathbb{R}}^3$ & position vector expressed in $\mathcal{F}_i$ \\[0.5ex]
$\text{\textbf{v}}\in {\mathbb{R}}^3$ & velocity vector expressed in $\mathcal{F}_i$\\[0.5ex]
$\boldsymbol{\Phi} \in {\mathbb{R}}^3$ & vector of angles (yaw-pitch-roll w.r.t. current frame)\\[0.5ex]
$R(\boldsymbol{\Phi}) \!\in\! SO(3)$ & rotation matrix to map vectors in $\mathcal{F}_b$ into vectors in $\mathcal{F}_i$\\[0.5ex]
$\boldsymbol{\omega} \in {\mathbb{R}}^3$ & angular rate vector expressed in $\mathcal{F}_b$\\[0.5ex]
$\hat{\boldsymbol{\omega}} \in so(3)$ & skew-symmetric matrix associated to $\boldsymbol{\omega}$ \\[0.5ex]
$J(\boldsymbol{\Phi}) \in \mathbb{R}^{3 \times 3}$ & matrix mapping $\boldsymbol{\omega}$ into $\dot{\boldsymbol{\Phi}}$ \\[0.5ex]
$m \in {\mathbb{R}}$ & vehicle mass\\[0.5ex]
$I \in \mathbb{R}^{3 \times 3}$ & inertia matrix\\[0.5ex]
$g \in {\mathbb{R}}$ & gravity constant\\[0.5ex]
$\boldsymbol{e}_3\in {\mathbb{R}}^3$ & vector defined as $\boldsymbol{e}_3:=[ 0 \; 0 \; 1]^T$ \\[0.5ex]
$F \in {\mathbb{R}}$ & thrust\\[0.5ex]
$\boldsymbol{\gamma} \in {\mathbb{R}}^3$ & torque vector\\[0.5ex]
\hline
\end{tabular}
\end{center}
\end{table}
For the vehicle maneuvering, we adopt a cascade control scheme with an off-board
position/attitude control loop and an on-board angular rate controller.
Assuming that the \emph{virtual} control input $\boldsymbol{\omega}$ is tracked by the
on-board angular rate controller, we restrict our trajectory generation problem
to the position/attitude subsystem (\ref{eq:state_pos})--(\ref{eq:state_ang}),
which can be written in state-space form as
\begin{equation}
\dot{\boldsymbol{x}}(t) = f(\boldsymbol{x}(t),\boldsymbol{u}(t)),
\label{eq:state_space}
\end{equation}
with state $\boldsymbol{x} = [\boldsymbol{p}^T \; \text{\textbf{v}}^T \; \boldsymbol{\Phi}^T]^T,$ input $\boldsymbol{u} = [\boldsymbol{\omega}^T \; F]^T$ and suitably defined $f$.
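For illustration, the subsystem (\ref{eq:state_pos})--(\ref{eq:state_ang}) can be coded directly. The sketch below assumes a Z-Y-X (yaw-pitch-roll) rotation convention and an arbitrary mass value; it is a numerical sketch, not the implementation used for the computations in this paper:

```python
import numpy as np

def quad_dynamics(x, u, m=0.5, g=9.81):
    """Position/attitude subsystem: state x = [p, v, Phi] (9,),
    input u = [omega, F] (4,). Mass m is an illustrative value."""
    v = x[3:6]
    phi, th, psi = x[6:9]
    omega, F = u[0:3], u[3]
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(th), np.sin(th)
    cs, ss = np.cos(psi), np.sin(psi)
    # R(Phi): body -> inertial rotation, Z-Y-X (yaw-pitch-roll) convention
    R = np.array([[cs*ct, cs*st*sf - ss*cf, cs*st*cf + ss*sf],
                  [ss*ct, ss*st*sf + cs*cf, ss*st*cf - cs*sf],
                  [-st,   ct*sf,            ct*cf]])
    # J(Phi): maps body angular rates omega to Euler-angle rates
    J = np.array([[1.0, sf*st/ct, cf*st/ct],
                  [0.0, cf,       -sf],
                  [0.0, sf/ct,    cf/ct]])
    e3 = np.array([0.0, 0.0, 1.0])
    vdot = g*e3 - (F/m) * (R @ e3)   # gravity minus mass-normalised thrust
    return np.concatenate([v, vdot, J @ omega])
```

At hover ($\boldsymbol{\Phi}=0$, $\boldsymbol{\omega}=0$, $F=mg$) the right-hand side vanishes, which is a quick sanity check of the signs; note also the singularity of $J(\boldsymbol{\Phi})$ at $\theta = \pm\pi/2$, which motivates the pitch constraint introduced below.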
\subsection{Quadrotor minimum-time problem: standard formulation}
We deal with the following optimal control problem:
\begin{align}
\begin{split}
\min_{\boldsymbol{x}(\cdot),\boldsymbol{u}(\cdot),T} &\;\; T\\
\text{subj. to} &\;\; \dot{\boldsymbol{x}}(t) = f(\boldsymbol{x}(t),\boldsymbol{u}(t)), \quad \boldsymbol{x}(0)= \boldsymbol{x_0} \; \text{\emph{(dynamics)}}\\
& \;\; \boldsymbol{x}(T) \in {X}_T \; \text{\emph{(final constraint)}}\\
& \;\; |p(t)| \leq p_{max} \; \text{\emph{(roll rate)}}\\
& \;\; |q(t)| \leq q_{max} \; \text{\emph{(pitch rate)}}\\
& \;\; |r(t)| \leq r_{max} \; \text{\emph{(yaw rate)}}\\
& \;\; 0 < F_{min} \leq F(t) \leq F_{max} \; \text{\emph{(thrust)}} \\
& \;\; |\varphi(t)| \leq \varphi_{max}(t) \; \text{\emph{(roll angle)}}\\
& \;\; |\theta(t)| \leq \theta_{max}(t) \; \text{\emph{(pitch angle)}}\\
& \;\; |\psi(t)| \leq \psi_{max}(t) \; \text{\emph{(yaw angle)}}\\
& \;\; c_{obs}(\boldsymbol{p}(t)) \leq 0 \; \text{\emph{(position constraints)}},
\end{split}
\label{eq:mintime_standard}
\end{align}
where ${X}_T \subset {\mathbb{R}}^9$ is a desired final region, $p_{max}$, $q_{max}$ and $r_{max}$ are bounds on roll, pitch and yaw rate,
respectively, $F_{min}$ and $F_{max}$ are lower and upper bounds on thrust,
$\varphi_{max}(\cdot)$, $\theta_{max}(\cdot)$ and $\psi_{max}(\cdot)$ are bounds on roll-pitch-yaw angles, and
$c_{obs} : {\mathbb{R}}^3 \rightarrow {\mathbb{R}}$ represents position
constraints.
The $t$-dependent constraints in \eqref{eq:mintime_standard} hold for all $t \in [0,T]$, with the exception of $c_{obs}(\boldsymbol{p}(t)) \leq 0$, which holds for all $t \in [0,T)$.
The bounds on the angular rates avoid fast solutions. The vehicle thrust is also limited:
quadrotor vehicles can only generate positive thrust and the maximum rotor speed
is limited. Furthermore, constraints on roll and pitch angles are imposed into the optimization
problem in order to avoid acrobatic vehicle configurations and to satisfy $\theta \neq \pm \frac{\pi}{2}$ (which makes the matrix $J(\boldsymbol{\Phi})$ in \eqref{eq:state_ang} always well defined).
Time dependent boundaries can be used for roll and pitch constraints when the vehicle has to move through small passages.
The constraint on the yaw angle may be useful in applicative scenarios in which a sensor, e.g., a camera, is provided onboard the vehicle and needs to be pointed toward a target region. The position constraints take into account physical boundaries (possibly shaped by the presence of obstacles) and may also represent GPS denied areas or spaces with limited communication.
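As a sketch, the pointwise constraints of problem (\ref{eq:mintime_standard}) can be checked sample by sample; the bound values and the spherical $c_{obs}$ below are placeholders chosen for illustration, not the ones used in our computations:

```python
import numpy as np

def constraints_ok(x, u, lim, c_obs):
    """True if one state-input sample satisfies the pointwise
    constraints of the minimum-time problem (bounds passed in `lim`)."""
    phi, th, psi = x[6:9]
    p, q, r = u[0:3]          # body angular rates
    F = u[3]                  # thrust
    return (abs(p) <= lim["p_max"] and abs(q) <= lim["q_max"]
            and abs(r) <= lim["r_max"]
            and lim["F_min"] <= F <= lim["F_max"]
            and abs(phi) <= lim["phi_max"] and abs(th) <= lim["th_max"]
            and abs(psi) <= lim["psi_max"]
            and c_obs(x[0:3]) <= 0.0)

# illustrative bounds and a spherical flight volume of radius 10 m
lim = {"p_max": 2.0, "q_max": 2.0, "r_max": 1.0,
       "F_min": 0.5, "F_max": 10.0,
       "phi_max": 0.8, "th_max": 0.8, "psi_max": np.pi}
c_obs = lambda pos: np.linalg.norm(pos) - 10.0
```

In the optimization itself these inequalities are not checked pointwise but enforced through barrier functions, as described in the next section.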
\section{Minimum-Time Trajectory Generation Strategy}
\label{sec:strategy}
In this section, we describe our strategy to compute minimum-time
trajectories.
Minimum-time problem \eqref{eq:mintime_standard} is difficult to solve, since it
is a constrained, free-horizon problem (time $T$ is an optimization
variable). For this reason, instead of directly designing an algorithm to solve
problem \eqref{eq:mintime_standard}, we provide a strategy to obtain an
equivalent, but computationally more appealing, fixed-horizon formulation.
In the following, we give an informal idea of the strategy steps to derive the new problem formulation. First, we
define a \emph{frame path} as a (purely geometric) curve in ${\mathbb{R}}^3$ used to
express the quadrotor position in terms of new coordinates. That is, as depicted in Figure \ref{fig:manreg}, the
position is identified by the arc-length of the point on the path at minimum
distance and by two transverse coordinates expressing how far the quadrotor
position is from the curve.
Second, we rewrite the dynamics in terms of the transverse coordinates and show
that it depends on time only through the arc-length time-evolution. Thus, by
using the arc-length as independent variable, we obtain a ``space-dependent''
transverse dynamics.
Third, the time $T$ can be expressed itself as a function of the
arc-length over a fixed ``spatial'' horizon $[0,L]$, with $L$ being the total
length of the frame path. Thus, minimizing $T$ can be rewritten as minimizing
an integral function over the fixed spatial interval $[0,L]$. Similarly,
pointwise constraints can be written in terms of the transverse coordinates and
as function of the arc-length.
The resulting fixed-horizon optimal control problem is solved by using
the Projection Operator Newton method (PRONTO), \cite{hauser2002projection},
combined with a barrier function approach to handle the constraints,
\cite{hauser2006barrier}.
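One common construction for the barrier, in the spirit of \cite{hauser2006barrier} (the exact relaxation used there may differ in form), replaces $-\log(\cdot)$ below a threshold $\delta$ with a quadratic, so that the penalty stays finite and $C^1$ everywhere while the classical log barrier is recovered in the interior:

```python
import numpy as np

def relaxed_log_barrier(z, delta=0.1):
    """-log(z) for z > delta; below delta, a quadratic extension chosen
    so that value and slope match at z = delta. A constraint c <= 0 is
    penalised through z = -c."""
    z = np.asarray(z, dtype=float)
    quad = 0.5*(((z - 2.0*delta)/delta)**2 - 1.0) - np.log(delta)
    return np.where(z > delta, -np.log(np.maximum(z, delta)), quad)
```

The quadratic branch keeps the penalty defined even for infeasible iterates, while the interior branch makes the approximated solution satisfy the constraints strictly, consistent with the interior-point character of the method.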
We provide a detailed and formal explanation of the strategy steps in the
following subsections.
\subsection{Frame path}
\label{sec:3a}
The first step of the strategy is the
generation
of an arc-length parameterized frame path $\bar{\boldsymbol{p}}_f(s)$,
$\forall {s} \in [0,L]$, where $s$ is the arc-length of the path and $L$ is
its total length. In the following, we denote the arc-length parameterized
functions with a bar, and the derivative with respect to the arc-length with a
prime, i.e.,
$\bar{\boldsymbol{p}}_f'(s) :=
d\bar{\boldsymbol{p}}_f(s)/ds$.
The frame path $\bar{\boldsymbol{p}}_{f}(\cdot)$ has to be locally a
non-intersecting $C^2$ curve with non-vanishing
$\bar{\boldsymbol{p}}'_{f} (\cdot)$. Note that the frame path is only a
geometric path: it is not required to satisfy the position constraints. One
possibility is to compute the frame path as a $C^{\infty}$ geometric curve,
e.g., using arctangent functions as in our numerical computations. More details
on the frame path used for our numerical computations are given in
Section \ref{sec:computations}.
The frame path is used to parameterize the inertial position of the vehicle in the new transverse coordinates, as will become clear later. In order to define the transverse coordinates, we consider the Serret-Frenet frame with origin at $\bar{\boldsymbol{p}}_{f}(s)$, defined $\forall s \in [0,L]$.
In particular, the tangent, normal and bi-normal vectors, respectively $\bar{\boldsymbol{t}}(s), \bar{\boldsymbol{n}}(s),
\bar{\boldsymbol{b}}(s)$, are defined, with components in the inertial frame, as
\begin{align}
\bar{\boldsymbol{t}}(s)&:=\bar{\boldsymbol{p}}'_f(s),\\
\bar{\boldsymbol{n}}(s)&:=\frac{\bar{\boldsymbol{p}}''_f(s)}{\bar{k}(s)},\\
\bar{\boldsymbol{b}}(s)&:= \bar{\boldsymbol{t}}(s) \times
\bar{\boldsymbol{n}}(s),
\end{align}
where $\bar{k}(s):=\Vert \bar{\boldsymbol{p}}''_f(s)\Vert_2$ is the
curvature of $\bar{\boldsymbol{p}}_f(\cdot)$ at $s$.
Moreover, we define the rotation matrix
\begin{align}
\bar{R}_{SF}:=[\: \bar{\boldsymbol{t}} \; \bar{\boldsymbol{n}} \; \bar{\boldsymbol{b}}\:]
\label{eq:Rsf}
\end{align}
mapping
vectors with components in the Serret-Frenet frame into vectors with
components in the inertial frame.
According to the Serret-Frenet formulas \cite{PSM:08}, the arc-length derivative of the Serret-Frenet rotation matrix is
\begin{equation}
\bar{R}'_{SF}(s) =
\bar{R}_{SF}(s)
\left[
\begin{array}{ccc}
0 & -\bar{k}(s) & 0 \\
\bar{k}(s) & 0 & -\bar{\tau}(s) \\
0 & \bar{\tau}(s) & 0 \\
\end{array}
\right],
\label{eq:Rsf'*dot_s_lqr}
\end{equation}
where $\bar{\tau}(s):=-\bar{\boldsymbol{n}}(s)^T \bar{\boldsymbol{b}}'(s)$ is
the torsion of $\bar{\boldsymbol{p}}_f(\cdot)$ at $s$, consistently with \eqref{eq:Rsf'*dot_s_lqr}.
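The Frenet quantities above lend themselves to a simple numerical check. The following Python sketch (ours, outside the formal development; the helper name and the finite-difference scheme are our choices) recovers $\bar{\boldsymbol{t}}$, $\bar{\boldsymbol{n}}$, $\bar{\boldsymbol{b}}$, $\bar{k}$ and $\bar{\tau}$ from an arc-length sampled path:

```python
import numpy as np

def frenet_frame(p, ds):
    """Finite-difference Frenet data for an arc-length sampled path
    p (N x 3, uniform step ds): tangent t, normal n, binormal b,
    curvature k and torsion tau (convention b' = -tau n)."""
    t = np.gradient(p, ds, axis=0)        # p'(s); unit norm for arc-length param.
    ddp = np.gradient(t, ds, axis=0)      # p''(s)
    k = np.linalg.norm(ddp, axis=1)       # curvature k(s) = ||p''(s)||
    n = ddp / k[:, None]                  # unit normal
    b = np.cross(t, n)                    # unit binormal
    db = np.gradient(b, ds, axis=0)
    tau = -np.einsum('ij,ij->i', n, db)   # torsion tau = -n^T b'
    return t, n, b, k, tau
```

For instance, on a unit-pitch helix the recovered curvature and torsion match the analytic values $a/(a^2+b^2)$ and $b/(a^2+b^2)$ away from the endpoints.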
\subsection{Transverse dynamics}
\label{sec:transv}
The second step of the strategy is the derivation of the transverse dynamics by using the transverse coordinates defined with respect to the frame path $\bar{\boldsymbol{p}}_f(\cdot)$. In order to rewrite the standard dynamics (\ref{eq:state_pos}-\ref{eq:state_ang}) into the transverse dynamics, we proceed as follows.
First, we design a change of coordinates from the inertial position $\boldsymbol{p} \in {\mathbb{R}}^3$ to
the transverse coordinate vector $\boldsymbol{w} \in {\mathbb{R}}^2$, such that
$\boldsymbol{w} = [w_1 \; w_2]^T$,
where $w_1$ and $w_2$ are the transverse coordinates.
Let us consider the quadrotor center of mass with position $\boldsymbol{p}(t)$.
As depicted in Figure \ref{fig:manreg}, its orthogonal projection on the frame path identifies a point with position $\bar{\boldsymbol{p}}_f(s_f(t))$,
where the function $s_f : {\mathbb{R}}_{0}^+ \rightarrow {\mathbb{R}}_{0}^+$ is
\begin{align}
s_f(t):=\text{arg} \min_{s \in {\mathbb{R}}_0^+}
\|\boldsymbol{p}(t)-\bar{\boldsymbol{p}}_f(s)\|^2.
\label{eq:pi-lpqr}
\end{align}
For simplicity, in the following we use $s_f^t := s_f(t)$ and $\dot{s}_f^t := \dot{s}_f(t)$.
Note that the minimizing arc-length is unique provided that $\bar{\boldsymbol{p}}_f(\cdot)$
is locally a non-intersecting $C^2$ curve with non-vanishing $\bar{\boldsymbol{p}}'_f (\cdot)$, and the quadrotor position remains sufficiently close to the path.
By mapping
$\boldsymbol{p}-\bar{\boldsymbol{p}}_f(s_f^t)$ into a vector with components in the Serret-Frenet frame attached to $\bar{\boldsymbol{p}}_f(s_f^t)$, we obtain
\begin{align}
\boldsymbol{d} &:= \bar{R}_{SF}(s_f^t)^T (\boldsymbol{p}-\bar{\boldsymbol{p}}_f(s_f^t)).
\label{eq:d}
\end{align}
Noticing that the component related to the tangent vector is always zero by construction, we define the components $w_1$ and $w_2$
of the transverse error vector $\boldsymbol{w}$
as, respectively, the second and third components of $\boldsymbol{d}$, i.e.,
\begin{align}
\begin{split}
w_1& := \bar{\boldsymbol{n}}(s_f^t)^T (\boldsymbol{p}-\bar{\boldsymbol{p}}_f(s_f^t)),\\
w_2& := \bar{\boldsymbol{b}}(s_f^t)^T
(\boldsymbol{p}-\bar{\boldsymbol{p}}_f(s_f^t)),
\end{split}
\label{eq:x_change_coordinates}
\end{align}
and thus obtaining
\begin{align}
\boldsymbol{d} = [0 \; w_1 \; w_2]^T.
\label{eq:d-def}
\end{align}
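For concreteness, the projection \eqref{eq:pi-lpqr} and the transverse coordinates \eqref{eq:x_change_coordinates} can be evaluated numerically by a brute-force search over an arc-length sampled path; the following Python sketch (our hypothetical helper, with a discretized arg min) is one way to do so:

```python
import numpy as np

def transverse_coords(p, path, n, b, ds):
    """Discretized version of eq. (pi-lpqr): pick the path sample
    closest to the position p, then project the error on the normal
    and binormal vectors to obtain w1 and w2."""
    i = int(np.argmin(np.linalg.norm(path - p, axis=1)))
    s_f = i * ds                     # arc-length of the closest sample
    e = p - path[i]                  # position error w.r.t. the path
    return s_f, np.array([n[i] @ e, b[i] @ e])
```

By construction the tangential component of the error vanishes at the minimizer, which is why only the normal and binormal projections are returned.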
\vspace{-0.2cm}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.3]{manreg.pdf}\\
\caption{Selection of the arc-length $s$ identifying the point on the frame path at minimum distance from the quadrotor position at the time instant $t$.}
\label{fig:manreg}
\end{center}
\end{figure}
Second, we rewrite equation \eqref{eq:state_pos} using $\boldsymbol{w}$ instead of $\boldsymbol{p}$.
We note that the invertible function $s_f(\cdot)$ provides a change of variables from the time $t$ to the arc-length $s$.
A generic arc-length function $\bar{\alpha}(\cdot)$ can be expressed as the time function $\bar{\alpha}(s_f(\cdot))$ and its time derivative is $\frac{d\bar{\alpha}(s_f(t))}{dt} = \bar{\alpha}'(s_f^t) \; \dot{s}_f^t$.
Let us rewrite equation \eqref{eq:state_pos}.
By using
equation \eqref{eq:d}, the position of the quadrotor center of mass $\boldsymbol{p}(t)$, at time
instant $t$, can be written as
\begin{align}
\boldsymbol{p}(t) = \bar{\boldsymbol{p}}_f(s_f^t)+\bar{R}_{SF}(s_f^t) \; \boldsymbol{d}(t).
\label{eq:y=y_xi+Rsf*d_lqr}
\end{align}
Differentiating \eqref{eq:y=y_xi+Rsf*d_lqr} with respect to time, since \eqref{eq:state_pos} holds, we get
\begin{equation}
\text{\textbf{v}}(t)=\bar{\boldsymbol{p}}'_f(s_f^t) \; \dot{s}_f^t +
{\bar{R}}_{SF}'(s_f^t) \; \dot{s}_f^t \; \boldsymbol{d}(t) +
\bar{R}_{SF}(s_f^t) \; \boldsymbol{\dot{d}}(t).
\label{eq:dot_y=y_xi+Rsf*d_lqr}
\end{equation}
Multiplying both sides of equation \eqref{eq:dot_y=y_xi+Rsf*d_lqr} by $\bar{R}^{T}_{SF}$,
using \eqref{eq:Rsf'*dot_s_lqr}, \eqref{eq:d-def} and $\bar{\boldsymbol{p}}'_f(s_f^t) = \bar{R}_{SF}(s_f^t)
[1 \; 0 \; 0]^T$,
we get
\begin{equation*}
\left[
\begin{array}{c}
0 \\
\dot{w}_1(t)\\
\dot{w}_2(t)
\end{array}
\right]
+
\dot{s}_f^t
\left[
\begin{array}{ccc}
1-\bar{k}(s_f^t) w_1(t)\\
-\bar{\tau}(s_f^t) w_2(t)\\
\bar{\tau}(s_f^t) w_1(t)\\
\end{array}
\right]
-
\bar{R}_{SF}^T(s_f^t) \text{\textbf{v}}(t)
=
0,
\label{eq:dot_y2}
\end{equation*}
i.e., using \eqref{eq:Rsf},
\begin{align}
\dot{s}_{f}^t &= \frac{\bar{\boldsymbol{t}}(s_{f}^t)^T \text{\textbf{v}}(t)}{1-\bar{k}(s_{f}^t) w_1(t)} \label{eq:s_}\\
\dot{w}_1(t) &= \bar{\boldsymbol{n}}(s_{f}^t)^T \text{\textbf{v}}(t) +\bar{\tau}(s_{f}^t) \dot{s}_{f}^t w_2(t) \label{eq:w1_}\\
\dot{w}_2(t) &= \bar{\boldsymbol{b}}(s_{f}^t)^T \text{\textbf{v}}(t) -\bar{\tau}(s_{f}^t) \dot{s}_{f}^t w_1(t). \label{eq:w2_}
\end{align}
Third and finally, we rewrite equations \eqref{eq:w1_}, \eqref{eq:w2_}, \eqref{eq:state_vel}, \eqref{eq:state_ang},
by using the arc-length $s$ as independent variable.
Let us denote by $\bar{t}_f : {\mathbb{R}}_{0}^+ \rightarrow {\mathbb{R}}_{0}^+$ the inverse function of $s_f : {\mathbb{R}}_{0}^+ \rightarrow {\mathbb{R}}_{0}^+$, satisfying $t = \bar{t}_f(s_f^t)$.
Due to the invertibility of $s_f(\cdot)$, a generic time function $\alpha(\cdot)$ can be expressed as the arc-length function $\alpha(\bar{t}_f(\cdot))$ and, defining $\bar{\alpha} := \alpha \circ \bar{t}_f$,
we have $\alpha(t) = \bar{\alpha}(s_f^t)$.
In particular,
\begin{align}
\boldsymbol{w}(t)&=\bar{\boldsymbol{w}}(s_f^t), \quad \text{\textbf{{v}}}(t)=\bar{\text{\textbf{{v}}}}(s_f^t), \quad \boldsymbol{\Phi}(t)=\bar{\boldsymbol{\Phi}}(s_f^t), \label{eq:bar-w}\\
\boldsymbol{\omega}(t)&=\bar{\boldsymbol{\omega}}(s_f^t), \quad F(t)=\bar{F}(s_f^t). \label{eq:bar-omega}
\end{align}
Differentiating equations \eqref{eq:bar-w} with respect to time, we get
\begin{align*}
\begin{split}
\dot{\boldsymbol{w}}(t)&=\bar{\boldsymbol{w}}'(s_f^t) \dot{s}_f^t,
\quad
\dot{\text{\textbf{{v}}}}(t) = \bar{\text{\textbf{{v}}}}'(s_f^t) \dot{s}_f^t,
\quad
\dot{\boldsymbol{\Phi}}(t) = \bar{\boldsymbol{\Phi}}'(s_f^t) \dot{s}_f^t,
\end{split}
\end{align*}
and equations \eqref{eq:w1_},\eqref{eq:w2_},\eqref{eq:state_vel},\eqref{eq:state_ang} become
\begin{align}
\begin{split}
\label{eq:transv-all-v1}
\bar{w}'_1(s_f^t) &= \bar{\boldsymbol{n}}(s_f^t)^T {\text{\textbf{v}}}(t) \; \frac{1}{\dot{s}_f^t} +\bar{\tau}(s_f^t) {w}_2(t),\\
\bar{w}'_2(s_f^t) &= \bar{\boldsymbol{b}}(s_f^t)^T {\text{\textbf{v}}}(t) \;\frac{1}{\dot{s}_f^t} -\bar{\tau}(s_f^t) {w}_1(t),\\
\bar{\text{\textbf{v}}}'(s_f^t) &= (g \boldsymbol{e}_3 - \frac{{F}(t)}{m} R({\boldsymbol{\Phi}}(t)) \boldsymbol{e}_3) \;\frac{1}{\dot{s}_f^t},\\
\bar{\boldsymbol{\Phi}}'(s_f^t) &=J ({\boldsymbol{\Phi}}(t)) {\boldsymbol{\omega}(t)} \; \frac{1}{\dot{s}_f^t}.
\end{split}
\end{align}
By \eqref{eq:s_}, \eqref{eq:bar-w} and \eqref{eq:bar-omega},
equations \eqref{eq:transv-all-v1} depend on time only through the variable $s_f^t$.
Thus, we can rewrite the dynamics in the arc-length, $s \in [0,L]$, domain. Formally, considering $s$ as
the independent variable, we get the \emph{transverse dynamics}
\begin{align}
\begin{split}
\label{eq:transv-all}
\bar{w}'_1 &= \bar{\boldsymbol{n}}^T \bar{\text{\textbf{v}}} \; \frac{1-\bar{k} \bar{w}_1}{\bar{\boldsymbol{t}}^T \bar{\text{\textbf{v}}}} +\bar{\tau} \bar{w}_2,\\
\bar{w}'_2 &= \bar{\boldsymbol{b}}^T \bar{\text{\textbf{v}}} \; \frac{1-\bar{k} \bar{w}_1}{\bar{\boldsymbol{t}}^T \bar{\text{\textbf{v}}}} -\bar{\tau} \bar{w}_1,\\
\bar{\text{\textbf{v}}}'
&=
(g \boldsymbol{e}_3 - \frac{\bar{F}}{m} R(\bar{\boldsymbol{\Phi}}) \boldsymbol{e}_3) \; \frac{1-\bar{k} \bar{w}_1}{\bar{\boldsymbol{t}}^T \bar{\text{\textbf{v}}}},\\
\bar{\boldsymbol{\Phi}}' &=J (\bar{\boldsymbol{\Phi}}) \bar{\boldsymbol{\omega}} \; \frac{1-\bar{k} \bar{w}_1}{\bar{\boldsymbol{t}}^T \bar{\text{\textbf{v}}}}.
\end{split}
\end{align}
Note that the dependence on $s$ is omitted for simplicity.
Equations \eqref{eq:transv-all} can be written in state-space form as
\begin{equation}
\bar{\boldsymbol{x}}'_w (s) = \bar{f}(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s)),
\label{eq:tran_dynamics}
\end{equation}
with state $\bar{\boldsymbol{x}}_w = [\bar{\boldsymbol{w}}^T \; \bar{\text{\textbf{v}}}^T \; \bar{\boldsymbol{\Phi}}^T]^T$, input $\bar{\boldsymbol{u}} = [\bar{\boldsymbol{\omega}}^T \; \bar{F}]^T$ and a suitable function $\bar{f}$ collecting the right-hand sides of \eqref{eq:transv-all}.
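A state-space implementation of the transverse dynamics can be sketched in Python as follows. This is our own illustrative helper, assuming the usual ZYX roll-pitch-yaw convention for $R(\boldsymbol{\Phi})$ and the standard Euler-rate kinematic matrix for $J(\boldsymbol{\Phi})$:

```python
import numpy as np

def transverse_rhs(x, u, t_s, n_s, b_s, k_s, tau_s, m=0.0325, g=9.81):
    """Right-hand side of eq. (tran_dynamics). State
    x = [w1, w2, v(3), Phi(3)], input u = [omega(3), F]."""
    w1, w2, v, Phi = x[0], x[1], x[2:5], x[5:8]
    omega, F = u[:3], u[3]
    phi, th, psi = Phi
    # dt/ds factor from eq. (int_s): time elapsed per unit arc-length
    dt_ds = (1.0 - k_s * w1) / (t_s @ v)
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(th), np.sin(th)
    cps, sps = np.cos(psi), np.sin(psi)
    # ZYX rotation matrix, body to inertial
    R = np.array([[cps*cth, cps*sth*sph - sps*cph, cps*sth*cph + sps*sph],
                  [sps*cth, sps*sth*sph + cps*cph, sps*sth*cph - cps*sph],
                  [-sth,    cth*sph,               cth*cph]])
    # Euler-rate kinematics: Phi_dot = J(Phi) omega
    J = np.array([[1.0, sph*np.tan(th), cph*np.tan(th)],
                  [0.0, cph,           -sph],
                  [0.0, sph/cth,        cph/cth]])
    w1p = (n_s @ v) * dt_ds + tau_s * w2
    w2p = (b_s @ v) * dt_ds - tau_s * w1
    vp = (g*np.array([0., 0., 1.]) - (F/m)*R @ np.array([0., 0., 1.])) * dt_ds
    Phip = (J @ omega) * dt_ds
    return np.concatenate([[w1p, w2p], vp, Phip])
```

Note that the factor $(1-\bar{k} \bar{w}_1)/(\bar{\boldsymbol{t}}^T \bar{\text{\textbf{v}}})$ multiplies all terms except the torsion couplings, exactly as in \eqref{eq:transv-all}.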
\begin{remark}
The general theory regarding the transverse coordinates is introduced in
\cite{AB-JH:94} and used to design a maneuver regulation controller for a
two-dimensional case in \cite{AS-JH-AB:13}. In contrast to \cite{AS-JH-AB:13},
we use the transverse coordinates in the more general three-dimensional case,
and to develop a trajectory optimization strategy rather than a
controller.
\end{remark}
\subsection{Arc-length parameterization of cost and constraints}
\label{sec:cost-constraints}
The third step of the strategy consists in reformulating the cost and constraints of problem \eqref{eq:mintime_standard} by using the new (arc-length dependent) variables $\bar{\boldsymbol{x}}_w$ and $\bar{\boldsymbol{u}}$.
The cost functional in \eqref{eq:mintime_standard}, i.e., $T = \int_{0}^T 1 \; dt$,
is rewritten in an arc-length parameterization by considering the change of variables from $t$ to $s$, i.e.,
$$
\int_{0}^T 1 \; dt = \int_{s_f(0)}^{s_f(T)} \bar{t}_f^{\;'}(s) \; ds.
$$
Since $\frac{d\bar{t}_f(s_f(t))}{dt} = \bar{t}_f^{\;'}(s_f^t) \dot{s}_f^t$
and
$\bar{t}_f(s_f^t) = t$,
we get
\begin{align}
\bar{t}_f^{\;'}(s_f^t) = 1/\dot{s}_f^t
\label{eq:t_f_pr}
\end{align}
with $\dot{s}_f^t$ as in \eqref{eq:s_}.
Since $w_1(t) = \bar{w}_1(s_f^t)$ and $\text{\textbf{v}}(t) = \bar{\text{\textbf{v}}}(s_f^t)$, as in \eqref{eq:bar-w}, equation \eqref{eq:t_f_pr} can be written as
\begin{align}
\bar{t}_f^{\;'}(s_f^t) &= \frac{1-\bar{k}(s_{f}^t) \bar{w}_1(s_{f}^t)}{\bar{\boldsymbol{t}}(s_f^t)^T \bar{\text{\textbf{v}}}(s_f^t)}, \label{eq:int_}
\end{align}
where all the variables depend on time only through $s_f^t$.
Thus, we can rewrite \eqref{eq:int_} in the arc-length, $s \in [0,L]$, domain, obtaining
\begin{align}
\bar{t}_f^{\;'}(s) &= \frac{1-\bar{k}(s) \bar{w}_1(s)}{\bar{\boldsymbol{t}}(s)^T \bar{\text{\textbf{v}}}(s)}.
\label{eq:int_s}
\end{align}
Finally, since $s_f(0) = 0$, $s_f(T) = L$, and \eqref{eq:int_s} holds, we rewrite the cost functional in \eqref{eq:mintime_standard} as
\begin{equation}
\int_0^L \!\!\! \quad \frac{1-\bar{k}(s) \bar{w}_1(s)}{\bar{\boldsymbol{t}}(s)^T \bar{\text{\textbf{v}}}(s)} \; ds.
\label{eq:cost_s}
\end{equation}
Notice that, according to \eqref{eq:cost_s}, the hypothesis $\bar{\boldsymbol{t}}(s)^T \bar{\text{\textbf{v}}}(s) \neq 0$ has to be satisfied $\forall s \in [0,L]$, i.e., the velocity projected on the tangent vector of the frame path must never vanish.
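Numerically, the cost \eqref{eq:cost_s} is a one-dimensional quadrature over $[0,L]$; a minimal Python sketch (trapezoidal rule, hypothetical helper name):

```python
import numpy as np

def travel_time(k, w1, t_dot_v, ds):
    """Total time T of eq. (cost_s): integrate (1 - k*w1)/(t^T v)
    over the arc-length samples with the trapezoidal rule."""
    f = (1.0 - k * w1) / t_dot_v
    return ds * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
```

For a straight frame path ($\bar{k} = 0$) traversed at constant tangential speed $v$, the integral reduces to $L/v$, as expected.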
The constraints in \eqref{eq:mintime_standard} are rewritten in an arc-length parameterization suitable for applying the barrier function approach \cite{hauser2006barrier}.
The constraint $\boldsymbol{x}(T) \in \boldsymbol{X}_T$ is written in the form
\begin{align}
c_f(\bar{\boldsymbol{x}}_w(L)) \leq 0,
\label{eq:cT}
\end{align}
with scalar components
\begin{align}
c_{f,i}(\bar{x}_{w_i}(L)) = \Big( \frac{2 \; \bar{x}_{w_i}(L) - (\bar{x}_{w_i,max}+\bar{x}_{w_i, min})}{(\bar{x}_{w_i, max}-\bar{x}_{w_i, min})} \Big)^2 -1,
\label{eq:c_fi}
\end{align}
$\forall i= 1,\ldots,8$, where
$\bar{x}_{w_i}$ is the $i$-th component of $\bar{\boldsymbol{x}}_{w}$,
and $\bar{x}_{w_i, min}$ and $\bar{x}_{w_i, max}$ are the bounds on the final states.
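The normalization in \eqref{eq:c_fi} maps any interval $[x_{min}, x_{max}]$ to a constraint that equals $-1$ at the midpoint, $0$ at the bounds and is positive outside; a one-line Python transcription (our helper name):

```python
def box_constraint(x, lo, hi):
    """Normalized box constraint of eq. (c_fi): <= 0 iff lo <= x <= hi."""
    return ((2.0 * x - (hi + lo)) / (hi - lo)) ** 2 - 1.0
```

The same normalization is reused for the input and angle bounds below, which keeps all constraints on a comparable scale for the barrier functional.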
The constraints on the angular rates, thrust and roll-pitch-yaw angles are rewritten by using
equations \eqref{eq:bar-w}, \eqref{eq:bar-omega} and reparameterizing the time-dependent bounds
$\varphi_{max}(t)$, $\theta_{max}(t)$ and $\psi_{max}(t), \; \forall t \in [0,T]$, by the arc-length $s$.
Thus, we have
\begin{align}
\begin{split}
\label{eq:constr_s}
&\Big( \frac{\bar{p}(s)}{p_{max}} \Big)^2 \!\!\!-1 \leq 0, \;\;\;
\Big( \frac{\bar{q}(s)}{q_{max}} \Big)^2 \!\!\!-1 \leq 0, \;\;\;
\Big( \frac{\bar{r}(s)}{r_{max}} \Big)^2 \!\!\!-1 \leq 0, \\
&\Big( \frac{\bar{\varphi}(s)}{\bar{\varphi}_{max}(s)} \Big)^2\!\!\!\!-\!1\!\leq\!0,
\Big( \frac{\bar{\theta}(s)}{\bar{\theta}_{max}(s)} \Big)^2\!\!\!\!-\!1\!\leq\!0,
\Big( \frac{\bar{\psi}(s)}{\bar{\psi}_{max}(s)} \Big)^2\!\!\!\!-\!1\!\leq\!0,\\
& \Big( \frac{2 \bar{F}(s) - (F_{max}+F_{min})}{(F_{max}-F_{min})}\Big)^2 \!\!\!-1 \leq 0.
\end{split}
\end{align}
The position constraints $c_{obs}(\boldsymbol{p}(t)) \leq 0$ are written in the generic form
\begin{align}
\bar{c}_{obs}(\bar{w}_1(s),\bar{w}_2(s)) \leq 0,
\label{eq:pos_constr}
\end{align}
which can be particularized according to the shape of the flying region.
For environments with circular sections, the inequality \eqref{eq:pos_constr} becomes
\begin{align}
\Big( \frac{\sqrt{\bar{w}_1^2(s)+\bar{w}_2^2(s)}}{\bar{r}_{obs}(s)} \Big)^2 -1 \leq 0,
\label{eq:circ_constr}
\end{align}
where $\bar{r}_{obs}(s)$ identifies the radius of the circular boundary at a given arc-length $s$.
For environments with rectangular sections, the inequality \eqref{eq:pos_constr} becomes
\begin{align}
\Big( \frac{2 \bar{w}_i(s) - (\bar{w}_{i,max}(s)+\bar{w}_{i,min}(s))}{(\bar{w}_{i,max}(s)-\bar{w}_{i,min}(s))} \Big)^2 -1 \leq 0,
\label{eq:rect_constr}
\end{align}
$\forall i = 1,2,$ where $\bar{w}_{i,min}(s)$ and $\bar{w}_{i,max}(s)$ are the
lower and upper bounds at a given arc-length $s$, defining the boundaries of the region.
The constraint boundaries are arc-length functions suitable to model
fairly complex regions. They represent the physical boundary of a region
and they can be shaped in order to take into account the presence of
obstacles attached to the boundary. As an illustrative example, let us consider the environment with
rectangular sections depicted in Figure~\ref{fig:w1_obs}. An obstacle
restricts the collision-free space inside the physical boundary of the
region.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[scale=0.5]{obstacles}\vspace{0.1cm}\label{fig:path_s2}}\\
\caption{Representation of $\bar{w}_{1,obs}$ related to a point A on the obstacle boundary. The frame path is depicted in red (portion identifying $s_{obs}$) and dot-dashed green.}
\label{fig:w1_obs}
\end{center}
\end{figure}
Let us denote by $\bar{w}^{PB}_{i,max}(s)$ and $\bar{w}^{PB}_{i,min}(s)$, respectively, the positive and negative distances of the
physical boundary from the frame path at a given arc-length $s$.
We first set $\bar{w}_{i,min}(s) = \bar{w}^{PB}_{i,min}(s)$ and $\bar{w}_{i,max}(s)
= \bar{w}^{PB}_{i,max}(s)$, $ \forall s\in [0,L]$. Then, in order to
take into account the obstacle, we suitably restrict the bounds as follows.
Let us consider a point $A$ on the
boundary surface of the obstacle. Let $\boldsymbol{p}_{obs}$ be the position of
point $A$ with components in the inertial frame. We map $\boldsymbol{p}_{obs}$ in
the transverse coordinate vector
$\bar{\boldsymbol{w}}_{obs}$.
First, we select the arc-length on the frame
path, identifying the point at minimum distance from $A$, as
\begin{align}
s_{obs}:=\text{arg} \min_{s \in {\mathbb{R}}_0^+}
\|\boldsymbol{p}_{obs}-\bar{\boldsymbol{p}}_f(s)\|^2.
\label{eq:sobs}
\end{align}
Second, we map $\boldsymbol{p}_{obs} - \bar{\boldsymbol{p}}_f(s_{obs})$ into a vector with components in the Serret-Frenet frame attached to the point identified by $s_{obs}$, obtaining
\begin{align}
\bar{w}_{1,obs} &= \bar{\boldsymbol{n}}^T(s_{obs}) (\boldsymbol{p}_{obs} - \bar{\boldsymbol{p}}_f(s_{obs})), \label{eq:w1obs}\\
\bar{w}_{2,obs} &= \bar{\boldsymbol{b}}^T(s_{obs}) (\boldsymbol{p}_{obs} - \bar{\boldsymbol{p}}_f(s_{obs})). \label{eq:w2obs}
\end{align}
Since, in this particular scenario, the obstacle affects only the function $\bar{w}_{1,min}(\cdot)$, we update
\begin{align*}
\bar{w}_{1,min}(s_{obs}) &= \max\{\bar{w}^{PB}_{1,min}(s_{obs}),
\bar{w}_{1,obs}\}.
\end{align*}
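The bound-restriction procedure above can be sketched in Python as follows (our helper; the obstacle boundary is given as a list of sampled points, and the bound profile is sampled on the same arc-length grid as the path):

```python
import numpy as np

def restrict_bounds(w_min, obs_points, path, n):
    """Tighten the sampled lower bound w_min(s) using obstacle boundary
    points, following eqs. (sobs) and (w1obs)."""
    w_min = w_min.copy()
    for p_obs in obs_points:
        # discretized arg min of eq. (sobs)
        i = int(np.argmin(np.linalg.norm(path - p_obs, axis=1)))
        w_obs = n[i] @ (p_obs - path[i])   # w1_obs, eq. (w1obs)
        w_min[i] = max(w_min[i], w_obs)    # restrict only where needed
    return w_min
```

Each obstacle point tightens the bound only at its own projected arc-length, so in practice the obstacle boundary must be sampled densely enough to obtain a meaningful profile $\bar{w}_{1,min}(\cdot)$.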
\subsection{Equivalent minimum-time formulation and optimal control solver}
\label{sec:mintime-new}
The minimum-time problem \eqref{eq:mintime_standard} is reformulated in the new (arc-length dependent) variables $\bar{\boldsymbol{x}}_w$ and $\bar{\boldsymbol{u}}$, by using the cost \eqref{eq:cost_s}, the transverse dynamics \eqref{eq:tran_dynamics} and the constraints \eqref{eq:cT}, \eqref{eq:constr_s}, and \eqref{eq:pos_constr}. Denoting by $c(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s)) \leq 0, \; \forall s \in [0,L],$ the constraints \eqref{eq:constr_s} and \eqref{eq:pos_constr} in vectorial form, the reformulated problem is
\begin{equation}
\begin{split}
\label{eq:mintime}
\min_{\bar{\boldsymbol{x}}_w(\cdot),\bar{\boldsymbol{u}}(\cdot)} &\; \int_0^L \!\!\! \quad \frac{1-\bar{k}(s) \bar{w}_1(s)}
{\bar{\boldsymbol{t}}(s)^T \bar{\text{\textbf{v}}}(s)}
\; ds,\\
\!\!\text{subj. to} &\;
\bar{\boldsymbol{x}}'_w (s)= \bar{f}(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s)), \quad \bar{\boldsymbol{x}}_w(0) = \boldsymbol{x}_{w0},\\[0.5ex]
& \; c_f(\bar{\boldsymbol{x}}_w(L)) \leq 0,\\
& c(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s))\leq 0, \; \forall s \in [0,L].
\end{split}
\end{equation}
Note that the fixed horizon problem \eqref{eq:mintime} is equivalent to \eqref{eq:mintime_standard} since
trajectories solving \eqref{eq:mintime} can be mapped into trajectories solving
\eqref{eq:mintime_standard}.
In order to solve problem \eqref{eq:mintime}, we use a combination of the
PRojection Operator based Newton method for Trajectory Optimization (PRONTO)
\cite{hauser2002projection} with a barrier function approach
\cite{hauser2006barrier}.
We relax the state-input constraints by adding them to the
cost functional, i.e., we consider the problem
\begin{equation}
\begin{split}
\min_{\bar{\boldsymbol{x}}_w(\cdot),\bar{\boldsymbol{u}}(\cdot)} \!\!\!\!&\;\; \int_0^L \!\!\!
\Big(
\frac{1-\bar{k}(s) \bar{w}_1(s)}{\bar{\boldsymbol{t}}(s)^T \bar{\text{\textbf{v}}}(s)} + \epsilon \sum_j \beta_\nu (-c_j(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s)))
\Big) ds\\
& \quad + \; \epsilon_f \sum_i \beta_{\nu_f} (-c_{f,i}(\bar{\boldsymbol{x}}_w(L))),\\
\!\!\text{subj. to} &\;\; \bar{\boldsymbol{x}}'_w(s) = \bar{f}(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s)), \quad \forall s \in [0,L],\\
& \;\; \bar{\boldsymbol{x}}_w(0) = \boldsymbol{x}_{w0},
\end{split}
\label{eq:mintime2}
\end{equation}
where $\epsilon$ and $\epsilon_f$ are positive parameters and $\beta_\ell(\cdot)$,
$\ell \in \{\nu,\nu_f\}$, is a function depending on the parameter $\ell$ and
defined as
\begin{align*}
\beta_\ell(x) &:=
\begin{cases}
-\log(x) & x > \ell,\\
-\log(\ell) + \frac{1}{2} \big[ (\frac{x - 2\ell}{\ell})^2 -1
\big] & x \leq \ell.
\end{cases} \nonumber
\end{align*}
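This relaxed log barrier coincides with $-\log(x)$ for $x > \ell$ and extends it quadratically below, so it is finite everywhere and $C^1$ at the junction $x = \ell$; a direct Python transcription:

```python
import math

def beta(x, ell):
    """Relaxed log barrier beta_ell: -log(x) above ell, quadratic
    extension below; value and slope agree at x = ell."""
    if x > ell:
        return -math.log(x)
    return -math.log(ell) + 0.5 * (((x - 2.0 * ell) / ell) ** 2 - 1.0)
```

Unlike the pure log barrier, $\beta_\ell$ is defined also at infeasible points ($x \leq 0$), which helps keep the Newton iterates well defined near the constraint boundary.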
Given an initial trajectory to initialize the algorithm, the strategy to find an approximate solution to \eqref{eq:mintime} can be summarized as follows.
Problem \eqref{eq:mintime2} is iteratively solved
by reducing the parameters $\epsilon, \nu, \epsilon_f$ and $\nu_f$ at each iteration, and thus pushing
the trajectory towards the constraint boundaries.
Each instance of problem \eqref{eq:mintime2} is solved by means of the PRONTO
algorithm described in Appendix \ref{app:pronto}.
\subsection{Summary of the strategy}
\label{sec:summary}
A pseudo code of the whole strategy to compute minimum-time trajectories
is reported in Algorithm \ref{alg:proj_newt}. We denote with
$(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^0$ the initial trajectory to initialize the algorithm and with
$\texttt{PRONTO}$ the PRojection Operator based Newton method for Trajectory Optimization routine
that, given a trajectory $(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^{i-1}$, computes the solution $(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^i$ to problem \eqref{eq:mintime2}, i.e., $(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^i = \texttt{PRONTO}((\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^{i-1})$.
\begin{algorithm}[H]
\caption{Minimum-time strategy}
\label{alg:proj_newt}
\begin{algorithmic}
\REQUIRE initial condition $\boldsymbol{x}_0$, final desired region $X_T$, bounds $p_{max}, q_{max}, r_{max}, F_{min}, F_{max}, \varphi_{max}(\!\cdot\!),
\theta_{max}\!(\!\cdot\!),$ $\psi_{max}\!(\!\cdot\!)$, and the dynamic model \eqref{eq:state_space}
\STATE
\textbf{A. Frame path} \\
generate $\bar{\boldsymbol{p}}_f(s), \; \forall s \in [0,L]$\\
compute
\begin{itemize}
\item tangent, normal and binormal vectors\\[0.5ex]
$\bar{\boldsymbol{t}}(s)\!=\!\bar{\boldsymbol{p}}'_f(s)$, \;
$\bar{\boldsymbol{n}}(s)\!=\!\frac{\bar{\boldsymbol{p}}''_f(s)}{\Vert \bar{\boldsymbol{p}}''_f(s)\Vert_2}$, \;
$\bar{\boldsymbol{b}}(s)\!=\!\bar{\boldsymbol{t}}(s) \times \bar{\boldsymbol{n}}(s)$
\item curvature
$\bar{k}(s)\!=\!\Vert \bar{\boldsymbol{p}}''_f(s)\Vert_2$
\item torsion
$\bar{\tau}(s)\!=\!-\bar{\boldsymbol{n}}(s)^T \bar{\boldsymbol{b}}'(s)$
\end{itemize}
\textbf{B. Transverse dynamics} \\
\STATE set-up transverse dynamics \eqref{eq:transv-all} \\[0.5ex]
\textbf{C. Cost and constraints} \\[0.5ex]
\STATE
set-up cost $ \int_0^L \frac{1-\bar{k}(s) \bar{w}_1(s)}{\bar{\boldsymbol{t}}(s)^T \bar{\text{\textbf{v}}}(s)} \; ds$\\[0.5ex]
set-up constraints \eqref{eq:cT} and \eqref{eq:constr_s}\\[0.5ex]
define boundary constraints by using \eqref{eq:circ_constr} and/or \eqref{eq:rect_constr} \\[0.5ex]
\textbf{D. Numerical solution to \eqref{eq:mintime}} \\
\STATE
compute initial trajectory $(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^0$\\
\STATE
set $\epsilon = 1$, $\epsilon_f = 1$, $\nu = 1$, $\nu_f = 1$
\FOR{$i = 1, 2 \ldots$}
\STATE compute: $\!(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^i \!=\! \texttt{PRONTO}((\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^{i-1})$
\STATE update: $\epsilon$, $\epsilon_f$, $\nu$, $\nu_f$
\ENDFOR
\ENSURE $(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))_{opt} =(\bar{\boldsymbol{x}}_{w}(\cdot),\bar{\boldsymbol{u}}(\cdot))^{i}$
\end{algorithmic}
\end{algorithm}
\section{Numerical computations}
\label{sec:computations}
In this section, we present numerical computations and experimental tests on a nano-quadrotor with mass $m = 0.0325 \; \text{kg}$, in order to show the effectiveness of the proposed strategy.
First, we consider a scenario with two obstacles: a parallelepiped and a cylinder, as depicted in Figure \ref{fig:path_s2}.
Second, we consider an experimental scenario
and we show the results related to the execution of the optimal trajectory using our maneuver regulation control scheme \cite{SS-GN-HHB-AF:13}.
\subsection{Rooms with obstacles}
The first scenario is as follows. The vehicle has to move from one room to another through a
narrow corridor.
There is a parallelepiped in the first room and a cylinder in the second room.
As an additional
requirement, the quadrotor must reach a neighborhood of $\boldsymbol{x}_{w0}$ at the end of its motion.
In order to fulfil this objective, we consider the final constraint \eqref{eq:cT} with $ c_{f,i}(\bar{x}_{w_i}(L))$ as in \eqref{eq:c_fi}, where
$\bar{x}_{w_i, min} = x_{w_i,0} - \text{tol}_i$, $\bar{x}_{w_i, max} = x_{w_i,0} + \text{tol}_i$, $\text{tol}_i$ is a given tolerance and $x_{w_i,0}$ is the $i$-th component of $\boldsymbol{x}_{w0}$.
Results are depicted in
Figures \ref{fig:scenario2_path}, \ref{fig:scenario2_velang},
\ref{fig:scenario2_inputs}. The initial trajectory is depicted in dot-dashed
green, intermediate trajectories in dotted black and the minimum-time trajectory
in solid blue. Collision-free boundaries are depicted in grey and remaining
state-input constraints in dashed red.
We choose as frame path a $C^{\infty}$ curve on the $\bar{p}_1 - \bar{p}_2$
plane with constant binormal vector $\bar{\boldsymbol{b}} = [0 \; 0 \; 1]^T$ and curvature
\begin{align*}
\bar{k}(s) = \frac{1}{5} \; \frac{\tanh(s-5) - \tanh(s-5(1+\frac{\pi}{2}))}{\max(\tanh(s-5) - \tanh(s-5(1+\frac{\pi}{2})))}.
\end{align*}
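Given such a curvature profile (planar path, zero torsion), the frame path itself can be recovered by integrating the planar Frenet equations $\theta'(s) = \bar{k}(s)$, $\bar{\boldsymbol{p}}_f'(s) = [\cos\theta(s) \; \sin\theta(s) \; 0]^T$. A rough Python sketch (explicit Euler integration, our helper name):

```python
import numpy as np

def planar_path_from_curvature(kfun, L, ds):
    """Recover an arc-length planar path from its curvature profile by
    Euler integration of p' = (cos th, sin th, 0), th' = k(s)."""
    s_vals = np.arange(0.0, L, ds)
    p = np.zeros((len(s_vals), 3))
    th = 0.0
    for i in range(1, len(s_vals)):
        p[i] = p[i - 1] + ds * np.array([np.cos(th), np.sin(th), 0.0])
        th += ds * kfun(s_vals[i - 1])
    return p
```

For a constant curvature $\bar{k} = 1/R$ over an arc-length $L = R\pi/2$, the integration traces a quarter circle of radius $R$, up to the Euler discretization error.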
The collision-free region is defined by constraint \eqref{eq:rect_constr}, where
obstacle boundaries $\bar{w}_{i,min}(\cdot)$ and $\bar{w}_{i,max}(\cdot)$,
$i = 1,2$, are chosen as follows.
Functions $\bar{w}_{1,max}$ and $\bar{w}_{2,max}$ are not affected by obstacles.
As depicted in Figures \ref{fig:w1_s2} and \ref{fig:w2_s2},
$\bar{w}_{1,max}(\cdot)$ and $\bar{w}_{2,max}(\cdot)$ are obtained using sigmoid functions
with values varying from $2 \; \text{m}$ to $0.25 \; \text{m}$.
Functions $\bar{w}_{1,min}$ and $\bar{w}_{2,min}$ are affected by obstacles.
In order to model the obstacles, we consider the position of the obstacle boundary as a function of its arc-length.
We choose $10^{-3}$ as discretization step for the arc-length
and for every value of the boundary position $\boldsymbol{p}_{obs}$, we compute $s_{obs}$ and $\bar{w}_{1,obs}, \bar{w}_{2,obs}$ by using equations \eqref{eq:sobs}, \eqref{eq:w1obs} and \eqref{eq:w2obs}, respectively.
Thus, in order to define $\bar{w}_{1,min}$ and $\bar{w}_{2,min}$, we
first set $\bar{w}_{1,min}(s) = -\bar{w}_{1,max}(s)$ and $\bar{w}_{2,min}(s) = -\bar{w}_{2,max}(s)$,
$\forall s \in [0,L]$.
Second, for each $s_{obs}^R$ and $\bar{w}_{1,obs}^R$ related to a point $R$ on the parallelepiped, we update
\begin{align*}
\bar{w}_{1,min}(s_{obs}^R) &= \max\{-\bar{w}_{1,max}(s_{obs}^R), \bar{w}_{1,obs}^R\}.
\end{align*}
Third and finally, for each $s_{obs}^C$ and $\bar{w}_{2,obs}^C$ related to a point $C$ on the cylinder,
we update
\begin{align*}
\bar{w}_{2,min}(s_{obs}^C) &= \max\{-\bar{w}_{2,max}(s_{obs}^C), \bar{w}_{2,obs}^C\}.
\end{align*}
We choose the initial trajectory as follows. We set the position equal to the frame path, with a velocity magnitude of $0.5 \; \text{m/s}$ along the curve and a zero yaw angle. The remaining initial states and inputs are
computed by using the differential flatness of the quadrotor dynamics \cite{mellinger2011minimum}. It is worth noting that the position part of the initial trajectory does not necessarily have to match the frame path. Also, the initial trajectory could alternatively be computed through the projection of a state-input curve by using the projection operator \eqref{eq:proj_oper_def} described in Appendix \ref{app:pronto}, instead of using the differential flatness.
Having the initial trajectory in hand, we run the algorithm to numerically
compute solutions. Note that the PRONTO method (described in Appendix
\ref{app:pronto}) is designed considering an $s$-dependent continuous
dynamics. In order to implement it by using a numerical toolbox (Matlab), we
consider a suitable tolerance. We choose $10^{-3}$ as discretization step on
$s$ and we use the tolerance of the Matlab solver to integrate the
differential equations. Each intermediate optimal trajectory is computed by
solving the optimization problem \eqref{eq:mintime2} with constant values
of the parameters $\epsilon$, $\nu$, $\epsilon_f$ and $\nu_f$. We start
with $\epsilon = 1$, $\nu = 1$, $\epsilon_f = 1$, $\nu_f = 1$ and,
following a suitable heuristic, we decrease them at each iteration.
Since the algorithm
operates in an interior point fashion, intermediate trajectories are all
feasible and are pushed to the constraint boundaries when $\epsilon,\nu,\epsilon_f, \nu_f$ are decreased.
As regards the minimum-time
trajectory, the maneuver is performed in $3.57 \; \text{s}$ and the path touches
the constraint boundaries when the vehicle is inside the corridor and in the proximity of obstacles (Figures
\ref{fig:w1_s2} and \ref{fig:w2_s2}). The velocity $\bar{\boldsymbol{t}}^T \bar{\text{\textbf{v}}}$ (Figure
\ref{fig:dp1_s2}) reaches a peak of about $8.5 \; \text{m/s}$ in the middle of the path
and approaches the final desired value at the end. Velocities $\bar{\boldsymbol{n}}^T \bar{\text{\textbf{v}}}$ and
$\bar{\boldsymbol{b}}^T \bar{\text{\textbf{v}}}$ (Figures \ref{fig:dp2_s2} and \ref{fig:dp3_s2}, respectively)
are between $-2.0 \; \text{m/s}$ and $2.0 \; \text{m/s}$. Roll and pitch angles (Figures \ref{fig:phi_s2}, \ref{fig:th_s2},
respectively) do not touch constraint boundaries and alternate positive and
negative values between $-50 \; \text{deg}$ and $50 \; \text{deg}$.
The yaw angle has values between $-20 \; \text{deg}$ and $50 \; \text{deg}$ (Figure \ref{fig:psi_s2}).
As regards
the inputs, while constraints on roll and pitch rates (Figures \ref{fig:pp_s2},
\ref{fig:qq_s2}, respectively) are always active, yaw rate and thrust (Figures
\ref{fig:rr_s2} and \ref{fig:ff_s2}, respectively) alternate intervals with
active and inactive constraints. Furthermore, note that the final state reaches a neighborhood
of the initial state, satisfying $|| \bar{\boldsymbol{x}}_w(L) - \boldsymbol{x}_{w0}|| < 0.07$.
\begin{figure}[htbp]
\begin{center}
\vspace{-0.4cm}\subfloat[][Path: 3D view]
{\includegraphics[scale=0.47]{path_sim}\label{fig:path_s2}}\\
\vspace{-0.2cm}
\subfloat[][Transverse coordinate $\bar{w}_1$]
{\hspace{-0.1cm}\includegraphics[width=4.7cm]{w1_sim}\label{fig:w1_s2}}
\subfloat[][Transverse coordinate $\bar{w}_2$]
{\vspace{-0.5cm}\includegraphics[width=4.7cm]{w2_sim}\label{fig:w2_s2}}\\
\caption{Path and transverse coordinates. Initial (dot-dashed green), intermediate (dotted black) and minimum-time (solid blue) trajectory. Constraint boundaries are depicted in grey.}
\label{fig:scenario2_path}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\vspace{-0.5cm}\subfloat[][Velocity $\bar{\boldsymbol{t}}^T \bar{\text{\textbf{v}}}$]
{\includegraphics[width=4.7cm]{dp1_sim}\label{fig:dp1_s2}}
\subfloat[][Roll angle $\bar{\varphi}$]
{\hspace{-0.25cm}\includegraphics[width=4.7cm]{phi_sim}\label{fig:phi_s2}}\\
\subfloat[][Velocity $\bar{\boldsymbol{n}}^T \bar{\text{\textbf{v}}}$]
{\includegraphics[width=4.7cm]{dp2_sim}\label{fig:dp2_s2}}
\subfloat[][Pitch angle $\bar{\theta}$]
{\hspace{-0.25cm}\includegraphics[width=4.7cm]{th_sim}\label{fig:th_s2}}\\
\subfloat[][Velocity $\bar{\boldsymbol{b}}^T \bar{\text{\textbf{v}}}$]
{\includegraphics[width=4.7cm]{dp3_sim}\label{fig:dp3_s2}}
\subfloat[][Yaw angle $\bar{\psi}$]
{\hspace{-0.25cm}\includegraphics[width=4.7cm]{psi_sim}\label{fig:psi_s2}}\\
\caption{Velocities and angles. Initial (dot-dashed green), intermediate (dotted black) and minimum-time (solid blue) trajectory. Constraint boundaries are depicted in red.}
\label{fig:scenario2_velang}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\vspace{-0.7cm}\subfloat[][Roll rate $\bar{p}$]
{\includegraphics[width=4.7cm]{pp_sim}\label{fig:pp_s2}}
\subfloat[][Pitch rate $\bar{q}$]
{\hspace{-0.25cm}\includegraphics[width=4.7cm]{qq_sim}\label{fig:qq_s2}}\\
\subfloat[][Yaw rate $\bar{r}$]
{\includegraphics[width=4.7cm]{rr_sim}\label{fig:rr_s2}}
\subfloat[][Thrust $\bar{f}$]
{\hspace{-0.25cm}\includegraphics[width=4.7cm]{ff_sim}\label{fig:ff_s2}}\\
\caption{Inputs. Initial (dot-dashed green), intermediate (dotted black) and minimum-time (solid blue) trajectory. Constraint boundaries are depicted in dashed red.}
\label{fig:scenario2_inputs}
\end{center}
\end{figure}
\subsection{{Tubular passage}}
As a second test,
we consider a region delimited by hula hoops as the constrained environment.
First, we compute a minimum-time trajectory by means of our optimization strategy; second, we
experimentally execute the minimum-time trajectory on the CrazyFlie nano-quadrotor (https://www.bitcraze.io/crazyflie/), by using a suitable controller.
We invite the reader to watch the attached video of this experiment.
We set up the optimization algorithm as follows.
We approximate the collision free region as a tube with circular section.
We choose as {frame path} a curve on the $\bar{p}_2 - \bar{p}_3$
plane with constant binormal vector $\bar{\boldsymbol{b}} = [1 \; 0 \; 0]^T$ and curvature
\begin{align*}
\bar{k}(s) = \frac{\frac{1}{1+e^{-8(s-2.27)}}-\frac{1}{1+e^{-8(s-3.67)}}}{\max(\frac{1}{1+e^{-8(s-2.27)}}-\frac{1}{1+e^{-8(s-3.67)}})}.
\end{align*}
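As a quick sanity check, the normalized sigmoid-difference curvature above can be evaluated numerically. In this sketch the maximum in the denominator is approximated over a uniform arc-length grid on $[0,6]$; the grid range and resolution are assumptions, not values from the paper.

```python
import math

def sigma(x):
    """Logistic function used in both ramps of the curvature profile."""
    return 1.0 / (1.0 + math.exp(-x))

def raw_curvature(s):
    # difference of two logistic ramps centered at s = 2.27 and s = 3.67
    return sigma(8.0 * (s - 2.27)) - sigma(8.0 * (s - 3.67))

# normalize so that the peak curvature equals 1; the arc-length
# range [0, 6] and the step size are assumptions of this sketch
grid = [i * 0.001 for i in range(6001)]
peak = max(raw_curvature(s) for s in grid)

def k_bar(s):
    """Normalized curvature profile of the frame path."""
    return raw_curvature(s) / peak
```

The profile is flat near the endpoints and saturates at its maximum in the middle of the passage, which matches the tube bending between the two hoops.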
Moreover, we consider the constraint {\eqref{eq:circ_constr}} with {constant} $\bar{r}_{obs}=r_{hh}-l-e_p$, where $r_{hh} = 0.33 \; \text{m}$ is the hula hoop radius, $l = 0.04 \; \text{m}$ is the distance between the quadrotor center of mass and its propellers and $e_p = 0.01 \; \text{m}$ is the estimated position error arising during control.
As regards input constraints, we impose, for safety reasons, more severe bounds than the ones required by the physical vehicle limitations. In this way, we also ensure that the ``experimental" trajectory remains feasible despite the imperfect tracking of the desired inputs by the actual ones (which naturally arises during control).
We choose $p_{min}=-15 \; \text{deg/s}$ and $p_{max}=15 \; \text{deg/s}$ for the roll rate and $F_{min}= 0.1779 \; \text{N}$ and $F_{max}= 0.3411 \; \text{N}$ for thrust.
{By using our minimum-time strategy, we obtain the following result.}
The optimal trajectory, performed in $2.38$ s,
is depicted in Figure \ref{fig:exp} (solid blue). Constraint boundaries are depicted in dashed red and the hula hoops are depicted in solid green.
The optimal path (blue line in Figure \ref{fig:path_exp}) first takes negative values of $p_2$ until changing direction toward positive $p_2$ values, touching the constraint boundary in the proximity of the maximum curvature of the tube, and staying in the middle of the feasibility region at the end.
The roll angle (blue line in Figure \ref{fig:phi_exp}) decreases in order to push the vehicle to negative $p_2$ values and then it monotonically increases during the remaining time interval.
The velocity norm (blue line in Figure \ref{fig:vel_exp}) always increases, as expected for a minimum-time trajectory.
As regards the inputs, in the beginning, the angular rate $p$ (blue line in Figure \ref{fig:pp_exp}) stays on the lower bound and then it switches to the upper bound.
The thrust $F$ (blue line in Figure \ref{fig:ff_exp}) always takes the upper bound.
We execute the computed minimum-time trajectory on the CrazyFlie nano-quadrotor
by using the closed-loop, maneuver regulation controller developed in
\cite{SS-GN-HHB-AF:13}, in which the minimum-time trajectory is used for the
desired maneuver.
The maneuver regulation controller computes thrust and angular rate virtual
inputs, which are tracked by the standard off-the-shelf angular rate
controller provided on board the CrazyFlie.
The actual (experimental) trajectory performed using our maneuver regulation
controller is depicted in Figure \ref{fig:exp} in solid magenta. Snapshots of
the experiment are depicted in Figure \ref{fig:exp_snapshots}. As expected,
the quadrotor passes close to the second hula hoop while maintaining the distance
imposed by the restrictive constraints in the optimization problem.
The actual velocity does not perfectly match (at higher velocities) the desired one, due to the unmodeled drag effect.
Since the vehicle is asked to follow the desired thrust, the actual velocity becomes lower than the desired one because of the opposing aerodynamic force.
The experiment shows the actual feasibility of the optimal trajectory and also reveals that a more accurate model including aerodynamic effects would improve the control performance.
\begin{figure}[htbp]
\hspace{-0.5cm}
\begin{tabular}{cc}
{\multirow{-2}[15.0]{*}{\subfloat[][Path]{\includegraphics[width=4.3cm]{path_yz_mod_v3_exp} \label{fig:path_exp}}}} &
\hspace{-0.7cm}{\subfloat[][Roll angle $\varphi$]{{\includegraphics[width=4.7cm]{phi_exp} \label{fig:phi_exp}}}}\\
&
\hspace{-0.7cm}{\subfloat[][Velocity norm $||\text{\textbf{v}}||$]{{\includegraphics[width=4.7cm]{vel_exp} \label{fig:vel_exp}}}}\\
\hspace{0.3cm}{\subfloat[][Angular rate $p$]{\includegraphics[width=4.7cm]{pp_exp}\vspace{0.2cm}\label{fig:pp_exp}}} &
\hspace{-0.4cm}{\subfloat[][Thrust $F$]{\hspace{-0.2cm}\includegraphics[width=4.7cm]{ff_exp}\vspace{0.2cm}\label{fig:ff_exp}}}\\
\multicolumn{2}{c}{\subfloat[][Experiment snapshots]{\includegraphics[scale=0.15]{experiment_v2_exp}\label{fig:exp_snapshots}}}
\end{tabular}
\caption{Experimental test. Desired trajectory (blue) and actual trajectory (magenta). Constraint boundaries are depicted in dashed red. Hula hoops are depicted in solid green.}
\label{fig:exp}
\end{figure}
\section{Conclusion}
In this paper, we have presented a strategy to address the minimum-time problem
for quadrotors in constrained environments. Our approach consists of: (i)
generating a frame path, (ii) expressing the quadrotor dynamics in a new
set of coordinates ``transverse" with respect to that path, and (iii) redefining
cost and constraints in the new coordinates. Thus, we obtain a
reformulation of the problem, which we solve by combining the PRONTO algorithm
with a barrier function approach. Numerical computations on two challenging
scenarios prove the effectiveness of the strategy and allow us to show
interesting dynamic capabilities of the vehicle. Moreover, the experimental test
of the second scenario shows the feasibility of the computed
trajectory. As future work, we aim to extend our strategy to
scenarios with moving obstacles. Challenges to be addressed include how to
combine trajectory generation and control, and how to achieve a sufficiently fast
integration of the dynamics for real-time computation.
\appendices
\section{Projection Operator Newton Method}
\label{app:pronto}
Here, we provide a brief description of the PRONTO algorithm \cite{hauser2002projection}.
The PRONTO algorithm is based on a properly designed \emph{projection operator}
$\mathcal{P} : \xi_c \rightarrow \xi$, mapping a state-control curve
$\xi_c =(\bar{\boldsymbol{x}}_{w,c}(\cdot), \bar{\boldsymbol{u}}_c(\cdot))$ into a system
trajectory
$\xi =(\bar{\boldsymbol{x}}_w(\cdot), \bar{\boldsymbol{u}}(\cdot))$, by the nonlinear feedback system
\begin{align}
\bar{\boldsymbol{x}}'_w(s) & = \bar{f}(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s)), \quad \bar{\boldsymbol{x}}_w(0) = \boldsymbol{x}_{w0}, \nonumber\\
\bar{\boldsymbol{u}}(s) & = \bar{\boldsymbol{u}}_c(s) + \bar{K}(s)(\bar{\boldsymbol{x}}_{w,c}(s)-\bar{\boldsymbol{x}}_w(s)),
\label{eq:proj_oper_def}
\end{align}
where the feedback gain $\bar{K}(\cdot)$ is designed by solving a suitable
linear quadratic optimal control problem on the linearized dynamics of \eqref{eq:tran_dynamics} about the trajectory $\xi$.
Note that the feedback gain $\bar{K}(\cdot)$ is only used to define the projection operator and it is not related to the controller used to execute the optimal trajectory in our experimental test.
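The defining property of the projection operator, namely that it maps an arbitrary state-control curve onto a trajectory satisfying the dynamics, can be illustrated with a minimal numerical sketch. A scalar system, a constant gain, and a forward-Euler discretization are simplifying assumptions here, not the paper's actual LQ design of $\bar{K}(\cdot)$.

```python
def project(f, K, x_c, u_c, x0, ds):
    """Projection operator sketch: map a (state, input) curve (x_c, u_c)
    onto a trajectory of x' = f(x, u) through the nonlinear feedback
    u = u_c + K (x_c - x).  Scalar states, a given gain sequence K, and
    forward-Euler integration are assumptions of this sketch."""
    xs, us = [x0], []
    x = x0
    for k in range(len(u_c)):
        u = u_c[k] + K[k] * (x_c[k] - x)   # feedback toward the curve
        us.append(u)
        x = x + ds * f(x, u)               # propagate the dynamics
        xs.append(x)
    return xs, us
```

Whatever curve is supplied, the returned pair satisfies the discretized dynamics by construction, which is exactly what makes every iterate of the optimization a trajectory.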
The projection operator is used to
convert the dynamically constrained optimization problem \eqref{eq:mintime2}
into the unconstrained problem
\begin{equation}
\begin{split}
\min_{\xi} &\; g(\xi;\bar{k}),
\end{split}
\label{eq:mintime3}
\end{equation}
where
$g(\xi;\bar{k}) = h(\mathcal{P}(\xi); \bar{k})$, and $h(\xi; \bar{k}) := \int_0^L \!
(\frac{1-\bar{k}(s) \bar{w}_1(s)}{\bar{\boldsymbol{t}}^T(s) \bar{\text{\textbf{v}}}(s)} + \epsilon \sum_j \beta_\nu (-c_j(\bar{\boldsymbol{x}}_w(s),\bar{\boldsymbol{u}}(s)))) ds +
\; \epsilon_f \sum_i \beta_{\nu_f} (-c_{f,i}(\bar{\boldsymbol{x}}_w(L)))$.
Then, using an (infinite dimensional) Newton descent method, a local minimizer
of \eqref{eq:mintime3} is computed iteratively.
Given the current trajectory iterate $\xi_i$, the search
direction $\zeta_i$ is obtained by solving a linear quadratic optimal control
problem with cost
$Dg(\xi_i; \bar{k}) \cdot \zeta + \frac{1}{2} D^2 g(\xi_i;
\bar{k})(\zeta,\zeta)$,
where $\zeta \mapsto Dg(\xi_i; \bar{k}) \cdot \zeta$ and
$\zeta \mapsto D^2 g(\xi_i; \bar{k})(\zeta,\zeta)$ are respectively the first
and second Fr\'echet differentials of the functional $g(\xi; \bar{k})$ at
$\xi_i$. Then, the curve $\xi_i + \gamma_i \zeta_i$, where $\gamma_i$ is a step
size obtained through a standard backtracking line search, is projected, by
means of the projection operator, in order to get a new trajectory $\xi_{i+1}$.
The strength of this approach is that the local minimizer of \eqref{eq:mintime3}
is obtained as the limit of a sequence of trajectories, i.e., curves satisfying
the dynamics. Furthermore, the feedback system \eqref{eq:proj_oper_def},
defining the projection operator, allows us to generate trajectories in a
numerically stable manner.
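The step size $\gamma_i$ mentioned above comes from a standard backtracking line search, which can be sketched as follows. The Armijo constants and the scalar test function in this sketch are illustrative choices, not values taken from PRONTO.

```python
def backtracking(g, xi, zeta, dg_dot_zeta, alpha=0.4, beta=0.7):
    """Armijo backtracking line search: shrink gamma geometrically until
    the sufficient-decrease condition
        g(xi + gamma*zeta) <= g(xi) + alpha*gamma*Dg(xi).zeta
    holds.  Here xi and zeta are finite-dimensional stand-ins for the
    trajectory iterate and the Newton search direction; alpha and beta
    are typical (assumed) constants."""
    gamma = 1.0
    while g(xi + gamma * zeta) > g(xi) + alpha * gamma * dg_dot_zeta:
        gamma *= beta
    return gamma
```

Near a minimizer the full Newton step is accepted ($\gamma_i = 1$), recovering the quadratic convergence of the Newton method; far from it, the step is shortened until the cost actually decreases.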
\begin{remark}
An elegant extension of the PRONTO method to Lie groups is developed in \cite{saccon2013optimal} and could be alternatively used in our strategy.
\end{remark}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:1}
The equation of state (EOS) is a critical input for astrophysical simulations
such as core-collapse supernovae and neutron-star mergers, which require information
over wide ranges of temperature, proton fraction, and baryon density.
The EOS should include reasonable descriptions for both nonuniform matter at subsaturation
densities and uniform matter at high densities.
Due to the complex phase structure of stellar matter, it is not an easy task to construct
the EOS covering the full range of thermodynamic conditions.
Currently, a number of EOSs are available for supernova
simulations, which have been summarized in the review by Oertel et al.~\citep{oert17}.
One of the most commonly used EOSs is that of Lattimer and Swesty~\citep{latt91},
which was based on the compressible liquid-drop model with a Skyrme force.
Another commonly used EOS is often referred to as Shen EOS~\citep{shen98a,shen98b,shen11},
which used a relativistic mean-field (RMF) model and Thomas-Fermi approximation
with a parameterized nucleon distribution for the description of nonuniform matter.
Both EOSs employed the so-called single nucleus approximation (SNA), where only a
single representative nucleus was considered instead of an ensemble of nuclei.
It was shown that SNA could adequately describe the thermodynamics of the
system~\citep{burr84}.
Recently, EOS tables were developed beyond the SNA by including multiple
nuclei in nuclear statistical equilibrium (NSE) based on some RMF or
Skyrme parameterizations~\citep{hemp10,furu11,furu13,furu17a,stei13,schn17}.
In~\citet{shenG11a,shenG11b}, the authors employed a hybrid approach for constructing
EOS tables, where NSE was used at low densities and SNA was adopted at intermediate
densities near the transition to uniform matter.
It is known that considering detailed nuclear composition plays an important role
in determining the electron-capture rates and neutrino-matter
interactions, but it has less influence on thermodynamic quantities of dense matter.
In addition, microscopic approaches based on realistic nuclear forces have also been
applied to construct the EOS tables for astrophysical simulations~\citep{toga17,furu17b}.
In~\citet{schn19}, the authors developed the EOS tables based on the Skyrme-type
parameterization of the nuclear force, where the parameters were tuned to reproduce
the Akmal, Pandharipande, and Ravenhall (APR) EOS.
The recent developments in astrophysical observations provide quantitative
constraints on the EOS of dense matter.
One strong constraint comes from the mass measurements of massive
pulsars, PSR J1614-2230~\citep{demo10,fons16}, PSR J0348+0432~\citep{anto13},
and PSR J0740+6620~\citep{crom19},
which requires the maximum neutron-star mass to be larger than $\sim 2 M_\odot$.
Another constraint is provided by the radius estimations from quiescent low-mass
X-ray binaries and objects with photospheric radius expansion bursts,
which suggest small neutron-star radii, but these estimates have much larger
uncertainties than the mass measurements~\citep{fort15}.
Furthermore, the first detection of gravitational waves from a binary neutron-star
merger, known as GW170817, provides an upper limit on the tidal deformability
of neutron stars~\citep{abbo17,abbo18}, which also implies small neutron-star
radii~\citep{fatt18,most18}.
More recently, the second detection of gravitational waves, GW190425, was
reported by LIGO and Virgo Collaborations~\citep{abbo19}, which implies a rather
large total mass of the binary system of $3.4^{+0.3}_{-0.1} M_\odot$
and may offer valuable information for the EOS at high densities.
The recent observations by {\it Neutron Star Interior Composition Explorer} ({\it NICER})
for PSR J0030+0451 provided a simultaneous measurement of the mass and radius of
a neutron star. From independent analyses of the {\it NICER} data on PSR J0030+0451,
\citet{Rile19} reported a mass of $1.34^{+0.15}_{-0.16} M_\odot$
with an equatorial radius of $12.71^{+1.14}_{-1.19}$ km,
while \citet{Mill19} reported a mass of
$1.44^{+0.15}_{-0.14} M_\odot$ with a radius of $13.02^{+1.24}_{-1.06}$ km.
It is interesting to notice that constraints on the neutron-star radii
from various observations are consistent with each other.
At present, some available EOS tables for supernova simulations are inconsistent
with these constraints. The EOS based on the FSU parametrization predicts
a maximum neutron-star mass of only $1.75 M_\odot$, which was improved by introducing
an additional phenomenological pressure at high densities~\citep{shenG11b}.
The RMF parametrizations, NL3 and TM1, lead to too large neutron-star radii
in comparison with the extracted values from astrophysical observations~\citep{oert17}.
In our previous work~\citep{shen11}, the EOS tables (EOS2 and EOS3)
were constructed by employing the TM1 model, while the nonuniform matter was described
in the Thomas--Fermi approximation with a parameterized nucleon distribution.
In EOS2, only nucleonic degrees of freedom were taken into account, while additional
contributions from $\Lambda$ hyperons were included at high densities in EOS3.
The TM1 model can provide a satisfactory description for finite nuclei
and a maximum neutron-star mass of $2.18 M_\odot$ with nucleonic degrees of freedom
only, but the resulting neutron-star radii seem to be too large~\citep{suga94,shen98a}.
Therefore, we would like to improve our EOS table in order to be consistent
with all available constraints from astrophysical observations.
It is well known that the neutron-star radius is closely related to the density
dependence of nuclear symmetry energy~\citep{horo01}.
There exists a positive correlation between the slope parameter $L$ of
the symmetry energy and the neutron-star radius~\citep{alam16}.
Since the TM1 model has a rather large slope parameter $L=110.8$ MeV, it predicts
too large radii for neutron stars as compared to the estimations from astrophysical
observations. In the present work, we prefer to employ an extended version of the
TM1 model with $L=40$ MeV (hereafter referred to as the TM1e model),
where an additional $\omega$-$\rho$ coupling term is introduced to modify the
density dependence of the symmetry energy~\citep{bao14b}.
By simultaneously adjusting the two parameters associated with the $\rho$ meson in the TM1e
model, we achieve the slope parameter $L=40$ MeV at saturation density and the same
symmetry energy as the original TM1 model at a density of 0.11 fm$^{-3}$.
It is noteworthy that the TM1e and original TM1 models have the same isoscalar
properties and fixed symmetry energy at $0.11\, \rm{fm}^{-3}$, so that
both models can provide very similar descriptions of stable nuclei.
There are also other extended TM1 models for varying the symmetry energy slope $L$ by
including $\omega$-$\rho$ or $\sigma$-$\rho$ coupling term~\citep{prov13,prov16},
where the coupling constants associated with the $\rho$ meson
are adjusted to yield the same symmetry energy
as the original TM1 model at a density of 0.1 fm$^{-3}$.
In our TM1e model, we prefer to fix the symmetry energy at the density of
$0.11\, \rm{fm}^{-3}$, since this choice can provide almost unchanged
binding energy of $^{208}$Pb for different $L$ (see Figure 1 of~\citet{bao14b}).
Furthermore, the TM1e model predicts much smaller neutron-star radii than the original
TM1 model due to the difference in the slope parameter $L$.
It is found that the TM1e model yields a radius of 13.1 km for a canonical $1.4 M_\odot$
neutron star, while the corresponding value of the original TM1 model is as large
as 14.2 km~\citep{ji19}. According to the constraints based on astrophysical
observations and terrestrial nuclear experiments~\citep{oert17,tews17,tami17},
the slope parameter $L=40$ MeV of the TM1e model is more favored than $L=110.8$ MeV
of the original TM1 model. Moreover, the neutron-star radius in the TM1e model
is well within the new observational data by {\it NICER}.
We have two aims in this article. The first is to construct a new EOS
table (hereafter referred to as EOS4) for numerical simulations of
core-collapse supernovae and neutron-star mergers
based on the TM1e model, which is compatible
with both experimental nuclear data and recent observations of neutron stars.
The second is to make a detailed comparison between the new EOS4
and previous EOS2 in~\citet{shen11}, so that we can examine the influences of
symmetry energy and its slope on various aspects of the EOS for
astrophysical simulations.
We emphasize that both EOS4 and EOS2 are constructed using the same treatment for
nonuniform matter and uniform matter with nucleonic degrees of freedom,
but employ different RMF models for nuclear interaction.
Since the TM1e and TM1 models have the same properties of symmetric nuclear matter
but different behavior of symmetry energy, the differences between these two EOS
tables are solely due to different density dependence of symmetry energy.
For convenience in use and comparison, the new EOS4 is designed in the same tabular
form covering the full range of temperature, proton fraction, and baryon density
as described in~\citet{shen11}. For simplicity, only nucleonic degrees of freedom
are taken into account in EOS4, while the appearance of hyperons and/or quarks
at high densities is neglected.
By applying the new EOS4 together with EOS2 in astrophysical simulations,
it is possible to estimate the effects of symmetry energy and its density dependence
on core-collapse supernovae, black hole formation, and binary neutron-star merger.
This paper is arranged as follows. In Section~\ref{sec:2}, we briefly
describe the framework for building the EOS table.
In Section~\ref{sec:3}, we discuss and compare the new EOS4
with previous EOS2 by examining the phase diagram, compositions,
and thermodynamic quantities.
Section~\ref{sec:4} is devoted to a summary.
\section{Formalism}
\label{sec:2}
For making the article self-contained, we give a brief description of
the RMF model and Thomas--Fermi approximation used for constructing the EOS table.
\subsection{RMF model}
\label{sec:2.1}
We employ the RMF model with an extended TM1 parametrization, namely the TM1e model,
to describe the nuclear system, where nucleons interact through
the exchange of various mesons including the isoscalar-scalar meson $\sigma$,
isoscalar-vector meson $\omega$, and isovector-vector meson $\rho$~\citep{bao14a,bao14b}.
The nucleonic Lagrangian density reads
\begin{eqnarray}
\label{eq:LRMF}
\mathcal{L}_{\rm{RMF}} & = & \sum_{i=p,n}\bar{\psi}_i
\left[ i\gamma_{\mu}\partial^{\mu}-\left(M+g_{\sigma}\sigma\right) \right. \nonumber \\
&& \left. -\gamma_{\mu} \left(g_{\omega}\omega^{\mu} +\frac{g_{\rho}}{2}
\tau_a\rho^{a\mu}\right)\right]\psi_i \nonumber \\
&& +\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma
-\frac{1}{2}m^2_{\sigma}\sigma^2-\frac{1}{3}g_{2}\sigma^{3} -\frac{1}{4}g_{3}\sigma^{4}
\nonumber \\
&& -\frac{1}{4}W_{\mu\nu}W^{\mu\nu} +\frac{1}{2}m^2_{\omega}\omega_{\mu}\omega^{\mu}
+\frac{1}{4}c_{3}\left(\omega_{\mu}\omega^{\mu}\right)^2 \nonumber
\\
&& -\frac{1}{4}R^a_{\mu\nu}R^{a\mu\nu} +\frac{1}{2}m^2_{\rho}\rho^a_{\mu}\rho^{a\mu} \nonumber \\
&& +\Lambda_{\rm{v}} \left(g_{\omega}^2 \omega_{\mu}\omega^{\mu}\right)
\left(g_{\rho}^2\rho^a_{\mu}\rho^{a\mu}\right),
\end{eqnarray}
where $W^{\mu\nu}$ and $R^{a\mu\nu}$ denote the antisymmetric field
tensors for $\omega^{\mu}$ and $\rho^{a\mu}$, respectively\footnote{
Note that the coupling constant for isovector-vector meson,
$g_{\rho}$, is different by a factor of 2 from the one in~\citet{shen11}.
We follow here the convention of~\citet{bao14a}. }.
Under the mean-field approximation, the meson fields are treated as classical
fields and the field operators are replaced by their expectation values.
In a static uniform system, the nonzero components are
$\sigma =\left\langle \sigma \right\rangle$, $\omega =\left\langle
\omega^{0}\right\rangle$, and $\rho =\left\langle \rho^{30} \right\rangle$.
We derive the equations of motion for mesons and the Dirac equation
for nucleons, which are coupled with each other and can be solved self-consistently.
Compared with the original TM1 model adopted in~\citet{shen11},
an additional $\omega$-$\rho$ coupling term is introduced in the Lagrangian
density (\ref{eq:LRMF}), which plays a crucial role in determining the density
dependence of the symmetry energy~\citep{horo01,cava11,prov13,bao14a,bao14b}.
By adjusting the coupling constants, $g_{\rho}$ and $\Lambda_{\rm{v}}$,
it is possible to control the behavior of symmetry energy and its
density dependence. In the TM1e model, the slope parameter $L=40$ MeV and the
symmetry energy $E_{\text{sym}}=31.38$ MeV at saturation density are obtained,
which fall well within the constraints from various observations~\citep{oert17}.
The corresponding values in the original TM1 model are
$L=110.8$ MeV and $E_{\text{sym}}=36.89$ MeV, which are rather large and disfavored
by recent astrophysical observations.
In Table~\ref{tab:1}, we present the coupling constants of the TM1e and TM1 models
for completeness. It is shown that only $g_{\rho}$ and $\Lambda_{\rm{v}}$ related
to isovector parts are different, while all other parameters remain the same.
It is noteworthy that the TM1e model provides the same isoscalar saturation properties
and similar binding energies of finite nuclei as the original TM1 model,
whereas the density dependence of symmetry energy is very different.
In Figure~\ref{fig:1mat}, we plot the energy per baryon $E/A$ of symmetric nuclear
matter and neutron matter as a function of the baryon number density $n_B$.
It is shown that the behavior of symmetric nuclear matter is exactly the same between
the TM1e and TM1 models, while significant differences are observed in neutron matter.
This is related to different density dependence of symmetry energy between these
two models, which is displayed in Figure~\ref{fig:2Esym}.
One can see that the symmetry energy $E_{\text{sym}}$ in the TM1e model is slightly
larger at low densities and much smaller at high densities than that in the original
TM1 model. It is interesting and convenient to explore the influence of symmetry
energy and its density dependence on the properties of the EOS for supernova simulations
by using these two models.
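The qualitative difference between the two curves in Figure~\ref{fig:2Esym} can be reproduced already by the standard leading-order expansion $E_{\rm sym}(n_B) \approx E_{\rm sym}(n_0) + (L/3)\,(n_B-n_0)/n_0$. In the sketch below, $E_{\rm sym}(n_0)$ and $L$ are the model values quoted in the text, while the saturation density $n_0 = 0.145~\mathrm{fm}^{-3}$ is an assumed TM1-like value.

```python
def esym(n, esym0, L, n0=0.145):
    """Leading-order density expansion of the symmetry energy (MeV):
    E_sym(n) ~ E_sym(n0) + (L/3) (n - n0) / n0.
    n0 = 0.145 fm^-3 is an assumed saturation density."""
    return esym0 + (L / 3.0) * (n - n0) / n0

def esym_tm1e(n):
    # TM1e values from the text: E_sym = 31.38 MeV, L = 40 MeV
    return esym(n, esym0=31.38, L=40.0)

def esym_tm1(n):
    # original TM1 values from the text: E_sym = 36.89 MeV, L = 110.8 MeV
    return esym(n, esym0=36.89, L=110.8)
```

Even this crude linear expansion shows the TM1e symmetry energy lying above TM1 at low density and well below it at high density, with the two curves crossing close to $0.11~\mathrm{fm}^{-3}$, where the models are constrained to agree.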
For the Thomas--Fermi calculations of nonuniform matter, we need to input
the energy density and entropy density of uniform nuclear matter,
which are given in the TM1e model by
\begin{eqnarray}
\label{eq:ERMF}
\epsilon &=& \displaystyle{\sum_{i=p,n} \frac{1}{\pi^2}
\int_0^{\infty} dk\,k^2\,
\sqrt{k^2+{M^{\ast}}^2}\left( f_{i+}^{k}+f_{i-}^{k}\right) } \nonumber\\
& &
+\frac{1}{2}m_{\sigma}^2\sigma^2+\frac{1}{3}g_{2}\sigma^{3}
+\frac{1}{4}g_{3}\sigma^{4}
+\frac{1}{2}m_{\omega}^2\omega^2+\frac{3}{4}c_{3}\omega^{4} \nonumber\\
& &
+\frac{1}{2}m_{\rho}^2\rho^2
+3 \Lambda_{\rm{v}}\left(g^2_{\omega}\omega^2\right)
\left(g^2_{\rho}\rho^2\right),
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:SRMF}
s &=& -\displaystyle{\sum_{i=p,n}\frac{1}{\pi^{2}}
\int_{0}^{\infty}dk\,k^{2}}
\left[ f_{i+}^{k}\ln f_{i+}^{k}+\left( 1-f_{i+}^{k}\right)
\ln \left(1-f_{i+}^{k}\right) \right. \nonumber \\
& & \left. +f_{i-}^{k}\ln f_{i-}^{k}
+\left( 1-f_{i-}^{k}\right) \ln \left( 1-f_{i-}^{k}\right) \right].
\end{eqnarray}
Here $M^{\ast}=M+g_{\sigma}\sigma$ is the effective nucleon mass.
$f_{i+}^{k}$ and $f_{i-}^{k}$ ($i=p,n$) denote, respectively, the occupation
probabilities of nucleon and antinucleon at momentum $k$,
which are given by the Fermi-Dirac distribution,
\begin{eqnarray}
\label{eq:firmf}
f_{i\pm}^{k}=\left\{1+\exp \left[ \left( \sqrt{k^{2}+{M^{\ast}}^2}
\mp \nu_{i}\right)/T\right]
\right\}^{-1},
\end{eqnarray}
with the kinetic part of the chemical potential $\nu_i$ related to
the chemical potential $\mu_i$ as
\begin{eqnarray}
\nu_{i} = \mu_{i} - g_{\omega}\omega - \frac{g_{\rho}}{2}\tau_{3i}\rho .
\end{eqnarray}
The number density of protons ($i=p$) or neutrons ($i=n$) is calculated by
\begin{equation}
\label{eq:nirmf}
n_{i}=\frac{1}{\pi^2}
\int_0^{\infty} dk\,k^2\,\left(f_{i+}^{k}-f_{i-}^{k}\right).
\end{equation}
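A direct numerical evaluation of the particle term in the density integral above illustrates how such quantities are tabulated. The quadrature scheme, the momentum cutoff, and the neglect of the antiparticle contribution are assumptions of this sketch (units MeV, $\hbar = c = 1$).

```python
import math

def fermi_dirac(k, nu, T, m_eff):
    """Occupation probability f+ at momentum k (MeV)."""
    e = math.sqrt(k * k + m_eff * m_eff)
    x = (e - nu) / T
    if x > 50.0:          # occupation is numerically zero; avoid overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

def number_density(nu, T, m_eff, kmax=2000.0, nk=20000):
    """Particle contribution to the number density,
    n = (1/pi^2) * integral of k^2 f+(k) dk, in MeV^3.
    Trapezoidal quadrature with cutoff kmax is a numerical assumption;
    the antiparticle term f- is omitted in this sketch."""
    dk = kmax / nk
    total = 0.0
    for j in range(nk + 1):
        k = j * dk
        w = 0.5 if j in (0, nk) else 1.0
        total += w * k * k * fermi_dirac(k, nu, T, m_eff)
    return total * dk / math.pi ** 2
```

At low temperature the result approaches the degenerate Fermi-gas value $n = k_F^3/(3\pi^2)$ with $\nu = \sqrt{k_F^2 + {M^{\ast}}^2}$, which provides a convenient accuracy check on the quadrature.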
Using the results of the TM1e model as input in the Thomas--Fermi calculation,
we compute the average free energy density of nonuniform matter, and compare it with
the one of uniform matter. At a given temperature $T$, proton fraction $Y_p$,
and baryon mass density $\rho_B$, the thermodynamically stable state is the
one having the lowest free energy density. We determine the stable state and
the phase transition between nonuniform matter and uniform matter by
minimizing the free energy density.
\begin{table*}[tbp]
\caption{Coupling constants of the TM1e and TM1 models.}
\begin{center}
\begin{tabular}{lcccccccccccc}
\hline\hline
Model & $g_\sigma$ & $g_\omega$ & $g_\rho$ & $g_{2}$ [fm$^{-1}$] & $g_{3}$ & $c_{3}$ & $\Lambda_{\textrm{v}}$ \\
\hline
TM1e & 10.0289 & 12.6139 & 13.9714 & $-$7.2325 &0.6183 & 71.3075 & 0.0429 \\
TM1 & 10.0289 & 12.6139 & 9.2644 & $-$7.2325 &0.6183 & 71.3075 & 0.0000 \\
\hline\hline
\end{tabular}
\label{tab:1}
\end{center}
\end{table*}
\begin{figure}[htbp]
\centering
\includegraphics[width=8.6 cm]{1mat.eps}
\caption{Energy per baryon $E/A$ of symmetric nuclear matter and neutron matter
as a function of the baryon number density $n_B$ in the TM1e and TM1 models. }
\label{fig:1mat}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8.6 cm]{2Esym.eps}
\caption{Symmetry energy $E_{\rm{sym}}$ as a function of the baryon number density $n_{B}$
in the TM1e and TM1 models. }
\label{fig:2Esym}
\end{figure}
\subsection{Thomas--Fermi approximation}
\label{sec:2.2}
At low temperature and subnuclear density region, heavy nuclei are
formed in order to lower the free energy of the system.
For the description of nonuniform matter, we employ the Thomas--Fermi
approximation with a parameterized nucleon distribution,
which was developed by~\citet{oyam93} and used in our previous works~\citep{shen98b,shen11}.
The nonuniform matter is modeled as a mixture of a single species of
heavy nuclei, alpha particles, and free nucleons outside nuclei,
while the leptons are approximated as an ideal relativistic gas separately.
The spherical nuclei are arranged in a body-centered-cubic (BCC) lattice
to minimize the Coulomb lattice energy~\citep{oyam93},
while the Wigner--Seitz cell is introduced to simplify the calculation of free energy.
Nonspherical nuclei, known as pasta phases, may appear as the density
approaches the phase transition to uniform matter~\citep{avan10,pais12,okam13,bao15}.
The appearance of pasta phases can smooth the transition to uniform
matter (see, e.g.,~\citet{furu13}),
but the effects on thermodynamic quantities in the EOS table
are rather small. For simplicity, we consider only spherical configuration
in constructing the EOS table.
In the Wigner--Seitz cell, a spherical heavy nucleus is located at the center,
while free nucleons and alpha particles exist outside the nucleus.
Each cell is assumed to be charge neutral and the background electron gas is uniform.
The density distribution of particle $i$ ($i=p$, $n$, or $\alpha$) in the cell
is assumed to have the form
\begin{equation}
\label{eq:nitf}
n_i\left(r\right)=\left\{
\begin{array}{ll}
\left(n_i^{\rm{in}}-n_i^{\rm{out}}\right) \left[1-\left(\frac{r}{R_i}\right)^{t_i}
\right]^3 +n_i^{\rm{out}}, &\hspace{0cm} 0 \leq r \leq R_i, \\
n_i^{\rm{out}}, &\hspace{-1.5cm} R_i \leq r \leq R_{\rm{cell}}, \\
\end{array} \right.
\end{equation}
where $r$ denotes the distance from the center of the cell.
$R_{\rm{cell}}$ is the radius of the cell, which is related to the cell
volume $V_{\rm{cell}}$ and the lattice constant $a$ by
$V_{\rm{cell}} = a^3 = 4 \pi R_{\rm{cell}}^3 / 3 =N_B / n_B $
with $N_B$ and $n_{B}$ being the baryon number per cell and the average
baryon number density, respectively.
The baryon mass density is defined as $\rho_B=m_{u} n_B$ with
$m_{u}=931.494$ MeV being the atomic mass unit.
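The parameterized distribution and the cell-average relation above can be sketched numerically as follows; the particular parameter values and the trapezoidal quadrature are illustrative assumptions, not fitted Thomas--Fermi results.

```python
def density_profile(r, n_in, n_out, R_i, t_i):
    """Parameterized nucleon distribution in the Wigner-Seitz cell:
    a smooth cubic falloff from n_in at the center to a constant
    exterior gas n_out for r >= R_i (r, R_i in fm; densities in fm^-3)."""
    if r <= R_i:
        x = (r / R_i) ** t_i
        return (n_in - n_out) * (1.0 - x) ** 3 + n_out
    return n_out

def cell_average(n_in, n_out, R_i, t_i, R_cell, nr=2000):
    """Average baryon density over the Wigner-Seitz sphere,
    n_B = (3 / R_cell^3) * integral of n(r) r^2 dr from 0 to R_cell
    (trapezoid rule; grid size is a numerical assumption)."""
    dr = R_cell / nr
    total = 0.0
    for j in range(nr + 1):
        r = j * dr
        w = 0.5 if j in (0, nr) else 1.0
        total += w * density_profile(r, n_in, n_out, R_i, t_i) * r * r
    return 3.0 * total * dr / R_cell ** 3
```

At fixed average density $n_B$, the minimization adjusts the profile parameters and the cell radius subject to this averaging constraint.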
For nonuniform matter at given temperature $T$, proton fraction $Y_p$,
and baryon mass density $\rho_B$, the thermodynamically stable state is
the one with the lowest free energy density, $f=F_{\rm{cell}}/V_{\rm{cell}}$.
The free energy per cell $F_{\rm{cell}}$ is given by
\begin{equation}
\label{eq:fc}
F_{\rm{cell}}=\left(E_{\rm{bulk}}+E_{\rm{surf}}+E_{\rm{Coul}}\right)- T S_{\rm{cell}},
\end{equation}
where the bulk energy $E_{\rm{bulk}}$ and entropy $S_{\rm{cell}}$ are computed by
performing integrations over the cell. The local energy and entropy densities
can be expressed as the sum of contributions from nucleons and alpha particles.
We use the RMF results of the TM1e model for the contributions of nucleons,
while the alpha particles are treated as an ideal Boltzmann gas.
To describe the dissolution of alpha particles at high densities,
the excluded-volume correction is taken into account as described in~\citet{shen11}.
For performing numerical integrations of $E_{\rm{bulk}}$ and $S_{\rm{cell}}$,
we use the tabulated results of the TM1e model given by Equations~(\ref{eq:ERMF})
and~(\ref{eq:SRMF}) as input in the Thomas--Fermi calculation, and then the
corresponding local densities contributed by nucleons are computed from
the input table using a linear interpolation procedure.
The input table is designed to include 871 grid points for the baryon number
density $n_B$ and 1001 grid points for the proton fraction $Y_p$, so that
the linear interpolation can be used with good accuracy.
As for the contribution of alpha particles, it is calculated within the ideal-gas
approximation, where the alpha-particle binding energy $B_{\alpha}=28.3$ MeV is
taken into account~\citep{latt91,shen11}.
Generally, the number density of alpha particles is rather small, and therefore,
the ideal-gas approximation can provide a reasonable description for alpha particles.
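The ideal Boltzmann gas of alpha particles with binding energy $B_\alpha = 28.3$ MeV can be sketched as below. The nonrelativistic Maxwell--Boltzmann form with spin degeneracy one and the omission of the excluded-volume correction are simplifications of this sketch (units MeV, $\hbar = c = 1$).

```python
import math

def alpha_density(mu_alpha, T, m_alpha=3727.4, B_alpha=28.3):
    """Ideal Maxwell-Boltzmann number density of alpha particles,
    n_alpha = (m_alpha T / 2 pi)^{3/2} exp[(mu_alpha - m_alpha + B_alpha) / T],
    in MeV^3.  B_alpha = 28.3 MeV is the binding energy from the text;
    the nonrelativistic dilute-gas form is an assumption of this sketch."""
    return (m_alpha * T / (2.0 * math.pi)) ** 1.5 \
        * math.exp((mu_alpha - m_alpha + B_alpha) / T)
```

The exponential sensitivity to $\mu_\alpha - m_\alpha + B_\alpha$ is what makes alpha particles abundant only in a narrow window of density and temperature, consistent with the dilute-gas treatment.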
In Equation~(\ref{eq:fc}), $E_{\rm{surf}}$ represents the surface energy due to the inhomogeneity
of nucleon distributions. We use the simple form
\begin{equation}
\label{eq:es}
E_{\rm{surf}}=\int_{\rm{cell}} F_0 \left| \nabla \left( n_n\left(r\right)+
n_p\left(r\right) \right) \right|^2 d^3r,
\end{equation}
where the parameter $F_0=70 \, \rm{MeV\,fm^5}$ is the same as that adopted in Shen EOS
with the original TM1 model, which was determined in~\citet{shen98a} by performing
the Thomas--Fermi calculation for finite nuclei so as to reproduce
the gross properties of nuclear masses and charge radii, as described in the Appendix
of~\citet{oyam93}. We use the same value of $F_0$
in the new EOS4 because the TM1e model predicts properties of
finite nuclei very similar to those of the original TM1 model (see Table 2 below),
and hence the Thomas--Fermi
calculation in the TM1e model with $F_0=70 \, \rm{MeV\,fm^5}$ reproduces
similar gross properties of nuclear masses and charge radii.
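For a spherically symmetric cell, the integral in Equation~(\ref{eq:es}) reduces to $E_{\rm{surf}}=4\pi F_0\int (dn_b/dr)^2 r^2\,dr$, which the following sketch evaluates for a hypothetical Woods--Saxon-like profile; the radius, diffuseness, and densities are illustrative choices, not Thomas--Fermi results.

```python
import numpy as np

# Illustrative evaluation of the surface energy for a spherically symmetric
# cell, E_surf = 4*pi*F0 * int (dn_b/dr)^2 r^2 dr, with a Woods-Saxon-like
# placeholder for the total nucleon density profile (R, t, and the densities
# below are hypothetical, not the actual Thomas-Fermi solution).
F0 = 70.0                                  # MeV fm^5
r = np.linspace(0.0, 15.0, 3001)           # radial grid up to the cell edge [fm]
n_in, n_out, R, t = 0.14, 0.01, 6.0, 0.6   # fm^-3, fm
n_b = n_out + (n_in - n_out) / (1.0 + np.exp((r - R) / t))
dn_dr = np.gradient(n_b, r)
integrand = dn_dr**2 * r**2
# trapezoidal rule written out explicitly
E_surf = 4.0 * np.pi * F0 * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))
print(E_surf)  # MeV per cell; ~ 1.5e2 for these placeholder values
```

For a sharp interface ($t \ll R$), the integral is dominated by the term $(\Delta n)^2 R^2/(6t)$, so the surface energy grows with the density gradient at the nuclear surface, as discussed below for the EOS4/EOS2 comparison.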
The Coulomb energy per cell $E_{\rm{Coul}}$ is given by
\begin{equation}
\label{eq:ec}
E_{\rm{Coul}}=\frac{1}{2}\int_{\rm{cell}} e \left[n_p\left(r\right)
+2n_{\alpha}\left(r\right)-n_e\right]\,\phi(r) d^3r
\,+\,\triangle E_C,
\end{equation}
where $\phi(r)$ denotes the electrostatic potential calculated
in the Wigner--Seitz approximation and $\triangle E_C$ is the correction term
for the BCC lattice~\citep{oyam93,shen11}.
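As a rough orientation for the magnitude of this term, the sketch below evaluates the standard closed-form Wigner--Seitz Coulomb energy of a uniformly charged sphere in a neutralizing electron background; the actual $E_{\rm{Coul}}$ integrates the Thomas--Fermi proton profile, and the values of $Z$, $R_N$, and $R_{\rm{cell}}$ used here are hypothetical.

```python
# Standard Wigner-Seitz Coulomb energy of a uniformly charged sphere
# (charge Z*e, radius R_N) inside a neutralizing cell of radius R_cell.
# Z, R_N, and R_cell below are hypothetical illustration values.
E2 = 1.44  # e^2 in MeV fm

def e_coul_ws(Z, R_N, R_cell):
    x = R_N / R_cell
    return 0.6 * Z**2 * E2 / R_N * (1.0 - 1.5 * x + 0.5 * x**3)

print(round(e_coul_ws(40, 6.0, 20.0), 1))  # MeV per cell
```

The bracketed lattice factor shows how the neutralizing background reduces the Coulomb energy relative to an isolated charged sphere, which is the competition that sets the equilibrium cell size.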
At given temperature $T$, proton fraction $Y_p$, and baryon mass density $\rho_B$,
we perform the minimization of the free energy density with respect to
independent variables in the parameterized Thomas--Fermi approximation.
To avoid the presence of too many parameters in the minimization procedure,
we use the same parameters $R_p$ and $t_p$ for both proton and
alpha-particle distribution functions. Furthermore, $n_{\alpha}^{\rm{in}}=0$
is adopted, so that alpha particles disappear at the center of the nucleus.
In principle, the nucleon distribution in the Wigner--Seitz cell can be
determined in a self-consistent Thomas--Fermi approximation,
where the set of coupled equations is solved iteratively
in coordinate space~\citep{zhang14}.
However, the self-consistent Thomas--Fermi calculation
requires much more computational effort than the parameterized Thomas--Fermi
approximation. In our previous work~\citep{zhang14}, we made a detailed
comparison between the self-consistent Thomas--Fermi approximation
and the parameterized Thomas--Fermi approximation, which showed that
the differences in thermodynamic quantities between these two methods
are negligible and would not affect the general behavior of the EOS.
Therefore, we prefer to employ the parameterized Thomas--Fermi approximation
in the present calculation. Furthermore, it is also helpful for examining
the effects of symmetry energy by comparing EOS4 with EOS2 based on
the same method.
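The structure of this minimization can be illustrated with a toy liquid-drop-like model, in which a surface term $\propto 1/R$ competes with a Coulomb term $\propto R^2$; the coefficients are placeholders, and the scan over a single radius parameter stands in for the full minimization over the parameterized density profiles.

```python
# Toy sketch of the free-energy minimization that selects the nuclear size:
# a liquid-drop-like energy per baryon with a surface term ~ 1/R competing
# against a Coulomb term ~ R^2, scanned over the radius R. The coefficients
# are placeholders, not TM1e quantities.
a_surf, a_coul = 18.0, 0.7  # MeV fm and MeV fm^-2 (hypothetical)

def e_per_baryon(R):
    return a_surf / R + a_coul * R**2

radii = [0.01 * k for k in range(100, 2000)]        # scan R in [1, 20) fm
R_best = min(radii, key=e_per_baryon)
R_exact = (a_surf / (2.0 * a_coul)) ** (1.0 / 3.0)  # analytic minimum
print(R_best, round(R_exact, 3))
```

At the analytic minimum the surface energy is exactly twice the Coulomb energy, a familiar liquid-drop relation.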
After the thermodynamically favorable state is determined in the minimization procedure,
we calculate the thermodynamic quantities like the pressure and chemical potentials
from the free energy per baryon $F \left(T,Y_{p},n_{B}\right)$ over the full range
of the EOS table by the thermodynamic relations:
\begin{eqnarray}
p\left(T,Y_{p},n_{B}\right) &=& \left[ n_B^{2} \frac{\partial F} {\partial n_B} \right]_{T,Y_{p}},
\label{eq:ppre} \\
\mu_{p}\left(T,Y_{p},n_{B}\right) &=& \left[\frac{\partial \left( n_{B} F\right)}{\partial n_p} \right]_{T,n_n},
\label{eq:pmup} \\
\mu_{n}\left(T,Y_{p},n_{B}\right) &=& \left[\frac{\partial \left( n_{B} F\right)}{\partial n_n} \right]_{T,n_p},
\label{eq:pmun}
\end{eqnarray}
where $n_{p}=Y_{p}n_{B}$ and $n_{n}=\left(1-Y_{p}\right)n_{B}$ are the average number densities
of protons and neutrons, respectively. The final EOS table contains not only thermodynamic
quantities but also compositions of matter and other information.
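The following sketch applies Equations~(\ref{eq:ppre})--(\ref{eq:pmun}) to a toy free energy density via central finite differences; the functional form is a placeholder rather than the TM1e result, and the exact Euler relation $p=n_n\mu_n+n_p\mu_p-n_BF$ serves as a consistency check.

```python
# Numerical sketch of the thermodynamic relations with a toy free energy
# density f(n_n, n_p) = n_B * F (a placeholder functional form, not the
# TM1e result). The Euler relation p = n_n*mu_n + n_p*mu_p - f holds
# exactly and checks the finite differences.
def f_dens(n_n, n_p):  # toy free energy density [MeV fm^-3]
    n_B = n_n + n_p
    return 35.0 * n_B**(5.0 / 3.0) + 60.0 * (n_n - n_p)**2 / n_B

h = 1e-7
n_n, n_p = 0.12, 0.04
mu_n = (f_dens(n_n + h, n_p) - f_dens(n_n - h, n_p)) / (2 * h)  # at fixed n_p
mu_p = (f_dens(n_n, n_p + h) - f_dens(n_n, n_p - h)) / (2 * h)  # at fixed n_n
n_B, Y_p = n_n + n_p, n_p / (n_n + n_p)
F = lambda nb: f_dens((1.0 - Y_p) * nb, Y_p * nb) / nb  # free energy per baryon
p = n_B**2 * (F(n_B + h) - F(n_B - h)) / (2 * h)        # at fixed T, Y_p
residual = abs(n_n * mu_n + n_p * mu_p - f_dens(n_n, n_p) - p)
print(residual)  # ~ 0 up to finite-difference error
```

In practice the derivatives must be evaluated carefully near phase boundaries, where the free energy is continuous but its first derivatives (and hence the pressure) may jump, as discussed below.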
For convenience in use and comparison, the new EOS4 is designed
to have the same tabular form as EOS2, while the definitions of the physical
quantities in the EOS table have been given in Appendix A of~\citet{shen11}.
Compared to the treatment of nonuniform matter in~\citet{shen11},
the results of the TM1e model are used as input in the Thomas--Fermi calculation
of EOS4, instead of the original TM1 model used in EOS2.
The different density dependence of the symmetry energy between TM1e ($L=40$ MeV)
and TM1 ($L=110.8$ MeV) is expected to lead to more significant effects
in the low $Y_p$ region.
It is interesting to make a detailed comparison between EOS4 and EOS2,
so that we can explore the influences of symmetry energy and
its density dependence on properties of the EOS for astrophysical simulations.
\section{Results}
\label{sec:3}
We construct the new EOS4 based on the TM1e model with $L=40$ MeV covering a wide
range of temperature $T$, proton fraction $Y_p$, and baryon mass density $\rho_B$
for numerical simulations of core-collapse supernovae and neutron-star mergers.
For convenience in practical use, we provide the new EOS4 in the same tabular form
within the ranges as given in Table 1 of~\citet{shen11}.
All physical quantities included in the EOS table have been defined in
Appendix A of~\citet{shen11}.
Compared to the EOS2 based on the original TM1 model in~\citet{shen11},
the new EOS4 is more compatible with both experimental nuclear data and recent
observations of neutron stars.
In Table~\ref{tab:2}, we present some properties of nuclear symmetry energy,
finite nuclei, and neutron stars, so as to examine the compatibility of the models
with current constraints and experimental data.
It is shown that the results of finite nuclei in the TM1e and TM1 models are very
similar to each other and in good agreement with experimental data.
On the other hand, the TM1e model provides a much smaller radius and tidal
deformability for a $1.4 M_\odot$ neutron star, which is more consistent
with the current constraints.
It is reasonable that the different behaviors of the symmetry energy in these
two models have more pronounced effects on neutron-rich objects like neutron stars.
More detailed properties of neutron stars obtained in the TM1e model have been
reported in our recent study~\citep{ji19}.
\begin{table*}[htb]
\caption{Properties of symmetry energy, finite nuclei, and neutron stars
in the TM1e and TM1 models.
$E_{\rm{sym}}$ and $L$ are the nuclear symmetry energy and its slope
parameter at saturation density, respectively.
The binding energy per nucleon $E/A$, charge radius $r_{\text{c}}$,
and neutron-skin thickness $\triangle r_{\text{np}}$ of $^{208}$Pb
obtained in the RMF approach and Thomas--Fermi (TF) approximation
are compared with the experimental data in the last column.
$M_{\mathrm{max}}$ is the maximum mass of neutron stars,
while $R_{1.4}$ and $\Lambda_{1.4}$ denote the radius and tidal
deformability for a $1.4 M_\odot$ neutron star, respectively.}
\begin{center}
\begin{tabular}{llccl}
\hline\hline
& & EOS4(TM1e) & EOS2(TM1) & constraints\\
\hline
symmetry energy & $E_{\rm{sym}}$ [MeV] & 31.38 & 36.89 & $31.7\pm 3.2$~\citep{oert17} \\
& $L$ [MeV] & 40 & 110.8 & $58.7\pm 28.1$~\citep{oert17} \\
\hline
finite nuclei (RMF) & $E/A$ ($^{208}$Pb) [MeV] & 7.88 & 7.88 & 7.87~\citep{audi03} \\
& $r_{\rm{c}}$ ($^{208}$Pb) [fm] & 5.56 & 5.54 & 5.50~\citep{ange13} \\
& $\triangle r_{\rm{np}}$ ($^{208}$Pb) [fm] & 0.16 & 0.27 & $0.33^{+0.16}_{-0.18}$~\citep{abra12}
\vspace{0.2cm}\\
finite nuclei (TF) & $E/A$ ($^{208}$Pb) [MeV] & 8.05 & 8.08 & 7.87~\citep{audi03} \\
& $r_{\rm{c}}$ ($^{208}$Pb) [fm] & 5.68 & 5.65 & 5.50~\citep{ange13} \\
& $\triangle r_{\rm{np}}$ ($^{208}$Pb) [fm] & 0.10 & 0.21 & $0.33^{+0.16}_{-0.18}$~\citep{abra12} \\
\hline
neutron stars & $M_{\rm{max}}$ [$M_\odot$] & 2.12 & 2.18 & $1.928 \pm 0.017$~\citep{fons16} \vspace{-0.0cm}\\
& & & & $2.01 \pm 0.04$~\citep{anto13} \vspace{-0.0cm}\\
& & & & $2.14^{+0.10}_{-0.09}$~\citep{crom19} \\
& $R_{1.4}$ [km] & 13.1 & 14.2 & $10.5<R_{1.4}<13.3$~\citep{abbo18} \\
& & & & $12.0<R_{1.4}<13.45$~\citep{most18} \\
& $\Lambda_{1.4}$ & 652 & 1047 & $<800$~\citep{abbo17} \\
& & & & $190^{+390}_{-120}$~\citep{abbo18} \\\hline\hline
\end{tabular}
\label{tab:2}
\end{center}
\end{table*}
\begin{figure}[htbp]
\centering
\includegraphics[width=8.6 cm]{3TRho.eps}
\caption{Phase diagram in the $\rho_B$--$T$ plane for $Y_p=0.1$, $0.3$, and $0.5$.
The shaded region corresponds to the nonuniform matter phase where heavy
nuclei are formed. }
\label{fig:3TRho}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8.6 cm]{4YpRho.eps}
\caption{Phase diagram in the $\rho_B$--$Y_p$ plane at $T=10$ MeV.
The shaded region corresponds to the nonuniform matter phase where heavy
nuclei are formed. }
\label{fig:4YpRho}
\end{figure}
To build the EOS table for astrophysical simulations, we perform the free energy
minimization at each $T$, $Y_p$, and $\rho_B$ for both nonuniform matter and
uniform matter. The thermodynamically favorable state is the one with the
lowest free energy density among all configurations considered.
The phase transition is determined by comparing the free energy density
between nonuniform matter and uniform matter.
In Figure~\ref{fig:3TRho}, we show the phase diagram in the $\rho_B$--$T$
plane for $Y_p=0.1$, $0.3$, and $0.5$ obtained in EOS4 (red solid lines)
which is compared with that of EOS2 (blue dashed lines).
One can see that the nonuniform matter phase with heavy nuclei can exist
only in the low-temperature and subnuclear-density region.
At low densities, the uniform matter consists of a free nucleon gas together
with a small fraction of alpha particles. As the density increases,
heavy nuclei are formed in the nonuniform matter phase to lower
the free energy. When the density is beyond $\sim 10^{14.1}\,\rm{g/cm^{3}}$,
heavy nuclei dissolve and the favorable state becomes the uniform nuclear matter.
The density range of the nonuniform matter phase depends on both $T$ and $Y_p$.
As the temperature increases, the onset density of nonuniform matter increases
significantly, while the transition from nonuniform matter to uniform matter
is almost independent of $T$. When the temperature reaches the critical value $T_c$,
the nonuniform matter phase disappears completely, i.e., heavy nuclei cannot be
formed at $T>T_c$.
It is interesting to note the effects of symmetry energy
on the boundary of nonuniform matter.
For the case of $Y_p=0.5$ shown in the top panel of Figure~\ref{fig:3TRho},
there is no visible difference between EOS4(TM1e) and
EOS2(TM1) due to the same isoscalar properties in the two models.
For the case of $Y_p=0.1$ shown in the bottom panel,
the critical temperature $T_c$ in EOS4(TM1e) is
significantly higher than the one obtained in EOS2(TM1).
Furthermore, the transition density to uniform matter in EOS4(TM1e) is slightly
larger than that in EOS2(TM1). This is consistent with the correlation between
the symmetry energy slope and the crust-core transition density of neutron
stars~\citep{bao15}.
In Figure~\ref{fig:4YpRho}, we show the density range of nonuniform matter
as a function of $Y_p$ at $T=10$ MeV.
It is seen that there is a clear difference between EOS4(TM1e) and EOS2(TM1)
in the low $Y_p$ region, where the behavior of the symmetry energy plays an important role
in determining the properties of neutron-rich matter. One can see that heavy nuclei
do not appear in EOS2(TM1) at $T=10$ MeV for $Y_p<0.15$, whereas the
nonuniform matter phase exists until $Y_p\sim 0.04$ in EOS4(TM1e).
Similar effects of the symmetry energy and its slope on the phase diagram were also
observed in~\citet{toga17}, where the authors constructed the EOS table using
a non-relativistic variational method based on realistic nuclear forces.
It is interesting to find this similarity for both non-relativistic and relativistic
many-body frameworks with small $L$ values.
\begin{figure}[htbp]
\centering
\includegraphics[width=8.6 cm]{5XiRho.eps}
\caption{Fraction of neutrons (green dash-dotted line), protons (red dotted line), alpha
particles (blue dashed line), and heavy nuclei (black solid line) as a function of the
baryon mass density $\rho_B$ for $Y_p=0.1$ at $T=1$, $4$, and $10$ MeV.
The results obtained in EOS4(TM1e) and EOS2(TM1) are shown by thick and thin
lines, respectively. }
\label{fig:5XiRho}
\end{figure}
In Figure~\ref{fig:5XiRho}, we show the fractions of neutrons, protons, alpha particles,
and heavy nuclei as a function of the baryon mass density $\rho_B$ for $Y_p=0.1$
at $T=1$, $4$, and $10$ MeV. At low densities, the matter is a uniform gas of
neutrons and protons together with a small fraction of alpha particles.
The alpha-particle fraction $X_{\alpha}$ increases with increasing $\rho_B$
before the formation of heavy nuclei, but it rapidly decreases in the nonuniform
matter where heavy nuclei use up most of the nucleons.
When the density increases beyond $\sim 10^{14.1}\,\rm{g/cm^{3}}$,
heavy nuclei dissolve and the matter is composed of uniform neutrons and protons.
In general, the results of EOS2 (thin lines) are just slightly different from those
of EOS4 (thick lines). In the case of $T=10$ MeV (top panel), heavy nuclei
do not appear in EOS2 with the TM1 model, but alpha particles exist at intermediate densities.
This is different from the results of EOS4, where heavy nuclei are formed in the
density range $10^{13.6}\leq \rho_B \leq 10^{14.0}\,\rm{g/cm^{3}}$ with the TM1e
model. Due to the formation of heavy nuclei, $X_{n}$ and $X_{p}$ in this density
range are much different between EOS4 and EOS2.
\begin{figure}[htb]
\centering
\includegraphics[width=8.6 cm]{6dis.eps}
\caption{Density distributions of protons and neutrons inside the
Wigner--Seitz cell for the case of $T=1$ MeV and $Y_p=0.1$
at $\rho_B = 10^{13.8}\, \rm{g/cm^{3}}$.
The results obtained in EOS4 (red solid lines)
are compared with those of EOS2 (blue dashed lines).
The radius of the Wigner--Seitz cell is indicated by the hatch,
while the radius of the heavy nucleus is shown by the dash-dotted line. }
\label{fig:6dis}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=8.6 cm]{7A.eps}
\caption{Nuclear mass number $A$ as a function of the baryon mass density $\rho_B$
at $T=1$ MeV for $Y_p=0.5$, $0.3$, and $0.1$. The results obtained in EOS4 (red solid lines)
are compared with those of EOS2 (blue dashed lines). }
\label{fig:7A}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=8.6 cm]{8Z.eps}
\caption{Charge number $Z$ as a function of the baryon mass density $\rho_B$
at $T=1$ MeV for $Y_p=0.5$, $0.3$, and $0.1$. The results obtained in EOS4 (red solid lines)
are compared with those of EOS2 (blue dashed lines). }
\label{fig:8Z}
\end{figure}
In nonuniform matter, the properties of heavy nuclei are determined by
minimizing the free energy density in the parameterized Thomas--Fermi approximation.
We display in Figure~\ref{fig:6dis} the resulting density distributions of protons
and neutrons inside the Wigner--Seitz cell for the case of $T=1$ MeV and $Y_p=0.1$
at $\rho_B = 10^{13.8}\, \rm{g/cm^{3}}$.
The radius of the Wigner--Seitz cell is indicated by the hatch,
while the radius of the heavy nucleus is shown by the dash-dotted line.
The results obtained in EOS4 (red solid lines)
are compared with those of EOS2 (blue dashed lines).
It is shown that both the cell radius $R_{\rm{cell}}$ and the neutron
radius $R_n$ (i.e., the radius of the heavy nucleus) obtained in EOS4
are larger than those in EOS2. Furthermore, the neutron-skin
thickness, $R_n-R_p$, is relatively small in the case of EOS4.
This is because the TM1e model used in EOS4 has a smaller symmetry energy
slope $L=40$ MeV than the value of $L=110.8$ MeV in the TM1 model of EOS2.
It is well known that the neutron-skin thickness of finite nuclei is
positively correlated to the symmetry energy slope $L$.
On the other hand, the density distributions, $n_n$ and $n_p$, are also
largely affected by the symmetry energy slope $L$.
The dripped neutron density $n_n^{\rm{out}}$ of EOS4 is smaller than
that of EOS2, while the neutron density at the center $n_n^{\rm{in}}$
is much larger. This tendency can be understood from different behaviors
of the symmetry energy between TM1e and TM1 models.
As shown in Figure~\ref{fig:2Esym}, the TM1e model has larger $E_{\text{sym}}$
at low densities but smaller $E_{\text{sym}}$ at high densities
compared to the TM1 model. Therefore, the TM1e model results in
relatively larger $n_n^{\rm{in}}$ and smaller $n_n^{\rm{out}}$ than
the TM1 model. It is seen that the density gradient in EOS4 is larger
than that in EOS2, which leads to larger surface energy and nuclear radius.
A similar behavior was also reported in~\citet{toga17},
where the authors used the model with small $L$ based on realistic
nuclear forces and compared to the results of EOS2.
In Figures~\ref{fig:7A} and \ref{fig:8Z}, we show respectively the nuclear mass
number $A$ and charge number $Z$ as a function of the baryon mass density $\rho_B$
at $T=1$ MeV for $Y_p=0.5$, $0.3$, and $0.1$.
It is seen that both $A$ and $Z$ weakly depend on $\rho_B$ at low densities and
rapidly increase before the transition to uniform matter.
There are significant differences between EOS4 and EOS2 for small $Y_p$.
The values of $A$ and $Z$ obtained in EOS4 are larger than those of EOS2.
This is because the TM1e model with a small $L$ results in a large nuclear radius
as shown in Figure~\ref{fig:6dis}, which implies more protons
and neutrons are bound inside the heavy nucleus.
The differences of heavy nuclei between EOS4 and EOS2 may affect the neutrino
transport and emission in core-collapse supernovae, which need to be explored
in further studies.
\begin{figure}[htb]
\centering
\includegraphics[width=8.6 cm,clip]{9F.eps}
\caption{Free energy per baryon $F$ as a function of the baryon mass
density $\rho_B$ with $Y_p=0.1$ and $0.5$ at $T=1$ and $10$ MeV.
The results obtained in EOS4 (red solid lines)
are compared with those of EOS2 (blue dashed lines). }
\label{fig:9F}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=8.6 cm,clip]{10p.eps}
\caption{Same as Figure~\ref{fig:9F}, but for the pressure $p$. }
\label{fig:10p}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=8.6 cm,clip]{11S.eps}
\caption{Same as Figure~\ref{fig:9F}, but for the entropy per baryon $S$. }
\label{fig:11S}
\end{figure}
It is essential to discuss the effects of symmetry energy on the thermodynamic
quantities in the EOS table.
In Figure~\ref{fig:9F}, we show the free energy per baryon $F$ as a function of the
baryon mass density $\rho_B$ for $Y_p=0.1$ and $0.5$ at $T=1$ and $10$ MeV.
The results in EOS4(TM1e) are shown by solid lines,
while those in EOS2(TM1) are displayed by dashed lines for comparison.
There is almost no difference between EOS4 and EOS2 for the case of $Y_p=0.5$
due to the same isoscalar properties of the two models.
In the case of $Y_p=0.1$, the values of $F$ in EOS4 are smaller than those in EOS2,
and their difference increases with increasing $\rho_B$.
This is because the TM1e model has smaller symmetry energy than the TM1 model
at high densities, which leads to smaller free energy in neutron-rich matter.
Comparing the cases between $T=1$ MeV and $T=10$ MeV, the tendencies of the free
energy are very similar to each other. This implies that the dependence of
the symmetry energy effect on $T$ is rather weak.
We plot in Figure~\ref{fig:10p} the pressure $p$ as a function of $\rho_B$
for $Y_p=0.1$ and $0.5$ at $T=1$ and $10$ MeV.
The pressure is calculated from the derivative of the free energy, as given
in Equation~(\ref{eq:ppre}). Due to the formation of heavy nuclei
in nonuniform matter, the pressure has a rapid drop as shown in the case
of $Y_p=0.5$ in the top panel (note that this drop does not appear when the contributions
from leptons and photons are added).
In contrast, the pressure for $Y_p=0.1$ is
much smoother due to the smaller fraction of heavy nuclei.
It is noticed that there is a clear discontinuity in EOS4(TM1e) around the
phase transition to uniform matter at $\sim 10^{14.2}\,\rm{g/cm^{3}}$ for the case
of $Y_p=0.1$ and $T=1$ MeV. In fact, this discontinuity is also found
in other cases (see, e.g., Figure 7 of~\citet{shen98b} and Figure 14 of~\citet{toga17}).
This is because the phase transition is determined by minimizing the free energy,
and as a result, the free energy shown in Figure~\ref{fig:9F} is a smooth function
of the density. However, the pressure calculated from the first derivative of
the free energy may exhibit a discontinuity at the first-order phase transition~\citep{pais14}.
Compared to the results of EOS2 shown by dashed lines, the pressure of uniform matter
beyond $\sim 10^{14.1}\,\rm{g/cm^{3}}$ in EOS4 for $Y_p=0.1$ is relatively small,
so the discontinuity is more obvious in this case.
Generally, the pressure at high densities obtained in EOS4 is lower than that in EOS2,
which results from the weaker density dependence of the symmetry energy in the TM1e model.
Therefore, the new EOS4 is softer than EOS2 due to different behaviors
of the symmetry energy between these two models.
In Figure~\ref{fig:11S}, we show the entropy per baryon $S$ as a function of
$\rho_B$ for $Y_p=0.1$ and $0.5$ at $T=1$ and $10$ MeV.
At $T=1$ MeV, the values of $S$ for $Y_p=0.5$ are much smaller than those
for $Y_p=0.1$. This is because most of the nucleons exist inside heavy nuclei for
the case of $Y_p=0.5$, while there is a large fraction of free neutrons
for $Y_p=0.1$ as shown in Figure~\ref{fig:5XiRho}.
At $T=10$ MeV, the difference of $S$ between $Y_p=0.5$ and $Y_p=0.1$
is relatively small, because the formation of heavy nuclei becomes less
important as the temperature increases.
It is found that the difference in symmetry energy between the TM1e and TM1 models
has only a minor influence on the entropy, and as a result, the behavior of $S$
obtained in EOS4 is very similar to that in EOS2.
Generally, the TM1e model with $L=40$ MeV leads to visible differences in EOS4
from EOS2 for $Y_p\leq 0.3$, and the difference increases as the matter
becomes more neutron-rich.
\section{Summary}
\label{sec:4}
In this work, we constructed a new EOS table (EOS4) based on an extended
TM1 model with $L=40$ MeV (referred to as the TM1e model) for astrophysical
simulations of core-collapse supernovae and neutron-star mergers.
Following the method described in our previous study~\citep{shen11},
we employed the Thomas--Fermi approximation with a parameterized nucleon
distribution for the description of nonuniform matter,
which is modeled as a mixture of a single species of heavy nuclei,
alpha particles, and free nucleons outside nuclei.
At given temperature $T$, proton fraction $Y_p$, and baryon mass
density $\rho_B$, we perform the minimization of the free energy density
with respect to independent variables involved, so as to determine
the thermodynamically stable state with the lowest free energy.
For convenience in use and comparison, the new EOS4 was designed in the
same tabular form as the previous version EOS2 presented in~\citet{shen11}.
Now, both EOS4 and EOS2 are available at
{\it http://my.nankai.edu.cn/wlxy/sh\_en/list.htm},
{\it http://user.numazu-ct.ac.jp/$^{\sim}$sumi/eos/index.html},
and {\it http://doi.org/10.5281/zenodo.3612487}.
The main difference between the new EOS4 in this work and the previous EOS2
in~\citet{shen11} is that the TM1e model with a small
slope parameter $L=40$ MeV was used in EOS4 instead of the original
TM1 model with $L=110.8$ MeV adopted in EOS2.
The different behaviors of the symmetry energy between TM1e
and TM1 lead to visible impacts on various aspects of the EOS for
astrophysical simulations, especially in the neutron-rich region.
The effects of the symmetry energy and its slope observed in this work
are consistent with those reported in~\citet{toga17}.
The present work was motivated by recent developments in astrophysical observations,
such as the binary neutron-star merger GW170817, which provided new constraints
on the tidal deformability and radii of neutron stars.
It is likely that the TM1 model with $L=110.8$ MeV used in EOS2 predicts too large
neutron-star radii compared to the current observations. Therefore,
we prefer to revise our EOS table by employing the TM1e model with
$L=40$ MeV, which provides a much smaller neutron-star radius.
It is well known that the neutron-star radius is positively correlated to the
symmetry energy slope $L$. By introducing an additional $\omega$-$\rho$ coupling
term, it is possible to modify the density dependence of the symmetry energy
according to the constraints from astrophysical observations and terrestrial
nuclear experiments.
In the TM1e model, we simultaneously adjusted two parameters associated with
the $\rho$ meson, and as a result, the slope parameter $L=40$ MeV and the
symmetry energy $E_{\text{sym}}=31.38$ MeV at saturation density were achieved,
which fall well within the constraints from various observations.
It is noteworthy that the TM1e model provides the same properties of symmetric
nuclear matter and similar binding energies of finite nuclei as the original
TM1 model, whereas the density dependence of the symmetry energy is very
different. This choice allows us to explore
the effect solely from the symmetry energy without interference of
the isoscalar part.
To examine the effect of symmetry energy, we made a detailed comparison
between the new EOS4 and previous EOS2.
It was found that the TM1e model used in EOS4 predicts a relatively larger
region of nonuniform matter and a softer EOS in the neutron-rich region
compared with the original TM1 model used in EOS2.
In the case of EOS4, the critical temperature, where the nonuniform matter
phase disappears completely, is clearly higher than the one in EOS2
for the case of low $Y_p$.
Furthermore, the transition density to uniform matter in EOS4 is slightly
larger than that in EOS2. In nonuniform matter, the mass number $A$ and
charge number $Z$ of heavy nuclei obtained in EOS4 were found to be
larger than those of EOS2. We also found noticeable differences in the
thermodynamic quantities like the free energy and pressure, especially
for neutron-rich matter at high densities. All these differences between
EOS4 and EOS2 become more significant as $Y_p$ decreases.
This is because the TM1e and TM1 models have the same isoscalar properties
but different density dependence of the symmetry energy.
It is interesting and important to explore the effects of symmetry energy on
astrophysical phenomena such as core-collapse supernovae and neutron-star mergers.
In our recent work~\citep{sumi19}, we have studied the influence of symmetry
energy and its density dependence in numerical simulations of
gravitational collapse of massive stars and cooling of protoneutron stars
by using a hybrid EOS, where the TM1e model was adopted for
uniform matter at densities above $\sim 10^{14}\,\rm{g/cm^{3}}$
combined with the previous EOS2 of nonuniform matter at low densities.
While the TM1e EOS at high densities is shown to have major effects on the birth
of neutron stars in neutron-rich regions, the full TM1e EOS table, including
the low-density part, will also influence the collapse and bounce of supernova
cores through non-trivial feedback from compositional changes and neutrino reactions,
and thereby the outcome of the (non)explosion and the formation of the compact object.
The numerical simulation of core-collapse supernovae and the analysis of
symmetry energy effects using the new EOS4 are currently underway.
\acknowledgments
We would like to thank H. Toki, K. Oyamatsu, S. Yamada, H. Togashi, S. Furusawa,
and Y. Sekiguchi for fruitful discussions and suggestions on the EOS tables
and their applications.
This work was supported in part by the National Natural Science Foundation of
China (Grants No. 11675083 and No. 11775119).
K.S. is supported by Grant-in-Aid for Scientific Research (15K05093, 19K03837)
and Grant-in-Aid for Scientific Research on Innovative areas
\textquotedblleft Gravitational wave physics and astronomy: Genesis\textquotedblright\
(17H06357, 17H06365)
from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
K.S. also acknowledges the high-performance computing resources
at KEK, RCNP, Osaka University, and YITP, Kyoto University.
\section{Introduction}
\begin{figure*}[htbp!]
\includegraphics[width=\textwidth,height=4cm]{fig_ptb_setup.jpg}
\caption{The irradiation setup of the EJ-301 detector (left) corresponding to the 0$^{\circ}$ orientation in Table \ref{table:data_sets}. The shadow cone is visible toward the right. The neutron beam enters the setup from the right.}
\label{fig:detector_setup}
\end{figure*}
Liquid scintillators such as EJ-301 (which is similar to NE-213 and BC-501) are very popular for neutron detection as they can easily be shaped into the desired size and geometry of a given application and offer fast timing performance. However, since such liquid scintillators are also sensitive to gamma rays, pulse-shape discrimination (PSD) techniques are essential in order to correctly identify neutron interactions in the detector.
The ability to discriminate nuclear recoil (NR) events from electronic recoil (ER) events originates in the particular production mechanisms of scintillation light in organic liquid scintillators. These liquids are aromatic compounds that have planar molecular structures built up from benzenoid rings. Such structures allow for extended groupings of conjugated molecular bonds between unsaturated carbon atoms~\cite{Brooks1979}. This results in some of the valence electrons of the carbon atoms being delocalized in $\pi$-molecular orbitals. It is the excitations of these $\pi$-electronic states that create the fluorescence observed in organic scintillators. During these excitations, $\pi$-electrons can be promoted from the ground state~S$_0$ to excited singlet states~S$_n$ or triplet states~T$_n$. For low excitation densities, all excited singlet states above the first excited singlet state S$_1$ decay rapidly and non-radiatively to the lowest excited singlet state. This state then decays exponentially, producing fluorescence in the process.
In contrast, the decay of the triplet state is governed by the diffusion time-scale of the triplet exciton and results in delayed fluorescence in which the intensity does not decay exponentially. NRs exhibit greater energy-loss rates and thus have higher densities of triplet states. Pulses from the ionization tracks of these particles exhibit higher yields of delayed fluorescence, hence decaying more slowly than those of ERs. Scintillation light from EJ-301 has three main decay components: $3.2\1{ns}$, $32\1{ns}$ and $270\1{ns}$~\cite{Kuchnir1968}. The slowest of these decay times is produced by the delayed fluorescence of triplet states.
The different pulse shapes that arise from electronic and nuclear recoils in liquid scintillators can be exploited using different PSD techniques. The most popular techniques applied are the Charge Comparison Method~\cite{Brooks1959} and the Zero Crossing Method~\cite{Alexander1961}. These methods were originally implemented in purpose-designed analogue electronics~\cite{Adams1978}, but with the advent of greater computing power at reduced costs, these techniques have been implemented digitally~\cite{Kaschuck2005, Cester2014, Liao2014}. Digital capture of the full waveform allows for offline processing of events, reducing dead time in data acquisition systems. Techniques designed for analogue circuits do not take advantage of the increased information available in the digital domain. Consequently, new PSD techniques have been developed recently~\cite{DMellow2007, Gamage2011, Yousefi2009}. These techniques offer new PSD approaches in the time domain of the waveform, allow frequency-domain and decay-time differences to be investigated using wavelet analysis, and can implement Fourier and Laplace transforms.
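As an illustration, the Charge Comparison Method can be sketched on synthetic two-component exponential pulses, with the tail-to-total charge ratio as the discrimination parameter; the fast-light fractions and gate position below are hypothetical, and real pulses additionally include the PMT response and noise.

```python
import numpy as np

# Minimal sketch of the Charge Comparison Method on synthetic waveforms:
# the PSD parameter is the tail-to-total charge ratio. The fast-light
# fractions and the tail-gate position are placeholders.
t = np.arange(0.0, 400.0, 1.0)  # 1 ns bins, as from a 1 GHz digitizer

def pulse(fast_frac, tau_fast=3.2, tau_slow=270.0):
    # two of the EJ-301 decay components; NRs carry more slow (delayed) light
    return fast_frac * np.exp(-t / tau_fast) + (1.0 - fast_frac) * np.exp(-t / tau_slow)

def tail_to_total(w, tail_start=30):
    return w[tail_start:].sum() / w.sum()

er, nr = pulse(0.99), pulse(0.95)  # hypothetical fast-light fractions
print(tail_to_total(er) < tail_to_total(nr))  # True: the NR tail is larger
```

Histogramming this ratio for many events produces the separated ER and NR bands whose overlap the Figure of Merit quantifies.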
Traditionally, the performance of PSD techniques is characterized using the Figure of Merit (FOM), defined as:
\begin{equation}\label{eq:FOM}
\text{FOM} = \frac{\text{Peak Separation}}{\text{FWHM}_\gamma+\text{FWHM}_n}
\end{equation}
where peak separation refers to the distance between the center of the neutron and gamma distributions in a histogram of the discrimination parameter, and $\text{FWHM}_i$ is the full-width half maximum of the respective distributions. Hence, the FOM does not provide any information on the energy dependence of the performance of PSD techniques. This precludes a comparison of the various algorithms across different authors that may use different energy thresholds in the calculation of their FOM, and additionally, may mask performance issues of the algorithms in particular at low recoil energy. Therefore, we examined the energy-dependent ability of PSD techniques to discriminate between ER and NR events. Furthermore, we determined the efficiency of EJ-301 for detection of neutrons as a function of energy.
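Once the two peaks of the discrimination-parameter histogram have been located, the FOM of Eq.~(\ref{eq:FOM}) reduces to a one-line computation. A minimal sketch follows; the peak positions and widths are hypothetical, chosen only to illustrate the arithmetic.

```python
def figure_of_merit(mu_gamma, fwhm_gamma, mu_n, fwhm_n):
    """FOM = peak separation / (FWHM_gamma + FWHM_n)."""
    return abs(mu_n - mu_gamma) / (fwhm_gamma + fwhm_n)

# Hypothetical positions and widths of the gamma and neutron peaks
# in a discrimination-parameter histogram:
fom = figure_of_merit(mu_gamma=1.10, fwhm_gamma=0.05,
                      mu_n=1.25, fwhm_n=0.07)
```

Note that a single FOM value necessarily averages over the energy range used, which is exactly the limitation discussed above.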
\section{Setup}
The fast neutron detector used in this work is a 3$''$~cell of EJ-301 liquid organic scintillator optically coupled to a fast photomultiplier tube (PMT), type 9821KB manufactured by ET Enterprises. The detector response to neutrons was characterized at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany, using a deuterium ion beam hitting a Ti($^3$H) target. The deuterium ion beam energy ($3.356\1{MeV}$) was chosen to produce $(2.500\pm 0.010)\1{MeV}$ (k=1 according to~\cite{GUM}) monoenergetic neutrons via the $^{3}$H(d,n)$^{4}$He reaction, in the direction of the ion beam. The detector was placed $3\1{m}$ from the target. The output of the PMT was connected to a CAEN DT5751 digitizer, which samples at 1~GHz with a resolution of 10~bits. This digitizer has a $1\1{V}$ dynamic range. A $1\1{MeV_{ee}}$ pulse from an ER event in the PMT produces a $550\1{mV}$ signal.
Data were collected at three different nominal beam current settings to study the effect of neutron flux on the performance of the detector. The detector was placed such that the neutron beam was parallel to the normal of the front face, defined as an angle of 0$^{\circ}$. The distance between the front face of the detector and the active layer of the target was $(3000\pm 2) \1{mm}$ (k=2~\cite{GUM}) for all measurements. Additional data were taken at each setting with a shadow cone, made of iron and polyethylene, placed between the target and the detector to measure the in-scatter of neutrons, as illustrated in Figure~\ref{fig:detector_setup}. At the highest nominal beam current, data were also collected with an angle of 90$^{\circ}$ between the direction of the ion beam and the front face of the detector. In the 90$^{\circ}$ orientation the detector is rotated such that the neutron flux is incident on the side of the detector, rather than the front face.
These datasets are listed in Table~\ref{table:data_sets} with their known fluxes as measured using calibrated detectors at PTB. Dataset 4 has a greater flux than dataset 1, despite the beam conditions being the same, due to the greater cross-sectional area the detector presents to the neutron beam in this orientation. The known flux in Dataset 4 is slightly higher than can be attributed to geometric factors alone, as the nominal beam charge for Dataset 4 is 5.6$\%$ greater than in Dataset 1.
\begin{table}[ht]
\caption{Data for the irradiation of the detector in the neutron field with a mean energy of $2.5\1{MeV}$. }\label{table:data_sets}
\begin{center}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{c c c c r }
\hline
Data Set & Current & Orientation & Nominal Charge [$\mu$C] & \;\; Flux [s$^{-1}$] \\
\hline
1 & 1.5 $\mu$A & 0$^{\circ}$ & 2721 & \;16400 $\pm$ 700 \\
2 & 300 nA & 0$^{\circ}$ & 1014 & \;3080 $\pm$ 140\\
3 & 35 nA & 0$^{\circ}$ & 56.17 & \;340 $\pm$ 15\\
4 & 1.5 $\mu$A & 90$^{\circ}$ & 2883 & \;22200 $\pm$ 970\\
\hline
\end{tabular}%
}
\end{center}
\end{table}
The response of EJ-301 to ERs is known to be linear. Data presented in this work are therefore given in terms of the electron recoil equivalent energy keV$_{ee}$. This energy scale is set using the Compton backscatter edge of gammas from $^{60}$Co, $^{137}$Cs and $^{54}$Mn, measured from data collected with the detector in the experimental hall. The background rate of ER events in the experimental hall was measured during an overnight measurement.
A total of 80 million waveforms (amounting to $85\1{GB}$) were collected from the neutron source, background, and calibration gamma-sources, and stored for offline processing.
\section{Discrimination Algorithms}\label{sec:algorithms}
As EJ-301 features different decay constants for NR and ER signals, a variety of methods can be used to discriminate the corresponding waveforms. Five PSD algorithms were implemented in a C++ program to perform offline analysis of the data and compute discrimination parameters for each waveform. These algorithms, described in detail below, are the Charge Comparison Method (CCM), Pulse Gradient Analysis (PGA), Fourier Series Expansion (FSE), Laplace Transform (LAP), and a fit to standard events (SEF). Typical scintillation pulses last for $0.5\1{ns}$ per keV of energy deposited. Each digitized waveform was 525~ns in duration, with the trigger falling between 78 and 94~ns. The first 40~ns were used to calculate a simple baseline average as well as the baseline RMS to indicate the noise level, and the integral of the pulse yields the energy.
\begin{figure}[htbp]
\includegraphics[width=\columnwidth]{fig_annotated_event.pdf}
\caption{A typical gamma event of energy 100 keV. Shown are the trigger and integration thresholds, the ends of the Fast and Slow integral windows, the location of the sample used in the PGA method, the region used for baseline calculations, and the noise level.}
\label{fig:waveform}
\end{figure}
\subsection{Charge Comparison Method}
The Charge Comparison Method (CCM)~\cite{Cester2014,Kaschuck2005,Liao2014,WOLSKI1995,Klein2002,Ranucci1998,Soderstrom2008,RANUCCI1995} predates modern digital computing and was first implemented via passive electronics~\cite{Alexander1961}. In this method, the baseline-subtracted waveform is integrated over two time windows of different lengths, called \textit{slow} and \textit{fast} or \textit{long} and \textit{short}, respectively. The start of these integral windows is the onset of the pulse, which is defined here as the point at which the waveform exceeds 3$\sigma$ of the baseline RMS (as shown in Figure~\ref{fig:waveform}). The lengths of the two windows are generally set to match the decay modes of the detector. As a NR pulse will decay more slowly than an ER pulse, the slow integral value $I_{\n{slow}}$ will be larger for NR waveforms than for ER, while the fast integral values $I_{\n{fast}}$ are typically comparable for both ER and NR waveforms. We have optimized these times according to the traditional Figure of Merit~\cite{ANNAND1987} and found that values of 50~ns for the fast window and 310~ns for the slow window result in optimal discrimination. The discrimination parameter is the ratio of the two integral values,
\[
\n{PSD}_{\n{CCM}} = \frac{I_{\n{slow}}}{I_{\n{fast}}}.
\]
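A minimal digital sketch of the CCM is given below, assuming 1~ns sampling (as for the digitizer used here) and a waveform already baseline-subtracted and aligned so that the onset is the first sample. The two-exponential test pulses are toy stand-ins for real waveforms, built from the $32\1{ns}$ and $270\1{ns}$ decay components only.

```python
import math

def synthetic_pulse(slow_fraction, n_samples=400):
    # Toy baseline-subtracted pulse from the 32 ns and 270 ns decay
    # components of EJ-301 (1 ns per sample; illustration only).
    return [(1.0 - slow_fraction) * math.exp(-t / 32.0)
            + slow_fraction * math.exp(-t / 270.0)
            for t in range(n_samples)]

def psd_ccm(pulse, fast_ns=50, slow_ns=310):
    """Charge Comparison Method: ratio of the slow to the fast
    integral, both windows starting at the pulse onset (index 0)."""
    i_fast = sum(pulse[:fast_ns])
    i_slow = sum(pulse[:slow_ns])
    return i_slow / i_fast

er_like = synthetic_pulse(slow_fraction=0.05)  # fast-decaying, ER-like
nr_like = synthetic_pulse(slow_fraction=0.30)  # slow-decaying, NR-like
```

Because the slow window contains the fast one, the ratio is always at least unity, and the larger delayed-fluorescence tail of the NR-like pulse drives its ratio higher.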
\subsection{Pulse Gradient Analysis}
The Pulse Gradient Analysis method (PGA)~\cite{DMellow2007} compares the relative height of the peak $H_{\n{peak}}$ to that of a sample $H_{\n{sample}}$ a set time after the peak, here 50~ns. This second sample is averaged with the neighboring 10 samples to reduce noise. As ER pulses decay more quickly than NR pulses, the gradient between the peak and this second sample should be larger for the ER than for the NR. We define the discrimination parameter for this method as the ratio between the two amplitudes,
\[
\n{PSD}_{\n{PGA}} = \frac{H_{\n{sample}}}{H_{\n{peak}}}.
\]
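The PGA parameter can be sketched as follows. Averaging over 11 samples (the delayed sample plus its 10 neighbours) is our reading of the text, and the two-exponential pulses are toy stand-ins for real waveforms, not measured data.

```python
import math

def psd_pga(pulse, delay_ns=50, n_avg=11):
    """Pulse Gradient Analysis: ratio of a sample 50 ns after the
    peak (averaged with its neighbours to reduce noise) to the peak."""
    i_peak = max(range(len(pulse)), key=pulse.__getitem__)
    j = i_peak + delay_ns
    half = n_avg // 2
    window = pulse[j - half:j + half + 1]
    h_sample = sum(window) / len(window)
    return h_sample / pulse[i_peak]

# Toy two-exponential stand-ins for ER- and NR-like waveforms:
er_like = [0.95 * math.exp(-t / 32.0) + 0.05 * math.exp(-t / 270.0)
           for t in range(400)]
nr_like = [0.70 * math.exp(-t / 32.0) + 0.30 * math.exp(-t / 270.0)
           for t in range(400)]
```

The slower decay of the NR-like pulse leaves a larger residual amplitude 50~ns after the peak, hence a larger ratio.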
\subsection{Fourier Series Expansion}
The third method is based on a Fourier series expansion of the waveform ($f(t) = \sum_n A_n \exp (i\omega_n t); A_n = \int_0^T \n{d}t\,f(t) \exp (i\omega_n t)$). Using an approach similar to~\cite{Liu2010}, the difference between an even order coefficient $A_{2n}$ (odd order $A_{2n+1}$) and the zeroth coefficient $A_0$ (first coefficient $A_1$) is normalized to the zeroth (first) coefficient and then summed:
\[
\displaystyle
F_{\n{even}} = \sum_{n} \frac{A_{2n}-A_0}{A_0} \n{~and~} F_{\n{odd}} = \sum_{n} \frac{A_{2n+1}-A_1}{A_1}.
\]
The expansion is computed to the 30$^{th}$ order. The discrimination parameter is then defined as the ratio between these two parameters:
\[
\n{PSD}_{\n{FSE}} = \frac{F_{\n{odd}}}{F_{\n{even}}}.
\]
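A sketch of the FSE parameter follows. Where the text leaves choices open, the code makes explicit assumptions: the complex coefficients are reduced to their magnitudes, and the integral over the pulse window is approximated by a discrete sum over the samples.

```python
import cmath
import math

def fourier_magnitudes(pulse, n_max=30):
    # |A_n| for n = 0..n_max, with omega_n = 2*pi*n/T; the integral
    # over the pulse window is approximated by a sum over T samples.
    T = len(pulse)
    return [abs(sum(f * cmath.exp(1j * 2.0 * math.pi * n * t / T)
                    for t, f in enumerate(pulse)))
            for n in range(n_max + 1)]

def psd_fse(pulse, n_max=30):
    A = fourier_magnitudes(pulse, n_max)
    f_even = sum((A[2 * n] - A[0]) / A[0]
                 for n in range(1, n_max // 2 + 1))
    f_odd = sum((A[2 * n + 1] - A[1]) / A[1]
                for n in range(1, (n_max - 1) // 2 + 1))
    return f_odd / f_even

# Toy two-exponential stand-ins for ER- and NR-like waveforms:
er_like = [0.95 * math.exp(-t / 32.0) + 0.05 * math.exp(-t / 270.0)
           for t in range(400)]
nr_like = [0.70 * math.exp(-t / 32.0) + 0.30 * math.exp(-t / 270.0)
           for t in range(400)]
psd_er, psd_nr = psd_fse(er_like), psd_fse(nr_like)
```

The different decay-time mixture of the two pulses shifts their low-order frequency content, which is what the even/odd coefficient sums pick up.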
\subsection{Laplace Transformation}
We also evaluate a discrimination algorithm based on the Laplace transform $\mathcal{L}\{f\} = \int_0^{\infty} \n{d}t\,f(t)\exp(-st)$. A 9-point moving average is calculated over the trailing edge of the waveform, and the smoothed pulse is then transformed. The transformed waveform is integrated over two frequency ranges, $0.01\1{GHz}\leq s<0.1\1{GHz}$ and $0.1\1{GHz}\leq s<1\1{GHz}$, denoted `low' and `high', respectively, yielding $L_{\n{low}}$ and $L_{\n{high}}$. The frequency ranges are chosen such that the contributions from the $32\1{ns}$ and $270\1{ns}$ decay modes are maximized in each respective range. The discrimination parameter for this method is defined as $L_{\n{high}}-L_{\n{low}}$. However, since this parameter varies significantly with energy, we choose to re-scale it according to
\[
\n{PSD}_{\n{LAP}} = I- A \log(B + L_{\n{high}} - L_{\n{low}})
\]
in order to aid visualization, with $I$ the integral of the pulse. The parameters $A$ and $B$ are constants, chosen as 1/30 and 2 respectively, simply to linearise the plot.
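The LAP parameter can be sketched under stated assumptions: 1~ns sampling so that $s$ in GHz makes $st$ dimensionless, simple Riemann sums for the band integrals, and (for brevity) a moving average applied to the whole pulse rather than only the trailing edge.

```python
import math

def laplace_transform(pulse, s):
    # Discrete Laplace transform; t in ns, s in GHz (1 ns sampling).
    return sum(f * math.exp(-s * t) for t, f in enumerate(pulse))

def smooth(pulse, n=9):
    # n-point moving average (here applied to the whole pulse)
    half = n // 2
    return [sum(pulse[max(0, i - half):i + half + 1]) /
            len(pulse[max(0, i - half):i + half + 1])
            for i in range(len(pulse))]

def psd_lap(pulse, A=1.0 / 30.0, B=2.0, n_steps=50):
    p = smooth(pulse)
    lo, mid, hi = 0.01, 0.1, 1.0   # band edges in GHz
    d_low, d_high = (mid - lo) / n_steps, (hi - mid) / n_steps
    L_low = d_low * sum(laplace_transform(p, lo + k * d_low)
                        for k in range(n_steps))
    L_high = d_high * sum(laplace_transform(p, mid + k * d_high)
                          for k in range(n_steps))
    return sum(pulse) - A * math.log(B + L_high - L_low)

# Toy two-exponential stand-ins for ER- and NR-like waveforms:
er_like = [0.95 * math.exp(-t / 32.0) + 0.05 * math.exp(-t / 270.0)
           for t in range(400)]
nr_like = [0.70 * math.exp(-t / 32.0) + 0.30 * math.exp(-t / 270.0)
           for t in range(400)]
psd_er, psd_nr = psd_lap(er_like), psd_lap(nr_like)
```

As noted in the text, the $I$ term makes this parameter strongly energy-dependent, which is why the rescaling is only a visualization aid.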
\subsection{Standard Event Fit (SEF)}
\begin{figure}[htbp]
\includegraphics[width=\columnwidth]{fig_standardevents.pdf}
\caption{Standard nuclear recoil (red dotted line) and electronic recoil (blue solid line) events after baseline subtraction. The free fit parameters are a horizontal shift $\tau$ and a vertical scale factor $\Lambda$.}
\label{fig:stdevents}
\end{figure}
Prior to analysis, $10^5$ events identified as NR or ER by CCM are selected in a narrow energy region around the Compton edge of the 662~keV gamma from $^{137}$Cs. Averaging these waveforms together forms the standard NR or ER event. Both these standard events are then fitted to a given waveform~\cite{Guerrero2008,Ambers2011}. While the baseline is fixed to the average of the first 40~ns of the waveform, the free parameters of the fit are a horizontal shift $\tau_{N,E}$ and a scaling factor $\Lambda_{N,E}$, see Figure~\ref{fig:stdevents}. The ER/NR discrimination parameter is then defined as the difference between the chi-squared value $\chi^2_{N,E}$ of each fit, normalized to the vertical scaling fit parameter $\Lambda$:
\[
\n{PSD}_{\n{SEF}} = \frac{\chi^2_N}{\Lambda_N} - \frac{\chi^2_E}{\Lambda_E}.
\]
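The SEF fit can be sketched with a grid search over integer shifts $\tau$, using the fact that for a fixed shift the least-squares scale $\Lambda$ is analytic. The standard events below are synthetic two-exponential shapes, not the measured averages.

```python
import math

def fit_standard(waveform, standard, shifts=range(-5, 6)):
    """Least-squares fit of a scaled, shifted standard event.
    For each integer shift tau the optimal scale Lambda is analytic;
    returns (chi2, Lambda) at the best shift (simple grid search)."""
    best = None
    n = min(len(waveform), len(standard))
    for tau in shifts:
        pairs = [(waveform[t], standard[t - tau])
                 for t in range(max(tau, 0), n + min(tau, 0))]
        ss = sum(s * s for _, s in pairs)
        lam = sum(w * s for w, s in pairs) / ss
        chi2 = sum((w - lam * s) ** 2 for w, s in pairs)
        if best is None or chi2 < best[0]:
            best = (chi2, lam)
    return best

def psd_sef(waveform, std_nr, std_er):
    chi2_n, lam_n = fit_standard(waveform, std_nr)
    chi2_e, lam_e = fit_standard(waveform, std_er)
    return chi2_n / lam_n - chi2_e / lam_e

# Synthetic standard events (two-exponential toy shapes):
std_er = [0.95 * math.exp(-t / 32.0) + 0.05 * math.exp(-t / 270.0)
          for t in range(400)]
std_nr = [0.70 * math.exp(-t / 32.0) + 0.30 * math.exp(-t / 270.0)
          for t in range(400)]
```

A waveform resembling the NR standard yields a small $\chi^2_N$ and hence a negative parameter, and vice versa for ER-like waveforms.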
\section{Analysis}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\columnwidth]{fig_discrim_hists_viridis.pdf}
\caption{Plots of various discrimination parameters versus energy for the CCM~(a), FSE~(b), LAP~(c), PGA~(d), and SEF~(e) algorithms. The 99$\%$ rejection cut of electronic recoils is also shown in each case. The upper populations are the respective nuclear recoil bands.}
\label{fig:discrimination_plot}
\end{center}
\end{figure}
\subsection{Rejection of Electronic Recoils}\label{sec:backgrounds}
The collected data were each processed using the 5~PSD methods discussed in Section~\ref{sec:algorithms}. Rejection cuts for ER events are defined from the datasets taken with various gamma-sources, using histograms of the discrimination parameter versus energy. In each energy bin, we consider the one-sided 95$\%$, 96$\%$, 97$\%$, 98$\%$ and 99$\%$ ER rejection quantiles. Application of the resulting rejection cuts to the overnight background data confirms their performance on neutron datasets. These rejection cuts are robust against changes in detector orientation and data acquisition rate.
The event distributions from the neutron data are shown in Figure~\ref{fig:discrimination_plot} for all PSD methods, together with the one-sided 99$\%$ ER rejection cut. For ease of comparison, all PSD parameters were defined such that the NR band is the upper event population.
\subsection{Neutron Flux}
Using the rejection levels defined from the ER band, events are tagged as being either an electronic or nuclear recoil.
Figure~\ref{fig:rate_plot} shows the live time-corrected NR energy spectrum, measured from dataset~2, after 99$\%$ rejection of ERs using the CCM~algorithm as an example. The expected double-peaked structure due to neutron double scatter events within the detector volume is evident.
The recoil spectrum extends to just below $1\1{MeV_{ee}}$, as expected from $2.5\1{MeV}$ neutrons and the non-linear response spectrum of proton recoil energies in EJ-301 detectors~\cite{Verbinski1968, Aksoy1994, Naqvi1993}. Other algorithms show similar spectra, with some variations below $200\1{keV_{ee}}$ that will be discussed in Section~\ref{sec:acceptance}.
\begin{figure}[htbp]
\includegraphics[width=\columnwidth]{fig_rate_hists.pdf}
\caption{Nuclear recoil spectra as measured in the EJ-301 detector using the CCM algorithm. The measured backscatter flux (dotted, red) is subtracted from the total measured flux (dashed, blue) in order to obtain the recoil spectrum of $2.5\1{MeV}$ neutrons (solid, black).}
\label{fig:rate_plot}
\end{figure}
Data collected under the same conditions as dataset~2, but with the shadow cone between the detector and the target, are also shown. The observed spectrum is consistent with a range of energies from neutrons scattered off the air in the experimental hall. The absence of a similar NR spectrum in background data (with no ions on the target), also shown, confirms that these events are related to the beam.
Since we are interested in the efficiency of the EJ-301 detector with regards to detecting $2.5\1{MeV}$ neutrons, we subtract the neutron backscatter rate (Figure \ref{fig:rate_plot}, dotted (red) line) from the total neutron rate (Figure \ref{fig:rate_plot}, dashed (blue) line). This results in the direct neutron flux at the position of the detector (Figure~\ref{fig:rate_plot}, solid (black) line).
\section{Results}\label{sec:acceptance}
\subsection{Acceptance}
Rather than reducing the performance of different PSD algorithms to a single Figure of Merit value (Eq.~\ref{eq:FOM}), we investigate their energy-dependent behaviour. Specifically, for a given ER discrimination cut, we compare the efficiency of various algorithms using the number of accepted neutrons as a function of energy.
In order to quantify the acceptance of neutrons, a pure band of NRs directly from the beam is required. However, given the reduced performance of all algorithms at low energies, the resulting overlap of the NR and ER bands, as well as the background from scattered neutrons, no such pure sample is available. We thus invoke a simple statistical algorithm as follows. We use the data taken with the neutron beam incident on the detector
to produce the event distributions in the space of PSD parameter $PSD_i$ versus energy $E$, as shown in Figure~\ref{fig:discrimination_plot}. These distributions are corrected for the livetime of each dataset and the integrated beam current, to obtain the time-normalized event density $\varrho_{\mathrm{sig+bck}}(PSD_i,E)$. Similarly, datasets in which the shadow cone was present result in a time-normalized background density of both ER and NR events $\varrho_{\mathrm{bck}}(PSD_i,E)$. Subtracting these two event densities results in an event density $\varrho_{\mathrm{sig}}(PSD_i,E)=\varrho_{\mathrm{sig+bck}}(PSD_i,E)-\varrho_{\mathrm{bck}}(PSD_i,E)$ that is assumed to be representative of the pure band of NRs directly from the beam. This 2D-histogram of event density is then projected onto the energy axis to obtain a ``pure'' spectrum of all $2.5\1{MeV}$ neutrons detected by the EJ-301 cell for each algorithm. To check that any mismatch between measurement of ERs with the shadow cone and the background data does not bias the pure neutron flux, we subtract the background rate from the ER rate measured with the shadow cone. The resulting spectrum is found to contribute at most 0.1$\%$ uncertainty to the assumed pure neutron flux directly from the beam.
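Per energy bin, the subtraction just described amounts to a livetime-normalised difference of histogram contents. A minimal sketch, with hypothetical bin contents and livetimes:

```python
def pure_spectrum(counts_beam, live_beam, counts_cone, live_cone):
    """Livetime-normalised, per-bin subtraction of the shadow-cone
    (background) histogram from the beam-on histogram."""
    return [b / live_beam - c / live_cone
            for b, c in zip(counts_beam, counts_cone)]

# Hypothetical bin contents (counts) and livetimes (seconds):
rate = pure_spectrum([120, 300, 80], 10.0, [30, 50, 10], 10.0)
# each entry is the background-subtracted rate in that energy bin
```

In the actual analysis the same operation is carried out on the 2D density in the $(PSD_i, E)$ plane before projecting onto the energy axis.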
The acceptance of neutrons is determined by dividing the measured NR spectrum for a given dataset and algorithm (as shown in Figure~\ref{fig:rate_plot}) by this ``pure'' NR spectrum. The resulting energy-dependent acceptance of neutrons at a constant rejection level of ERs is shown in Figure~\ref{fig:acceptance}.
\begin{figure}[ht]
\centering
\includegraphics[width = \columnwidth]{fig_acceptance.pdf}
\caption{Energy dependence of the fraction of nuclear recoil events that are accepted by each algorithm after 99$\%$ (top) and 95$\%$ (bottom) rejection of electronic recoils.}
\label{fig:acceptance}
\end{figure}
In the low-energy region shown in Figure~\ref{fig:acceptance}, the SEF~algorithm performs best above $30\1{keV_{ee}}$ at 95$\%$ rejection of ERs. The acceptances of the LAP and FSE algorithms match each other at this rejection level. As the rejection level of ERs is increased to 99$\%$, the acceptance of the LAP algorithm at higher energies is marginally but consistently higher than that of the SEF algorithm. Of note here is that the performance of the traditional CCM algorithm decreases as the rejection criteria for ER events become more stringent. Specifically, at 95$\%$ rejection of ERs, its acceptance is consistent with that of the SEF, FSE and LAP algorithms, and better than that of the PGA algorithm. However, at 99$\%$ rejection of ERs, the CCM algorithm has the lowest acceptance below $80\1{keV_{ee}}$.
\subsection{Purity of the Nuclear Recoil Spectrum}
At low energy ($<100\1{keV_{ee}}$), there is considerable overlap between the NR and ER bands. Therefore, we not only want to consider the ability of an algorithm to accept neutrons, but also the purity of the resulting NR spectrum. To this end, we construct a reference sample for each algorithm $i$ that contains events that are NRs with high confidence, using those events that are identified by all other algorithms $j\neq i$ as a NR. As a function of energy, we then calculate how many of the events in the reference sample were also tagged as a NR by the algorithm under investigation. The fraction of events in the reference sample that an algorithm has tagged as a NR is shown in Figure~\ref{fig:n-1_comparison}.
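The construction of the reference sample and the tagged fraction can be sketched as follows; the tag table is a toy example, not measured data.

```python
def nr_tag_fraction(tags, i):
    """Fraction of a reference sample that algorithm i tags as NR.
    tags[j][k] is True if algorithm j tags event k as a NR; the
    reference sample holds events tagged NR by all algorithms j != i."""
    n_alg, n_evt = len(tags), len(tags[0])
    ref = [k for k in range(n_evt)
           if all(tags[j][k] for j in range(n_alg) if j != i)]
    return sum(tags[i][k] for k in ref) / len(ref)

# Toy tag table for 3 algorithms and 4 events (not measured data):
tags = [[True, True, False, True],   # algorithm 0
        [True, True, True, True],    # algorithm 1
        [True, False, True, True]]   # algorithm 2
frac0 = nr_tag_fraction(tags, 0)
```

In the analysis this fraction is evaluated per energy bin, with five algorithms, to produce the curves of Figure~\ref{fig:n-1_comparison}.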
\begin{figure}[htbp]
\centering
\includegraphics[width = \columnwidth]{fig_likelihood.pdf}
\caption{The fraction of NRs identified by a given algorithm from a reference sample. The reference sample consists of events identified as NRs by all algorithms except the algorithm under consideration. The shaded region indicates the systematic uncertainty.}
\label{fig:n-1_comparison}
\end{figure}
The inability of the traditional CCM algorithm to identify NR events in a reference sample below $100\1{keV_{ee}}$ is striking. In contrast, the SEF algorithm performs best at low energies and correctly identifies more than 95$\%$ of NR events down to energies below $50\1{keV_{ee}}$. For the SEF algorithm, this fraction is flat down to energies well below the point at which the acceptance of neutrons for all algorithms has fallen below 50$\%$ (compare Figure~\ref{fig:acceptance}). Intriguingly, while the LAP algorithm has a lower efficiency at low energies, it provides a slightly higher level of confidence that an event is truly a neutron above $\sim75\1{keV_{ee}}$. In contrast to~\cite{DMellow2007}, we find that the performance of the PGA algorithm is inferior to that of all other algorithms above $80\1{keV_{ee}}$. This observation can be attributed to the fact that the PGA algorithm is based on only two samples of the recorded waveform, making it highly susceptible to electronic noise.
Algorithms that perform poorly at low energies, such as the CCM algorithm, can bias the selection of the reference neutron sample. To estimate the systematic uncertainty in the fraction of identified NRs, we neglect one algorithm and repeat the procedure above, i.e. we construct reference samples from three of the four remaining samples, and examine which fraction of NRs is identified by the fourth method. In this way we obtain four neutron identification curves for each method, from which we determine the standard deviation. This is taken as a measure of the systematic uncertainty of each method and is shown as the shaded regions around each curve in Figure~\ref{fig:n-1_comparison}.
\subsection{Detector Efficiency}
Table~\ref{table:data_sets} lists the total expected neutron flux of $2.5\1{MeV}$ neutrons through the detector for each dataset. Based on the determined absolute neutron rates from the PTB measurements, we reconstruct the detection efficiency of the detector as a function of energy threshold. The results at 99$\%$ rejection of ERs are shown in Figure~\ref{fig:efficiencies}. In addition, the expected efficiency from a MC simulation using the NEFF7 code~\cite{GH1982} is shown in Figure~\ref{fig:efficiencies} for the arrangement in which the detector orientation was 90$^{\circ}$ at selected energy thresholds. We find that the efficiency measured in the detector is inconsistent with both the overall efficiency described by the NEFF7 code and the functional dependence of the efficiency on the threshold. Independent of the PSD algorithm, the measured efficiency in the EJ-301 detector is lower and less threshold-dependent than that described by the NEFF7 code. We thus conclude that the NEFF7 code cannot currently be used to obtain accurate detection efficiency information for EJ-301 liquid scintillator cells.
\begin{figure}[h!]
\centering
\includegraphics[width = \columnwidth]{fig_efficiency.pdf}
\caption{(Top) Comparison between the threshold-dependent efficiency of the SEF algorithm and the prediction from the NEFF7 code in EJ-301. The detector is orientated at 90$^{\circ}$. (Bottom) The efficiency of all algorithms as a function of threshold. Note the different energy ranges.}
\label{fig:efficiencies}
\end{figure}
The PGA algorithm has the lowest efficiency above $30\1{keV_{ee}}$, consistent with it having a poorer ability to accept neutrons at a given rejection level of ERs. The traditional CCM algorithm has poor efficiency below $60\1{keV_{ee}}$ when compared to the three newer algorithms FSE, SEF and LAP. The LAP algorithm has the highest efficiency at all energy thresholds. This observation can be ascribed to its higher acceptance of NRs at higher energies. The LAP algorithm is better at discriminating between the NR and ER band at higher energies, as the ER band of the SEF algorithm spreads as the energy increases (compare Figure~\ref{fig:discrimination_plot}), while the ER band of the LAP algorithm has a more consistent width.
\subsection{Processing Speed Benchmarks}
The processing speed of each algorithm was measured on a dataset of approximately $500\,000$ events (about $550\1{MB}$), both with and without the C++ compiler optimizations available in the GNU Compiler Collection version 4.8.3, using a standard desktop computer. The results are shown in Table~\ref{tab:benchmarks}. From this we see that the increased low-energy performance of the SEF algorithm comes at a very high computational cost.
\begin{table}[htbp]
\begin{center}
\caption{Processing rates of the different algorithms, given in units of waveforms/sec, using a standard desktop computer.}\label{tab:benchmarks}
\begin{tabular}{c p{26mm}p{26mm}}
\toprule
Algorithm & Rate without \newline optimizations & Rate with \newline optimizations \\
\midrule
PGA & $1.6\times10^6$ & $4.8\times10^6$ \\
CCM & $168\,000$ & $1.6\times10^6$ \\
FSE & $4\,300$ & $84\,000$ \\
LAP & $1\,100$ & $16\,000$ \\
SEF & 110 & 230 \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
We have compared the performance of five different pulse shape discrimination algorithms using a commercial liquid scintillator cell. The studied algorithms include the recently developed Standard Event Fit (SEF), Fourier Series Expansion (FSE), and Laplace Transform (LAP), in addition to the traditional Charge Comparison Method (CCM) and Pulse Gradient Analysis (PGA). The energy-dependent behaviour of all five algorithms was discussed as a better means of describing PSD algorithms than the Figure of Merit previously used in the literature.
Specifically, we considered the ability of each algorithm to accept NRs as a function of recoil energy, as well as the purity of the resulting NR sample. We find that at 99$\%$ rejection of ER background events and above $80\1{keV_{ee}}$, the PGA algorithm accepts the fewest NRs and is the least likely to identify a NR event from a reference sample. The CCM algorithm performs better than the PGA method, but a marked deterioration in its performance is observed as the rejection level of ER events becomes more stringent.
Both the SEF algorithm and the LAP algorithm display improved performance compared to the traditional methods considered. The SEF algorithm is more likely to accept NRs from a reference sample at low energies, and it provides a higher acceptance of NR events below $80\1{keV_{ee}}$. The LAP method however provides a slightly higher efficiency overall for the detection of NRs in EJ-301 due to a better-resolved ER band at higher energies.
\section{Acknowledgements}
This work was supported by grant \#PHYS-1412965 from the National Science Foundation (NSF) and carried out under a cooperation agreement between Purdue University and PTB. JP~is supported by scholarship \#SFH13071722071 from the National Research Foundation (NRF).
\section{References}
\section{Introduction}\label{sec:intro}
\input{introduction}
\section{Results}\label{sec:results}
\input{results}
\section{Discussion} \label{sec:discussion}
\input{discussion}
\small{
\section{Methods}\label{sec:methods}
\input{methods_formulation}
\input{methods_bempp}
\input{methods_exafmm}
}
\section{Data availability}
We deposited the meshes and \texttt{pqr} files on the Zenodo service: \href{http://doi.org/10.5281/zenodo.4568768}{doi:10.5281/zenodo.4568768}.
The raw and secondary data for all results are available in the archival deposit of our paper’s GitHub repository: \href{http://doi.org/10.5281/zenodo.4568951}{doi:10.5281/zenodo.4568951}.
\section{Code availability}
Exafmm is available at \href{https://github.com/exafmm/exafmm-t}{https://github.com/exafmm/exafmm-t}.
Bempp-cl is available at \href{https://github.com/bempp/bempp-cl}{https://github.com/bempp/bempp-cl}.
The scripts for plotting and rerunning our experiments are available in the archival deposit of our paper’s GitHub repository: \href{http://doi.org/10.5281/zenodo.4568951}{doi:10.5281/zenodo.4568951}.
\section{Introduction}\label{sec:Intro}
\begin{table*}
\begin{threeparttable}
\centering
\caption{Main properties of accretion solutions in one- and two-component
galaxy models with central MBH.}
\begin{tabular}{@{}l@{\hskip 4em}c@{\hskip 4em}c@{\hskip 4em}c@{\hskip 4em}c}
\toprule[1.25pt]\midrule[0.3pt]
& KCP16 & CP17 & CP18 & MCP21 (this paper)\\
\midrule
Galaxy models & Hernquist (1990) & Hernquist (1990), Jaffe (1983) & JJ models (CZ18) & J3 models (CMP19)\\
\midrule
Type of accretion & Polytropic & Isothermal & Isothermal & Polytropic \\
\midrule
Number of sonic points & One or two & One or two (Hernquist), One (Jaffe) & One & One or two\tnote{b} \\
\midrule
Sonic radius & Analytic\tnote{a} & Analytic & Analytic & Analytic/numerical\tnote{c} \\
\midrule
$\lt$ & Analytic\tnote{a} & Analytic & Analytic & Analytic/numerical\tnote{c} \\
\midrule
Mach number profile & Numerical & Analytic & Analytic & Analytic/numerical\tnote{c} \\
\bottomrule
\end{tabular}
\vspace{1.5mm}
\begin{tablenotes}[para,flushleft]
\item[a] The general expression can be written as a function of the polytropic
index, but only special cases were given explicitly;\\
\item[b] Function of the polytropic index $\gamma$;\\
\item[c] In the isothermal ($\gamma=1$) and monoatomic adiabatic ($\gamma=5/3$) cases
it is analytic, in the $1<\gamma<5/3$ case only
a numerical exploration is possible.
\end{tablenotes}
\end{threeparttable}
\label{table:models}
\end{table*}
Theoretical and observational studies indicate that
galaxies host at their centre a massive black hole (MBH) that
has grown its mass predominantly through gas accretion (see e.g.
Kormendy \& Richstone 1995).
A generic accretion flow may be broadly classified as
quasi-spherical or axisymmetric, and what mainly determines
the deviation from spherical symmetry is the angular momentum of
the flow itself.
A perfect spherical flow is evidently only possible when the
angular momentum is exactly zero.
Spherical models are a useful starting
point for a more advanced modelling,
and thus gas accretion toward a central MBH in galaxies is
often modelled with the classical Bondi (1952) solution.
For example, in semi-analytical models and cosmological simulations
of the co-evolution of galaxies and their central MBHs, the mass
supply to the accretion discs is linked to the
temperature and density of their environment by making use of the
Bondi accretion rate (see e.g. Fabian \& Rees 1995; Volonteri \& Rees
2005; Booth \& Schaye 2009; Wyithe \& Loeb 2012; Curtis \& Sijacki 2015;
Inayoshi, Haiman \& Ostriker 2016).
In fact, in most cases, the resolution of simulations is insufficient to describe in detail
the whole complexity of accretion, and so Bondi
accretion does represent an important approximation to more
realistic treatments (see e.g. Ciotti \& Ostriker 2012; Barai et al. 2012;
Ram\'{i}rez-Velasquez et al. 2018; Gan et al. 2019 and references therein).
Recently, Bondi accretion has been generalised to include the effects on the flow
of the gravitational field of the host galaxy and of
electron scattering, at the same time preserving the (relative)
mathematical tractability of the problem.
Such a generalised Bondi problem has been applied to elliptical
galaxies by Korol et al. (2016, hereafter KCP16), who
discussed the case of a Hernquist (1990) galaxy model, for generic values of the
polytropic index.
Restricting to isothermal accretion, also taking
into account the effects of radiation pressure due to electron scattering,
Ciotti \& Pellegrini (2017, hereafter CP17) showed that
the whole accretion solution can be found analytically for the
Jaffe (1983) and Hernquist galaxy models with a central MBH; quite
remarkably, not only can the critical accretion parameter be explicitly
obtained, but it is also possible to write the radial profile of the Mach
number via the Lambert-Euler $W$-\hspace{0.4mm}function (see e.g. Corless et
al. 1996).
Then, Ciotti \& Pellegrini (2018, hereafter CP18) further extended the
isothermal accretion solution to the case of Jaffe's
two-component (stars plus dark matter) galaxy models
(Ciotti \& Ziaee Lorzad 2018, hereafter CZ18).
In these `JJ' models, a Jaffe stellar profile is embedded in a DM halo such that the total density
distribution is also a Jaffe profile, and all the relevant
dynamical properties can be written with analytical expressions.
CP18 derived all accretion properties analytically,
linking them to the dynamical and structural properties of the host galaxies.
These previous results are summarised in Table 1.
In this paper we extend the study of CP18
to a different family of two-component galaxy models with a central MBH,
in the general case of a polytropic gas.
In this family (J3 models; Ciotti, Mancino \& Pellegrini 2019, hereafter
CMP19) the stellar density follows a Jaffe profile, while the total
follows a $r^{-3}$ law at large radii; thus the DM halo (resulting from
the difference between the total and the stellar distributions) can reproduce the
Navarro-Frenk-White profile (Navarro, Frenk \& White 1997, hereafter NFW)
at all radii.
As we are concerned with polytropic accretion, we also clarify some thermodynamical
aspects of the problem that are not always stressed. Indeed, for a polytropic index
$\gamma\neq\gammaAD$ (the adiabatic index of the gas,
with $\gammaAD=\cp/\cV$) the flow is not adiabatic, and heat exchanges with the
environment are unavoidable. We investigate this point in detail, obtaining the expression
of the radial profile of the heat exchange (i.e. radiative losses) of the fluid elements
as they move towards the galaxy centre.
In other words, polytropic accretion with $\gamma\neq\gammaAD$
implicitly contains a cooling/heating function.
The paper is organised as follows.
In Section \ref{sec:Bondi}, we recall the main properties of the polytropic Bondi
solution, and in Section \ref{sec:J3} we list the main properties of the J3 models.
In Section \ref{sec:BondiJ3}, we set up and discuss the polytropic Bondi problem in
J3 galaxy models, while in Section \ref{sec:Entropy}, we investigate some important
thermodynamical properties of accretion. The main results are finally summarised in
Section \ref{sec:Conclusions}, while some technical details are given in the Appendix.
\section{Bondi accretion in galaxies}\label{sec:Bondi}
In order to introduce the adopted notation, and for consistency with previous works,
in this Section we summarise the main properties of Bondi accretion,
both on a point mass (i.e., a MBH) and on a MBH at the centre of a
spherical galaxy. Throughout, the flow is assumed to be spherically symmetric,
and gas viscosity and thermal conduction are neglected.
\subsection{The classical Bondi accretion}\label{subsec:Bondi_classic}
In the classical Bondi problem, a spatially infinite distribution of
perfect gas is accreting onto an isolated central point mass (a
MBH in our case) of mass $\MBH$. The pressure $p$ and density $\rho$
are related by
\begin{equation}
p=\frac{\kB\hspace{0.1mm}\rho\hspace{0.4mm}T}{\mugas\hspace{0.15mm}\mH}
=\pinf\hspace{-0.15mm}\times\left(\frac{\rho}{\rhoinf}\right)^{\hspace{-0.1mm}\gamma}\!,
\label{eq:pgas}
\end{equation}
where $\kB=1.38\times 10^{-16}\,{\rm erg}\hspace{0.7mm}{\rm K}^{-1}$
is Boltzmann's constant, $\mugas$ is the mean molecular
weight, $\mH=1.67\times 10^{-24}\,{\rm g}$ is the mass of the proton,
and $\gamma \geq 1$ is the polytropic
index\footnote{In principle $\gamma \geq 0$; in this paper we consider
$0 \leq \gamma < 1$ as a purely academic interval.}.
Finally, $\pinf$ and $\rhoinf$ are the gas pressure and density
at infinity, and $\cs=\sqrt{\gamma\hspace{0.1mm} p/\rho\hspace{0.7mm}}$
is the local polytropic speed of sound.
As some confusion unfortunately occurs in the literature, it is important to
recall that in general $\gamma$ is {\it not} the adiabatic index $\gammaAD$ of the
gas\footnote{$\gammaAD \equiv \cp/\cV$ is the ratio of specific heats at
constant pressure and volume; for a perfect gas, it always exceeds unity.}
(e.g. Clarke \& Carswell 2007).
The equation of continuity reads
\begin{equation}
4\upi r^2\hspace{-0.3mm}\rho\hspace{0.4mm}\varv=\MBp\hspace{0.2mm},
\label{eq:continuity}
\end{equation}
where $\varv(r)$ is the modulus of the gas radial velocity, and $\MBp$ is
the time-independent accretion rate onto the MBH. Bernoulli's
equation, by virtue of the boundary conditions at infinity, is
\begin{equation}
\frac{\varv^2}{2}\hspace{0.15mm}+\int_{\pinf}^p\frac{dp}{\rho}-\frac{G\MBH}{r}=0.
\label{eq:Bernoulli}
\end{equation}
Notice that, unless $\gamma=\gammaAD$, the integral on the left-hand side
is {\it not} the enthalpy change per unit mass (see Section \ref{sec:Entropy}).
The natural scale length of the problem is the Bondi radius
\begin{equation}
\rB \equiv \frac{G\MBH}{\cinf^2},
\label{eq:rB}
\end{equation}
and, by introducing the dimensionless quantities
\begin{equation}
x \equiv \frac{r}{\rB}\hspace{0.2mm},
\qquad\,\,\,
\rhotil \hspace{0.3mm}\equiv \frac{\rho}{\rhoinf}\hspace{0.2mm},
\qquad\,\,\,
\M \equiv \frac{\varv}{\cs}\hspace{0.2mm},
\end{equation}
where $\M$ is the local Mach number,
equations \eqref{eq:continuity} and \eqref{eq:Bernoulli}
become respectively (for $\gamma \neq 1$)
\begin{equation}
x^2\hspace{-0.15mm}\M\hspace{0.35mm}\rhotil^{\hspace{0.15mm}\frac{\gamma+1}{2}}\hspace{-0.4mm}=\hspace{0.2mm}\lambdaup\hspace{0.5mm},
\qquad\,\,
\left(\frac{\M^2}{2}+\frac{1}{\gamma-1}\right)\!\rhotil^{\hspace{0.5mm}\gamma-1}\hspace{-0.2mm}=\frac{1}{x}+\frac{1}{\gamma-1}\hspace{0.2mm},
\label{eq:cont+Bern_lambda}
\end{equation}
where
\begin{equation}
\lambdaup \equiv \frac{\MBp}{4\upi\rB^2\rhoinf\cinf}\hspace{0.2mm},
\end{equation}
is the (dimensionless) accretion parameter. Once $\MBH$, $\rhoinf$, and
$\cinf$ are assigned, if $\lambdaup$ is known it is possible to
determine the accretion rate $\MBp$ and derive the profile $\M(x)$,
thus solving the Bondi (1952) problem. As is well known, $\lambdaup$ cannot assume
arbitrary values. In fact, by eliminating $\rhotil$ between equations
\eqref{eq:cont+Bern_lambda}, one obtains the identity
\begin{equation}
g(\M)=\Lambda\hspace{0.2mm}f(x)\hspace{0.4mm},
\qquad\,\,\,
\Lambda \equiv \lambdaup^{-\frac{2(\gamma-1)}{\gamma+1}},
\label{eq:Bondi_eq_poly}
\end{equation}
where
\begin{equation}
\begin{cases}
\displaystyle{g(\M) \equiv \M^{-\frac{2(\gamma-1)}{\gamma+1}}\!\left(\frac{\M^2}{2}+\frac{1}{\gamma-1}\right)}\hspace{0.4mm},
\\[12pt]
\displaystyle{f(x) \equiv x^{\frac{4(\gamma-1)}{\gamma+1}}\!\left(\frac{1}{x}+\frac{1}{\gamma-1}\right)}\hspace{0.2mm}.
\end{cases}
\label{eq:B_poly}
\end{equation}
Since both $g$ and $f$ have a minimum, the solutions of equation \eqref{eq:Bondi_eq_poly}
exist only when $\gmin \leq \Lambda\fmin$, i.e.
$\Lambda \geq \Lcr \equiv \gmin/\fmin$. For $\gamma < 5/3$,
\begin{equation}
\begin{cases}
\displaystyle{\gmin=\frac{\gamma+1}{2(\gamma-1)}}\hspace{0.2mm},
\hspace{2.304cm}\hspace{2mm} \Mmin=1\hspace{0.1mm},
\\[12pt]
\displaystyle{\fmin=\frac{\gamma+1}{4(\gamma-1)}}\hspace{-0.2mm}\left(\frac{4}{5-3\gamma}\right)^{\hspace{-0.4mm}\frac{5-3\gamma}{\gamma+1}}\!,
\hspace{2mm}\hspace{5.85mm} \xmin=\dfrac{5-3\gamma}{4}\hspace{0.2mm};
\end{cases}
\end{equation}
therefore, from equation \eqref{eq:Bondi_eq_poly}, the classical Bondi problem
admits solutions only for
\begin{equation}
\lambdaup \leq \lcr \equiv \left(\frac{\fmin}{\gmin}\right)^{\hspace{-0.4mm}\frac{\gamma+1}{2(\gamma-1)}} = \hspace{0.1mm}\frac{1}{4}\hspace{-0.2mm}\left(\frac{2}{5-3\gamma}\right)^{\hspace{-0.3mm}\frac{5-3\gamma}{2(\gamma-1)}}.
\label{eq:lambda_cr}
\end{equation}
Notice that for $\gamma=5/3$, $\fmin \to 1$, $\xmin \to 0$, and
$\lambdaup \leq 1/4$. When $\gamma>5/3$, instead, $\xmin \to 0$ and
$\fmin=0$, and so no accretion can take place:
$\gamma=5/3$ is then a {\it hydrodynamical limit} for the classical Bondi problem.
For $\lambdaup=\lcr$ (the critical solutions), $\xmin$ indicates
the position of the sonic point, i.e. $\M(\xmin)=1$. When $\lambdaup<\lcr$,
instead, two regular subcritical solutions exist, one everywhere supersonic
and another everywhere subsonic; the position $\xmin$ marks the minimum
and maximum values of $\M$ for these two solutions, respectively
(see e.g. Bondi 1952; Frank, King \& Raine 1992; Krolik 1998).
In the $\gamma=1$ (isothermal) case, $p=\cinf^2\rho$, and $\cs=\cinf$,
while equation \eqref{eq:Bondi_eq_poly} becomes
\begin{equation}
g(\M)=f(x)-\Lambda\hspace{0.4mm},
\qquad\,\,
\Lambda\equiv\ln\lambdaup\hspace{0.5mm},
\label{eq:Bondi_eq_iso}
\end{equation}
where now
\begin{equation}
\begin{cases}
g(\M) \equiv \dfrac{\M^2}{2}-\ln\M\hspace{0.2mm},
\\[10pt]
f(x) \equiv \dfrac{1}{x}+2\ln x\hspace{0.1mm}.
\end{cases}
\label{eq:B_iso}
\end{equation}
Solutions of equation \eqref{eq:Bondi_eq_iso} exist provided that
$\gmin \leq \fmin - \Lambda$;
$\gmin=1/2$ occurs for $\Mmin=1$, while $\fmin=2-2\ln 2$ is reached at
$\xmin=1/2$. Therefore, in the isothermal case,
\begin{equation}
\lambdaup \leq \lcr \equiv \e^{\hspace{0.5mm}\fmin-\,\gmin} = \frac{\e^{3/2}}{4}\hspace{0.2mm},
\label{eq:lcr_classic}
\end{equation}
in agreement with the limit of equation \eqref{eq:lambda_cr} for $\gamma \to 1$.
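As an illustrative numerical check (ours, not part of the original derivation), the critical accretion parameter of equations \eqref{eq:lambda_cr} and \eqref{eq:lcr_classic} can be evaluated with a short function; the name \texttt{lambda\_cr} is our own.

```python
import math

def lambda_cr(gamma):
    """Critical accretion parameter of the classical Bondi problem,
    equation (eq:lambda_cr), valid for 1 <= gamma <= 5/3; the isothermal
    (gamma -> 1) and monoatomic adiabatic (gamma = 5/3) limits are
    handled explicitly."""
    if math.isclose(gamma, 1.0):
        return math.e**1.5/4.0                  # equation (eq:lcr_classic)
    if math.isclose(gamma, 5.0/3.0):
        return 0.25                             # hydrodynamical limit
    expo = (5.0 - 3.0*gamma)/(2.0*(gamma - 1.0))
    return 0.25*(2.0/(5.0 - 3.0*gamma))**expo

# lambda_cr decreases monotonically from e^{3/2}/4 ~ 1.12 at gamma = 1
# to 1/4 at gamma = 5/3, and the general formula is continuous at gamma -> 1
print(lambda_cr(1.0), lambda_cr(4.0/3.0), lambda_cr(5.0/3.0))
```

For instance, $\gamma=4/3$ gives $\lcr=2^{-1/2}\simeq 0.71$, intermediate between the isothermal and monoatomic adiabatic values.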
\subsection{Bondi accretion with electron scattering in galaxy models}\label{subsec:Bondi_gal}
For future use, we now summarise the framework adopted in previous works
(KCP16; CP17; CP18) to discuss Bondi accretion onto MBHs at the centre
of galaxies, also in the presence of radiation pressure due to electron scattering
(see e.g. Taam, Fu \& Fryxell 1991; Fukue 2001; Lusso \& Ciotti 2011;
Raychaudhuri, Ghosh \& Joarder 2018; Samadi, Zanganeh \& Abbassi 2019;
Ram\'{i}rez-Velasquez et al. 2019),
and including the additional gravitational field of the galaxy.
The radiation feedback, in the optically thin regime, can be implemented as a
reduction of the gravitational force of the MBH by the factor
\begin{equation}
\chiup \equiv 1-\frac{L}{\Ledd}\hspace{0.2mm},
\qquad\,\,\,
\Ledd=\frac{4\upi c\hspace{0.3mm}G\MBH\hspace{0.1mm}\mH}{\sigmaThom}\hspace{0.2mm},
\end{equation}
where $L$ is the accretion luminosity, $\Ledd$ is Eddington's
luminosity, $c$ is the speed of light in vacuum, and
$\sigmaThom=6.65 \times 10^{-25}\;{\rm cm}^2$
is the Thomson cross section.
The (relative) gravitational potential of the galaxy, in general, can be written as
\begin{equation}
\Psig=\frac{G\Mg}{\rg}\hspace{0.4mm}\psi\!\left(\frac{r}{\rg}\right)\hspace{-0.1mm},
\end{equation}
where $\rg$ is a characteristic scale length of the galaxy density distribution
(stars plus dark matter), $\psi$ is the dimensionless galaxy potential,
and finally $\Mg$ is the total mass of the galaxy.
For galaxies of infinite total mass, such as the J3 models, $\Mg=\Rg\Ms$ is a mass scale
(see equations \eqref{eq:rhos} and \eqref{eq:Ms&Psis}).
By introducing the two parameters
\begin{equation}
\MR \equiv \frac{\Mg}{\MBH}\hspace{0.2mm},
\qquad\quad
\xi \equiv \frac{\rg}{\rB}\hspace{0.2mm},
\label{eq:R_xi}
\end{equation}
where $\rB$ is again defined as in equation \eqref{eq:rB},
the total relative potential becomes
\begin{equation}
\PsiT=\frac{G\MBH}{\rB}\!\left[\hspace{0.3mm}\frac{\chiup}{x}+\frac{\MR}{\xi}\hspace{0.4mm}\psi\!\left(\frac{x}{\xi}\right)\hspace{-0.2mm}\right]\!.
\label{eq:PsiT_Bondi}
\end{equation}
Of course, when $\MR \to 0$ (or $\xi \to \infty$), the
galaxy contribution to the total potential vanishes\footnote{For galaxy models
of finite total mass, or with a total density profile decreasing at large radii at least as
$r^{-3}$ (as for NFW or King (1972) profiles) $\psi$ can be taken to be zero at infinity
(e.g. Ciotti 2021, Chapter 2).},
and the problem reduces to the classical case.
In the limit $L=\Ledd$ (i.e. $\chiup=0$), the radiation pressure exactly cancels
the gravitational field of the MBH, and the problem describes
accretion in the potential of the galaxy alone, as in the absence of
both the MBH and electron scattering; when $L=0$ (i.e. $\chiup=1$), the radiation
pressure has no effect on the accretion flow.
Therefore, for MBH accretion in galaxies and in the presence of electron scattering,
the Bondi problem reduces to the solution of equations
\eqref{eq:Bondi_eq_iso} and \eqref{eq:B_iso}, or
\eqref{eq:Bondi_eq_poly} and \eqref{eq:B_poly},
where $f$ is now given by
\begin{equation}
f(x)=
\begin{cases}
\displaystyle \frac{\chiup}{x}+\frac{\MR}{\xi}\hspace{0.4mm}\psi\!\left(\frac{x}{\xi}\right)\hspace{-0.3mm}+2\ln x\hspace{0.2mm},
\hspace{2.2cm} \gamma=1\hspace{0.1mm},
\\[15pt]
\displaystyle\hspace{0.2mm} x^{\frac{4(\gamma-1)}{\gamma+1}}\!\left[\hspace{0.3mm}\frac{\chiup}{x}+\frac{\MR}{\xi}\hspace{0.4mm}\psi\!\left(\frac{x}{\xi}\right)\hspace{-0.3mm}+\frac{1}{\gamma-1}\hspace{0.2mm}\right]\hspace{-0.2mm},
\hspace{6.2mm} 1<\gamma\leq\frac{5}{3}\hspace{0.2mm},
\end{cases}
\label{eq:f_gen}
\end{equation}
while the function $g$ (and in particular the value of $\gmin$) is
unchanged by the presence of the galaxy.
Of course, $\Psig$ affects the values of $\xmin$, $\fmin$, and
of the critical $\lambdaup$ (which we now call $\lt$).
Two considerations are in order here.
Firstly, $\Psig$ can produce more than one minimum for the function
$f$ (see the case of Hernquist galaxies in CP17); in this circumstance, the
general considerations after equations \eqref{eq:Bondi_eq_poly} and
\eqref{eq:Bondi_eq_iso} force one to conclude that $\lt$ is determined by the
absolute minimum of $f$. Secondly, for a generic galaxy model one cannot
expect to be able to determine analytically the value of $\xmin$; quite
surprisingly, in a few cases it has been shown that this is possible
(see CP18 and references therein). In the following we add another analytical
case to this list.
\section{The J3 galaxy models}\label{sec:J3}
The J3 models (CMP19) are an extension of the
JJ models (CZ18), adopted
in CP18 to study the isothermal Bondi accretion in two-component
galaxies with a central MBH. The J3 models are an analytically tractable
family of spherical models with a central MBH, with a Jaffe (1983) stellar
density profile, and with a {\it total} density distribution
such that the DM halo (obtained as the difference between the total and
stellar density profiles) is described very well by the NFW
profile; in the case of JJ models, instead, the DM profile at large radii declines
as $r^{-4}$ instead of $r^{-3}$.
The stellar and total (stars plus DM) density profile of J3 galaxies are
then given by
\begin{equation}
\rhos(r)=\frac{\rhon}{s^2(1+s)^2}\hspace{0.2mm},
\qquad\quad
\rhog(r)=\frac{\Rg\rhon}{s^2(\xig\hspace{-0.15mm}+s)}\hspace{0.2mm},
\label{eq:rhos}
\end{equation}
with
\begin{equation}
\rhon=\frac{\Ms}{4\upi\rs^3}\hspace{0.2mm},
\qquad\,\,\,\,
s=\frac{r}{\rs}\hspace{0.2mm},
\qquad\,\,\,\,
\xig=\frac{\rg}{\rs}\hspace{0.2mm},
\end{equation}
where $\rs$ is the stellar scale length, $\Ms$ is the total stellar
mass, $\rg$ is the galaxy scale length, and $\Rg$ measures the
total-to-stellar mass ratio; for example, we recall that $\Rg/\xig$ gives
the ratio $\rhog/\rhos$ for $r \to 0$. The effective radius $\Reff$ of the Jaffe
profile is $\Reff \simeq 0.75\,\rs$. The stellar and total mass
profiles read
\begin{equation}
\Ms(r)=\Ms\,\frac{s}{1+s}\hspace{0.2mm},
\qquad\,\,\,
\Mg(r)=\Ms\Rg\ln\frac{\xig\hspace{-0.1mm}+s}{\xig}\hspace{0.2mm},
\label{eq:Ms&Psis}
\end{equation}
so that $\Mg(r)$ diverges logarithmically for $r \to \infty$.
\begin{figure*}
\centering
\includegraphics[width=0.48\linewidth]{fig1a.pdf}
\quad\,\,\,
\includegraphics[width=0.48\linewidth]{fig1b.pdf}
\caption{Left: dark-to-total mass ratio of the J3 models (solid lines) within a
sphere of radius $r=\Reff \simeq 0.75\,\rs$ as a function of
$\xig=\rg/\rs$ for the minimum halo case $\alpha=1$ (black),
$\alpha=2$ (blue), and $\alpha=3$ (red).
For comparison, the analogous curves in the case of JJ
models (dashed lines) are shown.
Right: the galactic virial velocity dispersion $\sigV$ (solid lines) as
a function of $\xig$, for
$\alpha=1$ (black), $\alpha=2$ (blue), and $\alpha=3$ (red). For
comparison, the analogous curves in the case of JJ models (dashed
lines) are shown.}
\label{fig:MDM_sigV}
\end{figure*}
The DM halo density profile is therefore
\begin{equation}
\rhoDM(r)\equiv\rhog(r)-\rhos(r)
=\frac{\rhon}{s^2}\hspace{-0.3mm}
\left[\hspace{0.3mm}
\frac{\Rg}{\xig\hspace{-0.1mm}+s}-
\frac{1}{(1+s)^2}
\hspace{0.3mm}
\right]\!,
\label{eq:rhoDM}
\end{equation}
and, as shown in CMP19, the condition for a nowhere negative $\rhoDM$ is
\begin{equation}
\Rg \geq \Rm \equiv
\begin{cases}
\displaystyle{\frac{1}{4\hspace{0.1mm}(1-\xig)}}\hspace{0.2mm},
\hspace{5mm} 0 < \xig \leq \frac{1}{2}\hspace{0.2mm},
\\[10pt]
\displaystyle{\hspace{0.25mm}\xig}\hspace{0.4mm},
\hspace{1.59cm} \xig \geq \frac{1}{2}.
\end{cases}
\label{eq:pos_cond}
\end{equation}
A model with $\Rg=\Rm$ is called a {\it minimum halo} model.
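The positivity condition can be verified numerically: $\rhoDM\geq 0$ is equivalent to $\Rg\geq(\xig+s)/(1+s)^2$ for all $s\geq 0$, so $\Rm$ must coincide with the maximum over $s$ of the right-hand side. A minimal sketch (ours):

```python
import numpy as np

def R_min(xi_g):
    """Minimum R_g for a nowhere negative DM density, equation (eq:pos_cond)."""
    return 1.0/(4.0*(1.0 - xi_g)) if xi_g <= 0.5 else xi_g

# rho_DM >= 0  <=>  R_g >= h(s) = (xi_g + s)/(1 + s)^2 for all s >= 0,
# so R_min must equal the maximum of h over s (including the s = 0 endpoint)
s = np.logspace(-6, 6, 200001)
for xi_g in (0.1, 0.3, 0.5, 1.0, 13.0):
    h_max = max(np.max((xi_g + s)/(1.0 + s)**2), xi_g)   # h(0) = xi_g
    assert abs(h_max - R_min(xi_g)) < 1e-6, (xi_g, h_max)
print("positivity condition verified")
```

For $\xig<1/2$ the maximum of $h(s)$ is attained at $s=1-2\,\xig$; for $\xig\geq 1/2$ it is attained at $s=0$, reproducing the two branches of equation \eqref{eq:pos_cond}.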
For assigned $\xig$, it is convenient to introduce the parameter
$\alpha$, defined as
\begin{equation}
\Rg=\alpha\hspace{0.2mm}\Rm\hspace{0.4mm},
\qquad\quad
\alpha \geq 1,
\label{eq:alpha}
\end{equation}
and, as we shall restrict to the natural situation $\xig \geq 1$, in
the following $\Rg=\alpha\hspace{0.2mm}\xig$, with $\alpha=1$ corresponding
to the minimum halo model. Therefore, from equations \eqref{eq:Ms&Psis},
the relative amount of dark-to-total mass as a function of radius is
\begin{equation}
\frac{\MDM(r)}{\Mg(r)}=
1-\frac{s}{\alpha\hspace{0.2mm}\xig(1+s)\hspace{-0.35mm}\ln\hspace{0.2mm}(1+s/\xig)}\hspace{0.2mm},
\label{eq:MDM}
\end{equation}
where $\MDM(r)=\Mg(r)-\Ms(r)$. In Fig. \ref{fig:MDM_sigV} (left panel, solid lines) we
plot equation \eqref{eq:MDM} as a function of $\xig \geq 1$ for $r=\Reff$
and for three values of $\alpha$\hspace{0.1mm}: the
minimum halo model ($\alpha=1$) and two cases with $\alpha>1$. In the minimum halo
case, DM fractions in agreement with those required by the dynamical modelling of
early-type galaxies (see e.g. Cappellari et al. 2015) are easily obtained.
Unsurprisingly, these fractions are slightly larger than those of JJ models
with the same values of $\xig$ (see
dashed lines in Fig. \ref{fig:MDM_sigV}, left panel).
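For reference, equation \eqref{eq:MDM} can be evaluated directly; the snippet below (ours) computes the DM fraction within $\Reff\simeq 0.75\,\rs$ for the reference minimum halo model with $\xig=13$.

```python
import math

def f_DM(s, xi_g, alpha=1.0):
    """Dark-to-total mass ratio within radius r = s*r_star, equation (eq:MDM),
    with R_g = alpha*xi_g (valid for xi_g >= 1)."""
    return 1.0 - s/(alpha*xi_g*(1.0 + s)*math.log(1.0 + s/xi_g))

# reference J3 model: minimum halo (alpha = 1), xi_g = 13, evaluated at the
# effective radius R_e ~ 0.75 r_star
print(round(f_DM(0.75, 13.0), 3))   # ~0.41, consistent with dynamical estimates
```

Increasing $\alpha$ at fixed $\xig$ raises the DM fraction, as expected from equation \eqref{eq:MDM}.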
Notice that by construction, $\rhoDM \propto r^{-3}$ at large radii, while,
as in JJ models, at small radii
$\rhoDM \propto r^{-2}$ (i.e. the DM and stellar densities are locally
proportional), with the exception of the minimum halo models, in which
$\rhoDM \propto r^{-1}$. We now compare the DM profile of J3 models with
the untruncated NFW profile (Navarro et al. 1997), which in our notation can be written as
\begin{equation}
\rhoNFW(r)=\frac{\rhon\RNFW}{q(c)s\hspace{0.2mm}(\xiNFW+s)^2}\hspace{0.2mm},
\quad
q(c) \equiv \ln\hspace{0.2mm}(1+c)-\frac{c}{1+c}\hspace{0.2mm},
\label{eq:rhoNFW}
\end{equation}
where $\xiNFW \equiv \rNFW/\rs$ is the NFW scale length in units of $\rs$
and, for a chosen reference radius $\rt$, we define $\RNFW \equiv \MNFW(\rt)/\Ms$
and $c \equiv \rt/\rNFW$. The densities $\rhoDM$ and $\rhoNFW$
can be made asymptotically identical both at small {\it and} large radii
by fixing
\begin{equation}
\RNFW=q(c)\hspace{0.4mm}\xig\hspace{0.4mm},
\qquad\quad
\xiNFW=\frac{\xig}{\sqrt{\hspace{0.3mm}2\hspace{0.3mm}\xig-1}}\hspace{0.2mm}.
\label{eq:NFW_conds}
\end{equation}
Hence, once a specific minimum halo galaxy model is
considered, equations \eqref{eq:rhoNFW} and
\eqref{eq:NFW_conds} allow one to determine the NFW profile that best reproduces
the DM halo density profile.
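The asymptotic matching can be checked numerically; the sketch below (ours) compares $\rhoDM$ and $\rhoNFW$ at small and large radii for a minimum halo model ($\Rg=\xig$), with $\RNFW$ and $\xiNFW$ fixed by equation \eqref{eq:NFW_conds}.

```python
import numpy as np

def rho_DM(s, xi_g):
    """Minimum halo (R_g = xi_g) DM density in units of rho_n, eq. (eq:rhoDM)."""
    return (xi_g/(xi_g + s) - 1.0/(1.0 + s)**2)/s**2

def rho_NFW(s, xi_g):
    """NFW density in units of rho_n, equation (eq:rhoNFW), with R_NFW and
    xi_NFW from equation (eq:NFW_conds); the factor q(c) cancels against
    R_NFW = q(c)*xi_g."""
    xi_nfw = xi_g/np.sqrt(2.0*xi_g - 1.0)
    return xi_g/(s*(xi_nfw + s)**2)

xi_g = 13.0
for s in (1e-4, 1e4):                       # small and large radii
    ratio = rho_DM(s, xi_g)/rho_NFW(s, xi_g)
    assert abs(ratio - 1.0) < 1e-2, (s, ratio)
print("asymptotic matching verified")
```

At leading order both profiles tend to $\rhon(2-1/\xig)/s$ for $s\to 0$ and to $\rhon\xig/s^3$ for $s\to\infty$, as implied by equation \eqref{eq:NFW_conds}.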
Cosmological simulations suggest for galaxies $c\simeq 10$ (see e.g.
Bullock \& Boylan-Kolchin 2017), and $\RNFW \simeq $ a few tens.
Moreover, the value of $\xig$ cannot be too large, otherwise the DM fraction
inside $\Reff$ would exceed the values derived from observations (see e.g. Napolitano et al. 2010;
see also Fig. 1 in CMP19). For these reasons we conclude
that {\it the} NFW {\it shape and the cosmological expectations
are reproduced if we consider minimum halo models with} $\xig \simeq 10$--$20$.
In the following, we choose as `reference model' a minimum halo model with $\Rg=\xig=13$,
$c=10$, $\RNFW \simeq 20$, and $\rNFW=2.6\,\rs$.
\subsection{Central and Virial properties of J3 models}\label{subsec:Centr_Vir}
Now we recall a few dynamical properties of the J3 models needed in the following
discussion (see CMP19 for more details).
A MBH of mass $\MBH=\mu\Ms$ is added at the centre of the galaxy, and the total
(relative) potential is
\begin{equation}
\PsiT(r)=\frac{\Psin\mu}{s}+\Psig(r)\hspace{0.4mm},
\quad\,\,\,
\Psin=\frac{G\Ms}{\rs}\hspace{0.2mm},
\quad\,\,\,
\mu=\frac{\MBH}{\Ms}\hspace{0.2mm},
\end{equation}
where
\begin{equation}
\Psig(r)=\frac{\Psin\Rg}{\xig}
\left(
\hspace{0.25mm}
\ln\frac{\xig\hspace{-0.1mm}+s}{s}+
\frac{\xig}{s}\ln\frac{\xig\hspace{-0.1mm}+s}{\xig}
\hspace{0.25mm}
\right)\hspace{-0.2mm};
\label{eq:Psig}
\end{equation}
in particular, $\Psig \propto (\ln s)/s$ at large radii, and $\Psig \propto -\ln s$
near the centre.
The stellar orbital structure is limited to the isotropic case.
The radial component of the velocity dispersion is given by
\begin{equation}
\sigr^2(r)=\sigBH^2(r)+\sigg^2(r)\hspace{0.4mm},
\end{equation}
where $\sigBH$ and $\sigg$ indicate, respectively, the contribution of
the MBH and of the galaxy potential. As shown in CMP19, the Jeans equations
for J3 models can be solved analytically; here we just recall that
in the isotropic case
\begin{equation}
\sigr^2(r) \sim \Psin\hspace{-0.2mm}\times
\begin{cases}
\displaystyle \frac{\mu}{3\hspace{0.1mm}s}
+\frac{\Rg}{2\hspace{0.2mm}\xig}-\frac{\mu}{3}\hspace{0.2mm},
\hspace{8.5mm} r \to 0\hspace{0.2mm},
\\[12pt]
\displaystyle \Rg\hspace{0.2mm}\frac{\ln s}{5\hspace{0.1mm}s}\hspace{0.2mm},
\hspace{1.78cm} r \to \infty\hspace{0.2mm},
\end{cases}
\label{eq:sigr_center_inf}
\end{equation}
where, for mathematical consistency, we also retained the constant term
$-\hspace{0.2mm}\mu/3$ in the asymptotic expansion of $\sigr$ near the centre, although
this contribution is fully negligible in realistic galaxy models.
Notice that, when $\xig \geq 1$, from equation \eqref{eq:alpha} it follows that
the constant term due to the galaxy is independent of $\xig$, with
$\sigg^2(0)=\Psin\alpha/2$. This latter expression provides the
interesting possibility of adopting $\sigg(0)$ as a proxy for the observed velocity
dispersion of the galaxy in the central regions, outside the sphere of influence of the
central MBH.
In order to derive an estimate of the sphere of influence of the MBH,
it is interesting to consider the projected velocity dispersion
$\sigp(R\hspace{0.3mm})=\sqrt{\hspace{0.3mm}\sigpBH^2(R\hspace{0.3mm})+\sigpg^2(R\hspace{0.3mm})\hspace{0.2mm}}\hspace{0.2mm}$,
where $R$ is the radius in the
projection plane. At large radii $\sigp$ is dominated
by the galaxy contribution: from equation \eqref{eq:sigr_center_inf} one
has, at the leading order,
\begin{equation}
\sigp^2(R\hspace{0.3mm}) \sim \frac{\Psin\hspace{0.15mm}8\hspace{0.2mm}\Rg\hspace{-0.2mm}\ln\eta}{15\upi\eta}\hspace{0.2mm},
\qquad
\eta \equiv \frac{R}{\rs}.
\end{equation}
At small radii, instead (CMP19, equations (57) and (58)),
\begin{equation}
\sigpg^2(0)=\sigg^2(0)=\frac{\Psin\Rg}{2\hspace{0.2mm}\xig}\hspace{0.2mm},
\qquad\,\,\,\,\,
\sigpBH^2(R\hspace{0.3mm}) \sim \frac{\Psin\hspace{0.15mm}2\hspace{0.2mm}\mu}{3\upi\eta}\hspace{0.1mm}.
\label{eq:sigpg=sigg}
\end{equation}
Equation \eqref{eq:sigpg=sigg} allows one to estimate
the radius $\Rinf$ of the sphere of influence, defined
as the distance from the centre in the projection plane where $\sigp$ in the presence
of the MBH exceeds by a factor $(1+\epsilon)$ the projected galaxy velocity dispersion
$\sigpg$ in the absence of the MBH:
\begin{equation}
\sqrt{\hspace{0.4mm}\sigpBH^2(\Rinf)+\sigpg^2(\Rinf)\hspace{0.4mm}}\hspace{0.5mm}
\equiv
\hspace{0.2mm}(1+\epsilon)\hspace{0.2mm}\sigpg(\Rinf)\hspace{0.3mm}.
\label{eq:Rinf_def}
\end{equation}
In practice, for a galaxy model with finite $\sigpg(0)$, the formula above reduces
to equation (36) in CP18, and for $\xig \geq 1$ equation \eqref{eq:sigpg=sigg}
yields
\begin{equation}
\frac{\Rinf}{\rs}
\simeq\frac{4\mu}{3\upi\alpha\hspace{0.2mm}\epsilon\hspace{0.2mm}(2+\epsilon)}\hspace{0.2mm}.
\label{eq:Rinf}
\end{equation}
Notice that equation \eqref{eq:Rinf}
coincides with the analogous estimate for JJ models (CP18, equation 37),
since the two models are identical in the central regions.
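As a worked example (ours), the definition \eqref{eq:Rinf_def} combined with the central asymptotics $\sigpBH^2\sim 2\hspace{0.2mm}\mu\Psin/(3\upi\eta)$ and $\sigpg^2(0)=\alpha\hspace{0.2mm}\Psin/2$ yields equation \eqref{eq:Rinf} directly:

```python
import math

def R_inf_over_rs(mu, alpha, eps):
    """Sphere-of-influence radius, equation (eq:Rinf): squaring equation
    (eq:Rinf_def) gives sigma_pBH^2 = eps*(2+eps)*sigma_pg^2, and inserting
    the central asymptotics sigma_pBH^2 ~ 2*mu*Psi_n/(3*pi*eta) and
    sigma_pg^2(0) = alpha*Psi_n/2 gives eta = 4*mu/(3*pi*alpha*eps*(2+eps))."""
    return 4.0*mu/(3.0*math.pi*alpha*eps*(2.0 + eps))

# mu = 0.002 (Kormendy & Ho 2013), minimum halo (alpha = 1), eps = 0.5
print(R_inf_over_rs(0.002, 1.0, 0.5))   # ~6.8e-4, i.e. R_inf << r_star
```

Note that $\Rinf$ decreases with $\alpha$ and $\epsilon$, as a larger central velocity dispersion (or a stricter detection threshold) shrinks the region where the MBH dominates the projected kinematics.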
A fundamental ingredient in Bondi accretion is the gas temperature at infinity $\Tinf$.
As in CP18, in the next Section we shall use
$\TV\!=\!\mugas\hspace{0.1mm}\mH\hspace{0.2mm}\sigV^2/(3\hspace{0.2mm}\kB)$
(see e.g. Pellegrini 2011) as the natural scale for $\Tinf$,
where $\sigV$ is the (three dimensional) virial velocity dispersion of stars obtained
from the Virial Theorem:
\begin{equation}
\Ms\sigV^2 \equiv 2\Ks = -\hspace{0.7mm}\Wsg-\WsBH\hspace{0.2mm}.
\label{eq:VT}
\end{equation}
In the equation above, $\Ks$ is the total kinetic energy of the stars,
\begin{equation}
\Wsg=-\,4\upi G\!\int_0^{\infty}\!\Mg(r)\rhos(r)\hspace{0.3mm}rdr
\end{equation}
is the interaction energy of the stars with the gravitational field of
the galaxy (stars plus DM), and the MBH contribution $\WsBH$
diverges near the origin for a Jaffe density distribution.
Since we shall use $\sigV$ as a proxy for the gas temperature at large distance
from the centre, we neglect $\WsBH$ in equation \eqref{eq:VT}, so that
\begin{equation}
\sigV^2=-\hspace{0.4mm}\frac{\Wsg}{\Ms}=\Psin\Rg\tWsg\hspace{0.2mm},
\qquad
\tWsg=\Hf(\xig,0)-\frac{\ln\xig}{\xig-1}\hspace{0.2mm},
\label{eq:sigV}
\end{equation}
where the function $\Hf(\xig,s)$ is given in Appendix C of CMP19.
Fig. \ref{fig:MDM_sigV} (right panel) shows the trend of $\sigV$ as a function of
$\xig \geq 1$ for three J3 models (solid lines), together with the JJ models of the
same parameters (dashed lines). As expected, $\sigV$ increases with $\xig$, and
$\sigV \simeq \sqrt{\alpha\hspace{0.1mm}\Psin}$ when $\rg \gg \rs$.
\begin{table*}
\centering
\caption{Galaxy Structure and Accretion Flow parameters}
\begin{tabular}{@{}l@{\hskip 15.2em}r@{\hskip 3em}l@{\hskip 15.6em}r@{}}
\toprule[1.25pt]\midrule[0.3pt]
\multicolumn{2}{c@{\hskip 3.25em}}{Galaxy Structure} & \multicolumn{2}{c}{Accretion Flow}\\
\cmidrule{1-2}\cmidrule{3-4}
\multicolumn{1}{@{}l}{Symbol} & \multicolumn{1}{r@{\hskip 3em}}{Quantity} & \multicolumn{1}{@{}l}{Symbol} &
\multicolumn{1}{r@{}}{Quantity} \\
\midrule
\vspace{1pt}
$\Ms$ & Total stellar mass & $\Tinf$ & Gas temperature at infinity \\
\vspace{1pt}
$\rs$ & Stellar density scale length & $\rhoinf$ & Gas density at infinity \\
\vspace{1pt}
$\Mg$ & Total\tnote{a} galaxy mass & $\cinf$ & Speed of sound at infinity \\
\vspace{1pt}
$\rg$ & Total density scale length & $\gamma$ & Polytropic index ($1 \leq \gamma \leq 5/3$) \\
\vspace{1pt}
$\MBH$ & Central MBH mass & $\gammaAD$ & Adiabatic index ($\hspace{0.15mm}=\cp/\cV$) \\
\vspace{1pt}
$\mu$ & $\MBH/\Ms$ & $\MR$ & $\Mg/\MBH$ ($\hspace{0.15mm}=\hspace{-0.2mm}\Rg/\mu$) \\
\vspace{1pt}
$\Rg$ & $\Mg/\Ms$ ($\hspace{0.15mm}=\alpha\hspace{0.1mm}\Rm\hspace{0.2mm}$) & $\beta$ & $\Tinf/\TV$ \\
\vspace{1pt}
$\Rm$ & Minimum value of $\Rg$ & $\rB$ & Bondi radius \\
\vspace{1pt}
$\xig$ & $\rg/\rs$ & $\rmin$ & Sonic radius \\
\vspace{1pt}
$s$ & $r/\rs$ & $x$ & $r/\rB$ \\
\vspace{1pt}
$\sigV$ & Stellar virial velocity dispersion & $\xi$ & $\rg/\rB$ \\
\vspace{1pt}
$\TV$ & Stellar virial temperature & $\lt$ & Critical accretion parameter \\
\vspace{1pt}
$\Wsg$ & Virial energy of stars & $\M$ & Mach number \\
\bottomrule
\end{tabular}
\vspace{1.5mm}
\begin{tablenotes}[para,flushleft]
\item[a] For example, from our definition $\Mg=\Rg\Ms$, and equation \eqref{eq:rhos},
$\Mg$ is the total mass (stellar plus DM) inside a sphere of radius
$(\e-1)\hspace{0.3mm}\rg$.
\end{tablenotes}
\label{tab:parameters}
\end{table*}
\subsection{Linking Stellar Dynamics to Fluid Dynamics}\label{sec:Combo}
We now link the stellar dynamical properties of the
galaxy models with the defining parameters of Bondi accretion introduced
in Section \ref{subsec:Bondi_gal}.
In fact, the function $f$ in equation
\eqref{eq:f_gen} is written in terms of quantities referring to the central
MBH and to the gas temperature at infinity, while the stellar
dynamical properties of the J3 models are written in terms of the
observational properties of the galaxy stellar component.
The two groups of parameters are summarised in Table \ref{tab:parameters}.
The first accretion parameter we consider is $\MR$ in equation \eqref{eq:R_xi}.
It is linked to the galaxy structure by the following expression
\begin{equation}
\MR \equiv \frac{\Mg}{\MBH}
=\frac{\Rg}{\mu}
=\frac{\alpha\hspace{0.3mm}\xig}{\mu}\hspace{0.2mm},
\label{eq:R}
\end{equation}
where the last identity derives from equation \eqref{eq:alpha} with
$\xig \geq 1$; notice that $\MR \approx 10^4$ for $\xig$ of the order of tens
and $\alpha$ of order unity, and $\mu=0.002$
(see Kormendy \& Ho 2013 for this choice of $\mu$).
The determination of the accretion parameter $\xi$ is more involved. This quantity depends on
the Bondi radius $\rB$; we stress again that in the present discussion,
even in presence of the galaxy gravitational potential, $\rB$
is still defined in the classical sense, i.e., just considering the mass of
the MBH, as in equation \eqref{eq:rB}.
Of course, $\rB$ depends on the gas temperature at infinity.
In principle, arbitrary values of $\Tinf$
could be adopted, but in real systems the natural scale for the global temperature
is represented by the virial temperature
$\TV$ defined via the virial velocity dispersion in equation \eqref{eq:VT}.
Accordingly, we set
\begin{equation}
\Tinf=\beta\hspace{0.3mm}\TV,
\qquad
\cinf^2=\gamma\hspace{0.4mm}\frac{\pinf}{\rhoinf}
=\frac{\gamma\beta\hspace{0.1mm}\sigV^2}{3}.
\end{equation}
From equations \eqref{eq:rB} and \eqref{eq:sigV} we then obtain
\begin{equation}
\frac{\rB}{\rs}=\frac{3\mu}{\alpha\beta\gamma\hspace{0.1mm}\Fg(\xig)}\hspace{0.3mm},
\qquad\quad
\Fg \hspace{-0.2mm}\equiv \xig\hspace{0.2mm}\tWsg(\xig)\hspace{0.3mm},
\label{eq:rB/rs}
\end{equation}
where the function $\Fg$ monotonically increases with $\xig$ from
$\Fg(1)=\upi^2\hspace{-0.45mm}/\hspace{0.15mm}6-1$ to $\Fg(\infty)=1$.
For example, at fixed $\alpha$, $\beta$, and $\gamma$, one has
\begin{equation}
\frac{3\mu}{\alpha\beta\gamma}<\frac{\rB}{\rs}\leq\frac{18\mu}{(\upi^2\hspace{-0.4mm}-6)\hspace{0.1mm}\alpha\beta\gamma}.
\label{eq:rBrs_extremes}
\end{equation}
In Fig. \ref{fig:rmin} (top left) we show the trend of $\rB/\rs$ as
a function of $\xig$ in the minimum halo case ($\alpha=1$) with $\beta=1$ and $\mu=0.002$, for
three values of $\gamma$; in general, $\rB$ is of the order of a few
$\times\hspace{0.5mm}10^{-3}\hspace{0.3mm}\rs$. Note that, for fixed $\xig$,
the isothermal profile (black line) is above that in the corresponding adiabatic case
(red line); in general, for fixed $\alpha$, $\beta$ and $\xig$,
$\rB/\rs$ always lies between the isothermal
and the monoatomic adiabatic case, as shown by equation \eqref{eq:rBrs_extremes}.
Finally, by combining equations \eqref{eq:R_xi} and
\eqref{eq:rB/rs},
\begin{equation}
\xi \equiv\frac{\rg}{\rB}
=\frac{\alpha\beta\gamma\hspace{0.2mm}\xig\Fg(\xig)}{3\mu}
=\frac{\MR\beta\hspace{0.2mm}\gamma\Fg(\xig)}{3}\hspace{0.2mm},
\label{eq:csib}
\end{equation}
and so $\MR$ and $\xi$ increase with $\xig$. Interestingly,
from the general definitions in equation \eqref{eq:R_xi}, and making use
of equation \eqref{eq:sigpg=sigg},
\begin{equation}
\frac{\MR}{\xi}=\frac{2\hspace{0.3mm}\sigpg^2(0)}{\cinf^2},
\label{eq:R/xi}
\end{equation}
which links directly the parameters of Bondi accretion to the observable
$\sigpg(0)=\sigg(0)$.
For observational purposes, it is also useful to express the position of $\rB$
in terms of the radius $\Rinf$ as given in equation \eqref{eq:Rinf};
since the parameter $\alpha=\Rg/\Rm$ cancels out, we have
\begin{equation}
\frac{\rB}{\Rinf}=\frac{9\upi\hspace{0.1mm}\epsilon\hspace{0.2mm}(2+\epsilon)}{4\hspace{0.1mm}\beta\gamma\Fg(\xig)}\hspace{0.2mm},
\end{equation}
independently of the minimum halo assumption.
In Fig. \ref{fig:rmin} (bottom left panel) we show the trend of $\rB/\Rinf$ when $\epsilon=0.5$
and $\beta=1$, for the same three values of $\gamma$ as in the upper left panel:
$\rB\approx$ a few times $\Rinf$.
We now turn to the sonic radius $\rmin$, one of the most
important properties of the accretion solution; the effects of the galaxy
manifest themselves directly in its position.
When measured in terms of the scale length $\rs$, it can
be written, by making use of equation \eqref{eq:rB/rs}, as
\begin{equation}
\frac{\rmin}{\rs}=\hspace{0.6mm}\xmin(\chiup,\MR,\xi)\hspace{0.4mm}\frac{\rB}{\rs}\hspace{0.2mm},
\label{eq:rminrs}
\end{equation}
where $\xmin \equiv \rmin/\rB$ gives the (absolute) minimum of $f$.
Finally, we must recast the galaxy potential in equation \eqref{eq:Psig}
by using the normalization scales in equation \eqref{eq:PsiT_Bondi}\hspace{0.1mm}:
as $s/\xig=x/\xi$, it is immediate that in our problem
\begin{equation}
\psi\!\left(\frac{x}{\xi}\right)\!=
\ln\!\left(1+\frac{\xi}{x}\right)\hspace{-0.2mm}
+\hspace{0.25mm}\frac{\xi}{x}\ln\!\left(1+\frac{x}{\xi}\right)\hspace{-0.3mm}.
\label{eq:psi(x/csi)}
\end{equation}
\begin{figure*}
\includegraphics[width=0.48\linewidth]{fig2a.pdf}
\quad\,\,\,
\includegraphics[width=0.48\linewidth]{fig2b.pdf}\\
\vspace{-12mm}
\caption{Left: Bondi radius $\rB$ in units of $\rs$ (top) and $\Rinf$ (bottom), as a function of the
galaxy-to-stellar scale length ratio $\xig=\rg/\rs$, for three
J3 galaxy models with $\beta \equiv \Tinf/\TV=1$, $\mu=0.002$, and
$\gamma=1$, $4/3$, $5/3$;
the solid dots at $\rB/\rs \simeq 0.0066$, $0.0049$, $0.0039$
correspond to the minimum halo case with $\xig=13$;
solid dots at $\rB/\Rinf \simeq 9.71$, $7.28$, $5.82$ correspond
to the case $\xig=13$ and $\epsilon=0.5$.
Right: position of $\xmin \equiv \rmin/\rB$ (top) and
$\rmin/\rs$ (bottom) as a function of
$\beta=\Tinf/\TV$, in the case of minimum halo models with
$\xig=13$, $\mu=0.002$, and $\chiup=1$, for different values of the
polytropic index given close to the curves.
The black square points at $\rmin/\rB \simeq 57.34$ and
$\rmin/\rs \simeq 0.23$ correspond to the critical case
$\beta=\bc \simeq 1.65$.}
\label{fig:rmin}
\end{figure*}
\section{Bondi accretion in J3 models}\label{sec:BondiJ3}
We can now discuss the full problem, investigating how the standard
Bondi accretion is modified by the additional potential of J3 galaxies,
and by electron scattering. We show that in the isothermal case ($\gamma=1$)
the solution is fully analytical, as for the monoatomic adiabatic case
($\gamma=5/3$); for $1<\gamma<5/3$, instead,
it is not possible to obtain analytical expressions, and so
a numerical investigation is presented.
\subsection{The $\gamma=1$ case}\label{sec:J3poly}
\begin{figure*}
\includegraphics[width=0.48\linewidth]{fig3a.pdf}
\quad\,\,\,
\includegraphics[width=0.48\linewidth]{fig3b.pdf}\\
\caption{Critical accretion parameter $\lt$ as a function of $\xig$, for the
minimum halo case $\alpha=1$ (black), $2$ (blue), and $3$ (red),
and $\chiup=1$ and $\beta=1$. The dotted curves refer to $\alpha=1$
and three different values of $\beta$. The left panel shows the case
$\gamma=1$, the right panel shows the case $\gamma=4/3$.
Notice that $\lt$ in the isothermal case is several orders of magnitude
larger than in the $\gamma=4/3$ case.}
\label{fig:lt}
\end{figure*}
The isothermal case stands out not only because, for $\gamma=1$,
$f(x)$ in equation \eqref{eq:f_gen} does not belong to the
general family, but also because the position of the
sonic radius $\xmin$ can be obtained explicitly. Indeed, for
$0<\chiup\leq 1$, $f(x)\sim\chiup/x$ for $x \to 0$, and $\sim 2\ln x$ for $x \to \infty$.
Therefore, the continuous function $f$ has at least one critical point
over the range $0 < x < \infty$, obtained by solving
\begin{equation}
2\hspace{0.2mm}\xmin-\MR\ln\frac{\xi+\xmin}{\xi}=\chiup\hspace{0.3mm}.
\label{eq:df/dx_iso}
\end{equation}
As shown in Appendix \ref{app:W}, the positive solution can be
obtained for generic values of the model parameters in terms of the
Lambert-Euler $W$ function\footnote{The $W$ function is not new in the study of isothermal flows.
See, e.g., Cranmer (2004), Waters \& Proga (2012), Herbst (2015), CP17, CP18.}, and
the {\it only} minimum of $f$ is reached at
\begin{equation}
\xmin \equiv \frac{\rmin}{\rB}
= -\hspace{0.8mm}\xi-\frac{\MR}{2}\,\Wmu\hspace{-0.4mm}\left(-\hspace{0.5mm}\frac{2\xi}{\MR}\hspace{0.4mm}\e^{-\hspace{0.4mm}\frac{\chiup+2\xi}{\MR}}\right)\hspace{-0.3mm}.
\label{eq:xminISO}
\end{equation}
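Since equation \eqref{eq:xminISO} is fully explicit, it can be evaluated with any implementation of the Lambert-Euler function. As a minimal numerical sketch (the parameter values below are illustrative, not taken from the models of the paper), using SciPy's \texttt{lambertw} on the $k=-1$ branch:

```python
import numpy as np
from scipy.special import lambertw

def xmin_iso(chi, mR, xi):
    """Sonic radius x_min = r_min/r_B for isothermal accretion,
    equation (eq:xminISO): W_{-1} branch of the Lambert-Euler function."""
    arg = -(2.0 * xi / mR) * np.exp(-(chi + 2.0 * xi) / mR)
    return -xi - 0.5 * mR * lambertw(arg, k=-1).real

# Illustrative values; mR plays the role of \MR in the text.
chi, mR, xi = 1.0, 1.0e3, 0.1
x_min = xmin_iso(chi, mR, xi)

# x_min must satisfy the critical-point condition (eq:df/dx_iso):
# 2 x_min - mR ln((xi + x_min)/xi) = chi
residual = 2.0 * x_min - mR * np.log((xi + x_min) / xi) - chi
print(x_min, residual)
```

For these parameters the principal branch $k=0$ would return an unphysical negative root; only $W_{-1}$ gives the minimum of $f$ discussed in the text.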
Once $\xmin$ is known, all the other quantities in
the Bondi solution, such as the critical
accretion parameter $\lt=\exp\hspace{0.25mm}(\fmin\hspace{-0.1mm}-1/2)$
in equation \eqref{eq:lcr_classic}, the mass accretion
rate in equation \eqref{eq:continuity}, and the Mach number profile
$\M$, can be expressed as a function of $\xmin$.
Therefore, J3 galaxies belong to the family of models for which a
fully analytical discussion of the isothermal Bondi accretion problem is
possible (see Table 1).
In particular, from CP17 and CP18, the critical accretion solution reads
\begin{equation}
\M^2=-
\begin{cases}
\displaystyle{\Wz\hspace{0.2mm}\big(\hspace{-0.3mm}-\hspace{-0.2mm}\lt^2\hspace{0.3mm} \e^{-2f}\big)}\hspace{0.3mm}, \hspace{11mm} x \geq \xmin\hspace{0.3mm},
\\[7pt]
\displaystyle{\Wmu\hspace{0.2mm}\big(\hspace{-0.3mm}-\hspace{-0.2mm}\lt^2\hspace{0.3mm} \e^{-2f}\big)}\hspace{0.3mm}, \hspace{6.5mm} 0 < x \leq \xmin\hspace{0.3mm},
\end{cases}
\label{eq:MachISO}
\end{equation}
where
$f$ is given in equation \eqref{eq:f_gen} with the function $\psi(x/\xi)$
defined by equation \eqref{eq:psi(x/csi)}.
Summarising, $\Wmu$ describes supersonic accretion,
while $\Wz$ subsonic accretion\footnote{
As $x$ decreases from $\infty$ to $\xmin$, the argument of
$\Wz$ decreases from $0$ to $-\hspace{0.4mm}1/\e$ (points $A$ and $B$ in Fig.
\ref{fig:Lambert}, left panel),
and $\M^2$ increases from $0$ to $1$.
As $x$ further decreases from $\xmin$ to $0$, the argument of $\Wmu$ increases
again from $-\hspace{0.4mm}1/\e$ to $0$ (points $B$ and $C$),
and $\M^2$ increases from $1$ to $\infty$. The other critical solution, with
$\M^2$ increasing for increasing $x$, is obtained by switching the functions $\Wz$
and $\Wmu$ in equation \eqref{eq:MachISO}.}.
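To make equation \eqref{eq:MachISO} concrete, the sketch below evaluates the critical Mach number profile numerically. Since equation \eqref{eq:f_gen} lies outside this Section, the isothermal $f(x)=2\ln x+\chiup/x+(\MR/\xi)\,\psi(x/\xi)$ used here is an assumption, chosen to be consistent with the limits quoted above ($f\sim\chiup/x$ for $x\to0$, $f\sim2\ln x$ for $x\to\infty$) and with the critical-point condition \eqref{eq:df/dx_iso}; parameter values are illustrative.

```python
import numpy as np
from scipy.special import lambertw

def psi(s):
    """Galaxy potential term, equation (eq:psi(x/csi))."""
    return np.log(1.0 + 1.0/s) + np.log(1.0 + s)/s

def f_iso(x, chi, mR, xi):
    """ASSUMED isothermal f(x); see the caveat in the lead-in."""
    return 2.0*np.log(x) + chi/x + (mR/xi)*psi(x/xi)

def mach2(x, chi, mR, xi):
    """Critical accretion solution, equation (eq:MachISO):
    W_0 branch for x >= x_min (subsonic), W_{-1} for x <= x_min
    (supersonic)."""
    xmin = -xi - 0.5*mR*lambertw(
        -(2.0*xi/mR)*np.exp(-(chi + 2.0*xi)/mR), k=-1).real
    lt = np.exp(f_iso(xmin, chi, mR, xi) - 0.5)  # lambda_t = exp(f_min - 1/2)
    arg = -lt**2 * np.exp(-2.0*f_iso(x, chi, mR, xi))
    return -lambertw(arg, k=(0 if x >= xmin else -1)).real, xmin, lt

chi, mR, xi = 1.0, 1.0e3, 0.1
m2_out, xmin, lt = mach2(2.0e4, chi, mR, xi)  # outside the sonic radius
m2_in, _, _ = mach2(1.0e2, chi, mR, xi)       # inside the sonic radius
print(xmin, lt, m2_out, m2_in)
```

Only the branch structure of equation \eqref{eq:MachISO} is being illustrated here; for the reference model of the text the same construction would require the actual $f$ of equation \eqref{eq:f_gen}.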
Although equation \eqref{eq:MachISO} provides an explicit expression of $\M$,
it can be useful to have its asymptotic
trend at small and large distances from the centre; from equation
\eqref{eq:Wz_asymp} and the expansion of $f(x)$, one has
\begin{equation}
\M^2 \sim
\begin{cases}
\displaystyle{\frac{2\chiup}{x}+\frac{2(2\xi-\MR)}{\xi}\ln\frac{x}{\xmin}}\hspace{0.2mm},
\hspace{0.8cm} x \to 0\hspace{0.4mm},
\\[12pt]
\displaystyle{\lt^2\hspace{0.4mm}x^{-\hspace{0.25mm}2\hspace{0.1mm}\left(2\hspace{0.4mm}+\frac{\MR}{x}\hspace{-0.2mm}\right)}}\hspace{0.2mm},
\hspace{2.273cm} x \to \infty\hspace{0.1mm}.
\end{cases}
\label{eq:M2_asymp}
\end{equation}
Of course, the same result can also be established by asymptotic expansion of
equation \eqref{eq:Bondi_eq_iso}.
Therefore, in the central region $\M \propto x^{-1/2}$ for $\chiup>0$, while
$\M \sim \sqrt{\hspace{0.2mm}2(2-\MR/\xi)\hspace{-0.35mm}\ln\hspace{0.2mm}(x/\xmin)}$
when $\chiup=0$ (provided that $\MR>2\xi$).
As already found for JJ models in the isothermal case, for J3 models too the case $\chiup=0$
(corresponding, from equation \eqref{eq:PsiT_Bondi}, to a galaxy
without a central MBH) reveals some interesting properties of the
gas flow, also relevant for the understanding of the more natural situation $\chiup>0$.
In fact, near the centre $f(x) \sim (2-\MR/\xi)\ln x$, and a solution is possible only for $\MR \geq 2\xi$,
with $\xmin$ given by equation \eqref{eq:xminISO}.
When $\MR<2\xi$, $\fmin=-\,\infty$ (reached at the origin), and therefore no accretion
is possible since $\lt$ would be zero.
In the special case $\chiup=0$ and $\MR=2\xi$, $\fmin$
is again reached at the origin, but $f$ now converges to $\fmin=2(1+\ln\xi)$, with $\lt=\xi^2\e^{3/2}$.
Given the similarity of JJ and J3 models near the centre, the fact that both models
share the same properties at small radii is not surprising\footnote{For a further
discussion of the effect of the central density slope on the existence of
isothermal accretion solutions with $\chiup=0$, see CP17 and CP18.}.
Equation \eqref{eq:M2_asymp} can still be used with $\chiup=0$ for
$\MR>2\xi$, while for $\MR=2\xi$, from equations \eqref{eq:MachISO}
and \eqref{eq:Wz_asymp} it can be shown that $\M^2 = 1+\Og(x)$.
We now show how the condition $\MR\geq 2\xi$, required for accretion when $\chiup=0$,
imposes an upper limit on $\Tinf$. In fact, from equation \eqref{eq:csib},
with $\gamma=1$, the identity $\MR/(2\xi)=3/(2\beta\Fg)$ produces a condition
for $\beta$\hspace{0.1mm}:
\begin{equation}
\beta \leq \frac{3}{2\Fg(\xig)} \equiv \bc\hspace{0.2mm},
\label{eq:bc}
\end{equation}
where the critical parameter $\bc$ depends only on $\xig$.
It follows that {\it in the absence of a central MBH, isothermal accretion in J3 galaxies
is possible only for}
\begin{equation}
\Tinf \leq \bc\hspace{0.3mm}\TV\hspace{0.2mm},
\quad\,
{\rm i.e.},
\quad\,
\sigpg(0)=\sigg(0)\geq\cinf\hspace{0.4mm},
\end{equation}
where the last inequality derives from equation \eqref{eq:R/xi}.
For reference, in Fig. \ref{fig:bc} (right panel) we show
$\bc$ as a function of $\xig$, for both JJ and J3 models; it is easy to prove that
$3/2 < \bc \leq 9/(\upi^2\!-6)\simeq 2.33$.
As anticipated, the limitation $\MR\geq 2\xi$ when $\chiup=0$ is
also relevant for the understanding of the flow behaviour when $\chiup>0$.
In fact, it is possible
to show that, by defining $\tau \equiv \beta/\bc = 2\xi/\MR$, for $\MR\to\infty$
and fixed\footnote{As for JJ models (CP18, equation (48)), from equation \eqref{eq:xminISO} it follows
that the limit for $\MR\to\infty$ is {\it not} uniform in $\tau$.}
$\tau$, we have
\begin{equation}
\xmin \sim
\begin{cases}
\displaystyle -\hspace{0.4mm}\frac{\tau+\Wmu(-\hspace{0.4mm}\tau\hspace{0.2mm}\e^{-\hspace{0.2mm}\tau})}{2}\hspace{0.4mm}\MR\hspace{0.4mm},
\hspace{0.75cm} \tau < 1\hspace{0.2mm},
\\[10pt]
\displaystyle \sqrt{\frac{\chiup\hspace{0.2mm}\MR}{2}}\hspace{0.4mm},
\hspace{2.755cm} \tau = 1\hspace{0.2mm},
\\[14pt]
\displaystyle \frac{\chiup\hspace{0.2mm}\tau}{2\hspace{0.2mm}(\tau-1)}\hspace{0.4mm},
\hspace{2.45cm} \tau > 1.
\end{cases}
\label{eq:xmin_asymp}
\end{equation}
The trend of $\xmin$ as a function of $\beta$ is shown by the black solid line
in Fig. \ref{fig:rmin} (top right panel), for a minimum halo model with $\xig=13$
and $\mu=0.002$. For example, equation \eqref{eq:xmin_asymp} allows us to explain the drop
at increasing $\beta$ when $\tau$
switches from below to above unity, with
$\xmin \simeq \chiup/2$ independently of $\tau$;
the black square point at $\rmin \simeq 57.34\,\rB$ corresponds to
$\beta=\bc\simeq 1.65$, well approximated by the value $57.01\,\rB$
obtained from equation \eqref{eq:xmin_asymp}.
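The non-uniform limit in equation \eqref{eq:xmin_asymp} can be checked directly against the exact expression \eqref{eq:xminISO}. A small sketch, with illustrative parameters and $\tau=2\xi/\MR$ held fixed as $\MR$ grows:

```python
import numpy as np
from scipy.special import lambertw

def xmin_exact(chi, mR, xi):
    """Equation (eq:xminISO)."""
    return -xi - 0.5*mR*lambertw(
        -(2.0*xi/mR)*np.exp(-(chi + 2.0*xi)/mR), k=-1).real

chi, mR = 1.0, 1.0e4

# tau > 1: x_min tends to the O(1) value chi*tau/(2*(tau - 1))
tau = 1.5
exact_hi = xmin_exact(chi, mR, 0.5*tau*mR)
approx_hi = chi*tau/(2.0*(tau - 1.0))

# tau < 1: x_min grows proportionally to \MR
tau = 0.5
exact_lo = xmin_exact(chi, mR, 0.5*tau*mR)
approx_lo = -0.5*mR*(tau + lambertw(-tau*np.exp(-tau), k=-1).real)
print(exact_hi, approx_hi, exact_lo, approx_lo)
```
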
Equation \eqref{eq:xmin_asymp} allows us to find the behaviour of $\lt$ for large
values of $\MR$ (at fixed $\beta$). For example, in the peculiar case $\beta=\bc$ (i.e. $\tau=1$),
an asymptotic analysis shows that $\lt\sim\e^{3/2}\MR^2/4$;
for simplicity, we do not
report the expression of $\lt$ for $\beta\neq\bc$, which can, however, be easily calculated.
As shown in Fig. \ref{fig:lt} (left panel), the presence of the galaxy makes $\lt$
several orders of magnitude larger than without it.
A summary of the results can be seen by inspection of
Fig. \ref{fig:Mach} (top panels), where we show the radial profile of the
Mach number for three different values of the temperature parameter
($\beta=1$, $2$, $3$).
Solid lines show the two critical solutions: one in which the gas flow is supersonic
at large radii and approaches the centre with vanishing velocity, and the other in which $\M$
continuously increases towards the centre.
The dotted lines show two illustrative subcritical solutions with $\lambdaup=0.8\,\lt$.
It is apparent that $\rmin$ decreases very rapidly with
increasing temperature at the transition from $\beta=1$ to $\beta=2$:
$\rmin \simeq 19.89\,\rs$, $0.0093\,\rs$, and $0.0024\,\rs$, for
$\beta=1$, $2$, and $3$, respectively.
Finally, once the Mach number profile is known, the gas density profile
is obtained from the first equation of the
system \eqref{eq:cont+Bern_lambda} with $\gamma=1$, i.e.,
\begin{equation}
\rhotil(x)=\frac{\rho(x)}{\rhoinf}=\frac{\lambdaup}{x^2\M(x)}.
\end{equation}
Along the critical solution, by virtue of equation \eqref{eq:M2_asymp} it follows that
$\rhotil \sim \lt\hspace{0.2mm}x^{-\hspace{0.2mm}3/2}/\sqrt{2\chiup}$ at the centre when
$\chiup>0$, while $\rhotil \sim x^{\MR/x}$ at large radii.
Fig. \ref{fig:rho(x)_T(x)} (top panel) shows the radial trend of $\rhotil$
for the critical accretion solution in our reference model, with $\lt \simeq 2.14\times 10^8$.
The bottom panel shows the gas velocity profile and, for comparison,
the isotropic velocity dispersion $\sigr$.
Notice that near the centre, $\sigBH \propto r^{-\hspace{0.2mm}1/2}$ and
$\varv=\cinf\hspace{0.3mm}\M\propto r^{-\hspace{0.2mm}1/2}$ (provided that $\chiup>0$),
so that their ratio is constant; it can be easily shown that $\varv/\sigBH \sim 6\hspace{0.2mm}\chiup$.
The value of $\sigBH$ near the centre (i.e. of $\sigr$ if a central MBH is present),
is then a proxy for the isothermal gas inflow velocity.
\begin{figure}
\includegraphics[width=1\linewidth]{fig4.pdf}\\
\vspace{-2.5mm}
\caption{Critical temperature parameter
$\bc \equiv 3/[\hspace{0.15mm}2\hspace{0.2mm}\Fg(\xig)]$
as a function of $\xig$, for J3 and JJ galaxy models.
For $\beta=\Tinf/\TV>1$,
the isothermal accretion in the absence of a central MBH is possible
provided that $\beta \leq \bc$. In these circumstances, once $\xig$
is fixed, the upper limit of $\beta$ for J3 models is lower
than that for JJ ones.}
\label{fig:bc}
\end{figure}
\begin{figure*}
\includegraphics[width=0.326\linewidth]{fig5a.pdf}
\,
\includegraphics[width=0.326\linewidth]{fig5b.pdf}
\,
\includegraphics[width=0.326\linewidth]{fig5c.pdf}\\
\vspace{4mm}
\includegraphics[width=0.326\linewidth]{fig5d.pdf}
\,
\includegraphics[width=0.326\linewidth]{fig5e.pdf}
\,
\includegraphics[width=0.326\linewidth]{fig5f.pdf}\\
\vspace{4mm}
\includegraphics[width=0.326\linewidth]{fig5g.pdf}
\,
\includegraphics[width=0.326\linewidth]{fig5h.pdf}
\,
\includegraphics[width=0.326\linewidth]{fig5i.pdf}\\
\caption{Radial profile of the Mach number for the polytropic Bondi
problem in a minimum halo J3 galaxy model with $\xig=13$, $\chiup=1$ and
$\mu=0.002$, for three different values of the gas temperature
($\beta=1$, $2$, $3$). Solid lines show the two critical solutions ($\lambdaup=\lt$),
while dotted lines indicate the two subcritical solutions ($\lambdaup=0.8\,\lt$);
the distance from the centre is given in units of both $\rB$
(bottom axis) and $\rs$ (top axis, using equation \eqref{eq:rB/rs}).
In blue we plot the subsonic regime and in red the supersonic one.
The top panels show the isothermal case ($\gamma=1$): notice how, in accordance with
Fig. \ref{fig:rmin}, the position of $\rmin$ decreases very rapidly passing
from $\beta=1$ to $\beta=2$.
Middle panels show the case $\gamma=4/3$: in accordance with
the dashed black lines in Fig. \ref{fig:rmin}, $\rmin/\rB$ increases for increasing
$\beta$, while $\rmin/\rs$ decreases (note that a logarithmic scale for the radius axes has
been used).
Finally, bottom panels show the adiabatic case ($\gamma=5/3$): the position of the sonic
point is reached at the centre, the accretion solutions are always subsonic (i.e. $\M<1$),
and the wind solutions are always supersonic (i.e. $\M>1$).}
\label{fig:Mach}
\end{figure*}
\begin{figure}
\includegraphics[width=1\linewidth]{fig6.pdf}\\%0.331 se orizzontale
\vspace{-3.5mm}
\caption{Density (top), temperature (middle), and velocity (bottom) profiles,
as a function
of $r/\rs$, for the critical (i.e. $\lambdaup=\lt$) accretion
solution of the polytropic
Bondi problem in a minimum halo J3 galaxy model with $\xig=13$,
$\chiup=1$, and $\mu=0.002$. The gas temperature at
infinity $\Tinf$ equals the stellar virial temperature $\TV$, i.e.
$\beta=1$. For comparison, the dotted line in the bottom panel
shows the isotropic velocity dispersion profile $\sigr$.}
\label{fig:rho(x)_T(x)}
\end{figure}
\subsection{The $1<\gamma<5/3$ case}
When $1<\gamma<5/3$, from the expression for $f(x)$ the
determination of $\xmin$ and $\fmin$ is trivial in one case only, i.e. for $\xi\to\infty$
(or $\MR \to 0$)\hspace{0.1mm}: in this situation the galaxy contribution vanishes,
and the position of the only minimum of $f$ reduces to
$\xmin=\chiup\hspace{0.2mm}(5-3\gamma)/4$. Therefore, following
KCP16, the behaviour of the associated $\lt$ could be found by carrying out
a perturbative analysis (see KCP16, Appendix A); however, since in our models
$\MR$ falls in the range $10^3 \div 10^4$, we shall not discuss this limiting case further.
In general, the problem of the determination of $\xmin$ (and so
of $\lt$) cannot be solved analytically, as is apparent by combining equations
\eqref{eq:f_gen} and \eqref{eq:psi(x/csi)}, and setting $df/dx=0$\hspace{0.2mm};
a numerical investigation is then needed.
As in the case of isothermal flows, we begin by considering $0<\chiup\leq 1$.
Of course, as $f$ is strictly positive, continuous, and divergent to infinity
for $x \to 0$ and $x \to \infty$, the existence of at least one minimum is
guaranteed.
A detailed numerical exploration shows that, in analogy with the isothermal
case in Hernquist galaxies (CP17), it is possible to have more than one critical
point of $f$, depending on $\beta$ and $\gamma$.
In particular, there can be a single minimum for $f$, or two minima and one maximum.
We found that for $\xig=13$ and $\beta \approx 1 \div 2$, only one minimum is
present for $\gamma \lesssim 1.01$ and $\gamma \gtrsim 1.1$; instead,
for $1.01 \lesssim \gamma \lesssim 1.1$,
three critical points and two minima are present.
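The multiple critical points just described can be located with a standard numerical scan. Since equation \eqref{eq:f_gen} is defined elsewhere in the paper, the sketch below applies the method to a purely illustrative toy function with the same qualitative features (positive, divergent at both ends, two minima and one maximum); only the bracketing logic is the point.

```python
import numpy as np
from scipy.optimize import brentq

def critical_points(f, xlo, xhi, n=400):
    """Bracket sign changes of df/dx on a logarithmic grid, then refine
    each bracket with Brent's method."""
    def dfdx(x, h=1e-7):
        return (f(x*(1.0 + h)) - f(x*(1.0 - h))) / (2.0*h*x)
    grid = np.geomspace(xlo, xhi, n)
    return [brentq(dfdx, a, b)
            for a, b in zip(grid[:-1], grid[1:]) if dfdx(a)*dfdx(b) < 0.0]

# Toy f with two minima (x = 1/e, e) and one maximum (x = 1):
# f = u^4/4 - u^2/2 + 1 with u = ln x, so df/dx = (u^3 - u)/x.
def f_toy(x):
    u = np.log(x)
    return 0.25*u**4 - 0.5*u**2 + 1.0

roots = critical_points(f_toy, 1e-2, 1e2)
print(roots)   # close to [1/e, 1, e]
```

Applied to the actual $f$ of equation \eqref{eq:f_gen}, the same scan returns one or three critical points depending on $\beta$ and $\gamma$, reproducing the behaviour described above.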
When $\beta$ is small (i.e. $\Tinf$ is low), the absolute minimum of $f$ is
reached at the outer critical point; as $\beta$ increases, the value of $f$
at the inner critical point decreases, and the flow is finally characterised
by two sonic points. Increasing $\Tinf$ further, the inner
critical point becomes the new sonic point, and $\xmin$ jumps to a smaller value.
Fig. \ref{fig:rmin} (top right panel) shows the position of $\xmin$ as a function
of $\beta$ for different values of $\gamma$, and confirms these trends of
$\xmin$ with $\Tinf$ and $\gamma$.
Notice how the location of $\rmin$ in units of $\rs$
(shown in the bottom panel) now declines
extremely slowly for $\gamma>1$.
According to equation \eqref{eq:rminrs}, this means that, for polytropic indices
sufficiently greater than $1$, the ratio $\rB/\rs$ decreases faster than
$\xmin$ increases.
In the case $\chiup=0$ the sonic point is reached
at the origin. Indeed, $f$
tends to zero when $x \to 0$, and diverges for $x \to \infty$, so that
$\xmin \to 0$ for every choice of the model parameters. Therefore, from equation
\eqref{eq:lambda_cr} one has $\lt \to 0$, concluding the discussion
of the problem in the absence of a central MBH, since no accretion can take place.
Having determined the position $\xmin$, we can compute
numerically the corresponding value of $\lt$, given in the polytropic case
by equation \eqref{eq:lambda_cr} with $\fmin=f(\xmin)$ obtained from equation \eqref{eq:f_gen}.
In Fig. \ref{fig:lt} (right panel) the critical accretion parameter is shown as a function
of $\xig$, for a reference model with $\gamma=4/3$ and different values of $\beta$.
We note that, at variance with the isothermal case (left panel), $\lt$ is roughly constant
for fixed $\beta$ independently of the extension of the DM halo, while, at fixed $\xig$, it
increases for decreasing $\Tinf$.
Having also determined $\lt$, we finally solve numerically
equation \eqref{eq:Bondi_eq_poly}, obtaining the Mach profile $\M(x)$.
In Fig. \ref{fig:Mach} (middle panels) we show $\M(x)$
for three different values of the temperature parameter
($\beta=1$, $2$, $3$).
The logarithmic scale allows one to appreciate how, according to Fig. \ref{fig:rmin},
$\xmin$ suddenly drops below unity as $\gamma$ increases with respect
to the isothermal case.
As an illustrative example, we show the case $\gamma=4/3$.
Although the trend is not very strong, the location of the sonic point,
at variance with the $\gamma=1$ case, moves away from the
centre as the temperature increases: $\rmin \simeq 0.025\,\rB$, $0.046\,\rB$, and $0.062\,\rB$,
for $\beta=1$, $2$, and $3$, respectively. For comparison, on the top axis we give the
distance from the origin in units of $\rs$, from which it can be seen that, in accordance
with Fig. \ref{fig:rmin} (bottom panel), $\rmin$ now tends to increase slightly,
while still of the order of $10^{-4}\,\rs$.
Once the radial profile of the Mach number is known, both the gas density and
temperature profiles can be obtained from the following relations:
\begin{equation}
\rhotil\hspace{0.25mm}
=\hspace{0.3mm}\tT^{\hspace{0.35mm}\frac{1}{\gamma-1}}\hspace{-0.2mm}
=\hspace{0.3mm}\left(\frac{\lambdaup}{x^2\M}\right)^{\hspace{-0.2mm}\frac{2}{\gamma+1}},
\label{eq:chain}
\end{equation}
with $\tT=T/\Tinf$.
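Equation \eqref{eq:chain} is a pure post-processing step: once $\M(x)$ is known, density and temperature follow algebraically. A minimal sketch, with illustrative numbers:

```python
def density_temperature(x, mach, lam, gamma):
    """Equation (eq:chain): rho/rho_inf and T/T_inf from the Mach profile."""
    rho = (lam / (x**2 * mach)) ** (2.0 / (gamma + 1.0))
    T = rho ** (gamma - 1.0)
    return rho, T

# Illustrative point on a gamma = 4/3 solution
rho, T = density_temperature(x=0.5, mach=0.3, lam=2.0, gamma=4.0/3.0)

# Consistency: T^{1/(gamma-1)} = rho, and the continuity relation
# lam = x^2 * M * rho^{(gamma+1)/2} is recovered.
print(rho, T)
```
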
Fig. \ref{fig:rho(x)_T(x)} shows the trends of $\rho$ (top panel) and $T$ (middle panel),
as a function of $r/\rs$, for the critical accretion
solution in our usual reference model.
The parameter $\beta$ is fixed to unity, and the curves refer to different
polytropic indices.
Concerning the Mach profile for the critical accretion solution,
an asymptotic analysis of equation \eqref{eq:Bondi_eq_poly} shows that, at leading order,
\begin{equation}
\M \sim
\begin{cases}
\displaystyle{\hspace{0.2mm}\lt^{-\frac{\gamma-1}{2}}(2\hspace{0.1mm}\chiup)^{\frac{\gamma+1}{4}}\hspace{0.2mm}x^{-\frac{5-3\gamma}{4}}}\hspace{0.2mm},
\hspace{9mm} x \to 0\hspace{0.4mm},
\\[10pt]
\displaystyle{\hspace{0.2mm}\lt\hspace{0.45mm} x^{-\hspace{0.35mm}2}}\hspace{0.2mm},
\hspace{2.725cm} x \to \infty\hspace{0.1mm}.
\end{cases}
\label{eq:M_poly}
\end{equation}
Notice that all the information about the specific galaxy model in the two
regions is contained in the parameter $\lt$.
Equation \eqref{eq:M_poly} allows us to find the asymptotic behaviour at small and
large radii of the most important quantities concerning the Bondi accretion.
Close to the centre, for example,
$\rhotil \sim \lt\hspace{0.2mm}x^{-\hspace{0.2mm}3/2}/\sqrt{2\chiup}\hspace{0.3mm}$
(as in the isothermal case),
independently of the value of $\gamma$, and so for
the gas velocity $\varv=\cs\hspace{0.3mm}\M$ one finds
\begin{equation}
\varv^2(r)
\sim \frac{\Psin\hspace{0.2mm}2\hspace{0.2mm}\chiup\hspace{0.2mm}\mu}{s}
\sim 6\hspace{0.2mm}\chiup\hspace{0.3mm}\sigBH^2(r)\hspace{0.2mm}.
\end{equation}
Therefore, the central value of $\sigBH$ is a proxy for the gas inflow velocity
also in the range $1 < \gamma < 5/3$.
Fig. \ref{fig:rho(x)_T(x)} (bottom panel) shows the radial trend of $\varv$
for different values of $\gamma$: notice how, moving away from the centre, it
decreases progressively faster for $\gamma>1$ (see the green dashed line, corresponding
to $\gamma=1.1$), while deviating significantly from the isotropic stellar velocity
dispersion profile.
We conclude by noting that the inclusion of the effects of the
gravitational field of a host galaxy allows one to estimate the total mass profile,
$\MT(r)=\MBH+\Mg(r)$, under the assumption of hydrostatic equilibrium
(see e.g. Ciotti \& Pellegrini 2004; Pellegrini \& Ciotti 2006).
First of all, we note that the estimated mass reads
\begin{equation}
\Mest(r)=
\MT(r)+\frac{r^2}{2G}\frac{d\varv^2}{dr}\hspace{0.3mm},
\end{equation}
whence it is clear that the hypothesis of hydrostatic equilibrium always leads to an underestimate of $\MT$
in accretion studies, where the velocity increases in magnitude towards the centre.
Simple algebra shows that the expression of $\Mest$ is given by
\begin{equation}
\Mest(r)
=-\hspace{0.4mm}\frac{r^2}{G\rho(r)}\hspace{0.1mm}\frac{dp}{dr}
=-\hspace{0.4mm}\MBH\hspace{0.4mm}\frac{x^2}{\rhotil^{\hspace{0.3mm}2-\hspace{0.3mm}\gamma}}\hspace{0.1mm}\frac{d\rhotil}{dx},
\end{equation}
and, near the MBH (i.e. for $x \to 0$), where
$\rhotil \sim \lt\hspace{0.2mm}x^{-3/2}/\sqrt{2\chiup}\hspace{0.3mm}$,
\begin{equation}
\frac{\Mest(r)}{\MBH} \sim
\frac{3}{2}\left(\frac{\lt}{\sqrt{\hspace{0.2mm}2\hspace{0.1mm}\chiup\hspace{0.3mm}}}\hspace{0.3mm}\right)^{\gamma-1}
\hspace{-0.2mm}x^{\frac{5-3\gamma}{2}};
\label{eq:Mest_poly}
\end{equation}
notice that in the isothermal limit case one has $\Mest(r) \propto r$.
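The central scaling \eqref{eq:Mest_poly} can be verified numerically by differentiating the asymptotic density profile directly; the values of $\lt$, $\chiup$ and $\gamma$ below are illustrative, not taken from the reference model.

```python
import numpy as np

lt, chi, gamma = 10.0, 1.0, 1.2          # illustrative parameters
A = lt / np.sqrt(2.0 * chi)

def rho(x):
    """Central asymptote of the critical solution, rho ~ lt x^{-3/2}/sqrt(2 chi)."""
    return A * x**-1.5

def mest_over_mbh(x, h=1e-6):
    """Hydrostatic estimate M_est/M_BH = -x^2 rho^{gamma-2} drho/dx."""
    drho = (rho(x*(1.0 + h)) - rho(x*(1.0 - h))) / (2.0*h*x)
    return -x**2 * rho(x)**(gamma - 2.0) * drho

x = 1.0e-4
numeric = mest_over_mbh(x)
# Equation (eq:Mest_poly); for gamma -> 1 the exponent gives M_est ∝ x.
asymptotic = 1.5 * A**(gamma - 1.0) * x**(0.5*(5.0 - 3.0*gamma))
print(numeric, asymptotic)
```
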
\subsection{The $\gamma=5/3$ case}
The monoatomic case ($\gamma=5/3$) presents some special behaviour
deserving a short description.
By considering equation \eqref{eq:f_gen} with $\gamma=5/3$, it follows that
$f$ is monotonically increasing and the only
minimum is reached at the centre (KCP17); moreover, for galaxy models with
$r\hspace{0.1mm}\Psig(r) \to 0$ when $r \to 0$ (as for J3 models), one
finds $\fmin=\chiup$, whence $\lt=\chiup^2/4$.
Therefore, $\chiup>0$ is required in order to have accretion.
When $\lambdaup=\chiup^2/4$, the Bondi problem \eqref{eq:Bondi_eq_poly} reduces
to the fourth-degree equation in $\sqrt{\M\hspace{0.5mm}}$
\begin{equation}
\M^2\hspace{-0.2mm}-\frac{4f(x)}{\chiup}\hspace{0.35mm}\sqrt{\M\hspace{0.5mm}}+3=0\hspace{0.4mm},
\end{equation}
provided that the condition on the central potential mentioned
above is satisfied; note that the dependence on the specific galaxy model is contained
only in the function $f(x)$.
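In practice, the quartic in $\sqrt{\M}$ can be solved with any polynomial root finder; a sketch, with an illustrative value of $f/\chiup$ (which must satisfy $f \geq \chiup$ for real solutions, consistently with $\fmin=\chiup$):

```python
import numpy as np

def mach_gamma53(fx, chi):
    """Solve M^2 - (4 f/chi) sqrt(M) + 3 = 0, i.e. the quartic
    u^4 - (4 f/chi) u + 3 = 0 in u = sqrt(M); real positive roots only.
    Returns [M_accretion (< 1), M_wind (> 1)] for f > chi."""
    b = 4.0 * fx / chi
    roots = np.roots([1.0, 0.0, 0.0, -b, 3.0])
    u = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0.0)
    return [ui**2 for ui in u]

m_acc, m_wind = mach_gamma53(fx=1.25, chi=1.0)   # illustrative f/chi
print(m_acc, m_wind)
```

At $f=\chiup$ the two branches merge at $\M=1$, the sonic point reached at the centre.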
In the bottom panels of Fig. \ref{fig:Mach} we show the radial profile of the
Mach number.
In this situation, $\xmin=0$, and so the accretion solutions (blue lines) are
subsonic everywhere.
The asymptotic behaviour of $\M(x)$ for the critical accretion solution when $x \to \infty$ is obtained from
equation \eqref{eq:M_poly} just by fixing $\lt=\chiup^2\hspace{-0.2mm}/\hspace{0.2mm}4$.
When $x \to 0$, instead, the $\gamma=5/3$ case does {\it not} coincide with the limit of
equation \eqref{eq:M_poly} for $\gamma \to 5/3$\hspace{0.1mm}:
in fact, now $\M \to 1$ instead of infinity, and its asymptotic trend reads
\begin{equation}
\M(x) \sim 1-\hspace{0.1mm}\sqrt{\hspace{0.4mm}-\hspace{0.4mm}\frac{8\hspace{0.1mm}\MR\hspace{0.3mm}x\ln x}{3\hspace{0.2mm}\xi\hspace{0.15mm}\chiup}\hspace{0.4mm}}\hspace{0.3mm},
\qquad
x \to 0\hspace{0.15mm};
\end{equation}
of course, the same situation at small radii occurs for any
other quantity derived from the Mach number profile: for example,
\begin{equation}
\varv^2(r) \sim \frac{\Psin\hspace{0.2mm}\chiup\hspace{0.2mm}\mu}{2\hspace{0.1mm}s}\hspace{0.2mm},
\qquad
\Mest(r) \sim \frac{3\hspace{0.2mm}\chiup}{4}\hspace{0.3mm}\MBH\hspace{0.1mm}.
\end{equation}
Notice that $\varv$ decreases by a factor of $2$ with respect to the $1 \leq \gamma < 5/3$ case,
and $\Mest$ differs from what would be obtained
setting $\gamma=5/3$ and $\lt=\chiup^2\hspace{-0.2mm}/\hspace{0.2mm}4$ in
equation \eqref{eq:Mest_poly}.
\section{Entropy and Heat Balance Along The Bondi Solution}\label{sec:Entropy}
\begin{figure*}
\includegraphics[width=0.48\linewidth]{fig7a.pdf}
\quad\,\,\,
\includegraphics[width=0.48\linewidth]{fig7b.pdf}\\
\caption{Absolute value of the rate of heat per unit length exchanged by the fluid element,
$4\hspace{0.1mm}\upi r^2|\Qdot\hspace{0.15mm}|$, in units
of $\rB^2\hspace{0.1mm}\Qn$,
as a function of $r/\rs$, for the critical Bondi accretion of
a minimum halo J3 model with $\xig=13$, $\chiup=1$, and $\mu=0.002$.
Left panel: isothermal case for $\beta=1$, $1.5$, and $2$.
Right panel: monoatomic gas
($\gammaAD=5/3$) with $\beta=1$, for different values of the polytropic index.
In both panels, the dashed lines correspond to isothermal
accretion with $\Tinf=\TV$.}
\label{fig:4pir2Q}
\end{figure*}
In this Section we employ the polytropic solutions obtained above to elucidate some
important thermodynamical aspects of Bondi accretion, not always sufficiently
stressed in the literature. In fact, it is not uncommon to consider Bondi accretion
as an `adiabatic' problem, in which no radiative losses or other forms of heat
transfer take place: after all, no heating or cooling functions seem to be specified
at the outset of the problem. This is not true, however: the Bondi solution is
a purely hydrodynamical flow in which all the
thermodynamics of heat exchange is implicitly described by the polytropic
index $\gamma$.
Therefore, for given $\gamma$ (and in the absence of shock
waves), one can follow the entropy evolution of each fluid element along the
radial streamline, and determine the reversible heat exchanges.
Let us consider polytropic Bondi accretion with\footnote{Notice that $\gamma<\gammaAD$
is not required; for example, one could study
a $\gamma=5/3$ accretion in a diatomic gas with $\gammaAD=7/5$.}
$\gamma\neq\gammaAD$.
From the expression of the entropy per unit mass $\s$ for a perfect gas
(e.g. Chandrasekhar 1939; Zel'dovich \& Raizer 1966), and assuming
as reference value for $\s$ its value at infinity,
we can write the change of entropy of an element of the
accreting flow along its radial streamline, during a polytropic transformation, as
\begin{equation}
\frac{D\s}{Dt}=\cV\hspace{0.2mm}(\hspace{0.15mm}\gamma-\gammaAD)\hspace{0.4mm}
\frac{D\ln\rhotil}{Dt}\hspace{0.2mm},
\qquad\,\,
\Delta\s \equiv \s-\hspace{0.15mm}\sinf\hspace{0.2mm},
\label{eq:Ds/Dt}
\end{equation}
where $D/Dt=\partial/\partial t+{\bm \varv}\hspace{0.15mm}\cdot\Grad$ is
the material derivative.
Of course, for $\gamma=\gammaAD$ no change of entropy occurs along regular solutions,
since the process is isentropic; instead, for $\gamma\neq\gammaAD$, once the
solution of the Bondi problem is known, equation \eqref{eq:Ds/Dt} allows
one to compute the entropy change of a fluid element.
From the second law of thermodynamics, the rate of heat per unit mass
exchanged by the fluid element can be written as
\begin{equation}
\frac{Dq}{Dt}=T\hspace{0.25mm}\frac{D\s}{Dt}\hspace{0.1mm}.
\label{eq:DQ/Dt_s}
\end{equation}
Therefore, from equation \eqref{eq:Ds/Dt}, it follows that, for $\gamma\neq\gammaAD$, a fluid element
necessarily exchanges heat with the ambient medium; this fact can be restated in terms of
the specific heat as
\begin{equation}
\frac{Dq}{Dt}=\mathcal{C}\hspace{0.5mm}\frac{DT}{Dt}
=\cV\hspace{0.2mm}\frac{\gamma-\gammaAD}{\gamma-1}\hspace{0.1mm}\frac{DT}{Dt}
\hspace{0.1mm},
\label{eq:DQ/Dt_T}
\end{equation}
where $\mathcal{C}$ is the {\it constant} specific heat for polytropic transformations
(see e.g. Chandrasekhar 1939).
A third (equivalent) expression for the heat exchange can be
finally obtained from the first law of thermodynamics, i.e.,
\begin{equation}
\frac{Dq}{Dt}
=\frac{De}{Dt}-\frac{p}{\rho^2}\frac{D\rho}{Dt}\hspace{0.2mm},
\label{eq:DQ/Dt_First}
\end{equation}
where $e$ is the internal energy per unit mass, and,
apart from an additive constant,
$h=e+p/\rho=\cp\hspace{0.15mm}T$ is the enthalpy per unit mass.
In the stationary case, from equations \eqref{eq:DQ/Dt_s}, \eqref{eq:DQ/Dt_T}
and \eqref{eq:DQ/Dt_First}, one has
\begin{equation}
\frac{\Qdot}{\rho}\equiv\frac{Dq}{Dt}=
\begin{cases}
\hspace{0.5mm}\displaystyle\cV\hspace{0.2mm}(\hspace{0.15mm}\gamma-\gammaAD)\hspace{0.5mm}T\hspace{0.3mm}{\bm \varv}\hspace{0.15mm}\cdot\frac{\Grad\rho}{\rho}\hspace{0.2mm},
\\[11pt]
\hspace{0.5mm}\displaystyle \cV\hspace{0.2mm}\frac{\gamma-\gammaAD}{\gamma-1}\hspace{0.6mm}{\bm \varv}\hspace{0.15mm}\cdot\Grad\hspace{0.25mm}T
\hspace{0.1mm},
\\[10pt]
\hspace{0.5mm}\displaystyle {\bm \varv}\hspace{0.15mm}\cdot\Grad\hspace{-0.25mm}
\left(\frac{\varv^2}{2}+h-\PsiT\hspace{-0.2mm}\right)\hspace{-0.25mm},
\end{cases}
\label{eq:DQ/Dt}
\end{equation}
where $\Qdot$ is the rate of heat exchange per unit volume,
${\bm \varv}=-\hspace{0.5mm}\varv\hspace{0.25mm}\er$,
$\Grad=\er\hspace{0.2mm}d/dr$, and the last expression can be easily proved
(e.g. Ciotti 2021, Chapter 10).
Summarising, a fluid element undergoing a generic polytropic transformation
with $1<\gamma<\gammaAD$ loses energy as it moves inward while its temperature
increases, whereas for $\gamma>\gammaAD$ it experiences a temperature decrease.
In the polytropic Bondi accretion both cases are possible,
except for a monoatomic gas, when accretion is possible {\it only} for
$\gamma \leq \gammaAD = 5/3$ (see Section \ref{sec:Bondi}).
We can now use each expression in equation \eqref{eq:DQ/Dt} to compute the rate
of heat exchange just by substituting in them the solution of the Bondi problem.
Defining $\Qn=\cinf^3\hspace{0.2mm}\rhoinf/\rB$, the first two expressions in
\eqref{eq:DQ/Dt}, and the third one, become respectively
\begin{equation}
\Qdot=\frac{\Qn\hspace{0.2mm}\lt}{x^2}\times
\begin{cases}
\hspace{0.2mm}\displaystyle\frac{\gammaAD-\hspace{0.2mm}\gamma}{\gamma\hspace{0.2mm}(\gammaAD-\hspace{0.2mm}1)}\hspace{0.4mm}
\rhotil^{\hspace{0.5mm}\gamma-2}\hspace{0.4mm}\frac{d\rhotil}{dx}\hspace{0.2mm},
\\[14pt]
\hspace{0.2mm}\displaystyle-\hspace{0.4mm}\frac{d\E}{dx}\hspace{0.2mm},
\end{cases}
\label{eq:Q_normal}
\end{equation}
where, up to an additive constant,
\begin{equation}
\E \equiv
\left[\hspace{0.1mm}\frac{\M^2}{2}+\frac{\gammaAD}{\gamma(\gammaAD-1)}\hspace{0.1mm}\right]\hspace{-0.6mm}\rhotil^{\hspace{0.5mm}\gamma-1}
-\hspace{0.2mm}\left[\hspace{0.3mm}\frac{\chiup}{x}+\frac{\MR}{\xi}\hspace{0.4mm}\psi\!\left(\frac{x}{\xi}\right)\hspace{-0.2mm}\right]\hspace{-0.4mm}.
\end{equation}
The situation is illustrated in Fig. \ref{fig:4pir2Q}: the left panel refers
to the isothermal case and three values of $\beta$; the right panel
shows the case of a {\it monoatomic} gas (i.e. $\gammaAD=5/3$), for
a fixed $\beta$ and different values of $\gamma<\gammaAD$.
The plotted quantity is
$-\hspace{0.5mm}4\hspace{0.1mm}\upi\hspace{0.1mm}r^2\Qdot(r)$, i.e. the rate
of heat per unit length exchanged by the infalling gas element.
In practice, by integrating the curves between two radii $r_1$ and $r_2$, one
obtains the heat per unit time exchanged with the ambient medium by the spherical shell of
thickness $|\hspace{0.3mm}r_2-r_1|$.
For comparison, the dashed lines correspond to the same case, i.e.
isothermal accretion with $\Tinf=\TV$. Notice how in general the profile is almost
a power law over a very large radial range, and how the heat exchange decreases for
increasing $\Tinf$ and for $\gamma$ approaching $\gammaAD$.
An important region for observational and theoretical work is the galactic centre.
The general asymptotic trend of $\Qdot$, for $x \to 0$ and $\chiup>0$, reads
\begin{equation}
\frac{\Qdot}{\Qn}\sim
\begin{cases}
\hspace{0.2mm}\displaystyle\frac{3\hspace{0.2mm}\lt^{\gamma}\hspace{0.2mm}(2\hspace{0.1mm}\chiup)^{-\frac{\gamma-1}{2}}(\gamma-\hspace{0.2mm}\gammaAD)}{2\hspace{0.2mm}\gamma\hspace{0.2mm}(\gammaAD-\hspace{0.2mm}1)}\hspace{0.5mm}x^{-\frac{3(\gamma+1)}{2}}
\sim \frac{3\hspace{0.1mm}\chiup\hspace{0.3mm}(\gamma-\hspace{0.2mm}\gammaAD)}{\lt\hspace{0.2mm}\gamma\hspace{0.2mm}(\gammaAD-\hspace{0.2mm}1)}\hspace{0.5mm}\rhotil^{\hspace{0.4mm}2}\hspace{0.2mm}\tT,
\\[14pt]
\hspace{0.2mm}\displaystyle\frac{3\hspace{0.2mm}\chiup^3(5-3\gammaAD)}{80\hspace{0.3mm}(\gammaAD-\hspace{0.2mm}1)}\hspace{0.3mm}x^{-\hspace{0.4mm}4}
\sim \frac{3\hspace{0.3mm}(5-3\gammaAD)}{5\hspace{0.2mm}\chiup\hspace{0.3mm}(\gammaAD-\hspace{0.2mm}1)}\hspace{0.5mm}\rhotil^{\hspace{0.4mm}2}\hspace{0.2mm}\tT,
\end{cases}
\end{equation}
where in the first expression, $1\leq\gamma< 5/3$ and
$\rhotil \sim \lt\hspace{0.2mm}x^{-\hspace{0.2mm}3/2}/\sqrt{2\chiup}\hspace{0.2mm}$,
and in the second, $\gamma=5/3$ and $\rhotil \sim (\chiup/2)^{3/2}x^{-\hspace{0.2mm}3/2}$.
In practice, close to the centre, $\Qdot$ is a pure power law of logarithmic slope
decreasing from $-\hspace{0.5mm}3$ to $-\hspace{0.6mm}4$ for $\gamma$
increasing from $1$ to $5/3$.
It follows that the volume integrated heat exchanges are always dominated by
the innermost region.
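The two forms of each asymptotic expression above can be checked for consistency numerically, assuming that $\tT$ denotes the dimensionless polytropic temperature $\tT=\rhotil^{\hspace{0.3mm}\gamma-1}$ (an assumption here, since $\tT$ is defined elsewhere in the paper); all parameter values below are arbitrary test values:

```python
import math

gad = 5.0/3.0                     # adiabatic index of a monoatomic gas

def q_powerlaw(x, chi, lam, g):
    """First form of the gamma < 5/3 asymptotics: pure power law in x."""
    return (3.0*lam**g * (2.0*chi)**(-(g - 1.0)/2.0) * (g - gad)
            / (2.0*g*(gad - 1.0))) * x**(-1.5*(g + 1.0))

def q_rho(x, chi, lam, g):
    """Second form, written through rho and T."""
    rho = lam * x**-1.5 / math.sqrt(2.0*chi)   # rho ~ lam x^{-3/2}/sqrt(2 chi)
    T = rho**(g - 1.0)                         # assumed: T = rho^(gamma-1)
    return 3.0*chi*(g - gad) / (lam*g*(gad - 1.0)) * rho**2 * T

for g in (1.0, 1.2, 1.4):
    for x in (1e-3, 1e-2):
        u, w = q_powerlaw(x, 1.3, 0.8, g), q_rho(x, 1.3, 0.8, g)
        assert abs(u - w) <= 1e-9 * abs(u)
```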
We conclude by noticing the interesting fact that the heat per unit mass exchanged by
a fluid element as it moves from $\infty$ down to the radius $r$ admits a very simple
physical interpretation; in fact, by integrating the last expression of
equation \eqref{eq:DQ/Dt} along the streamline,
one obtains for this exchange the remarkable result that
\begin{equation}
\Delta q=\frac{\varv^2}{2}+\Delta h-\PsiT\hspace{0.3mm},
\qquad
\Delta h \equiv h(r)-h(\infty)\hspace{0.2mm};
\label{eq:Deltaq}
\end{equation}
the total heat exchanged by a unit mass
of fluid (moving from $\infty$ to $r$) can then be interpreted as the change
of the Bernoulli `constant' when
the enthalpy change in equation \eqref{eq:Deltah-Deltaq} is evaluated along the
polytropic solution.
There is an interesting alternative way to obtain the result above.
In fact, from the first law of thermodynamics, $dq=dh-dp/\rho$, thus
in our problem we also have
\begin{equation}
\int_{\pinf}^p\frac{dp}{\rho}
\hspace{0.25mm}=\hspace{0.25mm}\Delta h\hspace{0.2mm}-\hspace{0.2mm}\Delta q
=\left(1-\frac{\mathcal{C}}{\cp}\right)\hspace{-0.3mm}\Delta h\hspace{0.2mm}.
\label{eq:Deltah-Deltaq}
\end{equation}
This shows that the integral on the left-hand side, which appears in Bondi accretion
through equation \eqref{eq:Bernoulli},
equals $\Delta h$ {\it only} for $\gamma=\gammaAD$, while, in general,
it is just {\it proportional} to $\Delta h$.
Equation \eqref{eq:Deltaq} can also be obtained
by inserting equation \eqref{eq:Deltah-Deltaq} in equation \eqref{eq:Bernoulli}, and
considering the total potential (galaxy plus MBH).
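Equation \eqref{eq:Deltah-Deltaq} can be verified symbolically for an ideal gas, assuming $\Delta h=\cp\,\Delta T$ with $\cp=\gammaAD\,\cV$, and the polytropic specific heat in the form $\mathcal{C}=\cV\,(\gamma-\gammaAD)/(\gamma-1)$ (an assumption here, though it is the form consistent with the second expression in equation \eqref{eq:DQ/Dt}); a sketch:

```python
import sympy as sp

g, gad, cV, T1, T2 = sp.symbols('gamma gamma_ad c_V T_inf T', positive=True)
R = cV * (gad - 1)              # gas constant per unit mass: R = c_p - c_V
cp = gad * cV                   # c_p = gamma_AD c_V

# along a polytrope p = K rho^gamma:  int_{p_inf}^{p} dp/rho = g/(g-1) R (T - T_inf)
lhs = g / (g - 1) * R * (T2 - T1)
# polytropic specific heat (assumed form): C = c_V (gamma - gamma_AD)/(gamma - 1)
C = cV * (g - gad) / (g - 1)
rhs = (1 - C / cp) * cp * (T2 - T1)   # (1 - C/c_p) Delta h, with Delta h = c_p Delta T

assert sp.simplify(lhs - rhs) == 0
```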
\section{Discussion and conclusions}\label{sec:Conclusions}
A recent paper (CP18) generalised the Bondi accretion theory
to include the effects of the gravitational field of the
galaxy hosting a central MBH, and of electron scattering,
finding the analytical isothermal accretion solution for
Jaffe's two-component JJ galaxy models (CZ18).
The JJ models are interesting because
almost all their relevant dynamical properties can be expressed in
relatively simple analytical form, while reproducing the main structural
properties of real ellipticals.
However, their DM haloes cannot reproduce the expected $r^{-3}$ profile
at large radii, characteristic of the NFW profile; as the Bondi accretion
solution is determined by the gas properties at `infinity', it is important
to understand the effect of a more realistic DM potential at large radii.
Moreover, in CP18 only isothermal solutions were studied.
Later, CMP19 presented two-component J3 galaxy models, similar to the JJ ones
but with the additional property that the DM halo can reproduce the NFW profile
at all radii. J3 models then represent an improvement over JJ ones, while
retaining the same analytical simplicity, and so avoiding the need for numerical investigations to
study their dynamical properties.
In this paper we take advantage of J3 models to study again the generalised Bondi problem,
further extending the investigation to the general case of a polytropic gas,
and elucidating some important thermodynamical properties of accretion.
The parameters describing the solution are linked to the galaxy structure
by imposing that the gas temperature at infinity ($\Tinf$) is proportional
to the virial temperature of the stellar component ($\TV$) through a dimensionless
parameter ($\beta$) that can be arbitrarily fixed.
The main results can be summarised as follows.
\begin{enumerate}
\item The isothermal case can be solved in a fully analytical way.
In particular, there is only one sonic point for any choice of the
galaxy structural parameters and of the value of $\Tinf$.
It is found however that $\rmin$, the position of the sonic radius, is strongly
dependent on $\Tinf$, with values of the order of, or larger than, the
galaxy effective radius ($\Reff$) for temperatures of the order of
$\TV$, and with a sudden decrease down to $\approx 10^{-2}\hspace{0.3mm}\Reff$,
or even lower, at increasing $\Tinf$ (say $\gtrsim 1.5\hspace{0.6mm}\TV$).
In the absence of a central MBH (or $\chiup=0$, i.e. when the gravitational
attraction of the central MBH is perfectly balanced by the radiation pressure),
accretion is possible provided that $\cinf\leq\sigpg(0)$, i.e. when $\Tinf$ is lower than a
critical value, with $\sigpg(0)$ the central projected stellar velocity dispersion.
\item When $1<\gamma<5/3$, the Bondi accretion problem does not allow
for an analytical solution.
A numerical exploration shows that $\rmin$ suddenly drops to
values $\lesssim\rs$ as $\gamma$ increases at fixed $\Tinf$.
Moreover, depending on the specific values of $\MR$,
$\xi$, and $\gamma$, the accretion flow can have one or three critical points,
and in very special circumstances two sonic points.
For a given $\gamma$, quite independently of the extension of the DM halo,
the accretion parameter $\lt$ is roughly constant at fixed $\beta$, with values
several orders of magnitude lower than in the isothermal case.
In the absence of a central MBH, no accretion can take place.
\item In the monoatomic adiabatic case ($\gamma=5/3$)
the Mach number profile can be obtained for a generic galaxy model
by solving a fourth degree algebraic equation.
However, the solution is quite impractical, and
a numerical evaluation is preferred.
As already shown in KCP16, in this case $\lt=\chiup^2\hspace{-0.3mm}/\hspace{0.2mm}4$,
so that, again, the absence of the central MBH makes accretion
impossible.
\item We consider in detail the
thermodynamical properties of Bondi accretion when the polytropic index
$\gamma$ differs from the adiabatic index $\gammaAD$.
Under this circumstance, the entropy of fluid elements changes along their
pathlines, and it is possible to compute the associated heat exchanges ($\Qdot$\hspace{0.3mm}).
We provide the mathematical expressions to compute $\Qdot$ as a function
of radius, once the Bondi problem is solved, and in particular its
asymptotic behaviour near the MBH.
\end{enumerate}
\section*{Data Availability}
No datasets were generated or analysed in support of this research.
\section{Statement of results}
\subsection{Lyapunov exponent of random product of diffeomorphisms of the torus}
We consider the random compositions $g_n=f_{n-1}\circ\cdots \circ f_0$ where $(f_k)_{k\in\mathbb{N}}$ is a sequence of i.i.d. copies of some random diffeomorphism $f$ of the one-dimensional torus $\mathbb{T}=\mathbb{R}/\mathbb{Z}$. The behaviour generally expected under mild assumptions is that almost surely, the random orbits $(g_n(x))_{n\in\mathbb{N}}$ distribute themselves according to a unique \textit{stationary probability measure} $\mu$ on $\mathbb{T}$, and that the derivatives $g_n'(x)$ decrease toward $0$ at a fixed exponential rate given by a \textit{Lyapunov exponent} $\lambda$ (we will recall the precise definitions). The objective is to estimate the measure $\mu$ and the number $\lambda$ when $f$ is a perturbation of a random rotation, and to show by an explicit estimate that $\lambda$ is an obstruction to the existence of a linearization of $f$, that is to say of a deterministic diffeomorphism $g$ such that $gfg^{-1}$ is a rotation.\\
Let us begin by introducing some notations: the circle is identified with the torus $\mathbb{T}=\mathbb{R}/\mathbb{Z}$. For $k\in\mathbb{N}$ we identify $C^k(\mathbb{T})$ with the space of $1$-periodic $C^k$ maps from $\mathbb{R}$ into $\mathbb{R}$ endowed with its standard norm $\|\cdot\|_k$ defined by $\|\varphi\|_k=\sup_{j\leq k,x\in\mathbb{R}}|\varphi^{(j)}(x)|$. In the same way $\mbox{Diff}_+^k(\mathbb{T})$ is the space of increasing diffeomorphisms $f$ from $\mathbb{R}$ onto $\mathbb{R}$ of the form $f=Id+\varphi$ with $\varphi\in C^k(\mathbb{T})$. Noting that the difference of two elements of $\mbox{Diff}_+^k(\mathbb{T})$ belongs to $C^k(\mathbb{T})$ allows us to naturally endow $\mbox{Diff}_+^k(\mathbb{T})$ with the metric $d_k$ defined by $d_k(f,g)=\|f-g\|_k$. With these definitions, a rotation of $\mathbb{T}$ of angle $\alpha$ is simply the translation $Id+\alpha$, which we denote $r_\alpha$.\\
A random diffeomorphism of $\mathbb{T}$ is a random variable valued in $\mbox{Diff}_+(\mathbb{T})$. In the paper, all random variables are implicitly assumed to be defined on the same probability space $(\Omega,\mathcal{F},\mathbb{P})$. Let us recall the notions of stationary measure and Lyapunov exponent for a random diffeomorphism:
\begin{Def1}
Let $f$ be a random diffeomorphism of $\mathbb{T}$ valued in $\mbox{Diff}_+^k(\mathbb{T})$ such that $\ln_+ \|f'\|_0\in L^1(\Omega)$. A probability measure $\mu$ on $\mathbb{T}$ is stationary for $f$ if $\mathbb{E}[f_*\mu]=\mu$ (such a measure always exists by Kakutani fixed point theorem). The associated (mean) Lyapunov exponent is
$$\lambda(\mu)=\mathbb{E}\int_\mathbb{T}\ln{|f'(x)|}d\mu(x).$$
\end{Def1}
We recall some known facts about stationary measures and Lyapunov exponents. We will not use them in this paper but it may enlighten the reader on their meaning and their interest.
\begin{prop1}\label{recap}
Let $f$ be a random diffeomorphism valued in $\mbox{Diff}_+^1(\mathbb{T})$ such that $\ln_+ \|f'\|_0\in L^1(\Omega)$, and let $g_n=f_{n-1}\circ\cdots \circ f_0$, where $(f_k)_{k\in\mathbb{N}}$ is a sequence of i.i.d. copies of $f$.
\begin{itemize}
\item If $f$ is minimal in the sense that the only closed subsets of $\mathbb{T}$ almost surely invariant under $f$ are $\emptyset$ and $\mathbb{T}$, then the stationary measure is unique (see \cite{Deroin-inter}, \cite{Malicet}).
\item If there is a unique stationary measure $\mu$ for $f$ and so a unique Lyapunov exponent $\lambda=\lambda(\mu)$, then for every $x$ in $\mathbb{T}$ we have
$$\frac{1}{n}\ln (g_n'(x))\xrightarrow[n\to +\infty]{}\lambda \mbox{ a.s.}$$
\item $\lambda(\mu)$ is a negative number unless perhaps almost every realization of $f$ preserves $\mu$ (this is an early version, due to Crauel \cite{Crauel}, of the so-called ``invariance principle'' of Avila-Viana \cite{Avila}, both inspired by the linear version in the seminal paper \cite{Ledrappier} of Ledrappier).\\
If $f$ is minimal, it implies the existence of a homeomorphism $h$ of $\mathbb{T}$ such that $hfh^{-1}$ is almost surely a rotation, and so implies in particular that a.e. realizations of $f$ commute.
\end{itemize}
\end{prop1}
We are going to give an estimate for $\lambda(\mu)$ when $f$ is a perturbation of a random rotation. We need an arithmetical condition on the angle of the random rotation. We recall that a number $\alpha$ is diophantine if for some $A,\sigma>0$ we have $\mbox{dist}(q\alpha,\mathbb{Z})\geq \frac{A}{|q|^{\sigma}}$ for any $q$ in $\mathbb{Z}-\{0\}$, a definition generalized by Moser in \cite{Moser}, where $m$ numbers $\alpha_1,\ldots,\alpha_m$ are said to be simultaneously diophantine if for some $A,\sigma>0$ we have $\sup_i \mbox{dist}(q\alpha_i,\mathbb{Z})\geq \frac{A}{|q|^{\sigma}}$ for any $q$ in $\mathbb{Z}-\{0\}$ (in particular, it holds if at least one of the $\alpha_i$ is diophantine). Here we introduce a definition generalizing the classical notion of diophantine number to random variables.
\begin{Def1}\label{diophantien}
Let $\alpha$ be a random variable in $\mathbb{T}$. For any $A>0$ and $\sigma\geq 0$, we say that $\alpha$ is diophantine of type $(A,\sigma)$ if for any $q$ in $\mathbb{Z}-\{0\}$,
\begin{equation}\label{dio}\left\|\mbox{dist}(q\alpha,\mathbb{Z})\right\|_{L^2(\Omega)}\geq \frac{A}{|q|^{\sigma}}.\end{equation}
We say that $\alpha$ is diophantine if there exists $A>0$ and $\sigma\geq 0$ such that $\alpha$ is diophantine of type $(A,\sigma)$.
\end{Def1}
\begin{rem}~
\begin{itemize}
\item If $\alpha$ is deterministic (i.e. is a constant random variable), then we obtain the classical definition of diophantine number, and if the set of realizations of $\alpha$ is a finite set $\{\alpha_1,\ldots,\alpha_m\}$, then $\alpha$ is diophantine if and only if $\alpha_1,\ldots,\alpha_m$ are simultaneously diophantine.
\item If $\alpha$ has positive probability to be a diophantine number, then $\alpha$ is a diophantine random variable.
\item Contrary to the deterministic case, it can happen that $\sigma=0$. It is for example the case if $\alpha$ is uniform on $\mathbb{T}$, by a simple computation (or, more generally, if the law of $\alpha$ is not singular with respect to Lebesgue measure, as a consequence of the Riemann--Lebesgue lemma).
\end{itemize}
\end{rem}
To check the second point, consider the sets $E_{A,\sigma}$ of $x$ in $\mathbb{T}$ such that for every $q$ in $\mathbb{Z}^*, \mbox{dist}(qx,\mathbb{Z})\geq\frac{A}{|q|^\sigma}$. If $\alpha$ has positive probability to be diophantine, then there must exist $A$ and $\sigma$ such that $\alpha$ belongs to $E_{A,\sigma}$ with positive probability $p$, and then: $\forall q\in \mathbb{Z}^*, \left\|\mbox{dist}(q\alpha,\mathbb{Z})\right\|_{L^2(\Omega)}\geq \frac{A}{|q|^{\sigma}}\sqrt{p}$.\\
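The third point above can also be checked directly: for $\alpha$ uniform on $\mathbb{T}$ one has $\mathbb{E}[\mbox{dist}(q\alpha,\mathbb{Z})^2]=1/12$ for every $q\not=0$, so \eqref{dio} holds with $\sigma=0$ and $A=1/\sqrt{12}$. A minimal numerical sketch via quadrature:

```python
import numpy as np

N = 200000
xs = (np.arange(N) + 0.5) / N            # quadrature grid for alpha uniform on [0,1)
for q in [1, 2, 3, 17, 101]:
    d = np.abs(q*xs - np.round(q*xs))    # dist(q alpha, Z)
    l2 = np.sqrt(np.mean(d**2))
    # E[dist(q alpha, Z)^2] = 1/12 for every integer q != 0
    assert abs(l2 - 1/np.sqrt(12)) < 1e-3
```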
Our first theorem gives a precise estimate for the Lyapunov exponent of a random diffeomorphism $f=r_\alpha+\zetaup$ when $f$ is a perturbation (in a smooth sense) of order $\varepsilon$ of a random rotation $r_\alpha$ with $\alpha$ diophantine. We obtain a quadratic estimate $\lambda=O(\varepsilon^2)$ (instead of the obvious bound $\lambda=O(\varepsilon)$) and a formula for the quadratic term. In the statement of the theorem, a term $O(M)$ means a term bounded by $CM$ with $C$ a constant depending only on $A$ and $\sigma$.
\begin{thm1}\label{lyapu}
Let $\alpha$ be a diophantine random variable of type $(A,\sigma)$. Then there exists an integer $k$ depending only on $\sigma$ such that for any random diffeomorphism in $\mbox{Diff}_+^k(\mathbb{T})$ of the form $f=r_\alpha+\zetaup$ and for any Lyapunov exponent $\lambda$ associated to any stationary measure of $f$, we have
$$\lambda=-\frac{1}{2}\mathbb{E}\int_{\mathbb{T}}\left( \zetaup '+\etaup'-\etaup'\circ r_\alpha \right)^2dx+O(\varepsilon^3)$$
(and so $\lambda=O(\varepsilon^2)$), where $\varepsilon=\|d_k(f,r_\alpha)\|_{L^3(\Omega)}=\mathbb{E}\cro{d_k(f,r_\alpha)^3}^{\frac{1}{3}}$, and where $\etaup$ is a deterministic map depending linearly on $\zetaup$ and satisfying $|\etaup'|=O(\varepsilon)$. The nonzero Fourier coefficients of $\etaup$ are given by the formula
\begin{equation}\label{estilambda}
\hat{\etaup}(p)=\frac{\mathbb{E}[\hat{\zetaup}(p)e^{-2i\pi p \alpha}]}{1-\mathbb{E}[e^{-2i\pi p\alpha}]}. \end{equation}
\end{thm1}
Using formula \eqref{estilambda} and Parseval's identity, this estimate can also be rewritten as
$$\lambda=-\frac{1}{2}\mathbb{E}\sum_{p\in\mathbb{Z}^*}p^2\left|\hat{\zetaup}(p)+\frac{\mathbb{E}[\hat{\zetaup}(p)e^{-2i\pi p \alpha}]}{1-\mathbb{E}[e^{-2i\pi p\alpha}]}(1-e^{2i\pi\alpha})\right|^2+O(\varepsilon^3)$$
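As a numerical sanity check (a Monte Carlo sketch, with all parameters hypothetical): take $\alpha$ uniform on $\mathbb{T}$ and $\zetaup(x)=A\sin(2\pi x)$ with $A$ a small random amplitude independent of $\alpha$. Then $\mathbb{E}[e^{-2i\pi p\alpha}]=0$, so $\etaup=0$ and the prediction reduces to $\lambda\approx-\frac{1}{2}\mathbb{E}\int(\zetaup')^2dx=-\pi^2\,\mathbb{E}[A^2]$:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 400000
amps = rng.uniform(-0.02, 0.02, size=n).tolist()   # random amplitudes A_k
angles = rng.uniform(0.0, 1.0, size=n).tolist()    # uniform angles: diophantine with sigma = 0

x, logsum = 0.3, 0.0
for A, al in zip(amps, angles):
    logsum += math.log(1.0 + 2.0*math.pi*A*math.cos(2.0*math.pi*x))  # log f'(x)
    x = (x + al + A*math.sin(2.0*math.pi*x)) % 1.0                   # x -> f(x)

lam_est = logsum / n
# with alpha uniform, eta = 0, so the theorem predicts lambda ≈ -pi^2 E[A^2]
lam_pred = -math.pi**2 * sum(A*A for A in amps) / n
```

The estimator $\frac{1}{n}\sum_k\ln f_k'(x_k)$ converges to $\lambda$ by the second point of Proposition \ref{recap}.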
\begin{rem}
Our method actually allows to obtain the higher order terms in the Taylor expansion of $\lambda$, of the form $\lambda=\sum_{j=2}^{n-1} q_j(\zetaup)+O(\varepsilon^n)$ where $q_j(\zetaup)$ is a $j$-linear form evaluated at $(\zetaup,\ldots,\zetaup)$.
\end{rem}
In the next theorem we prove that if $f$ is a random diffeomorphism close to rotations whose rotation number $\rho(f)$ is diophantine, then $\lambda$ measures, in an explicit sense, how close to rotations $f$ can be brought by a (smooth) conjugation with a deterministic diffeomorphism. Note that $\lambda$ is indeed a natural obstruction to the existence of such a diffeomorphism because $\lambda$ is invariant by conjugation.
\begin{thm1}\label{principal}
Let $(A,\sigma)$ be a couple of positive real numbers. There exists an integer $r$ depending only on $\sigma$ such that for any integer $K$ larger than $r$, there exists in $\mbox{Diff}_+^K(\mathbb{T})$ a neighborhood $\mathcal{U}$ of the set of rotations such that for any random diffeomorphism $f$ valued in $\mathcal{U}$ whose rotation number $\alpha=\rho(f)$ is $(A,\sigma)$ diophantine, there exists in $\mbox{Diff}_+^{K-r}(\mathbb{T})$ a (non random) diffeomorphism $h$ such that
$$\|d_0(hfh^{-1},r_\alpha)\|_{L^2(\Omega)}\leq 3|\lambda|^{\frac{1}{2}},$$
for any Lyapunov exponent $\lambda$ associated to a stationary measure of $f$, with $h$ satisfying $d_{K-r}(h,Id)\leq C \|d_K(f,r_\alpha)\|_{L^2(\Omega)}$ for some $C$ depending on $A$, $\sigma$ and $K$.
\end{thm1}
The constant $3$ in the inequality above is not optimal. By analyzing our proof carefully we could actually replace it by any number larger than $\sqrt{2}$. However the bound $|\lambda|^{\frac{1}{2}}$ is essentially optimal since by Theorem \ref{lyapu}, $|\lambda|^{\frac{1}{2}}=O(d_k(hfh^{-1},r_\alpha))$ for some integer $k$. The number $r$ represents the ``loss of derivatives''. It can be made explicit from our proof as an affine function of $\sigma$, though we did not try at all to obtain an optimal expression.
\begin{rem}
If $\lambda=0$ and $f$ is valued in a finite set $\{f_1,\ldots,f_m \}$ the theorem gives a smooth diffeomorphism $h$ conjugating simultaneously $f_1,\ldots,f_m$ to rotations. This particular case can actually be obtained by using a succession of already known results: $f$ is minimal by Denjoy theorem (the diophantine condition implies that at least one of the rotation numbers $\rho(f_i)$ is irrational), so if $\lambda=0$ the maps $f_i$ are simultaneously $C^0$-conjugated to rotations $r_1,...,r_m$ and so pairwise commute (see Proposition \ref{recap}). Then one can use a result of Moser \cite{Moser} which generalizes the classical works of Arnold \cite{Arnold} and Moser on the linearization of a single map close to rotations to the case of several commuting maps, and which states that under the diophantine condition given in assumption, the conjugacy $h$ can be taken smooth and close to Identity with the estimate $d_{K-r}(h,Id)=O(\sup_j d_K(f_j,r_j))$.\end{rem}
Since the maps close to rotations almost commute, we can deduce from Theorem \ref{principal} the following corollary:
\begin{cor1}\label{coro}
Let $(A,\sigma)$ be a couple of positive real numbers. Then there exists an integer $k$
and a neighborhood $\mathcal{U}$ of the set of rotations in $\mbox{Diff}_+^{k}(\mathbb{T})$ such that for any random diffeomorphism $f$ valued in $\mathcal{U}$, if $\alpha=\rho(f)$ is $(A,\sigma)$ diophantine then, denoting by $\tilde{f}$ an independent copy of $f$, we have
$$||d_{0}(f\circ \tilde{f},\tilde{f}\circ f)||_{L^2(\Omega)}\leq C|\lambda|^{\frac{1}{2}}$$
for any Lyapunov exponent $\lambda$ associated to a stationary measure of $f$, where $C$ is a universal constant.
\end{cor1}
\begin{proof}
By Theorem \ref{principal} there exists an integer $k$ and a neighborhood $\mathcal{U}$ of rotations in $\mbox{Diff}^{k}(\mathbb{T})$ such that for $f$ valued in $\mathcal{U}$, there exists $h$ in $\mbox{Diff}_+^1(\mathbb{T})$ with $\max(h',(h^{-1})')\leq 2$ such that $f_1=hfh^{-1}$ satisfies $\|d_0(f_1,r_\alpha)\|_{L^2(\Omega)}\leq 3|\lambda|^{\frac{1}{2}}$. Then, setting $\tilde{f_1}=h\tilde{f}h^{-1}$ and $\tilde{\alpha}=\rho(\tilde{f})$ we deduce that $\|d_0(\tilde{f_1}\circ f_1,r_{\alpha+\tilde{\alpha}})\|_{L^2(\Omega)}\leq 6|\lambda|^{\frac{1}{2}}$, and so $\|d_0(f_1\circ\tilde{f_1},\tilde{f_1}\circ f_1)\|_{L^2(\Omega)}\leq 12 |\lambda|^{\frac{1}{2}}$, and finally by the mean value inequality $\|d_0(f\circ \tilde{f},\tilde{f}\circ f)\|_{L^2(\Omega)}\leq 48|\lambda|^{\frac{1}{2}}$.
\end{proof}
\begin{rem} One could expect a converse inequality by using Moser's ideas \cite{Moser} to obtain a diffeomorphism $h$ such that $\|d_{K-r}(hfh^{-1},r_\alpha)\|_{L^2(\Omega)}\ll \|d_{K}(f\circ \tilde{f},\tilde{f}\circ f)||_{L^2(\Omega)}$ and then deduce from Theorem \ref{lyapu} that $|\lambda|^{\frac{1}{2}}\ll \|d_{K}(f\circ \tilde{f},\tilde{f}\circ f)||_{L^2(\Omega)}$ for some $K$.
\end{rem}
The proof of Theorem \ref{principal} follows a ``KAM scheme'': in the same way as Arnold's linearization theorem \cite{Arnold} for a single diffeomorphism or Moser's linearization theorem \cite{Moser} for commuting diffeomorphisms, we linearize the equation $hfh^{-1}=r_\alpha$ at $h=Id$, $f=r_\alpha$, so that a solution of the linear equation gives an approximate solution of the initial equation and thus defines a conjugation $h$ such that $hfh^{-1}$ is closer to rotations than $f$. We prove that this can be achieved if the obstruction $\lambda$ is small enough by using the estimate given by Theorem \ref{lyapu}. Then we reiterate the process in order to conjugate $f$ to random diffeomorphisms $f_n$ closer and closer to rotations. The diophantine condition allows to control the $C^k$ norms of the conjugations (up to a loss-of-derivatives phenomenon, a classical problem to solve in this kind of KAM scheme), and the rotation number condition $\rho(f)=\alpha$ ensures that the diophantine condition is satisfied at each step of the process. Finally, if $\lambda=0$ we check that the sequence of conjugations converges and gives a conjugation between $f$ and $r_\alpha$, and if $\lambda\not=0$, we stop the process when $\lambda$ becomes large in front of $\mbox{dist}(f_n,r_\alpha)$, and this gives the wanted conjugation.\\
This scheme of proof is similar to the one in the paper of Dolgopyat and Krikorian \cite{Dolgopyat}, where they prove an analogous result on the sphere $S^d$ for $d\geq 2$ (though only in the case $\lambda=0$).
\subsection{Lyapunov exponent of random product of matrices}
Our techniques also apply to estimate the Lyapunov exponent of products of i.i.d. random $2\times 2$ matrices close to rotation matrices, by studying the action on the projective line, identified with $\mathbb{T}$. In this case we do not require a diophantine condition on the angle of the rotation but only a weak non-degeneracy condition.\\
Let $||\cdot||$ be a norm on $\mathcal{M}_2(\mathbb{R})$. Let $M$ be a random variable in $GL_2(\mathbb{R})$ such that $\mathbb{E}[\ln_+\|M\|]<+\infty$. It is a well known result of Furstenberg-Kesten \cite{Furstenberg-Kesten} that if $(M_n)_{n\in\mathbb{N}}$ is a sequence of independent copies of $M$, then the limit
$$\Lambda=\lim_{n\to\infty}\frac{\ln{\|M_{n-1}\cdots M_0\|}}{n}$$
exists almost surely and does not depend on the realization. We call this number the Lyapunov exponent of $M$.
For $\alpha\in\mathbb{T}$, we denote by $R_\alpha$ the rotation matrix of angle $\pi\alpha$, that is to say $R_\alpha=\begin{pmatrix}
\cos \pi\alpha & -\sin \pi\alpha \\ \sin \pi\alpha &\cos \pi\alpha \end{pmatrix}$.
The following theorem is the analog of Theorem \ref{lyapu} for random product of matrices.
\begin{thm1}\label{mainproj}
Let $\alpha$ be a random variable in $\mathbb{T}$ which does not belong almost surely to $\{0,\frac{1}{2}\}$. Let $M$ be a random variable in $SL_2(\mathbb{R})$ of the form $M=R_\alpha+E$. Let $\varepsilon=\mathbb{E}[||E||^3]^{\frac{1}{3}}$, which we assume to be finite, and let $\Lambda$ be the Lyapunov exponent of $M$. Then
$$\Lambda=\frac{1}{8}\mathbb{E}\pa{\left|Z e^{i\pi\alpha}-\mathbb{E}[Ze^{i\pi\alpha}]\pa{\frac{1-e^{2i\pi \alpha}}{1-\mathbb{E}[e^{2i\pi \alpha}]}}\right|^2}+O(\varepsilon^3)$$
where $$Z=(a+d)+i(b-c)=\mbox{Tr}(E)+i \mbox{Tr}(ER_{\frac{1}{2}})$$
(in particular, $\Lambda=O(\varepsilon^2)$). If $\alpha$ is constant (i.e. non random), the formula simplifies itself and becomes
$$\Lambda=\frac{1}{8}\mathbb{E}\cro{\left|Z-\mathbb{E}[Z]\right|^2}+O(\varepsilon^3)=\frac{Var(Z)}{8}+O(\varepsilon^3).$$
The term $O(\varepsilon^3)$ represents here a quantity bounded by $C\varepsilon^3$, where $C$ is a constant depending only on $\alpha$ (and is actually uniformly bounded on the sets $\{\|d(\alpha,\{0,\frac{1}{2}\})\|_{L^2(\Omega)}\geq \mbox{const.}\}$).
\end{thm1}
\begin{rem}~
\begin{itemize}
\item In the general case $M\in GL_2(\mathbb{R})$ (instead of $SL_2(\mathbb{R})$), we can also obtain a Taylor expansion of its Lyapunov exponent $\Lambda$ by applying the Theorem to estimate the Lyapunov exponent $\widetilde{\Lambda}$ of $\widetilde{M}=M/\sqrt{\det(M)}$, since then $\Lambda=\widetilde{\Lambda}+\frac{1}{2}\mathbb{E}[\ln(\det(M))]$.
\item As in Theorem \ref{lyapu}, the method can be generalized to obtain a Taylor expansion at any order, but it requires more restrictions on $\alpha$: to obtain an expansion at order $q$, $\alpha$ must not belong a.s. to $\{0,\frac{1}{q},\ldots,\frac{q-1}{q}\}$.
\item We can obtain from the theorem an estimate of Figotin and Pastur \cite{Figotin} for the Lyapunov exponent of a Schr\"odinger matrix with small random potential: if $M=\begin{pmatrix}E-gV&-1\\1&0\end{pmatrix}$, with $E=2\cos(\theta)\in]-2,2[-\{0\}$ and $V$ a random real variable having a third moment, then $M$ is conjugated to $R_\theta+gV\begin{pmatrix}1&\cot\theta\\0&0\end{pmatrix}$ and then by Theorem \ref{mainproj}, when $g$ tends to $0$ :
$$\Lambda=\frac{Var(V)}{8\sin^2\theta}g^2+O(g^3)=\frac{Var(V)}{2(4-E^2)}g^2+O(g^3).$$
\end{itemize}
\end{rem}
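The Figotin--Pastur estimate of the last point can be checked by a direct transfer-matrix simulation (a Monte Carlo sketch with arbitrary parameters: $E=0.8$, $g=0.1$, and $V$ standard Gaussian so that $\mbox{Var}(V)=1$):

```python
import numpy as np

rng = np.random.default_rng(3)
En, g = 0.8, 0.1                     # energy inside the band (E = 2 cos theta) and coupling
chains, burn, n = 2000, 500, 5000

p = np.ones(chains); q = np.zeros(chains)   # (psi_k, psi_{k-1}) for each chain
log_growth = np.zeros(chains)
for step in range(burn + n):
    V = rng.standard_normal(chains)         # i.i.d. potential, Var(V) = 1
    p, q = (En - g*V)*p - q, p              # one transfer-matrix step
    norm = np.hypot(p, q)
    p /= norm; q /= norm                    # renormalize to avoid overflow
    if step >= burn:
        log_growth += np.log(norm)

Lam_est = log_growth.mean() / n
Lam_pred = g**2 / (2*(4 - En**2))           # Var(V) g^2 / (2 (4 - E^2))
```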
The following theorem is the analog of Theorem \ref{principal} for random product of matrices.
\begin{thm1}\label{mainproj2}
Let $\mathcal{R}$ be the set of rotation matrices. For any $\delta>0$, there exists a neighborhood $\mathcal{U}$ of $\mathcal{R}$ in $SL_2(\mathbb{R})$ such that for any random variable $M$ in $\mathcal{U}$ satisfying $||Tr(M)||_{L^2(\Omega)}\leq 2-\delta$, there exists $P\in SL_2(\mathbb{R})$ such that
$$\|d(PMP^{-1},\mathcal{R})\|_{L^2(\Omega)}\leq C\Lambda^{\frac{1}{2}},$$
where $\Lambda$ is the Lyapunov exponent of $M$ and $C$ is a constant depending only on the chosen norm on $\mathcal{M}_2(\mathbb{R})$. Moreover, $\|P-I_2\|\leq C' \|d(M,\mathcal{R})\|_{L^2(\Omega)}$ for some $C'$ depending on $\delta$ and the norm.
\end{thm1}
From the proof it should not be difficult to make the constant $C$ explicit for a given norm. The assumption $||Tr(M)||_{L^2(\Omega)}\leq 2-\delta$ gives a control of the ellipticity of $M$ in average, and should be seen as the analog of the diophantine condition on $\rho(f)$ in the non-linear case.
We also deduce the same corollary as in the non-linear case (with the same proof):
\begin{cor1}\label{commutateur}
For any $\delta>0$, there exists a neighborhood $\mathcal{U}$ of $\mathcal{R}$ in $SL_2(\mathbb{R})$ such that for any random variable $M$ in $\mathcal{U}$ satisfying $||Tr(M)||_{L^2(\Omega)}\leq 2-\delta$, if $\widetilde{M}$ is an independent copy of $M$, we have
$$\mathbb{E}\cro{\|M\widetilde{M}-\widetilde{M}M\|^2}\leq C\Lambda,$$
where $\Lambda$ is the Lyapunov exponent of $M$ and $C$ is a constant depending only on the chosen norm on $\mathcal{M}_2(\mathbb{R})$.
\end{cor1}
From the proof it should not be difficult to obtain an explicit constant $C$ for a given norm. Moreover, by using compactness arguments in $\mathcal{M}_2(\mathbb{R})$ we can deduce global results in more specific contexts, but then one cannot hope to make the constants explicit anymore without additional work. Here is an example of global result:
\begin{cor1}
Let $m$ be an integer and let $\delta$ and $C_0$ be two positive numbers. Then there exists $C>0$ such that for any matrices $A_1,\ldots,A_m$ in $SL_2(\mathbb{R})$ satisfying $|Tr(A_i)|\leq 2-\delta$ (control of the ellipticity) and $\|A_i\|\leq C_0$ (control of the norm), we have
$$\sup_{i,j} \|A_i A_j -A_j A_i\|\leq C\Lambda^{\frac{1}{2}},$$
where $\Lambda$ is the Lyapunov exponent of the uniformly distributed random matrix in $\{A_1,\ldots,A_m\}$.
\end{cor1}
\begin{proof}
Let us consider $\Lambda$ as a function of $A_1,\ldots, A_m$ on $SL_2(\mathbb{R})^m$. It is known by \cite{Bocker} that this function is continuous. In particular it is continuous on the compact subset
$$\mathcal{K}=\{(A_1,\ldots,A_m), \|A_i\|\leq C_0, |Tr(A_i)|\leq 2-\delta\}$$
(the continuity of $\Lambda$ is actually a lot easier to prove on this subset $\mathcal{K}$ thanks to the ellipticity condition $|Tr(A_i)|\leq 2-\delta$).
Moreover, if the function $\Lambda$ vanishes at a point $(A_1,\ldots,A_m)$ then by the classical Furstenberg Theorem \cite{Furstenberg} (and the ellipticity condition) the matrices $A_i$ commute. Thus there exists $P$ in $SL_2(\mathbb{R})$ such that $PA_iP^{-1}$ is a rotation for every $i$, and using that $\|A_i\|\leq C_0$ and $|Tr(A_i)|\leq 2-\delta$ one can actually choose $P$ with a controlled norm $\|P\|\leq C_1$ for some constant $C_1$ depending only on $C_0$ and $\delta$ (we leave this detail to the reader).
Let $\mathcal{U}$ be the open set given by Corollary \ref{commutateur}, and let $$\mathcal{V}=\bigcup_{||P||\leq C_1}\left(P\mathcal{U}P^{-1}\right)^m\subset SL_2(\mathbb{R})^m.$$ Then, $\Lambda$ is continuous and does not vanish on the compact set $\mathcal{K}\setminus \mathcal{V}$, hence $\Lambda\geq c$ for some $c>0$. Then:
\begin{itemize}
\item if $(A_1,...,A_m)\in \mathcal{V}$, there is $P$ in $SL_2(\mathbb{R})$ with $\|P\|\leq C_1$ such that $B_i=PA_iP^{-1}\in \mathcal{U}$ for every $i$; by Corollary \ref{commutateur}, $\|B_i B_j -B_j B_i\|\leq C\Lambda^{\frac{1}{2}}$ for some constant $C$, and then $\|A_i A_j -A_j A_i\|\leq C'\Lambda^{\frac{1}{2}}$ for the new constant $C'=CC_1^2$;
\item if $(A_1,...,A_m)\notin \mathcal{V}$, then $\Lambda\geq c$, so $\|A_i A_j -A_j A_i\|\leq 2C_0^2\leq C\Lambda^{\frac{1}{2}}$ with $C=\frac{2C_0^2}{c^{\frac{1}{2}}}$.
\end{itemize}
\end{proof}
\begin{rem}
In the corollary above, one can actually obtain also a converse inequality $\sup_{i,j} \|A_i A_j -A_j A_i\|\geq c\Lambda^{\frac{1}{2}}$, by using that we can find $P$ with controlled norm and rotation matrices $R_i$ so that $\sup_i \|PA_iP^{-1}-R_i\|\ll \sup_{i,j} \|A_i A_j -A_j A_i\|$ and then by using Theorem \ref{mainproj} to get $\Lambda \ll \left(\sup_i \|PA_iP^{-1}-R_i\|\right)^2$.
\end{rem}
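As an illustration of the two-sided comparison between $\Lambda$ and the commutator norms, one can estimate the top Lyapunov exponent of i.i.d. products of two near-rotation matrices in $SL_2(\mathbb{R})$ by Monte Carlo and compare it with the size of their commutator. The following sketch is purely illustrative (the matrices, the shear size, the sample size and the seed are arbitrary choices, not taken from the text):

```python
import numpy as np

# Monte Carlo estimate of the top Lyapunov exponent of random products of
# two SL_2(R) matrices close to rotations; all parameters are illustrative.
def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

eps = 0.1
A1 = rot(0.7) @ np.array([[1.0, eps], [0.0, 1.0]])   # rotation composed with a shear
A2 = rot(1.3) @ np.array([[1.0, 0.0], [eps, 1.0]])
mats = [A1, A2]

comm = np.linalg.norm(A1 @ A2 - A2 @ A1, 2)          # commutator norm, of order eps

rng = np.random.default_rng(1)
v = np.array([1.0, 0.0])
log_sum = 0.0
n_steps = 100_000
for _ in range(n_steps):
    v = mats[rng.integers(2)] @ v
    n = np.linalg.norm(v)
    log_sum += np.log(n)
    v /= n                                            # renormalize to avoid overflow
lam_est = log_sum / n_steps
print(lam_est, comm)
```

Since both matrices stay at distance $O(\varepsilon)$ from rotations, one expects $\Lambda$ of order $\varepsilon^2$, hence comparable to the square of the commutator norm, while $\Lambda$ itself stays well below $\log\max_i\|A_i\|$.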
\section{Preliminaries}\label{prelim}
\subsection{Some $C^k$ estimates}
We begin by stating various estimates in $\mbox{Diff}_+^k(\mathbb{T})$. All of them are classical estimates from KAM theory. Nevertheless, we give proofs in an appendix (Section \ref{appen}).\\
A key tool is the so-called \textit{Kolmogorov inequality}.
\begin{prop1}(Kolmogorov inequality)\label{Kolmo}\\
For any integers $j\leq k$ and for any $\varphi$ in $C^k(\mathbb{T})$,
\begin{equation}\|\varphi\|_j\leq C\|\varphi\|_k^{j/k}\|\varphi\|_0^{1-j/k},\end{equation}
where $C$ is a constant depending only on $k$.
\end{prop1}
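The inequality is easy to test numerically. In the sketch below (the test function, the grid size and the pairs $(j,k)$ are arbitrary choices), the $C^k$ norms of a trigonometric polynomial are computed as maxima of its derivatives on a fine grid, and the ratio $\|\varphi\|_j/(\|\varphi\|_k^{j/k}\|\varphi\|_0^{1-j/k})$ indeed stays of order one:

```python
import numpy as np

# Numerical check of ||phi||_j <= C ||phi||_k^{j/k} ||phi||_0^{1-j/k}
# for a sample trigonometric polynomial phi(x) = sum_m a_m sin(2 pi m x).
x = np.linspace(0.0, 1.0, 4096, endpoint=False)
modes = {1: 1.0, 7: 0.05, 23: 0.002}          # mode -> amplitude (arbitrary choices)

def deriv_sup(i):
    """Sup norm of the i-th derivative, computed exactly coefficient-wise."""
    vals = sum(a * (2 * np.pi * m)**i * np.sin(2 * np.pi * m * x + i * np.pi / 2)
               for m, a in modes.items())
    return np.abs(vals).max()

def ck_norm(k):
    # C^k norm: maximum of the sup norms of the derivatives up to order k
    return max(deriv_sup(i) for i in range(k + 1))

ratios = [ck_norm(j) / (ck_norm(k)**(j / k) * ck_norm(0)**(1 - j / k))
          for j, k in [(1, 2), (2, 4), (3, 6)]]
print([round(r, 3) for r in ratios])          # each ratio stays of order one
```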
The following three propositions give $C^k$ estimates for $gfg^{-1}$ when $f$ is a diffeomorphism close to a rotation $r_\alpha$ and $g$ is a diffeomorphism close to $Id$.
The first estimate allows us to control the large $C^k$ norms of such a conjugate:
\begin{prop1}\label{estiK}
Let $f$, $g$ be in $\mbox{Diff}_+^k(\mathbb{T})$ and let $\alpha$ be in $\mathbb{T}$ with $d_1(f,r_\alpha)\leq 1$ and $d_1(g,Id)\leq \frac{1}{2}$. Then:
$$d_k(gfg^{-1},r_\alpha)\leq C(d_k(f,r_\alpha)+d_k(g,Id)),$$
where $C$ is a constant depending only on $k$.
\end{prop1}
The bound $1$ on $d_1(f,r_\alpha)$ is arbitrary and could be replaced by any other number. In the same way, the bound $\frac{1}{2}$ on $d_1(g,Id)$ could be replaced by any number less than $1$.\\
The second estimate bounds the distance between two conjugates in terms of the distance between the conjugating diffeomorphisms.
\begin{prop1}\label{esti0}
Let $f$, $g$ and $\tilde{g}$ be in $\mbox{Diff}_+^1(\mathbb{T})$ and let $\alpha$ be in $\mathbb{T}$, with $d_1(f,r_\alpha)\leq 1$, $d_1(g,Id)\leq \frac{1}{2}$ and $d_1(\tilde{g},Id)\leq \frac{1}{2}$. Then:
$$ d_0(gfg^{-1},\tilde{g}f\tilde{g}^{-1})\leq C_0d_0(g,\tilde{g})$$
where $C_0$ is an absolute constant.
\end{prop1}
\begin{rem}
It is actually possible, more generally, to bound $d_k(gfg^{-1},\tilde{g}f\tilde{g}^{-1})$ in terms of $d_k(g,\tilde{g})$, but we will not need this.
\end{rem}
The third estimate gives a classical linear approximation of $gfg^{-1}$.
\begin{prop1}\label{esti1}
Let $k\geq 2$, let $f$, $g$ be in $\mbox{Diff}_+^2(\mathbb{T})$ and let $\alpha$ be in $\mathbb{T}$. Writing $f=r_\alpha+\zetaup$, $g=Id+\etaup$ and denoting $\varepsilon=\max(\|\zetaup\|_2,\|\etaup\|_2)$, we have
$$gfg^{-1}=r_\alpha+\left(\zetaup+\etaup\circ r_\alpha-\etaup\right)+R$$
where $R$ is a quadratic remainder satisfying $\|R\|_{1}\leq C\varepsilon^2$ for some absolute constant $C$.
\end{prop1}
\begin{rem}
The $\varepsilon^2$ upper bound can actually be replaced by the more precise term $\max(\|\zetaup\|_2,\|\etaup\|_2)\cdot\max(\|\zetaup\|_0,\|\etaup\|_0)$. There also exists a $C^k$ version of this estimate.
\end{rem}
We conclude with one last estimate that we will need.
\begin{prop1}\label{estival}
Let $f$, $g$, $h$ be in $\mbox{Diff}_+^k(\mathbb{T})$ with $d_k(h,Id)\leq 1$. Then:
$$d_k(f\circ h, g\circ h)\leq Cd_k(f,g),$$
where $C$ is a constant depending only on $k$.
\end{prop1}
\begin{rem}
Note that, unlike in the previous propositions, we need to bound the large norm $d_k(h,Id)$; this is a strong assumption. Under the weaker assumption $d_1(h,Id)\leq 1$ we actually have $d_k(f\circ h, g\circ h)\leq C(1+d_k(h,Id))d_k(f,g)$.
\end{rem}
\subsection{Cohomological equation}
We fix a random rotation $r_\alpha=Id+\alpha$ and a perturbation $f=r_\alpha+\zetaup$ of $r_\alpha$. We assume that $\alpha$ is $(A,\sigma)$-diophantine. We will also assume that $\sigma$ is an integer, in order to avoid the use of $C^k$-norms with non-integer $k$. This is obviously not a restriction, since we can replace $\sigma$ by $[\sigma]+1$.\\
We denote respectively by $T_0$ and $T$ the transfer operators of $r_\alpha$ and $f$. That is, for any map $\varphi:\mathbb{T}\rightarrow\mathbb{R}$,
$$T_0\varphi=\mathbb{E}[\varphi\circ r_\alpha], \mbox{ } T\varphi=\mathbb{E}[\varphi\circ f].$$
Since $f$ is a perturbation of $r_\alpha$, $T$ is a perturbation of $T_0$. Note also that a measure $\mu$ is stationary for $f$ if and only if $\int\varphi d\mu=\int T\varphi d\mu$ for any map $\varphi\in C(\mathbb{T})$.\\
The understanding of stationary measures is naturally related to the cohomological equation $\varphi-T\varphi=\psi$. The main ingredient in our proofs is that the approximate cohomological equation $\varphi-T_0\varphi=\psi$
is easily solvable in $\varphi$ by Fourier methods, in the same way as in the classical deterministic case: the equation can be rewritten
$$\forall q\in \mathbb{Z}, \hat{\varphi}(q)(1-\mathbb{E}[e^{2i\pi q\alpha}])=\hat{\psi}(q).$$
For $q=0$ we get the obvious restriction $\hat{\psi}(0)=\int_\mathbb{T}\psi(x)dx=0$, and for $q\not=0$, if $q\alpha$ is not almost surely an integer (which is the case for $\alpha$ diophantine), then $\mathbb{E}[e^{2i\pi q\alpha}]\not=1$ and we obtain $\hat{\varphi}(q)=\frac{\hat{\psi}(q)}{1-\mathbb{E}[e^{2i\pi q\alpha}]}$. This leads us to define the following operator $U$: for $\psi:\mathbb{T}\rightarrow \mathbb{R}$,
$$U\psi(x)=\sum_{q\in\mathbb{Z}^*}\frac{\hat{\psi}(q)}{1-\mathbb{E}[e^{2i\pi q\alpha}]}e^{2i\pi qx}.$$
This operator, a priori well defined at least for trigonometric polynomials $\psi$, gives the unique solution $\varphi$ (when it exists) of the equation $$\varphi-T_0\varphi=\psi-\int_\mathbb{T}\psi(x)dx$$
such that $\int_{\mathbb{T}}\varphi dx=0$.\\
It is also convenient to define its adjoint $\overline{U}$ by
$$\overline{U}\psi(x)=\sum_{q\in\mathbb{Z}^*}\frac{\hat{\psi}(q)}{1-\mathbb{E}[e^{-2i\pi q\alpha}]}e^{2i\pi qx},$$
so that for any trigonometric polynomials $\psi_1$ and $\psi_2$ we have
$$\int_\mathbb{T} U\psi_1(x) \psi_2(x)dx=\int_\mathbb{T} \psi_1(x)\overline{U}\psi_2(x)dx.$$
The following lemma states that under the diophantine condition, $U$ and $\overline{U}$ are actually well defined on sufficiently smooth maps, and are bounded up to some loss of derivatives.
\begin{lem1}\label{cohomo}
Let $k_0=2\sigma+2$. Then the operators $U$ and $\overline{U}$ are well defined on $C^{k_0}(\mathbb{T})$, and for any integer $k$, if $\psi\in C^{k+k_0}(\mathbb{T})$ then $U\psi\in C^k(\mathbb{T})$ and $\displaystyle\|U\psi\|_{k}\leq \frac{1}{A^2}\|\psi\|_{k+k_0}$. The same estimate holds if we replace $U$ by $\overline{U}$.
\end{lem1}
\begin{proof}
It is enough to prove that for any integer $k$ the inequality $\displaystyle\|U\psi\|_{k}\leq \frac{1}{A^2}\|\psi\|_{k+k_0}$ holds for any trigonometric polynomial $\psi$ (the same estimate for $\overline{U}$ follows by replacing $\alpha$ with $-\alpha$). To estimate $\|U\psi\|_{k}$ we are going to bound, for $q\not=0$, the Fourier coefficient
$$|\widehat{U\psi}(q)|=\left|\displaystyle\frac{\hat{\psi}(q)}{1-\mathbb{E}[e^{2i\pi q\alpha}]}\right|.$$ The numerator can be bounded from above by
\begin{equation}\label{num}
|\hat{\psi}(q)|\leq\frac{\|\psi\|_{k+k_0}}{(2\pi|q|)^{k+k_0}}.\end{equation}
To bound the denominator from below, we use that for any real number $x$, writing $x=n+\theta$ with $n\in\mathbb{Z}$ and $|\theta|=d(x,\mathbb{Z})\leq \frac{1}{2}$, the concavity bound $|\sin(\pi \theta)|\geq 2|\theta|$ gives
$$1-\cos(2\pi x)=2\left(\sin(\pi x)\right)^2=2\left(\sin(\pi \theta)\right)^2\geq 2\left(2\theta\right)^2=8 d(x,\mathbb{Z})^2\geq d(x,\mathbb{Z})^2,$$
hence by using the diophantine condition (\ref{dio}),
\begin{equation}\label{den}\begin{disarray}{ll}
|1-\mathbb{E}[e^{2i\pi q\alpha}]|&\geq 1-\mathbb{E}[\cos(2\pi q\alpha)]\\
&\geq \mathbb{E}\cro{d(q\alpha,\mathbb{Z})^2}\\
&\geq \frac{A^2}{|q|^{2\sigma}}.
\end{disarray}\end{equation}
Thus (\ref{num}) and (\ref{den}) give, using that $k_0=2\sigma+2$:
$$\displaystyle |\widehat{U\psi}(q)|\leq \frac{\|\psi\|_{k+k_0}}{(2\pi)^{k+k_0}A^2|q|^{k+2}}.$$
Consequently,
$$\displaystyle\|U\psi\|_{k}\leq \sum_{q\in\mathbb{Z}^*} |2\pi q|^{k}|\widehat{U\psi}(q)|\leq \frac{1}{(2\pi)^{k_0}A^2}\pa{\sum_{q\in\mathbb{Z}^*}\frac{1}{|q|^2}}\|\psi\|_{k+k_0}\leq \frac{1}{(2\pi)^2A^2}\frac{\pi^2}{3}\|\psi\|_{k+k_0}\leq \frac{1}{A^2}\|\psi\|_{k+k_0}.$$
\end{proof}
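In the deterministic case ($\alpha$ constant), $\mathbb{E}[e^{2i\pi q\alpha}]=e^{2i\pi q\alpha}$ and $U$ reduces to the classical Fourier solution operator of the linearized conjugacy equation. The following sketch (the choice of $\alpha$, here the golden mean, and the Fourier data of $\psi$ are illustrative) checks on a grid that $\varphi=U\psi$ solves $\varphi-\varphi\circ r_\alpha=\psi$:

```python
import numpy as np

# Solving phi - phi∘r_alpha = psi by dividing the Fourier coefficients of psi
# by the small divisors, in the deterministic diophantine case (golden mean).
alpha = (np.sqrt(5) - 1) / 2

# psi given by finitely many Fourier coefficients psi_hat[q] (Hermitian, zero mean).
psi_hat = {1: 0.5, -1: 0.5, 2: -0.25j, -2: 0.25j, 5: 0.1, -5: 0.1}

# The operator U: divide each coefficient by the small divisor 1 - e^{2 i pi q alpha}.
phi_hat = {q: c / (1 - np.exp(2j * np.pi * q * alpha)) for q, c in psi_hat.items()}

x = np.linspace(0.0, 1.0, 2000, endpoint=False)
def evaluate(coeffs, pts):
    """Evaluate a trigonometric polynomial given by its Fourier coefficients."""
    return sum(c * np.exp(2j * np.pi * q * pts) for q, c in coeffs.items()).real

phi, psi = evaluate(phi_hat, x), evaluate(psi_hat, x)
phi_shift = evaluate(phi_hat, x + alpha)      # phi ∘ r_alpha on the grid
residual = np.abs(phi - phi_shift - psi).max()
print(residual)                                # numerically zero
```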
\section{Proof of Theorem \ref{lyapu}}
We fix a random rotation $r_\alpha$ and a perturbation $f=r_\alpha+\zetaup$, and we assume that $\alpha$ is $(A,\sigma)$-diophantine. The operators $T_0$, $T$, $U$ and $\overline{U}$ are defined as in the previous section. We are going to obtain a Taylor expansion for the stationary measures of $f$ and the associated Lyapunov exponents.
\subsection{Estimate of the stationary measures}
\begin{prop1}\label{estimu}
If $\mu$ is a stationary measure for $f$, then:
$$\int_\mathbb{T}\varphi d\mu=\int_{\mathbb{T}}\varphi dx+O(\varepsilon\|\varphi\|_{k_1})=\int_\mathbb{T}\varphi dx+\int_\mathbb{T}(\overline{U}\bar{\zetaup})\varphi 'dx+O(\varepsilon^2\|\varphi\|_{k_2})$$
where $k_1=2\sigma+3$, $k_2=4\sigma+6$, $\bar{\zetaup}=\mathbb{E}[\zetaup\circ r_{-\alpha}]$ and $\varepsilon=\mathbb{E}\left[\|\zetaup\|_{k_1}^2\right]^{\frac{1}{2}}$.
\end{prop1}
(As before, $O(M)$ denotes a quantity bounded by $CM$, where $C$ is a constant depending only on $A$ and $\sigma$.)
\begin{proof}
To prove the first equality of the statement, we start from the Taylor formula at order $0$: $\varphi\circ f=\varphi\circ r_\alpha+O(\|\zetaup\|_0\|\varphi\|_1)$, and we take the expectation, so
$$T\varphi=T_0\varphi+O(\varepsilon\|\varphi\|_1).$$
Then we use the invariance of $\mu$:
$$\int_\mathbb{T}(\varphi-T_0\varphi) d\mu=O(\varepsilon\|\varphi\|_1).$$
For $\psi$ in $C^{2\sigma+3}(\mathbb{T})$, we apply the previous formula to $\varphi=U\psi$ and we get, thanks to Lemma \ref{cohomo} with $k=1$:
\begin{equation}\label{order 0}\int_\mathbb{T}\psi d\mu=\int_\mathbb{T}\psi dx+O(\varepsilon\|\psi\|_{2\sigma+3}).\end{equation}
That gives the first equality.
\\ \\
To prove the second equality of the statement, we use this time a Taylor formula at order $1$:
$$T\varphi=T_0\varphi+\mathbb{E}[(\varphi '\circ r_\alpha)\zetaup]+O(\varepsilon^2\|\varphi\|_2).$$
Using the invariance of $\mu$, the first estimate (\ref{order 0}) and the inequality $\|uv\|_k\leq 2^k\|u\|_k\|v\|_k$ (a consequence of the Leibniz formula), we get:
$$\begin{disarray}{ll}\int_\mathbb{T}(\varphi-T_0\varphi) d\mu&=\int_\mathbb{T} \mathbb{E}[(\varphi '\circ r_\alpha)\zetaup]d\mu+O(\varepsilon^2\|\varphi\|_2)\\
&=\int_\mathbb{T} \mathbb{E}[(\varphi '\circ r_\alpha)\zetaup]dx+O(\varepsilon^2\|\varphi\|_2+\varepsilon\|\mathbb{E}[(\varphi '\circ r_\alpha)\zetaup]\|_{2\sigma+3})\\
&=\int_\mathbb{T}\varphi '\bar{\zetaup}dx+O(\varepsilon^2\|\varphi\|_{2\sigma+4})
\end{disarray}$$
As before, for $\psi$ in $C^{4\sigma+6}(\mathbb{T})$ we take $\varphi=U\psi$ to get, thanks to Lemma \ref{cohomo} with $k=2\sigma+4$:
$$\begin{disarray}{ll}\int_\mathbb{T}\psi d\mu&=\int_\mathbb{T}\psi dx+\int_\mathbb{T} (U\psi)'\bar{\zetaup}dx+O(\varepsilon^2\|U\psi\|_{2\sigma+4})\\
&=\int_\mathbb{T}\psi dx+\int_\mathbb{T} \psi '(\overline{U}\bar{\zetaup})dx+O(\varepsilon^2\|\psi\|_{4\sigma+6})
\end{disarray}$$
\end{proof}
\begin{rem}
We have obtained that $\mu$ can be approximated by the density $h_0=1$ with accuracy $\varepsilon$, and by the density $h_1=1-\overline{U}\bar{\zetaup}'$ with accuracy $\varepsilon^2$ (in a sense to be made precise: we omit here the details of the $C^k$-norms involved). We can easily generalize the method to obtain higher accuracy. Once an approximation $h_{n-1}$ with accuracy $\varepsilon^{n-1}$ is defined, we write $T\varphi=T_0\varphi+T_1\varphi+\cdots+T_{n-1}\varphi+O(\varepsilon^n\|\varphi\|)$ where $T_k\varphi=\frac{1}{k!}\mathbb{E}[(\varphi^{(k)}\circ r_\alpha)\zetaup^k]$. By a computation similar to the one in the proof we get $\int (\varphi-T_0\varphi)d\mu=\sum_{k=1}^{n-1}\int_{\mathbb{T}}\varphi \overline{T_k}h_{n-k} dx+O(\varepsilon^n\|\varphi\|)$ where $\overline{T_k}\varphi=\frac{(-1)^k}{k!}\mathbb{E}[(\varphi^{(k)}\zetaup^k)\circ r_\alpha^{-1}]$. Then we apply this to $\varphi=U\psi$ and we obtain that the density $h_n=1+\sum_{k=1}^{n-1}\overline{U}~\overline{T_k}h_{n-k}$ approximates $\mu$ with accuracy $\varepsilon^n$.
\end{rem}
\subsection{Estimate of the Lyapunov exponents}
Thanks to Proposition \ref{estimu} we can estimate the Lyapunov exponents of $f$:
\begin{prop1}\label{lyapu2} Let $k_0=4\sigma+7$.
If $\mu$ is a stationary probability measure for $f$ and $\lambda$ is the associated Lyapunov exponent, then
$$\lambda=-\frac{1}{2}\mathbb{E}\int_{\mathbb{T}}\left( \zetaup '-(\overline{U}\bar{\zetaup})'\circ r_\alpha+(\overline{U}\bar{\zetaup})'\right)^2dx+O(\varepsilon^3)$$
where $\bar{\zetaup}=\mathbb{E}[\zetaup\circ r_{-\alpha}]$ and $\varepsilon=\mathbb{E}[\|\zetaup\|_{k_0}^3]^{\frac{1}{3}}$.
\end{prop1}
This will conclude the proof of Theorem \ref{lyapu}, setting $\eta=\overline{U}\bar{\zetaup}$.
\begin{proof}
Let $\eta=\overline{U}\bar{\zetaup}$, $g=Id-\eta$, $\tilde{f}=gfg^{-1}$ ($\|\eta\|_1=O(\varepsilon)$ so $g$ is invertible if $\varepsilon$ is small enough), $\tilde{\zetaup}=\tilde{f}-r_{\alpha}$ and $\tilde{\mu}=g_*\mu$. If $\varphi$ is in $C^{4\sigma+6}(\mathbb{T})$, then thanks to Proposition \ref{estimu}, writing $\varphi \circ g=\varphi-\varphi'\eta+O(\varepsilon^2)$, we have, keeping the notations $k_1=2\sigma+3$ and $k_2=4\sigma+6$:
$$\begin{disarray}{ll}\int_\mathbb{T}\varphi d\tilde{\mu}&=\int_\mathbb{T}\varphi\circ g d\mu\\
&=\int_\mathbb{T}\varphi d\mu-\int_\mathbb{T}\varphi '\eta d\mu+O(\varepsilon^2\|\varphi\|_2)\\
&=\left(\int_{\mathbb{T}}\varphi dx+\int_{\mathbb{T}}\varphi '\eta dx\right)-\int_{\mathbb{T}}\varphi '\eta dx+O(\varepsilon^2\|\varphi\|_{k_2}+\varepsilon\|\eta\|_{k_1}\|\varphi'\|_{k_1})\\
&=\int_\mathbb{T}\varphi dx+O(\varepsilon^2\|\varphi\|_{k_2}).
\end{disarray}$$
where we used Lemma \ref{cohomo} to get $\|\eta\|_{k_1}=O(\|\bar{\zetaup}\|_{k_1+2\sigma+2})=O(\varepsilon)$. Thus $\tilde{\mu}$ is ``$\varepsilon^2$-close'' to Lebesgue measure.\\
The Lyapunov exponent $\lambda$ of $f$ associated to $\mu$ is equal to the Lyapunov exponent of $\tilde{f}$ associated to $\tilde{\mu}$ (this invariance of the Lyapunov exponent under conjugation follows by taking the expectation of the equality $\ln((gfg^{-1})')\circ g=\ln f'+(\ln g'\circ f-\ln g')$ and integrating it with respect to $\mu$). We use this fact and the previous computation to estimate $\lambda$. We also use that by Proposition \ref{estiK}, $\|\tilde{\zetaup}\|_{k}=O(\|\zetaup\|_{k}+\|\etaup\|_k)$, and that by Proposition \ref{esti1}, $\tilde{\zetaup}'=\left(\zetaup'-\eta'\circ r_\alpha+\eta'\right)+R$ with $\mathbb{E}[R^2]^{1/2}=O(\varepsilon^2)$. Then:
$$\begin{disarray}{ll}\lambda &=\mathbb{E}\int_\mathbb{T}\ln (1+\tilde{\zetaup}')d\tilde{\mu}\\
&=\mathbb{E}\int_\mathbb{T} (\tilde{\zetaup}'-\tilde{\zetaup}'^2/2)d\tilde{\mu}+O(\varepsilon^3)\\
&=\mathbb{E}\int_{\mathbb{T}}(\tilde{\zetaup}'-\tilde{\zetaup}'^2/2)dx+O(\varepsilon^3)\\
&=-\frac{1}{2}\mathbb{E}\int_{\mathbb{T}}\tilde{\zetaup}'^2 dx+O(\varepsilon^3)\\
&=-\frac{1}{2}\mathbb{E}\int_{\mathbb{T}}( \zetaup '-\eta'\circ r_\alpha+\eta')^2dx+O(\varepsilon^3).
\end{disarray}$$
\end{proof}
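Proposition \ref{lyapu2} can be checked by simulation in a toy model (not from the text): take $\alpha$ uniform on $[0,1)$, so that $\bar{\zetaup}=\mathbb{E}[\zetaup\circ r_{-\alpha}]=0$ and hence $\eta=0$, together with the deterministic perturbation $\zetaup(x)=\varepsilon\sin(2\pi x)$. The predicted main term is then $-\frac{1}{2}\int_\mathbb{T}\zetaup'^2dx=-\pi^2\varepsilon^2$, to be compared with a Birkhoff average of $\ln f'$ along a random orbit:

```python
import numpy as np

# Monte Carlo check of lambda ≈ -(1/2) E∫(zeta' - eta'∘r_a + eta')^2 dx.
# Toy model: alpha uniform on [0,1) (so bar_zeta = 0 and eta = 0) and
# zeta(x) = eps*sin(2*pi*x); the predicted main term is -pi^2 * eps^2.
rng = np.random.default_rng(0)
eps = 0.05
n_steps = 200_000

x = 0.1
log_sum = 0.0
for _ in range(n_steps):
    alpha = rng.random()                      # random rotation part, uniform
    fprime = 1 + 2 * np.pi * eps * np.cos(2 * np.pi * x)
    log_sum += np.log(fprime)
    x = (x + alpha + eps * np.sin(2 * np.pi * x)) % 1.0

lam_est = log_sum / n_steps
print(lam_est, -np.pi**2 * eps**2)            # Birkhoff average vs predicted main term
```

Here the stationary measure is exactly Lebesgue (adding a uniform $\alpha$ uniformizes $x$), so the agreement up to $O(\varepsilon^3)$ and Monte Carlo error is clean.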
\begin{rem}
We could avoid the conjugation by $g$ to estimate $\lambda$ and directly expand $\mathbb{E}\int\ln{ f '(x)}d\mu(x)$ using Proposition \ref{estimu}, but the method we have used has the advantage of making the main term in the expansion of $\lambda$ manifestly non-positive. Moreover, in the context of Theorem \ref{principal} this conjugation $g$ will correspond to the first step of the KAM scheme, conjugating $f$ to a random diffeomorphism closer to rotations.
\end{rem}
\section{Proof of Theorem \ref{principal}}
\subsection{Preliminaries}
We begin by introducing some convenient notation: if $u$ is a random variable valued in $C^k(\mathbb{T})$, we set
$$|||u|||_k=\mathbb{E}[\|u\|_k^2]^{\frac{1}{2}}.$$
To avoid the profusion of constants, if $k$ is an integer we write $X\ll_k Y$ if $X\leq CY$ with $C$ a constant depending only on $A$, $\sigma$ and $k$, or simply $X\ll Y$ if $C$ depends only on $A$ and $\sigma$.\\
Another important tool is the smoothing operators, which allow us to handle the loss-of-derivatives phenomenon that will occur in the KAM scheme. Here we simply use Fourier truncation, which does not give the optimal estimates but is sufficient for our purpose. So, for $\varphi:\mathbb{T}\rightarrow \mathbb{R}$ and $T\geq 0$ we denote
$$\left\{\begin{array}{l}\displaystyle
S_T\varphi(x)=\sum_{|p|\leq T}\hat{\varphi}(p)e^{2i\pi p x}\\
\displaystyle R_T\varphi(x)=\sum_{|p|>T}\hat{\varphi}(p)e^{2i\pi p x}.
\end{array}\right.$$
Then we have the standard Fourier estimates:
\begin{prop1}\label{Fourier}
For any integers $j$ and $k$ with $j<k$, we have
\begin{equation}\label{Kolmo1}\left\{\begin{array}{l}
\forall\varphi\in C^j(\mathbb{T}), \|S_T\varphi\|_k\ll_k T^{k-j+1}\|\varphi\|_j\\
\displaystyle\forall\varphi\in C^k(\mathbb{T}),\|R_T\varphi\|_j\ll_k \frac{\|\varphi\|_k}{T^{k-j-1}}.
\end{array}\right.\end{equation}
\end{prop1}
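A minimal sketch of $S_T$ and $R_T$ via the discrete Fourier transform (the test function, chosen with geometrically decaying Fourier coefficients, and the truncation levels are arbitrary) illustrates the decay of the tail $R_T\varphi$ as $T$ grows:

```python
import numpy as np

# Fourier truncation: S_T keeps the modes |p| <= T, R_T = Id - S_T keeps the tail.
N = 4096
x = np.arange(N) / N
phi = 1.0 / (1.26 - np.cos(2 * np.pi * x))    # smooth, geometrically decaying spectrum

def split(vals, T):
    """Return (S_T phi, R_T phi) sampled on the grid, via the FFT."""
    c = np.fft.fft(vals)
    p = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies
    s = np.fft.ifft(np.where(np.abs(p) <= T, c, 0)).real
    return s, vals - s

tails = [np.abs(split(phi, T)[1]).max() for T in (4, 8, 16, 32)]
print(tails)                                   # sup norm of R_T phi, decreasing in T
```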
\subsection{First conjugation}
In this section we fix a random diffeomorphism $f=r_\alpha+\zetaup$ with $\alpha=\rho(f)$ diophantine of type $(A,\sigma)$, and $\lambda$ a Lyapunov exponent of $f$ associated to some stationary measure $\mu$. We assume that $f$ is valued in the open set
$$\mathcal{U}_0=\{h\in \mbox{Diff}_+^1(\mathbb{T}), |h'-1|<\frac{1}{2}\}.$$
In other words, $\mathcal{U}_0$ is the $\frac{1}{2}$-neighborhood of the set of rotations in $\mbox{Diff}_+^1(\mathbb{T})$.
\begin{lem1}\label{conjugaison1}
Let $k_0=4\sigma+7$ and $r=2\sigma+2$. There exists $C_0>0$ depending only on $A$ and $\sigma$ so that $f$ is conjugated by a deterministic diffeomorphism $g=Id-\eta$ to $\tilde{f}=gfg^{-1}=r_\alpha+\tilde{\zetaup}$ such that either
$$|||\tilde{\zetaup}|||_0\leq 3|\lambda|^{\frac{1}{2}} \quad \mbox{or} \quad |||\tilde{\zetaup}|||_0\leq C_0|||\zetaup|||_{k_0}^{\frac{3}{2}},$$
with $\eta$ satisfying, for any integer $K\geq r$,
$$\|\etaup\|_{K-r}\ll_K |||\zetaup|||_{K}.$$
\end{lem1}
\begin{proof}
We begin with the same setting as in Proposition \ref{lyapu2}. First we set $\eta=\overline{U}\bar{\zetaup}$, which satisfies the inequality $\|\etaup\|_{K-r}\ll_K |||\zetaup|||_{K}$ by Lemma \ref{cohomo}. In particular $\|\etaup\|_{1}\ll |||\zetaup|||_{k_0}$, so we can assume $|||\zetaup|||_{k_0}$ small enough so that $\|\etaup\|_1 <\frac{1}{7}$ (if not, then $g=Id$ satisfies the conclusion of the statement). Then we set $g=Id-\eta$, which is invertible, $\tilde{f}=gfg^{-1}$, $\tilde{\zetaup}=\tilde{f}-r_{\alpha}$ and $\tilde{\mu}=g_*\mu$.
Now, we follow the computation of the proof of Proposition \ref{lyapu2} with one slight difference: we cannot expand $\ln(1+\tilde{\zetaup}')$ at order $3$ because we do not have a good bound for the third moment of $||\zetaup||_1$. Instead we use that for every $t$ in $]-1,1[$ we have $\ln(1+t)\leq t-\frac{1}{4}t^2$. We can apply this inequality to $t=\tilde{\zetaup} '$ because $f\in \mathcal{U}_0$ so $\tilde{f}'\leq \sup(f')\frac{\sup(g')}{\inf(g')}< (1+\frac{1}{2})\frac{1+\frac{1}{7}}{1-\frac{1}{7}}=2$ and so $-1<\tilde{\zetaup}'<1$. We get
$$\lambda=\mathbb{E}\int_\mathbb{T} \ln(1+\tilde{\zetaup}')d\tilde{\mu}\leq \mathbb{E}\int_\mathbb{T} (\tilde{\zetaup}'-\tilde{\zetaup}'^2/4)d\tilde{\mu}=-\frac{1}{4}\mathbb{E}\int_\mathbb{T} \tilde{\zetaup}'^2 dx +O(|||\zetaup|||_{k_0}^3)$$
hence there exists $C$ depending only on $A$ and $\sigma$ such that
$$\mathbb{E}\int_\mathbb{T}\tilde{\zetaup}'^2dx\leq 4|\lambda|+C|||\zetaup|||_{k_0}^3.$$
Next, we notice that for a fixed event, for every $a$, $b$, $|\tilde{\zetaup}(a)-\tilde{\zetaup}(b)|\leq \int_\mathbb{T}|\tilde{\zetaup}'|dx$, and since $\rho(\tilde{f})=\rho(f)=\alpha$, we have $\tilde{\zetaup}(b)=0$ for some $b$, and so $\|\tilde{\zetaup}\|_0\leq \int_\mathbb{T}|\tilde{\zetaup}'|dx$. Thus, by Cauchy-Schwarz, $\|\tilde{\zetaup}\|_0^2\leq\int_\mathbb{T}\tilde{\zetaup}'^2dx$, and taking the expectation we get
$$|||\tilde{\zetaup}|||_0\leq \left(4|\lambda|+C|||\zetaup|||_{k_0}^{3}\right)^{\frac{1}{2}}\leq \left(\max(8|\lambda|,2C|||\zetaup|||_{k_0}^{3})\right)^{\frac{1}{2}}=\max\left(3|\lambda|^{\frac{1}{2}},\sqrt{2C}|||\zetaup|||_{k_0}^{\frac{3}{2}}\right),$$
which concludes the proof with $C_0=\sqrt{2C}$.
\end{proof}
In view of the dichotomy given by this lemma, we will say that ``$\lambda$ is an obstruction for the linearization of $f$ '' if $|\lambda|^{\frac{1}{2}}\geq \frac{C_0}{3}|||\zetaup|||_{k_0}^{\frac{3}{2}}$ where $C_0$ and $k_0$ are defined in the lemma. Thus, if $\lambda$ is an obstruction then one can find a conjugacy as stated in Theorem \ref{principal}, and if it is not an obstruction then $f$ is conjugated to a new random diffeomorphism $\tilde{f}$ closer to $r_\alpha$ and we can hope to iterate the process.
However, we cannot use the lemma directly in an iterative process, because of the loss of regularity in the inequality $|||\tilde{\zetaup}|||_0\leq C_0|||\zetaup|||_{k_0}^{\frac{3}{2}}$. We fix this by replacing the conjugation $g$ by a good $C^\infty$ approximation. In that way, there is no loss of regularity anymore (at the cost of a less sharp bound). Precisely:
\begin{lem1}\label{conjugaison2}
Let $k_0=4\sigma+7$ and $r=6\sigma+11$. If $\lambda$ is not an obstruction for $f$ then for any $T\geq 1$, $f$ is conjugated by a diffeomorphism $g_T=Id-\eta_T$ to $\tilde{f}_T=g_Tfg_T^{-1}=r_\alpha+\tilde{\zetaup}_T$ such that
$$\forall K\geq r, \left\{\begin{array}{l}\displaystyle|||\tilde{\zetaup}_T|||_{k_0}\ll_K
T^{r}|||\zetaup|||_{k_0}^{\frac{3}{2}}+\frac{1}{T^{K-r}}|||\zetaup|||_K \\
\displaystyle |||\tilde{\zetaup}_T|||_K\ll_K T^r |||\zetaup|||_K\end{array}\right.$$
Moreover,
$$
\forall K\geq r, \quad \|\etaup_T\|_{K-r}\ll_K |||\zetaup|||_K.
$$
\end{lem1}
\begin{proof}
Let $k_0=4\sigma+7$ and $s=2\sigma+2$. Let $g=Id-\etaup$ be the diffeomorphism given by Lemma \ref{conjugaison1}. We set $\etaup_T=S_T\etaup$ and $g_T=Id-\etaup_T$. By Lemma \ref{conjugaison1} and Proposition \ref{Fourier} we have for $K\geq s+1$
\begin{equation}\label{state1}\|\etaup_T\|_{K-(s+1)}\ll_K \|\etaup\|_{K-s}\ll_K |||\zetaup|||_{K}.\end{equation}
Applying this with $K=s+2\leq k_0$ we have $\|\etaup_T\|_1\ll |||\zetaup|||_{k_0}$, so we can assume $|||\zetaup|||_{k_0}$ small enough so that $\|\etaup_T\|_1\leq \frac{1}{2}$ (if not we set instead $g_T=Id$). Then $g_T$ is invertible and we can set $\tilde{f}_T=g_Tfg_T^{-1}=r_\alpha+\tilde{\zetaup}_T$. We also have for any $K\geq s+1$:
$$\|\etaup_T\|_{K}\ll_K T^{s+1}\|\etaup\|_{K-s}\ll_K T^{s+1}|||\zetaup|||_K,$$
so, by Proposition \ref{estiK}:
\begin{equation}\label{yop}|||\tilde{\zetaup}_T|||_K\ll_K|||\zetaup|||_{K}+\|\etaup_T\|_{K}\ll_K T^{s+1}|||\zetaup|||_K.\end{equation}
On another hand, since $\lambda$ is assumed not to be an obstruction for $f$ we have by Lemma \ref{conjugaison1}
$$|||gfg^{-1}-r_\alpha|||_0\ll |||\zetaup|||_{k_0}^{3/2},$$
and by Proposition \ref{esti0},
$$|||g_Tfg_T^{-1}-gfg^{-1}|||_0\ll\|g_T-g\|_0=\|R_T\etaup\|_0\ll_K \frac{1}{T^{K-s-1}}\|\etaup\|_{K-s}\ll_K \frac{1}{T^{K-s-1}}|||\zetaup|||_K.$$
\\
Combining the last two inequalities gives
\begin{equation}\label{yap}
|||\tilde{\zetaup}_T|||_{0}=|||g_Tfg_T^{-1}-r_\alpha|||_0\ll_K
|||\zetaup|||_{k_0}^{\frac{3}{2}}+\frac{1}{T^{K-s-1}}|||\zetaup|||_K. \end{equation}
Finally, we write $\tilde{\zetaup}_T=S_T\tilde{\zetaup}_T+(\tilde{\zetaup}_T-S_T\tilde{\zetaup}_T)$ to use Proposition \ref{Fourier}, and then by using (\ref{yop}) and (\ref{yap}) we get
\begin{equation}\label{state2}|||\tilde{\zetaup}_T|||_{k_0}\ll_K T^{k_0+1}|||\tilde{\zetaup}_T|||_{0}+\frac{1}{T^{K-k_0-1}}|||\tilde{\zetaup}_T|||_{K}\ll_K T^{k_0+1}|||\zetaup|||_{k_0}^{\frac{3}{2}}+\frac{1}{T^{K-k_0-s-2}}|||\zetaup|||_K.\end{equation}
Thus, with $r=k_0+s+2=6\sigma+11$, (\ref{state1}), (\ref{yop}) and (\ref{state2}) give all the estimates claimed in the statement.
\end{proof}
\subsection{KAM iteration}
Now we begin the KAM scheme by iterating the conjugation process given by Lemma \ref{conjugaison2}. We fix the numbers $k_0$ and $r$ given by Lemma \ref{conjugaison2}, and we fix a sequence of numbers $(T_n)_{n\in\mathbb{N}}$. We initialize the construction with $f_0=f$, $\zetaup_0=\zetaup$. Then, assuming that $f_{n-1}=r_\alpha+\zetaup_{n-1}$ is defined, if we have the two conditions
\begin{enumerate}\label{test}
\item $f_{n-1}\in \mathcal{U}_0$ a.s.,
\item $\lambda$ is not an obstruction for $f_{n-1}$, that is $|\lambda|^{\frac{1}{2}}\leq \frac{C_0}{3}|||\zetaup_{n-1}|||_{k_0}^{\frac{3}{2}}$,
\end{enumerate}
then Lemma \ref{conjugaison2} applies, so that by choosing $T=T_n$ we get a conjugation $g_{n-1}=Id-\etaup_{n-1}$ and a random diffeomorphism $f_{n}=g_{n-1}f_{n-1}g_{n-1}^{-1}=r_\alpha+\zetaup_{n}$ satisfying for $K\geq r$
$$\left\{\begin{disarray}{l}
|||\zetaup_{n}|||_K\ll_K T_n^{r}|||\zetaup_{n-1}|||_K\\
|||\zetaup_{n}|||_{k_0}\ll_K
T_n^{r}|||\zetaup_{n-1}|||_{k_0}^{\frac{3}{2}}+\frac{1}{T_n^{K-r}}|||\zetaup_{n-1}|||_K
\end{disarray}\right.$$
and
$$\|\eta_{n-1}\|_{K-r}\ll_K T_n^{r}|||\zetaup_{n-1}|||_K.$$
If one of the two conditions is not satisfied, then we stop the process.
Thus we get a sequence of random diffeomorphisms $(f_n)_{n<N}$ where $N\in \mathbb{N}\cup\{+\infty\}$.
We choose $T_n$ as follows: $T_n=2^{Q^n}$, where $Q$ is any number in $(1,\frac{3}{2})$. With this choice, we prove that the large $C^k$-norms of $\zetaup$ do not grow too fast, while the small $C^k$-norms decrease quickly. Note that in the sequel we consider $Q$ as fixed, for example $Q=\frac{4}{3}$, so we will not make explicit the dependence of the constants on $Q$.
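The role of the schedule $T_n=2^{Q^n}$ can be visualized on a toy version of the recursion that will appear below, $\varepsilon_n\leq C(T_n^r\varepsilon_{n-1}^{3/2}+T_n^{-(K-r)}\varepsilon_0)$. The constants $C$, $r$, $K$ and the initial size in this sketch are illustrative and not those of the proof; the point is only the super-fast decay of the low norms along the scheme:

```python
import numpy as np

# Toy KAM recursion eps_n <= C (T_n^r eps_{n-1}^{3/2} + eps0 / T_n^{K-r})
# with the schedule T_n = 2^{Q^n}; all constants are illustrative.
Q, r, K, C = 4 / 3, 2.0, 40.0, 2.0
eps0 = 1e-8

eps = [eps0]
for n in range(1, 11):
    T = 2.0 ** (Q ** n)
    eps.append(C * (T**r * eps[-1]**1.5 + eps0 / T**(K - r)))
print(eps[-1])                     # collapses super-exponentially fast
```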
\begin{lem1}\label{induc}
There exist integers $p$ and $K_0$ depending only on $\sigma$ such that for any $K\geq K_0$, if $\varepsilon=|||\zetaup|||_{K}$ is small enough, then for any $n<N$,
$$\left\{\begin{array}{l}
|||\zetaup_n|||_K\ll_K T_{n}^p\varepsilon\\
\displaystyle|||\zetaup_n|||_{k_0}\ll_K \frac{1}{T_{n}^{K-p}}\varepsilon
\end{array}\right.$$
\end{lem1}
\begin{proof}
There exists a constant $C$ depending only on $A$, $\sigma$ and $K$ such that for any $n<N$
$$
\left\{\begin{array}{l}
\displaystyle |||\zetaup_{n}|||_{K}\leq C T_n^{r}|||\zetaup_{n-1}|||_{K}\\
\displaystyle |||\zetaup_{n}|||_{k_0}\leq C\left(
T_n^{r}|||\zetaup_{n-1}|||_{k_0}^{\frac{3}{2}}+\frac{1}{T_n^{K-r}}|||\zetaup_{n-1}|||_{K}\right)
\end{array}\right.
$$
By iteration of the first inequality we have for any $n\geq 1$:
$$|||\zetaup_n|||_{K}\leq C^n(T_{n}\cdots T_1)^r|||\zetaup_0|||_{K}\leq C^{n} 2^{r(Q+Q^2+\ldots+Q^{n})}\varepsilon\leq C^{n} 2^{\frac{rQ}{Q-1} Q^{n}}\varepsilon,$$
hence $|||\zetaup_n|||_K\ll_K T_n^s\varepsilon$ where $s=\frac{2rQ}{Q-1}$. That proves the first part of the statement if $p\geq s$.\\
Let $\varepsilon_n=|||\zetaup_n|||_{k_0}$. Using in the second inequality that $|||\zetaup_{n-1}|||_K\ll_K T_n^{s}\varepsilon$, we obtain, up to modifying the constant $C$:
$$\varepsilon_{n}\leq C \left(T_n^{r}\varepsilon_{n-1}^{\frac{3}{2}}+\frac{1}{T_n^{K-p}}\varepsilon\right),$$
where we have set $p=r+s$. If $K$ is large enough and $\varepsilon$ small enough, we are going to prove by induction that for every $n<N$,
\begin{equation}\label{recu}\varepsilon_n\leq \frac{2C\varepsilon}{T_{n}^{K-p}}.\end{equation}
It holds for $n=0$ if $C\geq 2^K$, which we can assume up to changing $C$ one more time. Now, for $n<N$ let us assume that $\varepsilon_{n-1}\leq \frac{2C\varepsilon}{T_{n-1}^{K-p}}$. Then if $\varepsilon$ is small enough we have
$$\varepsilon_{n-1}^{\frac{3}{2}}\leq \frac{1}{T_{n-1}^{\frac{3}{2}(K-p)}}(2C\varepsilon)^{\frac{3}{2}}\leq \frac{1}{T_n^{\frac{3}{2Q}(K-p)}}\varepsilon,$$
and so
$$\varepsilon_{n}\leq C\varepsilon\left(\frac{1}{T_n^{\frac{3}{2Q}(K-p)-r}}+\frac{1}{T_n^{K-p}}\right),$$
which implies that
$$\varepsilon_{n}\leq\frac{2C\varepsilon}{T_n^{K-p}}$$
provided that $\frac{3}{2Q}(K-p)-r\geq K-p$, or equivalently (since $\frac{3}{2Q}>1$)
$$K\geq p+\frac{r}{\frac{3}{2Q}-1}.$$
If this condition is satisfied then (\ref{recu}) is proved by induction for any $n<N$. That concludes the proof of the lemma, choosing $K_0=\lceil p+\frac{r}{\frac{3}{2Q}-1}\rceil $.
\end{proof}
In the sequel we fix the integer $K_0$ given by Lemma \ref{induc}, and an integer $K\geq K_0$.
\begin{lem1}\label{upgrade}
There exists $q$ depending only on $\sigma$ such that if $\varepsilon=|||\zetaup|||_K$ is small enough then
for any $n<N$, $|||\zetaup_n|||_{K-q}\ll_K \frac{1}{T_{n}}\varepsilon$ and $||\etaup_n||_{K-q}\ll_K \frac{1}{T_{n}}\varepsilon$.
\end{lem1}
\begin{proof}
Let $p$ be as in the previous lemma and let $K\geq K_0$. If $\varepsilon$ is small enough we have $
|||\zetaup_n|||_K\ll_K T_{n}^p\varepsilon$ and $|||\zetaup_n|||_{0}\ll_K \frac{1}{T_{n}^{K-p}}\varepsilon$, so by the Kolmogorov inequality (Proposition \ref{Kolmo}), for any $k\leq K$ we have
$$|||\zetaup_n|||_{K-k}\ll_K|||\zetaup_n|||_0^{\frac{k}{K}}|||\zetaup_n|||_K^{\frac{K-k}{K}}\ll_K \frac{\varepsilon}{T_{n}^\tau}$$
with
$$\tau=\frac{k}{K}(K-p)-\left(\frac{K-k}{K}\right)p=k-p.$$
In particular, $|||\zetaup_n|||_{K-q}\ll_K \frac{1}{T_{n}}\varepsilon$ if $q\geq p+1$, and $||\etaup_n||_{K-q}\ll_K |||\zetaup_n|||_{K-q+r}\ll_K \frac{1}{T_{n}}\varepsilon$ if $q-r\geq p+1$. So we get the result with $q=p+1+r$.
\end{proof}
Now we consider the compositions $h_n=g_{n-1}\circ\cdots\circ g_0$, so that $f_n=h_{n}fh_{n}^{-1}$. The diffeomorphisms $h_n$ satisfy the following estimates:
\begin{lem1}\label{finish}
Let $q$ be as in the previous lemma. If $\varepsilon=|||\zetaup|||_K$ is small enough, then for any $n<N$, $d_{K-q}(h_n,Id)\ll_K \varepsilon$ and $\sum_{0<n<N} d_{K-q}(h_n,h_{n-1})\ll_K \varepsilon$.
\end{lem1}
\begin{proof}
Let $\delta_n=d_{K-q}(h_n,Id)$, with the convention $h_0=Id$. For a fixed $n$ let us assume that $\delta_j\leq 1$ for $j=0,\ldots,n-1$. Then, by Proposition \ref{estival} and Lemma \ref{upgrade},
$$d_{K-q}(h_n,h_{n-1})\ll_K d_{K-q}(g_{n-1},Id)\ll_K \frac{\varepsilon}{T_{n-1}},$$
and so
$$\delta_n\leq \sum_{j=1}^{n}d_{K-q}(h_j,h_{j-1})\ll_K \varepsilon.$$
So if $\varepsilon$ is small enough we get $\delta_n\leq 1$, and by induction $\delta_n\leq 1$ for every $n<N$. In particular the estimates above hold for every $n$, and the result follows.
\end{proof}
We are now ready to finish the proof of Theorem \ref{principal}.
\begin{proof}(Theorem \ref{principal})\\
We fix $K_0$ and $q$ as above, an integer $K\geq K_0$, we assume that $\varepsilon=|||\zetaup|||_K$ is small enough so that the lemmas above apply, and we also assume that $|f'-1|\leq \frac{1}{4}$. We separate the cases $N=+\infty$ and $N<+\infty$.\\
\begin{itemize}
\item If $N=+\infty$, then $\sum_n d_{K-q}(h_n,h_{n-1})\ll_K \varepsilon$ hence
$(h_n)_{n\in\mathbb{N}}$ converges in $\mbox{Diff}_+^{K-q}(\mathbb{T})$ to a limit $h$
satisfying $d_{K-q}(h,Id)\ll_K \varepsilon$. In particular if $\varepsilon$ is small enough $h$ is invertible and $hfh^{-1}=\lim_n h_nfh_n^{-1}=\lim_n f_n=r_\alpha$ almost surely.\\
\item If $N<+\infty$, then $f_{N-1}=h_{N-1}fh_{N-1}^{-1}$ with $d_{K-q}(h_{N-1},Id)\ll_K \varepsilon$. Moreover, one of the two conditions stated at the beginning of the section does not hold for $f_{N-1}$, that is, either $f_{N-1}\notin \mathcal{U}_0$ or $\lambda$ is an obstruction for $f_{N-1}$. Since $|f'-1|\leq \frac{1}{4}$ and $|h_{N-1}'-1|\ll_K \varepsilon$, we deduce that the condition $f_{N-1}\in \mathcal{U}_0$ is satisfied if $\varepsilon$ is small enough. So it means that $\lambda$ is an obstruction for $f_{N-1}$, that is $|\lambda|^{\frac{1}{2}}\geq \frac{C_0}{3}|||\zetaup_{N-1}|||_{k_0}^{\frac{3}{2}}$. Then Lemma \ref{conjugaison1} gives a diffeomorphism $g$ satisfying $d_{K-q}(g,Id)\ll_K\varepsilon$ conjugating $f_{N-1}$ to $\tilde{f}=r_\alpha+\tilde{\zetaup}$ such that $|||\tilde{\zetaup}|||_0\leq 3|\lambda|^{\frac{1}{2}}$, and then the conjugation $h=g\circ h_{N-1}$ satisfies the conclusion of Theorem \ref{principal}.
\end{itemize}
Choosing $\overline{\varepsilon}$ in $(0,\frac{1}{2})$ so that the lemmas above and the final argument apply for $|||\zetaup|||_K\leq \overline{\varepsilon}$, we get the conclusion of Theorem \ref{principal} for any random diffeomorphism $f$ such that $\rho(f)$ is $(A,\sigma)$-diophantine and valued in the open set
$$\mathcal{U}=\left\{h\in \mbox{Diff}_+^K(\mathbb{T}), d_{K}(h,\mathcal{R})< \frac{\overline{\varepsilon}}{2}\right\},$$
where $\mathcal{R}$ is the set of rotations: for such an $f$, we obviously have $|f'-1|\leq \frac{1}{4}$, and $d_K(f,r_\beta)<\frac{\overline{\varepsilon}}{2}$ for some $\beta$, so actually $|\beta-\alpha|<\frac{\overline{\varepsilon}}{2}$ with $\alpha=\rho(f)$, hence $d_K(f,r_\alpha)<\overline{\varepsilon}$ and in particular $|||\zetaup|||_K\leq \overline{\varepsilon}$. Hence the argument above applies to $f$ and gives the conjugation stated in Theorem \ref{principal}.
\end{proof}
\section{Random products of matrices (Theorems \ref{mainproj} and \ref{mainproj2})}
\subsection{Generalities}
We consider $\mathcal{M}_2(\mathbb{R})$ equipped with any norm $||\cdot||$. By identifying the complex plane with $\mathbb{R}^2$, any matrix $M$ in $\mathcal{M}_2(\mathbb{R})$ naturally acts on $\mathbb{C}$.\\
We denote by $\mathcal{T}$ the space of trigonometric polynomials $p:\mathbb{T}\rightarrow\mathbb{R}$, generated by the maps $x\mapsto \cos(2k\pi x)$ and $x\mapsto \sin(2k\pi x)$. We denote by $\mathcal{T}_n$ the space of trigonometric polynomials of $\mathcal{T}$ of degree at most $n$. We fix a norm $|| \cdot ||$ on $\mathcal{T}$.\\
To any $M$ in $GL_2(\mathbb{R})$ we naturally associate a diffeomorphism $f_M$ of $\mathbb{T}$ by
$$e^{i\pi f_M(x)}=\frac{M(e^{i\pi x})}{|M(e^{i\pi x})|}.$$
We state without proof the following elementary lemma.
\begin{lem1}\label{equiv}
There exists a constant $A_0>0$ depending only on the norm on $\mathcal{M}_2(\mathbb{R})$ such that for any $M$ in $SL_2(\mathbb{R})$ and $\alpha$ in $\mathbb{T}$,
$$\frac{1}{A_0}d_0(f_M,r_\alpha)\leq ||M-R_\alpha||\leq A_0d_0(f_M,r_\alpha).$$
\end{lem1}
In particular, if $M$ is a perturbation of $R_\alpha$ of order $\varepsilon$, then $f_M$ is a perturbation of $r_\alpha$ of order $\varepsilon$. The next lemma specifies the form of the perturbation:
\begin{lem1}\label{proj}
If $M=R_\alpha+E$ then, writing $f_M=r_\alpha+\zetaup$, we can write $\zetaup=\zetaup_1+\zetaup_2+\zetaup_3$ where $\zetaup_1\in \mathcal{T}_1$ and $||\zetaup_1||=O(\|E\|)$, $\zetaup_2\in \mathcal{T}_2$ and $||\zetaup_2||=O(\|E\|^2)$, $\zetaup_3\in C^\infty(\mathbb{T})$ and $||\zetaup_3||_1=O(\|E\|^3)$. Moreover,
$$\zetaup_1(x)=\frac{1}{\pi}\mbox{Im}\pa{E(e^{i\pi x})e^{-i\pi(x+\alpha)}}$$
\end{lem1}
\begin{proof}
From $e^{i\pi f_M(x)}=\frac{M(e^{i\pi x})}{|M(e^{i\pi x})|}$ we obtain the formula
$$\zetaup(x)=\frac{1}{i\pi}\ln\pa{\frac{1+E(e^{i\pi x})e^{-i\pi(x+\alpha)}}{|1+E(e^{i\pi x})e^{-i\pi(x+\alpha)}|}},$$
where the (complex) logarithm is well defined for $||E||$ small. Then the result follows by doing Taylor expansions.
\end{proof}
The following lemma is a counterpart of the previous lemma when $\alpha=0$ that we will use to create a conjugation matrix in the proof of Theorem \ref{mainproj2}.
\begin{lem1}\label{projR}
If $\zetaup$ belongs to $\mathcal{T}_1$, then one can find $M$ in $SL_2(\mathbb{R})$ such that $||M-I_2||=O(||\zetaup\|)$ and
$$f_M(x)=x+\zetaup(x)+O(\|\zetaup\|^2).$$
\end{lem1}
\begin{proof}
By assumption, $\zetaup(x)=A+B\cos(2\pi x)+C\sin(2\pi x)$ for some $A,B,C$. Let us set $M=I_2+E$ with $E=\begin{pmatrix}a & b \\ c & d\end{pmatrix}$, where $a,b,c$ have to be chosen, and $d$ is determined so that $\det M=1$. Since $\det(M)=1+ Tr(E)+O(||E||^2)$, in particular $d=-a+O(||E||^2)$. From Lemma \ref{proj} and a simple computation, we have
$$\begin{disarray}{ll}f_M(x)&=x+\frac{1}{\pi}\mbox{Im}\pa{E(e^{i\pi x})e^{-i\pi x}}+O(||E||^2)\\
&=x+\frac{c-b}{2\pi}+\frac{c+b}{2\pi}\cos(2\pi x)+\frac{d-a}{2\pi} \sin(2\pi x)+O(||E||^2)\\
&=x+\frac{c-b}{2\pi}+\frac{c+b}{2\pi}\cos(2\pi x)-\frac{a}{\pi} \sin(2\pi x)+O(||E||^2).\end{disarray}$$
By choosing $a,b,c$ so that $c-b=2\pi A$, $c+b=2\pi B$ and $-2a=2\pi C$, we obviously have $||E||=O(||\zetaup||)$ and so $f_M(x)=x+\zetaup(x)+O(||\zetaup||^2)$.
\end{proof}
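As a sanity check, the sketch below (plain Python, illustrative values) builds $M$ from a small $\zetaup\in\mathcal{T}_1$, using the coefficient matching obtained by expanding $\frac{1}{\pi}\mbox{Im}\pa{E(e^{i\pi x})e^{-i\pi x}}$ (the constants below are recomputed directly from that expansion), and verifies numerically that $\sup_x|f_M(x)-x-\zetaup(x)|$ decays quadratically in $\|\zetaup\|$.

```python
import math

def f_M(M, x):
    # projective circle map of M, via e^{i pi f_M(x)} = M(e^{i pi x}) / |M(e^{i pi x})|
    a, b, c, d = M
    cx, sx = math.cos(math.pi * x), math.sin(math.pi * x)
    return math.atan2(c * cx + d * sx, a * cx + b * sx) / math.pi

def matrix_for(A, B, C):
    # E = (a b; c d) with c - b = 2*pi*A, c + b = 2*pi*B, a = -pi*C
    # (coefficients read off from the first-order expansion), and d fixed
    # so that det(I + E) = 1
    a = -math.pi * C
    b = math.pi * (B - A)
    c = math.pi * (B + A)
    d = (1.0 + b * c) / (1.0 + a) - 1.0
    return (1.0 + a, b, c, 1.0 + d)

def sup_error(eps):
    # sup over a grid of |f_M(x) - x - zeta(x)|, compared modulo 1
    A, B, C = 0.3 * eps, -0.2 * eps, 0.5 * eps
    M = matrix_for(A, B, C)
    zeta = lambda x: A + B * math.cos(2 * math.pi * x) + C * math.sin(2 * math.pi * x)
    worst = 0.0
    for k in range(2000):
        x = k / 2000.0
        delta = (f_M(M, x) - x - zeta(x) + 0.5) % 1.0 - 0.5
        worst = max(worst, abs(delta))
    return worst

e1, e2 = sup_error(0.02), sup_error(0.01)
```

Halving the size of $\zetaup$ divides the error by roughly four, as expected for an $O(\|\zetaup\|^2)$ remainder.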
\begin{lem1}\label{Lyapuproj}
Let $M$ be a random matrix in $SL_2(\mathbb{R})$ with $\mathbb{E}[\ln_+ ||M||]<+\infty$, and let $\Lambda$ be the Lyapunov exponent of $M$. Then there exists a stationary measure $\mu$ of the random diffeomorphism $f_M$ so that the corresponding Lyapunov exponent $\lambda(\mu)$ satisfies $\Lambda=-\frac{1}{2}\lambda(\mu)$.
\end{lem1}
\begin{proof}
Since $M\in SL_2(\mathbb{R})$, we have for every $\theta$ and $\theta'$ in $\mathbb{T}$ $$\det(M(e^{i\pi\theta}),M(e^{i\pi\theta'}))=\det(e^{i\pi\theta},e^{i\pi\theta'}),$$ that we can rewrite
$$|\sin(\pi(\theta-\theta '))|=|M(e^{i\pi\theta})|~|M(e^{i\pi\theta '})|~|\sin(\pi(f_M(\theta)-f_M(\theta ')))|,$$
which leads, by letting $\theta'\rightarrow\theta$, to
$$1=|M(e^{i\pi\theta})|^2~|f_M'(\theta)|.$$
It is well known that there exists a stationary measure $\mu$ such that $\Lambda=\mathbb{E}\int_{\mathbb{T}}\ln |M(e^{i\pi\theta})|d\mu(\theta)$ (see for example \cite{Furstenberg}); integrating the logarithm of the identity above against $\mu$ and taking expectations then gives $\lambda(\mu)=-2\Lambda$, which is the result.
\end{proof}
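The relation $\Lambda=-\frac{1}{2}\lambda(\mu)$ can be tested numerically. The sketch below (plain Python, with an illustrative law of random $SL_2(\mathbb{R})$ matrices, a rotation times a small diagonal stretch) compares the norm-growth exponent of the matrix product with the Birkhoff average of $\ln f_M'$, the derivative of the circle map being computed by finite differences; the identity $1=|M(e^{i\pi\theta})|^2|f_M'(\theta)|$ is what makes the two averages agree.

```python
import math, random

def random_sl2(rng, s=0.3):
    # R(theta) * diag(e^u, e^{-u}): a random SL(2,R) matrix (illustrative law)
    t = rng.uniform(0.0, 2.0 * math.pi)
    e = math.exp(rng.uniform(-s, s))
    ct, st = math.cos(t), math.sin(t)
    return (ct * e, -st / e, st * e, ct / e)

def circle_image(M, x):
    # f_M(x): direction of M applied to (cos(pi x), sin(pi x)), as a point of T
    a, b, c, d = M
    cx, sx = math.cos(math.pi * x), math.sin(math.pi * x)
    return (math.atan2(c * cx + d * sx, a * cx + b * sx) / math.pi) % 1.0

rng = random.Random(0)
n = 100000
h = 1e-7
vx, vy = 1.0, 0.0
lam_mat = 0.0   # Birkhoff sum for the matrix Lyapunov exponent
lam_circ = 0.0  # Birkhoff sum for the circle-map exponent E int ln f_M'
for _ in range(n):
    M = random_sl2(rng)
    x = (math.atan2(vy, vx) / math.pi) % 1.0  # projective point carried by v
    f0, f1 = circle_image(M, x), circle_image(M, x + h)
    df = (f1 - f0) % 1.0                      # finite-difference derivative (mod 1)
    if df > 0.5:
        df -= 1.0
    lam_circ += math.log(df / h)
    a, b, c, d = M
    wx, wy = a * vx + b * vy, c * vx + d * vy
    r = math.hypot(wx, wy)
    lam_mat += math.log(r)
    vx, vy = wx / r, wy / r                   # renormalize the vector
lam_mat /= n
lam_circ /= n
```

Along the common orbit the two sums satisfy $\ln f_M'(x)=-2\ln(|Mv|/|v|)$ step by step, so the agreement is pathwise, not only in average.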
\subsection{Proof of Theorem \ref{mainproj}}
We fix a random variable $\alpha$ in $\mathbb{T}$ and a random matrix $M=R_\alpha+E$ of $SL_2(\mathbb{R})$. We naturally get a random diffeomorphism $f_M=r_\alpha+\zetaup$ of $\mathbb{T}$, and Lemma \ref{proj} gives a decomposition $\zetaup=\zetaup_1+\zetaup_2+\zetaup_3$.
We assume that $\alpha$ does not belong almost surely to $\{0,\frac{1}{2}\}$. So $||d(2\alpha,\mathbb{Z})||_{L^2(\Omega)}\geq \delta$ for some $\delta>0$. In the sequel a term $O(M)$ means a term bounded by $CM$ with $C$ depending only on $\delta$ (and the chosen norms on $\mathcal{T}$ and $\mathcal{M}_2(\mathbb{R})$).
We keep the notations of the previous sections for the operators $T, T_0, U$ and $\overline{U}$, that is to say $T\varphi(x)=\mathbb{E}[\varphi\circ f_M(x)]$, $T_0\varphi(x)=\mathbb{E}[\varphi\circ r_\alpha(x)]$, $U\varphi(x)=\sum_{q\in\mathbb{Z}^*}\frac{\hat{\varphi}(q)}{1-\mathbb{E}[e^{2i\pi q\alpha}]}e^{2i\pi qx}$
and $\overline{U}\varphi(x)=\sum_{q\in\mathbb{Z}^*}\frac{\hat{\varphi}(q)}{1-\mathbb{E}[e^{-2i\pi q\alpha}]}e^{2i\pi qx}$.
\begin{lem1}\label{cohomoproj}
The operators $U$ and $\overline{U}$ are well defined and bounded on $\mathcal{T}_2$. Moreover, $||U||$ and $||\overline{U}||$ can be bounded by a constant depending only on $\delta$ (and the norm $||\cdot||$ on $\mathcal{T}_2$).
\end{lem1}
\begin{proof}
The operators $U$ and $\overline{U}$ are well defined on $\mathcal{T}_2$ since the denominators $1-\mathbb{E}[e^{2i\pi q\alpha}]$ do not vanish for $q=-2,-1,1,2$ thanks to the assumption that $\alpha$ does not belong almost surely to $\{0,\frac{1}{2}\}$. These operators are automatically bounded since $\mathcal{T}_2$ is finite dimensional. Finally the uniform bound of $||U||$ and $||\overline{U}||$ follows from the inequality $|1-\mathbb{E}[e^{2i\pi q\alpha}]|\geq 8\mathbb{E}\cro{d(q\alpha,\mathbb{Z})^2}$ (obtained in the proof of Lemma \ref{cohomo}) applied to $q=-2,-1,1,2$.
\end{proof}
\begin{lem1}\label{LyapuProj}
$$\Lambda=\frac{1}{4}\mathbb{E}\int_{\mathbb{T}} \pa{\zetaup_1 '+(\overline{U}\bar{\zetaup}_1)'-(\overline{U}\bar{\zetaup}_1)'\circ r_\alpha}^2dx+O(\varepsilon^3),$$
where $\bar{\zetaup}_1=\mathbb{E}[\zetaup_1\circ r_\alpha^{-1}]$ with $\zetaup_1$ given by Lemma \ref{proj}, and $\varepsilon=\mathbb{E}[||E||^3]^{\frac{1}{3}}$.
\end{lem1}
\begin{proof}
By Lemma \ref{Lyapuproj}, we have $\Lambda=-\frac{1}{2}\lambda(\mu)$ for some stationary probability measure $\mu$ on $\mathbb{T}$. If $\alpha$ is diophantine, the expansion in the statement is a consequence of Proposition \ref{lyapu}. We are going to check that the estimate is still valid without the diophantine assumption by mimicking the proof of Proposition \ref{lyapu}, noticing that we only need to estimate $\mu$ on trigonometric polynomials of small degree, and so we only need the boundedness of $U$ on $\mathcal{T}_2$ given by Lemma \ref{cohomoproj}.
\begin{itemize}
\item For every $\psi$ in $\mathcal{T}_2$, with $\varphi=U\psi$ ($\in \mathcal{T}_2$) we have
$$\int_{\mathbb{T}}\psi d\mu-\int_{\mathbb{T}}\psi dx=\int_{\mathbb{T}}(\varphi-T_0\varphi) d\mu=\int_{\mathbb{T}}(T\varphi-T_0\varphi) d\mu=O(\varepsilon\|\varphi\|)=O(\varepsilon\|\psi\|)$$
\item For every $\psi$ in $\mathcal{T}_1$, with $\varphi=U\psi$ ($\in \mathcal{T}_1$) we have
$$\begin{disarray}{ll}\int_{\mathbb{T}}\psi d\mu-\int_{\mathbb{T}}\psi dx&=\int_{\mathbb{T}}(T\varphi-T_0\varphi) d\mu\\
&=\int_{\mathbb{T}}\mathbb{E}[(\varphi '\circ r_\alpha)\zetaup]d\mu+O(\varepsilon^2\|\varphi\|)\\
&=\int_{\mathbb{T}}\mathbb{E}[(\varphi '\circ r_\alpha)\zetaup_1]dx+O(\varepsilon^2\|\varphi\|)\\
&=\int_{\mathbb{T}}\varphi'\bar{\zetaup}_1dx+O(\varepsilon^2\|\varphi\|)\\
&=\int_{\mathbb{T}}\psi '\overline{U}\bar{\zetaup}_1 dx+O(\varepsilon^2\|\psi\|)\end{disarray}$$
(for the third equality we used that $(\varphi '\circ r_\alpha)\zetaup_1$ belongs to $\mathcal{T}_2$)
\item Denoting $\etaup=\overline{U}\bar{\zetaup}_1$ ($\in\mathcal{T}_1$), $g=Id-\etaup$ and $\tilde{\mu}=g_*\mu$, we have for $\psi$ in $\mathcal{T}_2$
$$\int_{\mathbb{T}}\psi d\tilde{\mu}=\int_{\mathbb{T}}\psi d\mu+O(\varepsilon\|\psi\|)=\int_{\mathbb{T}}\psi dx+O(\varepsilon\|\psi\|)$$
and for $\psi$ in $\mathcal{T}_1$,
$$\int_{\mathbb{T}}\psi d\tilde{\mu}=\int_{\mathbb{T}}\psi d\mu-\int_{\mathbb{T}}\psi ' \overline{U}\bar{\zetaup}_1 d\mu+O(\varepsilon^2\|\psi\|)=\int_{\mathbb{T}}\psi dx+O(\varepsilon^2\|\psi\|)$$
\item Denoting $\tilde{f}=g\circ f_M\circ g^{-1}=r_\alpha+\tilde{\zetaup}$ ($g$ invertible if $\varepsilon$ is small enough since $||\etaup||=O(\varepsilon)$),
by using the decomposition $\zetaup=\zetaup_1+\zetaup_2+\zetaup_3$ and Taylor expansions we can write $\tilde{\zetaup}=\tilde{\zetaup}_1+\tilde{\zetaup}_2+\tilde{\zetaup}_3$ with
$$\left\{\begin{disarray}{l}
\tilde{\zetaup}_1=\zetaup_1-\etaup\circ r_{\alpha}+\etaup,~ \tilde{\zetaup}_1\in
\mathcal{T}_1,~||\tilde{\zetaup}_1||=O(\max(\|E\|,\|\etaup\|))\\
\tilde{\zetaup}_2\in \mathcal{T}_2,~||\tilde{\zetaup}_2||=O(\max(\|E\|^2,\|\etaup\|^2))\\
||\tilde{\zetaup}_3||_1=O(\max(\|E\|^3,\|\etaup\|^3))
\end{disarray}\right..$$
\item We conclude:
$$\begin{disarray}{ll}\lambda(\mu)&=\mathbb{E}\int_{\mathbb{T}}\ln \tilde{f}'d\tilde{\mu}\\
&=\mathbb{E}\int_{\mathbb{T}}\tilde{\zetaup}_1'd\tilde{\mu}+\mathbb{E}\int_{\mathbb{T}}\tilde{\zetaup}_2'd\tilde{\mu}-\frac{1}{2}\mathbb{E}\int_{\mathbb{T}}\tilde{\zetaup}_1'^2d\tilde{\mu}
+O(\varepsilon^3)\\
&=-\frac{1}{2}\int_{\mathbb{T}}\tilde{\zetaup}_1'^2dx+O(\varepsilon^3)
\end{disarray},$$
from which the result follows since $\Lambda=-\frac{1}{2}\lambda(\mu)$
\end{itemize}
\end{proof}
We can deduce Theorem \ref{mainproj} by a series of simple computations. Starting from the equality $E(e^{i\pi x})=\frac{1}{2}(Ze^{i\pi x}+Z' e^{-i\pi x})$
with $Z=(a+d)+i(c-b)$ and $Z'=(a-d)+i(b+c)$, we successively obtain (using Lemma \ref{proj})
\begin{itemize}
\item $\displaystyle \zetaup_1(x)=\frac{1}{\pi}\mbox{Im}\pa{E(e^{i\pi x})e^{-i\pi(x+\alpha)}}=\frac{1}{2\pi}\mbox{Im}\pa{Z e^{i\pi(2x+\alpha)}}+\mbox{constant}$
\item $\displaystyle \bar{\zetaup}_1(x)=\frac{1}{2\pi}\mbox{Im}\pa{\mathbb{E}[Z e^{-i\pi\alpha}] e^{2i\pi x}}+\mbox{constant}$
\item $\displaystyle \overline{U}\bar{\zetaup}_1(x)=\frac{1}{2\pi}\mbox{Im}\pa{\frac{\mathbb{E}[Ze^{-i\pi\alpha}]}{1-\mathbb{E}[e^{-2i\pi \alpha}]}e^{2i\pi x}}$
\item $\displaystyle \pa{\zetaup_1 +\overline{U}\bar{\zetaup}_1-\overline{U}\bar{\zetaup}_1\circ r_\alpha}(x)=\frac{1}{2\pi}\mbox{Im}\pa{X e^{2i\pi x}}+\mbox{constant}$\\
where
$\displaystyle X=Z e^{i\pi\alpha}+\frac{\mathbb{E}[Ze^{-i\pi\alpha}]}{1-\mathbb{E}[e^{-2i\pi \alpha}]}-\frac{\mathbb{E}[Ze^{-i\pi\alpha}]}{1-\mathbb{E}[e^{-2i\pi \alpha}]}e^{2i\pi\alpha}$,
\item $\displaystyle \pa{\zetaup_1 '+(\overline{U}\bar{\zetaup}_1)'-(\overline{U}\bar{\zetaup}_1)'\circ r_\alpha}(x)=\mbox{Re}\pa{X e^{2i\pi x}}$,\\
\item $\displaystyle\Lambda=\frac{1}{4}\mathbb{E}\int_{\mathbb{T}} \pa{\zetaup_1 '+(\overline{U}\bar{\zetaup}_1)'-(\overline{U}\bar{\zetaup}_1)'\circ r_\alpha}^2dx+O(\varepsilon^3)=\frac{1}{8}\mathbb{E}\pa{|X|^2}+O(\varepsilon^3).$
\end{itemize}
The result follows by simply rewriting $\mathbb{E}\pa{|X|^2}=\mathbb{E}\pa{|\overline{X}e^{2i\pi\alpha}|^2}$.
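The chain of computations above can be checked numerically for a toy two-point law of $(Z,\alpha)$ (illustrative values below): starting from the stated form of $\zetaup_1$ and applying the definitions of $\bar{\zetaup}_1$ and $\overline{U}$ (Fourier coefficients computed on a grid), the left-hand side of the last-but-one item is compared with $\mbox{Re}\pa{Xe^{2i\pi x}}$.

```python
import cmath, math

# a toy two-point law for (Z, alpha), each pair drawn with probability 1/2
samples = [(complex(0.8, -0.3), 0.21), (complex(-0.5, 0.6), 0.37)]
p = 1.0 / len(samples)

def zeta1(Z, a, x):
    # zeta_1(x) = (1/2pi) Im(Z e^{i pi (2x + a)}), constant term dropped
    return (Z * cmath.exp(1j * math.pi * (2 * x + a))).imag / (2 * math.pi)

N = 64
grid = [j / N for j in range(N)]
# bar-zeta_1 = E[zeta_1 o r_alpha^{-1}], sampled on the grid
bar = [sum(p * zeta1(Z, a, x - a) for Z, a in samples) for x in grid]

def fourier(vals, q):
    return sum(v * cmath.exp(-2j * math.pi * q * j / N) for j, v in enumerate(vals)) / N

def Ubar(vals, x):
    # U-bar applied to a degree-1 trigonometric polynomial (modes q = -1, 1)
    out = 0.0
    for q in (-1, 1):
        denom = 1 - sum(p * cmath.exp(-2j * math.pi * q * a) for _, a in samples)
        out += (fourier(vals, q) / denom * cmath.exp(2j * math.pi * q * x)).real
    return out

h = 1e-6
def d(f, x):
    return (f(x + h) - f(x - h)) / (2 * h)

def lhs(Z, a, x):
    # zeta_1' + (U-bar bar-zeta_1)' - (U-bar bar-zeta_1)' o r_alpha, by finite differences
    return (d(lambda t: zeta1(Z, a, t), x)
            + d(lambda t: Ubar(bar, t), x)
            - d(lambda t: Ubar(bar, t), x + a))

Y = (sum(p * Z * cmath.exp(-1j * math.pi * a) for Z, a in samples)
     / (1 - sum(p * cmath.exp(-2j * math.pi * a) for _, a in samples)))

def rhs(Z, a, x):
    X = Z * cmath.exp(1j * math.pi * a) + Y - Y * cmath.exp(2j * math.pi * a)
    return (X * cmath.exp(2j * math.pi * x)).real

err = max(abs(lhs(Z, a, x) - rhs(Z, a, x)) for Z, a in samples for x in grid)
```

The discrepancy `err` is of the order of the finite-difference error, confirming the closed form of $X$.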
\subsection{Proof of Theorem \ref{mainproj2}}
We are going to prove Theorem \ref{mainproj2} by mimicking the proof of Theorem \ref{principal}. Let $\delta>0$ and let $M$ be a random matrix in $SL_2(\mathbb{R})$ such that $\|Tr(M)\|_{L^2(\Omega)}\leq 2-\delta$. Let $\alpha$ in $\mathbb{T}$ be such that $d(M,\mathcal{R})=||M-R_\alpha||$, and let $f_M=r_\alpha+\zetaup$ be the associated random diffeomorphism of $\mathbb{T}$. We assume that $M$ is valued in the open set
$$\mathcal{U}_0=\{N\in SL_2(\mathbb{R}), d(N,\mathcal{R}) < \beta\}$$ where $\beta$ is a constant depending only on $\delta$ and $\|\cdot\|$ chosen so that for $M$ in $\mathcal{U}_0$ we have $|f_M'-1|\leq \frac{1}{2}$ and $|Tr(M)-Tr(R_\alpha)|\leq\frac{\delta}{2}$. The second inequality implies that $\|Tr(R_\alpha)\|_{L^2(\Omega)}\leq 2-\frac{\delta}{2}$ and so $\|d(2\alpha,\mathbb{Z})\|_{L^2(\Omega)}\geq \delta'$ for some positive $\delta'$ ($\approx\sqrt{\delta}$) depending on $\delta$, so the techniques used to prove Theorem \ref{mainproj} still work.\\
Let us construct the first conjugation.
\begin{lem1}\label{firstproj}
There exists $P$ in $SL_2(\mathbb{R})$ such that either $||d(PMP^{-1},\mathcal{R})||_{L^2(\Omega)}\leq 4A_0\Lambda^{\frac{1}{2}}$ or $||d(PMP^{-1},\mathcal{R})||_{L^2(\Omega)}\leq C ||d(M,\mathcal{R})||_{L^2(\Omega)}^{\frac{3}{2}}$, where $A_0$ is the constant of Lemma \ref{equiv}, and $C$ is a constant depending only on $\delta$ and the norms. Moreover $\|P-I_2\|\leq C ||d(M,\mathcal{R})||_{L^2(\Omega)}$.
\end{lem1}
\begin{proof}
From the proof of Lemma \ref{LyapuProj}, we get that setting $\etaup=\overline{U}\bar{\zetaup}_1$, $g=Id-\etaup$, $\tilde{f}=gf_Mg^{-1}=r_\alpha+\tilde{\zetaup}$ and $\varepsilon=||d(M,\mathcal{R})||_{L^2(\Omega)}$, we have
$$\Lambda\geq \frac{1}{8}\int_{\mathbb{T}}\tilde{\zetaup}'^2 dx+O(\varepsilon^3),$$
using that if $\varepsilon$ is small enough, $\tilde{f}'<2$ so $\ln(\tilde{f}')\leq\tilde{\zetaup}'-\frac{1}{4}\tilde{\zetaup}'^2$. So there exists
a constant $C$ such that
$$\mathbb{E}\int_\mathbb{T}\tilde{\zetaup}'^2dx\leq 8\Lambda+C\varepsilon^3,$$
so
$$||d_0(\tilde{f},r_{\tilde{\alpha}})||_{L^2(\Omega)}\leq 3\Lambda^{\frac{1}{2}}+C^{\frac{1}{2}}\varepsilon^{\frac{3}{2}}$$
where $\tilde{\alpha}=\alpha+\int_{\mathbb{T}}\tilde{\zetaup}dx$.\\
By Lemma \ref{projR}, there exists $P$ in $SL_2(\mathbb{R})$ such that $\|P-I_2\|=O(\varepsilon)$ and $f_P(x)=x-\etaup(x)+O(||\etaup||^2)=g(x)+O(\varepsilon^2)$. Let us set $\widetilde{M}=PMP^{-1}$. Since $d_0(f_P,g)=O(\varepsilon^2)$, we deduce from Proposition \ref{esti0} that $d_0(f_{\widetilde{M}},\tilde{f})=d_0(f_Pf_Mf_P^{-1},gf_Mg^{-1})=O(\varepsilon^2).$ Hence
$$||d_0(f_{\widetilde{M}},r_{\tilde{\alpha}})||_{L^2(\Omega)}\leq 3\Lambda^{\frac{1}{2}}+C\varepsilon^{\frac{3}{2}}$$
for some new constant $C$. So either $||d_0(f_{\widetilde{M}},r_{\tilde{\alpha}})||_{L^2(\Omega)}\leq 4\Lambda^{\frac{1}{2}}$ or $||d_0(f_{\widetilde{M}},r_{\tilde{\alpha}})||_{L^2(\Omega)}\leq 4C\varepsilon^{\frac{3}{2}}$, and the conclusion follows from the inequality $||\widetilde{M}-R_{\tilde{\alpha}}||\leq A_0 d_0(f_{\widetilde{M}},r_{\tilde{\alpha}})$
\end{proof}
We can now prove Theorem \ref{mainproj2}.
\begin{proof}(Theorem \ref{mainproj2})\\
Let $M$ be a random matrix with Lyapunov exponent $\Lambda$. We are going to assume that $d(M,\mathcal{R}) < \frac{\beta}{2}$ a.s. (in particular, $M\in \mathcal{U}_0$). We construct a sequence of random matrices $(M_n)_n$ by induction: we set $M_0=M$, and once $M_n$ is defined, if $||d(M_n,\mathcal{R})||_{L^2(\Omega)}\leq 4A_0\Lambda^{\frac{1}{2}}$ or if $M_n$ does not belong almost surely to $\mathcal{U}_0$ then we stop the sequence, and otherwise we use Lemma \ref{firstproj} and set $M_{n+1}=P_n M_n P_n^{-1}$ where $P_n$ is given by the lemma. Thus we get a sequence $(M_n)_{n\leq N}$ where $N$ belongs to $\mathbb{N}\cup\{+\infty\}$. Finally, we set $Q_n=P_{n-1}\cdots P_0$, so that $M_n=Q_{n}MQ_{n}^{-1}$.\\
Let $\varepsilon_n=||d(M_n,\mathcal{R})||_{L^2(\Omega)}$. By invariance by conjugation, the Lyapunov exponent of $M_n$ is $\Lambda$. So from the construction and Lemma \ref{firstproj} we deduce that for every $n<N$, $\varepsilon_{n+1}\leq C\varepsilon_n^{\frac{3}{2}}$ and for every $n\leq N$, $||P_n-I_2||\leq C\varepsilon_n$. It is then straightforward that there is a constant $C_1$ and a positive number $\bar{\varepsilon}$ such that if $\varepsilon_0\leq \bar{\varepsilon}$ then for every $n\leq N$, $\varepsilon_n\leq C_1 2^{-\left(\frac{3}{2}\right)^n}\varepsilon_0$, and also $||Q_n-I_2||\leq C_1\varepsilon_0$, and then that $d(M_n,\mathcal{R})\leq \beta $, i.e. $M_n\in\mathcal{U}_0$ (so the sequence only stops if $||d(M_n,\mathcal{R})||_{L^2(\Omega)}\leq 4A_0\Lambda^{\frac{1}{2}}$).\\
Two cases can occur:
\begin{itemize}
\item If $\Lambda>0$, then $N<+\infty$. So $||d(M_N,\mathcal{R})||_{L^2(\Omega)}\leq 4A_0\Lambda^{\frac{1}{2}}$ with $M_N=Q_NMQ_N^{-1}$, and $||Q_N-I_2||\leq C_1\varepsilon_0.$
\item If $\Lambda=0$ then $N=+\infty$. Since $||Q_{n+1}-Q_{n}||=O(||Q_{n}||\cdot ||P_n-I_2||)=O(\varepsilon_n)$, $(Q_n)$ converges to some matrix $Q$ such that $||Q-I_2||=O(\varepsilon_0)$, and since $||d(Q_{n}MQ_{n}^{-1},\mathcal{R})||_{L^2(\Omega)}=\varepsilon_n\to 0$, we conclude that $QMQ^{-1}\in \mathcal{R}$ almost surely.
\end{itemize}
Theorem \ref{mainproj2} follows.
\end{proof}
\section{Appendix: $C^k$ estimates} \label{appen}
In this section we give a quick proof of the propositions stated in Section \ref{prelim} and state some other classical $C^k$ estimates.\\
In the following propositions we consider maps $f:\mathbb{R}\rightarrow\mathbb{R}$. We denote by $\|\cdot\|_\infty$ the supremum norm, that is $\|f\|_\infty=\sup_\mathbb{R} |f|$.
\begin{prop1}(Kolmogorov inequality)\\
For any integers $j\leq k$ and for any $f$ in $C^k(\mathbb{R})$,
$$\|f^{(j)}\|_\infty\leq C\|f^{(k)}\|_\infty^{j/k}\|f\|_\infty^{1-j/k},$$
where $C$ is a constant depending only on $k$.
\end{prop1}
\begin{proof}
Given real numbers $x$ and $h$, the Taylor-Lagrange formula gives the existence of $c$ in $\mathbb{R}$ such that
\begin{equation}\label{Taylor}f(x+h)=\sum_{n=0}^{k-1}f^{(n)}(x)\frac{h^n}{n!}+f^{(k)}(c)\frac{h^k}{k!}\end{equation}
We fix real numbers $a_0,\ldots,a_{k-1}$ such that $\sum_{m=0}^{k-1}a_m m^n=\delta_{n,j}$ for $n=0,\ldots,k-1$ by inverting a Vandermonde system. For $t\in \mathbb{R}$ given, by a linear combination of the formulas (\ref{Taylor}) with $h=0,t,2t,\ldots, (k-1)t$ we get
$$\sum_{m=0}^{k-1}a_m f(x+mt)=f^{(j)}(x)\frac{t^j}{j!}+\left(\sum_{m=0}^{k-1}a_mf^{(k)}(c_m)\right)\frac{t^k}{k!}$$
for some real numbers $c_0,\ldots,c_{k-1}$. In particular,
$$\|f^{(j)}\|_\infty\leq C(t^{-j}\|f\|_\infty+t^{k-j}\|f^{(k)}\|_\infty)$$
for some constant $C$, and the result follows by optimizing in $t$.
\end{proof}
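For instance, for $j=1$, $k=2$ the inequality holds on $\mathbb{R}$ with the sharp constant $\sqrt{2}$ (Hadamard's inequality $\|f'\|_\infty^2\leq 2\|f\|_\infty\|f''\|_\infty$); a quick numerical check on a trigonometric polynomial (illustrative choice of function):

```python
import math

def sup_norm(g, period, n=20001):
    # supremum norm approximated on a dense grid over one period
    return max(abs(g(period * k / n)) for k in range(n))

w = 5.0
f  = lambda x: math.sin(w * x) + 0.3 * math.sin(3 * w * x + 1.0)
f1 = lambda x: w * math.cos(w * x) + 0.9 * w * math.cos(3 * w * x + 1.0)
f2 = lambda x: -w * w * math.sin(w * x) - 2.7 * w * w * math.sin(3 * w * x + 1.0)

period = 2 * math.pi / w
n0, n1, n2 = sup_norm(f, period), sup_norm(f1, period), sup_norm(f2, period)
# ratio should not exceed sqrt(2), the sharp constant for j=1, k=2 on the line
ratio = n1 / math.sqrt(n0 * n2)
```

The same experiment with other bounded smooth functions never produces a ratio above $\sqrt{2}$.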
\begin{prop1}\label{Kolmocor}(Product of norms of derivatives)\\
For any $f$, $g$ in $C^k(\mathbb{R})$, and any integer $j\leq k$,
$$\|f^{(j)}\|_\infty\|g^{(k-j)}\|_\infty\leq C(\|f^{(k)}\|_\infty\|g\|_\infty+\|f\|_\infty\|g^{(k)}\|_\infty)$$
where $C$ is a constant depending only on $k$.
\end{prop1}
\begin{proof}
It is a consequence of Kolmogorov inequality and the convexity inequality $a^{\theta}b^{1-\theta}\leq \theta a+(1-\theta)b$:
$$\|f^{(j)}\|_\infty\|g^{(k-j)}\|_\infty\leq C\|f^{(k)}\|_\infty^{j/k}\|f\|_\infty^{1-j/k}\|g^{(k)}\|_\infty^{1-j/k}\|g\|_\infty^{j/k}\leq C\left(\frac{j}{k}\|f^{(k)}\|_\infty\|g\|_\infty+(1-\frac{j}{k})\|f\|_\infty\|g^{(k)}\|_\infty\right).$$
\end{proof}
\begin{prop1}(Derivative of a product)\label{estiprod}
For any integer $k$ and any $f$, $g$ in $C^k(\mathbb{R})$,
$$\|(fg)^{(k)}\|_\infty\leq C(\|f^{(k)}\|_\infty\|g\|_\infty+\|f\|_\infty\|g^{(k)}\|_\infty)$$
where $C$ is a constant depending only on $k$.
\end{prop1}
\begin{proof}
By the Leibniz formula, $\|(fg)^{(k)}\|_\infty\leq \sum_{j=0}^k \begin{pmatrix}
k\\j
\end{pmatrix} \|f^{(j)}\|_\infty \|g^{(k-j)}\|_\infty$, and by the proposition above, $\|f^{(j)}\|_\infty\|g^{(k-j)}\|_\infty\leq C(\|f^{(k)}\|_\infty\|g\|_\infty+\|f\|_\infty\|g^{(k)}\|_\infty)$ for some $C$.
\end{proof}
\begin{prop1}(Derivative of a composition)\label{esticomp}
Let $M\geq 1$. For any integer $k\geq 1$ and any $f$, $g$ in $C^k(\mathbb{R})$ such that $|g'|\leq M$ on $\mathbb{R}$,
$$\|(f\circ g)^{(k)}\|_\infty\leq CM^{k-1}(\|f^{(k)}\|_\infty\|g'\|_\infty+\|f'\|_\infty\|g^{(k)}\|_\infty)$$
where $C$ is a constant depending only on $k$.
\end{prop1}
\begin{proof}
We proceed by induction on $k$. The statement is obvious for $k=1$. Let $k\geq 2$. Since $(f\circ g)^{(k)}=\left(f'\circ g\cdot g'\right)^{(k-1)}$, we obtain by Proposition \ref{estiprod} for some constant $C$
$$\|(f\circ g)^{(k)}\|_\infty \leq C\left(\|(f'\circ g)^{(k-1)}\|_\infty\|g'\|_\infty+\|f'\circ g\|_\infty\|(g')^{(k-1)}\|_\infty\right),$$
so
$$\|(f\circ g)^{(k)}\|_\infty\leq C\left(M\|(f'\circ g)^{(k-1)}\|_\infty+\|f'\|_\infty\|g^{(k)}\|_\infty\right).$$
By induction hypothesis,
$$\|(f'\circ g)^{(k-1)}\|_\infty\leq CM^{k-2}(\|f^{(k)}\|_\infty\|g'\|_\infty+\|f''\|_\infty\|g^{(k-1)}\|_\infty)$$
for some constant $C$ depending on $k$. So for some new constant $C$
$$\|(f\circ g)^{(k)}\|_\infty\leq CM^{k-1}\left(\|f^{(k)}\|_\infty\|g'\|_\infty+\|f''\|_\infty\|g^{(k-1)}\|_\infty+\|f'\|_\infty\|g^{(k)}\|_\infty\right).$$
By Proposition \ref{Kolmocor}
$$\|f''\|_\infty\|g^{(k-1)}\|_\infty \leq C(\|f^{(k)}\|_\infty\|g'\|_\infty+\|f'\|_\infty\|g^{(k)}\|_\infty)$$
for some constant $C$ so finally, with a new constant $C$,
$$\|(f\circ g)^{(k)}\|_\infty\leq CM^{k-1}(\|f^{(k)}\|_\infty\|g'\|_\infty+\|f'\|_\infty\|g^{(k)}\|_\infty),$$
which completes the induction.
\end{proof}
From these general estimates, we deduce some more specific ones for our context. We reintroduce the $C^k$-norms: for $\phi$ in $C^k(\mathbb{R})$ we define its $C^k$-norm by $\|\phi\|_k=\max(\|\phi\|_{\infty},\|\phi'\|_{\infty},\ldots,\|\phi^{(k)}\|_{\infty})$ (in particular, $\|\cdot\|_0$ is also the supremum norm). Alternatively we could define $\|\phi\|_k=\max(\|\phi\|_{\infty},\|\phi^{(k)}\|_{\infty})$, which is an equivalent norm thanks to Kolmogorov inequality.
\begin{lem1}\label{compodif}
Let $k$ be an integer, let $M\geq 1$, let $f$, $g$ be in $C^k(\mathbb{R})$ such that $|f'|, |g'|\leq M$ on $\mathbb{R}$. Then:
$$\|f\circ g-Id\|_k\leq CM^k(\|f-Id\|_k+\|g-Id\|_k)$$
where $C$ is a constant depending only on $k$.
\end{lem1}
\begin{proof}
Let $\varphi=f-Id$ and $\psi=g-Id$. Since $f\circ g-Id=\psi+\varphi\circ g$,
we only need to bound $\|\varphi\circ g\|_k$. We have $\|\varphi\circ g\|_0=\|\varphi\|_0$, $\|(\varphi\circ g)'\|_0\leq \|g'\|_0\|\varphi'\|_0\leq M\|\varphi\|_1$, and if $k\geq 2$, by Proposition \ref{esticomp} for some constant $C$ depending on $k$ we have
$$\|(\varphi\circ g)^{(k)}\|_0\leq CM^{k-1}(\|\varphi^{(k)}\|_0\|g'\|_0+\|\varphi'\|_0\|g^{(k)}\|_0),$$
with $\|\varphi'\|_0 \leq 1+M\leq 2M$, $\|g'\|_0\leq M$ and $\|g^{(k)}\|_0=\|\psi^{(k)}\|_0$, so
\begin{equation}\label{comp}\|\varphi\circ g\|_k\leq CM^k(\|\varphi\|_k+\|\psi\|_k),\end{equation}
for some new constant $C$ depending on $k$, and the statement follows.
\end{proof}
\begin{lem1}\label{invdif}
Let $k$ be an integer and let $f$ be in $C^k(\mathbb{R})$ such that $|f'-1|\leq \frac{1}{2}$ on $\mathbb{R}$. Then:
$$\|f^{-1}-Id\|_k\leq C\|f-Id\|_k$$
where $C$ is a constant depending only on $k$.
\end{lem1}
\begin{proof}
Let $g=f^{-1}$, $\varphi=f-Id$ and $\psi=g-Id$, so that the identity $f\circ g=Id$ becomes $\psi=-\varphi\circ g$. We want to prove that $\|\psi\|_k\leq C\|\varphi\|_k$ for some constant $C$. It is straightforward if $k=0$ or $1$ so we assume that $k\geq 2$ and we make the induction assumption that for every $j<k$, $\|\psi\|_j\leq C\|\varphi\|_j$ for some constant $C$. Then:
$$\|\psi\|_k=\|\varphi\circ g\|_k\leq\|\varphi\|_0+\|\varphi '\circ g\cdot g '\|_{k-1}\leq\|\varphi\|_0+\sum_{j=0}^{k-1} \left( \begin{array}{c}k-1 \\ j\end{array}\right) \|\varphi '\circ g\|_{j}\|g '\|_{k-1-j}.$$
For $j=0$,
$$\|\varphi '\circ g\|_{0}\|g '\|_{k-1}\leq \|\varphi'\|_{0}(1+\|\psi '\|_{k-1}) \leq \|\varphi\|_1+\frac{1}{2}\|\psi\|_k,$$
and for $j\not=0$, by using inequality (\ref{comp}) (with $M=2$) and the induction assumption we can bound $\|\varphi '\circ g\|_{j}\leq C\|\varphi\|_j$ for some constant $C$, and then by using Proposition \ref{Kolmocor} we get $\|\varphi '\circ g\|_{j}\|g '\|_{k-1-j}\leq C\|\varphi\|_k$ with a new constant $C$. So we deduce finally that we have for some constant $C$
$$\|\psi\|_k\leq \frac{1}{2}\|\psi\|_k+C\|\varphi\|_k,$$
and so $\|\psi\|_k\leq 2C\|\varphi\|_k$, which completes the induction.
\end{proof}
\begin{lem1}(a $C^k$ mean value inequality) \label{meanvalue}
Let $M\geq 1$, let $f$, $g$ be in $C^k(\mathbb{R})$ such that $|f'|,|g'|,|f^{(k)}|,|g^{(k)}|\leq M$ on $\mathbb{R}$, and let $\phi\in C^{k+1}(\mathbb{R})$.
Then
$$\|\phi\circ f-\phi\circ g\|_k\leq C\|\phi\|_{k+1}\|f-g\|_k$$
where $C$ depends only on $k$ and $M$.
\end{lem1}
\begin{proof}
We write
$$\phi\circ f-\phi\circ g=(f-g)\int_0^1\phi'\circ h_t dt$$
where $h_t=(1-t)f+tg$. Thus,
$$\|\phi\circ f-\phi\circ g\|_k\leq C\|f-g\|_k\int_0^1\|\phi'\circ h_t\|_k dt$$
for some constant $C$ depending only on $k$. By Proposition \ref{esticomp} (and Kolmogorov inequality), $\|\phi'\circ h_t\|_k\leq C\|\phi\|_{k+1}$ for some constant $C$ depending on $k$ and $M$. The result follows.
\end{proof}
Finally, let us prove Propositions \ref{estiK}, \ref{esti0}, \ref{esti1}, \ref{estival} of Section \ref{prelim}. Proposition \ref{estiK} is an immediate consequence of Lemmas \ref{compodif} and \ref{invdif} and the fact that $d_k$ is invariant by (left or right) composition by rotations. Proposition \ref{estival} is a straightforward consequence of inequality (\ref{comp}) since $d_k(f\circ h, g\circ h)=\|(f-g)\circ h\|_k$. To prove Proposition \ref{esti1}, we write $f=r_\alpha+\zetaup$ and $g=Id+\etaup$, and then an algebraic computation
gives
$$g\circ f \circ g^{-1}=r_\alpha+\left(\zetaup\circ g^{-1}+\eta\circ (f\circ g^{-1})-\eta\circ g^{-1}\right).$$
The difference between this map and the approximation $r_\alpha+\left(\zetaup+\eta\circ r_\alpha-\eta\right)$ can be estimated in $C^1$-norm thanks to Lemma \ref{meanvalue} (with $k=1$), which gives the result (alternatively one can directly bound this difference and its derivative by elementary calculus). Finally, Proposition \ref{esti0} is an elementary consequence of the invariance of $d_0$ by right composition and the mean value inequality:
$$\begin{array}{ll}d_0(gfg^{-1},\tilde{g}f\tilde{g}^{-1})&\leq d_0(gfg^{-1},\tilde{g}fg^{-1})+d_0(\tilde{g}f\tilde{g}^{-1},\tilde{g}fg^{-1})\\
&\leq d_0(g,\tilde{g})+d_0(\tilde{g}f\tilde{g}^{-1}g,\tilde{g}f)\\
&\leq d_0(g,\tilde{g})+d_0((\tilde{g}f\tilde{g}^{-1})\circ g,(\tilde{g}f\tilde{g}^{-1})\circ \tilde{g})\\
&\leq (1+\|(\tilde{g}f\tilde{g}^{-1})'\|_0)d_0(g,\tilde{g}),
\end{array}$$
with $\|(\tilde{g}f\tilde{g}^{-1})'\|_0$ easily bounded from above.
\bibliographystyle{plain}
Considering the exchange of vector mesons, the potential between a pair of heavy and antiheavy hadrons at threshold takes the following form:
\begin{equation}
V\sim-F\beta_1\beta_2g_V^2\frac{2m_1m_2}{m_{\rm ex}^2},
\label{eq:potential}
\end{equation}
where $m_1,m_2$ and $m_{\rm ex}$ are the masses of the two heavy hadrons and the exchanged particle, respectively, $\beta_1$ and $\beta_2$ are the coupling constants for the two heavy hadrons with vector mesons, $g_V$ is a coupling parameter for the light-vector mesons, and $F$ is a group theory factor accounting for light-flavor SU(3) information. The values of $F$ are listed in Table~\ref{tab:potentials} for all combinations of a pair of heavy and antiheavy ground state hadrons. $\beta_1$ and $\beta_2$ are positive in our convention so that a positive $F$ means an attractive interaction. For systems that can form states with both positive and negative $C$ parities, for instance, $D\bar D^*\pm\bar D D^*$ or $\Sigma_c \bar \Sigma_c^*\pm \bar \Sigma_c \Sigma_c^*$, the potentials at threshold are the same with the mechanism considered here. The potentials presented here may also be used as the resonance saturation modeling of the constant contact terms in nonrelativistic effective field theory studies of the heavy-antiheavy hadron interactions.
\begin{table}[h]
\caption{Potentials at threshold of heavy-antiheavy hadron pairs with only light vector-meson exchanges, see Eq.~\eqref{eq:potential}. Positive $F$ means attractive. For the systems with $F=0$, the sub-leading exchanges of vector-charmonia also lead to an attractive potential at threshold. } \label{tab:potentials}
\begin{ruledtabular}
\begin{tabular}{cccc}
System & $I$ & exchanged particle & $F$\\
\hline
$D^{(*)}\bar D^{(*)}$& 0 &$\rho,\omega$ & $\frac32,\frac12$\\
& 1 &$\rho,\omega$ & $-\frac12,\frac12$\\
$D_s^{(*)}\bar D^{(*)}$& $\frac12$ &$-$ & $0$\\
$D^{(*)}_s\bar D^{(*)}_s $& 0&$\phi$ & $1$\\
\hline
$\bar D^{(*)}\Lambda_c$& $\frac12$ &$\omega$ & $-1$\\
$\bar D_s^{(*)}\Lambda_c$& $0$ &$-$ & $0$\\
$\bar D^{(*)}\Xi_c$& $1$ &$\rho,\omega$ & $-\frac12,-\frac12$\\
& $0$ &$\rho,\omega$ & $\frac32,-\frac12$\\
$\bar D_s^{(*)}\Xi_c$& $\frac12$ &$\phi$ & $-1$\\
\hline
$\bar D^{(*)}\Sigma_c^{(*)}$& $\frac32$ &$\rho,\omega$ & $-1,-1$\\
& $\frac12$ &$\rho,\omega$ & $2,-1$\\
$\bar D_s^{(*)}\Sigma_c^{(*)}$& $1$ &$-$ & $0$\\
$\bar D^{(*)}\Xi_c^{'(*)}$& $1$ &$\rho,\omega$ & $-\frac12,-\frac12$\\
& $0$ &$\rho,\omega$ & $\frac32,-\frac12$\\
$\bar D_s^{(*)}\Xi_c^{'(*)}$& $\frac12$ &$\phi$ & $-1$\\
$\bar D^{(*)}\Omega_c^{(*)}$& $\frac12$ &$-$ & $0$\\
$\bar D_s^{(*)}\Omega_c^{(*)}$& $0$ &$\phi$ & $-2$\\
\hline
$ \Lambda_c\bar\Lambda_c$& $0$ &$\omega$ & $2$\\
$\Lambda_c\bar \Xi_c$& $\frac12$ &$\omega$ & $1$\\
$\Xi_c\bar \Xi_c$& $1$ &$\rho,\omega,\phi$ & $-\frac12,\frac12,1$\\
& $0$ &$\rho,\omega,\phi$ & $\frac32,\frac12,1$\\
\hline
$\Lambda_c\bar\Sigma_c^{(*)}$& $1$ &$\omega$ & $2$\\
$\Lambda_c\bar\Xi_c^{'(*)}$&$\frac12$ &$\omega$ & $1$\\
$\Lambda_c\bar\Omega_c^{(*)}$ &$0$ &$-$ & $0$\\
$\Xi_c \bar\Sigma_c^{(*)}$ &$\frac32$ &$\rho,\omega$ & $-1,1$\\
&$\frac12$ &$\rho,\omega$ & $2,1$\\
$\Xi_c \bar\Xi_c^{'(*)}$ &$1$ &$\rho,\omega,\phi$ & $-\frac12,\frac12,1$\\
& $0$ &$\rho,\omega,\phi$ & $\frac32,\frac12,1$\\
$\Xi_c \bar\Omega_c^{(*)}$ &$\frac12$ &$\phi$ & $2$\\
\hline
$\Sigma_c^{(*)}\bar\Sigma_c^{(*)}$ & $2$ &$\rho,\omega$ & $-2,2$\\
& $1$ &$\rho,\omega$ & $2,2$\\
& $0$ &$\rho,\omega$ & $4,2$\\
$\Sigma_c^{(*)}\bar\Xi^{'(*)}_c$ &$\frac32$ &$\rho,\omega$ & $-1,1$\\
& $\frac12$ &$\rho,\omega$ & $2,1$\\
$\Sigma_c^{(*)}\bar\Omega^{(*)}_c$ &$0$ &$-$ & $0$\\
$\Xi_c^{'(*)} \bar\Xi_c^{'(*)}$&$1$ &$\rho,\omega,\phi$ & $-\frac12,\frac12,1$\\
&$0$ &$\rho,\omega,\phi$ & $\frac32,\frac12,1$\\
$\Xi^{'(*)}_c \bar\Omega_c^{(*)}$&$\frac12$ &$\phi$ & $2$\\
$\Omega_c ^{(*)}\bar\Omega_c^{(*)}$ &$0$ &$\phi$ & $4$
\end{tabular}
\end{ruledtabular}
\end{table}
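As a rough numerical illustration of Eq.~\eqref{eq:potential}, the snippet below ranks a few channels by the combination $\sum_{\rm ex} F/m_{\rm ex}^2$ using the $F$ factors of Table~\ref{tab:potentials} and PDG meson masses; the couplings $\beta_1\beta_2 g_V^2$ are taken channel independent here, which is an illustrative simplification.

```python
# relative attraction strength sum_ex F / m_ex^2 (common couplings factored out);
# positive values correspond to attraction in the convention of the table
m = {"rho": 775.26, "omega": 782.66, "phi": 1019.461}  # PDG masses in MeV

def strength(channels):
    # channels: list of (exchanged meson, F factor)
    return sum(F / m[ex] ** 2 for ex, F in channels)

DDbar_I0  = strength([("rho", 1.5), ("omega", 0.5)])   # D Dbar, I = 0
DDbar_I1  = strength([("rho", -0.5), ("omega", 0.5)])  # D Dbar, I = 1
DsDsbar   = strength([("phi", 1.0)])                   # Ds Dsbar, I = 0
SigSig_I0 = strength([("rho", 4.0), ("omega", 2.0)])   # Sigma_c Sigmabar_c, I = 0
```

In this crude ranking the isoscalar $\Sigma_c\bar\Sigma_c$ channel is the most attractive, the isoscalar $D\bar D$ channel is more attractive than $D_s\bar D_s$, and the isovector $D\bar D$ channel is slightly repulsive because $m_\rho<m_\omega$.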
\end{document}
The $\Lambda$ cold dark matter ($\Lambda$CDM) model has been taken as the standard cosmological paradigm since the discovery of late-universe acceleration~\citep{Riess98, Perlmutter99}. It is a remarkable success in terms of explaining the temperature and polarization anisotropies of the cosmic microwave background (CMB) that have been accurately measured by the \emph{Planck} satellite~\citep{Planck2018Overview, Planck2018Params}, the baryon acoustic oscillation features in the galaxy redshift survey data~\citep{BAO-SDSS-DR12-LOWZ, DES1yr_BAO}, the weak gravitational lensing of galaxies~\citep{DES1yr_WL}, the Type Ia supernovae luminosity distances~\citep{Pantheon, Macaulay_2019}, and many others.
Recently, the local distance-ladder measurement of Hubble constant (SH0ES) ~\citep{Riess16, Riess18a, Riess18b, Riess19}, followed by independent support from time delay of strong-lensing quasars images (H0LiCow, STRIDES)~\citep{H0LiCow,STRIDES2019}, starts to challenge the ``concordance'' $\Lambda$CDM picture. Assuming a minimal six-parameter $\Lambda$CDM model, SH0ES and H0LiCOW results together provide a 5.3$\sigma$ difference of $H_0$ with the CMB measurement. This inconsistency, often referred to as ``Hubble tension'', may indicate new physics beyond $\Lambda$CDM. Simple one-parameter extensions of $\Lambda$CDM, however, were found insufficient to resolve the Hubble tension~\citep{Guo_2019, Miao_2018}. More sophisticated models are hence proposed to take the challenge. The list includes but is not limited to modified gravity~\citep{Lin_2018, Lin_2019, Sola_2019,Rossi:2019lgt}, dark energy with phantom equation of state~\citep{Sha2,Sha3,Panpanich19}, early dark energy models~\citep{Karwal_2016,Alexander_2019,Poulin_2019}, backreaction phenomenons~\citep{Racz_2017, Kovacs_2020}, interacting dark components~\citep{DV_2017,Yang_2018a,Yang_2018b,DV_2018,Bhattacharyya_2019, DV_2019}, decaying dark matter~\citep{Vattis_2019, Blinov_2020}, modified recombination history~\citep{Chiang_2018, Gen_2020, LHL_2020}, primordial magnetic fields~\citep{Jedamzik_2020}, and extra relativistic species~\citep{D_Eramo_2018, Benetti_2017, Benetti_2018,Graef_2019,Carneiro_2019}. It has also been claimed that the Hubble tension may just be a relativistic non-linear effect in the standard $\Lambda$CDM paradigm~\citep{Bolejko_2018}.
Adhikari and Huterer proposed that a non-Gaussian CMB covariance, arising from a strong coupling between long-wavelength and short-wavelength modes, can resolve the Hubble tension~\citep{SuperCMB}. We repeated their calculation and found the same results. However, we noticed that in this model the posterior amplitude of matter fluctuations ($\sigma_8$) is significantly higher than the $\Lambda$CDM value, which itself is already at the upper edge of the bounds from late-universe observations of galaxy clustering and weak gravitational lensing~\citep{Planck2018Params}. Moreover, it is yet to be shown that the predictions for the polarization and for the large tri-spectrum in this model are consistent with \emph{Planck} data.
Nevertheless, the idea that Hubble tension may be due to some anomalies in primordial conditions is worth further investigation.
In the concordance picture, the initial seeds of cosmological fluctuations are assumed to originate from vacuum quantum fluctuations during early-universe inflation. The simplest single-field slow-roll inflation models predict primordial metric fluctuations that are almost perfectly Gaussian, with a slightly tilted power-law primordial scalar power spectrum $\mathcal{P}(k)=A_s\left(\frac{k}{k_{\rm pivot}}\right)^{n_s-1}$, where $k$ is the comoving wave number and $k_{\rm pivot}=0.05\,\mathrm{Mpc}^{-1}$ is the pivot scale. The standard analysis of CMB and large-scale structure data is usually established on this featureless power-law primordial power spectrum. The global deviation from the power-law shape is bounded by \emph{Planck} data at the sub-percent level, $\frac{dn_s}{d\ln k}=-0.0041\pm 0.0067$~\citep{Planck2018Params}, which is fully consistent with the single-field slow-roll prediction $\left\vert\frac{dn_s}{d\ln k}\right\vert \lesssim 10^{-3}$. The \emph{Planck} collaboration also studied a broad class of inflation models as well as many phenomenological parametrizations, but found no evidence beyond the single-field slow-roll scenario~\citep{Planck2018Inflation,Planck2018NG}. Neither does a blind node expansion with cubic-spline interpolation favor any smooth non-power-law features at a resolution $\Delta \ln k\sim 1$~\citep{Planck2018Inflation}. These results are supported by many other independent works~\citep{Meerburg_2012, Zeng_2019, Domenech_2019}. In summary, the CMB data do not favor any global periodic oscillations or any broad smooth features with resolution $\Delta \ln k \sim 1$.
Sharper local features with $\Delta \ln k \ll 1$, the apparently missing ingredient, are equally well motivated from the theoretical perspective. Note that $\ln k$ roughly corresponds to physical time, or the number of expansion e-folds, during inflation. Many slow-roll-breaking processes during inflation, such as the inflaton crossing a step in its potential, have a strong impact only for $\sim \text{a few} \times 0.1$ e-folds. These models can then produce sharp ($\Delta \ln k \sim \text{a few}\times 0.1$) features that are typically local in the time ($\ln k$) domain and band-limited in the frequency (Fourier conjugate of $\ln k$) domain. One way to study these sharp features is the top-down approach, that is, to parameterize and constrain the predicted features in a model-by-model manner. For a few templates from popular models, the \emph{Planck} collaboration, again, found null results~\citep{Planck2018Inflation}. See also Refs.~\citep{Hazra2013Reconstruction,Verde2008On,Tocchinivalentini2006Non,handley2019bayesian} for earlier works. The other way, which has not yet been applied to the latest Hubble-tension-related data and will be pursued in this work, is the bottom-up approach, which model-independently covers a much broader class of models.
We apply a wavelet analysis, a statistical tool specifically designed to study local and band-limited signals, to search for sharp features in the primordial power spectrum. Similar analysis has been done for earlier CMB data from COBE and WMAP satellites~\citep{pando1998evidence,mukherjee2000do,mukherjee2003wavelet,mukherjee2003direct,shafieloo2007features}, before \emph{Planck} data drove the Hubble tension. The purpose of our re-examination in the latest \emph{Planck} data is to investigate whether the Hubble tension is driven by a primordial sharp feature that manifests itself in high-$\ell$ multipoles that are only accurately measured by \emph{Planck}.
This paper is organized as follows. In Sec.~\ref{sec:wavelet} we introduce Daubechies wavelet analysis and our power spectrum reconstruction method. In Sec.~\ref{sec:test} we test the wavelet reconstruction method with mock CMB data for a fiducial inflationary model with slow-roll violation. In Sec.~\ref{sec:results} we report the results for \emph{Planck} + SH0ES + H0LiCow data. Sec.~\ref{sec:conclu} concludes.
\section{Method \label{sec:wavelet}}
The Daubechies wavelet basis takes the form:
\begin{equation}
\Psi_{n, m} (t) = 2^{n/2}\Psi_{0, 0}(2^nt-m), \ \ n, m \in \mathbb{Z},
\end{equation}
where $\Psi_{0, 0}$ is the mother function of the Daubechies wavelet. The basis functions are complete, compactly supported and orthogonal with respect to both the scale index $n$ and the position index $m$. They are moving kernels with hierarchical resolutions, each resolution level being a factor of $2$ finer than the previous one. As shown in Fig.~\ref{fig:wavelets}, the Daubechies mother function is not unique. The most commonly used 1st-order Daubechies mother function, also known as the Haar wavelet, is simple but discontinuous. Higher-order Daubechies mother functions in general cannot be expressed in terms of elementary functions, but are continuous, and their smoothness increases with the order. In this work we use the 4th-order Daubechies basis and check the robustness of our results with the 2nd-order one. The algorithm to construct the Daubechies mother function of arbitrary order is given in Appendix~\ref{append}.
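As a concrete illustration (ours, not part of the analysis pipeline), the dilation/translation structure and the orthonormality of the basis can be checked numerically. The sketch below uses the 1st-order (Haar) mother function, the only Daubechies order with a closed form; all function names are illustrative.

```python
def haar_mother(t):
    """Haar (1st-order Daubechies) mother wavelet Psi_{0,0}."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def psi_nm(t, n, m):
    """Dilated/translated basis Psi_{n,m}(t) = 2^{n/2} Psi_{0,0}(2^n t - m)."""
    return 2.0 ** (n / 2.0) * haar_mother(2.0 ** n * t - m)

def inner(n1, m1, n2, m2, a=-4.0, b=4.0, N=2 ** 14):
    """Midpoint-rule inner product over [a, b]; exact for the piecewise-
    constant Haar case since all breakpoints fall on grid nodes."""
    h = (b - a) / N
    return h * sum(psi_nm(a + (i + 0.5) * h, n1, m1)
                   * psi_nm(a + (i + 0.5) * h, n2, m2) for i in range(N))
```

Evaluating `inner` for equal and for distinct index pairs reproduces the orthonormality with respect to both $n$ and $m$ stated above.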
\begin{center}
\begin{figure}
\includegraphics[width=0.48\textwidth]{daubechies.pdf}
\caption{Daubechies wavelet mother functions.\label{fig:wavelets}}
\end{figure}
\end{center}
To blindly search features in the primordial scalar power spectrum $\mathcal{P}(k)$, we decompose its deviation from power-law shape into Daubechies wavelets
\begin{equation}
\ln \frac{\mathcal{P}(k)}{\mathcal{P}_{\rm ref}(k)} = \sum_{n=0}^3 \sum_{m=-2^{n+1}}^{2^{n+1}} A_{n,m}\Psi_{n, m} \left(\ln \frac {k}{k_{\rm pivot}}\right), \label{eq:pk}
\end{equation}
where the reference power-law is $\mathcal{P}_{\rm ref}(k) = A_s\left(\frac{k}{k_{\rm pivot}}\right)^{n_s-1}$. The lower and upper bounds of the scale index $n$ are chosen such that the resolution in $\ln k$ is limited to $ 0.1 \lesssim \Delta\ln k \lesssim 1$, to match features from slow-roll-breaking processes during inflation. The lower and upper bounds of the position index $m$ are chosen such that CMB scales measured by \emph{Planck} are well covered.
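The parametrization of Eq.~\eqref{eq:pk} can be sketched as follows (illustrative only; the Haar mother function again stands in for the 4th-order Daubechies one, `A` maps $(n,m)$ to the coefficients $A_{n,m}$, and the default $A_s$, $n_s$ are representative values, not fit results):

```python
import math

def haar(t):
    """Haar mother wavelet, standing in for the 4th-order Daubechies one."""
    return 1.0 if 0.0 <= t < 0.5 else (-1.0 if 0.5 <= t < 1.0 else 0.0)

def ln_pk_ratio(x, A):
    """ln[P(k)/P_ref(k)] at x = ln(k/k_pivot), as in Eq. (pk)."""
    return sum(A.get((n, m), 0.0) * 2.0 ** (n / 2.0) * haar(2.0 ** n * x - m)
               for n in range(4)
               for m in range(-2 ** (n + 1), 2 ** (n + 1) + 1))

def pk(k, A, A_s=2.1e-9, n_s=0.965, k_pivot=0.05):
    """Primordial spectrum: power-law reference times the wavelet correction."""
    x = math.log(k / k_pivot)
    return A_s * (k / k_pivot) ** (n_s - 1.0) * math.exp(ln_pk_ratio(x, A))

# The index ranges carry 5 + 9 + 17 + 33 = 64 coefficients A_{n,m}.
n_coeff = sum(2 * 2 ** (n + 1) + 1 for n in range(4))
```

The coefficient count matches the sixty-four $A_{n,m}$ parameters varied in the MCMC analysis below.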
We use the publicly available software CosmoMC~\citep{Lewis2002Cosmological} to run Markov Chain Monte Carlo (MCMC) simulations and to estimate the marginalized bounds of the cosmological parameters, which include the standard six built-in parameters $\Omega_bh^2$, $\Omega_ch^2$, $\theta$, $\tau_{\rm re}$, $\ln\left(10^{10}A_s\right)$, $n_s$ and the sixty-four $A_{n,m}$ coefficients defined in Eq.~\eqref{eq:pk}. Here $\Omega_bh^2$ and $\Omega_ch^2$ are the baryon and CDM densities, respectively; $\theta$ is the angular size of the sound horizon on the last scattering surface; $\tau_{\rm re}$ is the reionization optical depth; $A_s$ and $n_s$ are the amplitude and index of the primordial scalar power spectrum. The Hubble constant $H_0$ can be derived from these parameters. The sum of neutrino masses is fixed to $\sum m_\nu=0.06\,\mathrm{eV}$, the minimum value allowed in the normal-hierarchy picture. Flat priors are applied to all the parameters, including the $A_{n,m}$ coefficients.
The advantage of using wavelets is that they are local by construction. The additional degrees of freedom, despite being many, are not strongly correlated. This significantly accelerates the convergence of MCMC sampling.
\section{Test with Mock Data \label{sec:test}}
To test the viability of the wavelet reconstruction method, we consider a toy model with inflationary potential
\begin{equation}
V = \frac{3}{4}m^2M_p^2\left(1-e^{-\sqrt{\frac{2}{3}}\frac{\phi}{M_p}}\right)^2\left(1+\epsilon e^{-\frac{\left(\phi-\phi_0\right)^2}{2\mu^2}}\right), \label{eq:infmodel}
\end{equation}
where $M_p=2.45\times 10^{18}\mathrm{GeV}$ is the reduced Planck mass. This potential is constructed by adding a small bump, characterized by the amplitude parameter $\epsilon\ll 1$, the position parameter $\phi_0$ and the width parameter $\mu$, to the Starobinsky potential~\citep{Starobinsky_1983}. The parameters $m = 1.191\times10^{-5}M_p$, $\phi_0 = 5.37 M_p$, $\mu = 0.005M_p$, and $\epsilon = 10^{-5}$ are chosen such that, when instant reheating is assumed, the primordial power spectrum roughly matches CMB observations. The small bump leads to a temporary slow-roll violation and a typical width $\Delta\ln k \sim \text{a few}\times 0.1$ of the feature in the primordial power spectrum, which we compute by numerically integrating the linear perturbation equations of the gauge-invariant Sasaki-Mukhanov variable~\citep{Sasaki_1986, Mukhanov_1988}. The other cosmological parameters for the fiducial cosmology are taken to be the \emph{Planck} 2018 best-fit values~\citep{Planck2018Params}, as shown in the first column of Table~\ref{tab:test}.
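For reference, the toy potential of Eq.~\eqref{eq:infmodel} with the quoted parameters can be coded directly in reduced Planck units ($M_p=1$); this sketch only evaluates the potential and a finite-difference slow-roll parameter, not the full Mukhanov-Sasaki integration used in the paper:

```python
import math

# Parameters of Eq. (infmodel), in reduced Planck units (M_p = 1).
m, phi0, mu, eps = 1.191e-5, 5.37, 0.005, 1e-5

def V(phi):
    """Starobinsky potential dressed with a small Gaussian bump."""
    star = 0.75 * m**2 * (1.0 - math.exp(-math.sqrt(2.0 / 3.0) * phi))**2
    bump = 1.0 + eps * math.exp(-0.5 * ((phi - phi0) / mu)**2)
    return star * bump

def eps_V(phi, h=1e-4):
    """First slow-roll parameter eps_V = (1/2)(V'/V)^2 via central differences."""
    dV = (V(phi + h) - V(phi - h)) / (2.0 * h)
    return 0.5 * (dV / V(phi))**2
```

Because $\epsilon\ll 1$, the bump only perturbs the potential at the $10^{-5}$ level near $\phi_0$, yet its narrow width $\mu$ makes the induced gradient, and hence the slow-roll violation, temporarily significant.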
To generate the mock CMB data, we assume the full width at half maximum (FWHM) $ = 5\,\mathrm{arcmin}$ for the temperature beam resolution, and $\mathrm{FWHM}=10\,\mathrm{arcmin}$ for polarization. With Gaussian approximation, the mock CMB likelihood reads~\citep{Verde/etal:2006}
\begin{equation}
\begin{split}
\label{chisq_CMB}
\ln\mathcal{L} =& -\frac{f_{\rm sky, eff}}{2}\sum_{\ell=\ell_{\min}}^{\ell_{\max}} (2\ell+1) \\
&\times \left[\frac{\hat{{\cal C}}_\ell^{TT}{\cal C}_\ell^{EE} + \hat{{\cal C}}_\ell^{EE}{\cal C}_\ell^{TT} - 2\hat{{\cal C}}_\ell^{TE}{\cal C}_\ell^{TE}}{{\cal C}_\ell^{TT}{\cal C}_\ell^{EE}-({\cal C}_\ell^{TE})^2} \right. \\
& + \left. \ln{\left(\frac{{\cal C}_\ell^{TT}{\cal C}_\ell^{EE}-({\cal C}_\ell^{TE})^2}{\hat{{\cal C}}_\ell^{TT}\hat{{\cal C}}_\ell^{EE}-(\hat{{\cal C}}_\ell^{TE})^2}\right)} - 2\right] \ ,
\end{split}
\end{equation}
where we have used $\ell_{\min}=2$, $\ell_{\max}=2500$, and an effective sky coverage $f_{\rm sky,eff} = 0.85$. In this formula, ${\cal C}^{XY}_\ell$ ($X,Y \in \{T, E\}$) are the model-dependent theoretical angular power spectra. They are given by ${\cal C}^{XY}_\ell= C^{XY}_\ell + N_\ell^{XY}$, where $C_\ell^{XY}$ are the noise-free CMB power spectra calculated with the publicly available code CAMB \citep{Lewis/etal:2000} and $N_\ell^{XY}$ are the noise spectra. To simulate the noise spectra, we assume a Gaussian beam shape and a sensitivity $50\,\mathrm{\mu K\,s^{1/2}}$ for temperature and $100\,\mathrm{\mu K\,s^{1/2}}$ for polarization, both integrated for five years. The hatted symbols $\hat{\cal C}_\ell^{XY}$ represent the mock data predicted by the fiducial cosmology. To check whether the reconstruction method produces any bias, we do not add a realization of cosmic variance onto the mock data. Thus, any significant deviation from the fiducial model should be interpreted as a bias rather than a look-elsewhere effect. For the real data that we will discuss in the next section, the look-elsewhere effect cannot be avoided and weak ``anomalies'' should not be overly interpreted.
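A direct transcription of the likelihood in Eq.~\eqref{chisq_CMB} (a sketch; `Cl` holds the theoretical spectra $\mathcal{C}_\ell^{XY}$ with noise already included, `Cl_hat` the mock-data spectra, both as lists indexed by $\ell$):

```python
import math

def cmb_lnlike(Cl, Cl_hat, f_sky=0.85, lmin=2, lmax=2500):
    """Gaussian mock-CMB log-likelihood, Eq. (chisq_CMB)."""
    lnL = 0.0
    for ell in range(lmin, lmax + 1):
        tt, ee, te = Cl['TT'][ell], Cl['EE'][ell], Cl['TE'][ell]
        htt, hee, hte = Cl_hat['TT'][ell], Cl_hat['EE'][ell], Cl_hat['TE'][ell]
        det = tt * ee - te**2            # model determinant
        hdet = htt * hee - hte**2        # data determinant
        lnL += -0.5 * f_sky * (2 * ell + 1) * (
            (htt * ee + hee * tt - 2.0 * hte * te) / det
            + math.log(det / hdet) - 2.0)
    return lnL
```

By construction the bracket vanishes when the model equals the data, so the log-likelihood peaks at zero on the fiducial cosmology, which is what makes the no-cosmic-variance bias test above meaningful.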
\begin{figure}
\includegraphics[width=0.48\textwidth]{recbump.pdf}
\caption{Reconstructed primordial power spectrum for the mock CMB data generated from the inflationary model in Eq.~\eqref{eq:infmodel}. The dashed sky-blue lines are randomly picked trajectories from the likelihood-ordered top 68.3\% MCMC samples. The dark-gray and light-gray contours are marginalized 68.3\% and 95.4\% confidence level bounds, respectively.\label{fig:test}}
\end{figure}
We apply the wavelet reconstruction method to the mock CMB data. Fig.~\ref{fig:test} shows that the input power spectrum, including the slow-roll violation signal, is recovered without bias. As shown in Table~\ref{tab:test}, the other input cosmological parameters are also well recovered, with no noticeable bias.
\begin{table}
\begin{center}
\caption{Marginalized constraints on cosmological parameters for mock data.}
\label{tab:test}
\begin{tabular}{ccc}
\hline
\hline
Parameter & fiducial & constraint \\
\hline
$\Omega_b h^2$ & $0.02238$ & $0.02233^{+0.00036}_{-0.00031}$ \\
\hline
$\Omega_c h^2$ & $0.1201$ & $0.1202^{+0.0026}_{-0.0025}$ \\
\hline
$H_0$ (km/s/Mpc) & $67.3$ & $67.2^{+1.0}_{-1.0}$ \\
\hline
$\tau_{\rm re}$ & $0.0543$ & $0.0542^{+0.0027}_{-0.0026}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
We leave more detailed interpretation of the reconstructed primordial power spectrum to the next section, where the real CMB data are investigated.
\section{\emph{Planck} + SH0ES + H0LiCow \label{sec:results}}
To explicitly extract Hubble-tension-driven wavelet signals, we use jointly the SH0ES + H0LiCow constraint $H_0 = 73.82\pm 1.10 \,\mathrm{km\,s^{-1}Mpc^{-1}}$ ~\citep{H0LiCow} and the \emph{Planck} final release of TT,TE,EE + lensing likelihood~\citep{Planck2018Like}. Unlike the idealized mock CMB data that we discussed in the last section, the \emph{Planck} likelihood contains many nuisance parameters to describe uncertainties in the foreground template, etc., all of which are marginalized over in our analysis.
\begin{table}
\begin{center}
\caption{Marginalized constraints on cosmological parameters for \emph{Planck}+SH0ES+H0LiCow.}
\label{tbl:cosmomc}
\begin{tabular}{cc}
\hline
\hline
$\Omega_b h^2$ & $0.0232^{+0.0005}_{-0.0005}$ \\
\hline
$\Omega_c h^2$ & $0.1169^{+0.0015}_{-0.0016}$ \\
\hline
$100\theta_{MC}$ & $1.04146^{+0.00037}_{-0.00037}$ \\
\hline
$\tau_{\rm re}$ & $0.063^{+0.010}_{-0.008}$ \\
\hline
${\rm{ln}}(10^{10} A_s)$ & $3.060^{+0.019}_{-0.018}$ \\
\hline
$n_s$ & $0.995^{+0.010}_{-0.010}$ \\
\hline
$H_0$ (km/s/Mpc) & $69.4^{+0.7}_{-0.7}$\\
\hline
$A_{3,-7}$ & $-0.033^{+0.015}_{-0.015}$ \\
\hline
other $A_{n,m}$'s & no detection beyond $2\sigma$ \\
\hline
\end{tabular}
\end{center}
\end{table}
Table~\ref{tbl:cosmomc} lists the marginalized $1\sigma$ constraints on the cosmological parameters. The sixty-four wavelet expansion coefficients are mostly consistent with zero within $2\sigma$, with the single $2.2\sigma$ exception $A_{3, -7}=-0.033\pm 0.015$. This weak anomaly can be well explained by the look-elsewhere effect, given the many degrees of freedom we have injected into the model. Another weak anomaly appears in the posterior of the reference index $n_s=0.995\pm 0.010$, which is $\sim 2.7\sigma$ higher than in the ``no wavelet, no $H_0$ prior'' case, $n_s=0.965\pm 0.005$~\citep{Planck2018Params}. This can be explained by the known positive correlation between $n_s$ and $H_0$. More interestingly, it has been shown that the combination \emph{Planck} + SH0ES favors a model with a scale-invariant primordial power spectrum and $\sim 0.7\pm 0.13$ extra relativistic species~\citep{Benetti_2017, Benetti_2018}.
Finally, we would like to point out that these weak anomalies are not associated with the wavelet reconstruction method or our particular choice of the wavelet mother function, as the anomalies do not show up in the test with mock CMB data where the look-elsewhere effect is avoided on purpose.
In Fig.~\ref{fig:trajs} we again visualize the reconstructed $\mathcal{P}(k)$ trajectories. The absence of deviations from a power-law spectrum is consistent with the posteriors of the $A_{n,m}$ parameters. The constraints are weaker than in the test case with mock CMB data, because the real \emph{Planck} data contain foreground modeling uncertainties (especially for the polarization) and have a slightly higher noise level than what we assumed in Sec.~\ref{sec:test}.
\begin{center}
\begin{figure}
\includegraphics[width=0.48\textwidth]{power_trajs.pdf}
\caption{Reconstructed primordial power spectrum for \emph{Planck}+SH0ES+H0LiCow. The dashed sky-blue lines are randomly picked trajectories from the likelihood-ordered top 68.3\% MCMC samples. The dark-gray and light-gray contours are marginalized 68.3\% and 95.4\% confidence level bounds, respectively.\label{fig:trajs}}
\end{figure}
\end{center}
Compared to the 12-knot cubic-spline reconstruction in section 6.3 of \citet{Planck2018Inflation}, our wavelet analysis, by construction, picks out more local and sharper features. The high-frequency wiggling in $\mathcal{P}(k)$ is driven, or at least partially driven, by the statistical fluctuations in the CMB power spectra. In Fig.~\ref{fig:TT} we show how the wavelet trajectories follow the statistical fluctuations in the temperature angular power spectrum $D_\ell^{\rm TT}$, as allowed by cosmic variance at low and intermediate $\ell$'s. At higher $\ell$'s, the trajectories converge due to the much smaller cosmic variance. These features can also be seen in the left and middle parts of Fig.~\ref{fig:trajs}. The large scatter in the right part of Fig.~\ref{fig:trajs} corresponds to the unconstrained power on small scales (high $k$) beyond the \emph{Planck} resolution.
\begin{center}
\begin{figure}
\includegraphics[width=0.48\textwidth]{PlanckDlTT.pdf}
\caption{CMB temperature power $D_\ell^{\rm TT}\equiv \frac{\ell(\ell+1)}{2\pi}C_\ell^{\rm TT}$, where $C_\ell^{\rm TT}$ is the angular power spectrum of temperature fluctuations. The dotted sky-blue lines are randomly picked trajectories from the likelihood-ordered top 68.3\% MCMC samples. The solid red line is the best-fit for wavelet expansion of $\mathcal{P}(k)$, whereas the dashed green line is the best-fit for the minimal six-parameter $\Lambda$CDM with power-law $\mathcal{P}(k)$.\label{fig:TT}}
\end{figure}
\end{center}
For the Hubble constant, we obtain a \emph{Planck} + SH0ES + H0LiCow joint constraint $H_0=69.4\pm 0.7\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$. Because the posterior is very close to Gaussian, we can approximately remove the SH0ES + H0LiCow contribution and obtain a \emph{Planck}-only constraint, as shown in Fig.~\ref{fig:H0}. For comparison, we also plot the \emph{Planck} constraint for the standard $\Lambda$CDM power-law case as well as for the 12-knot-spline case, which we obtain by repeating the calculations in \citet{Planck2018Inflation}. We find that allowing more features in the primordial power spectrum, either local and band-limited as in the wavelet case, or just low-pass filtered as in the 12-knot-spline case, in general pushes the mean $H_0$ towards an even smaller value, which balances out the increased uncertainty and keeps the Hubble tension at roughly the same level. More specifically, the tension between \emph{Planck} and SH0ES + H0LiCow is $4.9\sigma$ for the wavelet analysis and $5.3\sigma$ for the 12-knot spline.
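The Gaussian subtraction used here can be sketched with inverse-variance arithmetic (a simplification of the actual posterior manipulation; small differences from the quoted tension values reflect rounding of the published numbers):

```python
import math

def remove_gaussian_prior(mu_joint, sig_joint, mu_prior, sig_prior):
    """Approximately undo a Gaussian H0 prior from a Gaussian joint posterior.

    The joint posterior is (up to normalisation) the product of the
    CMB-only posterior and the prior, so the CMB-only piece follows from
    inverse-variance subtraction.
    """
    inv_var = 1.0 / sig_joint**2 - 1.0 / sig_prior**2
    sig = 1.0 / math.sqrt(inv_var)
    mu = (mu_joint / sig_joint**2 - mu_prior / sig_prior**2) * sig**2
    return mu, sig

def n_sigma(mu1, sig1, mu2, sig2):
    """Gaussian tension between two independent measurements."""
    return abs(mu1 - mu2) / math.hypot(sig1, sig2)
```

Feeding in the quoted joint constraint $69.4\pm 0.7$ and the prior $73.82\pm 1.10$, then comparing the deconvolved \emph{Planck}-only value back against SH0ES + H0LiCow, gives a tension at the $\sim 5\sigma$ level, consistent with the numbers above.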
\begin{center}
\begin{figure}
\includegraphics[width=0.48\textwidth]{H0posterior.pdf}
\caption{Comparison of $H_0$ constraints.\label{fig:H0}}
\end{figure}
\end{center}
\section{Conclusion and Discussion \label{sec:conclu}}
To check the robustness of the wavelet analysis method, we repeated our calculation with the 2nd-order Daubechies basis and found no significant variations in the results. We thus conclude that the Hubble tension cannot be eased by band-limited features in the primordial power spectrum with $\Delta\ln k\gtrsim 0.1$.
Ideally, the wavelet analysis, if expanded to infinite order, is equivalent to many other binning, expansion and interpolation methods. Practically, however, one has to introduce a cut in expansion order or a smoothing scheme to reduce the dimension of parameter space and to achieve MCMC convergence. The cut or smoothing schemes in different methods introduce model-dependent priors. In our wavelet analysis, the cut of expansion order leads to a prior that captures the local and band-limited features that naturally arise from various inflationary processes beyond slow-roll. Indeed, if not limited by the physical prior, a deconvolution scheme can map the $H_0$-discordance in the CMB power spectrum to the primordial power spectrum and ease the Hubble tension~\citep{Sha1}.
\section{Acknowledgments}
We thank J. Richard Bond, Lev Kofman and Pascal Vaudrevange for many useful discussions in the memorable days in Toronto. This work is supported by Sun Yat-sen University Research Starting Grant 71000-18841232.
\section{Introduction}
It is tempting to conclude that the time-honoured discrepancy between the Standard Model (SM) prediction for the muon anomalous magnetic moment and its experimental measurement is a firm indication of New Physics (NP) Beyond the SM (BSM). Moreover, after an improved determination of the fine structure constant, it recently turned out that there is also a significant difference between the experimental result for the electron anomalous magnetic moment and the corresponding SM prediction. According to the latest results, the deviations in the anomalous magnetic moments of the muon and the electron are \cite{Keshavarzi:2020bfy,Parker:2018vye}:
\begin{eqnarray}
\label{eq:g2mu}
\delta a_{\mu} &=& a_{\mu}^{\rm exp} - a_{\mu}^{\rm SM} = (278\pm 88) \times 10^{-11} \,, \nonumber \\
\delta a_{e} &=& a_{e}^{\rm exp} - a_{e}^{\rm SM} = (-87\pm 36) \times 10^{-14},
\end{eqnarray}
which indicate a $3.1 \sigma$ and a $2.4\sigma$ discrepancy between theory and experiment, respectively. The Fermilab and J-PARC experiments
\cite{Semertzidis:1999kv,Farley:2003wt} are going to explore these anomalies in the near future with much higher precision, but it is already worthwhile to speculate on what possible NP phenomena might lie behind these two measurements. In doing so, it should be noted that $\delta a_{e}$ and $\delta a_{\mu}$ have opposite signs, which poses a challenge for any BSM explanation attempting to account for both of them simultaneously. This has generated growing interest, and several extensions of the SM have been analysed as a possible origin of the results in (\ref{eq:g2mu}).
It is clear that any Electro-Weak (EW) scale NP effect that may explain the $a_{\mu}$ result will lead to corrections to $a_e$ that are of order $10^{-5}$ times smaller, due to the typical relative suppression generated by the mass ratio $(m_e/m_\mu)^2$, and, crucially, of the same sign. Therefore, the anomalies in $a_\mu$ and $a_e$ cannot be resolved simultaneously by the same NP contribution, unless it violates lepton flavour universality in a very peculiar way, so as to give a positive contribution to $a_\mu$ and a negative one to $a_e$. Some attempts along this line were in fact pursued in Refs.~\cite{Liu:2018xkx,Han:2018znu,Endo:2019bcj,Bauer:2019gfk,Badziak:2019gaf,CarcamoHernandez:2020pxw,Haba:2020gkr,Bigaran:2020jil,Calibbi:2020emz,Chen:2020jvl,Jana:2020pxx,Li:2020dbg,Chun:2020uzw,Jana:2020joi,Arbelaez:2020rbq,DelleRose:2020qak,Crivellin:2018qmi,Dutta:2020scq,Hati:2020fzp,CarcamoHernandez:2019ydc,Crivellin:2019mvj,Botella:2020xzf}.
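The mass-ratio suppression and the sign clash can be made explicit with the quoted numbers (a purely illustrative back-of-the-envelope check; lepton masses are PDG values):

```python
# Quoted anomalies from Eq. (g2mu).
da_mu, err_mu = 278e-11, 88e-11
da_e, err_e = -87e-14, 36e-14

sigma_mu = da_mu / err_mu       # ~3.2 sigma
sigma_e = abs(da_e) / err_e     # ~2.4 sigma

# A flavour-universal EW-scale contribution fixed to reproduce da_mu would
# shift a_e by a factor (m_e/m_mu)^2, i.e. ~6.5e-14 -- with the wrong sign.
m_e, m_mu = 0.51099895e-3, 105.6583755e-3   # GeV, PDG values
naive_da_e = da_mu * (m_e / m_mu) ** 2
```

The naive rescaled shift is positive while the measured $\delta a_e$ is negative, which is precisely the obstruction discussed above.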
In this paper, we analyse the anomalous magnetic moment of muon and electron in a 2HDM with RH neutrinos and aligned Yukawa couplings. We emphasise that, in this class of models, one can account for the $a_e$ through one-loop effects generated by the exchange of RH neutrinos and charged Higgs bosons. At the same time, the measured value of $a_\mu$ can be obtained accurately through two-loop effects generated by a light CP-odd neutral Higgs state in combination with charged leptons. This phenomenology requires the $H^\pm$ and $A$ states to be relatively light, so that their pair production process has a sizeable cross section at the Large Hadron Collider (LHC), thereby enabling one to fingerprint this A2HDM with RH neutrinos in the years to come.
The plan of this paper is as follows. In the next section we describe our NP scenario. In the following one we present the formulae for $a_e$ and $a_\mu$. After this, we present our results for the two anomalous magnetic moments and the aforementioned $H^\pm A$ signature in two separate subsections. We then conclude.
\section{A2HDM with RH Neutrinos}
The most general Yukawa Lagrangian of the 2HDM can be written as
\begin{eqnarray}
\label{eq:yukL}
- \mathcal L_Y &=& \bar Q_L' \left( Y'_{1d} \Phi_1 + Y'_{2d} \Phi_2 \right) d_R' + \bar Q_L' \left( Y'_{1u} \tilde \Phi_1 + Y'_{2u} \tilde \Phi_2 \right) u_R'
+ \bar L'_L \left( Y'_{1\ell} \Phi_1 + Y'_{2\ell} \Phi_2 \right) \ell_R' \nonumber \\
&+& \bar L'_L \left( Y'_{1\nu} \tilde \Phi_1 + Y'_{2\nu} \tilde \Phi_2 \right) \nu_R' + \textrm{h.c.},
\end{eqnarray}
where the quark $Q_L', u_R', d_R'$ and lepton $L_L', \ell_R', \nu_R'$ fields are defined in the weak interaction basis and we have also included the couplings of the Left-Handed (LH) lepton doublets to the RH neutrinos. The $\Phi_{1,2}$ fields are the two Higgs doublets in the Higgs basis and, as customary, $\tilde \Phi_i = i \sigma^2 \Phi_i^*$.
The Yukawa couplings $Y_{1j}'$ and $Y_{2j}'$, with $j = u,d,\ell$, are $3\times 3$ complex matrices while $Y_{1\nu}'$ and $Y_{2\nu}'$ are $3 \times n_R$ matrices, with $n_R$ being the number of RH neutrinos.
As an alternative to imposing the standard $Z_2$ symmetry, potentially dangerous tree-level Flavour Changing Neutral Currents (FCNCs) can be tamed by requiring the alignment in flavour space of the two Yukawa matrices that couple to the same right-handed quark or lepton. This implies\footnote{We have assumed real $\zeta_f$. Notice also that the alignment in the neutrino sector is not a consequence of the requirement of the absence of FCNCs. Nevertheless, we assume that the same mechanism that provides the alignment in the SM flavour space also holds for neutrinos.}
\begin{eqnarray}
Y'_{2,d} = \zeta_d Y'_{1,d} \equiv \zeta_d Y'_d, \qquad Y'_{2,u} = \zeta_u Y'_{1,u} \equiv \zeta_u Y'_u, \qquad Y'_{2,\ell} = \zeta_\ell Y'_{1,\ell} \equiv \zeta_\ell Y'_\ell, \qquad Y'_{2,\nu} = \zeta_\nu Y'_{1,\nu} \equiv \zeta_\nu Y'_\nu \,.
\end{eqnarray}
Renormalisation group effects can introduce some misalignment in the Yukawa couplings. In the quark sector, these induce only negligible FCNC contributions, suppressed by the mass hierarchies $m_q m_{q'}^2/v^3$ \cite{Jung:2010ik,Li:2014fea}.
\begin{table}[h]
\centering
\begin{tabular}{|ccccc|}
\hline
Aligned & Type I & Type II & Type III & Type IV \\
\hline \hline
$\zeta_u$ & $\cot \beta$ & $\cot \beta$ & $\cot \beta$ & $\cot \beta$ \\
$\zeta_d$ & $\cot \beta$ & $- \tan \beta$ & $-\tan \beta$ & $\cot \beta$ \\
$\zeta_l$ & $\cot \beta$ & $- \tan \beta$ & $\cot \beta$ & $-\tan \beta$ \\
\hline
\end{tabular}
\caption{Relation between the $\zeta_f$ couplings of the A2HDM and the ones of the $Z_2$ symmetric scenarios. \label{tab:2hdms}}
\end{table}
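The correspondence in Tab.~\ref{tab:2hdms} amounts to the following mapping (an illustrative transcription of the table, not code from the analysis):

```python
def zeta_couplings(model, tan_beta):
    """zeta_f couplings reproducing Tab. (2hdms) for the Z2-symmetric types."""
    cot = 1.0 / tan_beta
    table = {
        'I':   (cot, cot, cot),
        'II':  (cot, -tan_beta, -tan_beta),
        'III': (cot, -tan_beta, cot),
        'IV':  (cot, cot, -tan_beta),
    }
    zu, zd, zl = table[model]
    return {'u': zu, 'd': zd, 'l': zl}
```

In the A2HDM the three $\zeta_f$ are instead free (real, in our assumption) parameters, with the four $Z_2$-symmetric types recovered only on the special lines above.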
The Yukawa Lagrangian in Eq.~(\ref{eq:yukL}) generates a Dirac mass matrix for the standard neutrinos and can also be supplemented by a Majorana mass term $M_R'$ for the RH ones
\begin{eqnarray}
- \mathcal L_{M_R} = \frac{1}{2} \nu_R'^{\, T} C M_R' \nu_R' + \textrm{h.c.},
\end{eqnarray}
where $C$ is the charge-conjugation operator.
In particular, by exploiting a bi-unitary transformation in the charged lepton sector and a unitary transformation on the RH neutrinos, $L_L' = U_L \, L_L, \, \ell'_R = U_R^\ell \, \ell_R$ and $\nu'_R = U_R^\nu \, \nu_R$, it
is always possible to diagonalise (with real eigenvalues) the charged lepton and Majorana mass matrices at the same time,
\begin{eqnarray}
U_L^\dag Y'_{\ell} U_R^\ell &=& Y_{\ell} \equiv \frac{\sqrt{2}}{v} \textrm{diag} (m_e, m_\mu, m_\tau) \,, \nonumber \\
U_R^{\nu \, T} M'_R U_R^\nu &=& M_R \equiv \textrm{diag}( M_1, \ldots, M_{n_R} ),
\end{eqnarray}
while $Y_{\nu} = U_L^\dag Y'_{\nu} U_R^\nu$ remains non-diagonal.
In this basis the neutrino mass matrix can be written as
\begin{eqnarray}
\label{eq:mass_matrix}
- \mathcal L_{\mathcal M_\nu} = \frac{1}{2} N_L^T C \mathcal M N_L + \textrm{h.c.} = \frac{1}{2} (\nu_L^T \, \nu_R^{c \,\, T}) C \left( \begin{array}{cc} 0 & M_D \\ M_D^T & M_R \end{array} \right) \left( \begin{array}{c} \nu_L \\ \nu_R^c \end{array} \right),
\end{eqnarray}
with $M_D = \frac{v}{\sqrt{2}} Y_{\nu}^*$ being the neutrino Dirac mass.
This can be diagonalised with a unitary $(3 + n_R) \times (3 + n_R)$ matrix $U$, via
\begin{eqnarray}
\left( \begin{array}{c} \nu_L \\ \nu_R^c \end{array} \right) = U \left( \begin{array}{c} \nu_l \\ \nu_h \end{array} \right) \equiv \left( \begin{array}{cc} U_{Ll} & U_{Lh} \\ U_{R^c l} & U_{R^c h} \end{array} \right) \left( \begin{array}{c} \nu_l \\ \nu_h \end{array} \right) ,
\end{eqnarray}
such that $\mathcal M_\nu = U^T \mathcal M U$ provides the masses of the three light active neutrinos $\nu_l$ and of the remaining $n_R$ heavy sterile neutrinos $\nu_h$.
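For a single generation ($n_R = 1$), the diagonalisation of Eq.~\eqref{eq:mass_matrix} can be done in closed form, exhibiting the usual seesaw hierarchy (our own sketch, with scalar `mD`, `MR` standing in for the matrices $M_D$, $M_R$):

```python
import math

def seesaw_masses(mD, MR):
    """|eigenvalues| of the 2x2 symmetric mass matrix [[0, mD], [mD, MR]]."""
    disc = math.sqrt(MR**2 + 4.0 * mD**2)
    light = abs(MR - disc) / 2.0   # ~ mD^2 / MR for mD << MR
    heavy = (MR + disc) / 2.0      # ~ MR
    return light, heavy
```

For $m_D \ll M_R$ the light state is suppressed as $m_D^2/M_R$ while the heavy one stays near $M_R$, and the product of the two magnitudes equals $|\det| = m_D^2$ exactly.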
The Yukawa interactions of the physical (pseudo)scalars\footnote{Note that, in a generic 2HDM with complex Higgs doublet fields, of the initial 8 degrees of freedom, upon EW Symmetry Breaking (EWSB), 5 survive as physical Higgs states: 2 CP-even, $h$ and $H$ (with, conventionally, $m_h<m_H$), 1 CP-odd, $A$, and 2 charged ones with undefined CP, $H^\pm$.} with the mass eigenstate fermions are then described by
\begin{eqnarray}
- \mathcal L_Y &=& \frac{\sqrt{2}}{v} \bigg[ \bar u ( - \zeta_u \, m_u \, V_{ud} \, P_L + \zeta_d \, V_{ud} \, m_d \, P_R ) d
+ \bar \nu_l ( - \zeta_\nu \, m_{\nu_l} \, U^\dag_{L l} \, P_L + \zeta_\ell \, U^\dag_{L l} \, m_\ell \, P_R ) \ell \nonumber \\
&+& \bar \nu_h ( - \zeta_\nu \, m_{\nu_h} \, U^\dag_{L h} \, P_L + \zeta_\ell \, U^\dag_{L h} \, m_\ell \, P_R ) \ell \bigg] H^+ + \textrm{h.c.} \nonumber \\
&+& \frac{1}{v} \sum_{\phi=h,H,A} \sum_{f=u,d,\ell} \xi_f^\phi \, \phi \, \bar f \, m_f \, P_R \, f
+ \frac{1}{v} \sum_{\phi=h,H,A} \xi_\nu^\phi \, \phi (\bar \nu_l \, U_{Ll}^\dag + \bar \nu_h \, U_{Lh}^\dag) P_R (U_{Ll} \, m_{\nu_l} \, \nu_l^c + U_{Lh} \, m_{\nu_h} \, \nu_h^c) + \textrm{h.c.},
\end{eqnarray}
where the couplings of the neutral Higgs states to the fermions are given by
\begin{eqnarray}
\xi_{u, \nu}^\phi = \mathcal R_{i1} + ( \mathcal R_{i2} - i \mathcal R_{i3} ) \zeta_u^* \,, \qquad
\xi_{d,\ell}^\phi = \mathcal R_{i1} + ( \mathcal R_{i2} + i \mathcal R_{i3} ) \zeta_{d,\ell},
\end{eqnarray}
where the matrix $\mathcal R$ diagonalises the scalar mass matrix.
Because of the alignment of the Yukawa matrices, all the couplings of the (pseudo)scalar fields to fermions are proportional to the corresponding mass matrices, hence the name Aligned 2HDM (A2HDM). This 2HDM realisation is therefore notably different from the standard four Types \cite{Gunion:1989we,Gunion:1992hs,Branco:2011iw}, wherein the Yukawa couplings are fixed to well-defined functions of the ratio of the Vacuum Expectation Values (VEVs) of the two Higgs doublets, denoted by $\tan\beta$, see Tab.~\ref{tab:2hdms}.
Then, the charged Higgs boson currents in the lepton sector are given by:
\begin{eqnarray}
- \mathcal L_{Y}^\textrm{CC} = \frac{\sqrt{2}}{v} \zeta_\ell \left[ (\bar \nu_l \, U^\dag_{L l} + \bar \nu_h \, U^\dag_{Lh}) m_\ell \, P_R \, \ell \right] H^+
- \frac{\sqrt{2}}{v} \zeta_\nu \left[ (\bar \nu_l \, U^\dag_{L l} \, m_{\nu_l} + \bar \nu_h \, U^\dag_{Lh} \, m_{\nu_h}) \, P_L \, \ell \right] H^+ + \textrm{h.c.}
\end{eqnarray}
Finally, the neutral and charged gauge boson interactions of the neutrinos are
\begin{eqnarray}
\mathcal L_Z &=& \frac{g}{2 \cos \theta_W} (\bar \nu_l \, U_{Ll}^\dag + \bar \nu_h \, U_{Lh}^\dag) \gamma^\mu (U_{Ll} \, \nu_l + U_{Lh} \, \nu_h ) Z_\mu, \nonumber \\
\mathcal L_W &=& - \frac{g}{\sqrt{2}} \left[ (\bar \nu_l \, U^\dag_{L l} + \bar \nu_h \, U^\dag_{L h}) \gamma^\mu P_L \, \ell \right] W^{+}_\mu + \textrm{h.c.}
\end{eqnarray}
We refer to \cite{DelleRose:2019ukt} for further details on the model.
\section{Anomalous magnetic moments}
\begin{figure}
\subfigure[]{\includegraphics[scale=0.35]{figures/WpNuL.pdf}}
\subfigure[]{\includegraphics[scale=0.35]{figures/WpNuR.pdf}}
\subfigure[]{\includegraphics[scale=0.35]{figures/HpNuL.pdf}}
\subfigure[]{\includegraphics[scale=0.35]{figures/HpNuR.pdf}}
\caption{Relevant Feynman diagrams contributing to the $g-2$ of the electron at one-loop order. Only the charged vector ($W^\pm$) and charged Higgs ($H^\pm$) currents are shown. \label{fig:diagrams}}
\end{figure}
The one-loop contributions to the anomalous magnetic moment of either lepton are
\begin{eqnarray}
a_\ell = \frac{G_F \, m_\ell^2}{4 \sqrt{2} \pi^2} \left[ g_{(a)} + g_{(b)} + g_{(c)} + g_{(d)} + g_{\textrm{2HDM}} \right],
\end{eqnarray}
where the individual terms are
\begin{eqnarray}
g_{(a)} &=& 2 \sum_{i = 1}^{3} |(U_{Ll})_{\ell \, i}|^2 \left[ \frac{5}{6} + \frac{1}{6} \frac{m_\ell^2}{M_W^2} \right] + \mathcal O(m_\ell^4) \,, \nonumber \\
g_{(b)} &=& 2 \sum_{i = 1}^{n_{R}} |(U_{Lh})_{\ell \, i}|^2 \left[ \frac{5}{6} + \mathcal G_{W^\pm} \left( \frac{m_{\nu_{h_i}}^2}{M_W^2} \right) \right] + \mathcal O(m_\ell^2) \,, \nonumber \\
g_{(c)} &=& 2 \sum_{i = 1}^{3} |(U_{Ll})_{\ell \, i}|^2 \left[ -\frac{\zeta_\ell^2}{12} \frac{m_\ell^2}{M_{H^\pm}^2} \right] + \mathcal O(m_\ell^4) \,, \nonumber \\
g_{(d)} &=& 2 \sum_{i = 1}^{n_{R}} |(U_{Lh})_{\ell \, i}|^2 \, \mathcal G_{H^\pm} \left( \frac{m_{\nu_{h_i}}^2}{M_{H^\pm}^2} \right) + \mathcal O(m_\ell^2) \,, \nonumber \\
g_\textrm{2HDM} &=& \mathcal O(m_\ell^2),
\end{eqnarray}
with
\begin{eqnarray}
\label{eq:Gfuncs}
\mathcal G_{W^\pm}(x) &=& \frac{-x + 6 x^2 - 3 x^3 - 2 x^4 + 6 x^3 \log x}{4(x - 1)^4}, \nonumber \\
\mathcal G_{H^\pm}(x) &=& \frac{\zeta_\nu^2}{3} \mathcal G_{W^\pm}(x) + \zeta_\nu \zeta_\ell \frac{ x (-1 + x^2 - 2 x \log x) }{2 (x - 1)^3} \,.
\end{eqnarray}
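As a quick numerical cross-check (not part of the derivation), the loop functions above can be evaluated directly; in particular, their heavy-neutrino limits quoted later in the text, $\mathcal G_{W^\pm}(x) \to -1/2$ and $\mathcal G_{H^\pm}(x) \to \zeta_\ell\zeta_\nu/2 - \zeta_\nu^2/6$ for $x \gg 1$, follow from the leading powers of $x$. A minimal Python sketch:

```python
import math

def G_W(x):
    """One-loop function G_{W+-}(x) with x = m_nu^2 / M_W^2."""
    return (-x + 6*x**2 - 3*x**3 - 2*x**4 + 6*x**3 * math.log(x)) / (4 * (x - 1)**4)

def G_H(x, zeta_nu, zeta_ell):
    """Charged-Higgs function G_{H+-}(x) with x = m_nu^2 / M_{H+-}^2."""
    return (zeta_nu**2 / 3) * G_W(x) \
        + zeta_nu * zeta_ell * x * (-1 + x**2 - 2*x*math.log(x)) / (2 * (x - 1)**3)

# Heavy-neutrino limits: G_W -> -1/2 and G_H -> zeta_ell*zeta_nu/2 - zeta_nu^2/6
print(G_W(1e6))                # close to -0.5
print(G_H(1e6, 10.0, -10.0))   # close to -50 - 100/6
```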
The index of the contributions corresponds to the different subfigures in Fig.~\ref{fig:diagrams} where, for simplicity, we show only the diagrams determined by the charged currents.
The contribution $g_{(a)}$ alone would exactly correspond to the SM case if it were not for the rescaling induced by the neutrino mixing matrix.
Nevertheless, the constant terms in $g_{(a)}$ and $g_{(b)}$ sum up to the SM result of $5/3$ due to the unitarity of such a mixing matrix. Therefore, these can be neglected since they do not contribute to the NP part.
The term $g_\textrm{2HDM}$ contains all the neutral Higgs boson contributions which are typical of the 2HDM alone. These are typically suppressed by a factor of $m_\ell^2/m_\phi^2$, with $\phi$ being one of the neutral (pseudo)scalar states of the 2HDM.
We can then write
the contribution to $(g-2)_\ell$, $\ell = e, \mu$, due to charged currents as follows:
\begin{equation}
a_\ell^\pm=a_\ell^{W^\pm} +
a_\ell^{H^\pm} = \frac{G_F \, m_\ell^2}{2 \sqrt{2} \pi^2} \sum_{i = 1}^{n_{R}} |(U_{Lh})_{\ell \, i}|^2 \left[ \mathcal G_{W^\pm} \left( \frac{m_{\nu_{h_i}}^2}{M_W^2} \right) + \mathcal G_{H^\pm} \left( \frac{m_{\nu_{h_i}}^2}{M_{H^\pm}^2} \right) \right].
\label{eq:AMM}
\end{equation}
The contribution to $(g-2)_\ell$, $\ell=e,\mu$, from the neutral (pseudo)scalars is
\begin{eqnarray} a_\ell^0=
\sum_{\phi=h,H,A} a_\ell^\phi = \frac{G_F \, m_\ell^2}{4 \sqrt{2} \pi^2} \sum_{\phi = h, H , A} (\xi^\phi_\ell)^2 \frac{m_\ell^2}{m_\phi^2} \mathcal F_\phi \left( \frac{m_\ell^2}{m_\phi^2} \right),
\end{eqnarray}
where
\begin{eqnarray}
{\cal F}_h(x) = {\cal F}_H(x) \simeq - \frac{7}{6} - \log x \,, \qquad \qquad
{\cal F}_A(x) \simeq \frac{11}{6} + \log x.
\end{eqnarray}
For the sake of completeness, we also give the Barr-Zee two-loop diagram contributions, \cite{Barr:1990vd,Czarnecki:1995wq,Chang:1990sf,Cheung:2001hz,Cheung:2009fc,Cherchiglia:2016eui}
\begin{eqnarray}
a_\ell^\textrm{two-loop} = \frac{G_F m_\ell^2 \alpha}{4 \sqrt{2} \pi^3 } \sum_{\phi= h,H,A} \sum_f N_f^c Q_f^2 \xi_\ell^\phi \xi_f^\phi \frac{m_\ell^2}{m_\phi^2} G_\phi \left(\frac{m_\ell^2}{m_\phi^2} \right),
\end{eqnarray}
where $N_f^c$ is the number of colours and $Q_f$ the electric charge of the fermion $f$, while
\begin{eqnarray}
G_\phi(x) = \int_0^1 d z \frac{\tilde g_\phi(z)}{z(1-z)-x} \log \frac{z(1-z)}{x} \,, \qquad \textrm{with} \quad \tilde g_\phi(z) = \left\{ \begin{array}{ll} 2 z(1-z) - 1, & \phi = h,H \\ 1, & \phi= A\end{array} \right. \,.
\end{eqnarray}
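The Barr-Zee integral $G_\phi(x)$ has a removable singularity where $z(1-z)=x$ (the $\log$ factor vanishes there), so it can be evaluated by straightforward quadrature. A minimal, illustrative Python sketch (a midpoint rule; the guard value $1/x$ is the limit of the kernel at the removable point):

```python
import math

def G_phi(x, phi, n=100001):
    """Barr-Zee integral G_phi(x), x = m_ell^2 / m_phi^2, via a midpoint rule."""
    def g_tilde(z):
        return 1.0 if phi == 'A' else 2*z*(1 - z) - 1
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h            # midpoints avoid the endpoints z = 0, 1
        u = z * (1 - z)
        if abs(u - x) < 1e-12:       # removable point: log(u/x)/(u-x) -> 1/x
            total += g_tilde(z) / x
        else:
            total += g_tilde(z) * math.log(u / x) / (u - x)
    return total * h

# The kernel log(u/x)/(u-x) is positive on (0,1), so the sign of G_phi
# follows the sign of g_tilde: positive for A, negative for h, H.
print(G_phi(1e-4, 'A'), G_phi(1e-4, 'h'))
```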
The total contribution to the $g-2$ is thus given by $a_\ell = a_\ell^\pm + a_\ell^0 + a_\ell^\textrm{two-loop}$.
Finally we present the Branching Ratio (BR) of the Lepton Flavour Violating (LFV) decays $\ell_\alpha \to \ell_\beta \gamma$ (with $\alpha,\beta=e,\mu,\tau$), as follows:
\begin{equation}
\label{eq:BRltolga}
\textrm{BR}(\ell_\alpha \to \ell_\beta \gamma) = \mathcal C \left| \sum_{i = 1}^{n_R} (U^*_{Lh})_{\alpha i} (U_{L h})_{\beta i} \left[ \mathcal G_{W^\pm} \left( \frac{m_{\nu_{h_i}}^2}{M_W^2} \right) + \mathcal G_{H^\pm} \left( \frac{m_{\nu_{h_i}}^2}{M_{H^\pm}^2} \right) \right] \right|^2,
\end{equation}
with
\begin{eqnarray}
\mathcal C = \frac{\alpha_W^3 s_W^2}{256 \pi^2} \left( \frac{m_{\ell_\alpha}}{M_W} \right)^4 \frac{m_{\ell_\alpha}}{\Gamma_{\ell_\alpha}},
\end{eqnarray}
where $\Gamma_{\ell_\alpha}$ is the total decay width of the lepton $\ell_\alpha$ and the loop functions are given above. The structure of the loop corrections is obviously the same as the one appearing above in the charged current corrections to $(g-2)_\ell$. The measured BR of these LFV decays will act as a constraint in our analysis.
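As a rough orientation for the size of the prefactor, $\mathcal C$ can be evaluated with approximate SM inputs (the numbers below are illustrative only, with $\alpha_W = \alpha / s_W^2$ and $\Gamma_\mu = \hbar/\tau_\mu$; this is not a precision determination):

```python
import math

# Illustrative inputs in GeV units (approximate values)
alpha    = 1.0 / 128.0             # EM coupling near the EW scale
s_W2     = 0.231                   # sin^2(theta_W)
m_mu     = 0.1057                  # muon mass
M_W      = 80.4                    # W boson mass
Gamma_mu = 6.582e-25 / 2.197e-6    # muon width = hbar / tau_mu

alpha_W = alpha / s_W2
C = alpha_W**3 * s_W2 / (256 * math.pi**2) * (m_mu / M_W)**4 * (m_mu / Gamma_mu)
print(C)  # dimensionless prefactor of BR(mu -> e gamma); O(10^-3) with these inputs
```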
\section{Results}
The solution of the $a_\mu$ anomaly relies upon a light pseudoscalar state $A$ contributing to the dominant two-loop Barr-Zee diagrams, as customary in 2HDMs.
The explanation of the anomaly is particularly simple in the `lepton-specific' 2HDM scenario, also dubbed Type-IV, in which the couplings of the $A$ and $H^\pm$ bosons to the leptons can be enhanced (for large $\tan \beta$) while those to the quarks are suppressed (being proportional to $\tan^{-1} \beta$).
Indeed, while it is always possible to enhance the couplings to the leptons in any of the four standard realisations of the 2HDM, in Type-I and -III this is done at the cost of increasing the couplings to the up quark (for small $\tan \beta$). As a consequence, one faces a strong constraint from the perturbativity of the top-quark Yukawa coupling. In Type-II, instead, the couplings to the down quarks are enhanced (for large $\tan \beta$) and severe bounds are imposed by flavour physics and direct searches for extra Higgs bosons.
These issues can be much more easily addressed in the A2HDM since the couplings to leptons and quarks are disentangled and $\zeta_\ell$ can be raised independently of $\zeta_u$ and $\zeta_d$.
It is worth emphasising that a simultaneous explanation of both the $a_e$ and $a_\mu$ anomalies can be achieved neither in the $Z_2$ symmetric scenarios of the 2HDM nor in the pure A2HDM, since the contributions to the anomalous moments have a fixed sign as they both originate from the same $\zeta_\ell$. In \cite{Botella:2020xzf}, this constraint has been overcome by decoupling the electron and muon sectors, where all Yukawa matrices can be made diagonal in the fermion mass basis \cite{Penuelas:2017ikk,Botella:2018gzy}.
Here, instead, the degeneracy will be broken by exploiting the lepton non-universality that naturally arises in RH neutrino models: augmenting the A2HDM with RH neutrinos can allow for an independent solution to $a_e$. This is obtained with the one-loop diagrams shown in Fig.~\ref{fig:diagrams}, provided that the charged Higgs boson is not so heavy as to suppress the loop corrections.
The mass of the charged Higgs boson is bounded from below by direct searches at LEP II. In particular, searches for $H^{\pm}$ pair production provide $m_{H^\pm} \gtrsim 93.5$ GeV at 95\% Confidence Level (CL) \cite{Abbiendi:2013hk}, assuming the charged Higgs boson only decays leptonically into $\tau \nu$.
Since the pseudoscalar $A$ state is thus required to be much lighter than the charged one, our scenario realises the mass hierarchy $m_A \ll m_{H^\pm} \simeq m_H$.
The near degeneracy between the heavy neutral scalar and the charged Higgs state is induced by the constraints on the EW Precision Observables (EWPOs), i.e., $S$, $T$ and $U$. Indeed, the most stringent one arises from custodial symmetry and reads as\footnote{The expression for $\Delta T$ assumes the mass hierarchy $m_A \ll m_Z \ll m_{H^\pm} \simeq m_H$ and $\sin(\beta - \alpha) \simeq 1$. }
\begin{eqnarray}
\Delta T \simeq \frac{m_H}{32 \pi^2 \alpha v^2} (m_{H^\pm} - m_H),
\end{eqnarray}
which fixes the mass splitting to $(m_{H^\pm} - m_H) \sim \mathcal O(10$ GeV).
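Inverting the relation above gives $m_{H^\pm} - m_H \simeq 32\pi^2 \alpha\, v^2\, \Delta T / m_H$. A hypothetical Python estimate with illustrative electroweak inputs ($v = 246$ GeV, $\alpha \simeq 1/129$; the allowed $\Delta T$ value is assumed for illustration only):

```python
import math

def mass_splitting(delta_T, m_H, v=246.0, alpha=1.0/129.0):
    """Splitting m_{H+-} - m_H (GeV) implied by Delta T ~ m_H * dm / (32 pi^2 alpha v^2)."""
    return 32 * math.pi**2 * alpha * v**2 * delta_T / m_H

print(mass_splitting(0.03, 200.0))  # O(10) GeV, as stated in the text
```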
As quoted above, scenarios with light scalar states are strongly constrained by flavour physics, in particular by neutral meson mixings ($\Delta M_q$ and $\epsilon_K$), leptonic decays of neutral and charged mesons as well as radiative $B$ decays ($b \to s \gamma$). These mostly depend on $m_{H^\pm}$ and $\zeta_{u,d}$. Such measurements are reconciled in our setup simply by requiring a sufficiently small $\zeta_{u,d}$, which we will set to zero for the sake of simplicity. This in turn implies that the Yukawa interactions in our BSM scenario are purely leptophilic. This configuration also naturally complies with the null searches for extra (pseudo)scalars at the LHC. In this respect, we have required that the Higgs sector of our model is compliant with the experimental constraints implemented in HiggsSignals \cite{Bechtle:2013xfa} (capturing the LHC measurements of the discovered Higgs boson\footnote{In our BSM scenario this is the $h$ state.}) and in HiggsBounds \cite{Bechtle:2020pkv} (enforcing limits following the aforementioned null searches for the $H,A$ and $H^\pm$ states at past and present colliders).
Contributions mediated by the charged Higgs states also affect the leptonic decays $\ell_i \to \ell_j \nu \bar \nu$ at tree level, with the strongest constraint coming from $\tau \to \mu \nu \bar \nu$ \cite{Kuno:1999jp,Abe:2015oca}. The corresponding bound projects onto the ratio $z = \zeta_\ell^2 \, m_{\tau} m_{\mu}/ m_{H^\pm}^2$ and gives $|z| < 0.72$ at 95\% CL \cite{Zyla:2020zbs}.
Finally, upper bounds on LFV processes ($\textrm{BR}(\mu \to e \gamma) \le 4.2 \times 10^{-13} \,, ~\textrm{BR}(\tau \to e \gamma) \le 3.3 \times 10^{-8} \,, ~\textrm{BR}(\tau \to \mu \gamma) \le 4.4 \times 10^{-8}$ at 90\% CL) constrain the RH neutrino interactions with the charged leptons. The charged Higgs boson can also contribute sizeably to these decays.
Since a RH neutrino is only employed in the explanation of the $a_e$ anomaly, a non-negligible mixing with the electron family is strictly required. Therefore, the stringent constraint from $\mu \to e \gamma$ and the milder one from $\tau \to e \gamma$ can be satisfied by simply relying on the hierarchy $|(U_{Lh})_{\tau \, \nu_h}|, |(U_{Lh})_{\mu \, \nu_h}| \ll |(U_{Lh})_{e \, \nu_h}|$.
\subsection{Predictions for $\delta a_e$ and $\delta a_\mu$}
The contribution to $\delta a_e$ arising from the $W^\pm$, encoded in the $\mathcal G_{W^\pm}$ function defined in Eq.~(\ref{eq:Gfuncs}), is negative and can never be enhanced, being fixed by the gauge interactions. For $m_{\nu_{h_i}}^2/M_W^2 \gg 1$, $\mathcal G_{W^\pm} \simeq - 1/2$. The impact of the charged Higgs boson in the loop functions is, however, very different. As an example, for large heavy neutrino masses, it saturates to $\mathcal G_{H^\pm} \simeq \zeta_\ell \zeta_\nu/2 - \zeta_\nu^2/6$ or behaves as $\mathcal G_{H^\pm} \simeq (\zeta_\ell \zeta_\nu/2 - \zeta_\nu^2/12) (m_{\nu_{h}}^2/m_{H^\pm}^2)$ for larger $m_{H^\pm}$. In both cases, the solution of the $a_e$ anomaly is facilitated by large $\zeta_\ell$ and $\zeta_\nu$ of opposite sign. The same effect would also push the predicted $a_\mu$ in the opposite direction with respect to the current measurement. This is not an issue since the same hierarchy $|(U_{Lh})_{\mu \, \nu_h}| \ll |(U_{Lh})_{e \, \nu_h}|$ required to evade the LFV bounds also suppresses the contribution of the charged Higgs boson to the muon $g-2$.
As is well known in the literature, the latter can be explained in the 2HDM by the two-loop Barr-Zee diagrams of the neutral scalars, which provide a positive correction for a sufficiently light $A$.
This contribution may compete in $a_e$ against the one-loop effects discussed above, but it is found to be subdominant in most of the parameter space.
The results of our analysis are depicted in Figs.~\ref{AMM-muon} and \ref{AMM-electron}. The former shows the regions in which the predicted $a_\mu$ is within 1 and $2\sigma$ around the measured central value. These are projected onto the most relevant parameter space defined by $m_A$ and $\zeta_\ell$. The mass of the charged Higgs boson has been fixed at a reference value of $m_{H^\pm} = 200$ GeV. Different choices of $m_{H^\pm}$ slightly modify the contours shown in the plot. In Fig.~\ref{AMM-electron} we show the prediction for $a_e$.
The points are generated by scanning over the parameter space of the model and comply with the experimental and theoretical bounds quoted above while reproducing $a_\mu$ within the $2\sigma$ range. The parameters are scanned as follows: $m_{\nu_{h}} \in (200, 2000)$ GeV, $m_{H^\pm}, m_H \in (100,1000)$ GeV, $m_A \in (10,60)$ GeV, $\zeta_\ell, \zeta_\nu \in (-150,150)$ and $|(U_{Lh})_{\mu \, \nu_h}|^2 \in (10^{-5}, 10^{-3})$. In Fig.~\ref{AMM-electron}(a) and (b), $(g-2)_e$ is plotted, respectively, against $\zeta_\nu$ and the effective coupling $\zeta_\nu Y_\nu$ which characterises this model and has been extensively discussed in \cite{DelleRose:2019ukt}. The vertical dashed line shows the maximum allowed value required by perturbativity. Finally, Fig.~\ref{AMM-electron}(c) shows the distribution of points along the $\zeta_\nu$ and $\zeta_\ell$ directions compliant with all the bounds discussed above as well as with the $a_e$ and $a_\mu$ measurements within $2\sigma$. As mentioned already, the two couplings must necessarily have opposite signs.
\begin{figure}
\includegraphics[scale=0.45]{figures/muon-AMM.pdf}
\caption{The 1 and $2\sigma$ regions of the anomalous magnetic moment of the muon in the parameter space of $m_A$ and $\zeta_\ell$. For the sake of definiteness, the mass of the charged Higgs has been chosen as $m_{H^\pm} = 200$ GeV.}
\label{AMM-muon}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{figures/electron-AMM-1.pdf}
\includegraphics[scale=0.45]{figures/electron-AMM-3.pdf}
\includegraphics[scale=0.45]{figures/electron-AMM-2.pdf}
\caption{The anomalous magnetic moment of the electron as a function of (a) $\zeta_\nu$ and (b) the effective neutrino coupling $\zeta_\nu Y_\nu$. The horizontal solid, dashed and dot-dashed lines correspond, respectively, to the central value, the upper $1\sigma$ band and the upper $2\sigma$ band. The vertical dashed line in (b) represents the maximum allowed value for $Y_\textrm{eff}=\zeta_\nu Y_\nu$ from perturbativity. All the points satisfy the experimental and theoretical constraints as explained in the text and reproduce $a_\mu$ at $2\sigma$ level. (c) Distribution of points in the $(\zeta_\nu, \zeta_\ell)$ plane complying with all current experimental and theoretical bounds as well as with the solution of the $a_e$ and $a_\mu$ anomalies at $2\sigma$.}
\label{AMM-electron}
\end{figure}
\subsection{LHC phenomenology of the extra (pseudo)scalar bosons}
In the leptophilic scenario delineated above, the light pseudoscalar state $A$ can decay at tree-level via $A \to \tau \tau$ with BR close to $100\%$. For the charged Higgs boson, instead, the two main open decay modes are $H^\pm \to A W^\pm$, where the interaction is completely fixed by the $SU(2)_L$ gauge coupling, and $H^\pm \to \tau^\pm \nu$, which is controlled by the $\zeta_\ell$ coupling. Analogously, for the heavy neutral scalar state $H$ the two leading decay modes are $H \to \tau \tau$ and $H \to A Z$.
For large $m_{H^\pm}, m_H$, the BRs of the $H^\pm$ and $H$ are solely controlled by the coupling $g_\ell = \zeta_\ell \, m_\tau / m_{H^\pm}$ and are approximated by\footnote{We neglect small deviations from $\sin(\beta-\alpha) = 1$.}
\begin{eqnarray}
\textrm{BR}(H^\pm \to A W^\pm) = \textrm{BR}(H \to A Z) = \frac{1}{1 + 2 g_\ell^2} \,, \qquad \textrm{BR}(H^\pm \to \tau^\pm \nu) = \textrm{BR}(H \to \tau \tau) = \frac{2 g_\ell^2}{1 + 2 g_\ell^2} \,.
\end{eqnarray}
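Since both approximate BRs depend on the single combination $g_\ell$, they saturate the total width by construction. A minimal Python sketch (the inputs for $\zeta_\ell$, $m_\tau$ and $m_{H^\pm}$ are illustrative):

```python
def charged_higgs_brs(zeta_ell, m_tau=1.777, m_Hpm=200.0):
    """BR(H+ -> A W+) and BR(H+ -> tau nu) in the large-mass approximation."""
    g_ell = zeta_ell * m_tau / m_Hpm
    br_AW  = 1.0 / (1.0 + 2.0 * g_ell**2)
    br_tau = 2.0 * g_ell**2 / (1.0 + 2.0 * g_ell**2)
    return br_AW, br_tau

br_AW, br_tau = charged_higgs_brs(zeta_ell=50.0)
print(br_AW, br_tau)   # the two channels sum to 1
```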
Since the couplings to the quarks are suppressed, the main production modes proceed through the EW interactions. The relevant processes are
\begin{eqnarray}
pp \to H^\pm A \,, \qquad pp \to H A \,, \qquad pp \to H^\pm H \,, \qquad pp \to H^+ H^-,
\end{eqnarray}
with the corresponding cross sections being only functions of the masses of the corresponding particles.
The cross sections at the LHC are computed with MadGraph \cite{Alwall:2014hca} and are shown in Fig.~\ref{fig:xs}. The largest contributions arise from $H^\pm A$ and $H A$.
The main signatures resulting from these processes are characterised by final states with several $\tau$ leptons
\begin{eqnarray}
3 \tau + \slashed{E}_T, \qquad 4 \tau + W^\pm, \qquad 4 \tau , \qquad 4 \tau + Z,
\end{eqnarray}
where the first two stem from $H^\pm A$ production (with a subleading component from $H^\pm H$) while the last two arise from $H A$ production. A thorough analysis is beyond the scope of this paper. In order to get a feel for the potential of these channels, here we only list an estimate of the inclusive cross sections for the corresponding SM backgrounds
\begin{eqnarray}
& \sigma_\textrm{SM}(Z W^\pm \to 3 \tau + \slashed{E}_T) \simeq 94 \, \textrm{fb}, \qquad
& \sigma_\textrm{SM}(Z Z W^\pm \to 4 \tau + W^\pm) \simeq 3.2 \times 10^{-2} \, \textrm{fb}, \nonumber \\
& \sigma_\textrm{SM}(Z Z \to 4 \tau) \simeq 11 \, \textrm{fb}, \qquad
& \sigma_\textrm{SM}(Z Z Z \to 4 \tau + Z ) \simeq 1.1 \times 10^{-2} \, \textrm{fb} \,.
\end{eqnarray}
\begin{figure}
\includegraphics[scale=0.45]{figures/xs_1.pdf}
\includegraphics[scale=0.45]{figures/xs_2.pdf}
\includegraphics[scale=0.45]{figures/xs_3.pdf}
\caption{The LHC production cross sections of pairs of the extra Higgs bosons as functions of $m_A$ and $m_{H^\pm} = m_H$. \label{fig:xs}}
\end{figure}
\section{Conclusions}
The measurements of the anomalous magnetic moment of the electron and the muon are amongst the most precise ones in the whole of particle physics, probing not only the structure of the SM but also the possibility of BSM theories affecting these experimental observables. Intriguingly, both of these are currently showing some anomalies with respect to the SM predictions. Crucially, the two results go in different directions, i.e., the measurement of $a_{\mu}$ exceeds the SM result while that of $a_e$ lies below the corresponding SM yield. This circumstance makes it difficult to find BSM solutions, as multiple new particles are generally needed, each contributing its corrections in different directions, i.e., with different signs, unless significant violation of discrete quantum numbers is exploited.
In this paper, we adopted an A2HDM supplemented by RH neutrinos, respecting all the SM symmetries. In such a BSM framework, a possible explanation of the aforementioned anomalies can be attained through one- and two-loop topologies, wherein the contribution from a very light CP-odd neutral Higgs state interacting with leptons is balanced against the one due to a charged Higgs boson interacting with the new heavy neutrinos, the latter with mass at the EW scale. Crucially, such a spectrum is able to explain the two leptonic anomalous magnetic moment measurements while also predicting new hallmark signals in the form of $q\bar q' \to H^\pm A$ production yielding multi-$\tau$ final states, which are almost background free at the LHC and thus accessible already with current data samples.
\section*{Acknowledgements}
SM is financed in part through
the NExT Institute and the STFC Consolidated Grant No. ST/L000296/1.
LDR acknowledges support by the Spanish Ministry MEC under grant FPA 2017-88915-P and the Severo Ochoa excellence program of MINECO (SEV-2016-0588). IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. The project that gave rise to these results received the support of a fellowship from ``la Caixa'' Foundation (ID 100010434) and from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Action grant agreement No 847648. The fellowship code is LCF/BQ/PI20/11760032.
\bibliographystyle{apsrev4-1}
\label{sec:Introduction}
Peer-to-peer (p2p) content delivery systems are permissionless decentralized services that seamlessly replicate content to end consumers. Typically, these systems~\cite{Cohen-2003-P2P,kulbak2005emule} encompass a large ad-hoc network of deliverers such as ordinary Internet users or small organizations,
thus overcoming the bandwidth bottleneck of the original content providers. In contrast to giant pre-planned content delivery networks (i.e., CDNs such as Akamai~\cite{Akamai-2021-COM} and CloudFlare~\cite{CloudFlare-2021-COM}),
p2p content delivery can crowdsource unused bandwidth resources of tremendous Internet peers, thus having a wide array of benefits including robust service availability, bandwidth cost savings, and scalable peak-demand handling~\cite{Aalmashaqbeh-2019-Columbia,Anjum-et-al-2017-CN}.
Recently, renewed attention to p2p content delivery has gathered \cite{Wang-et-al-2018-ICC,Goyal-et-al-2019-Usenix,Aalmashaqbeh-2019-Columbia} due to the fast popularization of decentralized storage networks (DSNs)~\cite{Swarm-ethereum-2020,IPFS-Benet-2014,Filecoin-2017-Online, StorJ-2018-WhiteBook, Miller-et-al-2014-SP}. Indeed, DSNs feature decentralized and robust {\em content storage}, but lack well-designed {\em content delivery} mechanisms catering for a prosperous content consumption market in the p2p setting, where the content shall not only be reliably stored but must also remain quickly {\em retrievable} despite potentially malicious participants~\cite{Content_Consum_WARC-2020-WEB, Filecoin-Retrieval-2021-Spec}.
The primary challenge of designing a proper delivery mechanism for complementing DSNs is to realize the strict guarantee of ``fairness'' against adversarial peers. In particular, a fair p2p content delivery system has to promise well-deserved items (e.g., retrieval of valid contents, well-paid rewards to spent bandwidth) to all participants~\cite{Fan-et-al-2008-TON}.
Otherwise, free-riding parties can abuse the system~\cite{Feldman-2004-CEC,Locher-et-al-2006-HotNets,Piatek-et-al-2007-NSDI} and cause rational ones to escape, eventually resulting in possible system collapse~\cite{Hardin-et-al-2009-Science}. We reason as follows to distinguish two types of quintessential fairness, namely {\em delivery fairness} and {\em exchange fairness}, in the p2p content delivery setting where three parties, i.e., content {\em provider}, content {\em deliverer} and content {\em consumer}, are involved.
\smallskip
\noindent
{\bf Exchange fairness is not delivery fairness}.
Exchange fairness~\cite{Blum-1983-STOC,Damgaard-1995-Cryptology,Asokan-et-al-2000-JSAC,Kupccu--et-al-2010-RSAC,Dziembowski-et-al-2018-CCS,Maxwell-2016-BitcoinCore}, specifically for digital goods (such as signatures and videos), refers to ensuring that one party's input remains {\em confidential} until it learns the other party's input. Unfortunately, in the p2p content delivery setting, it is insufficient to merely consider exchange fairness because a content deliverer would expect to receive rewards proportional to the bandwidth resources it spends. Noticeably, exchange fairness fails to capture such new desiderata related to bandwidth cost, as it does not rule out that a deliverer may receive no reward after transferring a huge amount of {\em encrypted} data to the other party, which clearly breaks the deliverer's expectation of being well-paid but does not violate exchange fairness at all.
Consider FairSwap \cite{Dziembowski-et-al-2018-CCS} as a concrete example: the deliverer first sends the encrypted content and a semantically secure digest to the consumer, then waits for a confirmation message from the consumer (through the blockchain) confirming her receipt of the ciphertext, so that the deliverer can reveal his encryption key to the content consumer via the blockchain; but, in case the consumer aborts, all bandwidth used to send the ciphertext is wasted, yielding no reward for the deliverer. A seemingly enticing way to mitigate the above attack on delivery fairness in FairSwap could be splitting the content into $n$ smaller chunks and running a FairSwap protocol for each chunk, but the on-chain cost would grow linearly in $n$, resulting in prohibitive on-chain cost for large contents such as movies. Adapting other fair exchange protocols for delivery fairness would encounter similar issues as FairSwap. Hence, an efficient construction satisfying delivery fairness remains unclear.
To capture the ``special" exchanged item for deliverers, we formulate delivery fairness (in Sec.~\ref{sec:ProblemFormulation}), stating that deliverers can receive rewards (nearly) proportional to the contributed bandwidth for delivering data to the consumers.
\smallskip
\noindent
{\bf Insufficiencies of existing ``delivery fairness''}. A range of existing literature~\cite{Sherman-et-al-2012-TON,Kamvar-et-al-2003-WWW,Sirivianos-2007-Usenix,Shin-et-al-2017-TON,Levin-et-al-2008-SIGCOMM} involves delivery fairness for p2p delivery. However, to our knowledge, none assures delivery fairness in the {\em cryptographic} sense, as we seek to do. Specifically, they~\cite{Sherman-et-al-2012-TON,Kamvar-et-al-2003-WWW,Sirivianos-2007-Usenix,Shin-et-al-2017-TON,Levin-et-al-2008-SIGCOMM} are presented in {\em non-cooperative game-theoretic} settings where independent attackers free-ride spontaneously without communicating their strategies, and the attackers are rational with the intention to maximize their own benefits. Therefore, these works boldly ignore adversaries that intend to break the system. Unfortunately, such rationality assumptions are particularly elusive in ad-hoc open systems accessible by all malicious parties. The occurrence of tremendous real-world attacks on ad-hoc open systems~\cite{Mehar-et-al-IGI-2019, Botnet-2020-ENISA} hints at how fragile the prior studies' heavy assumptions can be and further weakens the confidence of using them in real-world p2p content delivery.
\smallskip
\noindent
{\bf Lifting for ``exchange fairness'' between provider and consumer}. Besides the natural delivery fairness, it is equally vital to ensure exchange fairness for providers and consumers in a basic context of p2p content delivery, especially with the end goal of complementing DSNs and enabling content providers to sell contents to consumers while delegating costly delivery/storage to a p2p network. In particular, the content provider should be guaranteed to receive payments proportional to the amount of correct data learned by the consumer; vice versa, the consumer only has to pay if indeed receiving qualified content.
Na\"\i ve attempts of tuning a fair exchange protocol~\cite{Asokan-et-al-2000-JSAC,Belenkiy-et-al-2007-WPES,Kupccu--et-al-2010-RSAC,Dziembowski-et-al-2018-CCS,Maxwell-2016-BitcoinCore,Eckey-et-al-2019-Arxiv} into p2p content delivery can guarantee neither delivery fairness (as analyzed earlier) nor exchange fairness: simply running fair exchange protocols twice between the deliverers and the content providers and between the deliverers and the consumers, respectively, would leak valuable contents, raising the threat of massive content leakage. Even worse, this idea disincentivizes the deliverers as they have to pay for the whole content before making a life by delivering the content to consumers.
\smallskip
\noindent
{\bf Our contributions.} Overall, it remains an open problem to realize such strong fairness guarantees in p2p content delivery to protect {\em all} providers, deliverers, and consumers. We for the first time formalize such security intuitions into a well-defined cryptographic problem on fairness, and present a couple of efficient blockchain-based protocols to solve it. In sum, our contributions are:
\begin{enumerate}
\item \underline{\smash{\em Formalizing p2p content delivery with delivery fairness.}} We formulate the problem of p2p content delivery with desired security goals, where fairness ensures that every party is fairly treated even others arbitrarily collude or are corrupted.
\item \underline{\smash{\em Verifiable fair delivery.}} We put forth a novel delivery fairness notion between a sender and a receiver dubbed verifiable fair delivery ($\mathsf{VFD}$): a non-interactive honest verifier can check whether a sender indeed sends a sequence of qualified data chunks to a receiver as long as not both the sender and the receiver are corrupted.
This primitive is powerful in the sense that: (i) the verifier only has to be non-interactive and honest, so it can be easily instantiated via the blockchain;
(ii) qualified data can be flexibly specified through a global predicate known by the sender, the receiver and the verifier,
so the predicate validation can be tuned to augment $\mathsf{VFD}$ in a certain way for the full-fledged p2p delivery scenario.
\item \underline{\smash{\em Lifting $\mathsf{VFD}$ for full-fledged p2p content delivery.}} We specify $\mathsf{VFD}$ to validate that each data chunk is signed by the original content provider, and wrap up the concrete instantiation to design an efficient blockchain-enabled fair p2p content delivery protocol $\mathsf{FairDownload}${}, which allows: (i) the consumer can retrieve content via downloading, i.e., {\em view-after-delivery}; (ii) minimal involvement of the content provider in the sense that only two messages are needed from the provider during the whole course of the protocol execution; (iii) one-time contract deployment and preparation while repeatable delivery of the same content to different consumers.
Thanks to the carefully instantiated $\mathsf{VFD}$,
the provider's content cannot be modified by the deliverer, so we essentially can view the fairness of consumer and provider as a fair exchange problem for digital goods between two parties.
%
To facilitate the ``two-party'' exchange fairness, we leverage the proof-of-misbehavior method (instead of using heavy cryptographic proofs for honesty \cite{Maxwell-2016-BitcoinCore}),
thus launching a simple mechanism to allow the consumer to dispute and prove that the provider indeed sells wrong content inconsistent with a certain digest;
along the way, we dedicatedly tune this component for better efficiency:
(i)~universal composability security \cite{Dziembowski-et-al-2018-CCS} is explicitly given up to employ {\em one-way security} in the stand-alone setting;
(ii) the generality of supporting any form of dispute on illegitimate contents \cite{Dziembowski-et-al-2018-CCS} is weakened to disputes on contents inconsistent with a digest in the form of a Merkle tree root.
\item \underline{\smash{\em Less latency for streaming delivery.}}
Though the protocol $\mathsf{FairDownload}${} is efficient and minimizes the provider's activities, it also incurs considerable latency since the consumer can obtain the content only after all data chunks are delivered.
To accommodate the streaming scenario where the consumer can {\em view-while-delivery}, we propose another simple yet efficient protocol $\mathsf{FairStream}${}, where each data chunk can be retrieved in $O(1)$ communication rounds. Though the design requires more involvement of the content provider, the provider's overall communication workload remains much smaller than the content itself. $\mathsf{FairStream}${} realizes the same security goals as the $\mathsf{FairDownload}${} protocol.
\item \underline{\smash{\em Optimal on-chain and deliverer complexities.}} Both the downloading and streaming protocols achieve asymptotically optimal $\Tilde{O}(\eta + \lambda)$ on-chain computational costs even when disputes occur. The on-chain costs relate only to the small chunk size parameter $\eta$ and the even smaller security parameter $\lambda$. This becomes critical to preserve the low cost of blockchain-based p2p content delivery.
%
Moreover, in both protocols, the deliverer only sends ${O}(\eta + \lambda)$ bits amortized for each chunk. Since $\lambda$ is much smaller than $\eta$, this corresponds to asymptotically optimal deliverer communication, and is the key to keeping p2p downloading and p2p streaming highly efficient.
\item \underline{\smash{\em Optimized implementations.}} We implement\footnote{Code availability: https://github.com/Blockchain-World/FairThunder.git} and optimize $\mathsf{FairDownload}${} and $\mathsf{FairStream}${}.
Various non-trivial optimizations are performed to improve the critical on-chain performance, including an efficient on-chain implementation of ElGamal verifiable decryption over the bn-128 curve.
%
Extensive experiments are also conducted atop Ethereum Ropsten network, showing real-world applicability.
\end{enumerate}
\smallskip
\noindent
{\bf Structure.} Section~\ref{sec:Preliminaries} presents the notations and involved cryptographic primitives. In Section~\ref{sec:vfd}, we introduce a building block, viz. verifiable fair delivery ($\mathsf{VFD}$). Section~\ref{sec:ProblemFormulation} gives the formulation of fair p2p content delivery and the desired security goals. In Section~\ref{sec:ProtocolDesign_Downloading}, we present the fair p2p content delivery in the downloading setting with the instantiation of the $\mathsf{VFD}$ module, and in Section~\ref{sec:ProtocolDesign_Streaming}, we further present the fair p2p content delivery for the streaming setting. Section~\ref{sec:ImplementationandEvaluation} provides the details of the protocols' implementation and evaluation. Section~\ref{sec:RelatedWork} elaborates the comparison with related works, and we conclude in Section~\ref{sec:Conclusion}.
\section{Preliminaries}
\label{sec:Preliminaries}
In this section, we briefly describe the notations and relevant cryptographic primitives.
\smallskip
\noindent {\bf Notations.} We use $[n]$ to denote the set of integers $\{1,\dots,n\}$, $[a,b]$ to denote the set $\{a,\dots,b\}$, $x||y$ to denote a string concatenating $x$ and $y$, $\leftarrow_{\$}$ to denote uniformly sampling, and $\preceq$ to denote the prefix relationship.
\smallskip
\noindent \textbf{Global $\mathsf{ledger}$.}
It provides the primitive of cryptocurrency that can deal with ``coin'' transfers transparently.
Concretely, each entry of the dictionary $\mathsf{ledger}[\mathcal{P}_i]$ records the balance of party $\mathcal{P}_i$, and is global (i.e., accessible by all system participants including the adversary).
Moreover, the global dictionary $\mathsf{ledger}$ can be a subroutine of the so-called \textit{smart contract} -- a pre-defined piece of automatically executing code -- that can transact ``coins'' to a designated party by invoking the $\mathsf{ledger}$ dictionary when some conditions are met. For example, if a smart contract (which can be seen as a certain ideal functionality) executes $\mathsf{ledger}[\mathcal{P}_i]=\mathsf{ledger}[\mathcal{P}_i]+\bitcoinA$, the balance of $\mathcal{P}_i$ would increase by $\bitcoinA$.
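For intuition, the $\mathsf{ledger}$-plus-contract abstraction can be sketched in a few lines of Python; the class names and the pay-per-chunk logic below are purely illustrative stand-ins, not part of the formal model:

```python
class Ledger:
    """Global ledger dictionary: bal[P] records party P's balance."""
    def __init__(self, balances):
        self.bal = dict(balances)

    def transfer(self, src, dst, coins):
        # Conditional "coin" transfer, visible to all participants.
        assert self.bal[src] >= coins, "insufficient balance"
        self.bal[src] -= coins
        self.bal[dst] += coins


class PayPerChunkContract:
    """Toy arbiter contract: pays the deliverer per acknowledged chunk."""
    def __init__(self, ledger, provider, deliverer, coin_per_chunk):
        self.ledger = ledger
        self.provider = provider
        self.deliverer = deliverer
        self.coin = coin_per_chunk

    def claim(self, ctr):
        # Triggered once the pre-specified condition (ctr chunks delivered) is met.
        self.ledger.transfer(self.provider, self.deliverer, ctr * self.coin)
```

A contract execution such as `claim(10)` then plays the role of `ledger[D] = ledger[D] + 10·coin` in the text.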
\smallskip
\noindent \textbf{Merkle tree.}
This consists of a tuple of algorithms $(\mathsf{BuildMT}, \mathsf{GenMTP},
\mathsf{VerifyMTP})$.
$\mathsf{BuildMT}$ accepts as input a sequence of elements $m= (m_1, m_2, \cdots, m_n)$ and outputs the Merkle tree $\mathsf{MT}$ with $\mathsf{root}$ that commits to $m$. We write $\mathsf{root}(\mathsf{MT})$ for the Merkle tree $\mathsf{MT}$'s $\mathsf{root}$. $\mathsf{GenMTP}$ takes as input the Merkle tree $\mathsf{MT}$ (built for $m$) and the $i$-th element $m_i$ in $m$, and outputs a proof $\pi_i$ attesting the inclusion of $m_i$ at position $i$ of $m$. $\mathsf{VerifyMTP}$ takes as input the $\mathsf{root}$ of Merkle tree $\mathsf{MT}$, the index $i$, the Merkle proof $\pi_i$, and $m_i$, and outputs either 1 (accept) or 0 (reject). The security of the Merkle tree scheme ensures that: for any probabilistic polynomial-time (P.P.T.) adversary $\mathcal{A}$, any sequence $m$ and any index $i$, conditioned on $\mathsf{MT}$ being a Merkle tree built for $m$, $\mathcal{A}$ cannot produce a fake Merkle tree proof fooling $\mathsf{VerifyMTP}$ into accepting $m'_i \ne m_i\in m$, except with negligible probability, given $m$, $\mathsf{MT}$ and the security parameters.
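The three algorithms can be sketched in Python as follows (the hash choice and the duplicate-last-node padding for odd levels are our own illustrative assumptions; the abstract syntax above does not mandate them):

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_mt(chunks):
    """BuildMT: return all tree levels, leaves first; levels[-1][0] is the root."""
    level = [_h(c) for c in chunks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd-sized levels
            level = level + [level[-1]]
        level = [_h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        levels.append(level)
    return levels

def root(mt):
    return mt[-1][0]

def gen_mtp(mt, i):
    """GenMTP: sibling path for leaf i (the proof pi_i)."""
    proof = []
    for level in mt[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[i ^ 1])         # sibling at this level
        i //= 2
    return proof

def verify_mtp(rt, i, proof, chunk):
    """VerifyMTP: recompute the root from the leaf and the sibling path."""
    node = _h(chunk)
    for sib in proof:
        node = _h(node + sib) if i % 2 == 0 else _h(sib + node)
        i //= 2
    return node == rt
```

Here `verify_mtp(root(mt), i, gen_mtp(mt, i), m_i)` returns `True` exactly when the chunk sits at position `i` under the committed root.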
\smallskip
\noindent \textbf{Verifiable decryption.} We consider a specific verifiable public key encryption ($\mathsf{VPKE}$) scheme consisting of a tuple of algorithms $(\mathsf{VPKE.KGen},\mathsf{VEnc}, \mathsf{VDec}, \mathsf{ProvePKE}, \mathsf{VerifyPKE})$ that allows the decryptor to produce the plaintext along with a proof attesting the correct decryption~\cite{Camenisch-et-al-2003-Crypto}. Specifically, $\mathsf{VPKE.KGen}$ outputs a public-private key pair, i.e., $(h,k) \leftarrow \mathsf{VPKE.KGen}(1^{\lambda})$ where $\lambda$ is a security parameter. The public key encryption satisfies semantic security. Furthermore, for any $(h,k)\leftarrow \mathsf{VPKE.KGen}(1^{\lambda})$, the $\mathsf{ProvePKE}_{k}$ algorithm takes as input the private key $k$ and the cipher $c$, and outputs a message $m$ with a proof $\pi$; while the $\mathsf{VerifyPKE}_{h}$ algorithm takes as input the public key $h$ and $(m,c,\pi)$, and outputs $1/0$ to accept/reject the statement that $m=\mathsf{VDec}_{k}(c)$. Besides semantic security, the verifiable decryption scheme needs to satisfy the following extra properties:
\begin{compactenum}
\item[$\bullet$] \textit{Completeness}. $Pr[\mathsf{VerifyPKE}_{h}(m,c,\pi)=1 | (m,\pi)\leftarrow \mathsf{ProvePKE}_{k}(c)]=1$, for $\forall$ $c$ and $(h,k)\leftarrow\mathsf{KGen}(1^{\lambda})$;
\item[$\bullet$] \textit{Soundness}. For any $(h,k)\leftarrow\mathsf{KGen}(1^{\lambda})$ and $c$, no probabilistic poly-time (P.P.T.) adversary $\mathcal{A}$ can produce a proof $\pi$ fooling $\mathsf{VerifyPKE}_{h}$ to accept that $c$ is decrypted to $m'$ if $m' \neq \mathsf{VDec}_{k}(c)$ except with negligible probability;
\item[$\bullet$] \textit{Zero-Knowledge}. The proof $\pi$ can be simulated by a P.P.T. simulator $\mathcal{S}_{\mathsf{VPKE}}$ taking as input only the public knowledge $m,h,c$; hence nothing beyond the truth of the statement $(m,c) \in \{(m,c)|m = \mathsf{VDec}_{k}(c)\}$ is leaked.
\end{compactenum}
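To make the abstraction concrete, below is a toy Python sketch of ElGamal verifiable decryption, where correct decryption is attested by a Chaum--Pedersen-style proof that $\log_g h = \log_{c_1}(c_2/m)$. The tiny group parameters ($p=23$, $q=11$, $g=4$) are for illustration only and offer no security; the paper's actual implementation works over the bn-128 curve:

```python
import hashlib
import secrets

# Toy group: quadratic residues mod the safe prime p = 2q + 1 (no security!).
p, q, g = 23, 11, 4

def _challenge(*vals):
    # Fiat-Shamir challenge derived from the transcript.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    k = secrets.randbelow(q - 1) + 1
    return pow(g, k, p), k                      # (public h, private k)

def venc(h, m):
    """ElGamal encryption of a group element m."""
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(h, r, p)) % p  # (c1, c2)

def prove_pke(k, c):
    """Decrypt and output (m, pi): a Chaum-Pedersen proof of correct decryption."""
    c1, c2 = c
    m = (c2 * pow(c1, (q - k) % q, p)) % p       # m = c2 / c1^k
    w = secrets.randbelow(q - 1) + 1
    a1, a2 = pow(g, w, p), pow(c1, w, p)
    e = _challenge(c1, c2, m, a1, a2)
    z = (w + e * k) % q
    return m, (a1, a2, z)

def verify_pke(h, m, c, proof):
    """Accept iff m = VDec_k(c), i.e., (g, h, c1, c2/m) is a DH tuple."""
    c1, c2 = c
    a1, a2, z = proof
    e = _challenge(c1, c2, m, a1, a2)
    u = (c2 * pow(m, -1, p)) % p                 # should equal c1^k
    return (pow(g, z, p) == (a1 * pow(h, e, p)) % p and
            pow(c1, z, p) == (a2 * pow(u, e, p)) % p)
```

Completeness and zero-knowledge hold because the verifier only checks the two exponent equations against public values; soundness rests on the discrete logarithm in the (here, toy) group.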
\smallskip
\noindent \textbf{Cryptographic primitives.} We also consider: (i) a cryptographic hash function $\mathcal{H}: \{0,1\}^*\rightarrow\{0,1\}^{\lambda}$ in the random oracle model \cite{Bellare-and-Rogaway-1993-CCS}; (ii) a {\em semantically secure} (fixed-length) symmetric encryption consisting of $(\mathsf{SE.KGen}, \mathsf{SEnc}, \mathsf{SDec})$; (iii) an {\em existential unforgeability under chosen message attack} (EU-CMA) secure digital signature scheme consisting of the polynomial-time algorithms $(\mathsf{SIG.KGen}, \mathsf{Sign}, \mathsf{Verify})$.
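As a toy stand-in for the symmetric encryption $(\mathsf{SE.KGen}, \mathsf{SEnc}, \mathsf{SDec})$, a hash-derived counter-mode keystream suffices for illustration (this is not a vetted cipher; it simply plays the role that, e.g., AES-CTR would play in a real deployment):

```python
import hashlib
import secrets

def se_kgen(lam: int = 16) -> bytes:
    """SE.KGen: sample a random key of lam bytes."""
    return secrets.token_bytes(lam)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Counter-mode keystream derived from a hash (toy construction).
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def senc(key: bytes, m: bytes) -> bytes:
    """SEnc: prepend a fresh nonce, XOR the plaintext with the keystream."""
    nonce = secrets.token_bytes(16)
    body = bytes(a ^ b for a, b in zip(m, _keystream(key, nonce, len(m))))
    return nonce + body

def sdec(key: bytes, c: bytes) -> bytes:
    """SDec: recover the plaintext by XORing with the same keystream."""
    nonce, body = c[:16], c[16:]
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))
```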
\section{Warm-up: Verifiable Fair Delivery}
\label{sec:vfd}
We first warm up and set forth a building block termed {\em verifiable fair delivery} ($\mathsf{VFD}$), which enables an honest verifier to verify that a sender indeed transferred some amount of data to a receiver. It later acts as a key module in the fair p2p content delivery protocol. The high-level idea of $\mathsf{VFD}$ is as follows: the receiver needs to send back a signed ``receipt'' to acknowledge the sender's bandwidth contribution before it can continue receiving the next data chunk. Considering that the data chunks of the same size $\eta$ are transferred {\em sequentially} starting from the first chunk, the sender can always use the latest receipt, which contains the chunk index, to prove its total contribution to the verifier. Intuitively, the sender wastes {\em at most} the bandwidth of transferring one chunk.
\smallskip
\noindent
\textbf{Syntax of $\mathsf{VFD}$}.
The $\mathsf{VFD}$ protocol is run among an interactive poly-time Turing-machine (ITM) sender denoted by $\mathcal{S}$, an ITM receiver denoted by $\mathcal{R}$, and a non-interactive Turing-machine verifier denoted by $\mathcal{V}$, and follows this syntax:
\begin{itemize}
\item {\bf Sender.} The sender $\mathcal{S}$ can be activated by calling an interface $\mathcal{S}.\mathsf{send}()$
on input a sequence of $n$ data chunks and their corresponding validation strings denoted by $((c_1,\sigma_{c_1}), \dots, (c_n,\sigma_{c_n}))$, where there exists an efficient and global predicate $\Psi(i, c_i, \sigma_{c_i})\rightarrow\{0,1\}$ to check whether $c_i$ is the $i$-th valid chunk according to $\sigma_{c_i}$; once activated, the sender $\mathcal{S}$ interacts with the receiver $\mathcal{R}$, and opens an interface $\mathcal{S}.\mathsf{prove()}$ that can be invoked to return a proof string $\pi$ indicating the number of sent chunks;
\item {\bf Receiver.} The receiver $\mathcal{R}$ can be activated by calling an interface $\mathcal{R}.\mathsf{recv}()$ taking as input the description of the global predicate $\Psi(\cdot)$ to interact with $\mathcal{S}$, and outputs a sequence $((c_1,\sigma_{c_1}), \dots, (c_{n'},\sigma_{c_{n'}}))$, where $n' \in [n]$ and every $(c_i,\sigma_{c_i})$ is valid according to $\Psi(\cdot)$;
\item {\bf Verifier.} The verifier $\mathcal{V}$ takes as input the proof $\pi$ generated by $\mathcal{S}.\mathsf{prove()}$, and outputs an integer $\mathsf{ctr} \in \{0,\cdots,n\}$.
\end{itemize}
\smallskip
\noindent
{\bf Security of $\mathsf{VFD}$.} The $\mathsf{VFD}$ protocol must satisfy the following security properties:
\begin{itemize}
\item {\bf Termination}. If at least one of $\mathcal{S}$ and $\mathcal{R}$ is honest, the $\mathsf{VFD}$ protocol terminates within {\em at most} $2n$ rounds, where $n$ is the number of content chunks;
\item {\bf Completeness}. If $\mathcal{S}$ and $\mathcal{R}$ are both honest and activated, then after $2n$ rounds, $\mathcal{S}$ is able to generate a proof $\pi$ on which $\mathcal{V}$ outputs $\mathsf{ctr}\equiv n$, while $\mathcal{R}$ outputs $((c_1,\sigma_{c_1}), \dots, (c_n,\sigma_{c_n}))$, which is identical to $\mathcal{S}$'s input;
\item {\bf Verifiable $\eta$ delivery fairness}. When one of $\mathcal{S}$ and $\mathcal{R}$ maliciously aborts, $\mathsf{VFD}$ shall satisfy the following delivery fairness requirements:
\begin{itemize}
\item \underline{\smash{\em Verifiable delivery fairness against $\mathcal{S}^*$.}} For any corrupted probabilistic poly-time (P.P.T.) sender $\mathcal{S}^*$ controlled by $\mathcal{A}$, it is guaranteed that: the honest receiver $\mathcal{R}$ will always receive the valid sequence $(c_1,\sigma_{c_1}),\dots,(c_\mathsf{ctr},\sigma_{c_\mathsf{ctr}})$ if $\mathcal{A}$ can produce a proof $\pi$ that enables $\mathcal{V}$ to output $\mathsf{ctr}$.
\item \underline{\smash{\em Verifiable delivery fairness against $\mathcal{R}^*$.}} For any corrupted P.P.T. receiver $\mathcal{R}^*$ controlled by $\mathcal{A}$, it is ensured that: the honest sender $\mathcal{S}$ can always generate a proof $\pi$, which enables $\mathcal{V}$ to output {\em at least} $(\mathsf{ctr}-1)$ if $\mathcal{A}$ receives the valid sequence $(c_1,\sigma_{c_1}),\dots,(c_\mathsf{ctr},\sigma_{c_\mathsf{ctr}})$. That is, $\mathcal{S}$ wastes {\em at most} the bandwidth of delivering one content chunk of size $\eta$.
\end{itemize}
\end{itemize}
\smallskip
\noindent
\textbf{$\mathsf{VFD}$ protocol $\Pi_\mathsf{VFD}$}. We consider the authenticated setting in which the sender $\mathcal{S}$ and the receiver $\mathcal{R}$ have generated public-private key pairs $(pk_{\mathcal{S}}, sk_{\mathcal{S}})$ and $(pk_{\mathcal{R}}, sk_{\mathcal{R}})$ for digital signature, respectively; and they have announced the public keys to bind to themselves. Then, $\mathsf{VFD}$ with the global predicate $\Psi(\cdot)$ can be realized by the protocol $\Pi_\mathsf{VFD}$ hereunder among $\mathcal{S}$, $\mathcal{R}$ and $\mathcal{V}$ against a P.P.T. static adversary in the \textit{stand-alone} setting\footnote{We omit the \textit{session id} (denoted as $sid$) in the stand-alone context for brevity. To defend against \textit{replay} attack in concurrent sessions, it is trivial to let the authenticated messages include an $sid$ field, which, for example, can be instantiated by the hash of the transferred data identifier $\mathsf{root}_m$, the involved parties' addresses and an increasing-only nonce, namely $sid := \mathcal{H}(\mathsf{root}_m||\mathcal{V}\_address||pk_{\mathcal{S}}||pk_{\mathcal{R}}||nonce)$.} with the synchronous network assumption:
\begin{itemize}
\item {\bf Construction of sender.} The sender $\mathcal{S}$, after activated via $\mathcal{S}.\mathsf{send}()$ with the input $((c_1,\sigma_{c_1}), \dots, (c_n,\sigma_{c_n}))$, $pk_{\mathcal{S}}$ and $pk_{\mathcal{R}}$, starts a timer $\mathcal{T}_{\mathcal{S}}$ lasting two synchronous rounds, initializes a variable $\pi_\mathcal{S}:=null$, and executes as follows:
\begin{itemize}
\item For each $i \in [n]$: the sender sends $(\mathsf{deliver}, i, c_i, \sigma_{c_i})$ to $\mathcal{R}$, and waits for the response message $(\mathsf{receipt}, i, \sigma_{\mathcal{R}}^{i})$ from $\mathcal{R}$. If $\mathcal{T}_{\mathcal{S}}$ expires before receiving the response, breaks the iteration; otherwise $\mathcal{S}$ verifies whether $\mathsf{Verify}(\mathsf{receipt}||i||pk_{\mathcal{R}}||pk_{\mathcal{S}}, \sigma_{\mathcal{R}}^{i}, pk_{\mathcal{R}})\equiv 1$ or not, if {\em true}, resets $\mathcal{T}_{\mathcal{S}}$, outputs $\pi_\mathcal{S}:=(i, \sigma_{\mathcal{R}}^{i})$, and continues to run the next iteration (i.e., increasing $i$ by one); if {\em false}, breaks the iteration;
\item Upon $\mathcal{S}.\mathsf{prove}()$ is invoked, it returns $\pi_\mathcal{S}$ as the $\mathsf{VFD}$ proof and halts.
\end{itemize}
\item {\bf Construction of receiver.} The receiver $\mathcal{R}$, after activated via $\mathcal{R}.\mathsf{recv}()$ with the input $pk_{\mathcal{S}}$ and $(pk_{\mathcal{R}}, sk_{\mathcal{R}})$, starts a timer $\mathcal{T}_{\mathcal{R}}$ lasting two synchronous rounds and operates as follows: for each $j \in [n]$: $\mathcal{R}$ waits for $(\mathsf{deliver}, j, c_j, \sigma_{c_j})$ from $\mathcal{S}$ and halts if $\mathcal{T}_{\mathcal{R}}$ expires before receiving the $\mathsf{deliver}$ message; otherwise $\mathcal{R}$ verifies whether $\Psi(j, c_j, \sigma_{c_j})\equiv1$ or not; if {\em true}, it resets $\mathcal{T}_{\mathcal{R}}$, outputs $(c_j, \sigma_{c_j})$, and sends $(\mathsf{receipt}, j, \sigma^{j}_{\mathcal{R}})$ to $\mathcal{S}$ where $\sigma^{j}_{\mathcal{R}}\leftarrow \mathsf{Sign}(\mathsf{receipt}||j||pk_{\mathcal{R}}||pk_{\mathcal{S}}, sk_{\mathcal{R}})$; it halts if {\em false}. Note that the global predicate $\Psi(\cdot)$ is efficient, as essentially it just performs a signature verification.
\item {\bf Construction of verifier}. Upon the input $\pi_{\mathcal{S}}$, the verifier $\mathcal{V}$ parses it into $(\mathsf{ctr}, \sigma_{\mathcal{R}}^{\mathsf{ctr}})$, and checks whether $\mathsf{Verify}(\mathsf{receipt}||\mathsf{ctr}||pk_{\mathcal{R}}||pk_{\mathcal{S}}, \sigma_{\mathcal{R}}^{\mathsf{ctr}}, pk_{\mathcal{R}})\equiv1$ or not; if {\em true}, it outputs $\mathsf{ctr}$, or else outputs 0. Recall that $\mathsf{Verify}$ is to verify signatures.
\end{itemize}
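The message flow of $\Pi_\mathsf{VFD}$ can be condensed into the following Python sketch. An HMAC under the receiver's key stands in for the EU-CMA signature scheme, so (unlike the protocol) receipt verification here needs the signing key; `abort_after` models a receiver that stops acknowledging mid-run:

```python
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    # HMAC stand-in for Sign(msg, sk_R) in the EU-CMA scheme.
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

def run_vfd(chunks, k_R, psi, abort_after=None):
    """Deliver chunks sequentially; the receiver returns a signed receipt per
    chunk. Returns (sender_proof, received_chunks) on completion or abort."""
    proof, received = None, []
    for i, (c_i, sigma_i) in enumerate(chunks, start=1):
        if abort_after is not None and i > abort_after:
            break                          # corrupted R* stops acknowledging
        if not psi(i, c_i, sigma_i):       # R checks the chunk's validity
            break
        received.append((c_i, sigma_i))
        receipt = sign(k_R, b"receipt|%d" % i)
        # S verifies the receipt before delivering the next chunk.
        if not verify(k_R, b"receipt|%d" % i, receipt):
            break
        proof = (i, receipt)               # the latest receipt is the VFD proof
    return proof, received

def verifier(proof, k_R):
    """Contract-side verifier V: output the acknowledged counter ctr, else 0."""
    if proof is None:
        return 0
    ctr, receipt = proof
    return ctr if verify(k_R, b"receipt|%d" % ctr, receipt) else 0
```

Note how an early abort by the receiver after chunk `ctr` still leaves the sender holding a receipt that makes the verifier output `ctr`, matching the verifiable $\eta$ delivery fairness argument.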
\begin{lemma}
\label{lemma:VFD_completeness_fairness}
{In the synchronous authenticated network and stand-alone setting, the protocol $\Pi_\mathsf{VFD}$ satisfies termination, completeness and the verifiable $\eta$ delivery fairness against non-adaptive P.P.T. adversary corrupting one of the sender and the receiver.}
\end{lemma}
\noindent{\em Proof.} If both the sender and the receiver are honest, there would be $2n$ communication rounds since for every delivered chunk, the sender obtains a ``receipt" from the receiver for acknowledging bandwidth contribution. If one malicious party aborts, the other honest one would also terminate after its maintained timer expires, resulting in less than $2n$ communication rounds. Therefore, the termination property is guaranteed.
In addition, when both the sender $\mathcal{S}$ and the receiver $\mathcal{R}$ are honest, the {\em completeness} of $\mathsf{VFD}$ is immediate: in each iteration, $\mathcal{S}$ starts to deliver the next chunk after receiving the receipt from $\mathcal{R}$ within 2 rounds, i.e., a round-trip time. After $2n$ synchronous rounds, $\mathcal{R}$ receives the chunk-validation pairs $((c_1, \sigma_{c_1}),\cdots,(c_n,\sigma_{c_n}))$ and $\mathcal{S}$ outputs the last receipt as a proof $\pi$, which the verifier $\mathcal{V}$ takes as input to output $n$, demonstrating $\mathcal{S}$'s delivery contribution.
For the $\eta$ delivery fairness of $\mathsf{VFD}$, on one hand, the malicious $\mathcal{S}^*$ corrupted by $\mathcal{A}$ may abort after receiving the $\mathsf{ctr}$-th ($1\leq \mathsf{ctr}\leq n$) receipt. In that case, $\mathcal{R}$ is still guaranteed to receive a valid sequence $((c_1,\sigma_{c_1}),\cdots, (c_{\mathsf{ctr}},\sigma_{c_{\mathsf{ctr}}}))$ with overwhelming probability, unless $\mathcal{A}$ can forge $\mathcal{R}$'s signature; this, however, requires $\mathcal{A}$ to break the underlying EU-CMA signature scheme, which succeeds with only negligible probability.
On the other hand,
for the malicious $\mathcal{R}^*$ corrupted by $\mathcal{A}$,
if $\mathcal{V}$ takes the honest $\mathcal{S}$'s proof and outputs $\mathsf{ctr}$, then $\mathcal{S}$ has sent {\em at most} $(\mathsf{ctr}+1)$ chunk-validation pairs, i.e., $(c_i, \sigma_{c_i})$, to $\mathcal{A}$. Overall, $\mathcal{S}$ wastes {\em at most} the bandwidth of delivering one chunk of size $\eta$. Hence, the $\eta$ delivery fairness of $\mathsf{VFD}$ is rigorously guaranteed.
\section{Formalizing p2p Content Delivery}
\label{sec:ProblemFormulation}
Here we extend the delivery fairness between a sender and a receiver, and define the properties of p2p content delivery required by the provider, the deliverer and the consumer.
\subsection{System Model}
\noindent
{{\bf Participating Parties}}. We consider the following explicit entities (i.e., interactive Turing machines by cryptographic convention) in the context of p2p content delivery:
\begin{itemize}
\item \underline{\smash{\em Content Provider}} is an entity (denoted by $\mathcal{P}$) that owns the original content $m$ composed of $n$ data chunks,\footnote{Remark that the content $m$ is {\em dividable} in the sense that each chunk is independent of the other chunks, e.g., each chunk is a small 10-second video.} satisfying a publicly known predicate $\phi(\cdot)$,\footnote{Throughout the paper, we consider that the predicate $\phi$ is in the form of $\phi(m) = [\mathsf{root}(\mathsf{BuildMT}(m)) \equiv \mathsf{root}_m]$, where $\mathsf{root}_m$ is the Merkle tree root of the content $m$. In practice, it can be acquired from a semi-trusted third party, such as BitTorrent forum sites~\cite{Kupccu--et-al-2010-RSAC} or VirusTotal~\cite{Janin-et-al-2020-EuroSPW}.} and $\mathcal{P}$ is willing to sell to the users of interest. Meanwhile, the provider would like to delegate the delivery of its content to a third party (viz. a deliverer) with the promise to pay $\bitcoinA_{\mathcal{P}}$ for each successfully delivered chunk.
\item \underline{\smash{\em Content Deliverer}} (denoted by $\mathcal{D}$) contributes its idle bandwidth resources to deliver the content on behalf of the content provider $\mathcal{P}$ and would receive the payment proportional to the amount of delivered data. In the p2p delivery scenario, deliverers can be some small organizations or individuals, e.g., the {\em RetrievalProvider} in Filecoin~\cite{Filecoin-Retrieval-2021-Spec}.
\item \underline{\smash{\em Content Consumer}} is an entity (denoted by $\mathcal{C}$) that would pay $\bitcoinA_{\mathcal{C}}$ for each chunk in the content $m$ by interacting with $\mathcal{P}$ and $\mathcal{D}$.
\end{itemize}
\noindent
{\bf Adversary}. Following modern cryptographic practices~\cite{Katz-et-al-2014-CRC}, we consider the adversary $\mathcal{A}$ with following standard abilities:
\begin{itemize}
\item \underline{\smash{\em Static corruptions.}} The adversary $\mathcal{A}$ can corrupt some parties only before the course of protocol executions;
\item \underline{\smash{\em Computationally bounded.}} The adversary $\mathcal{A}$ is restricted to P.P.T. algorithms;
\item \underline{\smash{\em Synchronous authenticated channel.}} We adopt the {synchronous network model of authenticated point-to-point channels} to describe the ability of $\mathcal{A}$ on controlling communications, namely, for any messages sent between honest parties, $\mathcal{A}$ is consulted to delay them up to a-priori known $\Delta$ but cannot drop, reroute or modify them. W.l.o.g., we consider a global clock in the system, and $\mathcal{A}$ can delay the messages up to a clock round \cite{Kosba-et-al-2016-SP,Kiayias-et-al-2016-Crypto}.
\end{itemize}
\noindent
{{\bf Arbiter smart contract $\mathcal{G}$}}. The system is in a hybrid model with oracle access to an arbiter smart contract $\mathcal{G}$. The contract $\mathcal{G}$ is a stateful ideal functionality that leaks all its internal states to the adversary $\mathcal{A}$ and all parties, while allowing to pre-specify some immutable conditions (that can be triggered through interacting with $\mathcal{P}$, $\mathcal{D}$, and $\mathcal{C}$) to transact ``coins'' over the cryptocurrency $\mathsf{ledger}$, thus ``mimicking'' the contracts in real life transparently. In practice, the contract can be instantiated through many real-world blockchains such as Ethereum~\cite{Wood-et-al-2014-Ethereum-Yellow-Paper}. Throughout this paper, the details of the arbiter contracts $\mathcal{G}$ follow the conventional pseudo-code notations in the seminal work due to Kosba {\em et al}.~\cite{Kosba-et-al-2016-SP}.
\subsection{Design Goals}
Now we formulate the problem of fair content delivery with an emphasis on the delivery fairness, which to our knowledge is the first formal definition to abstract the necessary security/utility requirements of delegated p2p content delivery.
\smallskip
\noindent
{{\bf Syntax}}.
A fair p2p content delivery protocol $\Pi=(\mathcal{P}, \mathcal{D}, \mathcal{C})$ is a tuple of three P.P.T. interactive Turing machines (ITMs) consisting of two explicit phases:
\begin{itemize}
\item \underline{\smash{\em Preparation phase.}}
The provider $\mathcal{P}$ takes as input public parameters and the content $m=(m_1,\dots, m_n) \in \{0,1\}^{\eta\times n}$ that satisfies $\phi(m)\equiv1$, where $\eta$ is the chunk size in bits and $n$ is the number of chunks, and it outputs some auxiliary data, e.g., encryption keys;
the deliverer $\mathcal{D}$ takes as input public parameters and outputs some auxiliary data, e.g., encrypted content; the consumer $\mathcal{C}$ is not involved in this phase. Note $\mathcal{P}$ deposits a budget of $n \cdot \bitcoinA_{\mathcal{P}}$ in $\mathsf{ledger}$ to incentivize $\mathcal{D}$, so that $\mathcal{P}$ can {\em minimize} its bandwidth usage in the next phase.
\item \underline{\smash{\em Delivery phase.}}
The provider $\mathcal{P}$ and the deliverer $\mathcal{D}$ take as input their auxiliary data obtained in the preparation phase, respectively, and they would receive the deserved payment; the consumer $\mathcal{C}$ takes as input public parameters and outputs the content $m$ with $\phi(m)\equiv 1$. Note $\mathcal{C}$ has a budget of $n \cdot \bitcoinA_{\mathcal{C}}$ in $\mathsf{ledger}$ to ``buy'' the content $m$ satisfying $\phi(m)\equiv1$, where $\bitcoinA_{\mathcal{C}} > \bitcoinA_{\mathcal{P}}$.
\end{itemize}
Furthermore, the fair p2p content delivery protocol $\Pi$ shall meet the following security requirements.
\smallskip
\noindent
{{\bf Completeness}}. For any content predicate $\phi(\cdot)$ in the form of $\phi(m) = [\mathsf{root}(\mathsf{BuildMT}(m)) \equiv \mathsf{root}_m]$,
conditioned on $\mathcal{P}, \mathcal{D}$ and $\mathcal{C}$ are all honest, the protocol $\Pi$ attains:
\begin{itemize}
\item The consumer $\mathcal{C}$ would obtain the qualified content $m$ satisfying $\phi(m)\equiv1$, and its balance in the global $\mathsf{ledger}[\mathcal{C}]$ would decrease by $n\cdot\bitcoinA_{\mathcal{C}}$, where $\bitcoinA_{\mathcal{C}}$ represents the amount paid by $\mathcal{C}$ for each content chunk.
\item The deliverer $\mathcal{D}$ would receive the payment $n\cdot\bitcoinA_{\mathcal{P}}$ over the global $\mathsf{ledger}$, where $\bitcoinA_{\mathcal{P}}$ represents the amount paid by $\mathcal{P}$ to $\mathcal{D}$ for delivering a content chunk to the consumer.
\item The provider $\mathcal{P}$ would receive its well-deserved payments over the ledger, namely, $\mathsf{ledger}[\mathcal{P}]$ would increase by $n\cdot(\bitcoinA_{\mathcal{C}}-\bitcoinA_{\mathcal{P}})$ as it receives $n\cdot\bitcoinA_{\mathcal{C}}$ from the consumer while it pays out $n\cdot\bitcoinA_{\mathcal{P}}$ to the deliverer.
\end{itemize}
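The three balance changes above are consistent: the consumer's total payment splits exactly into the deliverer's reward and the provider's net income. A quick sanity check with hypothetical numbers:

```python
# Hypothetical parameters: n chunks, per-chunk prices coin_C > coin_P.
n, coin_C, coin_P = 100, 5, 2

consumer_paid = n * coin_C              # ledger[C] decreases by 500
deliverer_got = n * coin_P              # ledger[D] increases by 200
provider_net = n * (coin_C - coin_P)    # ledger[P] increases by 300

# Completeness: all coins paid by C end up split between D and P.
assert consumer_paid == deliverer_got + provider_net
```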
\smallskip
\noindent
{\bf Fairness}. The protocol $\Pi$ shall satisfy the following fairness requirements:
\begin{itemize}
\item \underline{\smash{\em Consumer Fairness.}} For $\forall$ corrupted P.P.T. $\mathcal{D}^*$ and $\mathcal{P}^*$ (fully controlled by $\mathcal{A}$), it is guaranteed to the honest consumer $\mathcal{C}$ with overwhelming probability that: $\mathsf{ledger}[\mathcal{C}]$ decreases by $\ell \cdot \bitcoinA_{\mathcal{C}}$ only if $\mathcal{C}$ receives a sequence of chunks $(m_1,\dots,m_\ell) \preceq m$ where $\phi(m)\equiv1$. Intuitively, this property states that $\mathcal{C}$ pays proportionally to the valid chunks it {\em de facto} receives.
\item \underline{\smash{\em Delivery $\eta$-Fairness.}} For $\forall$ malicious P.P.T. $\mathcal{C}^*$ and $\mathcal{P}^*$ corrupted by $\mathcal{A}$, it is assured to the honest deliverer $\mathcal{D}$ that: if $\mathcal{D}$ sent overall $O(\ell\cdot\eta + 1)$ bits during the protocol, $\mathcal{D}$ should {\em at least} obtain the payment of $(\ell-1)\cdot\bitcoinA_{\mathcal{P}}$. In other words, the unpaid delivery is bounded by $O(\eta)$ bits.
\item \underline{\smash{\em Provider $\eta$-Fairness.}} For $\forall$ corrupted P.P.T. $\mathcal{C}^*$ and $\mathcal{D}^*$ controlled by $\mathcal{A}$, it is ensured to the honest provider $\mathcal{P}$ that: if $\mathcal{A}$ can output ${\eta\cdot\ell}$ bits contained in the content $m$, the provider $\mathcal{P}$ shall receive at least $(\ell-1)\cdot(\bitcoinA_{\mathcal{C}}-\bitcoinA_{\mathcal{P}})$ net income, namely, $\mathsf{ledger}[\mathcal{P}]$ increases by $(\ell-1)\cdot(\bitcoinA_{\mathcal{C}}-\bitcoinA_{\mathcal{P}})$, with all but negligible probability. That is, $\mathcal{P}$ is ensured that {\em at most} $O(\eta)$ bits of content are revealed without being well paid.
\end{itemize}
\smallskip
\noindent
{{\bf Confidentiality against deliverer}}. This is needed to protect copyrighted data against possibly corrupted deliverers; otherwise a malicious consumer could pretend to be, or collude with, a deliverer to obtain the plaintext content without paying the provider, which violates the exchange fairness for $\mathcal{P}$. Informally, we require that the corrupted $\mathcal{D}^*$, on receiving protocol scripts (e.g., the delegated content chunks from the provider), cannot produce the provider's input content, except with negligible probability, in a delivery session.\footnote{To preserve the content digital rights across multiple delivery sessions in the p2p content delivery setting, it is feasible to integrate digital rights management (DRM) schemes~\cite{Ma-et-al-2018-FGCS}, which can be an interesting future work.}
We remark that confidentiality is not captured by fairness, as it is easy to see that a protocol satisfying fairness might not provide confidentiality: once all payments are cleared and the consumer receives the whole content, such a protocol could simply let the consumer send the content to the deliverer.
\smallskip
\noindent
{{\bf Timeliness}}. When at least one of the parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ is honest (i.e., others can be corrupted by $\mathcal{A}$), the honest ones are guaranteed to halt in $O(n)$ synchronous rounds where $n$ is the number of content chunks. At the completion or abortion of the protocol, the aforementioned fairness and confidentiality are always guaranteed.
\smallskip
\noindent
{{\bf Non-trivial efficiency}}. We require the necessary non-trivial efficiency to rule out possible trivial solutions:
\begin{itemize}
\item The messages sent to $\mathcal{G}$ from honest parties are uniformly bounded by $\Tilde{O}(1)$ bits, which excludes a trivial way of using the smart contract to directly deliver the content.
\item In the delivery phase, the messages sent by honest $\mathcal{P}$ are uniformly bounded by $n\cdot\lambda$ bits, where $\lambda$ is a small cryptographic parameter, ensuring that $n\cdot\lambda$ is much smaller than the content size $|m|$.
This indicates that $\mathcal{P}$ can save its bandwidth upon the completion of the preparation phase, and excludes the trivial approach in which $\mathcal{P}$ delivers the content by itself.
\end{itemize}
\smallskip
\noindent
{{\bf Remarks}}.
We make the following discussions about the above definitions:
(i) $\phi(\cdot)$ is a public parameter known to all parties before the protocol execution;
(ii) our fairness requirements already imply the case where the adversary corrupts one party of $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ instead of two, since whenever the adversary corrupts two parties, it can let one of the corrupted two follow the original protocol; (iii) as with all cryptographic protocols, it does not make sense to consider the case where all parties are corrupted, and neither do we;
(iv) the deliverer and the provider might lose well-deserved payment, but {\em at most} lose that for one chunk, i.e., the level of unfairness is strictly bounded; (v) though we focus on the case of one \textit{single} content deliverer, our formalism and design can be extended to capture \textit{multiple} deliverers, for example, when the whole content is cut to multiple pieces and each piece is delegated to a distinct deliverer. The extension with strong fairness guarantee forms an interesting future work.
In addition, one might worry that a possibly corrupted content provider fails in the middle of a transmission, so that the consumer does not get the entire content yet has already paid a lot. Nevertheless, this is actually not a serious concern in the peer-to-peer content delivery setting, which aims to complement decentralized content storage networks: there would essentially be a large number of deliverers, and at least some of them can be honest. As such, if a consumer encounters failures in the middle of retrieving the content, it can iteratively ask another deliverer to start a new delivery session and fetch the remaining undelivered chunks. Moreover, our actual constructions in Sections \ref{sec:ProtocolDesign_Downloading} and \ref{sec:ProtocolDesign_Streaming} essentially allow the consumers to fetch the content from any specific chunk instead of beginning with the first one.
\section{$\mathsf{FairDownload}${}: Fair p2p Downloading}
\label{sec:ProtocolDesign_Downloading}
This section presents the p2p fair delivery protocol $\Pi_{\mathsf{FD}}$, allowing the consumers to view the content after downloading (some or all of) the chunks, termed {\em view-after-delivery}.
\subsection{$\mathsf{FairDownload}${} Overview}
At a high level, our protocol $\Pi_{\mathsf{FD}}$ can be constructed around the module of verifiable fair delivery ($\mathsf{VFD}$) and proceeds in {\em Prepare}, {\em Deliver} and {\em Reveal} phases as illustrated in Figure \ref{fig:protocol_overview}.
The core ideas of $\Pi_{\mathsf{FD}}$ can be over-simplified as follows:
\begin{itemize}
\item The content provider $\mathcal{P}$ encrypts each chunk, signs the encrypted chunks, and delegates to the deliverer $\mathcal{D}$; as such, the deliverer (as the sender $\mathcal{S}$) and the consumer $\mathcal{C}$ (as the receiver $\mathcal{R}$) can run a specific instance of $\mathsf{VFD}$, in which the global predicate $\Psi(\cdot)$ is instantiated to verify that each chunk must be correctly signed by $\mathcal{P}$; additionally, the non-interactive honest verifier $\mathcal{V}$ in $\mathsf{VFD}$ is instantiated via a smart contract, hence upon the contract receives a $\mathsf{VFD}$ proof from $\mathcal{D}$ claiming the in-time delivery of $\mathsf{ctr}$ chunks, it can assert that $\mathcal{C}$ indeed received $\mathsf{ctr}$ encrypted chunks signed by the provider, who can then present to reveal the decryption keys of these $\mathsf{ctr}$ chunks (via the smart contract).
\item Nevertheless, trivially disclosing the decryption keys via the contract would incur significant on-chain cost, up to linear in the number of chunks. We therefore propose a {\em structured key generation scheme}, composed of Alg.~\ref{alg:KeyGroupGen},~\ref{alg:RevealKey}, and~\ref{alg:ExtractKey}, that allows the honest provider to reveal all $\mathsf{ctr}$ decryption keys via a short $\Tilde{O}(\lambda)$-bit message. Furthermore, to ensure confidentiality against the deliverer, the message revealing the decryption keys is encrypted under the consumer's public key. In case the revealed keys cannot decrypt the cipher chunks signed by $\mathcal{P}$ into the correct data chunks, the consumer can complain to the contract via a short $\Tilde{O}(\eta+\lambda)$-bit message to prove the error of the decrypted chunk and get a refund.
\end{itemize}
The design of $\Pi_{\mathsf{FD}}$ ensures fairness for each participating party even if all the other parties are corrupted by a non-adaptive P.P.T. adversary. The on-chain cost remains {\em constant}, regardless of the content size $|m|$, in the optimistic mode where no dispute occurs. In the pessimistic case, the protocol still attains asymptotically optimal on-chain cost, which is related to the chunk size $\eta$. Moreover, the deliverer $\mathcal{D}$ achieves asymptotically optimal communication in the sense that $\mathcal{D}$ sends only $O(\eta+\lambda)$ bits amortized per chunk, where $\eta$ is the chunk size and $\lambda$ is a small security parameter with $\lambda \ll \eta$. These properties contribute significantly to the efficiency and practicality of applying $\Pi_{\mathsf{FD}}$ in the p2p content delivery setting.
\begin{figure}[!t]
\centering
\includegraphics[width=.49\textwidth]{Figures/protocol_flow_downloading.eps}
\vspace{-6mm}
\caption{The overview of $\mathsf{FairDownload}$ protocol $\Pi_{\mathsf{FD}}$.}
\label{fig:protocol_overview}
\vspace{-4mm}
\end{figure}
\subsection{Arbiter Contract $\mathcal{G}_{d}^{\mathsf{ledger}}$ for Downloading}
The arbiter contract $\mathcal{G}_{d}^{\mathsf{ledger}}$ (abbr. $\mathcal{G}_{d}$) shown in Fig.~\ref{fig:downloading_contract_ideal_functionality} is a stateful ideal functionality with access to $\mathsf{ledger}$ to assist fair content delivery via downloading. We make the following remarks about the contract functionality:
\begin{itemize}
\item \underline{\smash{\em Feasibility.}} To demonstrate the feasibility of $\mathcal{G}_{d}$, we describe it by following the conventional pseudocode notation of smart contracts~\cite{Kosba-et-al-2016-SP}. The description captures the essence of real-world smart contracts, since it: (i)~reflects that the Turing-complete smart contract can be seen as a stateful program to transparently handle pre-specified functionalities; (ii)~captures that a smart contract can access the cryptocurrency ledger to faithfully deal with conditional payments upon its own internal states.
\item \underline{\smash{\em $\mathsf{VFD}.\mathcal{V}$ subroutine.}} $\mathcal{G}_{d}$ can invoke the $\mathsf{VFD}$ verifier $\mathcal{V}$ as a subroutine. $\mathsf{VFD}$'s predicate $\Psi(\cdot)$ is instantiated to verify that each chunk is signed by the provider $\mathcal{P}$.
\item \underline{\smash{\em $\mathsf{ValidateRKeys}$ and $\mathsf{ValidatePoM}$ subroutines.}} These subroutines allow the consumer to prove to the contract that the content provider $\mathcal{P}$ behaved maliciously. We defer the details to the next subsection.
\end{itemize}
\begin{figure*}[!t]
\centering
\footnotesize
\fbox{%
\parbox{.96\linewidth}{%
\vspace{-2mm}
\begin{center}
{\bf The Arbiter Contract Functionality $\mathcal{G}_{d}^{\mathsf{ledger}}$ for P2P Downloading}
The arbiter contract $\mathcal{G}_{d}$ has access to the $\mathsf{ledger}$, and it interacts with the provider $\mathcal{P}$, the deliverer $\mathcal{D}$, the consumer $\mathcal{C}$ and the adversary $\mathcal{A}$. It locally stores the number $\theta$ of allowed repeated deliveries, the number of content chunks $n$, the content digest $\mathsf{root}_m$, the prices $\bitcoinA_{\mathcal{P}}$, $\bitcoinA_{\mathcal{C}}$ and ${\bitcoinA}_{\mathsf{pf}}$, the number of delivered chunks $\mathsf{ctr}$ (initialized as 0), addresses $pk_{\mathcal{P}}, pk_{\mathcal{D}}, pk_{\mathcal{C}}, vpk_{\mathcal{C}}$, the revealed keys' hash $erk_{\mathsf{hash}}$, the state $\Sigma$, and three timers $\mathcal{T}_{\mathsf{round}}$ (implicit), $\mathcal{T}_{\mathsf{deliver}}$, and $\mathcal{T}_{\mathsf{dispute}}$.
\end{center}
\vspace{-5mm}
\begin{multicols}{2}
\begin{flushleft}
\noindent
\xrfill[0.5ex]{0.5pt} {} {\bf Phase 1: Prepare} \xrfill[0.5ex]{0.5pt}
\begin{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{start}, pk_\mathcal{P}, \mathsf{root}_m, \theta, n, \bitcoinA_\mathcal{P}, \bitcoinA_\mathcal{C}, {\bitcoinA}_{\mathsf{pf}})$ from $\mathcal{P}$:
\begin{itemize}[-]
\item[-] assert $\mathsf{ledger}[\mathcal{P}]\ge (\theta\cdot( n\cdot\bitcoinA_{\mathcal{P}}+{\bitcoinA}_{\mathsf{pf}}))$ $\wedge$ $\Sigma\equiv\emptyset$
\item[-] store $pk_\mathcal{P}, \mathsf{root}_m, \theta, n, \bitcoinA_\mathcal{P}, \bitcoinA_\mathcal{C}, {\bitcoinA}_{\mathsf{pf}}$
\item[-] let $\mathsf{ledger}[\mathcal{P}] := \mathsf{ledger}[\mathcal{P}]-$ $\theta\cdot(n\cdot\bitcoinA_{\mathcal{P}}+\bitcoinA_{\mathsf{pf}})$ and $\Sigma := \mathsf{started}$
\item[-] send $(\mathsf{started}, pk_\mathcal{P}, \mathsf{root}_m, \theta, n, \bitcoinA_\mathcal{P},$ $\bitcoinA_\mathcal{C}, {\bitcoinA}_{\mathsf{pf}})$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{join},pk_{\mathcal{D}})$ from $\mathcal{D}$:
\begin{itemize}
\item[-] assert $\Sigma \equiv \mathsf{started}$
\item[-] store $pk_{\mathcal{D}}$ and let $\Sigma := \mathsf{joined}$
\item[-] send $(\mathsf{joined},pk_{\mathcal{D}})$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{prepared})$ from $\mathcal{D}$:
\begin{itemize}[-]
\item[-] assert $\Sigma \equiv \mathsf{joined}$, and let $\Sigma := \mathsf{ready}$
\item[-] send $(\mathsf{ready})$ to all entities
\end{itemize}
\end{itemize}
\noindent
\xrfill[0.5ex]{0.5pt} {} {\bf Phase 2: Deliver} \xrfill[0.5ex]{0.5pt}
\begin{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{consume}, pk_{\mathcal{C}}, vpk_{\mathcal{C}})$ from $\mathcal{C}$:
\begin{itemize}[-]
\item[-] assert $\theta > 0$
\item[-] assert $\mathsf{ledger}[\mathcal{C}]\ge n\cdot\bitcoinA_{\mathcal{C}}$ $\wedge$ $\Sigma\equiv\mathsf{ready}$
\item[-] store $pk_{\mathcal{C}}$, $vpk_{\mathcal{C}}$ and let $\mathsf{ledger}[\mathcal{C}]:=\mathsf{ledger}[\mathcal{C}]-n\cdot\bitcoinA_{\mathcal{C}}$
\item[-] start a timer $\mathcal{T}_{\mathsf{deliver}}$ and let $\Sigma := \mathsf{initiated}$
\item[-] send $(\mathsf{initiated}, pk_{\mathcal{C}}, vpk_{\mathcal{C}})$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{delivered})$ from $\mathcal{C}$ or $\mathcal{T}_{\mathsf{deliver}}$ times out:
\begin{itemize}[-]
\item[-] assert $\Sigma\equiv\mathsf{initiated}$
\item[-] send $(\mathsf{getVFDProof})$ to $\mathcal{D}$, and wait for two rounds to receive $(\mathsf{receipt}, i, \sigma^{i}_{\mathcal{C}})$, then execute $\mathsf{verifyVFDProof}()$ to let $\mathsf{ctr} := i$, and then assert $0 \le \mathsf{ctr}\le n$
\item[-] let $\mathsf{ledger}[\mathcal{D}]:=\mathsf{ledger}[\mathcal{D}]+\mathsf{ctr}\cdot\bitcoinA_{\mathcal{P}}$
\item[-] let $\mathsf{ledger}[\mathcal{P}]:=\mathsf{ledger}[\mathcal{P}]+(n-\mathsf{ctr})\cdot\bitcoinA_{\mathcal{P}}$
\item[-] store $\mathsf{ctr}$, let $\Sigma := \mathsf{revealing}$, and send $(\mathsf{revealing}, \mathsf{ctr})$ to all entities
\end{itemize}
\end{itemize}
\noindent
\xrfill[0.5ex]{0.5pt} {} {\bf Phase 3: Reveal} \xrfill[0.5ex]{0.5pt}
\begin{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{revealKeys}, erk)$ from $\mathcal{P}$:
\begin{itemize}[-]
\item[-] assert $\Sigma\equiv\mathsf{revealing}$
\item[-] store $erk$ (essentially $erk$'s hash) and start a timer $\mathcal{T}_{\mathsf{dispute}}$
\item[-] let $\Sigma := \mathsf{revealed}$
\item[-] send $(\mathsf{revealed}, erk)$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}Upon} $\mathcal{T}_{\mathsf{dispute}}$ times out:
\begin{itemize}[-]
\item assert $\Sigma\equiv\mathsf{revealed}$ and current time $\mathcal{T} \geq \mathcal{T}_{\mathsf{dispute}}$
\item $\mathsf{ledger}[\mathcal{P}]:=\mathsf{ledger}[\mathcal{P}]+\mathsf{ctr}\cdot\bitcoinA_{\mathcal{C}} + {\bitcoinA}_{\mathsf{pf}}$
\item $\mathsf{ledger}[\mathcal{C}]:=\mathsf{ledger}[\mathcal{C}]+(n-\mathsf{ctr})\cdot\bitcoinA_{\mathcal{C}}$
\item let $\Sigma := \mathsf{sold}$ and send $(\mathsf{sold})$ to all entities
\end{itemize}
{\ } \\
{\color{purple}$\triangleright$ Below is the dispute resolution}
\\
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{wrongRK})$ from $\mathcal{C}$ before $\mathcal{T}_{\mathsf{dispute}}$ times out:
\begin{itemize}[-]
\item assert $\Sigma\equiv\mathsf{revealed}$ and current time $\mathcal{T} < \mathcal{T}_{\mathsf{dispute}}$
\item if $(\mathsf{ValidateRKeys}(n, \mathsf{ctr},erk)\equiv false)$:
\begin{itemize}[*]
\item let $\mathsf{ledger}[\mathcal{C}]:=\mathsf{ledger}[\mathcal{C}]+n\cdot\bitcoinA_{\mathcal{C}} + {\bitcoinA}_{\mathsf{pf}}$
\item let $\Sigma := \mathsf{not\_sold}$ and send $(\mathsf{not\_sold})$ to all entities
\end{itemize}
\end{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{PoM}, i, j, c_i, \sigma_{c_i}, \mathcal{H}(m_i),$ $\pi^{i}_{\mathsf{MT}},rk, erk,\pi_{\mathsf{VD}})$ from $\mathcal{C}$ before $\mathcal{T}_{\mathsf{dispute}}$ times out:
\begin{itemize}[-]
\item assert $\Sigma\equiv\mathsf{revealed}$ and current time $\mathcal{T} < \mathcal{T}_{\mathsf{dispute}}$
\item invoke the $\mathsf{ValidatePoM}(i, j, c_i, \sigma_{c_i},\newline \mathcal{H}(m_i), \pi^{i}_{\mathsf{MT}}, rk, erk,\pi_{\mathsf{VD}})$ subroutine, if $true$ is returned:
\begin{itemize}[*]
\item let $\mathsf{ledger}[\mathcal{C}]:=\mathsf{ledger}[\mathcal{C}]+n\cdot\bitcoinA_{\mathcal{C}} + {\bitcoinA}_{\mathsf{pf}}$
\item let $\Sigma := \mathsf{not\_sold}$ and send $(\mathsf{not\_sold})$ to all entities
\end{itemize}
\end{itemize}
{\ } \\
{\color{violet}$\triangleright$ Reset to the ready state for repeatable delivery}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{reset})$ from $\mathcal{P}$:
\begin{itemize}[-]
\item assert $\Sigma\equiv \mathsf{sold}$ or $\Sigma\equiv \mathsf{not\_sold}$
\item set $\mathsf{ctr}$, $\mathcal{T}_{\mathsf{deliver}}$, $\mathcal{T}_{\mathsf{dispute}}$ as 0
\item nullify $pk_{\mathcal{C}}$ and $vpk_{\mathcal{C}}$
\item let $\theta := \theta - 1$, and $\Sigma := \mathsf{ready}$
\item send $(\mathsf{ready})$ to all entities
\end{itemize}
\end{itemize}
\end{flushleft}
\end{multicols}
\vspace{-3mm}
}
}
\caption{The arbiter contract functionality $\mathcal{G}_{d}^{\mathsf{ledger}}$ for downloading. ``Sending to all entities" captures that the smart contract is transparent to the public.}\label{fig:downloading_contract_ideal_functionality}
\vspace{-2mm}
\end{figure*}
\subsection{$\Pi_\mathsf{FD}$: $\mathsf{FairDownload}${} Protocol}
\label{sec:protocol_details}
Now we present the details of the fair p2p downloading protocol $\Pi_{\mathsf{FD}}$. In particular, the protocol aims to deliver a content $m$ made of $n$ chunks\footnote{W.l.o.g., we assume $n = 2 ^ k$ for $k \in \mathbb{Z}$ for presentation simplicity.} with an a-priori known digest in the form of a Merkle tree root, i.e., $\mathsf{root}_m$. We omit the session id \textit{sid} and the content digest $\mathsf{root}_m$ in the protocol description since they remain the same within a delivery session.
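Throughout, the digest and the chunk-membership proofs are standard Merkle-tree artifacts. As a point of reference, the following minimal Python sketch (instantiating the hash $\mathcal{H}$ with SHA-256 and indexing leaves from 0, both purely illustrative choices) mirrors the roles of $\mathsf{BuildMT}$, $\mathsf{GenMTP}$ and $\mathsf{VerifyMTP}$ over a flat $(2n-1)$-array:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_mt(chunks):
    """Build a Merkle tree as a flat (2n-1)-array; index 0 is the root."""
    n = len(chunks)
    mt = [b""] * (2 * n - 1)
    for i, m in enumerate(chunks):
        mt[n - 1 + i] = H(m)                 # leaves occupy n-1 .. 2n-2
    for i in range(n - 2, -1, -1):           # fill internal nodes bottom-up
        mt[i] = H(mt[2 * i + 1] + mt[2 * i + 2])
    return mt

def gen_mtp(mt, leaf_index):
    """Merkle proof: sibling hashes from the leaf up to the root."""
    n = (len(mt) + 1) // 2
    i = n - 1 + leaf_index
    proof = []
    while i > 0:
        sib = i + 1 if i % 2 == 1 else i - 1  # odd index = left child
        proof.append(mt[sib])
        i = (i - 1) // 2
    return proof

def verify_mtp(root, leaf_index, proof, leaf_hash):
    """Recompute the root from a leaf hash and its sibling path."""
    n = 2 ** len(proof)
    i = n - 1 + leaf_index
    h = leaf_hash
    for sib in proof:
        h = H(h + sib) if i % 2 == 1 else H(sib + h)
        i = (i - 1) // 2
    return h == root
```

A chunk-level dispute thus only needs the $O(\log n)$ sibling path, which is what keeps the on-chain $\mathsf{ValidatePoM}$ check cheap.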
\smallskip
\noindent
{\bf Phase I for Prepare.}
The provider $\mathcal{P}$ and the deliverer $\mathcal{D}$ interact with the contract functionality $\mathcal{G}_{d}$ in this phase as:
\begin{itemize}
\item The provider $\mathcal{P}$ deploys the contract and starts\footnote{$\mathcal{P}$ can retrieve the deposits of $\bitcoinA_{\mathcal{P}}$ and $\bitcoinA_{\mathsf{pf}}$ if no deliverer responds in time.} $\Pi_{\mathsf{FD}}$ by taking as input the security parameter $\lambda$; the incentive parameters $\bitcoinA_\mathcal{P}, \bitcoinA_\mathcal{C}, \bitcoinA_{\mathsf{pf}}\in \mathbb{N}$, where $\bitcoinA_{\mathsf{pf}}$ is the {\em penalty fee}\footnote{${\bitcoinA}_{\mathsf{pf}}$ can be set proportional to $(n\times\bitcoinA_{\mathcal{C}})$ in case $\mathcal{P}$ deliberately lowers it.} in a delivery session to discourage misbehavior by the provider $\mathcal{P}$; the number of times $\theta$ that delivery can be repeated via the contract; and the $n$-chunk content $m = (m_1,\dots,m_n) \in \{0,1\}^{\eta\times n}$ satisfying $\mathsf{root}(\mathsf{BuildMT}(m)) \equiv \mathsf{root}_m$, where $\mathsf{root}_m$ is the content digest in the form of a Merkle tree root. Then $\mathcal{P}$ executes $(pk_{\mathcal{P}},sk_{\mathcal{P}}) \leftarrow \mathsf{SIG.KGen}(1^\lambda)$ and sends $(\mathsf{start},pk_{\mathcal{P}},\mathsf{root}_m,\theta,n,\bitcoinA_{\mathcal{P}},\bitcoinA_{\mathcal{C}}, \bitcoinA_{\mathsf{pf}})$ to $\mathcal{G}_d$.
\smallskip
\item Upon $\Sigma \equiv \mathsf{joined}$, the provider $\mathcal{P}$ would execute:
\begin{itemize}
\item Randomly samples a master key $mk \leftarrow_{\$} \{0,1\}^{\lambda}$, and runs Alg. \ref{alg:KeyGroupGen}, namely $\mathsf{KT} \leftarrow \mathsf{GenSubKeys}(n, mk)$; stores $mk$ and $\mathsf{KT}$ locally;
\item Uses the leaf nodes, namely $\mathsf{KT}[n-1: 2n-2]$ (as exemplified by Fig.~\ref{fig:key_derivation}a), as the encryption keys to encrypt $(m_1,\dots,m_n)$, namely $c = (c_1,\dots,c_n) \leftarrow (\mathsf{SEnc}_{\mathsf{KT}[n-1]}(m_1),\dots,\mathsf{SEnc}_{\mathsf{KT}[2n-2]}(m_n))$;
\item Signs the encrypted chunks to obtain the sequence $((c_1, \sigma_{c_1}),\cdots, (c_n, \sigma_{c_n}))$ where the signature $\sigma_{c_i} \leftarrow \mathsf{Sign}(i||c_i, sk_{\mathcal{P}}), i\in[n]$; meanwhile, computes $\mathsf{MT} \leftarrow \mathsf{BuildMT}(m)$ and signs the Merkle tree $\mathsf{MT}$ to obtain $\sigma^{\mathsf{MT}}_{\mathcal{P}} \leftarrow \mathsf{Sign}(\mathsf{MT}, sk_{\mathcal{P}})$, then locally stores $(\mathsf{MT}, \sigma^{\mathsf{MT}}_{\mathcal{P}})$ and sends $(\mathsf{sell}, ((c_1, \sigma_{c_1}),\cdots, (c_n, \sigma_{c_n})))$ to $\mathcal{D}$;
\item Waits for $(\mathsf{ready})$ from $\mathcal{G}_{d}$ to enter the next phase.
\end{itemize}
\item The deliverer $\mathcal{D}$ executes as follows during this phase:
\begin{itemize}
\item Upon receiving $(\mathsf{started}, pk_\mathcal{P}, \mathsf{root}_m, \theta, n, \bitcoinA_\mathcal{P}, \bitcoinA_\mathcal{C}, \bitcoinA_{\mathsf{pf}})$ from $\mathcal{G}_{d}$, executes $(pk_{\mathcal{D}},sk_{\mathcal{D}})\leftarrow \mathsf{SIG.KGen}(1^{\lambda})$, and sends $(\mathsf{join},pk_{\mathcal{D}})$ to $\mathcal{G}_d$;
\item Waits for $(\mathsf{sell}, ((c_1, \sigma_{c_1}),\cdots, (c_n, \sigma_{c_n})))$ from $\mathcal{P}$ and then: for every $(c_i, \sigma_{c_i})$ in the $\mathsf{sell}$ message, asserts that $\mathsf{Verify}(i||c_i, \sigma_{c_i}, pk_{\mathcal{P}})\equiv1$; if all checks hold, sends $(\mathsf{prepared})$ to $\mathcal{G}_{d}$ and stores $((c_1, \sigma_{c_1}),\cdots, (c_n, \sigma_{c_n}))$ locally;
\item Waits for $(\mathsf{ready})$ from $\mathcal{G}_{d}$ to enter the next phase.
\end{itemize}
\end{itemize}
\begin{algorithm}[!t]
\caption{$\mathsf{GenSubKeys}$ algorithm}
\label{alg:KeyGroupGen}
\vspace{-2mm}
\begin{multicols}{2}
\algsetup{linenosize=\tiny}
\scriptsize
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $n, mk$
\ENSURE a $(2n-1)$-array $\mathsf{KT}$
\STATE let $\mathsf{KT}$ be an array, $|\mathsf{KT}|=2n-1$
\STATE $\mathsf{KT}[0] = \mathcal{H}(mk)$
\IF{$n \equiv 1$}
\RETURN $\mathsf{KT}$
\ENDIF
\IF{$n > 1$}
\FOR{$i$ in $[0, n-2]$}
\STATE $\mathsf{KT}[2i+1] = \mathcal{H}(\mathsf{KT}[i] || 0)$
\STATE $\mathsf{KT}[2 i+2] = \mathcal{H}(\mathsf{KT}[i] || 1)$
\ENDFOR
\ENDIF
\RETURN $\mathsf{KT}$
\end{algorithmic}
\end{multicols}
\vspace{-4mm}
\end{algorithm}
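Alg.~\ref{alg:KeyGroupGen} admits a direct transcription. Below is a minimal Python sketch, where $\mathcal{H}$ is instantiated with SHA-256 (an illustrative choice, not mandated by the protocol):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def gen_sub_keys(n: int, mk: bytes) -> list:
    """Transcription of GenSubKeys: derive a (2n-1)-node key tree KT
    from the master key mk; leaves KT[n-1 .. 2n-2] are the chunk keys."""
    kt = [b""] * (2 * n - 1)
    kt[0] = H(mk)
    for i in range(n - 1):                      # internal nodes 0 .. n-2
        kt[2 * i + 1] = H(kt[i] + bytes([0]))   # left child
        kt[2 * i + 2] = H(kt[i] + bytes([1]))   # right child
    return kt
```

Each node deterministically fixes its entire subtree, which is what later allows a logarithmic-size key reveal.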
At the end of this phase, $\mathcal{P}$ owns a master key $mk$, the key tree $\mathsf{KT}$, and the Merkle tree $\mathsf{MT}$ while $\mathcal{D}$ receives the encrypted content chunks and is ready to deliver.
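The symmetric scheme $(\mathsf{SEnc}, \mathsf{SDec})$ used to encrypt the chunks under the leaf keys of $\mathsf{KT}$ is left abstract; any secure symmetric cipher works. Purely for illustration, a toy hash-based stream cipher (a stand-in, not a production choice; AES-CTR would be a natural instantiation) could look like:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Hash-counter keystream; stands in for a real cipher keystream."""
    out = bytearray()
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(out[:length])

def s_enc(key: bytes, m: bytes) -> bytes:
    # XOR the chunk with the keystream; decryption is the same operation
    return bytes(a ^ b for a, b in zip(m, keystream(key, len(m))))

s_dec = s_enc  # an XOR keystream cipher is its own inverse
```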
\smallskip
\noindent
{\bf Phase II for Deliver.} The consumer $\mathcal{C}$, the provider $\mathcal{P}$, and the deliverer $\mathcal{D}$ interact with the contract $\mathcal{G}_d$ in this phase as:
\begin{itemize}
\item The consumer $\mathcal{C}$ would execute as follows:
\begin{itemize}
\item Asserts $\Sigma \equiv \mathsf{ready}$, runs $(pk_{\mathcal{C}},sk_{\mathcal{C}}) \leftarrow \mathsf{SIG.KGen}(1^{\lambda})$ and $(vpk_{\mathcal{C}},vsk_{\mathcal{C}}) \leftarrow \mathsf{VPKE.KGen}(1^{\lambda})$, and sends $(\mathsf{consume}, pk_{\mathcal{C}}, vpk_{\mathcal{C}})$ to $\mathcal{G}_{d}$;
\item Upon receiving the message $(\mathsf{mtree}, \mathsf{MT}, \sigma^{\mathsf{MT}}_{\mathcal{P}})$ from $\mathcal{P}$ where
$\mathsf{Verify}(\mathsf{MT}, \sigma^{\mathsf{MT}}_{\mathcal{P}},pk_{\mathcal{P}})\equiv1$ and $\mathsf{root}(\mathsf{MT})\equiv\mathsf{root}_m$, stores the Merkle tree $\mathsf{MT}$ and then activates the receiver $\mathcal{R}$ in the $\mathsf{VFD}$ subroutine by invoking $\mathcal{R}.\mathsf{recv}()$ and instantiating the external validation function $\Psi(i, c_i, \sigma_{c_i})$ as $\mathsf{Verify}(i||c_i, \sigma_{c_i}, pk_\mathcal{P})$, and then waits for the execution of $\mathsf{VFD}$ to return the delivered chunks $((c_1, \sigma_{c_1}), (c_2, \sigma_{c_2}),\cdots)$ and stores them; upon receiving the whole $n$-size sequence after executing the $\mathsf{VFD}$ module, sends $(\mathsf{delivered})$ to $\mathcal{G}_{d}$;
\item Waits for $(\mathsf{revealing}, \mathsf{ctr})$ from $\mathcal{G}_{d}$ to enter the next phase.
\end{itemize}
\item The provider $\mathcal{P}$ executes as follows during this phase: upon receiving $(\mathsf{initiated}, pk_{\mathcal{C}}, vpk_{\mathcal{C}})$ from $\mathcal{G}_{d}$, asserts $\Sigma \equiv \mathsf{initiated}$, and sends $(\mathsf{mtree}, \mathsf{MT}, \sigma^{\mathsf{MT}}_{\mathcal{P}})$ to $\mathcal{C}$, and then enters the next phase.
\item The deliverer $\mathcal{D}$ executes as follows during this phase:
\begin{itemize}
\item Upon receiving $(\mathsf{initiated}, pk_{\mathcal{C}}, vpk_{\mathcal{C}})$ from $\mathcal{G}_{d}$: asserts $\Sigma \equiv \mathsf{initiated}$, and then activates the sender $\mathcal{S}$ in the $\mathsf{VFD}$ module by invoking $\mathcal{S}.\mathsf{send}()$ and instantiating the external validation function $\Psi(i, c_i, \sigma_{c_i})$ as $\mathsf{Verify}(i||c_i, \sigma_{c_i}, pk_\mathcal{P})$, and feeds $\mathsf{VFD}$ module with input $((c_1,\sigma_{c_1}),\dots,(c_n,\sigma_{c_n}))$;
\item Upon receiving $(\mathsf{getVFDProof})$ from $\mathcal{G}_d$, sends the latest receipt, namely $(\mathsf{receipt}, i, \sigma^{i}_{\mathcal{C}})$, to $\mathcal{G}_d$;
\item Waits for $(\mathsf{revealing},\mathsf{ctr})$ from $\mathcal{G}_{d}$ to halt.
\end{itemize}
\end{itemize}
At the end of this phase, $\mathcal{C}$ receives the sequence of encrypted chunks $(c_1, c_2,\dots)$, and $\mathcal{D}$ receives the payment for the bandwidth contribution of delivering chunks, and the contract records the number of delivered chunks $\mathsf{ctr}$.
\begin{figure*}[!t]
\centering
\includegraphics[width=.96\textwidth]{Figures/key_derivation.eps}
\vspace{-2mm}
\caption{An example of the structured key derivation scheme in $\Pi_{\mathsf{FD}}$, where $n=8$ is the number of chunks. In (a), the chunk encryption keys $k_1,\cdots,k_8$ are derived from a randomly sampled master key $mk$; in (b), suppose the number of delivered chunks, and hence the number of chunk keys to reveal, is $\mathsf{ctr}=7$; then the three blue elements $rk$ need to be revealed, and $rk$ is encrypted under $\mathcal{C}$'s public key, yielding $erk$; in (c), all 7 chunk encryption keys $k_1,\cdots,k_7$ can be recovered from the three revealed blue elements $rk$ and used to decrypt the chunks received from $\mathcal{D}$. Note that this example shows the {\em worst} case, revealing $|erk| = \log n$ elements; in the best case, only one element (the root node of $\mathsf{KT}$) needs to be revealed.}
\label{fig:key_derivation}
\vspace{-2mm}
\end{figure*}
\begin{algorithm}[!t]
\caption{$\mathsf{RevealKeys}$ algorithm}
\label{alg:RevealKey}
\vspace{-2mm}
\begin{multicols}{2}
\algsetup{linenosize=\tiny}
\scriptsize
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $n, \mathsf{ctr},$ and $mk$
\ENSURE $rk$, an array containing the minimum number of elements in $\mathsf{KT}$ that suffices to recover the $\mathsf{ctr}$ keys from $\mathsf{KT}[n-1]$ to $\mathsf{KT}[n+\mathsf{ctr}-2]$
\STATE let $rk$ and $ind$ be empty arrays
\STATE $\mathsf{KT} \leftarrow \mathsf{GenSubKeys}(n, mk)$
\IF{$\mathsf{ctr}\equiv 1$}
\STATE $rk$ appends $(n-1,\mathsf{KT[n-1]})$
\RETURN $rk$
\ENDIF
\FOR{$i$ in $[0, \mathsf{ctr}-1]$}
\STATE $ind[i]$ = $n-1+i$
\ENDFOR
\WHILE{$true$}
\STATE let $t$ be an empty array
\FOR{$j$ in $[0, \lfloor|ind|/2\rfloor-1]$}
\STATE $p_l=(ind[2j]-1)/2$
\STATE $p_r=(ind[2j+1]-2)/2$
\\ $\triangleright$ merge elements with the same parent node in $\mathsf{KT}$
\IF{$p_l \equiv p_r $}
\STATE $t$ appends $p_l$
\ELSE
\STATE $t$ appends $ind[2j]$
\STATE $t$ appends $ind[2j+1]$
\ENDIF
\ENDFOR
\IF{$|ind|$ is odd}
\STATE $t$ appends $ind[|ind|-1]$
\ENDIF
\IF{$|ind| \equiv |t|$}
\STATE break
\ENDIF
\STATE $ind = t$
\ENDWHILE
\FOR{$x$ in $[0,|ind|-1]$}
\STATE $rk$ appends $(ind[x], \mathsf{KT}[ind[x]])$
\ENDFOR
\RETURN $rk$
\end{algorithmic}
\end{multicols}
\vspace{-3mm}
\end{algorithm}
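Alg.~\ref{alg:RevealKey} can be sketched in Python as follows (reusing the key-tree derivation of Alg.~\ref{alg:KeyGroupGen}, with $\mathcal{H}$ instantiated as SHA-256 for illustration). The loop repeatedly merges adjacent revealed indices that share a parent in $\mathsf{KT}$ until no merge applies:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def gen_sub_keys(n: int, mk: bytes) -> list:
    kt = [b""] * (2 * n - 1)
    kt[0] = H(mk)
    for i in range(n - 1):
        kt[2 * i + 1] = H(kt[i] + bytes([0]))
        kt[2 * i + 2] = H(kt[i] + bytes([1]))
    return kt

def reveal_keys(n: int, ctr: int, mk: bytes) -> list:
    """Minimal cover of KT nodes sufficient to recover the keys of
    leaves KT[n-1] .. KT[n+ctr-2], returned as (index, value) pairs."""
    kt = gen_sub_keys(n, mk)
    if ctr == 1:
        return [(n - 1, kt[n - 1])]
    ind = list(range(n - 1, n - 1 + ctr))
    while True:
        t = []
        for j in range(len(ind) // 2):
            p_l = (ind[2 * j] - 1) // 2        # parent if a left child
            p_r = (ind[2 * j + 1] - 2) // 2    # parent if a right child
            if p_l == p_r:                     # siblings: merge upward
                t.append(p_l)
            else:
                t.extend([ind[2 * j], ind[2 * j + 1]])
        if len(ind) % 2 == 1:                  # odd leftover stays as-is
            t.append(ind[-1])
        if len(ind) == len(t):                 # no merge happened: done
            break
        ind = t
    return [(i, kt[i]) for i in ind]
```

For the example of Fig.~\ref{fig:key_derivation} ($n=8$, $\mathsf{ctr}=7$), this yields the three elements at positions 1, 5 and 13; for $\mathsf{ctr}=n$ it collapses to the single root element.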
\begin{algorithm}[!t]
\caption{$\mathsf{ValidateRKeys}$ algorithm}
\label{alg:ValidateRKeys}
\vspace{-2mm}
\begin{multicols}{1}
\algsetup{linenosize=\tiny}
\scriptsize
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $n$, $\mathsf{ctr}$ and $erk$
\ENSURE $true$ or $false$, indicating whether the correct number (i.e., $\mathsf{ctr}$) of decryption keys can be recovered
\IF{$n \equiv \mathsf{ctr}$ and $|erk| \equiv 1$ and the position of $erk[0] \equiv 0$}
\RETURN $true$ \COMMENT{root of $\mathsf{KT}$}
\ENDIF
\STATE Initialize $chunks\_index$ as a set of numbers $\{n-1,\dots,n+\mathsf{ctr}-2\}$
\FOR{each $(i, \_)$ in $erk$}
\STATE $d_i = {\log(n) - \lfloor \log(i+1) \rfloor}$
\STATE $l_i = i$, $r_i = i$
\IF{$d_i \equiv 0$}
\STATE $chunks\_index$ removes $i$
\ELSE
\WHILE{$(d_i\text{-}\text{-}) > 0$}
\STATE $l_i = 2l_i+1$
\STATE $r_i = 2r_i+2$
\ENDWHILE
\ENDIF
\STATE $chunks\_index$ removes the elements from $l_i$ to $r_i$
\ENDFOR
\IF{$chunks\_index\equiv\emptyset$}
\RETURN $true$
\ENDIF
\RETURN $false$
\end{algorithmic}
\end{multicols}
\vspace{-3mm}
\end{algorithm}
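Alg.~\ref{alg:ValidateRKeys} only needs the element positions, not the key values. A Python sketch (taking the list of positions carried in $erk$ as input, an illustrative simplification of the $(i,\_)$ parsing) is:

```python
def validate_rkeys(n: int, ctr: int, positions: list) -> bool:
    """Check that the revealed positions cover the leaves
    KT[n-1] .. KT[n+ctr-2], i.e., that ctr chunk keys are recoverable."""
    if n == ctr and positions == [0]:
        return True                    # the root of KT covers everything
    log_n = n.bit_length() - 1         # log2(n) for power-of-two n
    chunks = set(range(n - 1, n + ctr - 1))
    for i in positions:
        depth = log_n - ((i + 1).bit_length() - 1)  # levels down to leaves
        l = r = i
        for _ in range(depth):         # leftmost/rightmost leaf below i
            l, r = 2 * l + 1, 2 * r + 2
        chunks -= set(range(l, r + 1))
    return not chunks                  # covered iff nothing remains
```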
\begin{algorithm}[!t]
\caption{$\mathsf{RecoverKeys}$ algorithm}
\label{alg:ExtractKey}
\vspace{-2mm}
\begin{multicols}{1}
\algsetup{linenosize=\tiny}
\scriptsize
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $n, \mathsf{ctr},$ and $rk$
\ENSURE a $\mathsf{ctr}$-sized array $ks$
\STATE let $ks$ be an empty array
\FOR{each $(i, \mathsf{KT}[i])$ in $rk$}
\STATE $n_i = 2^{(\log n - \lfloor \log(i+1) \rfloor)}$
\STATE $v_i=\mathsf{GenSubKeys}$($n_i$, $\mathsf{KT}[i]$)
\STATE $ks$ appends $v_i[n_i-1: 2n_i-2]$
\ENDFOR
\RETURN $ks$
\end{algorithmic}
\end{multicols}
\vspace{-3mm}
\end{algorithm}
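Complementarily, Alg.~\ref{alg:ExtractKey} can be sketched as follows (again with $\mathcal{H}$ as SHA-256, an illustrative choice). Note that in this sketch the subtree below a revealed node is expanded from the node value itself, so that the recovered leaves coincide with the chunk keys of the original $\mathsf{KT}$:

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def expand_leaves(value: bytes, depth: int) -> list:
    """Derive the 2^depth leaf keys below a KT node of the given value."""
    level = [value]
    for _ in range(depth):
        level = [h for v in level
                   for h in (H(v + bytes([0])), H(v + bytes([1])))]
    return level

def recover_keys(n: int, ctr: int, rk: list) -> list:
    """Expand the revealed (index, value) pairs into the ctr chunk keys."""
    log_n = n.bit_length() - 1
    ks = []
    for i, v in rk:
        depth = log_n - ((i + 1).bit_length() - 1)   # subtree height at i
        ks.extend(expand_leaves(v, depth))
    return ks[:ctr]
```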
\smallskip
\noindent
{\bf Phase III for Reveal.} This phase is completed by $\mathcal{P}$ and $\mathcal{C}$ with the assistance of the arbiter contract $\mathcal{G}_{d}$, and proceeds as follows:
\begin{itemize}
\item The provider $\mathcal{P}$ proceeds as follows during this phase:
\begin{itemize}
\item Asserts $\Sigma \equiv \mathsf{revealing}$, executes Alg.~\ref{alg:RevealKey}, namely $rk \leftarrow \mathsf{RevealKeys}(n, \mathsf{ctr}, mk)$, to generate the revealed elements $rk$, encrypts $rk$ by running $erk \leftarrow \mathsf{VEnc}_{vpk_{\mathcal{C}}}(rk)$, as exemplified by Fig.~\ref{fig:key_derivation}b, and then sends $(\mathsf{revealKeys}, erk)$ to $\mathcal{G}_{d}$; finally, waits for $(\mathsf{sold})$ from $\mathcal{G}_{d}$ to halt.
\end{itemize}
\item The {\em consumer} $\mathcal{C}$ in this phase first asserts $\Sigma \equiv \mathsf{revealing}$ and waits for $(\mathsf{revealed}, erk)$ from $\mathcal{G}_d$, and then executes the following:
\begin{itemize}
\item Runs Alg.~\ref{alg:ValidateRKeys}, namely $\mathsf{ValidateRKeys}(n, \mathsf{ctr}, erk)$, to preliminarily check whether the revealed elements $erk$ can recover the correct number (i.e., $\mathsf{ctr}$) of keys. If $false$ is returned, sends $(\mathsf{wrongRK})$ to $\mathcal{G}_{d}$ and halts;
\item If $\mathsf{ValidateRKeys}(n, \mathsf{ctr}, erk) \equiv true$, decrypts $erk$ to obtain $rk \leftarrow \mathsf{VDec}_{vsk_{\mathcal{C}}}(erk)$, and then runs Alg.~\ref{alg:ExtractKey}, i.e., $ks = (k_1,\cdots,k_{\mathsf{ctr}}) \leftarrow \mathsf{RecoverKeys}(n, \mathsf{ctr}, rk)$, as exemplified by Fig.~\ref{fig:key_derivation}c, to recover the chunk keys. Then $\mathcal{C}$ uses these keys to decrypt $(c_1,\cdots,c_{\mathsf{ctr}})$ into $(m_1',\cdots,m_{\mathsf{ctr}}')$, where $m_i' = \mathsf{SDec}_{k_i}(c_i),i\in[\mathsf{ctr}]$, and checks for every $m_i' \in (m_1',\cdots,m_{\mathsf{ctr}}')$ whether $\mathcal{H}(m_i')$ is the $i$-th leaf node in the Merkle tree $\mathsf{MT}$ received from $\mathcal{P}$ in the {\em Deliver} phase. If all are consistent, meaning that $\mathcal{C}$ received all the desired chunks and there is no dispute, $\mathcal{C}$ outputs $(m_1',\cdots,m_{\mathsf{ctr}}')$ and then waits for $(\mathsf{sold})$ from $\mathcal{G}_{d}$ to halt. Otherwise, $\mathcal{C}$ can raise a complaint: it chooses one inconsistent position (e.g., the $i$-th chunk), computes $(rk,\pi_{\mathsf{VD}})\leftarrow\mathsf{ProvePKE}_{vsk_{\mathcal{C}}}(erk)$ and $\pi^{i}_{\mathsf{MT}}\leftarrow \mathsf{GenMTP}(\mathsf{MT},\mathcal{H}(m_i))$, and then sends $(\mathsf{PoM}, i, j, c_i, \sigma_{c_i}, \mathcal{H}(m_i), \pi^{i}_{\mathsf{MT}},rk,erk,\pi_{\mathsf{VD}})$ to the contract $\mathcal{G}_{d}$, where $i$ is the index of the incorrect chunk to be proved; $j$ is the index of the element in $erk$ that induces the key $k_i$ for position $i$; $c_i$ and $\sigma_{c_i}$ are the $i$-th encrypted chunk and its signature received in the {\em Deliver} phase; $\mathcal{H}(m_i)$ is the value of the $i$-th leaf node in $\mathsf{MT}$; $\pi^{i}_{\mathsf{MT}}$ is the Merkle proof for $\mathcal{H}(m_i)$; $rk$ is the decryption result of $erk$; $erk$ is the encrypted revealed keys; and $\pi_{\mathsf{VD}}$ is the verifiable decryption proof attesting to the correctness of decrypting $erk$ to $rk$.
\end{itemize}
\end{itemize}
\smallskip
\noindent
\textbf{Dispute resolution}. For the sake of completeness, the details of the $\mathsf{ValidatePoM}$ subroutine are presented in Alg.~\ref{alg:ValidatePoM}, which allows the consumer to prove that a decrypted chunk is inconsistent with the digest $\mathsf{root}_m$. Its time complexity is $O(\log n)$, which is critical for meeting the efficiency requirement. Additionally, we consider the natural case where an honest consumer $\mathcal{C}$ does not complain to the contract upon receiving valid content.
\smallskip
\noindent
{\bf Design highlights}. We highlight a few design details in $\Pi_{\mathsf{FD}}$: (i) $rk$ is an array of revealed elements of the form $(position, value)$, where $position$ is an index in $\mathsf{KT}$; $erk$ shares the same structure, with the same $position$ and the $value$ encrypted from the corresponding $rk.value$; (ii) to reduce the on-chain cost, the contract only stores the 256-bit hash of $erk.value$ and emits the actual $erk$ as event logs~\cite{Lu-et-al-2020-ArXiv}; during the dispute resolution, $\mathcal{C}$ submits the $j$-th $erk$ element, and the contract checks the consistency of the submitted $erk$ against its on-chain hash; (iii) Alg.~\ref{alg:ValidateRKeys} lets the judge contract perform a preliminary check on whether the revealed elements can recover the desired number (i.e., $\mathsf{ctr}$) of decryption keys, without directly executing the relatively complex $\mathsf{ValidatePoM}$ routine (i.e., Alg.~\ref{alg:ValidatePoM}).
\smallskip
\noindent
{\bf Repeatable delivery}. The protocol $\Pi_\mathsf{FD}$ supports repeatable delivery: once a delivery session completes, the provider $\mathcal{P}$ can invoke the contract (by sending $(\mathsf{reset})$ to $\mathcal{G}_d$) to reset it to the ready state, so that new consumers can join and start a new protocol instance. Such a $\theta$-time repeatable delivery mechanism amortizes the costs of contract deployment and preparation (i.e., delegating encrypted chunks to a deliverer). Once $\theta$ decreases to 0, the provider $\mathcal{P}$ can either deploy a new contract (thus residing at a new contract address) or reuse the same contract address while re-running the {\em Prepare} phase; $\mathcal{P}$ need not delegate the encrypted chunks again if a previously participating deliverer joins.
\smallskip
\noindent
{\bf Dynamic deposits adjustment}. An interesting extension concerns the financial deposits that $\mathcal{P}$ provides. Specifically, $\Pi_\mathsf{FD}$ requires $\mathcal{P}$ to deposit the payment $(\theta\cdot n\cdot\bitcoinA_{\mathcal{P}})$ to incentivize successful delivery and $(\theta\cdot\bitcoinA_{\mathsf{pf}})$ to discourage $\mathcal{P}$'s potential misbehavior. Such a deposit is locked up in the contract and inaccessible, which poses a potential loss for $\mathcal{P}$; for example, the provider faces an opportunity cost in the form of forgone returns that it could have accrued from alternative investments. Hence, integrating mechanisms~\cite{Harz-et-al-2019-CCS} for dynamically adjusting cryptocurrency deposits is a natural future extension, which also applies to the streaming setting in Section~\ref{sec:ProtocolDesign_Streaming}.
\begin{algorithm}[!t]
\caption{$\mathsf{ValidatePoM}$ algorithm}
\label{alg:ValidatePoM}
\algsetup{linenosize=\tiny}
\scriptsize
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $(i, j, c_i, \sigma_{c_i}, \mathcal{H}(m_i),\pi^{i}_{\mathsf{MT}},rk,erk,\pi_{\mathsf{VD}})$\\ $(\mathsf{root}_m,n,erk_\mathsf{hash},pk_{\mathcal{P}},vpk_{\mathcal{C}})$ are stored in the contract and hence accessible
\ENSURE $true$ or $false$
\STATE assert $j \in [0, |erk|-1] $
\STATE assert $\mathcal{H}(erk) \equiv erk_\mathsf{hash}$
\STATE assert $\mathsf{VerifyPKE}_{vpk_{\mathcal{C}}}(erk, rk, \pi_{\mathsf{VD}})\equiv 1$
\STATE assert $\mathsf{Verify}(i||c_i, \sigma_{c_i}, pk_\mathcal{P}) \equiv 1$
\STATE assert $\mathsf{VerifyMTP}(\mathsf{root}_m, i, \pi^{i}_{\mathsf{MT}}, \mathcal{H}(m_i)) \equiv 1$
\STATE $k_i$ = $\mathsf{RecoverChunkKey}(i, j, n, rk)$
\STATE assert $k_i \neq \bot$
\STATE $ m_i' = \mathsf{SDec}(c_i, k_i)$
\STATE assert $\mathcal{H}(m_i') \neq \mathcal{H}(m_i)$
\RETURN $false$ in case of any assertion error or $true$ otherwise
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!t]
\caption{$\mathsf{RecoverChunkKey}$ algorithm}
\label{alg:ExtractChkKey}
\vspace{-2mm}
\begin{multicols}{1}
\algsetup{linenosize=\tiny}
\scriptsize
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $(i, j, n, rk)$
\ENSURE $k_i$ or $\perp$
\STATE $(x,y) \leftarrow rk[j]$\\ \COMMENT{parse the $j$-th element in $rk$ to get the key $x$ and the value $y$}
\STATE let $k\_path$ be an empty stack
\STATE $ind = n+i-2$ \COMMENT{index in $\mathsf{KT}$}
\IF{$ind < x$}
\RETURN $\perp$
\ENDIF
\IF{$ind \equiv x$}
\RETURN $y$ \COMMENT{$k_i = y$}
\ENDIF
\WHILE{$ind > x$}
\STATE $k\_path$ pushes $0$ if $ind$ is odd
\STATE $k\_path$ pushes $1$ if $ind$ is even
\STATE $ind = \lfloor (ind - 1) / 2 \rfloor$
\ENDWHILE
\STATE let $b = |k\_path|$
\WHILE{$b > 0$}
\STATE pop $k\_path$ to get the value $t$
\STATE $y = \mathcal{H}(y||t)$ \COMMENT{descend one level in $\mathsf{KT}$}
\STATE $b = b - 1$
\ENDWHILE
\STATE $k_i = y$
\RETURN $k_i$
\end{algorithmic}
\end{multicols}
\vspace{-3mm}
\end{algorithm}
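For concreteness, the climb-then-descend logic of Alg.~\ref{alg:ExtractChkKey} can be sketched in Python as below; the heap layout (children of node $j$ at indexes $2j+1$ and $2j+2$) follows the index arithmetic of the algorithm, while the concrete derivation rule $\mathsf{child} = \mathcal{H}(\mathsf{parent}\,\|\,b)$ and SHA-256 are illustrative assumptions.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def child_key(parent: bytes, bit: int) -> bytes:
    # Assumed derivation rule: child = H(parent || bit).
    return H(parent + bytes([bit]))

def recover_chunk_key(i: int, n: int, rk_j):
    """Recover chunk i's key (i is 1-indexed) from one revealed key-tree
    node rk_j = (x, y), where x is the node's heap index and y its value.
    Chunk i's leaf sits at heap index n + i - 2."""
    x, y = rk_j
    ind = n + i - 2
    if ind < x:
        return None                    # revealed node cannot be an ancestor
    path = []
    while ind > x:                     # climb toward the root, recording turns
        path.append(0 if ind % 2 == 1 else 1)  # odd index = left child
        ind = (ind - 1) // 2
    if ind != x:
        return None                    # x lies on a different branch
    for bit in reversed(path):         # descend from the revealed value
        y = child_key(y, bit)
    return y
```

Note the descent updates $y$ level by level, so a single revealed ancestor suffices to derive every key in its subtree.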
\subsection{Analyzing $\mathsf{FairDownload}$ Protocol}
Now we provide the detailed proofs that the protocol $\Pi_{\mathsf{FD}}$ satisfies the design goals.
\noindent
\begin{lemma}
\label{lemma:downloading_completeness}
Conditioned on all parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ being honest, $\Pi_{\mathsf{FD}}$ satisfies the completeness property in the synchronous authenticated network and stand-alone model.
\end{lemma}
\noindent{\em Proof.}
The completeness of $\Pi_{\mathsf{FD}}$ is immediate: when all three participating parties honestly follow the protocol, the provider $\mathcal{P}$ gets a net income of $n\cdot(\bitcoinA_{\mathcal{C}}-\bitcoinA_{\mathcal{P}})$;
the deliverer $\mathcal{D}$ obtains the well-deserved payment of $n\cdot\bitcoinA_{\mathcal{P}}$;
the consumer $\mathcal{C}$ receives the valid content $m$, i.e., $\phi(m)\equiv1$.
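The payoff bookkeeping above can be sanity-checked with a short sketch (the concrete prices below are hypothetical placeholders):

```python
def settle_complete(n: int, price_c: int, price_p: int):
    """Per-session payoffs when all three parties are honest and all n
    chunks are delivered: (provider net, deliverer income, consumer spend)."""
    provider_net = n * (price_c - price_p)  # receives n*price_c, pays n*price_p
    deliverer_income = n * price_p
    consumer_spend = n * price_c
    return provider_net, deliverer_income, consumer_spend
```

For instance, $n=10$, $\bitcoinA_{\mathcal{C}}=5$, $\bitcoinA_{\mathcal{P}}=2$ yields $(30, 20, 50)$, and indeed $50 = 30 + 20$: the consumer's spend is exactly split between the provider's net income and the deliverer's payment.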
\noindent
\begin{lemma}
\label{lemma:downloading_fairness}
In the synchronous authenticated model and stand-alone setting, conditioned on the underlying cryptographic primitives being secure, $\Pi_{\mathsf{FD}}$ satisfies the fairness requirement even when at most two of the parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ are corrupted by a non-adaptive P.P.T. adversary $\mathcal{A}$.
\end{lemma}
\noindent{\em Proof.} The fairness for each party in $\Pi_{\mathsf{FD}}$ can be reduced to the underlying cryptographic building blocks, which can be analyzed as follows:
\begin{itemize}
\item \underline{\smash{\em Consumer Fairness.}} Consumer fairness means that the honest $\mathcal{C}$ only pays in proportion to what it {\em de facto} obtains, even though a malicious $\mathcal{P}^*$ and $\mathcal{D}^*$ may collude with each other. This case can be modeled as an adversary $\mathcal{A}$ corrupting both $\mathcal{P}$ and $\mathcal{D}$ to provide and deliver the content to the honest $\mathcal{C}$. During the {\em Deliver} phase, the $\mathsf{VFD}$ subroutine ensures that $\mathcal{C}$ receives the sequence $(c_1,\sigma_{c_1}),\dots,(c_\ell,\sigma_{c_\ell})$, $\ell\in [n]$, even though $\mathcal{A}$ may maliciously abort. Later, $\mathcal{A}$ can only claim a payment of $\ell\cdot\bitcoinA_{\mathcal{P}}$ from the contract, which is paid by $\mathcal{A}$ itself due to the collusion. During the {\em Reveal} phase, if $\mathcal{A}$ reveals correct elements of $\mathsf{KT}$ to recover the $\ell$ decryption keys, then $\mathcal{C}$ can decrypt and obtain the $\ell$ valid chunks. Otherwise, $\mathcal{C}$ can raise a complaint by sending $(\mathsf{wrongRK})$ and further $(\mathsf{PoM})$ to the contract, and gets a refund. Clearly, $\mathcal{C}$ either pays for the $\ell$ valid chunks or pays nothing. The fairness for the consumer is guaranteed unless $\mathcal{A}$ can: (i) break $\mathsf{VFD}$ to forge $\mathcal{C}$'s signature; (ii) find a Merkle tree collision, namely another chunk $m_i'\neq m_i$ at position $i$ of $m$ that binds to the same $\mathsf{root}_m$, so that $\mathcal{A}$ can fool the contract into rejecting $\mathcal{C}$'s complaint (by returning $false$ from $\mathsf{ValidatePoM}$) while having indeed sent wrong chunks; or (iii) manipulate the execution of the smart contract in the blockchain.
However, by the security guarantee of the underlying signature scheme, the second-preimage resistance of the hash function in the Merkle tree, and the fact that the smart contract is modeled as an ideal functionality, the probability of breaking $\mathcal{C}$'s fairness is negligible. Therefore, consumer fairness against the collusion of malicious $\mathcal{P}^*$ and $\mathcal{D}^*$ is guaranteed.
\item \underline{\smash{\em Deliverer Fairness.}} Deliverer fairness states that the honest $\mathcal{D}$ receives payment proportional to the expended bandwidth even though a malicious $\mathcal{P}^*$ and $\mathcal{C}^*$ may collude with each other. This amounts to the case where $\mathcal{A}$ corrupts both $\mathcal{P}$ and $\mathcal{C}$ and tries to reap $\mathcal{D}$'s bandwidth contribution without paying. In the $\mathsf{VFD}$ subroutine, if $\mathcal{D}$ delivers $\ell$ chunks $(\ell\in [n])$, it correspondingly obtains either $\ell$ or $\ell-1$ receipts acknowledging the bandwidth contribution (the latter if $\mathcal{A}$ stops sending the $\ell$-th receipt). Later, $\mathcal{D}$ can use the latest receipt containing $\mathcal{C}$'s signature to claim the payment $\ell\cdot\bitcoinA_{\mathcal{P}}$ or $(\ell-1)\cdot\bitcoinA_{\mathcal{P}}$ from the contract. At most, $\mathcal{D}$ may waste the bandwidth of delivering one chunk-validation pair of $O(\eta)$ bits. To break this security, $\mathcal{A}$ would have to violate the contract functionality (i.e., control the execution of the smart contract in the blockchain), which happens with negligible probability. Therefore, deliverer fairness against the collusion of malicious $\mathcal{P}^*$ and $\mathcal{C}^*$ is ensured.
\item \underline{\smash{\em Provider Fairness.}} Provider fairness indicates that the honest $\mathcal{P}$ receives payment proportional to the number of valid content chunks that $\mathcal{C}$ receives. A malicious $\mathcal{D}^*$ can collude with a malicious $\mathcal{C}^*$, or simply create multiple fake consumers $\mathcal{C}^*$ (i.e., a Sybil attack), and then cheat $\mathcal{P}$ without any real delivery. These cases can be modeled as an adversary $\mathcal{A}$ corrupting both $\mathcal{D}$ and $\mathcal{C}$. $\mathcal{A}$ can try to break the fairness of the honest $\mathcal{P}$ in two ways: (i) letting $\mathcal{P}$ pay for the delivery without any content being truly delivered; (ii) obtaining the content without paying $\mathcal{P}$. For case (i), $\mathcal{A}$ can claim that $\ell$ ($\ell\in [n]$) chunks have been delivered and would receive the payment $\ell\cdot\bitcoinA_{\mathcal{P}}$ from the contract. Yet this procedure also updates $\mathsf{ctr} := \ell$ in the contract, which later allows $\mathcal{P}$ to receive the payment $\ell\cdot\bitcoinA_{\mathcal{C}}$ after $\mathcal{T}_{\mathsf{dispute}}$ expires, unless $\mathcal{A}$ can manipulate the execution of the smart contract, which happens with negligible probability. Hence, $\mathcal{P}$ still obtains the well-deserved payment $\ell\cdot(\bitcoinA_\mathcal{C} - \bitcoinA_{\mathcal{P}})$. For case (ii), $\mathcal{A}$ can either try to decrypt the delivered chunks by itself without the revealed keys from $\mathcal{P}$, or try to fool the contract into accepting the $\mathsf{PoM}$ and thereby repudiate the payment for $\mathcal{P}$ even though $\mathcal{P}$ honestly reveals the chunk keys.
The former situation reduces to violating the semantic security of the underlying encryption scheme and the pre-image resistance of the cryptographic hash functions, while the latter requires $\mathcal{A}$ to forge $\mathcal{P}$'s signature, break the soundness of the verifiable decryption scheme, or control the execution of the smart contract. All of these situations occur with negligible probability. Overall, provider fairness against the collusion of malicious $\mathcal{D}^*$ and $\mathcal{C}^*$ is assured.
\end{itemize}
In sum, $\Pi_{\mathsf{FD}}$ strictly guarantees fairness for $\mathcal{P}$ and $\mathcal{C}$, and the unpaid delivery for $\mathcal{D}$ is bounded by $O(\eta)$ bits. The fairness requirement of $\Pi_{\mathsf{FD}}$ is thus satisfied.
\noindent
\begin{lemma}
\label{lemma:downloading_confidentiality}
In the synchronous authenticated network and stand-alone model,
$\Pi_{\mathsf{FD}}$ satisfies the confidentiality property against a malicious deliverer corrupted by a non-adaptive P.P.T. adversary $\mathcal{A}$.
\end{lemma}
\noindent{\em Proof.} This property states that, given all protocol scripts as well as the corrupted deliverer's private input and internal states,
it is still computationally infeasible for the adversary to output the provider's input content. In $\Pi_{\mathsf{FD}}$, each chunk delegated to $\mathcal{D}$ is encrypted before delivery using the symmetric encryption scheme, with the encryption key derived by Alg.~\ref{alg:KeyGroupGen}. The distribution of encryption keys cannot be distinguished from the uniform distribution by a P.P.T. adversary. Furthermore, the revealed on-chain elements $erk$ for recovering some chunks' encryption keys are encrypted under the consumer $\mathcal{C}$'s public key, and hence also cannot be distinguished from the uniform distribution by the adversary. Additionally, $\mathcal{C}$ receives the Merkle tree $\mathsf{MT}$ of the content $m$ before the verifiable fair delivery ($\mathsf{VFD}$) procedure starts.
Thus, to break the confidentiality property, the adversary $\mathcal{A}$ has to violate one of the following: (i) the pre-image resistance of the Merkle tree, which further reduces to the pre-image resistance of the cryptographic hash function; or (ii) the security of the public key encryption scheme, which essentially requires solving the decisional Diffie-Hellman problem. The probability of violating the aforementioned security properties is negligible, and therefore $\Pi_{\mathsf{FD}}$ satisfies the confidentiality property against a malicious deliverer corrupted by $\mathcal{A}$.
\noindent
\begin{lemma}
\label{lemma:downloading_timeliness}
If at least one of the three parties $\mathcal{P}$, $\mathcal{D}$, $\mathcal{C}$ is honest and the others are corrupted by a non-adaptive P.P.T. adversary $\mathcal{A}$, $\Pi_{\mathsf{FD}}$ satisfies the timeliness property in the synchronous authenticated network and stand-alone model.
\end{lemma}
\noindent{\em Proof.}
Timeliness states that the honest parties in the protocol $\Pi_{\mathsf{FD}}$ terminate in $O(n)$ synchronous rounds, where $n$ is the number of content chunks, and that when the protocol completes or aborts, fairness and confidentiality are always preserved. As confidentiality follows directly from Lemma~\ref{lemma:downloading_confidentiality} even if malicious parties abort, we focus on the assurance of fairness. We elaborate the following termination cases for the protocol $\Pi_{\mathsf{FD}}$ with the arbiter contract $\mathcal{G}_d$ and at least one honest party:
\noindent
\underline{\smash{\em No abort.}} If all parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ are honest, the protocol $\Pi_{\mathsf{FD}}$ terminates in the {\em Reveal} phase, after $\mathcal{T}_{\mathsf{dispute}}$ expires. The {\em Prepare} phase and the {\em Reveal} phase need $O(1)$ synchronous rounds, and the {\em Deliver} phase requires $O(n)$ rounds, where $n$ is the number of content chunks, yielding $O(n)$ rounds in total for the protocol $\Pi_{\mathsf{FD}}$ to terminate; fairness is guaranteed at completion since each party obtains its well-deserved items.
\noindent
\underline{\smash{\em Aborts in the Prepare phase.}} This phase involves the interaction between the provider $\mathcal{P}$, the deliverer $\mathcal{D}$, and the arbiter contract $\mathcal{G}_d$. It is obvious that this phase terminates in $O(1)$ rounds if any party maliciously aborts or the honest party receives no response after $\mathcal{T}_{\mathsf{round}}$ expires. Besides, after each step in this phase, fairness for both $\mathcal{P}$ and $\mathcal{D}$ is preserved no matter which of them aborts: $\mathcal{P}$ neither loses any coins in the $\mathsf{ledger}$ nor leaks any content chunks, while $\mathcal{D}$ does not waste any bandwidth resource.
\noindent
\underline{\smash{\em Aborts in the Deliver phase.}} This phase involves the provider $\mathcal{P}$, the deliverer $\mathcal{D}$, the consumer $\mathcal{C}$, and the arbiter contract $\mathcal{G}_d$. It can terminate in $O(n)$ rounds. If, after $\mathcal{C}$ sends the $(\mathsf{consume})$ message to the contract, the other parties abort, $\mathcal{C}$ gets its deposit back once $\mathcal{T}_{\mathsf{round}}$ times out. The $\mathsf{VFD}$ procedure in this phase involves only $\mathcal{D}$ and $\mathcal{C}$, and fairness is guaranteed whenever either of the two parties aborts, as analyzed in Lemma~\ref{lemma:VFD_completeness_fairness}. The timer $\mathcal{T}_{\mathsf{deliver}}$ in the contract indicates that the whole $n$-chunk delivery should be completed within this period; otherwise $\mathcal{G}_d$ continues with the protocol by informing $\mathcal{D}$ to claim payment and updating $\mathsf{ctr}$ after $\mathcal{T}_{\mathsf{deliver}}$ times out. $\mathcal{D}$ is thus motivated not to maliciously abort before receiving the payment from the contract. At the end of this phase, $\mathcal{D}$ has completed its task in the delivery session, while $\mathcal{P}$ and $\mathcal{C}$ are motivated to enter the next phase, and fairness for them at this point is guaranteed: $\mathcal{P}$'s coins in $\mathsf{ledger}$ decrease by $\mathsf{ctr}\cdot\bitcoinA_\mathcal{P}$, but the contract has also updated $\mathsf{ctr}$, which allows $\mathcal{P}$ to receive $\mathsf{ctr}\cdot\bitcoinA_{\mathcal{C}}$ from the $\mathsf{ledger}$ if keys are revealed honestly; $\mathcal{C}$ obtains the encrypted chunks while losing no coins in $\mathsf{ledger}$.
\noindent
\underline{\smash{\em Aborts in the Reveal phase.}} This phase involves the provider $\mathcal{P}$, the consumer $\mathcal{C}$, and the arbiter contract $\mathcal{G}_d$. It can terminate in $O(1)$ rounds after the contract sets the state to $\mathsf{sold}$ or $\mathsf{not\_sold}$. If $\mathcal{C}$ aborts after $\mathcal{P}$ reveals the chunk keys on-chain, $\mathcal{P}$ can wait until $\mathcal{T}_{\mathsf{dispute}}$ times out and attain the deserved payment $\mathsf{ctr}\cdot\bitcoinA_{\mathcal{C}}$. If $\mathcal{P}$ reveals incorrect keys and then aborts, $\mathcal{C}$ can raise a complaint within $\mathcal{T}_{\mathsf{dispute}}$ by sending the messages $(\mathsf{wrongRK})$ and further $(\mathsf{PoM})$ to get a refund. Hence, fairness for both $\mathcal{P}$ and $\mathcal{C}$ is guaranteed no matter when and which party maliciously aborts in this phase.
\noindent
\begin{lemma}
\label{lemma:downloading_efficiency}
In the synchronous authenticated network and stand-alone model, for any non-adaptive P.P.T. adversary $\mathcal{A}$, $\Pi_{\mathsf{FD}}$ meets the efficiency requirement that: the communication complexity is bounded by $O(n)$; the on-chain cost is bounded by $\widetilde{O}(1)$; the messages sent by the provider $\mathcal{P}$ after preparation are bounded by $n\cdot \lambda$ bits, where $n$ is the number of chunks and $\lambda$ is a small cryptographic parameter, and $n\cdot \lambda$ is much less than the content size $|m|$.
\end{lemma}
\noindent{\em Proof.}
The analysis regarding the non-trivial efficiency property can be conducted in the following three aspects:
\begin{itemize}
\item \underline{\smash{\em Communication Complexity.}} In the {\em Prepare} phase, $\mathcal{P}$ delegates the signed encrypted chunks to $\mathcal{D}$, where the communication complexity is $O(n)$. Typically this phase only needs to be executed once for the same content. In the {\em Deliver} phase, $\mathcal{P}$ sends the content Merkle tree $\mathsf{MT}$ to $\mathcal{C}$, and $\mathcal{D}$ activates the $\mathsf{VFD}$ subroutine to deliver the content chunks to $\mathcal{C}$. The communication complexity in this phase is also $O(n)$. In the {\em Reveal} phase, the number of revealed elements for recovering the $\mathsf{ctr}$ keys is {\em at most} $O(\log{n})$. Finally, if a dispute happens, the communication complexity of sending $\mathsf{PoM}$ (mostly due to the Merkle proof $\pi^{i}_{\mathsf{MT}}$) to the contract is $O(\log{n})$. Therefore, the communication complexity of the protocol $\Pi_{\mathsf{FD}}$ is $O(n)$.
\item \underline{\smash{\em On-chain Cost.}} In the \textit{optimistic} case where no dispute occurs, the on-chain costs of $\Pi_{\mathsf{FD}}$ include: (i) the functions (i.e., $\mathsf{start}$, $\mathsf{join}$ and $\mathsf{prepared}$) in the {\em Prepare} phase are all $O(1)$; (ii) in the {\em Deliver} phase, the $\mathsf{consume}$ and $\mathsf{delivered}$ functions are $O(1)$. Note that in the $\mathsf{delivered}$ function, the cost of signature verification is $O(1)$ since $\mathcal{D}$ only needs to submit the latest $\mathsf{receipt}$ containing one signature of $\mathcal{C}$; (iii) the storage cost for the revealed elements (i.e., $erk$) is {\em at most} $O(\log{n})$, where $n$ is the number of chunks. Hence, the overall on-chain cost is {\em at most} $O(\log n)$, namely $\widetilde{O}(1)$. In the \textit{pessimistic} case where a dispute happens, the on-chain cost depends only on the delivered chunk size $\eta$, no matter how large the content size $|m|$ is (the relationship between the chunk size and costs in different modes is depicted in Section~\ref{sec:ImplementationandEvaluation}).
\item \underline{\smash{\em Message Volume for $\mathcal{P}$.}} Consider that the contract is deployed and the deliverer is ready to deliver. Every time a new consumer joins, a new delivery session starts. The provider $\mathcal{P}$ shows up twice: (i) sending the Merkle tree $\mathsf{MT}$, which consists of $O(n)$ hash values, to $\mathcal{C}$ in the {\em Deliver} phase, and (ii) revealing $erk$, which contains {\em at most} $O(\log n)$ elements, to $\mathcal{C}$ in the {\em Reveal} phase. The total resulting message volume can thus be bounded by $n\cdot\lambda$ bits, where $\lambda$ is a small cryptographic parameter, and $n\cdot\lambda$ is much less than the content size $|m|$.
\end{itemize}
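The {\em at most} $O(\log n)$ bound on the revealed elements $erk$ can be made tangible with a small sketch: for a perfect binary key tree with $n$ leaves laid out as in Alg.~\ref{alg:ExtractChkKey}, the minimal set of nodes covering the first $\mathsf{ctr}$ leaves has exactly as many elements as the number of 1-bits of $\mathsf{ctr}$, hence never more than $\log_2 n$. The cover computation below is an illustrative reconstruction, not the paper's exact procedure.

```python
def reveal_cover(ctr: int, n: int):
    """Heap indexes of a minimal node set covering leaves 1..ctr of a
    perfect binary tree with n leaves (n a power of two); chunk i's leaf
    sits at heap index n + i - 2, i.e., n - 1 + (i - 1)."""
    cover, lo = [], 0                 # lo: next uncovered leaf, 0-indexed
    while lo < ctr:
        size = 1
        # Grow the subtree while it stays aligned and within the prefix.
        while lo % (2 * size) == 0 and lo + 2 * size <= ctr:
            size *= 2
        idx = n - 1 + lo              # heap index of the block's leftmost leaf
        s = size
        while s > 1:                  # climb to the subtree root
            idx = (idx - 1) // 2
            s //= 2
        cover.append(idx)
        lo += size
    return cover
```

For example, with $n=8$ and $\mathsf{ctr}=5$ the cover has two nodes (one subtree of four leaves plus one single leaf), while $\mathsf{ctr}=n$ needs only the root.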
\begin{theorem}
\label{thm:downloading}
Conditioned on the underlying cryptographic primitives being secure, the protocol $\mathsf{FairDownload}$ satisfies the completeness, fairness, confidentiality against the deliverer, timeliness, and non-trivial efficiency properties in the synchronous authenticated network, $\mathcal{G}^{\mathsf{ledger}}_{d}$-hybrid and stand-alone model.
\end{theorem}
\noindent{\em Proof.}
Lemmas~\ref{lemma:downloading_completeness},~\ref{lemma:downloading_fairness},~\ref{lemma:downloading_confidentiality},~\ref{lemma:downloading_timeliness}, and~\ref{lemma:downloading_efficiency} complete the proof.
\section{$\mathsf{FairStream}${}: Fair p2p Streaming}
\label{sec:ProtocolDesign_Streaming}
In this section, we present the p2p fair delivery protocol $\Pi_{\mathsf{FS}}$, allowing {\em view-while-delivery} in the streaming setting.
\subsection{$\mathsf{FairStream}${} Overview}
As depicted in Fig.~\ref{fig:streaming_protocol_overview}, our protocol $\Pi_{\mathsf{FS}}$ works as three phases, i.e., {\em Prepare}, {\em Stream}, and {\em Payout}, at a high level. The core ideas for $\Pi_{\mathsf{FS}}$ are:
\begin{itemize}
\item Same as the {\em Prepare} phase in $\Pi_{\mathsf{FD}}$, initially the content provider $\mathcal{P}$ would deploy the smart contract, encrypt content chunks, sign the encrypted chunks and delegate to the deliverer $\mathcal{D}$.
\item The streaming process consists of $O(n)$ communication rounds, where $n$ is the number of chunks. In each round, the consumer $\mathcal{C}$ receives an encrypted chunk from $\mathcal{D}$ and a decryption key from $\mathcal{P}$; any party may abort in a certain round due to, e.g., an untimely response or an invalid message; in particular, if an erroneous chunk is detected during streaming, $\mathcal{C}$ can complain and get compensated with a valid and short (i.e., $O(\eta + \lambda)$ bits) proof;
\item Eventually all parties enter the {\em Payout} phase, where $\mathcal{D}$ and $\mathcal{P}$ can claim their deserved payments by submitting the latest receipt signed by the consumer before a timer maintained in the contract expires; the contract determines the final internal state $\mathsf{ctr}$, namely the number of delivered chunks or revealed keys, as the {\em larger} of the indexes in $\mathcal{P}$'s and $\mathcal{D}$'s receipts. If no receipt is received from $\mathcal{P}$ or $\mathcal{D}$ before the timer expires, the contract treats the submitted index of that party as 0. Such a design is critical to ensure fairness, as analyzed in Section~\ref{sec:FS_analysis}.
\end{itemize}
Fig.~\ref{fig:streaming_round} illustrates the concrete message flow of one round of chunk delivery during the {\em Stream} phase. We highlight that a black-box call to the $\mathsf{VFD}$ module is not applicable in the streaming setting, since $\mathsf{VFD}$ only allows the consumer $\mathcal{C}$ to obtain the encrypted chunks; its advantage is that the provider $\mathcal{P}$ merely needs to show up once to reveal a minimum number of elements from which all chunk keys can be recovered. However, streaming demands much lower latency for retrieving each content chunk, leading to the intuitive design of letting $\mathcal{C}$ receive both an encrypted chunk and the corresponding chunk decryption key in the same round. $\mathcal{P}$ is therefore expected to stay online and reveal each chunk key to $\mathcal{C}$. Overall, the protocol design of $\Pi_{\mathsf{FS}}$ requires relatively more involvement of the provider $\mathcal{P}$ compared with the downloading setting, but the advantage is that, instead of downloading all chunks in $O(n)$ rounds before viewing, the consumer $\mathcal{C}$ can now retrieve each chunk with $O(1)$ latency. All other properties, including each party's fairness, the on-chain computational cost, and the deliverer's communication complexity, remain the same as in the downloading setting.
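The per-round consumer logic boils down to decrypting the received chunk with the received key and comparing it against the committed leaf hash, then emitting either receipts or a $\mathsf{PoM}$. Below is a minimal Python sketch; the signature checks are elided, and the XOR stream cipher standing in for $\mathsf{SDec}$ is an illustrative assumption.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def sdec(c: bytes, k: bytes) -> bytes:
    # Toy XOR stream cipher standing in for SDec; XOR is its own
    # inverse, so the same function also encrypts.
    stream = b""
    ctr = 0
    while len(stream) < len(c):
        stream += H(k + ctr.to_bytes(4, "big"))
        ctr += 1
    return bytes(a ^ b for a, b in zip(c, stream))

def consumer_round(i: int, c_i: bytes, k_i: bytes, leaf_hashes):
    """One Stream-phase round on the consumer side: on success the
    consumer would sign receipts for D and P; on a hash mismatch it
    raises a PoM to the contract instead."""
    m = sdec(c_i, k_i)
    if H(m) == leaf_hashes[i - 1]:
        return ("receipt", m)
    return ("PoM", i)
```

A wrong or stale key makes the recomputed leaf hash mismatch with overwhelming probability, which is exactly the condition under which the contract's dispute handler accepts the $\mathsf{PoM}$.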
\begin{figure}[!t]
\centering
\includegraphics[width=.49\textwidth]{Figures/protocol_flow_streaming.eps}
\vspace{-6mm}
\caption{The overview of $\mathsf{FairStream}$ protocol $\Pi_{\mathsf{FS}}$. The dispute may arise in a certain round in the {\em Stream} phase, and the messages $(\mathsf{claimDelivery})$ and $(\mathsf{claimRevealing})$ may be sent to the contract in a different order.}
\label{fig:streaming_protocol_overview}
\vspace{-2mm}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=.45\textwidth]{Figures/streaming_round.eps}
\caption{The concrete message flow of one round chunk delivery in the {\em Stream} phase of $\Pi_{\mathsf{FS}}$. All these messages are sent off-chain.}
\label{fig:streaming_round}
\vspace{-2mm}
\end{figure}
\subsection{Arbiter Contract $\mathcal{G}_{s}^{\mathsf{ledger}}$ for Streaming}
The arbiter contract $\mathcal{G}_{s}^{\mathsf{ledger}}$ (abbr. $\mathcal{G}_{s}$) illustrated in Fig.~\ref{fig:streaming_contract_ideal_functionality} is a stateful ideal functionality that can access the $\mathsf{ledger}$ functionality to facilitate fair content delivery via streaming. The timer $\mathcal{T}_{\mathsf{receive}}$ ensures that, when any party maliciously aborts or the consumer $\mathcal{C}$ receives an invalid chunk during the streaming process, the protocol $\Pi_{\mathsf{FS}}$ can smoothly continue and enter the next phase. The dispute resolution in the contract is simpler than in the downloading setting since no verifiable decryption is needed. The timer $\mathcal{T}_{\mathsf{finish}}$ indicates that both $\mathcal{D}$ and $\mathcal{P}$ are supposed to send their payment-claim requests before $\mathcal{T}_{\mathsf{finish}}$ times out, and it is therefore natural to set $\mathcal{T}_{\mathsf{finish}} > \mathcal{T}_{\mathsf{receive}}$. Once $\mathcal{T}_{\mathsf{finish}}$ expires, the contract determines the final $\mathsf{ctr}$ as the maximum of the indexes in $\mathcal{P}$'s and $\mathcal{D}$'s receipts, namely $\mathsf{ctr}_{\mathcal{P}}$ and $\mathsf{ctr}_{\mathcal{D}}$, respectively, and then distributes the well-deserved payment to each party. Once the delivery session completes, the provider $\mathcal{P}$ can invoke the contract by sending $(\mathsf{reset})$ to $\mathcal{G}_s$ to reset it to the ready state and continue to receive new requests from consumers.
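The settlement step once $\mathcal{T}_{\mathsf{finish}}$ expires can be summarized by the following Python sketch of the Phase-3 ledger updates (amounts are abstract integers; this mirrors the functionality in Fig.~\ref{fig:streaming_contract_ideal_functionality} but is not contract code):

```python
def payout(n, ctr_p, ctr_d, price_p, price_c, pf, plt):
    """Ledger credits to (D, P, C) when T_finish expires. P escrowed
    n*price_p + pf and C escrowed n*price_c upfront; ctr is the larger
    of the receipt indexes submitted by P and D (0 if none)."""
    ctr = max(ctr_p, ctr_d)
    d_amt = ctr * price_p                       # deliverer's bandwidth payment
    if plt:                                     # accepted PoM: penalty fee to C
        p_amt = (n - ctr) * price_p + ctr * price_c
        c_amt = (n - ctr) * price_c + pf
    else:
        p_amt = (n - ctr) * price_p + ctr * price_c + pf
        c_amt = (n - ctr) * price_c
    # The escrowed total is always fully redistributed.
    assert d_amt + p_amt + c_amt == n * price_p + pf + n * price_c
    return d_amt, p_amt, c_amt
```

Taking the maximum of $\mathsf{ctr}_{\mathcal{P}}$ and $\mathsf{ctr}_{\mathcal{D}}$ means neither $\mathcal{P}$ nor $\mathcal{D}$ can shrink the other's payment by withholding its own receipt, which is the fairness-critical design choice discussed above.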
\begin{figure*}[!t]
\centering
\footnotesize
\fbox{%
\parbox{.96\linewidth}{%
\vspace{-2mm}
\begin{center}
{\bf The Arbiter Contract Functionality $\mathcal{G}_{s}^{\mathsf{ledger}}$ for p2p Streaming}
\vspace{1mm}
The contract $\mathcal{G}_{s}$ can access $\mathsf{ledger}$, and it interacts with $\mathcal{P}$, $\mathcal{D}$, $\mathcal{C}$ and the adversary $\mathcal{A}$. It locally stores $\theta$, $n$, $\mathsf{root}_m$, $\bitcoinA_{\mathcal{P}}$, $\bitcoinA_{\mathcal{C}}$, ${\bitcoinA}_{\mathsf{pf}}$, $\mathsf{ctr}_{\mathcal{D}}$, $\mathsf{ctr}_\mathcal{P}$, $\mathsf{ctr}$ (all of $\mathsf{ctr}_{\mathcal{D}}$, $\mathsf{ctr}_\mathcal{P}$, $\mathsf{ctr}$ initialized as 0), $pk_{\mathcal{P}}, pk_{\mathcal{D}}, pk_{\mathcal{C}}$, the penalty flag $\mathsf{plt}$ (initialized as $false$), the state $\Sigma$ and three timers \\ $\mathcal{T}_{\mathsf{round}}$ (implicitly), $\mathcal{T}_{\mathsf{receive}}$, $\mathcal{T}_{\mathsf{finish}}$.
\end{center}
\vspace{-4mm}
\begin{multicols}{2}
\begin{flushleft}
\noindent
\xrfill[0.5ex]{0.5pt} {} {\bf Phase 1: Prepare} \xrfill[0.5ex]{0.5pt}
\begin{itemize}
\item[$\bullet$] This phase is the same as the {\em Prepare} phase in $\mathcal{G}_{d}$.
\end{itemize}
\noindent
\xrfill[0.5ex]{0.5pt} {} {\bf Phase 2: Stream} \xrfill[0.5ex]{0.5pt}
\begin{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{consume}, pk_{\mathcal{C}})$ from $\mathcal{C}$:
\begin{itemize}[-]
\item[-] assert $\theta > 0$
\item[-] assert $\mathsf{ledger}[\mathcal{C}]\ge n\cdot\bitcoinA_{\mathcal{C}}$ $\wedge$ $\Sigma\equiv\mathsf{ready}$
\item[-] store $pk_{\mathcal{C}}$ and let $\mathsf{ledger}[\mathcal{C}]:=\mathsf{ledger}[\mathcal{C}]-n\cdot\bitcoinA_{\mathcal{C}}$
\item[-] start two timers $\mathcal{T}_{\mathsf{receive}}$, and $\mathcal{T}_{\mathsf{finish}}$
\item[-] let $\Sigma := \mathsf{initiated}$ and send $(\mathsf{initiated}, pk_{\mathcal{C}})$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{received})$ from $\mathcal{C}$ before $\mathcal{T}_{\mathsf{receive}}$ times out:
\begin{itemize}[-]
\item[-] assert current time $\mathcal{T} < \mathcal{T}_{\mathsf{receive}}$ and $\Sigma\equiv\mathsf{initiated}$
\item[-] let $\Sigma := \mathsf{received}$ and send $(\mathsf{received})$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}Upon} $\mathcal{T}_{\mathsf{receive}}$ times out:
\begin{itemize}[-]
\item[-] assert current time $\mathcal{T} \geq \mathcal{T}_{\mathsf{receive}}$ and $\Sigma\equiv\mathsf{initiated}$
\item[-] let $\Sigma := \mathsf{received}$ and send $(\mathsf{received})$ to all entities
\end{itemize}
{\ } \\
\item[] {\color{purple}$\triangleright$ Below is to resolve dispute during streaming in $\Pi_{\mathsf{FS}}$}
\vspace{0.5mm}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{PoM}, i, c_i, \sigma_{c_i}, k_i, \sigma_{k_i}, \mathcal{H}(m_i), \pi^{i}_{\mathsf{MT}})$ from $\mathcal{C}$ before $\mathcal{T}_{\mathsf{receive}}$ expires:
\begin{itemize}[-]
\item assert current time $\mathcal{T} < \mathcal{T}_{\mathsf{receive}}$ and $\Sigma\equiv\mathsf{initiated}$
\item assert $\mathsf{Verify}(i||c_i, \sigma_{c_i}, pk_{\mathcal{P}}) \equiv 1$
\item assert $\mathsf{Verify}(i||k_i, \sigma_{k_i}, pk_{\mathcal{P}}) \equiv 1$
\item assert $\mathsf{VerifyMTP}(\mathsf{root}_m, i, \pi^{i}_{\mathsf{MT}}, \mathcal{H}(m_i)) \equiv 1$
\item $m_i' = \mathsf{SDec}(c_i, k_i)$
\item assert $\mathcal{H}(m_i') \neq \mathcal{H}(m_i)$
\item let $\mathsf{plt} := true$
\item let $\Sigma := \mathsf{received}$ and send $(\mathsf{received})$ to all entities
\end{itemize}
\end{itemize}
\noindent
\xrfill[0.5ex]{0.5pt} {} {\bf Phase 3: Payout} \xrfill[0.5ex]{0.5pt}
\begin{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{claimDelivery}, i, \sigma^{i}_{\mathcal{C}\mathcal{D}})$ from $\mathcal{D}$:
\begin{itemize}
\item[-] assert current time $\mathcal{T} < \mathcal{T}_{\mathsf{finish}}$
\item[-] assert $i\equiv n$ or $\Sigma\equiv\mathsf{received}$ or $\Sigma\equiv\mathsf{payingRevealing}$
\item[-] assert $\mathsf{ctr} \equiv 0$ and $0 < i \leq n$
\item[-] assert $\mathsf{Verify}(\mathsf{receipt}||i||pk_{\mathcal{C}}||pk_{\mathcal{D}}, \sigma^{i}_{\mathcal{C}\mathcal{D}}, pk_{\mathcal{C}}) \equiv 1$
\item[-] let $\mathsf{ctr}_{\mathcal{D}} := i$, $\Sigma := \mathsf{payingDelivery}$, and then send $(\mathsf{payingDelivery})$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{claimRevealing}, i, \sigma^{i}_{\mathcal{C}\mathcal{P}})$ from $\mathcal{P}$:
\begin{itemize}
\item[-] assert current time $\mathcal{T} < \mathcal{T}_{\mathsf{finish}}$
\item[-] assert $i \equiv n$ or $\Sigma\equiv\mathsf{received}$ or $\Sigma\equiv\mathsf{payingDelivery}$
\item[-] assert $\mathsf{ctr} \equiv 0$ and $0 < i \leq n$
\item[-] assert $\mathsf{Verify}(\mathsf{receipt}||i||pk_{\mathcal{C}}||pk_{\mathcal{P}}, \sigma^{i}_{\mathcal{C}\mathcal{P}}, pk_{\mathcal{C}}) \equiv 1$
\item[-] let $\mathsf{ctr}_{\mathcal{P}} := i$, $\Sigma := \mathsf{payingRevealing}$, and then send $(\mathsf{payingRevealing})$ to all entities
\end{itemize}
\item[$\bullet$] {\color{blue}Upon} $\mathcal{T}_{\mathsf{finish}}$ times out:
\begin{itemize}
\item[-] assert current time $\mathcal{T} \geq \mathcal{T}_{\mathsf{finish}}$
\item[-] let $\mathsf{ctr} := \max\{\mathsf{ctr}_{\mathcal{D}}, \mathsf{ctr}_\mathcal{P}\}$
\item[-] let $\mathsf{ledger}[\mathcal{D}]:=\mathsf{ledger}[\mathcal{D}]+\mathsf{ctr}\cdot\bitcoinA_{\mathcal{P}}$
\item[-] if $\mathsf{plt}$: \\
{\ } let $\mathsf{ledger}[\mathcal{P}]:= \mathsf{ledger}[\mathcal{P}]+ (n-\mathsf{ctr})\cdot\bitcoinA_{\mathcal{P}} + \mathsf{ctr}\cdot\bitcoinA_{\mathcal{C}}$ \\
{\ } let $\mathsf{ledger}[\mathcal{C}]:=\mathsf{ledger}[\mathcal{C}]+ (n-\mathsf{ctr})\cdot\bitcoinA_{\mathcal{C}} + \bitcoinA_{\mathsf{pf}}$
\item[-] else: \\
{\ } let $\mathsf{ledger}[\mathcal{P}]:=\mathsf{ledger}[\mathcal{P}]+ (n-\mathsf{ctr})\cdot\bitcoinA_{\mathcal{P}} + \mathsf{ctr}\cdot\bitcoinA_{\mathcal{C}} + \bitcoinA_{\mathsf{pf}}$ \\
{\ } let $\mathsf{ledger}[\mathcal{C}]:=\mathsf{ledger}[\mathcal{C}]+ (n-\mathsf{ctr})\cdot\bitcoinA_{\mathcal{C}}$
\item[-] if $\mathsf{ctr} > 0$: \\ {\ } let $\Sigma := \mathsf{sold}$ and send $(\mathsf{sold})$ to all entities
\item[-] else let $\Sigma := \mathsf{not\_sold}$ and send $(\mathsf{not\_sold})$ to all entities
\end{itemize}
{\ } \\
{\color{violet}$\triangleright$ Reset to the ready state for repeatable delivery}
\item[$\bullet$] {\color{blue}On receive} $(\mathsf{reset})$ from $\mathcal{P}$:
\begin{itemize}[-]
\item assert $\Sigma\equiv \mathsf{sold}$ or $\Sigma\equiv \mathsf{not\_sold}$
\item set $\mathsf{ctr}$, $\mathsf{ctr}_{\mathcal{D}}$, $\mathsf{ctr}_{\mathcal{P}}$, $\mathcal{T}_{\mathsf{receive}}$, $\mathcal{T}_{\mathsf{finish}}$ as 0
\item nullify $pk_{\mathcal{C}}$
\item let $\theta := \theta - 1$ and $\Sigma := \mathsf{ready}$
\item send $(\mathsf{ready})$ to all entities
\end{itemize}
\end{itemize}
\end{flushleft}
\end{multicols}
\vspace{-4mm}
}
}
\caption{The streaming-setting arbiter functionality $\mathcal{G}_{s}^{\mathsf{ledger}}$. ``Sending to all entities" captures that the smart contract is transparent to the public.}\label{fig:streaming_contract_ideal_functionality}
\vspace{-4mm}
\end{figure*}
\subsection{$\Pi_{\mathsf{FS}}$: $\mathsf{FairStream}${} Protocol}
\label{sec:ft_streaming_protocol_details}
\smallskip
\noindent
{\bf Phase I for Prepare.} This phase is executed in the same way as the {\em Prepare} phase of the $\Pi_{\mathsf{FD}}$ protocol.
\smallskip
\noindent
{\bf Phase II for Stream.} The consumer $\mathcal{C}$, the deliverer $\mathcal{D}$ and the provider $\mathcal{P}$ interact with the contract $\mathcal{G}_s$ in this phase as:
\begin{itemize}
\item The consumer $\mathcal{C}$, interested in the content with digest $\mathsf{root}_m$, initializes a variable $x := 1$ and then:
\begin{itemize}
\item Asserts $\Sigma \equiv \mathsf{ready}$, runs $(pk_{\mathcal{C}},sk_{\mathcal{C}}) \leftarrow \mathsf{SIG.KGen}(1^{\lambda})$, and sends $(\mathsf{consume}, pk_{\mathcal{C}})$ to $\mathcal{G}_{s}$;
\item Upon receiving the message $(\mathsf{mtree}, \mathsf{MT}, \sigma^{\mathsf{MT}}_{\mathcal{P}})$ from $\mathcal{P}$, asserts
$\mathsf{Verify}(\mathsf{MT}, \sigma^{\mathsf{MT}}_{\mathcal{P}},pk_{\mathcal{P}})\equiv1 \wedge \mathsf{root}(\mathsf{MT})\equiv\mathsf{root}_m$, and stores the Merkle tree $\mathsf{MT}$, or else halts;
\item Upon receiving the message $(\mathsf{deliver}, i, c_i, \sigma_{c_i})$ from $\mathcal{D}$, checks whether $i \equiv x \wedge \mathsf{Verify}(i||c_i, \sigma_{c_i}, pk_{\mathcal{P}}) \equiv 1$; if it holds, starts (for $i\equiv 1$) a timer $\mathcal{T}_{\mathsf{keyResponse}}$ or resets (for $1 <i\leq n$) it, and sends $(\mathsf{keyReq}, i, \sigma^{i}_{\mathcal{C}})$, where $\sigma^{i}_{\mathcal{C}} \leftarrow \mathsf{Sign}(i||pk_{\mathcal{C}}, sk_{\mathcal{C}})$, to $\mathcal{P}$ (i.e., step (2) in Fig.~\ref{fig:streaming_round}). If the check fails or $\mathcal{T}_{\mathsf{keyResponse}}$ times out, halts;
\item Upon receiving the message $(\mathsf{reveal}, i, k_i, \sigma_{k_i})$ from $\mathcal{P}$ before $\mathcal{T}_{\mathsf{keyResponse}}$ times out, checks whether $i \equiv x \wedge \mathsf{Verify}(i||k_i, \sigma_{k_i}, pk_{\mathcal{P}}) \equiv 1$; if the check fails, halts. Otherwise, validates the content chunk based on the received $c_i$ and $k_i$: decrypts $c_i$ to obtain $m_i'$, where $m_i'=\mathsf{SDec}_{k_i}(c_i)$, and then checks whether $\mathcal{H}(m_i')$ is consistent with the $i$-th leaf node in the Merkle tree $\mathsf{MT}$; if inconsistent, sends $(\mathsf{PoM}, i, c_i, \sigma_{c_i}, k_i, \sigma_{k_i}, \mathcal{H}(m_i), \pi^{i}_{\mathsf{MT}})$ to $\mathcal{G}_{s}$. If consistent, sends the receipts $(\mathsf{receipt}, i, \sigma^{i}_{\mathcal{C}\mathcal{D}})$ to $\mathcal{D}$ and $(\mathsf{receipt}, i, \sigma^{i}_{\mathcal{C}\mathcal{P}})$ to $\mathcal{P}$, where $\sigma^{i}_{\mathcal{C}\mathcal{D}} \leftarrow \mathsf{Sign}(\mathsf{receipt}||i||pk_{\mathcal{C}}||pk_{\mathcal{D}}, sk_{\mathcal{C}})$ and $\sigma^{i}_{\mathcal{C}\mathcal{P}} \leftarrow \mathsf{Sign}(\mathsf{receipt}||i||pk_{\mathcal{C}}||pk_{\mathcal{P}}, sk_{\mathcal{C}})$, sets $x := x + 1$, and then waits for the next $(\mathsf{deliver})$ message from $\mathcal{D}$. Once $x$ reaches $n+1$, sends $(\mathsf{received})$ to $\mathcal{G}_s$;
\item Waits for the message $(\mathsf{received})$ from $\mathcal{G}_s$ to halt.
\end{itemize}
\item The deliverer $\mathcal{D}$ initializes a variable $y := 1$ and executes as follows in this phase:
\begin{itemize}
\item Upon receiving $(\mathsf{initiated}, pk_{\mathcal{C}})$ from $\mathcal{G}_s$, sends the message $(\mathsf{deliver}, i, c_i, \sigma_{c_i}), i = 1$ to $\mathcal{C}$ and starts a timer $\mathcal{T}_{\mathsf{chunkReceipt}}$;
\item Upon receiving the message $(\mathsf{receipt}, i, \sigma^{i}_{\mathcal{C}\mathcal{D}})$ from $\mathcal{C}$ before $\mathcal{T}_{\mathsf{chunkReceipt}}$ times out, checks whether $\mathsf{Verify}(\mathsf{receipt}||i||pk_{\mathcal{C}}||pk_\mathcal{D}, \sigma^{i}_{\mathcal{C}\mathcal{D}}, pk_{\mathcal{C}}) \equiv 1 \wedge i \equiv y$; if it holds, continues with the next iteration: sets $y := y+1$, sends $(\mathsf{deliver}, i, c_i, \sigma_{c_i}), i = y$ to $\mathcal{C}$, and resets $\mathcal{T}_{\mathsf{chunkReceipt}}$ (i.e., step (1) in Fig.~\ref{fig:streaming_round}); otherwise, once $\mathcal{T}_{\mathsf{chunkReceipt}}$ times out, enters the next phase.
\end{itemize}
\item The provider $\mathcal{P}$ initializes a variable $z:=1$ and executes as follows in this phase:
\begin{itemize}
\item Upon receiving $(\mathsf{initiated}, pk_{\mathcal{C}})$ from $\mathcal{G}_{s}$: asserts $\Sigma \equiv \mathsf{initiated}$, and sends $(\mathsf{mtree}, \mathsf{MT}, \sigma^{\mathsf{MT}}_{\mathcal{P}})$ to $\mathcal{C}$;
\item Upon receiving $(\mathsf{keyReq}, i, \sigma^{i}_{\mathcal{C}})$ from $\mathcal{C}$, checks whether $i \equiv z \wedge \mathsf{Verify}(i||pk_{\mathcal{C}}, \sigma^{i}_{\mathcal{C}}, pk_{\mathcal{C}}) \equiv 1$; if it holds, sends $(\mathsf{reveal}, i, k_i, \sigma_{k_i})$, where $\sigma_{k_i} \leftarrow \mathsf{Sign}(i||k_i, sk_{\mathcal{P}})$, to $\mathcal{C}$ and starts (for $i \equiv 1$) a timer $\mathcal{T}_{\mathsf{keyReceipt}}$ or resets (for $1 < i \leq n$) it (i.e., step (3) in Fig.~\ref{fig:streaming_round}); otherwise enters the next phase;
\item On input $(\mathsf{receipt}, i, \sigma^{i}_{\mathcal{C}\mathcal{P}})$ from $\mathcal{C}$ before $\mathcal{T}_{\mathsf{keyReceipt}}$ expires, checks whether $\mathsf{Verify}(\mathsf{receipt}||i||pk_{\mathcal{C}}||pk_{\mathcal{P}}, \sigma^{i}_{\mathcal{C}\mathcal{P}}, pk_{\mathcal{C}}) \equiv 1$ $\wedge$ $i\equiv z$; if it holds, sets $z := z+1$. Otherwise, once $\mathcal{T}_{\mathsf{keyReceipt}}$ times out, enters the next phase.
\end{itemize}
\end{itemize}
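The round structure above can be condensed into a toy simulation. This is illustrative Python, not the protocol itself (signatures, timers and the contract are omitted); it shows the invariant used later in the analysis: a consumer who aborts mid-round leaves the deliverer and the provider at most one receipt short.

```python
def simulate_stream(n, consumer_aborts_after=None):
    """Toy model of one Stream-phase session of n rounds.

    Per round i: D delivers chunk i, C requests key i, P reveals k_i, and
    C returns receipts to both D and P. `consumer_aborts_after=i` models
    the worst case where C takes chunk i and key i but withholds the i-th
    receipts. Returns (chunks held by C, keys held by C,
    receipts held by D, receipts held by P).
    """
    chunks_C = keys_C = 0        # items obtained by the consumer
    receipts_D = receipts_P = 0  # receipts held by D and P
    for i in range(1, n + 1):
        chunks_C += 1            # D -> C: (deliver, i, c_i, sigma_{c_i})
        keys_C += 1              # P -> C: (reveal, i, k_i, sigma_{k_i})
        if consumer_aborts_after == i:
            break                # C halts without receipting round i
        receipts_D += 1          # C -> D: (receipt, i, sigma^i_{CD})
        receipts_P += 1          # C -> P: (receipt, i, sigma^i_{CP})
    return chunks_C, keys_C, receipts_D, receipts_P
```

For example, aborting after round 5 of 8 leaves $\mathcal{C}$ with 5 chunks and 5 keys while $\mathcal{D}$ and $\mathcal{P}$ hold 4 receipts each, i.e., the unpaid resource is bounded by one chunk and one key.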
\smallskip
\noindent
{\bf Phase III for Payout.} The provider $\mathcal{P}$ and the deliverer $\mathcal{D}$ interact with the contract $\mathcal{G}_s$ in this phase as:
\begin{itemize}
\item The provider $\mathcal{P}$ executes as follows in this phase:
\begin{itemize}
\item Upon receiving $(\mathsf{received})$ or $(\mathsf{delivered})$ from $\mathcal{G}_s$, or receiving the $n$-th $\mathsf{receipt}$ from $\mathcal{C}$ (i.e., once $z$ reaches $n+1$), sends $(\mathsf{claimRevealing}, i, \sigma^{i}_{\mathcal{C}\mathcal{P}})$ to $\mathcal{G}_s$;
\item Waits for $(\mathsf{revealed})$ from $\mathcal{G}_s$ to halt.
\end{itemize}
\item The deliverer $\mathcal{D}$ executes as follows during this phase:
\begin{itemize}
\item Upon receiving $(\mathsf{received})$ or $(\mathsf{revealed})$ from $\mathcal{G}_s$, or receiving the $n$-th $\mathsf{receipt}$ from $\mathcal{C}$ (i.e., once $y$ reaches $n+1$), sends $(\mathsf{claimDelivery}, i, \sigma^{i}_{\mathcal{C}\mathcal{D}})$ to $\mathcal{G}_s$;
\item Waits for $(\mathsf{delivered})$ from $\mathcal{G}_{s}$ to halt.
\end{itemize}
\end{itemize}
\subsection{Analyzing $\mathsf{FairStream}$ Protocol}
\label{sec:FS_analysis}
\noindent
\begin{lemma}
\label{lemma:streaming_completeness}
Provided that all parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ are honest, $\Pi_{\mathsf{FS}}$ satisfies the completeness property in the synchronous authenticated network model and stand-alone setting.
\end{lemma}
\noindent{\em Proof.}
If all parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ honestly follow the protocol, completeness follows directly: the provider $\mathcal{P}$ receives a net income of $n\cdot(\bitcoinA_{\mathcal{C}}-\bitcoinA_{\mathcal{P}})$; the deliverer $\mathcal{D}$ obtains the payment of $n\cdot\bitcoinA_\mathcal{P}$; the consumer $\mathcal{C}$ pays $n\cdot\bitcoinA_{\mathcal{C}}$ and attains the valid content $m$ with $\phi(m)\equiv 1$.
\noindent
\begin{lemma}
\label{lemma:streaming_fairness}
In the synchronous authenticated network model and stand-alone setting, provided that the underlying cryptographic primitives are secure, $\Pi_{\mathsf{FS}}$ meets the fairness requirement even when at most two of the parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ are corrupted by a non-adaptive P.P.T. adversary $\mathcal{A}$.
\end{lemma}
\noindent{\em Proof.}
The fairness for each party can be reduced to the underlying cryptographic building blocks. Specifically,
\begin{itemize}
\item \underline{\smash{\em Consumer Fairness.}} Consumer fairness means that the honest $\mathcal{C}$ pays proportionally to what it {\em de facto} receives, even though malicious $\mathcal{P}^*$ and $\mathcal{D}^*$ may collude with each other. This case can be modeled as a non-adaptive P.P.T. adversary $\mathcal{A}$ corrupting $\mathcal{P}$ and $\mathcal{D}$ to provide and deliver the content to $\mathcal{C}$. During the {\em Stream} phase, $\mathcal{C}$ can stop sending back receipts at any time an invalid chunk is received and then raise a complaint to the contract to get compensation. Suppose $\mathcal{C}$ receives a sequence $(c_1, \sigma_{c_1}), \cdots, (c_\ell, \sigma_{c_\ell}), \ell \in [n]$, though $\mathcal{A}$ may abort maliciously. Then $\mathcal{A}$ can {\em at most} obtain $\ell$ receipts and claim payments of $\ell\cdot \bitcoinA_{\mathcal{P}}$ and $\ell\cdot\bitcoinA_{\mathcal{C}}$, where the former is paid by $\mathcal{A}$ itself due to the collusion. Overall, $\mathcal{C}$ either pays $\ell\cdot\bitcoinA_{\mathcal{C}}$ and obtains $\ell$ valid chunks or pays nothing. To violate the fairness for $\mathcal{C}$, $\mathcal{A}$ has to break the security of the signature scheme, i.e., forge $\mathcal{C}$'s signature, which succeeds only with negligible probability due to the EU-CMA property of the underlying signature scheme. Therefore, consumer fairness against the collusion of malicious $\mathcal{P}^*$ and $\mathcal{D}^*$ is ensured.
Note that breaking the security of the Merkle tree (i.e., finding another chunk $m_i'\neq m_i$ in position $i$ of $m$ that binds to the same $\mathsf{root}_m$ so as to fool the contract into rejecting $\mathcal{C}$'s $\mathsf{PoM}$) or controlling the execution of the smart contract in the blockchain only allows repudiating the penalty fee $\bitcoinA_{\mathsf{pf}}$ and would not impact $\mathcal{C}$'s fairness in the streaming setting; both events occur with negligible probability, due to the second-preimage resistance of the hash function in the Merkle tree and the fact that the contract is modeled as an ideal functionality.
\item \underline{\smash{\em Deliverer Fairness.}} Deliverer fairness states that the honest $\mathcal{D}$ receives payment proportional to the contributed bandwidth, even though malicious $\mathcal{P}^*$ and $\mathcal{C}^*$ may collude with each other. This case can be modeled as a non-adaptive P.P.T. adversary $\mathcal{A}$ corrupting both $\mathcal{P}$ and $\mathcal{C}$ to reap $\mathcal{D}$'s bandwidth resource without paying. In the {\em Stream} phase, if the honest $\mathcal{D}$ delivers $\ell$ chunks, then it is guaranteed to obtain $\ell$ or $\ell-1$ (i.e., when $\mathcal{A}$ does not respond with the $\ell$-th receipt) receipts. In the {\em Payout} phase, $\mathcal{A}$ cannot lower the payment for the honest $\mathcal{D}$, since $\mathcal{D}$ can send the $\ell$-th or $(\ell-1)$-th receipt to the contract, which updates the internal state $\mathsf{ctr}_{\mathcal{D}}$ to $\ell$ or $\ell - 1$. Once $\mathcal{T}_{\mathsf{finish}}$ times out, $\mathcal{D}$ receives the well-deserved payment of $\ell\cdot\bitcoinA_{\mathcal{P}}$ or $(\ell-1)\cdot\bitcoinA_{\mathcal{P}}$ from the contract, and {\em at most} wastes the bandwidth of delivering one chunk of size $\eta$. To violate the fairness for $\mathcal{D}$, $\mathcal{A}$ has to control the execution of the smart contract to refuse $\mathcal{D}$'s valid payment claim. The probability of controlling the contract functionality in the blockchain is negligible, and therefore deliverer fairness against the collusion of malicious $\mathcal{P}^*$ and $\mathcal{C}^*$ is assured.
\item \underline{\smash{\em Provider Fairness.}} Provider fairness indicates that the honest $\mathcal{P}$ receives payment proportional to the number of valid chunks that $\mathcal{C}$ receives. Malicious $\mathcal{D}^*$ and $\mathcal{C}^*$ may collude with each other, or $\mathcal{D}^*$ can costlessly create multiple fake consumers $\mathcal{C}^*$ (i.e., a Sybil attack), and then cheat $\mathcal{P}$ without truly delivering the content. These cases can be modeled as a non-adaptive P.P.T. adversary $\mathcal{A}$ corrupting both $\mathcal{D}$ and $\mathcal{C}$. There are two situations in which $\mathcal{P}$'s fairness would be violated: (i) $\mathcal{A}$ claims payment (paid by $\mathcal{P}$) without real delivery; (ii) $\mathcal{A}$ obtains content chunks without paying $\mathcal{P}$. For case (i), $\mathcal{A}$ would try to maximize the payment paid by $\mathcal{P}$ by increasing $\mathsf{ctr}_{\mathcal{D}}$ via the $(\mathsf{claimDelivery})$ message sent to the contract. However, $\mathcal{G}_s$ updates the counter $\mathsf{ctr}$ to $\max\{\mathsf{ctr}_{\mathcal{D}},\mathsf{ctr}_{\mathcal{P}}\}$ after $\mathcal{T}_{\mathsf{finish}}$ times out, so maximizing $\mathsf{ctr}_{\mathcal{D}}$ correspondingly maximizes $\mathsf{ctr}$. Suppose $\mathcal{A}$ claims the payment of $\ell\cdot\bitcoinA_{\mathcal{P}}, \ell\in [n]$ by letting the $(\mathsf{claimDelivery})$ message contain the index $\ell$ while no content is actually delivered; then the honest $\mathcal{P}$ correspondingly receives the payment of $\ell\cdot\bitcoinA_{\mathcal{C}}$, and therefore a well-deserved net income of $\ell\cdot(\bitcoinA_{\mathcal{C}}-\bitcoinA_{\mathcal{P}})$, unless $\mathcal{A}$ can manipulate the execution of the smart contract. For case~(ii), on one hand, each content chunk stays encrypted until the corresponding chunk key is received from $\mathcal{P}$.
Hence, $\mathcal{A}$ has to violate the semantic security of the underlying symmetric encryption scheme to break provider fairness, which occurs with negligible probability. On the other hand, during the streaming procedure, $\mathcal{P}$ can always stop revealing chunk keys to $\mathcal{A}$ if no valid receipt for the previous chunk key is returned in time. {\em At most}, $\mathcal{P}$ would lose one content chunk of size $\eta$, while still receiving the well-deserved payment using the latest receipt. To violate the fairness, $\mathcal{A}$ again has to control the execution of the smart contract, which succeeds with negligible probability, to deny the payment for $\mathcal{P}$ even though the submitted receipt is valid. Therefore, provider fairness against the collusion of malicious $\mathcal{D}^*$ and $\mathcal{C}^*$ is guaranteed.
\end{itemize}
In sum, the fairness for $\mathcal{C}$ is strictly ensured in $\Pi_{\mathsf{FS}}$, while for $\mathcal{P}$ and $\mathcal{D}$, the unpaid revealed content for $\mathcal{P}$ and the unpaid bandwidth resource for delivery are bounded by $O(\eta)$ bits, i.e., $\Pi_{\mathsf{FS}}$ satisfies the defined fairness property.
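The contract-side check behind a $\mathsf{PoM}$, i.e., verifying the Merkle proof $\pi^{i}_{\mathsf{MT}}$ that a claimed leaf sits at position $i$ under $\mathsf{root}_m$, can be sketched as follows. SHA-256 stands in for the implementation's keccak256, and the 0-based indexing and left/right sibling ordering are illustrative assumptions.

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in hash; the paper's implementation uses keccak256.
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf, index, proof, root):
    """Recompute the root from a claimed leaf and its sibling path.

    `index` is the 0-based leaf position; `proof` lists sibling hashes from
    the leaf level up. The parity of `index` at each level decides whether
    the current node is a left or right child.
    """
    node = leaf
    for sibling in proof:
        if index % 2 == 0:
            node = H(node + sibling)   # current node is the left child
        else:
            node = H(sibling + node)   # current node is the right child
        index //= 2
    return node == root
```

The loop runs once per tree level, which is the source of the $O(\log n)$ on-chain cost of $\mathsf{PoM}$ in the efficiency analysis.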
\noindent
\begin{lemma}
\label{lemma:streaming_confidentiality}
In the synchronous authenticated network and stand-alone model,
$\Pi_{\mathsf{FS}}$ satisfies the confidentiality property against a malicious deliverer corrupted by a non-adaptive P.P.T. adversary $\mathcal{A}$.
\end{lemma}
\noindent{\em Proof.} The confidentiality property indicates that the deliverer $\mathcal{D}$ cannot learn any useful information about the content $m$ beyond a-priori known knowledge within a delivery session. It can be modeled as a non-adaptive P.P.T. adversary corrupting $\mathcal{D}$. In $\Pi_{\mathsf{FS}}$, the possible sources of leakage about $m$ are: (i) the encrypted content chunks delegated to $\mathcal{D}$; and (ii) the Merkle tree $\mathsf{MT}$ of the content $m$. To break the confidentiality property, $\mathcal{A}$ has to violate the pre-image resistance of the cryptographic hash functions (underlying the encryption scheme and $\mathsf{MT}$), which occurs with negligible probability. Hence, confidentiality against the malicious deliverer is ensured.
\noindent
\begin{lemma}
\label{lemma:streaming_timeliness}
If at least one of the three parties $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ is honest and the others are corrupted by a non-adaptive P.P.T. adversary $\mathcal{A}$, $\Pi_{\mathsf{FS}}$ meets the timeliness property in the synchronous authenticated network and stand-alone model.
\end{lemma}
\noindent{\em Proof.}
The timeliness property means that the honest parties in $\Pi_{\mathsf{FS}}$ can terminate in $O(n)$ synchronous rounds, where $n$ is the number of content chunks, and that when the protocol completes or aborts, fairness and confidentiality are always preserved. As before, we focus on the analysis of fairness, since the guarantee of confidentiality follows straightforwardly from Lemma~\ref{lemma:streaming_confidentiality} even if malicious parties abort. We distinguish the following termination cases for $\Pi_{\mathsf{FS}}$ with the arbiter contract $\mathcal{G}_s$ and at least one honest party:
\noindent
\underline{\smash{\em No abort.}} If all of $\mathcal{P}$, $\mathcal{D}$ and $\mathcal{C}$ are honest, the protocol $\Pi_{\mathsf{FS}}$ terminates in the {\em Payout} phase, after $\mathcal{T}_{\mathsf{finish}}$ times out. Both the {\em Prepare} and {\em Payout} phases can be completed in $O(1)$ rounds, while the {\em Stream} phase needs $O(n)$ rounds, where $n$ is the number of content chunks, resulting in $O(n)$ rounds for the protocol $\Pi_{\mathsf{FS}}$ to terminate; the fairness for all parties at completion is ensured as they obtain their well-deserved items.
\noindent
\underline{\smash{\em Aborts in the Prepare phase.}} The analysis for this phase is the same as for the $\Pi_{\mathsf{FD}}$ protocol in Lemma~\ref{lemma:downloading_timeliness}.
\noindent
\underline{\smash{\em Aborts in the Stream phase.}} This phase involves the provider $\mathcal{P}$, the deliverer $\mathcal{D}$, the consumer $\mathcal{C}$ and the arbiter contract $\mathcal{G}_s$, and it terminates in $O(n)$ rounds due to one of the following cases: (i) $\mathcal{C}$ receives all the chunks and sends the $(\mathsf{received})$ message to the contract; (ii) some party aborts during the streaming, and then the timer $\mathcal{T}_{\mathsf{receive}}$ times out in the contract; (iii) $\mathcal{C}$ successfully raises a complaint about $\mathcal{P}$'s misbehavior. During streaming, if $\mathcal{D}$ aborts, for example, after receiving the $\ell$-th receipt for chunk delivery, then $\mathcal{C}$ is guaranteed to have received $\ell$ encrypted chunks at that point. If $\mathcal{P}$ aborts, for example, after receiving the $\ell$-th receipt for key revealing, then $\mathcal{C}$ is assured to have received $\ell$ keys for decryption at that point. If $\mathcal{C}$ aborts, in the worst case after receiving the $\ell$-th encrypted chunk from $\mathcal{D}$ and the $\ell$-th key from $\mathcal{P}$, then $\mathcal{D}$ is ensured to have obtained $\ell - 1$ receipts for the bandwidth contribution, while $\mathcal{P}$ is guaranteed to have received $\ell - 1$ receipts for key revealing, which means the fairness for $\mathcal{D}$ and $\mathcal{P}$ is still preserved according to the fairness definition, i.e., the unpaid delivery resource for $\mathcal{D}$ and the unpaid content for $\mathcal{P}$ are bounded by one chunk of $O(\eta)$ bits.
\noindent
\underline{\smash{\em Aborts in the Payout phase.}}
This phase involves the provider $\mathcal{P}$, the deliverer $\mathcal{D}$ and the arbiter contract $\mathcal{G}_s$, and it can terminate in $O(1)$ rounds. The fairness for the honest party is not impacted no matter when the other party aborts, since $\mathcal{P}$ and $\mathcal{D}$ independently claim the payment from the contract. After $\mathcal{T}_\mathsf{finish}$ times out, the contract automatically distributes the payment to all parties according to the internal state $\mathsf{ctr}$.
\noindent
\begin{lemma}
\label{lemma:streaming_efficiency}
In the synchronous authenticated network model and stand-alone setting, for any non-adaptive P.P.T. adversary $\mathcal{A}$, $\Pi_{\mathsf{FS}}$ satisfies the efficiency requirement: the communication complexity is bounded by $O(n)$; the on-chain cost is bounded by $\widetilde{O}(1)$; the messages transferred by the provider $\mathcal{P}$ after the setup phase are bounded by $n\cdot\lambda$ bits, where $n$ is the number of chunks and $\lambda$ is a cryptographic parameter, and $n\cdot \lambda$ is much less than the content size $|m|$.
\end{lemma}
\noindent{\em Proof.}
The analysis of efficiency guarantee in $\Pi_{\mathsf{FS}}$ can be conducted in the following three perspectives:
\begin{itemize}
\item \underline{\smash{\em Communication Complexity.}} The {\em Prepare} phase is the same as in the downloading setting, and therefore its communication complexity is $O(n)$. In the {\em Stream} phase, $\mathcal{P}$ sends the Merkle tree $\mathsf{MT}$ of $m$ and meanwhile $\mathcal{D}$ starts to deliver the delegated $n$ chunks to $\mathcal{C}$. If a dispute happens during streaming, the complexity of sending $\mathsf{PoM}$ is $O(\log n)$. Overall the communication complexity of this phase is $O(n)$. In the {\em Payout} phase, the $(\mathsf{claimDelivery})$ and $(\mathsf{claimRevealing})$ messages sent by $\mathcal{D}$ and $\mathcal{P}$ to the contract are $O(1)$. Hence, the total communication complexity of $\Pi_{\mathsf{FS}}$ is $O(n)$.
\item \underline{\smash{\em On-chain Costs.}} The {\em Prepare} phase yields on-chain costs of $O(1)$, the same as in the downloading setting. In the {\em Stream} phase, the on-chain cost of the $\mathsf{consume}$ function is $O(1)$, and the multiple rounds of content delivery (i.e., the streaming process) are executed off-chain. When a dispute occurs during streaming, the on-chain cost is $O(\log n)$ (for verifying the Merkle proof), leading to a total on-chain cost of $O(\log n)$. In the {\em Payout} phase, the on-chain cost is $O(1)$ since $\mathcal{P}$ and $\mathcal{D}$ only need to submit the latest receipt consisting of one signature. Overall, the on-chain cost of $\Pi_{\mathsf{FS}}$ is $O(\log n)$, namely $\widetilde{O}(1)$.
\item \underline{\smash{\em Message Volume for $\mathcal{P}$.}} Suppose the contract is deployed and the deliverer is ready to deliver. Each time a new consumer joins, a new delivery session starts. The messages that $\mathcal{P}$ needs to send include: (i) the Merkle tree $\mathsf{MT}$ of $m$ in the {\em Stream} phase, which consists of $O(n)$ hashes; (ii) the $n$ chunk keys revealed to $\mathcal{C}$, which is $O(n)$. Note that the message volume decreases from $n$ chunks to $n$ keys (e.g., 32 KB for a chunk vs. 256 bits for a chunk key); (iii) the $(\mathsf{claimRevealing})$ message for claiming payment, which is $O(1)$ since only the latest receipt containing one signature needs to be submitted to $\mathcal{G}_s$. Overall, the resulting message volume can be represented as $n\cdot \lambda$, where $\lambda$ is a small cryptographic parameter, which is much smaller than the content size $|m|$.
\end{itemize}
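The chunk-vs-key comparison in item (ii) can be checked with a back-of-the-envelope computation; the 32 KB chunk size and 256-bit key size are the example figures from the text.

```python
def provider_message_volume(n, chunk_bytes=32 * 1024, key_bits=256):
    """Compare P's dominant traffic (n chunk keys) with shipping the
    n content chunks directly. Returns (total key bits, savings factor)."""
    content_bits = n * chunk_bytes * 8       # the n chunks themselves
    key_stream_bits = n * key_bits           # the n revealed chunk keys
    return key_stream_bits, content_bits // key_stream_bits
```

For $n = 512$ the provider ships $512 \cdot 256$ bits of keys, a $1024\times$ reduction over sending the 32 KB chunks itself.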
\begin{theorem}
\label{thm:stream}
Provided that the underlying cryptographic primitives are secure, the protocol $\mathsf{FairStream}$ satisfies the completeness, fairness, confidentiality against the deliverer, timeliness, and non-trivial efficiency properties in the synchronous authenticated network, $\mathcal{G}^{\mathsf{ledger}}_s$-hybrid and stand-alone model.
\end{theorem}
\noindent{\em Proof.} Lemmas~\ref{lemma:streaming_completeness},~\ref{lemma:streaming_fairness},~\ref{lemma:streaming_confidentiality},~\ref{lemma:streaming_timeliness}, and~\ref{lemma:streaming_efficiency} complete the proof.
\vspace{1mm}
Besides, we have the following corollary to characterize the latency relationship between $\mathsf{FairDownload}$ and $\mathsf{FairStream}$.
\noindent
\begin{corollary}
\label{lemma:streaming_less_latency}
In the synchronous authenticated setting without corruptions, the honest consumer $\mathcal{C}$ in $\Pi_{\mathsf{FS}}$ can: (i) retrieve the first chunk in $O(1)$ communication rounds once the Stream phase is activated; (ii) retrieve every $(i+1)$-th content chunk in $O(1)$ communication rounds once the $i$-th content chunk has been delivered. This yields lower retrieval latency than $\Pi_\mathsf{FD}$, where the consumer retrieves all chunks only after $O(n)$ rounds once the Deliver phase is activated.
\end{corollary}
\noindent{\em Proof.} In $\Pi_\mathsf{FD}$, the honest consumer $\mathcal{C}$ can obtain the keys to decrypt the received chunks only after the completion of the verifiable fair delivery module, meaning that the latency of retrieving the raw content chunks is $O(n)$ communication rounds. For $\Pi_{\mathsf{FS}}$, in contrast, in each round of streaming the honest $\mathcal{C}$ obtains one encrypted chunk from the deliverer $\mathcal{D}$ as well as one decryption key from the provider $\mathcal{P}$; consequently, the retrieval latency, though entailing relatively more involvement of $\mathcal{P}$, is only $O(1)$ communication rounds.
\vspace{1mm}
It is worth pointing out that in $\mathsf{FairStream}$, $\mathcal{P}$ and $\mathcal{D}$ are only allowed to claim the payment {\em after} the $\mathsf{received}$ state in the contract, which indicates that either $\mathcal{C}$ has received all the valid chunks or some party has aborted during the streaming procedure. Typically the number of delivered chunks $\mathsf{ctr}$, and therefore the payment amounts to $\mathcal{D}$ (i.e., $\mathsf{ctr}\cdot\bitcoinA_{\mathcal{P}}$) and $\mathcal{P}$ (i.e., $\mathsf{ctr}\cdot\bitcoinA_{\mathcal{C}}$), would {\em not} be very small. Under an alternative strategy that allows $\mathcal{P}$ and $\mathcal{D}$ to claim the payment at any time during the streaming, the payment amount may be small, e.g., in pennies. In that case, it is feasible to introduce payment channels~\cite{Malavolta-et-al-2017-CCS,Dziembowski-et-al-2019-SP} to handle micropayments~\cite{Decker-et-al-2015-Springger} and improve efficiency. Such a strategy can be an interesting future extension.
{\bf Extension for delivering from any specific chunk}.
The protocol $\mathsf{FairStream}$ (as well as $\mathsf{FairDownload}$) can be easily adapted to transfer the content from the middle instead of the beginning.
Specifically, for the downloading setting, one can simply let the content provider reveal the elements that suffice to recover a {\em sub-tree} of the key derivation tree $\mathsf{KT}$ for decrypting the transferred chunks. The complaint about an incorrect decryption key follows the same procedure as in~\S\ref{sec:ProtocolDesign_Downloading}. For the streaming setting, it is more straightforward, as each chunk ciphertext and its decryption key are uniquely identified by the index and can be obtained in $O(1)$ rounds by the consumer, who can immediately complain to the contract in the presence of an incorrect decryption result.
\section{Implementation and Evaluations}
\label{sec:ImplementationandEvaluation}
To shed some light on the feasibility of $\mathsf{FairDownload}${} and $\mathsf{FairStream}${}, we implement, deploy and evaluate them in the {\em Ethereum Ropsten} network. The arbiter contract is implemented in Solidity and split into \textit{Optimistic} and \textit{Pessimistic} modules, where the former is executed when no dispute occurs while the latter is additionally called if a dispute happens. Note that the contracts are only deployed once and may be used multiple times to facilitate many deliveries, which amortizes the cost of deployment.
\smallskip
\noindent
{\bf Cryptographic instantiations.} The hash function is \textit{keccak256} and the digital signature is ECDSA over the secp256k1 curve. The encryption of each chunk $m_i$ with key $k_i$ is instantiated as: parse $m_i$ into $t$ 32-byte blocks $(m_{i,1}, \dots, m_{i,t})$ and output $c_i = (m_{i,1} \oplus \mathcal{H}(k_i||1), \dots, m_{i,t} \oplus \mathcal{H}(k_i||t))$. {Decryption is identical to encryption.}
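A sketch of this chunk cipher follows. Python's \texttt{hashlib.sha3\_256} stands in for keccak256 (the two differ in padding), and the byte encoding of the block index $j$ is an illustrative assumption.

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in for keccak256; sha3_256 is the NIST variant, used here
    # only because this sketch merely needs *a* hash function.
    return hashlib.sha3_256(data).digest()

def senc(k: bytes, m: bytes) -> bytes:
    """XOR chunk cipher from the instantiation: split m into 32-byte
    blocks m_{i,1..t} and XOR block j with H(k || j)."""
    assert len(m) % 32 == 0, "pad the chunk to a multiple of 32 bytes first"
    out = bytearray()
    for j in range(len(m) // 32):
        # The index encoding k || j is an assumption of this sketch.
        pad = H(k + str(j + 1).encode())
        block = m[32 * j: 32 * (j + 1)]
        out.extend(b ^ p for b, p in zip(block, pad))
    return bytes(out)

sdec = senc  # XOR cipher: decryption is identical to encryption
```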
We construct the public key encryption scheme based on ElGamal: Let $\mathcal{G}=\langle{g}\rangle$ be the $G_1$ group over the {\em alt-bn128} curve \cite{EIP-196} of prime order $q$, where $g$ is a group generator; the private key is $k \stackrel{R}{\longleftarrow} \mathbb{Z}_{q}$, the public key is $h = g^k$, the encryption is $\mathsf{VEnc}_{h}(m)=(c_1,c_2)=(g^r,m\cdot g^{kr})$, where $r\stackrel{R}{\longleftarrow} \mathbb{Z}_{q}$ and $m$ is encoded into $\mathcal{G}$ with Koblitz's method~\cite{Koblitz-1987-Mathematics}, and the decryption is $\mathsf{VDec}_{k}((c_1,c_2))= c_2/c_1^{k}$. To augment ElGamal with verifiable decryption, we adopt the Schnorr protocol \cite{Schnorr-1989-CTAC} for Diffie-Hellman tuples with the Fiat-Shamir transform \cite{Fiat-Shamir-1986-Crypto} in the random oracle model. Specifically, $\mathsf{ProvePKE}_{k}((c_1,c_2))$ works as follows: run $\mathsf{VDec}_{k}((c_1,c_2))$ to obtain $m$; let $x \stackrel{R}{\longleftarrow} \mathbb{Z}_{q}$, compute $A=g^x$, $B=c_1^x$, $C=\mathcal{H}(g||A||B||h||c_1||c_2||m)$, $Z=x+kC$, $\pi=(A,B,Z)$, and output $(m,\pi)$. $\mathsf{VerifyPKE}_{h}((c_1,c_2), m,\pi)$ works as follows: parse $\pi$ to obtain $(A,B,Z)$, compute $C'=\mathcal{H}(g||A||B||h||c_1||c_2||m)$, verify $(g^Z \equiv A\cdot h^{C'}) \wedge (m^{C'}\cdot c_1^{Z} \equiv B\cdot c_2^{C'})$, and output $1/0$ indicating whether the verification succeeds or fails.
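The construction can be sketched end-to-end in Python. This is a toy instantiation, not the deployed one: the order-11 subgroup of $\mathbb{Z}^*_{23}$ replaces alt-bn128, SHA-256 replaces the hash, and the Koblitz message encoding is omitted (the plaintext is assumed to already lie in the group).

```python
import hashlib
import random

# Toy Schnorr group: G = 2 generates the order-Q subgroup of Z_P^*.
P, Q, G = 23, 11, 2

def Hq(*parts) -> int:
    """Fiat-Shamir challenge: hash the transcript into Z_Q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen(rng):
    k = rng.randrange(1, Q)          # private key k
    return k, pow(G, k, P)           # public key h = g^k

def venc(h, m, rng):
    r = rng.randrange(1, Q)
    return pow(G, r, P), (m * pow(h, r, P)) % P      # (g^r, m * h^r)

def vdec(k, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, k, P), P - 2, P)) % P   # c2 / c1^k

def prove_pke(k, h, ct, rng):
    """Schnorr proof (via Fiat-Shamir) that m = VDec_k(ct) under h = g^k."""
    c1, c2 = ct
    m = vdec(k, ct)
    x = rng.randrange(1, Q)
    A, B = pow(G, x, P), pow(c1, x, P)
    C = Hq(G, A, B, h, c1, c2, m)
    Z = (x + k * C) % Q
    return m, (A, B, Z)

def verify_pke(h, ct, m, proof):
    c1, c2 = ct
    A, B, Z = proof
    C = Hq(G, A, B, h, c1, c2, m)
    ok1 = pow(G, Z, P) == (A * pow(h, C, P)) % P                    # g^Z = A*h^C
    ok2 = (pow(m, C, P) * pow(c1, Z, P)) % P == (B * pow(c2, C, P)) % P
    return ok1 and ok2                                              # m^C*c1^Z = B*c2^C
```

The two checks in \texttt{verify\_pke} are exactly the paper's verification equations; correctness follows since $g^{x+kC}=A\cdot h^{C}$ and $m^{C}\cdot c_1^{x+kC}=c_1^{x}\cdot(m\cdot g^{kr})^{C}$.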
\smallskip
\noindent
\subsection{Evaluating $\mathsf{FairDownload}${}}
Table~\ref{tab:gas_costs} presents the on-chain costs for all functions in $\Pi_{\mathsf{FD}}$. Due to the recent violent fluctuation of the Ether price, we adopt a gas price of 10 Gwei, which ensures that over half of the mining power in Ethereum would mine the transaction\footnote{https://ethgasstation.info/.}, and an exchange rate of 259.4 USD per Ether, which is the average market price of Ether between Jan./1st/2020 and Nov./3rd/2020 from coindesk\footnote{https://www.coindesk.com/price/ethereum/.}. We stress that utilizing other cryptocurrencies such as Ethereum Classic\footnote{https://ethereumclassic.org/.} can decrease the execution cost much further. The same pricing also applies to the streaming setting.
\smallskip
\noindent
{\bf Cost of optimistic case.} Without complaints, the protocol $\Pi_{\mathsf{FD}}$ only executes the functions in the {\em Deliver} and {\em Reveal} phases when a new consumer joins, yielding a total cost of 1.032 USD for all involved parties, excluding the one-time cost for deployment and the {\em Prepare} phase. Notably, this on-chain cost is {\em constant} regardless of the content size or the chunk size, as illustrated in Figure~\ref{fig:chunk_size_cost}. In the worst case, up to $\log n$ elements in the Merkle tree need to be revealed; Figure~\ref{fig:reveal_cost} depicts the relationship between the number of revealed elements and the corresponding costs.
\begin{figure}[!htpb]
\vspace{-4mm}
\centering
\subfloat[Costs for various chunk size \label{fig:chunk_size_cost}]{\includegraphics[width=.48\linewidth]{Figures/chunk_size_and_costs.eps}}
\hspace{1mm}
\subfloat[Costs for revealing $erk$ elements \label{fig:reveal_cost}]{\includegraphics[width=.48\linewidth]{Figures/reveal_costs.eps}}
\vspace{-1mm}
\caption{Experiment results for the $\mathsf{FairDownload}$ protocol (averaged over 5 independent runs).}
\vspace{-2mm}
\end{figure}
\begin{table}[!t]
\centering
\begin{scriptsize}
\caption{The on-chain costs of all functions in $\mathsf{FairDownload}$}
\vspace{-2mm}
\centering
\label{tab:gas_costs}
\setlength{\tabcolsep}{0.5em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ c | c | c | c | c }
\hline
\hline
{\em Phase} & {\em Function}& {\em Caller} & {\em Gas Costs} & {USD Costs} \\
\hline
Deploy & (Optimistic) & $\mathcal{P}$ & 2 936 458 & 7.617 \\
\hline
Deploy & (Pessimistic) & $\mathcal{P}$ & 2 910 652 & 7.550 \\
\hline
\hline
\multirow{3}{*}{Prepare}
& \multicolumn{1}{c|}{$\mathsf{start}$}
& \multicolumn{1}{c|}{$\mathcal{P}$}
& \multicolumn{1}{c|}{ 110 751}
& \multicolumn{1}{c}{0.287} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{join}$}
& \multicolumn{1}{c|}{$\mathcal{D}$}
& \multicolumn{1}{c|}{ 69 031}
& \multicolumn{1}{c}{0.179} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{prepared}$} & \multicolumn{1}{c|}{$\mathcal{D}$} & \multicolumn{1}{c|}{ 34 867} & \multicolumn{1}{c}{0.090} \\\cline{2-5}
\hline
\multirow{3}{*}{Deliver}
&
\multicolumn{1}{c|}{$\mathsf{consume}$} & \multicolumn{1}{c|}{$\mathcal{C}$} & \multicolumn{1}{c|}{117 357} & \multicolumn{1}{c}{0.304} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{delivered}$} & \multicolumn{1}{c|}{$\mathcal{C}$} & \multicolumn{1}{c|}{57 935} & \multicolumn{1}{c}{0.150} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{verifyVFDProof}$} & \multicolumn{1}{c|}{$\mathcal{D}$} & \multicolumn{1}{c|}{56 225} & \multicolumn{1}{c}{0.146} \\
\hline
\multirow{2}{*}{Reveal}
& \multicolumn{1}{c|}{$\mathsf{revealKeys}$} & \multicolumn{1}{c|}{$\mathcal{P}$} & \multicolumn{1}{c|}{113 041} & \multicolumn{1}{c}{0.293} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{payout}$} & \multicolumn{1}{c|}{$\mathcal{G}_{d}$} & \multicolumn{1}{c|}{53 822} & \multicolumn{1}{c}{0.139} \\\cline{2-5}
\hline
\hline
\multirow{2}{*}{\makecell[c]{Dispute Resolution}}
& \multicolumn{1}{c|}{$\mathsf{wrongRK}$} & \multicolumn{1}{c|}{$\mathcal{C}$} & \multicolumn{1}{c|}{23 441} & \multicolumn{1}{c}{0.061} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{PoM}$} & \multicolumn{1}{c|}{$\mathcal{C}$} & \multicolumn{1}{c|}{389 050} & \multicolumn{1}{c}{1.009} \\
\hline
\hline
\end{tabular}
}
\end{scriptsize}
\vspace{-4mm}
\end{table}
\smallskip
\noindent
{\bf Cost of pessimistic case.} When a complaint arises, the arbiter contract is invoked to resolve the dispute. The cost of executing the $\mathsf{wrongRK}$ function depends on the concrete values of $n$, $\mathsf{ctr}$ and $|erk|$; in Table~\ref{tab:gas_costs}, the cost is evaluated at $n = \mathsf{ctr} = 512$ and $|erk| = 1$. The cost of the $\mathsf{PoM}$ function validating misbehavior varies with the content chunk size $\eta$, as depicted by the pessimistic costs in Figure~\ref{fig:chunk_size_cost}. The results demonstrate that the on-chain costs increase linearly in the chunk size (mostly due to chunk decryption in the contract).
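The USD figures in Table~\ref{tab:gas_costs} follow from the standard gas-to-USD conversion; the sketch below shows the arithmetic (the gas price and ETH price used here are hypothetical placeholders, not the rates underlying our measurements):

```python
def gas_to_usd(gas_used, gas_price_gwei, eth_price_usd):
    """Convert a gas amount to USD:
    gas * gas price (gwei -> ETH via 1e-9) * ETH/USD rate."""
    return gas_used * gas_price_gwei * 1e-9 * eth_price_usd

# hypothetical figures: 100,000 gas at 100 gwei with ETH at 1000 USD
print(gas_to_usd(100_000, 100, 1000))  # 10.0
```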
\subsection{Evaluating $\mathsf{FairStream}${}}
\begin{table}[!t]
\centering
\begin{scriptsize}
\caption{The on-chain costs of all functions in $\mathsf{FairStream}$}
\vspace{-2mm}
\centering
\label{tab:gas_costs_streaming}
\setlength{\tabcolsep}{1.0em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ c | c | c | c | c }
\hline
\hline
{\em Phase} & {\em Function}& {\em Caller} & {\em Gas Costs} & {\em USD Costs} \\
\hline
Deploy & (Optimistic) & $\mathcal{P}$ & 1 808 281 & 4.691 \\
\hline
Deploy & (Pessimistic) & $\mathcal{P}$ & 1 023 414 & 2.655 \\
\hline
\hline
\multirow{3}{*}{Prepare}
& \multicolumn{1}{c|}{$\mathsf{start}$}
& \multicolumn{1}{c|}{$\mathcal{P}$}
& \multicolumn{1}{c|}{ 131 061}
& \multicolumn{1}{c}{0.340} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{join}$}
& \multicolumn{1}{c|}{$\mathcal{D}$}
& \multicolumn{1}{c|}{ 54 131}
& \multicolumn{1}{c}{0.140} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{prepared}$} & \multicolumn{1}{c|}{$\mathcal{D}$} & \multicolumn{1}{c|}{ 34 935} & \multicolumn{1}{c}{0.091} \\\cline{2-5}
\hline
\multirow{4}{*}{Stream}
&
\multicolumn{1}{c|}{$\mathsf{consume}$} & \multicolumn{1}{c|}{$\mathcal{C}$} & \multicolumn{1}{c|}{95 779} & \multicolumn{1}{c}{0.248} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{received}$} & \multicolumn{1}{c|}{$\mathcal{C}$} & \multicolumn{1}{c|}{39 857} & \multicolumn{1}{c}{0.103} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{receiveTimeout}$} & \multicolumn{1}{c|}{$\mathcal{G}_s$} & \multicolumn{1}{c|}{39 839} & \multicolumn{1}{c}{0.103} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{PoM}$} & \multicolumn{1}{c|}{$\mathcal{C}$} & \multicolumn{1}{c|}{90 018} & \multicolumn{1}{c}{0.234} \\
\hline
\multirow{3}{*}{Payout}
&
\multicolumn{1}{c|}{$\mathsf{claimDelivery}$} & \multicolumn{1}{c|}{$\mathcal{D}$} & \multicolumn{1}{c|}{67 910} & \multicolumn{1}{c}{0.176} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{claimRevealing}$} & \multicolumn{1}{c|}{$\mathcal{P}$} & \multicolumn{1}{c|}{67 909} & \multicolumn{1}{c}{0.176} \\\cline{2-5}
&
\multicolumn{1}{c|}{$\mathsf{finishTimeout}$} & \multicolumn{1}{c|}{$\mathcal{G}_s$} & \multicolumn{1}{c|}{88 599} & \multicolumn{1}{c}{0.230} \\
\hline
\hline
\end{tabular}
}
\end{scriptsize}
\vspace{-4mm}
\end{table}
\begin{figure*}[!htpb]
\centering
\subfloat[Bandwidths among entities in the testing experiment \label{fig:streaming_bandwidth}]{\includegraphics[width=.18\linewidth]{Figures/bandwidth.eps}}
\hspace{0.8mm}
\subfloat[Time costs of streaming 512 content chunks in LAN \label{fig:streaming_LAN}]{\includegraphics[width=.26\linewidth]{Figures/StreamingLAN.eps}}
\hspace{0.8mm}
\subfloat[Time costs of streaming 512 content chunks in WAN \label{fig:streaming_WAN}]{\includegraphics[width=.26\linewidth]{Figures/StreamingWAN.eps}}
\hspace{0.8mm}
\subfloat[Average time costs and the corresponding bitrate for various chunk size \label{fig:streaming_result}]{\includegraphics[width=.26\linewidth]{Figures/FTStreaming_avg.eps}}
\vspace{-2mm}
\caption{The performance of $\mathsf{FairStream}$ protocol in the LAN and WAN testing environments (averaged over 5 independent runs).}
\vspace{-2mm}
\end{figure*}
Table~\ref{tab:gas_costs_streaming} illustrates the on-chain costs of all functions in ${\mathsf{FairStream}}$. As the contract deployment and the {\em Prepare} phase are executed only {\em once}, we discuss the costs in both the optimistic and pessimistic modes after a new consumer participates, i.e., starting from the {\em Stream} phase. Specifically,
\smallskip
\noindent
{\bf Costs of optimistic case.} When no dispute occurs, the $\Pi_{\mathsf{FS}}$ protocol executes the functions in the {\em Stream} and {\em Payout} phases except the $\mathsf{PoM}$ function for verifying a proof of misbehavior, yielding a total cost of 0.933 USD for all involved parties. Note that only one of the $\mathsf{received}$ and $\mathsf{receiveTimeout}$ functions would be invoked. Meanwhile, the $\mathsf{claimDelivery}$ and $\mathsf{claimRevealing}$ functions may be called in different orders. The costs in the optimistic mode are {\em constant} regardless of the content size and chunk size.
\smallskip
\noindent
{\bf Costs of pessimistic case.} When a complaint arises, the total on-chain cost is 1.167 USD for all involved parties during a delivery session. The cost of the $\mathsf{PoM}$ function: (i) increases slightly with the number of chunks $n$, since it computes $O(\log n)$ hashes to verify the Merkle tree proof; (ii) increases linearly in the content chunk size $\eta$ due to the chunk decryption in the contract, following a trend similar to the pessimistic costs in Fig.~\ref{fig:chunk_size_cost} but at lower cost, since no verifiable decryption proof needs to be verified.
\smallskip
\noindent
{\bf Streaming efficiency.} To demonstrate the feasibility of using $\Pi_{\mathsf{FS}}$ for p2p streaming, we evaluate the efficiency of streaming 512 content chunks with various chunk sizes. Fig.~\ref{fig:streaming_bandwidth} shows the experimental bandwidth among parties in LAN (i.e., three VM instances on three servers residing on the same rack and connected via different switches; the servers are all Dell PowerEdge R740, each equipped with 2 Intel(R) Xeon(R) Silver 4114 processors, 256 GB (16 slots$\times$16 GB/slot) 2400 MHz DDR4 RDIMM memory and 8 TB (8 slots$\times$1 TB/slot) 2.5 inch SATA hard drive; each VM has the same configuration of 8 vCPUs, 24 GB memory and 800 GB hard drive) and WAN (i.e., three {\em Google cloud} VM instances initialized in {\em us-east4-c}, {\em us-east1-b} and {\em europe-north1-a}, respectively; each VM is configured with 2 vCPUs, 4 GB memory and 10 GB hard drive). Considering that $\mathcal{P}$ owns the information to choose a proper deliverer $\mathcal{D}$ to ensure better delivery quality (e.g., less delay from $\mathcal{D}$ to $\mathcal{C}$), the link between $\mathcal{D}$ and $\mathcal{C}$ is therefore evaluated in a higher-bandwidth environment. Figures~\ref{fig:streaming_LAN} and~\ref{fig:streaming_WAN} illustrate the results of consecutively streaming 512 content chunks in LAN and WAN and the corresponding time costs. We can derive the following observations: (i) the time costs increase with the chunk size; (ii) the delivery process remains stable with only slight fluctuation, as reflected by the slope for each chunk size in Figures~\ref{fig:streaming_LAN} and~\ref{fig:streaming_WAN}. Furthermore, Fig.~\ref{fig:streaming_result} depicts the average time cost per chunk (over the 512 chunks) and the corresponding bitrate.
The results show that the bitrate can reach 10 Mbps even in the public network, which is potentially sufficient to support high-quality content streaming; e.g., the video bitrate for HD 720 and HD 1080 is {\em at most} 4 Mbps and 8 Mbps, respectively~\cite{Bitrate_Bandwidth_IBM-2020-WebPage}.
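The bitrate figures follow directly from the average per-chunk delivery time; a minimal sketch of the conversion (illustrative numbers, not our measured data):

```python
def bitrate_mbps(chunk_bytes, avg_seconds_per_chunk):
    """Effective streaming bitrate in Mbps for a given chunk size
    and average delivery time per chunk."""
    return chunk_bytes * 8 / 1e6 / avg_seconds_per_chunk

# e.g., a 1 MB chunk delivered in 0.8 s on average sustains 10 Mbps,
# above the ~8 Mbps upper bound cited for HD 1080 video
print(bitrate_mbps(1_000_000, 0.8))  # 10.0
```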
\ignore{
\begin{table}[!htpb]
\centering
\begin{scriptsize}
\caption{The time costs of streaming 512 chunks in LAN (unit: seconds)}
\centering
\label{tab:streaming_time_cost_LAN}
\begin{tabular}{ c | c | c | c | c }
\hline
Fraction of delivered chunks& 32 KB & 256 KB & 1 MB & 2 MB\\
\hline\hline
10\% & 1.24 & 1.49 & 2.89 & 4.92 \\
\hline
20\% & 2.29 & 3.07 & 5.48 & 9.55 \\
\hline
30\% & 3.30 & 4.72 & 8.26 & 14.42 \\
\hline
40\% & 4.22 & 6.22 & 11.02 & 19.07 \\
\hline
50\% & 5.18 & 7.73 & 13.75 & 23.83 \\
\hline
60\% & 6.11 & 9.23 & 16.50 & 28.59 \\
\hline
70\% & 7.03 & 10.71 & 19.23 & 33.28 \\
\hline
80\% & 7.91 & 12.28 & 22.01 & 38.15 \\
\hline
90\% & 8.77 & 13.76 & 24.78 & 38.15 \\
\hline
100\% & 9.59 & 15.25 & 27.55 & 47.84 \\
\hline\hline
\end{tabular}
\end{scriptsize}
\end{table}
}
\ignore{
\begin{table}[!htpb]
\centering
\begin{scriptsize}
\caption{The time costs of streaming 512 chunks in WAN (unit: seconds)}
\centering
\label{tab:streaming_time_cost_WAN}
\begin{tabular}{ c | c | c | c | c }
\hline
Fraction of delivered chunks& 32 KB & 256 KB & 1 MB & 2 MB\\
\hline\hline
10\% & 8.06 & 8.58 & 14.03 & 15.15 \\
\hline
20\% & 15.46 & 16.61 & 25.82 & 28.93 \\
\hline
30\% & 22.92 & 24.92 & 37.13 & 43.17 \\
\hline
40\% & 30.21 & 33.03 & 48.08 & 56.97 \\
\hline
50\% & 37.48 & 41.17 & 59.11 & 70.69 \\
\hline
60\% & 44.72 & 49.27 & 70.17 & 84.42 \\
\hline
70\% & 51.94 & 57.36 & 81.17 & 98.27 \\
\hline
80\% & 59.31 & 65.62 & 92.16 & 113.75 \\
\hline
90\% & 66.51 & 73.72 & 102.82 & 128.65 \\
\hline
100\% & 73.69 & 81.81 & 116.74 & 143.64 \\
\hline\hline
\end{tabular}
\end{scriptsize}
\end{table}
}
\section{Related Work}
\label{sec:RelatedWork}
Here we review the pertinent technologies and discuss their shortcomings in the specific context of p2p content delivery. Table~\ref{tab:p2p_cdn_comparison} summarizes the advantages of our protocols compared with other representative related works.
\smallskip
\noindent{\bf P2P information exchange schemes.} Many works~\cite{Piatek-et-al-2007-NSDI,Cohen-2003-P2P,Sherman-et-al-2012-TON,Kamvar-et-al-2003-WWW,Sirivianos-2007-Usenix,Shin-et-al-2017-TON,Levin-et-al-2008-SIGCOMM} focus on the basic challenge of incentivizing users in a p2p network to voluntarily exchange information. However, these schemes have not been notably successful in combating the free-riding problem or strictly ensuring fairness. Specifically, the schemes in BitTorrent~\cite{Cohen-2003-P2P}, BitTyrant~\cite{Piatek-et-al-2007-NSDI},
FairTorrent~\cite{Sherman-et-al-2012-TON} and PropShare~\cite{Levin-et-al-2008-SIGCOMM} support direct reciprocity (i.e., the willingness of participants to continue exchanging basically depends on their past direct interactions, e.g., the {\em Tit-for-Tat} mechanism in BitTorrent), which cannot accommodate the {\em asymmetric} interests in the p2p content delivery setting (i.e., participants have distinct types of resources, such as bandwidth and cryptocurrencies, to trade with each other). Indirect reciprocity (e.g., reputation-, currency- or credit-based) mechanisms, including Eigentrust~\cite{Kamvar-et-al-2003-WWW} and Dandelion~\cite{Sirivianos-2007-Usenix}, suffer from Sybil attacks: e.g., a malicious peer could trivially generate a Sybil peer, ``deliver to himself'' and then rip off the credits. We refer readers to~\cite{Shin-et-al-2017-TON} for more discussion of potential attacks on existing p2p information exchange schemes. T-Chain~\cite{Shin-et-al-2017-TON} still considers rational attackers and cannot strictly ensure delivery fairness, as an adversary can waste a lot of deliverers' bandwidth even though the received content is encrypted.
More importantly, all existing schemes, to our knowledge, are presented in a non-cooperative game-theoretic setting: they only consider independent attackers who free-ride spontaneously without coordinating their strategies, and the attackers are assumed rational, intending to maximize their own benefits. However, such rationality assumptions are too elusive to guarantee fairness for parties in ad-hoc systems accessible to arbitrarily malicious entities.
Our protocols, on the contrary, assure delivery fairness in the cryptographic sense. Overall, our protocols rigorously guarantee fairness for all participating parties, i.e., delivery fairness for deliverers, and exchange fairness for providers and consumers. Also, fairness in the p2p information exchange setting is typically measured by the discrepancy between the number of pieces a participant uploads and receives over a long period~\cite{Joe-et-al-2016-ICDCS}; within each concrete delivery session, there is no fairness guarantee. This further indicates that p2p information exchange schemes are not directly suitable for the specific p2p content delivery setting.
\begin{table*}[!t]
\centering
\begin{scriptsize}
\caption{Comparison of different related representative approaches}
\vspace{-2mm}
\centering
\label{tab:p2p_cdn_comparison}
\setlength{\tabcolsep}{1.0em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ c | c | c | c | c | c | c }
\hline
\hline
\multicolumn{2}{c|}{\backslashbox{Schemes}{Features}} &
\multicolumn{1}{c|}{\makecell[c]{What to exchange? \\ (Incentive type)}} &
\multicolumn{1}{c|}{\makecell[c]{Delivery Fairness\\ c.f., Sec.4}} &
\multicolumn{1}{c|}{\makecell[c]{Confidentiality\\ c.f., Sec.4}} &
\multicolumn{1}{c|}{\makecell[c]{Exchange Fairness\\c.f., Sec.4}} &
\multicolumn{1}{c}{\makecell[c]{On-chain Costs,\\$n$ is the \# of content chunks}} \\
\hline
\hline
\multicolumn{1}{c|}{\multirow{3}{*}{\makecell[c]{\\ \\P2P Information\\ Exchange}}}
& \multicolumn{1}{c|}{BitTorrent~\cite{Cohen-2003-P2P}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{Files $\leftrightarrow$ Files\\(Tit-for-Tat)}}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c|}{Not fully}
& \multicolumn{1}{c}{n/a} \\[4pt]\cline{2-7}
& \multicolumn{1}{c|}{Dandelion~\cite{Sirivianos-2007-Usenix}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{Files $\leftrightarrow$ Credits\\ (Reputation)}}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c|}{Not fully}
& \multicolumn{1}{c}{n/a} \\[4pt]\cline{2-7}
& \multicolumn{1}{c|}{T-Chain~\cite{Shin-et-al-2017-TON}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{Files $\leftrightarrow$ Files \\ (Tit-for-Tat)}}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c|}{$\surd$}
& \multicolumn{1}{c|}{Not fully}
& \multicolumn{1}{c}{n/a} \\[4pt]\cline{2-7}
\hline
\multirow{3}{*}{\makecell[c]{\\ Decentralized\\Content Delivery}}
& \multicolumn{1}{c|}{Gringotts~\cite{Goyal-et-al-2019-Usenix}}
& \multicolumn{1}{c|}{\makecell{Bandwidth $\leftrightarrow$ Coins\\(Monetary)}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{{\em multiple} chunks' deliveries\\ not paid in worst cases}}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c}{$O(n)$} \\[4pt]\cline{2-7}
& \multicolumn{1}{c|}{CacheCash~\cite{Aalmashaqbeh-2019-Columbia}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{Bandwidth $\leftrightarrow$ Coins\\(Monetary)}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{all chunks' deliveries\\ not paid in worst cases}}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c|}{$\times$}
& \multicolumn{1}{c}{$[o(1),O(n)]$} \\[4pt]\cline{2-7}
& \multicolumn{1}{c|}{\textbf{Our Protocols}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{Bandwidth/Files $\leftrightarrow$ Coins\\(Monetary)}}
& \multicolumn{1}{c|}{\makecell*[{}{c}]{{\em one} chunk's delivery\\ not paid in worst cases}}
& \multicolumn{1}{c|}{$\surd$}
& \multicolumn{1}{c|}{$\surd$}
& \multicolumn{1}{c}{$\Tilde{O}(1)$} \\
\hline
\hline
\end{tabular}
}
\end{scriptsize}
\vspace{-4mm}
\end{table*}
\smallskip
\noindent{\bf Fair exchange and fair MPC.} There is also intensive work on fair exchange protocols in cryptography. It is well known that a fair exchange protocol cannot be designed to provide complete exchange fairness without a trusted third party (TTP)~\cite{Pagnia-1999-TUD-BS}, a specific instance of the general impossibility of fair multi-party computation (MPC) without an honest majority~\cite{Cleve-1983-STOC}. Some traditional approaches hinge on a TTP~\cite{Micali-2003-PODC,Asokan-et-al-2000-JSAC, Belenkiy-et-al-2007-WPES,Kupccu--et-al-2010-RSAC} to solve this problem, though such a TTP is reckoned hard to find in practice.
To avoid the requirement of an available TTP, other studies~\cite{Blum-1983-STOC,Damgaard-1995-Cryptology,Pinkas-2003-EUROCRYPT,Garay-et-al-2006-TCC} rely on the ``gradual release'' approach, in which the parties take turns releasing their private values bit by bit,
such that even if one malicious party aborts,
the honest party can recover the desired output by investing computational resources (in the form of CPU time) comparable to those of the adversary.
Recently, the blockchain offers an attractive way to instantiate a non-private TTP, and a few results~\cite{Maxwell-2016-BitcoinCore,Dziembowski-et-al-2018-CCS,Eckey-et-al-2019-Arxiv,Kiayias-et-al-2016-Crypto, Choudhuri-et-al-2017-CCS,Bentov-and-Kumaresan-2014-Crypto} leverage this innovative decentralized infrastructure to facilitate fair exchange and fair MPC despite the absence of an honest majority. Unfortunately, all the above fair exchange and fair MPC protocols fail to guarantee {\em delivery fairness} in the specific p2p content delivery setting, as they cannot capture the fairness property of the special exchanged item (i.e., bandwidth), as discussed earlier in Section~\ref{sec:Introduction}.
\smallskip
\noindent{\bf State channels.} A state channel establishes a private p2p medium, managed by pre-set rules, allowing the involved parties to update state unanimously by exchanging authenticated state transitions off-chain~\cite{Gudgeon-et-al-2020-FC}. Though our protocols can be reckoned as an application of payment channel networks (PCNs) (or, more generally, state channels~\cite{Miller-et-al-2019-FC}), there are two key differences: (i) fairness in state channels means that an honest party (with a valid state transition proof) can always withdraw the agreed balance from the channel~\cite{Gudgeon-et-al-2020-FC}, while our protocols, dwelling on delivery fairness in the specific context of p2p content delivery, ensure that the bandwidth contribution can be quantified and verified to generate such a valid state transition proof; (ii) state channels essentially allow any two parties to interact, while our protocols target the interaction among any three parties, with a totally different payment paradigm~\cite{Aalmashaqbeh-2019-Columbia} for p2p content delivery.
\smallskip
\noindent{\bf Decentralized content delivery.} Some systems have utilized the idea of exchanging bandwidth for rewards to incentivize users' availability or honesty, such as Dandelion~\cite{Sirivianos-2007-Usenix} and Floodgate~\cite{Nair-et-al-2008-ICCCN}. However, various drawbacks impede their practical adoption, as discussed in~\cite{Aalmashaqbeh-2019-Columbia}. Here we elaborate on the comparison with two protocols, Gringotts~\cite{Goyal-et-al-2019-Usenix} and CacheCash~\cite{Aalmashaqbeh-2019-Columbia}, that target a similar p2p content delivery scenario.
{\em Application Scenario.} Typically, the p2p content delivery setting involves asymmetric exchange interests of participants: consumers expect to receive in time a specific content identified by a certain digest, while providers and deliverers share their content (validated via the digest) and bandwidth, respectively, in exchange for well-deserved payments/credits.
Unfortunately, Gringotts and CacheCash fail to capture this usual scenario and cannot support content providers selling content over the p2p network, due to the lack of content confidentiality and exchange fairness. In greater detail, both Gringotts and CacheCash delegate a copy of the raw content to the deliverers, which results in a straightforward violation of exchange fairness: a malicious consumer can pretend to be, or collude with, a deliverer to obtain the plaintext content without paying the provider.
{\em Delivery Fairness.} Gringotts typically requires a deliverer to receive a receipt (acknowledging the resource contribution) only after multiple chunks are delivered, which poses the risk of losing the bandwidth spent on delivering those chunks. In CacheCash, a set of deliverers is selected to distribute the chunks in parallel, which may cause the loss of bandwidth for all chunks in the worst case. Our protocols ensure that the unfairness of delivery is bounded by one chunk of size $\eta$.
{\em On-chain Costs.} Gringotts stores all chunk delivery records on the blockchain, so its on-chain costs are in $O(n)$. In CacheCash, the deliverers obtain {\em lottery tickets} (i.e., similar to ``receipts'') from the consumer after each ``valid'' chunk delivery, and the on-chain costs depend heavily on the winning probability $p$ of the tickets. E.g., $p = \frac{1}{n}$ means that on average a deliverer owns a winning ticket after $n$ chunks are delivered, while $p = 1$ indicates that the deliverer receives a winning ticket after each chunk delivery, leading to up to $O(n)$ on-chain costs for handling redeem transactions. For our protocols, the on-chain costs are bounded by $\Tilde{O}(1)$.
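The dependence of CacheCash-style redeem costs on the winning probability $p$ can be made concrete with a short sketch (our simplified reading of the scheme; the function name is hypothetical):

```python
def expected_redeem_txs(n_chunks, p_win):
    """Expected number of winning lottery tickets (hence on-chain
    redeem transactions) after delivering n_chunks, where each
    per-chunk ticket wins independently with probability p_win."""
    return n_chunks * p_win

print(expected_redeem_txs(512, 1 / 512))  # 1.0 -> O(1) redeems on average
print(expected_redeem_txs(512, 1))        # 512 -> O(n) redeems
```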
Additionally, Gringotts allows streaming of content chunks, which functions similarly to our $\mathsf{FairStream}$ protocol, while CacheCash requires downloading all the chunks, which applies to a scenario similar to that of our $\mathsf{FairDownload}$ protocol.
\vspace{1mm}
\section{Conclusion and Future Works}
\label{sec:Conclusion}
We present the first two fair p2p content delivery protocols atop blockchains to support {\em fair p2p downloading} and {\em fair p2p streaming}, respectively. They enjoy strong fairness guarantees to protect any of the content provider, the content consumer, and the content deliverer from being ripped off by other colluding parties. Detailed complexity analysis and extensive experiments of prototype implementations are performed and demonstrate that the proposed protocols are highly efficient.
Yet still, the area is largely unexplored and has a few immediate follow-ups, for example: (i) to realize maximized delivery performance, it is desirable to design a mechanism for adaptively choosing deliverers during each delivery task; (ii) it is also enticing to leverage off-chain payment channels to handle possible micropayments and further reduce the on-chain cost; (iii) to better preserve the digital rights of sold contents against pirating consumers, some digital rights management (DRM) schemes can be introduced.
\bibliographystyle{./IEEEtran}
{\footnotesize
\section{Introduction}
Due to its large mass,
$m_t = 173.2\pm0.9$~GeV \cite{Lancaster:2011wr},
the top-quark seems to play a special role,
and therefore top-quark physics
may give hints to physics beyond the standard model (SM).
Assuming that the Higgs mechanism is indeed the way particles obtain their masses,
the top-quark is the only quark with a Yukawa coupling of a ``natural'' size,
i.e. a coupling of order unity.
Thus it may in particular lead us to an understanding of the phenomenon ``flavour'',
which in the SM is encoded in the Yukawa couplings.
Given the fact that up to now we do not have any hint
at a specific model for ``new physics'' (NP),
and that the LHC \cite{Evans:2008zzb}
will produce a large amount of top-quarks \cite{Beneke:2000hk,Bernreuther:2008ju},
it is desirable to have an approach
which is as model independent as possible to analyse the data.
Hence we will not pick a specific model here;
rather we shall refer to an effective theory description of possible NP.
This approach is well known and we shall gather
the necessary relations in the next section.
However, by including up to dimension-6 operators in this approach
a large number of unknown couplings appear,
which are \textit{a priori} unconstrained.
From present data we may obtain limits on certain couplings
which have to be included in the analysis.
In particular, flavour physics rules out
a generic flavour structure for the dimension-6 operators,
and the flavour constraints can be readily incorporated
by the assumption of minimal flavour violation (MFV).
MFV has become a popular assumption
to avoid flavour constraints in many new physics models.
On the other hand, it is important to test
whether a hint at NP is compatible with MFV or not.
Hence it is desirable to have the possibility to test the MFV hypothesis
without referring to a specific new physics model,
in which case one can only make use of the effective theory approach.
There is already a large number of analyses
of anomalous top couplings in the literature,
some recent work can be found in
Refs.~\cite{Willenbrock:2012br,Zhang:2012cd,Zhang:2010dr,AguilarSaavedra:2004wm}.
However, these papers either deal with flavour-diagonal anomalous couplings
or do not take into account the constraints from MFV.
In the present paper we propose a way
to perform a test of the MFV hypothesis in
anomalous flavour-changing top couplings.
The basic idea is to make use of the MFV constraints
on the flavour structure of the $t \to qV$ couplings,
where $q$ is a quark and $V$ a gauge boson.
However, this is still not restrictive enough to allow for a simple analysis
and hence we will be forced to make additional assumptions
which we shall keep as simple as possible.
The paper is organised as follows.
In the next section we collect the relevant dimension-6 operators
for an effective description at the top-mass scale.
In Sec.~\ref{sec:MFV} we give the MFV relations
between the coupling constants of these operators
and project out the relevant operators for charged- and neutral-current top decays.
In Sec.~\ref{sec:DRTop} we compute the rates for
top decays into lighter quarks under the emission of gluons,
photons and weak bosons and derive relations between different decay rates
which may serve as a test of MFV, and we conclude in Sec.~\ref{sec:Conc}.
\section{Effective Approach with Two Higgs Doublets} \label{sec:EA-2HD}
We consider the SM as the dimension-4 part of an effective theory;
hence physics at a large scale $\Lambda$ beyond the SM manifests itself
through the presence of higher-dimensional operators,
suppressed by powers of the scale $\Lambda$.
This is generically true for any new physics model
with degrees of freedom at scales $\Lambda$.
All particles constituting the SM have been found,
however, the symmetry-breaking sector is not yet fixed,
although the recent discovery at the LHC is very likely a Higgs particle \cite{Aad:2012tfa,Chatrchyan:2012ufa}.
In the present paper we shall take this into account
by assuming a two-Higgs doublet model of type II (2HDM-II),
which allows for easy contact with supersymmetric models (SUSY),
with the SM with a single Higgs doublet,
and also with heavy Higgs models
using nonlinear representations~\cite{Gunion:1989we,Donoghue:1978cj,Hall:1981bc}.
Focussing on quarks only and using the notation
\begin{eqnarray}
Q_L &=& \left( \begin{pmatrix}
u_L \\
d_L
\end{pmatrix} ,
\begin{pmatrix}
c_L \\
s_L
\end{pmatrix} ,
\begin{pmatrix}
t_L \\
b_L
\end{pmatrix} \right) , \\
u_R &=& \left( u_R , c_R , t_R \right) , d_R = \left( d_R , s_R , b_R \right) ,
\end{eqnarray}
the Yukawa couplings of the 2HDM-II model can be written
in terms of two Higgs doublet fields $\Phi_1$ and $\Phi_2$
with hypercharge $Y = 1$ and otherwise identical quantum numbers, as
\begin{equation} \label{Yuk}
-\mathscr{\altL}^{\rm{2HDM}}_{\rm Yuk}=\bar{Q}_L Y_D \Phi_1 d_R + \bar{Q}_L Y_U \tilde{\Phi}_2 u_R + {\rm H.c.} ,
\end{equation}
where $\tilde\Phi$ is the charge-conjugated Higgs doublet
given by
\begin{equation}
\tilde\Phi = i \tau_2 \Phi^\ast \ .
\end{equation}
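After spontaneous symmetry breaking, writing the vacuum expectation values in the standard convention as $\langle \Phi_{1,2} \rangle = (0, v_{1,2}/\sqrt{2})^T$ with $\tan\beta = v_2/v_1$, the Yukawa couplings in Eq.~\eqref{Yuk} induce the quark mass matrices
\begin{equation}
M_D = \frac{v_1}{\sqrt{2}}\, Y_D \ , \qquad M_U = \frac{v_2}{\sqrt{2}}\, Y_U \ ,
\end{equation}
so that, as usual in the 2HDM-II, down- and up-type masses are governed by different doublets.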
``New physics'' beyond this SM-like dimension-4 piece is parametrized
in terms of higher-dimensional operators
\cite{Burges:1983zg,Leung:1984ni,Buchmuller:1985jz}
\begin{equation}
\mathscr{\altL} = \mathscr{\altL}_{4D} + \frac{1}{\Lambda} \mathscr{\altL}_{5D} + \frac{1}{\Lambda^2} \mathscr{\altL}_{6D} + \ldots ,
\end{equation}
where $\mathscr{\altL}_{4D}\equiv\mathscr{\altL}_{\rm SM}$,
and the new contributions
$\mathscr{\altL}_{5D}$, $\mathscr{\altL}_{6D}$, $\ldots$,
have to be symmetric under the SM gauge symmetry
$SU(3)_C \otimes SU(2)_L \otimes U(1)_Y$.
It turns out that for quarks there is no dimension-five operator compatible
with this symmetry
and thus the next-to-leading terms in the $\Lambda^{-1}$
expansion are of dimension six or higher.
The number of possible operators
is already quite large for a single Higgs boson~\cite{Buchmuller:1985jz,Hansmann:2003jm}.
We are going to consider this effective Lagrangian
at the top-quark mass scale,
which we identify with the electroweak scale $\mu \sim m_t$.
Furthermore, we are interested in processes leading to anomalous,
flavour-changing couplings of the top-quark to the gauge bosons,
and hence it is sufficient for us
to look at operators which are bilinear in the quark fields.
It is useful to classify the operators according to the helicities
of the quark fields, left-left (LL),
right-right (RR) and left-right (LR).
Using this notation,
the Lagrangian can be written as
\begin{eqnarray}
\mathscr{\altL} = \mathscr{\altL}_{\rm 2HDM}%
&+& \frac{1}{\Lambda^2} \sum\limits_i \will{(i)}{\rm LL} \Op{(i)}{\rm LL}
+ \frac{1}{\Lambda^2} \sum\limits_i \will{(i)}{\rm RR} \Op{(i)}{\rm RR}\notag\\
&+& \frac{1}{\Lambda^2} \sum\limits_i \will{(i)}{\rm LR} \Op{(i)}{\rm LR} + \ldots , \label{effth1}
\end{eqnarray}
where the $\will{(i)}{hh'}$ are generic coupling constants
which can in principle be calculated in specific models of new physics.
In what follows we shall study anomalous couplings
of flavour-changing currents involving top quarks to the gauge bosons.
To this end, it is sufficient to consider those dimension-6 operators
which are bilinear in the quark fields and which
-- after spontaneous symmetry breaking --
will induce vertices of the form $t \to q V$.
However, not all of them are independent when applying the equations of motion.
In particular,
the operators involving three covariant derivatives
can be reduced either to the ones with two derivatives or
to four-fermion operators \cite{Bach:2012fb}.
Furthermore, within the 2HDM-II model,
``flavour-changing neutral currents'' (FCNCs)
and large $CP$ violation are naturally suppressed by the imposed
discrete $\mathcal Z_2$ symmetry,
forbidding $\Phi_1\leftrightarrow \Phi_2$ transitions~\cite{Glashow:1976nt}.
Hence the set of independent operators for purely left-handed transitions
can be chosen as~\cite{Buchmuller:1985jz}:
\begin{equation}
\begin{split}
\Op{ij(3)}{\rm LL} &=%
\left( \Phi_1^\dagger i D_\mu \Phi_1 \right)%
\left( \bar{Q}_{Li} \gamma^\mu Q_{Lj} \right) , \\
\Op{ij(4)}{\rm LL} &=%
\left( \Phi_2^\dagger i D_\mu \Phi_2 \right)%
\left( \bar{Q}_{Li} \gamma^\mu Q_{Lj} \right) , \\
\Op{ij(5)}{\rm LL} &=%
\left( \Phi_1^\dagger \tau_I i D_\mu \Phi_1 \right)%
\left( \bar{Q}_{Li} \tau_I \gamma^\mu Q_{Lj} \right) , \\
\Op{ij(6)}{\rm LL} &=%
\left( \Phi_2^\dagger \tau_I i D_\mu \Phi_2 \right)%
\left( \bar{Q}_{Li} \tau_I \gamma^\mu Q_{Lj} \right) ,
\end{split}
\end{equation}
and for purely right-handed transitions we have
\begin{equation}
\begin{split}
\Op{ij(2)}{\rm RR} &=%
\left( \Phi_2^\dagger i D_\mu \Phi_2 \right)%
\left( \bar{u}_{Ri} \gamma^\mu u_{Rj} \right) , \\
\Op{\prime ij(2)}{\rm RR} &=%
\left( \Phi_1^\dagger i D_\mu \Phi_1 \right)%
\left( \bar{d}_{Ri} \gamma^\mu d_{Rj} \right) , \\
\Op{ij(3)}{\rm RR} &=%
\left( \tilde{\Phi}_2^\dagger i D_\mu \Phi_1 \right)%
\left( \bar{u}_{Ri} \gamma^\mu d_{Rj} \right) ,
\end{split}
\end{equation}
where $D_\mu$ denotes the covariant derivative of the
SU(3)$_C$~$\otimes$~SU(2)$_L$~$\otimes$~U(1)$_Y$ gauge symmetry.
For the transitions from left- to right-handed helicities we have
\begin{equation}
\begin{split}
\Op{ij(4)}{\rm LR} &=%
\left( \bar{Q}_{Li} \sigma^{\mu\nu} \tau_I u_{Rj} \right)%
\tilde{\Phi}_2 W^I_{\mu\nu} + {\rm H.c.} , \\
\Op{ij(5)}{\rm LR} &=%
\left( \bar{Q}_{Li} \sigma^{\mu\nu} u_{Rj} \right)%
\tilde{\Phi}_2 B_{\mu\nu} + {\rm H.c.} , \\
\Op{\prime ij(4)}{\rm LR} &=%
\left( \bar{Q}_{Li} \sigma^{\mu\nu} \tau_I d_{Rj} \right)%
\Phi_1 W^I_{\mu\nu} + \rm{H.c.} , \\
\Op{\prime ij(5)}{\rm LR} &=%
\left( \bar{Q}_{Li} \sigma^{\mu\nu} d_{Rj} \right)%
\Phi_1 B_{\mu\nu} + {\rm H.c.} ,
\end{split}
\end{equation}
where $W^ {I=1,2,3}_{\mu \nu}$ and
$B_{\mu \nu}$ denote the field strength of
SU(2)$_W$ and U(1)$_Y$ symmetries, respectively,
and $\sigma^{\mu\nu}=\frac{i}{2}[\gamma^\mu,\gamma^\nu]$.
In addition to these operators leading to anomalous weak couplings
we also can have anomalous coupling to gluons
which read
\begin{equation}
\begin{split}
\ten{P}{ij(5)}{\rm LR} &=%
\left( \bar{Q}_{Li} \sigma^{\mu\nu} T^a u_{Rj} \right)%
\tilde{\Phi}_2 G_{\mu\nu}^a + {\rm H.c.} , \\
\ten{P}{\prime ij(5)}{\rm LR} &=%
\left( \bar{Q}_{Li} \sigma^{\mu\nu} T^a d_{Rj} \right)%
\Phi_1 G_{\mu\nu}^a + {\rm H.c.} ,
\end{split}
\end{equation}
where $T^a$ are the generators of SU(3)$_C$
and $G_{\mu \nu}^a$ is the gluon field strength.
Note that all these operators carry flavour indices $i,j$
and hence the coupling constants in Eq.~\eqref{effth1}
are actually $3 \times 3$ matrices in flavour space.
Thus a generic parametrization is of little practical use
due to the large number of unknown parameters.
\section{Minimal Flavour Violation}\label{sec:MFV}
Data on flavour processes restricts the possible couplings
in Eq.~\eqref{effth1} severely.
Since currently there is no indication from flavour processes
of new effects at the TeV scale,
any NP at that scale must be ``minimally flavour violating''
\cite{Ali:1999we,Buras:2000dm},
i.e.\ the new physics couplings obey
the same flavour-suppression pattern as the standard model processes.
The most economical way to implement this idea
has been advocated in Ref.~\cite{D'Ambrosio:2002ex},
where the flavour symmetry
\begin{equation} \label{FlavourGroup}
\mathcal G_F = \text{SU(3)}_{Q_L} \times \text{SU(3)}_{U_R} \times \text{SU(3)}_{D_R}
\end{equation}
was introduced, which is
[up to -- for our purposes -- irrelevant U(1) factors]
the largest flavour symmetry
which is compatible with the SM.
Under this symmetry the quarks transform according to
\begin{eqnarray}
Q_L &\sim& (3,1,1) , \\
u_R &\sim& (1,3,1) , \ %
d_R \sim (1,1,3) ,
\end{eqnarray}
while the SM gauge and the Higgs fields are singlets
with respect to $\mathcal G_F$.
In the SM, this symmetry is broken only by the Yukawa couplings
$Y_U$ and $Y_D$ shown in Eq.~\eqref{Yuk}.
Following Ref.~\cite{D'Ambrosio:2002ex} these Yukawa couplings
can be introduced as spurion fields
with the transformation property
\begin{equation}
Y_U \sim (3,\bar{3},1) , \ %
Y_D \sim (3,1,\bar{3}) ,
\end{equation}
such that the Yukawa interaction \eqref{Yuk} is rendered invariant.
``Freezing'' the spurion fields to the actual values of
the Yukawa couplings yields the $\mathcal G_F$ symmetry breaking in the SM.
This spurion analysis can be extended to any new physics model
as well as to our effective field theory approach.
To this end we insert the minimum number of spurions
into the set of higher-dimensional operators,
which are required to be Lorentz and gauge invariant
as well as flavour invariant.
Thus
-- omitting some trivial structures which do not lead to flavour violation --
we get
\begin{widetext}
\begin{eqnarray}
\label{MNSpurionQQ}
\sum_{i,j} \will{ij}{\rm LL} \left( \bar{Q}_{Li} \cdots Q_{Lj} \right) &=&
\bar{Q}_{L} \left[\alpha_{\rm LL} {\mathds{1}} + \beta_{\rm LL} \, Y_U Y_U^\dagger
+ \eta_{\rm LL} \ Y_D Y_D^\dagger \right]
\cdots Q_{L} , \\
\label{MNSpurionuu}
\sum_{i,j} \will{ij}{\rm RR} \left( \bar{u}_{Ri} \cdots u_{Rj} \right) &=&
\alpha_{\rm RR} \left( \bar{u}_{R} \left[ Y_U^\dagger Y_D Y_D^\dagger Y_U \right]
\cdots u_{R} \right) , \\
\label{MNSpuriondd}
\sum_{i,j} \will{\prime ij}{\rm RR} \left( \bar{d}_{Ri} \cdots d_{Rj} \right) &=&
\beta_{\rm RR} \left( \bar{d}_{R} \left[ Y_D^\dagger Y_U Y_U^\dagger Y_D \right]
\cdots d_{R} \right) , \\
\label{MNSpuriondu}
\sum_{i,j} \will{\prime \prime ij}{\rm RR} \left( \bar{d}_{Ri} \cdots u_{Rj} \right) &=&
\eta_{\rm RR} \left( \bar{d}_{R} \left[ Y_D^\dagger Y_U \right]
\cdots u_{R} \right) , \\
\label{MNSpurionQu}
\sum_{i,j} \will{ij}{\rm LR} \left( \bar{Q}_{Li} \cdots u_{Rj} \right) &=&
\bar{Q}_{L} \left[ \lambda_U Y_U + \alpha_{\rm LR} Y_D Y_D^\dagger Y_U \right]
\cdots u_{R} , \\
\label{MNSpurionQd}
\sum_{i,j} \will{\prime ij}{\rm LR} \left( \bar{Q}_{Li} \cdots d_{Rj} \right) &=&
\bar{Q}_{L} \left[ \lambda_D Y_D + \beta_{\rm LR} Y_U Y_U^\dagger Y_D \right]
\cdots d_{R} ,
\end{eqnarray}
where the ellipses denote the Dirac,
colour, and weak SU(2) matrices that appear in the operators.
The coefficients $\alpha_{\rm LL} \cdots \beta_{\rm LR}$
are expected to have a ``natural'' size.
The precise meaning of this statement depends on the way the NP effects enter the model.
A tree-level-induced NP effect
(e.g., a tree-level exchange of a new particle with mass $\Lambda$)
will induce coefficients $\alpha_{\rm LL} \cdots \beta_{\rm LR} \sim {\mathcal O}(1)$,
while loop-induced NP effects will suffer from the typical
loop-suppression factor $1/(16 \pi^2)$
and hence we would have $\alpha_{\rm LL} \cdots \beta_{\rm LR} \sim {\mathcal O}(10^{-2})$.
The physical quark fields are the mass eigenstates,
which are defined in such a way that the neutral component
of the terms proportional to $\lambda_U$ and $\lambda_D$
in \eqref{MNSpurionQu} and \eqref{MNSpurionQd} is diagonal,
since this contribution is exactly of the form of the mass terms in the SM Lagrangian.
This is achieved by picking a basis of the $Y_U$ and $Y_D$ where
\begin{equation}
Y_U = Y_U^{\rm diag} , \quad Y_D = V_{\rm CKM} Y_D^{\rm diag} ,
\end{equation}
where $V_{\rm CKM}$ is the
Cabibbo-Kobayashi-Maskawa (CKM) rotation from the weak to the mass eigenbasis:
$d_L^{\rm weak} = V_{\rm CKM} d_L^{\rm mass}$.
Resolving the terms in Eqs.~\eqref{MNSpurionQQ}--\eqref{MNSpurionQd}
into charged and neutral components,
one finds for the charged component of Eq.~\eqref{MNSpurionQQ}
(from here on, all quark fields are mass eigenstates)
\begin{equation}
\sum_{i,j} \will{ij}{\rm LL} \left( \bar{Q}_{Li} \cdots \tau_+ Q_{Lj} \right) = %
\bar{u}_L \cdots \left[ \alpha_{\rm LL} {\mathds 1}%
+ \beta_{\rm LL} \left(Y_U^{\rm diag} \right)^2%
+ \eta_{\rm LL} \left(Y_D^{\rm diag} \right)^2 \right]%
V_{\rm CKM} d_L ,
\end{equation}
with $\tau_\pm = \frac12(\tau_1\pm i\tau_2)$,
while for the neutral components we get
\begin{eqnarray}
&& \frac{1}{2} \sum_{i,j} \will{ij}{\rm LL} \left( \bar{Q}_{Li} \cdots (1+ \tau_3) Q_{Lj} \right) =%
\eta_{\rm LL} \bar{u}_L \cdots V_{\rm CKM} \left( Y_D^{\rm diag} \right)^2 V_{\rm CKM}^\dagger u_L , \\
&& \frac{1}{2} \sum_{i,j} \will{ij}{\rm LL} \left( \bar{Q}_{Li} \cdots (1- \tau_3) Q_{Lj} \right) =%
\beta_{\rm LL} \bar{d}_L \cdots V_{\rm CKM}^\dagger \left( Y_U^{\rm diag} \right)^2 V_{\rm CKM} d_L .
\end{eqnarray}
For the remaining helicity combinations we get
\begin{eqnarray}
\sum_{i,j} \will{ij}{\rm RR} \left( \bar{u}_{Ri} \cdots u_{Rj} \right) &=&
\alpha_{\rm RR} \left( \bar{u}_{R}
\left[ Y_U^{\rm diag} V_{\rm CKM} \left( Y_D^{\rm diag} \right)^2 V_{\rm CKM}^\dagger Y_U^{\rm diag} \right]
\cdots u_{R} \right) , \\
\sum_{i,j} \will{\prime ij}{\rm RR} \left( \bar{d}_{Ri} \cdots d_{Rj} \right) &=&
\beta_{\rm RR} \left( \bar{d}_{R}
\left[ Y_D^{\rm diag} V_{\rm CKM}^\dagger \left(Y_U^{\rm diag} \right)^2 V_{\rm CKM} Y_D^{\rm diag} \right]
\cdots d_{R} \right) , \\
\sum_{i,j} \will{\prime \prime ij}{\rm RR} \left( \bar{d}_{Ri} \cdots u_{Rj} \right) &=&
\eta_{\rm RR} \left( \bar{d}_{R} \left[ Y_D^{\rm diag} V_{\rm CKM}^\dagger Y_U^{\rm diag} \right]
\cdots u_{R} \right) .
\end{eqnarray}
Finally, Eqs.~\eqref{MNSpurionQu} and \eqref{MNSpurionQd}
have to be split into charged and neutral components.
Omitting flavour diagonal contributions, we get
\begin{eqnarray}
\left. \sum_{i,j} \will{ij}{\rm LR} \left( \bar{Q}_{Li} \cdots u_{Rj} \right) \right|_{\rm charged} &=&
\bar{d}_{L} \left[ \lambda_U V_{\rm CKM}^\dagger Y_U^{\rm diag}
+ \alpha_{\rm LR} \left(Y_D^{\rm diag} \right)^2 V_{\rm CKM}^\dagger Y_U^{\rm diag} \right]
\cdots u_{R} , \\
\left. \sum_{i,j} \will{ij}{\rm LR} \left( \bar{Q}_{Li} \cdots u_{Rj} \right) \right|_{\rm neutral} &=&
\alpha_{\rm LR} \bar{u}_{L} \left[ V_{\rm CKM} \left( Y_D^{\rm diag} \right)^2 V_{\rm CKM}^\dagger Y_U^{\rm diag} \right]
\cdots u_{R} , \\
\left. \sum_{i,j} \will{\prime ij}{\rm LR} \left( \bar{Q}_{Li} \cdots d_{Rj} \right) \right|_{\rm charged} &=&
\bar{u}_{L} \left[ \lambda_D V_{\rm CKM} Y_D^{\rm diag}
+ \beta_{\rm LR} \left(Y_U^{\rm diag} \right)^2 V_{\rm CKM} Y_D^{\rm diag} \right]
\cdots d_{R} , \\
\left. \sum_{i,j} \will{\prime ij}{\rm LR} \left( \bar{Q}_{Li} \cdots d_{Rj} \right) \right|_{\rm neutral} &=&
\beta_{\rm LR} \bar{d}_{L} \left[ V_{\rm CKM}^\dagger \left( Y_U^{\rm diag} \right)^2 V_{\rm CKM} Y_D^{\rm diag} \right]
\cdots d_{R} .
\end{eqnarray}
\end{widetext}
Thus the flavour structure of the operators can be fixed by the assumption of MFV,
and hence the number of independent couplings is reduced to
the number of operator structures listed in Sec.~\ref{sec:EA-2HD}.
Note that the entries in $Y_U^{\rm diag}$
and $Y_D^{\rm diag}$ are small except for the one entry in $Y_U^{\rm diag}$,
corresponding to the top mass.
For large $\tan \beta$, $Y_D^{\rm diag}$
may also contain a large entry related to the bottom mass.
It has been pointed out in Ref.~\cite{Feldmann:2008ja}
that this may spoil the expansion in powers of the spurion insertions.
We will not go into any details here
and restrict our analysis to the minimum number of spurion insertions.
In most of the analyses using effective theory approaches,
unknown couplings are treated ``one at a time'',
which means that one coupling is varied with all other couplings set to zero.
In this paper we shall propose a slightly different scheme,
which automatically implements the relations among the couplings implied by MFV.
We will either set all the couplings to be unity,
$\alpha_{\rm LL} \cdots \beta_{\rm LR} \equiv 1$ (``tree-induced scenario''),
and vary the scale $\Lambda$,
or we will set $\alpha_{\rm LL} \cdots \beta_{\rm LR} \equiv 1/(16 \pi^2)$ (``loop-induced scenario''),
and vary the scale $\Lambda$.
Alternatively, we may fix the scale $\Lambda$ (e.g., at 1~TeV),
which means that we identify all the couplings $\alpha_{\rm LL} \cdots \beta_{\rm LR}$
and study the constraints on the remaining single parameter.
We note that this scheme depends on the choice of the basis for the dimension-6 operators; however,
the rationale behind this idea is that in a truly minimally flavour-violating scenario
all the remaining couplings should be natural, independently of the basis choice of the operators.
In turn this means that
-- up to the hierarchies implied by MFV --
no further hierarchical structures should emerge.
Without going into the details of a specific NP model,
there is no way to infer the detailed couplings;
if we want to stick to a model-independent approach
there is no alternative to such a crude scheme.
\section{Decay Rates of Top Quarks}\label{sec:DRTop}
In the remainder of the paper we shall focus on processes
with top quarks and their anomalous couplings to gauge bosons.
As mentioned in the previous section we use an effective theory approach
to study anomalous, flavour-changing top couplings at the weak scale $\mu \sim m_t$.
The MFV hypothesis allows us to predict relative sizes of
couplings for different flavours in the final state;
in turn, this may be used as a test of MFV in top decays,
once anomalous decays have been discovered.
\subsection{Charged Currents}\label{sec:CC}
The first class of decays are the charged currents from couplings
of the form $Wtq$, $q\in\{d,s,b\}$.
Taking into account the various helicity combinations,
the effective interaction for the charged-current couplings has the general form
\begin{eqnarray}
\mathscr{\altL}_{\rm eff} &=& \sum_{q = d,s,b}
\frac{g_2}{\sqrt2}
\biggl\{- \bar{q} \gamma^\mu \left( \coupL{q}{1} P_L + \coupR{q}{1} P_R \right) t W_\mu^- \notag\allowdisplaybreaks \\
&&\!\! - (i \partial)_\nu \left[ \bar{q} \frac{i\sigma^{\mu\nu}}{M_W}
\left( \coupL{q}{2} P_L + \coupR{q}{2} P_R \right) t \right] W_\mu^- \biggr\} , \label{eq:SMtqW}
\end{eqnarray}
where $P_{R/L}=\frac12 (1\pm\gamma_5)$
denote the chiral projectors.
Applying the MFV hypothesis we get for the couplings
\begin{equation}\label{eq:cWtb}
\begin{split}
\coupL{q}{1} =& V_{tq}^\ast \left[%
1 + \frac{\alpha^{(5)}_{\rm LL}}{2}\frac{v_1^2}{\Lambda^2}%
+ \frac{\alpha^{(6)}_{\rm LL}}{2}\frac{v_2^2}{\Lambda^2} \right]
= V_{tq}^\ast + \delta \coupL{q}{1} , \\
\coupR{q}{1} =& V_{tq}^\ast \eta^{(3)}_{\rm RR} \frac{m_q m_t}{\Lambda^2} , \\
\coupL{q}{2} =& 2 V_{tq}^\ast \lambda_D^\ast \frac{m_q v}{\Lambda^2} , \\
\coupR{q}{2} =& 2 V_{tq}^\ast \lambda_U \frac{m_t v}{\Lambda^2} ,
\end{split}
\end{equation}
where $v_1 \equiv v \cos\beta$ and
$v_2 \equiv v \sin\beta$ are the vacuum expectation values
of the Higgs fields $\Phi_1$ and $\Phi_2$, respectively.
Note that we have kept the SM contribution in $\coupL{q}{1}$
and defined $\delta\coupL{q}{1}$ to be the possible NP piece.
Furthermore, the parameter $v^2 =v_1^2+v_2^2$
is fixed by the $W$-boson mass,
$M_W^2 = \frac14g^2_2v^2$,
where $g_2$ is the weak SU(2) coupling constant,
or equivalently by the Fermi constant,
$G_F = 1/(\sqrt2 v^2)$.
As discussed above, the remaining unknown couplings in
Eqs.~\eqref{eq:cWtb} are generically of order unity,
and hence in MFV we have the order-of-magnitude estimate
\begin{equation} \label{MFV}
\begin{split}
\coupR{q}{1} &\sim \frac{2 m_q m_t}{v^2} \delta \coupL{q}{1} , \
\coupL{q}{2} \sim \frac{4 m_q}{v} \delta\coupL{q}{1} , \\
\coupR{q}{2} &\sim \frac{4 m_t}{v} \delta\coupL{q}{1}\ .
\end{split}
\end{equation}
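As a rough numerical illustration of the hierarchy in these relations (a sketch, with assumed inputs $m_t = 172.5$~GeV, $m_b = 4.18$~GeV, $v = 246$~GeV for the $q = b$ case):

```python
# Hypothetical numerical inputs (GeV); m_b stands in for m_q in t -> b W.
m_t, m_b, v = 172.5, 4.18, 246.0

# Order-of-magnitude MFV ratios relative to delta L_1^q:
r_R1 = 2 * m_b * m_t / v**2   # R_1^q / delta L_1^q  ~ 0.02
r_L2 = 4 * m_b / v            # L_2^q / delta L_1^q  ~ 0.07
r_R2 = 4 * m_t / v            # R_2^q / delta L_1^q  ~ 2.8

print(r_R1, r_L2, r_R2)
```

Only the chirality-flipped coupling proportional to the top mass, $\coupR{q}{2}$, is of the same order as $\delta\coupL{q}{1}$; the others are suppressed by light-quark masses.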
Although the renormalization-group flow
is not expected to change the orders of magnitude,
it is still important to know the renormalization effects for a quantitative analysis.
We expect the relations~\eqref{MFV}
to hold at the high scale $\Lambda \gg \mu$,
and so we have to scale down to the scale of the top mass,
$\mu \sim m_t$,
where the measurement takes place.
\begin{figure}
\subfigure[\label{fig:ADim-1} self energies]{%
\includegraphics[scale=.45]{FD-ADSelf.pdf}}
\subfigure[\label{fig:ADim-2} vertex corrections]{%
\includegraphics[scale=.45]{FD-VC-0.pdf}}
\caption{\label{fig:FD-ADim} Feynman diagrams for the
calculation of the anomalous dimension for
the charged electroweak case.}
\end{figure}
We focus on QCD effects only
and consider the diagrams shown in Figs.~\ref{fig:ADim-1} and \ref{fig:ADim-2}.
The left- and right-handed currents do not have an anomalous dimension, and hence
\begin{equation}
\coupL{q}{1} (m_t) = \coupL{q}{1} (\Lambda) , %
\quad \coupR{q}{1} (m_t) = \coupR{q}{1} (\Lambda)\ .
\end{equation}
The helicity-changing contributions have an anomalous dimension
which has to be equal for both helicity combinations.
To leading order one finds
\begin{equation}
\gamma^T(\alpha_s) = \frac{2 \alpha_s}{3 \pi} ,
\end{equation}
which yields for the running from
$\Lambda$ to $\mu \sim m_t$ for the two remaining couplings
\begin{equation}
\begin{split}
\coupL{q}{2}(m_t) &= \coupL{q}{2}(\Lambda) \left( \frac{\alpha_s(\Lambda)}{\alpha_s(m_t)} \right)^{\frac{4}{3 \beta_0}} , \\
\coupR{q}{2}(m_t) &= \coupR{q}{2}(\Lambda) \left( \frac{\alpha_s(\Lambda)}{\alpha_s(m_t)} \right)^{\frac{4}{3 \beta_0}} ,
\end{split}
\end{equation}
with
\begin{equation}
\beta_0 = \frac{11 n_c - 2 n_f}{3} ,
\end{equation}
where $n_c$ and $n_f$ are the numbers of
colours and quark flavours, respectively.
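The size of this running effect can be estimated at one loop; the following sketch assumes $\alpha_s(m_t) \approx 0.108$, a hypothetical scale $\Lambda = 1$~TeV, and $n_f = 6$:

```python
import math

m_t, Lam, as_mt = 172.5, 1000.0, 0.108   # assumed inputs (GeV, dimensionless)
n_c, n_f = 3, 6
beta0 = (11 * n_c - 2 * n_f) / 3.0       # = 7 for six active flavours

# one-loop evolution of alpha_s from m_t up to Lambda
as_Lam = as_mt / (1 + as_mt * beta0 / (2 * math.pi) * math.log(Lam / m_t))

# suppression factor multiplying L_2^q, R_2^q when running down to m_t
factor = (as_Lam / as_mt) ** (4 / (3 * beta0))
print(factor)   # mild suppression at the few-percent level
```

The running is a few-percent effect for $\Lambda \sim 1$~TeV, consistent with the statement that orders of magnitude are unchanged.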
From the effective operators we may calculate the amplitudes for top decays,
taking into account a possible NP effect.
To this end, we use Eq.~\eqref{eq:SMtqW}
and compute the rates for the decay of an unpolarized top quark
into a down-type quark $q$ and an on-shell $W$ boson.
The analysis of the $W$-decay products allows us to reconstruct its polarization,
which is either longitudinal, left-, or right-handed.
The corresponding rates read \cite{Kane:1991bg,AguilarSaavedra:2006fy}
\begin{widetext}
\begin{eqnarray}
\Gamma (t \to q W_0) &=&
\frac{g_2^2 |\vec{q}|}{32 \pi} \left\{
\frac{m_t^2}{M_W^2} \left[
|\coupL{q}{1} |^2
+ |\coupR{q}{1} |^2 \right] \left(1-x_W^2 - 2 x_q^2 - x_W^2 x_q^2 + x_q^4 \right)
- 4 x_q {\rm Re}\left\{\coupL{q}{1}\coupR{q\ast}{1}\right\}
\right. \notag \\
&& \qquad \qquad \, \,
+ \left[
|\coupL{q}{2} |^2
+ |\coupR{q}{2} |^2 \right] \left(1-x_W^2 + x_q^2\right)
- 4 x_q {\rm Re}\left\{\coupL{q}{2}\coupR{q\ast}{2}\right\}
\notag \\
&& \qquad
- 2 \frac{m_t}{M_W} {\rm Re}
\left\{\coupL{q}{1}\coupR{q\ast}{2}
+ \coupL{q}{2}\coupR{q\ast}{1}\right\} \left(1-x_W^2 - x_q^2\right)
\notag \\
&& \qquad \left.
+ 2 \frac{m_t}{M_W} x_q {\rm Re}\left\{\coupL{q}{1}\coupL{q\ast}{2} + \coupR{q}{2}\coupR{q\ast}{1}\right\}
\left(1+x_W^2 - x_q^2\right) \right\} \allowdisplaybreaks\\
\Gamma (t \to q W_{L/R}) &=& \frac{g_2^2 |\vec{q}|}{32 \pi} \left\{
\left[ |\coupL{q}{1} |^2
+ |\coupR{q}{1} |^2 \right]\left(1-x_W^2 + x_q^2\right)
- 4 x_q {\rm Re} \left\{\coupL{q}{1}\coupR{q\ast}{1} \right\}
\right. \notag \\
&& \qquad
+ \frac{m_t^2}{M_W^2} \left[
|\coupL{q}{2} |^2
+ |\coupR{q}{2} |^2 \right] \left(1-x_W^2 - 2 x_q^2 - x_W^2 x_q^2 + x_q^4\right)
- 4 x_q {\rm Re}\left\{\coupL{q}{2}\coupR{q\ast}{2}\right\}
\notag \\
&& \qquad
- 2 \frac{m_t}{M_W} {\rm Re}\left\{\coupL{q}{1}\coupR{q\ast}{2} + \coupL{q}{2}\coupR{q\ast}{1}\right\}
\left(1-x_W^2 - x_q^2\right)
\notag \\
&& \qquad \left.
+ 2 \frac{m_t}{M_W} x_q {\rm Re}\left\{\coupL{q}{1}\coupL{q\ast}{2} + \coupR{q}{2}\coupR{q\ast}{1}\right\}
\left(1+x_W^2 - x_q^2 \right) \right\}
\notag \allowdisplaybreaks\\
&\pm& \frac{g_2^2 m_t }{64 \pi} \frac{m_t^2}{M_W^2}
\left\{ - x_W^2 \left[ |\coupL{q}{1} |^2
- |\coupR{q}{1} |^2 \right]
+ \left[ |\coupL{q}{2} |^2
+ |\coupR{q}{2} |^2 \right] \left(1-x_q^2\right) \vphantom{\frac11}\right.
\notag \allowdisplaybreaks\\
&& \left. \vphantom{\frac11} \qquad \quad \qquad
+ 2 x_W {\rm Re}\left\{\coupL{q}{1}\coupR{q\ast}{2} - \coupL{q}{2}\coupR{q\ast}{1}\right\}
+ 2 x_W x_q {\rm Re}\left\{\coupL{q}{1}\coupL{q\ast}{2} - \coupR{q}{2}\coupR{q\ast}{1}\right\}\right\}
\notag \\
&& \qquad \quad \qquad \quad \qquad\times \left(1-2 x_W^2 - 2 x_q^2 + x_W^4 - 2 x_W^2 x_q^2 + x_q^4 \right)
\end{eqnarray}
\end{widetext}
where the upper sign holds for left-handed
and the lower sign for right-handed $W$ bosons.
Furthermore, $x_q\equiv m_q/m_t$, $x_W \equiv M_W/m_t$,
\begin{equation}
|\vec{q}| = \frac{m_t}{2}\sqrt{\lambda(1,x_q^2,x_W^2)} ,
\end{equation}
and the K\"all\'en function $\lambda(a,b,c) \equiv (a-b-c)^2-4bc$.
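For orientation, the momentum $|\vec{q}|$ in $t \to b W$ can be evaluated numerically (assumed masses $m_t = 172.5$~GeV, $m_b = 4.18$~GeV, $M_W = 80.38$~GeV):

```python
import math

def kallen(a, b, c):
    """Kallen function lambda(a, b, c) = (a - b - c)^2 - 4*b*c."""
    return (a - b - c)**2 - 4 * b * c

# assumed inputs (GeV) for t -> b W
m_t, m_b, M_W = 172.5, 4.18, 80.38
x_q, x_W = m_b / m_t, M_W / m_t

# magnitude of the b-quark (and W) three-momentum in the top rest frame
p = m_t / 2 * math.sqrt(kallen(1.0, x_q**2, x_W**2))
print(p)
```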
The total rate $\Gamma (t \to q W)$ is given by the sum
\begin{eqnarray}
\Gamma (t \to q W) &=& \Gamma (t \to q W_0) + \Gamma (t \to q W_L )\notag\\
&&+ \Gamma (t \to q W_R ) ,
\end{eqnarray}
and the corresponding observables are the helicity fractions
$F_0 = \Gamma (t \to q W_0) / \Gamma (t \to q W) $, $F_L = \Gamma (t \to q W_L) / \Gamma (t \to q W) $ and
$F_R = \Gamma (t \to q W_R) / \Gamma (t \to q W) $.
Using the condition $F_0 + F_R + F_L \equiv 1$
we get for the normalised differential decay rate \cite{Kane:1991bg,AguilarSaavedra:2006fy},
\begin{eqnarray}
\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta^\ast} &=&
\frac{3}{8} \left(1-\cos\theta^\ast\right)^2 F_L
+ \frac{3}{8} \left(1+\cos\theta^\ast\right)^2 F_R \notag \\
&&+ \frac{3}{4} \sin^2\theta^\ast F_0 ,
\end{eqnarray}
with the helicity angle $\theta^\ast$,
defined as the angle between the charged lepton three-momentum
in the $W$-boson rest frame
and the $W$-boson momentum in the top-quark rest frame.
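As a consistency check, this angular distribution integrates to unity whenever $F_0 + F_L + F_R = 1$; a quick numerical verification with illustrative SM-like fractions:

```python
def dGamma_norm(c, F0, FL, FR):
    """Normalised angular distribution (1/Gamma) dGamma/dcos(theta*)."""
    return (3/8) * (1 - c)**2 * FL + (3/8) * (1 + c)**2 * FR + (3/4) * (1 - c**2) * F0

# sample helicity fractions (SM-like, illustrative only)
F0, FL, FR = 0.70, 0.30, 0.00

# crude midpoint integration over cos(theta*) in [-1, 1]
n = 10000
integral = sum(dGamma_norm(-1 + (i + 0.5) * 2 / n, F0, FL, FR) * 2 / n
               for i in range(n))
print(integral)   # ~1 for any fractions summing to one
```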
The latest experimental measurements from
ATLAS \cite{ATLAS-CONF-2011-122}
and CMS \cite{CMS-PAS-TOP-11-020}
are shown in Table~\ref{tab:helfrac}.
\begin{table}[ht]
\caption{\label{tab:helfrac} Helicity fractions at 95\%~C.L.
from ATLAS and CMS for $t\to bW$ decay (see text for references).
The errors are statistical and systematic, respectively.}
\begin{ruledtabular}
\begin{tabular}{lll}
Fraction & ATLAS & CMS\\\hline
$F_0$ & $0.57\pm0.07\pm0.09$ & $0.567\pm0.074\pm0.047$\\
$F_L$ & $0.35\pm0.04\pm0.04$ & $0.393\pm0.045\pm0.029$\\
$F_R$ & $0.09\pm0.04\pm0.08$ & $0.040\pm0.035\pm0.044$
\end{tabular}
\end{ruledtabular}
\end{table}
For a quantitative analysis we adopt the scheme described above.
This means in particular, that we take the relations \eqref{MFV} as equalities,
and that we analyse the data in terms of the single quantity
\begin{equation}
\delta \coupL{q}{1} = \alpha \frac{v^2}{\Lambda^2} ,
\end{equation}
where $\alpha$ would be unity in a tree-induced scenario,
while $\alpha= 1/(16 \pi^2)$ in a loop-induced scenario.
In Fig.~\ref{Fig:tbW} we plot the helicity fractions $F_L$ and $F_0$
for $t \to b W$ as a function of $\delta \coupL{q}{1}$.
The standard model value corresponds to $\delta \coupL{q}{1} \equiv 0$
up to very small radiative corrections.
The colored bands indicate the data shown in Table~\ref{tab:helfrac}.
From this we infer that the SM value is well compatible with the current data.
However, given the current uncertainties,
there is still some room for a nonvanishing $\delta \coupL{q}{1}$.
It is interesting to note that for both helicity fractions
a region around $\delta \coupL{q}{1} \sim 0.4$ is still allowed,
while the region around the SM value constrains
$|\delta \coupL{q}{1}| \le 0.1$.
\begin{figure}[ht]
\includegraphics[width=.48\textwidth]{FL-tbW.pdf} \includegraphics[width=.48\textwidth]{F0-tbW.pdf}
\caption{%
Helicity fractions $F_L$ (top) and $F_0$ (bottom) for the decay $t \to b W$
as a function of a possible MFV new physics contribution with the coupling $\delta \coupL{q}{1}$.
The horizontal bands indicate the current data from the LHC experiments,
while the vertical bands indicate the currently allowed range for $\delta \coupL{q}{1} $.}
\label{Fig:tbW}
\end{figure}
The parameter $\delta \coupL{q}{1} $ still contains the dependence on $\Lambda$,
the scale of new physics.
Assuming a value of $\Lambda \sim 1$~TeV
we end up with $v^2 / \Lambda^2 \sim 0.1$.
This implies for NP scales around 1~TeV that the couplings in
Eq.~\eqref{eq:cWtb} can still be as large as unity,
implying that the current sensitivity cannot rule out MFV new physics effects at tree level.
In turn, in the loop-induced scenario there is still plenty of room for NP effects.
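The numbers quoted here follow from simple arithmetic (assuming $v = 246$~GeV and the hypothetical scale $\Lambda = 1$~TeV):

```python
# Scale factor entering delta L_1^q = alpha * v^2 / Lambda^2
v, Lam = 246.0, 1000.0           # assumed inputs (GeV)
ratio = v**2 / Lam**2
print(ratio)                     # ~0.06, i.e. of order 0.1

# a bound |delta L_1^q| <~ 0.1 then still allows alpha of order one
alpha_max = 0.1 / ratio
print(alpha_max)
```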
\subsection{Neutral Currents}\label{sec:NC}
The study of FCNCs
such as $t\to qV$, $q\in\{u,c\}$, $V\in\{Z,\gamma, g\}$
is important in the context of NP analyses,
since the contribution of the SM is highly
Glashow-Iliopoulos-Maiani (GIM)
suppressed \cite{Glashow:1970gm}.
A measurement of such a process at the current level of sensitivity
would clearly indicate new physics and would, in particular, imply non-MFV effects;
the SM branching ratios ($\mathscr B$) are in the region
$\mathscr B(t\to qZ)\sim \mathcal O(10^{-13})$,
$\mathscr B(t\to q\gamma)\sim \mathcal O(10^{-13})$ and
$\mathscr B(t\to qg)\sim \mathcal O(10^{-11})$ \cite{Grzadkowski:1990sm}.
\begin{table}[ht]
\caption{\label{tab:tqVOp}List of operators for $t \to qV$,
$q \in\{ u, c \}$, $V\in\{Z,\gamma,g\}$
transitions.}
\begin{ruledtabular}
\begin{tabular}{ll}
Decay & Operator\\\hline
$t\to qZ$ & $\Op{ij(3)}{\rm LL}$, $\Op{ij(4)}{\rm LL}$, $\Op{ij(5)}{\rm LL}$, $\Op{ij(6)}{\rm LL}$\\
& $\Op{ij(2)}{\rm RR}$, $\Op{ij(4)}{\rm LR}$, $\Op{ij(5)}{\rm LR}$\\\hline
$t\to q\gamma$ & $\Op{ij(4)}{\rm LR}$, $\Op{ij(5)}{\rm LR}$\\\hline
$t\to qg$ & $\ten{P}{ij(5)}{\rm LR}$
\end{tabular}
\end{ruledtabular}
\end{table}
The operators contributing to the FCNC interactions are listed in Table~\ref{tab:tqVOp}.
The experimental signatures of the various channels are quite different,
so we study the different processes separately in the following.
\subsubsection{$t\to q Z$}\label{sec:tqZ}
The effective Lagrangian for the neutral currents
involving the $Z$ boson
can be written as
\begin{eqnarray}
\mathscr{\altL}_{\rm eff} &=&
\frac{g_2}{\cos \theta_W} Z_\mu\notag\\
&&\times \biggl\{ \bar{q} \gamma^\mu \left( \coupL{\prime q}{1} P_L + \coupR{\prime q}{1} P_R \right)t \notag\\
&& - \frac{(i\partial_\nu)}{M_Z}\left[ \bar{q}i \sigma^{\mu\nu} \left( \coupL{\prime q}{2} P_L
+ \coupR{\prime q}{2} P_R \right) t \right] \biggr\} , \label{eq:tqZ}
\end{eqnarray}
with the couplings
\begin{widetext}
\begin{eqnarray}
\coupL{\prime q}{1} &=&
V_{qb} \ten{V}{\ast}{tb} \left[
\eta^{(3)}_{\rm LL} \frac{m_b^2}{\Lambda^2}
+ \eta^{(4)}_{\rm LL}\frac{m_b^2}{\Lambda^2} \tan^2 \beta \right.
\left. - \frac{ \eta^{(5)}_{\rm LL}}{2}\frac{m_b^2}{\Lambda^2}
- \frac{ \eta^{(6)}_{\rm LL}}{2}\frac{m_b^2}{\Lambda^2} \tan^2 \beta \right] , \allowdisplaybreaks\\
\coupR{\prime q}{1} &=&
V_{qb} \ten{V}{\ast}{tb}
\alpha^{(2)}_{\rm RR} \frac{m_b^2}{\Lambda^2} \frac{m_q m_t }{v^2} \frac{1}{\sin^2 \beta} , \allowdisplaybreaks \\
\coupL{\prime q}{2} &=&
2 V_{qb} \ten{V}{\ast}{tb}
\frac{m_q}{v} \frac{1}{\sin^2 \beta}
\biggl( \cos \theta_W \alpha^{(4)\ast}_{\rm LR} \frac{m_b^2}{\Lambda^2}
- \sin \theta_W \alpha^{(5)\ast}_{\rm LR} \frac{m_b^2}{\Lambda^2} \biggr) , \allowdisplaybreaks\\
\coupR{\prime q}{2} &=&
2 V_{qb} \ten{V}{\ast}{tb}
\frac{m_t}{v} \frac{1}{\sin^2 \beta}
\biggl( \cos \theta_W \alpha^{(4)}_{\rm LR}\frac{m_b^2}{\Lambda^2}
- \sin \theta_W \alpha^{(5)}_{\rm LR}\frac{m_b^2}{\Lambda^2} \biggr) ,
\end{eqnarray}
\end{widetext}
where $\theta_W$ is the Weinberg angle.
As an order-of-magnitude estimate,
it is worthwhile to note that MFV leads to a significant GIM-like suppression
of this coupling by a factor of $m_b^2 / \Lambda^2$,
which is much smaller than the ``natural'' value of the coupling $v^2 / \Lambda^2$.
Furthermore, MFV also predicts the relative sizes of the couplings
for the various helicity combinations; for $\tan \beta \sim 1$ or larger we find
\begin{equation}\label{eq:ttoqZest}
\begin{split}
\coupR{\prime q}{1} &\sim
\coupL{\prime q}{1} \frac{m_q m_t }{v^2} \frac{1}{\tan^2 \beta} , \\
\coupL{\prime q}{2} &\sim
\coupL{\prime q}{1} \frac{m_q}{v} \frac{1}{\tan^2 \beta} , \\
\coupR{\prime q}{2} &\sim
\coupL{\prime q}{1} \frac{m_t}{v} \frac{1}{\tan^2 \beta} .
\end{split}
\end{equation}
Finally we note that there is also a loop-induced contribution from the SM
which has been calculated in Ref.~\cite{AguilarSaavedra:2002ns}.
However, the relevant vertex cannot be expressed as a local operator,
and hence the expressions are quite cumbersome.
On the other hand, since the SM is by construction MFV,
the same suppression factors as for the new physics contribution will appear,
and since the SM rates are tiny compared to the current experimental limit
we take for the SM contribution the simple estimates
\begin{eqnarray} \label{SMsimple}
\coupL{\prime q}{1 {\rm SM}} &=&
V_{qb} \ten{V}{\ast}{tb} \frac{1}{16 \pi^2} \frac{m_b^2}{v^2} , \\
\coupR{\prime q}{1 {\rm SM}} &=&
V_{qb} \ten{V}{\ast}{tb} \frac{1}{16 \pi^2} \frac{m_b^2}{v^2} \frac{m_q m_t }{v^2} , \notag \\
\coupL{\prime q}{2 {\rm SM}} &=&
\coupR{\prime q}{2 {\rm SM}} =
V_{qb} \ten{V}{\ast}{tb} \frac{1}{16 \pi^2} \frac{m_b^2}{v^2} \frac{m_t }{v} , \notag
\end{eqnarray}
which is numerically very close to the full calculation,
i.e.\ it yields a branching fraction of the order
\begin{equation}
\mathscr B_{\rm SM} (t \to c Z)
\sim \left| \frac{1}{16 \pi^2} V_{cb} \frac{m_b^2}{v^2} \right|^2 \sim 6 \times 10^{-15} \ .
\end{equation}
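This estimate is straightforward to reproduce numerically (assumed inputs $|V_{cb}| \approx 0.041$, $m_b = 4.18$~GeV, $v = 246$~GeV):

```python
import math

# Naive SM estimate of B(t -> c Z): loop factor times CKM and mass suppression
V_cb, m_b, v = 0.041, 4.18, 246.0      # assumed inputs
amp = V_cb * m_b**2 / v**2 / (16 * math.pi**2)
BR = amp**2
print(BR)                              # a few times 10^-15
```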
In order to perform a numerical analysis
we proceed similarly to the case of charged currents.
We take the approximate relations \eqref{eq:ttoqZest}
as exact equations and express everything in terms of the new physics coupling $\coupL{\prime q}{1}$.
Including the standard model contribution on the basis
of the naive estimate \eqref{SMsimple}, we show in Fig.~\ref{fig:t2cZ}
the branching fraction for $t \to q Z$, $q\in\{u,c\}$.
\begin{figure}[ht]
\includegraphics[width=.48\textwidth]{BR-tqZMFV.pdf}
\caption{%
The branching fraction for $t \to Z q$
as a function of the coupling $\coupL{\prime q}{1}$,
for $\tan\beta = 10$.
The horizontal lines indicate the expectation within the SM
and the current limits imposed by
ATLAS \cite{CortesGonzalez:2012ym,ATLAS-CONF-2011-154}
and CMS \cite{CMS-PAS-TOP-11-028}.
}
\label{fig:t2cZ}
\end{figure}
We note that the natural size of $\coupL{\prime q}{1}$ in MFV
is given by $V_{qb} V_{tb}^\ast m_b^2 / \Lambda^2 $,
assuming tree-level FCNC effects.
For $\Lambda \sim 1$~TeV this is about $10^{-6}$,
which is one order of magnitude above the SM value.
Loop-induced new physics effects would show up
in MFV for $\Lambda \sim 1$~TeV only at a level of
$\coupL{\prime q}{1} \sim 10^{-9}$,
which is far below the SM value.
In turn, the current experimental limit
implies $\coupL{\prime q}{1} \le 0.01$,
which is far above the prediction of any MFV scenario.
\subsubsection{$t\to q\gamma$ and $t\to q g$}\label{sec:tqgamma}
The possible couplings for photonic transitions
are more restricted due to electromagnetic gauge invariance.
Extracting the relevant terms from the dimension-6 operators we find
for the effective Lagrangian
\begin{eqnarray}
\mathscr{\altL}_{\rm eff} = -e \mathcal A_\mu
\frac{(i \partial_\nu)}{m_t} \left[
\bar{q} i \sigma^{\mu\nu} \left(
L^{(2)}_q P_L
+ R^{(2)}_q P_R \right) t \right] , \label{eq:tqgamma}
\end{eqnarray}
where $e$ and $\mathcal A_\mu$ are the electron charge
and the electromagnetic field, respectively.
For the $\gamma tq$ vertex
we get only two independent anomalous coupling constants,
\begin{widetext}
\begin{equation}
\begin{split}
\coupL{(2)}{q} &=
\frac{4 V_{qb} \ten{V}{\ast}{tb}}{e} \left(
\sin \theta_W \will{32(4)\ast}{\rm LR}
+ \cos \theta_W \will{32(5)\ast}{\rm LR} \right)
\frac{m_b^2}{\Lambda^2}\frac{m_qm_t }{v_1^2} , \\
\coupR{(2)}{q} &=
\frac{4 V_{qb} \ten{V}{\ast}{tb}}{e} \left(
\sin \theta_W \will{23(4)}{\rm LR}
+ \cos \theta_W \will{23(5)}{\rm LR} \right)
\frac{m_b^2}{\Lambda^2 }\frac{m_t^2}{v_1^2} .
\end{split}
\end{equation}
\end{widetext}
Clearly, in an MFV scenario, the right-handed top quark
yields the dominant contribution,
\begin{equation}\label{eq:tqgammaest}
\coupL{(2)}{q} \sim \frac{m_q}{m_t} \coupR{(2)}{q} \ .
\end{equation}
Taking this as an equality,
we can express the rate and the branching fractions
in terms of the single coupling
$\coupR{(2)}{q}$.
\begin{figure}[ht]
\includegraphics[width=.48\textwidth]{BR-tqgamma.pdf}
\caption{%
The branching fraction for $t \to q \gamma$
as a function of the coupling $\coupR{(2)}{q}$.
The horizontal lines indicate the expectation within the SM
and the current limits imposed by
LEP \cite{Heister:2002xv,Abdallah:2003wf,Abbiendi:2001wk,Achard:2002vv,LEP-Exotica:WG-2001-01}
and HERA $(t\to u\gamma)$ \cite{Aaron:2009vv},
and also from the Tevatron (3.2\%) \cite{Abe:1997fz},
which is not shown.
}
\label{fig:t2cgamma}
\end{figure}
The standard model value for this process is very small.
This can already be inferred from a simple order-of-magnitude estimate
by collecting the loop-suppression, CKM, and mass factors.
One obtains
\begin{eqnarray}
\mathscr B_{\rm SM} (t \to q \gamma)
&\sim& \left| \frac{\alpha}{\pi} V_{tb} V_{qb}^\ast \frac{m_b^2}{M_W^2} \frac{m_t^2}{v^2} \right|^2 \notag\\
&\sim& \begin{cases}
4 \times 10^{-14} & {\rm for} \quad t \to c \gamma \\
4 \times 10^{-16} & {\rm for} \quad t \to u \gamma
\end{cases}
\end{eqnarray}
which is numerically close to the values obtained from the full calculation.
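The same arithmetic can be sketched numerically; the inputs ($\alpha_{\rm em} \approx 1/137$, $|V_{cb}| \approx 0.041$, $|V_{ub}| \approx 0.0037$, masses in GeV) are assumptions, and the result agrees with the quoted values at the order-of-magnitude level:

```python
import math

alpha_em = 1 / 137.0                     # assumed low-energy value
m_b, M_W, m_t, v = 4.18, 80.38, 172.5, 246.0

def br_est(V_qb):
    """Order-of-magnitude SM estimate of B(t -> q gamma) for a given |V_qb|."""
    amp = (alpha_em / math.pi) * V_qb * m_b**2 / M_W**2 * m_t**2 / v**2
    return amp**2

print(br_est(0.041))    # t -> c gamma, ~10^-14
print(br_est(0.0037))   # t -> u gamma, ~10^-16
```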
As in the case of $t \to Z q$, the MFV expectation is very small.
Assuming a tree-like scenario, $\coupR{(2)}{q}$
is of the order $V_{qb} \ten{V}{\ast}{tb} m_b^2 / \Lambda^2$.
For a new physics scale $\Lambda \sim 1$~TeV
we end up with a typical expectation of
$\coupR{(2)}{c} \sim 10^{-7}$ and
$\coupR{(2)}{u} \sim 10^{-8}$;
in a loop-induced scenario the couplings are even smaller,
by a further factor of $1/(16 \pi^2)$.
Thus, if nature is minimally flavour violating,
but otherwise the couplings have natural sizes,
the current limits are several orders of magnitude above the expectations.
The case $t \to g q$ is very similar to $t \to q \gamma$;
the only difference is the larger strong coupling
and larger QCD renormalization effects.
The one-loop QCD renormalization is given in Appendix~\ref{appsec:QCDRen};
although the effects can be sizable,
they are still far away from being relevant in the current experimental situation.
Due to QCD gauge invariance, the effective interaction
has a form similar to that of the photonic operator,
\begin{eqnarray}
\mathscr{\altL}_{\rm eff} &=& -g_s G_\mu^a
\frac{1}{m_t} \notag\\
&& \times (-i \partial_\nu)
\left[ \bar{q} T^a i \sigma^{\mu\nu}
\left( \tilde{L}^{(2)}_q P_L
+ \tilde{R}^{(2)}_q P_R \right) t \right] , \label{eq:tqg}
\end{eqnarray}
with
\begin{align}
\ten{\tilde L}{(2)}{q} &=
V_{qb} V^\ast _{tb}
\frac{4}{g_s} \ten{K}{32(5)\ast}{\rm LR} \frac{m_b^2}{\Lambda^2}\frac{m_qm_t}{v_1^2} , \\
\ten{\tilde R}{(2)}{q} &=
V_{qb} V^\ast _{tb}
\frac{4}{g_s} \ten{K}{23(5)}{\rm LR} \frac{m_b^2}{\Lambda^2}\frac{m_t^2}{v_1^2} ,
\end{align}
where $\ten{G}{a}{\mu}$ is the field-strength tensor
of the gluon fields and $T^a$ are the Gell-Mann matrices.
For $\ten{K}{32(5)}{\rm LR}, \ten{K}{23(5)}{\rm LR} \sim 1$
we get the same order-of-magnitude relation for the couplings
as for the photonic case
\begin{equation}\label{eq:tqgest}
\tilde{L}^{(2)}_q \sim \frac{m_q}{m_t} \tilde{R}^{(2)}_q ,
\end{equation}
and we shall again use this as an equality
to perform an MFV analysis in terms of a single variable.
Figure~\ref{fig:t2cg} shows the current status.
\begin{figure}[ht]
\includegraphics[width=.48\textwidth]{BR-tqg.pdf}
\caption{%
The branching fraction for $t \to q g$
as a function of the coupling $\tilde{R}^{(2)}_q$.
The horizontal lines indicate the expectation
within the SM and the current limits imposed by
ATLAS \cite{Collaboration:2012gd}.
}
\label{fig:t2cg}
\end{figure}
The estimate in the standard model
is the same as for $t \to q \gamma$
with the electromagnetic coupling replaced by the strong one,
\begin{eqnarray}
\mathscr B_{\rm SM} (t \to q g)
&\sim& \left| \frac{\alpha_s}{\pi} V_{tb} V_{qb}^\ast \frac{m_b^2}{M_W^2} \frac{m_t^2}{v^2} \right|^2 \notag\\
&\sim& \begin{cases}
6 \times 10^{-12} & {\rm for} \quad t \to c g \\
4 \times 10^{-14} & {\rm for} \quad t \to u g
\end{cases}
\end{eqnarray}
which again is close to the result of the full calculation.
Concerning the expectations of the MFV scenario,
one arrives at the same conclusions as for $t \to q \gamma$:
the MFV expectations are still several orders of magnitude away from the experimental sensitivity.
\section{Comparison to other constraints}
Aside from direct searches, indirect constraints
may also be obtained from electroweak precision data as well as from $B$ and $K$ decays.
The assumption of MFV also links the flavour physics of the top quark
with that of the bottom and the strange quark.
Overall, the constraints from $B$ and $K$ physics,
which are discussed at length in Refs.~\cite{D'Ambrosio:2002ex,Hurth:2008jc},
are much more restrictive than the ones obtained from the current data on the top quark.
In turn, any effect that could be seen in top decays
at the current level of precision would indicate a deviation from MFV.
A certain loophole in the MFV argument emerges due to the large top-quark mass,
implying an order unity Yukawa coupling for the top.
This may be closed by employing a nonlinear realization of MFV \cite{Feldmann:2008ja};
however, as discussed in Ref.~\cite{Kagan:2009bn}, some effects may become visible in the top
sector for large values of $\tan \beta$.
Another way of testing anomalous $t \to b W$ couplings is in loop-induced $B$ decays.
It has been shown in Ref.~\cite{Grzadkowski:2008mf} that in particular the decay $B \to X_s \gamma$
is quite sensitive to an anomalous $t \to b W$ coupling, resulting in a corresponding limit.
The combined analyses performed in Refs.~\cite{Drobnak:2011aa,Drobnak:2011wj},
including also other FCNC processes of $B$ mesons,
arrive at bounds for anomalous $t \to b W$ couplings
which are again stronger than what can be obtained from the current
direct measurements at the LHC.
The precision data from the electroweak sector also constrain
possible nonstandard top couplings.
Such an analysis, using the precision data on the oblique parameters
of the electroweak sector, has been performed in Ref.~\cite{Zhang:2012cd}.
The constraints obtained in this way are comparable to
the ones obtained from flavour decays and
thus are again much stronger than the bounds from the currently available direct measurements.
\section{Conclusions}\label{sec:Conc}
We have discussed anomalous,
flavour-changing top decays in a model-independent way
by employing an effective field theory approach.
The new element in our analysis is the implementation of minimal flavour violation
by setting up a simple scheme with few parameters,
which obeys the MFV hierarchical structure.
We have calculated the decay rates for the charged current
$t\to qW$, $q\in\{d,s,b\}$,
as well as for the FCNC couplings
$tqV$, $q\in\{u,c\}$, $V\in\{Z,\gamma,g\}$,
in terms of the effective couplings for different helicities
under the assumption that the top quark is produced
in a high-energy collision as a quasi-free particle.
Comparing such a scenario with the present data shows that there is still plenty
of room for NP effects in the anomalous, flavour-changing top couplings,
in particular for the flavour-changing neutral current decays.
\section*{Acknowledgements}
One of us (S.F.) would like to thank M. Jung for
helpful discussions and comments.
This work was supported by
the German Ministry for Research and Education
(BMBF, Contract No. 05H12PSE).
\section{Introduction}
The phenomenon of $CP$ violation is one of the least tested aspects of the
Standard Model (SM) and represents one of the sectors where a large
sensitivity to possible New Physics (NP) effects can be expected. An
important step forward in understanding the nature of this phenomenon has
recently been achieved by the KTeV and NA48 collaborations, obtaining the
following measurements of direct $CP$ violation in ${K}^0(\Kob) \to 2 \pi$
decays:
\begin{equation}
\Re\left(\frac{\epsilon'}{\epsilon} \right) = \left\{ \begin{array}{ll} (28.0 \pm 4.1 )
\times 10^{-4} \; \; \; \; \; \; \; & \cite{KTeV}~, \\ (18.5 \pm 7.3)
\times 10^{-4} & \cite{sozzi}~. \end{array} \right.
\label{epspKTEV}
\end{equation} These results, together with the earlier finding by NA31 \cite{NA31},
clearly establish the existence of direct $CP$ violation, as generally
predicted by the SM. However, an intriguing aspect of this new measurement
is that the values in (\ref{epspKTEV}) tend to be larger than most SM
estimates \cite{Bosch,epsp2}. Unfortunately the
theoretical predictions of $\epsp/\epsilon$ are affected by large
uncertainties, mainly of non--perturbative origin, and it is possible that
the experimental values above are still compatible with the SM expectations
(see, in particular, Ref.~\cite{epsp2}). Nonetheless, it is clear that after
these new experimental results the chances of sizable NP contributions in
$\epsp/\epsilon$ have increased substantially.
Among other possible NP scenarios, low energy supersymmetry \cite{Susy}
represents one of the most interesting and consistent extensions of the
Standard Model. In generic supersymmetric models, the large number of
new particles carrying flavor quantum numbers would naturally lead to large
effects in $CP$--violating and flavor--changing neutral--current (FCNC)
amplitudes \cite{SusyFCNC,HKR}. Actually, in this context the problem is
not how to generate large $CP$--violating effects, but rather how to avoid
dangerous corrections to small quantities like $\epsilon_K$ or $\Delta
m_K$, which seem to be consistent with their SM expectations. However, as
discussed recently in \cite{Sanda,Masiero,noi}, in specific supersymmetric
scenarios it is possible to generate non--standard ${\cal O}(10^{-3})$
contributions to $\epsp/\epsilon$ without running into trouble with the
experimental constraints from other $CP$--violating and FCNC processes.
\par
From a phenomenological point of view, the supersymmetric sources of a
sizable enhancement of $\epsp/\epsilon$ that can avoid fine--tuning problems
in $|\Delta S|=2$ amplitudes are basically two \cite{noi}: a large
$\bar{s}dG$ vertex induced by the chromomagnetic operator \cite{Masiero}
and an enhanced $\bar{s} d Z$ vertex \cite{CI}. Since the problem of
non--perturbative uncertainties in the estimate of $\epsp/\epsilon$ is
typically worse in the case of supersymmetric contributions, it is very
useful to identify other observables which could clearly signal the
manifestation of either of these two mechanisms. As discussed in
\cite{noi,BS}, in the case of the enhanced $\bar{s} d Z$ vertex there is a
strong correlation between $\epsp/\epsilon$ and the theoretically--clean
$K\to\pi\nu\bar{\nu}$ widths. The scenario where $\epsp/\epsilon$ receives
sizable supersymmetric corrections via the $\bar{s} d Z$ vertex could
therefore be clearly excluded or confirmed by future precise experiments on
rare decays.
\par
More difficult to identify is the case where $\epsp/\epsilon$ receives
sizable contributions by the chromomagnetic operator. Indeed this
non--standard effect would be present mainly in non--leptonic
processes. However, since there is a strict correlation between the
chromomagnetic operator ($\sim \bar{s} \sigma^{\mu\nu} t^a d G^a_{\mu\nu}
$) and the magnetic penguin contributing to the $s\to d \gamma$ transition
($\sim \bar{s} \sigma^{\mu\nu} d F_{\mu\nu}$), interesting consequences of
this scenario could in principle be observed in processes with real photons
or $e^+e^-$ pairs in the final state. As shown in \cite{noi}, an example
of such processes is provided by the $K_L \to \pi^0 e^+e^-$ decay. In this
letter we analyze the consequences of this scenario in $K\to\pi\pi \gamma$
decays, focusing on the possible enhancements of direct--$CP$--violating
observables. As we will show, these can provide complementary information
to rare decays.
The paper is organized as follows: in Section 2 we recall the structure of
supersymmetric contributions to magnetic operators and their impact on
$\epsp/\epsilon$. In Section 3 we estimate the matrix element of the tensor
current, necessary to evaluate $CP$--violating effects in $K\to\pi\pi
\gamma$ decays. The general decomposition of $K\to\pi\pi \gamma$
amplitudes and the estimate of the supersymmetric contributions to
$\epsp_{+-\gamma}$ is given in Section 4, while in Section 5 we discuss the
charge asymmetry in $K^\pm\to\pi^\pm \pi^0 \gamma$ decays. Finally in
Section 6 we summarize our results.
\section{Gluino contributions to magnetic operators
and $\epsp/\epsilon$}
A useful framework to evaluate supersymmetric contributions to
$CP$--vio\-la\-ting and FCNC processes is provided by the mass--insertion
approximation \cite{HKR}. This consists in choosing a simple flavor basis
for the gauge interactions and, in that basis, performing a perturbative
expansion of the squark mass matrices around their
diagonal. Gluino--mediated amplitudes usually provide the dominant effect,
therefore the basis typically adopted is the one where the
gluino--quark--squark vertices are flavor--diagonal.
A detailed discussion of the leading terms generated by gluino exchange in
the framework of the mass--insertion approximation can be found in
\cite{GGMS,CFGMS}. Given the strong constraints from $|\Delta S|=2$
processes, it is found that only the dimension--5 magnetic operators
induced by ${\tilde d}_{L(R)}-{\tilde s}_{R(L)}$ mixing could lead to sizable
$CP$--violating effects in $|\Delta S|=1$ amplitudes avoiding fine--tuning
problems. These operators can be written as \cite{GGMS}
\begin{eqnarray}
&&{\cal H}^{(5)}_{eff}~=~\frac{(\delta_{RL}^D)_{21}}{m_{\tilde g}} \left[ {\widetilde
C}_7(x_{gq}) {\bar s}_R \sigma^{\mu\nu}d_L {\hat F}_{\mu\nu} + {\widetilde
C}_8(x_{gq}){\bar s}_R \sigma^{\mu\nu}{\hat G}_{\mu\nu} d_L \right] \nonumber\\
&&\qquad + \frac{(\delta_{LR}^D)_{21}}{m_{\tilde g}} \left[ {\widetilde
C}_7(x_{gq}) {\bar s}_L \sigma^{\mu\nu}d_R {\hat F}_{\mu\nu} + {\widetilde
C}_8(x_{gq}) {\bar s}_L \sigma^{\mu\nu} {\hat G}_{\mu\nu} d_R \right]+{\rm
h.c.}~,\qquad
\label{Heff}
\end{eqnarray}
where ${\hat G}_{\mu\nu}=gt^a G^a_{\mu\nu}$, ${\hat F}_{\mu\nu}=e F_{\mu\nu}$,
\begin{equation}
(\delta_{AB}^D)_{ij} = (\delta_{BA}^D)_{ji}^* = (M^2_D)_{{\tilde q}^i_A {\tilde q}^j_B}/m^2_{{\tilde d}}~,
\end{equation}
$m_{{\tilde d}}$ is the average down--squark mass,
$m_{{\tilde g}}$ is the gluino mass and $x_{gq}=m_{{\tilde g}}^2/m^2_{{\tilde d}}$.
Neglecting QCD corrections, the Wilson coefficients
${\widetilde C}_{7,8}(x_{gq})$ are given by \cite{noi,GGMS}
\begin{eqnarray}
&{\widetilde C}_7 (x) = -\displaystyle\frac{\alpha_s}{24\pi} F_0(x)~, \qquad\qquad
&{\widetilde C}_7 (1) = -\displaystyle\frac{1}{108} \frac{\alpha_s}{\pi}~, \\
&{\widetilde C}_8 (x) = \displaystyle\frac{\alpha_s}{8\pi} G_0(x)~, \qquad\quad \qquad
&{\widetilde C}_8 (1) = -\displaystyle\frac{5}{144} \frac{\alpha_s}{\pi}~,
\end{eqnarray}
with
\begin{eqnarray}
G_0(x) &=& \frac{x(22-20x-2x^2+16x\ln(x)
-x^2\ln(x)+9\ln(x))}{3(1-x)^4}~, \\
F_0(x) &=& \frac{4x(1+4x-5x^2+4x\ln(x)
+2x^2\ln(x))}{3 (1-x)^4}~.
\end{eqnarray}
Due to the smallness of the electric charge, the contribution generated by
${\cal H}^{(5)}_{eff}$ to $\Re(\epsp/\epsilon)$ is dominated by the terms
proportional to ${\widetilde C}_8$. This can be written as \cite{noi}
\begin{equation}
\Re \left(\frac{\epsp}{\epsilon}\right)^{\mbox{\tiny SUSY}}_G = P_G \Im \Lambda^-_g ~,
\label{epspG} \end{equation} where \begin{equation} \Lambda^-_g = \left[ (\delta_{LR}^D)_{21} -
(\delta_{LR}^D)_{12}^* \right] G_0(x_{gq})
\end{equation}
and\footnote{~Following
\cite{DAI}, here we adopt a normalization of $K\to (2 \pi)_I$ amplitudes
such that $\Re(A_0)^{\rm exp}=2.72\times 10^{-7}$~GeV and we employ the
normalization $F_\pi = 92.4$~MeV. Note that both these conventions differ from
those adopted in \protect\cite{noi}. Moreover $\omega^{-1} =
(\Re(A_0)/\Re(A_2))_{exp} = 22.2 \pm 0.1$ is the $\Delta I = 1/2$ rule
enhancement factor.}
\begin{eqnarray}
P_G~ & = & ~\frac{11}{64}
\frac{\omega}{|\epsilon|\Re(A_0)} \frac{ m_\pi^2 m_K^2}{F_\pi (m_s+m_d)}
\frac{\alpha_s(m_{\tilde g})}{\pi} \frac{1}{m_{\tilde g}} \eta B_G \nonumber \\ & \simeq &
2.4\times 10^{2} B_G \left(\frac{137 {\rm~MeV}}{m_s + m_d}\right)
\left(\frac{ 500~\mbox{GeV}}{m_{\tilde g}}\right)
\left(\frac{\alpha_s(m_{\tilde g})}{\alpha_s(500~{\rm
GeV})}\right)^{\frac{23}{21}}~. \qquad \label{PG}
\end{eqnarray}
The expression (\ref{epspG}) has been obtained neglecting the mixing
induced by QCD corrections between ${\widetilde C}_8$ and the Wilson
coefficients of the SM $|\Delta S|=1$ effective Hamiltonian. This is a good approximation if
${\widetilde C}_8$ is sufficiently large: in this case the
renormalization--group evolution of ${\widetilde C}_8$ is almost diagonal
and is taken into account by the factor \cite{noi}
\begin{equation}
\eta =
\left(\frac{\alpha_s(m_{\tilde g})}{\alpha_s(m_t)}\right)^\frac{2}{21}
\left(\frac{\alpha_s(m_t)}{\alpha_s(m_b)}\right)^\frac{2}{23}
\left(\frac{\alpha_s(m_b)}{\alpha_s(m_c)}\right)^\frac{2}{25} \simeq 0.89
\left(\frac{\alpha_s(m_{\tilde g})}{\alpha_s(500~\rm{GeV})} \right)^\frac{2}{21}~.
\end{equation}
In (\ref{PG}) we have not explicitly shown the scale dependence of
quark masses and $B_G$, which are evaluated at $\mu = m_c$. The parameter
$B_G$, expected to be ${\cal O}(1)$ for a renormalization scale $\mu \sim
1$~GeV, is defined by
\begin{equation}
\langle (\pi\pi)_{I=0} | {\bar s}_R
\sigma^{\mu\nu}{\hat G}_{\mu\nu} d_L |K^0 \rangle (\mu) =
\frac{11}{4\sqrt{2}} \frac{m_\pi^2}{F_\pi}\frac{m_K^2}{m_s(\mu)+m_d(\mu)}
B_G(\mu)~.
\end{equation}
\section{Matrix elements of the tensor current}
Contrary to the case of $\epsp/\epsilon$, the ${\widetilde C}_7$ terms of
${\cal H}^{(5)}_{eff}$ could play an important role in $CP$--violating
observables of $K\to\pi\pi\gamma$ decays. In order to evaluate their
impact, we need to estimate the matrix elements of the
$\bar{s}_{R(L)}\sigma^{\mu\nu} d_{L(R)}$ current between kaon and pion
states. Given the Lorentz structure and the transformation properties
under $CP$ and $SU(3)_L\times SU(3)_R$, the lowest--order chiral
realization of the tensor current can be written as
\begin{eqnarray}
\bar{s}_R \sigma_{\mu\nu} d_L &\longrightarrow &
-i \frac{a_T F_\pi^2}{2} \left[ \partial_\mu U^\dagger \partial_\nu U U^\dagger -
\partial_\nu U^\dagger \partial_\mu U U^\dagger\right]_{23}~, \label{tc1}\\
\bar{s}_L \sigma_{\mu\nu} d_R &\longrightarrow &
-i \frac{a_T F_\pi^2}{2} \left[ \partial_\mu U \partial_\nu U^\dagger U -
\partial_\nu U \partial_\mu U^\dagger U \right]_{23}~, \label{tc2}
\end{eqnarray}
where we have neglected terms proportional to the Levi-Civita tensor
$\epsilon_{\mu\nu\rho\sigma}$, which are not relevant to the present analysis.
Here $U$ is the usual chiral field (we follow the notation
of \cite{DAI}) and $a_T$ is an unknown coupling.
To obtain a first estimate of $a_T$ we proceed by differentiating
and using the e.o.m. on both sides of (\ref{tc1}-\ref{tc2}).
In this way on the l.h.s. we obtain some terms whose
chiral realization is well known, namely the
$\bar s_{L(R)} \gamma^\mu d_{L(R)}$ currents. Identifying the
corresponding terms on the r.h.s. we then obtain
\begin{equation}
a_T= \frac{m_s+m_d}{m_K^2}~.
\label{at1}
\end{equation}
Unfortunately, it is not possible to repeat this identification for
all the quark bilinears which appear on the l.h.s. This shows that
Eq. (\ref{at1}) is not to be trusted literally. The same conclusion can
also be reached by noting that the scale dependence of the tensor current
is not the same as that of the scalar bilinear. Eq. (\ref{at1}) would
therefore give the wrong scale dependence of the matrix elements of the
tensor current, and, strictly speaking, cannot be correct.
On the other hand, we find Eq. (\ref{at1}) instructive, in
the sense that it shows that the coefficient $a_T$ (which has dimensions of
the inverse of a mass) must be proportional to the inverse of the scale of
chiral symmetry breaking, with a numerical coefficient of ${\cal O}(1)$.
An additional indication on the value of $a_T$ can be obtained by
evaluating the $\langle K | \bar{s} \sigma^{\mu\nu} d | \pi \rangle$ matrix
element in the limit where the strange quark mass is very heavy ($m_s \gg
\Lambda_{QCD}$). The value of $a_T$ thus determined can be written as
\cite{Casalbuoni}
\begin{equation}
|a_T| \simeq \frac{1}{2 m_K}\left[ f_+(q^2)
+{\cal O}(f_-) \right]~,
\label{at2}
\end{equation}
where $f_\pm (q^2)$ are the form factors of the vector current.
Obviously, this result can be
trusted even less than Eq. (\ref{at1}). On the other hand it shows that if
we vary the strange quark mass, and approach its physical value from above,
we get a value of $a_T$ which is numerically close to that obtained with
chiral arguments. We believe that this serves as an independent check of
the order of magnitude, and gives us confidence that the real value of
$a_T$ cannot be too different from the estimates presented here. A further
independent estimate of $|a_T|$ very close to the one in (\ref{at2}) can be
obtained also in the framework of vector meson dominance, as in
\cite{RPS}. Given these results, for simplicity we shall assume in the
following
\begin{equation}
a_T = \frac{B_T}{2 m_K}~,
\label{at3}
\end{equation}
where $B_T$ is a dimensionless parameter expected to be of ${\cal O}(1)$. Note,
however, that Eq. (\ref{at3}) does not show the correct chiral behaviour,
which should rather be read from (\ref{at1}). Both the correct dependence
on the quark masses, and on the QCD renormalization scale are assumed to be
hidden inside $B_T$.
\section{$K \to \pi \pi \gamma$ amplitudes and $\epsilon'_{+-\gamma}$}
The most general form, dictated by gauge and Lorentz invariance, for the
transition amplitude $K(p_K) \to \pi_1(p_1)\pi_2(p_2)\gamma(\epsilon, q)$
is given by \begin{equation} A(K\to\pi\pi\gamma)= \epsilon_{\mu}^* \left[ E(z_i)
(qp_1p_2^\mu - qp_2p_1^\mu) + M(z_i)
\epsilon^{\mu\nu\rho\sigma}p_{1\nu}p_{2\rho}q_{\sigma}\right] /m_K^3,
\label{kppgamp1} \end{equation} where $E$ and $M$, known as electric and magnetic
amplitudes, are dimensionless functions of
\begin{equation}
z_i = {p_iq \over m_K^2} \quad(i=1,2)\qquad \mbox{and} \qquad z_3 = z_1+z_2
= {p_K q \over m_K^2}
\end{equation}
(only two of the $z_i$'s are independent). Following \cite{DAI} we
can decompose the electric amplitude as $E = E_{IB} + E_{DE}$, where
\begin{equation}
E_{IB}(z_i) = { e A(K\to \pi_1\pi_2) \over M_K z_3 }\left ( {Q_2 \over z_2}
- {Q_1 \over z_1} \right)
\end{equation}
is the well--known bremsstrahlung contribution ($eQ_i$ denotes the electric
charge of the pion $\pi_i$). Furthermore, we can expand the
direct--emission amplitudes $E_{DE}$ and $M$ as
\begin{eqnarray}
E_{DE}(z_i) &=& E_1 + O\left[(z_1-z_2)\right]~, \\
M(z_i) &=& M_1 + O\left[(z_1-z_2)\right]~,
\label{eq:multi}
\end{eqnarray}
where the higher order terms in $(z_1-z_2)$ can be safely neglected
due to the phase--space suppression.
The first $CP$ violating observable we shall consider is
\begin{equation}
\eta_{+-\gamma} = { A (K_L \to \pi^+\pi^-\gamma)_{E_{IB} +E_1} \over
A (K_S \to \pi^+\pi^-\gamma)_{E_{IB} +E_1} }~.
\label{eta+-g}
\end{equation}
Since the direct--emission contribution vanishes relative to bremsstrahlung
at small photon energies, $\eta_{+-\gamma}$ tends to the usual $K\to 2\pi$
parameter $\eta_{+-} = A(K_L \to \pi^+\pi^-)/A (K_S \to \pi^+\pi^-)$. On the other
hand, the difference $(\eta_{+-\gamma}-\eta_{+-})$, that vanishes for
$E_\gamma \to 0$, is an independent index of direct $CP$
violation. Following \cite{DAI} we can write
\begin{equation}
\epsilon'_{+-\gamma}~=~\eta_{+-\gamma}-\eta_{+-}~=~
i \frac { e^{i(\delta_n-\delta_0)} m_K z_+z_-}{e \sqrt{2} \Re A_0}
\left( \Im A_0 {\Re E_n \over \Re A_0 } - \Im E_n \right)~,
\label{eppmg}
\end{equation}
where on the r.h.s.\ we have neglected small contributions suppressed by
$\omega = \Re A_2/\Re A_0 =0.045$ and the following decomposition has been
employed
\begin{equation}
E_1(K^0) = \frac{1}{\sqrt{2}}~e^{i\delta_n} E_n~, \qquad\qquad
(p_1,p_2)\equiv (p_+,p_-)~.
\label{eq:e1en}
\end{equation}
Assuming that the dominant SUSY contribution to the $CP$--violating phase
of $E_n$ is generated by the magnetic photon operator, we find
\begin{equation}
\Im \left( E_n \right)^{\mbox{\tiny SUSY}} =
- \frac{em_K^2}{12 F_\pi}
\frac{\alpha_s(m_{\tilde g})}{\pi } \frac{ \eta^2
B_T }{ m_{\tilde g} } \left[ \frac{ F_0(x_{gq}) }{ G_0(x_{gq}) }
+ 8 (1- \eta^{-1}) \right]
\Im \Lambda^-_g~.
\label{eq:imen}
\end{equation}
Then using (\ref{epspG}) to express both $\Im \Lambda^-_g$ and $(\Im
A_0)^{\mbox{\tiny SUSY}}_G$ in terms of $\Re(\epsp/\epsilon)^{\mbox{\tiny SUSY}}_G$, we obtain
\begin{equation}
\left( \frac{\epsilon'_{+-\gamma}}{\epsilon}\right)^{\mbox{\tiny SUSY}} =
\frac { e^{i(\delta_n-\delta_0+\pi/4)} z_+z_-}{\omega} \left[
R_{FG} - \frac{m_K \Re E_n}{e \Re A_0} \right]
\Re \left(\frac{\epsp}{\epsilon}\right)^{\mbox{\tiny SUSY}}_G~,
\label{eppmg2}
\end{equation}
where
\begin{eqnarray}
R_{FG}~& = & \frac{16}{33\sqrt{2}}
\frac{m_K(m_s+m_d)}{m_\pi^2} \eta \frac{B_T}{B_G} \left[ \frac{F_0(x_{gq})
}{G_0(x_{gq})} +8 (1- \eta^{-1})\right]\qquad \label{RFG} \\ & \simeq & -
1.9 \frac{B_T}{B_G} \left( \frac{ m_s + m_d }{ 137 {\rm~MeV} } \right)
\qquad ({\rm for}\ m_{\tilde g}=500~\mbox{GeV}, x_{gq}=1)~. \nonumber
\end{eqnarray}
Unfortunately, at the moment there is no precise experimental information
about $\Re E_n$; however, naive chiral counting suggests $m_K \Re E_n /( e
\Re A_0) \ll 1$ \cite{DAI}. Neglecting this contribution in (\ref{eppmg2}),
assuming $|B_T/B_G|\leq 1$, $x_{gq}\leq 1.3$ \cite{RGEb} and $(m_s+m_d)\leq
158$~MeV, we finally obtain \begin{equation} \left| \frac{\epsilon'_{+-\gamma}}{\epsilon}
\right|^{\mbox{\tiny SUSY}} \leq~50~z_+z_-~ \Re
\left(\frac{\epsp}{\epsilon}\right)^{\mbox{\tiny SUSY}}_G\leq~0.15~ z_+z_-~,
\label{epgbound}
\end{equation}
where the last inequality has been obtained imposing
$\Re(\epsp/\epsilon)^{\mbox{\tiny SUSY}}_G \leq 3\times 10^{-3}$. Note that the sensitivity
of this result to the value of $m_{\tilde g}$ and $m_{\tilde d}$
is very small: they enter
only through the $F_0/G_0$ ratio and the factor $\eta$ in (\ref{RFG}).
Interestingly, the upper bound (\ref{epgbound}) is substantially larger
(by almost one order of magnitude) than the corresponding one
obtained within the Standard Model \cite{DAI}. A large value of
$\epsilon'_{+-\gamma}/\epsilon$ could therefore offer a clean signature of the
scenario where $\epsilon'/\epsilon$ is dominated by supersymmetric
magnetic--type contributions. Moreover, we notice that
$\epsilon'_{+-\gamma}/\epsilon$ is generated by the interference of two $\Delta
I = 1/2$ amplitudes (it is indeed enhanced by $\omega^{-1}$ with respect to
$\epsilon'/\epsilon$) and therefore, contrary to $\epsilon'/\epsilon$ or
$K_L\to\pi^0e^+e^-$, it is almost insensitive to possible new--physics
effects in the $\bar{s}dZ$ vertex.
Finally, we stress that the correlation between gluino--mediated
contributions to $\epsilon'/\epsilon$ and $\epsilon'_{+-\gamma}/\epsilon$ is
clearer than the corresponding one between $\epsilon'/\epsilon$ and
$B(K_L\to\pi^0e^+e^-)$ \cite{noi}. Indeed, due to the different number of
pions in the final state, the supersymmetric coupling ruling the effect in
$K_L\to\pi^0e^+e^-$ is not exactly the same as in $\epsilon'/\epsilon$ and
$\epsilon'_{+-\gamma}/\epsilon$ \cite{noi}.
\section{Charge asymmetry in $K^\pm \to \pi^\pm \pi^0\gamma$}
A very clean observable of direct $CP$ violation is provided by the asymmetry
between $K^+ \rightarrow \pi^+ \pi^0 \gamma$ and $K^- \rightarrow \pi^-
\pi^0 \gamma$ decay widths \cite{DAI,RPS,CH67,DMS,DP}. The decay rates of
$K^\pm \rightarrow \pi^\pm \pi^0 \gamma$ are conveniently expressed in
terms of $T_c^*$, the kinetic energy of the charged pion in the kaon rest
frame, and $W^2 = (q p_K)(q p_{\pm})/(m_{\pi^+}^2 m_K^2)$. Factorizing the
IB differential width, one can write \cite{GINO}
\begin{eqnarray}
\Frac{\partial^2 \Gamma}{\partial T_c^* \partial W^2} \, & = & \,
\Frac{\partial^2 \Gamma_{IB}}{\partial T_c^* \partial W^2} \,
\left\{ \, 1 \, + \, 2 \, \Frac{m_{\pi^+}^2}{m_K} \,
\Re \left( \Frac{E_{DE}}{e A} \right) \, W^2 \, \right. \nonumber \\
& & \qquad \qquad \left. + \, \Frac{m_{\pi^+}^4}{m_K^2} \left( \left|
\Frac{E_{DE}}{e A}
\right|^2 \, + \, \left| \Frac{M}{eA} \right|^2 \right) \, W^4
\, \right\}~, \label{eq:ddtw2}
\end{eqnarray}
where $A \equiv A(K^{\pm} \rightarrow \pi^{\pm} \pi^0)$. Since the linear
term in $W^2$ is sensitive to the interference between the IB amplitude and
the first electric dipole term $E_1$, it is convenient to introduce a
direct--$CP$--violating observable $\Omega$, defined as follows
\begin{equation}
\Frac{
{\partial^2 \Gamma^+}/{\partial T_c^* \partial W^2}
\, - \, {\partial^2 \Gamma^- }/{\partial T_c^* \partial W^2} }{
{\partial^2 \Gamma^+}/{\partial T_c^* \partial W^2}
\, + \, {\partial^2 \Gamma^- }/{\partial T_c^* \partial W^2} }
\; = \;
\Omega \, W^2~\,+ {\cal O}\left(\frac{m_\pi^4}{m_K^4}W^4\right) \, .
\label{eq:omw2}
\end{equation}
Setting $(p_1,p_2) \equiv (p_{\pm},p_0)$ and factorizing the strong phases
analogously to (\ref{eq:e1en}) we write \cite{DAI},
\begin{equation}
E_1 (K^{\pm}) =
e^{i \delta_1}~E_c~, \qquad\qquad E_{IB}(K^{\pm}) = - e^{i \delta_2} ~
\Frac{3 e \Re(A_2)}{2 m_K z_{\pm} z_3} ~.
\label{eq:e1ib}
\end{equation}
Assuming, as in the neutral channel, that the magnetic photon operator
gives the dominant SUSY contributions to the $CP$--violating phase of $E_c$,
we find \begin{equation} \Im (E_c)^{\mbox{\tiny SUSY}} = \Im (E_n)^{\mbox{\tiny SUSY}} ~, \end{equation} where $\Im
(E_n)^{\mbox{\tiny SUSY}}$ is given in (\ref{eq:imen}). Substituting this result in
(\ref{eq:ddtw2}) we finally obtain
\begin{eqnarray}
\Omega^{\mbox{\tiny SUSY}} & = & \Frac{64}{99} \, \Frac{|\epsilon|}{\omega^2}
\Frac{m_s + m_d}{m_K} \, \sin(\delta_1 - \delta_2) \, \eta \,
\Frac{B_T}{B_G} \nonumber \\
& & \, \times \, \left[ \Frac{F_0(x_{gq})}{G_0(x_{gq})}
\, + \, 8 ( 1 - \eta^{-1}) \, \right] \, \Re\left( \frac{\epsilon'}{\epsilon} \right)^{\mbox{\tiny SUSY}}_G~.
\label{eq:omiga}
\end{eqnarray}
Since the dominant $CP$--conserving $K^\pm\to\pi^\pm\pi^0\gamma$ amplitude
is a $\Delta I = 3/2$ transition, $\Omega$ is enhanced by a factor
$\omega^{-2}$ with respect to $\epsp$. This enhancement, however, is
partially compensated by the fact that the strong phase-difference
appearing in (\ref{eq:omiga}) is quite small, $(\delta_1 - \delta_2)\simeq
10^\circ$ \cite{GASME}.\footnote{~While $\delta_2(m_K)\simeq - 7^{\circ}$
\cite{GASME}, in principle the $\delta_1$ phase shift should be included with
its dependence on the integration variables. This is however beyond the
accuracy required by the present analysis.} Employing the same assumptions
adopted in Eq.~(\ref{epgbound}) and using $\sin(\delta_1 - \delta_2)\leq
0.2$ we find
\begin{equation}
|\Omega|^{\mbox{\tiny SUSY}}~\leq~0.077~\Re\left( \frac{\epsilon'}{\epsilon} \right)^{\mbox{\tiny SUSY}}_G
\leq 2.3 \times 10^{-4}~.
\label{omigasusy}
\end{equation}
Similarly to the case of $(\epsilon'_{+-\gamma}/\epsilon)^{\mbox{\tiny SUSY}}$, also the
result in (\ref{omigasusy}) is substantially larger than what is expected
within the Standard Model \cite{DAI}.\footnote{~An asymmetry at the level
of $10^{-4}$ between $K^+ \rightarrow \pi^+ \pi^0 \gamma$ and $K^-
\rightarrow \pi^- \pi^0 \gamma$ widths was claimed in \cite{DP} already
within the Standard Model. This result was, however, clearly overestimated, as
discussed in \cite{DAI,RPS}.}
Since the kinematic variable $W^2$ can reach values of ${\cal O}(1)$ \cite{DMS},
the result (\ref{omigasusy}) implies that in a specific region of the
Dalitz plot, the asymmetry between $K^+ \rightarrow \pi^+ \pi^0 \gamma$ and
$K^- \rightarrow \pi^- \pi^0 \gamma$ distributions can be of
${\cal O}(10^{-4})$. A much smaller value is obtained performing a wide
integration over the phase space. For instance, integrating over $W$ and
$T_c^*$ in the interval $55~\mbox{MeV} \leq T_c^* \leq 90 ~\mbox{MeV}$
\cite{PDG}, leads to
\begin{equation}
\delta \Gamma = \Frac{\Gamma(K^+ \rightarrow \pi^+ \pi^0 \gamma) \, - \,
\Gamma(K^- \rightarrow \pi^- \pi^0 \gamma)}{\Gamma(K^+ \rightarrow \pi^+
\pi^0 \gamma) \, + \, \Gamma(K^- \rightarrow \pi^- \pi^0 \gamma)} \leq
3\times 10^{-3}~ \Re\left( \frac{\epsilon'}{\epsilon} \right)^{\mbox{\tiny SUSY}}_G~.
\label{eq:dgove}
\end{equation}
Finally, we note that, as pointed out in \cite{DP}, $CPT$ invariance allows us
to connect, at the first order in $\alpha_{em}$, the charge asymmetry of
the total widths in $K^\pm \rightarrow \pi^\pm \pi^0 \gamma$ to the one in
$K^\pm \rightarrow \pi^\pm \pi^0$. The relation is given by
\begin{eqnarray}
\Frac{\Gamma(K^+ \rightarrow \pi^+ \pi^0) \, - \,
\Gamma(K^- \rightarrow \pi^- \pi^0)}{\Gamma(K^+ \rightarrow \pi^+ \pi^0)
\, + \, \Gamma(K^- \rightarrow \pi^- \pi^0)} \, & = & \,
- \, \Frac{B(K^+ \rightarrow \pi^+ \pi^0 \gamma)}{B(K^+ \rightarrow
\pi^+ \pi^0)} \, \delta \Gamma \nonumber \\
& \simeq & \, - 1.3 \times 10^{-3} \, \delta \Gamma~,
\label{eq:kppasy}
\end{eqnarray}
which, through (\ref{eq:dgove}), leads to an asymmetry of ${\cal O}(10^{-8})$ for
the non--radiative process.
\section{Conclusions}
The unexpectedly large values of $\Re(\epsilon'/\epsilon)$ recently put
forward by the KTeV and the NA48 collaborations need a better theoretical
understanding. The difference from most SM estimates could be explained
either with unknown (but standard) non--perturbative effects or with New
Physics. Since the theoretical improvements in the calculation of the
non--perturbative effects may require a long time, it is worth looking for
other observables that could confirm or exclude the New--Physics origin of
the observed direct $CP$ violation.
\par
In this letter we have pointed out a strict correlation between the SUSY
contributions to the chromomagnetic operator, affecting
$\epsilon'/\epsilon$, and the magnetic $s d \gamma$
operator contributing to $K\to\pi\pi\gamma$ amplitudes.
We have searched for direct--$CP$--violating observables in the latter
processes which may get enhanced by a large coefficient in front of the
magnetic--penguin operator.
\par
First we have considered $K_{L,S} \rightarrow \pi^+ \pi^- \gamma$ decays
and concluded that the ratio $\epsilon'_{+-\gamma}/\epsilon$ is presumably
enhanced over its SM value in the scenario where $\epsilon'/\epsilon$ is
dominated by gluino--mediated supersymmetric amplitudes. In particular for
large photon energies $|\epsilon'_{+-\gamma}/\epsilon|$ could reach values
of ${\cal O}(0.5\%)$. In the $K^{\pm} \rightarrow \pi^{\pm} \pi^0
\gamma$ modes we have studied the charge asymmetry of the decay
distributions. We have found that this clean direct--$CP$--violating
observable, too, could be enhanced by supersymmetric effects, reaching values
of ${\cal O}(10^{-4})$ in specific phase--space regions.
\par
In both cases the results found imply that a more detailed experimental
investigation of $CP$ violation in $K\to \pi\pi\gamma$ decays is well worth
the effort. Interestingly, this investigation could already be started with
existing experimental facilities like KTeV, NA48 and KLOE. Finally, we
stress that the major theoretical uncertainty in the present analysis comes
from the ratio of hadronic matrix elements $B_T/B_G$. We hope that this
quantity could be pinned down more precisely in the future with lattice--QCD
calculations.
\section*{Acknowledgements}
We thank F.J. Botella and A.J. Buras for interesting discussions.
J.P. is partially supported by Grants AEN--96/1718 of CICYT (Spain) and
PB97--1261 of DGESIC (Spain).
G.C. is partially supported by Schweizerische Nationalfonds.
\section{Introduction}
\input{subfiles/1-introduction}
\section{Background and Preliminaries}\label{sec:background}
\input{subfiles/2-background}
\subsection{Similarity Learning as Pairwise Bipartite Ranking}
\input{subfiles/2-1-formulation}
\subsection{Recursive ${\rm ROC\xspace}$ Curve Optimization - The TreeRank Algorithm}\label{subsec:treerank}
\input{subfiles/2-2-TreeRank}
\section{A Tree-Based Approach to Similarity Learning}\label{sec:main}
We now investigate how the {\sc TreeRank} method for ${\rm ROC\xspace}$ optimization, recalled in the preceding section, can be extended to the framework of similarity learning, and then establish learning rates in $\sup$ norm in this context.
\subsection{A Similarity-Learning Version of {\sc TreeRank}}
\input{subfiles/3-1-algorithm}\label{subsection_3-1}
\subsection{Generalization Ability - Rate Bound Analysis}
\input{subfiles/3-2-rate}
\section{Illustrative Numerical Experiments}\label{sec:exp}
\input{subfiles/4-experiments.tex}
\section{Conclusion}
\input{subfiles/conclusion}
\bibliographystyle{abbrv}
\subsubsection{Synthetic data experiments}\label{exp_sim_data}
To begin with, we study the ability of similarity ranking trees to retrieve the
optimal ROC curve for synthetic data, generated from a random tree of depth
$D_{gt}$ with a noise parameter $\delta$. Our experiments illustrate three
aspects of learning a similarity $s_D$ with TreeRank of depth $D$: the impact
of the class asymmetry $p_+ \ll 1-p_+$, as seen in the bounds of \cite{VCB18};
the trade-off between the number of training instances and model complexity,
see \cref{thm:rate}; and finally the impact of model bias.
Results are summarized in \cref{tab:simulated_data_experiments}.
Details about the synthetic data experiments and real data experiments can be found
in the appendix.
\captionsetup[table]{skip=10pt}
\begin{table}
\centering
\noindent\makebox[\textwidth]{
\scriptsize
\begin{tabular}[t]{cllcllcll}
\toprule
\multicolumn{3}{c}{Class asymmetry} &
\multicolumn{3}{c}{Model complexity} &
\multicolumn{3}{c}{Model bias} \\
\cmidrule(lr){1-3} \cmidrule(lr){4-6} \cmidrule(lr){7-9}
$p_+$ & $D_1(s_{D}, s^*)$ & $D_\infty(s_{D}, s^*)$ &
$D_{\text{gt}}$ & $D_1(s_{D}, s^*)$ & $D_\infty(s_{D}, s^*)$ &
$D$ & $D_1(s_{D}, s^*)$ & $D_\infty(s_{D}, s^*)$ \\
\cmidrule(lr){1-3} \cmidrule(lr){4-6} \cmidrule(lr){7-9}
$0.5$ & $ 0.07 (\pm 0.07)$ & $ 0.30 (\pm 0.07)$ &
$1$ & $0.00 (\pm 0.01)$ & $0.06 (\pm 0.01)$ &
$1$ & $0.21 (\pm 0.13)$ & $0.65 (\pm 0.13)$ \\
$10^{-1}$ & $ 0.08 (\pm 0.08)$ & $ 0.31 (\pm 0.08)$ &
$2$ & $0.03 (\pm 0.04)$ & $0.20 (\pm 0.04)$ &
$2$ & $0.11 (\pm 0.10)$ & $0.43 (\pm 0.10)$ \\
$10^{-3}$ & $ 0.42 (\pm 0.17)$ & $ 0.75 (\pm 0.17)$ &
$3$ & $0.07 (\pm 0.07)$ & $0.30 (\pm 0.07)$ &
$3$ & $0.07 (\pm 0.07)$ & $0.30 (\pm 0.07)$ \\
$2\cdot10^{-4}$ & $ 0.45 (\pm 0.08)$ & $ 0.81 (\pm 0.08)$ &
$4$ & $0.12 (\pm 0.09)$ & $0.43 (\pm 0.09)$ &
$8$ & $0.06 (\pm 0.06)$ & $0.28 (\pm 0.06)$ \\
\cmidrule(lr){1-3} \cmidrule(lr){4-6} \cmidrule(lr){7-9}
\multicolumn{3}{l}{Parameters: $D= D_{gt} = 3$.} &
\multicolumn{3}{c}{$D_{gt} = D$, $p=0.5$.} &
\multicolumn{3}{c}{$D_{gt} = 3$, $p=0.5$.} \\
\midrule
\multicolumn{9}{l}{
Shared parameters: $\X = \R^3$, $\delta = 0.01$, $n_{\text{test}}=100,000$,
$n_{\text{train}} = 150 \cdot (5/4)^{D_{gt}^2}$.
} \\
\bottomrule
\end{tabular}
}
\hfill
\caption{Synthetic data experiments.
Values in parentheses are 95\%-confidence intervals based on the
normal approximation, obtained over 400 runs.
}
\label{tab:simulated_data_experiments}
\end{table}
\subsection{Acknowledgments}
This work was supported by IDEMIA. We thank the LOD reviewers for their
constructive input.
\subsection{Illustrative figures}
\Cref{fig:anom_tree} represents a fully grown tree of depth $3$ with its associated scores.
\Cref{fig:SLFRK} represents a split produced by the LeafRank procedure.
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[width=0.7\linewidth]{figures/tree.png}}
\caption{A piecewise constant similarity function described by an oriented binary subtree $\mathcal{T}$. For any pair $(x,x')\in \mathcal{X}^2$, the similarity score $s_{\mathcal{T}}(x,x')$ can be computed very quickly in a top-down manner using the heap structure: starting from the initial value $2^J$ at the root node, at each internal node $\mathcal{C}_{j,k}$, the score remains unchanged if $(x,x')$ moves down to the left child, and one subtracts $2^{J-(j+1)}$ from it if $(x,x')$ moves down to the right child.}
\label{fig:anom_tree}
\end{center}
\end{figure}
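The top-down score bookkeeping described in the caption of \cref{fig:anom_tree} can be sketched as follows; the node interface and the toy split test are hypothetical standkeeping stand-ins, only the score updates ($2^J$ at the root, subtracting $2^{J-(j+1)}$ on right moves at depth $j$) follow the text:

```python
# Sketch of the heap-based score computation of Fig. (anom_tree).
# The Node class and the split tests are illustrative assumptions.

class Node:
    def __init__(self, test=None, left=None, right=None):
        self.test = test      # callable pair -> bool; True = go left
        self.left = left
        self.right = right

def tree_score(root, pair, J):
    """Score s_T(x, x') for a depth-J oriented binary tree."""
    score, node, depth = 2 ** J, root, 0
    while node is not None and node.test is not None:
        if node.test(pair):
            node = node.left                  # left child: score unchanged
        else:
            score -= 2 ** (J - (depth + 1))   # right child: subtract 2^{J-(j+1)}
            node = node.right
        depth += 1
    return score

# Depth-1 toy tree on pairs of scalars: go left when |x - x'| <= 0.5.
toy = Node(test=lambda p: abs(p[0] - p[1]) <= 0.5, left=Node(), right=Node())
```

With this toy tree, a "close" pair receives the maximal score $2^1 = 2$ and a "far" pair receives $1$.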
\begin{figure}[ht]
\vskip -1cm
\begin{center}
\centerline{\includegraphics[width=0.8\linewidth]{figures/treerank_node.png}}
\centerline{\includegraphics[width=0.3\linewidth]{figures/acceptance_region.pdf}}
\caption{Symmetric split produced by the {\sc Symmetric LeafRank} procedure.}
\label{fig:SLFRK}
\end{center}
\end{figure}
\subsection{Representation of proposal functions for $\X\times\X = \R\times\R$}
We visually illustrate the outcome of TreeRank for different
proposal regions, for a similarity function on the unit square $[0,1]\times [0,1]$.
To obtain a symmetric similarity function, a natural approach is to transform the data using any
function $f : \X\times\X \to \text{Im}(f)$ such that $f(x,x') = f(x',x)$ and then choose a collection of
regions $\mathcal{D} \subset \mathcal{P}(\text{Im}(f))$, to form $\mathcal{C}$ such that
\begin{align*}
\mathcal{C} = \left\{ x,x' \in \X \times \X \; \vert \; f(x,x') \in D \right\}_{D \in \mathcal{D}}.
\end{align*}
The $i$-th element of the vector $f(x,x')$ will be written $f^{(i)}(x,x')$.
\begin{figure}
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/experiments/exp_1_data.pdf}
\caption{Training pairs}
\label{fig:pairs}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/experiments/exp_1_square.pdf}
\caption{$C_{\text{sq}}$}
\label{fig:reg_minmax}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/experiments/exp_1_diag.pdf}
\caption{$C_{\text{diag}}$}
\label{fig:reg_diag}
\end{subfigure}%
\caption{Representation of TreeRank score function with different proposal regions.
The $x$-axis corresponds to $x_1$ while the $y$-axis corresponds to $x_1'$.}
\label{fig:regions_illustration}
\end{figure}
In that context, we present two approaches:
\begin{itemize}
\item Set $f(x,x') = \binom{x\vee x'}{x\wedge x'}$ where $x\vee x'$ and
$x\wedge x'$ respectively stand for the element-wise maximum and
minimum of $x$ and $x'$. We introduce the collection
$\mathcal{C}_{\text{sq}}$ of all regions:
\begin{align*}
\left \{ x,x' \in \X \times \X \Big /
\left (\sigma f^{(i)}(x,x') \ge \sigma A \right )\otimes
\left (\sigma f^{(i+D)}(x,x') \le \sigma A \right) \right \}
\end{align*}
where $i \in \{ 1, \dots, D \}$, $\sigma \in \{-1, +1\}$, $A \in \R$
and $\otimes$ is the standard XOR.
\item Set $f(x,x') = \binom{\lvert x-x' \rvert}{x+x'}$. We introduce the collection
$\mathcal{C}_{diag}$ of all regions:
\begin{align*}
\left \{ x,x' \in \X \times \X \Big / \sigma f^{(i)}(x,x') \ge \sigma A \right \}
\end{align*}
where $i \in \{ 1, \dots, D \}$, $\sigma \in \{-1, +1\}$, $A \in \R$.
\end{itemize}
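A minimal sketch of the two symmetric pair transforms underlying $\mathcal{C}_{\text{sq}}$ and $\mathcal{C}_{\text{diag}}$; any region defined on $f(x,x')$ is automatically symmetric in $(x,x')$:

```python
import numpy as np

def f_sq(x, xp):
    """Stack element-wise max and min: f = (x v x', x ^ x')."""
    return np.concatenate([np.maximum(x, xp), np.minimum(x, xp)])

def f_diag(x, xp):
    """Stack |x - x'| and x + x'."""
    return np.concatenate([np.abs(x - xp), x + xp])
```

Both transforms return the same vector when the arguments are swapped, which is the symmetry property required of $f$.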
\Cref{fig:regions_illustration} illustrates the outcome of the TreeRank
algorithm with either of these two approaches, in a simple
case where $\X = [0,1]$, $\mu(x) = 1$, $K=2$ and $\p\{ Y=2 |X=x\} = 0.6 \cdot
\I\{x\ge 0.5 \} + 0.2$.
More complicated decision regions can be chosen, such as any linear decision
function on the transformation $f(x,x')$ of the pair $x,x'$. As stated in \cref{subsection_3-1},
those could be learned for example by an asymmetrically weighted SVM.\\
\subsection{Details about the synthetic data experiments of \cref{exp_sim_data}}
Assume a fully grown tree $\mathcal{T}$ of depth $D_{\text{gt}}$, with terminal
cells $\mathcal{C}_l \subset \X \times \X$ for all $0 \le l \le
L:=2^{D_{\text{gt}}}-1$. The tree is constructed with splits on the
transformation of the input space $\X\times\X$ by the function $f$ introduced
in \cref{lemma:sym_transform}. The split is chosen by selecting the split
variable uniformly at random, and the split value using a uniform law over that
variable on the current cell. The distribution of the data is assumed to be
defined by $p_+$, $F_+ = \sum_{l=1}^L \delta_l^+ \cdot
\mathcal{U}(\mathcal{C}_l)$ and $F_- = \sum_{l=1}^L \delta_l^- \cdot
\mathcal{U}(\mathcal{C}_l)$ where $\mathcal{U}(\mathcal{C}_l)$ is the uniform
distribution over $\mathcal{C}_l$. Introduce $\sigma$ as the permutation that
orders the cells $C_l$ by decreasing $\delta_l^+ / \delta_l^-$, i.e.
$\delta_{\sigma(l)}^+ / \delta_{\sigma(l)}^- \ge \delta_{\sigma(l+1)}^+ /
\delta_{\sigma(l+1)}^- $ for all $0 \le l \le L-1$; then the optimal ROC curve
${\rm ROC\xspace}^*$ is the broken line that connects the points $(0,0)$ and $(\sum_{j=0}^l
\delta_{\sigma(j)}^-, \sum_{j=0}^l \delta_{\sigma(j)}^+)$ for all $0\le l \le
L$.
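The construction of ${\rm ROC\xspace}^*$ above can be sketched directly: sort the cells by decreasing likelihood ratio $\delta_l^+/\delta_l^-$ and accumulate the weights (the numerical values below are illustrative only):

```python
# Vertices of the optimal ROC curve of the piecewise-constant model:
# sort cells by decreasing delta+_l / delta-_l, then accumulate weights.

def optimal_roc(delta_plus, delta_minus):
    order = sorted(range(len(delta_plus)),
                   key=lambda l: delta_plus[l] / delta_minus[l],
                   reverse=True)
    pts, fpr, tpr = [(0.0, 0.0)], 0.0, 0.0
    for l in order:
        fpr += delta_minus[l]
        tpr += delta_plus[l]
        pts.append((fpr, tpr))
    return pts

pts = optimal_roc([0.7, 0.2, 0.1], [0.1, 0.3, 0.6])
# vertices climb steeply first, approximately (0,0), (0.1,0.7), (0.4,0.9), (1,1)
```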
Now we detail our choice for the specification of the parameters $\delta^+_l$
and $\delta^-_l$. Assume $\sigma$ to be the identity permutation. To study the
ability of our method to retrieve the optimal ROC curve for different levels of
statistical noise, introduce a noise parameter $0 < \delta < 1$ and fix
$\delta_l^+ = c_{\delta}^+ \cdot \delta^{l/L}$, and $\delta_l^- =
c_{\delta}^- \cdot \delta^{-l/L}$ for all $0 \le l \le L$, with $c_{\delta}^+$ and
$c_{\delta}^-$ normalization constants in $l$ such that both sets
$\{\delta_l^+\}_{0 \le l \le L}$ and $\{\delta_l^-\}_{0 \le l \le L}$ sum to
one.
When $\delta$ is close to $0$, ${\rm ROC\xspace}^*$ approaches the unit step,
whereas when $\delta$ is close to $1$, ${\rm ROC\xspace}^*$ approaches the ROC of random assignment.
The experiments presented here used $\delta = 0.01$, which yields an
${\rm AUC\xspace}^*$ of approximately $0.96$. By varying the parameter $\delta$, one can
study the outcome of our approach for different levels of statistical noise.
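A sketch of this noise parametrisation, together with the AUC of the piecewise-linear ${\rm ROC\xspace}^*$ it induces (trapezoid rule); the value of $L$ is an assumption here, and the exact ${\rm AUC\xspace}^*$ depends on the experimental setup:

```python
# delta_l^+ = c+ * delta^{l/L}, delta_l^- = c- * delta^{-l/L}, both
# normalised to sum to one; AUC of the induced optimal ROC curve.

def cell_weights(delta, L):
    wp = [delta ** (l / L) for l in range(L + 1)]
    wm = [delta ** (-l / L) for l in range(L + 1)]
    sp, sm = sum(wp), sum(wm)
    return [w / sp for w in wp], [w / sm for w in wm]

def auc_star(delta, L):
    dp, dm = cell_weights(delta, L)
    auc, tpr = 0.0, 0.0
    for p, m in zip(dp, dm):        # cells already sorted by dp/dm (delta < 1)
        auc += m * (tpr + p / 2.0)  # trapezoid under one ROC segment
        tpr += p
    return auc

# delta = 0.01 gives a strongly bent ROC* (AUC* well above 0.9), while
# delta close to 1 approaches the diagonal (AUC* close to 0.5).
```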
The first experiment shows that the learned model $s_D$ generalizes poorly when
positive instances are rare, as reflected in the bounds of \cite{VCB18}. The
second shows that when $D_n \sim \sqrt{\log n }$, the learned models remain
accurate, as shown by \cref{thm:rate}. The last experiment illustrates that
using a tree deeper than the ground truth does not hinder performance, thanks
to the global nature of the ranking problem.
\subsection{Real data experiments}\label{exp_real_data}
We compare the performance of our approach to the widely acclaimed metric
learning technique LMNN, see \cite{Weinberger2009}, as well as a similarity
derived from the cosine similarity of a low-dimensional neural network encoding
of the instances, optimized for classification with a softmax cross-entropy
loss. For that matter, we use the MNIST database with reduced dimensionality by
PCA. The neural network approach is inspired by state of the art techniques in
applications of similarity learning, such as in facial recognition. It has
shown outstanding performance, but is not directly derived from the ranking
problem that these systems usually tackle.
The MNIST database of handwritten digits has a training set of 60,000 images and a test set
of 10,000 images and is widely used to benchmark classification algorithms.
Each image represents a number between 0 and 9 with a monochrome image of $28\times 28$
pixels, which makes for $K=10$ classes and an initial dimensionality of $784$.
The standard principal components analysis (PCA) was set to keep $95\%$
of the explained variance, which reduces the dimensionality of the data to $d=153$.
This first step was necessary to limit the memory requirements of the LMNN algorithm.
We used the implementation of LMNN provided by the python package \emph{metric-learn},
and changed the regularization parameter to be $0.01$.
The neural network approach learned an encoding $e: \X = \R^d \to \R^{d_e}$ of size
$d_e=128$, used for classification at training time, with a simple
softmax-cross entropy behind a fully connected $d_e\times K$ layer.
The encoding was composed of three stacked fully connected layers followed
by ReLU activations of sizes $153\times 146$, $146\times 140$ and $140 \times 134$,
and finally a $134\times 128$ fully connected layer without an activation function.
These layer sizes are arbitrary and were simply chosen as a linear interpolation
between the input size $d$ and output size $d_e$. The similarity between two instances
is computed using a simple cosine similarity between their embeddings.
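The similarity used on top of the (hypothetical) learned encoder can be sketched in one line; this is the standard cosine similarity, not a method specific to this paper:

```python
import numpy as np

def cosine_similarity(e1, e2):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))
```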
Our approach was based on a ranking forest whose symmetric LeafRank is an
asymmetric classification tree of fixed depth $5$ over the transformed data;
see \cref{fig:SLFRK} for an example of this type of proposal region. The
ranking forest aggregates the results of 44 trees of depth 15
learned on only $10^5$ pairs each. Refer to \cite{CDV09} and \cite{CDV13} for
details on ranking forests. ROC curve plots are shown in \cref{fig:roc_models}.
For now, our method shows higher performance than the linear metric learning approach,
but performs worse than the neural network encoding approach. Further work will aim
to improve the performance of our approach, perhaps with a better LeafRank algorithm.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/experiments/exp_2_roc_2.pdf}
\caption{ROC curves for the real data experiments.}
\label{fig:roc_models}
\end{figure}
\section{The model}
\textit{Phenomenological model.}
We consider classical spins, $\mathbf{S}_i$, of unit length on a square lattice in the $xy$-plane with ferromagnetic NN $J_1$ and antiferromagnetic 3rd NN $J_3$ exchange interactions. Additionally, we include the DMI, whose constant $D$ is gradually increased from 0.
In this form, the DMI stabilizes skyrmion and helical states of Bloch type.
The $z$ axis is normal to the lattice ($xy$) plane. The energy reads
\begin{align}
E=
&-J_1 \sum_{\langle i,j\rangle}\mathbf{S}_i\cdot\mathbf{S}_j+J_3 \sum_{\langle\langle\langle i,j\rangle\rangle\rangle}\mathbf{S}_i\cdot\mathbf{S}_j -H \sum_iS_i^z\nonumber\\
&-D \, \sum_{i}(\mathbf{S}_i \times \mathbf{S}_{i+\hat{x}} \cdot \hat{x}
+ \mathbf{S}_i \times \mathbf{S}_{i+\hat{y}} \cdot \hat{y}).
\label{energy}
\end{align}
$\langle i,j \rangle$ and $\langle\langle\langle i,j\rangle\rangle\rangle$ denote pairs of NN and 3rd NN spins, respectively, and $J_1,J_3>0$. The third term describes the interaction with the magnetic field parallel to the $z$ axis.
In what follows, energy is measured in units of $J_1 = 1$ and distances -- in units of the lattice constant $a$.
The Hamiltonian (\ref{energy}) is used to compute the energy of different spin configurations which can then be compared to determine the optimal spin configuration for various sets of $J_1$, $J_3$, $D$ and $H$ (see the Supplemental Material for the calculation methods).
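A minimal vectorised sketch of evaluating Eq. (\ref{energy}) on a finite lattice with periodic boundaries; this is an illustration, not the calculation method of the paper. The 3rd NN are taken as the $(2,0)$-type sites of the square lattice:

```python
import numpy as np

def energy(S, J1, J3, D, H):
    """Energy of Eq. (energy) for spins S of shape (L, L, 3) with pbc."""
    e = 0.0
    for ax in (0, 1):                      # bonds along +x and +y
        Sn1 = np.roll(S, -1, axis=ax)      # nearest neighbours
        Sn3 = np.roll(S, -2, axis=ax)      # 3rd NN: two sites along the axes
        e -= J1 * np.sum(S * Sn1)
        e += J3 * np.sum(S * Sn3)
        cross = np.cross(S, Sn1)           # S_i x S_{i+x} (or S_{i+y})
        e -= D * np.sum(cross[..., ax])    # projected on x-hat (or y-hat)
    e -= H * np.sum(S[..., 2])
    return e
```

For the fully polarized state along $z$, the DMI term vanishes and the energy reduces to $(-2J_1 + 2J_3 - H)$ per spin, a quick sanity check.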
\begin{figure}
\includegraphics[width=0.99\columnwidth]{Fig2-intermediatespiral2}
\caption{
(color online)
\label{DSS} Real space configurations of DSS for $J_3 = 0.34$, $D=0.05$ and variable magnetic field: $H=0.02$ (a), $H=0.03$ (b), $H=0.04$ (c), $H=0.05$ (d). Color plots in the first column show the out-of-plane spin components, while the arrows show the in-plane ones. The second column shows horizontal linescans of the corresponding color plots. Red, blue and black lines represent $S_x$, $S_y$ and $S_z$, respectively.
DSS is a buffer 1D spiral modulation that gradually develops from the helicoid (a) and transforms into the cone (d).
}
\end{figure}
\textit{1D spiral states.}
First, we consider 1D chains of spins with periodic boundary conditions (pbc).
The number of spins in a chain is varied to address the possible change of an equilibrium spiral period $\lambda$ under an applied magnetic field.
We prepare a set of spiral states to address the period change with an accuracy of $0.1$.
Then, we find the spiral state that minimizes the total energy for each point in a parameter space.
For example, a chain of 25 spins may accommodate different numbers of wavelengths: one ($\lambda=25$), two ($\lambda=12.5$), etc.
In particular, this allows us to address the well-known process of spiral expansion under an applied magnetic field within the model $J_1 - D$ mentioned in the introduction \cite{Bogdanov94,Togawa} (see the Supplemental Material for details).
The optimal wavelengths of spin spirals for zero field were computed numerically and are plotted as straight lines in the $D$--$J_3$ plane in Fig. \ref{PD} (c).
The contours of constant $\lambda$ intersect the axes at $D/J_1 = \tan (2\pi/\lambda)$ (pure DMI) and $J_3/J_1 = 1/(4\cos(2\pi/\lambda))$ (pure FEI). For fixed $\lambda$, the relation between $D$ and $J_3$ is
\begin{equation}
D = -4\sin (\frac{2\pi}{\lambda})J_3 + J_1\tan (\frac{2\pi}{\lambda}).
\end{equation}
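This relation, and the single-interaction periods quoted below, can be checked numerically (with $J_1 = 1$; the specific values are only those of the example in the text):

```python
import math

def dmi_for_period(lmbda, J3, J1=1.0):
    """D required to stabilise a spiral of period lmbda together with J3."""
    q = 2.0 * math.pi / lmbda
    return -4.0 * math.sin(q) * J3 + J1 * math.tan(q)

# lambda = 9 with J3 = 0.255 requires D close to the 0.18 quoted in the text.
D = dmi_for_period(9, 0.255)
# Single-interaction periods: D/J1 = tan(2 pi / lambda_D) gives lambda_D ~ 34,
# and J3/J1 = 1/(4 cos(2 pi / lambda_F)) gives lambda_F ~ 32.
lam_D = 2.0 * math.pi / math.atan(D)
lam_F = 2.0 * math.pi / math.acos(1.0 / (4.0 * 0.255))
```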
Remarkably, the spirals with short periods can be stabilized by either DMI or FEI with relatively large magnitudes.
On the other hand, the same value of the spiral period is achieved for relatively weak competing interactions.
For example, $\lambda = 9$ can be achieved for $D = 0.18$ ($\lambda_{D} = 34$) and $J_3 = 0.255$ ($\lambda_{F} = 32$).
In other words, if within the models ($J_1$ - $D$) and ($J_1$ - $J_3$) the spiral states with the same period are stabilized, then within the model ($D$ - $J_1$ - $J_3$) the spiral period would be much smaller, since the interplay between $D$ and $J_3$ should also be taken into account.
Moreover, we prove that only one spiral solution is realized, which differentiates this model from the one with competing DMI and dipole--dipole interaction (DDI) (or FEI and DDI \cite{Hou}), where two energy minima with the properties of spirals and stripe domains (or skyrmions and bubble domains) are known to coexist \cite{Mantel,leonovPHD}.
In an applied magnetic field, as was discussed in the introduction, DMI and FEI stabilize two different spiral states, correspondingly, helicoids and cones.
In the case of competing DMI and FEI, we find an intermediate spin spiral phase that bears features of both mentioned 1D modulations.
In Figs. \ref{PD} (d), (e) we plot ``slices'' of three-dimensional phase diagrams that contain the stability regions of all three spiral states.
The phase diagrams are spanned by the exchange coupling strength $J_3$ and the magnetic field for fixed values of DMI $D$.
The low field regime is occupied by the helical spiral, the spiral state preferred by the DMI (green shaded region).
In an applied magnetic field, a DSS may appear (blue shaded region), which afterward either undergoes a phase transition into the homogeneous state (white shaded region) or is replaced by the conical phase preferred by FEI (red shaded region).
When the DMI strength is sufficiently large, the DSS and the conical phase disappear.
Fig. \ref{DSS} shows the real space configurations of the DSS and the variations of its spin components as functions of the $x$ coordinate. In contrast to the conical and helical spiral phases, the DSS has all three spin components varying.
In an applied magnetic field, the DSS develops from the helicoid via a second-order phase transition: the rotating magnetization acquires a small oscillating $S_x$-component (Fig. \ref{DSS} (a), (e)).
At higher fields, the DSS transforms into the conical phase with constant $S_z$.
Since some accuracy criteria are involved in the definition of the different spiral states, the boundaries of the phases on the phase diagrams may vary.
The magnetization curves plotted in Fig. \ref{MC} indicate all spiral states and can thus be validated experimentally. Moreover, within the 1D model, the DSS transforms into the homogeneous state via a first-order phase transition.
In two spatial dimensions, the constructed phase diagrams must be supplemented by the regions of stable skyrmion lattices (SkL).
Within the model ($J_1$ - $D$), the SkLs develop from the helicoids \cite{Bogdanov94} at the field $H/H_D = 0.216$ and, via a second-order phase transition, expand into the homogeneous state at $H/H_D = 0.8$.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{Fig3-Magncurves4}
\caption{
(color online) Representative magnetization curves that indicate all three 1D spiral modulations and possible scenarios of magnetization processes that could be identified in experiments. For the conical (blue lines) and helical phases (red lines), the magnetization curves are anhysteretic lines symmetric with respect to the field direction. For $D=0$, the helicoidal magnetization exhibits a drastic increase only near the saturation field (c). For $D\neq 0$, the helicoids undergo a first-order phase transition (and are thus accompanied by hysteresis)
into the homogeneous state (c).
Magnetization curves for the DSS (green lines) have a pronounced non-linear character,
which is additionally confirmed by the corresponding energy densities plotted in the second column (orange curves).
\label{MC}
}
\end{figure}
\begin{figure}
\includegraphics[width=0.99\columnwidth]{Fig4-skyrmions4}
\caption{
(color online) Fan-like oscillations of the out-of-plane spin components $S_z$ at the ISs' outskirts, shown for increasing values of DMI: $D=0$ (a), $D=0.03$ (b), $D=0.06$ (c). Note that only the range $S_z\in [0.8,1.0]$ is highlighted to make the oscillations identifiable. Insets show the corresponding in-plane components (arrows) and out-of-plane components (color) of the spins. The field values in (b), (c) correspond to the onset of the ISs' elliptical instability (white circles in Fig. \ref{instability} (h) and spin structures in Fig. \ref{instability} (b), (c)); in (a) -- to the instability with respect to the conical phase (the corresponding spin structure is shown in Fig. \ref{instability} (c)). Amplitudes of the first three oscillations are shown in (d) as functions of the DMI $D$. The corresponding field values are indicated near each point.
\label{IS}
}
\end{figure}
\textit{Isolated skyrmions.}
Next, we discuss the properties of isolated skyrmions that are surrounded by the homogeneous state and are thus realized above the saturation fields of cones, helicoids and DSS.
Frustrated skyrmions ($D=0$ in (\ref{energy})) are known to acquire arbitrary vorticity (i.e., skyrmions and antiskyrmions have the same energy) and helicity (which is a zero mode) \cite{Okubo,LM}.
Moreover, the spins at the skyrmion outskirt undergo fan oscillations (Fig. \ref{IS} (a)) that also give rise to sign changes of the skyrmion-skyrmion interaction potentials and to complex cluster formation.
By symmetry, the additional DMI selects only one type of IS (Bloch skyrmions in our case), making the other types of ISs metastable particles with higher energy \cite{Dupe2016}.
Note that Bloch skyrmions are also supported by DDI, since the magnetization component along the radial direction necessarily leads to internal magnetic charges as is observed in Ref. \onlinecite{Kurumaji}.
The oscillations of spin components with two rotational senses (one of which is not supported by DMI) are also suppressed.
Remarkably, the extrema of the oscillations do not change their radial positions (Figs. \ref{IS} (b), (c)), but rather become ``erased'' by the increased DMI (Fig. \ref{IS} (d)).
The global minimum of interaction potential, however, is preserved up to relatively high values of $D$ (comparable with the magnitude of $J_3$) and weak attraction may be observed.
\begin{figure*}
\includegraphics[width=1.99\columnwidth]{Fig5-Instability2}
\caption{
(color online) Instabilities of isolated skyrmions with respect to the helical (a), (b) and conical (c) 1D spirals, as well as with respect to the 2D spin modulation (d). In (a) an IS develops an elliptical distortion along one of the $<11>$ directions. While stretching, it can bend along the complementary $<11>$ direction (b) and eventually transforms into a spiral domain that occupies the whole space. The elliptical instability of skyrmions occurs slightly below the saturation fields of the helicoids (at the white circles in (h)).
The $S_z$ spin components are shown as color plots, whereas the black arrows indicate projections of the spins onto the $xy$ plane. The 2D spin structure (d) is realized due to the incompatibility of the conical phase $\mathbf{k}||[11]$ with the square numerical grid with coordinate axes along $<10>$. Still, such a spin distribution may appear in nanosystems with confined geometries, since for higher magnetic fields it has lower energy than the cones with $\mathbf{k}||[10]$ (green and blue curves, respectively, in (g)). To obtain the conical phase $\mathbf{k}||[11]$ with the minimal energy (red curve in (g)), we switch to a new coordinate system with $<11>$ axes. Such a conical phase (c) may accommodate isolated skyrmions with an internal structure resembling bimerons. Antiskyrmions (e), (f), on the contrary, acquire elliptical distortions even above the saturation fields of the helicoids (upper boundary of the grey shaded region in (h)). The ellipticity ratio in this case is a field-dependent parameter, as seen from (e) $H=0.04$ and (f) $H=0.034$. Immediately below the saturation field, antiskyrmions transform into helicoids.
\label{instability}
}
\end{figure*}
With the magnetic field decreased below the saturation value, isolated skyrmions become unstable with respect to the corresponding 1D spiral state.
The classic example is the tendency of an IS to elongate and expand into a band with helicoidal modulations and eventually to fill the whole space (Fig. \ref{instability} (a),(b)) \cite{Bogdanov94,LeonovNJP16}.
The elongation direction in this case complies with the propagation direction of spirals along $[11]$ inherent to discrete models on the square lattice.
For lower values of $D$ (e.g., $D=0$, Fig. \ref{instability} (c)), ISs enter the region of cone stability.
Then in a way similar to bimerons \cite{Murooka,Tretiakov}, their structure acquires regions with the antiskyrmion-like type of the magnetization rotation (red-colored regions) to adjust to the oblique magnetization of the conical spiral.
Interestingly, periodic boundary conditions may impose an IS instability toward another state with a 2D spin arrangement (Fig. \ref{instability} (d)). This state has lower energy than the conical state with $\mathbf{q}||[10]$, but higher energy than that with $\mathbf{q}||[11]$ (Fig. \ref{instability} (g)).
To avoid such an artifact of the numerical routines, one should transform to a coordinate system with axes $x||[1\bar{1}]$ and $y||[11]$, which correctly accommodates the conical phase with $\mathbf{q}||[11]$ (Fig. \ref{instability} (c)).
Note that the elliptical instability of the IS occurs at field values lower than the saturation fields of the spiral states (white circles in Fig. \ref{instability} (h)).
Antiskyrmions, on the contrary, elongate and fill the space with the spiral modulations as soon as the energy of the spiral state becomes negative as compared with the homogeneous state (in the grey shaded region of Fig. \ref{instability} (h)).
Interestingly, above the spiral saturation field, antiskyrmions also represent elongated particles, but with a fixed ellipticity ratio. Figs. \ref{instability} (e), (f) show metastable antiskyrmions for different values of the field above the region of spiral stability.
\textit{Discussion and Conclusions.}
Recently, short period magnetic modulations have been observed in a series of chiral magnets MnSi$_{1-x}$Ge$_x$ \cite{Fujishiro,Kanazawa} (including MnGe \cite{Tanigaki}) by means of Lorentz transmission electron microscopy and high-field transport measurements.
Not only were structures with gradual magnetization rotation (like skyrmions or helices) identified, but also two distinct three-dimensional hedgehog lattices that incorporate point defects \cite{Fujishiro}.
Such a short periodicity of the magnetic modulations would require a large DMI, which, however, contradicts band structure calculations \cite{Koretsune} demonstrating that $D$ decreases with increasing $x$.
Thus, the DMI may not be the primary origin of the short-period helical structure. Instead, the magnetic frustration or Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction \cite{Hayami,Okubo} can be a possible mechanism as suggested in Ref. \onlinecite{Fujishiro}.
In the present manuscript, we show that short-period magnetic modulations are induced by the combined effect of competing DMI and FEI, each of moderate strength.
Moreover, the constructed phase diagrams exhibit new transitional spiral states and reveal enriched skyrmion properties.
In particular, isolated skyrmions remain mutually attractive even in the presence of a relatively large DMI.
The variety of ISs is also extended to include bimerons (obtained via the IS instability with respect to the cones) and/or elongated antiskyrmions with variable ellipticity.
We stress that competing DMI and FEI are particularly relevant for skyrmion use in spintronic devices, since a small IS size can be achieved with relatively weak $D$ and $J_3$ strengths.
Besides, increased IS stability can result.
Thus, the proposed effects suggest a new strategy in the search for skyrmion-hosting materials.
\section{Acknowledgements}
The authors are grateful to Ivan Smalyukh, Katia Pappas, Jun-ichiro Ohe, Istvan Kezsmarki, Hikaru Kawamura and Maxim Mostovoy for useful discussions. This work was funded by JSPS Core-to-Core Program, Advanced Research Networks (Japan) and JSPS Grant-in-Aid for Research Activity Start-up 17H06889. AOL thanks Ulrike Nitzsche for technical assistance.
\chapter{Introduction}
\defd{d}
\def{D}{{D}}
\REF\gepner{D. Gepner, ``Fusion Rings And Geometry,'' Commun. Math. Phys.
{\bf 141} (1991) 381; D. Gepner and A. Schwimmer, ``Symplectic Fusion
Rings And Their Metric,'' Nucl. Phys. {\bf B380} (1992) 147; D. Gepner,
``Foundations Of Rational Conformal Field Theory, I'' (Cal Tech preprint,
1992).}
My main goal in these lecture notes will be to elucidate a formula
of Doron Gepner [\gepner],
which relates two mathematical objects, one rather old and one rather new.
Along the way we will consider a few other matters as well.
\def{\cal N}{{\cal N}}
\REF\vafa{C. Vafa, ``Topological Mirrors And Quantum Rings,''
in {\it Essays On Mirror Manifolds}, ed. S.-T. Yau (International
Press, 1992).}
\REF\intrilligator{K. Intriligator, ``Fusion Residues,'' Mod. Phys.
Lett. {\bf A6} (1991) 3543.}
The old structure is the cohomology ring of
the Grassmannian $G(k,N)$ of complex $k$ planes in $N$ space -- except
that one considers the quantum cohomology (or Floer instanton homology)
rather than
the classical cohomology. The new
structure is the Verlinde algebra, which computes the Hilbert polynomial
of the moduli space of vector bundles on a curve. Gepner's formula, as
we will consider it here,
\foot{Gepner actually discusses the classical cohomology of the Grassmannian
and identifies it with a close cousin of the Verlinde algebra. The
refinement of Gepner's formula that we will consider was conjectured
by Vafa [\vafa] and Intriligator [\intrilligator].}
says that the quantum cohomology ring of $G(k,N)$ coincides
with the Verlinde algebra of the group $U(k)$ essentially
at level $N-k$.\foot{Actually if one decomposes the Lie algebra of
$U(k)$ as $su(k)\times u(1)$, then the level is $(N-k,N)$, that
is level $N-k$ for the $su(k)$ factor and level $N$ for the $u(1)$
factor. The source of this subtlety will become clear in \S4.6.}
Gepner discovered his formula by computing the left and right hand side
and observing that they were equal. We will seek a more conceptual
explanation, by representing the quantum cohomology ring of the Grassmannian
in a quantum field theory and reducing that quantum field theory
at low energies to another quantum field theory which is known to
compute the Verlinde algebra.
\REF\gerasimov{A. Gerasimov, ``Localization In GWZW And Verlinde Formula,''
hepth@xxx/9305090.}
\REF\blau{M. Blau and G. Thompson, ``Derivation Of The Verlinde Formula
{}From Chern-Simons Theory And The $G/G$ Model,'' Nucl. Phys. {\bf B408}
(1993) 345.}
The Verlinde formula appears in several quantum field theories.
Of these, the one that is relevant here
is the gauged WZW model, of $G/G$. The shortest
and most complete explanation of its relation
to the Verlinde formula is due to Gerasimov [\gerasimov],
and I will explain his argument in \S2.
Part of the charm of the $G/G$ model is that it can be abelianized,
that is, reduced to a theory in which the gauge group is the maximal
torus of $G$, extended by the Weyl group [\blau].
The argument is simple in concept and will be summarized in \S2.6.
\REF\batyrev{V. Batyrev, ``Quantum Cohomology Rings Of Toric
Manifolds'' (preprint, 1993).}
In \S3, I explain at a qualitative level how the quantum cohomology
of the Grassmannian is represented in a quantum field theory,
and some general techniques for studying this field theory
and reducing it to a problem in gauge theory.
In \S4, I describe the arguments in more technical detail.
The analysis actually should be adaptable to other manifolds
that can be realized as symplectic quotients of linear spaces,
such as flag manifolds and toric varieties. (The quantum cohomology
of a toric variety has been studied by Batyrev [\batyrev]; that of
a general flag manifold has apparently not yet been studied.)
\REF\div{A. D'Adda, M. Luscher, and P. DiVecchia, ``A $1/N$ Expandable
Series Of Nonlinear Sigma Models With Instantons,'' Nucl. Phys. {\bf B146}
(1978) 63, ``Topology And Higher Symmetries Of The Two Dimensional
Nonlinear Sigma Model,'' Phys. Report {\bf 49} (1979) 239.}
\REF\oldwit{E. Witten, ``Instantons, The Quark Model, And The $1/N$
Expansion,'' Nucl. Phys. {\bf B149} (1979) 285.}
\REF\brazil{E. Abdalla, M. Forger, and A. Lima Santos, ``Non-Local
Charges For Nonlinear Sigma Models On Grassmann Manifolds,'' Nucl. Phys.
{\bf B256} (1985) 145.}
\REF\cecotti{S. Cecotti and C. Vafa, ``On Classification Of $N=2$
Supersymmetric Theories,'' Harvard preprint HUTP-92-A064.}
This paper, despite its length, is based on an idea that can be
described very simply.
The two dimensional supersymmetric sigma
model with target $G(k,N)$ can be described as a $U(k)$ gauge
theory (in ${\cal N}=2$ superspace) with $N$ multiplets of chiral superfields
in the fundamental representation of $U(k)$. It was studied from
this point of view in the case of $k=1$ (that is ${\bf CP}^{N-1}$) many years
ago [\div,\oldwit], and the generalization to arbitrary $k$ is also
familiar [\brazil,\cecotti]. At low energy, a suitable
$U(k)$ gauge theory with the matter content just stated
reduces to the supersymmetric sigma
model of the Grassmannian. On the other hand, integrating out the $N$
matter multiplets, one gets an effective action for the $U(k)$ gauge
multiplet. Because of a sort of mixing between scalars and vectors,
this low energy effective action has no massless particles; this is how
the presence of a mass gap has been shown in the past. The novelty
in the present paper is simply the observation that the low energy effective
action is in fact a gauged WZW model of $U(k)/U(k)$. Under this
low energy reduction, the topological correlation functions of the
sigma model -- which compute the quantum cohomology of $G(k,N)$ -- are
mapped into correlation functions of the $U(k)/U(k)$ model
that can be computed (as we recall in \S2) in terms of the Verlinde
algebra. This gives the map between the two theories.
\REF\bertram{A. Bertram, G. Daskalopoulos, and R. Wentworth,
``Gromov Invariants For Holomorphic Maps From Riemann Surfaces To
Grassmannians,'' preprint (April, 1993).}
\REF\lerche{W. Lerche, C. Vafa, and N. Warner, ``Chiral Rings In
${\cal N}=2$ Superconformal Theories,'' Nucl. Phys. {\bf B324} (1989) 427.}
\REF\bourdeau{M. Bourdeau, E. J. Mlawer, H. Riggs, and H. J. Schnitzer,
``Topological Landau-Ginzburg Matter From $SP(N)_K$ Fusion Rings,''
Mod. Phys. Lett. {\bf A7} (1992) 689.}
The quantum cohomology of the Grassmannian has also been studied -- using,
more or less, a classical version of the same setup we will follow -- by
Bertram, Daskalopoulos, and Wentworth [\bertram].
And there is a forthcoming mathematical
approach to Gepner's formula in work of Braam and Agnihotri.
The cohomology of the Grassmannian is closely related
to the chiral ring of a certain ${\cal N}=2$ superconformal field theory
[\lerche] (somewhat misleadingly called a $U(N)/U(k)\times U(N-k)$
coset model); this model probably should be included in the story,
but that will not be done here. Some of the phenomena we will study
have analogs for real and symplectic Grassmannians, as in [\bourdeau]
and the second paper cited in [\gepner]; it would be interesting
to try to extend the analysis for those cases.
\S2 and \S3 can be read independently of one another. \S4 requires
more familiarity with methods of physics than either \S2 or \S3.
Physicists may want to start with \S4.
\chapter{The Verlinde Formula And The $G/G$ Model}
First of all, the Verlinde algebra counts theta functions, such
as the classical theta functions of Jacobi and their generalizations.
In modern language, the classical theta functions can be described as
follows. Let ${\cal T}$ be a complex torus of dimension $g$, ${\cal L}$
a line bundle defining a principal polarization, and $s$ a positive
integer. Then the space of level $s$ theta functions is $H^0({\cal T},
{\cal L}^{\otimes s})$. The dimension of this space can be readily determined
from the Riemann-Roch theorem.
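Spelling this out (a standard computation, included here for completeness): ${\cal L}$ is ample, so the higher cohomology vanishes, and for a principal polarization $\int_{\cal T}c_1({\cal L})^g=g!$; the Riemann-Roch theorem then gives

```latex
$$\dim H^0({\cal T},{\cal L}^{\otimes s})
 =\int_{\cal T}{c_1({\cal L}^{\otimes s})^g\over g!}
 ={s^g\over g!}\int_{\cal T}c_1({\cal L})^g
 =s^g.$$
```

In particular, for $g=1$ one recovers the $s$ classical theta functions of level $s$.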
For example, ${\cal T}$ might be the Jacobian ${\cal J}$ of a complex
Riemann surface $\Sigma$, that is, the moduli space of holomorphic
line bundles over $\Sigma$ of some given degree. This example suggests
the generalization to the ``non-abelian theta functions'' of A. Weil.
Here one replaces the Jacobian of $\Sigma$ by the moduli space ${\cal R}$
of rank $k$ (stable) holomorphic vector bundles over $\Sigma$; now
a ``non-abelian theta function'' at level $s$ is an element
of $H^0({\cal R},{\cal L}^{\otimes s})$. Though the Riemann-Roch
theorem gives a formula for the dimension of this space, this formula
is difficult to use in the non-abelian case
as it involves invariants of ${\cal R}$
that are not easy to determine directly.
\REF\bott{R. Bott, ``On E. Verlinde's Formula In The Context Of
Stable Bundles,'' in {\it Topological Methods In Quantum Field
Theories}, ed. W. Nahm et. al. (World Scientific, 1991).}
The Verlinde algebra gives on the other hand a practical formula
for the dimension of $H^0({\cal R},{\cal L}^{\otimes s})$; the formula
was described very explicitly by Raoul Bott in [\bott]. Roughly speaking,
the origin of the Verlinde formula in differential geometry is as follows.
Via Hodge theory, ${\cal R}$ is endowed with a natural Kahler metric.
As the dimension of the space of non-abelian theta functions is
independent of the complex structure of $\Sigma$, one
can choose the complex structure to simplify the problem.
It is convenient to take $\Sigma$ to be
a nearly degenerate surface consisting of three-holed
spheres joined by long tubes. Using the behavior of the differential
geometry of ${\cal R}$ in this limit, one can write $H^0({\cal R},{\cal L}
^{\otimes s})$ as a sum of tensor products of
similar spaces for a three-holed sphere
with some branching around the holes. The Verlinde algebra encodes
the details of this.
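For orientation, the resulting expression can be quoted in the simplest case (this explicit form is standard, but it is not derived in the text above): for $G=SU(2)$ at level $k$ on a genus $g$ surface, $\dim H^0({\cal R},{\cal L}^{\otimes k})=\left({k+2\over 2}\right)^{g-1}\sum_{j=1}^{k+1}\left(\sin{j\pi\over k+2}\right)^{2-2g}$. A quick numerical check of a few known values:

```python
import math

def su2_verlinde_dim(k, g):
    """SU(2) Verlinde formula: dimension of the space of level-k
    non-abelian theta functions on a genus-g surface."""
    total = sum(math.sin(j * math.pi / (k + 2)) ** (2 - 2 * g)
                for j in range(1, k + 2))
    return round(((k + 2) / 2.0) ** (g - 1) * total)
```

At genus one this reduces to $k+1$, the number of integrable level-$k$ representations of the affine $SU(2)$ algebra.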
The Verlinde algebra arises in several quantum field theories:
\REF\everlinde{E. Verlinde, ``Fusion Rules And Modular Transformations
In 2-D Conformal Field Theory,'' Nucl. Phys. {\bf B300} (1988) 360.}
(1) It originally arose
[\everlinde] in the WZW model, a conformal field theory
(whose Lagrangian we will recall later) that
governs maps from a Riemann surface
$\Sigma$ to a compact Lie group $G$.
Mathematically, this model is related to representations of affine
Lie algebras, the unitary action of the modular group $SL(2,{\bf Z})$ on their
characters, etc.
(2) The Verlinde formula is an important ingredient in understanding
Chern-Simons gauge theory on a three-manifold. Thus it is relevant
to the knot and three-manifold invariants constructed from quantum
field theory.
(3) The Verlinde formula also enters in the gauged WZW model,
governing a pair $(g,A)$, where $A$ is a connection on a principal
$G$ bundle $P$ over a Riemann surface, and $g$ is a section of $P\times_G G$
(where $G$ acts on itself via the adjoint action).
\REF\gawedzki{K. Gawedzki and A. Kupianen, ``A $G/H$ Conformal Field
Theory From Gauged WZW Models,'' Phys. Lett. {\bf 215B} (1988) 119,
``Coset Construction From Functional Integrals,'' Nucl. Phys.
{\bf B320(FS)} (1989) 649; K. Gawedzki, ``Constructive Conformal Field
Theory,'' in {\it Functional Integration, Geometry, And Strings},
eds. Z. Hava and J. Sobczyk (Birkhauser, 1989).}
\REF\uggwitten{E. Witten, ``On Holomorphic Factorization Of WZW And
Coset Models,'' Commun. Math. Phys. {\bf 144} (1992) 189.}
Of these it is
the third -- which was discovered most recently -- that will enter
our story. The present
section is therefore mainly devoted to an explanation -- following
Gerasimov [\gerasimov], who reinterpreted earlier formulas
[\gawedzki,\uggwitten] -- of the gauged WZW model and its relation
to nonabelian theta functions.
\section{Gauge Theory And The Prequantum Line Bundle In Two Dimensions}
\REF\axelrod{S. Axelrod, ``Geometric Quantization Of Chern-Simons
Gauge Theory,'' Ph.D. Thesis, Princeton University (1991).}
\REF\gaw{K. Gawedzki, ``Topological Actions In Two-Dimensional
Quantum Field Theories,'' in {\it Non-perturbative Quantum Field
Theory,} ed. G. 't Hooft (Plenum Press, 1988); G. Felder, K.
Gawedzki, and A. Kupianen, ``Spectra of Wess-Zumino-Witten Models
With Arbitrary Simple Groups,'' Commun. Math. Phys. {\bf 117} (1988) 127.}
\REF\abott{M. F. Atiyah and R. Bott, ``The Yang-Mills Equations Over
Riemann Surfaces,'' Philos. Trans. R. Soc. London. {\bf A308} (1982) 523.}
\def{\cal A}{{\cal A}}
Let $G$ be a compact Lie group, $\Sigma$ a closed oriented two-manifold without
boundary,
and $P$ a principal $G$ bundle over $\Sigma$.
To achieve some minor simplifications in the exposition, I will suppose
$G$ simple, connected, and simply connected. (Notation aside, the only
novelty required to treat a general compact Lie group is that more
care is required in defining the functional $\Gamma(g,A)$ that
appears below; see [\axelrod, \S4].)
One consequence of the assumption about $G$ is that $P$ is trivial.
Let ${\cal A}$ be the space
of connections on $P$. ${\cal A}$ has a natural symplectic structure $\omega$ that
can be defined with no choice of metric or complex structure on $\Sigma$.
(This and some other facts that I summarize presently are originally
due to Atiyah and Bott [\abott].) The symplectic structure
can be defined by the formula
$$\omega(a_1,a_2)={1\over 2\pi}\int_\Sigma\Tr a_1\wedge a_2,\eqn\defform$$
where $a_1$ and $a_2$ are adjoint-valued one-forms representing
tangent vectors to ${\cal A}$. Here $\Tr$ is an invariant quadratic form
on the Lie algebra of $G$, defined for $G=SU(k)$ to be the trace in the
$k$ dimensional representation; in general one can take $\Tr$ to
be the smallest positive multiple of the trace in the adjoint
representation such that the differential form $\Theta$ introduced
below has periods that are multiples of $2\pi$.
A prequantum line bundle ${\cal L}$ over
${\cal A}$ is a unitary line bundle with
a connection of curvature $-i\omega$. ${\cal L}$ exists and is unique
up to isomorphism since ${\cal A}$ is an affine space.
We can take ${\cal L}$ to be the trivial bundle
with a connection defined by the following formula:
$${D\over DA_i}={\delta\over\delta A_i}+{i\over
4\pi}\epsilon^{ij}A_j.\eqn\bbu$$
($\epsilon^{ij}$ is the Levi-Civita antisymmetric tensor; when
local complex coordinates are introduced, we will take $\epsilon^{z\overline z}
=-\epsilon^{\overline z z}=i$.)
The $k^{th}$ power ${\cal L}^{\otimes k}$ is therefore the trivial
bundle endowed with the connection
$${D\over DA_i}={\delta\over\delta A_i}+{ik\over 4\pi}\epsilon^{ij}A_j.
\eqn\ccub$$
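As a consistency check (written schematically, suppressing the Lie algebra indices and the trace normalization), the commutator of two of these covariant derivatives reproduces the required curvature:

```latex
$$\left[{D\over DA_i(x)},{D\over DA_j(y)}\right]
 ={ik\over 4\pi}\bigl(\epsilon^{ji}-\epsilon^{ij}\bigr)\,\delta^2(x-y)
 =-{ik\over 2\pi}\,\epsilon^{ij}\,\delta^2(x-y),$$
```

which is the curvature $-ik\omega$ appropriate to ${\cal L}^{\otimes k}$, with $\omega$ as in \defform\ (and $-i\omega$ for the case $k=1$ of \bbu).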
Let $\widehat G$ be the group of gauge transformations. If $P$ is trivialized,
a gauge transformation is a map $g:\Sigma\rightarrow G$,
and acts on the connection $A$ by
$$ A \rightarrow A^g=gAg^{-1}-d g \cdot g^{-1}. \eqn\mcon$$
More invariantly, $g$ is a section of $P\times_G G$, where $G$ acts on
itself in the adjoint representation, and the action of
$g$ on ${\cal A}$ should be written as
$$d_A\rightarrow
g d_A g^{-1}, \eqn\eeqn$$
with $d_A$ the gauge-covariant extension of the exterior
derivative. At the Lie algebra level this is
$$ A\rightarrow A-d_A\alpha, \eqn\con$$
where $\alpha$ is a section of $P\times_G\bf g$, with $\bf g$ being
the Lie algebra of $G$, on which $G$ acts by the adjoint action.
The action of the gauge group on the space ${\cal A}$ of connections
lifts to an action on the prequantum line bundle. At the Lie algebra
level, the lift is generated by the operators
$$ D_i{D\over DA_i}-{ik\over 4\pi}\epsilon^{ij}F_{ij},\eqn\hocco$$
with $F=d A+A\wedge A$ the curvature form. \hocco\ means very concretely
that the infinitesimal gauge transformation \con\ is represented
on sections of ${\cal L}$ by the operator
$$\int_\Sigma \Tr\alpha\left(D_i{D\over DA_i}-{ik\over 4\pi}F\right).
\eqn\bocco$$
\REF\weitsinger{T. R. Ramadas, I. M. Singer, and J. Weitsman, ``Some
Comments On Chern-Simons Gauge Theory,'' Commun. Math. Phys. {\bf 126} (1989)
409.}
\REF\wwwitten{E. Witten, ``Global Aspects Of Current Algebra,'' Nucl.
Phys. {\bf B223} (1983) 422, ``Non-Abelian Bosonization In Two
Dimensions,'' Commun. Math. Phys. {\bf 92} (1984) 455.}
\REF\wz{J. Wess and B. Zumino, ``Consequences Of Anomalous Ward Identities,''
Phys. Lett. {\bf 37B} (1971) 95.}
\REF\polyakov{A. M. Polyakov and P. B. Wiegmann, ``Theory Of Non-Abelian
Goldstone Bosons In Two Dimensions,'' Phys. Lett. {\bf B131} (1983) 121.}
Even globally, at the group level, the $\widehat G$ action on ${\cal L}$
can be described rather explicitly [\weitsinger].
Pick a three
manifold $B$ with $\partial B=\Sigma$. Given $g:\Sigma\rightarrow G$,
extend $g$ to a map (which I will also call $g$) from $B$ to $G$.
(The extension exists because of our assumption that $\pi_0(G)=\pi_1(G)=0$.
See [\gaw,\axelrod] for the definition of $\Gamma$ without
this simplifying assumption.)
Define as in [\wwwitten]
$$ \Gamma(g)={1\over 12 \pi}\int_B\Tr g^{-1}d g\wedge g^{-1}d g\wedge
g^{-1}d g, \eqn\wzfun$$
which is known as the
Wess-Zumino anomaly functional [\wz].
This is equivalent to
$$\Gamma(g)=\int_Bg^*(\Theta),\eqn\pooo$$
where
$$\Theta={1\over 12\pi}\Tr g^{-1}d g\wedge g^{-1}d g\wedge g^{-1}d g
\eqn\sonnop$$
is a left- and right-invariant closed three-form on $G$.
The periods of $\Theta$ are multiples of $2\pi$. This
ensures that, regarded as
a map to ${\bf R}/2\pi {\bf Z}$, $\Gamma(g)$ depends only on the restriction
of $g$ to $\Sigma$. Note that $\Gamma$ is defined purely in differential
topology; no metric or complex structure on $\Sigma$ is required.
It follows rather directly from the definition of
$\Gamma$ that for $g,h:\Sigma\rightarrow G$,
$$\Gamma(gh)=\Gamma(g)+\Gamma(h)-{1\over 4\pi}\int_\Sigma\Tr g^{-1}d g
\wedge d h\cdot h^{-1}. \eqn\pwform$$
A variant of this equation is called the Polyakov-Wiegmann formula
[\polyakov].
Now, given a connection $A$ on $P$, set
$$W(g,A)=\Gamma(g)-{1\over 4\pi}\int_\Sigma\Tr A\wedge g^{-1}d g.\eqn\mcin$$
{}From \pwform, it follows almost immediately that
$$W(gh, A)= W(g, A^h)+W(h,A). \eqn\cform$$
{}From this we can define an action of the gauge group $\widehat G$ on
the space of functions of $A$. In fact, setting
$$g^*\chi(A)= \exp(ikW(g,A))\cdot \chi(A^g), \eqn\jform$$
we have $(gh)^*=h^*g^*$. Differentiating \jform\ with respect to $g$
at $g=1$, one sees that this particular lift induces \hocco\ at
the Lie algebra level; so \jform\ is the desired lifting of the action
of the gauge group to an action on the prequantum line bundle ${\cal L}$.
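Explicitly, the composition law asserted after \jform\ follows from \cform\ together with the identity $A^{gh}=(A^h)^g$, which is immediate from \mcon:

```latex
$$(gh)^*\chi(A)=e^{ikW(gh,A)}\,\chi\bigl(A^{gh}\bigr)
 =e^{ikW(h,A)}\,e^{ikW(g,A^h)}\,\chi\bigl((A^h)^g\bigr)
 =h^*\bigl(g^*\chi\bigr)(A).$$
```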
\section{Non-Abelian Theta Functions}
So far we have considered $\Sigma$ simply as a closed, oriented two-manifold
without boundary. If one picks a complex structure on $\Sigma$,
some additional interesting constructions can be made [\abott].
A complex structure
on $\Sigma$ induces a complex structure on the space ${\cal A}$ of connections.
One simply declares that the $(0,1)$ part of $A$ is holomorphic
and the $(1,0)$ part is antiholomorphic.
If $z,\overline z$ are local complex coordinates on $\Sigma$,
then the connection \ccub\ characterizing ${\cal L}^{\otimes k}$ can be written
$$\eqalign{{D\over DA_z} & ={\delta\over\delta A_z}-{k\over 4\pi}A_{\overline z}
\cr
{D\over DA_{\overline z}} & ={\delta\over\delta A_{\overline z}}+
{k\over 4\pi}A_{z}.
\cr} \eqn\roar$$
The complex structure on ${\cal A}$
can be described in very down-to-earth terms by saying that
a holomorphic function on ${\cal A}$ is a function annihilated
by $\delta/\delta A_z$. Correspondingly,
a holomorphic section of ${\cal L}^{\otimes k}$
is a section annihilated by $D/D A_z$. Even more explicitly, a holomorphic
section of ${\cal L}^{\otimes k}$
is a function $\chi(A_z,A_{\overline z})$ which can be written
$$\chi(A_z,A_{\overline z})=\exp\left({k\over 4\pi}\int_\Sigma\Tr A_zA_{\overline z}
\right)\cdot \widehat \chi(A_{\overline z}), \eqn\oncoo$$
with $\widehat \chi(A_{\overline z})$ an ordinary holomorphic function on ${\cal A}$.
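Indeed, applying \roar\ to \oncoo\ (schematically, with the trace and the integral over $\Sigma$ understood in the functional derivative of the exponential),

```latex
$${D\over DA_z}\chi
 =\left({\delta\over\delta A_z}-{k\over 4\pi}A_{\overline z}\right)
  \exp\left({k\over 4\pi}\int_\Sigma\Tr\,A_zA_{\overline z}\right)\widehat\chi(A_{\overline z})
 =\left({k\over 4\pi}A_{\overline z}-{k\over 4\pi}A_{\overline z}\right)\chi=0,$$
```

so holomorphy of $\chi$ as a section is equivalent to holomorphy of $\widehat\chi$ as an ordinary function.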
Once a complex structure is picked on $\Sigma$,
the connection $A$ determines operators $\overline\partial_A$ giving
complex structures to vector bundles $P\times_G\bf r$, with ${\bf r}$
a representation of $G$. The action of gauge
transformations on $A$ can be described by the action on the
$\overline\partial_A$ operators:
$$\overline\partial_A\rightarrow g\cdot \overline\partial_A\cdot g^{-1}. \eqn\mnon$$
Since this formula makes sense for complex $g$, the $\widehat G$ action
on ${\cal A}$ extends to an action of the complexified gauge group
$\widehat G_{{\bf C}}$ (consisting of maps of $\Sigma$ to the complexification
$G_{\bf C}$ of $G$).
Two $\overline\partial_A$ operators define equivalent holomorphic bundles
if and only if they are related as in \mnon. So the quotient
${\cal A}/\widehat G_{\bf C}$ (in case the $\widehat G_{\bf C}$ action
is not free, the quotient must be taken in the sense of geometric invariant
theory) is the same as the moduli space ${\cal R}$ of (stable)
holomorphic principal $G$ bundles over $\Sigma$.
The formulas used to describe the lift of the $\widehat G$ action to ${\cal L}$
make sense when $g$ is complex, so we get a lift of the
$\widehat G_{\bf C}$ action to ${\cal L}$. One defines a line
bundle over ${\cal R}$ -- which we will also call ${\cal L}$ --
whose sections over an open set $U\subset {\cal R}$
are the same as the $\widehat G_{\bf C}$-invariant sections
of ${\cal L}$ over the inverse image of $U$ in ${\cal A}$.
\subsection{Non-Abelian Theta Functions}
The space of non-abelian theta functions, at level $k$, is
$H^0({\cal R},{\cal L}^{\otimes k})$. From what has just been said, this
is the same as the $\widehat G$-invariant (or equivalently, $\widehat G_{\bf C}$-invariant)
subspace ${\cal H}^{\widehat G}$ of ${\cal H}=H^0({\cal A},{\cal L}^{\otimes k})$.
We want to determine the dimension of ${\cal H}^{\widehat G}$.
The strategy, as in [\gerasimov],
will be as follows. We will find a very convenient description
of the action of $\widehat G$ on ${\cal H}$. In fact, for $g\in \widehat G$,
we will find an explicit integral kernel $K(A,B;g)$ (with $A,B\in {\cal A}$)
such that for $\chi\in {\cal H}$,
$$g^*\chi(A)= \int DB \,\,\, K(A,B;g)\chi(B). \eqn\mokno$$
(For fixed $g$, $K(A,B;g)$ is a section of $p_1^*({\cal L}^{\otimes k})\otimes
p_2^*({\cal L}^{\otimes(-k)})$ over ${\cal A}\times {\cal A}$,
with $p_1$ and $p_2$ being the two projections to ${\cal A}$.)
Here $DB$ is the natural symplectic measure, normalized
in a way that will be specified in \S2.4,
on the symplectic manifold ${\cal A}$.
Now the projection operator $\Pi:{\cal H}\rightarrow {\cal H}^{\widehat G}$
can be written
$$\Pi={1\over {\rm vol}(\widehat G)}\int_{\widehat G}Dg \,\,\,g^*.\eqn\jory$$
Here formally $Dg$ is a Haar measure on $\widehat G$ and ${\rm vol}(\widehat G)$ is the
volume of $\widehat G$ computed with the same measure.
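As a finite-dimensional sanity check of the averaging formula \jory\ (purely illustrative, and no part of the argument in the text): replacing $\widehat G$ by the cyclic group ${\bf Z}_4$ acting on ${\bf C}^4$ by cyclic shifts, the group average of the representation matrices is indeed a projector, and its trace counts the multiplicity of the trivial representation.

```python
import numpy as np

# Toy model of the averaging projector \jory:
# G = Z_4 acting on C^4 by cyclic shifts.  The projector onto the
# invariant subspace is the group average of the representation matrices.
n = 4
shift = np.roll(np.eye(n), 1, axis=0)           # generator of the cyclic action
reps = [np.linalg.matrix_power(shift, j) for j in range(n)]
Pi = sum(reps) / n                              # (1/|G|) * sum_g rho(g)

assert np.allclose(Pi @ Pi, Pi)                 # Pi is a projector
assert np.isclose(np.trace(Pi), 1.0)            # trivial rep appears once
```

The trace being $1$ reflects the fact that the only shift-invariant vectors in ${\bf C}^4$ are the constant ones.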
The dimension of ${\cal H}^{\widehat G}=H^0({\cal R},{\cal L}^{\otimes k})$
is the same as $\Tr \Pi$, and so can
evidently be written
$${\rm dim}\, H^0({\cal R},{\cal L}^{\otimes k})
={1\over {\rm vol}(\widehat G)}\int Dg \,\,DA \,\,\,\,\, K(A,A;g).
\eqn\evwri$$
It remains to construct a suitable kernel $K$.
This will be done using gauged WZW models.
\section{Gauged WZW Models}
Let $\Sigma$ be a complex Riemann surface; the complex structure
determines the Hodge duality operator $*$ on one-forms.
For a map $g:\Sigma\rightarrow G$, the WZW functional
is
$$I(g)=-{1\over 8\pi}\int_\Sigma \Tr g^{-1}d g\wedge * g^{-1}d g
-i\Gamma(g), \eqn\qzwlag$$
where $\Gamma(g)$ was defined in \wzfun. While $\Gamma(g)$ is defined
purely in differential topology, the first term in the definition of $I(g)$
depends on the complex structure of $\Sigma$ through the $*$ operator.
The quantum field theory
with Lagrangian $L(g)=kI(g)$
is conformally invariant and describes level $k$ highest weight
representations of the loop group of $G$ and the action of the modular
group on their characters.
$G\times G$ acts on $G$ by left and
right multiplication ($g\rightarrow a g b^{-1}$). Let us denote this copy of
$G\times G$ as $G_L\times G_R$ with $G_L$ and $G_R$ acting on the left
and right respectively.
$I(g)$ is invariant under $G_L\times G_R$. We want to pick a subgroup
$H\subset G_L\times G_R$ and construct a gauge invariant extension
of $I(g)$ with gauge group $H$. What this means is that we introduce
a principal $H$ bundle $P$, with connection $A$, and we replace the map
$g:\Sigma\rightarrow G$ by a section of the bundle $P\times_HG$; here $G$ is
understood as the trivial principal $G$ bundle over $\Sigma$, and $H$ acts
on $G$ via its chosen embedding in $G_L\times G_R$. We want
to construct a natural, gauge invariant functional $I(g,A)$ that
reduces at $A=0$ to $I(g)$.
There is no problem in constructing a gauge invariant extension of the first
term in \qzwlag. One simply replaces the exterior derivative by its
gauge-covariant extension:
$$-{1\over 8\pi}\int_\Sigma\Tr g^{-1}d_A g\wedge * g^{-1}d_Ag. \eqn\jopipo$$
On the other hand, there is a topological obstruction to constructing
a gauge invariant extension of $\Gamma(g)$. The requirement is that
the class in $H^3(G,{\bf Z})$ that determines the functional $\Gamma$
(and which in real cohomology is represented by the differential
form $\Theta$) should have an extension to the equivariant cohomology
group $H^3_H(G,{\bf Z})$. This is explained in [\axelrod,\S4];
a quick explanation at the level of de Rham theory
(ignoring the torsion in $H^3_H(G,{\bf Z})$)
is in the appendix of [\uggwitten].
\REF\gko{P. Goddard, A. Kent, and D. Olive, ``Virasoro Algebras And
Coset Space Models,'' Phys. Lett. {\bf 152B} (1985) 88.}
\REF\rabinovici{D. Altschuler, K. Bardakci, and E. Rabinovici,
``String Models With $c<1$ Components,'' Nucl. Phys. {\bf B299} (1988) 157;
D. Altschuler, K. Bardakci, and E. Rabinovici,
``A Construction Of The $c<1$ Modular Invariant Partition Functions,''
Commun. Math. Phys. {\bf 118} (1988) 241.}
\REF\schnitzer{D. Karabali and H. J. Schnitzer, ``BRST Quantization Of
The Gauged WZW Action And Coset Conformal Field Theories,'' Nucl. Phys.
{\bf B329} (1990) 649; D. Karabali, Q-Han Park, H. J. Schnitzer, and
Zhu Yang,
``A GKO Construction Based On A Path Integral Formulation Of Gauged
Wess-Zumino-Witten Actions,'' Phys. Lett. {\bf B216} (1989) 307.}
As explained, for instance, in that appendix, the condition for existence
of a gauge invariant extension of $\Gamma(g)$
can be put in the following very explicit form. If $T_a,\,\,\,a=1\dots
\dim(H)$ are a basis of the Lie algebra of $H$, and if the embedding
$H\subset G_L\times G_R$ is described at the Lie algebra level by
$T_a\rightarrow (T_{a,L},T_{a,R})$, then the requirement is
$$\Tr T_{a,L}T_{b,L}=\Tr T_{a,R}T_{b,R}, ~~~ {\rm for~all}~a,b.\eqn\forall$$
A subgroup $H\subset G_L\times G_R$ obeying this condition is said to
be anomaly-free.
For such an $H$, the gauge invariant extension of $\Gamma$ exists and is
explicitly
$$\eqalign{\Gamma(g,A)=&
\Gamma(g)-{1\over 4\pi}\sum_a\int_\Sigma A^a\Tr\left(T_{a,L} d g\cdot
g^{-1}+T_{a,R}g^{-1}d g\right)\cr &-{1\over 8\pi}\sum_{a,b}
\int A^a\wedge A^b\Tr
\left(T_{a,R}g^{-1}T_{b,L}g-T_{b,R}g^{-1}T_{a,L}g\right)
.\cr}\eqn\qqq$$
Combining these formulas, one gets for anomaly-free $H$ a gauge invariant
extension of the WZW functional,
$$I(g,A)=
-{1\over 8\pi}\int_\Sigma\Tr g^{-1}d_A g\wedge * g^{-1}d_Ag-i\Gamma(g,A).
\eqn\jopipol$$
The quantum field theories with Lagrangians $L(g,A)=kI(g,A)$, $k$ a positive
integer, are called $G/H$ models.
\foot{The terminology is somewhat misleading since these models are not
the most obvious sigma models with target space $G/H$; and one is not
allowed to use the most obvious $H$ actions on $G$, such as the left
or right actions, which are anomalous. The terminology is used because
the models are believed [\rabinovici,\schnitzer,
\gawedzki,\uggwitten] to be equivalent
to GKO models [\gko], which were originally described algebraically,
and are conventionally called $G/H$ models or coset models.
The claimed equivalence to the GKO models implies in particular that
the models are conformally invariant at the quantum level.}
Note that for given $G$ and $H$, there may be several $G/H$ models,
since there may be several anomaly-free embeddings of $H$ in $G_L\times G_R$.
If $H$ is any subgroup of $G$, then the diagonal embedding of $H$ in
$G_L\times G_R$ is always anomaly-free. The model determined by
such a diagonal embedding is often called ``the'' $G/H$ model.
If we pick local complex coordinates $z,\overline z$ on $\Sigma$ (which
will facilitate a small calculation needed presently) and write
the measure $|d z\wedge d\overline z|$ as $d^2z$, then the Lagrangian of
the diagonal $G/H$ model is explicitly $k$ times
$$\eqalign{
I(g,A)=&
I(g)-{1\over 2\pi}\int_\Sigma d^2 z\Tr A_z\partial_{\overline z}
g\cdot g^{-1}\cr &
+{1\over 2\pi}\int_\Sigma d^2z\Tr A_{\overline z}g^{-1}\partial_zg
-{1\over 2\pi}\int_\Sigma d^2z\Tr\left(A_z
A_{\overline z}-A_zgA_{\overline z}g^{-1}\right)
.\cr }\eqn\juryfile$$
Our interest will center on the special case of the diagonal $G/H$ model
for $H=G$. This is then the $G/G$ model with adjoint action of $G$ on
itself. The Lagrangian is $k$ times \juryfile, and the partition function
at level $k$ is
$$Z_k(G,\Sigma)=
{1\over {\rm vol}(\widehat G)}\int {D} g\,\,\,{D} A \,\,\,\,\,\exp\left(
-kI(g,A)\right). \eqn\niso$$
Our goal is to use \evwri\ to show that
$Z_k(G,\Sigma)$ coincides with the dimension of the space of non-abelian theta
functions at level $k$.
\section{The Kernel}
One more special case is important: $H=G_L\times G_R$.
This is an anomalous subgroup, so there is no gauge invariant
$G/H$ Lagrangian and no $G/H$ quantum field theory for this $H$.
We will do something else instead.
Denote the $G_L$ and $G_R$ components of an $H$ connection as $A$ and $B$. Set
$$I(g,A,B)=I(g)+{1\over 2\pi}\int d^2z\Tr\left( A_{\overline z}g^{-1}\partial_zg
-B_z\partial_{\overline z}g\cdot g^{-1}+B_zgA_{\overline z} g^{-1}
-{1\over 2}A_zA_{\overline z}-{1\over 2}B_zB_{\overline z}\right). \eqn\ipo$$
This functional is determined by the following: it
is not gauge invariant, but its change under
a gauge transformation is independent of $ g$ and related in a useful
way to the geometry of the prequantum line bundle.
In fact, under an infinitesimal
gauge transformation
$$\delta g=vg-gu,~~~\delta A=-d_Au,~~~\delta B=-d_Av ,\eqn\infg$$
we have
$$\delta I(g,A,B)={i\over 4\pi}\int_\Sigma\Tr\left(u\,d A-v\,d B\right).
\eqn\ginf$$
(The fact that an extension $I(g,A,B)$ of $I(g)$ exists
with these properties has a conceptual explanation noted
in the appendix to [\uggwitten].)
Now, set
$$K(A,B;g)=\exp\left(-kI(g,A,B)\right). \eqn\cnxon$$
In its dependence on $A$, $K$ can be interpreted as a holomorphic
section of ${\cal L}^{\otimes k}$; this just means that $K$ is
independent of $A_z$ except for the exponential factor prescribed in
\oncoo. Likewise, in its dependence on $B$, $K$ is an anti-holomorphic
section of ${\cal L}^{\otimes(-k)}$; this means that it is independent
of $B_{\overline z}$ except for a similar exponential. In [\uggwitten], the above
facts were used to describe holomorphic factorization
of WZW and coset models. Gerasimov's insight [\gerasimov]
was that $K$ is actually the kernel representing the action of the
gauge group on ${\cal H}=H^0({\cal A},{\cal L}^{\otimes k})$.
This means that for $\chi\in {\cal H}$ and $g\in \widehat G$,
$$g^*\chi(A)=\int {D} B\,\,\,\,\,K(A,B;g) \chi(B). \eqn\niuggo$$
To show this, we first as in \oncoo\ write the holomorphic section $\chi$
as
$$\chi(B)=\exp\left({k\over 4\pi}\int_\Sigma d^2z\Tr B_zB_{\overline z}\right)
\widehat\chi(B_{\overline z}) \eqn\noko$$
with $\widehat\chi$ an ordinary holomorphic function.
The $B$-dependent factors in the integral
on the right hand side of \niuggo\ are
$$\int {D} B\,\,\,\,\exp\left({k\over 2\pi}\int_\Sigma d^2z
\Tr\left( B_zB_{\overline z}
-B_zA_{\overline z}{}^g\right)\right)\cdot \widehat \chi(B_{\overline z}). \eqn\goko$$
To perform such an integral, the basic fact is that if $f(\phi)$ is
a holomorphic function that grows at infinity more slowly than
$\exp(|\phi|^2)$, then
$${1\over \pi}\int_{\bf C} |d\phi\wedge d\overline\phi|
\exp(-\overline\phi \phi +a\overline\phi)f(\phi)=f(a). \eqn\ompo$$
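The reproducing identity \ompo\ can be verified numerically on a truncated grid. In the sketch below (illustrative only), the flat measure $dx\,dy$ on ${\bf C}$ is used with the $1/\pi$ prefactor; how one normalizes $|d\phi\wedge d\overline\phi|$ relative to $dx\,dy$ is a convention, here fixed so that the identity holds as stated. The test function is $f(\phi)=\phi^2$.

```python
import numpy as np

# Numerical check of the reproducing identity \ompo:
# (1/pi) * integral of exp(-|phi|^2 + a*conj(phi)) f(phi) over C equals f(a),
# with the flat measure dx dy and test function f(phi) = phi**2.
a = 0.3 + 0.2j
x = np.linspace(-6.0, 6.0, 601)                 # Gaussian decay makes the
dx = x[1] - x[0]                                # truncation error negligible
X, Y = np.meshgrid(x, x)
phi = X + 1j * Y
kernel = np.exp(-np.conj(phi) * phi + a * np.conj(phi))
integral = (kernel * phi**2).sum() * dx * dx / np.pi

assert abs(integral - a**2) < 1e-6              # reproduces f(a) = a**2
```

This is the standard reproducing property of the Bargmann (coherent-state) inner product, which is what makes the Gaussian integral over $B$ below collapse to an evaluation.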
Using this fact and normalizing the symplectic
measure on ${\cal A}$ so that
$$\int {D} B\,\,\exp\left({k\over 2\pi}\int_\Sigma
d^2z\Tr B_zB_{\overline z}\right) = 1 \eqn\mpo$$
(to avoid a determinant that would otherwise arise in using \ompo), we get
simply
$$\int {D} B\,\,\,\,\exp\left({k\over 2\pi}\int_\Sigma d^2z
\Tr\left( B_zB_{\overline z}
-B_zA_{\overline z}{}^g\right)\right)\cdot \widehat \chi(B_{\overline z})=
\widehat\chi(A_{\overline z}^g). \eqn\gokko$$
The integral in \niuggo\ thereby becomes
$$\eqalign{\int {D} B \,\,\,\,K(A,B;g)\chi(B)&
=\exp\left(-k\left(I(g)+{1\over 2\pi}
\int d^2z\Tr A_{\overline z}g^{-1}\partial_zg -{1\over 4\pi}
\int d^2z\Tr A_zA_{\overline z}
\right)\right)\cr &~~~\cdot \widehat\chi(A_{\overline z}{}^g).\cr} \eqn\kson$$
Using \noko\ to reexpress $\widehat\chi$ in terms of $\chi$, and using
the explicit forms of $I(g)$ and $A^g$, we get
$$\int {D} B\,\,\,K(A,B;g)\chi(B)=\exp\left(ik\left(\Gamma(g)-{1\over 4\pi}
\int_\Sigma\Tr A\wedge g^{-1}d g\right)\right)\cdot \chi(A^g). \eqn\dson$$
Using the definition of $g^*$ in \jform, this indeed coincides
with the desired formula \niuggo.
As we saw in arriving at \evwri, it follows that the dimension of the
space of non-abelian theta functions is
$$\dim H^0({\cal R},{\cal L}^{\otimes k})={1\over {\rm vol}(\widehat G)}
\int {D} g \,\,{D} A\,\,\, K(A,A;g).
\eqn\snosno$$
But the integral on the right is precisely the partition function
\niso\ of the $G/G$ model (since $K(A,A;g)=\exp(-kI(g,A))$).
So we have arrived at the main goal of this section: identifying
the dimension of the space of non-abelian theta functions with the
partition function of the $G/G$ model.
\subsection{Inclusion Of Marked Points}
Now we would
like to extend the analysis slightly to the case of a Riemann surface
$\Sigma$ with marked points labeled by representations of $G$.
The $G/G$ model in this situation will give a path integral representation
of the Verlinde algebra. (This generalization might be omitted on a first
reading.)
Suppose one has a representation $\rho$ of a compact Lie group $G$
in a Hilbert space ${\cal H}$. Then as in \jory, the projection
operator onto the invariant subspace of ${\cal H}$ is
$$\Pi= {1\over {\rm vol}(G)}\int_G Dg \,\,\rho(g), \eqn\uxx$$
with $Dg$ an invariant measure on $G$ and ${\rm vol}(G)$ the
volume of $G$ computed with that measure.
The trace of $\Pi$ is the multiplicity with which the trivial
representation of $G$ appears in ${\cal H}$.
Now pick an irreducible representation
$V$ of $G$, that is a vector space $V$ in which $G$ acts
irreducibly by $g\rightarrow \rho_V(g)\in {\rm Aut}(V)$. We want a formula
for the multiplicity with which $V$ appears in ${\cal H}$.
We can reduce to the previous case as follows. Let $\overline V$
be the dual or complex conjugate representation of $G$. The multiplicity
with which $V$ appears in ${\cal H}$ is the same as the multiplicity
with which the trivial representation appears in ${\cal H}\otimes {\overline V}$.
So we define the projection operator $\Pi_{V}$ onto the $G$-invariant
subspace of ${\cal H}\otimes {\overline V}$:
$$\Pi_V={1\over {\rm vol}(G)}\int Dg\,\,\, \rho(g)\otimes \rho_{\overline V}(g).
\eqn\hudxo$$
The multiplicity with which $V$ appears in ${\cal H}$ is
$${\rm mult}(V)=\Tr \Pi_V. \eqn\udxo$$
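Equations \hudxo\ and \udxo\ have an elementary finite-dimensional analog which can be checked directly (an illustration, not part of the argument): take $G=S_3$, ${\cal H}={\bf C}^3$ with the permutation action, and $V$ the two-dimensional standard irreducible representation, which appears in ${\bf C}^3$ with multiplicity one. Since this $V$ is real, $\overline V\cong V$.

```python
import numpy as np
from itertools import permutations

# Toy check of \hudxo-\udxo for G = S_3: rho = permutation action on C^3,
# V = the 2-dim standard irrep (real, so conjugate to itself).
# Tr Pi_V should equal the multiplicity of V in C^3, namely 1.
perms = list(permutations(range(3)))

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, j in enumerate(p):
        M[j, i] = 1.0                      # sends e_i to e_{p(i)}
    return M

# Orthonormal basis of the sum-zero subspace carries the standard irrep.
B = np.array([[1, -1, 0], [1, 1, -2]]).T / np.array([np.sqrt(2), np.sqrt(6)])
Pi_V = sum(np.kron(perm_matrix(p), B.T @ perm_matrix(p) @ B)
           for p in perms) / len(perms)   # (1/|G|) sum_g rho(g) x rho_V(g)

assert np.allclose(Pi_V @ Pi_V, Pi_V)     # projector onto invariants
assert np.isclose(np.trace(Pi_V), 1.0)    # mult(V) = 1
```

The trace computation here is just character orthogonality, $\Tr\Pi_V={1\over|G|}\sum_g\chi_{\cal H}(g)\,\chi_{\overline V}(g)$, in matrix form.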
We want to apply this to the case in which $G$ is replaced by the
group $\widehat G$ of gauge transformations of a principal $G$ bundle
$P\rightarrow \Sigma$; and ${\cal H}$ will be, as above,
$H^0({\cal A},{\cal L}^{\otimes k})$. The representations we will
use will be the following simple ones.
For a point $x\in \Sigma$, let $r_x:\widehat G\rightarrow G$ be the
map of evaluation at $x$. For any representation $\rho_V:G\rightarrow {\rm Aut}(V)$
of $ G$, we have the corresponding representation $\rho_{x,V}
=\rho_V\circ r_x$ of $\widehat G$.
Pick now points $x_i\in \Sigma$, labeled by representations $V_i$,
and let $V=\otimes_iV_i$ with $\widehat G$ acting by
$$\rho_V=\otimes_i \rho_{x_i,V_i}. \eqn\ingo$$
The conjugate representation is $\rho_{\overline V}=\otimes_i\rho_{x_i,\overline V_i}$.
We want to find a path integral representation of the multiplicity
with which $V$ appears in ${\cal H}$, along the lines of
\udxo. To this end we must calculate
$$\Tr\left(\rho(g)\otimes
\rho_{\overline V}(g)\right)=\Tr\rho(g)\cdot\Tr \rho_{\overline V}(g).\eqn\faf$$
Here the first factor has a path integral expression; in fact,
$$\Tr\rho(g)=\int DA \,\, K(A,A;g), \eqn\mcco$$
with $K(A,B;g)$ the kernel introduced in \cnxon.
The second factor is simply
$$\Tr\rho_{\overline V}(g) = \prod_i \Tr_{\overline V_i}g(x_i). \eqn\imoxox$$
So we get
$${\rm mult}(V)=\dim \left({\cal H}\otimes \overline V\right)^{\widehat G}
={1\over {\rm vol}(\widehat G)}\int Dg\,\,DA\,\,\exp(-kI(g,A))
\cdot\prod_i\Tr_{\overline V_i} g(x_i). \eqn\micnic$$
The right hand side is usually called the (unnormalized)
correlation function,
$$\left\langle \prod_i \Tr_{\overline V_i}g(x_i)\right\rangle\eqn\oxxo$$
in the gauged WZW model. \oxxo\ would be unchanged if all $\overline V_i$
are replaced by $V_i$; the gauged WZW action has a symmetry (coming from an
involution of $G$ that exchanges all representations with their
complex conjugates) that ensures this.
\subsection{Relation To The Verlinde Algebra}
Now let us relate this to the Verlinde algebra.
Let $T$ be the maximal torus of $G$ and $G/T$ the quotient
of $G$ by the right action of $T$. For any irreducible representation $V$
of $G$, there is a homogeneous line bundle ${\cal S}$ over
$G/T$ such that $ H^0(G/T,{\cal S})$ is isomorphic to $V$.
Given marked points $x_1,\dots, x_s$ on $\Sigma$, let
$\widehat {\cal A}$ be the symplectic manifold
$$\widehat{ \cal A}={\cal A}\times\prod_{i=1}^s(G/T)_i \eqn\oopp$$
where $(G/T)_i$ is a copy of $G/T$ ``sitting'' at $x_i$.
This is an informal way to say that the gauge group $\widehat G$
(and its complexification $\widehat G_{\bf C}$) acts on
$(G/T)_i$ by composition of the evaluation map $r_{x_i}$
with the natural action of $G$ (or $G_{\bf C}$) on $G/T$.
If we are given irreducible
representations $V_i$ of $G$, let ${\cal S}_i$, for each $i$, be
a homogeneous line bundle over $(G/T)_i$ such that
$H^0((G/T)_i,{\cal S}_i)\cong \overline V_i$.
Define a homogeneous line bundle $\widehat {\cal L}$ over $\widehat{ \cal A}$
by
$$\widehat {\cal L}={\cal L}^{\otimes k}\otimes\left(\otimes_i{\cal S}_i\right).
\eqn\hins$$
(In an obvious way, I have identified the line bundles ${\cal L}$
and ${\cal S}_i$ with their pullbacks to $\widehat{\cal A}$.)
Then
$$H^0(\widehat{\cal A},\widehat{\cal L})=H^0({\cal A},{\cal L}^{\otimes k})
\otimes\left(\otimes_i \overline V_i\right). \eqn\polyp$$
The multiplicity ${\rm mult}(V)$ of \micnic\
is therefore the same as the dimension of the $\widehat G$-invariant
subspace of $H^0(\widehat{\cal A},\widehat{\cal L})$:
$${\rm mult}(V)={\rm dim}\left(H^0(\widehat{\cal A},\widehat{\cal L})^{\widehat G}\right).
\eqn\jurfo$$
\REF\seshadri{V. B. Mehta and C. S. Seshadri,
``Moduli Of Vector Bundles On Curves With Parabolic Structures,''
Math. Ann. {\bf 248} (1980) 205.}
On the other hand, let $\widehat{\cal R}$ be the quotient of $\widehat{\cal A}$
by $\widehat G_{\bf C}$
(the quotient being taken in the sense of geometric invariant theory,
using the ample line bundle $\widehat{\cal L}$). $\widehat{\cal R}$ is called
the moduli space of holomorphic bundles over $\Sigma$ with parabolic
structure, the parabolic structure being a reduction of the structure
group to $T$ at the marked points $x_i$. (By a theorem of Mehta and
Seshadri [\seshadri],
$\widehat{\cal R}$ coincides with the moduli space of flat connections on
$P\rightarrow \Sigma - \{x_i\}$ with certain branching about the $x_i$, up to gauge
transformation.)
The $\widehat G_{\bf C}$-invariant line bundle $\widehat{\cal L}\rightarrow\widehat{\cal A}$
descends to a line bundle over $\widehat{\cal R}$, which we will also call
$\widehat{\cal L}$, whose sections over an open set $U\subset \widehat{\cal R}$
are $\widehat G$-invariant sections of $\widehat{\cal L}$ over the
inverse image of $U$ in $\widehat{\cal A}$. So in particular
$$H^0(\widehat{\cal R},\widehat{\cal L}) =H^0(\widehat{\cal A},\widehat{\cal L})^{\widehat G}.
\eqn\hurry$$
Both $\widehat{\cal R}$ and $\widehat{\cal L}$
depend on the $V_i$, but I will not indicate this in the notation.
The left hand side of \hurry\ is the space of non-abelian theta
functions with parabolic structure.
If we combine \micnic, \oxxo, \jurfo, and \hurry, we find
that the dimension of this space is naturally written as a correlation
function in the gauged WZW model:
$$\dim H^0(\widehat{\cal R},\widehat{\cal L})=\left\langle \prod_{i=1}^s\Tr_{V_i}g(x_i)
\right\rangle.\eqn\finalgo$$
\subsection{The Verlinde Algebra}
As a special case of this, the Verlinde algebra is defined as follows.
For given ``level'' $k$, the loop group of the compact Lie
group $G$ has a finite number of isomorphism classes of unitary,
integrable representations; their highest weights are a distinguished
list of isomorphism classes $V_\alpha,\,\,\alpha\in W$ of representations
of $G$. Let $X$ be the ${\bf Z}$ module freely generated by the $V_\alpha$.
$X$ has a natural
metric given by $g(V_\alpha,V_\beta)=1$ if $V_\alpha=\overline V_\beta$
and otherwise $g(V_\alpha,V_\beta)=0$.
It also has a natural multiplication structure
that we will describe presently.
$X$ endowed with this structure is called the Verlinde algebra.
Using the metric on $X$, a multiplication
law $V_\alpha\cdot V_\beta=\sum_\gamma N_{\alpha\beta}{}^\gamma V_\gamma$
can be defined by giving a cubic form $N_{\alpha\beta\gamma}$ which
is interpreted as $\sum_\delta g_{\gamma\delta}N_{\alpha\beta}{}^{\delta}$.
Such a cubic form is defined as follows.
Take $\Sigma$ to be a curve of genus zero with three marked points
$x_i$, $i=1\dots 3$, labeled by integrable representations $V_{\alpha_i}$,
$\alpha_i\in W$. The choice of the $\alpha_i$ and of a level $k$
determines a moduli space $\widehat{\cal R}$ of parabolic bundles
with a line bundle $\widehat{\cal L}$. The structure constants of the Verlinde
algebra are
$$N_{\alpha_1,\alpha_2,\alpha_3}=\dim H^0(\widehat{\cal R},\widehat{\cal L}).
\eqn\yuyu$$
So in other words, from \finalgo, the structure constants of the Verlinde
algebra are the genus zero three point functions of the $G/G$ model:
$$N_{\alpha_1,\alpha_2,\alpha_3}=\left
\langle\prod_{i=1}^3\Tr_{V_i}g(x_i)\right\rangle.
\eqn\josos$$
The basic phenomenon under study in the present paper is a relation
between the quantum cohomology of the Grassmannian and the $G/G$ model;
the result can be applied to the Verlinde algebra because of \josos.
The special case of a genus zero surface with three marked
points is fundamental because the general case can be reduced to this
by standard sewing and gluing arguments.
In fact, such sewing and gluing arguments, applied to
a genus zero curve with four marked points, yield the associativity
of the Verlinde algebra.
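The structure constants $N_{\alpha\beta\gamma}$ and their associativity can be made completely concrete for $G=SU(2)$ at level $k$, using the standard modular $S$-matrix (an input assumed in this sketch, not derived in the text): $S_{ab}=\sqrt{2/(k+2)}\,\sin\big(\pi(a+1)(b+1)/(k+2)\big)$, with $a,b=0,\dots,k$ labeling the integrable representation of twice-spin $a$.

```python
import numpy as np

# Verlinde algebra of SU(2) at level k from the modular S-matrix
# (the Verlinde formula is taken as an input here).
k = 4
n = k + 1
S = np.array([[np.sqrt(2.0 / (k + 2)) * np.sin(np.pi * (a + 1) * (b + 1) / (k + 2))
               for b in range(n)] for a in range(n)])

# Verlinde formula: N_{ab}^c = sum_m S_{am} S_{bm} S_{cm} / S_{0m}  (S real)
Nf = np.einsum('am,bm,cm->abc', S, S, S / S[0])
N = np.rint(Nf).astype(int)
assert np.allclose(Nf, N, atol=1e-9)        # structure constants are integers
assert (N >= 0).all()
assert N[1, 1, 0] == 1 and N[1, 1, 2] == 1  # spin 1/2 x spin 1/2 = 0 + 1

# Associativity: (V_a V_b) V_c = V_a (V_b V_c)
assert (np.einsum('abe,ecd->abcd', N, N)
        == np.einsum('bce,aed->abcd', N, N)).all()
```

For $SU(2)$ all representations are self-conjugate, so the metric $g$ is the identity and $N_{ab}{}^c$ coincides with the cubic form $N_{abc}$.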
\subsection{Higher Cohomology}
Obviously, the above discussion has only a physical level of rigor.
Among many points that should be clarified I will single out one.
If the $V_i$ are integrable representations at level $k$, then the higher
cohomology $H^i(\widehat{\cal R},\widehat{\cal L}),\,\,i>0$ vanishes,
and $\dim H^0(\widehat{\cal R},\widehat{\cal L})$ coincides with the Euler characteristic
$\chi(\widehat{\cal R},\widehat{\cal L})=\sum_i(-1)^i\dim H^i(\widehat{\cal R},\widehat{\cal L})$.
{}From comments made to me by R. Bott and G. Segal, it appears that
for \finalgo\ to hold for arbitrary representations $V_i$
(perhaps not integrable), one must replace $\dim H^0(\widehat{\cal R},\widehat{\cal L})$
by $\chi(\widehat{\cal R},\widehat{\cal L})$. A rigorous treatment of the $G/G$
model should show how the restriction to integrable representations
enters in deriving \finalgo; there may also be a supersymmetric version
of the derivation that naturally gives the Euler characteristic
and holds for all representations.
\section{Some Additional Properties}
The reader may wish at this stage to turn to \S3. However, I will pause
here and in \S2.6 below
to explain a few additional facts that have their own interest
and will be needed at a few points in \S4.
\subsection{Topological Field Theory}
First of all, the gauged WZW theory of $G/H$ is in general
conformally invariant but not topologically invariant. A
conformal structure appears in the definition of the Lagrangian.
However, for $H=G$ we have evaluated the
partition function of the $G/H$ model, and found it to
be an integer, independent of the conformal structure of $\Sigma$,
and equal to the dimension of the space of non-abelian theta functions.
This strongly suggests that the $G/G$ model is actually a topological
field theory. Let us try to demonstrate that directly.
A conformal structure on $\Sigma$ can be specified by giving a metric
$h$, uniquely determined up to Weyl scaling.
Under a change in $h$, the change in the $G/G$ Lagrangian is
$$\delta L={k\over 8\pi}\int_\Sigma d^2z\sqrt h (h^{z\overline z})^2
\left(\delta h_{\overline z\overline z}\Tr (g^{-1}D_zg)^2+\delta h_{zz}
\Tr (D_{\overline z} g\cdot g^{-1})^2\right). \eqn\polo$$
Though this expression does not vanish identically, it vanishes
when the classical equations of motion are obeyed. In fact,
under a variation of the connection $A$, the Lagrangian changes
by
$$\delta' L= {k\over 2\pi}\int_\Sigma d^2z\sqrt h h^{z\overline z}
\Tr\left(\delta A_{\overline z} g^{-1}D_zg-\delta A_z D_{\overline z}g\cdot g^{-1}
\right). \eqn\moomoo$$
So the classical Euler-Lagrange equations, asserting
the vanishing of $\delta' L$, are
$$0 = g^{-1}D_z g= D_{\overline z}g \cdot g^{-1}. \eqn\roomoo$$
Since \polo\ vanishes when \roomoo\ does,
the $G/G$ model is classically a topological field theory.
Quantum mechanically
the analog of using the equations of motion is to make a suitable
change of variables in the path integral. In this case, we consider
the infinitesimal redefinition of $A$
$$\eqalign{\delta A_z & = {1\over 4}\delta h_{zz}h^{z\overline z}D_{\overline z}g\cdot
g^{-1} \cr
\delta A_{\overline z} & = -{1\over 4} \delta h_{\overline z\overline z}
h^{z\overline z} g^{-1}D_zg. \cr} \eqn\ujmoo$$
(This is a complex change of coordinates that entails an infinitesimal
displacement of the integration contour in the complex plane, or
more exactly a displacement of the cycle of integration in the complexification
of ${\cal A}$.)
Substituting \ujmoo\ in \moomoo, we see that the
Lagrangian $L(g,A)$ is invariant under a change of metric on $\Sigma$
compensated by this transformation of the field variables.
The path integral for the partition function
$$ \int {D} g\,\,\,{D} A \,\,\,\exp(-L(g,A)) \eqn\jmoo$$
is therefore invariant
under the combined change of metric and integration variable,
provided the measure
${D} A$ is invariant. To verify this, we must compute a Jacobian or,
at the infinitesimal level, the divergence
of the vector field that generates the change of variables \ujmoo.
This is formally
$$\int_\Sigma\left({\delta\over\delta A_z(x)}\delta A_z(x)
+{\delta\over\delta A_{\overline z}(x)}\delta A_{\overline z}(x)\right). \eqn\gmoo$$
This vanishes, as $\delta A_z$ is independent of $A_z$ and $\delta
A_{\overline z}$ is independent of $A_{\overline z}$.
This completes the explanation of why the $G/G$ model
is a topological field theory.
Let us note now that the other Euler-Lagrange equation of motion,
obtained by varying with respect to $g$, is
$$D_{\overline z}(g^{-1}D_zg) +F_{\overline zz}= 0, \eqn\cxxon$$
with $F$ the curvature of the connection $A$. So given
\roomoo, this implies that
$$F=0. \eqn\czonzo$$
\subsection{Comparison To The Obvious Topological Field Theory}
If the goal were to construct a topological field theory using
the fields $g,A$, the more obvious way to do it would be to take
the Lagrangian to be simply
$$L'(g,A)=-ik\Gamma(g,A), \eqn\hixon$$
which manifestly corresponds to a topological field theory, since
it is defined without use of any metric or conformal structure.
How does this theory compare to the $G/G$ WZW model?
More generally, let us consider the family of theories
$$L_{k'}(g,A)=-{k'\over 8\pi}\int_\Sigma\Tr g^{-1}d_Ag\wedge
*g^{-1}d_Ag -ik\Gamma(g,A), \eqn\cargo$$
with positive $k'$.
This coincides with the $G/G$ model at $k'=k$, and with the manifestly
topologically invariant model at $k'=0$.
It is straightforward to work out that the classical equations
of motion are
$$ 0 =g^{-1}D_zg -\lambda D_zg\cdot g^{-1}=D_{\overline z}g\cdot g^{-1}-
\lambda g^{-1}D_{\overline z}g, \eqn\cimbo$$
with
$$\lambda = {k'-k\over k'+k}.\eqn\argo$$
For $0<k'<\infty$, one has
$$-1<\lambda<1. \eqn\dgon$$
\cimbo\ implies
$$d_Ag = 0 .\eqn\noki$$
For instance, the first equation in \cimbo\ is equivalent to
$$\left(1-\lambda {\rm Ad}(g)\right)(D_zg)=0, \eqn\ncon$$
with ${\rm Ad}(g)(x)=gxg^{-1}$. Since $|{\rm Ad}(g)|\leq 1$ and $|\lambda|
<1$, \ncon\ implies $D_zg=0$, and similarly the second equation in \cimbo\ implies $D_{\overline z}g=0$.
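The inequality $|{\rm Ad}(g)|\leq 1$ invoked here can be checked concretely for $G=SU(2)$: in the Pauli basis the adjoint action of a unitary $g$ is an orthogonal $3\times 3$ matrix (an isometry of the Lie algebra in the trace form), so $1-\lambda\,{\rm Ad}(g)$ is invertible for $|\lambda|<1$. A small numerical sketch, illustrative only:

```python
import numpy as np

# Ad(g) for a random g in SU(2), written in the Pauli basis:
# Ad(g)_{ij} = (1/2) Tr(sigma_i g sigma_j g^dagger), a real orthogonal matrix.
rng = np.random.default_rng(0)
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
q = rng.normal(size=4)
q /= np.linalg.norm(q)                   # unit quaternion -> element of SU(2)
g = q[0] * np.eye(2) + 1j * (q[1] * sigma[0] + q[2] * sigma[1] + q[3] * sigma[2])

Ad = np.array([[0.5 * np.trace(si @ g @ sj @ g.conj().T).real
                for sj in sigma] for si in sigma])
assert np.allclose(Ad.T @ Ad, np.eye(3))          # Ad(g) is orthogonal
lam = 0.7                                          # any |lambda| < 1 works
assert np.linalg.svd(np.eye(3) - lam * Ad, compute_uv=False).min() > 0
```

Since the singular values of ${\rm Ad}(g)$ are all $1$, the smallest singular value of $1-\lambda\,{\rm Ad}(g)$ is at least $1-|\lambda|$, which is the content of the step from \ncon\ to $D_zg=0$.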
Given that \noki\ follows from the classical equations of motion,
the same sort of reasoning as above shows that the Lagrangians
$L_{k'}$ describe a family of topological field theories: a change
of metric can be compensated by a change of integration variable
with trivial Jacobian.
Now, to study the $k'$ dependence, look at
$${\partial L_{k'}\over \partial k'}=-{1\over 8\pi}\int_\Sigma
\Tr (g^{-1}d_Ag\wedge *g^{-1}d_Ag). \eqn\xoxo$$
By virtue of
\noki, this expression vanishes by the classical equations of motion,
so classically the family of theories governed by $L_{k'}$ is
constant.
{}From the above discussion, we know how we should proceed quantum
mechanically:
we should find a change of integration variable that compensates
for the $k'$ dependence of the Lagrangian. Such a change
of variable exists because of \noki; one can take explicitly
$$\delta A_z=-{\delta k'\over k+k'}\left(1-\lambda{\rm Ad}(g^{-1})\right)^{-1}
(D_zg\cdot g^{-1}).\eqn\coconon$$
Now, however, a difference arises from our earlier discussion. Because
$\delta A_z$ is a function of $A_z$, the Jacobian of the transformation
in \coconon\ is not necessarily 1; the integration measure in the path
integral may not be invariant.
The change of the integration measure is formally
$$\int_\Sigma \Tr {\delta\over\delta A_z(x)}\delta A_z(x).
\eqn\cucuin$$
Since
$${\delta \over\delta A_z(x)}\delta A_z(y)\sim \delta^2(x,y),
\eqn\ucuin$$
this is ill-defined, proportional to $\delta^2(0)$. In any
event, since \cucuin\ is the integral over $\Sigma$ of a local
quantity, any regularization should be of that form. Quantities
analogous to \cucuin\ are regularized (albeit in a slightly
{\it ad hoc} fashion) in [\blau], in deriving eqn. (6.22).
I will not repeat such a calculation here, but I will just explain
what general form the answer must have, by asking what is
the most general possible perturbation of the $G/G$ model.
\subsection{The Complete Family Of Theories}
\REF\maybe{S. Elitzur, A. Forge, and E. Rabinovici,
``On Effective Theories Of Topological Strings,''
Nucl. Phys. {\bf B388} (1992) 131.}
Let us simply go back to the gauged WZW model of $G/G$, and
ask what kind of perturbations it has (see also [\maybe]).
We permit the perturbation
of the Lagrangian to be the integral of an arbitrary local functional
of $g,A$, and a metric $h$ on $\Sigma$.
In this way we will obtain continuous perturbations of the
$G/G$ model, but forbid discrete perturbations (notably changes in $k$)
that cannot be described via the addition of a local functional
to the Lagrangian.
Perturbations that vanish
by the classical equations of motion are irrelevant, since they
can be eliminated by a change of integration variables as described
above. (Even if the integration measure is not invariant under
the change of variables, changes of variables can be used to eliminate
the perturbations that vanish by the equations of motion in favor
of other perturbations that do not so vanish.)
In classifying perturbations, we therefore can work modulo operators
that vanish by the classical equations of motion. Given \roomoo\
and \czonzo, this means that we can discard anything proportional to
$d_Ag$ or $F$.
The gauge invariant local operators, modulo operators that vanish
by the equations of motion, are generated by operators of the form
$U(g)$, with $U$ some function on $G$ that is invariant under conjugation.
Since $U(g)$ is a zero-form, to construct from it a perturbation
of the Lagrangian, we need also a metric $h$ on $\Sigma$, or at least
a measure $\mu$, such as the Riemannian measure. The curvature scalar
of $h$ will be called $R$. The most interesting
perturbations are
$$Q_U= \int_\Sigma d \mu \,\,U(g) \eqn\nxon$$
and
$$S_U= \int_\Sigma d^2z \sqrt h R \, \, U(g). \eqn\xon$$
\nxon\ breaks the diffeomorphism invariance of the $G/G$ model
down to invariance under the group of diffeomorphisms that preserve
the measure $\mu$. The $G/G$ model perturbed as in
\nxon\ is an interesting family of theories invariant under
area-preserving diffeomorphisms (and reducing for $k\rightarrow\infty$ to
two dimensional Yang-Mills theory, which has the same invariance).
Slightly less obviously, the $G/G$ model perturbed by \xon\ is still
a topological field theory. In fact, under an infinitesimal change in $h$,
$\sqrt h R$ changes by a total derivative (so that
$\int_\Sigma d^2z\sqrt h R $ is a topological invariant, a multiple
of the Euler characteristic). After integrating by parts,
the change in \xon\ under a change in $h$ is
$$\delta S_U\sim \int_\Sigma d^2z \sqrt h \left(
\delta h_{i'j'}-h_{i'j'}h^{kl}\delta h_{kl}\right) h^{i'i}h^{j'j}
D_iD_j U(g). \eqn\uponon$$
This vanishes by the equations of motion, since
$d_Ag=0$ implies $d\,U(g)=0$.
Hence one can compensate for $\delta S_U$ with a redefinition of
$A$ (and the Jacobian for the transformation is trivial, since
the requisite $\delta A$ is independent of $A$).
Other perturbations, such as $\int_\Sigma d^2z \sqrt h R^2 U(g)$,
are less interesting, since (i) they do not possess the large
invariances of the theories perturbed by $Q_U$ or $S_U$; (ii)
they vanish as a negative power of $t$ if the metric of $\Sigma$
is scaled up by $h\rightarrow t h$, $t\gg 1$. The latter property means
that in most applications of these systems, such perturbations
(if not prevented by (i)) can be conveniently eliminated.
Since the most general perturbation of the $G/G$ model that
preserves the diffeomorphism invariance is of the form of $S_U$,
the regularized version of \cucuin\ must be
equivalent to $S_U$ for some $U$. By the same token,
for any $k'$, the $G/G$ model
must be equivalent to the $L_{k'}$ model perturbed by some $S_U $
(with a $k'$-dependent $U$), and vice-versa. In particular,
setting $k'=0$, the $G/G$ model is equivalent to the
manifestly topologically invariant model with Lagrangian
$-ik\Gamma(g,A)$, perturbed by some $S_U$. The requisite $ U$'s in these
statements can in fact be computed at least heuristically
along the lines of the derivation of eqn. (6.22) of [\blau],
but I will not do so here.
\subsection{Interpretation}
Note that the conjugation-invariant function $U(g)$ that entered
above can be expressed as a linear combination of the characters
$\Tr_Vg$, as $V$ runs over irreducible representations of $G$.
These are precisely the operators whose correlation functions
were interpreted algebro-geometrically in \finalgo, so the
theories obtained by perturbing the $G/G$ model are all computable
in terms of the Verlinde algebra.
\section{Abelianization}
I will now briefly describe another interesting facet of the $G/G$
model, introduced in [\blau], which apart from its beauty will enter
at a judicious moment in \S4.
\foot{A computation reaching a rather similar conclusion is sketched
in [\gerasimov], but unfortunately the fermionic symmetry
$\delta $ introduced in equations (71)-(74) of that paper does not obey
$\delta^2=0$, which would be needed to justify the computation.
I will therefore concentrate on sketching the argument of [\blau].}
A recurring and significant theme in the theory of compact
Lie groups is the reduction to the maximal torus $T$, extended
by the Weyl group $W$. As explained
in [\blau], the $G/G$ model admits such a reduction to the
maximal torus. It is equivalent to the $T/T$ model
(that is, the $G/H$ model with both $G$ and $H$ set equal to $T$)
perturbed by $S_U$, where $U$ is a certain Weyl-invariant function
on $T$ and $S_U$ is defined in \xon.
At the level of precision explained in [\blau], the abelianization
of the model proceeds as follows. Pick a maximal torus $T\subset G$,
with Lie algebra ${\bf t}$.
Impose the ``gauge condition'' $g\in T$.
\foot{This is not really valid globally as a gauge condition.
One must think in terms of integrating over the fibers of the map
$G\rightarrow T/W$ that maps a group element to its conjugacy class.}
Decompose the connection
as $A=A_0+A_\perp$, where $A_0$ is the part of the connection valued
in ${\bf t}$, and $A_\perp$ is valued in the orthocomplement $\bf t_\perp$
of ${\bf t}$.
In this gauge the $G/G$ Lagrangian takes the form
$$L_{G/G}(g,A)=L_{T/T}(g,A_0) -{k\over 2\pi}\int_\Sigma d^2z\Tr \left(
A_{\perp,z}A_{\perp,\overline z}-A_{\perp,z}g A_{\perp,\overline z}g^{-1}\right).
\eqn\ombo$$
Here
$$L_{T/T}(g,A_0)=kI_{T/T}(g,A_0) \eqn\rombo$$
is the Lagrangian of the $T/T$ model, at level $k$.
The $G/G$ model, in this gauge, differs from the $T/T$ model
by the last term in \ombo, which involves $A_\perp$.
To reduce the $G/G$ model to something like the $T/T$ model,
one must ``integrate out'' $A_\perp$ to reduce to a description
involving $g$ and $A_0$ only. Happily, the $A_\perp$ integral is
Gaussian:
$$\int {D} A_\perp\exp\left({k\over 2\pi}\int_\Sigma d^2z\Tr\left(
A_{\perp,z} A_{\perp,\overline z}-A_{\perp,z}gA_{\perp,\overline z}g^{-1}\right)\right).
\eqn\umco$$
Such a Gaussian integral formally gives rise to a determinant
(as we briefly explain in \S3.5 below). In comparing the
$G/G$ model to the $T/T$ model, another determinant arises:
the Faddeev-Popov determinant comparing the volume of $\widehat G$ to
the volume of $\widehat T$. These two determinants are rather singular
but at the same time extremely simple, because the exponent in
\umco\ (like the corresponding expression in the Faddeev-Popov determinant)
is a local functional without derivatives.
In [\blau], Blau and Thompson calculate these determinants, with
a plausible regularization, and argue that the $G/G$ model is
equivalent to a $T/T$ model with Lagrangian
$$\widehat L_{T/T}(g,A_0)=(k+\rho)I_{T/T}(g, A_0)-{1\over 4
\pi}\int_\Sigma
d^2x \sqrt h R \log\det{}_{{\bf t}_\perp}(1-{\rm Ad}(g)). \eqn\polyp$$
Here $\rho$ is the dual Coxeter number of $G$, and $\det_{{\bf t}_\perp}
(1-{\rm Ad}(g))$ is the determinant of $1-{\rm Ad}(g)$, regarded
as an operator on ${\bf t}_\perp$. (This well-known Weyl-invariant function
enters in the Weyl character formula, where it has a somewhat
similar origin,
involving a comparison of the volumes of $G$ and $T$.)
In \S4.6, we will have occasion to use \polyp\ for the case that
$G=U(k)$. For that case, if $g={\rm diag}(\sigma_1,\dots,\sigma_k)$,
the eigenvalues of $1-{\rm Ad}(g)$ acting on ${\bf t}_\perp$
are the numbers $1-\sigma_i\sigma_j{}^{-1}$,
for $1\leq i,j\leq k$, $i\not= j$. Hence the correction term in \polyp\
becomes in this case
$$\Delta L =-{1\over 4\pi}\int_\Sigma d^2x \sqrt h R \left(\sum_{i\not= j}
\ln(\sigma_i-\sigma_j)-(k-1)\sum_i\ln\sigma_i\right). \eqn\lateruse$$
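The rewriting of the determinant used in \lateruse\ is easy to check
numerically. The following sketch (Python; the function names are mine,
purely illustrative) verifies that the product of the eigenvalues
$1-\sigma_i\sigma_j{}^{-1}$ over $i\not= j$ equals
$\prod_{i\not=j}(\sigma_i-\sigma_j)/\prod_i\sigma_i{}^{k-1}$, which is the
exponentiated form of the combination of logarithms above:

```python
import numpy as np

def det_one_minus_ad(sigma):
    """det(1 - Ad(g)) on t_perp for g = diag(sigma_1, ..., sigma_k) in U(k):
    the eigenvalues are 1 - sigma_i/sigma_j for i != j."""
    k = len(sigma)
    return np.prod([1 - sigma[i] / sigma[j]
                    for i in range(k) for j in range(k) if i != j])

def det_via_differences(sigma):
    """The same determinant rewritten as
    prod_{i != j}(sigma_i - sigma_j) / prod_i sigma_i^{k-1}."""
    k = len(sigma)
    num = np.prod([sigma[i] - sigma[j]
                   for i in range(k) for j in range(k) if i != j])
    return num / np.prod(sigma) ** (k - 1)

sigma = np.exp(1j * np.array([0.3, 1.1, 2.7, 5.2]))  # generic element of T
assert np.allclose(det_one_minus_ad(sigma), det_via_differences(sigma))
```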
Given the role of the $G/G$ model in counting non-abelian theta
functions, its reduction to a $T/T$ model is a
kind of abelianization of the problem of counting such functions.
In \S7.3 of [\blau], this is pursued further to obtain a completely
explicit count of non-abelian theta functions for $G=SU(2)$.
The role of the endpoint contributions in equation (7.14) of that
paper still deserves closer study.
\chapter{The Quantum Cohomology Of The Grassmannian}
\REF\frenkel{I. B. Frenkel, ``Representations of affine Lie algebras,
Hecke modular forms and Korteweg-de Vries equations,'' in
{\it Lie Algebras And Related Topics}, Lecture Notes In Mathematics
Vol. 933 (Springer-Verlag, 1982), p. 71; ``Representations Of Kac-Moody
Algebras And Dual Resonance Algebras,'' in {\it Lectures In
Applied Mathematics}, vol. 21 (American Mathematical Society, 1985), p. 325.}
\REF\segal{A. Pressley and G. Segal, {\it Loop Groups} (Oxford University
Press, 1986).}
\REF\schnitzer{E. J. Mlawer, S. G. Naculich, H. A. Riggs, and H. J.
Schnitzer, ``Group Level Duality of WZW Fusion Coefficients And
Chern-Simons Link Observables,'' Nucl. Phys. {\bf B352} (1991) 863;
S. G. Naculich, H. A. Riggs, and H. J. Schnitzer,
``Group Level Duality In WZW Models And Chern-Simons Theory,''
Phys. Lett. {\bf B246} (1990) 417.}
\REF\naka{T. Nakanishi and A. Tsuchiya, ``Level Rank Duality Of WZW Models
In Conformal Field Theory,'' Commun. Math. Phys. {\bf 144} (1992) 351.}
The Grassmannian $G(k,N)$ is the space of all $k$ dimensional
subspaces of a fixed $N$ dimensional complex vector space $V\cong
{\bf C}^N$. If we want to make the dependence on $V$ explicit, we write
$G_V(k,N)$.
By associating with a $k$ dimensional subspace of $ V$
the $N-k$ dimensional orthogonal subspace of the dual space $V^*$, we see that
$G_V(k,N)\cong G_{V^*}(N-k,N)$. The relation that we will explain
here and in \S4 between the
Verlinde algebra of $U(k)$ at level $(N-k,N)$\foot{That is,
at levels $N-k$ and $N$ for the $su(k)$ and $u(1)$ factors
in the Lie algebra of $U(k)$.} and the quantum cohomology of
$G(k,N)$ therefore implies that the Verlinde algebra
of $U(k)$ at level $(N-k,N)$ coincides with that of $U(N-k)$
at level $(k,N)$. This is a surprising fact that had been
noted earlier. (For instance, see [\frenkel] and [\segal, p. 212, Proposition
(10.6.4)] for $k\leftrightarrow N-k$ symmetry of loop group representations
and [\schnitzer,\naka] for such symmetry of the Verlinde algebra.)
One way to describe $G(k,N)$ is as follows. Let $B$ be the space of all
linearly independent $k$-plets $e_1,\dots,e_k\subset V$. A point in $B$
labels a $k$-plane in $V$ together with a basis. The group $GL(k,{\bf C})$ acts on $B$ by
change of basis, $e_i\rightarrow \sum_j W_i{}^je_j,\,\,\,W\in GL(k,{\bf C})$. Since
$GL(k,{\bf C})$ acts simply transitively on the set of bases of any given $k$-plane, upon
dividing by $GL(k,{\bf C})$ we precisely forget the basis and therefore
$$G(k,N)=B/GL(k,{\bf C}). \eqn\furtful$$
$B$ is dense and open in the $k$-fold product
${\bf C}^{kN}=V\times V\times \dots \times V$
(since the generic $k$-plet $e_1,\dots , e_k\subset V$ is a basis
of $V$), so $G(k,N)$ is a quotient of a dense open subset of ${\bf C}^{kN}$
by $GL(k,{\bf C})$. In fact, $G(k,N)$ is the good quotient of ${\bf C}^{kN}$
by $GL(k,{\bf C})$ that would be constructed in geometric invariant theory.
There is also a symplectic version of this, which will be more relevant
in what follows. Pick a Hermitian metric on $V$ so that $V^k={\bf C}^{kN}$
gets a metric and a symplectic structure. In linear coordinates
$\phi^{is}$, $i=1\dots k$, $s=1\dots N$ on ${\bf C}^{kN}$, the symplectic
form is
$$\omega=i\sum_{i,s}d\phi^{is}\wedge d \overline\phi_{is}.\eqn\sympst$$
$\omega$ is not invariant under $GL(k,{\bf C})$, but it is invariant under
a maximal compact subgroup $U(k)\subset GL(k,{\bf C})$.
To this symplectic action is associated a ``moment map'' $\mu$
from ${\bf C}^{kN}$ to the dual of the Lie algebra of $U(k)$, given by the
angular momentum functions that generate $U(k)$ via Poisson brackets.
In this case we can take the moment map to be
$$\mu:(e_1,\dots,e_k)\rightarrow \{(e_i,e_j)-\delta_{ij}\}.\eqn\morfo$$
In other words, $\mu=0$ precisely if the vectors $e_1,\dots, e_k$
are orthonormal.
Every $k$-plane has an orthonormal basis, unique up to the action of
$U(k)$, so
$$G(k,N)=\mu^{-1}(0)/U(k). \eqn\tuggo$$
This is the description of $G(k,N)$ that we will actually use.
We will also want to remember one fact: $\mu$ is a quadratic function
on the real vector space underlying ${\bf C}^{kN}$. In components,
$$\mu^i{}_j=\sum_s\phi^{is}\overline\phi_{js}-\delta^i{}_j.\eqn\tuffo$$
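In matrix form, if $\phi$ denotes the $k\times N$ matrix whose rows are the
vectors $e_i$, then \tuffo\ reads $\mu=\phi\phi^\dagger-1$. Here is a
minimal numerical sketch (Python; the variable names are mine) of the
statement that $\mu=0$ picks out precisely the orthonormal $k$-frames:

```python
import numpy as np

def moment_map(phi):
    """mu^i_j = sum_s phi^{is} conj(phi)_{js} - delta^i_j for a k x N
    complex matrix phi whose rows are the vectors e_1, ..., e_k."""
    return phi @ phi.conj().T - np.eye(phi.shape[0])

# mu = 0 precisely when the rows are orthonormal: take k columns of a
# unitary matrix as an orthonormal k-frame in C^N.
k, N = 2, 5
rng = np.random.default_rng(0)
z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
u, _ = np.linalg.qr(z)
phi = u[:, :k].T                                  # k orthonormal vectors in C^N
assert np.allclose(moment_map(phi), 0)
assert not np.allclose(moment_map(2 * phi), 0)    # rescaled frame: mu != 0
```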
\section{Cohomology}
Now we need to discuss the cohomology of $G(k,N)$. We begin with the
classical cohomology. Over $G(k,N)$ there is a ``tautological''
$k$-plane bundle $E$ (whose fiber over $x\in G(k,N)$ is the $k$
plane in $V$ labeled by $x$) and a complementary bundle $F$
(of rank $N-k$):
$$0\rightarrow E\rightarrow V\cong {\bf C}^N\rightarrow F \rightarrow 0. \eqn\compbun$$
Obvious cohomology classes of $G(k,N)$ come from Chern classes.
We set
$$ x_i=c_i(E^*), \eqn\mipp$$
where $*$ denotes the dual. (It is conventional to use $E^*$ rather
than $E$, because $\det E^*$ is ample.)
This is practically where Chern classes come from, as $G(k,N)$ for
$N\rightarrow \infty$ is the classifying space of the group $U(k)$.
It is known that the $x_i$ generate $H^*(G(k,N))$ with certain relations.
The relations come naturally from the existence of the complementary
bundle $F$ in \compbun. Let $y_j=c_j(F^*)$, and let
$c_t(\cdot)=1+tc_1(\cdot)+t^2c_2(\cdot)+\dots$. Then
as a consequence of \compbun,
$$ c_t(E^*)c_t(F^*)=1, \eqn\ombun$$
and $H^*(G(k,N))$ is generated by the $x_i,y_j$ with relations
\ombun.
If one wishes,
these relations can be partially solved to express the $y_j$ in terms
of the $x_i$ (or vice-versa).
\REF\instantons{M. Dine, N. Seiberg, X.-G. Wen, and E. Witten,
``Non-Perturbative Effects On The String World Sheet, I, II,''
Nucl. Phys. {\bf B278} (1986) 769, {\bf B289} (1987) 319.}
\REF\gromov{M. Gromov, ``Pseudo-Holomorphic Curves In Symplectic Manifolds,''
Invent. Math. {\bf 82} (1985) 307.}
Now we come to the quantum cohomology, which originally entered in
string theory, where [\instantons]
it enters the theory of the Yukawa couplings
(which are related to quark and lepton masses), and in Floer/Gromov
theory of symplectic manifolds [\gromov]. Additively, the quantum cohomology
is the same as the classical one, but the ring structure is different.
Giving a ring structure on $W=H^*(G(k,N))$ is the same as giving the
identity $1\in W$ and a cubic form
$$(\alpha,\beta,\gamma)=\int_{G(k,N)}\alpha\cup\beta\cup\gamma.\eqn\wurko$$
The cubic form determines a metric
$$g(\alpha,\beta)=(\alpha,\beta,1), \eqn\bomxo$$
and given a metric the cubic form $W\times W\times W\rightarrow {\bf C}$
determines a ring structure $W\times W\rightarrow W$.
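Concretely, in a basis $\{e_a\}$ of $W$ the product is recovered from the
cubic form by contracting with the inverse metric,
$(e_ae_b)^d=\sum_c C_{abc}(g^{-1})^{cd}$. A small illustrative sketch
(Python; the basis conventions and names are mine), worked out for the
classical cohomology of ${\bf CP}^1$ with basis $(1,x)$:

```python
import numpy as np

def ring_product(C, g):
    """Structure constants from the cubic form C[a,b,c] and the metric
    g[a,b] = C[a,b,identity]: (e_a e_b)^d = sum_c C[a,b,c] (g^{-1})[c,d]."""
    return np.einsum('abc,cd->abd', C, np.linalg.inv(g))

# Toy example: classical cohomology of CP^1, basis (1, x) with x = c_1,
# int_{CP^1} x = 1 and x^2 = 0, identity at index 0.
C = np.zeros((2, 2, 2))
C[0, 0, 1] = C[0, 1, 0] = C[1, 0, 0] = 1   # (1,1,x) = 1 in all orderings
g = C[:, :, 0]                              # g(a,b) = (a,b,1)
m = ring_product(C, g)
assert np.allclose(m[0, 1], [0, 1])         # 1 * x = x
assert np.allclose(m[1, 1], [0, 0])         # x * x = 0 classically
```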
So I will explain the quantum cohomology ring by describing the quantum
cubic form. To this aim, let $\Sigma$ be a closed oriented two-manifold
(which in string theory would be the ``world-sheet,'' analogous to the
world-line of a particle). Let $P\in \Sigma$. Let
${\cal W}={\rm Maps}(\Sigma,G(k,N))$. Evaluation at $P$ gives
a map
$${\cal W}\underarrow{{\rm ev}(P)} G(k,N), \eqn\evmap$$
by which $\alpha\in H^*(G(k,N))$ pulls back to $\widehat\alpha(P)
={\rm ev}(P)^*(\alpha)\in H^*({\cal W})$.
Now pick a complex structure on $\Sigma$, and let ${\cal M}\subset
{\cal W}$ be the space of holomorphic maps of $\Sigma$ to $G(k,N)$.
We have ${\cal M}=\cup_\lambda {\cal M}_\lambda$, with ${\cal M}_\lambda$
being the connected components of ${\cal M}$.
In the case
of the Grassmannian, the components ${\cal M}_
\lambda$ are determined by the degree, defined as follows.
If $\eta=c_1(E^*)$, which generates $H^2(G(k,N),{\bf Z})$,
and $\Phi:\Sigma\rightarrow G(k,N)$ is such that
$\int_\Sigma\Phi^*(\eta)=d$, then $\Phi$ is said to be of degree $d$.
Since $\det E^*$ is ample, holomorphic curves only exist for $d\geq 0$.
The quantum cubic form
is defined as follows (ignoring analytical details and tacitly assuming that
the ${\cal M}_\lambda$ are smooth and compact).
Let $\Sigma$ be of genus zero. Let $P,Q,R$
be three points in $\Sigma$. Then for $\alpha,\beta,\gamma\in H^*(G(k,N))$,
we set
$$\langle \alpha,\beta,\gamma\rangle=\sum_d e^{-dr}\cdot \int_{{\cal M}_d}
\widehat\alpha(P)\cup \widehat\beta(Q)\cup \widehat\gamma(R),
\eqn\tangle$$
with $r$ a real parameter.
In what sense does $\langle\alpha,\beta,\gamma\rangle$ generalize the
classical cubic form? One component of ${\cal M}$, namely ${\cal M}_0$,
consists of constant maps $\Sigma\rightarrow G(k,N)$. This component is a copy
of $G(k,N)$ itself. Under that identification the evaluation maps
at $P,Q$, and $R$ all coincide with the identity, so the contribution
of ${\cal M}_0$ to $\langle\alpha,\beta,\gamma\rangle$ coincides with
the classical cubic form defined as in \wurko. The quantum cubic
form differs from the classical one by contributions of the rational
curves of higher degree. These contributions are small for $r>>0$.
In practice, for dimensional reasons, for every given $\alpha,\beta,\gamma$
of definite dimension, the sum in \tangle\ receives a contribution from
at most one value of $d$. (This is in marked contrast to the much-studied
case of a Kahler manifold of $c_1=0$, where every positive $d$ can contribute
to the same correlation function.) Therefore, no information is lost
if we set $r=0$, and that is what we will do in the rest of this section.
It follows from the definition (for any Kahler manifold, not just
the Grassmannian) that
$$\langle \alpha,\beta,1\rangle =(\alpha,\beta,1) \eqn\nurgo$$
and thus that the classical and quantum metrics coincide.
This is equivalent to the statement that rational maps of positive
degree do not contribute to $\langle\alpha,\beta,1\rangle$.
In fact (as $\widehat 1(R)=1$),
the contribution of a component ${\cal M}_\lambda$
of positive degree is
$$\int_{{\cal M}_\lambda}\widehat\alpha(P)\cup\widehat\beta(Q).
\eqn\ongo$$
A group $F\cong{\bf C}^*$
acts on ${\bf CP}^1$ leaving fixed the points $P$ and $Q$.
$F$ acts freely on ${\cal M}_\lambda$, if ${\cal M}_\lambda$ is
a component of rational maps of positive degree. The classes
$\widehat\alpha(P)$ and $\widehat\beta(Q)$ in the cohomology
of ${\cal M}_\lambda$ are pullbacks from ${\cal M}_\lambda/F$.
Therefore, on dimensional grounds \ongo\ vanishes.
\section{The Grassmannian}
Let us now work out the quantum cohomology ring of the Grassmannian.
As a preliminary, we note that the contribution of a moduli
space ${\cal M}_d$
to the quantum cubic form obeys an obvious dimensional condition:
it vanishes unless the sum of the dimensions of $\alpha,\beta,\gamma$
equals the (real) dimension of ${\cal M}_d$.
The component ${\cal M}_d$ of genus zero holomorphic curves of
degree $d$ in $G(k,N)$ has (according to the Riemann-Roch theorem)
complex dimension $\dim_{{\bf C}} G(k,N)+dN$. The fact that this
depends on $d$ means that the dimensional condition depends on $d$
and therefore that the quantum cohomology ring is not ${\bf Z}$-graded.
However, the fact that the real dimensions are all equal modulo
$2N$ means that the cohomology is ${\bf Z}/2N{\bf Z}$-graded.
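The dimensional condition amounts to simple arithmetic: a component
${\cal M}_d$ contributes only if the sum of the three real degrees equals
$2(\dim_{\bf C}G(k,N)+dN)$. The following sketch (Python; the helper
function is mine) returns the unique contributing degree, if any:

```python
def contributing_degree(k, N, degs):
    """The unique instanton degree d >= 0 (if any) allowed by the dimension
    condition deg(alpha) + deg(beta) + deg(gamma) = 2*(dim_C G(k,N) + d*N);
    degs are the three real (de Rham) degrees."""
    excess = sum(degs) - 2 * k * (N - k)
    if excess >= 0 and excess % (2 * N) == 0:
        return excess // (2 * N)
    return None

# A top-degree triple for the classical cubic form: d = 0.
assert contributing_degree(2, 5, (0, 2, 10)) == 0
# The triple (c_k(E^*), c_{N-k}(F^*), [p]) used later to compute a: d = 1.
assert contributing_degree(2, 5, (2 * 2, 2 * 3, 2 * 2 * 3)) == 1
```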
\REF\yaulect{E. Witten, ``Two Dimensional Gravity And Intersection
Theory On Moduli Space,'' Surveys in Differential Geometry
{\bf 1} (1991) 243.}
Returning to the relations $c_t(E^*)c_t(F^*)=1$ that define the cohomology
of the Grassmannian, we see that (as the left hand side is {\it a priori}
a polynomial in $t$ of degree $N$) the classical relations are of dimension
$0,2,4,\dots, 2N$. To a classical relation of degree $2j$, the rational
curves of degree $d>0$ will add a correction of degree $2j-2dN$; this
therefore must vanish unless $j=N$ and $d=1$. Therefore, of the defining
relations of the cohomology, the only one subject to a quantum correction
is the ``top'' relation $c_k(E^*)c_{N-k}(F^*)=0$, and the correction is
an element of $H^*(G(k,N))$
of degree 0 and hence simply an integer. So the non-trivial effect of
the quantum corrections will be simply to generate a relation of the form
$$ c_k(E^*)c_{N-k}(F^*)= a, \eqn\rellform$$
for some $a\in {\bf Z}$.
Moreover, $a$ is to be computed by examining rational curves in the
Grassmannian of degree 1. We will find that $a=(-1)^{N-k}$, so
the quantum cohomology ring can be described by the relations
$$c_t(E^*)c_t(F^*)=1+ (-1)^{N-k}t^N. \eqn\ellform$$
This correction has been described previously [\yaulect] in the special case
of $k=1$ (complex projective space).
Despite its simple form, the correction has a dramatic effect:
while the classical cohomology ring is nilpotent (in the sense that every
element of positive degree is nilpotent), the quantum cohomology ring
is semi-simple. This is evident in its Landau-Ginzburg description
[\lerche,\intrilligator,\gepner] which we consider presently.
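For $k=1$ the contrast can be made completely explicit: classically
$x^N=0$, while quantum mechanically $x^N=1$, whose $N$ distinct roots split
the ring into one dimensional pieces. A small numerical illustration
(Python; the companion-matrix realization is my choice, not from the text):

```python
import numpy as np

# CP^{N-1} = G(1,N): multiplication by x on C[x]/(x^N - c) in the basis
# 1, x, ..., x^{N-1}, with c = 0 (classical) or c = 1 (quantum).
N = 5

def companion(c):
    """Matrix of multiplication by x on C[x]/(x^N - c)."""
    M = np.zeros((N, N))
    M[1:, :-1] = np.eye(N - 1)   # x * x^j = x^{j+1} for j < N-1
    M[0, -1] = c                 # x * x^{N-1} = c * 1
    return M

classical = companion(0.0)
quantum = companion(1.0)
assert np.allclose(np.linalg.matrix_power(classical, N), 0)  # x is nilpotent
eig = np.linalg.eigvals(quantum)                             # N-th roots of 1
assert min(abs(a - b) for i, a in enumerate(eig)
           for b in eig[i + 1:]) > 1e-8                      # all distinct
```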
\subsection{Computation Of $a$}
For $X$ a submanifold of $G(k,N)$, let $[X]$ be its Poincar\'e dual
cohomology class. For instance, for $p$ a point in the Grassmannian,
$[p]$ is a top dimensional class, obeying $g(1,[p])=1$. (It does
not matter here if the metric $g(~,~)$ is defined using the classical
or quantum cubic form, since we have seen that
these determine the same metric.)
The definition of the quantum ring structure from the quantum cubic
form is such that $a=c_k(E^*)c_{N-k}(F^*)$ can be computed
as
$$ a =\langle c_k(E^*),c_{N-k}(F^*),[p]\rangle. \eqn\ormo$$
$c_k(E^*)$ equals the Poincar\'e dual of the zero locus
of a generic section of $E^*$. The dual of the exact sequence
\compbun\ reads
$$ 0 \rightarrow F^*\rightarrow V^* \rightarrow E^* \rightarrow 0, \eqn\yiro$$
with $V^*$ a fixed $N$ dimensional complex vector space. The image
in $E^*$ of any fixed vector $w\in V^*$ gives a holomorphic section $\overline w$
of $E^*$. If $e_1,\dots ,e_N$ is a basis of $V$, and
$w$ is the linear form that maps $\sum_{i=1}^N r^ie_i$ to $r^1$,
then the restriction of $w$ to $E\subset V$ vanishes precisely if
$E$ consists only of vectors with $r^1=0$. This is a copy of $G(k,N-1)$
which we will call $X_w$. Since $\overline w$ has only a simple zero along
$X_w$ (any $E$ can be perturbed in first order to get one for which
$\overline w\not= 0$), we have
$$c_k(E^*)=[X_w].\eqn\juniper$$
For future use, let us note that
$$\int_{G(k,N)}c_k(E^*)^{N-k} = 1 . \eqn\conc$$
Indeed, we can pick $N-k$ holomorphic sections of $E^*$ whose
zero sets intersect transversely at a single point. To do so,
let $w_i$ for $i=1,\dots, N-k$ be the linear form on $V$
that maps $\sum_{j=1}^Nr^je_j$ to $r^i$. Then the $\overline w_i$
have the required properties, vanishing precisely for $E$ the
$k$-plane spanned by $e_{N-k+1},\dots, e_N$.
Now let us compute $c_{N-k}(F^*)=(-1)^{N-k}c_{N-k}(F)$. Under the
holomorphic surjection $V\rightarrow F$, any vector $v\in V$ projects
to a holomorphic section $\overline v$ of $F$. $\overline v$ vanishes precisely
if $v\in E$; let $Y_v=\{E\in G(k,N)|v\in E\}$. Then $\overline v$ has a simple
zero along $Y_v$, so $c_{N-k}(F)=[Y_v]$ and therefore
$$c_{N-k}(F^*) = (-1)^{N-k} [Y_v]. \eqn\berry$$
Rational curves of degree one in $G(k,N)$
can all be described as follows. Let $(s,t)$ be
homogeneous coordinates for ${\bf CP}^1$. For $r_1,\dots,
r_k$ a set of $k$ linearly independent vectors in the $N$ dimensional
vector space $V$, let $\{r_1,\dots,r_k\}$ be the $k$-plane
that they span. Then a rational curve of degree one in $G(k,N)$ is of the
form
$$(s,t)\rightarrow \{sr_0+tr_1,r_2,r_3,\dots, r_{k}\}, \eqn\witho$$
with $r_0,\dots, r_{k}$ being linearly independent vectors in $V$.
We have to calculate
$$a=(-1)^{N-k}\int_{{\cal M}_1}\widehat{[X_w]}(P)\cup \widehat{[Y_v]}(Q)\cup
\widehat{[p]}(R). \eqn\hdndn$$
Here ${\cal M}_1$ is the space of degree 1 rational curves,
$w\in V^*$, $v\in V$, and $P,Q,R$ are points in ${\bf CP}^1$.
If everything is sufficiently generic, $a$ is simply the number of degree
one curves that pass through $X_w$ at $P$, through $Y_v$ at $Q$, and
through $p$ at $R$.
We choose $p$ to be an arbitrary point in $G(k,N)$ corresponding
to a $k$-plane spanned by vectors $v_1,\dots ,v_k$.
We take $v=v_0$ to be linearly independent of these, and
we pick $w$ to be any linear form that maps $v_0$ to 1, $v_1$ to $-1$,
and the $v_j$ with $j>1$ to 0.
{}From the explicit description of degree one curves in \witho, we see that
the $k$-planes represented by points in the image of such a curve
are subspaces of a common $k+1$-plane. For a curve that passes
through $Y_v$ at $Q$ and through $p$ at $R$, this is clearly
the $k+1$-plane $W$ spanned by $v_0,v_1,\dots ,v_k$.
Requiring that the curve pass also through $X_w$ at $P$
determines the curve uniquely. For instance, if $Q=(1,0)$,
$R=(0,1)$, and $P=(1,1)$, then the degree 1 curve must be
$$(s,t)\rightarrow \{v_0s+v_1t,v_2,\dots,v_k\}. \eqn\lovon$$
The subvarieties $\widehat {[X_w]}(P)$, $\widehat {[Y_v]}(Q)$,
and $\widehat {[p]}(R)$ of ${\cal M}_1$ meet transversely at that point,
so we get finally
$$a=(-1)^{N-k} \eqn\humpback$$
as claimed above.
\subsection{Landau-Ginzburg Formulation}
Write
$$c_t(E^*)=\sum_{i=0}^kx_it^i, \eqn\jurry$$
with $x_i=c_i(E^*)$. Define functions $y_j(x_i), \,\,j\geq 0$ by
$$ {1\over c_t(E^*)}=\sum_{j\geq 0}y_j t^j. \eqn\purry$$
Classically, the cohomology ring of $G(k,N)$ is described by the
relations
$$ y_j=0, \,\,{\rm for}\,\,N-k+1\leq j\leq N. \eqn\riflo$$
Let
$$-{\rm log} c_t(E^*)=\sum_{r\geq 0}U_r(x_1,\dots,x_k)t^r. \eqn\ulff$$
So
$$-t^jc_t(E^*)^{-1}=-{\partial\over\partial x_j}{\rm log}c_t(E^*)
=\sum_{r\geq 0}{\partial U_r\over\partial x_j}t^r. \eqn\hulff$$
Hence if
$$ W_0=(-1)^{N+1}U_{N+1} \eqn\pullf$$
then
$${\partial W_0\over\partial x_i} = (-1)^Ny_{N+1-i},\,\,{\rm for}\,\,
1\leq i\leq k. \eqn\bulff$$
So the defining relations of the classical cohomology take the form
$$ d W_0 = 0. \eqn\iflo$$
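For a small case one can verify \bulff\ by machine. The following sketch
(Python/sympy; the choice $k=2$, $N=4$ is mine) expands $1/c_t(E^*)$ and
$-\log c_t(E^*)$ as series in $t$ and checks that
$\partial W_0/\partial x_i=(-1)^Ny_{N+1-i}$, so that $dW_0=0$ reproduces
the classical relations \riflo:

```python
import sympy as sp

k, N = 2, 4  # illustrative small case
t = sp.symbols('t')
xs = sp.symbols(f'x1:{k + 1}')
ct = 1 + sum(xs[i] * t ** (i + 1) for i in range(k))          # c_t(E^*)
ys = sp.expand(sp.series(1 / ct, t, 0, N + 2).removeO())      # sum_j y_j t^j
U = sp.expand(sp.series(-sp.log(ct), t, 0, N + 2).removeO())  # sum_r U_r t^r
W0 = (-1) ** (N + 1) * U.coeff(t, N + 1)                      # eq. \pullf
for i in range(1, k + 1):
    assert sp.simplify(sp.diff(W0, xs[i - 1])
                       - (-1) ** N * ys.coeff(t, N + 1 - i)) == 0
```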
To obtain in a similar way the quantum cohomology ring, set
$$ W = W_0 +(-1)^kx_1. \eqn\xiflo$$
The relations $dW=0$ now give
$$y_{N+1-i}+(-1)^{N-k}\delta_{i,1}=0. \eqn\hugiflo$$
Therefore the relation $c_t(E^*)\cdot(\sum_jy_jt^j)=1$ becomes
$$ \left(\sum_{i=0}^kx_it^i\right)\cdot\left(\sum_{j=0}^{N-k}y_jt^j
-(-1)^{N-k}t^N+O(t^{N+1})\right)=1. \eqn\cugiflo$$
Keeping only the terms of order at most $t^N$, this becomes
$$\left(\sum_{i=0}^kx_it^i\right)\cdot\left(\sum_{j=0}^{N-k}y_jt^j\right)=
1 +(-1)^{N-k}t^N. \eqn\pubiflo$$
This coincides with the quantum cohomology ring as described in \ellform.
The function $W$ is called the Landau-Ginzburg potential.
If we introduce the roots of the Chern polynomial
$$c_t(E^*)=\prod_{i=1}^k(1+\lambda_it), \eqn\oddball$$
then $W$ can be written
$$ W(\lambda_1,\dots,\lambda_k)=\sum_{j=1}^k\left(
{\lambda_j{}^{N+1}\over N+1} +(-1)^k\lambda_j\right). \eqn\graddy$$
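As a consistency check (Python/sympy sketch; the case $k=2$, $N=4$ is my
choice), one can verify symbolically that $W_0+(-1)^kx_1$, rewritten in the
Chern roots, equals $\sum_j\bigl(\lambda_j{}^{N+1}/(N+1)+(-1)^k\lambda_j\bigr)$,
so that $dW/d\lambda_a=\lambda_a{}^N+(-1)^k$:

```python
import sympy as sp

k, N = 2, 4  # illustrative small case
t = sp.symbols('t')
lam = sp.symbols(f'l1:{k + 1}')
ct = sp.Mul(*[1 + a * t for a in lam])                        # c_t(E^*)
U = sp.expand(sp.series(-sp.log(ct), t, 0, N + 2).removeO())  # sum_r U_r t^r
W = (-1) ** (N + 1) * U.coeff(t, N + 1) + (-1) ** k * sum(lam)
W_roots = sum(a ** (N + 1) / (N + 1) + (-1) ** k * a for a in lam)
assert sp.simplify(W - W_roots) == 0
assert all(sp.simplify(sp.diff(W, a) - (a ** N + (-1) ** k)) == 0
           for a in lam)
```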
Now let us discuss integration.
Integration defines a linear functional on the top dimensional cohomology
of $G(k,N)$, which is the cohomology in real dimension $2k(N-k)$:
$$f\rightarrow I( f)= \int_{G(k,N)}f. \eqn\icco$$
Since $H^{2k(N-k)}(G(k,N))$ is one dimensional, any two linear
functionals on that space are proportional. Such a linear functional
can be obtained as follows in the Landau-Ginzburg description.
We examine the classical case first.
If we consider $x_i=c_i(E^*)$ to be of degree $i$, then the top
dimensional cohomology consists of polynomials $f$ of degree $k(N-k)$
modulo the ideal generated by $\partial W_0/\partial x_i,\,\,\,i=1\dots k$.
Consider the linear form on homogeneous polynomials of degree $k(N-k)$
defined by
$$ J(f)= (-1)^{k(k-1)/2}\left({1\over 2\pi i}\right)^k\oint dx_1
\dots dx_k { f\over \prod_{i=1}^k \partial W_0/\partial x_i}. \eqn\hudd$$
The integration contour is a product of circles enclosing the poles
in the denominator. $J(f)$ annihilates the ideal generated by $dW_0$,
since if $f$ is divisible by, say, $\partial W_0/\partial x_i$, then
one of the denominators in \hudd\ is canceled and one of the contour
integrals vanishes.
For $f$ of degree $k(N-k)$, the integral in \hudd\ is unaffected
if $W_0$ is replaced by $W$; this follows from taking the contour
integral on \hudd\ to be a large contour. The integral can then
be evaluated as a simple sum of residues:
$$J(f)=(-1)^{k(k-1)/2}
\sum_{dW=0}{f\over \det\left({\partial^2
W\over\partial x_i\partial x_j}\right)}.
\eqn\werewolf$$
It is convenient to change variables from the $x_i$ to the $\lambda_a$.
One has
$$\left.
\det\left({\partial^2W\over\partial x_i\partial x_j}\right)\right|_{dW=0}
=\det\left({\partial^2W\over\partial \lambda_a\partial \lambda_b}
\right)\cdot\det\left({\partial\lambda_a\over\partial x_i}\right)^2.
\eqn\erewor$$
The Jacobian in the change of variables from $x_i$ to $\lambda_a$ is
the Vandermonde determinant:
$$\det\left({\partial x_i\over\partial\lambda_a}\right)^2
=\prod_{a<b}(\lambda_a-\lambda_b)^2. \eqn\jerwor$$
So
$$J(f)={(-1)^{k(k-1)/2}\over k!}\sum_{dW(\lambda_a)=0}
{f\cdot\prod_{a<b}(\lambda_a-\lambda_b)^2\over \prod_a d^2W/d\lambda_a{}^2}
={(-1)^{k(k-1)/2}\over k!(2\pi i)^k}\oint d\lambda_a{f\cdot\prod_{a<b}
(\lambda_a-\lambda_b)^2\over \prod_a dW/d\lambda_a}. \eqn\jupper$$
The integration contour in each $\lambda_a$ integral is a circle
running counterclockwise around the origin.
A factor of $k!$ comes here because, as the map from the $\lambda_a$
to the $x_i$ is of degree $k!$, each critical point of $W(x_i)$
corresponds to $k!$ critical points of $W(\lambda_a)$.
To verify that $J(f)$ is correctly normalized to coincide with $I(f)$,
we set $f=c_k(E^*)^{N-k}=\prod_a\lambda_a^{N-k}$. According to
\conc, $I(f)=1$. To verify that $J(f)=1$, we use the contour
integral version of \jupper. In the denominator we can replace
$dW/d\lambda_a=\lambda_a{}^N+(-1)^k$ by $\lambda_a{}^N-\lambda_a{}^{N-k}$
without changing the behavior on a large contour enough to affect the
integral. Then
$$J(f)={(-1)^{k(k-1)/2}\over k!(2\pi i)^k}\oint d\lambda_1\dots
d\lambda_k{\prod_{a<b}(\lambda_a-\lambda_b)^2\over
\prod_c(\lambda_c^k-1)}. \eqn\bulb$$
The integral is easily done as a sum of residues. The poles
are at $\lambda_a{}^k=1$, for $1\leq a\leq k$. Because of the
Vandermonde determinant in the numerator, the $\lambda_a$ must
be distinct. Up to a permutation, one must have $\lambda_a=\exp(2\pi i a/k)$;
evaluating the residue at this value of the $\lambda_a$ and including
a factor of $k!$ from the sum over permutations, one gets $J(f)=1$.
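This residue evaluation can be cross-checked numerically for small $k$.
The sketch below (Python; the function name is mine) sums the residues of
\bulb\ directly over ordered $k$-tuples of distinct $k$-th roots of unity:

```python
import math
import numpy as np
from itertools import permutations

def J_top(k):
    """Residue sum for f = c_k(E^*)^{N-k}: after the cancellations in the
    text, the poles sit at the k-th roots of unity, and
    J(f) = (-1)^{k(k-1)/2}/k! * sum over tuples of distinct roots of
    Vandermonde^2 / prod_c (k lambda_c^{k-1})."""
    roots = np.exp(2j * np.pi * np.arange(k) / k)
    total = 0j
    for lam in permutations(roots):
        vand = np.prod([(lam[a] - lam[b]) ** 2
                        for a in range(k) for b in range(a + 1, k)])
        total += vand / np.prod([k * l ** (k - 1) for l in lam])
    return (-1) ** (k * (k - 1) // 2) * total / math.factorial(k)

for kk in (1, 2, 3, 4):
    assert abs(J_top(kk) - 1) < 1e-8   # J(f) = 1, independent of k
```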
\section{Quantum Field Theory Interpretation}
Physicists would never actually begin with the definition that I have
given above for the quantum cubic form.
Rather, everything begins with considerations on the
function space ${\cal W}={\rm Maps}(\Sigma,G(k,N))$. Physicists
are mainly interested in quantum field theory, which is conveniently
formulated in terms of integration over spaces such as ${\cal W}$.
For instance, let $\Sigma$ be a complex Riemann surface with Hodge duality
operator $*$, pick a Hermitian
metric on $G(k,N)$ (such as the natural $U(k)$-invariant metric),
and for a map $\Phi:\Sigma\rightarrow G(k,N)$, set
$$L(\Phi)=\int_\Sigma(d\Phi,*d\Phi). \eqn\hudoxx$$
Then in the ``bosonic sigma model with target space $G(k,N)$'' we consider
integrals such as
$$\int_{{\cal W}}D\Phi\,\,\exp\left(-{L(\Phi)\over\lambda}\right),
\eqn\burdoo$$
with $\lambda$ a positive real number. This is not complete pie in the sky.
For instance, to make the definition more concrete,
one can triangulate $\Sigma$ and make a finite dimensional approximation
to the integral. Then the problem is to adjust $\lambda$, while
refining the triangulation, so that the given integral (and related
ones) converges as the triangulation is infinitely refined.
For a homogeneous space of positive curvature such as the Grassmannian,
one knows at a physical level of rigor precisely how to do this:
$\lambda$ must be taken to vanish in inverse proportion to the logarithm
of the number of vertices in the triangulation. This is a consequence
of a phenomenon known as ``asymptotic freedom,'' which plays a crucial
role in the theory of the strong interactions in four
dimensions; sigma models with targets such as the Grassmannian were
intensively studied in the late 1970's and early 1980's as simple
cases of asymptotically free quantum field theories. Asymptotic
freedom actually plays an important role in our story, since it leads
to the mass gap that will be essential in \S4.
\subsection{Supersymmetric Sigma Models}
What we actually want to do is to transfer the integral over
the space of holomorphic maps that defined the quantum cohomology
ring,
$$\langle \alpha,\beta,\gamma\rangle=\sum_\lambda\int_{{\cal M}_\lambda}
\widehat\alpha(P)\cup \widehat\beta(Q)\cup \widehat\gamma(R),\eqn\utangle$$
to an integral over the space ${\cal W}$ of all maps of $\Sigma$ to
$G(k,N)$.
Reversing the usual logic, this is done as follows. The condition that
$\Phi:\Sigma\rightarrow G(k,N)$ is holomorphic is an equation
$$0 = \left(\overline\partial\Phi\right)^{1,0} \eqn\muxxo$$
which asserts the vanishing of a section
$$s:\Phi\rightarrow (\overline\partial \Phi)^{1,0} \eqn\uxxo$$
of an infinite dimensional vector bundle $Y$ over ${\cal W}$.
($Y$ is the bundle whose fiber at $\Phi\in {\cal W}$ is the space
of $(0,1)$ forms on $\Sigma$ with values in $\Phi^*(T^{1,0}G(k,N))$,
with $T^{1,0}G(k,N)$ being the $(1,0)$ part of the complexified
tangent bundle of $G(k,N)$. The point of the definition is just
that $(\overline\partial\Phi)^{1,0}$ is a vector in $Y$.)
\REF\mq{V. Mathai and D. Quillen, ``Superconnections, Thom Classes,
and Equivariant Differential Forms,'' Topology {\bf 25} (1986) 85.}
The space ${\cal M}$ of holomorphic maps, being defined by the vanishing
of a section $s:{\cal W}\rightarrow Y$, is Poincar\'e dual to the Euler class
$\chi(Y)$ of $Y$. So formally we can write
$$\langle\alpha,\beta,\gamma\rangle =\int_{\cal M}\widehat\alpha\cup\widehat\beta
\cup\widehat\gamma=\int_{\cal W}\widehat\alpha\cup\widehat\beta\cup\widehat\gamma\cup\chi(Y).
\eqn\mcmc$$
Now, there are any number of ways to write a differential form representing
$\chi(Y)$, but one nice way (formulated mathematically by
Mathai and Quillen [\mq])
uses a section $s$ and has a nice exponential factor $\exp(-|s|^2/\lambda)$,
with $|s|^2$ the norm of $s$ with respect to a metric on $Y$, and
$\lambda$ a positive real number.
For the section indicated in \uxxo, the norm with respect to the natural
metric is
$$|s|^2=\int_\Sigma(d\Phi,*d\Phi), \eqn\jxno$$
which is precisely the Lagrangian introduced above for the bosonic
sigma model with target space the Grassmannian.
\REF\topsig{E. Witten, ``Topological Sigma Models,'' Commun. Math. Phys.
{\bf 118} (1988) 411.}
\REF\atj{M. F. Atiyah and L. Jeffrey, ``Topological Lagrangians
And Cohomology,'' J. Geom. Phys. {\bf 7} (1990) 119.}
So the long and short of it is that we get a representation
$$\langle\alpha,\beta,\gamma\rangle=
\int_{{\cal W}\times \dots}\int {D}\Phi \,\,\dots \,\,
\exp\left(-{1\over \lambda}\int_\Sigma|d\Phi|^2+\dots\right)
\widehat\alpha(P)\widehat\beta(Q)\widehat\gamma(R),\eqn\hardo$$
much like the bosonic sigma model, but with ``fermions,'' represented
by ``$\dots$.'' The quantum field theory that appears here
is in fact a twisted form of the usual supersymmetric
nonlinear sigma model, as I explained in [\topsig]; in the twisted
model, the fermions can be interpreted in terms of differential forms
on the function space ${\cal W}$. The relation to the Mathai-Quillen
formula was explained by Atiyah and Jeffrey [\atj] in the analogous
case of four dimensional Donaldson theory.
It should be fairly obvious that instead of $G(k,N)$ we could use
a general Kahler manifold $X$ in the above discussions, at least
at the classical level. (If we are willing
to give up the interpretation in terms of twisting of a unitary supersymmetric
model, we can even consider almost complex manifolds that are not Kahler.)
At the quantum level, the situation is more subtle.
There are two main branches in the subject. If $c_1(X)=0$, the
supersymmetric sigma model (with a suitable choice of the Kahler metric
of $X$) is conformally invariant; such models provide classical
solutions of string theory. On the other hand, if $c_1>0$, as in the case
of the Grassmannian, one is in a quite different world, with asymptotic
freedom and analogs of the mass generation and chiral symmetry breaking
seen in the strong interactions.
\section{Strategy}
To try to say something of substance
in this situation, we use the realization
of $G(k,N)$ as $\mu^{-1}(0)/U(k)$, where $\mu$ is the moment map from
${\bf C}^{kN}$ to the Lie algebra of $U(k)$. One is tempted to try to
lift a map $\Phi:\Sigma\rightarrow G(k,N)$ to a map $\widehat\Phi:\Sigma\rightarrow {\bf C}^{kN}$.
There is not a natural way to do this, and there may even be a topological
obstruction.
So instead we proceed as follows. Let $P$ be a principal $U(k)$ bundle
over $\Sigma$, $A$ a connection on $P$, $\widehat\Phi$ a section of
$P\times_{U(k)}{\bf C}^{kN}$, and $S$ a two-form on $\Sigma$ with values
in the adjoint bundle
${\rm ad}(P)$. Take
$$\widehat
L(\widehat\Phi,A,S)=\int_\Sigma\left((d_A\widehat\Phi,*d_A\widehat\Phi)+i(S,\mu\circ\widehat\Phi)
\right) . \eqn\muggo$$
Then classically the theory described by $\widehat L(\widehat\Phi,A,S)$
is equivalent to the bosonic sigma model with target $G(k,N)$.
This can be seen as follows. The Euler-Lagrange equation of $S$ is
$\mu\circ\widehat\Phi=0$, so, under the natural projection
$P\times_{U(k)}{\bf C}^{kN}\rightarrow {\bf C}^{kN}/U(k)$,
$\widehat\Phi$ maps to $\Phi:\Sigma\rightarrow \mu^{-1}(0)/U(k)=G(k,N)$. The
Euler-Lagrange equation for $A$ identifies $P$ and $A$ with the pull-back
by $\widehat\Phi$ of the tautological principal $U(k)$ bundle and connection
over $G(k,N)$. Once these restrictions and identifications are
made, $\widehat L(\widehat\Phi,A,S)$ reduces to the Lagrangian $L(\Phi)$ of the
bosonic sigma model of the Grassmannian.
This sort of reasoning is still valid quantum mechanically.
For instance, using
$$\int_{-\infty}^\infty {d x\over 2\pi}e^{ixy}=\delta(y) \eqn\hx$$
(and the obvious generalization of that formula to several variables)
we get the path integral formula
$$\int {D} S\,\,\exp\left(-i\int_\Sigma(S,\mu\circ\widehat\Phi)\right)
=\delta(\mu\circ\widehat\Phi). \eqn\hxo$$
So the $S$ integral places on $\widehat\Phi$ precisely the restriction
that one would guess from the classical Euler-Lagrange equations.
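As a small numerical aside (no part of the argument depends on it), \hx\ can be checked in its Gaussian-regularized form: inserting a cutoff $e^{-\epsilon x^2}$, the integral becomes $e^{-y^2/4\epsilon}/\sqrt{4\pi\epsilon}$, a nascent delta function as $\epsilon\rightarrow 0$. A short Python sketch (illustrative only, with an arbitrarily chosen $\epsilon$) confirming this:

```python
import numpy as np

# Gaussian-regularized version of (1/2pi) * int dx exp(ixy) = delta(y):
# with a cutoff exp(-eps x^2) inserted, the x integral evaluates in closed
# form to exp(-y^2/(4 eps)) / sqrt(4 pi eps), which tends to delta(y)
# as eps -> 0.
eps = 1e-3
dx = 1e-3
x = np.arange(-200.0, 200.0, dx)

def regularized_kernel(y):
    """(1/2pi) * int dx exp(i x y - eps x^2), by direct summation."""
    return (np.sum(np.exp(1j * x * y - eps * x**2)) * dx).real / (2.0 * np.pi)

def closed_form(y):
    return np.exp(-y**2 / (4.0 * eps)) / np.sqrt(4.0 * np.pi * eps)

mismatch = max(abs(regularized_kernel(y) - closed_form(y))
               for y in (0.0, 0.2, 0.5))

# Smeared against a smooth test function, the kernel reproduces f(0)
# up to O(eps) corrections:
y = np.arange(-2.0, 2.0, 1e-3)
smeared = np.sum(np.cos(y) * closed_form(y)) * 1e-3   # should approach cos(0)
assert mismatch < 1e-4 and abs(smeared - 1.0) < 5e-3
```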
{}From simple properties of Gaussian integrals (which are introduced
below), one similarly deduces that, quantum mechanically as classically,
the $A$ integral has the effect of identifying $P,A$ with the pull-backs
of the tautological objects over the Grassmannian.
Similar reasoning holds after including fermions, so we get for the quantum
cubic form a representation of the general kind
$$\langle\alpha,\beta,\gamma\rangle=
\int {D} \widehat\Phi \,{D} A\, {D} S\,\dots \exp\left(-\int_\Sigma\left(
(d_A\widehat\Phi,*d_A\widehat\Phi)+i(S,\mu\circ\widehat\Phi)+\dots\right)\right)
\cdot \widehat\alpha(P)\widehat\beta(Q)\widehat\gamma(R). \eqn\hoco$$
As before, ``$\dots$'' represents terms involving fermions that
are not indicated explicitly.
\section{Reversing The Order Of Integration}
In sum, \hoco\ will reduce to \hardo\ if we integrate over $A$ and $S$
first. To get something interesting, we instead integrate first over
$\widehat\Phi$. The key point is that $\widehat\Phi$ is a section of a bundle
over $\Sigma$ with linear fibers (a ${\bf C}^{kN}$ bundle)
and that $\widehat L$ is quadratic in $ \widehat\Phi$. Consequently, the $\widehat\Phi$
integral is a Gaussian integral.
The basic one dimensional formula
$$\int_{-\infty}^\infty {d x\over\sqrt {2\pi}}\exp(-\lambda x^2/2)=
{1\over\sqrt \lambda} \eqn\ncno$$
has the $n$ dimensional generalization
$$\int_{-\infty}^\infty{d x_1\dots d x_n\over (2\pi)^{n/2}}
\exp\left(-{1\over 2}\sum_{i,j}M_{ij}x_ix_j\right) ={1\over \sqrt {\det M}},
\eqn\boco$$
for any quadratic form $M$ with positive real part; this is demonstrated
by picking a coordinate system in which $M={\rm diag}(m_1,\dots,m_n)$.
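Both formulas are easy to confirm by direct quadrature; the following Python sketch (a numerical illustration with an arbitrarily chosen $\lambda$ and positive quadratic form $M$, not part of the argument) checks the one and two dimensional cases:

```python
import numpy as np

# One-dimensional Gaussian: int dx/sqrt(2 pi) exp(-lam x^2/2) = 1/sqrt(lam).
dx = 1e-3
x = np.arange(-30.0, 30.0, dx)
lam = 3.0
one_dim = np.sum(np.exp(-lam * x**2 / 2.0)) * dx / np.sqrt(2.0 * np.pi)
assert abs(one_dim - 1.0 / np.sqrt(lam)) < 1e-6

# Two-dimensional case on a grid, for a positive-definite quadratic form M:
# int dx dy/(2 pi) exp(-(x,y) M (x,y)^T / 2) = 1/sqrt(det M).
M = np.array([[2.0, 0.7],
              [0.7, 1.5]])
h = 0.02
g = np.arange(-12.0, 12.0, h)
X, Y = np.meshgrid(g, g)
quad = M[0, 0] * X**2 + 2.0 * M[0, 1] * X * Y + M[1, 1] * Y**2
two_dim = np.sum(np.exp(-quad / 2.0)) * h**2 / (2.0 * np.pi)
assert abs(two_dim - 1.0 / np.sqrt(np.linalg.det(M))) < 1e-4
```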
In our case the $\widehat\Phi$ integral is (apart from terms involving fermions)
$$\int{D}\widehat\Phi\,\,\, \exp\left(-\int_\Sigma\left((d_A\widehat\Phi,*d_A\widehat\Phi)
+i(S,\mu\circ\widehat\Phi)\right)\right). \eqn\gau$$
This is an infinite dimensional Gaussian integral with $M$ the quadratic
form associated with the elliptic differential operator
$$M'=(d_A^*d_A+iS)\otimes 1_N. \eqn\roppo$$
The notation reflects the fact that the ${\bf C}^{kN}$ bundle of which
$\widehat\Phi$ is a section is actually a sum of $N$ copies of a ${\bf C}^k$ bundle,
and $M$ is the sum of $N$ copies of a quadratic form derived from
an operator (namely $d_A^*d_A+iS$) on sections of that ${\bf C}^k$ bundle.
So the integral over $\widehat\Phi$
gives
$${1\over \sqrt {\det M'}}=
\left(1\over \sqrt {\det (d_A^*d_A+iS)}\right)^{N}
.\eqn\boppo$$
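The determinant identity behind this step, $\det(A\otimes 1_N)=(\det A)^N$, can be illustrated in a finite-dimensional toy version, with a random $k\times k$ matrix $A$ standing in for the elliptic operator $d_A^*d_A+iS$ (the choice of $A$, $k$, and $N$ below is of course arbitrary):

```python
import numpy as np

# Finite-dimensional analog of M' = (d_A^* d_A + iS) (x) 1_N: for any
# k x k matrix A, det(A (x) 1_N) = (det A)^N, so 1/sqrt(det M') is the
# N-th power of 1/sqrt(det A).
rng = np.random.default_rng(0)
k, N = 3, 5
A = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
M_prime = np.kron(A, np.eye(N))
assert np.allclose(np.linalg.det(M_prime), np.linalg.det(A)**N)
```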
The determinant of the elliptic differential operator $d_A^*d_A+iS$ can
be conveniently
defined using the $\zeta$-function regularization of Ray and Singer.
So modulo fermions we get
$$\langle\alpha ,\,\beta,\,\gamma\rangle=
\int{D} A\,{D} S\dots
\left(1\over \sqrt {\det (d_A^*d_A+iS)}\right)^{N}
\cdot\widehat\alpha\widehat\beta\widehat\gamma. \eqn\oppo$$
So we have transformed the problem of computing the quantum cohomology
of the Grassmannian to a problem involving integration over the connection
$A$ and over $S$ -- a problem in quantum gauge theory. This brings
us into an entirely different world, that of \S2 of this paper.
At this stage we can see why -- as topologists might expect -- the sigma model
of $G(k,N)$ simplifies in the limit of $k$ fixed, $N\rightarrow\infty$. The integrand
in \oppo\ has a sharp peak at the minimum of the determinant, and
``everything'' can be calculated in an asymptotic expansion in powers
of $1/N$, by expanding around this peak.
For fixed $N$, it is not true that ``everything'' can be calculated,
but the topological quantities can be, reducing to a saddle point
by a more elaborate argument. The essence of the matter is that
although the classical Lagrangian $\widehat L$ is conformally invariant,
the quantum theory is not (because, for instance, with Ray-Singer
or any other regularization, the determinant introduced above is not
conformally invariant). The topological quantities are however
not just conformally invariant but completely independent of the metric
of $\Sigma$. Scaling up the metric of $\Sigma$ by a very large real factor,
life simplifies because of the basic physical properties of the model
-- asymptotic freedom and the dynamically generated mass gap. At very
large distances (that is, if the metric on $\Sigma$ is scaled up by
a very big factor), the complicated integral over $A, S, $ and fermions
in \oppo\ reduces to a local and tractable quantum field theory
-- in fact it reduces to the gauged WZW model (of $U(k)/U(k)$)
that was analyzed in \S2.
There is a basic principle here: every quantum field theory with
a mass gap reduces at very big distances to a topological field
theory. Often the topological field theory that so arises is more
or less trivial, but in the case of the supersymmetric sigma model
of the Grassmannian, it is the gauged WZW model. This large distance
reduction of the Grassmannian sigma model to a gauged WZW model,
plus the relation explained in \S2 between the gauged WZW model
and the Verlinde algebra, give the relation between the quantum
cohomology of the Grassmannian and the Verlinde algebra.
It is well known that at large distances,
massive particles can be neglected and massless particles dominate.
Less fully appreciated is that beyond the reach of the propagating
fields, a non-trivial dynamics of the vacuum or topological
field theory may prevail.
\subsection{Differential Geometry Of The Moduli Space Of Bundles}
A detailed
discussion of the reduction of the sigma model to the gauged WZW model
will be the subject of \S4, but here I will make a few naive remarks.
The integrand in \oppo\ actually has its maximum for flat connections
-- with some branching at the points $P,Q,R\in \Sigma$ at which
$\widehat\alpha,\widehat\beta, $ and $\widehat\gamma$ are inserted. The moduli
space of such flat connections is (by a theorem of Mehta and
Seshadri [\seshadri]) the same as the moduli space ${\cal R}$ of
rank $k$ stable holomorphic vector bundles over $\Sigma$, with some
parabolic structure at $P,Q,R$ determined by the branching.
So the integral gives some differential geometry of ${\cal R}$.
(In view of \czonzo,
${\cal R}$ can be interpreted as a space of classical solutions
of the gauged WZW model.) In
the large $N$ limit, direct analysis of the determinant in \oppo\ shows that
the differential geometric quantity that appears is
the volume of the symplectic
manifold ${\cal R}$, times $N^{\dim_{{\bf C}}{\cal R}}$.
This is the leading large $N$ behavior of the
Riemann-Roch formula for the dimension of the space
$H^0({\cal R},{\cal L}^{N-k})$ of non-abelian theta functions.
This simple direct argument relates
the quantum cohomology of $G(k,N)$ to the dimension
of the space of non-abelian theta functions for large $N$.
The only way I know to establish this as an exact relation, not just as
asymptotic one for large $N$, is to reduce the sigma model of $G(k,N)$
to the gauged WZW model as we will do in \S4, and then study that model
as in \S2.
\chapter{From The Grassmannian To The $G/G$ Model}
\def{\partial\over\partial x^0}{{\partial\over\partial x^0}}
\def{\partial\over\partial x^1}{{\partial\over\partial x^1}}
\def{\cal D}{{\cal D}}
\REF\bagger{J. Wess and J. Bagger, {\it Supersymmetry And Supergravity},
second edition (Princeton University Press, Princeton, 1992).}
\REF\phases{E. Witten, ``Phases Of $N=2$ Theories In Two Dimensions,''
Nucl. Phys. {\bf B403} (1993) 159 .}
This section is organized as follows.
After recalling some background about ${\cal N}=2$ models in two dimensions in \S4.1,
we construct in \S4.2 the
sigma model whose target space is the Grassmannian $G(k,N)$.
Then we analyze its behavior at long distances in \S4.3-6. In \S4.7-8, we
enter
the computational stage and work things out in detail in the simplest
non-trivial case.
In the past, the long distance behavior of the Grassmannian sigma model
has been analyzed on ${\bf R}^2$
[\div--\cecotti];
the main results were spontaneously broken chiral
symmetry, the existence of a mass gap, and a determination for large $N$ of the
spectrum of low-lying states. The novelty here is to examine
the long distance behavior more globally, uncovering the relation
to the gauged WZW model and thereby (in view of \S2) the Verlinde algebra.
I will make a small change in notation in this section. In \S2,
we considered general compact Lie groups, and (as is conventional
in mathematics) we took the Lie algebra to consist of anti-hermitian
matrices (so the quadratic form $(a,b)=\Tr ab$ is negative definite).
The reason that this convention is standard for general Lie groups is
that in the case of a real group, whose representations may also all be
real, it is unnatural to introduce factors
of $i$ and therefore the group generators are naturally anti-hermitian.
In this section, the gauge group will be the unitary group
$U(k)$, which will arise in a natural complex representation,
and I will follow the standard physics convention that the
group generators are hermitian matrices; thus $(a,b)=\Tr ab$ will be positive
definite. The complexification of the Lie algebra of $U(k)$ consists
of all $k\times k$ complex matrices; if $\sigma$ is such a matrix,
then $\overline\sigma$ will denote its hermitian adjoint.
\section{Background}
We will work in ${\cal N}=2$ superspace in two dimensions, conventions
and the basic setup
being as explained in [\bagger,\phases].
The detailed formulas of this subsection are presented mainly for reference,
and most readers will want to skim them.
We consider first flat
superspace with bosonic coordinates $x^m,\,m=0,1$ (and Lorentz
signature $-+$)
and fermionic coordinates $\theta^\alpha,\,\overline\theta
^{\alpha}$. In a light-cone basis,
supersymmetry is realized geometrically by the
operators
$$\eqalign{ Q_{\pm} & = {\partial\over\partial\theta^\pm}+i\overline\theta^\pm
\left({\partial\over\partial x^0}\pm {\partial\over\partial x^1}\right) \cr
\overline Q_\pm & = -{\partial\over\partial\overline\theta^\pm}
-i\theta^\pm\left({\partial\over\partial x^0}\pm {\partial\over\partial x^1}\right).\cr} \eqn\ixox$$
These operators commute with the superspace covariant derivatives
$$\eqalign{ D_{\pm} & = {\partial\over\partial\theta^\pm}-i\overline\theta^\pm
\left({\partial\over\partial x^0}\pm {\partial\over\partial x^1}\right) \cr
\overline D_\pm & = -{\partial\over\partial\overline\theta^\pm}
+i\theta^\pm\left({\partial\over\partial x^0}\pm {\partial\over\partial x^1}\right)\cr} \eqn\pixox$$
which are used in constructing Lagrangians.
To formulate gauge theory, one introduces a gauge field in superspace,
replacing
the differential operators $D_\alpha$, $\overline D_\alpha$, and $\partial_m
=\partial/\partial x^m$ by gauge covariant derivatives
${\cal D}_\alpha$, $\overline {\cal D}_\alpha$, and ${\cal D}_m$.
On the superspace gauge fields one imposes the very strong constraints
$$\eqalign{0=\{\overline{\cal D}_\alpha,\overline{\cal D}_\beta\}&=\{{\cal D}_\alpha,{\cal D}_\beta\}
\cr \{{\cal D}_\pm,\overline{\cal D}_\pm\} & = 2i\left({\cal D}_0\pm{\cal D}_1\right).\cr}
\eqn\mixo$$
Among other things, these conditions permit the existence of ``chiral
superfields,'' superspace fields $\Phi$ obeying
$$ \overline{\cal D}_\alpha \Phi = 0 . \eqn\bixo$$
With the aid of the constraints one can take
locally
$$\eqalign{{\cal D}_\alpha & = e^{-V}D_\alpha e^V \cr
\overline{\cal D}_\alpha & = e^V\overline D_\alpha e^{-V}\cr} \eqn\yixo$$
where $V$ is a real Lie algebra-valued function on superspace, called
a vector superfield. After also fixing some residual gauge invariance,
one can go to a ``Wess-Zumino gauge,'' in which
$$\eqalign{ V & = \theta^-\overline\theta^-(v_0-v_1)+\theta^+\overline\theta^+
(v_0+v_1)-\sqrt 2\sigma \theta^-\overline\theta^+-\sqrt 2\overline\sigma\theta^+
\overline\theta^-\cr &+2i\theta^-\theta^+\left(\overline\theta^-\overline\lambda_-+\overline
\theta^+\overline\lambda_+\right)+2i\overline\theta^+\overline\theta^-(\theta^+\lambda_+
+\theta^-\lambda_-)+2\theta^-\theta^+\overline\theta^+\overline\theta^-D.\cr}
\eqn\pimxo$$
Here $v_m$ is an ordinary two-dimensional gauge field, and the other
fields are bose and fermi matter fields. $\sigma$ is a complex $k\times k$
matrix, and -- as $V$ is hermitian -- $\overline\sigma$ is its hermitian adjoint.
We write $F= F_{01}=\partial_0v_1-\partial_1v_0+[v_0,v_1]$ for
the curvature of $v$.
The supersymmetry transformation laws for this multiplet are
$$\eqalign{
\delta v_m & =i\overline\epsilon \sigma_m\lambda +i\epsilon \sigma_m\overline\lambda
\cr
\delta\sigma & = -i\sqrt 2\overline\epsilon_+\lambda_--i\sqrt 2\epsilon_-\overline
\lambda_+ \cr
\delta\overline \sigma & = -i\sqrt 2\epsilon_+\overline\lambda_--i\sqrt 2\overline\epsilon_-
\lambda_+ \cr
\delta D & = -\overline\epsilon_+(D_0-D_1)\lambda_+-\overline\epsilon_-(D_0+D_1)\lambda_-
+ \epsilon_+(D_0-D_1)\overline\lambda_++\epsilon_-(D_0+D_1)\overline\lambda_-
\cr &+\sqrt 2 \epsilon_+[\sigma,\overline\lambda_-] +\sqrt 2\epsilon_-[\overline\sigma,
\overline\lambda_+]+\sqrt 2[\sigma,\lambda_+]\overline\epsilon_-+\sqrt 2
[\overline\sigma,\lambda_-]\overline\epsilon_+ \cr
\delta\lambda_+ & = i\epsilon_+D+\sqrt 2(D_0+D_1)\overline\sigma\epsilon_-
-F_{01}\epsilon_+ -[\sigma,\overline\sigma]\epsilon_+ \cr
\delta\lambda_- & = i\epsilon_-D+\sqrt 2(D_0-D_1)\sigma\epsilon_+
+F_{01}\epsilon_- +[\sigma,\overline\sigma]\epsilon_- \cr
\delta\overline\lambda_+ & = -i\overline\epsilon_+D+\sqrt 2(D_0+D_1)\sigma\overline
\epsilon_- -F_{01}\overline\epsilon_+ +[\sigma,\overline\sigma]\overline\epsilon_+ \cr
\delta\overline\lambda_- & = -i\overline\epsilon_-D+\sqrt 2(D_0-D_1)\overline\sigma
\overline\epsilon_++F_{01}\overline\epsilon_- -[\sigma,\overline\sigma]\overline\epsilon_-. \cr}
\eqn\milopo$$
The basic gauge invariant field strength is
$$\eqalign{\Sigma ={1\over 2\sqrt 2}\{\overline {\cal D}_+,{\cal D}_-\} &
= \sigma+i\sqrt 2\theta^+\overline\lambda_+-i\sqrt 2\overline\theta^-\lambda_-
+\sqrt 2\theta^+\overline\theta^-D\cr &-i\overline\theta^-\theta^-(D_0-D_1)\sigma
-i\theta^+\overline\theta^+(D_0+D_1)\sigma \cr &
+\sqrt 2\overline\theta^-\theta^-\theta^+
(D_0-D_1)\overline\lambda_+-\sqrt 2\theta^+\overline\theta^+\overline\theta^-
(D_0+D_1)\lambda_--i\sqrt 2\theta^+\overline\theta^- F_{01}
\cr & -2i\theta^-\overline\theta^-\theta^+[\sigma,\overline\lambda_-]
-2i\overline\theta^-\theta^+\overline\theta^+[\sigma,\lambda_+]\cr &
-\overline\theta^-\theta^-\theta^+\overline\theta^+\left((D_0{}^2-D_1{}^2)\sigma
-[\sigma,[\sigma,\overline\sigma]]\right)
+i\theta^-\overline\theta^-\theta^+\overline\theta^+[\sigma,\partial_mv^m]
.\cr} \eqn\nogof$$
(The last term does not really spoil gauge invariance:
the gauge transformations that preserve Wess-Zumino
gauge have a certain $\theta$ dependence which requires this term to be
present.)
$\Sigma$ is a twisted chiral superfield; this means that
(by the Bianchi identity together with the constraints)
$$\overline{\cal D}_+\Sigma={\cal D}_-\Sigma = 0 . \eqn\hutcho$$
With the aid of $\Sigma$,
it is straightforward to construct gauge invariant
Lagrangians. The standard gauge kinetic energy is
$$\eqalign{L_g & =-{1\over 4e^2}\int d^2x\,\,d^4\theta \Tr\overline\Sigma\Sigma
\cr &
={1\over e^2}\int d^2x\,\Tr
\left({1\over 2}F_{01}^2+|D_0\sigma|^2-|D_1\sigma|^2
+i\overline\lambda_-(D_0+D_1)\lambda_-+i\overline\lambda_+(D_0-D_1)\lambda_+
\right.\cr &~~~~~~~\left.+{1\over 2}D^2-{1\over 2}[\sigma,\overline\sigma]^2
-{\sqrt 2}\lambda_+
[\sigma,\overline\lambda_-]
+{\sqrt 2}[\overline\sigma,\lambda_-]\overline\lambda_+\right)
.\cr}\eqn\muccdo$$
One more term constructed from gauge fields only is important.
Using the fact that $\Sigma$ is a twisted chiral superfield, there
is an invariant interaction of the form
$$L_{D,\theta}={it\over 2\sqrt 2}\int d^2 x \,d\theta^+\,d\overline\theta^-
\,\,\Tr \Sigma|_{\theta^-=\overline\theta^+=0}+{\mit c.c.}
=\int d^2 x \left(-r\Tr D+{\theta\over 2\pi}\Tr F_{01}\right), \eqn\jumbox$$
with
$$t=ir+{\theta\over 2\pi}. \eqn\uccdo$$
\subsection{Matter Fields}
Chiral superfields are functions $\Phi$ on superspace,
transforming in some given unitary representation $V$ of the gauge group,
and obeying
$$\overline{\cal D}_\pm \Phi = 0 . \eqn\ruccdo$$
Such a field has an expansion in components
$$\Phi = \phi +\sqrt 2\theta^\alpha\psi_\alpha +
\theta^\alpha\theta_\alpha F . \eqn\wuddo$$
The supersymmetry transformation laws for this multiplet are (by
dimensional reduction from [\bagger, p. 50])
$$\eqalign{
\delta \phi & = \sqrt 2\left(\epsilon_+\psi_--\epsilon_-\psi_+\right) \cr
\delta \psi_+ &= i\sqrt 2\left(D_0+D_1\right)\phi\overline\epsilon_-
+\sqrt 2\epsilon_+ F - 2 \overline\sigma\phi\overline\epsilon_+ \cr
\delta \psi_- &= -i\sqrt 2\left(D_0-D_1\right)\phi\overline\epsilon_+
+\sqrt 2\epsilon_- F + 2 \sigma\phi\overline\epsilon_- \cr
\delta F & = -i\sqrt 2\overline\epsilon_+\left(D_0-D_1\right)\psi_+
-i\sqrt 2\overline\epsilon_-\left(D_0+D_1\right)\psi_-
\cr & ~~~+2\left(\overline\epsilon_+\overline\sigma\psi_-+\overline\epsilon_-\sigma\psi_+
\right) + 2i\left(\overline\epsilon_-\overline\lambda_+-\overline\epsilon_+\overline
\lambda_-\right)\phi
. \cr}
\eqn\nurbob$$
The usual kinetic energy for a multiplet of such chiral superfields is
$$\eqalign{L_{{\mit ch}}={1\over 4}\int d^2x \,d^4\theta\,\,\overline\Phi \Phi
= & \int d^2x\,\,\left(|D_0\phi|^2-|D_1\phi|^2 +|F|^2
+i\overline\psi_+(D_0-D_1)\psi_+\right.\cr &\left.+i\overline\psi_-(D_0+D_1)\psi_-
+\overline\phi D\phi -\overline\phi\{\sigma,\overline\sigma\}\phi
\right.\cr & \left.-\sqrt 2\overline\psi_+\overline\sigma\psi_-
-\sqrt 2\overline\psi_-\sigma\psi_+
+i\sqrt 2 \overline\psi_+\overline\lambda_-\phi -i\sqrt 2 \overline\psi_-\overline\lambda_+\phi
\right.\cr & \left.
+i\sqrt 2\overline\phi \lambda_+\psi_- -i\sqrt 2\overline\phi\lambda_-\psi_+\right).
\cr} \eqn\qqmmw$$
\REF\hitchin{N. J. Hitchin, A. Karlhede, U. Lindstrom, and M. Rocek,
``Hyperkahler Metrics And Supersymmetry,'' Commun. Math. Phys. {\bf 108}
(1987) 535.}
The $|D_\alpha \phi|^2$ term in \qqmmw\ is the conventional
free kinetic energy corresponding to a sigma model with a flat
metric on $V\cong {\bf C}^r$.
The $\overline\phi D\phi$ term is the coupling of $D$ to the moment map,
in the sense that if we pick a basis $T_a,\, a=1
\dots \dim G$ for the Lie algebra of $G$, then this term
is
$$\int d^2x \sum_a D^a (\overline\phi,T_a\phi), \eqn\iglo$$
and the functions $(\overline\phi,T_a\phi)$ are the components of the
moment map.
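For the case used later in this section, $G=U(k)$ acting on $k\times N$ complex matrices $\phi$ (an illustrative specialization of the general representation $V$, chosen here only for concreteness), the components $(\overline\phi,T_a\phi)$ assemble into the hermitian matrix $\mu(\phi)=\phi\phi^\dagger$, which transforms in the adjoint representation. A quick numerical check of that equivariance:

```python
import numpy as np

# For U(k) acting on k x N complex matrices phi, the moment map components
# (phibar, T_a phi) assemble into mu(phi) = phi phi^dagger, and equivariance
# means mu(g phi) = g mu(phi) g^dagger for g in U(k).
rng = np.random.default_rng(1)
k, N = 2, 4
phi = rng.normal(size=(k, N)) + 1j * rng.normal(size=(k, N))

# A random unitary g in U(k), from the QR decomposition of a random matrix.
g, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))

mu = phi @ phi.conj().T
mu_rotated = (g @ phi) @ (g @ phi).conj().T
assert np.allclose(mu_rotated, g @ mu @ g.conj().T)
```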
A more general Kahler metric on $V$ (and
accordingly, a more general form of the moment map)
could be obtained by replacing the function $\overline\Phi\Phi$ on the left
hand side of \qqmmw\ with a more general Kahler potential $K(\Phi,\overline\Phi)$.
These matters are explained in some detail in [\hitchin].
\section{The Model}
Now we can construct the actual model of interest. We take the
gauge group to be $G=U(k)$. We take $kN$ chiral superfields
$\Phi^{is}$, $i=1\dots k$, $s=1\dots N$, regarded as $N$ copies
of the defining $k$ dimensional representation of $G$.
The action of $G$ commutes with a global
symmetry group $H\cong U(N)$ which one can think of as the unitary
transformations of ${\bf C}^N$.
The Lagrangian that we actually wish to study is simply
$$L = L_{{\mit gauge}}+L_{D,\theta}+L_{{\mit ch}}. \eqn\ripporo$$
The potential energy is determined
by the following terms in $L$:
$$L_{{\mit pot}} ={1\over 2e^2}\Tr D^2-r\Tr D + \overline\phi D\phi
-{1\over 2e^2}\Tr[\sigma,\overline\sigma]^2- \overline\phi\{\sigma,\overline\sigma\}\phi.
\eqn\pixox$$
Upon integrating out $D$, the
potential energy is
$$V = {e^2\over 2}\sum_{i,j=1}^k\left(\sum_s\overline\phi_{is}\phi^{js}-\delta_i{}^j
r\right)^2+{1\over 2e^2}\Tr[\sigma,\overline\sigma]^2
+\overline\phi\{\sigma,\overline\sigma\}\phi. \eqn\jixox$$
The space of classical vacua is the space of zeroes of $V$ up to gauge
transformation. For $V$ to vanish, $\phi$ must be non-zero, and
therefore $\sigma$ must vanish. As anticipated in \S3, the first
term in the potential is the square of the moment map for the action
of $U(k)$ on ${\bf C}^{kN}$.
This term vanishes precisely if the vectors in ${\bf C}^N$ represented
by the rows of $\phi$, divided by $\sqrt r$, are orthonormal.
The $k$ dimensional subspace $V\subset {\bf C}^N$ spanned by the rows of $\phi$
is gauge invariant, and is the only gauge invariant data determined
by $\phi$ (since any two orthonormal bases of $V$ are related
by the action of $U(k)$). Moreover, every $k$ dimensional subspace
$V\subset {\bf C}^N$ has such an orthonormal basis.
So the space of classical vacua is the Grassmannian $G(k,N)$ of
$k$ dimensional subspaces of ${\bf C}^N$.
Since the condition for vanishing energy is
$$\sum_s\overline\phi_{is}\phi^{js}=\delta_i{}^jr, \eqn\kop$$
the radius of the space of vacua is $\sqrt r$ and the Kahler
class is proportional to $r$.
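A quick numerical illustration of \kop\ (purely a sanity check, with arbitrary values of $k$, $N$, and $r$): orthonormalizing the rows of a random $k\times N$ matrix and scaling by $\sqrt r$ produces a zero of the first term of the potential, and the condition is preserved by the gauge action $\phi\rightarrow g\phi$.

```python
import numpy as np

# Vanishing-energy condition \kop: if the rows of phi, divided by sqrt(r),
# are orthonormal, then sum_s phibar_{is} phi^{js} = r delta_i^j.
rng = np.random.default_rng(2)
k, N, r = 2, 5, 3.0

# Orthonormal rows via QR of the conjugate transpose of a random k x N matrix.
m = rng.normal(size=(k, N)) + 1j * rng.normal(size=(k, N))
q, _ = np.linalg.qr(m.conj().T)        # N x k, orthonormal columns
phi = np.sqrt(r) * q.conj().T          # k x N, rows orthonormal times sqrt(r)

assert np.allclose(phi @ phi.conj().T, r * np.eye(k))

# The condition is U(k) gauge invariant: phi -> g phi still satisfies it.
g, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
assert np.allclose((g @ phi) @ (g @ phi).conj().T, r * np.eye(k))
```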
Classically, the space of vacua shrinks to a point for $r=0$; for
$r<0$ the classical energy can no longer vanish and it appears
that supersymmetry is spontaneously broken. Quantum
mechanically, the situation is rather different and there is a smooth
continuation to negative $r$ with unbroken supersymmetry,
as discussed (for $k=1$) in [\phases,\S3.2];
the existence of this continuation will be exploited below.
The choice of a classical vacuum spontaneously breaks the symmetry
group $U(N)$ to $U(k)\times U(N-k)$, while leaving supersymmetry
unbroken. The oscillations in the vacuum
are massless Goldstone bosons at the classical level. Their
supersymmetric partners are, of course, also massless classically.
Other modes are readily seen to have masses proportional to $e$.
The model therefore reduces at long distances (or equivalently
for $ e\rightarrow\infty$) to the supersymmetric nonlinear sigma model
with target space the Grassmannian; we will more briefly call this
the Grassmannian sigma model.
At the quantum level, spontaneous breaking of a continuous
symmetry such as the $U(N)$ symmetry of this model is not possible
in two dimensions. The symmetry must therefore be restored by
quantum corrections. Exhibiting this symmetry restoration, and the
associated mass gap, was a primary goal of early investigations
of the model.
\subsection{$R$ Symmetries}
A right-moving $R$-symmetry in an ${\cal N}=2$ model in two dimensions is
a symmetry under which $\theta^+\rightarrow e^{i\alpha}\theta^+,$
$\overline\theta^+\rightarrow e^{-i\alpha}\overline\theta^+$, while $\theta^-,
\overline\theta^-$ are invariant. A left-moving $R$-symmetry
obeys the analogous condition with $\theta^+$ and $\theta^-$
exchanged.
The Grassmannian sigma model as constructed above has at the classical
level a right-moving
$R$-symmetry $J_R$\foot{We will somewhat imprecisely use the symbol
$J_R$ to denote either the current or the corresponding charge;
and similarly for other currents introduced momentarily.}
under which the charges of the various fields
are as follows: $(\psi_+,F,\sigma,\lambda_-)$ have charges
$(-1,-1,1,1)$, their complex conjugates have opposite charge,
and other fields have charge zero. Similarly there is classically
a left-moving $R$-symmetry $J_L$ under which $(\psi_-,F,\sigma,\lambda_+)$
have charges $(-1,-1,-1,1)$, their complex conjugates have opposite
charges, and other fields are neutral.
At the quantum level, the sum $J_V=J_R+J_L$ is a ``vector'' symmetry,
that is, it transforms left- and right-moving fermions the same way,
so it is free of anomaly and generates a $U(1)$ symmetry.
However, the ``axial'' combination
$J_A=J_R-J_L$ is anomalous.
The anomaly can be described as follows.
Let $a$ be a $U(1)$ connection with first Chern class 1,
and embed this in $G=U(k)$ so that the $U(k)$ gauge
field is $v={\rm diag}(a,0,0,\dots,0)$. In such an instanton
field, the index of the Dirac operator acting on $\psi_+$ is $N$.
\foot{The index is defined as
the number of $\psi_+$ zero modes minus the number of
$\overline\psi_+$ zero modes. Recall that $\psi_+$ transforms
as a sum of $N$ copies of the defining $k$-dimensional representation
of $U(k)$; each of these contributes 1 to the index.}
Similarly the $\psi_-$ index is $-N$. The total anomaly in
$J_A=J_R-J_L$ is the difference of these or $2N$.
The anomaly in any instanton field would be an integer multiple of this.
So $J_A$ is conserved only modulo $2N$.
The only symmetries we can construct from $J_A$ are the
discrete transformations $\exp(2\pi it J_A/2N)$, with $t\in {\bf Z}$.
This gives a discrete group, isomorphic to ${\bf Z}_{2N}$,
of chiral symmetries. If unbroken, these symmetries would
prevent $\psi$ and $\lambda$ from gaining a mass. One of the
main results of the old literature on this model was that
this ${\bf Z}_{2N}$ is spontaneously broken down to ${\bf Z}_2$,
making a mass gap possible;
the surviving ${\bf Z}_2$ is just the operation $(-1)^F$
that counts fermions modulo two.
\subsection{The Twisted Model}
\REF\witmir{E. Witten, ``Mirror Manifolds And Topological Field
Theory,'' in {\it Essays On Mirror Manifolds}, ed. S.-T. Yau
(International Press, 1992).}
Any ${\cal N}=2$ supersymmetric theory in two dimensions with an $R$ symmetry
can be twisted to obtain a topological field theory.
The construction, as explained in [\witmir], which the reader can consult
for details, involves
adding to the usual stress tensor
the derivative of the $R$-current.
As we have just seen, in the case of the Grassmannian there is
only one anomaly-free $R$-symmetry. Consequently,
only one twisted topological field theory can be constructed;
it is related to the quantum cohomology of the Grassmannian,
which was introduced in \S3.
In going from the untwisted to the twisted model, the spin of
every field decreases (in the convention of [\phases])
by $J_V/2$. For instance, in the untwisted
model, $\psi_+$ and $\overline\psi_+$ have spin $1/2$ and $J_V=\mp 1$,
so in the twisted model they have respectively spin $1$ and 0.
More generally the fermi fields that have spin zero in the twisted model
are $\overline\psi_+,\psi_-,\overline\lambda_-,\lambda_+$.
If the twisted model is formulated on ${\bf S}^2$, the spin zero fields
each have one zero mode and the spin one fields have none.
Since $\overline\psi_+,\psi_-,\overline\lambda_-,\lambda_+$ have
$kN,kN,k^2,k^2$ components respectively and have $J_A=1,1,-1,-1$,
the total $J_A$ value of the zero modes is $kN+kN-k^2-k^2=2k(N-k)$,
and this is the anomaly in $J_A$ conservation due to coupling to the
curvature of ${\bf S}^2$. (Not coincidentally, $2k(N-k)$ is the
dimension of $G(k,N)$.) More generally, on a surface of genus $g$, the
spin one fields would have $g$ zero modes, so the violation of $J_A$ is
$$\Delta J_A = 2k(N-k)(1-g)=k(N-k)\int_\Sigma d^2x\sqrt h {R\over 2\pi}.
\eqn\oxoc$$
Here I have written the Euler characteristic of $\Sigma$, which of course
equals $2(1-g)$, as the familiar curvature integral.
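The zero-mode bookkeeping above is simple enough to mechanize; the following sketch just re-does the arithmetic for a few values of $k$ and $N$ (it adds nothing beyond the counting in the text):

```python
# Zero-mode counting for the twisted model, following \oxoc: on S^2 (g = 0)
# the spin-zero fields psibar_+, psi_-, lambdabar_-, lambda_+ contribute
# kN, kN, k^2, k^2 zero modes with J_A charges +1, +1, -1, -1.
def delta_JA_sphere(k, N):
    """Net J_A of the zero modes on the sphere."""
    return k * N + k * N - k**2 - k**2

def delta_JA(k, N, g):
    """Anomaly on a genus-g surface: 2 k (N - k) (1 - g)."""
    return 2 * k * (N - k) * (1 - g)

for k, N in [(1, 3), (2, 5), (3, 7)]:
    # On the sphere the anomaly equals the real dimension 2k(N-k) of G(k,N);
    # on the torus (g = 1) it vanishes.
    assert delta_JA_sphere(k, N) == delta_JA(k, N, 0) == 2 * k * (N - k)
    assert delta_JA(k, N, 1) == 0
```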
\subsection{Fermionic Symmetry Of The Twisted Theory}
The untwisted theory, formulated on a flat world-sheet, possesses
fermionic symmetries, that were described in detail in
equations \milopo, \nurbob. After twisting, the fermionic parameters
$\epsilon_+$ and $\overline\epsilon_-$ in the transformation laws have
spin zero; let
$Q_-$ and $\overline Q_+$ be the symmetries generated by those transformations,
and let $Q=Q_-+\overline Q_+$.
By a standard calculation, $Q_-{}^2=\overline Q_+{}^2=\{Q_-,\overline Q_+\}=0$
and in particular $Q^2=0$.
Moreover, the stress tensor
can be written as $T=\{Q,\Lambda\}$ for some $\Lambda$.
It follows that if we restrict ourselves to operators that
are annihilated by $Q$ (or more exactly to cohomology
classes of such operators), the theory can be interpreted
as a topological field theory. Each cohomology class of $Q$-invariant
operators has representatives annihilated by both $Q_-$ and $\overline Q_+$.
The relevant observables are easily found. The transformation laws of the
topological theory are found from the microscopic transformation laws
\milopo, \nurbob\ by setting $\epsilon_-=\overline\epsilon_+=0$ and
keeping $\epsilon_+,\overline\epsilon_-$. By inspection of the transformation
laws, $\sigma$ (but not $\overline\sigma$)
is invariant, so that any gauge invariant holomorphic
function of
$\sigma$ is a suitable vertex operator in the topological theory.
Such functions are linear combinations of characters, so the
basic operators constructed this way are
$$O_V(x)=\Tr_V\sigma(x) , \eqn\burfo$$
with $V$ an irreducible representation of $G=U(k)$ and $\Tr_V$ the
trace in that representation.
Actually, these are the only relevant operators. In fact, even
before twisting, the model (for $r\gg 0$)
is equivalent at long distances, as we saw above,
to a sigma model with target the Grassmannian $G(k,N)$.
Consequently, the twisted model is simply the standard $A$
model of $G(k,N)$ (the $A$ model for any Kahler target is explained
in detail in [\witmir])
so the cohomology classes of observables are in one-to-one correspondence
with the de Rham cohomology of $G(k,N)$.
Indeed, upon integrating out the massive fields, $\sigma(x)$ turns
into a bilinear expression in massless fermions tangent to $G(k,N)$,
with values in the adjoint representation of $U(k)$.
It is easy to calculate this explicitly in the weak coupling, low energy
limit. To this aim, we need only evaluate a tree diagram, and
we can ignore the kinetic energy of the massive $\sigma$ field. The
relevant part of the Lagrangian is simply
$$-\int d^2x\left(\sum_{ijs}\overline\phi_{is}\{\sigma,\overline\sigma\}^i{}_j
\phi^{js} +\sqrt 2\sum_{ijs}\overline\psi_{+is}\overline\sigma^i{}_j\psi_-{}^{js}
\right). \eqn\purrify$$
Because of the overall $U(N)$ invariance, it suffices to work out
the effective operator representing $\sigma$ in the low energy theory
at one particular point on $G(k,N)$. We take this to be the point
represented by $\phi^{is}=\sqrt r \delta^{is}$ for $1\leq s\leq k$,
$\phi^{is}=0$ for $s>k$. With this choice, \purrify\ becomes
$$-\int d^2x \left(2r\Tr \overline\sigma\sigma +\sqrt 2 \sum_{ijs}
\overline\psi_{+is}\overline\sigma^i{}_j\psi_-{}^{js}\right).\eqn\durify$$
The quickest way to evaluate the tree diagram is simply to impose
the equation of motion of $\overline \sigma$; this gives
$$ \sigma^j{}_i=-{1\over r\sqrt 2}\sum_s\overline\psi_{+is}\psi_-^{js}.
\eqn\reddo$$
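In detail, varying the quadratic action \durify\ with respect to $\overline\sigma^i{}_j$ (ignoring the kinetic term, as explained above) gives

```latex
$$2r\,\sigma^j{}_i+\sqrt 2\sum_s\overline\psi_{+is}\psi_-{}^{js}=0,$$
```

which is solved by \reddo.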
In the interpretation of the low energy theory in terms of differential
forms on $G(k,N)$, $\psi_-/\sqrt r$ and $\overline\psi_+/\sqrt r$
are $(1,0)$ and $(0,1)$ forms. ($\psi_-$ and $\overline\psi_+$ have
been normalized to have canonical kinetic energies; their natural
normalization as differential forms involves dividing by $\sqrt r$.)
So $\sigma$ is represented in the low energy theory by a $(1,1)$
form or more exactly by the operator in the $G(k,N)$ model
determined by this $(1,1)$ form.
The chosen vacuum $\phi^{is}=\sqrt r\delta^{is}$ is invariant up to
a gauge transformation under a subgroup $U(k)\times U(N-k)$ of $U(N)$.
For $\sigma$ to be a $U(N)$-invariant form in the adjoint representation
of the gauge group, it must transform in the adjoint
representation of the unbroken $U(k)$ (since the unbroken symmetry
is a mixture of this with a gauge transformation)
and be invariant under $U(N-k)$. The $U(N)$ action can then be
used to extend $\sigma$ in a unique way to an invariant $(1,1)$ form
on $G(k,N)$. It is evident that
the right hand side of \reddo\ has the required properties.
Conversely, the right hand side of \reddo\ is the only bilinear expression
in $\overline\psi_+$ and $\psi_-$ with the claimed properties, so any
adjoint-valued $U(N)$-invariant $(1,1)$ form would be a multiple of $\sigma$.
Such a form is the curvature of the tautological $U(k)$ bundle
$E^*$ with its natural connection.
So up to a constant, which I will not verify
directly (it can be absorbed in the constant later called $c$),
$\sigma$ coincides in the low energy theory
with the tautological curvature. Hence classical
expressions $O_V= \Tr_V\sigma$ coincide with the corresponding polynomials
in Chern
classes on $G(k,N)$, and as quantum operators in the twisted theory,
the $O_V$ coincide with the elements of the quantum cohomology determined
by those classes.
The fact that the tautological classes generate the cohomology
of $G(k,N)$ ensures that the $O_V$ span the space of observables of
the twisted theory.
There is a more conceptual approach to identifying $\sigma$ with the
tautological curvature which I will indicate very briefly.
Let $\lambda_-=\eta_0-\eta_1$,
$\overline\lambda_+=\eta_0+\eta_1$. Restrict to the diagonal fermionic
symmetry with $\epsilon_+=\overline\epsilon_-=\epsilon$. Then
a key part of the symmetry algebra is
$$\eqalign{\delta v_m & = 2i\epsilon \eta_m \cr
\delta \eta_m & =\sqrt 2\epsilon D_m\sigma \cr
\delta \sigma & = 0. \cr} \eqn\iffo$$
This multiplet describes the equivariant cohomology of the gauge
group acting on the space ${\cal A}$ of connections.
The interpretation of $\sigma$ as the curvature of the tautological
bundle over the quotient is standard in equivariant cohomology.
This interpretation holds independent of any specific Lagrangian model;
the salient feature of
the particular model we are considering is that the connection $v_m$
is identified via the low energy equations of motion with the pullback
of the tautological connection on $E^*\rightarrow G(k,N)$.
\subsection{Instantons}
A correlation function
$$\left\langle \prod_i O_{V_i}(x_i)\right\rangle \eqn\guelf$$
on a Riemann surface $\Sigma$ can be computed as follows.
The $O_{V_i}$ determine cohomology classes of $G(k,N)$ as we have just
seen; pick Poincar\'e dual cycles $H_i$.
Let $d$ be the non-negative integer, if any,
such that the moduli space of holomorphic
maps $\Phi:\Sigma\rightarrow G(k,N)$ of degree $d$ obeying
$$\Phi(x_i)\in H_i \eqn\lilko$$
has virtual dimension zero. The correlation function \guelf\
is zero if such a $d$ does not exist; otherwise it is
$$\left\langle\prod_i O_{V_i}(x_i)\right\rangle = \exp(-dr)\cdot N_{\{H_i\}}
\eqn\mbmb$$
with $N_{\{H_i\}}$ the ``number'' of holomorphic maps $\Phi:\Sigma
\rightarrow G(k,N)$ that obey \lilko.
(In general, in defining this number,
one must make a suitable perturbation of the equation to
avoid possible degeneracies; that is why I have put the word ``number''
in quotes.)
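Concretely, $d$ is determined by dimension counting; the following is a sketch using the standard facts that $c_1$ of $G(k,N)$ is $N$ times the positive generator of $H^2$ and that $\dim_{\bf C}G(k,N)=k(N-k)$. The moduli space of degree $d$ holomorphic maps has virtual complex dimension

```latex
$$\dim_{\bf C}{\cal M}_d=\int_\Sigma\Phi^*c_1\bigl(G(k,N)\bigr)
+\dim_{\bf C}G(k,N)\cdot(1-g)=Nd+k(N-k)(1-g).$$
```

Since each condition $\Phi(x_i)\in H_i$ reduces the dimension by ${\rm codim}_{\bf C}H_i$, virtual dimension zero requires $Nd=\sum_i{\rm codim}_{\bf C}H_i-k(N-k)(1-g)$, which has at most one solution in non-negative integers.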
\mbmb\ follows from the standard description of the
$A$ model, as explained in [\witmir].
More microscopically, to see the appearance of instantons,
one can begin with the transformation
laws \milopo, \nurbob. The calculation
of the correlation function in
\guelf\ can be localized, by a standard argument, on the fixed
points of $Q_-,\overline Q_+$. An analysis as in [\phases], pp. 184-8,
identifies those fixed points (for $r>>0$)
with the holomorphic maps of $\Sigma$ to the Grassmannian.
Those holomorphic maps appear in precisely the form in which they
were studied by Bertram, Daskalopoulos, and Wentworth [\bertram].
\section{Some Renormalization Factors}
Before analyzing the quantum theory, I want to first
point out a few details involving renormalization.
Any topological field theory in two dimensions could be modified
by the addition of a term
$$\Delta L = a\int_\Sigma d^2x\sqrt h {R\over 2\pi} \eqn\normert$$
without affecting the topological invariance. The effect of this
is merely to multiply
a genus $g$ amplitude by a factor of $\exp(a(2-2g))$.
An important role in the analysis will be played by the Kahler
parameter $r$. For instance, $r$ enters
in the basic formula \mbmb\ expressing correlation functions in
terms of instantons. However, as we see in \mbmb, the $r$ dependence
of a degree $d$ instanton contribution is known {\it a priori}.
Moreover, because of the ${\bf Z}$ grading of the classical cohomology,
every given correlation function in genus $g$ receives a contribution
from at most one known value of $d$. Therefore, there is no material
loss in setting $r$ to 0, and we will do that eventually in \S4.7.
Another normalization question involves the possibility of multiplying
an operator of degree $w$ by a factor $\exp(uw)$ with some constant $u$.
One can show by keeping track of the classical ${\bf Z}$ grading that
this can be absorbed in adding constants to $r$ and $a$.
This normalization question will arise below because we will
find that the field $\sigma$ of the Grassmannian sigma model
has a macroscopic interpretation as
$$\sigma=c g,\eqn\juppl$$
where $c$ is a constant
that we will determine only approximately and $g$ is the elementary field
of a gauged WZW model.
In practice, in our computations we will not try to determine
the precise values of $a$ and $c$. At the end, when we enter
the computational stage, we will identify the values of these
parameters by checking a couple of special cases of the formulas.
\section{Quantum Properties Of The Model}
We come finally to the point of the present paper -- the calculation
mapping the Grassmannian sigma model onto the $G/G$ model,
and thence the Verlinde algebra.
The calculations themselves are not new [\div--\cecotti],
and I will therefore present them rather briefly. What is new
is the result that we will get by considering these computations
in a global context.
We begin with the expression for $D$ in terms of matter fields
that is obtained by varying the potential energy term \pixox\ with
respect to $D$:
$$-{1\over e^2}D^i{}_j=\sum_{s=1}^N\phi^{is}\overline\phi_{js}-r\delta^i{}_j.
\eqn\oppu$$
At the classical level, for $r>>0$, vanishing of the $D^i{}_j$ -- which
is needed to set the energy to zero -- requires that the $\phi^{is}$
should have non-zero vacuum expectation values. This in turn spontaneously
breaks the global $U(N)$ symmetry (and ensures the existence of
massless Goldstone bosons and the absence of a mass gap). Such
spontaneous breaking of a continuous symmetry
is, however, impossible in two dimensions.
The resolution of this conundrum has long been known. Quantum
mechanically the operator $O^i{}_j=\sum_{s=1}^N\phi^{is}\overline\phi_{js}$ can
have an expectation value even if the $\phi^{is}$ do not. If
this expectation value can equal $r\delta^i{}_j$, then the $D^i{}_j$
can vanish without spontaneous breaking of the $U(N)$ symmetry.
To investigate this phenomenon, let us compute the expectation value
of $O^i{}_j$. We will first do this in a naive approximation,
treating the $\phi$'s as free fields with the mass term
that we can read off from the classical Lagrangian. Then we will
discuss the conditions for validity of the approximation.
We will do the calculation on Euclidean ${\bf R}^2$, making the
standard Wick rotations from the Lorentz signature Lagrangian
given above.
The mass term for the $\phi$ field in the Lagrangian
is $\sum_{i,j,s}\overline\phi_{is}\{\sigma,\overline\sigma\}^i{}_j\phi^{js}$, with
$\{\cdot,\cdot\}$ the anticommutator.
Treating the $\phi$'s as free fields with that mass term,
the expectation value of $O^i{}_j$ is simply
$$\langle O\rangle =N\int {d^2k\over (2\pi)^2}{1\over k^2+\{\sigma,\overline\sigma
\}}.\eqn\tugboat$$
The factor of $N$ comes from summing over $s$.
The integral in \tugboat\ is logarithmically divergent. The divergence
can be regularized by subtracting a similar integral with
$\{\sigma,\overline\sigma\}$ replaced by a multiple of the identity,
say $2\mu^2$, with $\mu$ an arbitrary ``subtraction point.''
The subtraction can be interpreted as an additive renormalization of
$r$. After this regularization, the integral can be evaluated,
and one gets
$$\langle O\rangle = -{N\over
4\pi}\ln\left(\{\sigma,\overline\sigma\}/2\mu^2\right).
\eqn\wondrous$$
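As a numerical check of \wondrous, the subtracted integral can be evaluated directly. The following is a minimal sketch (in Python, outside the notation of this paper); the values chosen for $m^2=\{\sigma,\overline\sigma\}$ and $a^2=2\mu^2$ are arbitrary illustrative numbers:

```python
import math

def subtracted_integral(m2, a2, kmax=400.0, n=400000):
    # Radial form of  int d^2k/(2 pi)^2 [1/(k^2+m2) - 1/(k^2+a2)],
    # after the angular integration, evaluated by composite Simpson's rule.
    # The subtracted integrand decays like 1/k^3, so the finite cutoff kmax
    # introduces only an O(1/kmax^2) error.
    h = kmax / n
    def f(k):
        return k / (2.0 * math.pi) * (1.0 / (k * k + m2) - 1.0 / (k * k + a2))
    s = f(0.0) + f(kmax)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

m2 = 0.5   # stands in for {sigma, sigma-bar}; illustrative value only
a2 = 2.0   # stands in for the subtraction point 2 mu^2
numeric = subtracted_integral(m2, a2)
closed_form = -math.log(m2 / a2) / (4.0 * math.pi)  # per-flavor value in \wondrous
```

The agreement confirms that the logarithmic divergence cancels in the difference and that the finite remainder is $-(1/4\pi)\ln(m^2/2\mu^2)$ per flavor, which multiplied by $N$ reproduces \wondrous.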
The condition for $D$ to vanish in this approximation is hence that
$$-{N\over 4\pi}\ln\left(\{\sigma,\overline\sigma\}/2\mu^2\right)-r =
0,\eqn\ugboat$$
or
$$\{\sigma,\overline\sigma\}= 2\mu^2\exp\left(-4\pi r/N\right). \eqn\marblo$$
This is however only a necessary condition for vanishing of the energy.
Another condition comes from the presence in the classical
Lagrangian of a term proportional to $\Tr[\sigma,\overline\sigma]^2$.
This term gives a contribution to the energy that
vanishes precisely when $[\sigma,\overline\sigma]=0$,
so in seeking to describe the vacuum, we may assume that $\sigma$
and $\overline\sigma$ commute and therefore rewrite \marblo\ in the form
$$\sigma\overline\sigma=\mu^2\exp\left(-4\pi r/N\right). \eqn\tarble$$
This means that
$$\sigma = c g \eqn\arble$$
with $g$ a unitary matrix -- $\overline g{}^tg = 1$ -- and
$c$ the constant
$$c =\mu\exp\left(-2\pi r/N\right). \eqn\ommo$$
Thus, we have obtained a kind of sigma model with a field
$g$ taking values in the unitary group.
The vacuum expectation value of $\sigma$ that we have just found
gives a positive mass squared to the $\phi^{is}$, so that they
will have zero vacuum expectation value, restoring the $U(N)$
symmetry. However, the discrete chiral symmetry (conservation
of $J_A$ modulo $2N$) is spontaneously broken in this process.
Indeed, since $\sigma$ has $J_A=2$, the vacuum expectation
value of $\sigma$ breaks ${\bf Z}_{2N}$ down to ${\bf Z}_2$.
(For instance, this is discussed in detail for $k=1$ on p. 310
of [\oldwit].)
As the broken symmetry is discrete, this does not produce
Goldstone bosons and is compatible with the existence of a mass
gap. In fact, the broken symmetry helps in getting a mass gap,
since most of the fermions obtain masses at tree level from the
vacuum expectation value of $\sigma$.
\subsection{Validity Of The Approximation}
Before proceeding to unravel further subtleties, let us
discuss the conditions for validity of the approximation.
The traditional region of validity of the above approximation,
as in [\div,\oldwit], is $k$ fixed, $N\rightarrow \infty$, with $r$ and $1/e^2$
of order $N$. In this limit, the corrections to the
approximation (of treating the $\phi$'s as free fields with
a $\sigma$-dependent mass) are of order $1/N$. The above
computation is part of the beginning of a systematic
expansion of all physical observables in powers of $1/N$.
Many important features of the theory
involve properties that are stable under perturbation -- like
whether there is a mass gap, what symmetries are spontaneously
broken, and certain aspects of the topological
sector. For addressing such questions, the $1/N$ expansion is
good enough for fixed $k$ and sufficiently big $N$.
That is not enough for us, because we wish
to relate the Verlinde algebra to the cohomology
of the Grassmannian for all $k$ and $N$. Happily, there is another
region of validity of the approximation. At the classical level,
the matrix $O$ is positive definite, and accordingly for $r<0$
it would be impossible for the energy to vanish. Quantum mechanically,
because of the subtraction that was needed in the above computation,
$O$ is not positive definite. Accordingly, zero energy is possible
also for negative $r$ at the quantum level; indeed, the solution
\tarble\ makes sense for either sign of $r$. (The continuation of the
model to negative $r$ was discussed in [\phases,\S3.2] for the case
$k=1$.)
I claim that for any $k$ and $N$, the computation leading to \tarble\
is a valid approximation for $r<<0$. The reason for this is that
the approximate vacuum state given by this computation has
exponentially {\it large} $\sigma$ for $r\rightarrow-\infty$. This gives
an exponentially large mass to the $\Phi$ multiplet, so $\phi$ and $\psi$ loops
can be ignored except perhaps for renormalization effects involving
diagrams with poor ultraviolet convergence. In this super-renormalizable
theory, the only such diagram is the one loop diagram whose evaluation
leads to \tarble.
The quantum cohomology of the Grassmannian involves, naively,
the behavior for $r>>0$. However, because the first Chern class
of $G(k,N)$ is positive, every topological correlation function of the
twisted theory (of operators
of definite dimension or $J_A$) receives a contribution only from
one value of the instanton number and hence depends on $r$
as $\exp(-dr)$ with a known constant $d$ that appeared in \mbmb.
The behavior for $r<<0$
therefore determines the behavior for $r>>0$. Consequently,
the fact that our approximation is valid for the theory continued
to $r<<0$ means that it is good enough for studying the topological
sector of the twisted theory. We will now explore the
implications.
\section{The Mass Gap And The WZW Action}
Because $\sigma$ was determined to be an arbitrary unitary matrix
(times a fixed constant), it appears at first sight that
the model has a continuous vacuum degeneracy and therefore
massless particles, at least in this approximation. This can
hardly be correct because the massless $\sigma$ particles,
subject to the constraint \tarble, do not furnish a representation
of ${\cal N}=2$ supersymmetry. (The $\phi$ fields are massive in this
approximation,
as we have noted, and so cannot help.) This puzzle was resolved in the
old literature in the context of the $1/N$ expansion;
the resolution involves giving a mass to $\sigma$ by mixing with the
$U(k)$ gauge field $v$.
(See, for instance, p. 308 of [\oldwit] for $k=1$ and
the discussion of the $\phi_5-\lambda$ propagator
on pp. 165-6 of [\brazil] for general $k$.)
I will not present the detailed computations here, as they are standard;
I will merely summarize them and focus on the interpretation, which is all
that is new.
\FIG\fermloop{The one-loop diagram describing $\sigma - v$ mixing}
\FIG\ofermloop{The one-loop diagram generating the Wess-Zumino coupling
for $\sigma$.}
The $1/N$ expansion amounts to integrating out
the chiral superfields $\Phi^{is}$ to obtain
an effective action for the gauge multiplet.
The $\sigma - v$ mixing comes from the one loop diagram of figure (\fermloop).
The non-vanishing contribution
is the one in which the particles running around the loop are fermions.
However, perhaps even more fundamental is the one-loop diagram with
external sigma fields only and internal fermions, shown in figure
(\ofermloop).
The fermions $\psi^{is}$, for $s=1\dots N$, form $N$ copies of the
fundamental representation of $U(k)$. Let us suppress the $s$ index
and look at a single multiplet $\psi^i$. The key point is that,
looking back to the Lagrangian
\qqmmw, the fermions receive their mass from a coupling $-\sqrt 2\,
\overline\psi_{-i}\sigma^i{}_j\psi_{+}{}^{j}-{\mit c.c.}$ This coupling
breaks the $U(k)_L\times U(k)_R$ chiral symmetry of the fermion
kinetic energy down to a diagonal $U(k)$. Therefore, when we integrate
out the fermions to get an effective action for $\sigma$, we are dealing
with the standard problem of integrating out massive fermions that
receive their mass from spontaneous chiral symmetry breaking. It is
precisely in connection with this problem that the anomalous Wess-Zumino
interaction, defined in \wzfun, was originally discovered. The long
wavelength limit of the effective interaction obtained by integrating
out the fermions is therefore -- allowing for the $N$ multiplets --
precisely
$$ L_{{\mit eff}}(\sigma)= N\Gamma(\sigma). \eqn\mirro$$
The form of this interaction is completely determined by $U(k)_L\times
U(k)_R$ invariance and the chiral anomaly. (Let me warn the reader
that the $U(k)_L\times U(k)_R$ symmetry just invoked
is explicitly broken by interactions, such as the gauge couplings, that
do not contribute to $L_{{\mit eff}}(\sigma)$ in leading order in
$1/N$. The corrections to the leading large $N$ behavior are the subject
of the next sub-section.)
Now we include the gauge fields and Feynman diagrams such as
that of figure (\fermloop).
Such diagrams must extend \mirro\ to a gauge invariant
effective action $L_{{\mit eff}}(\sigma,v)$. The minimal choice,
in some sense, is the gauge invariant extension of the Wess-Zumino
action that was defined in \qqq:
$$L_{{\mit eff}}(\sigma,v)=N\Gamma(\sigma,v) . \eqn\riffo$$
Is this minimal form correct?
Apart from terms that vanish by the equations of motion
and terms of higher dimension that can be ignored at long distances,
a non-minimal gauge invariant term (on a flat world sheet)
would have to be of the form
$$\int_\Sigma \Tr F W(\sigma), \eqn\rripo$$
with $F=dv+v\wedge v$ and $W$ a function of $\sigma$
that transforms in the adjoint representation.
In the next sub-section, we will show that
such a term is not generated, even by
corrections to the $1/N$ expansion. We will also discuss the role
of the terms that vanish by the equations of motion, and some
curvature-dependent terms.
\subsection{Synthesis}
If we take the Lagrangian $N\Gamma(\sigma,v)$ by itself, it describes
a level $N$ gauged WZW model of $U(k)/U(k)$. This sort of model
was analyzed in \S2, and as we know from \S2.5, it describes
a topological field theory.
There are no propagating modes at all, massless or massive.
If one adds conventional kinetic energy for $\sigma$ and $v$
(such terms are certainly present in our underlying Lagrangian),
one has propagating modes but massive ones. Indeed the conventional
kinetic energy is irrelevant in the infrared and the large distance
behavior is that of the gauged WZW model.
Thus, the Grassmannian sigma model --
even if one does not restrict {\it a priori} to its topological sector --
reduces at long distances to a topological field theory.
In fact, any theory with a mass gap
will do this, since at distances at which the massive
particles can be neglected, all that survives is
dynamics of the vacuum or topological field theory.
In the case of the Grassmannian sigma
model, there was an underlying topological sector, described in \S4.2, and
visible from the classical Lagrangian before any analysis of its
quantum properties. The basic observable in this topological sector
was the $\sigma$ field that appears in
\riffo\ (but now restricted to $\overline\sigma\sigma={\rm constant}$).
Thus the topological sector, defined microscopically, passes over
at large distances to the gauged WZW model governing the $\sigma$ field.
This is the mapping from the quantum cohomology of the Grassmannian
to the gauged WZW model (and thence the Verlinde algebra) that is the
main goal of this paper.
In what follows, we will analyze the
corrections to the $1/N$ expansion and eventually pin down the details
of the mapping from the quantum cohomology to the gauged WZW model.
\subsection{Search For Manifest Supersymmetry}
It would be attractive to find an extension of
\riffo, including the fermi partners of $\sigma$ and $v$, with
manifest ${\cal N}=2$ supersymmetry. Of course, the one-loop effective
action from which \riffo\ was defined is such an extension, but
it would be nice to find, for instance, a compact description in ${\cal N}=2$
superspace of a local interaction describing
the long-wavelength part of the one-loop effective action.
I have been unable to do this and leave it as an interesting question.
However, let us truncate to
the abelian case in which $\sigma$, $v$, and their fermionic partners
are diagonal matrices (for instance
$\sigma={\rm diag}(\sigma_1,\dots,\sigma_k)$).
In this case, the field strength is similarly diagonal
(say $\Sigma={\rm diag}(\Sigma_1,\dots,\Sigma_k)$).
With this truncation,
it is possible to find an explicit, local
superspace interaction that describes
all of the anomalous interactions. This interaction (which in the abelian
case was discussed in [\phases], \S3.2) is
$$L_{{\cal N}=2}={1\over \sqrt 2}\sum_{i=1}^k\int d^2x \,d\theta^+\,d\overline\theta^-
\left({it\Sigma_i\over 2}-{N\over 2\pi}\Sigma_i\ln(\Sigma_i/\mu)\right)+{\mit
c.c.} \eqn\buffalo$$
The ease of writing
this interaction in the diagonal case and the difficulty of describing
its full non-abelian generalization may be related to the utility
of abelianization in the next subsection.
\section{Corrections}
Now we turn to analyzing the corrections
to this approximation.
We can ignore operators of dimension higher than two, which are irrelevant
at long distances. Terms of dimension less than two, such as \nxon, cannot
arise as they would
violate the underlying ${\cal N}=2$ supersymmetry or (as a consequence) the
topological invariance of the twisted sector.
Also, we can ignore terms that vanish by the $v$
equations of motion and so can be eliminated, as in \S2.5, by a redefinition
of $v$. Such terms (which are of the form $\int_\Sigma \sigma^*(B)$, with
$B$ an adjoint-invariant two-form on $U(k)$) would play no role in
our subsequent analysis.
We are left with three issues to consider:
(1) First, there might be corrections to the discrete data, the
``level'' of the effective
WZW model. In the one loop approximation, we found the level to be
$N$; however, corrections of relative order $1/N$ could shift this by
a constant.
Actually, this has to be formulated more precisely because
the Lie algebra of $U(k)$, which we will call $u(k)$, is not simple;
it can be split as $su(k)\oplus u(1)$, where $su(k)$ consists of the
traceless $k\times k$ hermitian matrices and $u(1)$ is the center of $u(k)$.
In general, one could have a gauged WZW model for $U(k)$ of level
$(N_1,N_2)$, by which I mean that the Lagrangian would be determined
by the quadratic form $(\cdot,\cdot)$
on $u(k)$ such that $(a,b)=N_1\Tr ab$ for
$a,b\in su(k)$, and $(a,b)=N_2\Tr ab$ for $a,b\in u(1)$.
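Explicitly, decomposing $a\in u(k)$ as $a=\hat a+{\Tr a\over k}\cdot 1$ with $\hat a$ traceless, the level $(N_1,N_2)$ quadratic form is

```latex
$$(a,b)=N_1\Tr\hat a\hat b+{N_2\over k}\,\Tr a\,\Tr b,$$
```

which indeed restricts to $N_1\Tr ab$ on $su(k)$ and to $N_2\Tr ab$ on the $u(1)$ center.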
Thus, the first correction to the $1/N$ approximation might lead to
$(N_1,N_2)=(N+u,N+v)$ where $u,v$ are integers (perhaps depending on $k$);
higher order corrections in $1/N$ must vanish as
they could not be integral for all $N$.
(2) Second, the low energy effective action might contain a term
$$\int_\Sigma \Tr F W(\sigma), \eqn\ripo$$
with as above $F$ the $u(k)$ curvature, and $W$ a function of $\sigma$
that transforms in the adjoint representation. Though this term vanishes
by the equations of motion of the low-energy gauged WZW model and so could
be eliminated even from the quantum theory
by a field redefinition (as described in \S2.5),
it could still play a role that will be explained later.
Note that a constant term in $W$ (that is, a multiple of the identity)
could be absorbed in an additive
renormalization of $t$; we will not try to determine such a renormalization,
and all of our statements about $W$
will hold modulo an additive constant.
(3) Finally, we need to know whether, when the twisted theory is formulated
on a curved world-sheet, the effective Lagrangian contains a term
$$\Delta L = \int_\Sigma d^2x\sqrt h {R\over 4\pi} U(\sigma), \eqn\ocx$$
with $R$ the world-sheet curvature and $U(\sigma)$ a function invariant
under conjugation. As we have discussed in \S2.5, any continuous
deformation of the gauged WZW model that preserves the topological
invariance and cannot be eliminated by a change of variables is of this
form. (By contrast, the deformations considered above in (1) are discrete,
not continuous, and the deformations in (2) can be removed by a change of
variable.)
Now, here are the answers that I will claim for these three questions:
(A1) I will claim that the level of the effective gauged WZW model
is really $(N-k,N)$. The correction can be thought of as a $1/N$
correction that comes from integrating out the $U(k)$ gauge multiplet
(the $u(1)$ level is not shifted since the gauge multiplet is neutral
under $u(1)$).
(A2) I will claim that $W=0$, in other words that no term of the form
\ripo\ is generated.
(A3) I will claim that a term of the form \ocx\ is generated, with
$$U = (N-k)\ln \det \sigma+{\rm constant}.\eqn\onzo$$
This might be regarded as the minimal possibility compatible with
the anomaly formula \oxoc.
\subsection{Abelianization}
Now I will explain how I will do the calculation. A $1/N$ expansion
will not suffice, since we do not want to be limited to sufficiently
large $N$. Instead, we will study the theory in the alternative
regime of $r<<0$.
To identify the corrections to the effective action of the three
types discussed above, it suffices to work in the region of field space
in which $\sigma$ is a diagonal matrix with distinct eigenvalues
$\sigma_1,\dots,\sigma_k$. Moreover, we impose the condition
of unbroken supersymmetry (or vanishing vacuum energy); in the approximation
of \tarble\ -- which is valid for $r<<0$ -- the condition is
$$\overline\sigma_i\sigma_i =\mu^2
\exp(-4\pi r/N), ~{\rm for}~i=1,\dots,k.\eqn\urfo$$
The distinct values of the $\sigma_i$ break $U(k)$ to a diagonal subgroup
$U(1)^k$. Calculations are relatively easy because the chiral superfields
and the ``off-diagonal'' part of the gauge multiplet
have large masses, of order $\overline\sigma\sigma$,
which can be read off from the classical Lagrangian. The fields which
remain massless in this approximation (and actually get masses at one
loop, smaller by a factor of $e^2$) are the diagonal part of the gauge
multiplet. The effective action for the massless modes, including the
one loop correction, has already been written with manifest ${\cal N}=2$
supersymmetry in \buffalo. This is a kind of gauged WZW model
of $U(1)^k$.
So in this regime, we get a kind of abelianization of the Grassmannian
sigma model.
This should not come as a complete surprise, since as we have recalled
in \S2.6, the gauged WZW model has a precisely analogous abelianization.
Now, we will have to be careful in using abelianization to compute
the effects of types (1),
(2), and (3), because in going from the
gauged WZW model to its abelianization, precisely analogous terms
are generated. These were computed in [\blau] and described in \S2.6,
and are as follows.
(B1) The shift in level in going from the
gauged WZW model of $U(k)$ to its abelianization is
$(k,0)$. (There is obviously no shift of the $u(1)$ level under abelianization
since $u(1)$ is already abelian.)
(B2) No term of the form \ripo\ is generated.
(B3) The term of the form \ocx\ that is generated in abelianizing
the gauged WZW model was presented in equation \lateruse.
Now in verifying claims (A1), (A2), and (A3), we will integrate
out from the Grassmannian sigma model the
fields that, in the abelianized regime, have tree level masses;
thus we will get the precise abelianized
theory that is equivalent to the Grassmannian sigma model.
Then we will interpret the result as a sum of two contributions:
the terms claimed in (A1), (A2), and (A3) which describe how
to go from the topological sector of the Grassmannian sigma model
to an equivalent gauged WZW model; and the terms (B1), (B2), and (B3),
which arise in abelianization of the gauged WZW model.
So claims (A1), (A2), and (A3) are equivalent to the following claims,
which are the ones that we will actually check:
(C1) After abelianization, there is no shift in the level of the
Grassmannian sigma model from the naive result $(N,N)$. We interpret
this to mean that the topological sector of the Grassmannian sigma
model is equivalent to a gauged WZW model of $U(k)/U(k)$
at level $(N-k,N)$, and the
level of that model is shifted by $(k,0)$ upon abelianization.
(C2) There will be no induced term of the type \ripo, in abelianizing
either of the two models or in comparing them.
(C3) The induced term of type \ocx\ in abelianization of the Grassmannian
model will be the sum of \onzo\ and the contribution \lateruse\ that arises
in abelianizing the gauged WZW model.
The sum of these is simply
$$\widetilde U(\sigma)=(N-1)\ln \det \sigma-\sum_{i\not=j}\ln(\sigma_i
-\sigma_j). \eqn\oczo$$
\subsection{The Calculation}
Now I will explain the calculation justifying (C1), (C2), and (C3).
In discussing (C1) and (C2), world-sheet curvature is irrelevant, and we can
work on a flat ${\bf R}^2$.
(C1) and (C2) can be taken together and deduced from the following
principle. Suppose that a $u(1)$ gauge field $v$, with field strength
$f=d v$, is coupled
to a Dirac fermion $\chi$, of charge $q$. Let $\chi$ have a mass term
$$ L_{{\mit mass}}=-\int_\Sigma d^2x\left(\overline\chi_- m \chi_++\overline\chi_+
\overline m \chi_-\right). \eqn\gogog$$
We want to integrate out $\chi$ to get an effective action for $v$.
The dependence of the effective action on the phase of $m$ comes only
from the chiral anomaly and is
$$L_{{\rm eff}}= \dots +q\int_\Sigma {f\over 2\pi}\,{\rm Im}\ln m.
\eqn\homer$$
Now we look at the Grassmannian sigma model in the abelianized regime
of $r<<0$, $\sigma_i$ large (obeying \urfo) and distinct.
The chiral superfields $\Phi^{js}$ and the off-diagonal part of the
gauge multiplet have bare masses of order $|\sigma_i|$. They can be
integrated out in a one loop approximation; higher order corrections
would be of order $e^2$ and irrelevant. Integrating out massive
bosons does not give terms relevant to (C1) or (C2), while
the contributions of fermions can be deduced from \homer.
To do so explicitly, let $v_i, ~i=1\dots k$, be the diagonal components
of the gauge field. First we work out the contributions of chiral
superfields. Each $v_i$ is coupled to $N$ chiral superfields $\Phi^{is},
{}~s=1\dots N$, of charge 1. The fermi components of these superfields
have mass $\sqrt 2\sigma_i$ (by inspection of \qqmmw), so their contribution
is
$$ N\sum_{i=1}^k\int_{\Sigma}{d v_i\over 2\pi}{\rm Im}\ln \sigma_i.
\eqn\ity$$
This is the level $N$
gauged WZW action computed in equation
\riffo, specialized to the
case that only the diagonal components of $v$ and $\sigma$ are non-zero.
Now, we come to the off-diagonal part of the gauge fields. Again,
the relevant contribution comes from the phases of the masses of the
off-diagonal fermions $\lambda^i{}_j,\,\,\,i\not=j$.
Since \ity\ coincides with the level $N$ gauged WZW action, the claims
(C1) and (C2) amount to the assertion
that no additional contribution will come from integrating out
the $\lambda^i{}_j$.
By inspection of \muccdo, the mass of $\lambda^i{}_j$ is $\sqrt 2(\sigma_i
-\sigma_j)$. The gauge field $v_i$ interacts with the $\lambda^i{}_j$,
$j\not= i$, of charge 1, and with the $\lambda^m{}_i$, $m\not= i$,
of charge $-1$. Their contribution adds up to
$$ \sum_{i,j}\int_\Sigma {dv_i\over 2\pi}\left({\rm Im}\ln(\sigma_i
-\sigma_j) -{\rm Im}\ln (\sigma_j-\sigma_i)\right). \eqn\jud$$
This is zero, or more exactly, it is independent of the $\sigma_i$.
Consequently it can be interpreted as a constant
term in $W$ or an additive renormalization of $t$; as noted in
the paragraph following \ripo, we will not keep track of such effects.
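As a small numerical aside (not part of the original argument), one can check directly that each difference ${\rm Im}\ln(\sigma_i-\sigma_j)-{\rm Im}\ln(\sigma_j-\sigma_i)$ in \jud\ is a $\sigma$-independent constant, namely $\pm\pi$ on the principal branch of the logarithm:

```python
import cmath

def im_log_difference(z):
    """Im ln(z) - Im ln(-z) on the principal branch, for nonzero complex z."""
    return cmath.log(z).imag - cmath.log(-z).imag

# The difference is +pi or -pi depending only on which half-plane z lies in,
# never on |z| or on the detailed value of z, so the sum over the masses
# sigma_i - sigma_j contributes only a sigma-independent constant.
samples = [1 + 2j, -3 + 0.5j, 0.1 - 7j, 2 - 2j]
diffs = [im_log_difference(z) for z in samples]
```

This is exactly the sense in which \jud\ is "zero": it is a constant that can be absorbed in $W$ or in $t$.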
As for the diagonal components of $\lambda$, they are neutral and do not
couple to the $v_i$.
It remains to discuss (C3). To this aim, we can take $\Sigma$ to
be a Riemann surface of genus zero and take the $\sigma_i$ to be constants.
As $\int_\Sigma d^2x \sqrt h R/4\pi=1$ in genus zero, the
claim (C3) amounts to the assertion that the
path integral $\int D\Phi_i\dots e^{-L}$ is
a constant multiple of
$$ (\det\sigma)^{-(N-1)}\prod_{i\not= j}(\sigma_i-\sigma_j). \eqn\huvvo$$
To verify this, we first integrate out the massive fields in the same
one loop approximation as above. As before, the boson determinant
is real and depends only on $|\sigma_i|$,
while the fermion determinant has a phase that can be
extracted from the chiral anomaly. Ordinarily, there is no chiral
anomaly for fermions in a gravitational field in two dimensions,
but the twisting to produce the topological theory involves
a modification of the fermion kinetic energy that introduces such an
anomaly.
We could proceed as above, starting with the anomaly formula analogous to
\homer. For
the sake of variety, however, let us note that the anomaly can be
captured by the path integral over the zero modes of the fermion kinetic
energy. For instance, the fermions $\psi^{is}$ from the
chiral multiplets have components $\overline\psi_+,\psi_-$ of spin zero
and other components of spin one. The zero modes of the fermion
kinetic energy are the constant modes of $\overline\psi_+,\psi_-$, and
the path integral over those modes is
$$\int d\overline\psi_{+is}d\psi_-{}^{jt}\exp\left(\sum_{is}\overline\psi_{+is}
\overline\sigma_i\psi_-{}^{is}\right)=\det\overline\sigma^N={\rm constant}\cdot
\det\sigma^{-N}, \eqn\nuffo$$
where we have used the fact that $\sigma\overline\sigma = {\rm constant}$.
Similarly, for the off-diagonal $\lambda$ fields, the
zero modes of the kinetic energy are the constant modes of $\overline\lambda_-$,
$\lambda_+$, and the path integral over those modes is
$$\prod_{i\not = j}\int d\overline\lambda_-{}^i_j \,d\lambda_+{}^j{}_i
\exp\left(\sqrt 2
(\sigma_i-\sigma_j)\overline\lambda_-{}^i{}_j\lambda_+{}^j_i\right)
={\rm const}\cdot \prod_{i\not= j}\left(\sigma_i-\sigma_j\right). \eqn\upplo$$
Comparing \nuffo\ and
\upplo\ with the claim made in \oczo\ concerning (C3), we
see that we are missing precisely one factor of $\det\sigma$.
This must come from the remaining integral over the diagonal
components of the gauge field. Indeed, though the diagonal fermions
$\lambda^i{}_i$ are massless at tree level, they receive at the one
loop level a mass term with the form
$$ \sum_{i=1}^k\left(\lambda_+{}^i{}_i
\overline\sigma_i{}^{-1}\overline\lambda_-{}^i{}_i+{\mit c.c.}\right)
\eqn\mippo$$
This term can be straightforwardly calculated or can be read off from
the ${\cal N}=2$ extension \buffalo\ of the bosonic anomalous interactions.
Integrating out the diagonal fermions therefore gives
(up to a constant) a factor of
$$\left(\det\overline\sigma\right)^{-1}
={\rm constant}\cdot \left(\det\sigma\right). \eqn\ovvo$$
This is the last factor needed for (C3).
This factor could in a more general way be predicted as follows.
Just because the diagonal theory (including the $\lambda^i{}_i$)
is a product of $k$ sub-theories,
the phase it produces must be of the form
$$\prod_{i=1}^k F(\sigma_i) \eqn\poxp$$
for some function $F$. Given this factorized form, to agree
with the anomaly formula \oxoc\ it must be that $F(\sigma)=\sigma$.
\section{The Verlinde Algebra And The Grassmannian}
In this section, we will put the pieces together and write
down the precise connection between the Verlinde algebra and
the quantum cohomology of the Grassmannian.
First of all, we consider the map from the cohomology ring of the
Grassmannian to the Verlinde algebra. The quantum cohomology ring
of $G(k,N)$ is generated by operators of the form
$$O_V=\Tr_V\sigma, \eqn\olfo$$
with $V$ an irreducible representation of $U(k)$. Thinking of $\sigma$
as the curvature of the natural connection on the tautological rank
$k$ bundle over $G(k,N)$, $\Tr_V\sigma$ can be interpreted as a characteristic
class of that bundle and hence a cohomology class of $G(k,N)$.
On the other hand, working at long distances, $\sigma$ becomes
a unitary matrix (up to a constant that we will eventually pin down)
and then $\Tr_V\sigma$
can be interpreted as an operator in the effective gauged WZW model
of $U(k)/U(k)$.
We worked out in \finalgo,\josos\ the interpretation of this operator:
it is the element of the Verlinde algebra determined by the representation
$V$. Of course, from what we have said in \S4.6, the Verlinde
algebra in question is the one for the group $U(k)$ at
level $(N-k,N)$.
So we have gotten the precise mapping from the cohomology of the
Grassmannian to the Verlinde algebra. A couple of points should be
clarified:
(1) It is essential that in mapping from the Grassmannian sigma
model to the gauged WZW model, there is no correction of type \ripo.
Such a term, since it vanishes by the equations of motion
in the gauged WZW model, could be transformed away by a redefinition
of $\sigma$ and $v$. But the resulting redefinition of $\sigma$
would cause the operator $O_V$ to mix with similar operators for
other representations. Thus, were terms with the structure \ripo\ to appear,
their precise form would enter in determining the map from
the cohomology of $G(k,N)$ to the Verlinde algebra.
By contrast, corrections to the gauged WZW action
that vanish by the $v$ equations of motion and so can be removed
by redefinition of $v$ are immaterial, since $O_V(\sigma)$ is
independent of $v$. As noted at the beginning of \S4.3,
we have made no claim that corrections that vanish by the $v$
equations of motion are not generated or have any particular structure.
(2) In the rest of this paper, we will set the Kahler parameter $r$ to
zero; as explained in \S4.3, this involves no essential loss of information.
Two other constants discussed in \S4.3 also enter.
One is the constant $c$ in the relation
$\sigma = c\cdot {\rm unitary~matrix}$, which we evaluated only approximately
in \ommo.
The other is the additive renormalization constant called $a$ in \normert,
which is unknown since
we did not attempt to determine the constant in \onzo.
For the time being, we will set $c$ and $a$ to 1 and 0; eventually
we will verify that this is correct (for $r=0$)
by checking special cases of the formulas.
\subsection{Correlation Functions And The Metric}
Now, let us determine precisely how the correlation functions
and the metric in the Grassmannian sigma model compare to those
in the gauged WZW model. The essential point that goes beyond
what we have just said above is that one must include the correction
term of \ocx, \onzo. By topological invariance, $\sigma$
can be treated as a constant, so the correction factor in the path
integral is
$$\exp(-\Delta L)=\exp\left(-\int_\Sigma d^2x\sqrt h{R\over 4\pi}
(N-k)\ln\det\sigma\right) = (\det\sigma)^{(g-1)(N-k)}. \eqn\bumbo$$
The point or points at which $\det\sigma$ is inserted are immaterial.
(It will turn out that $\det \sigma$ is an invertible element of the
Verlinde algebra or quantum cohomology.)
So if $\langle ~~~\rangle_{G(k,N)}$ denotes an expectation value of
the path integral of the Grassmannian sigma model, and $\langle ~~~\rangle
_{{\mit WZW}}$ denotes a path integral in the gauged WZW model,
then the relation between these symbols in genus $g$
is
$$\left\langle \prod_{i=1}^s \Tr_{V_i}(\sigma_i)\right\rangle_{G(k,N)}
=\left\langle\prod_{i=1}^s \Tr_{V_i}(\sigma_i) \cdot (\det\sigma)^{(g-1)
(N-k)}\right\rangle_{{\mit WZW}}. \eqn\omigo$$
Henceforth we will abbreviate $\Tr_{V_i}(\sigma_i)$ as $V_i$.
{}From \omigo\ one can see that the natural metric on the cohomology
of $G(k,N)$ (given by Poincar\'e duality) does not coincide with
the natural metric on the Verlinde algebra. Let us call these
metrics (the sigma model and Verlinde metrics)
$g_{{\sigma}}$ and $g_{{V}}$, respectively.
We recall that the metric is defined by a two point function in
genus 0, so
$$\eqalign{ g_{{\sigma}}(V_1,V_2) & = \left\langle V_1V_2
\right\rangle_{G(k,N)}\cr
g_{{V}}(V_1,V_2)& =\left\langle V_1V_2\right\rangle_{{\mit WZW}}=
\langle V_1V_2\cdot
(\det\sigma)^{(N-k)}\rangle_{G(k,N)},\cr} \eqn\controv$$
with the correlation functions being in genus zero.
Now let us compare the ring structure on the cohomology of the
Grassmannian to the ring structure of the gauged WZW model.
We recall that in either of the two theories,
the ring structure is introduced by interpreting
the genus zero three point function in terms of a binary operation,
say $V_1,V_2\rightarrow V_1\cdot V_2$, according to the following formula:
$$\left\langle V_1V_2V_3\right\rangle =g(V_1\cdot V_2,V_3). \eqn\murmoro$$
The relation between the genus zero three point functions of the
two theories is from \omigo\
$$\left\langle V_1V_2V_3\right\rangle_{{\mit WZW}}=\langle V_1V_2V_3
(\det\sigma)^{(N-k)}\rangle_{{\mit G(k,N)}}. \eqn\urmoro$$
In particular the three point functions of the $G(k,N)$ and
gauged WZW models do not coincide. However, they differ by the
same factor of $\left(\det\sigma\right)^{(N-k)}$ that
enters in the relation between the metrics.
This means in fact, upon putting together the last few formulas,
that the multiplication laws are the same in the two theories.
So finally, our natural map from the quantum cohomology of the Grassmannian
to the Verlinde algebra is a ring homomorphism -- justifying terminology
that was used above.
\subsection{Non-Abelian Theta Functions}
In the title of this paper and in much of the writing of it,
I have emphasized the Verlinde algebra, which determines the
dimension of the space of non-abelian theta functions. However,
the above gives directly a formula for the dimension of the space
${\cal H}$ of non-abelian theta functions without having to detour via the
Verlinde algebra. Of course, we will count non-abelian theta
functions for the group $U(k)$ at level $(N-k,N)$; because the
$U(1)$ theory is well understood, there is no essential difficulty
in generalizing to other levels.
Let $\langle 1\rangle^g$ denote the partition function in genus $g$.
Then the dimension of ${\cal H} $ on
a Riemann surface $\Sigma$ of genus $g$ is
$$\dim{\cal H}=\langle 1\rangle^g_{{\mit WZW}}
=\left\langle (\det\sigma)^{-(g-1)(N-k)}\right\rangle_{G(k,N)}
=\left \langle(\det\sigma)^{k(g-1)}\right\rangle_{G(k,N)}
\eqn\yutt$$
In the last expression, I have used the fact that $(\det\sigma)^N=1$,
as one can deduce from the Landau-Ginzburg description
of the quantum cohomology (we do this below for $k=2$).
The right hand side of \yutt\
can be evaluated by counting holomorphic maps of $\Sigma$ to
$G(k,N)$ obeying certain conditions.
\section{Getting Down To Earth}
In this section, we will make everything completely explicit
in the cases of $k=1$ and $k=2$. (Some of the issues are discussed
by Gepner for general $k$ in the last paper in [\gepner].)
First we dispose of $G(1,N)$, that is, ${\bf CP}^{N-1}$.
Over ${\bf CP}^{N-1}$ there is a tautological principal $U(1)$ bundle
$P$. Let $W$ be the standard representation of $U(1)$, of
``charge one.'' Associated to $P$ in the representation $W$ is
a line bundle ${\cal L}$. The representation $W$ determines
an operator $\Tr_W\sigma=\sigma$ which we will call $x$.
In the sigma model, interpreting $\sigma$ as the curvature of the natural
connection on ${\cal L}$, $x=c_1({\cal L})$.
The cohomology ring of ${\bf CP}^{N-1}$ is generated
by $x$ and classically is ${\bf C}[x]/(x^N)$. But
from [\yaulect] or the $k=1$ case of \ellform, the quantum cohomology
ring is
$$ R={\bf C}[x]/(x^N-1). \eqn\urmo$$
The metric on the cohomology determined by Poincar\'e duality is
$$g_{{\sigma}}(x^k,x^l) =\delta_{k+l,N-1}, \eqn\nurmo$$
where in view of \urmo,
$k$ and $l$ are evaluated modulo $N$.
On the other hand, in the gauged WZW model, $x=\Tr_W\sigma$
should be identified with the element of the Verlinde algebra
for $U(1)$ at level $N$ determined by the representation $W$.
The structure of this algebra is well known.
It is generated by $W$ with the relation $W^N=1$, just
as in \urmo, and the metric is
$$g_{{V}}(W^k,W^l)=\delta_{k+l,0} ,\eqn\rmo$$
with again $k$ and $l$ taken modulo $N$.
\nurmo\ and \rmo\ are related in the fashion predicted by \controv.
Indeed since for either metric $g(a,b)=g(ab,1)$, \controv\ is equivalent
to
$$g_{{V}}(x^k,1)= g_{{\sigma}}(x^k,x^{N-1}). \eqn\udlo$$
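To make \udlo\ completely concrete, here is an illustrative aside encoding the two metrics directly from \nurmo\ and \rmo, with indices reduced mod $N$:

```python
def metric_sigma(N, a, b):
    """Poincare duality metric on the quantum cohomology of CP^{N-1}:
    g_sigma(x^a, x^b) = 1 iff a + b = N - 1 (mod N)."""
    return 1 if (a + b) % N == (N - 1) % N else 0

def metric_verlinde(N, a, b):
    """U(1) level-N Verlinde metric: g_V(W^a, W^b) = 1 iff a + b = 0 (mod N)."""
    return 1 if (a + b) % N == 0 else 0

# g_V(x^k, 1) = g_sigma(x^k, x^{N-1}) for all k, as claimed
relation_holds = all(metric_verlinde(N, k, 0) == metric_sigma(N, k, N - 1)
                     for N in range(2, 9) for k in range(N))
```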
That disposes of $k=1$. Obviously, we cannot expect the non-abelian
case $k=2$ to be as trivial as that.
\subsection{The Verlinde Algebra For $k=2$}
First we describe explicitly (but not in a fully self-contained
fashion) the Verlinde algebra of the group
$U(2)$ at the desired level $(N-2,N)$.
We have an exact sequence
$$ 1\rightarrow Z_2\rightarrow SU(2)\times U(1)\underarrow{f} U(2)\rightarrow 1.\eqn\ippo$$
Here the map $f$ is as follows: we identify $SU(2)$ with the $2\times 2$
unitary matrices of determinant 1, $U(1)$ with $2\times 2$ unitary
matrices that are multiples of the identity, and for $x\in SU(2)$,
$y\in U(1)$, let $f(x,y)=xy$.
The gauged WZW action of $U(2)$ at level $(N-2,N)$ restricts,
if one takes the fields to lie in $SU(2)$, to the $SU(2)$ action
at level $N-2$; but if one takes the fields to be in $U(1)$, it restricts
to the $U(1)$ gauged WZW action at level $2N$. (A factor of two arises
simply because the trace of the identity matrix in the fundamental
representation of $U(2)$ is 2.) Therefore, we will proceed
by comparing the Verlinde algebra of $U(2)$ at level $(N-2,N)$
to that of $SU(2)\times U(1)$ at level $(N-2,2N)$.
The $SU(2)$ Verlinde algebra was described explicitly in [\bott].
If $V_1$ is the two dimensional representation of $SU(2)$,
and $V_n$ is its $n^{th}$ symmetric tensor power, then the Verlinde
algebra of $SU(2)$ is the usual representation ring of $SU(2)$,
subject to the relation
$$V_{N-1}=0. \eqn\subrel$$
The representation ring of $SU(2)$, subject to this relation,
is spanned additively by $V_0,\dots,V_{N-2}$.
The multiplication law can be described explicitly as
$$V_i\times V_j=\sum_t N_{ijt}V_t, \eqn\jroo$$
where $N_{ijt} $ is 1 if $i+j+t$ is even and the following relations
and their cyclic permutations are obeyed:
$$i+j\geq t, ~~ N-2-i+j\geq N-2-t,~~ 2(N-2)-i-j\geq t.\eqn\hugf$$
Otherwise $N_{ijt}=0$.
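As an independent consistency check (an aside; it uses the standard level-$k$ $SU(2)$ modular matrix $S_{ab}=\sqrt{2/(k+2)}\,\sin\bigl(\pi(a+1)(b+1)/(k+2)\bigr)$, which is not derived in this paper), the combinatorial rule can be compared with the Verlinde formula:

```python
import math

def fusion_verlinde(k, i, j, t):
    """N_{ijt} for SU(2) at level k from the Verlinde formula."""
    def S(a, b):
        return math.sqrt(2.0 / (k + 2)) * math.sin(
            math.pi * (a + 1) * (b + 1) / (k + 2))
    total = sum(S(i, a) * S(j, a) * S(t, a) / S(0, a) for a in range(k + 1))
    return round(total)

def fusion_combinatorial(k, i, j, t):
    """N_{ijt} from the truncated Clebsch-Gordan rule: triangle
    inequalities, the level truncation i+j+t <= 2k, and i+j+t even."""
    triangle = abs(i - j) <= t <= i + j
    truncation = i + j + t <= 2 * k
    parity = (i + j + t) % 2 == 0
    return 1 if (triangle and truncation and parity) else 0

k = 5  # the level N - 2 for an illustrative choice N = 7
agree = all(
    fusion_verlinde(k, i, j, t) == fusion_combinatorial(k, i, j, t)
    for i in range(k + 1) for j in range(k + 1) for t in range(k + 1)
)
```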
The metric on the Verlinde algebra is
$$g(V_s,V_t)=\delta_{s,t}. \eqn\ubrel$$
The $U(1)$ Verlinde algebra has already been introduced above.
It is generated by the charge one representation $W$, and
the defining relation at level $2N$ is
$$W^{2N}=1. \eqn\hubrel$$
The metric is
$$g(W^u,W^v)=\delta_{u+v,0}. \eqn\bgrel$$
The $SU(2)\times U(1)$ Verlinde algebra at level $(N-2,2N)$,
is therefore spanned additively by the elements $V_iW^j$,
for $i=0,\dots,N-2$, $j=0,\dots,2N-1$, corresponding
to the representation $V_i\otimes W^{\otimes j}$.
The multiplication law
and metric are products of the multiplication law and metric
of $SU(2)$ and $U(1)$.
Now we want to proceed to $U(2)=(SU(2)\times U(1))/{\bf Z}_2$. Dividing
by ${\bf Z}_2$ halves the volume of the group manifold (if one
uses a fixed Haar measure in an obvious sense). The Verlinde algebra
is defined on a certain space of conformal blocks which can be constructed
by quantizing an appropriate phase space ${\cal M}$
-- for instance, the phase
space of the gauged WZW model. When the volume of the
group is halved, the volume of ${\cal M}$ is divided by $2^2=4$\foot{
${\cal M}$ is the moduli space of flat connections in genus one and
consists of pairs of commuting elements of the gauge group $G$, divided by
the Weyl group. Because one has a pair of elements of $G$,
the volume of ${\cal M}$ is decreased by a factor of $n^2$ if one
divides $G$ by a group of order $n$.};
therefore, in the semiclassical limit of large $N$, the Verlinde algebra
of $U(2)$ at level $(N-2,N)$ will have one fourth the dimension of
that of $SU(2)\times U(1)$ at level $(N-2,2N)$.
\REF\zoo{G. Moore and N. Seiberg, ``Taming The Conformal Zoo,''
Phys. Lett. {\bf B220} (1989) 422.}
One factor of two is obvious. Among all $SU(2)\times U(1)$ representations,
we must restrict to those that are representations of $U(2)$.
This means keeping only $V_iW^j$ with $i+j$ even.
The second factor of two is less obvious.
One must impose the equivalence
relation
$$ V_iW^j=V_{N-2-i}W^{j+N}. \eqn\grort$$
I refer the interested reader to [\zoo] for an explanation
(in the analogous case of $SO(3)=SU(2)/{\bf Z}_2$) of such matters.
Note that if we set $\tau(V_iW^j)=V_{N-2-i}W^{j+N}$,
then the Verlinde algebra of $SU(2)\times U(1)$ at level
$(N-2,2N)$ obeys $\tau(a)b=a\tau(b)=\tau(ab)$.
This ensures that the Verlinde algebra of $SU(2)\times U(1)$
induces a natural algebra structure on the quotient by the relations \grort.
This quotient algebra, restricted to $i+j$ even (a $\tau$-invariant
condition) is the Verlinde algebra of $U(2)$ at level $(N-2,N)$.
The metric is
$$g_{{V}}(V_iW^s,V_jW^t)=\delta_{i,j}\delta_{s+t,0}
+\delta_{i,N-2-j}\delta_{s+t-N,0}. \eqn\ugglo$$
A complete but redundant set of relations for the $U(2)$ Verlinde
algebra would be the relations
$$ V_{N-1}=0,~~~W^{2N}=1 \eqn\gglo$$
inherited from the $SU(2)$ and $U(1)$ algebras, along with \grort.
A special case of \grort\ is
$$ V_{N-2}W^N = 1. \eqn\gloff$$
It will become clear presently that \gglo\ and \gloff\ suffice to
characterize the Verlinde algebra.
\subsection{Representations And Characters}
Two representations of the group $U(2)$ will play
a distinguished role. The first is the standard two dimensional
representation ${\cal V}_1$. Under restriction to $SU(2)\subset U(2)$,
${\cal V}_1$ restricts to the standard two dimensional representation
$V_1$ of $SU(2)$, and the scalars in $U(2)$ act with charge 1.
So ${\cal V}_1$ pulls back to
the representation $V_1\otimes W$ of $SU(2)\times U(1)$.
The other important representation is $\eta=\wedge^2{\cal V}_1$.
$SU(2)$ acts trivially on $\eta$, and the scalars in $U(2)$ act
with charge 2, so $\eta$ pulls back to the representation $W^2$
of $SU(2)\times U(1)$.
So the elements of the $U(2)$ Verlinde algebra determined
by ${\cal V}_1$ and $\eta$ are just $V_1W$ and $W^2$.
What elements in the quantum cohomology do these same representations
determine? Over $G(2,N)$, there is a tautological complex two-plane
bundle $E^*$ with curvature matrix represented by the quantum field
$\sigma$. The Chern classes of $E^*$ are
$$\eqalign{ c_1(E^*) & = \Tr_{{\cal V}_1}\sigma \cr
c_2(E^*) & = \Tr_\eta\sigma . \cr}
\eqn\porfori$$
So under the natural mapping from the quantum cohomology of $G(2,N)$
to the Verlinde algebra of $U(2)$, $c_1(E^*)$ and $c_2(E^*)$ correspond
to the representations ${\cal V}_1$ and $\eta$.
Let us now briefly discuss the classical representation ring of $U(2)$;
the Verlinde algebra is a quotient of this, as we have described.
Consider the maximal abelian subgroup of $U(2)$ of matrices
of the form
$$\sigma=\left(\matrix{ \lambda_1& 0 \cr 0 & \lambda_2\cr }
\right), \eqn\polon$$
with $|\lambda_1|=|\lambda_2|=1$.
The character of a representation $R$ of $U(2)$ is $\Tr_R\sigma$
regarded as a function of the $\lambda_i$.
For instance, the characters of ${\cal V}_1$ and $\eta$
are
$$\eqalign{ \Tr_{{\cal V}_1}\sigma & = \lambda_1+\lambda_2 \cr
\Tr_\eta\sigma & = \lambda_1\lambda_2. \cr}\eqn\holon$$
If ${\cal V}_n$ is the $n^{th}$ symmetric tensor power of ${\cal V}_1$,
then its character is
$$\Tr_{{\cal V}_n}\sigma ={\lambda_1{}^{n+1}-\lambda_2{}^{n+1}\over
\lambda_1-\lambda_2}. \eqn\bolon$$
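The equality of \bolon\ with the naive weight sum $\sum_{a=0}^n\lambda_1^a\lambda_2^{n-a}$ is elementary; as an illustrative aside, it can be confirmed by exact integer evaluation:

```python
def weyl_character(n, l1, l2):
    """Character of the n-th symmetric power of the 2-dim representation,
    written as the sum over its weights."""
    return sum(l1**a * l2**(n - a) for a in range(n + 1))

def quotient_form(n, l1, l2):
    """The closed form of the character, valid for l1 != l2; the integer
    division is exact here."""
    return (l1**(n + 1) - l2**(n + 1)) // (l1 - l2)

ok = all(weyl_character(n, a, b) == quotient_form(n, a, b)
         for n in range(6)
         for a in range(-4, 5) for b in range(-4, 5) if a != b)
```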
Any irreducible representation of $U(2)$ is of the form ${\cal V}_s\eta^t$ for
some integers $s,t$ (with $s\geq 0$). The equivalence relation
\grort\ becomes
$${\cal V}_s\eta^t\leftrightarrow {\cal V}_{N-2-s}\eta^{s+t+1}.\eqn\newgrort$$
In general, the map from a representation of $U(2)$ to its character
is an isomorphism between the ring of representations of $U(2)$
and the ring of Laurent polynomials in $\lambda_1$ and $\lambda_2$
that are invariant under the Weyl group, which acts by $\lambda_1
\leftrightarrow\lambda_2$.
As we have sketched above, the Verlinde algebra of $U(2)$ is a quotient
of the classical representation ring of $U(2)$ by a certain ideal.
Under the isomorphism between the representation ring and the
character ring, the generators of this ideal can be identified
with certain Laurent polynomials in the $\lambda$'s. For instance,
the relations in \gglo\ and \gloff\ become
$$\eqalign{{\lambda_1{}^N-\lambda_2{}^N\over\lambda_1-\lambda_2} & = 0 \cr
(\lambda_1\lambda_2)^N & = 1 \cr
\lambda_1\lambda_2{\lambda_1{}^{N-1}-\lambda_2{}^{N-1}
\over\lambda_1-\lambda_2} & = 1 .\cr } \eqn\hiddo$$
The second relation in \hiddo\ has the following implication. The
classical representation ring of $U(2)$ is a ring of {\it Laurent}
polynomials in the $\lambda$'s, including negative powers. But by
multiplying by a suitable power of $1=(\lambda_1\lambda_2)^N$,
one can clear the denominators and regard the Verlinde algebra as
the quotient of the ring of Weyl-invariant polynomials (not Laurent
polynomials) in the $\lambda$'s
by a certain ideal ${\cal I}$. We will learn that ${\cal I}$
is in fact generated
by the first and third relations in \hiddo.
If we multiply the first relation in \hiddo\ by $\lambda_1+\lambda_2$
and subtract the third,
we learn that
$${\lambda_1{}^{N+1}-\lambda_2{}^{N+1}\over \lambda_1-\lambda_2
}+1 = 0. \eqn\triddo$$
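The manipulation just performed rests on the polynomial identity $(\lambda_1+\lambda_2)(\lambda_1^N-\lambda_2^N)-\lambda_1\lambda_2(\lambda_1^{N-1}-\lambda_2^{N-1})=\lambda_1^{N+1}-\lambda_2^{N+1}$. As an aside, this can be verified by exact evaluation at integer points (enough points to pin down a polynomial identity of bounded degree):

```python
def numerator_identity(N, l1, l2):
    """Exact integer check of the identity behind the passage from the
    first and third relations to the displayed consequence."""
    lhs = (l1 + l2) * (l1**N - l2**N) - l1 * l2 * (l1**(N - 1) - l2**(N - 1))
    rhs = l1**(N + 1) - l2**(N + 1)
    return lhs == rhs

# Dividing by l1 - l2 and imposing the first relation = 0 and the third
# relation = 1 then gives (l1^{N+1} - l2^{N+1})/(l1 - l2) + 1 = 0.
holds = all(numerator_identity(N, a, b)
            for N in range(2, 9)
            for a in range(-3, 4) for b in range(-3, 4))
```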
\subsection{Cohomology Ring Of $G(2,N)$}
Now return to the sigma model interpretation of $\sigma$ as the curvature
of the tautological two-plane bundle $E^*$ over $G(2,N)$.
If we introduce the roots $\widetilde \lambda_1,\widetilde
\lambda_2$ of the Chern polynomial,
then the Chern classes of $E^*$ are
$$\eqalign{ c_1(E^*) & = \widetilde \lambda_1+\widetilde\lambda_2\cr
c_2(E^*) & = \widetilde \lambda_1\widetilde
\lambda_2 .\cr} \eqn\uncoc$$
We observed earlier that
under the map from cohomology of $G(2,N)$ to the Verlinde algebra,
$c_1(E^*)$ and $c_2(E^*)$ correspond to ${\cal V}_1$ and $\eta$.
If in turn we identify the Verlinde algebra as a quotient of the character
ring, ${\cal V}_1$ and $\eta$ are identified with their characters, which
were given in \holon. As \holon\ and \uncoc\ coincide,
the identification between these rings can be interpreted as
$\lambda_i\leftrightarrow\widetilde\lambda_i$. Henceforth, we make
this identification and drop the tildes.
The quantum cohomology ring of $G(2,N)$ is the ring of polynomials in
$c_1(E^*)$ and $c_2(E^*)$ modulo an ideal ${\cal J}$. As explained in
\S3.2, ${\cal J}$ can be described as follows. Let
$$W(\lambda_1,\lambda_2) ={1\over N+1}\left(\lambda_1{}^{N+1}
+\lambda_2{}^{N+1}\right)+\left(\lambda_1+\lambda_2\right). \eqn\cncncn$$
Since it is Weyl-invariant, $W$ can be regarded as a polynomial
in $c_1=\lambda_1+\lambda_2$
and $c_2=\lambda_1\lambda_2$.
The ideal ${\cal J}$ is generated by the relations
$$0=dW= {\partial W\over\partial c_1}dc_1+{\partial W\over\partial c_2}dc_2.
\eqn\brno$$
Since
$$\eqalign{ d\lambda_1=&{\lambda_1 dc_1-dc_2\over \lambda_1-\lambda_2}\cr
d\lambda_2=&{-\lambda_2dc_1+dc_2\over\lambda_1-\lambda_2},\cr
}\eqn\cxzs$$
we have
$$dW=\left(\lambda_1{}^N+1\right)d\lambda_1+\left(\lambda_2{}^N+1\right)
d\lambda_2=dc_1\left({\lambda_1{}^{N+1}-\lambda_2{}^{N+1}\over
\lambda_1-\lambda_2}+1\right)-dc_2\left({\lambda_1{}^N-\lambda_2{}^N
\over\lambda_1-\lambda_2}\right).\eqn\polnson$$
The quantum cohomology of $G(2,N)$ is therefore defined by the relations
$$0={\lambda_1{}^N-\lambda_2{}^N\over \lambda_1-\lambda_2}
={\lambda_1{}^{N+1}-\lambda_2{}^{N+1}\over \lambda_1-\lambda_2}+1.\eqn\cract$$
If we compare this to \triddo\ and to the first equation in \hiddo,
we see that these relations hold in the Verlinde algebra,
and consequently the Verlinde algebra is a quotient of the
quantum cohomology of $G(2,N)$.
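As an aside, the passage from \cxzs\ to \polnson\ can be confirmed by exact rational evaluation (a consistency sketch only, not part of the derivation):

```python
from fractions import Fraction

def dW_coefficients(N, l1, l2):
    """Coefficients of dc1 and dc2 in dW = (l1^N + 1) dl1 + (l2^N + 1) dl2
    after substituting dl1 = (l1 dc1 - dc2)/(l1 - l2) and
    dl2 = (-l2 dc1 + dc2)/(l1 - l2)."""
    l1, l2 = Fraction(l1), Fraction(l2)
    d = l1 - l2
    c1_coeff = ((l1**N + 1) * l1 - (l2**N + 1) * l2) / d
    c2_coeff = (-(l1**N + 1) + (l2**N + 1)) / d
    return c1_coeff, c2_coeff

def matches_polnson(N, l1, l2):
    """Compare with the quoted closed form of the dc1 and dc2 coefficients."""
    l1f, l2f = Fraction(l1), Fraction(l2)
    d = l1f - l2f
    expected_c1 = (l1f**(N + 1) - l2f**(N + 1)) / d + 1
    expected_c2 = -(l1f**N - l2f**N) / d
    return dW_coefficients(N, l1, l2) == (expected_c1, expected_c2)

ok = all(matches_polnson(N, a, b)
         for N in range(2, 7)
         for a in range(-3, 4) for b in range(-3, 4) if a != b)
```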
To show that the two algebras coincide (and that all additional
relations we found earlier for the Verlinde algebra are consequences
of \polnson), it suffices to compare the dimensions of the two algebras.
{}From the description of the Verlinde algebra as being
spanned by the elements $V_iW^j,$ with $0\leq i\leq N-2$, $0\leq
j\leq 2N-1$, with a two-fold restriction and a two-fold identification,
its dimension is $N(N-1)/2$. On the other hand, with $c_1$ and
$c_2$ considered to be of degree 1 and 2, respectively,
the potential $W(c_1,c_2)$ is homogeneous of degree $N+1$; it follows
by a simple counting that the polynomial ring in the $c_j$ modulo
the ideal $dW=0$ has dimension $N(N-1)/2$. This completes the
explicit verification of the equivalence between these rings.
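As a complementary aside, the dimension can also be checked by counting solutions of \cract\ directly: they are the unordered (Weyl-equivalent) pairs of distinct $N$-th roots of $-1$, and their number is indeed $N(N-1)/2$:

```python
import cmath
import itertools

def solutions_of_relations(N, tol=1e-8):
    """Count unordered pairs {l1, l2} of distinct N-th roots of -1 that
    satisfy both displayed relations; the count should equal the
    dimension N(N-1)/2 of the quantum cohomology ring of G(2,N)."""
    roots = [cmath.exp(1j * cmath.pi * (2 * a + 1) / N) for a in range(N)]
    count = 0
    for l1, l2 in itertools.combinations(roots, 2):
        r1 = (l1**N - l2**N) / (l1 - l2)
        r2 = (l1**(N + 1) - l2**(N + 1)) / (l1 - l2) + 1
        if abs(r1) < tol and abs(r2) < tol:
            count += 1
    return count

ok = all(solutions_of_relations(N) == N * (N - 1) // 2 for N in range(2, 10))
```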
Moreover, we can now dispose of the constant $c$ in the relation
$\sigma = c g$. This constant corresponds to a possible constant
in the relation $\lambda_i\leftrightarrow\widetilde \lambda_i$.
The relations above such as \cract\ are not invariant
under rescaling of the $\lambda$'s, and so the agreement with the Verlinde
algebra would be ruined if we modified the value of $c$. A similar
argument holds for $k>2$.
\subsection{The Metric}
It remains to show that the metric on the Verlinde algebra and the
metric on the quantum cohomology of $G(2,N)$ are related in the
expected fashion.
Since either metric obeys $g(a,b)=g(ab,1)$, to verify \controv,
it suffices to show that
$$g_{{V}}(a,1)=g_{{\sigma}}(a,(\det\sigma)^{N-2}).\eqn\occovo$$
We already know the Verlinde metric:
$$g_{{V}}({\cal V}_s\eta^t,1)=\delta_{s,0}\delta_{t,0} +\delta_{N-2-s,0}
\delta_{t-1,0}.\eqn\orfo$$
Now let us compute the metric on the cohomology of $G(2,N)$. Since
there are no instanton corrections to the metric, we need only compute
the classical metric on the cohomology.
According to \jupper, that metric is
$$g_{{\sigma}}(a,b)=
-{1\over 2} \sum_{dW(\lambda)=0}
{ab(\lambda_1-\lambda_2)^2\over N^2\lambda_1{}^{N-1}\lambda_2{}
^{N-1}}=
-{1\over 2}\sum_{\lambda_1{}^N=\lambda_2{}^N=-1}
{ab(\lambda_1-\lambda_2)^2\over N^2\lambda_1{}^{N-1}\lambda_2{}^{N-1}}.
\eqn\nugl$$
The symmetric polynomials in the $\lambda$'s of degree
$2(N-2)$ (corresponding to the top dimensional cohomology of $G(2,N)$)
are of the form
$$ f_r = {\lambda_1{}^{2r+1}-\lambda_2{}^{2r+1}\over\lambda_1-\lambda_2}
(\lambda_1\lambda_2)^{N-2-r},\,\,\,{\rm with}~0\leq r\leq N-2.\eqn\truv$$
A simple calculation gives
$$g_{{\sigma}}(f_r,1)=-{1\over 2N^2}\sum_{\lambda_1{}^N=\lambda_2{}^N=-1}
\left(\left(\lambda_1\over\lambda_2\right)^{1+r}
-\left(\lambda_1\over\lambda_2\right)^{r}
-\left(\lambda_1\over\lambda_2\right)^{-r}
+\left(\lambda_1\over\lambda_2\right)^{-1-r}\right)=\delta_{r,0}.
\eqn\ilmoxx$$
This reproduces the first term on the right of \orfo\ up to the shift
predicted in \occovo. To interpret the second term on the right of \orfo, note
that while classically, for homogeneous $f$,
$g_{{\sigma}}(f,1)$ vanishes unless $f$ is of degree $2(N-2)$,
the quantum cohomology is only graded modulo $N$ (in complex dimension),
so we can also consider the case that $f$ is of degree $N-4$;
by a calculation similar to the above, this reproduces the second term
in \orfo.
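As a final numerical aside, \ilmoxx\ can be checked directly by summing over the roots; the sketch below evaluates the genus-zero pairing for one illustrative value of $N$:

```python
import cmath

def metric_pairing(N, r):
    """Evaluate the root sum for g_sigma(f_r, 1) over all pairs of N-th
    roots of -1; the result should be 1 for r = 0 and 0 for
    0 < r <= N - 2."""
    roots = [cmath.exp(1j * cmath.pi * (2 * a + 1) / N) for a in range(N)]
    total = 0.0 + 0.0j
    for l1 in roots:
        for l2 in roots:
            w = l1 / l2
            total += w**(1 + r) - w**r - w**(-r) + w**(-1 - r)
    return (-total / (2 * N**2)).real

N = 6
values = [round(metric_pairing(N, r), 9) for r in range(N - 1)]
```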
Moreover, we can now dispose of the renormalization constant $a$
of equation \normert. Inclusion of this term would rescale the metric
by a factor of $e^{2a}$; the agreement between the two metrics means
that the above formulas are normalized correctly, at $r=0$.
Though we have made this check on the value of $a$ (and a similar,
earlier check for $c$) only for $k=2$, the arguments are similar for any
$k$.
\ack{I benefited from discussions with G. Segal at an early stage
of this work.}
\refout
\figout
\end
\newcommand{\ssection}[1]{%
\section[#1]{\centering\normalfont\scshape #1}}
\newcommand{\ssubsection}[1]{%
\subsection[#1]{\raggedright\normalfont\itshape #1}}
\newcommand{\mathfrak{p}}{\mathfrak{p}}
\newcommand{\mathfrak{q}}{\mathfrak{q}}
\begin{document}
\maketitle
\begin{abstract}
We express the multigraded Betti numbers of an arbitrary monomial ideal in terms of the multigraded Betti numbers of two basic classes of ideals. This decomposition has multiple applications. In some concrete cases, we use it to construct minimal resolutions of classes of monomial ideals; in other cases, we use it to compute projective dimensions. To illustrate the effectiveness of the structural decomposition, we give a new proof of a classic theorem by Charalambous that states the following: let $k$ be a field, and $M$ an Artinian monomial ideal in $S=k[x_1,\ldots,x_n]$; then, for all $i$, $\betti_i(S/M) \geq {n \choose i }$.
\end{abstract}
\section{Introduction}
The problem of finding the minimal resolution of an arbitrary monomial ideal in closed form has been deemed utopic by many a mathematician. As a consequence, people have tried to restrict the study of minimal resolutions to particular classes of ideals. Borel ideals, minimally resolved by the Eliahou-Kervaire resolution [EK]; generic ideals, minimally resolved by the Scarf complex [BPS]; and dominant ideals, minimally resolved by the Taylor resolution [Al], are examples of this restrictive approach.
In the first half of this paper, however, we turn to the general problem, and decompose the minimal resolution of an arbitrary monomial ideal in terms of the minimal resolutions of two basic classes that we call dominant, and purely nondominant ideals. More precisely, we express the multigraded Betti numbers of an ideal as the sum of the multigraded Betti numbers of some dominant and some purely nondominant ideals. Since dominant ideals are minimally resolved by their Taylor resolutions, our decomposition reduces the study of minimal monomial resolutions to the study of minimal resolutions of purely nondominant ideals.
Unfortunately, the resolutions of purely nondominant ideals involve the same challenges that we encounter in the general context. Some of these difficulties are the existence of ghost terms, characteristic dependence, and the striking fact that some of the simplest purely nondominant ideals cannot be minimally resolved by any subcomplex of the Taylor resolution. Thus, in the second half of this work we focus our efforts on one particular case: monomial ideals whose structural decomposition has no purely nondominant part. As a result of this study, we obtain the multigraded Betti numbers of two families that we call $2$-semidominant and almost generic ideals.
The structural decomposition is also a useful tool to compute projective dimensions. We prove, for instance, that if an ideal $M$ satisfies certain conditions, $\pd(S/M)=2$, and, under some other conditions, $\pd(S/M)=n$, where $n$ is the number of variables in the polynomial ring. Another result, also related to projective dimensions, is a new proof of a classic theorem of Charalambous [Ch] (see also [Pe, Corollary 21.6]), stating: let $k$ be a field, and $M$ an Artinian monomial ideal in $S=k[x_1,\ldots,x_n]$; then, for all $i$, $\betti_i(S/M) \geq {n \choose i }$. While the original proof relies on the radical of an ideal, ours is based on the structural decomposition.
The organization of the article is as follows. Section 2 is about background and notation. Sections 3 and 4 are technical. They contain some isomorphism theorems, as well as the structural decomposition theorems advertised above. In section 5, we compute the multigraded Betti numbers of two families of ideals. In section 6, we compute projective dimensions. Section 7 is the conclusion; it includes some comments, questions, and conjectures.
\section{Background and Notation}
Throughout this paper $S$ represents a polynomial ring over an arbitrary field $k$, in a finite number of variables. The letter $M$ always denotes a monomial ideal
in $S$. With minor modifications, the constructions that we give below can be found in [Me,Pe].
\begin{construction}
Let $M$ be generated by a set of monomials $\{l_1,\ldots,l_q\}$. For every subset $\{l_{i_1},\ldots,l_{i_s}\}$ of $\{l_1,\ldots,l_q\}$, with $1\leq i_1<\ldots<i_s\leq q$,
we create a formal symbol $[l_{i_1},\ldots,l_{i_s}]$, called a \textbf{Taylor symbol}. The Taylor symbol associated to the empty set is denoted by $[\varnothing]$.
For each $s=0,\ldots,q$, set $F_s$ equal to the free $S$-module with basis $\{[l_{i_1},\ldots,l_{i_s}]:1\leq i_1<\ldots<i_s\leq q\}$ given by the
${q\choose s}$ Taylor symbols corresponding to subsets of size $s$. That is, $F_s=\bigoplus\limits_{i_1<\ldots<i_s}S[l_{i_1},\ldots,l_{i_s}]$
(note that $F_0=S[\varnothing]$). Define
\[f_0:F_0\rightarrow S/M\]
\[s[\varnothing]\mapsto f_0(s[\varnothing])=s\]
For $s=1,\ldots,q$, let $f_s:F_s\rightarrow F_{s-1}$ be given by
\[f_s\left([l_{i_1},\ldots,l_{i_s}]\right)=
\sum\limits_{j=1}^s\dfrac{(-1)^{j+1}\lcm(l_{i_1},\ldots,l_{i_s})}{\lcm(l_{i_1},\ldots,\widehat{l_{i_j}},\ldots,l_{i_s})}
[l_{i_1},\ldots,\widehat{l_{i_j}},\ldots,l_{i_s}]\]
and extended by linearity.
The \textbf{Taylor resolution} $\mathbb{T}_{l_1,\ldots,l_q}$ of $S/M$ is the exact sequence
\[\mathbb{T}_{l_1,\ldots,l_q}:0\rightarrow F_q\xrightarrow{f_q}F_{q-1}\rightarrow\cdots\rightarrow F_1\xrightarrow{f_1}F_0\xrightarrow{f_0}
S/M\rightarrow0.\]
\end{construction}
We define the \textbf{multidegree} of a Taylor symbol $[l_{i_1},\ldots,l_{i_s}]$, denoted $\mdeg[l_{i_1},\ldots,l_{i_s}]$, as follows:
$\mdeg[l_{i_1},\ldots,l_{i_s}]=\lcm(l_{i_1},\ldots,l_{i_s})$. The Taylor symbols $[l_{i_1},\ldots,l_{i_s}]$ are called \textbf{faces}. A Taylor symbol
of the form $[l_{i_1},\ldots,\widehat{l_{i_j}},\ldots, l_{i_s}]$ is referred to as a \textbf{facet} of the face $[l_{i_1},\ldots,l_{i_s}]$.
\textit{Note}:
In our construction above, the generating set $\{l_1,\ldots,l_q\}$ is not required to be minimal. Thus, $S/M$ has many Taylor resolutions. We reserve the notation
$\mathbb{T}_M$ for the Taylor resolution of $S/M$, determined by the minimal generating set of $M$. (Although some authors define a single Taylor resolution of $S/M$, our construction allows arbitrary generating sets, as in [Ei].)
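As a quick sanity check, the multidegrees of all Taylor symbols can be generated mechanically. The following Python sketch is illustrative only: it encodes a monomial as an exponent dictionary (so $a^2$ becomes \texttt{\{'a': 2\}}, a convention chosen here, not taken from the text) and lists every Taylor symbol of the ideal $(a^2,b^3,ab)$ together with its multidegree.

```python
from itertools import combinations

def lcm(*mons):
    # lcm of exponent dictionaries; lcm() = {} encodes the monomial 1,
    # which is the multidegree of the Taylor symbol [∅]
    out = {}
    for mon in mons:
        for x, e in mon.items():
            out[x] = max(out.get(x, 0), e)
    return out

gens = [{'a': 2}, {'b': 3}, {'a': 1, 'b': 1}]   # a^2, b^3, ab
for s in range(len(gens) + 1):
    for face in combinations(range(len(gens)), s):
        print(face, lcm(*(gens[i] for i in face)))
```

Each printed pair consists of a Taylor symbol (a tuple of generator indices) and its multidegree in exponent form.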
\begin{construction}
Let $M$ be minimally generated by $\{l_1,\ldots,l_q\}$. Let $A$ be the set of Taylor symbols of $\mathbb{T}_M$ whose
multidegrees are not common to other Taylor symbols; that is, a Taylor symbol $[\sigma]$ is in $A$ if and only if $\mdeg[\sigma]\neq \mdeg[\sigma']$,
for every Taylor symbol $[\sigma']\neq [\sigma]$. For each $s=0,\ldots,q$, set $G_s$ equal to the free $S$-module with basis
$\{[l_{i_1},\ldots,l_{i_s}]\in A:1\leq i_1<\ldots<i_s\leq q\}$. For each $s=0,\ldots,q$, let $g_s=f_s\restriction_{G_s}$. It can be proven that the $g_s$
are well defined (more precisely, that $g_s\left(G_s\right)\subseteq G_{s-1}$) and that
\[0\rightarrow G_q\xrightarrow{g_q}G_{q-1}\rightarrow \cdots\rightarrow G_1\xrightarrow{g_1}G_0\xrightarrow{g_0} S/M\rightarrow 0\]
is a subcomplex of $\mathbb{T}_M$, which is called the \textbf{Scarf complex} of $S/M$.
\end{construction}
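The selection rule defining the Scarf complex is easy to automate. The sketch below (exponent-dictionary encoding and helper names of our choosing) keeps exactly the Taylor symbols whose multidegree is shared with no other Taylor symbol; for $M=(ab,bc,ac)$, every lcm of two or more generators equals $abc$, so only $[\varnothing]$ and the three singleton symbols survive.

```python
from itertools import combinations
from collections import Counter

def lcm(*mons):
    out = {}
    for mon in mons:
        for x, e in mon.items():
            out[x] = max(out.get(x, 0), e)
    return out

def scarf_faces(gens):
    # Pair each Taylor symbol with its multidegree (as a hashable tuple),
    # then keep only the symbols whose multidegree occurs exactly once.
    faces = [(face, tuple(sorted(lcm(*(gens[i] for i in face)).items())))
             for s in range(len(gens) + 1)
             for face in combinations(range(len(gens)), s)]
    counts = Counter(md for _, md in faces)
    return [face for face, md in faces if counts[md] == 1]

# M = (ab, bc, ac): the multidegree abc is shared by four Taylor symbols,
# so all of them drop out of the Scarf complex.
gens = [{'a': 1, 'b': 1}, {'b': 1, 'c': 1}, {'a': 1, 'c': 1}]
print(scarf_faces(gens))   # [(), (0,), (1,), (2,)]
```

This is also the standard example of an ideal whose Scarf complex is not a resolution, since the symbols in homological degrees $2$ and $3$ are all discarded.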
\begin{definition}
Let $M$ be a monomial ideal, and let
\[\mathbb{F}:\cdots\rightarrow F_i\xrightarrow{f_i}F_{i-1}\rightarrow\cdots\rightarrow F_1\xrightarrow{f_1}F_0\xrightarrow{f_0} S/M\rightarrow 0\]
be a free resolution of $S/M$.
We say that a basis element $[\sigma]$ of $\mathbb{F}$ has \textbf{homological degree} $i$, denoted $\hdeg[\sigma]=i$, if
$[\sigma] \in F_i$. $\mathbb{F}$ is said to be a \textbf{minimal resolution} if for every $i$, the differential matrix $\left(f_i\right)$ of $\mathbb{F}$
has no invertible entries.
\end{definition}
\begin{definition}
Let $M$ be a monomial ideal, and let
\[\mathbb{F}:\cdots\rightarrow F_i\xrightarrow{f_i}F_{i-1}\rightarrow\cdots\rightarrow F_1\xrightarrow{f_1}F_0\xrightarrow{f_0} S/M\rightarrow 0\]
be a minimal free resolution of $S/M$.
\begin{itemize}
\item For every $i\geq 0$, the $i^{\text{th}}$ \textbf{Betti number} $\betti_i\left(S/M\right)$ of $S/M$ is $\betti_i\left(S/M\right)=\rank(F_i)$.
\item For every $i\geq 0$, and every monomial $l$, the \textbf{multigraded Betti number} $\betti_{i,l}\left(S/M\right)$ of $S/M$, in homological degree $i$ and multidegree $l$,
is \[\betti_{i,l}\left(S/M\right)=\#\{\text{basis elements }[\sigma]\text{ of }F_i:\mdeg[\sigma]=l\}.\]
\item The \textbf{projective dimension} $\pd\left(S/M\right)$ of $S/M$ is \[\pd\left(S/M\right)=\max\{i:\betti_i\left(S/M\right)\neq 0\}.\]
\end{itemize}
\end{definition}
\begin{definition}
Let $M$ be minimally generated by a set of monomials $G$.
\begin{itemize}
\item A monomial $m\in G$ is called \textbf{dominant} (in $G$) if there is a variable $x$, such that for all $m'\in G\setminus\{m\}$, the exponent with
which $x$ appears in the factorization of $m$ is larger than the exponent with which $x$ appears in the factorization of $m'$.
The set $G$ is called \textbf{dominant} if each of its elements is dominant. The ideal $M$ is called \textbf{dominant} if $G$ is dominant.
\item $G$ is called \textbf{$p$-semidominant} if $G$ contains
exactly $p$ nondominant monomials. The ideal $M$ is \textbf{$p$-semidominant} if $G$ is $p$-semidominant.
\item We say that $G$ is \textbf{purely nondominant} when all the elements of $G$ are nondominant. In this case, we also say that $M$ is \textbf{purely nondominant}.
\end{itemize}
\end{definition}
\begin{example}\label{example classes}
Let $M_1$, $M_2$, and $M_3$ be minimally generated by $G_1=\{a^2,b^3,ab\}$, $G_2=\{ab,bc,ac\}$, and $G_3=\{a^2b,ab^3c,bc^2\}$, respectively. Note that $a^2$ and $b^3$ are dominant in $G_1$, but $ab$ is not. Thus, both the set $G_1$ and the ideal $M_1$ are $1$-semidominant. On the other hand, $ab$, $bc$, and $ac$ are nondominant in $G_2$. Therefore, $G_2$ and $M_2$ are purely nondominant (as well as $3$-semidominant). Finally, $a^2b$, $ab^3c$, and $bc^2$ are dominant in $G_3$. Thus, $G_3$ and $M_3$ are dominant.
\end{example}
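The dominance test reduces to a comparison of exponents. A minimal Python sketch (monomials encoded as exponent dictionaries, a convention chosen here rather than taken from the text) reproduces the classification of $G_1$, $G_2$, and $G_3$:

```python
def is_dominant(m, others):
    # m is dominant if some variable occurs in m with a strictly larger
    # exponent than in every other generator
    return any(all(e > other.get(x, 0) for other in others)
               for x, e in m.items())

def classify(gens):
    return [is_dominant(m, [g for g in gens if g is not m]) for m in gens]

G1 = [{'a': 2}, {'b': 3}, {'a': 1, 'b': 1}]                          # a^2, b^3, ab
G2 = [{'a': 1, 'b': 1}, {'b': 1, 'c': 1}, {'a': 1, 'c': 1}]          # ab, bc, ac
G3 = [{'a': 2, 'b': 1}, {'a': 1, 'b': 3, 'c': 1}, {'b': 1, 'c': 2}]  # a^2b, ab^3c, bc^2
print(classify(G1))  # [True, True, False]: G1 is 1-semidominant
print(classify(G2))  # [False, False, False]: G2 is purely nondominant
print(classify(G3))  # [True, True, True]: G3 is dominant
```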
\section{Isomorphism Theorems}
The notation that we introduce below retains its meaning until the end of this section.
Let $M$ be a monomial ideal with minimal generating set $G=\{m_1,\ldots,m_q,n_1,\ldots,n_p\}$, where $m_1,\ldots,m_q$ are dominant, and $n_1,\ldots,n_p$ are nondominant. Let $1\leq d \leq q$, and let
$H=\{h_1,\ldots,h_c\}=\{m_{d+1},\ldots,m_q,n_1,\ldots,n_p\}$. Then $G$ can be expressed in the form $G=\{m_1,\ldots,m_d,h_1,\ldots,h_c\}$.
Let $m=\lcm(m_{r_1},\ldots,m_{r_j})$, where $1\leq r_1 <\ldots< r_j \leq d$. By convention, if $j=0$, $m=1$. For all $s=1,\ldots,c$, let $h'_s=\dfrac{\lcm(m,h_s)}{m}$. Let $M_m=(h'_1,\ldots,h'_c)$.
\begin{example}\label{example 1}
Let $M=(a^3b^2,c^3d,ac^2,a^2c,b^2d,abc,bcd)$. Note that $M$ is $5$-semidominant, with $m_1=a^3b^2$, $m_2=c^3d$, $n_1=ac^2$, $n_2=a^2c$, $n_3=b^2d$, $n_4=abc$, $n_5=bcd$. If we set $d=2$, then $H=\{h_1,\ldots,h_5\}=\{n_1,\ldots,n_5\}$. Suppose that $m=\lcm(m_1)=a^3b^2$. Then $M_m=(h'_1,\ldots,h'_5)$, where $h'_1=\dfrac{\lcm(a^3b^2,ac^2)}{a^3b^2}=c^2$; $h'_2=\dfrac{\lcm(a^3b^2,a^2c)}{a^3b^2}=c$;
$h'_3=\dfrac{\lcm(a^3b^2,b^2d)}{a^3b^2}=d$; $h'_4=\dfrac{\lcm(a^3b^2,abc)}{a^3b^2}=c$; $h'_5=\dfrac{\lcm(a^3b^2,bcd)}{a^3b^2}=cd$.
Thus, $M_{a^3b^2}=(c^2,c,d,c,cd)$. Although $\{c^2,c,d,c,cd\}$ does not generate $M_{a^3b^2}$ minimally, nonminimal generating sets like this one will sometimes serve our purpose.
\end{example}
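The passage from $H$ to $M_m$ is a pointwise computation of $\lcm(m,h_i)/m$. A short Python sketch (exponent-dictionary encoding and helper names of our choosing) reproduces the generating set $(c^2,c,d,c,cd)$ of $M_{a^3b^2}$:

```python
def lcm2(a, b):
    return {x: max(a.get(x, 0), b.get(x, 0)) for x in set(a) | set(b)}

def divide(a, b):
    # a/b, assuming b divides a; zero exponents are dropped
    return {x: e - b.get(x, 0) for x, e in a.items() if e > b.get(x, 0)}

m = {'a': 3, 'b': 2}                                       # m = a^3b^2
H = [{'a': 1, 'c': 2}, {'a': 2, 'c': 1}, {'b': 2, 'd': 1},
     {'a': 1, 'b': 1, 'c': 1}, {'b': 1, 'c': 1, 'd': 1}]   # ac^2, a^2c, b^2d, abc, bcd
Hp = [divide(lcm2(m, h), m) for h in H]
print(Hp)   # exponent forms of c^2, c, d, c, cd
```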
\begin{proposition}\label{1}
Let $1\leq s_1<\ldots<s_i \leq c$. The Taylor symbols $[h'_{s_1},\ldots,h'_{s_i}]$ of $\mathbb{T}_{h'_1,\ldots,h'_c}$, and $[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_i}]$ of $\mathbb{T}_M$, satisfy
\[\mdeg[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_i}]=m \mdeg[h'_{s_1},\ldots,h'_{s_i}].\]
\end{proposition}
\begin{proof}
Note that $h_{s_1}\mid \lcm(m,h_{s_1})=mh'_{s_1} \text{, and } mh'_{s_1}\mid m\lcm(h'_{s_1},\ldots,h'_{s_i})$. Thus, $h_{s_1}\mid m \lcm(h'_{s_1},\ldots,h'_{s_i})$.\\
Similarly,
$h_{s_2},\ldots,h_{s_i}\mid m \lcm(h'_{s_1},\ldots,h'_{s_i})$. Hence, $\lcm(m,h_{s_1},\ldots,h_{s_i})\mid m \lcm(h'_{s_1},\ldots,h'_{s_i})$. We will show that $m\lcm(h'_{s_1},\ldots,h'_{s_i})\mid \lcm(m,h_{s_1},\ldots,h_{s_i})$.\\
Let $h'_{s_1}=x_1^{\alpha_{11}}\ldots x_n^{\alpha_{1n}},\ldots , h'_{s_i}=x_1^{\alpha_{i1}}\ldots x_n^{\alpha_{in}}$,
and let $\gamma_1=\max(\alpha_{11},\ldots,\alpha_{i1}), \ldots, \gamma_n = \max(\alpha_{1n}, \ldots, \alpha_{in})$. Then $\lcm(h'_{s_1}, \ldots, h'_{s_i})=x_1^{\gamma_1} \ldots x_n^{\gamma_n}$. Notice that
$m x_1^{\gamma_1}$ divides one of $m h'_{s_1} = \lcm(m,h_{s_1}), \ldots, m h'_{s_i} = \lcm(m,h_{s_i})$, and therefore,
$m x_1^{\gamma_1} \mid \lcm(m,h_{s_1},\ldots,h_{s_i})$.
Similarly,
$m x_2^{\gamma_2},\ldots,mx_n^{\gamma_n} \mid \lcm(m,h_{s_1},\ldots,h_{s_i})$.
Thus,
$x_1^{\gamma_1},\ldots, x_n^{\gamma_n} \mid \dfrac{\lcm(m,h_{s_1}, \ldots, h_{s_i})}{m}$.
It follows that
$\lcm(h'_{s_1}, \ldots, h'_{s_i})=x_1^{\gamma_1} \ldots x_n^{\gamma_n} \mid \dfrac{\lcm(m,h_{s_1}, \ldots, h_{s_i})}{m}$, which is equivalent to saying that
$m\lcm(h'_{s_1},\ldots,h'_{s_i}) \mid \lcm(m,h_{s_1},\ldots,h_{s_i})$.
Finally,
\begin{dmath*}
m\mdeg[h'_{s_1}, \ldots, h'_{s_i}]=m \lcm(h'_{s_1}, \ldots, h'_{s_i})=\lcm(m,h_{s_1},\ldots,h_{s_i})=
\lcm(\lcm(m_{r_1}, \ldots,m_{r_j}),h_{s_1}, \ldots, h_{s_i}) = \lcm(m_{r_1}, \ldots,m_{r_j},h_{s_1}, \ldots, h_{s_i}) = \mdeg[m_{r_1}, \ldots,m_{r_j},h_{s_1}, \ldots, h_{s_i}].
\end{dmath*}
\end{proof}
\begin{example}\label{example 2}
Let $M$, $m$, and $H$ be as in Example \ref{example 1}. The Taylor symbols $[h'_2,h'_3]=[c,d]$ of $\mathbb{T}_{c^2,c,d,c,cd}$, and $[m_1,h_2,h_3]=[a^3b^2,a^2c,b^2d]$ of $\mathbb{T}_M$, have multidegrees $cd$ and $a^3b^2cd$, respectively. Therefore, $\mdeg[a^3b^2,a^2c,b^2d]=a^3b^2 \mdeg[c,d]$, which is consistent with Proposition \ref{1}.
\end{example}
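Proposition \ref{1} can also be checked exhaustively for this example. The sketch below (exponent dictionaries and helper names of our choosing) verifies $\lcm(m,h_{s_1},\ldots,h_{s_i})=m\,\lcm(h'_{s_1},\ldots,h'_{s_i})$ for every nonempty subset of $H$:

```python
from itertools import combinations

def lcm(*mons):
    out = {}
    for mon in mons:
        for x, e in mon.items():
            out[x] = max(out.get(x, 0), e)
    return out

def times(a, b):
    # product of two monomials: exponents add
    return {x: a.get(x, 0) + b.get(x, 0) for x in set(a) | set(b)}

m = {'a': 3, 'b': 2}
H = [{'a': 1, 'c': 2}, {'a': 2, 'c': 1}, {'b': 2, 'd': 1},
     {'a': 1, 'b': 1, 'c': 1}, {'b': 1, 'c': 1, 'd': 1}]
Hp = [{x: e - m.get(x, 0) for x, e in lcm(m, h).items() if e > m.get(x, 0)}
      for h in H]
for s in range(1, len(H) + 1):
    for idx in combinations(range(len(H)), s):
        lhs = lcm(m, *(H[i] for i in idx))            # mdeg[m_1, h_{s_1}, ...]
        rhs = times(m, lcm(*(Hp[i] for i in idx)))    # m * mdeg[h'_{s_1}, ...]
        assert lhs == rhs
print("identity holds for every nonempty subset of H")
```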
\textit{Note}:
We will say that a monomial $l$ \textbf{occurs} in a resolution $\mathbb{F}$ if there is a basis element of $\mathbb{F}$ with multidegree $l$.
If $a$ is an entry of a differential matrix of a resolution $\mathbb{F}$ and $[\sigma]$ is an element of the basis of $\mathbb{F}$, by abusing the
language we will often say that \textbf{$a$ is an entry of $\mathbb{F}$} and \textbf{$[\sigma]$ is an element of $\mathbb{F}$}. Moreover, sometimes we will use the notation
$[\sigma]\in \mathbb{F}$.
\begin{theorem}\label{3}
Let $m'$ be a multidegree that occurs in $\mathbb{T}_{h'_1,\ldots,h'_c}$.
\begin{enumerate}[(i)]
\item There are no basis elements of $\mathbb{T}_M$, with multidegree $mm'$ and homological degree less than $j$.
\item For every $i$, there is a bijective correspondence between the basis elements of $\mathbb{T}_{h'_1,\ldots,h'_c}$, with multidegree $m'$ and homological
degree $i$, and the basis elements of $\mathbb{T}_M$, with multidegree $mm'$ and homological degree $i+j$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) Let $[h'_{s_1},\ldots,h'_{s_i}]$ be a basis element of $\mathbb{T}_{h'_1,\ldots,h'_c}$, with multidegree $m'$. By Proposition \ref{1},
\[mm'=m\mdeg[h'_{s_1},\ldots,h'_{s_i}]=\mdeg[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_i}].\]
It follows that every basis element $[\sigma]$ of $\mathbb{T}_M$, with multidegree $mm'$, must contain the same dominant monomials $m_{r_1},\ldots,m_{r_j}$ [Al, Lemma 4.3]. This means that
$\hdeg[\sigma]\geq j$.\\
(ii) Let $A_{i,m'}=\left\{[\sigma]\in \mathbb{T}_{h'_1,\ldots,h'_c}: \hdeg[\sigma]=i; \mdeg[\sigma]=m'\right\}$.\\
Let $B_{i,m'}=\left\{[\sigma]\in \mathbb{T}_M: \hdeg[\sigma]=i+j; \mdeg[\sigma]=mm'\right\}$.\\
Let $f_{i,m'}:A_{i,m'}\rightarrow B_{i,m'}$ be defined by $f_{i,m'}[h'_{s_1},\ldots,h'_{s_i}]=[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_i}]$.\\
Notice that $f_{i,m'}$ is well defined:\\ if $[\sigma]\in A_{i,m'}$, then $\hdeg f_{i,m'}[\sigma]=i+j$ and by Proposition \ref{1}, $mm'=m\mdeg[\sigma]=\mdeg f_{i,m'}[\sigma]$.\\
Besides that, $f_{i,m'}$ is one to one:\\
$f_{i,m'}[h'_{s_1},\ldots,h'_{s_i}]=f_{i,m'}[h'_{t_1},\ldots,h'_{t_i}]\\
\Rightarrow [m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_i}]=[m_{r_1},\ldots,m_{r_j},h_{t_1},\ldots,h_{t_i}]\\
\Rightarrow h_{s_1}=h_{t_1},\cdots,h_{s_i}=h_{t_i}\\
\Rightarrow [ h'_{s_1},\ldots,h'_{s_i}]=[h'_{t_1},\ldots,h'_{t_i}].$\\
Finally, $f_{i,m'}$ is onto:\\
Suppose that $[\tau]$ is in $B_{i,m'}$. Let $[h'_{t_1},\ldots,h'_{t_k}]$ be an element in $\mathbb{T}_{h'_1,\ldots,h'_c}$, with multidegree $m'$. By Proposition \ref{1},
\[mm'=m\mdeg[h'_{t_1},\ldots,h'_{t_k}]=\mdeg[m_{r_1},\ldots,m_{r_j},h_{t_1},\ldots,h_{t_k}].\]
Since $[\tau]$ and $[m_{r_1},\ldots,m_{r_j},h_{t_1},\ldots,h_{t_k}]$ are basis elements of equal multidegree, they must contain the same dominant monomials [Al, Lemma 4.3] and, given that $[\tau]$ has homological degree $i+j$, $[\tau]$ must be of the form $[\tau]=[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_i}]$. By Proposition \ref{1}, $[h'_{s_1},\ldots,h'_{s_i}] \in A_{i,m'}$, and $f_{i,m'} [h'_{s_1},\ldots,h'_{s_i}]=[\tau]$.
\end{proof}
\begin{example}\label{example 3}
Let $M$, $m$, and $H$ be as in Example \ref{example 1}. Let $m'=cd$. Since the only basis element of $\mathbb{T}_M$ in homological degree $0$ is
$[\varnothing]$, and $\mdeg[\varnothing]=1$, there are no basis elements of $\mathbb{T}_M$ in homological degree $0$ and multidegree $mm'=a^3b^2cd$. This illustrates Theorem \ref{3}(i). On the other hand, the basis elements of
$\mathbb{T}_{c^2,c,d,c,cd}$ with multidegree $m'=cd$ are \\
$[h'_5]$, in homological degree $1$; \\
$[h'_2,h'_5]$, $[h'_3,h'_5]$, $[h'_4,h'_5]$, $[h'_2,h'_3]$, $[h'_3,h'_4]$ in homological degree $2$;\\
$[h'_2,h'_3,h'_5]$, $[h'_2,h'_4,h'_5]$, $[h'_3,h'_4,h'_5]$, $[h'_2,h'_3,h'_4]$ in homological degree $3$; and \\
$[h'_2,h'_3,h'_4,h'_5]$ in homological degree $4$.\\
Similarly, the basis elements of $\mathbb{T}_M$ with multidegree $mm'=a^3b^2cd$ are \\
$[m_1,h_5]$ in homological degree $2$;\\
$[m_1,h_2,h_5]$, $[m_1,h_3,h_5]$, $[m_1,h_4,h_5]$, $[m_1,h_2,h_3]$, $[m_1,h_3,h_4]$, in homological degree $3$; \\
$[m_1,h_2,h_3,h_5]$, $[m_1,h_2,h_4,h_5]$, $[m_1,h_3,h_4,h_5]$, $[m_1,h_2,h_3,h_4]$ in homological degree $4$; and \\
$[m_1,h_2,h_3,h_4,h_5]$ in homological degree $5$, which illustrates the bijective correspondence of Theorem \ref{3}(ii).
\end{example}
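The counts in this example can be confirmed by brute force. The following sketch (exponent dictionaries, a convention chosen here) tallies, per homological degree, the Taylor symbols of $\mathbb{T}_{c^2,c,d,c,cd}$ with multidegree $cd$; by Theorem \ref{3}(ii), the same counts reappear in $\mathbb{T}_M$ one homological degree higher.

```python
from itertools import combinations
from collections import Counter

def lcm(*mons):
    out = {}
    for mon in mons:
        for x, e in mon.items():
            out[x] = max(out.get(x, 0), e)
    return out

Hp = [{'c': 2}, {'c': 1}, {'d': 1}, {'c': 1}, {'c': 1, 'd': 1}]  # c^2, c, d, c, cd
target = {'c': 1, 'd': 1}                                        # m' = cd
counts = Counter()
for s in range(len(Hp) + 1):
    for face in combinations(range(len(Hp)), s):
        if lcm(*(Hp[i] for i in face)) == target:
            counts[s] += 1
print(dict(counts))   # {1: 1, 2: 5, 3: 4, 4: 1}
```

The counts $1,5,4,1$ in homological degrees $1$ through $4$ match the lists above.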
Notation: for every multidegree $m'$ that occurs in $\mathbb{T}_{h'_1,\ldots,h'_c}$ and every $i=0,\ldots,c$, let $f_{i,m'}:A_{i,m'}\rightarrow B_{i,m'}$ be
the bijection constructed in Theorem \ref{3}.
Let $A_i=\bigcup\limits_{m'}A_{i,m'}$ and $B_i=\bigcup\limits_{m'}B_{i,m'}$. Let us define $f_i:A_i\rightarrow B_i$ by
$f_i[\sigma]=f_{i,m'}[\sigma]$, if $\mdeg[\sigma]=m'$.
Let $A=\bigcup\limits_i A_i$; $B=\bigcup\limits_i B_i$. Let us define $f:A\rightarrow B$ by $f[\sigma]=f_i[\sigma]$, if $\hdeg[\sigma]=i$.
Note that $A$ is the basis of $\mathbb{T}_{h'_1,\ldots,h'_c}$, and $f$ is a bijection that sends an element with multidegree $m'$ and homological degree $i$
to an element with multidegree $mm'$ and homological degree $i+j$.
To better understand the statement of the next theorem, we refer the reader to [Al, Remark 3.4].
\begin{theorem}\label{4}
If $a_{\pi\theta}$ is an entry of $\mathbb{T}_{h'_1,\ldots,h'_c}$, determined by elements $[\theta],[\pi]\in A$, then $f[\theta],f[\pi]$ determine an entry
$b_{\pi\theta}$ of $\mathbb{T}_M$ such that $b_{\pi\theta}=(-1)^ja_{\pi\theta}$.
\end{theorem}
\begin{proof}
Since $[\theta],[\pi]$ appear in consecutive homological degrees, so do $f[\theta],f[\pi]$. Thus, $f[\theta],f[\pi]$ determine an entry $b_{\pi\theta}$ of
$\mathbb{T}_M$.
If $[\pi]$ is a facet of $[\theta]$, then $f[\pi]$ is also a facet of $f[\theta]$, and these elements are of the form:
\[[\theta]=[h'_{s_1},\ldots,h'_{s_i}];\quad [\pi]=[h'_{s_1},\ldots,\widehat{h'_{s_t}},\ldots,h'_{s_i}];\]
\[f[\theta]=[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_i}];\quad f[\pi]=[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,\widehat{h_{s_t}},\ldots,h_{s_i}]\]
Thus
$b_{\pi\theta}=(-1)^{j+t+1}\dfrac{\mdeg f[\theta]}{\mdeg f[\pi]}=(-1)^j(-1)^{t+1}\dfrac{m\mdeg[\theta]}{m\mdeg[\pi]}=(-1)^j a_{\pi\theta}$.
On the other hand, if $[\pi]$ is not a facet of $[\theta]$, $f[\pi]$ cannot be a facet of $f[\theta]$, either. Thus $a_{\pi\theta}=0=b_{\pi\theta}$.
\end{proof}
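The sign relation of Theorem \ref{4} can be observed on a single facet pair. The sketch below (exponent dictionaries; the function name is ours) computes the Taylor differential coefficient for $[h'_2,h'_3]\rightarrow[h'_3]$ in $\mathbb{T}_{M_m}$ and for $f[h'_2,h'_3]=[m_1,h_2,h_3]\rightarrow[m_1,h_3]$ in $\mathbb{T}_M$, with $j=1$:

```python
def lcm(*mons):
    out = {}
    for mon in mons:
        for x, e in mon.items():
            out[x] = max(out.get(x, 0), e)
    return out

def taylor_coeff(gens, face, t):
    # Coefficient of the facet obtained by deleting the t-th monomial
    # (1-based) of `face`: sign (-1)^{t+1} times lcm(face)/lcm(facet).
    full = lcm(*(gens[i] for i in face))
    facet = [i for k, i in enumerate(face) if k != t - 1]
    rest = lcm(*(gens[i] for i in facet))
    ratio = {x: e - rest.get(x, 0) for x, e in full.items() if e > rest.get(x, 0)}
    return (-1) ** (t + 1), ratio

Mm = [{'c': 2}, {'c': 1}, {'d': 1}, {'c': 1}, {'c': 1, 'd': 1}]
M = [{'a': 3, 'b': 2}, {'c': 3, 'd': 1}, {'a': 1, 'c': 2}, {'a': 2, 'c': 1},
     {'b': 2, 'd': 1}, {'a': 1, 'b': 1, 'c': 1}, {'b': 1, 'c': 1, 'd': 1}]
j = 1
sa, ra = taylor_coeff(Mm, (1, 2), 1)    # [h'_2,h'_3] -> [h'_3] in T_{M_m}
sb, rb = taylor_coeff(M, (0, 3, 4), 2)  # [m_1,h_2,h_3] -> [m_1,h_3] in T_M
print(sa, ra, sb, rb)                   # 1 {'c': 1} -1 {'c': 1}
assert sb == (-1) ** j * sa and rb == ra
```

Both coefficients are the monomial $c$, with signs differing by the predicted factor $(-1)^j=-1$.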
Notation: let $\mathbb{F}_0=\mathbb{T}_{h'_1,\ldots,h'_c}$. If there is an invertible entry $a^{(0)}_{\pi_0\theta_0}$ of $\mathbb{F}_0$, determined by elements
$[\theta_0],[\pi_0]\in \mathbb{F}_0$, let $\mathbb{F}_1$ be the resolution of $S/M_m$ such that
\[\mathbb{F}_0=\mathbb{F}_1\oplus \left(0\rightarrow S[\theta_0]\rightarrow S[\pi_0]\rightarrow 0\right).\]
Let us assume that $\mathbb{F}_{k-1}$ has been defined. If there is an invertible entry $a^{(k-1)}_{\pi_{k-1}\theta_{k-1}}$ of $\mathbb{F}_{k-1}$,
determined by elements $[\theta_{k-1}],[\pi_{k-1}]$ of $\mathbb{F}_{k-1}$, let $\mathbb{F}_k$ be the resolution of $S/M_m$ such that
\[\mathbb{F}_{k-1}=\mathbb{F}_k \oplus \left(0\rightarrow S[\theta_{k-1}]\rightarrow S[\pi_{k-1}]\rightarrow 0\right).\]
\begin{theorem}\label{5}
Suppose that $\mathbb{F}_0,\ldots,\mathbb{F}_u$ are resolutions of $S/M_m$, defined as above. Then
\begin{enumerate}[(i)]
\item It is possible to define resolutions $\mathbb{G}_0,\ldots,\mathbb{G}_u$ of $S/M$, as follows:
\[\mathbb{G}_0=\mathbb{T}_M;\quad \mathbb{G}_{k-1}=\mathbb{G}_k\oplus \left(0\rightarrow Sf[\theta_{k-1}]\rightarrow Sf[\pi_{k-1}]\rightarrow 0\right).\]
\item If $a^{(u)}_{\tau\sigma}$ is an entry of $\mathbb{F}_u$, determined by elements $[\sigma],[\tau]$ of $\mathbb{F}_u$, then
$f[\sigma],f[\tau]$ are in the basis of $\mathbb{G}_u$ and determine an entry $b^{(u)}_{\tau\sigma}$ of $\mathbb{G}_u$, such that
$b^{(u)}_{\tau\sigma}=(-1)^ja^{(u)}_{\tau\sigma}$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is by induction on $u$. If $u=0$, (i) and (ii) are the content of Theorem \ref{4}.\\
Let us assume that parts (i) and (ii) hold for $u-1$.
We will prove parts (i) and (ii) for $u$.\\
(i) We need to show that $\mathbb{G}_u$ can be defined by the rule
\[\mathbb{G}_{u-1}=\mathbb{G}_u\oplus\left(0\rightarrow Sf[\theta_{u-1}]\rightarrow Sf[\pi_{u-1}]\rightarrow 0\right).\]
In other words, we must show that $f[\theta_{u-1}]$, $f[\pi_{u-1}]$ are in the basis of $\mathbb{G}_{u-1}$, and the entry $b^{(u-1)}_{\pi_{u-1}\theta_{u-1}}$ of
$\mathbb{G}_{u-1}$, determined by them, is invertible. But this follows from the induction hypothesis and the fact that $a^{(u-1)}_{\pi_{u-1}\theta_{u-1}}$ is
invertible.\\
(ii) Notice that the basis of $\mathbb{F}_u$ is obtained from the basis of $\mathbb{F}_{u-1}$, by eliminating $[\theta_{u-1}],[\pi_{u-1}]$. This means
that $[\sigma],[\tau]$ are in the basis of $\mathbb{F}_{u-1}$, and the pairs $\left([\sigma],[\tau]\right)$, $\left([\theta_{u-1}],[\pi_{u-1}]\right)$ are
disjoint. Then by induction hypothesis, $f[\sigma]$, $f[\tau]$ are in the basis of $\mathbb{G}_{u-1}$, and because $f$ is a bijection,
$\left(f[\sigma],f[\tau]\right)$, $\left(f[\theta_{u-1}],f[\pi_{u-1}]\right)$ are disjoint pairs. Since the basis of $\mathbb{G}_u$ is obtained from the
basis of $\mathbb{G}_{u-1}$, by eliminating $f[\theta_{u-1}],f[\pi_{u-1}]$, we must have that $f[\sigma],f[\tau]$ are in the basis of $\mathbb{G}_u$.
Finally, we need to prove that $b^{(u)}_{\tau\sigma}=(-1)^ja^{(u)}_{\tau\sigma}$. By [Al, Lemma 3.2(iv)], if
$\hdeg[\sigma]\neq\hdeg[\theta_{u-1}]$, then $a^{(u)}_{\tau\sigma}=a^{(u-1)}_{\tau\sigma}$. In this case, we must also have that
$\hdeg f[\sigma]\neq \hdeg f[\theta_{u-1}]$, which implies that $b^{(u)}_{\tau\sigma}=b^{(u-1)}_{\tau\sigma}$, by the same lemma. Then, by induction hypothesis,
$b^{(u)}_{\tau\sigma}=b^{(u-1)}_{\tau\sigma}=(-1)^ja^{(u-1)}_{\tau\sigma}=(-1)^ja^{(u)}_{\tau\sigma}$. On the other hand, if
$\hdeg[\sigma]=\hdeg[\theta_{u-1}]$, then $\hdeg f[\sigma]=\hdeg f[\theta_{u-1}]$. Combining the induction hypothesis with [Al, Lemma 3.2(iii)], we
obtain
\[b^{(u)}_{\tau\sigma}=b^{(u-1)}_{\tau\sigma}-\dfrac{b^{(u-1)}_{\tau\theta_{u-1}}b^{(u-1)}_{\pi_{u-1}\sigma}}{b^{(u-1)}_{\pi_{u-1}\theta_{u-1}}}=
(-1)^j\left(a^{(u-1)}_{\tau\sigma}-\dfrac{a^{(u-1)}_{\tau\theta_{u-1}}a^{(u-1)}_{\pi_{u-1}\sigma}}{a^{(u-1)}_{\pi_{u-1}\theta_{u-1}}}\right)=(-1)^j
a^{(u)}_{\tau\sigma}\]
\end{proof}
Since the process of making standard cancellations must eventually terminate, there is an integer $u\geq0$, such that $\mathbb{F}_0,\ldots,\mathbb{F}_u$ are
defined as above and $\mathbb{F}_u$ is a minimal resolution of $S/M_m$. For the rest of this section $u$ is such an integer and
$\mathbb{F}_u$ is such a minimal resolution. Moreover, the resolutions $\mathbb{F}_0,\ldots,\mathbb{F}_u$ and $\mathbb{G}_0,\ldots,\mathbb{G}_u$ are also
fixed for the rest of this section.
Notation: Let $A'=A\setminus\{[\theta_0],[\pi_0],\ldots,[\theta_{u-1}],[\pi_{u-1}]\}$ and
$B'=B\setminus\{f[\theta_0],f[\pi_0],\ldots,f[\theta_{u-1}],$\\
$f[\pi_{u-1}]\}$. Notice that $A'$ is the basis of the minimal resolution $\mathbb{F}_u$.
\begin{theorem}\label{6}
If $b^{(u)}_{\pi\theta}$ is an entry of $\mathbb{G}_u$, determined by elements $f[\theta],f[\pi]\in B'$, then $b^{(u)}_{\pi\theta}$ is noninvertible.
\end{theorem}
\begin{proof}
Since $f[\theta],f[\pi]\in B'$, $[\theta],[\pi]\in A'$ and thus, the entry $a^{(u)}_{\pi\theta}$ of $\mathbb{F}_u$ is noninvertible. Now, by Theorem
\ref{5}(ii), $b^{(u)}_{\pi\theta}$ is noninvertible.
\end{proof}
\begin{theorem}\label{7}
Let $m'$ be a multidegree that occurs in $\mathbb{T}_{M_m}$.
\begin{enumerate}[(i)]
\item There are no basis elements of $\mathbb{G}_u$, with multidegree $mm'$ and homological degree less than $j$.
\item For every $i=0,\ldots,c$, there is a bijective correspondence between the basis elements of $\mathbb{F}_u$, with multidegree $m'$ and homological
degree $i$, and the basis elements of $\mathbb{G}_u$, with multidegree $mm'$ and homological degree $i+j$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) Since the basis of $\mathbb{G}_u$ is contained in that of $\mathbb{T}_M$, the statement follows from Theorem \ref{3}(i).\\
(ii) The set of basis elements of $\mathbb{F}_u$, with multidegree $m'$ and homological degree $i$ is
$A'_{i,m'}=A_{i,m'}\setminus\{[\theta_0],[\pi_0],\ldots,[\theta_{u-1}],[\pi_{u-1}]\}$. Similarly, the set of basis elements of $\mathbb{G}_u$, with
multidegree $mm'$ and homological degree $i+j$ is $B'_{i,m'}=B_{i,m'}\setminus\{f[\theta_0],f[\pi_0],\ldots,f[\theta_{u-1}],f[\pi_{u-1}]\}$. Notice that
$[\theta_k] \in A_{i,m'}$ if and only if $f[\theta_k] \in B_{i,m'}$. Likewise, $[\pi_k] \in A_{i,m'}$ if and only if $f[\pi_k] \in B_{i,m'}$.
Therefore, if we restrict
$f_{i,m'}:A_{i,m'}\rightarrow B_{i,m'}$ to $A'_{i,m'}$, we get a bijection between $A'_{i,m'}$ and $B'_{i,m'}$.
\end{proof}
Notation: If $b^{(u)}_{\gamma_0\delta_0}$ is an invertible entry of $\mathbb{G}_u$, determined by basis elements
$[\delta_0],[\gamma_0]$ of $\mathbb{G}_u$, let $\mathbb{G}_{u+1}$ be the resolution of $S/M$ such that
\[\mathbb{G}_u=\mathbb{G}_{u+1}\oplus\left(0\rightarrow S[\delta_0]\rightarrow S[\gamma_0]\rightarrow0\right).\]
Assume that $\mathbb{G}_{u+(k-1)}$ has been defined. If $b^{(u+k-1)}_{\gamma_{k-1}\delta_{k-1}}$ is an invertible entry of $\mathbb{G}_{u+(k-1)}$, determined by basis elements $[\delta_{k-1}]$, $[\gamma_{k-1}]$ of
$\mathbb{G}_{u+(k-1)}$, let
$\mathbb{G}_{u+k}$ be the resolution of $S/M$ such that
\[\mathbb{G}_{u+(k-1)}=\mathbb{G}_{u+k}\oplus \left(0\rightarrow S[\delta_{k-1}]\rightarrow S[\gamma_{k-1}]\rightarrow 0\right).\]
\begin{theorem}\label{8}
Suppose that $\mathbb{G}_u$, $\mathbb{G}_{u+1}$ are defined as above. If $b^{(u)}_{\pi\theta}$ is an entry of $\mathbb{G}_u$, determined by
elements $f[\theta],f[\pi]\in B'$, then $f[\theta],f[\pi]$ are in the basis of $\mathbb{G}_{u+1}$. Moreover, the entry $b^{(u+1)}_{\pi\theta}$ of
$\mathbb{G}_{u+1}$, determined by $f[\theta]$ and $f[\pi]$ is noninvertible.
\end{theorem}
\begin{proof}
Since $\mathbb{G}_u=\mathbb{G}_{u+1}\oplus\left(0\rightarrow S[\delta_0]\rightarrow S[\gamma_0]\rightarrow0\right)$, we have that
$b^{(u)}_{\gamma_0\delta_0}$ is invertible, $[\delta_0],[\gamma_0]$ are in $\mathbb{G}_u$, in consecutive homological degrees, and
$\mdeg[\delta_0]=\mdeg[\gamma_0]$. Suppose that $[\delta_0],[\gamma_0]\in B'$. Then there are elements $[\sigma],[\tau]\in A'$, in consecutive homological
degrees, such that $f[\sigma]=[\delta_0]$ and $f[\tau]=[\gamma_0]$. By Theorem \ref{7}(ii), $[\sigma],[\tau]$ are in $\mathbb{F}_u$ and determine an entry
$a^{(u)}_{\tau\sigma}$. Now, it follows from Theorem \ref{5}(ii), that $b^{(u)}_{\gamma_0\delta_0}=(-1)^ja^{(u)}_{\tau\sigma}$. This means that
$a^{(u)}_{\tau\sigma}$ is invertible and $\mathbb{F}_u$ is not minimal, a contradiction.
On the other hand, if only one of $[\delta_0],[\gamma_0]$ is in $B'$, then $\mdeg[\delta_0]\neq \mdeg[\gamma_0]$; another contradiction.
We conclude that neither $[\delta_0]$ nor $[\gamma_0]$ is in $B'$
and therefore, the pairs $\left(f[\theta],f[\pi]\right)$; $\left([\delta_0],[\gamma_0]\right)$ are disjoint. This proves that $f[\theta],f[\pi]$ are in
the basis of $\mathbb{G}_{u+1}$.
Let us finally prove that $b^{(u+1)}_{\pi\theta}$ is noninvertible. If $\hdeg f[\theta]\neq\hdeg[\delta_0]$, then $b^{(u+1)}_{\pi\theta}=b^{(u)}_{\pi\theta}$ by [Al, Lemma 3.2(iv)],
and by Theorem \ref{6}, $b^{(u+1)}_{\pi\theta}$ is noninvertible. If $\hdeg f[\theta]=\hdeg[\delta_0]$, by [Al, Lemma 3.2(iii)], we have
$b^{(u+1)}_{\pi\theta}=b^{(u)}_{\pi\theta}-\dfrac{b^{(u)}_{\pi\delta_0}b^{(u)}_{\gamma_0\theta}}{b^{(u)}_{\gamma_0\delta_0}}$.
Since $b^{(u)}_{\pi\theta}=(-1)^ja^{(u)}_{\pi\theta}$, $b^{(u)}_{\pi\theta}$ is noninvertible. Since $\mdeg f[\pi]\neq\mdeg[\delta_0]$, the entry
$b^{(u)}_{\pi\delta_0}$ of $\mathbb{G}_u$, determined by $[\delta_0]$, $f[\pi]$, is noninvertible. Hence, the product $b^{(u)}_{\pi\delta_0}\,b^{(u)}_{\gamma_0\theta}$
must be noninvertible. This means that the quotient
$\dfrac{b^{(u)}_{\pi\delta_0}b^{(u)}_{\gamma_0\theta}}{b^{(u)}_{\gamma_0\delta_0}}$ is noninvertible. Finally, $b^{(u+1)}_{\pi\theta}$ is noninvertible, for the
difference of two noninvertible monomials is noninvertible.
\end{proof}
\begin{theorem}\label{9}
Suppose that $\mathbb{G}_u,\ldots,\mathbb{G}_{u+v}$ are defined as above. If $b^{(u)}_{\pi\theta}$ is an entry of $\mathbb{G}_u$, determined by
elements $f[\theta],f[\pi]\in B'$, then $f[\theta],f[\pi]$ are in the basis of $\mathbb{G}_{u+v}$, and the entry $b^{(u+v)}_{\pi\theta}$ of
$\mathbb{G}_{u+v}$, determined by $f[\theta],f[\pi]$ is noninvertible.
\end{theorem}
\begin{proof}
The proof is by induction on $v$. If $v=1$, the statement is the content of Theorem \ref{8}.
Let us assume that the statement holds for $v-1$.
Since $\mathbb{G}_{u+(v-1)}=\mathbb{G}_{u+v}\oplus\left(0\rightarrow S[\delta_{v-1}]\rightarrow S[\gamma_{v-1}]\rightarrow 0\right)$, it follows that the
entry $b^{(u+v-1)}_{\gamma_{v-1}\delta_{v-1}}$ of $\mathbb{G}_{u+(v-1)}$, determined by $[\delta_{v-1}],[\gamma_{v-1}]$, is invertible. If we had that
$[\delta_{v-1}],[\gamma_{v-1}]\in B'$, then, by induction hypothesis, $b^{(u+v-1)}_{\gamma_{v-1}\delta_{v-1}}$ would be noninvertible, a contradiction.
On the other hand, if exactly one of $[\delta_{v-1}],[\gamma_{v-1}]$ were in $B'$, their multidegrees would be different, another contradiction. Hence, neither
$[\delta_{v-1}]$ nor $[\gamma_{v-1}]$ is in $B'$. This means that the pairs $\left(f[\theta],f[\pi]\right)$, $\left([\delta_{v-1}],[\gamma_{v-1}]\right)$ are
disjoint. Thus, $f[\theta],f[\pi]$ are in the basis of $\mathbb{G}_{u+v}$.\\
Let us now prove that $b^{(u+v)}_{\pi\theta}$ is noninvertible. If $\hdeg f[\theta]\neq\hdeg[\delta_{v-1}]$, then $b^{(u+v)}_{\pi\theta}=b^{(u+v-1)}_{\pi\theta}$ by [Al, Lemma 3.2 (iv)],
and the result follows from the induction hypothesis. Now, if $\hdeg f[\theta]=\hdeg[\delta_{v-1}]$,
\[b^{(u+v)}_{\pi\theta}=b^{(u+v-1)}_{\pi\theta}-\dfrac{b^{(u+v-1)}_{\pi\delta_{v-1}}b^{(u+v-1)}_{\gamma_{v-1}\theta}}{b^{(u+v-1)}_{\gamma_{v-1}\delta_{v-1}}}\]
by [Al, Lemma 3.2(iii)].
Notice that $b^{(u+v-1)}_{\pi\theta}$ is noninvertible, by induction hypothesis. Since $\mdeg f[\pi]\neq\mdeg [\delta_{v-1}]$, it follows that the entry
$b^{(u+v-1)}_{\pi\delta_{v-1}}$ of $\mathbb{G}_{u+(v-1)}$, determined by $[\delta_{v-1}],f[\pi]$, is noninvertible. This implies that the product
$b^{(u+v-1)}_{\pi\delta_{v-1}}b^{(u+v-1)}_{\gamma_{v-1}\theta}$ is noninvertible. Moreover, since $b^{(u+v-1)}_{\gamma_{v-1}\delta_{v-1}}$ is invertible,
the quotient
$\dfrac{b^{(u+v-1)}_{\pi\delta_{v-1}}b^{(u+v-1)}_{\gamma_{v-1}\theta}}{b^{(u+v-1)}_{\gamma_{v-1}\delta_{v-1}}}$
is noninvertible. Finally, $b^{(u+v)}_{\pi\theta}$ is noninvertible, for the difference of two noninvertible monomials is noninvertible.
\end{proof}
Since the process of making standard cancellations must eventually terminate, there is an integer $v\geq0$, such that
$\mathbb{G}_u,\ldots,\mathbb{G}_{u+v}$ are defined as above, and $\mathbb{G}_{u+v}$ is a minimal resolution of $S/M$.
For the rest of this section, $v$ is such an integer and $\mathbb{G}_{u+v}$ is such a minimal resolution. Moreover, the resolutions
$\mathbb{G}_u,\ldots,\mathbb{G}_{u+v}$ are fixed for the rest of this section.
\begin{theorem}\label{10}
Let $m'$ be a multidegree that occurs in $\mathbb{T}_{M_m}$. For each $i$, there is a bijective correspondence between the basis elements of $\mathbb{G}_u$,
with multidegree $mm'$ and homological degree $i+j$, and the basis elements of $\mathbb{G}_{u+v}$, with multidegree $mm'$ and homological degree $i+j$.
\end{theorem}
\begin{proof}
Since the basis of $\mathbb{G}_{u+v}$ is contained in that of $\mathbb{G}_u$, every basis element of $\mathbb{G}_{u+v}$, with multidegree $mm'$ and
homological degree $i+j$ is in $\mathbb{G}_u$. Conversely, every basis element of $\mathbb{G}_u$, with multidegree $mm'$ and homological degree $i+j$ is in
$\mathbb{G}_{u+v}$, by Theorem \ref{9}.
\end{proof}
\begin{theorem}\label{11}
Let $\mathbb{F}$ be a minimal resolution of $S/{M_m}$, and let $\mathbb{G}$ be a minimal free resolution of $S/M$. Let $m'$ be a multidegree
that occurs in $\mathbb{T}_{M_m}$. Then
\begin{enumerate}[(i)]
\item There are no basis elements of $\mathbb{G}$, with multidegree $mm'$ and homological degree less than $j$.
\item For each $i$, there is a bijective correspondence between the basis elements of $\mathbb{F}$, with multidegree $m'$ and homological
degree $i$, and the basis elements of $\mathbb{G}$, with multidegree $mm'$ and homological degree $i+j$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) Since the basis of $\mathbb{G}$ is contained in that of $\mathbb{T}_M$, this part follows from Theorem \ref{3}(i).\\
(ii) This part follows immediately from Theorem \ref{7}(ii) and Theorem \ref{10}.
\end{proof}
\begin{example}\label{12}
Consider Example \ref{example 1} again. Recall that $M=(m_1,m_2,h_1,h_2,h_3,h_4,h_5)=(a^3b^2,c^3d,ac^2,a^2c,b^2d,abc,bcd)$, and $m=\lcm(m_1)=a^3b^2$. Since $M_m=(c^2,c,d,c,cd)=(c,d)$, the minimal resolution of $S/M_m$ is of the form
\[\mathbb{F}: 0\rightarrow S[c,d] \rightarrow
\begin{array}{c}
S[c]\\
\oplus \\
S[d]
\end{array}
\rightarrow S[\varnothing] \rightarrow S/M_m \rightarrow 0. \]
Thus, $\betti_{0,1}\left(S/M_m\right)=\betti_{1,c}\left(S/M_m\right)=\betti_{1,d}\left(S/M_m\right)=\betti_{2,cd}\left(S/M_m\right)=1$.
By Theorem \ref{11}(ii) (with $m=a^3b^2$, and $j=1$),
$\betti_{1,a^3b^2}\left(S/M\right)=\betti_{2,a^3b^2c}\left(S/M\right)=\betti_{2,a^3b^2d}\left(S/M\right)=\betti_{3,a^3b^2cd}\left(S/M\right)=1$.
By Theorem \ref{11}(i),
$\betti_{0,a^3b^2}\left(S/M\right)=\betti_{0,a^3b^2c}\left(S/M\right)=\betti_{0,a^3b^2d}\left(S/M\right)=\betti_{0,a^3b^2cd}\left(S/M\right)=0$. (In the next section we will give the entire list of multigraded Betti numbers of $S/M$.)
\end{example}
\section{Structural Decomposition Theorems}
The notation below retains its meaning until the end of this section.
Let $M$ be an ideal with minimal generating set $G=\{m_1,\ldots, m_q, n_1, \ldots, n_p\}$, where $m_1,\ldots,m_q$ are dominant and $n_1,\ldots,n_p$ are nondominant. Let $1\leq d \leq q$, and let $H=\{m_{d+1},\ldots,m_q,n_1,\ldots,n_p\}$. Then $G$ can be expressed in the form $G=\{m_1,\ldots,m_d,h_1,\ldots,h_c\}$, where $H=\{h_1,\ldots,h_c\}$.
\begin{itemize}
\item If $c>0$, let $C=\{(j,m)\in \mathbb{Z}^+ \times S:\text{ there are integers }1\leq r_1<\cdots <r_j \leq d \text{, such that }m=\lcm(m_{r_1},\ldots,m_{r_j})\} \bigcup \{(0,1)\}$. For each $(j,m) \in C$, let $M_m=(h'_1,\ldots,h'_c)$,
where $h'_i=\dfrac{\lcm(m,h_i)}{m}$.
\item If $c=0$, let $C=\{(0,1)\}$ and let $M_1=M$.
\end{itemize}
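This construction is easy to mechanize. The following Python sketch (all names are ours, not from the text; monomials are encoded as dictionaries mapping variables to exponents, and the empty dictionary encodes the monomial $1$) computes the generators $h'_i=\lcm(m,h_i)/m$ of $M_m$ and the index set $C$:

```python
from itertools import combinations

def lcm(*mons):
    """Exponentwise max of monomials given as {variable: exponent} dicts."""
    out = {}
    for mon in mons:
        for v, e in mon.items():
            out[v] = max(out.get(v, 0), e)
    return out

def quotient(a, b):
    """The monomial a/b, assuming b divides a exponentwise."""
    return {v: e - b.get(v, 0) for v, e in a.items() if e - b.get(v, 0) > 0}

def ideal_M_m(m, H):
    """The generators h'_i = lcm(m, h_i)/m of the ideal M_m."""
    return [quotient(lcm(m, h), m) for h in H]

def build_C(dominant):
    """The set C: all pairs (j, lcm of j of the dominant generators),
    together with (0, 1); the empty dict encodes the monomial 1."""
    C = [(0, {})]
    for j in range(1, len(dominant) + 1):
        for sub in combinations(dominant, j):
            pair = (j, lcm(*sub))
            if pair not in C:  # C is a set: keep one copy of each pair
                C.append(pair)
    return C
```

For instance, with $m=c^3d$ and $H=\{ac^2,a^2c,b^2d,abc,bcd\}$ as in Example \ref{example 1}, \texttt{ideal\_M\_m} returns the generators $a,a^2,b^2,ab,b$ of $M_{c^3d}$.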
\begin{theorem}\label{1SD}
For each integer $k$ and each monomial $l$,
\[\betti_{k,l} (S/M)= \sum\limits_{(j,m)\in C} \betti_{k-j,l/m}(S/M_m).\]
\end{theorem}
\begin{proof}
If $c=0$, the theorem is trivial. Let us consider the case $c>0$. \\
If $\betti_{k,l}(S/M)=0$, then $\sum\limits_{(j,m)\in C}\betti_{k-j,l/m }(S/M_m)=0$, by Theorem \ref{11}(ii).
Suppose now that $\betti_{k,l}(S/M)\neq 0$. Then there is an element $[\tau]$ in the basis of a minimal resolution of $S/M$, such that $\hdeg[\tau]=k$ and
$\mdeg[\tau]=l$. Let $m_{r_1}, \ldots,m_{r_j}$ be the dominant monomials that are contained in $[\tau]$, and such that $\{m_{r_1}, \ldots , m_{r_j}\}$ is a subset of $\{m_1, \ldots , m_d\}$. Since all basis elements of $\mathbb{T}_M$ with equal multidegree must contain the same dominant monomials [Al, Lemma 4.3], every basis element of $\mathbb{T}_M$ in homological degree $k$ and multidegree $l$ must be of the form $[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_{k-j}}]$. Let $m=\lcm(m_{r_1},\ldots,m_{r_j})$. Then $(j,m)\in C$, and $\betti_{k,l}(S/M)=\betti_{k-j,l/m}(S/M_m)$, by Theorem \ref{11}(ii).\\
We will complete the proof by showing that $\betti_{k-j',l/m'}(S/M_{m'})=0$, for all $(j',m')\in C \setminus \{(j,m)\}$. Let $(j',m')\in C$. Then there are dominant monomials $m_{u_1},\ldots,m_{u_{j'}}$, such that
$m'=\lcm(m_{u_1},\ldots,m_{u_{j'}})$. Suppose that $\betti_{k-j',l/m'}(S/M_{m'})\neq 0$. Then $\mathbb{T}_{M_{m'}}$ has a basis element $[h'_{t_1},\ldots,h'_{t_{k-j'}}]$ with multidegree $l/m'$. By Proposition \ref{1},
$l=m' \mdeg[h'_{t_1},\ldots,h'_{t_{k-j'}}]=\mdeg[m_{u_1},\ldots,m_{u_{j'}},h_{t_1},\ldots,h_{t_{k-j'}}]$. Since the basis elements of $\mathbb{T}_M$ in homological degree $k$ and multidegree $l$ are of the form
$[m_{r_1},\ldots,m_{r_j},h_{s_1},\ldots,h_{s_{k-j}}]$, we must have that $\{m_{u_1},\ldots,m_{u_{j'}}\}=\{m_{r_1},\ldots,m_{r_j}\}$. In particular, $j'=j$, and
$m'=\lcm(m_{u_1},\ldots,m_{u_{j'}})=\lcm(m_{r_1},\ldots,m_{r_j})=m$. Thus $(j',m')=(j,m)$.
\end{proof}
\begin{definition}
Recall that $G=\{m_1,\ldots, m_q, n_1, \ldots, n_p\}=\{m_1,\ldots,m_d,h_1,\ldots,h_c\}$ is the minimal generating set of $M$. If $d=q$, the equation
\[\betti_{k,l} (S/M)= \sum\limits_{(j,m)\in C} \betti_{k-j,l/m}(S/M_m),\]
given by Theorem \ref{1SD}, will be called the \textbf{first structural decomposition} of $M$.
\end{definition}
Note that when $d=q$, we have that $c=p$, and $\{h_1,\ldots,h_c\}=\{n_1,\ldots,n_p\}$.
\begin{example}\label{example 1SD}
Consider Example \ref{example 1}, again. Recall that $M=(m_1,m_2,n_1,n_2,n_3,n_4,n_5)=(a^3b^2,c^3d,ac^2,a^2c,b^2d,abc,bcd)$, where $\{h_1,\ldots,h_5\}=\{n_1,\ldots,n_5\}$, and hence, $d=q$. By definition,
\begin{dmath*}
C=\{(2,\lcm(a^3b^2,c^3d));(1,\lcm(a^3b^2));(1,\lcm(c^3d));(0,1)\}=\{(2,a^3b^2c^3d);(1,a^3b^2);(1,c^3d);(0,1)\}.
\end{dmath*}
Now, each ordered pair $(j,m)$ in $C$, determines a monomial ideal $M_m$. Namely, $(2,a^3b^2c^3d)$ defines $M_{a^3b^2c^3d}=(h'_1,h'_2,h'_3,h'_4,h'_5)$, where $h'_1=\dfrac{\lcm(a^3b^2c^3d,ac^2)}{a^3b^2c^3d}=1$. Therefore, $M_{a^3b^2c^3d}=(1)=S$.\\
Likewise, $(1,a^3b^2)$ defines $M_{a^3b^2}=(c,d)$ (recall Example \ref{12}). \\
Also, $(1,c^3d)$ defines $M_{c^3d}=(h'_1,h'_2,h'_3,h'_4,h'_5)$, where $h'_1=\dfrac{\lcm(c^3d,ac^2)}{c^3d}=a$; $h'_2=\dfrac{\lcm(c^3d,a^2c)}{c^3d}=a^2$; $h'_3=\dfrac{\lcm(c^3d,b^2d)}{c^3d}=b^2$; $h'_4=\dfrac{\lcm(c^3d,abc)}{c^3d}=ab$;
$h'_5=\dfrac{\lcm(c^3d,bcd)}{c^3d}=b$. Thus, $M_{c^3d}=(a,a^2,b^2,ab,b)=(a,b)$.\\
Finally, $(0,1)$ defines $M_1=(ac^2,a^2c,b^2d,abc,bcd)$. Therefore, the first structural decomposition of $M$ is
\[
\betti_{k,l}(S/M)=\betti_{k-1,l/a^3b^2}\left(\dfrac{S}{(c,d)}\right)+\betti_{k-1,l/c^3d}\left(\dfrac{S}{(a,b)}\right)+\betti_{k,l}\left(\dfrac{S}{(ac^2,a^2c,b^2d,abc,bcd)}\right).
\]
\end{example}
\begin{theorem}\label{2SD}
There is a family $\mathscr{D}$ of dominant ideals and a family $\mathscr{N}$ of purely nondominant ideals, such that
\[\betti_{k,l}(S/M)=\sum\limits_{D\in \mathscr{D}} \betti_{k-j_D,l/m_D}(S/D) + \sum\limits_{N\in \mathscr{N}} \betti_{k-j_N,l/m_N}(S/N),\]
where $j_D$, $j_N$ are integers that depend on $D$ and $N$, respectively, and $m_D$, $m_N$ are monomials that depend on $D$ and $N$, respectively.
\end{theorem}
\begin{proof}
Let \[\betti_{k,l}(S/M)=\sum\limits_{(j,m)} \betti_{k-j,l/m}(S/M_m)\]
be the first structural decomposition of $M$. If some $M_m=(h'_1,\ldots,h'_p)$ (recall that $c=p$) is neither dominant nor purely nondominant, then its minimal generating set is of the form $\{u_1,\ldots, u_{q_1},v_1,\ldots,v_{p_1}\}$, where $u_1,\ldots,u_{q_1}$ are dominant, $v_1,\ldots,v_{p_1}$ are nondominant, $q_1 \geq 1$, $p_1 \geq 1$, and $q_1 +p_1 \leq p$. In particular, $p_1\leq p-1$. Let
$\betti_{k,l}(S/M_m)=\sum\limits_{(j',m')} \betti_{k-j',l/m'}(S/M_{m,m'})$ be the first structural decomposition of $M_m$. Combining the last two identities, we obtain
\[\betti_{k,l}(S/M)=\sum\limits_{(j,m)}\betti_{k-j,l/m}(S/M_m) = \sum\limits_{(j,m)} \sum\limits_{(j',m')} \betti_{k-j-j',l/mm'}(S/M_{m,m'}).\]
If some $M_{m,m'}$ is neither dominant nor purely nondominant, then $M_{m,m'}=(v'_1,\ldots,v'_{p_1})$ (where $v'_i=\dfrac{\lcm(m',v_i)}{m'}$), and the number $p_2$ of nondominant generators in its minimal generating set is less than $p_1$ (because $M_{m,m'}$ is minimally generated by at most $p_1$ monomials). In particular, $p_2\leq p_1-1\leq p-2$. Suppose that, after applying Theorem \ref{1SD} $r$ times, we obtain a decomposition
$\betti_{k,l}(S/M)=\sum\limits_{(j_1,l_1)} \cdots \sum\limits_{(j_r,l_r)} \betti_{k-j_1-\ldots -j_r,l/l_1\ldots l_r}(S/M_{l_1,\ldots,l_r})$, such that if some $M_{l_1,\ldots,l_r}$ is neither dominant nor purely nondominant, then
$M_{l_1,\ldots,l_r}=(w'_1,\ldots,w'_{p_{r-1}})$, with $p_{r-1}\leq p-(r-1)$. \\
If some $M_{l_1,\ldots,l_r}$ is neither dominant nor purely nondominant, then the number $p_r$ of nondominant generators in its minimal generating set is less than $p_{r-1}$. In particular,
$p_r\leq p_{r-1}-1 \leq p-r$. Therefore, after applying Theorem \ref{1SD} $p$ times, we obtain a decomposition
\[\betti_{k,l}(S/M)=\sum\limits_{(j_1,l_1)}\cdots \sum\limits_{(j_p,l_p)} \betti_{k-j_1-\ldots-j_p,l/l_1\ldots l_p}(S/M_{l_1,\ldots,l_p}).\]
If we assume that there is an ideal $M_{l_1,\ldots,l_p}$ which is neither dominant nor purely nondominant, then $M_{l_1,\ldots,l_p}=(z'_1,\ldots,z'_{p_{p-1}})$, with $p_{p-1}\leq p-(p-1)=1$.\\
But this scenario is not possible, for the minimal generating set of such an ideal must contain at least one dominant generator and at least one nondominant generator. \\
We conclude that each $M_{l_1,\ldots,l_p}$ is either dominant or purely nondominant.
\end{proof}
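The recursion in this proof can be carried out mechanically. The Python sketch below is ours, not the paper's; it assumes the characterization of dominant generators from [Al] (a generator is dominant when some variable occurs in it with a strictly larger exponent than in every other generator) and iterates the first structural decomposition until every terminal ideal is dominant or purely nondominant:

```python
from itertools import combinations

def lcm(mons):
    """Exponentwise max of monomials given as {variable: exponent} dicts."""
    out = {}
    for mon in mons:
        for v, e in mon.items():
            out[v] = max(out.get(v, 0), e)
    return out

def times(a, b):
    """Product of two monomials (exponents add)."""
    return {v: a.get(v, 0) + b.get(v, 0) for v in set(a) | set(b)}

def divides(a, b):
    """a | b, exponentwise."""
    return all(b.get(v, 0) >= e for v, e in a.items())

def key(mon):
    """Hashable canonical form of a monomial."""
    return tuple(sorted(mon.items()))

def minimalize(gens):
    """Minimal generating set: drop duplicates and redundant generators."""
    gens = [dict(k) for k in {key(g) for g in gens}]
    return [g for g in gens if not any(h != g and divides(h, g) for h in gens)]

def dominant_split(gens):
    """Split into (dominant, nondominant) generators, per our reading of [Al]."""
    variables = {v for g in gens for v in g}
    dom, nondom = [], []
    for g in gens:
        others = [h for h in gens if h is not g]
        if any(g.get(v, 0) > max((h.get(v, 0) for h in others), default=0)
               for v in variables):
            dom.append(g)
        else:
            nondom.append(g)
    return dom, nondom

def decompose(gens, j=0, m=None):
    """Iterate the first structural decomposition; yield (j, m, ideal)
    triples with every terminal ideal dominant or purely nondominant, so
    that beta_{k,l}(S/M) is the sum of beta_{k-j,l/m}(S/ideal)."""
    m = {} if m is None else m
    gens = minimalize(gens)
    dom, nondom = dominant_split(gens)
    if not dom or not nondom:
        yield j, m, gens
        return
    seen = set()
    for k in range(len(dom) + 1):
        for sub in combinations(dom, k):
            mm = lcm(sub)
            if (k, key(mm)) in seen:  # C is a set of pairs (j, m)
                continue
            seen.add((k, key(mm)))
            quots = [{v: e - mm.get(v, 0) for v, e in h.items()
                      if e - mm.get(v, 0) > 0} for h in nondom]
            if any(q == {} for q in quots):
                continue  # the quotient ideal is the whole ring; no contribution
            yield from decompose(quots, j + k, times(m, mm))
```

On the generators of the ideal of Example \ref{example 1}, \texttt{decompose} yields seven terms, each with a dominant or purely nondominant terminal ideal.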
\begin{definition}
The equation
\[\betti_{k,l}(S/M)=\sum\limits_{D\in \mathscr{D}} \betti_{k-j_D,l/m_D}(S/D)+ \sum\limits_{N\in \mathscr{N}} \betti _{k-j_N,l/m_N}(S/N)\]
constructed in the proof of Theorem \ref{2SD} will be called the \textbf{second structural decomposition} of $M$. The sum $\sum\limits_{D\in \mathscr{D}} \betti_{k-j_D,l/m_D}(S/D)$ will be called the \textbf{dominant part} of the second structural decomposition, and the sum $\sum\limits_{N\in \mathscr{N}} \betti_{k-j_N,l/m_N}(S/N)$ will be called the \textbf{purely nondominant part} of the second structural decomposition.
\end{definition}
\textit{Note}:
Although Theorem \ref{2SD} states the existence of a decomposition of the form
\[\betti_{k,l}(S/M)=\sum\limits_{D\in \mathscr{D}} \betti_{k-j_D,l/m_D}(S/D) + \sum\limits_{N\in \mathscr{N}} \betti_{k-j_N,l/m_N}(S/N),\]
the proof of Theorem \ref{2SD} is constructive. In fact, we show that
\[\betti_{k,l}(S/M)=\sum\limits_{(j_1,l_1)}\cdots \sum\limits_{(j_p,l_p)} \betti_{k-j_1-\ldots-j_p,l/l_1\ldots l_p}(S/M_{l_1,\ldots,l_p}),\]
where the ideals $M_{l_1,\ldots,l_p}$ are either dominant or purely nondominant, and they determine the dominant and purely nondominant part of the second structural decomposition.\\
Recall that if $D$ is a dominant ideal, its minimal resolution is given by $\mathbb{T}_D$ [Al, Theorem 4.4]. Therefore, when the second structural decomposition of $M$ has no purely nondominant part, we can immediately compute the multigraded Betti numbers $\betti_{k,l}(S/M)$. Such is the case in the next example.
\begin{example}\label{example 2SD}
In Example \ref{example 1} we introduced the ideal $M=(a^3b^2,c^3d,ac^2,a^2c,b^2d,abc,bcd)$ and, in Example \ref{example 1SD} we gave its first structural decomposition. We would like to read off the Betti numbers of $S/M$ from the Betti numbers of the three ideals on the right side of that decomposition. The first two of these ideals, namely $M_{a^3b^2}=(c,d)$ and $M_{c^3d}=(a,b)$, are dominant. Hence, their minimal resolutions are $\mathbb{T}_{M_{a^3b^2}}$ and $\mathbb{T}_{M_{c^3d}}$, respectively. However, the third ideal, $M_1=(ac^2,a^2c,b^2d,abc,bcd)$, is not dominant. In order to obtain the multigraded Betti numbers of $S/M_1$ we compute the first structural decomposition of $M_1$ (we leave the details to the reader):\\
\begin{align*}
\betti_{k,l}\left(\dfrac{S}{M_1}\right)&=
\betti_{k-2,l/a^2c^2}\left(\dfrac{S}{(b)}\right)+\betti_{k-1,l/ac^2}\left(\dfrac{S}{(b)}\right)+ \betti_{k-1,l/a^2c}\left(\dfrac{S}{(b)}\right)
+ \betti_{k-1,l/b^2d}\left(\dfrac{S}{(c)}\right) \\
& +\betti_{k,l}\left(\dfrac{S}{(abc,bcd)}\right).
\end{align*}
Now, if we combine this equation with the first structural decomposition of $M$, given in Example \ref{example 1SD}, we obtain
\begin{align*}
\betti_{k,l}(S/M) & =
\betti_{k-1,l/a^3b^2}\left(\dfrac{S}{(c,d)}\right)+\betti_{k-1,l/c^3d}\left(\dfrac{S}{(a,b)}\right)+\betti_{k-2,l/a^2c^2}\left(\dfrac{S}{(b)}\right)\\
&+
\betti_{k-1,l/ac^2}\left(\dfrac{S}{(b)}\right)
+ \betti_{k-1,l/a^2c}\left(\dfrac{S}{(b)}\right)
+ \betti_{k-1,l/b^2d}\left(\dfrac{S}{(c)}\right) + \betti_{k,l}\left(\dfrac{S}{(abc,bcd)}\right).
\end{align*}
Note that this is the second structural decomposition of $M$, for each ideal on the right side of this decomposition is dominant. In order to compute $\betti_{k,l}(S/M)$, it would be unwise to choose integers $k$ and monomials $l$ at random. We might take many guesses and still not find any nonzero multigraded Betti numbers. The right way to compute $\betti_{k,l}(S/M)$ is by first computing the minimal resolutions of the dominant ideals on the right side of the decomposition, which we do next.
\begin{itemize}
\item The multigraded Betti numbers of $S/(c,d)$ are \\
$\betti_{0,1}(S/(c,d))=\betti_{1,c}(S/(c,d))=\betti_{1,d}(S/(c,d))=\betti_{2,cd}(S/(c,d))=1$. \\
Therefore, $\betti_{k-1,l/a^3b^2}\left(\dfrac{S}{(c,d)}\right)=1$ when $(k-1,l/a^3b^2)$ equals one of $(0,1)$, $(1,c)$, $(1,d)$, $(2,cd)$; that is, when $(k,l)$ equals one of $(1,a^3b^2)$, $(2,a^3b^2c)$, $(2, a^3b^2d)$, $(3,a^3b^2cd)$.
\item The multigraded Betti numbers of $S/(a,b)$ are \\
$\betti_{0,1}(S/(a,b))=\betti_{1,a}(S/(a,b))=\betti_{1,b}(S/(a,b))=\betti_{2,ab}(S/(a,b))=1$. \\
Therefore, $\betti_{k-1,l/c^3d}\left(\dfrac{S}{(a,b)}\right)=1$ when $(k-1,l/c^3d)$ equals one of $(0,1)$, $(1,a)$, $(1,b)$, $(2,ab)$; that is, when $(k,l)$ equals one of $(1,c^3d)$, $(2,ac^3d)$, $(2, bc^3d)$, $(3,abc^3d)$.
\item The multigraded Betti numbers of $S/(b)$ are
$\betti_{0,1}(S/(b))=\betti_{1,b}(S/(b))=1$. \\
Therefore, $\betti_{k-2,l/a^2c^2}\left(\dfrac{S}{(b)}\right)=1$, or $\betti_{k-1,l/ac^2}\left(\dfrac{S}{(b)}\right)=1$, or $\betti_{k-1,l/a^2c}\left(\dfrac{S}{(b)}\right)=1$, when $(k-2,l/a^2c^2)$ equals one of $(0,1)$, $(1,b)$, or when $(k-1,l/ac^2)$ equals one of $(0,1)$, $(1,b)$, or when $(k-1,l/a^2c)$ equals one of $(0,1)$, $(1,b)$; that is, when $(k,l)$ equals one of $(2,a^2c^2)$, $(3,a^2bc^2)$, $(1, ac^2)$, $(2,abc^2)$, $(1, a^2c)$, $(2,a^2bc)$.
\item The multigraded Betti numbers of $S/(c)$ are
$\betti_{0,1}(S/(c))=\betti_{1,c}(S/(c))=1$. \\
Therefore, $\betti_{k-1,l/b^2d}\left(\dfrac{S}{(c)}\right)=1$ when $(k-1,l/b^2d)$ equals one of $(0,1)$, $(1,c)$; that is, when $(k,l)$ equals one of $(1,b^2d)$, $(2,b^2cd)$.
\item The multigraded Betti numbers of $S/(abc,bcd)$ are \\
$\betti_{0,1}(S/(abc,bcd))=\betti_{1,abc}(S/(abc,bcd))=\betti_{1,bcd}(S/(abc,bcd))=\betti_{2,abcd}(S/(abc,bcd))=1$. \\
Therefore, $\betti_{k,l}\left(\dfrac{S}{(abc,bcd)}\right)=1$ when $(k,l)$ equals one of $(0,1)$, $(1,abc)$, $(1, bcd)$, $(2,abcd)$.
\end{itemize}
Thus, the nonzero multigraded Betti numbers of $S/M$ are\\
$\betti_{3,a^3b^2cd}=\betti_{3,abc^3d}=\betti_{3,a^2bc^2}=\betti_{2,a^3b^2c}=\betti_{2,a^3b^2d}=\betti_{2,ac^3d}=\betti_{2,bc^3d}=\betti_{2,a^2c^2}=\betti_{2,abc^2}=\betti_{2,a^2bc}=\betti_{2,b^2cd}=\betti_{2,abcd}=\betti_{1,a^3b^2}=\betti_{1,c^3d}=\betti_{1,ac^2}=\betti_{1,a^2c}=\betti_{1,b^2d}=\betti_{1,abc}=\betti_{1,bcd}=\betti_{0,1}=1.$
\end{example}
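Since the Taylor complex of a dominant ideal is minimal [Al, Theorem 4.4], the Betti numbers of the dominant pieces in such a decomposition can be read off by enumerating subsets of generators, as in the example above. The following Python sketch (our names, not the paper's; monomials as exponent dictionaries) does exactly this:

```python
from itertools import combinations

def lcm(mons):
    """Exponentwise max of a collection of monomials (exponent dicts)."""
    out = {}
    for mon in mons:
        for v, e in mon.items():
            out[v] = max(out.get(v, 0), e)
    return out

def key(mon):
    """Hashable canonical form of a monomial."""
    return tuple(sorted(mon.items()))

def taylor_betti(gens):
    """beta_{k,l} read off the Taylor complex: the number of k-subsets of
    the generators whose lcm is l. When the ideal is dominant the Taylor
    complex is minimal, so these are the multigraded Betti numbers of S/I."""
    betti = {}
    for k in range(len(gens) + 1):
        for sub in combinations(gens, k):
            t = (k, key(lcm(sub)))
            betti[t] = betti.get(t, 0) + 1
    return betti
```

For $(c,d)$ this recovers the four Betti numbers listed above; for a nondominant ideal it only returns the ranks of the (possibly nonminimal) Taylor resolution.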
\begin{definition}
Recall that $G=\{m_1,\ldots,m_d,h_1,\ldots,h_c\}$ is the minimal generating set of $M$. If $d=1$, the equation
\[\betti_{k,l} (S/M)= \sum\limits_{(j,m)\in C} \betti_{k-j,l/m}(S/M_m),\]
given by Theorem \ref{1SD}, will be called the \textbf{third structural decomposition} of $M$.
\end{definition}
Note that when $d=1$, the right hand side of the equation above has only two terms. The third structural decomposition will be instrumental in the proof of Charalambous' theorem, in Section 6.
\section{Decompositions without purely nondominant part}
When the second structural decomposition of $M$ has no purely nondominant part, the numbers $\betti_{k,l} (S/M)$ can be easily computed, as illustrated in Example \ref{example 2SD}. In this section, however, our aim is to compute Betti numbers of classes of ideals rather than single ideals. More specifically, we will introduce two families of ideals whose decompositions have no purely nondominant part, and will give their multigraded Betti numbers explicitly.
\begin{definition}
Let $L$ be the set of all monomials $l$ such that the number of basis elements of $\mathbb{T}_M$ with multidegree $l$ is odd.
\begin{enumerate}[(i)]
\item We say that $M$ has \textbf{characteristic Betti numbers}, if for each monomial $l$
\begin{equation*}
\sum\limits_k \betti_{k,l}(S/M)=
\begin{cases}
1 \text{ if } l \in L, \\
0 \text{ otherwise.}
\end{cases}
\end{equation*}
\item For each $l \in L$, let
\[f(l)=\min\{\hdeg[\sigma]:[\sigma]\in \mathbb{T}_M \text{ and } \mdeg[\sigma]=l\}.\]
We say that $M$ has \textbf{characteristic Betti numbers in minimal homological degrees}, if
\begin{equation*}
\betti_{k,l}(S/M)= \begin{cases}
1 \text{ if } l \in L \text{ and } k=f(l),\\
0 \text{ otherwise.}
\end{cases}
\end{equation*}
\end{enumerate}
\end{definition}
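Both the set $L$ and the function $f$ are combinatorial invariants of the Taylor complex, computable by enumerating subsets of generators. A Python sketch (our names, not the paper's; multidegrees returned as sorted tuples of (variable, exponent) pairs, with the empty tuple encoding the monomial $1$):

```python
from itertools import combinations

def lcm(mons):
    """Multidegree of the Taylor basis element indexed by mons,
    as a sorted tuple of (variable, exponent) pairs."""
    out = {}
    for mon in mons:
        for v, e in mon.items():
            out[v] = max(out.get(v, 0), e)
    return tuple(sorted(out.items()))

def odd_multidegrees(gens):
    """The set L of multidegrees carried by an odd number of Taylor basis
    elements, together with f(l), the least homological degree in which
    l occurs; returned as a dict {l: f(l)}."""
    count, f = {}, {}
    for k in range(len(gens) + 1):
        for sub in combinations(gens, k):
            l = lcm(sub)
            count[l] = count.get(l, 0) + 1
            f[l] = min(f.get(l, k), k)
    return {l: f[l] for l, c in count.items() if c % 2 == 1}
```

If $M$ has characteristic Betti numbers in minimal homological degrees, this dictionary already lists all of its nonzero multigraded Betti numbers.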
\begin{lemma} \label{lemma}
Let \[\betti_{k,l}(S/M)=\sum\limits_{(j,m)} \betti_{k-j,l/m}(S/M_m)\] be the first structural decomposition of $M$. Then, the second structural decomposition of $M$ has no purely nondominant part if and only if the second structural decomposition of each $M_m$ has no purely nondominant part.
\end{lemma}
\begin{proof}
Let
\[\betti_{k,l}(S/M_m)=\sum\limits_{D\in \mathscr{D}_m} \betti_{k-j_D,l/m_D}(S/D) + \sum\limits_{N\in \mathscr{N}_m} \betti_{k-j_N,l/m_N}(S/N),\]
be the second structural decomposition of $M_m$. Then\\
$\betti_{k,l}(S/M)= \sum\limits_{(j,m)} \betti_{k-j,l/m}(S/M_m)=$
\[\sum\limits_{(j,m)}\left( \sum\limits_{D\in \mathscr{D}_m}\betti_{k-j-j_D,l/mm_D}(S/D)+ \sum\limits_{N\in\mathscr{N}_m} \betti_{k-j-j_N,l/mm_N}(S/N) \right)\]
is the second structural decomposition of $M$.
\end{proof}
\begin{theorem}\label{char Betti numbers}
If the second structural decomposition of $M$ has no purely nondominant part, then $M$ has characteristic Betti numbers.
\end{theorem}
\begin{proof}
The proof is by induction on the cardinality of the minimal generating set $G$ of $M$.\\
If $\#G=1$ or $\#G=2$, then $M$ is dominant and, by [Al, Corollary 4.5], $M$ is Scarf. Now, Scarf ideals have characteristic Betti numbers.\\
Suppose now that the theorem holds for ideals with minimal generating sets of cardinality $\leq q-1$. \\
Let us assume that $\#G=q$. By hypothesis, the second structural decomposition of $M$ has no purely nondominant part, which implies that $M$ itself is not purely nondominant. Therefore, $G$ must be of the form $G=\{m_1,\ldots,m_s,n_1,\ldots,n_t\}$, where $m_1,\ldots,m_s$ are dominant, $n_1,\ldots,n_t$ are nondominant, $s>0$, and $s+t=q$. In particular, $t\leq q-1$. Now, the first structural decomposition of $M$ is
\[\betti_{k,l} (S/M)= \sum\limits_{(j,m)\in C} \betti_{k-j,l/m}(S/M_m),\]
where each $M_m$ is minimally generated by at most $q-1$ monomials.
Then,
\[\sum \limits_k \betti_{k,l}(S/M)=\sum\limits_k \sum\limits_{(j,m)} \betti_{k-j,l/m} (S/M_m) = \sum\limits_{(j,m)} \sum\limits_k \betti_{k-j,l/m} (S/M_m).\]
Suppose that, for some monomial $l$, $\sum\limits_k\betti_{k,l}(S/M) \neq 0$. Then, there must be a pair $(j',m') \in C$
such that $\sum\limits_k \betti_{k-j',l/m'} (S/M_{m'}) \neq 0$. By Lemma \ref{lemma}, the second structural decomposition of $M_{m'}$ has no purely nondominant part and, by induction hypothesis,
$M_{m'}$ has characteristic Betti numbers. Hence, $\sum\limits_k \betti_{k-j',l/m'} (S/M_{m'}) =1$.\\
Suppose, by means of contradiction, that there is a pair $(j,m)\in C \setminus \{(j',m')\}$ such that $\sum\limits_k \betti_{k-j,l/m} (S/M_m) \neq 0$. Then, there exist basis elements $[\sigma] \in \mathbb{T}_{M_{m'}}$ and $[\tau] \in \mathbb{T}_{M_m}$, such that $\mdeg[\sigma]=l/m'$, and $\mdeg[\tau]=l/m$. Recall that $[\sigma]$, $[\tau]$, $m'$, and $m$ are of the form $[\sigma]=[n'_{a_1},\ldots,n'_{a_c}]$; $[\tau]=[n'_{b_1},\ldots,n'_{b_d}]$; $m'=\lcm(m_{u_1},\ldots,m_{u_{j'}})$; $m=\lcm(m_{v_1},\ldots,m_{v_j})$. By Proposition \ref{1} we have that
\[l=m'l/m'=m'\mdeg[\sigma]=m'\mdeg[n'_{a_1},\ldots,n'_{a_c}]=\mdeg[m_{u_1},\ldots,m_{u_{j'}},n_{a_1},\ldots,n_{a_c}].\]
Similarly,
\[l=ml/m=m\mdeg[\tau]=m\mdeg[n'_{b_1},\ldots,n'_{b_d}]= \mdeg[m_{v_1},\ldots,m_{v_j},n_{b_1},\ldots,n_{b_d}].\]
Hence,
$[m_{u_1},\ldots,m_{u_{j'}},n_{a_1},\ldots,n_{a_c}]$ and $[m_{v_1},\ldots,m_{v_j},n_{b_1},\ldots,n_{b_d}]$ are two basis elements of $\mathbb{T}_M$ with the same multidegree $l$. By [Al, Lemma 4.3], these basis elements must contain the same dominant generators. However, since $(j',m')\neq(j,m)$, we must have that $\{m_{u_1},\ldots,m_{u_{j'}}\} \neq \{m_{v_1},\ldots,m_{v_j}\}$, a contradiction. Therefore,
$\sum\limits_k \betti_{k-j,l/m} (S/M_m)=0$, for all $(j,m)\in C \setminus\{(j',m')\}$. Thus,
\[\sum\limits_k \betti_{k,l}(S/M)= \sum\limits_{(j,m)} \sum\limits_k \betti_{k-j,l/m} (S/M_m)= \sum\limits_k \betti_{k-j',l/m'} (S/M_{m'})=1.\]
We have proven that $\sum\limits_k \betti_{k,l}(S/M)\leq 1$, for each monomial $l$.\\
Since a minimal resolution $\mathbb{F}$ of $S/M$ can be obtained from $\mathbb{T}_M$ by making a series of consecutive cancellations, and given that each consecutive cancellation involves a pair of basis elements of equal multidegree, the number of basis elements of $\mathbb{T}_M$ with a given multidegree $l$ is even if and only if the number of basis elements of $\mathbb{F}$ with multidegree $l$ is even. But the number of basis elements of $\mathbb{F}$ with multidegree $l$ is $\sum\limits_k \betti_{k,l}(S/M) \leq 1$, which proves the theorem.
\end{proof}
In Example \ref{example 2SD}, we computed the second structural decomposition of the ideal $M=(a^3b^2,c^3d,ac^2,a^2c,b^2d,abc,bcd)$, and noticed that it has no purely nondominant part. Right after, we found the numbers $\betti_{k,l}(S/M)$ and proved that, with the language of this section, $M$ has characteristic Betti numbers. This is consistent with Theorem \ref{char Betti numbers}.
\begin{lemma}\label{hom degree}
Let $\betti_{k,l}(S/M)=\sum\limits_{(j,m)} \betti_{k-j,l/m} (S/M_m)$ be the first structural decomposition of $M$. Suppose that $M$ has characteristic Betti numbers, and each $M_m$ has characteristic Betti numbers in minimal homological degrees. Then $M$ has characteristic Betti numbers in minimal homological degrees.
\end{lemma}
\begin{proof}
Let $l$ be a monomial that is the common multidegree of an odd number of basis elements of $\mathbb{T}_M$. Let $r$ be such that $\betti_{r,l}(S/M)=1$. Then, there is a pair $(j,m)\in C$ such that $\betti_{r-j,l/m}(S/M_m)=1$. It follows that there is a basis element $[n'_{a_1},\ldots,n'_{a_{r-j}}]$ of $\mathbb{T}_{M_m}$ with multidegree $l/m$.
Recall that $m$ is of the form $m=\lcm(m_{b_1},\ldots,m_{b_j})$, with $m_{b_1},\ldots,m_{b_j} \in \{m_1,\ldots,m_s\}$. By Proposition \ref{1}, $l=m \dfrac{l}{m}=m \mdeg[n'_{a_1},\ldots,n'_{a_{r-j}}]=\mdeg[m_{b_1},\ldots,m_{b_j},n_{a_1},\ldots,n_{a_{r-j}}]$. Suppose that $[\sigma]$ is a basis element of $\mathbb{T}_M$, such that $\mdeg[\sigma]=l$. We will show that $r \leq \hdeg[\sigma]$. By [Al, Lemma 4.3], $[\sigma]$ must be of the form
$[\sigma]=[m_{b_1},\ldots,m_{b_j},n_{c_1},\ldots,n_{c_d}]$. By Proposition \ref{1}, $l=\mdeg[\sigma]=m \mdeg[n'_{c_1},\ldots,n'_{c_d}]$. Thus, the basis element $[n'_{c_1},\ldots,n'_{c_d}]$ of $\mathbb{T}_{M_m}$ has multidegree $l/m$. Since $M_m$ has characteristic Betti numbers in minimal homological degrees, $\hdeg[n'_{a_1},\ldots,n'_{a_{r-j}}]\leq \hdeg[n'_{c_1},\ldots,n'_{c_d}]$. It follows that $r=\hdeg[m_{b_1},\ldots,m_{b_j},n_{a_1},\ldots,n_{a_{r-j}}] \leq \hdeg[m_{b_1},\ldots,m_{b_j},n_{c_1},\ldots,n_{c_d}] = \hdeg[\sigma]$, which proves that $M$ itself has characteristic Betti numbers in minimal homological degrees.
\end{proof}
\begin{definition}
Suppose that the polynomial ring $S$ has $n$ variables $x_1,\ldots,x_n$. We will say that $M$ is \textbf{almost generic}, if there is an index $i$ such that no variable among
$x_1,\ldots, x_{i-1}, x_{i+1}, \ldots,x_n$ appears with the same nonzero exponent in the factorization of two minimal generators of $M$.
\end{definition}
\begin{example}
$M=(a^2b^2cd^2,a^3b^3c,cd^4)$ is almost generic because no variable among $a,b,d$ appears with the same nonzero exponent in the factorization of two minimal generators of $M$.
\end{example}
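Almost genericity is a finite condition on exponents, so it can be tested directly. A Python sketch (our names, not the paper's; generators as exponent dictionaries, the variables given as an iterable):

```python
def is_almost_generic(gens, variables):
    """True when, for some distinguished variable i, no variable other
    than i appears with the same nonzero exponent in the factorization
    of two distinct generators."""
    def ok_excluding(i):
        for v in variables:
            if v == i:
                continue
            nonzero = [g.get(v, 0) for g in gens if g.get(v, 0) > 0]
            if len(nonzero) != len(set(nonzero)):
                return False  # v repeats a nonzero exponent
        return True
    return any(ok_excluding(i) for i in variables)
```

On the example above the distinguished variable is $c$; the ideal $(ab,ac,bc)$, by contrast, fails the test for every choice of distinguished variable.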
\begin{lemma}\label {Lemma 5.7}
Let $\betti_{k,l}(S/M)=\sum\limits_{(j,m)} \betti_{k-j,l/m} (S/M_m)$ be the first structural decomposition of $M$. Suppose that $M$ is almost generic. Then, each $M_m$ is almost generic.
\end{lemma}
\begin{proof}
Let $G$ be the minimal generating set of $M$. By definition, there is an index $i$ such that no variable among $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n$ appears with the same nonzero exponent in the factorization of two generators in $G$.
Let $G=\{m_1,\ldots,m_s,n_1,\ldots,n_t\}$, where $m_1,\ldots,m_s$ are dominant, and $n_1,\ldots,n_t$ are nondominant. Then, each $M_m$ in the first structural decomposition of $M$ is of the form $M_m=(n'_1,\ldots,n'_t)$. Suppose, by means of contradiction, that some $M_m$ is not almost generic. Then, there is a variable $x \neq x_i$ that appears with the same nonzero exponent $\alpha$ in the factorization of two generators $n'_a,n'_b \in \{n'_1,\ldots,n'_t\}$. Recall that $n'_a=\dfrac{\lcm(m,n_a)}{m}$, $n'_b=\dfrac{\lcm(m,n_b)}{m}$. Let $u,v,w$ be the exponents with which $x$ appears in the factorizations of $m,n_a,n_b$, respectively. Note that $x$ appears with exponents $u+\alpha$ and $\max(u,v)$ in the factorizations of $mn'_a$ and $\lcm(m,n_a)$, respectively. Since $mn'_a=\lcm(m,n_a)$, we must have that $u+\alpha=\max(u,v)$. It follows that $v=u+\alpha$. Likewise, $x$ appears with exponents $u+\alpha$ and $\max(u,w)$ in the factorizations of $mn'_b$ and $\lcm(m,n_b)$, respectively. Since $mn'_b=\lcm(m,n_b)$, we must have that $u+\alpha=\max(u,w)$. It follows that $w=u+\alpha$. Combining these identities, we deduce that $v=w$, which implies that $x$ appears with the same nonzero exponent in the factorizations of $n_a$ and $n_b$, which contradicts the fact that $M$ is almost generic.
\end{proof}
\begin{lemma} \label{almost generic lemma}
If $M$ is almost generic, its second structural decomposition has no purely nondominant part.
\end{lemma}
\begin{proof}
Let $G$ be the minimal generating set of $M$. The proof is by induction on the cardinality of $G$.\\
If $\#G=1$ or $\#G=2$, then $M$ is dominant and the lemma holds.\\
Suppose that the lemma holds whenever $\#G\leq q-1$. Let us now assume that $\#G=q$.
Since $M$ is almost generic, there is an index $i$ such that no variable among $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n$ appears with the same nonzero exponent in the factorization of two generators in $G$. Let $j \neq i$, and let $k$ be the greatest exponent with which $x_j$ appears in the factorization of a generator in $G$. Then there is a unique element in $G$, divisible by $x_j^k$, and such an element must be dominant (in $x_j$). Hence, $G$ can be represented in the form $G= \{m_1,\ldots,m_s,n_1,\ldots,n_t\}$, where $m_1,\ldots,m_s$ are dominant, $n_1,\ldots,n_t$ are nondominant, $s+t=q$, and $s \geq 1$. In particular, $t \leq q-1$. By Lemma \ref{Lemma 5.7}, each $M_m$ in the first structural decomposition of $M$ is almost generic, and since $M_m=(n'_1,\ldots,n'_t)$, $M_m$ is minimally generated by at most $q-1$ elements. Then, by induction hypothesis, the second structural decomposition of $M_m$ has no purely nondominant part. Finally, the lemma follows from Lemma \ref{lemma}.
\end{proof}
\begin{corollary}\label{Corollary 4}
If $M$ is almost generic, it has characteristic Betti numbers.
\end{corollary}
\begin{proof}
Immediate from Theorem \ref{char Betti numbers} and Lemma \ref{almost generic lemma}.
\end{proof}
\begin{theorem}\label{MinHomDeg}
If $M$ is almost generic, it has characteristic Betti numbers in minimal homological degrees.
\end{theorem}
\begin{proof}
By induction on the cardinality of the minimal generating set $G$ of $M$.\\
If $\#G=1$ or $\#G=2$, $M$ is dominant, and the theorem holds. Let us assume that the theorem holds whenever $\#G \leq q-1$.\\
Suppose now that $\#G=q$. By Lemma \ref{almost generic lemma}, $M$ is not purely nondominant. Then, $G$ can be represented in the form $G=\{m_1,\ldots,m_s,n_1,\ldots,n_t\}$, where $m_1,\ldots,m_s$ are dominant, $n_1,\ldots,n_t$ are nondominant, $s+t=q$, and $s \geq 1$. In particular, $t \leq q-1$. Let $\betti_{k,l}(S/M)= \sum\limits_{(j,m)} \betti_{k-j,l/m} (S/M_m)$ be the first structural decomposition of $M$. Since $M_m=(n'_1,\ldots,n'_t)$, $M_m$ is minimally generated by at most $q-1$ monomials. By Lemma \ref{Lemma 5.7}, $M_m$ is almost generic. By induction hypothesis, $M_m$ has characteristic Betti numbers in minimal homological degrees. Now, the result follows from Lemma \ref{hom degree}.
\end{proof}
\begin{theorem}
If $M$ is $2$-semidominant, it has characteristic Betti numbers in minimal homological degrees.
\end{theorem}
\begin{proof}
Let $G=\{m_1,\ldots,m_q,n_1,n_2\}$ be the minimal generating set of $M$, where $n_1,n_2$ are nondominant. Then, the first structural decomposition of $M$ is
\[\betti_{k,l}(S/M)= \sum\limits_{(j,m)} \betti_{k-j,l/m} (S/M_m),\]
where $M_m=(n'_1,n'_2)$.
If $n'_1=1$ or $n'_2=1$, then $M_m=S$, and $\betti_{k-j,l/m} (S/M_m)=0$. Otherwise, $(n'_1,n'_2)$ is minimally generated by either one or two monomials, which implies that
$M_m$ is dominant. Note that the first and second structural decompositions agree, and have no purely nondominant part. By Theorem \ref{char Betti numbers}, $M$ has characteristic Betti numbers. Moreover, since each $M_m$ is dominant, it has characteristic Betti numbers in minimal homological degrees. Now, the result follows from Lemma \ref{hom degree}.
\end{proof}
\section{Structural decomposition and Projective Dimension}
If the minimal generating set $G$ of $M$ has at least two monomials, and the ring $S$ has $n$ variables, there are two natural bounds for the projective dimension $\pd\left(S/M\right)$, namely, $2\leq \pd\left(S/M\right) \leq n$. In this section we discuss some cases where the lower and upper bounds are achieved.
The Hilbert-Burch theorem [Ei, Theorem 20.15] describes the structure of the minimal resolutions of ideals $M$ with $\pd(S/M)=2$. The next theorem gives sufficient conditions for the lower bound $\pd(S/M)=2$ to be achieved.
\begin{theorem}\label{pd}
Let $M$ be either $2$- or $3$-semidominant. Suppose that there exists a minimal generator of $M$ that divides the $\lcm$ of every pair of minimal generators of $M$. Then $\pd(S/M)=2$.
\end{theorem}
\begin{proof}
Let $G=\{m_1,\ldots,m_s,n_1,\ldots,n_t\}$ be the minimal generating set of $M$, where $m_1,\ldots,m_s$ are dominant, and $n_1,\ldots,n_t$ are nondominant. By hypothesis, there is an element $n$ in $G$ that divides the $\lcm$ of every pair of elements in $G$. However, since each $m_i$ is dominant, $m_i\nmid \lcm(n_1,n_2)$. It follows that $n$ must be one of $n_1,\ldots,n_t$; say $n=n_1$. Let
$\betti_{k,l}(S/M)= \sum\limits_{(j,m)\in C} \betti_{k-j,l/m}(S/M_m)$ be the first structural decomposition of $M$. Let $(j,m)\in C$. We will prove that $\betti_{k-j,l/m}(S/M_m)=0$ for all $k \geq 3$, and all monomials $l$.\\
First, let us assume that $j \geq 2$. Then there are monomials $m_{i_1},\ldots,m_{i_j} \in G$ such that $m=\lcm(m_{i_1},\ldots,m_{i_j})$, and $M_m=(n'_1,\ldots,n'_t)$, where $n'_i= \dfrac{\lcm(m,n_i)}{m}$. In particular,
$n_1\mid \lcm(m_{i_1},m_{i_2})$, and hence, $n_1 \mid m$. This implies that $\lcm(m,n_1)=m$, and thus $n'_1=\dfrac{\lcm(m,n_1)}{m}=1$. Therefore, $M_m=S$, and $\betti_{k-j,l/m}(S/M_m)=0$ for all $k \geq 3$ and all monomials $l$.\\
Now, let us assume that $j=1$. Then $m=m_{i_1}$, and by hypothesis, $n_1 \mid \lcm(m_{i_1},n_k)$, for all $k=2,\ldots,t$. Then $\lcm(m_{i_1},n_1) \mid \lcm(m_{i_1},n_k)$, for all $k=2,\ldots,t$, and therefore, $n'_1\mid n'_k$, for all $k=2,\ldots,t$. This means that $M_m=(n'_1,\ldots,n'_t)=(n'_1)$, and thus, $\pd(S/M_m)\leq 1$. It follows that $\betti_{k-j,l/m}(S/M_m)=\betti_{k-1,l/m}(S/M_m)=0$, for all $k\geq 3$, and all monomials $l$.
Finally, suppose that $j=0$. Then $m=1$, and $M_m=(n'_1,\ldots,n'_t)=(n_1,\ldots,n_t)$. Since $M$ is either $2$-semidominant or $3$-semidominant, $t=2$ or $t=3$. If $t=2$, then $M_m=(n_1,n_2)$, and thus
$\betti_{k-j,l/m}(S/M_m)=\betti_{k,l}(S/M_m)=0$, for all $k\geq 3$, and all monomials $l$. On the other hand, if $t=3$, since $n_1 \mid \lcm(n_2,n_3)$, $M_m=(n_1,n_2,n_3)$ is not dominant, and we must have
$\pd(S/M_m)\leq 2$. It follows that $\betti_{k-j,l}(S/M_m)=\betti_{k,l}(S/M_m)=0$, for all $k \geq 3$, and all monomials $l$. Therefore, for all $k \geq 3$, and all monomials $l$ the first structural decomposition of $M$ gives
$\betti_{k,l}(S/M)=\sum\limits_{(j,m)\in C}\betti_{k-j,l/m}(S/M_m)=0$, which means that $\pd(S/M)\leq 2$. However, since $\#G\geq 2$, we must have $\pd(S/M)=2$.
\end{proof}
\begin{example}
Let
\begin{align*}
m_1 & =a^3c^2d^2e^2f^2g^2 & m_2 & =a^2b^3d^2e^2f^2g^2 & m_3 & =a^2b^2c^3e^2f^2g^2\\
m_4 & =a^2b^2c^2d^3f^2g^2 & m_5 & =a^2b^2c^2de^3g^2 & m_6 & =b^2c^2d^2e^2g^3\\
n_1 & =abcdefg & n_2 & =a^2b^2c^2d^2e^2f & n_3 & = a^2b^2c^2d^2ef^2.
\end{align*}
Let $M=(m_1,\ldots,m_6)$; $M_2=(m_1,\ldots,m_6,n_1,n_2)$; $M_3=(m_1,\ldots,m_6,n_1,n_2,n_3)$. It is clear that $M$ is dominant; $M_2$ is $2$-semidominant, and $M_3$ is $3$-semidominant. Note that $n_1$ divides the $\lcm$ of every pair of monomials in $\{m_1,\ldots,m_6,n_1,n_2,n_3\}$. Since $M$ is dominant, its minimal resolution is $\mathbb{T}_M$, and hence $\pd(S/M)=6$; by Theorem \ref{pd}, $\pd(S/M_2)=\pd(S/M_3)=2$. (We see how adding a few monomials to the minimal generating set can change the projective dimension dramatically.)
\end{example}
The fact that Artinian monomial ideals have maximum projective dimension (in the sense of the Hilbert Syzygy theorem) was proven by Charalambous [Ch] (see also [Pe, Corollary 21.6]), using the radical of an ideal as the main tool. Here we give an alternative proof of this fact that relies entirely on the first structural decomposition.
\begin{theorem}\label{Artinian}
If $M$ is Artinian in $S=k[x_1,\ldots,x_n]$, then $\pd(S/M)=n$.
\end{theorem}
\begin{proof}
We proceed by induction on $n$. If $n=1$, the result is trivial. Suppose that $\pd(S/M)=n-1$ for all Artinian ideals $M$ in $n-1$ variables.\\
Let $M$ be Artinian in $S=k[x_1,\ldots,x_n]$. Then, for each $i=1,\ldots,n$, the minimal generating set $G$ of $M$ contains a monomial $x_i^{\alpha_i}$, $\alpha_i\geq 1$. Notice that each $x_i^{\alpha_i}$ is dominant.
Let $\betti_{k,l}(S/M)=\sum\limits_{(j,m)\in C} \betti_{k-j,l/m}(S/M_m)$ be the third structural decomposition of $M$, where $m_1=x_1^{\alpha_1}$, and $H=G\setminus\{ x_1^{\alpha_1} \}$. Since $x_1^{\alpha_1}$ is dominant, $(1,x_1^{\alpha_1}) \in C$. Thus, $\betti_{k-1,l/x_1^{\alpha_1}}(S/M_{x_1^{\alpha_1}})$ is one of the terms on the right side of the third structural decomposition.\\
Let $G=\{x_1^{\alpha_1},\ldots,x_n^{\alpha_n}, l_1,\ldots,l_q\}$. By construction,
\begin{align*}
M_{x_1^{\alpha_1}} &=\left(\dfrac{\lcm(x_1^{\alpha_1},x_2^{\alpha_2})}{x_1^{\alpha_1}},\ldots,\dfrac{\lcm(x_1^{\alpha_1},x_n^{\alpha_n})}{x_1^{\alpha_1}},\dfrac{\lcm(x_1^{\alpha_1},l_1)}{x_1^{\alpha_1}},\ldots,\dfrac{\lcm(x_1^{\alpha_1},l_q)}{x_1^{\alpha_1}}\right)\\
& =(x_2^{\alpha_2},\ldots,x_n^{\alpha_n},l'_1,\ldots,l'_q).
\end{align*}
Since $x_1^{\alpha_1}$ is dominant, $x_1$ does not appear in the factorization of $l'_i$. Thus, $M_{x_1^{\alpha_1}}$ is an Artinian ideal in $n-1$ variables. By the induction hypothesis, $\pd(S/M_{x_1^{\alpha_1}})=n-1$. Therefore, there is a monomial $l$ such that $\betti_{n-1,l}(S/M_{x_1^{\alpha_1}}) \neq 0$. Let $l'=l x_1^{\alpha_1}$. Then, $\betti_{n-1,l'/x_1^{\alpha_1}}(S/M_{x_1^{\alpha_1}}) \neq 0$. Finally,
\[\betti_{n,l'}(S/M)=\sum\limits_{(j,m)\in C\setminus\{(1,x_1^{\alpha_1})\}} \betti_{n-j,l'/m}(S/M_m) + \betti_{n-1,l'/x_1^{\alpha_1}}(S/M_{x_1^{\alpha_1}}) \neq 0,\]
which implies that $\pd(S/M)=n$.
\end{proof}
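The quotient computation at the heart of the proof is easy to check by machine. The following sketch (illustrative Python; the exponent-tuple encoding of monomials is introduced here only for this check and is not part of the formal development) verifies that dividing $\lcm(x_1^{\alpha_1},g)$ by a dominant pure power $x_1^{\alpha_1}$ produces generators in which $x_1$ no longer appears:

```python
def lcm_quotient(m, g):
    # lcm(m, g) / m on exponent vectors: componentwise max, then subtract m
    return tuple(max(mi, gi) - mi for mi, gi in zip(m, g))

# Artinian ideal in k[x1, x2, x3] with generators x1^3, x2^2, x3^2, x1^2*x2*x3
m1 = (3, 0, 0)                       # the dominant pure power x1^3
others = [(0, 2, 0), (0, 0, 2), (2, 1, 1)]

quotients = [lcm_quotient(m1, g) for g in others]
# -> [(0, 2, 0), (0, 0, 2), (0, 1, 1)]
assert quotients == [(0, 2, 0), (0, 0, 2), (0, 1, 1)]
# x1^3 is dominant, so x1 disappears from every quotient:
assert all(q[0] == 0 for q in quotients)
```

Since $x_1^{\alpha_1}$ has the strictly largest $x_1$-exponent among the generators, the first coordinate of every quotient vanishes; this is exactly what drives the induction step.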
\begin{theorem} \label{Gasharov}
Let $M$ be Artinian in $S=k[x_1,\ldots,x_n]$. Let $\mathbb{F}_M$ be a minimal resolution of $S/M$, obtained from $\mathbb{T}_M$ by means of consecutive cancellations. Then there is a basis element $[\sigma]$ of $\mathbb{F}_M$, such that $\hdeg[\sigma]=n$, and $\mdeg[\sigma]$ is divisible by each variable $x_1,\ldots,x_n$.
\end{theorem}
\begin{proof}
By Theorem \ref{Artinian}, there is a basis element $[\sigma]$ of $\mathbb{F}_M$ such that $\hdeg[\sigma]=n$. \\
By means of contradiction, suppose that the set $\{x_{j_1},\ldots,x_{j_i}\}$ of all variables dividing $\mdeg[\sigma]$ is a proper subset of $\{x_1,\ldots,x_n\}$; that is $i \leq n-1$.\\
Let $m=\mdeg[\sigma]$; let $G$ be the minimal generating set of $M$, and let $M_m$ be the ideal generated by $\{l \in G: l \mid m\}$. By [GHP, Theorem 2.1], there is a subcomplex $\left(\mathbb{F}_M\right)_{\leq m}$ of $\mathbb{F}_M$, such that $[\sigma]$ is a basis element of $\left(\mathbb{F}_M\right)_{\leq m}$, and $\left(\mathbb{F}_M\right)_{\leq m}$ is a minimal resolution of $S/M_m$. Therefore, $\pd (S/M_m) \geq \hdeg[\sigma]=n$. However, since $M_m$ is a monomial ideal in $k[x_{j_1},\ldots,x_{j_i}]$, it follows from Hilbert Syzygy theorem that $\pd(S/M_m)\leq i \leq n-1$, a contradiction.\\
We conclude that $\mdeg[\sigma]$ is divisible by $x_1,\ldots,x_n$.
\end{proof}
Now we are ready to prove Charalambous' theorem.
\begin{theorem} \label{combinatorics}
Let $M$ be Artinian in $S=k[x_1,\ldots,x_n]$. Then, for all $i=0,\ldots,n$, $\betti_i(S/M) \geq {n \choose i }$.
\end{theorem}
\begin{proof}
Let $0 \leq i \leq n$. Let $\mathbb{X}_i$ be the class of all subsets of $\{x_1,\ldots,x_n\}$ of cardinality $i$. Then $\#\mathbb{X}_i= {n\choose i}$. Let $\{x_{j_1},\ldots,x_{j_i}\} \in \mathbb{X}_i$. Let $G$ be the minimal generating set of $M$, let $m$ be the $\lcm$ of the monomials of $G$ that involve only the variables $x_{j_1},\ldots,x_{j_i}$, let $G_{j_1,\ldots,j_i}=\{l \in G: l \mid m\}$, and let $M_m$ be the monomial ideal generated by $G_{j_1,\ldots,j_i}$. Since $M$ is Artinian in $k[x_1,\ldots,x_n]$, $G$ contains generators of the form $x_{j_1}^{\alpha_{j_1}},\ldots,x_{j_i}^{\alpha_{j_i}}$, and hence, $M_m$ is Artinian in $k[x_{j_1},\ldots,x_{j_i}]$. By Theorem \ref{Gasharov}, there is a basis element $[\sigma_{j_1,\ldots,j_i}]$ of a minimal resolution $\left(\mathbb{F}_M\right)_{\leq m}$ of $S/M_m$, such that $\hdeg[\sigma_{j_1,\ldots,j_i}]=i$, and $\mdeg[\sigma_{j_1,\ldots,j_i}]$ is divisible by $x_{j_1},\ldots,x_{j_i}$. Since $M_m$ is an ideal in $k[x_{j_1},\ldots,x_{j_i}]$, $\mdeg[\sigma_{j_1,\ldots,j_i}]$ is not divisible by any variable of $\{x_1,\ldots,x_n\} \setminus \{x_{j_1},\ldots,x_{j_i}\}$. By [GHP, Theorem 2.1], $\left(\mathbb{F}_M\right)_{\leq m}$ can be regarded as a subcomplex of a minimal resolution $\mathbb{F}_M$ of $S/M$. Therefore, $[\sigma_{j_1,\ldots,j_i}]$ is a basis element of $\mathbb{F}_M$.\\
Notice that if $\{x_{j_1},\ldots,x_{j_i}\}$ and $\{x_{k_1},\ldots,x_{k_i}\}$ are different elements of $\mathbb{X}_i$, the basis elements $[\sigma_{j_1,\ldots,j_i}]$ and $[\sigma_{k_1,\ldots,k_i}]$ determined by these sets must be different too. In fact, the sets of variables dividing $\mdeg[\sigma_{j_1,\ldots,j_i}]$ and $\mdeg[\sigma_{k_1,\ldots,k_i}]$ are $\{x_{j_1},\ldots,x_{j_i}\}$ and $\{x_{k_1},\ldots,x_{k_i}\}$, respectively. It follows that $\betti_i(S/M) \geq \# \mathbb{X}_i = {n \choose i }$.
\end{proof}
\begin{theorem}\label{pd=n}
Let $M$ be an ideal in $S=k[x_1,\ldots,x_n]$, with minimal generating set $G$. Suppose that $G$ contains a subset of the form
\[G'=\{x_1^{\alpha_i}x_i^{\beta_i}: \alpha_i \geq 0, \text{ }\beta_i \geq1, \text{ for all }i=1,\ldots,n\}.\]
Then $\pd(S/M)=n$.
\end{theorem}
\begin{proof}
Let $G=G' \cup \{l_1,\ldots,l_q\}$. Let $\betti_{k,l}(S/M)=\sum\limits_{(j,m)\in C} \betti_{k-j,l/m} (S/M_m)$ be the first structural decomposition of $M$. Let $\gamma=\alpha_1+\beta_1$. Then $x_1^\gamma=x_1^{\alpha_1}x_1^{\beta_1}$ is a minimal generator in $G$. Notice that the exponent $\gamma$ with which $x_1$ appears in the factorization of $x_1^\gamma$ must be larger than the exponent with which $x_1$ appears in the factorization of any other minimal generator $l$ in $G$; otherwise, $l$ would be a multiple of $x_1^\gamma$. Hence, $x_1^\gamma$ is dominant in $G$, which implies that $(1,x_1^{\gamma}) \in C$. Thus, $\betti_{k-1,l/x_1^{\gamma}}(S/M_{x_1^{\gamma}})$ is one of the terms on the right side of the first structural decomposition. Now,
\begin{align*}
M_{x_1^{\gamma}} &=\left(\dfrac{\lcm(x_1^{\gamma},x_1^{\alpha_2}x_2^{\beta_2})}{x_1^{\gamma}}, \ldots, \dfrac{\lcm(x_1^{\gamma},x_1^{\alpha_n}x_n^{\beta_n})}{x_1^{\gamma}}, \dfrac{\lcm(x_1^{\gamma},l_1)}{x_1^{\gamma}}, \ldots, \dfrac{\lcm(x_1^{\gamma},l_q)}{x_1^{\gamma}}\right)\\
& =(x_2^{\beta_2},\ldots, x_n^{\beta_n},l'_1,\ldots,l'_q),
\end{align*}
where $l'_i=\dfrac{\lcm(x_1^{\gamma},l_i)}{x_1^{\gamma}}$.\\
Since $x_1^{\gamma}$ is dominant, $x_1$ does not appear in the factorization of $l'_i$. Thus, $M_{x_1^{\gamma}}$ is an Artinian monomial ideal in $k[x_2,\ldots,x_n]$. It follows that $\pd(S/M_{x_1^{\gamma}})=n-1$. Therefore, there is a monomial $l$ such that $\betti_{n-1,l}(S/M_{x_1^{\gamma}}) \neq 0$. Let $l'=l x_1^{\gamma}$. Then, $\betti_{n-1,l'/x_1^{\gamma}}(S/M_{x_1^{\gamma}}) \neq 0$. Finally,
\[\betti_{n,l'}(S/M)= \sum\limits_{(j,m)\in C \setminus \{(1,x_1^{\gamma})\}} \betti_{n-j,l'/m} (S/M_m) + \betti_{n-1,l'/x_1^{\gamma}}(S/M_{x_1^{\gamma}}) \neq 0,\]
which implies that $\pd(S/M)=n$.
\end{proof}
\begin{example}
Let $M=(x_1^3,x_1x_2,x_1x_3,x_1x_4,x_1x_5,x_2x_4,x_3x_5)$. Then, the subset $G'=\{x_1^3,x_1x_2,x_1x_3,x_1x_4,x_1x_5\}$ of the minimal generating set of $M$, satisfies the hypotheses of Theorem \ref{pd=n}. Hence, $\pd(S/M)=5$.
\end{example}
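The hypothesis of Theorem \ref{pd=n} can be tested mechanically. As an illustration (Python; the exponent-tuple encoding of generators is an assumption of this sketch, not part of the text), one can confirm that the example above satisfies it:

```python
# Generators of M = (x1^3, x1x2, x1x3, x1x4, x1x5, x2x4, x3x5) as exponent tuples
gens = [(3, 0, 0, 0, 0), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0), (1, 0, 0, 1, 0),
        (1, 0, 0, 0, 1), (0, 1, 0, 1, 0), (0, 0, 1, 0, 1)]

def has_shape(g, i):
    # g = x1^alpha * x_{i+1}^beta with alpha >= 0, beta >= 1 (variables are 0-indexed)
    return g[i] >= 1 and all(e == 0 for j, e in enumerate(g) if j not in (0, i))

# For every variable x1, ..., x5 some generator has the required shape:
assert all(any(has_shape(g, i) for g in gens) for i in range(5))
# the two extra generators x2x4 and x3x5 do not (and need not) have it:
assert not has_shape((0, 1, 0, 1, 0), 3)
```

By Theorem \ref{pd=n}, this check already certifies $\pd(S/M)=5$, without computing a resolution.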
The hypothesis of Theorem \ref{pd=n} is satisfied more often than it may seem. For instance, Artinian ideals satisfy it, and so does the smallest Borel ideal containing a given monomial. A particular case of Theorem \ref{pd=n} is proven in [Ra].
\section{Conclusion}
We close this article with some remarks, questions, and conjectures.
The structural decomposition is one of the very few techniques that allow us to compute Betti numbers by hand, not for arbitrary monomial ideals, but for a wide class of them. As a matter of fact, the ideal $M$ in Example \ref{example 2SD} is minimally generated by $7$ monomials and even so we were able to compute the numbers $\betti_{k,l}(S/M)$. (Starting with $\mathbb{T}_M$, we could also calculate the $\betti_{k,l}(S/M)$ by means of consecutive cancellations. But since the basis of $\mathbb{T}_M$ contains $\sum\limits_{i=0}^{7} {7 \choose i }=128$ elements, and the basis of the minimal resolution of $S/M$ contains $20$ elements, we would have to make $\dfrac{128-20}{2}=54$ consecutive cancellations, which obviously requires the use of software.)
On a different note, the structural decomposition of an ideal $M$ generates a finite family $\{M_m\}$ of ideals that usually has these two properties: a) the minimal generating set of each $M_m$ has smaller cardinality than the minimal generating set of $M$; b) if $M$ is an ideal in $S=k[x_1,\ldots,x_n]$, then $M_m$ is an ideal in a polynomial ring with fewer than $n$ variables. As a consequence, the structural decomposition works well when one wants to prove facts by induction. Theorems \ref{char Betti numbers} and \ref{MinHomDeg}, for instance, are proven by induction on the cardinality of the minimal generating set. On the other hand, Theorem \ref{Artinian} is proven by induction on the number of variables.
One last comment. Theorem \ref{char Betti numbers} does not depict the entire family of ideals with characteristic Betti numbers. In fact, $M=(a^2bc,b^2c^2,a^2b^2,abc^2)$ is purely nondominant and has characteristic Betti numbers. What other ideals have characteristic Betti numbers? The next two conjectures suggest some possibilities.
\begin{conjecture}
Suppose that for some ideal $M$, there are indices $i< j$ such that no variable among $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_{j-1},x_{j+1},\ldots,x_n$ appears with the same nonzero exponent in the factorization of two minimal generators. Then $M$ has characteristic Betti numbers.
\end{conjecture}
\begin{conjecture}
Let $M$ be minimally generated by $\{m_1,\ldots,m_q,n_1,\ldots,n_p\}$, where the $m_i$ are dominant and the $n_i$ are nondominant. If the ideal $M'=(n_1,\ldots,n_p)$ is almost generic, then $M$ has characteristic Betti numbers in minimal homological degrees.
\end{conjecture}
Finally, we conjecture that Theorem \ref{pd} admits a simple generalization.
\begin{conjecture}
If $M$ is minimally generated by more than one monomial, and there is a minimal generator that divides the $\lcm$ of every pair of generators, then $\pd(S/M)=2$.
\end{conjecture}
\bigskip
\noindent \textbf{Acknowledgements}: I am deeply indebted to my wife Danisa for her support, from beginning to end of this project. Selflessly, she spent many hours typing many versions of this paper. With her common sense and her uncommon wisdom, she helped me to select the contents and organize the material. Time and time again, she turned my frustration into motivation. She is a true gift from God.
\section{Introduction}
\label{sec:introduction}
Overconstrained linkages are a long-standing but still highly
active topic of research in mechanism science. For several decades, researchers
focused on overconstrained mechanisms consisting of a single loop of $n \le 6$
revolute joints (R), prismatic joints (P), or, sometimes, helical joints (H).
New linkages of that type are continuously being discovered, often by craftily
combining known linkages \cite{wohlhart91, chen07, baker05}, sometimes via novel concepts
for their construction. One of these concepts is the
factorization of motion polynomials \cite{hegedus13}. It gave rise to the
construction of the only class of overconstrained 6R linkages with still unknown
relations between its Denavit-Hartenberg parameters. In
\cite{hegedus15,gallet17,li18,li20,liu21,liu21b}, motion polynomial
factorization was exploited for the synthesis of linkages.
In spite of some attempts, a complete classification of overconstrained
single-loop linkages is currently out of reach. It is thus natural that research
efforts shifted towards the investigation of single-loop linkages consisting of
$n \ge 7$ links with, generically, $n - 6 \ge 1$ degrees of freedom. (The
classification of single-loop linkages with $n \ge 7$ links and more than $n -
6$ degrees of freedom has recently been completed in \cite{guerreiro21}.) A
guiding principle for their construction is the existence of ``interesting''
properties of the mechanism's motion or its configuration variety. One example
is \cite{kong15}, where 7R linkages whose configuration variety contains
irreducible components of different dimensions -- a property that has been named
\emph{kinematotropic} in \cite{wohlhart96} -- are constructed. \cite{pfurner14}
combines mobile 4R linkages (Bennett linkages) or RPRP linkages to loops of 7R/P
joints whose configuration variety is reducible. The motion of the original 4R/P
linkages is obtained by locking of joints in certain configurations.
Analogously, \cite{pfurner18} restricts a specially designed single-loop 8R
mechanism to its possible sub-motions. \cite{liu21} and \cite{liu21b} pursue
similar aims but use motion polynomial factorization techniques. Joint locking
is also used in \cite{kong16} for restricting a mechanism to a certain
subvariety of its total configuration space, although for a class of parallel
mechanisms.
Our contribution in this article is in a similar spirit to the works cited above
but also differs in several aspects. We present an 8R linkage with two degrees
of freedom that has the remarkable property that \emph{it retains one degree of
freedom when locking every second joint in any configuration} of a
two-dimensional subvariety of its total configuration space. This property
immediately implies that the quadruples of ``even'' or ``odd'' axes form
respective Bennett linkages in any configuration. We therefore refer to this
mechanism by the name ``multi-Bennett 8R mechanism''.
While combination of Bennett linkages is a common technique in this area
\cite{goldberg43,waldron68,baker93,pfurner14,kong15}, our example seems to be
novel. It is not geometrically motivated -- at least in the current state of our
understanding -- but rather based on an algebraic construction. As suggested by
examples in \cite{lercher21}, there exist \emph{bivariate motion polynomials}
that admit, in a non-trivial way, two factorizations into products of linear
\emph{univariate} factors with alternating indeterminates. These two times four
factors give rise to the revolute axes of the 8R mechanism and describe their
relative motions. The underlying bivariate factorization theory is currently
being explored \cite{lercher22:_bi-degree,lercher21} and is considerably harder
than in the univariate case. This is witnessed by our proof of existence in
Theorem~\ref{thm:primalpart}.
In spite of its algebraic construction, the 8R linkage is subject to severe
geometric constraints. We demonstrate this by computing simple necessary
relations between its Denavit-Hartenberg parameters in
Theorem~\ref{th:DH-relations}. In Theorems~\ref{th:folded} and
\ref{th:folded-Bennett} we describe remarkable properties of several discrete
configurations.
We feel that its numerous special properties (simple Denavit-Hartenberg
parameters, two degrees of freedom that are easy to control via low-degree
rational parametrization, existence of special configurations) make our
mechanism a promising candidate for yet to be explored applications.
We continue this text by recalling some basic facts about motion polynomials and
their relation to mechanism science in Section~\ref{sec:preliminaries}. In
Section~\ref{sec:bivariate} we provide a proof for existence of
quaternion polynomials with two non-trivial univariate factorizations. The proof
is constructive and provides a good method to directly compute the underlying 8R
linkage. Nonetheless, we found the procedure insufficient for obtaining results
that are suitable for further processing and in particular for the computation
of Denavit-Hartenberg parameters. Thus, our further investigation of the
multi-Bennett 8R mechanism in Section~\ref{sec:8R-Mechanism} is based on
carefully selected coordinate frames and configurations. This simplification
results in formulas that are tractable by means of computer algebra and,
ultimately, provides the desired necessary relations among the
Denavit-Hartenberg parameters (Theorem~\ref{th:DH-relations}).
This paper is a continuation of \cite{lercher22:_8R-mechanism}, a conference
paper which verifies most of the claims made in this article at hand of a
concrete numeric example. Strict mathematical proofs of the claimed facts are
presented herein for the first time.
\section{Preliminaries}
\label{sec:preliminaries}
Our construction of the multi-Bennett mechanism is based on certain
factorizations of bivariate quaternion polynomials. In this section we provide a
brief introduction to some fundamental concepts that will be used later in this
text and we settle our notation.
Denote by $\H$ the four-dimensional associative real algebra of quaternions. It
is generated by basis elements $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ via the relations
\begin{equation*}
\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i}\mathbf{j}\mathbf{k} = -1.
\end{equation*}
A quaternion $q \in \H$ can be written as $q = q_0 + q_1\mathbf{i} + q_2\mathbf{j} + q_3\mathbf{k}$
with real numbers $q_0$, $q_1$, $q_2$, $q_3$. Extending real scalars to dual
numbers $a + b \varepsilon$ with $a$, $b \in \mathbb{R}$ and $\varepsilon^2 = 0$ yields the algebra
$\DH$ of dual quaternions
\begin{equation*}
\DH = \{h = p + \varepsilon q \mid p, q \in \H \}.
\end{equation*}
The conjugate quaternion or conjugate dual quaternion $\cj{h}$ is obtained by
replacing $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ with $-\mathbf{i}$, $-\mathbf{j}$, and $-\mathbf{k}$, respectively,
the $\varepsilon$-conjugate $\ej{h}$ of a dual quaternion is obtained by replacing
$\varepsilon$ with~$-\varepsilon$.
Given $h = h_0 + h_1 \mathbf{i} + h_2\mathbf{j} + h_3\mathbf{k} + \varepsilon(h_4 + h_5\mathbf{i} + h_6\mathbf{j} +
h_7\mathbf{k})$, the value $\Scal(h) \coloneqq \frac{1}{2}(h + \cj{h}) =
h_0 + \varepsilon h_4$ is called the \emph{scalar part} and $\Vect(h)
\coloneqq \frac{1}{2}(h - \cj{h}) = h_1\mathbf{i} + h_2\mathbf{j} + h_3\mathbf{k} + \varepsilon(h_5\mathbf{i} +
h_6\mathbf{j} + h_7\mathbf{k})$ the \emph{vector part} of~$h$.
The quaternion or dual quaternion norm is $h\cj{h}$. For generic $h = p + \varepsilon q
= h_0 + h_1 \mathbf{i} + h_2\mathbf{j} + h_3\mathbf{k} + \varepsilon(h_4 + h_5\mathbf{i} + h_6\mathbf{j} + h_7\mathbf{k}) \in
\DH$, it is the dual number
\begin{equation}
\label{eq:1}
h\cj{h} = p\cj{p} + \varepsilon(p\cj{q} + q\cj{p}) = h_0^2 + h_1^2 + h_2^2 + h_3^2 + 2\varepsilon(h_0h_4 + h_1h_5 + h_2h_6 + h_3h_7) \in \mathbb{D}.
\end{equation}
Dual quaternions satisfying $h\cj{h} = 1$ are said to be \emph{normalized} or
\emph{unit.} In this case, the dual part in \eqref{eq:1} vanishes, that is
\begin{equation}
\label{eq:2}
h_0h_4 + h_1h_5 + h_2h_6 + h_3h_7 = 0.
\end{equation}
This is well-known under the name \emph{Study condition.}
The quaternion $p$ and the dual quaternion $h = p + \varepsilon q$ are
invertible if and only if $p \neq 0$. In this case, we have
\begin{equation*}
p^{-1} = \frac{\cj{p}}{p\cj{p}},\quad
h^{-1} = p^{-1} (1 - \varepsilon qp^{-1}).
\end{equation*}
If $h$ is unit, then $h^{-1} = \cj{h}$.
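Both the norm formula \eqref{eq:1} and the inversion formula above can be verified numerically. The sketch below (illustrative Python with exact rational arithmetic; the tuple encoding of dual quaternions is our own) checks them on a sample element:

```python
from fractions import Fraction as F

def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(a):
    return (a[0], -a[1], -a[2], -a[3])

def dqmul(h, g):
    # (p + eps q)(r + eps s) = pr + eps(ps + qr)
    (p, q), (r, s) = h, g
    return (qmul(p, r), tuple(x + y for x, y in zip(qmul(p, s), qmul(q, r))))

def dqconj(h):
    return (qconj(h[0]), qconj(h[1]))

h = ((F(1), F(2), F(3), F(4)), (F(5), F(6), F(7), F(8)))   # h0..h3, h4..h7
p, q = h

# norm: h0^2 + ... + h3^2 plus 2 eps (h0h4 + h1h5 + h2h6 + h3h7), cf. (1)
norm = dqmul(h, dqconj(h))
assert norm == ((30, 0, 0, 0), (140, 0, 0, 0))

# inverse: h^{-1} = p^{-1}(1 - eps q p^{-1})
n = sum(x * x for x in p)
pinv = tuple(x / n for x in qconj(p))
one, zero = (F(1), F(0), F(0), F(0)), (F(0),) * 4
hinv = dqmul((pinv, zero), (one, tuple(-x for x in qmul(q, pinv))))
assert dqmul(h, hinv) == (one, zero)
```

Exact rationals (`Fraction`) avoid floating-point noise, so both identities are confirmed exactly.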
The multiplicative sub-group $\DH^\times \coloneqq \{h \in \DH \mid h\cj{h} \in
\mathbb{R} \setminus \{0\}\}$ modulo the real multiplicative group $\mathbb{R}^\times$ is
isomorphic to $\SE$, the group of rigid body displacements. Using homogeneous
coordinates in the projective space $\P^3(\mathbb{R}) = \P(\langle 1, \varepsilon\mathbf{i}, \varepsilon\mathbf{j},
\varepsilon\mathbf{k} \rangle)$, the action of $h \in \DH^\times$ on $x = x_0 + \varepsilon(x_1\mathbf{i} +
x_2\mathbf{j} + x_3\mathbf{k}) \in \P^3(\mathbb{R})$ is given by
\begin{equation}
\label{eq:3}
x \mapsto y \coloneqq \ej{h} x \cj{h}.
\end{equation}
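The action \eqref{eq:3} can be made concrete with a small numerical experiment. In the sketch below (illustrative Python; the encodings and the sample displacements are assumptions of this sketch), a rotation about the $z$-axis, represented by $\cos\frac{\varphi}{2}+\sin\frac{\varphi}{2}\mathbf{k}$, and a translation are applied to points:

```python
from math import cos, sin, pi, isclose

def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(a):
    return (a[0], -a[1], -a[2], -a[3])

def dqmul(h, g):
    (p, q), (r, s) = h, g
    return (qmul(p, r), tuple(x + y for x, y in zip(qmul(p, s), qmul(q, r))))

def dqconj(h):
    return (qconj(h[0]), qconj(h[1]))

def eps_conj(h):
    # eps-conjugation negates the dual part
    return (h[0], tuple(-x for x in h[1]))

def act(h, pt):
    # action (3): x = x0 + eps(x1 i + x2 j + x3 k)  |->  eps_conj(h) x conj(h)
    x = ((1.0, 0.0, 0.0, 0.0), (0.0,) + pt)
    p, q = dqmul(dqmul(eps_conj(h), x), dqconj(h))
    return tuple(c / p[0] for c in q[1:])     # back to Cartesian coordinates

# rotation by phi = pi/2 about the z-axis: h = cos(phi/2) + sin(phi/2) k
c, s = cos(pi / 4), sin(pi / 4)
rot = ((c, 0.0, 0.0, s), (0.0, 0.0, 0.0, 0.0))
assert all(isclose(u, v, abs_tol=1e-12)
           for u, v in zip(act(rot, (1.0, 0.0, 0.0)), (0.0, 1.0, 0.0)))

# with the sign conventions of (3), h = 1 - (eps/2)(3i) translates by (3, 0, 0)
tra = ((1.0, 0.0, 0.0, 0.0), (0.0, -1.5, 0.0, 0.0))
assert all(isclose(u, v, abs_tol=1e-12)
           for u, v in zip(act(tra, (0.0, 2.0, 0.0)), (3.0, 2.0, 0.0)))
```

The rotation maps $(1,0,0)$ to $(0,1,0)$, and the dual part $-\tfrac{3}{2}\varepsilon\mathbf{i}$ yields the translation by $(3,0,0)$ under these conventions.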
\subsection{Dual Quaternions and Line Geometry}
\label{sec:line-geometry}
In this paper, rotations around a fixed axis but with variable rotation angle
will play an important role. We therefore have a closer look at the
representation of straight lines (revolute axes) and rotations within the
framework of dual quaternions. Identifying the oriented revolute axis, given by
its normalized Plücker coordinates $(p_1,p_2,p_3,q_1,q_2,q_3)$ in the sense of
\cite[Section~2]{pottmann10}, with the unit dual quaternion $r = p_1\mathbf{i} + p_2\mathbf{j}
+ p_3\mathbf{k} + \varepsilon(q_1\mathbf{i} + q_2\mathbf{j} + q_3\mathbf{k})$, the rotation with angle $\varphi$
around $r$ is given by the unit dual quaternion
\begin{equation}
\label{eq:4}
h \coloneqq \cos(\tfrac{\varphi}{2}) + \sin(\tfrac{\varphi}{2})r,\quad \varphi \in [0, 2\pi)
\end{equation}
or, because we use homogeneous coordinates, by any of its non-zero real
multiples. Note that the dual quaternion $h$ of \eqref{eq:4} satisfies the Study
condition \eqref{eq:2} because $r$ satisfies the Plücker condition $r\cj{r} \in
\mathbb{R} \setminus \{0\}$.
The action \eqref{eq:3} on points can be used to transform straight lines by
transforming points on them. Points on a straight line given by its Plücker
coordinates can be found, for example, by \cite[Equation~(2.4)]{pottmann10}. A
straightforward calculation also provides us with a direct formula for
displacing a straight line $r$ whose Plücker coordinates are given as vectorial
dual quaternions:
\begin{equation}
\label{eq:5}
r \mapsto q \coloneqq \ej{(h r \cj{h})}.
\end{equation}
\subsection{Dual Quaternion Polynomials}
\label{sec:polynomials}
The representation \eqref{eq:4} of a rotation around an oriented general axis is
only unique up to multiplication with a real scalar. Assuming, for the time
being, $\varphi \neq 0$, we can divide \eqref{eq:4} by $\sin\frac{\varphi}{2}$,
substitute $\cot\frac{\varphi}{2}$ with $-t$ and multiply the result with $-1$
to see that the linear dual quaternion polynomial
\begin{equation}
\label{eq:6}
h = t - r,\quad t \in \mathbb{R}
\end{equation}
parametrizes all rotations with non-vanishing rotation angle around $r$ as well.
In order to also account for $\varphi = 0$, we should extend the parameter range
in \eqref{eq:6} to $\mathbb{R} \cup \{\infty\}$. With the natural understanding that
$h(\infty) \coloneqq \lim_{t \to \infty}\frac{1}{t}h(t) = 1$, the parameter
value $t = \infty$ indeed corresponds to the rotation angle $\varphi = 0$, that is,
the identity transformation.
More generally, we can consider arbitrary polynomials $C = \sum_{i=0}^d c_it^i$
with coefficients $c_i \in \DH$. Since the indeterminate $t$ typically serves as
a \emph{real} parameter in our context, multiplication, conjugation and
evaluation at real values of polynomials are defined by the conventions that $t$
commutes with all coefficients and $\cj{t} = t$. The thus obtained ring of
polynomials is denoted by $\DH[t]$. Similarly, we can also consider the ring of
bivariate dual quaternion polynomials $\DH[s,t]$ in $t$ and $s$. Its
multiplication, conjugation and evaluation at real values is defined by similar
conventions and the assumption that $t$ and $s$ commute with all coefficients
and with each other.
The linear polynomial $t-r$ from Equation \eqref{eq:6} satisfies $\No{(t-r)} \in
\mathbb{R}[t]$ and also $\No{(t-r)}\neq 0$ (note that $r$ satisfies the Plücker
condition). A generalization of this property leads to
\begin{definition}
\label{def:motion-polynomial}
A polynomial $C \in \DH[t]$ or in $\DH[s,t]$ is called a \emph{motion
polynomial} if $C\cj{C} \in \mathbb{R}[t]$ or in $\mathbb{R}[s,t]$, respectively, and
$C\cj{C} \neq 0$.
\end{definition}
The name ``motion polynomial'' is justified by the observation that the action
\eqref{eq:3} on points allows the parametric version
\begin{equation}
\label{eq:7}
x \mapsto y(s,t) = \ej{C}(s,t)x\cj{C}(s,t),\quad (s,t) \in (\mathbb{R} \cup \{ \infty \}) \times (\mathbb{R} \cup \{ \infty \}).
\end{equation}
Equation~\eqref{eq:7} is a polynomial map in homogeneous coordinates. The
Cartesian coordinates of $y(s,t)$ are rational functions so that \eqref{eq:7}
describes a rigid body motion with rational surfaces as trajectories.
Univariate motion polynomials have been originally defined in \cite{hegedus13}.
There, it was implicitly assumed that motion polynomials are monic. We rather
replace this assumption by the condition $C\cj{C} \neq 0$ which, together with a
proper evaluation at $s = \infty$ or $t = \infty$ and the possibility of
rational re-parametrizations, suffices for our purpose.
\begin{definition}
\label{def:infinity}
The value of the motion polynomial $C \in \DH[t]$ at $t = \infty$ is defined
as $C(\infty) \coloneqq \lim_{t \to \infty}t^{-\deg C}C(t)$. It is the leading
coefficient of~$C$. The value of $C \in \DH[s,t]$ at $(s,t)$ where $s =
\infty$ or (not exclusively) $t = \infty$ is defined by similar limits. It is
the leading coefficient in $s$ or $t$ (or in both), respectively.
\end{definition}
Re-parametrizations that preserve polynomiality and degree of univariate motion
polynomials are maps of the form
\begin{equation}
\label{eq:8}
\tau \mapsto t = \frac{a\tau+b}{c\tau+d},\quad
a,b,c,d \in \mathbb{R},\quad
ad - bc \neq 0,
\end{equation}
combined with multiplying away denominators. With $C = \sum_{i=0}^d c_i t^i$ we have
\begin{equation*}
C(\tau) = \sum_{i=0}^d c_i (a\tau+b)^i(c\tau+d)^{d-i}.
\end{equation*}
It is noteworthy that \eqref{eq:8} naturally is a map from $\mathbb{R} \cup \{\infty\}$
to $\mathbb{R} \cup \{\infty\}$. Assuming $c \neq 0$ we have
\begin{equation*}
\infty \mapsto \frac{a}{c}
\quad\text{and}\quad
-\frac{d}{c} \mapsto \infty.
\end{equation*}
If $c = 0$, then $\infty$ is a fixed point of \eqref{eq:8}. Re-parametrizations of
type \eqref{eq:8} do not change the property of being a motion polynomial.
An extension of \eqref{eq:8} to bivariate polynomials is straightforward. Let us
illustrate some definitions and concepts so far for linear motion polynomials,
which constitute an important special example.
\begin{example}
\label{ex:linear-motion-polynomial}
The linear polynomial $C = t - h$ with $h = h_0 + h_1\mathbf{i} + h_2\mathbf{j} + h_3\mathbf{k} +
\varepsilon(h_4 + h_5\mathbf{i} + h_6\mathbf{j} + h_7\mathbf{k})$ is a motion polynomial by
Definition~\ref{def:motion-polynomial} if
\begin{equation*}
\No{C} = \No{(t-h)} = t^2 - (h + \cj{h})t + \No{h}
\end{equation*}
is real. This is equivalent to $h + \cj{h} = 2 (h_0 + \varepsilon h_4)$ and $h\cj{h}
= h_0^2 + h_1^2 + h_2^2 + h_3^2 + 2\varepsilon(h_0h_4 + h_1h_5 + h_2h_6 + h_3h_7)$
both being real whence
\begin{equation}
\label{eq:9}
h_4 = 0
\quad\text{and}\quad
h_1h_5 + h_2h_6 + h_3h_7 = 0.
\end{equation}
In this case the motion polynomial $C$ describes a rotation around the
straight line with Plücker coordinates $\Vect(h) = h_1\mathbf{i} + h_2\mathbf{j} + h_3\mathbf{k} +
\varepsilon(h_5\mathbf{i} + h_6\mathbf{j} + h_7\mathbf{k})$, a fact which should not be surprising. We
already demonstrated the relation between linear motion polynomials and
rotations. The second equation in \eqref{eq:9} is just the Plücker condition.
By Definition~\ref{def:infinity}, the value $C(\infty)$ equals
$\lim_{t\to\infty}t^{-1}C(t) = 1$, which is the identity displacement.
Obviously, $C(0) = -h$. The re-parametrization $\tau \mapsto \frac{1}{\tau}$
is of type \eqref{eq:8} with $a = d = 0$ and $b = c = 1$. It interchanges $0$
and $\infty$. Indeed, $C(\tau) = 1 - \tau h$ and
\begin{equation*}
C(\tau)|_{\tau = 0} = 1,
\quad
C(\tau)|_{\tau = \infty} = \lim_{\tau\to\infty} \tau^{-1}C(\tau) = -h,
\end{equation*}
as expected.
\end{example}
In the next section we will study bivariate motion polynomials which can be
written as products of linear motion polynomials.
\section{Alternating Factorizations of Bivariate Quaternion Polynomials}
\label{sec:bivariate}
Given a bivariate dual quaternion polynomial $C \in \DH[s,t]$, we denote its
bi-degree by $\bdeg(C)$. We wish to find a motion polynomial $C \in \DH[s,t]$
with $\bdeg(C)=(2,2)$ that admits two different factorizations with alternating
univariate linear factors, i.e.,
\begin{equation}
\label{eq:10}
C = (t-h)(s-\ell)(t-m)(s-n)=(s-n')(t-m')(s-\ell')(t-h'),
\end{equation}
where $h$, $\ell$, $m$, $n$, $n'$, $m'$, $\ell'$, $h' \in \DH \setminus \{0\}$
and $\No{(t-r)}=\No{(t-r')}\in \mathbb{R}[t]$, $\No{(s-u)}=\No{(s-u')}\in \mathbb{R}[s]$ for $r
\in \{h, m\}$ and $u \in \{\ell, n\}$, a requirement that is seen to be
necessary by taking norms on both sides of \eqref{eq:10}. We call these
factorizations \emph{alternating} since the $s$- and $t$-factors appear in
alternating order. By the considerations in Section~\ref{sec:preliminaries}, the
linear factors will represent rotations around fixed axes.
Motion polynomials of shape \eqref{eq:10} immediately lead to closed-loop 8R
mechanisms with the properties mentioned in the introduction:
\begin{itemize}
\item Each factorization gives rise to a two-parametric motion of an open 4R
chain. Since the factorizations agree, the two distal links can be rigidly
connected to form a closed-loop 8R linkage with the same two degrees of
freedom.
\item In this two-dimensional motion component (we conjecture that other
components exist as well), the motion of any axis is determined by either $s$
or $t$. Locking one axis, that is, fixing $s$ or $t$, automatically locks
every second axis while the axes parametrized by the other parameter still
move.
\end{itemize}
Up to now, only isolated examples of this kind of polynomials have been known
(cf.\ \cite{lercher21,lercher22:_8R-mechanism}). We will present a systematic
construction of these polynomials, and thus of multi-Bennett 8R mechanisms, and
start with a simple yet crucial lemma:
\begin{lem}
\label{lem:bennett}
Let $C \in \DH[s,t]$ be a dual quaternion polynomial that admits two
alternating factorizations:
\begin{equation}
\label{eq:11}
C=(t-h)(s-\ell)(t-m)(s-n)=(s-n')(t-m')(s-\ell')(t-h')
\end{equation}
Then
\begin{equation*}
(s-\ell)(s-n)=(s-n')(s-\ell') \quad \text{and} \quad (t-h)(t-m)=(t-m')(t-h').
\end{equation*}
\end{lem}
\begin{proof}
We may view $C$ as a polynomial in $t$ with coefficients in the ring $\DH[s]$.
Comparing the coefficient of $t^2$ on the left-hand and the right-hand side of
Equation~\eqref{eq:11} shows that $(s-\ell)(s-n)=(s-n')(s-\ell')$. The second
statement follows by interchanging the roles of $s$ and~$t$.
\end{proof}
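The coefficient comparison in this proof can be replayed on a concrete product. The sketch below (illustrative Python; bivariate quaternion polynomials are stored as dictionaries keyed by $(\deg_s,\deg_t)$, an encoding introduced only for this check) confirms that the coefficient of $t^2$ in $(t-h)(s-\ell)(t-m)(s-n)$ is the polynomial $(s-\ell)(s-n)$, for arbitrarily chosen quaternion coefficients:

```python
def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def pmul(A, B):
    # product of bivariate quaternion polynomials {(deg_s, deg_t): coefficient};
    # s and t commute with the quaternion coefficients, so only the
    # coefficients multiply noncommutatively
    C = {}
    for (i, j), a in A.items():
        for (k, l), b in B.items():
            key = (i + k, j + l)
            c = qmul(a, b)
            C[key] = tuple(x + y for x, y in zip(C.get(key, (0, 0, 0, 0)), c))
    return {k: v for k, v in C.items() if any(v)}

def lin(var, q):
    # the linear factor (var - q), var in {"s", "t"}
    one, neg = (1, 0, 0, 0), tuple(-x for x in q)
    return {(1, 0): one, (0, 0): neg} if var == "s" else {(0, 1): one, (0, 0): neg}

h, l, m, n = (1, 2, 0, 0), (0, 1, 1, 0), (2, 0, 1, 3), (0, 0, 2, 1)
C = pmul(pmul(pmul(lin("t", h), lin("s", l)), lin("t", m)), lin("s", n))

# coefficient of t^2, collected as a polynomial in s, equals (s - l)(s - n):
t2 = {i: c for (i, j), c in C.items() if j == 2}
Q = pmul(lin("s", l), lin("s", n))
assert t2 == {i: c for (i, j), c in Q.items()}
```

Note that the identity holds for arbitrary quaternions; the lemma's content is the consequence drawn from it when two alternating factorizations exist.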
\begin{rmk}
\label{rmk:bennett}
If the polynomial $C = (t-h)(s-\ell)(t-m)(s-n)$ admits a second alternating
factorization as in \eqref{eq:11}, it can always be computed by so-called
\emph{Bennett flips} \cite[Definition~4]{li18}. The name is motivated by the
observation that the revolute axes corresponding to $\ell$, $n$, $\ell'$, and $n'$ (and also to $h$, $m$, $h'$, and $m'$) form, in
that order, a Bennett linkage. More precisely, the quaternions $n'$, $\ell'$,
$m'$, $h' \in \DH$ can be computed by replacing the univariate polynomials
$(s-\ell)(s-n)$ and $(t-h)(t-m)$ by their second factorization with linear
factors. In \cite[Definition~4]{li18}, it is shown that the second
factorization of a univariate polynomial $(u-h_1)(u-h_2) \in \DH[u]$ is
obtained via the formulas
\begin{equation}
\label{eq:12}
k_2=-(\cj{h_1}-h_{2})^{-1}(h_1h_2-\No{h_1}) \quad \text{and} \quad k_1=h_1+h_2-k_2,
\end{equation}
where $(u-h_1)(u-h_2)=(u-k_1)(u-k_2)$ and $\No{(u-h_1)}=\No{(u-k_2)}$,
$\No{(u-h_2)}=\No{(u-k_1)}$.
\end{rmk}
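The flip formulas \eqref{eq:12} are easily checked numerically. The following Python sketch (all helper names are ours; quaternions are stored as `(w, x, y, z)` tuples and exact rational arithmetic is used for the inversion) applies them to $h_1 = 2\mathbf{i}-\mathbf{j}-3\mathbf{k}$ and $h_2 = -6-2\mathbf{i}+3\mathbf{j}-3\mathbf{k}$, the $t$-factors of Example~\ref{ex:primal} below:

```python
from fractions import Fraction

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(a):
    return (a[0], -a[1], -a[2], -a[3])

def qnorm(a):                       # quaternion norm N(a) = a * conj(a)
    return sum(c * c for c in a)

def qinv(a):                        # a^{-1} = conj(a) / N(a)
    n = Fraction(qnorm(a))
    return tuple(Fraction(c) / n for c in qconj(a))

def qadd(a, b):
    return tuple(x + y for x, y in zip(a, b))

def qsub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def bennett_flip(h1, h2):
    """Second factorization (u - k1)(u - k2) of (u - h1)(u - h2), Equation (12)."""
    prod = qsub(qmul(h1, h2), (qnorm(h1), 0, 0, 0))      # h1*h2 - N(h1)
    k2 = tuple(-c for c in qmul(qinv(qsub(qconj(h1), h2)), prod))
    k1 = qsub(qadd(h1, h2), k2)
    return k1, k2

# h = 2i - j - 3k and m = -6 - 2i + 3j - 3k (the t-factors of the example below)
h, m = (0, 2, -1, -3), (-6, -2, 3, -3)
mprime, hprime = bennett_flip(h, m)   # (t-h)(t-m) = (t-m')(t-h')
```

Both factorizations then have the same coefficient sequence, and the norms of the linear factors are swapped, as stated above.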
The following theorem is the centerpiece of the present section. It presents a
method that can be used to construct quaternion polynomials of bi-degree $(2,2)$
that admit two different factorizations with linear factors. The theorem is
stated for quaternions only. An extension to \emph{dual quaternions} will be
discussed later.
\begin{thm}
\label{thm:primalpart}
Let $h$, $m$, $n \in \H$ be quaternions. Moreover, assume that either
$\Scal(h) \neq \Scal(m)$ or, if $\Scal(h) = \Scal(m)$, that $\No{h}\neq
\No{m}$. Then there exists a suitable quaternion $\ell \in \H$ such that the
polynomial
\begin{equation}
\label{eq:13}
C \coloneqq (t-h)(s-\ell)(t-m)(s-n) \in \H[s,t]
\end{equation}
admits a second factorization with univariate linear factors.
\end{thm}
\begin{proof}
We briefly explain the main idea of the proof: According to \eqref{eq:13}, the
polynomial $C$ has a left factor of the form $t-h$. By choosing the quaternion
$\ell$ in a special way, we force the polynomial $C$ to admit another
factorization with a right factor $t-h'$ of the same norm, that is
\begin{equation}
\label{eq:14}
(t-h)(s-\ell)(t-m)(s-n)=A(t-h') \quad \text{with $A \in \H[s,t]$, $\bdeg(A)=(1,2)$,}
\end{equation}
and $\No{(t-h)}=\No{(t-h')}$.
In \cite{skopenkov19,lercher21}, it is shown that polynomials of degree one in
$t$ admit factorizations with univariate linear factors as long as the
corresponding norm polynomial splits into a product of real univariate
polynomials.\footnote{The original reference is \cite[Lemma~2.9]{skopenkov19},
but in \cite[p.~9]{lercher21} we provide an algorithm that can be used to
compute a factorization of the desired form.} This is indeed the case for
the polynomial $A$ in \eqref{eq:14} since $\No{A}=PR$ with $P=\No{(t-m)}\in
\mathbb{R}[t]$ and $R=\No{(s-\ell)}\No{(s-n)} \in \mathbb{R}[s]$. Therefore,
\begin{equation}
\label{eq:15}
A=(u_1-h_1)(u_2-h_2)(u_3-h_3) \quad \text{with} \quad u_i \in \{s,t\} \text{ and } h_i \in \H \text{ for } i=1,2,3
\end{equation}
and $C$ admits a second factorization with univariate linear factors. All possible combinations of linear $s$- and $t$-factors will be discussed in the proof of Corollary~\ref{cor:alt}.
In order to show \eqref{eq:14}, we define $M \coloneqq \No{(t-h)} \in \mathbb{R}[t]$, view $C$ and $M$ as univariate polynomials with coefficients in $\H[s]$ and apply division with remainder of $C$ by~$M$:
\begin{equation}
\label{eq:16}
C=(t-h)(s-\ell)(t-m)(s-n)=TM+R,
\end{equation}
where $T$, $R \in \H[s,t]$ and $\bdeg(R)=(d_t,d_s)$ with $d_t \le 1$ and $d_s
\le 2$. We compare the coefficients of $t^2$ on the left-hand and right-hand side of
equation \eqref{eq:16} and conclude $T=(s-\ell)(s-n)$ since $R$ is at most
linear in $t$. Therefore, the linear factor $s-n$ is a right factor of both
$C$ and $T$. Representation \eqref{eq:16} then shows that it is also a right
factor of $R$ (we used the fact $TM=MT$ since the polynomial $M \in \mathbb{R}[t]$ is
real and commutes with other polynomials). Similarly, the linear factor $t-h$
is a left factor of both $C$ and $M=(t-h)\cj{(t-h)}$ and hence also a left
factor of $R$. We can write $R=(t-h)R'=R't-hR' \in \H[s,t]$ with $R' \in
\H[s]$. Since $s-n$ divides $R$ from the right it needs to divide each
coefficient of $R$ when viewed as polynomial in $t$ with coefficients in
$\H[s]$. We conclude that $s-n$ is a right factor of $R'$. Hence $R$ is
necessarily of the form
\begin{equation}
\label{eq:17}
R=(t-h)(r_1s+r_0)(s-n)
\end{equation}
with $r_1, r_0 \in \H$. The quaternions $r_1$ and $r_0$ are obtained by
comparing appropriate coefficients in \eqref{eq:16}. Up to now, we have always considered $C$ and $R$ as univariate polynomials with coefficients in $\H[s]$; we now view them as bivariate polynomials, which allows us to compare the coefficients of $ts^2$ and of $t$. Comparing coefficients of
$ts^2$ in \eqref{eq:16} yields
\begin{equation*}
-h-m=-h-\cj{h}+r_1
\end{equation*}
and hence $r_1=\cj{h}-m$. Comparing coefficients of $t$ leads to
\begin{equation*}
-\ell m n- h\ell n = -\ell nh-\ell n\cj{h}-r_0n
\end{equation*}
and hence
\begin{equation*}
r_0=[(\ell m + h\ell)n-\ell \underbrace{n(h+\cj{h})}_{\overset{(*)}{=}(h+\cj{h})n}]n^{-1}=\ell m + h\ell -\ell(h+\cj{h})=h\ell-\ell(r_1+h).
\end{equation*}
In $(*)$ we used the fact $h+\cj{h} \in \mathbb{R}$. Let us recall the main idea of
the proof: We need to force $C$ to admit a factorization with the right factor
$t-h'$, where $h'$ is yet to be determined. Alternatively, we can force $R$ to have the right factor $t-h'$. By
\eqref{eq:16}, it is then also a right factor of $C$ (note that we require
$M=\cj{(t-h')}(t-h')$). We write
\begin{equation*}
R=r_1(t-r_1^{-1}hr_1)(s+r_1^{-1}r_0)(s-n)=r_1(t-h')(s+r_1^{-1}r_0)(s-n),
\end{equation*}
where $h' \coloneqq r_1^{-1}hr_1$. The polynomial $t-h'$ indeed satisfies the required condition $\No{(t-h')}=M$. If the polynomial $S \coloneqq
(s+r_1^{-1}r_0)(s-n) \in \H[s]$ were a real polynomial, the factor $t-h'$ would
commute with $S$ and hence be a right factor of $R$. In case $-r_{1}^{-1}r_0 =
\cj{n}$ we obtain $S = (s-\cj{n})(s-n) \in \mathbb{R}[s]$. Therefore, we need to find
a quaternion $\ell \in \H$ such that $-r_0=r_1\cj{n}$, that is
\begin{equation*}
\ell(r_1+h)-h\ell = r_1\cj{n}.
\end{equation*}
The above equation is \emph{linear} in the quaternion unknown $\ell \in
\H$. By \cite[Theorem 2.3]{janovska08} it is uniquely solvable if and only if
$\Scal(A) \neq -\Scal(B)$ or ($\Scal(A) = -\Scal(B)$ and $\No{A} \neq \No{B}$),
where $A \coloneqq -h$ and $B \coloneqq h+r_1$. This is equivalent to our
theorem's assumption that $\Scal(h) \neq \Scal(m)$ or ($\Scal(h) = \Scal(m)$ and
$\No{h}\neq \No{m}$). The referenced Theorem~2.3 of \cite{janovska08} also provides an
explicit formula for the solution $\ell \in \H$:
\begin{equation*}
\ell = (2(2\Scal(h)-\Scal(m))-h-(h+r_1)\cj{(h+r_1)}h^{-1})^{-1}(r_1\cj{n}-h^{-1}r_1\cj{n}\cj{(h+r_1)}).
\end{equation*}
This proves the claim.
\end{proof}
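As an alternative to the closed formula, the linear equation for $\ell$ can be solved by passing to real coordinates: left and right multiplications by fixed quaternions are linear maps on $\mathbb{R}^4$, so the equation becomes a real $4\times4$ system. A minimal Python sketch (our own naming), using exact rational arithmetic:

```python
from fractions import Fraction

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(a):
    return (a[0], -a[1], -a[2], -a[3])

def solve_ell(h, m, n):
    """Solve l*(r1 + h) - h*l = r1*conj(n) for l, with r1 = conj(h) - m."""
    r1 = tuple(x - y for x, y in zip(qconj(h), m))
    s = tuple(x + y for x, y in zip(r1, h))                 # r1 + h
    rhs = [Fraction(c) for c in qmul(r1, qconj(n))]
    basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
    # column j of the matrix is the image of the j-th basis quaternion
    cols = [[Fraction(u - v) for u, v in zip(qmul(e, s), qmul(h, e))] for e in basis]
    A = [[cols[j][i] for j in range(4)] for i in range(4)]
    # Gauss-Jordan elimination with exact rational arithmetic
    for i in range(4):
        p = next(r for r in range(i, 4) if A[r][i] != 0)
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(4):
            if r != i and A[r][i] != 0:
                f = A[r][i] / A[i][i]
                A[r] = [x - f * y for x, y in zip(A[r], A[i])]
                rhs[r] -= f * rhs[i]
    return tuple(rhs[i] / A[i][i] for i in range(4))

# h = 2i - j - 3k, m = -6 - 2i + 3j - 3k, n = -j
ell = solve_ell((0, 2, -1, -3), (-6, -2, 3, -3), (0, 0, -1, 0))
```

For the data of Example~\ref{ex:primal} below this reproduces $\ell = -\mathbf{i}+\mathbf{j}$.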
In order to construct mechanisms, we need to guarantee that the second
factorization in Theorem \ref{thm:primalpart} is alternating as well. This is
ensured by some additional assumptions stated in the (rather technical)
Corollary \ref{cor:alt}.
\begin{cor}
\label{cor:alt}
Let $h$, $m$, $n \in \H$ be quaternions and $h'$, $m' \in \H$ be such that
$(t-h)(t-m)=(t-m')(t-h')$ (cf.\ Remark~\ref{rmk:bennett}). If $mn\neq nm$ and
$h'n\neq nh'$, the second factorization in Theorem \ref{thm:primalpart} is
alternating as well, that is
\begin{equation*}
C = (t-h)(s-\ell)(t-m)(s-n) = (s-n')(t-m')(s-\ell')(t-h'),
\end{equation*}
where $\No{(t-r)}=\No{(t-r')}$, $\No{(s-u)}=\No{(s-u')}$ for $r \in \{h, m\}$
and $u \in \{\ell, n\}$.
\end{cor}
\begin{proof}
We have $3!=6$ possibilities for factorizations of $A$ with univariate linear
factors as in \eqref{eq:15}, where $A$ is defined in \eqref{eq:14}. We use representation
\eqref{eq:14} and obtain
\begin{align}
C &= (t-h)(s-\ell)(t-m)(s-n)=\mathbf{(s-\boldsymbol{\ell})(t-m')(s-n)}(t-h'), \label{it:1}\\
C &= (t-h)(s-\ell)(t-m)(s-n)=\mathbf{(t-m')(s-\boldsymbol{\ell})(s-n)}(t-h'), \label{it:2}\\
C &= (t-h)(s-\ell)(t-m)(s-n)=\mathbf{(s-\boldsymbol{\ell})(s-n)(t-m')}(t-h'), \label{it:3}\\
C &= (t-h)(s-\ell)(t-m)(s-n)=\mathbf{(t-m')(s-n')(s-\boldsymbol{\ell}')}(t-h'), \label{it:4}\\
C &= (t-h)(s-\ell)(t-m)(s-n)=\mathbf{(s-n')(s-\boldsymbol{\ell}')(t-m')}(t-h'),\quad\text{or} \label{it:5}\\
C &= (t-h)(s-\ell)(t-m)(s-n)=\mathbf{(s-n')(t-m')(s-\boldsymbol{\ell}')}(t-h'). \label{it:6}
\end{align}
We highlighted the different possibilities for factorizations of $A$ in
bold. Note that the linear $s$-factors on the left have the same norms
as the linear $s$-factors on the right, though possibly in a different order. If
the order is different, the $s$-factors must correspond in Bennett flips
by arguments as in the proof of Lemma~\ref{lem:bennett} and are denoted by a
prime, i.e. $n'$, $\ell'$. If the order is the same, the
$s$-factors are equal by arguments similar to Lemma~\ref{lem:bennett} and
\cite[Lemma~3]{hegedus13}. The same arguments apply to linear $t$-factors.
The two factorizations in \eqref{it:1} are $t$-equivalent in the sense of \cite[Definition~4.3]{lercher21}.\footnote{In \cite[Definition~4.3]{lercher21}, two different factorizations of
bivariate quaternion polynomials with linear factors are called
$t$-equivalent, if the linear $s$-factors appear in the same order. This is
the case in \eqref{it:1} and \eqref{it:2} since $s-\ell$ is the first and
$s-n$ the second $s$-factor in both factorizations. Such factorizations are
special since they can be transferred into each other by applying Bennett
flips and letting appropriate $s$- and $t$-factors commute with each other
(cf.\ \cite[Proposition~4.6]{lercher21}).} By \cite[Proposition~4.6]{lercher21}, we conclude $h'n=nh'$ (and also
$h\ell=\ell h$), a case which is excluded by assumption. The same can be said for the two factorizations in~\eqref{it:2}. The second
factorization in \eqref{it:3} can be rewritten as $(s-\ell)(s-n)(t-h)(t-m)$
and therefore turns out to be $s$-equivalent to the first factorization in
\eqref{it:3}. We again use \cite[Proposition~4.6]{lercher21} and conclude
$mn=nm$, which is also excluded by assumption. The second factorizations in
\eqref{it:4} and \eqref{it:5} coincide with the second factorizations in
\eqref{it:2} and \eqref{it:3}, respectively, after applying Bennett flips to
$(s-n')(s-\ell')$. Therefore, $C$ needs to admit two different factorizations
of the form~\eqref{it:6}.
\end{proof}
\begin{rmk}
Under the weak assumptions of Corollary~\ref{cor:alt},
Theorem~\ref{thm:primalpart} guarantees the existence of a quaternion $\ell \in
\H$ such that $C=(t-h)(s-\ell)(t-m)(s-n)$ admits a second alternating
factorization. While our proofs are constructive, the actual computation of
the second factorization can be simplified considerably with the help of
Remark~\ref{rmk:bennett}. First, we compute quaternions $h'$, $\ell'$,
$m'$, $n' \in \H$ via Bennett flips \eqref{eq:12} of the univariate
polynomials $(t-h)(t-m)$ and $(s-\ell)(s-n)$, respectively. The second
factorization is then given by $C=(s-n')(t-m')(s-\ell')(t-h')$. Pseudocode for
this approach is given in Algorithm~\ref{alg:2alt}.
\end{rmk}
\begin{algorithm}
\caption{Polynomials with two alternating factorizations}
\label{alg:2alt}
\begin{algorithmic}[1]
\Require Real quaternions $h$, $m$, $n \in \H$ with $\Scal(h)\neq \Scal(m)$ or $\Scal(h) = \Scal(m)$ and $\No{h} \neq \No{m}$.
\Ensure Two tuples $(t-h,s-\ell,t-m,s-n)$ and $(s-n',t-m',s-\ell',t-h')$ such that $(t-h)(s-\ell)(t-m)(s-n)=(s-n')(t-m')(s-\ell')(t-h')$.
\State $r_1 \gets \cj{h}-m$
\State $\ell \gets (2(2\Scal(h)-\Scal(m))-h-(h+r_1)\cj{(h+r_1)}h^{-1})^{-1}(r_1\cj{n}-h^{-1}r_1\cj{n}\cj{(h+r_1)})$
\State $h' \gets -(\cj{h}-m)^{-1}(hm-\No{h}), \ m' \gets h+m-h'$
\State $\ell' \gets -(\cj{\ell}-n)^{-1}(\ell n-\No{\ell}), \ n' \gets \ell+n-\ell'$\\
\Return $(t-h,s-\ell,t-m,s-n), \ (s-n',t-m',s-\ell',t-h')$
\end{algorithmic}
\end{algorithm}
\begin{example}
\label{ex:primal}
Setting
\begin{equation*}
h = 2\mathbf{i}-\mathbf{j}-3\mathbf{k},\quad
m = -6-2\mathbf{i}+3\mathbf{j}-3\mathbf{k},\quad
n = -\mathbf{j}
\end{equation*}
and applying Algorithm~\ref{alg:2alt} yields the two alternating
factorizations
\begin{multline*}
(t - 2\mathbf{i} + \mathbf{j} + 3\mathbf{k})(s + \mathbf{i} - \mathbf{j})(t + 6 + 2\mathbf{i} - 3\mathbf{j} + 3\mathbf{k})(s + \mathbf{j})\\
=(s - \mathbf{j})(t - 2\mathbf{i} - 3\mathbf{j} + 3\mathbf{k} + 6)(s + \mathbf{i} + \mathbf{j})(t + 2\mathbf{i} + \mathbf{j} + 3\mathbf{k}).
\end{multline*}
\end{example}
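The two factorizations of Example~\ref{ex:primal} can be checked mechanically. The following Python sketch (helper names are ours) implements Algorithm~\ref{alg:2alt} with exact rational arithmetic; bivariate polynomials are stored as dictionaries mapping degree pairs $(d_t, d_s)$ to quaternion coefficients, and the variables $s$, $t$ commute with everything while the quaternion coefficients do not:

```python
from fractions import Fraction
from functools import reduce

def qmul(a, b):  # Hamilton product, quaternions as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(a): return (a[0], -a[1], -a[2], -a[3])
def qnorm(a): return sum(c * c for c in a)
def qadd(a, b): return tuple(x + y for x, y in zip(a, b))
def qsub(a, b): return tuple(x - y for x, y in zip(a, b))
def qneg(a): return tuple(-c for c in a)
def qinv(a):
    n = Fraction(qnorm(a))
    return tuple(Fraction(c) / n for c in qconj(a))

def flip(a, b):
    """Bennett flip (Equation (12)): (u-a)(u-b) = (u-k1)(u-k2)."""
    k2 = qneg(qmul(qinv(qsub(qconj(a), b)), qsub(qmul(a, b), (qnorm(a), 0, 0, 0))))
    return qsub(qadd(a, b), k2), k2          # (k1, k2)

def lin(var, q):
    """Linear factor var - q as {(deg_t, deg_s): coefficient}."""
    e = (1, 0) if var == 't' else (0, 1)
    return {e: (1, 0, 0, 0), (0, 0): qneg(q)}

def pmul(p, q):
    # convolution of coefficients; s, t are central, the coefficients are not
    out = {}
    for (i1, j1), a in p.items():
        for (i2, j2), b in q.items():
            k = (i1 + i2, j1 + j2)
            out[k] = qadd(out.get(k, (0, 0, 0, 0)), qmul(a, b))
    return out

def prod(factors): return reduce(pmul, factors)

# input of the example: h = 2i - j - 3k, m = -6 - 2i + 3j - 3k, n = -j
h, m, n = (0, 2, -1, -3), (-6, -2, 3, -3), (0, 0, -1, 0)
# l from the explicit formula in Algorithm 1
r1 = qsub(qconj(h), m)
hr = qadd(h, r1)
lhs = qsub(qsub((2 * (2 * h[0] - m[0]), 0, 0, 0), h),
           tuple(qnorm(hr) * c for c in qinv(h)))
rn = qmul(r1, qconj(n))
rhs = qsub(rn, qmul(qinv(h), qmul(rn, qconj(hr))))
ell = qmul(qinv(lhs), rhs)
mp, hp = flip(h, m)              # (t-h)(t-m) = (t-mp)(t-hp)
np_, lp = flip(ell, n)           # (s-l)(s-n) = (s-np_)(s-lp)
C1 = prod([lin('t', h), lin('s', ell), lin('t', m), lin('s', n)])
C2 = prod([lin('s', np_), lin('t', mp), lin('s', lp), lin('t', hp)])
```

Comparing the two coefficient dictionaries `C1` and `C2` confirms the displayed identity.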
When it comes to applications in space kinematics, it is necessary to formulate
our statements for \emph{dual quaternion polynomials}. The algebra of dual
quaternions $\DH$ contains zero divisors and non-invertible elements; hence,
factorization theory for bivariate dual quaternion polynomials turns out to be
even more involved, and we are not aware of \emph{any} results in that direction.
We observe, however, that the extension of two different alternating
factorizations over the quaternions to dual quaternions is straightforward by
using the following approach:
For any dual quaternion $h \in \DH$, we denote its primal part by $h_p$ and its
dual part by $h_d$, that is, $h = h_p+\varepsilon h_d$ for $h_p, h_d \in \H$.
Similar notation is used for motion polynomials. We find two alternating
factorizations of a motion polynomial by following three steps:
\begin{description}
\item[Step 1:] We start with two dual quaternions $h = h_p + \varepsilon h_d$ and $m =
m_p + \varepsilon m_d$ such that $t - h$ and $t - m$ satisfy the motion polynomial
condition of Definition~\ref{def:motion-polynomial} and we compute Bennett
flips of $h$ and $m$ to obtain dual quaternions $h'=h_p'+\varepsilon h_d'$ and
$m'=m_p'+\varepsilon m_d'$, respectively.
\item[Step 2:] We choose a real quaternion $n_p \in \H$ such that $h_p$, $m_p$,
and $n_p$ satisfy the conditions of Corollary~\ref{cor:alt} and apply
Theorem~\ref{thm:primalpart} to the quaternions $h_p$, $m_p$, and $n_p$. This
gives a quaternion polynomial $C_p \in \H[s,t]$ that admits the two different
alternating factorizations
\begin{equation}
\label{eq:18}
C_p=(t-h_p)(s-\ell_p)(t-m_p)(s-n_p) = (s-n_p')(t-m_p')(s-\ell_p')(t-h_p').
\end{equation}
\item[Step 3:] Finally, we have to determine the respective dual parts $\ell_d$,
$n_d$, $\ell_d'$, $n_d' \in \H$ of the dual quaternions $\ell = \ell_p + \varepsilon
\ell_d$, $n = n_p + \varepsilon n_d$, $\ell' = \ell'_p + \varepsilon \ell'_d$, and $n' = n'_p +
\varepsilon n'_d$, to allow for two factorizations of
\begin{multline}
\label{eq:19}
C\coloneqq(t-h_p-\varepsilon h_d)(s-\ell_p-\varepsilon \mathbf{\boldsymbol{\ell}_d})(t-m_p-\varepsilon m_d)(s-n_p-\varepsilon \mathbf{n_d}) =\\
(s-n_p'-\varepsilon \mathbf{n_d'})(t-m_p'-\varepsilon m_d')(s-\ell_p'-\varepsilon \mathbf{\boldsymbol{\ell}_d'})(t-h_p'-\varepsilon h_d').
\end{multline}
The yet unknown quaternions are highlighted in bold letters. Comparing
coefficients in $t$ and $s$ for all quaternion coefficients on the left-hand
and right-hand side of equation \eqref{eq:19} yields a system of $32$
equations in $16$ unknowns. (Note that the primal parts are equal by
construction.) Additionally, we have to impose the motion polynomial
conditions of Definition~\ref{def:motion-polynomial} on the linear
$s$-polynomials, leading to eight further linear equations in $16$ unknowns:
\begin{equation*}
\begin{split}
\ell_p\mathbf{\cj{\boldsymbol{\ell}_d}}+\mathbf{\boldsymbol{\ell}_d}\cj{\ell_p}=0, \quad \mathbf{\boldsymbol{\ell}_d}+\cj{\mathbf{\boldsymbol{\ell}_d}}=0,\\
\ell_p'\cj{\mathbf{\boldsymbol{\ell}_d'}}+\mathbf{\boldsymbol{\ell}_d'}\cj{\ell_p'}=0, \quad \mathbf{\boldsymbol{\ell}_d'}+\cj{\mathbf{\boldsymbol{\ell}_d'}}=0,\\
n_p\cj{\mathbf{n_d}}+\mathbf{n_d}\cj{n_p}=0, \quad \mathbf{n_d}+\cj{\mathbf{n_d}}=0,\\
n_p'\cj{\mathbf{n_d'}}+\mathbf{n_d'}\cj{n_p'}=0, \quad \mathbf{n_d'}+\cj{\mathbf{n_d'}}=0.
\end{split}
\end{equation*}
In total, we have to solve a system of $40$ linear equations in $16$ unknowns.
This system seems to be highly overconstrained but, quite surprisingly, it
turns out to always admit a solution. This will be proved by a straightforward
computation in Section~\ref{sec:DH-parameters}, so that we have:
\end{description}
\begin{thm}
\label{th:dual-extension}
The construction outlined in Steps~1 to~3 above generically yields a motion polynomial $C$ satisfying
\begin{equation*}
C = (t-h)(s-\ell)(t-m)(s-n) = (s-n')(t-m')(s-\ell')(t-h')
\end{equation*}
with linear motion polynomials $t-h$, $s-\ell$, \ldots, $t-h'$.
\end{thm}
\begin{example}
We build on Example \ref{ex:primal} and additionally choose quaternions
\begin{equation*}
h_d \coloneqq 23\mathbf{i} - 74\mathbf{j} + 40\mathbf{k} \quad \text{and} \quad m_d \coloneqq -45\mathbf{i} - 66 \mathbf{j} - 36\mathbf{k}.
\end{equation*}
The polynomials $t-h_p-\varepsilon h_d$ and $t-m_p-\varepsilon m_d$ are motion
polynomials:
\begin{equation*}
\begin{aligned}
\No{(t-h_p-\varepsilon h_d)} &= t^2+14 \in \mathbb{R}[t] \setminus \{0\}, \\
\No{(t-m_p-\varepsilon m_d)} &= t^2+12t+58 \in \mathbb{R}[t] \setminus \{0\}.
\end{aligned}
\end{equation*}
We compute Bennett flips of $h \coloneqq h_p+\varepsilon h_d$ and $m \coloneqq
m_p+\varepsilon m_d$ and obtain
\begin{equation*}
\begin{aligned}
m' &= -6 + 2\mathbf{i} + 3\mathbf{j} - 3\mathbf{k} - \varepsilon (21\mathbf{i} + 22\mathbf{j} + 36\mathbf{k}),\\
h' &= -2\mathbf{i} - \mathbf{j} - 3\mathbf{k} - \varepsilon (\mathbf{i} + 118\mathbf{j} - 40\mathbf{k}).
\end{aligned}
\end{equation*}
The unknowns $\ell_d$, $n_d$, $\ell_d'$, $n_d'$ are obtained by solving the
respective system of linear equations:
\begin{alignat*}{2}
\ell_d &= -11\mathbf{i} - 11\mathbf{j} + 2\mathbf{k}, &\quad n_d &= -3\mathbf{i} - 2\mathbf{k},\\
\ell_d' &= 11\mathbf{i} - 11\mathbf{j} - 2\mathbf{k}, &\quad n_d' &= -25\mathbf{i} + 2\mathbf{k}.
\end{alignat*}
Finally, we get a motion polynomial in $\DH[s,t]$ with two alternating
factorizations:
\begin{multline*}
(t - 2\mathbf{i} + \mathbf{j} + 3\mathbf{k} + \varepsilon (-23\mathbf{i} + 74\mathbf{j} - 40\mathbf{k}))(s + \mathbf{i} - \mathbf{j} + \varepsilon (11\mathbf{i} + 11\mathbf{j} - 2\mathbf{k}))\\
(t + 6 + 2\mathbf{i} - 3\mathbf{j} + 3\mathbf{k} + \varepsilon(45\mathbf{i} + 66\mathbf{j} + 36\mathbf{k}))(s + \mathbf{j} + \varepsilon(3\mathbf{i} + 2\mathbf{k})) \\
= (s - \mathbf{j} + \varepsilon(25\mathbf{i} - 2\mathbf{k}))(t+6 - 2\mathbf{i} - 3\mathbf{j} + 3\mathbf{k} + \varepsilon(21\mathbf{i} + 22\mathbf{j} + 36\mathbf{k}))\\
(s+\mathbf{i}+\mathbf{j}-\varepsilon(11\mathbf{i} - 11\mathbf{j} - 2\mathbf{k}))(t+2\mathbf{i} + \mathbf{j} + 3\mathbf{k} + \varepsilon(\mathbf{i} + 118\mathbf{j} - 40\mathbf{k})).
\end{multline*}
\end{example}
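The dual quaternion identity above can be verified in the same spirit. The following Python sketch (our own naming) models a dual quaternion as a pair (primal, dual) of quaternion tuples, so that compared to the quaternion case only the multiplication rule $(p + \varepsilon d)(p' + \varepsilon d') = pp' + \varepsilon(pd' + dp')$ changes:

```python
from functools import reduce

def qmul(a, b):  # Hamilton product, quaternions as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qadd(a, b): return tuple(x + y for x, y in zip(a, b))

def dmul(a, b):  # dual quaternion product: (p + eps d)(p' + eps d')
    return (qmul(a[0], b[0]), qadd(qmul(a[0], b[1]), qmul(a[1], b[0])))

def dadd(a, b): return (qadd(a[0], b[0]), qadd(a[1], b[1]))
def dneg(a): return tuple(tuple(-c for c in part) for part in a)

ONE = ((1, 0, 0, 0), (0, 0, 0, 0))
ZERO = ((0, 0, 0, 0), (0, 0, 0, 0))

def lin(var, q):
    """Linear factor var - q as {(deg_t, deg_s): dual quaternion}."""
    e = (1, 0) if var == 't' else (0, 1)
    return {e: ONE, (0, 0): dneg(q)}

def pmul(p, q):
    out = {}
    for (i1, j1), a in p.items():
        for (i2, j2), b in q.items():
            k = (i1 + i2, j1 + j2)
            out[k] = dadd(out.get(k, ZERO), dmul(a, b))
    return out

# coefficients read off from the two factorizations of the example
h  = ((0,  2, -1, -3), (0,  23,  -74,  40))
l  = ((0, -1,  1,  0), (0, -11,  -11,   2))
m  = ((-6, -2,  3, -3), (0, -45,  -66, -36))
n  = ((0,  0, -1,  0), (0,  -3,    0,  -2))
n_ = ((0,  0,  1,  0), (0, -25,    0,   2))
m_ = ((-6,  2,  3, -3), (0, -21,  -22, -36))
l_ = ((0, -1, -1,  0), (0,  11,  -11,  -2))
h_ = ((0, -2, -1, -3), (0,  -1, -118,  40))
C1 = reduce(pmul, [lin('t', h), lin('s', l), lin('t', m), lin('s', n)])
C2 = reduce(pmul, [lin('s', n_), lin('t', m_), lin('s', l_), lin('t', h_)])
```

The two coefficient dictionaries coincide, coefficient by coefficient, over $\DH$.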
\section{The Multi-Bennett 8R Mechanism}
\label{sec:8R-Mechanism}
In the preceding section we proved the existence of bivariate quaternion polynomials
$C \in \H[s,t]$ that admit two factorizations with linear quaternion polynomials, and we hinted at the possibility of extending this to motion
polynomials of the shape
\begin{equation*}
C = (t-h)(s-\ell)(t-m)(s-n) = (s-n')(t-m')(s-\ell')(t-h')
\end{equation*}
with \emph{dual quaternions} $h$, $\ell$, $m$, $n$, $n'$, $m'$, $\ell'$, and
$h'$. By construction, each linear factor in $t$ or in $s$ parametrizes a
rotation around a straight line in space so that each of the two factorizations
gives rise to an open 4R chain whose end-effectors share the two-parametric
rational motion parametrized by $C$. Thus, this motion is contained in the
configuration variety of the closed-loop 8R linkage formed by the two open 4R
chains. Investigation of properties of this 8R linkage is the topic of this
section. In doing so, we only consider the \emph{generic} case, i.e., we assume
that no special algebraic relations between the input parameters are fulfilled.
At present, a comprehensive discussion of all special cases seems of little
value.
The linkage's zero configuration is given by $t = s = \infty$ because then there
is zero rotation in all joints (cf.\ Definition~\ref{def:infinity}). The axes'
Plücker coordinates in this zero configuration are simply the respective
vector parts
\begin{gather*}
\Vect(h),\quad
\Vect(\ell),\quad
\Vect(m),\quad
\Vect(n),\quad
\Vect(n'),\quad
\Vect(m'),\quad
\Vect(\ell'),\quad
\Vect(h')
\end{gather*}
of the linear factors. The positions of these axes in the
configuration determined by a general parameter pair $(s,t)$ can be computed via
\eqref{eq:5} as
\begin{equation}
\label{eq:20}
\begin{aligned}
H(s,t) &= \Vect(h),\\
L(s,t) &= \ej{\bigl((t-h)\Vect(\ell)(t-\cj{h})\bigr)},\\
M(s,t) &= \ej{\bigl((t-h)(s-\ell)\Vect(m)(s-\cj{\ell})(t-\cj{h})\bigr)},\\
N(s,t) &= \ej{\bigl((t-h)(s-\ell)(t-m)\Vect(n)(t-\cj{m})(s-\cj{\ell})(t-\cj{h})\bigr)},\\
N'(s,t) &= \Vect(n'),\\
M'(s,t) &= \ej{\bigl((s-n')\Vect(m')(s-\cj{n'})\bigr)},\\
L'(s,t) &= \ej{\bigl((s-n')(t-m')\Vect(\ell')(t-\cj{m'})(s-\cj{n'})\bigr)},\\
H'(s,t) &= \ej{\bigl((s-n')(t-m')(s-\ell')\Vect(h')(s-\cj{\ell'})(t-\cj{m'})(s-\cj{n'})\bigr)}.
\end{aligned}
\end{equation}
Note that $H(s,t)$ and $N'(s,t)$ are independent of both $s$ and $t$, that $L(s,t)$
depends only on $t$, and that $M'(s,t)$ depends only on~$s$.
For fixed $t = t_0$, the axes $L(s,t_0)$, $N(s,t_0)$, $L'(s,t_0)$, and
$N'(s,t_0)$ form, in that order, a Bennett linkage
whose motion is parameterized by $s$. We call it the \emph{$s$-Bennett linkage
at $t_0$.} Similarly, for fixed $s = s_0$ we obtain a \emph{$t$-Bennett
linkage at $s_0$,} formed by $H(s_0,t)$, $M(s_0,t)$, $H'(s_0,t)$, and
$M'(s_0,t)$.
It is well-known (and follows from Bennett's original description of his
mechanism as \emph{isogram}, cf.\ \cite[Section~10.3]{odehnal20}) that for given
$t_0$ there exist two values of $s \in \mathbb{R} \cup \{\infty\}$ at which the
$s$-Bennett linkage at $t_0$ is in a configuration where its four axes share a
common perpendicular. We call this an \emph{aligned configuration.} A similar
statement holds true for every $t$-Bennett linkage at $s_0$.
The aligned configurations will play a crucial role in our computation of the
8R-linkage's Denavit-Hartenberg parameters in the next section. The 8R-linkage
itself exhibits an interesting aligning behavior as well that will be
investigated in more detail in the forthcoming
Section~\ref{sec:Bennett-sub-mechanisms}.
\subsection{Denavit-Hartenberg Parameters}
\label{sec:DH-parameters}
The aim of this section is to prove simple relations among the
multi-Bennett's Denavit-Hartenberg parameters. To this end, we will compute
parametrizations of its moving axes with respect to specially chosen
coordinates and under certain normalizing assumptions. None of these
assumptions causes a loss of generality, so that the resulting statements are
of general validity and suitable for proving the missing piece in
Theorem~\ref{th:dual-extension}.
Our computation of the 8R-linkage's Denavit-Hartenberg parameters will profit a
lot from the geometry of its $t$- and $s$-Bennett linkages. According to
\cite[Section~10.3]{odehnal20}, the axes of any Bennett linkage can be computed
by
\begin{itemize}
\item picking two arbitrary points $F_h$, $F_m$ and a straight line $z$,
\item rotating $F_h$ and $F_m$, respectively, around $z$ by a rotation angle of
$\ang{180}$ to obtain a spatial quadrilateral $F_h$, $F_m$, $F_{h'}$,
$F_{m'}$ with equal opposite sides, and
\item selecting the axes $H$, $M$, $H'$, $M'$ as the perpendiculars to the
quadrilateral's sides at $F_h$, $F_m$, $F_{m'}$, and $F_{h'}$, respectively
(Figure~\ref{fig:bennett-linkage}).
\end{itemize}
\begin{figure}
\centering
\includegraphics[scale=0.2]{img/bennett-linkage.png}
\caption{Axis configuration of a Bennett linkage}
\label{fig:bennett-linkage}
\end{figure}
In order to compute the linkage's Denavit-Hartenberg parameters, we assume that
the $t$-Bennett linkage at $s_0 = \infty$ is aligned for $t_0 = \infty$. This is
no loss of generality as it can be achieved via re-parametrizations of type
\eqref{eq:8}. Moreover, we assume that the axes in this configuration intersect
the first coordinate axis perpendicularly and that the axis of half-turn
symmetry is the third coordinate axis. This entails a slight alteration of the
construction from above. We assign coordinates
\begin{equation*}
F_h = (h_1, 0, 0),\quad
F_m = (m_1, 0, 0),\quad
F_{h'} = (-h_1, 0, 0),\quad
F_{m'} = (-m_1, 0, 0)
\end{equation*}
to the common normal feet and
\begin{equation*}
D_h = (0, h_2, h_3 ),\quad
D_m = (0, m_2, m_3),\quad
D_{h'} = (0, -h_2, h_3),\quad
D_{m'} = (0, -m_2, m_3)
\end{equation*}
to the corresponding axis directions. By this choice, we ensure equal opposite
distances and angles but not equality of Bennett ratios.\footnote{An important characteristic of a Bennett mechanism is its \emph{Bennett ratio,}
the ratio between sine of angle and distance of two consecutive axes, which is
independent of the chosen pair of consecutive axes
\cite[Equation~(11.69)]{mccarthy11}.} A straightforward
computation yields that equality of the Bennett ratios can be achieved by
\begin{equation*}
m_1 = \frac{h_1h_3m_2}{h_2m_3}
\quad\text{or}\quad
m_1 = \frac{h_1h_2m_3}{h_3m_2}.
\end{equation*}
Since both expressions are equal up to interchanging $h_2$ with $h_3$ and $m_2$
with $m_3$ we can use either of them. The following computations use $m_1 =
h_1h_3m_2/(h_2m_3)$.
Now, we compute the Plücker coordinates, viewed as dual quaternions, of the axes
in the zero configuration as
\begin{equation}
\label{eq:21}
\begin{aligned}
\Vect(h) &= D_h + \varepsilon (F_h \times D_h) = h_2\mathbf{j} + h_3\mathbf{k} - h_1\varepsilon(h_3\mathbf{j} - h_2\mathbf{k}), \\
\Vect(m) &= D_m + \varepsilon (F_m \times D_m) = m_2\mathbf{j} + m_3\mathbf{k} - \frac{h_1h_3m_2}{h_2m_3}\varepsilon(m_3\mathbf{j} - m_2\mathbf{k}), \\
\Vect(h') &= D_{h'} + \varepsilon (F_{h'} \times D_{h'}) = -h_2\mathbf{j} + h_3\mathbf{k} + h_1\varepsilon(h_3\mathbf{j} + h_2\mathbf{k}), \\
\Vect(m') &= D_{m'} + \varepsilon (F_{m'} \times D_{m'}) = -m_2\mathbf{j} + m_3\mathbf{k} + \frac{h_1h_3m_2}{h_2m_3}\varepsilon(m_3\mathbf{j} + m_2\mathbf{k}).
\end{aligned}
\end{equation}
Here, we identified vectors in $\mathbb{R}^3$ with vectorial quaternions in the
usual way. The coefficients $h$, $m$, $h'$, and $m'$ in the factors $t-h$,
$t-m$, $t-h'$, $t-m'$ of the sought motion polynomial are linear combinations of
$1$ and $\Vect(h)$, $\Vect(m)$, $\Vect(h')$, and $\Vect(m')$, respectively. The
coefficients cannot be chosen arbitrarily but are subject to the closure
condition $(t-h)(t-m) = (t-m')(t-h')$. This is ensured by having
\begin{equation*}
\begin{aligned}
h &= \mu - \nu m_2 (\mathbf{j} + \frac{h_3}{h_2}\mathbf{k}) + \frac{\nu h_1m_2}{h_2}\varepsilon (h_3\mathbf{j} - h_2\mathbf{k}),\\
m &= \mu + \nu (m_2\mathbf{j} + m_3\mathbf{k}) - \frac{\nu h_1h_3m_2}{h_2m_3}\varepsilon (m_3\mathbf{j} - m_2\mathbf{k}),\\
h' &= \mu + \nu m_2 (\mathbf{j} - \frac{h_3}{h_2}\mathbf{k}) - \frac{\nu h_1m_2}{h_2}\varepsilon (h_3\mathbf{j} + h_2\mathbf{k}),\\
m' &= \mu - \nu (m_2\mathbf{j} - m_3\mathbf{k}) + \frac{\nu h_1h_3m_2}{h_2m_3}\varepsilon (m_3\mathbf{j} + m_2\mathbf{k})
\end{aligned}
\end{equation*}
with parameters $\nu$, $\mu \in \mathbb{R}$.
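These coefficient formulas can be checked numerically. The Python sketch below (our own naming; the sample values $h_1 = 1$, $h_2 = 1$, $h_3 = 2$, $m_2 = 1$, $m_3 = 1$, $\mu = 0$, $\nu = 1$ are assumptions for illustration) builds the Plücker coordinates $D + \varepsilon(F \times D)$ of \eqref{eq:21} and forms $h$, $m$, $h'$, $m'$ as $\mu + \lambda\,\Vect(\cdot)$, with $\lambda = -\nu m_2/h_2$ for $h$, $h'$ and $\lambda = \nu$ for $m$, $m'$ (our reading of the displayed coefficients); the closure condition $(t-h)(t-m) = (t-m')(t-h')$ can then be confirmed by comparing coefficients:

```python
from fractions import Fraction

def qmul(a, b):  # Hamilton product, quaternions as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qadd(a, b): return tuple(x + y for x, y in zip(a, b))

def dmul(a, b):  # dual quaternion product
    return (qmul(a[0], b[0]), qadd(qmul(a[0], b[1]), qmul(a[1], b[0])))

def dadd(a, b): return (qadd(a[0], b[0]), qadd(a[1], b[1]))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def pluecker(F_pt, D):
    """Line through foot point F_pt with direction D as dual quaternion D + eps (F x D)."""
    return ((0,) + tuple(D), (0,) + tuple(cross(F_pt, D)))

# sample parameters (assumed values, for illustration only)
h1, h2, h3, m2, m3 = map(Fraction, (1, 1, 2, 1, 1))
mu, nu = Fraction(0), Fraction(1)
m1 = h1 * h3 * m2 / (h2 * m3)

Vh  = pluecker(( h1, 0, 0), (0,  h2, h3))
Vm  = pluecker(( m1, 0, 0), (0,  m2, m3))
Vh_ = pluecker((-h1, 0, 0), (0, -h2, h3))
Vm_ = pluecker((-m1, 0, 0), (0, -m2, m3))

def coeff(lam, V):
    """mu + lam * Vect(.) as a dual quaternion."""
    p = tuple(lam * c for c in V[0])
    return ((mu + p[0],) + p[1:], tuple(lam * c for c in V[1]))

lam = -nu * m2 / h2
h, m   = coeff(lam, Vh),  coeff(nu, Vm)
h_, m_ = coeff(lam, Vh_), coeff(nu, Vm_)
```

Equality of `dadd(h, m)` with `dadd(m_, h_)` and of `dmul(h, m)` with `dmul(m_, h_)` amounts to equality of the linear and constant $t$-coefficients on both sides of the closure condition.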
So far, we have followed Step~1 of Section~\ref{sec:bivariate} and computed, in
full generality but at a special configuration, the axes and corresponding dual
quaternions that move with parameter $t$. For Steps~2 and 3 we make the general
\emph{ansatz}
\begin{equation*}
\ell = \ell_p + \varepsilon \ell_d,\quad
n = n_p + \varepsilon n_d,\quad
\ell' = \ell'_p + \varepsilon \ell'_d,\quad
n' = n'_p + \varepsilon n'_d
\end{equation*}
with $\ell_p$, $\ell_d$, $n_p$, $n_d$, $\ell'_p$, $\ell'_d$, $n'_p$, $n'_d \in
\H$. Step~2 gives the primal parts $\ell_p$, $\ell'_p$, and $n_p'$ in terms of the
undetermined coefficients of $n_p = n_0 + n_1\mathbf{i} + n_2\mathbf{j} + n_3\mathbf{k}$:
\begin{equation}
\label{eq:22}
\begin{aligned}
\ell_p &=
\frac{1}{h_2 m_3 + h_3 m_2} \Bigl(
h_2(m_3n_0-2m_2n_1)+h_3m_2n_0
+ n_1(h_2m_3-h_3m_2) \mathbf{i} \\
&\qquad\qquad\hfill
+ (h_2(m_3n_2-2m_2n_3)-h_3m_2n_2) \mathbf{j}
- n_3(h_2m_3+h_3m_2) \mathbf{k}
\Bigr),
\\
\ell'_p &=
\frac{1}{h_2m_3+h_3m_2} \Bigl(
h_2(m_3n_0-2m_2n_1)+h_3m_2n_0
+ n_1(h_2m_3-h_3m_2) \mathbf{i} \\
&\qquad\qquad\hfill
+ (h_2(m_3n_2-2m_2n_3)-h_3m_2n_2) \mathbf{j}
+ n_3(h_2m_3+h_3m_2) \mathbf{k}
\Bigr),\\
n_p' &= n_0 + n_1\mathbf{i} + n_2\mathbf{j} - n_3\mathbf{k}.
\end{aligned}
\end{equation}
This ensures that the primal parts on both sides of
\begin{equation*}
(t - h)(s - \ell)(t - m)(s - n) =
(s - n')(t - m')(s - \ell')(t - h')
\end{equation*}
agree. Equality of the respective dual parts together with the motion polynomial
condition boils down to a system of linear equations (Step~3) for the real
coefficients of $\ell_d$, $\ell'_d$, $n_d$, and $n'_d$ which we solve with a
computer algebra system. There is, indeed, a unique solution whence we have
provided the missing piece in the proof of Theorem~\ref{th:dual-extension}.
The solutions are just a bit too long to be displayed here. Therefore, and also
with forthcoming computations in mind, we strive for further simplifications.
By a rational re-parametrization we can achieve that the revolute axis
$\Vect(n)$ is perpendicular to the first coordinate axis in the zero
configuration, at $s_0 = \infty$. Once this is done, we see that necessarily $n_1
= 0$. With this admissible simplification, the solutions for the dual parts are
\begin{multline*}
\ell_d = \frac{h_1n_2}{\Delta}
\bigl(
n_3(h_2m_3+h_3m_2)(h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3) \mathbf{j}\\
-(h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3)(2h_2m_2n_3-h_2m_3n_2+h_3m_2n_2)\mathbf{k}
\bigr),
\end{multline*}
\begin{multline*}
n_d = \frac{h_1}{\Delta}
\bigl(
-n_3(2h_2m_2n_3-h_2m_3n_2+h_3m_2n_2)(h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3) \mathbf{j} \\
+ n_2(2h_2m_2n_3-h_2m_3n_2+h_3m_2n_2)(h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3) \mathbf{k}
\bigr),
\end{multline*}
\begin{multline*}
\ell'_d = \frac{h_1n_2}{\Delta}
\bigl(
n_3(h_2m_3+h_3m_2)(h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3)\mathbf{j} \\
+ (h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3)(2h_2m_2n_3-h_2m_3n_2+h_3m_2n_2) \mathbf{k}
\bigr),
\end{multline*}
\begin{multline*}
n'_d = \frac{h_1}{\Delta}
\bigl(
-n_3(2h_2m_2n_3-h_2m_3n_2+h_3m_2n_2)(h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3) \mathbf{j} \\
-n_2(2h_2m_2n_3-h_2m_3n_2+h_3m_2n_2)(h_2m_2n_3-h_2m_3n_2+h_3m_2n_2+h_3m_3n_3) \mathbf{k}
\bigr),
\end{multline*}
where
\begin{equation*}
\Delta = h_2m_3((h_3m_2-h_2m_3)(n_2^2-n_3^2)+2(h_2m_2+h_3m_3)n_2n_3).
\end{equation*}
But having $n_1 = 0$ has further consequences:
\begin{itemize}
\item A glance at \eqref{eq:22} immediately confirms that all coefficients of
$\mathbf{i}$ vanish for all revolute axes in the zero configuration. Therefore, all
revolute axes in the zero configuration \emph{are perpendicular to the first
coordinate axis.}
\item It can readily be verified that the intersection conditions
\begin{multline*}
\Vect(\ell) \mathbf{i} - \mathbf{i}\cj{\Vect(\ell)} =
\Vect(n) \mathbf{i} - \mathbf{i}\cj{\Vect(n)} \\
= \Vect(\ell') \mathbf{i} - \mathbf{i}\cj{\Vect(\ell')} =
\Vect(n') \mathbf{i} - \mathbf{i}\cj{\Vect(n')} = 0
\end{multline*}
between the first coordinate axis (with Plücker coordinates $\mathbf{i}$) and all
mechanism axes that move with parameter $s$ in the zero configuration are
satisfied.
\end{itemize}
This means that in the zero configuration \emph{all revolute axes intersect the
first coordinate axis perpendicularly.} We infer that not only the $t$-Bennett
mechanism but also the $s$-Bennett mechanism aligns, and both share the common
perpendicular of their axes. Since each Bennett mechanism has two aligned
configurations and there is nothing special about our zero configuration, we can
say:
\begin{thm}
\label{th:folded}
The multi-Bennett 8R mechanism has four aligned configurations in which all
eight revolute axes share a common perpendicular line.
\end{thm}
The four aligned configurations of an example can be seen in the corners of
Figure~\ref{fig:8R}.
From the representations \eqref{eq:21} and \eqref{eq:22} of the axes' Plücker
coordinates, it is straightforward to compute the mechanism's Denavit-Hartenberg
parameters. The information given in \cite[Section~2.1.2]{pottmann10} is
sufficient for that purpose but more explicit formulas are also available, for
example in \cite{faria19}. Using computer algebra, it is easy to verify:
\begin{thm}
\label{th:DH-relations}
The offsets of a multi-Bennett 8R mechanism are all zero. Opposite distances as
well as opposite angles are equal.
\end{thm}
Remarkably, the four distances are rational expressions in the input
parameters; no square roots appear:
\begin{equation*}
\begin{aligned}
d_1 &= \frac{1}{\Phi}
(h_1(m_2n_3-m_3n_2)(2h_2^2m_2n_3-h_2^2m_3n_2+h_2h_3m_2n_2+h_2h_3m_3n_3+h_3^2m_2n_3)),\\
d_2 &= \frac{1}{\Phi}
(h_1(m_2n_3-m_3n_2)(h_2n_2-h_3n_3)(h_2m_3-h_3m_2)),\\
d_3 &= \frac{1}{\Phi}
(h_1(m_2n_2+m_3n_3)(h_2n_3+h_3n_2)(h_2m_3-h_3m_2)),\\
d_4 &= \frac{1}{\Phi}
(h_1(h_2n_3+h_3n_2)(2h_2m_2^2n_3-h_2m_2m_3n_2+h_2m_3^2n_3+h_3m_2^2n_2+h_3m_2m_3n_3)),
\end{aligned}
\end{equation*}
where
\begin{equation*}
\Phi = h_2m_3((h_3m_2-h_2m_3)(n_2^2-n_3^2) + 2n_2n_3(h_2m_2+h_3m_3)).
\end{equation*}
The squared cosines of the corresponding angles are
\begin{equation*}
\begin{aligned}
\cos^2\alpha_1 &= \frac{(m_2n_2+m_3n_3)^2}{(n_2^2+n_3^2)(m_2^2+m_3^2)},\\
\cos^2\alpha_2 &= \frac{(2h_2m_2^2n_3-h_2m_2m_3n_2+h_2m_3^2n_3+h_3m_2^2n_2+h_3m_2m_3n_3)^2}
{(m_2^2+m_3^2)\Psi},\\
\cos^2\alpha_3 &= \frac{(2h_2^2m_2n_3-h_2^2m_3n_2+h_2h_3m_2n_2+h_2h_3m_3n_3+h_3^2m_2n_3)^2}
{(h_2^2+h_3^2)\Psi},\\
\cos^2\alpha_4 &= \frac{(h_2n_2-h_3n_3)^2}{(h_2^2+h_3^2)(n_2^2+n_3^2)}
\end{aligned}
\end{equation*}
where
\begin{multline*}
\Psi = 4h_2^2m_2n_3(m_2n_3-m_3n_2) + (h_2^2m_3^2 + h_3^2m_2^2)(n_2^2+n_3^2)\\
+ 2h_2h_3m_2(2m_2n_2n_3-m_3(n_2^2-n_3^2)).
\end{multline*}
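For readers who wish to reproduce such Denavit-Hartenberg data numerically, the basic building block is the distance and angle between two non-parallel lines given by Plücker coordinates $(\mathbf{d},\mathbf{m})$. The following is a minimal Python sketch of that computation only; the function and variable names are ours and are not part of the computer algebra used for the theorem above.

```python
import math

def cross(a, b):
    # Cross product of two 3-vectors.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def dh_distance_angle(d1, m1, d2, m2):
    """Common-normal distance and twist angle between two non-parallel
    lines with Pluecker coordinates (direction d, moment m = p x d)."""
    n = cross(d1, d2)                    # direction of the common normal
    sin_scale = math.sqrt(dot(n, n))     # |d1 x d2| = |d1||d2| sin(angle)
    # The mutual moment d1.m2 + d2.m1 equals the distance times |d1 x d2|
    # up to sign, so the common-normal distance is its absolute value
    # divided by |d1 x d2|.
    dist = abs(dot(d1, m2) + dot(d2, m1)) / sin_scale
    cos_a = dot(d1, d2) / math.sqrt(dot(d1, d1) * dot(d2, d2))
    return dist, math.acos(max(-1.0, min(1.0, cos_a)))
```

For example, the $x$-axis and the line through $(0,0,1)$ parallel to the $y$-axis are at distance $1$ and angle $\pi/2$.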
We conjecture that the necessary conditions of Theorem~\ref{th:DH-relations} on
the mechanism's Denavit-Hartenberg parameters are not sufficient to characterize
a multi-Bennett 8R mechanism.
\subsection{Bennett Sub-Mechanisms}
\label{sec:Bennett-sub-mechanisms}
We have already mentioned that for fixed $s = s_0$ the axes $H(s,t)$, $M(s,t)$,
$H'(s,t)$, and $M'(s,t)$ corresponding to the respective factors $t-h$, $t-m$,
$t-h'$, and $t-m'$ form a Bennett mechanism. The same is true for fixed $t =
t_0$ and the axes $L(s,t)$, $N(s,t)$, $L'(s,t)$, $N'(s,t)$ corresponding to the
respective factors $s-\ell$, $s-n$, $s-\ell'$, $s-n'$. We refer to these
Bennett mechanisms as the $t$-Bennett mechanism at $s_0$ and the $s$-Bennett
mechanism at $t_0$. The $t$-Bennett mechanism aligns for precisely two
parameter values $t'$,
$t''$. By means of \eqref{eq:20} it can readily be verified that the
$t$-Bennett linkage aligns at
\begin{equation}
\label{eq:23}
t' = \infty, \quad t'' = \mu
\end{equation}
while an $s$-Bennett linkage aligns at
\begin{equation}
\label{eq:24}
s' = \infty, \quad s'' = n_0.
\end{equation}
The most remarkable property of Equations~\eqref{eq:23} and \eqref{eq:24} is
that $t'$ and $t''$ do \emph{not depend} on $s$, while $s'$ and $s''$ do
\emph{not depend} on $t$. Abstracting from our special geometric description to
the general case, we can thus state:
\begin{thm}
\label{th:folded-Bennett}
In a multi-Bennett 8R mechanism, the $t$-Bennett sub-mechanisms align precisely
for two fixed parameter values $t'$, $t''$ and the $s$-Bennett sub-mechanisms
align precisely for two fixed parameter values $s'$, $s''$. The points $(t',
s')$, $(t', s'')$, $(t'', s')$, and $(t'', s'')$ in the configuration space
correspond to the four aligned states of the complete mechanism, cf.
Theorem~\ref{th:folded}.
\end{thm}
Theorem~\ref{th:folded-Bennett} is illustrated in Figure~\ref{fig:8R}. There,
the eight links are visualized by cylinders around the common normals of
consecutive joint axes. This is clearly visible in the four totally aligned
configurations in the corners. The motions between neighbouring corners have $t
= t'$, $t = t''$, $s = s'$, or $s = s''$. Figure~\ref{fig:8R} also illustrates
the multi-Bennett's configuration space, a torus, and the four curves, meridian
and lateral circles on the torus, along which Bennett sub-mechanisms align.
\newcommand{\myR}[2][scale=0.075]{\includegraphics[#1]{./img/8R-#2.png}}
\begin{figure}
\centering
\begin{tblr}{colspec={X[c]X[c]X[c]X[c]},
colsep=1mm,
}
\myR{01} & \myR{02} & \myR{03} & \myR{04} \\
\myR{15} & \SetCell[r=2,c=2]{c} \includegraphics[]{./img/torus} & & \myR{06} \\
\myR{14} & & & \myR{07} \\
\myR{12} & \myR{11} & \myR{10} & \myR{08}
\end{tblr}
\caption{Configuration space of a multi-Bennett 8R mechanism and animated
transitions between the four totally aligned configurations in the corners;
points correspond to the depicted configurations.}
\label{fig:8R}
\end{figure}
As expected, the Bennett ratio is not
constant within the family of $t$-Bennett mechanisms but depends on $s$ (and vice
versa for $s$-Bennett mechanisms). However, a noteworthy property is:
\begin{thm}
\label{th:bennett-ratio}
The Bennett ratio within the family of $t$-Bennett linkages is a
\emph{rational} function of degree four in $s$ and vice versa for the
$s$-Bennett linkages.
\end{thm}
\begin{proof}
A direct computation using computer algebra yields the value
\begin{equation*}
\tau = \frac{h_2m_2((h_2m_3+h_3m_2)^2(s^4 - 4n_0s^3) + c_2s^2 + c_1s + c_0)}{h_1\sqrt{h_2^2+h_3^2}\sqrt{m_2^2+m_3^2}(s^2 - 2n_0s + n_0^2 + n_2^2 + n_3^2)((h_3^2m_2^2-h_2^2m_3^2)s(s -2n_0) + d_0)}
\end{equation*}
for the $t$-Bennett ratio where
\begin{multline*}
c_2 = 6(h_2m_3+h_3m_2)^2n_0^2 + (2h_2^2m_3^2+2h_3^2m_2^2+4h_3^2m_3^2)n_2^2 \\
+ 2(2h_2^2m_2^2+h_2^2m_3^2+h_3^2m_2^2)n_3^2 - 4n_2n_3(h_2m_2-h_3m_3)(h_2m_3-h_3m_2),
\end{multline*}
\begin{multline*}
c_1 = -4n_0^3(h_2m_3+h_3m_2)^2 - 4n_0n_2^2(h_2^2m_3^2+h_3^2m_2^2+2h_3^2m_3^2) \\
+ 8n_0n_2n_3(h_2m_2-h_3m_3)(h_2m_3-h_3m_2) - 4n_0n_3^2(2h_2^2m_2^2+h_2^2m_3^2+h_3^2m_2^2),
\end{multline*}
\begin{multline*}
c_0 = n_0^4(h_2m_3+h_3m_2)^2
+ 2n_0^2n_2^2(h_2^2m_3^2+h_3^2m_2^2+2h_3^2m_3^2)\\
+ 4n_0^2n_2n_3(h_2m_2-h_3m_3)(h_3m_2-h_2m_3)
+ 2n_0^2n_3^2(2h_2^2m_2^2+h_2^2m_3^2+h_3^2m_2^2)
+ n_2^4(h_2m_3-h_3m_2)^2 \\
+ 4n_2^3n_3(h_2m_2+h_3m_3)(h_3m_2-h_2m_3)
+ 2n_2n_3^2(2h_2^2m_2^2-h_2^2m_3^2+6h_2h_3m_2m_3-h_3^2m_2^2+2h_3^2m_3^2)\\
+ 4n_2n_3^3(h_2m_2+h_3m_3)(h_2m_3-h_3m_2) + n_3^4(h_2m_3-h_3m_2)^2,
\end{multline*}
and
\begin{multline*}
d_0 = - 4h_2n_2n_3(2h_2m_2m_3-h_3m_2^2+h_3m_3^2) + n_0^2(h_3^2m_2^2-h_2^2m_3^2) \\
+ n_2^2(3h_2m_3-h_3m_2)(h_2m_3-h_3m_2) + n_3^2(4h_2^2m_2^2-h_2^2m_3^2+4h_2h_3m_2m_3+h_3^2m_2^2).
\end{multline*}
A similar formula can be derived for the $s$-Bennett ratio.
\end{proof}
\section{Conclusion and Future Research}
\label{sec:conclusion}
We presented the first example of a mechanism constructed from the factorization
of \emph{bivariate} motion polynomials and described some of its fundamental properties. Of course,
open questions remain.
The simple conditions on the mechanism's DH parameters which we describe in
Theorem~\ref{th:DH-relations} are necessary but, so we believe, not sufficient.
It would be desirable to augment them with further conditions to obtain a set of
sufficient conditions.
We further believe that the configuration space parameterized by the underlying
motion polynomial $C(s,t)$ is only a part of the mechanism's complete
configuration space. Obtaining a clearer picture of possible assembly modes or
bifurcations of the motion is certainly a worthy topic of future research.
The configuration space component described by $C(s,t)$ has many attractive
features for potential applications: it has a rational parametrization with
low-degree parameter lines. The motion along a parameter line is the well-understood
coupler motion of a Bennett linkage. The simple parametrization, as well as the
unusual separation into joints that move only with parameter $t$ and joints that
move only with parameter $s$, is expected to be beneficial for the control of a
multi-Bennett 8R mechanism.
\bibliographystyle{plainnat}
\section{Introduction} \label{sect:intro}
Many practical applications require sequences of decisions to be made under evolving and often uncertain conditions. Multistage stochastic programming (SP) is one of the common approaches used to guide decision making in such stochastic optimization problems. A variety of fields ranging from traditional production systems \cite{Peters1977}, hydroelectric reservoir scheduling \cite{Morton1996, Pereira1991}, and financial planning models \cite{Carino1998, Kusy1986}, to emerging applications in electricity grids with renewable generation \cite{Powell2012} and revenue management \cite{Topaloglu2009}, among others, have successfully used multistage SP.
Multistage stochastic linear programming (MSLP) models with recourse were used to formulate the early applications of multistage SP. These MSLP models were solved using multistage extensions of the L-shaped method \cite{VanSlyke1969}, such as the Nested Benders Decomposition (NBD) method \cite{Birge1985a}, the scenario decomposition method \cite{Mulvey1995}, and the progressive hedging (PH) algorithm \cite{Rockafellar1991}. A common feature across all these algorithms is the use of an approximate deterministic representation of uncertainty through scenario trees (i.e., precedence relations) built using scenario generation techniques (e.g., \cite{Dupacova2000}). When the underlying stochastic process becomes complicated, its deterministic representation may result in large, unwieldy scenario trees. To handle such scenario trees in a computationally viable manner, one may have to resort to scenario reduction methods (e.g., \cite{Dupacova2003}). For models that allow stagewise independent data, \cite{Pereira1991} proposed the stochastic dual dynamic programming (SDDP) algorithm. The multistage extensions of the L-shaped method, as well as SDDP and its variants, intend to solve a base model with an uncertainty representation involving a finite sample space and known probability distribution (a scenario tree or a sample average approximation). The model resulting from such a representation is deterministic in nature. In this regard, we refer to these methods as deterministic decomposition-based methods.
In problems where reliable knowledge of uncertainty is not available, an approach that does not rely on exact probabilistic information is desirable. For MSLP models, the first inexact bundle method proposed in \cite{Sen2014} called the multistage stochastic decomposition (MSD) achieves this objective. This algorithm is a dynamic extension of the regularized version of the two-stage stochastic decomposition (2-SD) algorithm \cite{Higle1994}. It accommodates very general stochastic processes, possibly with time correlations, through a nodal formulation which requires only a ``layout" of a scenario tree, and a mechanism that provides transitions between nodes. A standard scenario tree formulation is a special case of such a mechanism. When the stochastic process exhibits interstage independence, a time-staged formulation (as opposed to nodal scenario-tree formulation) is more convenient. With this in mind, we present a sequential sampling-based algorithm that addresses decision making under stagewise independent stochastic processes.
\subsection{Contributions} \label{sect:contributions}
We refer to our sequential sampling-based approach for MSLP with interstage independence as the \emph{stochastic dynamic linear programming} (SDLP) algorithm. In light of the existing deterministic and stochastic decomposition-based methods, the contributions of this work are as follows.
\subsubsection*{An Algorithm for Stagewise Independent MSLP Models}
SDLP harnesses the advantages offered by both the interstage independence of stochastic processes (like SDDP) as well as the sequential sampling design (like 2-SD) to build an algorithm. The algorithm achieves asymptotic convergence while sampling only a small number of scenarios (e.g., one) in any iteration. The algorithm is designed for a state variable formulation of the MSLP models. There are many distinguishing features of SDLP when compared to deterministic decomposition-based methods. The principal differences are highlighted below.
\begin{itemize}
\item {\it Static v. Dynamic Instances:} Deterministic decomposition-based methods, including SDDP, can be classified as external sampling methods where the uncertainty representation step precedes the optimization step. In such methods, one begins by first identifying the nodes (observations) and the probability of observing the nodes at each stage, which is then used to set up the MSLP instance. SDDP aims to optimize the resulting MSLP instance. The decisions provided by SDDP are justified using the mathematical theory of sample average approximation. The uncertainty representation (observations and probabilities) is explicitly used in computing the cost-to-go value function approximations. In contrast to that, SDLP accommodates the possibility of observing new scenarios during the course of the algorithm. As a result, the uncertainty representation, and therefore, the MSLP instance dynamically evolves with the introduction of new scenarios.
\item {\it Implications of Sampling:} SDLP completes the forward and backward recursion computations along a single sample-path that is generated independently of previously observed sample-paths. Although this feature is reminiscent of SDDP variants that incorporate sampling in the forward and backward passes, there are two main differences. (a) Since SDDP operates with a fixed uncertainty representation, the sampled paths selected for forward and backward pass calculations are a subset of sample-paths used in the uncertainty representation. On the other hand, the sample-path used in the forward and backward recursions of SDLP may include observations that have not been encountered before. (b) Since the number of observations increases, the piecewise affine approximations need to be updated to ensure that they continue to provide a lower bound for the dynamically changing sample average approximation.
\item {\it Asymptotic Behavior:} Unlike SDDP that can recover the cost-to-go value functions in finitely many steps, we show the optimality of SDLP using the primal-dual relationships that are fundamental to mathematical programming. Moreover, SDLP approximations are not finitely convergent. Asymptotic convergence distinguishes the mathematical underpinnings of SDLP and SDDP analyses.
\end{itemize}
The distinguishing features identified above are all consequences of sequential sampling. In the two-stage setting (as in the regularized 2-SD algorithm of \cite{Higle1994}), the recourse function is a deterministic optimization problem, a linear program to be specific. On the other hand, in the multistage setting, the recourse functions in non-terminal stages will dynamically update nested sample average approximations. Therefore, MSD as well as SDLP include provisions to address the stochasticity in value function approximations. Since MSD works with a layout of a scenario tree, it uses a node-specific approximation. With stagewise independent stochastic processes, the future value function approximations are shared by all observations at a stage. Therefore, updates along the current sample-path perturb the future approximations for all observations. This marks a subtle but significant difference in the way the approximations are constructed and updated in SDLP. This also impacts the convergence analysis.
\subsubsection*{A Policy to Identify Incumbent Solutions}
The use of quadratic regularization in two-stage SP algorithms (\cite{Higle1994} and \cite{Ruszczynski1986}) has proven to be very effective for several reasons. The quadratic regularizer helps ensure descent, a property that facilitates proving convergence because it imparts approximate (or estimated) monotonicity. This property was very useful for the convergence proofs, as in \cite{Higle1994}, for two-stage SLP problems. Another important advantage is that one can limit the size of the stage optimization problem to at most $n_t+3$ ``cuts'', where $n_t$ is the number of decision variables in stage $t$. Motivated by the advantages offered by regularization in sampling-based two-stage algorithms, the proposed algorithm, as well as MSD, employs quadratic regularization. Quadratic regularization can also be interpreted in the context of proximal algorithms at all non-terminal stages using ``incumbent'' decisions\footnote{In SP algorithms, especially methods based on 2-SD, an ``incumbent decision'' is one for which the predicted objective value appears to be the best (at the current iteration). When predictions change, the incumbent decision must also be updated.} that are maintained for all sample-paths discovered during the algorithm. Maintaining and updating these incumbent solutions becomes cumbersome as the number of sample-paths increases. To address this critical issue, we develop the notion of a piecewise-affine policy which is used to identify incumbent solutions for out-of-sample scenarios (new sample-paths) generated sequentially within the algorithm. Such a policy is referred to as a basic feasible policy (BFP). A BFP is based on the optimal bases of the approximate stage problems that are solved during the course of the algorithm. While the BFP designed in this paper is used to identify incumbent solutions for SDLP, the general idea underlying a BFP can also be adopted for other multistage SP algorithms, including SDDP.
This paper also serves as a companion to our earlier work \cite{Gangammanavar2018} by providing the theoretical corroboration of the empirical evidence presented there. In \cite{Gangammanavar2018}, a sequential sampling-based approach was used for controlling distributed storage devices in power systems with significant renewable resources. Computational experiments conducted on large-scale instances showed that such an approach provides solutions which are statistically indistinguishable from solutions obtained using SDDP, while significantly reducing the computational time. These improvements (in comparison to SDDP) can be attributed to two key features of the SDLP algorithm. Firstly, the forward and backward recursion calculations are carried out only along one sample-path. This significantly reduces the total number of optimization problems solved in any iteration. Secondly, the use of regularization allows us to maintain a fixed-sized optimization problem at each stage, as in the case of the master problem in the regularized 2-SD algorithm \cite{Higle1994}. This implies that the computational effort per iteration (necessary to solve stagewise optimization problems) does not increase with iterations. Moreover, it has been recently established that 2-SD provides a sequence of incumbent solutions that converges to the optimal solution at a sublinear convergence rate \cite{Liu2020asymptotic}. It is important to emphasize that this result pertains to a solution sequence, rather than the objective function sequence, which was already known for first-order methods such as stochastic approximation (SA). Because the design and analysis of this paper mirrors that of 2-SD, we suspect that a similar rate of convergence may be possible for SDLP as well. However, a detailed convergence rate analysis is beyond the scope of the current paper.
\subsection*{Organization} The remainder of the paper is organized as follows. In \S\ref{sect:form} we present the MSLP formulation used in this paper. We present a brief overview of the deterministic decomposition-based MSLP methods, particularly SDDP, in \S\ref{sect:mslpAlgorithms}. A detailed description of the SDLP algorithm is provided in \S\ref{sect:sdlp}. We present the convergence analysis of SDLP in \S\ref{sect:convergenceAnalysis}. Our presentation will have a particular emphasis on the differences in approximations employed in deterministic and stochastic decomposition methods.
\section{Notation and Formulation} \label{sect:form}
We consider a system where sequential decisions are made at discrete decision epochs denoted by the set $\mathcal{T} := \{0,\ldots,T\}$. Here $T < \infty$, and hence we have a finite horizon sequential decision model with $T+1$ stages. In the interest of brevity (especially because there are many subscripted elements) we denote by $t+$ and $t-$ the succeeding and preceding time periods $(t+1)$ and $(t-1)$, respectively. We use $[t]$ to denote the history of the stochastic process $\{v_t\}_{t=0}^T$ until (and including) stage $t$, i.e., $v_{[t]} = v_0, v_1,\ldots,v_t$. Likewise, we use $v_{(t+)}$ to denote the process starting from stage $t+1$ until the end of horizon (stage $T$), i.e., $v_{(t+)} = v_{t+1},\ldots,v_T$. We use $\inner{\cdot}{\cdot}$ to denote the inner product of vectors (e.g., $\inner{v}{w} = v^\top w$) and the product of a matrix transpose and a vector, i.e., $\inner{M}{v} = M^\top v$.
Commonly in SP, MSLP models are formulated without state variables, focusing only on decisions in each stage. However in many applications, especially those involving dynamic systems, it is common to use the state variable description of system evolution. Because we expect SDLP to be able to provide decision support for such systems, it is advisable to use a state variable formulation. This approach is also common in the dynamic programming community. In this regard, we use a state variable $s_t := (x_t, \omega_t) \in \mathcal{S}_t$ to describe the system at stage $t$. This state variable is comprised of two components: $x_t \in \mathcal{X}_t$ is the endogenous state of the system and $\omega_t \in \Omega_t$ captures the exogenous information revealed in interval $(t-1,t]$. A stochastic process over which the decision-maker cannot exert any control drives the exogenous state evolution. For example, the exogenous state variable may represent a weather phenomenon like wind speed, or a market phenomenon like the price of gasoline. The evolution of the endogenous state, on the other hand, can be controlled by an algorithm through decisions $u_t$ and is captured by stochastic linear dynamics:
\begin{align} \label{eq:stDyn}
x_{t+} = \dynamics{t+}{x_t,\omega_{t+}, u_t} = a_{t+} + A_{t+} x_t + B_{t+}u_t.
\end{align}
Here, $(a_{t+}, A_{t+}, B_{t+})$ are components of the exogenous information vector $\omega_{t+}$ corresponding to the next time period.
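In simulation code, the dynamics \eqref{eq:stDyn} amount to a single affine update per stage. Below is a minimal sketch using plain Python lists; the function name and the data layout of the exogenous triple are hypothetical choices, not part of the paper's formulation.

```python
def propagate_state(x_t, u_t, omega_next):
    """Affine state update x_{t+1} = a + A x_t + B u_t, where the sampled
    exogenous vector omega_next carries the triple (a, A, B)."""
    a, A, B = omega_next
    return [a[i]
            + sum(A[i][j] * xj for j, xj in enumerate(x_t))
            + sum(B[i][j] * uj for j, uj in enumerate(u_t))
            for i in range(len(a))]
```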
To characterize the exogenous process $\{\tilde{\omega}_t\}_{t=1}^T$, we use $(\Omega, \mathcal{F}, \mathbb{P})$ to denote the filtered probability space. Here, $\Omega = \Omega_1 \times \ldots \times \Omega_T$ denotes the set of outcomes and $\omega_t$ denotes an observation of the random variable $\tilde{\omega}_t$. The $\sigma$-algebras $\mathcal{F}_t \subseteq \mathcal{F}$ represent the data available to the decision-maker at time $t$, which satisfy $\mathcal{F}_t \subseteq \mathcal{F}_{t^\prime}$ for $t < t^\prime$. The exogenous data $\omega_t$ includes components of $(a_t, A_t, B_t)$ that appear in \eqref{eq:stDyn} and parameters $(b_t, C_t)$ in the right-hand side of the constraints at stage $t$.
With these notations, the state-variable representation of the time-staged MSLP model can be written in the nested form as follows:
\begin{align} \label{eq:mslp}
\min~& \inner{c_0}{x_0} + \inner{d_0}{u_0} + \mathbb{E}_{\tilde{\omega}_{(1)}}\Bigg[\inner{c_1}{x_1} + \inner{d_1}{u_1} + \mathbb{E}_{\tilde{\omega}_{(2)}} \bigg [\ldots + \\& \hspace{7cm}\mathbb{E}_{\tilde{\omega}_T} [\inner{c_T}{x_T} + \inner{d_T}{u_T}] \bigg] \Bigg] \notag \\
\text{s.t.}~& u_t \in \mathcal{U}_t(s_t) := \{u_t|D_tu_t \leq b_t - C_t x_t,~ u_t \geq 0\} \qquad \forall t \in \mathcal{T} \notag \\
& x_{t+} = \dynamics{t+}{x_t,\omega_{t+}, u_t} = a_{t+} + A_{t+} x_t + B_{t+}u_t \qquad \forall t \in \mathcal{T} \setminus \{T\}. \notag
\end{align}
The above problem is stated for a given initial endogenous state $x_0$. Here, $u_t$ for $t \in \mathcal{T}$ are decision vectors and $\mathcal{U}_t(s_t)$ are closed convex sets that define the feasible set of decisions. In our finite horizon framework, we assume that the terminal cost $h_{T+}(s_{T+})$ is known for all $s_{T+}$ (or negligible enough to be set to $0$). The expectation is taken with respect to the exogenous stochastic process $\tilde{\omega}_{(t)}$ over the remainder of the horizon. In a time period $t$, the state $s_t$ explicitly depends on the initial state $x_0$, past decisions $u_{[t]}$, and past exogenous states $\omega_{[t]}$. Since $s_t$ affects the feasible set $\mathcal{U}_t$, the decision $u_t$ is a function of the decision process until time $t$. The multistage program can alternatively be stated in the following recursive form for all $t \in \mathcal{T}$:
\begin{align}\label{eq:mslpt}
h_t(s_t) = \inner{c_t}{x_t} + \min~& \inner{d_t}{u_t} + \expect{h_{t+}(\tilde{s}_{t+})}{} \\
s.t.~& u_t \in \mathcal{U}_t(s_t) := \{u_t|D_tu_t \leq b_t - C_t x_t, u_t \geq 0\}, \notag
\end{align}
where $\tilde{x}_{t+} = \dynamics{t+}{x_t,\tilde{\omega}_{t+},u_t}$. Since the initial state $x_0$ is assumed to be given, the stage-$0$ (henceforth known as the root-stage) problem has deterministic input.
In general, the MSLP problems are PSPACE-hard \cite{Dyer2006, Hanasusanto2016comment} and require exponential effort in horizon $T$ for provably tight approximations with high probability. To keep our presentation consistent with our algorithmic goals, we make the following assumptions:
\begin{enumerate}
\renewcommand{\labelenumi}{(A\theenumi)}
\item The set of root-stage decisions $\mathcal{U}_0$ is compact. \label{assum:compact}
\item The complete-recourse assumption is satisfied at all non-root stages, that is, the feasible set $\mathcal{U}_t(s_t)$ is non-empty for all state trajectories $s_t$ with $x_t$ satisfying \eqref{eq:stDyn} for all $t \in \mathcal{T} \setminus \{0\}$. \label{assum:completeResource}
\item The constraint matrices $D_t$ are fixed and have full row rank.\label{assum:fixed}
\item Zero provides the lower bound on all cost-to-go value functions. \label{assum:zeroLB}
\item The stochastic process for exogenous information is stagewise independent and its support is finite. \label{assum:indep}
\end{enumerate}
These assumptions provide a special structure and are fairly standard in the SP literature (\cite{Philpott2008, Sen2014}). The fixed recourse assumption \assumRef{assum:fixed} implies that the recourse matrix $D_t$ does not depend on exogenous information. As for assumption \assumRef{assum:zeroLB}, note that most loss functions used in engineering applications and statistical learning obey this property. For situations in which this assumption is not satisfied, one can perform a pre-processing step as follows: first estimate a lower bound on the optimal objective function value for each stage, and then add the absolute value of the most negative stagewise lower bound to all stages. Introducing such a constant into the objective function does not alter the optimal decisions while ensuring the validity of \assumRef{assum:zeroLB}. The finite support assumption \assumRef{assum:indep} on exogenous information ensures that $\mathcal{F}_t$ is finite. We note that the algorithms presented here can be extended, after some refinement, to settings where some of the above assumptions can be relaxed. For instance, certain extensions to Markovian stochastic processes can be envisioned. However, a detailed treatment of these extensions is beyond the scope of this paper.
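The pre-processing step described for assumption \assumRef{assum:zeroLB} can be sketched as follows; `stage_lower_bounds` and the function name are hypothetical, standing in for the estimated stagewise lower bounds mentioned above.

```python
def shift_stage_costs(stage_lower_bounds):
    """Sketch of the (A4) pre-processing: shift every stage objective by the
    absolute value of the most negative stagewise lower bound so that zero
    becomes a valid lower bound on all cost-to-go functions."""
    worst = min(stage_lower_bounds)
    shift = abs(worst) if worst < 0 else 0.0
    return [lb + shift for lb in stage_lower_bounds], shift
```

Because the shift is a constant added to each stage objective, the optimal decisions are unchanged.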
\section{MSLP Algorithms} \label{sect:mslpAlgorithms}
The fundamental difficulty of solving SP problems is associated with the \emph{nested multidimensional} integral for computing the expectation in \eqref{eq:mslpt}. The most direct approach involves incorporating simulation to estimate the expected recourse function as:
\begin{align} \label{eq:saa}
\expect{h_{t+}(\tilde{s}_{t+})}{} \approx \widehat{H}_{t+}^N(s_{t+}) := \frac{1}{N}\sum_{n=1}^N h_{t+}(s_{t+}^n)
\end{align}
where $s_{t+}^n$ has components $x_{t+}^n = a_{t+}^n + A_{t+}^n x_t + B_{t+}^n u_t$ and $\omega_{t+}^n$. Doing so results in the so-called sample average approximation (SAA) problem. In this case, we can view the support of $\Omega_{t+}$ as consisting of a simulated sample $\Omega_{t+}^N := \{\omega_{t+}^1, \omega_{t+}^2,\ldots,\omega_{t+}^N\}$, where each observation vector $\omega_{t+}^n$ has the same probability $p(\omega_{t+}^n) = (1/N)~\forall n = 1,\ldots,N$. Since the recourse function in \eqref{eq:mslpt} involves the expectation operator, it is worth noting that the estimate in \eqref{eq:saa} is an unbiased estimator and, under certain conditions (e.g., when the sample is independent and identically distributed), a consistent estimator of the expected recourse function. However, the optimal value of an SAA problem provides a downward-biased estimator of the true optimal value \cite{Shapiro2011}.
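A minimal Monte Carlo sketch of the estimator in \eqref{eq:saa} is given below; here `h` and `sample_next_state` are hypothetical stand-ins for the stage cost-to-go function and the dynamics driven by sampled exogenous data.

```python
import random

def saa_estimate(h, sample_next_state, s_t, u_t, N, seed=0):
    """Estimate the expected cost-to-go E[h(s_{t+1})] by the sample
    average (1/N) * sum of h over N simulated next states."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        total += h(sample_next_state(s_t, u_t, rng))
    return total / N
```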
The SAA problem can be reformulated as a single large linear program (the deterministic equivalent form \cite{Birge2011}), and off-the-shelf optimization software can be used to solve the problem. However, as the sample size increases (as mandated by SAA theory to achieve high-quality solutions \cite{Shapiro2014}), or the number of stages increases, such an approach becomes computationally burdensome. Deterministic decomposition-based cutting plane methods, also known as outer-linearization methods, provide a means to partially overcome the aforementioned burden.
The deterministic decomposition-based (DD) methods can be traced to Kelley \cite{Kelley1960} for smooth convex optimization problems, Benders decomposition for ideas of decomposition/partitioning in mixed-integer programs (MIPs) \cite{Benders1962}, and Van Slyke and Wets for 2-SLPs \cite{VanSlyke1969}. While the exact motivation for these methods arose in different contexts, we now see them as being very closely related to the outer-linearization perspective. These ideas have become the mainstay for both 2-SLPs and stochastic MIPs.
DD-based algorithms originally developed for 2-SLP have been extended to successive stages of dynamic linear programs. One of the early successes was reported in \cite{Birge1985a}, where the classical two-stage Benders decomposition algorithm was extended to multiple stages. This procedure has subsequently come to be known as the NBD algorithm. The starting point of this algorithm is the scenario tree representation of underlying uncertainty where all possible outcomes and their interdependence are represented as nodes on a tree. Naturally, this implies that the NBD algorithm can be classified under the multistage DD-based methods. Relationships between various algorithmic approaches are summarized in Figure \ref{fig:multistageAlgs}.
\subsection{Stochastic Dual Dynamic Programming}
It is well known that the number of nodes in the scenario tree grows exponentially with the number of stages, and therefore, the need to visit all the nodes in the scenario tree significantly increases the computational requirements of the NBD algorithm. Pereira and Pinto \cite{Pereira1991} provided a sampling-based approach to address this issue in the stochastic dual dynamic programming (SDDP) algorithm.
\begin{figure}[t]
\centering
\includestandalone[width=0.9\textwidth]{./figures/multistageAlgs}
\caption{Multistage Stochastic Linear Programming Algorithms} \label{fig:multistageAlgs}
\end{figure}
Like the MSLP algorithms mentioned earlier, SDDP creates an outer approximation of the stage value function using subgradient information. SDDP performs its iteration in a forward pass and a backward pass, a feature common to most multistage SP algorithms. However, it avoids the intractability of scenario trees by assuming that the stochastic process is stagewise independent. While the algorithm traverses forward using sampling, the approximations are created on the backward pass similarly to deterministic Benders-type cuts. The interstage independence assumption allows these cuts to be shared across different states within a stage. Cut sharing under special stagewise dependency \cite{Infanger1996}, the algorithmic enhancements proposed in \cite{Linowsky2005}, and the inclusion of risk measures \cite{Guigues2012sampling, Philpott2012} have extended the capabilities of the original algorithm \cite{Pereira1991} and contributed to the success of SDDP. The abridged nested decomposition algorithm in \cite{Donohue2006} and the cutting plane and partial sampling algorithm proposed in \cite{Chen1999} are other sampling-based methods which are similar in flavor to SDDP.
\sloppy
The main steps of SDDP are presented in \algRef{alg:sddp}. As in the case of NBD, each iteration of SDDP begins by solving an optimization problem for the root-stage. Then a finite number of Monte Carlo simulations are carried out to identify forward sample-paths $\{\future{\omega}{0}\}_{n=1}^N$ for the iteration. Along each one of these sample-paths, the forward pass involves identifying candidate solutions $u_t^{kn}$ by solving an optimization problem of the form:
\begin{align} \label{eq:sddpt}
\min \{f_t^{k-1}(s_t, u_t)~|~u_t \in \mathcal{U}_t(s_t^{kn})\}
\end{align}
and propagating the state according to the dynamics in \eqref{eq:stDyn} as $x_{t+}^{kn} = \mathcal{D}_{t+}(x_t^{kn},\omega_{t+}^{kn}, u_t^{kn})$. These two steps are undertaken in an alternating manner for all stages until the end of the horizon. In the above stage optimization problem, $f_t^{k-1}(s_t, u_t)$ denotes the current approximation of the cost-to-go value function in \eqref{eq:mslpt}. At the end of the forward pass, we have a set of candidate solutions at each non-terminal stage $\{u_t^{kn}\}_{\forall t}$; one for each simulated sample-path of the forward pass.
\begin{algorithm}[!ht]
\caption{Stochastic Dual Dynamic Programming} \label{alg:sddp}
\begin{algorithmic}[1]
\State \textbf{Initialization}: Iteration count $k \leftarrow 0$.
\State {\bf Forward pass:} Decision simulation along simulated sample-paths.\label{alg:sddpLoop}
\State Solve the root-stage optimization problem \eqref{eq:sddpt} to identify $u_0^k$.
\State Sample a set of $N$ paths $\{\future{\omega}{0}^{kn}\}_{n=1}^N$.
\For {$t = 1,\ldots,T-1$}
\For {$n = 1,\ldots,N$}
\State Setup the candidate states $x_t^{kn} = \dynamics{t}{x_{t-}^{kn}, \omega_t^{kn}, u_{t-}^{kn}}$.
\State Solve the stage optimization problem in \eqref{eq:sddpt} with $s_t^{kn}$ as input, and obtain the optimal primal candidate solution $u_t^{kn}$.
\EndFor
\EndFor
\State {\bf Backward pass}: Update cost-to-go value function approximations.
\For {$t = T-1,\ldots,0$} \label{alg:backwardpassBegin}
\For {$n = 1,\ldots,N$}
\For{$\omega_{t+} \in \Omega_{t+}$} \label{alg:2dd_omegaSelection}
\State Setup $s_{t+} = (x_{t+},\omega_{t+})$, where $x_{t+} = \mathcal{D}_{t+}(x_t^{kn}, \omega_{t+}, u_t^{kn})$.
\State Solve subproblem with $s_{t+}$ as input:
\begin{align}\label{eq:2dd_subproblem}
\min~\{f_{t+}^k(s_{t+},u_{t+})~|~ u_{t+} \in \mathcal{U}_{t+}(s_{t+})\},
\end{align}
\hspace*{45pt} and obtain optimal dual solution $\pi_{t+}(\omega_{t+})$.
\State Compute lower bounding affine function $\ell_{t+}(s_{t+}) := \alpha_{t+}^{kn}(\omega_{t+}) + \inner{\beta_{t+}^{kn}(\omega_{t+})}{x_{t+}}$, where
\begin{align} \label{eq:2dd_coeff}
\alpha_{t+}^{kn}(\omega_{t+}) = \inner{b_{t+}}{\pi_{t+}(\omega_{t+})}; \quad \beta_{t+}^{kn}(\omega_{t+}) = c_{t+} - \inner{C_{t+}}{\pi_{t+}(\omega_{t+})}.
\end{align}
\State Update the set of coefficients as: $$\minorants{t+}{k}(\omega_{t+}) = \minorants{t+}{k-1}(\omega_{t+}) \cup \{(\alpha_{t+}^{kn}(\omega_{t+}), \beta_{t+}^{kn}(\omega_{t+}))\}.$$
\EndFor
\EndFor
\State Update the stage cost-to-go value function approximation using
\begin{align}\label{eq:sddpApprox}
h_{t+}^k(s_{t+}) = \max_{j \in \minorants{t+}{k}(\omega_{t+})} \{\alpha_{t+}^j + \inner{\beta_{t+}^j}{x_{t+}}\},
\end{align}
to obtain $f_{t}^k(s_{t}, u_{t}) = \inner{c_t}{x_t} + \inner{d_t}{u_t} + \sum_{\omega_{t+} \in \Omega_{t+}} p(\omega_{t+}) h_{t+}^k(s_{t+})$.
\EndFor \label{alg:backwardpassEnd}
\State Increment iteration count: $k \leftarrow k + 1$, and go to Line \ref{alg:sddpLoop}.
\end{algorithmic}
\end{algorithm}
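The backward-pass coefficient computation in \eqref{eq:2dd_coeff} reduces to two linear-algebra operations per dual solution. The following is a minimal numpy sketch; the function name and the shapes of $b_{t+}$, $c_{t+}$, $C_{t+}$, and the dual vector are ours, for illustration only:

```python
import numpy as np

def cut_coefficients(b, c, C, pi):
    """Build one lower-bounding affine function ell(x) = alpha + <beta, x>
    from an optimal dual solution pi: alpha = <b, pi>, beta = c - C^T pi."""
    alpha = float(b @ pi)
    beta = c - C.T @ pi
    return alpha, beta
```

In SDDP these coefficients are stored per outcome $\omega_{t+}$ and weighted by the probabilities $p(\omega_{t+})$ when the stage approximation is assembled.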
In the work of Pereira and Pinto \cite{Pereira1991}, the backward pass proceeds as in the case of NBD (see Steps \ref{alg:backwardpassBegin}--\ref{alg:backwardpassEnd} in \algRef{alg:sddp}). At a non-terminal stage $t$ and for each element of the candidate solution set $\{u_t^{kn}\}$, backward pass states are computed using the linear dynamics in \eqref{eq:stDyn} for all possible outcomes in $\Omega_{t+}$. With each of these backward pass states as input, an optimization problem is solved in stage $t+$ and the optimal dual solution is used to compute a lower bounding affine function. Since this procedure requires subproblems to be solved at all the nodes along all the sample-paths simulated in the forward pass, the approach is ideal for narrow trees (few possible realizations per stage). However, the computational issues resurface when the number of outcomes per stage increases. Donohue and Birge proposed the abridged NBD algorithm in \cite{Donohue2006} to address this issue: their forward pass proceeds only along a subset of candidate states (termed ``branching'' states), and the backward pass solves subproblems only along the trajectories of these branching states. Subsequently, it was proposed in \cite{Linowsky2005} and \cite{Philpott2008} that sampling procedures can be adopted in the backward pass as well. We make the following observations regarding the original SDDP procedure and its variants:
\begin{enumerate}
\item Each collection of affine functions $\minorants{t}{kn}(\omega)$ is associated with a unique candidate solution $u_{t-}^{kn}$ at stage $(t-1)$. The cost-to-go value function approximation in \eqref{eq:sddpApprox} is a piecewise linear function in which the pointwise maximum is defined over the collections of affine functions generated across all the sample-paths, i.e., $\mathcal{J}_{t+}^k(\omega) = \cup_{n=1}^N \mathcal{J}_{t+}^{kn}(\omega)$. In addition, if the uncertainty is confined to the state dynamics, then the cuts can be shared across the outcomes $\omega \in \Omega_{t+}$. This ``sharing'' of cuts is possible due to the stagewise independence of exogenous information and was first proposed in \cite{Infanger1996}.
\item An SAA of the problem in \eqref{eq:mslpt} can be constructed by replacing the true distribution of $\tilde{\omega}_t$ by the empirical distribution based on a random sample $\{\omega_t^1, \omega_t^2, \ldots, \omega_t^N\}$ for all $t \in \mathcal{T}\setminus\{0\}$. These random samples are generated independently to ensure that the stagewise independence assumption is respected. An SAA-based SDDP algorithm was analyzed in \cite{Shapiro2011}.
\item The forward pass sampling must ensure that each of the $|\Omega_1| \times |\Omega_2| \times \ldots \times |\Omega_T|$ possible sample-paths is visited infinitely many times w.p.1. If sampling is employed in the cut generation procedure (as in \cite{Chen1999,Philpott2008}), it must be performed independently of the forward pass sampling and must ensure that each element of $\Omega_t$ is sampled infinitely many times w.p.1 at all stages.
\end{enumerate}
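The stagewise-independent forward sampling described in the last observation can be met by drawing each stage independently with full support. The following is a minimal sketch, assuming hypothetical finite outcome sets and uniform draws:

```python
import random

def sample_paths(stage_outcomes, N, seed=0):
    """Draw N sample-paths, one outcome per stage, with the stages
    sampled independently of one another (stagewise independence)."""
    rng = random.Random(seed)
    return [[rng.choice(outcomes) for outcomes in stage_outcomes]
            for _ in range(N)]
```

Since every outcome has positive probability at every stage and the draws are independent across iterations, each of the finitely many sample-paths is revisited infinitely often w.p.1.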
In contrast to the SDDP algorithm, where a fixed sample is used at each stage, our SDLP algorithm will generate approximations that are based on sample average functions constructed using a sample whose size increases with iterations. The sequential nature of introducing new observations into the sample requires additional care within the algorithm design, particularly in the backward pass when approximations are generated (Steps \ref{alg:backwardpassBegin}--\ref{alg:backwardpassEnd}). We present these details in the next section.
\section{Stochastic Dynamic Linear Programming} \label{sect:sdlp}
An iteration of SDLP involves two principal steps: forward and backward recursion. The use of forward and backward recursions is common to almost all the multistage SP algorithms (except those based on progressive hedging \cite{Rockafellar1991}). The forward-backward recursion approach to solving dynamic optimization problems can be traced back to the differential dynamic programming (DDP) algorithm \cite{Jacobson1970differential}. The SDLP algorithm is closely related to the DDP algorithm in the sense that we create locally accurate approximations of the subdifferential, whereas DDP works with quadratic approximations of smooth deterministic dynamic control problems. The algorithmic constructs of SDLP are designed to accommodate the inherent non-smoothness of MSLP models, and of course, stochasticity. We present the details of these constructs for iteration $k$ of the algorithm. Note that we make the same assumptions as the SDDP algorithm.
\subsection{Forward Recursion} \label{sect:SDLPforwardPass}
The forward recursion begins by solving the following quadratic regularized optimization problem:
\begin{align} \label{eq:regObjFnApprox}
\min_{u_0 \in \mathcal{U}_0}~\bigg\{f_0^{k-1}(s_0,u_0) + \frac{\sigma}{2}\|u_0 - \hat{u}_0^{k-1}\|^2 \bigg \}.
\end{align}
Here, the proximal parameter $\sigma \geq 1$ is assumed to be given. We denote the optimal solution of the above problem as $u_0^k$ and refer to it as the candidate solution. The incumbent solution $\hat{u}_0^{k-1}$ used in the proximal term is similar to that used in the regularized L-shaped \cite{Ruszczynski1986} and 2-SD \cite{Higle1994} algorithms. This is followed by simulating a sample-path $\future{\omega}{0}^k$ that is generated independently of previously observed sample-paths. The remainder of the forward recursion computations is carried out only along this simulated sample-path in two passes: a prediction pass and an optimization pass.
\subsubsection*{Prediction Pass} At all non-terminal stages we use a regularized stage optimization problem which is centered around the incumbent solution. The goal of the prediction pass is to make sure that the incumbent solutions, and the corresponding incumbent states, satisfy the underlying model dynamics in \eqref{eq:stDyn} along the current sample-path $\future{\omega}{0}^k$. Given the initial state $x_0$, the prediction pass starts by using the root-stage incumbent solution $\hat{u}_0^k$ and computing the incumbent state for stage-$1$ as: $\hat{x}_1^k = \dynamics{1}{x_0, \omega_{1}^k, \hat{u}_0^k}$. At each subsequent stage, we use the BFP to identify the incumbent solutions as $\hat{u}_t^k(\hat{s}_t^k) = \mathcal{M}_t(\hat{s}_t^k)$. Here, $\mathcal{M}_t: \mathcal{S}_t \rightarrow \mathcal{U}_t$ is a vector-valued mapping that takes the state vector $s_t$ as an input and maps it onto a solution in $\mathcal{U}_t(s_t)$. We postpone the details of specifying the BFP to \S\ref{sect:incumbentSelection} and continue with the algorithm description here. We proceed by computing the incumbent state using \eqref{eq:stDyn} and identifying the incumbent solution using the BFP for the remainder of the horizon. At the end of the prediction pass, we have incumbent state $\{\hat{x}_t^k\}$ and solution $\{\hat{u}_t^k\}$ trajectories\footnote{We will use the more explicit notation $\hat{u}_t^k(\hat{s}_t^k)$ that shows the dependence of the incumbent solution on the input incumbent state only when it does not add undue notational burden. In most cases, we will simply use $\hat{u}_t^k$ for the incumbent solution.} that satisfy the state dynamics in \eqref{eq:stDyn} over the entire horizon.
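The prediction pass is a single forward sweep that couples the dynamics $\mathcal{D}_t$ with the BFP mapping $\mathcal{M}_t$. The following is a minimal sketch, where \texttt{dynamics} and \texttt{policy} are stand-ins for the problem-specific $\mathcal{D}_t$ and $\mathcal{M}_t$ (both assumed to be given callables):

```python
def prediction_pass(x0, u0_hat, omegas, dynamics, policy):
    """Propagate the incumbent trajectory along the sampled path:
    x_t = D_t(x_{t-1}, omega_t, u_{t-1}), then u_t = M_t(s_t)."""
    xs, us = [], [u0_hat]
    x = x0
    for t, omega in enumerate(omegas, start=1):
        x = dynamics(t, x, omega, us[-1])   # incumbent state
        xs.append(x)
        us.append(policy(t, (x, omega)))    # incumbent decision via BFP
    return xs, us
```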
\subsubsection*{Optimization Pass} After completing the prediction pass, the optimization pass is carried out to simulate candidate solutions along the current sample-path $\future{\omega}{0}^k$ for all $t \in \mathcal{T}\setminus\{T\}$:
\begin{align}\label{eq:forwardt}
u_t^k \in \mathrm{argmin} \{ f_t^{k-1}(s_t^k, u_t) + \frac{\sigma}{2}\|u_t - \hat{u}_t^{k}(\hat{s}_t^k) \|^2~|~u_t \in \mathcal{U}_t(s_t^k)\}.
\end{align}
Here $f_t^{k-1}(s_t,u_t)$ is the current approximation of the cost-to-go value function and the proximal parameter $\sigma \geq 1$ is assumed to be given. Structurally, $f_t^{k-1}$ is a piecewise affine and convex function and is similar to the approximations used in the SDDP algorithm. However, each individual piece is a minorant\footnote{Since the approximations generated in sequential sampling-based methods are based on statistical estimates which are updated iteratively, we use the term ``minorant'' to refer to the lower bounding affine functions. This usage follows its introduction in \cite{Sen2014} and is intended to distinguish them from the more traditional ``cuts'' in DD-based methods.} generated using certain sample average functions. The candidate decision for a particular stage is used to set up the subsequent endogenous state $x_{t+}^k = \mathcal{D}_{t+}(x_t^k, \omega_{t+}^k, u_t^k)$ and thus the input state $s_{t+}^k$. We refer to the decision problem in \eqref{eq:forwardt} as \textit{Timestaged Decision Simulation} at stage $t$ (TDS$_t$). This completes the optimization pass, and hence the forward recursion, for the current iteration. At the end of the forward recursion, we have the incumbent trajectory $\{\hat{x}_t^k\}$ and the candidate trajectory $\{x_t^k\}$ which will be used for updates during the backward recursion.
\subsection{Backward Recursion} \label{sect:SDLPbackwardPass}
The primary goal of the backward recursion procedure is to update the cost-to-go value function approximations $f_t^{k-1}$ at all non-terminal stages. As the name suggests, these calculations are carried out backward in time, starting from the terminal stage to the root-stage, along the same sample-path that was observed during the forward recursion. These calculations are carried out for both the candidate as well as the incumbent trajectories.
In both the DD and SD-based approaches, the value function is approximated by the pointwise maximum of affine functions. However, the principal difference between these approaches lies in how the expected value function is approximated. In DD-based methods, the object approximated is the true expected value function, which requires knowledge of the probability distribution, or an SAA with a fixed sample (as in \eqref{eq:saa}). On the other hand, the SD-based methods create successive approximations $\{f_t^k\}$ (for $t <T$) that provide a lower bound on a sample average approximation using only $k$ observations in iteration $k$, and therefore satisfy:
\begin{align} \label{eq:2sdLB}
f_t^k(s_t, u_t) - \inner{c_t}{x_t} - \inner{d_t}{u_t} \leq \widehat{H}_{t+}^k(s_{t+}) := \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) h_{t+}(x_{t+},\omega_{t+}),
\end{align}
where $x_{t+}$ is the endogenous state obtained from \eqref{eq:stDyn} with $(x_t,\omega_{t+},u_t)$ as input, for all $\omega_{t+} \in \Omega_{t+}^k$ and $u_t \in \mathcal{U}_t(s_t)$. The quantity $p^k(\omega_{t+})$ in \eqref{eq:2sdLB} measures the relative frequency of an observation, defined as the number of times $\omega_{t+}$ is observed ($\kappa^k(\omega_{t+})$) over the number of iterations ($k$). This quantity approximates the unconditional probability of exogenous information at stage $t+$, and is updated as follows. Given the current sample-path $\future{\omega}{0}^k$, the collection of observations at a non-root stage $t > 0$ is updated to include the latest observation $\omega_t^k$ as: $\Omega_t^{k} = \Omega_t^{k-1} \cup \{\omega_t^k\}$. The observation count is also updated as: $\kappa^k(\omega_t) = \kappa^{k-1}(\omega_t) + \mathbbm{1}_{\omega_t = \omega_t^k}$, for all $\omega_t \in \Omega_t^k$. Using these counts, the observation frequency for $\omega_t \in \Omega_t^k$ is given by: $p^k(\omega_t) = \frac{\kappa^k(\omega_t)}{k}$. Notice the superscript $k$ (iteration count) that is used in our notation of the SAA function $\widehat{H}_{t+}^k$, the collection of observations $\Omega_{t+}^k$, and the observation frequency $p^k$. This is intended to convey the sequential nature of SDLP.
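The bookkeeping for $\Omega_t^k$, $\kappa^k$, and $p^k$ described above amounts to maintaining one counter per stage. A minimal sketch (the class name is ours, for illustration):

```python
from collections import Counter

class StageObservations:
    """Sequentially updated empirical distribution for one stage,
    mirroring the updates of Omega_t^k, kappa^k, and p^k."""
    def __init__(self):
        self.count = Counter()   # kappa^k(omega)
        self.k = 0               # iteration count

    def observe(self, omega):
        # Omega_t^k = Omega_t^{k-1} union {omega_t^k}; count via indicator
        self.k += 1
        self.count[omega] += 1

    def frequency(self, omega):
        # p^k(omega) = kappa^k(omega) / k
        return self.count[omega] / self.k
```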
\begin{figure}
\centering
\includestandalone[width=0.9\textwidth]{./figures/sdlpSequentialSampling}
\caption{Uncertainty representation after $5$ iterations. The green path denotes the sample-path observed in iteration $5$. The numbers on the nodes represent the number of times the observation was encountered, i.e., $\kappa^k(\omega)$. New nodes are added to the representation as and when they are encountered. For example, the second node in $\Omega_2^5$ was encountered for the first time in iteration 5.}\label{fig:sequentialSampling}%
\end{figure}
\subsubsection{Terminal Stage Approximation} At the terminal stage, recall that $\expect{h_{T+}(s_{T+})}{} = 0$, and the value function $h_T$ is the value of a deterministic linear program for a given state input $s_T$. The sample average $\widehat{H}_T^k(s_{T}) = \sum_{\omega_T \in \Omega_T^k} p^k(\omega_T) h_T(x_T, \omega_T)$ provides an unbiased estimate of $\expect{h_T(\tilde{s}_T)}{}$. Hence, the value function at the penultimate stage ($t = T-1$) can be approximated using a procedure similar to the one employed in the 2-SD algorithm.
In this procedure, a subproblem corresponding to the current observation $\omega_T^k$ is set up and solved. This subproblem uses $s_T^k = (x_T^k,\omega_T^k)$ as input, where $x_T^k = \mathcal{D}_T(x_{T-1}^k, \omega_T^k, u_{T-1}^k)$. Let the optimal dual solution obtained be denoted as $\pi_T^k(\omega_T^k)$, which is added to the collection of previously discovered dual vertices: $\Pi_T^k = \Pi_T^{k-1} \cup \{\pi_T^k(\omega_T^k)\}$. For other observations in $\Omega_T^k$, i.e., $\omega_T \in \Omega_T^k$ and $\omega_T \neq \omega_T^k$, we identify the best dual vertex in $\Pi_T^k$ using the ``argmax'' operation as in the 2-SD algorithm \cite{Higle1991}. This operation is as follows:
\begin{align} \label{eq:argmax}
\pi_T^k(\omega_T) \in \mathrm{argmax} \{\inner{\pi_T}{(b_T - C_Tx_T)}~|~\pi_T \in \Pi_T^k\}.
\end{align}
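The ``argmax'' operation scans the previously discovered dual vertices and keeps the one that maximizes $\inner{\pi_T}{b_T - C_T x_T}$. A minimal numpy sketch (the function name and shapes are illustrative):

```python
import numpy as np

def argmax_dual(Pi, b, C, x):
    """Select the best previously discovered dual vertex, as in the
    2-SD 'argmax' step: maximize <pi, b - C x> over pi in Pi."""
    rhs = b - C @ x
    scores = [pi @ rhs for pi in Pi]
    return Pi[int(np.argmax(scores))]
```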
Using the dual vertices $\{\pi_T^k(\omega_T)\}_{\omega_T \in \Omega_T^k}$, we compute the lower bounding affine function $\ell_T^k(s_T) := \alpha_T^k(\omega_T) + \inner{\beta_T^k(\omega_T)}{x_T}$, where
\begin{align} \label{eq:2sd_coeff}
\alpha_T^k(\omega_T) = \inner{b_T}{\pi_T^k(\omega_T)}; \quad \beta_T^k(\omega_T) = c_T - \inner{C_T}{\pi_T^k(\omega_T)}.
\end{align}
The above calculations are also carried out for the incumbent state $\hat{s}_T^k$, resulting in the affine function $\hat{\ell}_T^k(s_T) = \hat{\alpha}_T^k(\omega_T) + \inner{\hat{\beta}_T^k(\omega_T)}{x_T}$. The set of affine functions thus obtained ($\minorants{T}{k} = \minorants{T}{k-1} \cup \{\ell_T^k(s_T), \hat{\ell}_T^k(s_T)\}$) provides the piecewise affine lower bounding function to the value function $h_T(s_T)$ that is given by:
\begin{align} \label{eq:minoT}
h_T^k(s_T) = \max_{j \in \minorants{T}{k}(\omega_T)} \{\ell_T^j(s_T) = \alpha_T^j(\omega_T) + \inner{\beta_T^j(\omega_T)}{x_T}\}.
\end{align}
The above function provides an outer linearization of the terminal value function.
\subsubsection{Non-terminal Stage Approximation}
When updating the approximations at a non-terminal stage $t$, we have access to the minorants at stage $t+$ (recall that the value functions are being updated recursively backwards from the terminal stage). Using these we can define:
\begin{align} \label{eq:samplMean}
H_t^k(s_t) := \inner{c_t}{x_t} + \min_{u_t \in \mathcal{U}_t(s_t)}~& \inner{d_t}{u_t} + \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) h_{t+}^k(s_{t+}),
\end{align}
where $s_{t+} = (\dynamics{t+}{x_t, \omega_{t+}, u_t}, \omega_{t+})$ for all $\omega_{t+} \in \Omega_{t+}^k$. The expression in \eqref{eq:samplMean} represents a sample average computed over the current observations $\Omega_{t+}^k$ at stage $t+$ at an arbitrary input state $s_t$. Since we use lower bounding approximations $h_{t+}^k$ in building this sample average, this sampled estimate is biased. The stage approximation is updated using a lower bound to the above sample average function, and hence, is biased as well.
In order to compute this lower bound, notice that we can obtain the subgradient, i.e., $\beta_{t+}^k(\omega_{t+}) \in \partial h_{t+}^k(\dynamics{t+}{x_t^k, \omega_{t+}, u_t^k},\omega_{t+})$ using the collection of affine functions $\minorants{t+}{k}(\omega)$ for all observations $\omega_{t+} \in \Omega_{t+}^k$ (see \S\ref{sect:subgradientPolicy} for details). Let $\alpha_{t+}^k(\omega_{t+})$ be the corresponding intercept term. Using these, a valid lower bound to the sample average function in \eqref{eq:samplMean} can be written as:
\begin{align} \label{eq:samplMeanPrimal}
H_t^k(s_t) \geq \inner{c_t}{x_t} + \min_{u_t \in \mathcal{U}_t(s_t)}~\inner{d_t}{u_t} + \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) \bigg[\alpha_{t+}^k(\omega_{t+}) + \inner{\beta_{t+}^k(\omega_{t+})}{x_{t+}}\bigg].
\end{align}
Substituting the state dynamics equation in \eqref{eq:stDyn}, and dualizing the linear program on the right-hand side of the above inequality, we obtain:
\begin{align}\label{eq:samplMeanDual}
H_t^k(s_t) \geq \inner{c_t}{x_t} + &\bar{\alpha}_{t+}^k + \inner{\bar{\beta}_{t+}^k}{x_t} + \\ &\max~\{ \inner{\pi_t}{(b_t - C_tx_t)}~|~\inner{D_t}{\pi_t} \leq \bar{\rho}_{t+}^k,~ \pi_t \leq 0\}, \notag
\end{align}
where,
\begin{align*}
&\bar{\beta}_{t+}^k = \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) \inner{\beta_{t+}^k(\omega_{t+})}{A_{t+}},~~
\bar{\rho}_{t+}^k = d_t + \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) \inner{\beta_{t+}^k(\omega_{t+})}{B_{t+}}, \\
&\text{and } \bar{\alpha}_{t+}^k = \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) [\alpha_{t+}^k(\omega_{t+}) + \inner{\beta_{t+}^k(\omega_{t+})}{a_{t+}}].
\end{align*}
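The aggregate coefficients $\bar{\alpha}_{t+}^k$, $\bar{\beta}_{t+}^k$, and $\bar{\rho}_{t+}^k$ are frequency-weighted sums over the current observation set. A minimal numpy sketch, which assumes for simplicity that the dynamics data $a_{t+}$, $A_{t+}$, and $B_{t+}$ do not vary with the observation:

```python
import numpy as np

def aggregate_coefficients(p, alpha, beta, a, A, B, d):
    """Frequency-weighted aggregates bar_alpha, bar_beta, bar_rho from the
    per-observation minorant coefficients (shapes are illustrative)."""
    bar_alpha = sum(p[w] * (alpha[w] + beta[w] @ a) for w in p)
    bar_beta  = sum(p[w] * (A.T @ beta[w]) for w in p)
    bar_rho   = d + sum(p[w] * (B.T @ beta[w]) for w in p)
    return bar_alpha, bar_beta, bar_rho
```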
We refer to the linear program on the right-hand side of inequality in \eqref{eq:samplMeanDual} as the {\it stagewise-dual approximation} at stage $t$ and denote it as (SDA$_t^k$). Let $\pi_t^k(\omega_t^k)$ denote the optimal dual solution obtained by solving (SDA$_t^k$) with $s_t^k$ as input. Using this we obtain a lower bounding affine function $\ell_t^k(s_t) = \alpha_t^k(\omega_t^k) + \inner{\beta_t^k(\omega_t^k)}{x_t}$ with the following coefficients:
\begin{align}\label{eq:coefft}
\alpha_t^k(\omega_t^k) =~\inner{\pi_t^k(\omega_t^k)}{b_t} + \bar{\alpha}_{t+}^k~; \qquad \beta_t^k(\omega_t^k) =~ c_t-\inner{C_t}{\pi_t^k(\omega_t^k)} + \bar{\beta}_{t+}^k.
\end{align}
Similar calculations using $\hat{\pi}_t^k(\omega_t^k)$, an optimal solution to (SDA$_t^k$) with $\hat{s}_t^k$ as input, yield an incumbent affine function $\hat{\ell}_t^k(s_t)$. As before, these functions are included in a collection of affine functions to obtain the updated set $\minorants{t}{k}(\omega_t^k)$.
\begin{algorithm}[!t]
\caption{Stochastic Dynamic Linear Programming} \label{alg:sdlp}
\begin{algorithmic}[1]
\State \textbf{Initialization}:
\State Choose a proximal parameter $ \sigma \in [\sigma^{min}, \sigma^{max}]$ with $1 \leq \sigma^{min} < \sigma^{max}$.
\State Set observation sets $\Omega_t^0 = \emptyset$ and include a trivial affine function $\ell_t^0 = 0$ in the set $\minorants{t}{0}$ for all $t \in \mathcal{T}$; set the iteration counter $k \leftarrow 1$.
\State {\bf Forward recursion}: Decision simulation along simulated sample-path \label{alg:sdlpLoop}
\State Solve the root-stage optimization problem of the form \eqref{eq:regObjFnApprox} to identify $u_0^k$.
\State Simulate a sample-path $\future{\omega}{0}^k$.
\State {\it Prediction pass}:
\For {$t = 1,\ldots,T-1$}
\State Setup the incumbent state $\hat{x}_t^k = \dynamics{t}{\hat{x}_{t-}^k, \omega_t^k, \hat{u}_{t-}^k(\hat{s}_{t-}^k)}$.
\State Identify an incumbent solution $\hat{u}_t^k(\hat{s}_t^k) = \mathcal{M}_t^k(\hat{s}_t^k)$.
\EndFor
\State {\it Optimization pass}:
\For {$t = 1,\ldots,T$}
\State Setup the candidate state $x_t^k = \dynamics{t}{x_{t-}^k, \omega_t^k, u_{t-}^k}$.
\State Solve the stage optimization problem \eqref{eq:forwardt} using $s_t^k$ as input, and obtain the \hspace*{0.4cm} candidate primal solution $u_t^k$.
\EndFor
\State {\bf Backward recursion}: Update value function approximations.
\For {$t = T,\ldots,1$}
\State Setup the stagewise-dual approximation \eqref{eq:samplMeanDual}.
\State Solve the dual approximation using the candidate and the incumbent states, \hspace*{0.4cm} and compute the coefficients for affine functions using \eqref{eq:coefft}.
\State Obtain the updated value function approximation as in \eqref{eq:objUpdtt}.
\EndFor
\State Increment the iteration count $k \leftarrow k + 1$, and go to Line \ref{alg:sdlpLoop}.
\end{algorithmic}
\end{algorithm}
While it is true that the latest affine functions satisfy $H_t^k(s_t) \geq \ell_t^k(s_t)$, the same does not hold for affine functions generated at earlier iterations. Hence, there may exist a $j \in \minorants{t}{k}(\omega_t)$ such that the affine function $\ell_t^j(s_t)$ does not lower bound the current sample average $H_t^k(s_t)$. In keeping with the updates of 2-SD \cite{Higle1991}, the old minorants need to be updated as the sample average estimate changes during the course of the algorithm. Under assumption \assumRef{assum:zeroLB}, this is achieved by scaling down the previously generated affine functions. In the two-stage case, 2-SD minorants are updated by multiplying the coefficients by $(k-1)/k$. In the multistage case, the minorants are updated\footnote{The exponent $(T-t)$ results from the fact that minorants in the future $T-t$ stages are also updated in a similar manner. \theoremRef{thm:outerLinearization} provides the formal argument.} as follows
\begin{align}\label{eq:minot}
h_t^k (s_t)= \max~ \bigg \{ \bigg \{ \bigg (\frac{k-1}{k} \bigg)^{T-t}~\ell_t^j(s_t) \bigg \}_{j \in \minorants{t}{k-1}(\omega_t)},~\ell_t^k(s_t), ~\hat{\ell}_t^k(s_t) \bigg\}.
\end{align}
Notice that both the candidate and incumbent affine functions generated in previous iterations are treated similarly while scaling down.
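The update in \eqref{eq:minot} scales every stored affine piece by $((k-1)/k)^{T-t}$ before appending the new candidate and incumbent pieces; evaluating the minorant is then a pointwise maximum. A minimal sketch, representing each affine function by an (intercept, slope) pair:

```python
def update_minorant(cuts, new_cuts, k, T, t):
    """Scale previously generated affine pieces by ((k-1)/k)**(T-t) and
    append the latest candidate/incumbent pieces."""
    g = ((k - 1) / k) ** (T - t)
    scaled = [(g * a, [g * bi for bi in b]) for (a, b) in cuts]
    return scaled + list(new_cuts)

def evaluate_minorant(cuts, x):
    """Pointwise maximum of the stored affine functions at x."""
    return max(a + sum(bi * xi for bi, xi in zip(b, x)) for (a, b) in cuts)
```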
We use these updated minorants to obtain the stage objective function as follows:
\begin{align}\label{eq:objUpdtt}
f_t^k(s_t, u_t) =~& \inner{c_t}{x_t} + \inner{d_t}{u_t} + \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) h_{t+}^k(s_{t+}),
\end{align}
where $s_{t+} = (\dynamics{t+}{x_t, \omega_{t+}, u_t}, \omega_{t+})$, for all $\omega_{t+} \in \Omega_{t+}^k$. Similar updates are carried out at all the non-terminal stages by progressing backwards to the root-stage along the same sample-path that was used in the forward recursion. The backward recursion for iteration $k$ is said to be complete once the root-stage objective function is updated. The sequentially ordered steps of SDLP algorithm are presented in \algRef{alg:sdlp}.
\subsubsection{Comparison of DD and SD-based approximations}
The complete recourse assumption ensures that the dual feasible set is non-empty and the optimal dual solution $\pi_T$ is an extreme point of $\{\pi_T ~|~ \inner{D_T}{\pi_T} \leq d_T,~ \pi_T \leq 0 \}$. There are finitely many of these extreme points, and hence, the coefficients for the terminal stage computed using \eqref{eq:2dd_coeff} for the DD-based algorithms or \eqref{eq:2sd_coeff} for the SD-based methods take finitely many values.
In DD-based multistage algorithms, the coefficients at stage $t+$ belong to a finite set, and therefore, there exists an iteration $k^\prime$ such that the set of coefficients $\mathcal{J}_{t+}^k(\omega) = \mathcal{J}_{t+}^{k^\prime}(\omega)~\forall \omega \in \Omega_{t+}$ for $k > k^\prime$. Consequently, the dual feasible region of the problem solved in the backward pass has the following form:
\begin{align} \label{eq:sddpDualSet}
\Pi_t^{k,DD} = \left\{ (\pi_t, \theta_{t+}) ~\Bigg \vert~
\begin{array}{l}
\inner{D_t}{\pi_t} \leq d_t + \sum_{\omega \in \Omega_{t+}} \sum_{j \in \mathcal{J}_{t+}^k(\omega)} \theta_{t+}^j(\omega) \beta_{t+}^j(\omega), \\
\sum_{j \in \mathcal{J}_{t+}^k(\omega)} \theta_{t+}^j(\omega) = p(\omega) \quad \forall \omega \in \Omega_{t+},~ \pi_t \leq 0
\end{array} \right \}.
\end{align}
Notice that this dual feasible region does not change for iterations $k > k^\prime$. Since there are finitely many extreme points of $\Pi_t^{k,DD}$, the coefficients computed using these extreme point solutions take at most finitely many distinct values at stage $t$.
In SDLP, notice that the update of the old affine functions in \eqref{eq:minot} at stage $t+$ can be viewed as a convex combination of the coefficient vector $(\alpha_{t+}^j, \beta_{t+}^j)$ and a zero vector. Due to these updates, the dual feasible region depends on the updated coefficients (particularly $\beta_{t+}^k(\omega)$) as well as the frequencies $p^k(\omega)$:
\begin{align}\label{eq:sdlpDualSet}
\Pi_t^{k,SD} = \left\{ \pi_t ~\Bigg\vert~
\begin{array}{l}
\inner{D_t}{\pi_t} \leq d_t + \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) \inner{\beta_{t+}^k(\omega_{t+})}{B_{t+}}, \\
\pi_t \leq 0
\end{array} \right\}.
\end{align}
This implies that the dual solutions used to compute the coefficients no longer belong to a finite set. However, following Assumption \assumRef{assum:completeResource}, the dual feasible set in (SDA$_t^k$) is bounded. Therefore, the coefficients computed in \eqref{eq:coefft} for a non-terminal stage are only guaranteed to lie in a compact set. Proceeding backwards, we can conclude that this is the case for the coefficients at all non-terminal stages. These observations are summarized in the following lemma.
\begin{lemma} \label{lemma:coeff}
Suppose the algorithm runs for infinitely many iterations, and Assumptions \assumRef{assum:compact} and \assumRef{assum:completeResource} hold. Then, for all $k \geq 1$:
\begin{enumerate}[label=(\roman*)]
\item The coefficients of cuts generated within DD-based methods in \eqref{eq:2dd_coeff}, and coefficients of minorants generated for the terminal stage within SD-based methods in \eqref{eq:2sd_coeff} belong to finite sets. \label{lemma:coeff_a}
\item The coefficients of minorants generated within SD-based methods for the non-terminal stages in \eqref{eq:coefft} belong to compact sets. \label{lemma:coeff_b}
\end{enumerate}
\end{lemma}
As a consequence of \ref{lemma:coeff_a} in the above lemma and \assumRef{assum:indep}, a finite number of cuts are generated during the course of DD-based algorithms for MSLP models. This is possible because these algorithms utilize the knowledge of transition probabilities in computing cut coefficients. Additionally, these cuts provide a lower bound on the true value function and are not required to be updated over the course of the algorithm. It must be noted that the finiteness of the number of cuts pertains only to DD-based methods applied to MSLP problems. In the case of multistage stochastic non-linear convex programs (e.g., \cite{Girardeau2015convergence, GuiguesRegularized2020}), the number of cuts is not guaranteed to be finite. In such cases, the coefficients in the non-terminal stages of DD-based methods also belong to compact sets, albeit for a different reason than in the SD-based methods for MSLP models.
The subgradients computed in the SD-based methods are stochastic in nature. Therefore, only affine functions generated in the current iteration satisfy the lower bounding property for the current sample average approximation, but not necessarily for the true value function. The previous affine functions have to be updated using the scheme described in \eqref{eq:minot}. This scheme ensures that the minorant $h_t^k$, obtained after computing the current affine function and updating all previous affine functions, provides a lower bound to the sample average function $H_t^k$ at all non-terminal stages. The outer linearization property of the minorants is formalized in the following theorem.
\begin{theorem}\label{thm:outerLinearization}
Suppose Assumptions \assumRef{assum:compact}--\assumRef{assum:indep} hold.
\begin{enumerate}[label=(\roman*)]
\item The minorant computed in \eqref{eq:minoT} for the terminal stage satisfies:
\begin{subequations}
\begin{align} \label{eq:outerLinearization_T}
h_T(s_T) \geq h_T^k(s_T) \geq h_T^{k-1}(s_T) \geq \ldots \geq h_T^j(s_T),
\end{align}
for all $1 \leq j \leq k$, $s_T \in \mathcal{S}_T$.
\item At non-terminal stages, the minorant computed in \eqref{eq:minot} satisfies for $s_t \in \mathcal{S}_t$:
\begin{align} \label{eq:outerLinearization_t}
H_t^k(s_t) \geq h_t^k(s_t) \geq \bigg(\frac{k-1}{k}\bigg)^{T-t} h_t^{k-1}(s_t).
\end{align}
\end{subequations}
\end{enumerate}
\end{theorem}
\begin{proof}
The first part of the theorem follows directly from linear programming duality and the construction of the affine functions $\ell_T^k$ in \eqref{eq:argmax} and \eqref{eq:2sd_coeff}. For the proof of the second part, we use $m = \omega_{t+}^k$, the observation encountered at stage $t+$ in iteration $k$, and $n$ to index the set $\Omega_{t+}$. Following this notation, we denote $x_{t+} = \dynamics{t+}{x_t, \omega_{t+}, u_t}$ as $x_{nt+}$ and $s_{t+} = (x_{t+}, \omega_{t+})$ as $s_{nt+}$. Consider the stage sample average problem in \eqref{eq:samplMean}:
\begin{align}\label{eq:outer_1}
H_t^k(s_t) - & \inner{c_t}{x_t} =~ \min_{u_t \in \mathcal{U}_t(s_t)} \inner{d_t}{u_t} + \sum_{n \in \Omega_{t+}^k} p^k(n) h_{t+}^k(s_{nt+}).
\end{align}
Recall that the affine function $\ell_t^k$ is computed using the dual solution of the problem on the right-hand side of the above equation. Using \eqref{eq:samplMeanDual} and linear programming duality, we obtain
\begin{align}
H_t^k(s_t) \geq \ell_t^k(s_t).
\end{align}
We distribute the summation in \eqref{eq:outer_1} over observations encountered in the first $i < k$ iterations (i.e., $\Omega_{t+}^i$) and those encountered after iteration $i$.
\begin{align*}
H_t^k(s_t) - & \inner{c_t}{x_t} \\
=~ &\min_{u_t \in \mathcal{U}_t(s_t)} \inner{d_t}{u_t} +
\sum_{n \in \Omega_{t+}^i} p^k(n) h_{t+}^k(s_{nt+}) + \sum_{n \in \Omega_{t+}^k \setminus \Omega_{t+}^i} p^k(n) h_{t+}^k(s_{nt+}). \notag
\end{align*}
Since $h_{t+}^k \geq 0$, we have
\begin{align*}
H_t^k(s_t) - \inner{c_t}{x_t} \geq~& \min_{u_t \in \mathcal{U}_t(s_t)} \inner{d_t}{u_t} + \sum_{n \in \Omega_{t+}^i} p^k(n) h_{t+}^k(s_{nt+}) \\
=~ & \min_{u_t \in \mathcal{U}_t(s_t)} \inner{d_t}{u_t} + \sum_{n \in \Omega_{t+}^i} \frac{\kappa^{i}(n) + \kappa^{[i,k]}(n)}{k} \cdot h_{t+}^k(s_{nt+}).
\end{align*}
For observations in $\Omega_{t+}^i$, we distribute the computation of their relative frequency by setting $\kappa^k(n) = \kappa^i(n) + \kappa^{[i, k]}(n)$, where $\kappa^{[i,k]}(n)$ is the number of times observation $n$ was encountered after iteration $i$. Once again invoking $h_{t+}^k \geq 0$ we obtain:
\begin{align*}
H_t^k(s_t) - \inner{c_t}{x_t} \geq~ & \min_{u_t \in \mathcal{U}_t(s_t)} \inner{d_t}{u_t} + \sum_{n \in \Omega_{t+}^i} \frac{i}{k} \times \frac{\kappa^{i}(n)}{i} \cdot h_{t+}^k(s_{nt+}).
\end{align*}
Recall that the minorants at stage $t+$ are updated in \eqref{eq:minot} by adding the new affine function to the collection while multiplying the affine functions generated in iteration $i$ by a factor of $(\frac{i}{k})^{T-t-1} < 1$. By replacing the current minorant $h_{t+}^k$ with the scaled version of the one available in iteration $i$, we have:
\begin{align}
H_t^k(s_t) \geq ~ & \inner{c_t}{x_t} + \min_{u_t \in \mathcal{U}_t(s_t)} \inner{d_t}{u_t} + \frac{i}{k} \sum_{n \in \Omega_{t+}^i} p^i(n) \bigg[\bigg(\frac{i}{k}\bigg)^{T-t-1} h_{t+}^i(s_{nt+})\bigg] \notag \\
\geq~ & \bigg(\frac{i}{k}\bigg)^{T-t} \bigg [ \inner{c_t}{x_t} + \min_{u_t \in \mathcal{U}_t(s_t)} \inner{d_t}{u_t} + \sum_{n \in \Omega_{t+}^i} p^i(n) h_{t+}^i(s_{nt+}) \bigg]. \notag
\end{align}
The second inequality follows from assumption \assumRef{assum:zeroLB}. Notice that the scaling factor used when $t+ = T$ reduces to one. In this case, the future cost corresponds to the terminal stage, and the affine functions satisfy $\ell_T^j(s_T) \leq h_T(s_T)$ for all $j \in \minorants{T}{k}(\omega_T)$. Therefore, $h_T^k(s_T) \leq h_T(s_T)$. At other stages, an affine function generated in iteration $i < k$, viz. $\ell_t^j(s_t)$ with $j \in \minorants{t}{i}$ provides a lower bound to the sample average in the same iteration $H_t^i(s_t)$. This leads us to conclude that
\begin{align} \label{eq:lowerBound_scaling}
H_t^k(s_t) \geq \bigg(\frac{i}{k}\bigg)^{T-t} H_t^i(s_t) \geq \bigg(\frac{i}{k}\bigg)^{T-t} \ell_t^j(s_t).
\end{align}
Applying the same argument for all $i < k$, and using the definition of the minorant in \eqref{eq:minot}, we obtain $H_t^k(s_t) \geq h_t^k(s_t)$. Moreover, for any $i < k$,
\begin{align*}
H_t^k(s_t) \geq \bigg(\frac{i}{k}\bigg)^{T-t} \ell_t^i(s_t) =~& \bigg(\frac{k-1}{k}\bigg)^{T-t} \times \bigg(\frac{k-2}{k-1}\bigg)^{T-t} \times \ldots \times \bigg(\frac{i}{i+1}\bigg)^{T-t} \ell_t^i(s_t) \\
=~& \bigg(\frac{k-1}{k}\bigg)^{T-t} \bigg(\frac{i}{k-1}\bigg)^{T-t} \ell_t^i(s_t).
\end{align*}
Taking the maximum over all affine functions generated in iterations $i < k$, and recalling that the minorant $h_t^{k-1}$ is built from these scaled affine functions, we obtain $H_t^k(s_t) \geq \big(\frac{k-1}{k}\big)^{T-t} h_t^{k-1}(s_t)$. This completes the proof.
\end{proof}
As noted in the above proof, the scaling factor $(\frac{i}{k})^{T-t}$ used in \eqref{eq:lowerBound_scaling} is applied to affine functions in $\minorants{t}{i}$ that were generated in iteration $i < k$. Since these affine functions are updated in every iteration, computational efficiency can be attained by using recursive updates. In iteration $k$, the affine functions in $\minorants{t}{k-1}$ are updated by multiplying them by the factor $(\frac{k-1}{k})^{T-t}$ and storing the updated minorants in $\minorants{t}{k}$. We refer the reader to \cite{Higle1996} and \cite{Gangammanavar2020sd} for details regarding efficient implementation of these updates. In the next result we capture the asymptotic behavior of the sequence of minorants $\{h_t^k\}$.
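To make the recursive update concrete, the following Python sketch (an illustration with our own data layout and names, not the implementation described in \cite{Higle1996} or \cite{Gangammanavar2020sd}) rescales stored stage-$t$ cuts by $(\frac{k-1}{k})^{T-t}$ once per iteration; the per-iteration factors telescope so that a cut generated in iteration $i$ carries the factor $(\frac{i}{k})^{T-t}$ by iteration $k$:

```python
# Illustrative sketch: a stage-t cut h(s) = alpha + <beta, x> stored in
# iteration i is rescaled once per iteration, so that by iteration k it
# carries the cumulative factor (i/k)^(T-t) used in the text.

def scale_cuts(cuts, k, T, t):
    """In iteration k, multiply every stored stage-t cut by ((k-1)/k)^(T-t)."""
    factor = ((k - 1) / k) ** (T - t)
    return [(factor * alpha, [factor * b for b in beta]) for alpha, beta in cuts]

T, t, i, k = 4, 1, 3, 10
cuts = [(2.0, [1.0, -0.5])]           # cut generated in iteration i = 3
for j in range(i + 1, k + 1):         # per-iteration factors telescope:
    cuts = scale_cuts(cuts, j, T, t)  # prod_{j=i+1}^{k} ((j-1)/j)^(T-t) = (i/k)^(T-t)
alpha, beta = cuts[0]
assert abs(alpha - 2.0 * (i / k) ** (T - t)) < 1e-9
```

Applying the update only to the latest collection $\minorants{t}{k-1}$, rather than rescaling every cut from its generating iteration, is what makes the implementation efficient.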
\begin{theorem}\label{thm:minorantAsymptotics}
Under assumptions \assumRef{assum:completeResource}, \assumRef{assum:fixed} and \assumRef{assum:indep}, the sequence of functions $\{h_t^k\}_k$ is equicontinuous and uniformly convergent at all non-root stages.
\end{theorem}
\proof Recall that the coefficients of the minorants belong to a compact set at all the non-root stages (\lemmaRef{lemma:coeff}). Therefore, $\{h_t^k\}$ is a sequence of bounded continuous functions with a uniform Lipschitz constant, say $M$. Further, the sequence $\{h_t^k\}$ converges pointwise for each $s_t \in \mathcal{S}_t$. Let $s_t^{k_1}$ and $s_t^{k_2}$ be input states such that $\|s_t^{k_1} - s_t^{k_2}\| < \epsilon/M$, for a given $\epsilon > 0$. From Lipschitz continuity, we have
\begin{align*}
|h_t^k(s_t^{k_1}) - h_t^k(s_t^{k_2})| \leq M\|s_t^{k_1} - s_t^{k_2}\| < \epsilon,
\end{align*}
for any $k \geq 1$. Hence, the sequence $\{h_t^k\}$ is equicontinuous. Equicontinuity and pointwise convergence together imply uniform convergence \cite{Rudin1976}.
\endproof
In contrast to the above results, the approximations created in the DD-based methods (see \eqref{eq:sddpApprox}) provide an outer linearization of a fixed cost function $H_t^N(\cdot)$. Since the probability distribution is explicitly used (as constants) in computing the DD-based cuts, the approximations improve monotonically over iterations. That is, $H_t^N(s_t) \geq h_t^k(s_t) \geq h_t^{k-1}(s_t)$ for all $s_t$, without any need for updates. We close this section with the following two remarks. The first contrasts the incorporation of sampling during the backward recursion of SDDP with the role of sampling adopted in SDLP. The second identifies the online sampling feature of SDLP that has many advantages in practical settings.
\remark{Sampling during backward recursion has also been explored in SDDP (e.g., \cite{Chen1999}, \cite{DeMatos2015improving}, and \cite{Philpott2008}). However, there are important factors that distinguish the value function updates undertaken during the backward recursion of SDLP from SDDP calculations. In SDLP, the latest sample-path along which the backward recursion calculations are carried out is included independently of previously encountered sample-paths. As a result, the set of sample-paths grows in size (by at most one) when compared to the set of sample-paths used in the previous iteration. If the latest sample-path was not encountered before, it was not included in calculations carried out in the backward recursion of any previous iteration. This is unlike SDDP, where the set of sample-paths is fixed and backward pass calculations are carried out over all scenarios in every iteration. Even when sampling is employed in the backward pass of SDDP, calculations are carried out along all sample-paths by either solving a subproblem or using the ``argmax'' procedure in \eqref{eq:argmax}; this type of cut formation was first suggested in \cite{Higle1991}. Even if the latest path was encountered in earlier iterations, the repeated observation results in an update of the empirical frequency associated with nodes along the latest sample-path. As a consequence, the weights (that are synonymous with estimated probability) used in calculating the SDLP cut coefficients \eqref{eq:coefft} differ from one iteration to the next. In SDDP, on the other hand, actual observation probabilities are used to calculate the value function approximation (see \eqref{eq:sddpApprox}) even when sampling is used on the backward pass.} \label{remark:samplingSDLP}
\remark{Since the SDLP algorithm works with data discovered through sequential sampling, it does not rely on any a priori knowledge of exogenous probability distribution. This feature makes this algorithm suitable to work with external simulators or statistical models that can better capture the nature of exogenous uncertainty. In each iteration, the algorithm can invoke a simulator to provide a new sample-path. This feature is particularly appealing when a priori representation of uncertainty using scenario trees is either cumbersome or inadequate due to computational and/or timeliness constraints. Such optimization problems are commonly encountered in the operations of power systems with significant renewable penetration. Due to the intermittent nature of renewable resources such as wind and solar, a scenario tree representation may be difficult (perhaps even impossible) to create within the timeliness constraints. State-of-the-art numerical weather prediction and other time series models are known to be more accurate descriptors of such uncertainty. Therefore, optimization algorithms which use sample-paths simulated from such models yield more reliable plans and cost estimates \cite{Gangammanavar2018, Gangammanavar2016}.}
\subsection{Subgradient and Incumbent Selection} \label{sect:policies}
In this section we address two important components of the SDLP algorithm: the ``argmax'' procedure, used during the backward recursion to identify the subgradient of an SDLP approximation at non-root stages, and the selection of an incumbent solution for the proximal term used during time-staged decision simulation.
\subsubsection{Subgradient Selection}\label{sect:subgradientPolicy} During the backward recursion, we build a lower bound to the sample average function $H_{t-}^k$ using the best lower bounding affine functions from the collection $\minorants{t}{k}$ for all $\omega_{t} \in \Omega_t^k$. This is accomplished differently depending on whether or not the observation belongs to the current sample-path $\future{\omega}{0}^k$. We utilize the collection of dual vertices $\Pi_t^k$ identified during the course of the algorithm for this purpose. We denote by $i(\pi_t)$ the iteration in which the dual vertex $\pi_t \in \Pi_t^k$ was generated. As seen in \eqref{eq:sdlpDualSet}, the dual vertex $\pi_{t} \in \Pi_t^k$ depends on $H_t^{i(\pi_t)}$, the sample average function in iteration $i(\pi_t)$. This dependence is reflected in the calculation of coefficients $(\bar{\alpha}_{t+}^{i(\pi_t)}, \bar{\beta}_{t+}^{i(\pi_t)})$ and the term $\bar{\rho}_{t+}^{i(\pi_t)}$ that defines the feasible set associated with $\pi_t$ (see \eqref{eq:samplMeanDual}).
{\bf For observation $\boldsymbol{\omega_{t}^k}$:} This observation is encountered at stage $t$ along the current sample-path. Consequently, in the current backward recursion we build and solve SDA$_t^k$ to optimality using $s_{t}^k$ as input. Using the optimal dual solution thus obtained, we compute the coefficients in \eqref{eq:coefft} of the hyperplane $\ell_{t}^k(s_{t})$ supporting SDA$_t^k$ at the candidate state. Similar calculations with $\hat{s}_{t}^k$ as input yield the hyperplane $\hat{\ell}_{t}^k(s_t)$ supporting SDA$_t^k$ at the incumbent state $\hat{s}_t^k$.
{\bf For observations $\boldsymbol{\omega_{t} \in \Omega_{t}^k\setminus \{\omega_{t}^k\}}$:} These are the observations not included in the current sample-path, and therefore, no backward recursion optimization is carried out for these observations. Instead, we use an ``argmax'' procedure to identify the subgradient approximations. These subgradients correspond to the best lower bounding affine functions of SDA$_t^k$ for these observations. In order to accomplish this, we maintain a set of dual solutions $\Pi_t^k$ obtained by solving the SDA$_t^i$ in iterations $i \leq k$ as in the case of 2-SD. For each $\omega_t \in \Omega_t^k\setminus \{\omega_t^k\}$, we setup $s_t = (x_t,\omega_t)$, where $x_t$ is computed with $(x_{t-1}^k, \omega_t, u_{t-1}^k)$ as input in \eqref{eq:stDyn}, and identify a dual solution:
\begin{align*} \pi_t^k(\omega_t) \in \mathrm{argmax} \bigg \{\bigg(\frac{i(\pi_t)}{k}\bigg)^{T-t} \inner{\pi_t}{(b_t - C_tx_t)}~|~\pi_t \in \Pi_t^k \bigg \}.
\end{align*}
The scaling factor used in the above calculation reflects the scaling of affine functions discussed in Theorem \ref{thm:outerLinearization}. Notice that the set of dual vertices $\Pi_t^k$ changes from one iteration to the next, which may lead to computational difficulties. We address this issue by using the constancy of the basis index sets that generate these dual vertices. Further discussion of this issue is provided in \S\ref{sect:incumbentSelection}. Using the dual solution obtained by the above procedure, we can compute the coefficients:
\begin{align*}
\alpha_t^k(\omega_t) =~& \bigg(\frac{i(\pi_t^k(\omega_t))}{k}\bigg)^{T-t} [\inner{\pi_t^k(\omega_t)}{b_t} + \bar{\alpha}_{t+}^{~i(\pi_t^k(\omega_t))} ], \\
\beta_t^k(\omega_t) =~& \bigg(\frac{i(\pi_t^k(\omega_t))}{k}\bigg)^{T-t} [\inner{-C_t}{\pi_t^k(\omega_t)} + \bar{\beta}_{t+}^{~i(\pi_t^k(\omega_t))} ].
\end{align*}
In essence, the above procedure identifies a dual solution $\pi_t^k(\omega_t)$ which was obtained using a SDA$_{t}^{i(\pi_t^k(\omega_t))}$, and scales it appropriately to provide the best lower bounding approximation to the current SDA$_{t}^k$.
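A minimal Python sketch of this selection rule is given below; the data layout and names are our own illustrative stand-ins, not the paper's implementation. Each stored dual vertex is paired with the iteration in which it was generated, and candidates are compared using the scaled objective $(\frac{i(\pi_t)}{k})^{T-t} \inner{\pi_t}{(b_t - C_tx_t)}$:

```python
# Illustrative sketch of the ``argmax'' selection over stored dual vertices.
# Each vertex pi is paired with the iteration i(pi) in which it was generated.

def argmax_dual(vertices, rhs, k, T, t):
    """vertices: list of (pi, i) pairs; rhs: the vector b_t - C_t x_t."""
    def scaled_value(entry):
        pi, i = entry
        return (i / k) ** (T - t) * sum(p * r for p, r in zip(pi, rhs))
    return max(vertices, key=scaled_value)

# Two dual vertices discovered in iterations 2 and 5; k = 10, T - t = 2.
vertices = [([-1.0, 0.0], 2), ([-0.5, -0.5], 5)]
rhs = [1.0, 3.0]
pi_star, i_star = argmax_dual(vertices, rhs, k=10, T=3, t=1)
assert (pi_star, i_star) == ([-1.0, 0.0], 2)
```

The returned pair identifies both the vertex and its generating iteration, so the coefficients $(\alpha_t^k(\omega_t), \beta_t^k(\omega_t))$ above can be assembled with the matching scaling factor.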
\subsubsection{Incumbent Selection} \label{sect:incumbentSelection}
The procedure described here, which identifies an incumbent solution at all non-root, non-terminal stages, is motivated by the optimal basis propagation policy presented in \cite{Casey2005}. This identification, which is performed during the prediction pass, relies on the basis of the stage dual approximation (SDA$_t^k$) that appears on the right-hand side of \eqref{eq:samplMeanDual}. To facilitate the discussion here, we restate SDA$_t^k$ below:
\begin{align} \label{eq:sda_lp}
\max~ \inner{\pi_t}{(b_t - C_tx_t)} \text{ subject to } \inner{D_t}{\pi_t} \leq \bar{\rho}_t^k,~ \pi_t \leq 0,
\end{align}
where $\bar{\rho}_t^k$ is defined in the expressions following \eqref{eq:samplMeanDual}. In each iteration, the above linear program is solved to optimality along the iteration sample-path, and potentially a new basis is discovered. Let $\mathbb{B}_t^k$ denote the index set whose elements are the rows that are active in \eqref{eq:sda_lp}. Denote by $D_{t,\mathbb{B}_t^k}$ the submatrix of $D_t$ formed by columns indexed by $\mathbb{B}_t^k$ (the basis matrix). From standard linear programming results we have that a feasible point is an extreme point of the feasible set if and only if there exists an index set that satisfies
$\inner{D_{t,\mathbb{B}_t^k}}{\pi_t^k} = \bar{\rho}_{t,\mathbb{B}_t^k}^k$.
This index set is added to the collection of previously discovered index sets, that is: $\mathcal{B}_t^k \leftarrow \mathcal{B}_t^{k-1} \cup \mathbb{B}_t^k$. We use this collection of index sets to construct dual solutions of the linear program in \eqref{eq:sda_lp}. Assumption \assumRef{assum:completeResource} ensures that the optimal set of the dual linear program is non-empty which implies that there exists an index set $\mathbb{B}_t^j \in \mathcal{B}_t^k$ such that for any arbitrary input state $s_t$ we can write:
\begin{align}\label{eq:incumbGen}
\hat{u}_{t,i}^j = [D_{t,\mathbb{B}_t^j}^{-1}(b_t - C_tx_t)]_i,~ i \in \mathbb{B}_t^j; \qquad \hat{u}_{t,i}^j = 0,~i \notin \mathbb{B}_t^j.
\end{align}
This operation can be written as $\hat{u}_t^j = R_{\mathbb{B}_t^j}(b_t - C_tx_t)$, where $R_{\mathbb{B}_t^j}$ is an $m_t \times n_t$ matrix with rows $[R_{\mathbb{B}_t^j}]_i = [(D_{t,\mathbb{B}_t^j})^{-1}]_i$ for $i \in \mathbb{B}_t^j$ and $[R_{\mathbb{B}_t^j}]_i = \mathbf{0}$ (a zero vector of length $n_t$) for $i \notin \mathbb{B}_t^j$. Note that, if $\hat{u}_t^j$ satisfies the constraints of the dual of \eqref{eq:sda_lp}, then it is a basic feasible solution to the dual problem (and if the complementary slackness conditions are also satisfied, then it is an optimal solution). We use $\widehat{\mathcal{U}}_t^k(s_t) \subseteq \mathcal{U}_t(s_t)$ to denote the set of basic feasible solutions generated using \eqref{eq:incumbGen} for all index sets in $\mathcal{B}_t^k$. Since \eqref{eq:sda_lp} corresponds to SDA$_t^k$, its dual feasible solutions are feasible to the stage optimization problem \eqref{eq:mslpt}. Using these index sets we define the mapping used for incumbent selection at non-root stages as follows:
\begin{align}\label{eq:incumbMapping}
\mathcal{M}_t^k(s_t) = \mathrm{argmin} \{ f_t^{k-1}(s_t, \hat{u}_t^j)~|~ \hat{u}_t^j \in \widehat{\mathcal{U}}_t^k(s_t) \} \qquad \forall t \in \mathcal{T}\setminus\{0\}.
\end{align}
We refer to the above mapping as the \emph{basic feasible policy} (BFP) of the MSLP problem. In case the argument that minimizes the right-hand side of \eqref{eq:incumbMapping} is not unique, we choose the index set with the smallest iteration index. Notice that the dual LP of \eqref{eq:sda_lp} has cost coefficients that vary over iterations, akin to 2-SD with random cost coefficients in the second stage \cite{Gangammanavar2020sd}. The steps involved in identifying the BFP, particularly the computation of dual solutions in \eqref{eq:incumbGen} and establishing their feasibility, can be implemented in a computationally efficient manner using a sparsity preserving representation of dual solutions. We refer the reader to \cite{Gangammanavar2020sd} for a detailed discussion of this representation and its implementation.
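The selection step of the BFP can be sketched as follows; the stored $R$ matrices, the feasibility tolerance, and the stage cost below are toy stand-ins of our own, not the sparsity-preserving implementation of \cite{Gangammanavar2020sd}:

```python
# Illustrative sketch of the basic feasible policy (BFP): generate one
# candidate per stored basis and keep the cheapest feasible one.

def matvec(R, v):
    return [sum(r * x for r, x in zip(row, v)) for row in R]

def bfp(R_matrices, rhs, cost):
    """For each stored basis, form the candidate u = R_B (b_t - C_t x_t);
    return the nonnegative (feasible) candidate with the smallest cost."""
    candidates = [matvec(R, rhs) for R in R_matrices]
    feasible = [u for u in candidates if all(ui >= -1e-9 for ui in u)]
    return min(feasible, key=cost)

R1 = [[1.0, 0.0], [0.0, 0.0]]   # basis using only the first constraint row
R2 = [[0.0, 0.0], [0.0, 0.5]]   # basis using only the second constraint row
rhs = [2.0, 4.0]                # b_t - C_t x_t at the current input state
u_hat = bfp([R1, R2], rhs, cost=lambda u: u[0] + 3.0 * u[1])
assert u_hat == [2.0, 0.0]      # candidate [2, 0] costs 2; [0, 2] costs 6
```

In the sketch the cost argument plays the role of $f_t^{k-1}(s_t, \cdot)$ in \eqref{eq:incumbMapping}; only the feasibility screening and argmin selection are illustrated.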
At the root-stage it suffices to maintain a single incumbent solution. This incumbent solution is updated based on predicted objective value reduction at the root-stage:
\begin{align}\label{eq:incumbUpdtt}
f_0^k(s_0, u_0^k) - f_0^k(s_0, \hat{u}_0^{k-1})~\leq~q~ [f_0^{k-1}(s_0, u_0^k) - f_0^{k-1}(s_0, \hat{u}_0^{k-1})],
\end{align}
where $q \in (0,1)$ is a given parameter. If the above inequality is satisfied, then the candidate solution at the root node replaces the incumbent solution $\hat{u}_0^{k-1}$ and serves as the next incumbent solution; that is, $\hat{u}_{0}^k \leftarrow u_{0}^k$. On the other hand, if the inequality is not satisfied, then the current incumbent solution is retained ($\hat{u}_0^k \leftarrow \hat{u}_0^{k-1}$). This update rule is similar to incumbent updates carried out in non-smooth optimization methods, including regularized 2-SD \cite{Higle1994, Higle1999}.
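The update test in \eqref{eq:incumbUpdtt} can be sketched as follows, with toy quadratic approximations of our own standing in for $f_0^k$ and $f_0^{k-1}$:

```python
# Illustrative sketch of the root-stage incumbent test; f_new and f_old
# stand for f_0^k and f_0^{k-1}, and the quadratics below are toy examples.

def update_incumbent(f_new, f_old, candidate, incumbent, q=0.5):
    """Accept the candidate if the objective reduction it promises under the
    newest approximation is at least a q-fraction of the previously
    predicted reduction; otherwise retain the incumbent."""
    lhs = f_new(candidate) - f_new(incumbent)
    rhs = q * (f_old(candidate) - f_old(incumbent))
    return candidate if lhs <= rhs else incumbent

f_old = lambda u: (u - 1.0) ** 2            # approximation from iteration k-1
f_new = lambda u: (u - 1.0) ** 2 + 0.1 * u  # updated approximation at iteration k
new_inc = update_incumbent(f_new, f_old, candidate=0.8, incumbent=0.0)
assert new_inc == 0.8   # predicted reduction persists, so the candidate is accepted
```

Since both sides of the test are typically negative, the condition asks that a sufficient fraction of the previously predicted descent survive the approximation update.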
\section{Convergence Analysis}\label{sect:convergenceAnalysis}
In this section, we present the convergence results for SDLP. We begin by discussing the behavior of the sequence of states and decisions generated by the SDLP algorithm, then proceed to show the convergence of value function estimates. Finally, we show that the incumbent solution sequence at the root-stage $\{\hat{u}_0^k\}$ converges and establish the optimality of the accumulation point. The SDLP convergence analysis is built upon the results of the 2-SD algorithm \cite{Higle1991}, its regularized variant \cite{Higle1994}, and 2-SD for 2-SLPs with random cost coefficients \cite{Gangammanavar2020sd}. Figure \ref{fig:sdlpAnalysis} illustrates the development of the SDLP convergence analysis. The cited references serve as pointers to related results in the two-stage setting.
\begin{figure}
\centering
\includestandalone[width=0.99\textwidth]{./figures/sdlpAnalysis}
\caption{Sketch of SDLP Analysis}\label{fig:sdlpAnalysis}%
\end{figure}
\subsection*{State and decision accumulation points}
Under assumption \assumRef{assum:indep}, we have a finite number of possible sample-paths over the horizon. We use $\mathcal{P}_t$ to denote the set of all sample-paths until stage $t$. We focus on the evolution of states and decisions along these sample-paths.
\begin{theorem}\label{thm:converge_predictu}
Suppose assumptions \assumRef{assum:compact}-\assumRef{assum:indep} hold. Let $\{\hat{u}_0^k\} \subseteq \mathcal{U}_0$ denote any infinite sequence of root-stage incumbent solutions. There exists a subsequence $\mathcal{K}_0$ of iterations such that $\{\hat{u}_0^k\}_{\mathcal{K}_0}$ has an accumulation point. In subsequent stages, for all possible paths $\history{\omega}{t} \in \mathcal{P}_t$ there exists a subsequence of iterations indexed by $\mathcal{K}_t(\history{\omega}{t})$ such that the sequence $\{\hat{u}_{t}^k(\hat{s}_t^k)\}_{k \in \mathcal{K}_t(\history{\omega}{t})}$ has an accumulation point.
\end{theorem}
\begin{proof}
Consider the optimization problem on the right-hand side of \eqref{eq:samplMeanDual} for a given $t$ in its dual form:
\begin{align}
\min~\{\inner{\bar{\rho}_t^k}{u_t} ~|~ D_t u_t \leq b_t - C_tx_t, u_t \geq 0\}. \notag
\end{align}
Recall that the feasible set of the above problem is denoted as $\mathcal{U}_t(s_t)$. Let $\mathbb{D}(u_t,s_t) := \min \{\|u_t - u\| ~|~ u \in \mathcal{U}_t(s_t) \}$ denote the distance from $u_t$ to the feasible set. A slight variant of Hoffman's lemma (see Lemma \ref{lemma:hoffmanVariant} in the appendix) leads us to conclude, for any $s_t, s_t^* \in \text{dom}~ \mathcal{U}_t$ and any $u_t \in \mathcal{U}_t(s_t)$, that
$\mathbb{D}(u_t,s_t^*) \leq \gamma \|(b_t - C_tx_t) - (b_t - C_tx_t^*)\|$.
Here, $\gamma > 0$ is a constant that depends only on the recourse matrix $D_t$. In other words, the set-valued mapping $\mathcal{U}_t(\cdot)$ is Lipschitz continuous in the above sense. It follows that it is possible to choose an extreme point $\hat{u}_t(s_t) \in \mathcal{U}_t(s_t)$ such that $\hat{u}_t(s_t)$ is continuous on $\text{dom}~\mathcal{U}_t$. Moreover, the polyhedral set $\mathcal{U}_t(s_t)$ has a finite number of extreme points. Therefore, the BFP outlined in \S \ref{sect:incumbentSelection} is a continuous piecewise linear mapping.
For the root-node, the feasible set $\mathcal{U}_0$ is compact by \assumRef{assum:compact}; hence there exists a subsequence of iterations indexed by $\overline{\mathcal{K}}_0$ such that $\{\hat{u}_0^k\}_{k \in \overline{\mathcal{K}}_0} \rightarrow \bar{u}_0$. Following \assumRef{assum:indep}, there exists an infinite subsequence $\mathcal{K}_1(\history{s}{1}) \subseteq \overline{\mathcal{K}}_0$ along which the algorithm selects sample-path $\history{\omega}{1} \in \mathcal{P}_1$. Since $\{\hat{u}_0^k\}_{k \in \overline{\mathcal{K}}_0}$ converges and $x_0$ is fixed, the sequence of endogenous states $\{x_1^k\}_{k \in \mathcal{K}_1(\history{s}{1})}$ converges to $\bar{x}_1(\history{s}{1})$. For the sample-path $\history{\omega}{1}$, since the sequence of input states $\{x_1^k\}_{k \in \mathcal{K}_1(\history{s}{1})}$ converges, the continuity of the BFP implies that the corresponding sequence of incumbent solutions $\{\hat{u}_1^k(\hat{s}_1^k)\}_{k \in \mathcal{K}_1(\history{s}{1})}$ has a converging subsequence. Let $\overline{\mathcal{K}}_1(\history{s}{1})$ denote this subsequence. Therefore, we have $\{\hat{u}_1^k(\hat{s}_1^k)\}_{k \in \overline{\mathcal{K}}_1(\history{s}{1})} \rightarrow \bar{u}_1(\history{s}{1})$.
Now consider an arbitrary stage $t > 1$. For any sample-path $\history{\omega}{t} \in \mathcal{P}_{t}$, once again assumption \assumRef{assum:indep} guarantees that there exists an infinite subsequence $\mathcal{K}_{t}(\history{s}{t}) \subseteq \overline{\mathcal{K}}_{t-}(\history{s}{t-})$ along which sample-path $\history{\omega}{t}$ is encountered. Here $\history{\omega}{t} = (\history{\omega}{t-}, \omega_t)$, i.e., sample-path $\history{\omega}{t}$ shares the same observations with $\history{\omega}{t-}$ until stage $t-$. Over this subsequence, the convergence of the endogenous state sequence $ \{\hat{x}_{t}^k = \mathcal{D}_{t}(\hat{x}_{t-}^k, \omega_{t}, \hat{u}_{t-}^k)\}_{\mathcal{K}_{t}(\history{s}{t})} \rightarrow \bar{x}_{t}(\history{s}{t})$ ensures the convergence of the incumbent states $\{\hat{s}_{t}^k\}_{\mathcal{K}_{t}(\history{s}{t})}$. Further, the continuity of the BFP applied at stage $t$ ensures that the corresponding sequence of incumbent solutions $\{\hat{u}_t^k(\hat{s}_t^k)\}$ has a converging subsequence. That is, there exists $\overline{\mathcal{K}}_{t}(\history{s}{t}) \subset \mathcal{K}_{t}(\history{s}{t})$ such that $\{\hat{u}_t^k(\hat{s}_t^k)\}_{\overline{\mathcal{K}}_{t}(\history{s}{t})} \rightarrow \bar{u}_t(\history{s}{t})$. Proceeding recursively through the remaining stages, we conclude the validity of the theorem.
\end{proof}
The above result captures the impact of using the argmin mapping in \eqref{eq:incumbMapping} over a sequence of converging first-stage decisions. A converging sequence results in perturbed stage problems with linear constraints in subsequent stages. A central argument in the above proof relies upon the local Lipschitz continuity of the argmin mapping. Such mappings have previously been studied in \cite{Wets2003lipschitz}. We refer the reader to this reference for a more thorough treatment of inf-projections and the argmin mapping for non-linear optimization problems with linear constraints.
To facilitate the presentation in the remainder of this section, let $\future{\mathcal{P}}{t+}^k \subseteq \Omega_{t+}^k \times \ldots \times \Omega_T^k$ denote the set of all possible scenarios from stage-$(t+1)$ to the end of the horizon which traverse through observations encountered by the algorithm in the first $k$ iterations. Note that $\future{\mathcal{P}}{t+}^k$ represents the set of possible paths in the future and should not be confused with $\mathcal{P}_t^k$, which represents the set of traversed paths. Stagewise independence allows us to compute the probability estimate of a sample-path $\future{\omega}{t+}^j \in \future{\mathcal{P}}{t+}^k$ as the product of frequencies associated with observations along that sample-path, i.e., $p^k(\future{\omega}{t+}^j) = p^k(\omega_{t+1}^j) \times \ldots \times p^k(\omega_{T}^j)$. Let $\future{x}{t+}^j$ and $\future{u}{t+}^j$ denote the endogenous state and decision vectors, respectively, associated with sample-path $\future{\omega}{t+}^j$. While \theoremRef{thm:converge_predictu} captured the behavior of solutions generated using the incumbent mapping in \eqref{eq:incumbGen} during the prediction pass, the next result captures the behavior of solutions generated in the optimization pass of the algorithm.
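The empirical path probability described above can be sketched in a few lines; the counters and observation labels are our own illustrative stand-ins:

```python
# Illustrative sketch: under stagewise independence, the empirical
# probability of a future sample-path is the product of per-stage
# observation frequencies.
from collections import Counter

def path_probability(path, counts_by_stage, k):
    """path: one observation per future stage; counts_by_stage[t] counts how
    often each observation was seen at stage t over k iterations."""
    p = 1.0
    for t, obs in enumerate(path):
        p *= counts_by_stage[t][obs] / k
    return p

# Observations recorded over k = 4 iterations at two future stages:
counts = [Counter({"lo": 3, "hi": 1}), Counter({"lo": 2, "hi": 2})]
p = path_probability(("lo", "hi"), counts, k=4)
assert abs(p - (3 / 4) * (2 / 4)) < 1e-12
```

Because a `Counter` returns zero for unseen observations, a path through an observation never encountered at some stage automatically receives probability zero, matching the restriction to $\future{\mathcal{P}}{t+}^k$.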
\begin{theorem} \label{thm:converge_optu}
Suppose assumptions \assumRef{assum:compact} - \assumRef{assum:indep} hold, and $\sigma \geq 1$. Then there exists $\bar{u}_0 \in \mathcal{U}_0(s_0)$ such that the sequence of root-node incumbent decisions generated by the algorithm satisfy $\{\hat{u}_0^k\} \rightarrow \bar{u}_0$. Moreover in every subsequent stage, there exists $\bar{u}_{t}(\history{s}{t}) \in \mathcal{U}_t(\bar{s}_{t}(\history{s}{t}))$ which satisfy dynamics in \eqref{eq:stDyn} and the sequence of solutions generated by the algorithm $\{u_{t}^k(\history{s}{t})\}_{\mathcal{K}_t(\history{s}{t})} \rightarrow \bar{u}_{t}(\history{s}{t})$ for all paths $\history{\omega}{t} \in \mathcal{P}_t$.
\end{theorem}
\begin{proof}
The proof for the root-stage follows that of regularized master in 2-SD (Theorem 5, \cite{Higle1994}) and the root-node of MSD algorithm (\cite{Sen2014}). Here we present the main parts of the proof and refer the reader to earlier works for detailed exposition. If the incumbent solution $\hat{u}_0^k$ changes infinitely many times, then the optimality condition for regularized approximation (see equation (5) in \cite{Higle1994}) and our choice of $\sigma \geq 1$ suggests that for any candidate solution $u_0^k$ the following holds:
\begin{align}\label{eq:regOpt}
f_0^{k-1}(s_0, u_0^k) - f_0^{k-1}(s_0, \hat{u}_0^{k-1}) \leq -\|u_0^k - \hat{u}_0^{k-1}\|^2 \leq 0 \qquad \forall k \geq 1.
\end{align}
In particular, the above condition holds at the iterations when the incumbent was updated by assigning the candidate solution as the new incumbent solution, i.e., $\hat{u}_0^k = u_0^k$. Let $\{k_1,k_2,\ldots,k_m\} \subseteq \mathcal{K}_0$ denote the set of $m$ successive iterations when the incumbent solution was updated, starting with an incumbent $\hat{u}_0^{k_0}$. Note that, for any $k_n \in \mathcal{K}_0$, $\hat{u}_0^{k_n-1} = \hat{u}_0^{k_{n-1}}$. Denote by $\Delta^{k_n} := f_0^{k_n-1}(s_0, \hat{u}_0^{k_n}) - f_0^{k_n-1}(s_0, \hat{u}_0^{k_{n-1}})$. Using \eqref{eq:regOpt} over these $m$ updates, we have
\begin{align*}
\frac{1}{m} \sum_{n=1}^m \Delta^{k_n} = &\frac{1}{m} \sum_{n=1}^m [f_0^{k_n-1}(s_0, \hat{u}_0^{k_n}) - f_0^{k_n-1}(s_0, \hat{u}_0^{k_{n-1}})] \\
= & \frac{1}{m} \underbrace{[f_0^{k_m-1}(s_0, \hat{u}_0^{k_m}) - f_0^{k_1-1}(s_0, \hat{u}_0^{k_0})]}_{(a)} + \\ &\qquad \frac{1}{m} \sum_{n=1}^{m-1} \underbrace{[f_0^{k_n-1}(s_0, \hat{u}_0^{k_n}) - f_0^{k_{n+1}-1}(s_0, \hat{u}_0^{k_n})]}_{(b)}.
\end{align*}
The boundedness of functions $\{f_0^k\}$ implies that (a) above approaches zero, as $m \rightarrow \infty$, and their uniform convergence (\theoremRef{thm:minorantAsymptotics}) implies that (b) converges to zero. Hence,
\begin{align}\label{eq:diminishingError}
\lim_{m \rightarrow \infty} \frac{1}{m} \sum_{n=1}^m \Delta^{k_n} = 0,
\end{align}
with probability one. Further, the above result, along with \eqref{eq:regOpt} implies that $\lim_{m \rightarrow \infty} \frac{1}{m}
\sum_{n=1}^m \|\hat{u}_0^{k_n}-\hat{u}_0^{k_{n-1}}\|^2 = 0$. Therefore, we conclude that the sequence of root-node incumbent solutions converges to $\bar{u}_0 \in \mathcal{U}_0$.
At non-root stages, the incumbent solutions are selected using the BFP described in \S\ref{sect:incumbentSelection}. The BFP is built using the bases of \eqref{eq:samplMeanDual} discovered during the course of the algorithm that are identified by the collection of index sets $\mathcal{B}_t$. Since there is a finite collection $\mathcal{B}_t$ of index sets, there exists iteration count $K_t$ large enough such that $\mathcal{B}_t^{k^\prime} = \mathcal{B}_t$ for all $k^\prime \geq K_t$. Let us consider $k > \max_t K_t$ when all the index sets for all non-root, non-terminal stages have been discovered. In these iterations, the procedure in \S\ref{sect:incumbentSelection} results in an incumbent solution such that:
\begin{align*}
\hat{u}_t^k(\hat{s}_t^k) = \mathcal{M}_t^k(\hat{s}_t^k) \in \mathrm{argmin}_{u_t \in \mathcal{U}_t(\hat{s}_t^k)} ~ \inner{d_t}{u_t} + \sum_{\omega_{t+} \in \Omega_{t+}^{k-1}} p^{k-1}(\omega_{t+}) h_{t+}^{k-1}(\dynamics{t}{\hat{x}_t^k,\omega_{t+},u_t}, \omega_{t+}).
\end{align*}
Consequently, the value associated with $\hat{u}_t^k(\hat{s}_t^k)$ is $H_t^{k-1}(\hat{s}_t^k)$ (see \eqref{eq:samplMean}). The forward pass optimal value associated with the candidate solution $u_t^k(\hat{s}_t^k)$ differs from $H_t^{k-1}(\hat{s}_t^k)$ only in the quadratic term. Therefore, we have $H_t^{k-1}(\hat{s}_t^k) \leq F_t^{k-1}(\hat{s}_t^k)$, which can be restated using \eqref{eq:objUpdtt} as:
\begin{align*}
f_t^{k-1}(\hat{s}_t^k, \hat{u}_t^k(\hat{s}_t^k)) - f_t^{k-1}(\hat{s}_t^k, u_t(\hat{s}_t^k)) \leq 0,
\end{align*}
where $u_t(\hat{s}_t^k)$ is the solution obtained by optimizing the regularized problem used during forward recursion. The quadratic programming optimality conditions of this regularized problem allow us to write the following inequality:
\begin{align*}
f_t^{k-1}(\hat{s}_t^k, u_t(\hat{s}_t^k)) - f_t^{k-1}(\hat{s}_t^k, \hat{u}_t^k(\hat{s}_t^k)) \leq 0.
\end{align*}
The two preceding inequalities, together with \eqref{eq:regOpt} applied at stage $t$, imply that $\|u_t(\hat{s}_t^k) - \hat{u}_t^k(\hat{s}_t^k)\|^2 = 0$. For a sample-path $\history{\omega}{t} \in \mathcal{P}_t$, let $\mathcal{K}_t(\history{s}{t})$ denote the subsequence constructed in the proof of \theoremRef{thm:converge_predictu}. Over this subsequence, the result of \theoremRef{thm:converge_predictu} shows the existence of an accumulation point of $\{\hat{u}_t^k(s_t^k)\}_{k \in \mathcal{K}_t(\history{s}{t})}$, and subsequently, an accumulation point $\bar{u}_t(\history{s}{t})$ of $\{u_t^k(s_t^k)\}_{k \in \mathcal{K}_t(\history{s}{t})}$. Applying the argument to all sample-paths $\history{\omega}{t} \in \mathcal{P}_t$ completes the proof.
\end{proof}
The limit in \eqref{eq:diminishingError} plays a critical role in showing the existence of an optimal accumulation point of incumbent solutions at the root-stage. Notice that the limit holds when the incumbent is updated infinitely often, i.e., $m \rightarrow \infty$. On the other hand, if the incumbent solution is updated only a finite number of times, then there exists a $K < \infty$ such that $\hat{u}_0^k = \bar{u}_0 \in \mathcal{U}_0$ for all $k > K$. In this case, the optimality of $\bar{u}_0$ is attained only if $\Delta^k \rightarrow 0$. Before we present the optimality of the solution sequence, we establish the convergence of the value function estimates.
\subsection*{Convergence of Value Function Estimates}
Since our algorithm uses sequential sampling, path-wise forward and backward recursion updates, estimates of probability, and sampled minorants, we use benchmark functions to verify the optimality of the value functions and the solutions obtained from them. We next present the construction of these benchmark functions. Note that these functions are not computed during the course of the algorithm and are intended only for the purpose of analysis.
For a given input $s_t$, the following is an extensive formulation of the cost-to-go function:
\begin{align} \label{eq:pathMean}
\mathcal{H}_t^k(s_t) =~ & \inner{c_t}{x_t} + \\
\min~& \inner{d_t}{u_t} + \sum_{j \in \future{\mathcal{P}}{t+}^k} p^k(\future{\omega}{t+}^j) \times [\inner{\future{c}{t+}}{\future{x}{t+}^j} + \inner{\future{d}{t+}}{\future{u}{t+}^j}] \notag \\
& s.t.~u_t \in \mathcal{U}_t(u_0, s_t),~\{u_{t^\prime}^j \in \mathcal{U}_{t^\prime}(u_0,s_{t^\prime}^j)\}_{t^\prime > t} \text{ and non-anticipative}, \notag \\
& ~~~~~\{x_{t^\prime+}^j = \mathcal{D}_{t^\prime+}(x_{t^\prime}^j,\omega_{t^\prime+}^j,u_{t^\prime}^j)\}_{t^\prime\geq t} . \notag
\end{align}
In the above formulation, dynamics and non-anticipativity are satisfied starting at stage $t$, and are relative to input $s_t$. This sample average function $\mathcal{H}_t^k$ represents the value associated with input $s_t$ for the remainder of horizon with respect to current observations $\{\Omega_{i}^k\}_{i=t+}^k$. In order to simplify notation, the dependence of the sample average function on the set $\future{\mathcal{P}}{t+}^k$ is conveyed through the index $k$ in $\mathcal{H}_t^k(s_t)$, as opposed to the more complete $\mathcal{H}_t^k(s_t|\future{\mathcal{P}}{t+}^k)$.
During forward recursion, decisions $\{u_t\}$ are simulated using the approximation $f_t^{k-1}$ in \eqref{eq:forwardt} along the observations dictated by sampling, and during the backward recursion the approximations are updated using subgradients observed along the same sample-path. Next we relate the objective function values encountered during forward and backward recursions. In order to do this, we define $u_t(s_t)$ to be the optimal solution obtained using \eqref{eq:forwardt} during forward recursion with input $s_t$. The forward recursion objective function value $F_t^{k-1}$ associated with this decision is therefore given by:
\begin{align}\label{eq:forwardcost}
F_t^{k-1}(s_t) := \inner{c_t}{x_t} + &\inner{d_t}{u_t(s_t)} + \sum_{\omega_{t+} \in \Omega_{t+}^{k-1}} p^{k-1}(\omega_{t+})~ h_{t+}^{k-1}(s_{t+}^k(\omega_{t+})).
\end{align}
Here $s_{t+}^k(\omega_{t+}) = \mathcal{D}_{t+}(x_t, \omega_{t+}, u_t(s_t))$. In order to study the asymptotic behavior of our algorithm, we investigate how the functions $\mathcal{H}_t^k$, $F_t^k$ and $h_t^k$ relate in value at limiting states. It is worthwhile to note that the sample average approximation in \eqref{eq:samplMeanPrimal}, the extensive formulation in \eqref{eq:pathMean} and the forward recursion objective value in \eqref{eq:forwardcost} are defined only for non-terminal stages, since $H_T^k(s_T) = \mathcal{H}_T^k(s_T) = F_T^k(s_T) = h_T(s_T)$ at the terminal stage for all $k$.
\begin{lemma}\label{lemma:fnConverge}
Suppose Assumptions \assumRef{assum:compact}-\assumRef{assum:indep} hold.
\begin{enumerate}[label=(\roman*)]
\item The sequence of functions $\{F^k_t\}_k$ is equicontinuous and uniformly convergent for all $t$. \label{lemma:fnConverge_1}
\item The sequence of sample average approximation functions $\{\mathcal{H}_t^k\}_k$ converges uniformly to the value function $h_t(\cdot)$ in \eqref{eq:mslpt} for all $t > 0$, with probability one. \label{lemma:fnConverge_2}
\end{enumerate}
\end{lemma}
\proof
Under Assumption \assumRef{assum:completeResource}, we have $F_t^k < \infty$ for all $t \in \mathcal{T}\setminus\{T\}$ and $k \geq 1$. (i) Since a regularized problem \eqref{eq:forwardt} with a quadratic proximal term is used to identify the sequence of solutions in the forward recursion of the algorithm, the optimality conditions of affinely constrained quadratic programs indicate that the solutions $u_t(s_t)$ are piecewise linear. Therefore, the sequence $\{F_t^k\}$ is bounded over a compact space and must have a uniform Lipschitz constant. This leads to the conclusion stated in part \ref{lemma:fnConverge_1} of the lemma. Part \ref{lemma:fnConverge_2} follows from Theorem 7.53 in \cite{Shapiro2014}. \hfill
\endproof
Following the above result, we use $\mathcal{H}_t^k$ as a benchmark for assessing optimality of the SDLP algorithm. We first show the convergence of approximations generated during the course of the algorithm to the true value function in the following theorem. In the two-stage setting, the equivalent result appears as Theorem 3 and Corollary 5 in \cite{Higle1991}.
\begin{theorem} \label{thm:valFnConvergence}
Suppose Assumptions \assumRef{assum:compact}-\assumRef{assum:fixed} hold. At any non-terminal stage $t$, if subsequence $\mathcal{K}_t$ is such that $\{\hat{s}_t^k\}_{k \in \mathcal{K}_t} \rightarrow \bar{s}_{t}$, then
\begin{align}
\lim_{k \in \mathcal{K}_t} f_t^k(\hat{s}_t^k, \hat{u}_t^k(\hat{s}_t^k)) = \lim_{k \in \mathcal{K}_t} f_t^{k+1} (\hat{s}_t^k, \hat{u}_t^k(\hat{s}_t^k)) = f_t(\bar{s}_t, \bar{u}_t),
\end{align}
with probability one.
\end{theorem}
\begin{proof}
For the terminal stage ($t = T$), continuity of the linear programming value function implies that $\lim_{k \in \mathcal{K}_T} h_T(\hat{s}_T^k) = h_T(\bar{s}_T)$. Since $F_T^k$, $H_T^k$ and $\mathcal{H}_T^k$ are all equivalent to $h_T$, the above relation trivially holds. Consequently we have $\lim_{k \in \mathcal{K}_T} \hat{\ell}_T^k(\hat{s}_T^k) = h_T(\bar{s}_T)$ and $\lim_{k \in \mathcal{K}_T} \partial \hat{\ell}_T^k(\hat{s}_T^k) \in \partial h_T(\bar{s}_T)$.
For a non-terminal stage, let $k - \tau$ and $k$ be two successive iterations of subsequence $\mathcal{K}_t$. The forward recursion objective function $F_t^{k-1}(s_t)$ and the backward recursion sample average function $H_t^{k-1}(s_t)$ differ only in the proximal term, and hence $H_t^{k-1}(s_t) \leq F_t^{k-1}(s_t)$ for all $s_t \in \mathcal{S}_t$. In the following, we focus on functions evaluated at $\hat{s}_t^k$, and use $m = \omega_{t+}^k$ and $n$ as an index for set $\Omega_{t+}^k$. The forward recursion objective function value at the current input state can be written as:
\begin{align*}
F_t^{k-1}(\hat{s}_t^k) =~& \inner{c_t}{\hat{x}_t^k} + \inner{d_t}{u_t(\hat{s}_t^k)} + \sum_{n \in \Omega_{t+}^{k-1}} p^{k-1}(n) h_{t+}^{k-1}(\hat{s}_{nt+}^k).
\end{align*}
The optimality of $u_t(\hat{s}_t^k)$ ensures that the objective function value associated with $u_t(\hat{s}_t^k)$ is lower than that of any other feasible solution. If we specifically consider the optimal solution of the dual in \eqref{eq:samplMeanDual}, denoted $\tilde{u}_t(\hat{s}_t^k)$, we have
\begin{align*}
F_t^{k-1}(\hat{s}_t^k) \leq~& \inner{c_t}{\hat{x}_t^k} + \inner{d_t}{\tilde{u}_t(\hat{s}_t^k)} + \sum_{n \in \Omega_{t+}^{k-1}} p^{k-1}(n) h_{t+}^{k-1}(\tilde{s}_{nt+}^k).
\end{align*}
By adding and subtracting the current approximation of future cost, i.e., $\sum_{n \in \Omega_{t+}^k} p^k(n) h_{t+}^k(\hat{s}_{nt+}) = \sum_{n \in \Omega_{t+}^{k-1}\setminus\{m\}} p^k(n)~ h_{t+}^k(\hat{s}_{nt+}) + p^k(m) h_{t+}^k(\hat{s}_{mt+})$ we obtain
\begin{align*}
&F_t^{k-1}(\hat{s}_t^k) \leq \inner{c_t}{\hat{x}_t^k} + \inner{d_t}{\tilde{u}_t(\hat{s}_t^k)} + \sum_{n \in \Omega_{t+}^k} p^k(n) h_{t+}^k(\hat{s}_{nt+}^k) + \\ &\sum_{n \in \Omega_{t+}^{k-1}} p^{k-1}(n) h_{t+}^{k-1}(\tilde{s}_{nt+}^k) - \bigg[ \sum_{n \in \Omega_{t+}^{k-1}} \bigg (\frac{k-1}{k} \bigg)p^{k-1}(n)~ h_{t+}^k(\hat{s}_{nt+}^k) + \frac{1}{k} h_{t+}^k(\hat{s}_{mt+}^k) \bigg ].
\end{align*}
From the definition of backward recursion sample average approximation in \eqref{eq:samplMeanDual} and the fact that $h_t(s_t) \geq 0$, we have
\begin{align*}
F_t^{k-1}(\hat{s}_t^k) \leq~& H_t^k(\hat{s}_t^k) + \sum_{n \in \Omega_{t+}^{k-1}} p^{k-1}(n) \bigg[h_{t+}^{k-1}(\tilde{s}_{nt+}^k) - \bigg(\frac{k-1}{k} \bigg) h_{t+}^k(\hat{s}_{nt+}^k)\bigg].
\end{align*}
From Theorem \ref{thm:minorantAsymptotics}, we have $h_{t+}^k \geq \big(\frac{k-1}{k}\big)^{T-t-1}h_{t+}^{k-1}$. This yields
\begin{align*}
F_t^{k-1}(\hat{s}_t^k) \leq~& H_t^k(\hat{s}_t^k) + \sum_{n \in \Omega_{t+}^{k-1}} p^{k-1}(n) \bigg[h_{t+}^{k-1}(\tilde{s}_{nt+}^k) - \bigg(\frac{k-1}{k} \bigg)^{T-t} h_{t+}^{k-1}(\hat{s}_{nt+}^k)\bigg].
\end{align*}
Let us focus on the terms within the summation on the right hand side of the above inequality, i.e., $\Delta_n^k = h_{t+}^{k-1}(\tilde{s}_{nt+}^k) - \big(\frac{k-1}{k}\big)^{T-t}h_{t+}^{k-1}(\hat{s}_{nt+}^k)$. Then
\begin{align*}
\Delta_n^k =~& h_{t+}^{k-1}(\tilde{s}_{nt+}^k) - h_{t+}^{k-1}(\hat{s}_{nt+}^k) + \bigg(1-\bigg(\frac{k-1}{k}\bigg)^{T-t}\bigg)h_{t+}^{k-1}(\hat{s}_{nt+}^k)\\
\leq~& |h_{t+}^{k-1}(\tilde{s}_{nt+}^k) - h_{t+}^{k-1}(\hat{s}_{nt+}^k)| + \bigg|\bigg(1-\bigg(\frac{k-1}{k}\bigg)^{T-t}\bigg)h_{t+}^{k-1}(\hat{s}_{nt+}^k)\bigg|.
\end{align*}
The second term above vanishes as $k \rightarrow \infty$. Further, since $\{\hat{s}_t^k\}_{k \in \mathcal{K}_t} \rightarrow \bar{s}_{t}(\history{s}{t})$, for every $\delta > 0$ there exists a $K(\delta) \in \mathcal{K}_t$ such that $\|\hat{s}_t^k - \tilde{s}_t^k\| < \delta$ for all $k > K(\delta)$. Using the uniform equicontinuity of $\{h_t^k\}$ (\theoremRef{thm:minorantAsymptotics}), we have $|h_{t+}^{k-1}(\tilde{s}_{nt+}^k) - h_{t+}^{k-1}(\hat{s}_{nt+}^k)| < \epsilon$. Therefore, we can conclude that $\lim_{k \in \mathcal{K}_t} F_t^{k-1}(\hat{s}_t^k) - H_t^k(\hat{s}_t^k) \leq \epsilon$, for any $\epsilon > 0$.
To show the inequality in the other direction, we use the fact that $H_t^{k}(\hat{s}_t^k) \leq F_t^{k}(\hat{s}_t^k)$ and the uniform convergence of the sequence $\{F_t^k\}$. This gives us $\lim_{k \in \mathcal{K}_t} H_t^k(\hat{s}_t^k) - F_t^{k-1}(\hat{s}_t^k) \leq \epsilon$. Since inequalities hold in both directions for an arbitrary $\epsilon > 0$, we have
\begin{align} \label{eq:squeeze1}
\lim_{k \in \mathcal{K}_t} |F_t^{k-1}(\hat{s}_t^k) - H_t^k(\hat{s}_t^k)| = 0 ~~~ (w.p.1).
\end{align}
Now consider the benchmark function $\mathcal{H}_t^k(\hat{s}_t^k)$ that is optimal across all possible sample-paths. Using the optimality of $\mathcal{H}_t^k(\hat{s}_t^k)$, along with the fact that $h_t^k \leq H_t^k$ (\theoremRef{thm:outerLinearization}), we have
\begin{align*}
\lim_{k \in \mathcal{K}_t} \mathcal{H}_t^k(\hat{s}_t^k) \leq \lim_{k \in \mathcal{K}_t} h_t^k(\hat{s}_t^k) \leq \lim_{k \in \mathcal{K}_t} H_t^k(\hat{s}_t^k) ~~~ (w.p.1).
\end{align*}
Moreover, the forward recursion objective function value satisfies $\lim_{k \in \mathcal{K}_t} F_t^{k-1}(\hat{s}_t^k) \leq \lim_{k \in \mathcal{K}_t} \mathcal{H}_t^k(\hat{s}_t^k)~ (w.p.1)$. Therefore we have
\begin{align} \label{eq:squeeze2}
\lim_{k \in \mathcal{K}_t} F_t^{k-1}(\hat{s}_t^k) \leq \lim_{k \in \mathcal{K}_t} \mathcal{H}_t^k(\hat{s}_t^k) \leq \lim_{k \in \mathcal{K}_t} h_t^k(\hat{s}_t^k) \leq \lim_{k \in \mathcal{K}_t} H_t^k(\hat{s}_t^k) ~~~ (w.p.1).
\end{align} Using \eqref{eq:squeeze1} in the above relation and the results in \lemmaRef{lemma:fnConverge}, we conclude that the expression \eqref{eq:squeeze2} holds with equality, with probability one.
Since $\{\hat{s}_t^k\}_{k \in \mathcal{K}_t} \rightarrow \bar{s}_t$, the result in \theoremRef{thm:converge_optu} shows the existence of a subsequence $\overline{\mathcal{K}}_t$ such that $\{(\hat{s}_t^k, \hat{u}_t^k)\}_{k \in \overline{\mathcal{K}}_t} \rightarrow (\bar{s}_t,\bar{u}_t)$. Using the uniform convergence of the sequence of minorants $\{h_t^k\}$ and the benchmark functions $\{\mathcal{H}_t^k\}$ (\theoremRef{thm:minorantAsymptotics} and \lemmaRef{lemma:fnConverge}, respectively), we conclude that the function values $\{f_t^k(\hat{s}_t^k,\hat{u}_t^k(\hat{s}_t^k))\}$ converge to the optimal value at the accumulating state $\bar{s}_t$, with probability one.
\end{proof}
The above result holds at all non-terminal stages over any converging subsequence of states $\{\hat{s}_t^k\}_{k \in \mathcal{K}_t}$. Under the finite-support assumption \assumRef{assum:indep}, the algorithm will generate such subsequences, as illustrated in \theoremRef{thm:converge_predictu} and \theoremRef{thm:converge_optu}.
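The vanishing correction factor that appears in the proof of \theoremRef{thm:valFnConvergence} admits a quick numeric sanity check. The Python sketch below (illustrative only, with a hypothetical horizon $T$ and stage $t$; it is not part of the algorithm) evaluates $1 - \big(\frac{k-1}{k}\big)^{T-t}$ and confirms that it decays to zero as $k$ grows, so the second term in the bound on $\Delta_n^k$ disappears in the limit.

```python
# Illustrative check: the factor 1 - ((k-1)/k)^(T-t) -> 0 as k -> infinity.
# T and t below are hypothetical values for the horizon and the stage.
T, t = 10, 3

def correction_factor(k):
    """Coefficient multiplying the bounded term h_{t+}^{k-1} in Delta_n^k."""
    return 1.0 - ((k - 1) / k) ** (T - t)

for k in (10, 100, 10_000):
    print(k, correction_factor(k))
```

Since the minorant values multiplying this factor are bounded, the corresponding term in the bound on $\Delta_n^k$ indeed vanishes.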
\subsection*{Optimality of the Incumbent Solution Sequence}
Before establishing the optimality of the root-stage incumbent solution sequence, we establish the limiting relationship between the value function estimate at the candidate solution $f_0^{k-1}(s_0, u_0^k)$ and the estimate at the incumbent solutions $f_0^{k-1}(s_0, \hat{u}_0^{k-1})$. As a consequence of \theoremRef{thm:valFnConvergence}, the root-stage value function is equivalent to the value function of a 2-SLP. This equivalent 2-SLP has first-stage cost equal to $\inner{c_0}{x_0} + \inner{d_0}{u_0}$ and expected recourse value given by $\sum_{\omega_1 \in \Omega_1} p(\omega_1) h_1(s_1(\omega_1))$. The function $h_1(\cdot)$ is the optimal cost-to-go value starting from stage $1$, which is attained at the limiting states $\bar{s}_1(\omega_1)$ for all $\omega_1 \in \Omega_1$. With this perspective, the following lemma parallels a result from \cite{Higle1994} (Theorem 3). We present the proof for the case when the incumbent changes infinitely often and refer the reader to \cite{Higle1994} for the case when the incumbent changes finitely often.
\begin{lemma}\label{lemma:candid_v_incumbEst}
Let $\{u_0^k\}_{k=1}^\infty$ and $\{\hat{u}_0^k\}_{k=1}^\infty$ denote the sequence of candidate and incumbent solutions identified by SDLP, respectively. With probability one,
\begin{align} \label{eq:candid_v_incumbEst}
\limsup_{k \rightarrow \infty} f_0^{k-1}(s_0, u_0^k) - f_0^{k-1}(s_0, \hat{u}_0^{k-1}) = 0.
\end{align}
\end{lemma}
\begin{proof}
Let $\{k_n\}_{n \in \mathcal{K}_0}$ represent the sequence of iterations at which the incumbent is changed. If $\mathcal{K}_0$ is an infinite set, then as a consequence of the incumbent update rule \eqref{eq:incumbUpdtt} and \theoremRef{thm:valFnConvergence}, we have
\begin{align*}
\lim_{m \rightarrow \infty} \frac{1}{m} \sum_{n=1}^m \Delta^{k_n} \leq \limsup_{n \rightarrow \infty} \Delta^{k_n} \leq 0.
\end{align*}
From \eqref{eq:diminishingError}, there exists a subsequence $\mathcal{K}_0^* \subset \mathcal{K}_0$ such that
\begin{align*}
\lim_{k \in \mathcal{K}_0^*} \Delta^k = 0.
\end{align*}
Since $\Delta^k = f_0^{k-1}(s_0, \hat{u}_0^k) - f_0^{k-1}(s_0, \hat{u}_0^{k-1})$ and $\hat{u}_0^k = u_0^k$ for all $k \in \mathcal{K}_0^*$, this establishes \eqref{eq:candid_v_incumbEst}.
\end{proof}
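The Ces\`aro-mean step in the proof above can be illustrated numerically: if $\limsup_n \Delta^{k_n} \leq 0$, then the running averages $\frac{1}{m}\sum_{n=1}^m \Delta^{k_n}$ fall below any positive tolerance for large $m$. The Python sketch below uses the hypothetical sequence $\Delta^{k_n} = 1/n$ (whose limsup is $0$) purely as an illustration; it is not a computation performed by the algorithm.

```python
# Running (Cesaro) average of the hypothetical sequence Delta^{k_n} = 1/n.
# Its limsup is 0, and the averages (1/m) * sum_{n<=m} 1/n -> 0 as well.
def cesaro_mean(m):
    return sum(1.0 / n for n in range(1, m + 1)) / m

print(cesaro_mean(100_000))  # a small number close to 0
```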
The following result captures the asymptotic behavior of the directional derivatives of the sequence of first-stage objective function approximations. Specifically, it relates the directional derivatives of the approximate value function to that of the true value function.
\begin{lemma}\label{lemma:directionalStability}
Let $u_t \in \mathcal{U}_t(s_t)$. Define $\delta_t^k(u_t) = \frac{u_t - u_t^{k-1}}{\|u_t - u_t^{k-1}\|}$ and $\bar{\delta}_t(u_t) = \frac{u_t - \bar{u}_t}{\|u_t - \bar{u}_t\|}$. For any sequence $\mathcal{K}$ such that $\{u_t^k\}_{k \in \mathcal{K}} \rightarrow \bar{u}_t$ with $\bar{u}_t \in \mathcal{U}_t(s_t)$, we have
\begin{align}
\lim_{k \in \mathcal{K}} (f_t^k)^\prime(s_t, u_t^{k-1}; \delta_t^k(u_t)) \leq f_t^\prime(s_t, \bar{u}_t; \bar{\delta}_t(u_t)),
\end{align}
with probability one.
\end{lemma}
\begin{proof}
Since $\{u_t^k\}_{k \in \mathcal{K}} \rightarrow \bar{u}_t$, we have $\delta_t^k(u_t) \rightarrow \bar{\delta}_t(u_t)$, for all $u_t \in \mathcal{U}_t(s_t)$. Note that,
\begin{align*}
\beta_{t+}^k(\omega_{t+}) = (h_{t+}^k)^\prime(x_{t+}(\omega_{t+}), \omega_{t+}).
\end{align*}
Following \lemmaRef{lemma:coeff} (b) and \theoremRef{thm:valFnConvergence}, $\limsup_{k \in \mathcal{K}} (f_t^k)^\prime(s_t, u_t^{k+1}; \delta_t^k(u_t))$ is finite and, further, there exists a subsequence $\overline{\mathcal{K}} \subset \mathcal{K}$ such that
\begin{align} \label{eq:subgradientAccum}
\{\beta_{t+}^k(\omega_{t+})\}_{k \in \overline{\mathcal{K}}} \rightarrow \bar{\beta}_{t+}(\omega_{t+}) \in \partial h_{t+}(x_{t+}(\omega_{t+}), \omega_{t+}).
\end{align}
Using the definition of $f_t^k$ in \eqref{eq:objUpdtt}, we have
\begin{align*}
&(f_t^k)^\prime(s_t, u_t^{k+1}; \delta_t^k(u_t)) = \inner{d_t + \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) (h_{t+}^k)^\prime(x_{t+}(\omega_{t+}), \omega_{t+})}{\delta_t^k(u_t)}.
\end{align*}
This implies that
\begin{align*}
\lim_{k \in \overline{\mathcal{K}}} (f_t^k)^\prime(s_t, u_t^{k+1}; \delta_t^k(u_t)) =~& \limsup_{k \in \mathcal{K}} (f_t^k)^\prime(s_t, u_t^{k+1}; \delta_t^k(u_t)) \\
=~& \inner{d_t + \sum_{\omega_{t+} \in \Omega_{t+}^k} p^k(\omega_{t+}) (h_{t+}^k)^\prime(x_{t+}(\omega_{t+}), \omega_{t+})}{\delta_t^k(u_t)}.
\end{align*}
Let $\bar{d}_t = d_t + \expect{\bar{\beta}(\tilde{\omega})}{}$. Using \eqref{eq:subgradientAccum} and the fact that $p^k(\omega) \rightarrow p(\omega)$, almost surely, we have
\begin{align*}
\limsup_{k \in \mathcal{K}} (f_t^k)^\prime(s_t, u_t^{k+1}; \delta_t^k(u_t)) = ~& \inner{\bar{d}_t}{\bar{\delta}_t(u_t)} \\
\leq ~& \max\{\inner{v}{\bar{\delta}_t(u_t)}~|~ v \in \partial f_t(s_t,\bar{u}_t)\} \\
=~& f_t^\prime(s_t, \bar{u}_t; \bar{\delta}_t(u_t)).
\end{align*}
\end{proof}
The above lemma mirrors a similar result for regularized 2-SD that appeared in \cite{Higle1994} (as Lemma 4). We are now in a position to establish the optimality of the accumulation point of the sequence of root-stage incumbent solutions.
\begin{theorem} \label{thm:optimality}
Suppose Assumptions \assumRef{assum:compact}-\assumRef{assum:indep} hold and $\underline{\sigma} \geq 1$. Then the SDLP algorithm produces a sequence of incumbent solutions at the root-stage with $\{\hat{u}_0^k\} \rightarrow u_0^*$, where $u_0^*$ is optimal, with probability one.
\end{theorem}
\begin{proof}
Using the optimality condition of regularized root-stage problem \eqref{eq:regOpt}, the result in \lemmaRef{lemma:candid_v_incumbEst} implies that there exists a subsequence $\mathcal{K}_0^*$ such that
\begin{align*}
\lim_{k \in \mathcal{K}_0^*} f_0^{k-1}(s_0, u_0^k) - f_0^{k-1}(s_0, \hat{u}_0^{k-1}) + \|u_0^k - \hat{u}_0^{k-1}\| = 0,
\end{align*}
with probability one. Let $\overline{\mathcal{K}}_0^* \subset \mathcal{K}_0^*$ be such that $\{\hat{u}_0^k\}_{k \in \overline{\mathcal{K}}_0^*} \rightarrow \bar{u}_0$. Let $u_0 \in \mathcal{U}_0$ be such that $u_0 \neq \bar{u}_0$. We define
\begin{align*}
\delta_0^k(u_0) = \frac{u_0 - u_0^k}{\|u_0 - u_0^k\|},~ \text{and} ~\bar{\delta}_0(u_0) = \frac{u_0 - \bar{u}_0}{\|u_0 - \bar{u}_0\|}.
\end{align*}
Optimality of $u_0^k$ implies that
\begin{align*}
& f_0^k(s_0, u_0^k) + \frac{\sigma}{2}\|u_0^k - \hat{u}_0^{k-1}\|^2 \leq f_0^k(s_0, u_0^k + \delta_0^k(u_0)) + \frac{\sigma}{2}\|(u_0^k + \delta_0^k(u_0)) - \hat{u}_0^{k-1}\|^2. \\
\Rightarrow & [f_0^k(s_0, u_0^k + \delta_0^k(u_0)) - f_0^k(s_0, u_0^k)] + \\
& \hspace*{4cm}\frac{\sigma}{2}[\|(u_0^k + \delta_0^k(u_0)) - \hat{u}_0^{k-1}\|^2 - \|u_0^k - \hat{u}_0^{k-1}\|^2] \geq 0. \\
\Rightarrow & (f_0^k)^\prime (s_0, u_0^k; \delta_0^k(u_0)) + \frac{\sigma}{2}[\|(u_0^k + \delta_0^k(u_0)) - \hat{u}_0^{k-1}\|^2 - \|u_0^k - \hat{u}_0^{k-1}\|^2] \geq 0.
\end{align*}
Taking limits along $\overline{\mathcal{K}}_0^*$, the second term vanishes. Therefore, we have
\begin{align*}
0 \leq \liminf_{k \in \mathcal{K}_0^*} (f_0^k)^\prime (s_0, u_0^k; \delta_0^k(u_0)) \leq \limsup_{k \in \mathcal{K}_0^*} (f_0^k)^\prime (s_0, u_0^k; \delta_0^k(u_0)) \leq f^\prime(s_0, \bar{u}_0; \bar{\delta}_0(u_0)).
\end{align*}
The last inequality follows from \lemmaRef{lemma:directionalStability}. Since $f_0(\cdot)$ is a convex function and the above statement implies that the directional derivatives along $\bar{\delta}_0(u_0)$ are non-negative for an arbitrary $u_0 \in \mathcal{U}_0$, $\bar{u}_0$ must be an optimal solution.
\end{proof}
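The regularized update at the heart of the proof can be mimicked in a toy setting: for a convex $f_0$ and proximal parameter $\sigma$, repeatedly minimizing $f_0(u) + \frac{\sigma}{2}\|u - \hat{u}\|^2$ and moving the incumbent drives the incumbent toward the minimizer of $f_0$. The Python sketch below is a hypothetical one-dimensional illustration with $f_0(u) = (u-3)^2$ and a simplified "always accept" incumbent rule; it is not the SDLP subproblem.

```python
# Toy proximal-point iteration: u^k = argmin_u f0(u) + (sigma/2)(u - uhat)^2.
# For f0(u) = (u - 3)^2, the regularized minimizer is
# u = (2*3 + sigma*uhat) / (2 + sigma), and the incumbent uhat -> 3,
# the unconstrained optimum of f0.
sigma = 1.0
uhat = 0.0  # initial incumbent
for _ in range(200):
    u = (2.0 * 3.0 + sigma * uhat) / (2.0 + sigma)  # candidate solution
    uhat = u  # incumbent update (toy rule: always accept the candidate)

print(uhat)  # ~3.0
```

At the limit point the directional derivatives of $f_0$ are non-negative in every feasible direction, mirroring the argument above.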
\section{Conclusions}\label{sect:conclusions}
The SDLP algorithm extends the regularized 2-SD algorithm \cite{Higle1994} to the MSLP setting where the underlying stochastic process exhibits stagewise independence. The algorithm addresses the state variable formulation of MSLP problems by employing sequential sampling. In this sense, it is a counterpart to the MSD algorithm of \cite{Sen2014} which was designed for a case where the underlying uncertainty has a scenario tree structure. The algorithm presented in this paper incorporates several additional advantages granted by the stagewise independence property. We conclude here by noting the salient features of the SDLP algorithm:
\begin{enumerate}
\item The algorithm uses a single sample-path both for simulating decisions during the forward recursion and for updating approximations during the backward recursion. In any iteration, SDLP requires only two subproblem solves at each stage, compared to SDDP, which requires solving subproblems corresponding to all outcomes at all stages for every sample-path simulated during the forward pass. This significantly reduces the computational burden of solving MSLP problems.
\item The method uses quadratic regularization terms at all non-terminal stages which alleviates the need to retain all the minorants generated. This allows us to retain a finite-sized approximation in all stages, further improving its computational advantage.
\item The BFP described in \S\ref{sect:incumbentSelection} is the first to provide a data-driven policy for MSLP. This mapping overcomes the need to store incumbent solutions that, either explicitly or implicitly, depend on the entire history of state evolution, and can be used with other regularized MSLP algorithms. Our convergence results show that the optimality of the accumulation points of a subsequence of incumbent solutions is preserved even when such a mapping is employed.
\item SDLP incorporates sampling within the optimization step, and thereby, optimizes an SAA with increasing sample size. This feature enables SDLP to solve the MSLP problems to greater accuracy by incorporating additional observations at any stage without having to re-discover the structural information of an instance to build/update the approximations. The adaptive nature allows the algorithm to be terminated upon attaining a desired level of accuracy. This opens the avenue to design statistical optimality rules for multistage setting akin to those developed for 2-SLP \cite{Higle1999, Sen2016}.
\end{enumerate}
The computational advantages of SDLP were revealed in our companion paper \cite{Gangammanavar2018}. In that paper, we applied the SDLP algorithm to an MSLP model for distributed storage control in the presence of renewable generation uncertainty. The computational results compare our algorithm with SDDP applied to an SAA of the original model. The sample-paths used to set up the SAA and those used within the SDLP algorithm were simulated using an autoregressive moving-average time series model. The computational results of that paper indicate that SDLP provides solutions that are not only reliable but also statistically indistinguishable from those of SDDP, while significantly improving the computational times. The computational advantage of SDLP over SDDP can be attributed to the algorithm design. Namely, (i) the forward and backward recursion calculations are carried out along only one sample-path in each iteration, and (ii) the use of regularization helps us maintain a finite-sized optimization problem at every non-terminal stage. Note that we are only referring to calculations within any particular iteration; in this sense our comparison is incomplete. However, carrying out a full theoretical comparison of SDLP and SDDP is beyond the scope of this paper. Nevertheless, we point the reader to recent results on the iteration complexity of the SDDP algorithm \cite{Lan2020complexity} and the sublinear rate of convergence for 2-SD in \cite{Liu2020asymptotic}. We plan to undertake the convergence rate analysis (sample and iteration complexity) of SDLP in our future research endeavors. In any case, the results in \cite{Gangammanavar2018} provide the first evidence of the computational benefits provided by a sequential sampling approach in a multistage setting.
\section{Magnetic characterization of $\alpha$-R\lowercase{u}C\lowercase{l}$_3$}
The temperature dependence of the magnetic susceptibility
$\chi(T)$ of $\alpha$-RuCl$_3$\, is shown in Fig.~\ref{fig:magnetization} (upper
panel) for $\mu_0 H$ = 0.1 T $\parallel$ {\it{ab}}. Note that the
same single crystal was used for the magnetic characterization and
the specific heat capacity measurements. Clearly, $\chi(T)$
exhibits a sharp maximum at $\approx 7.2$~K in agreement with
earlier reports on high-quality single crystals, which only have a
very small amount of stacking faults
~\cite{Banerjee2016,Cao2016,Baek2017}. From the derivative
$d(\chi\cdot T)/dT$, the transition temperature signalling the
onset of the magnetically long-range ordered state is
determined to be $T_N = 6.5$~K.
\begin{figure}[!b]
\centering
\includegraphics[scale=0.46]{FigureS1_end.pdf}
\caption{(color online) Upper panel: The magnetic susceptibility
$\chi$ as function of temperature of $\alpha$-RuCl$_3$\, for $\mu_0 H = 0.1$~T
$\parallel ab$ (left axis). On the right axis the inverse
susceptibility $1/\chi(T)$ is shown together with the Curie-Weiss
fit in the high-temperature regime. Lower panel: The magnetization
as function of field of $\alpha$-RuCl$_3$\, measured at $1.8$~K (left axis)
together with its derivative $dM/d(\mu_0 H)$ (right axis). In the
inset the relative difference of the magnetization for up-and
down-sweeps of the magnetic field $\Delta M/M$ is depicted as
function of field. } \label{fig:magnetization}
\end{figure}
From the temperature dependence of the inverse susceptibility (red
line in the upper panel of Fig.~\ref{fig:magnetization}) a linear
scaling of 1/$\chi(T)$ with temperature $T$ is observed for $T>T_s
\approx 160$~K. $T_s$ marks the first-order structural transition
of $\alpha$-RuCl$_3$\, \cite{Baek2017}. From a fit of the inverse susceptibility
to a Curie-Weiss law, a Curie-Weiss temperature $\theta_{\rm CW} =
+36$~K and an effective magnetic moment $\mu_{\rm eff} = 2.24
\mu_B$ were extracted for $H \parallel ab$. Notably, the effective
moment is much larger than the spin-only value of 1.73$\mu_B$
expected for Ru$^{3+}$, pointing towards a large orbital
contribution to the magnetic moment.
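The Curie-Weiss analysis above amounts to a linear fit of $1/\chi(T)$ in the high-temperature regime. The following Python sketch demonstrates the procedure on synthetic data; the numerical values and the CGS-unit convention $\mu_{\rm eff} = \sqrt{8C}\,\mu_B$ (for $\chi$ in emu/mol) are assumptions for illustration only, and this is not the fitting code used for the data in Fig.~\ref{fig:magnetization}.

```python
import numpy as np

# Synthetic Curie-Weiss data: 1/chi = (T - theta)/C, fitted only above T_s ~ 160 K.
theta_true, C_true = 36.0, 0.63  # K and emu K / mol (hypothetical values)
T = np.linspace(170.0, 350.0, 50)
inv_chi = (T - theta_true) / C_true

# Linear fit 1/chi = slope*T + intercept, then C = 1/slope, theta = -intercept*C.
slope, intercept = np.polyfit(T, inv_chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit
mu_eff = np.sqrt(8.0 * C_fit)  # effective moment in units of mu_B (CGS convention)
```

With $C \simeq 0.63$ emu\,K/mol this reproduces $\theta_{\rm CW} \simeq +36$~K and $\mu_{\rm eff} \simeq 2.24\,\mu_B$, well above the spin-only value $\sqrt{3}\,\mu_B \approx 1.73\,\mu_B$.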
The magnetization of $\alpha$-RuCl$_3$\, as function of field $H \parallel ab$
measured at $1.8$~K is depicted in the lower panel of Fig.~
\ref{fig:magnetization}. From the derivative curve $dM/d(\mu_0 H)$
two changes of slope can clearly be discerned at $\mu_0 H \approx
1.2$~T and $\mu_0 H \approx 6.75$~T. While the transition around
$6.75$~T is in line with the field-induced QCP observed in our
specific-heat study in this work, the one around 1.2~T is still a
matter of debate. From the change of slope of $M(H)$ in the
low-field regime, together with the magnetic susceptibility at
lowest $T$, the presence of paramagnetic impurities can be
discarded as the origin of the low-field anomaly around $1.2$~T.
Rather, the anomaly could be due to a redistribution in domain
population occurring in this rather low field
range~\cite{Sears2017}.
Turning to our magnetization curves for
up- and down-sweeps of the magnetic field, no substantial
hysteresis can be observed for fields above $\sim 2$~T. This is in
perfect agreement with our field-induced QCP scenario at $\mu_0
H_\text{c}$ $\approx$ 6.9~T, and underlines the second-order nature of
the phase transition at $\mu_0 H_\text{c}$.
\section{Phonon calculations for R\lowercase{h}C\lowercase{l}$_3$}
\subsection{Computational details}
The first-principles calculations were performed with the
projector-augmented wave method as implemented in the Vienna
\emph{ab initio} simulation package
(VASP)~\cite{Bloechl:1994,Kresse:1999,Kresse:1996}. The
force-constant matrix was obtained through the supercell approach
within the finite displacement
method~\cite{Parlinski:1997,Chaput:2011} taking into account
non-analytical term corrections~\cite{Gonze:1997b}. The
generalized-gradient approximation in the parameterization of
Perdew, Burke, and Ernzerhof (PBE)~\cite{Perdew:1996} was adopted
to describe exchange and correlation. The software PHONOPY was
employed to determine the phonon dispersion relations and the
phonon density of states (DOS) from the force-constant matrix, as
well as the heat capacity at constant volume~\cite{phonopy}. The
experimental single crystal structure parameters for RhCl$_3$ were
used in the calculations, which confirm the literature
data~\cite{Baernighausen1964}.
The convergence of all numerical parameters was carefully checked.
All VASP calculations were carried out with the global precision
switch ``Accurate'' employing a plane-wave cutoff of $400$\,eV.
The grid for augmentation charges contained eight times the
default number, and the convergence criterion for the total energy
was set to $10^{-8}$\,eV. $\Gamma$-point calculations for a
$4\times4\times4$ supercell in terms of the conventional eight-atom
unit cell (corresponding to a $4\times4\times4$ phonon-grid
partitioning) were adopted for the present results.
\subsection{Results}
The computed phonon DOS and the derived heat capacity in the low
temperature region for RhCl$_{3}$ are shown in
Figs.~\ref{fig:pDOS} and~\ref{fig:Cv}, respectively. As is
evident, the phonon spectrum is gapped twice, exhibits a
Debye-like low-frequency behavior, and possesses a band width of
approximately 10.3\,THz. The temperature dependence of the heat
capacity follows a Debye-like $T^3$ behavior up to approximately
10\,K.
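The quoted $T^3$ regime follows directly from the harmonic expression for the heat capacity as an integral of the per-mode Einstein function over the phonon DOS. The Python sketch below is a simplified stand-in for the PHONOPY post-processing: it uses a model Debye DOS $g(\omega)\propto\omega^2$ (with a cutoff at the $\sim$10.3~THz band width), not the computed RhCl$_3$ DOS, and shows that doubling $T$ deep in the low-temperature regime increases $C_V$ by roughly a factor of $2^3 = 8$.

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J*s

# Model Debye DOS up to the ~10.3 THz band width quoted above.
w_max = 2.0 * np.pi * 10.3e12               # angular-frequency cutoff, rad/s
w = np.linspace(w_max / 3000.0, w_max, 3000)
g = w**2                                     # Debye DOS, arbitrary normalization

def c_v(T):
    """C_V(T) = int dw g(w) k_B x^2 e^x/(e^x - 1)^2, with x = hbar*w/(k_B*T)."""
    x = HBAR * w / (KB * T)
    integrand = g * x**2 * np.exp(x) / np.expm1(x) ** 2
    # trapezoidal rule written out by hand for portability
    return KB * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(w)))

ratio = c_v(8.0) / c_v(4.0)  # close to 2^3 = 8 deep in the T^3 regime
```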
\begin{figure}
\resizebox{\columnwidth}{!}{\includegraphics[clip]{./phononDOS_RhCL3_PBE_BORN.pdf}}
\caption{\label{fig:pDOS}Phonon DOS for RhCl$_{3}$. The DOS is
normalized to the number of normal modes per primitive unit cell.}
\end{figure}
\begin{figure}
\resizebox{\columnwidth}{!}{\includegraphics[clip]{./heat_capacity_RhCl3_PBE_BORN.pdf}}
\caption{\label{fig:Cv}Log-log plot of the heat capacity at
constant volume for RhCl$_{3}$. The heat capacity is given per
formula unit.}
\end{figure}
\section{Field-induced QCP in {$J_1$--$K_1$--$\Gamma_1$--$J_3$} honeycomb lattice model}
\subsection{Modelling}
To date, the debate about the most appropriate effective spin
model to describe the magnetic behavior of $\alpha$-RuCl$_3$\, has not been
settled. Most proposals involve nearest-neighbor Heisenberg,
Kitaev, and symmetric off-diagonal exchanges on a two-dimensional
honeycomb lattice; often second- and/or third-neighbor
interactions are invoked as well.
Below we will show results for a concrete minimal model derived
from ab-initio density functional theory, containing
nearest-neighbor Heisenberg $J_1$, Kitaev $K_1$, and off-diagonal
$\Gamma_1$ interaction as well as a third-nearest-neighbor
Heisenberg $J_3$ interaction \cite{Winter2016}:
\begin{align}
\mathcal H &=
\sum_\text{1st nn} \left[
J_1 \vec S_i \cdot \vec S_j
+ K_1 S_i^\gamma S_j^\gamma
+ \Gamma_1 (S_i^\alpha S_j^\beta + S_i^\beta S_j^\alpha)\right] \notag \\
& +
\sum_\text{3rd nn} J_3 \vec S_i \cdot \vec S_j.
\label{eq:jkg}
\end{align}
Here, $\{\alpha, \beta, \gamma\} = \{x, y, z\}$ on a
nearest-neighbour $z$ bond, for example. The spin quantization
axes point along the cubic axes of the RuCl$_6$ octahedra, such
that the $[111]$ direction is perpendicular to the honeycomb $ab$
plane (sometimes referred to as $c^*$ axis) and the in-plane
$[\bar 1 1 0]$ direction points along a Ru-Ru nearest-neighbor
bond of the honeycomb lattice. Trigonal distortion is neglected in
this simple model.
The values for the exchange couplings can be estimated from the
\textit{ab initio} calculations~\cite{Winter2016}; however, we
find better agreement with our experimental data by using a
slightly adapted parameter set that has recently been suggested by
comparing with neutron scattering data (at zero
field)~\cite{Winter2017}:
\begin{align} \label{eq:couplings}
(J_1, K_1, \Gamma_1, J_3) & = \left(-0.5, -5.0, +2.5, +0.5\right)\,\text{meV}.
\end{align}
We are interested in the behavior of this model in the presence of
an external magnetic field, i.e., described by the Hamiltonian
$\mathcal H' = \mathcal H - g \mu_0 \mu_\mathrm{B} \sum_i \vec H
\cdot \vec S_i$. Here, $g \mu_\mathrm{B} \vec S$ corresponds to
the effective moment of the $J_\text{eff} = 1/2$ states in the
crystal. Solving this model (or other relevant ones) for
quantum-mechanical spins $1/2$ requires large-scale numerics, and
detailed studies in an applied field are lacking.
\subsection{Spin-wave theory for $H>H_\text{c}$}
The model \eqref{eq:jkg} can be solved in the semiclassical limit
of large spin $S$ \cite{Janssen2016, Janssen2017}. At zero field,
it has a zigzag antiferromagnetic ground state. At finite $\vec H
\parallel [\bar 1 1 0]$ in the $ab$ plane, the zigzag state cants towards the
magnetic field. At a critical field strength $H_\text{c}$, there is a
continuous transition towards a (partially) polarized high-field
phase. For the critical field we find, in the semiclassical limit,
$\mu_0 H_\text{c} = 0.586 \frac{|K_1 S|}{g \mu_\mathrm{B}} \simeq
9\,\mathrm{T}$ if we assume the previously estimated $g$ factor of
$g \simeq 2.8$~\cite{Majumder2015}.
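As a quick numerical cross-check (our own arithmetic, not part of the original analysis, using the Bohr magneton in meV/T), the quoted semiclassical value $\mu_0 H_\text{c} \simeq 9\,\mathrm{T}$ indeed follows from the stated parameters:

```python
# Cross-check of the semiclassical critical field
# mu_0 H_c = 0.586 |K_1 S| / (g mu_B), with K_1 = -5.0 meV, S = 1/2, g = 2.8.
MU_B = 5.7883818e-2  # Bohr magneton in meV/T

K1 = -5.0   # Kitaev coupling in meV
S = 0.5     # spin length
g = 2.8     # estimated g factor

H_c = 0.586 * abs(K1 * S) / (g * MU_B)  # critical field in tesla
print(f"mu_0 H_c = {H_c:.1f} T")        # -> mu_0 H_c = 9.0 T
```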
Given that our model contains no free fitting parameters, and in
light of the semiclassical approximation, we consider the rough
agreement with our experimental finding of $\mu_0 H_\text{c}
\simeq 6.9\,\mathrm{T}$ satisfactory.
\begin{figure*}
\includegraphics[scale=.75]{magnon-dispersion-set3+J3-hbar110-hc0.pdf}\hfill
\includegraphics[scale=.75]{magnon-dispersion-set3+J3-hbar110-hc0+1Em1.pdf}
\caption{(color online) Calculated excitation spectra and magnon DOS at the quantum critical point (a) and in the high-field phase (b), respectively, for the {$J_1$--$K_1$--$\Gamma_1$--$J_3$} model on the honeycomb lattice in an external field $\vec H \parallel [\bar 1 1 0]$. The inset shows the lower-band dispersion in the first Brillouin zone (color plot) and the path along the high-symmetry lines used in the main panel (red line).
We have used $(J_1, K_1, \Gamma_1, J_3) = (-0.5, -5.0, +2.5, +0.5)\,\text{meV}$.
}
\label{fig:spectrum}
\end{figure*}
\begin{widetext}
The excitation spectrum in the high-field phase can be computed
within spin-wave theory. We employ the Holstein-Primakoff
representation
\begin{align}
\vec S_i & =
\begin{cases}
(S - a^\dagger_i a_i) \vec n
+ \sqrt{\frac{S}{2}} (a_i + a_i^\dagger) \vec e
+ \mathrm{i} \sqrt{\frac{S}{2}} (a_i - a_i^\dagger) (\vec n \times \vec e) + \mathcal O(1/\sqrt{S}), &
\qquad \text{if } i \in \mathrm{A}, \\
(S - b^\dagger_i b_i) \vec n
+ \sqrt{\frac{S}{2}} (b_i + b_i^\dagger) \vec e
+ \mathrm{i} \sqrt{\frac{S}{2}} (b_i - b_i^\dagger) (\vec n \times \vec e) + \mathcal O(1/\sqrt{S}), &
\qquad \text{if } i \in \mathrm{B},
\end{cases}
\end{align}
with $\vec n = (-\vec e_x+\vec e_y)/\sqrt{2} \parallel \vec H$ and
$\vec e = -\vec e_z$. $\vec e_x$, $\vec e_y$, and $\vec e_z$ are
the spin quantization axes. $a_i^\dagger$ and $a_i$ ($b_i^\dagger$
and $b_i$) are the magnon creation and annihilation operators at
site $i$ on sublattice A (B). To leading order in $1/S$, we
find the spin-wave Hamiltonian
\begin{align}
\mathcal H_\text{SW} & = S \sum_{\vec q \in \mathrm{BZ}}
\left[
\epsilon_0 \left(a^\dagger_{\vec q} a_{\vec q} + b^\dagger_{\vec q} b_{\vec q}\right)
+ \lambda_0(\vec q) a^\dagger_{\vec q} b_{\vec q} + \lambda_0^*(\vec q) b^\dagger_{\vec q} a_{\vec q}
+ \lambda_1(\vec q) a_{-\vec q} b_{\vec q} + \lambda_1^*(-\vec q) a^\dagger_{\vec q} b^\dagger_{- \vec q}
\right],
\end{align}
with the coefficients
\begin{align}
\epsilon_0 & = g \mu_0 \mu_\mathrm{B} H/S - 3 J_1 - K_1 + \Gamma_1, \\
\lambda_0(\vec q) & =
\left(J_1+ \frac{K_1}{4}\right) \left(\mathrm{e}^{\mathrm{i} \vec q \cdot \vec \delta_x} + \mathrm{e}^{\mathrm{i} \vec q \cdot \vec \delta_y}\right)
+ \left(J_1+ \frac{K_1}{2} + \frac{\Gamma_1}{2}\right) \mathrm{e}^{\mathrm{i} \vec q \cdot \vec \delta_z} +
J_3 \left(\mathrm{e}^{-2\mathrm{i} \vec q \cdot \vec \delta_x} + \mathrm{e}^{-2\mathrm{i} \vec q \cdot \vec \delta_y} + \mathrm{e}^{-2\mathrm{i} \vec q \cdot \vec \delta_z}\right), \\
\lambda_1(\vec q) & =
\left(-\frac{K_1}{4} + \frac{\mathrm{i} \Gamma_1}{\sqrt{2}}\right) \left(\mathrm{e}^{\mathrm{i} \vec q \cdot \vec \delta_x} + \mathrm{e}^{\mathrm{i} \vec q \cdot \vec \delta_y}\right)
+ \frac{K_1 - \Gamma_1}{2} \mathrm{e}^{\mathrm{i} \vec q \cdot \vec \delta_z}.
\end{align}
$\mathcal H_\mathrm{SW}$ can be diagonalized by means of a
Bogoliubov transformation. The resulting excitation spectrum
together with the corresponding density of states (DOS) for the
parameter set of Eq.~\eqref{eq:couplings} is displayed for two
different values of the magnetic field at and above the quantum
critical point (QCP) in Fig.~\ref{fig:spectrum}. The spectrum is
gapped for any $H > H_\text{c} = 0.586 \frac{|K_1 S|}{g \mu_0
\mu_\mathrm{B}}$ (in agreement with the classical critical field
strength) with a gap value of
\begin{align} \label{eq:gap}
\Delta(H) = 1.30 \left\lvert K_1 S \right\rvert \left(\frac{H-H_\text{c}}{H_\text{c}}\right)^{1/2} + \mathcal O\!\left[\left((H-H_\text{c})/H_\text{c}\right)^{3/2}\right],
\end{align}
which is roughly of the order of magnitude of the experimentally
observed gap. As quantum effects are enhanced at low energies, we
expect Eq.~\eqref{eq:gap} to receive sizable corrections when
magnon interactions are taken into account. In particular, the
true gap exponent $\nu z$ will deviate from the mean-field value
$(\nu z)_\text{MF} = 1/2$ we have obtained here. This prevents a
more detailed quantitative comparison with the experimental gap
behavior.
\end{widetext}
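For orientation (a numerical illustration of ours, not a substitute for the full Bogoliubov calculation), the leading-order gap of Eq.~\eqref{eq:gap} can be evaluated for a few fields just above the critical one:

```python
import math

K1, S = -5.0, 0.5  # Kitaev coupling (meV) and spin length

def gap(h_over_hc):
    """Leading-order spin-wave gap Delta(H) in meV from Eq. (gap):
    Delta = 1.30 |K_1 S| [(H - H_c)/H_c]^(1/2)."""
    return 1.30 * abs(K1 * S) * math.sqrt(h_over_hc - 1.0)

for h in (1.05, 1.1, 1.2):
    print(f"H/H_c = {h:.2f}:  Delta = {gap(h):.2f} meV")
# -> 0.73, 1.03, and 1.45 meV, i.e. of order 1 meV close to the QCP
```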
\begin{figure*}
\includegraphics[scale=1]{CpTm1XTp1-set3+J3-hbar110.pdf} \hfill
\includegraphics[scale=1]{CpTp1XTm1-set3+J3-hbar110.pdf} \\
\raggedright \includegraphics[scale=1]{Cp-scaling-set3+J3-hbar110.pdf}
\includegraphics[scale=1]{Cp-legend.pdf}\hfill
\caption{(color online)
(a) Double-log plot of the specific heat $C_\text{mag}/T$ versus temperature $T$ for a honeycomb-lattice {$J_1$--$K_1$--$\Gamma_1$--$J_3$} model in external field $\vec H \parallel [\bar 1 1 0]$ for different magnetic field strengths $H \geq H_\text{c}$. At the quantum critical point $H = H_\text{c}$ and low $T$, the specific heat scales as $C_\text{mag} \propto T^{d/z}$ with dimensionality $d=2$ and the dynamical critical exponent $z = 1$ (dashed line).
(b) Same data as (a), but now plotted as $C_\text{mag} T$ versus $1/T$ in log-linear plot. The dashed lines show the low-$T$ approximation according to Eq.~\eqref{eq:cXT}.
(c) Scaling plot $C_\text{mag}/T^{d/z}$ versus $T/(H - H_\text{c})^{\nu z}$ with correlation-length exponent $\nu$. For our model, we have $\nu=1/2$ at the level of the present mean-field-like approximation.}
\label{fig:specific-heat}
\end{figure*}
We note, however, that thermodynamic quantities, such as the
specific heat at low to intermediate temperatures, should be
less affected by our linear spin-wave
approximation, since they predominantly depend on the parts of the
excitation spectrum with a large density of states, and these are
located at higher energies.
\subsection{Specific heat for $H>H_\text{c}$}
The heat capacity is obtained from the spectrum via
\begin{align}
C_\text{mag}(T,H) & = \sum_{\alpha = 1,2} \sum_{\vec q \in \mathrm{BZ}}
\frac{\partial}{\partial T} \frac{\varepsilon_{\alpha}(\vec q)}{\exp\left[\varepsilon_{\alpha}(\vec q)/(k_\mathrm{B} T)\right] - 1},
\end{align}
where $\varepsilon_{1,2}(\vec q)$ are the two magnon bands. The
result is given for different magnetic field strengths in
Fig.~\ref{fig:specific-heat}(a).
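The structure of this computation can be sketched as follows (our own minimal illustration: a single toy gapped band on a square grid is used in place of the actual two Bogoliubov bands of the model, and the analytic derivative of the Bose occupation is taken):

```python
import numpy as np

KB = 8.617333e-2  # Boltzmann constant in meV/K

def c_mag(T, bands):
    """Magnon specific heat per mode (in units of k_B) from band energies in meV.
    Uses d/dT [eps * n_B(eps)] = k_B x^2 e^x / (e^x - 1)^2 with x = eps/(k_B T)."""
    x = bands / (KB * T)
    return np.mean(x**2 * np.exp(x) / np.expm1(x)**2)

# Toy gapped dispersion on a Brillouin-zone grid (assumed, for illustration only):
q = np.linspace(-np.pi, np.pi, 101)
qx, qy = np.meshgrid(q, q)
delta = 1.0  # assumed gap in meV
eps = np.sqrt(delta**2 + 4.0 * (np.sin(qx / 2)**2 + np.sin(qy / 2)**2))

for T in (2.0, 5.0, 20.0):
    print(f"T = {T:5.1f} K:  C per mode = {c_mag(T, eps):.4f} k_B")
```

At temperatures well below the gap the result is exponentially small, as in Eq.~\eqref{eq:cXT} below.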
At low temperatures and for $H$ not too close to $H_\text{c}$, the specific
heat is exponentially suppressed,
\begin{align} \label{eq:cXT}
C_\text{mag}(T,H) \simeq k_\mathrm{B} \left(\frac{\rho_0
\Delta^2}{k_\mathrm{B} T}\right) \mathrm{e}^{-\Delta/(k_\mathrm{B} T)},
\quad
\text{for } k_\mathrm{B} T \ll \Delta(H),
\end{align}
where $\rho_0 \equiv \rho_0(H)$ is the density of states at the
band minimum. This is shown in Fig.~\ref{fig:specific-heat}(b).
Close to the QCP, on the other hand, the critical part of the
specific heat is expected to follow a scaling law
\begin{align}
C_\text{mag}(T,H) = T^{d/z} f_\pm \!\left(T/(H-H_\text{c})^{\nu z}\right)
\end{align}
with the spatial dimensionality $d=2$, the dynamical critical
exponent $z=1$, the correlation-length exponent $\nu$, and scaling
functions $f_\pm(x)$ above $(+)$ and below $(-)$ the QCP. This is
demonstrated for our theoretical data in
Fig.~\ref{fig:specific-heat}(c).
As a consequence, directly at the QCP for $H = H_\text{c}$, the specific
heat follows a power law at low temperatures, $C_\text{mag}(T,H) \propto
T^{2}$, see dashed line in Fig.~\ref{fig:specific-heat}(a). For
fields $H>H_\text{c}$ the low-$T$ specific heat is exponentially suppressed, with a gap
that depends sublinearly on $(H-H_\text{c})$, see Fig.~\ref{fig:vhs}.
\begin{figure}
\includegraphics[scale=.9]{gap-vHs-set3+J3}
\caption{(color online) Red: Calculated gap $\Delta(H)$ as a
function of the magnetic field for $H/H_\text{c} \geq 1$. Blue: Energy of the
first van-Hove singularity $\varepsilon_\mathrm{vHs}$. The dashed
curves correspond to expansions in small $(H-H_\text{c})/H_\text{c}$,
yielding $\Delta(H)/|K_1 S| \simeq 1.30 [(H-H_\text{c})/H_\text{c}]^{1/2}$
(Eq.~\eqref{eq:gap}) and $\varepsilon_\mathrm{vHs}/|K_1 S| \simeq
1.11 + 0.71 (H-H_\text{c})/H_\text{c}$, respectively. } \label{fig:vhs}
\end{figure}
Interestingly, $C_\text{mag}/T$ displays a maximum at higher
temperatures, $k_\mathrm{B} T \sim \mathcal O(|K_1 S|)$. The
position of this maximum shifts approximately linearly with $H$;
this can be attributed to the shift of the high-energy part of the
spectrum that has a large weight, such as the location of the
van-Hove singularities at $\varepsilon_\text{vHs} \sim \mathcal
O(|K_1 S|)$ at $H=H_\text{c}$. The shift of $\varepsilon_\text{vHs}$ with
field is illustrated in Fig.~\ref{fig:vhs}. Note that the weight
near $\varepsilon_\text{vHs}$ is particularly large due to almost
flat portions of the magnon bands, arising from the combination of
$K_1$ and $\Gamma_1$ terms.
We emphasize that it is this specific-heat maximum which limits
the validity of scaling in our theoretical data,
Fig.~\ref{fig:specific-heat}(c). This is not unlike what happens
in the experimental data where scaling is spoiled by the presence
of a small energy scale in the magnon spectrum. Spectroscopic
investigations of the excitation spectrum at elevated fields are
clearly called for.
\section{Introduction}
\label{Introduction}
Since its formulation, Einstein's general theory of relativity (GR) has withstood extensive experimental and
observational scrutiny using tests that range from millimeter to solar system scales (see \cite{Will_review}
and references therein). The discovery of the late-time cosmic
acceleration
\cite{SNcosmic_accel,Perlmutter} was a surprise,
but one which could be modelled within the minimally extended
framework of $\Lambda$CDM
\cite{Peebles,lrr-2001-1} -- GR with a positive cosmological constant. To
this day this
simple model remains in very good agreement with data from all competitive probes
\cite{Komatsu:2010fb, Amanullah:2010vv, Reid:2009xm, Percival:2009xn}, which
imply that approximately 70\% of the energy density of the universe
is made up of a component which
does not cluster and has an equation of state with
pressure approximately equal to minus the energy density.
While the simplest model for this component is indeed the cosmological
constant, from the point of view of particle physics,
its value implied by the measurements of the cosmological expansion
is extremely low and requires a very high level of fine tuning.
A number of alternative models for dark energy have been proposed,
most of which suffer from a similar fine-tuning problem to $\Lambda$CDM
(see the review \cite{Copeland:2006wr}), but at least
provide a set of alternatives against which to test the $\Lambda$CDM
hypothesis. In this spirit, it is possible to imagine that, rather
than proposing the existence of a new, exotic form of energy density, it
is the theory of gravity which we use to interpret the cosmological
data that must be modified.
There are a number of proposed gravity theories which modify the
dynamics at large distances, and metric $f(R)$ theories
of gravity (see, e.g., \cite{f_of_R_review,Sotiriou} and references
therein) comprise one such class of modifications to GR. This class has attracted
considerable attention in recent years, perhaps due to the
simplicity of the modifications. Further motivation for the study of $f(R)$ gravity is reviewed
in \cite{Sotiriou}; for other interesting alternatives,
see \cite{f_of_R_review} for Gauss-Bonnet gravity, \cite{Mannheim2006340} for conformal gravity and
\cite{MaartensRev} for Brane-World gravity.
The $f(R)$ formulation arises from a
simple replacement of the Ricci scalar ($R$) in
the Einstein-Hilbert action,
\labeq{EHaction}{
S = \frac{1}{16\pi}\int \sqrt{-g}d^4x (R-2\Lambda) + S_m(g_{\mu\nu},\psi_m),
}
where $g$ is the determinant of the metric tensor
$g_{\mu\nu}$, $\Lambda$ the cosmological constant, $S_m$ the
matter term in the action,
and $\psi_m$ collectively denotes the matter fields,
by an arbitrary function of the Ricci scalar, i.e.,
\labeq{fofRaction}{
S = \frac{1}{16\pi}\int \sqrt{-g}d^4x f(R) + S_m(g_{\mu\nu},\psi_m).
}
Note that throughout this work we adopt geometrized units, where $G=c=1$.
From Equations \eref{EHaction} and \eref{fofRaction} it is clear
that GR is recovered for $f(R)=R-2\Lambda$. In metric $f(R)$ theories
the connection symbols ${}^{(4)}\Gamma^i{}_{jk}$ are chosen to
be the Christoffel symbols associated with the metric tensor, so that
the action is a function of only the metric tensor and its derivatives.
As a result, in metric $f(R)$ gravity only the metric tensor is truly dynamical.
In Palatini $f(R)$ gravity the connections ${}^{(4)}\Gamma^i{}_{jk}$ are
considered independent of the metric tensor, so that the action is
a function of both the metric tensor and the connection symbols. Thus,
in Palatini $f(R)$ both $g_{\mu\nu}$ and ${}^{(4)}\Gamma^i{}_{jk}$ are
dynamical fields (see also \cite{Amendola:2010bk} for a new class of
models which interpolate between the metric and Palatini formulations).
In this work we are concerned with metric $f(R)$ gravity only.
Early work on $f(R)$ theories \cite{Bergmann,Ruzmaikina,Buchdahl,Starobinsky1980}
was mainly concerned with high-energy corrections to
general relativity and their influence on the early universe
(see in particular \cite{Starobinsky1980} where the first $f(R)$ model of inflation
was proposed). The discovery of cosmic acceleration
\cite{SNcosmic_accel,Perlmutter} renewed the interest in $f(R)$ models,
but now with modifications in the infra-red. A number of alternative models to
GR have been proposed \cite{Capozziello0,Capozziello,Capozziello2,CDTT,Nojiri}.
However, it was later shown that these models neither satisfy
local gravity constraints \cite{Chiba,Olmo1,Olmo2} nor
give rise to a standard matter-dominated era \cite{Amendola1,Amendola2}.
General conditions for the cosmological viability of $f(R)$ models were derived in
\cite{Amendola3} and it was later realized that the
so-called Chameleon mechanism -- the scalar degree of freedom becomes massive in dense environments and light in diffuse ones --
can allow $f(R)$ gravity to satisfy Solar-System constraints \cite{Faulkner:2006ub,Hu_Iggy}.
The key consequence of the Chameleon mechanism is that the modification to the metric inside galactic haloes
is suppressed: gravity returns to its general-relativistic behaviour. The functioning of the Chameleon mechanism has
also been confirmed via N-body simulations of large-scale cosmological structure
formation in \cite{Oyaizu:2008sr, Oyaizu:2008tb, Schmidt:2008tn,Ferraro:2010gh, Zhao:2010qy}, where it was shown that
predictions for cluster abundance and the matter power spectrum return at small scales to those calculated within the $\Lambda$CDM framework.
A number of models that satisfy both Solar-System and cosmological constraints
have been proposed in \cite{Faulkner:2006ub,Hu_Iggy,Li,Starobinsky2,Appleby,Deruelle,Cognola,Linder}, and it is now
known that for an $f(R)$ theory to be viable the following four constraints must be met \cite{f_of_R_review}:
\begin{enumerate}
\item $f,_{R} > 0 $ for $R \geq R_0$, where $R_0$ is the cosmological
value of the Ricci scalar today. This condition is necessary
for guaranteeing that the new scalar degree of freedom is not a ghost -- a field with negative kinetic energy.
\item $f(R) \rightarrow R$ for $R \gg R_0$. This condition
is necessary for the presence of a matter-dominated era and to
evade solar-system constraints.
\item $f,_{RR} > 0 $ for $R \geq R_0$ in the presence of external matter.
This condition ensures that the matter-dominated era is the stable
solution for cosmology and that the solutions which satisfy solar
system constraints are stable.
\item $0 < Rf,_{RR}/f,_R|_{r=-2}$, where $r = -Rf,_R/f$.
This condition is necessary for the stability and presence of a late-time de Sitter solution.
\end{enumerate}
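A hypothetical illustration of how the first and third conditions can be checked symbolically (the quadratic model below is used purely for concreteness and is not one of the dark-energy models discussed in the text; note it does not by itself satisfy the infra-red condition 2):

```python
import sympy as sp

# Check the ghost/stability conditions 1 and 3 for the assumed model
# f(R) = R + alpha R^2 with alpha > 0 and R > 0.
R, alpha = sp.symbols('R alpha', positive=True)

f = R + alpha * R**2
f_R = sp.diff(f, R)      # f_{,R}  = 1 + 2*alpha*R
f_RR = sp.diff(f, R, 2)  # f_{,RR} = 2*alpha

print(f_R > 0)   # True: no ghost (condition 1)
print(f_RR > 0)  # True: stable high-curvature solutions (condition 3)
```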
The existence of these requirements is a result
of the fact that in $f(R)$ gravity the Ricci scalar is a full
dynamical degree of freedom, which must behave in a manner similar to
the Ricci scalar in GR, where it is controlled through a
constraint ($R=-8\pi T$). These conditions ensure that in high-density
environments, the so-called high-curvature solutions,
where $R\simeq 8\pi\rho$, are stable.
An additional constraint that any theory of gravity must satisfy is the existence of stable relativistic (neutron) stars. It was originally pointed
out in \cite{Frolov} that many models of $f(R)$ theories reach a curvature singularity at a finite value of the
scalar degree of freedom $f,_R$ which is not protected by the existence of a potential barrier. This value
of the scalar field may be attained in the presence of relativistic matter. This same idea was used in \cite{Kobayashi} to argue
that it is not possible to build spherically symmetric, i.e., non-rotating, relativistic stars in $f(R)$ theories of gravity.
These works stimulated further interest and eventually numerical models of spherical relativistic stars in $f(R)$ gravity
were explicitly constructed in \cite{Babichev,Upadhye_Hu,Babichev2}. There it
was shown that building numerical models of neutron stars in $f(R)$ gravity is very sensitive to the treatment of boundary conditions.
To our knowledge a stability analysis of non-rotating equilibrium models of neutron stars in the context of $f(R)$ theories has not been carried out yet.
One may expect that the stability properties of relativistic stars in viable $f(R)$ gravity are the
same as those in GR, because of condition
3 above. However, given the subtleties that arise in obtaining relativistic stellar configurations in $f(R)$ theories
due to the effective scalar degree of freedom it is natural to expect that the back-reaction of the scalar field
will affect the stability, too. In addition, it would be interesting to explore
the existence and stability of rotating neutron stars and how $f(R)$ gravity affects the criterion for the onset of the bar mode, r-mode and other
non-axisymmetric instabilities \cite{LRS93a,LRS94,ChandrasekharEllips,ShibataBarmode,SaijoBarmode,KokkotasRev}.
Furthermore, it is intriguing to study gravitational radiation arising from compact stars, both in isolation and in binary systems. Included
in this list are
neutron star -- neutron star \cite{DuezNSNSReview}, black hole--black hole \cite{2010CQGra..27k4004H}, black hole--neutron star
\cite{Rantsiou08, Loffler06, Faber, Faber06, Shibata06, Shibata07, Shibata08,Yamamoto08,Etienne08a, Etienne08,
Duez08,2009PhRvD..79d4030S,2009PhRvD..79l4018K,2010AAS...21530001M,2010arXiv1006.2839C,2010CQGra..27k4106D,
2010arXiv1007.4160P,2010arXiv1007.4203F,2010arXiv1008.1460K} and white-dwarf--neutron star binaries \cite{WDNS_PAPERI,WDNS_PAPERII}.
Some of these studies can be carried out analytically via perturbation theory, and some require direct numerical simulations.
One of the main points we make in this work is that current numerical relativity techniques (see texts by Baumgarte and Shapiro
\cite{BSBook} and Alcubierre \cite{AlcuBook} and references therein), i.e.,
the solution of the Einstein equations by computational means,
should be able to handle the equations of $f(R)$ gravity
straightforwardly. In particular, the minimum requirement
is to modify the stress-energy tensor and add a new scalar field evolution equation.
However, to achieve long-term stable numerical integration of any set of partial differential equations,
well-posedness of the Cauchy (or initial value) problem must be guaranteed.
Unlike GR, the field equations of metric $f(R)$ gravity in the so-called Jordan frame are 4th order (see \Sref{fofRequations}).
Nevertheless these theories can be cast in 2nd-order form, by promoting $f,_R$ (the derivative of $f(R)$
with respect to $R$) to an effective dynamical scalar degree of freedom. Alternatively, metric $f(R)$ gravity can be
reduced to second-order form by a transformation of the $f(R)$ action to a Brans-Dicke (BD) \cite{BransDicke} action
with $\omega=0$ \cite{Chiba}. This means that $f(R)$ gravity is equivalent to BD gravity without a kinetic term.
Exploiting this equivalence and the 3+1 decomposition approach of \cite{Salgado06}, it was
demonstrated in \cite{fofR_well_posed} that metric $f(R)$ gravity admits a well-posed initial value problem. As in 3+1 GR,
to solve the initial value problem, first one solves the 3+1 constraint equations
to obtain initial data and then uses the 3+1 evolution equations to advance the initial data in time. For this approach to yield a consistent
solution of the covariant (4D) field equations, the 3+1 evolution equations must preserve the constraints of the theory.
To prove this one has to derive the evolution equations of the constraints, which are often referred to as the constraint propagation equations,
and show that if the constraint equations are initially satisfied, they must be satisfied for all times.
To our knowledge this has never been demonstrated for a 3+1 formulation of $f(R)$ gravity and in this work we show that this is indeed the case.
To date there are two methods for deriving 3+1 constraint propagation equations.
One approach is to take the time derivative of the constraint equations in 3+1 form and then replace the time derivatives
of all dynamical variables by using the evolution equations for these variables. We call this the 3+1 or ``brute force'' method.
This is a rather tedious approach and, to our knowledge, it has been performed in GR only for vacuum spacetimes in \cite{VasPas_formulations}.
A pedagogical example that explains the ``brute force'' method is given in section II of \cite{VasPas_formulations},
and more involved applications involving Maxwell's equations can be found in \cite{Calabrese}.
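The idea behind such constraint-propagation arguments can be made concrete with a small symbolic sketch of our own, in the spirit of the Maxwell example (Heaviside--Lorentz units with $c=1$ are assumed): the time derivative of the Gauss constraint vanishes identically once the field evolution equation and charge conservation are imposed.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (x, y, z)
B = [sp.Function(f'B{i}')(t, x, y, z) for i in range(3)]
J = [sp.Function(f'J{i}')(t, x, y, z) for i in range(3)]
rho = sp.Function('rho')(t, x, y, z)

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def curl(V):
    return [sp.diff(V[(i + 2) % 3], X[(i + 1) % 3])
            - sp.diff(V[(i + 1) % 3], X[(i + 2) % 3]) for i in range(3)]

# Time derivative of the Gauss constraint C = div E - rho, with dE/dt
# replaced by the evolution equation dE/dt = curl B - J:
dC_dt = div(curl(B)) - div(J) - sp.diff(rho, t)
# Impose charge conservation d rho/dt = -div J:
dC_dt = dC_dt.subs(sp.diff(rho, t), -div(J))
print(sp.simplify(dC_dt))  # -> 0: the constraint is preserved
```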
The other approach, which is more elegant, takes advantage of the Bianchi identities. We call this the Frittelli method \cite{Frit97} (see also \cite{AlcuBook}).
However, the equations derived in \cite{Frit97} were not cast in pure 3+1
language. Here and throughout this paper by ``pure 3+1 language'' we mean that a given equation is written solely
in terms of scalars and purely spatial objects and their derivatives. Yoneda and Shinkai \cite{YonShin2,2004GReGr..36.1931S} have derived
the Arnowitt--Deser--Misner (ADM) constraint propagation equations in pure 3+1 language, but they did not indicate how they arrived at their result.
In this work we employ the Frittelli approach to derive the constraint propagation equations of $f(R)$ gravity
and cast the resulting equations in pure 3+1 language.
We show that the mathematical form of the constraint propagation equations
is the same as that of the standard ADM formulation of GR. We also demonstrate that this result holds true
both in the Jordan and the Einstein frames of metric $f(R)$ gravity, as well as for the BD-equivalent version of metric $f(R)$ gravity.
Finally, we compare our equations with published results of the constraint propagation equations
derived using the 3+1 approach and show that the expressions obtained via both approaches agree.
While none of our results are surprising, they serve to prove that $f(R)$ gravity is self-consistent. Moreover, it is revealing
to demonstrate how previous results from GR can be extended to alternative theories of gravity and the consistency between
alternative approaches. Finally, obtaining the extended constraint propagation equations in pure 3+1 form
may prove useful for performing 3+1 numerical simulations, where constraint preservation can be used as a check on the integration.
This paper is organized as follows. In \Sref{fofRequations} we review the field equations of generic metric $f(R)$ models.
In \Sref{con_pres} we provide a simple pedagogical argument (see also \cite{WeinbergGR,BSBook}) to demonstrate the basic idea of
constraint preservation in the context of GR. In \Sref{3p1_intro} we review the 3+1 decomposition of the BD-equivalent metric $f(R)$ equations.
In \Sref{General_Con_prop} we employ the Frittelli method and use the results of \Sref{3p1_intro} to derive the 3+1 metric $f(R)$
constraint propagation equations. In \Sref{3p1_lang_con_prog}
we cast our generalized evolution equations of the constraints in pure 3+1 language. In \Sref{jordan_Einstein} we argue
that the 3+1 constraint propagation equations of $f(R)$ gravity in both the Jordan and the Einstein frames can be cast in the same
form as that in the 3+1 BD-equivalent version of $f(R)$ theories. Finally, we summarize our work in \Sref{summary}.
\section{$f(R)$ field equations}
\label{fofRequations}
As in GR, the fundamental quantity in $f(R)$ gravity is the spacetime metric tensor $g_{\alpha\beta}$
\labeq{metric1}{
ds^2 = g_{\alpha\beta}dx^{\alpha}dx^{\beta},
}
where $ds$ is the line element, and $x^{\alpha}$ denote the spacetime coordinates.
Here and throughout this paper
Greek indices run from $0$ to $3$, while Latin indices run from $1$ to $3$.
The goal of the theory is to determine the metric given a mass-energy distribution.
Because of the existence of an additional scalar degree of freedom in the gravitational field sector,
it is possible to formulate the field equations
of $f(R)$ theory in many ways, depending on the amount of mixing between these two fields.
We will discuss three such formulations: the Jordan frame,
the Einstein frame, and the BD-equivalent formulation.
The Jordan frame and Einstein frame formulations have different metrics as dynamical variables.
The two metric tensors are related via a conformal transformation
\labeq{conf_transf_gen}{
\tilde g_{\mu\nu} = \Omega^2 g_{\mu\nu}
}
where $\Omega$ is the conformal factor, and $g_{\mu\nu}$ here denotes the metric in the Jordan frame.
Note that \Eref{conf_transf_gen} is equivalent to a transformation of units~\cite{ConfTransf}.
In this section we review the field equations of $f(R)$ gravity in both the Jordan and Einstein frames, as well
as those of the BD-equivalent form of $f(R)$ gravity\footnote{For various Hamiltonian formulations of $f(R)$ gravity see \cite{Deruelle2009}.}.
\subsection{Jordan Frame}
The action \eref{fofRaction} is called the Jordan frame action.
An action is said to be in the Jordan frame if the dynamical metric tensor in the action is the metric whose geodesics particles follow, i.e., the
physical metric. The Jordan frame is the one in which the definition of the matter stress-energy tensor is
\labeq{1}{
T_{\mu\nu}^{(m)} = -\frac{2}{\sqrt{-g}}\frac{\delta S_m}{\delta g^{\mu\nu}},
}
where $\delta S_m/\delta g^{\mu\nu}$ is the functional derivative of $S_m$ with respect to $g^{\mu\nu}$.
For example, it is in this frame that the stress-energy tensor of a perfect fluid has the form
\labeq{2}{
T_{\mu\nu}^{(m)} = (\rho+ P)u_{\mu}u_{\nu} + P g_{\mu\nu},
}
where $\rho$ is the total energy density of the fluid, $P$ the fluid pressure, and $u^\mu$ the fluid four velocity.
Varying the action \eref{fofRaction} with respect to the metric
yields the $f(R)$ field equations \cite{f_of_R_review,Sotiriou} in
the Jordan frame
\labeq{fofREOM}{
\Sigma_{\mu\nu}=8\pi T_{\mu\nu}^{(m)},
}
where $T_{\mu\nu}^{(m)}$ is the matter stress-energy tensor and
\labeq{Sigma}{
\Sigma_{\mu\nu}=FR_{\mu\nu}-\frac{1}{2} f g_{\mu\nu}-\nabla_\mu\nabla_\nu F+g_{\mu\nu}\Box F,
}
and where $F = f,_R$. Note that for brevity we have dropped the argument of both $f(R)$ and $F(R)$.
Clearly GR is recovered for $f = R - 2\Lambda$, in which case Equations \eref{fofREOM} and \eref{Sigma} yield
\labeq{3}{
\Sigma_{\mu\nu} = G_{\mu\nu}+\Lambda g_{\mu\nu} = 8\pi T_{\mu\nu}^{(m)},
}
where $G_{\mu\nu}$ is the Einstein tensor.
\Eref{fofREOM} is 4th-order due to the term $\nabla_\mu\nabla_\nu F$. However, if we take the trace of \Eref{fofREOM},
we obtain
\labeq{TrSigma}{
3\Box F + FR-2f = 8\pi T^{(m)},
}
where $T^{(m)} = g^{\mu\nu}T_{\mu\nu}^{(m)}$ and
\labeq{4}{
\Box F = \frac{1}{\sqrt{- g}}\partial_\mu(\sqrt{- g} g^{\mu\nu}\partial_\nu F).
}
\Eref{TrSigma} can be used to promote $F(R)$ into an effective dynamical scalar
degree of freedom (often referred to as ``scalaron''), thus recasting the theory in 2nd-order form.
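As a sanity check of ours (not in the original text), the GR limit of the trace equation can be verified symbolically: for $f = R - 2\Lambda$ one has $F=1$ and $\Box F = 0$, so Eq.~\eqref{TrSigma} reduces to the familiar relation $-R + 4\Lambda = 8\pi T^{(m)}$.

```python
import sympy as sp

R, Lam = sp.symbols('R Lambda')

f = R - 2 * Lam       # the GR limit f(R) = R - 2*Lambda
F = sp.diff(f, R)     # F = 1, hence Box F = 0
boxF = sp.Integer(0)

# Left-hand side of the trace equation 3*Box F + F*R - 2*f:
trace_lhs = sp.expand(3 * boxF + F * R - 2 * f)
print(trace_lhs)      # -> 4*Lambda - R, i.e. R = 4*Lambda - 8*pi*T in GR
```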
\Eref{fofREOM} can also be written in the following form \cite{Starobinsky2}
\labeq{fofREOM2}{
G_{\mu\nu}= 8\pi (T_{\mu\nu}^{(m)}+T_{\mu\nu}^{(f)}),
}
where $T_{\mu\nu}^{(f)}$ can be thought of as a ``dark energy'' stress-energy tensor, given by
\begin{eqnarray}
8\pi T_{\mu\nu}^{(f)}= & &\ \frac{1}{2} g_{\mu\nu}(f-R)+\nabla_\mu\nabla_\nu F \nonumber \\
& & -g_{\mu\nu}\Box F+ (1-F)R_{\mu\nu}.
\end{eqnarray}
This form of the field equations of the theory is interesting because the Bianchi identities
$\nabla^\mu G_{\mu\nu}=0$ together with $\nabla^\mu T_{\mu\nu}^{(m)} = 0$, imply that
\labeq{5}{
\nabla^\mu T_{\mu\nu}^{(f)} = 0,
}
i.e., the dark energy tensor $T_{\mu\nu}^{(f)} $ is conserved.
\subsection{Einstein frame}
To obtain the Einstein frame action of $f(R)$ gravity, i.e., an action linear in a Ricci scalar $\tilde R$ associated with
a metric $\tilde g_{\mu\nu}$, all we have to do is perform a conformal transformation on the metric
\labeq{6}{
\tilde g_{\mu\nu} \equiv F g_{\mu\nu},
}
i.e., the conformal factor $\Omega$ in \Eref{conf_transf_gen} is $\Omega^2=F$.
For the transformation to be physical $F$ must satisfy $F>0$. Note that this condition is in accord with
the first condition for cosmological viability of $f(R)$ gravity listed in \Sref{Introduction}.
If we introduce a new field $\phi$ such that
\labeq{7}{
\phi \equiv \sqrt{\frac{3}{16\pi}}\ln F,
}
then the $f(R)$ Jordan action transforms to \cite{f_of_R_review}
\labeq{fofREinstein}{
S_E = \frac{1}{16\pi}\int d^4x\sqrt{-\tilde g}\tilde R+S_\phi+S_m(F^{-1}(\phi)\tilde g_{\mu\nu},\psi_m),
}
where
\labeq{8}{
S_\phi =\int d^4x\sqrt{-\tilde g}\big[-\frac{1}{2} \tilde g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi-V(\phi)\big]
}
is the scalar field term in the action, and where the scalar field potential is defined as
\labeq{9}{
V(\phi) = \frac{FR-f}{16\pi F^2}.
}
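For concreteness, a worked example of ours (the quadratic model appears here purely for illustration): for $f(R) = R + \alpha R^2$ with $\alpha > 0$ one has $F = 1 + 2\alpha R$, hence $R = (F-1)/(2\alpha)$ and $FR - f = \alpha R^2$, so the potential takes the closed form

```latex
V(\phi) = \frac{\alpha R^2}{16\pi F^2}
        = \frac{(F-1)^2}{64\pi\alpha F^2}
        = \frac{1}{64\pi\alpha}
          \left(1 - e^{-\sqrt{16\pi/3}\,\phi}\right)^{2},
```

where the last step uses $F = e^{\sqrt{16\pi/3}\,\phi}$; the potential is bounded and flattens to a plateau at large $\phi$.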
The dynamical metric tensor in the Einstein frame is not the physical ($g_{\mu\nu}$) but the conformal one ($\tilde g_{\mu\nu}$). However,
the matter still follows the geodesics of the physical (Jordan) metric.
Variation of the matter action with respect to $\tilde g_{\mu\nu}$ yields
\labeq{10}{
\tilde T_{\mu\nu}^{(m)} = -\frac{2}{\sqrt{-\tilde g}}\frac{\delta S_m}{\delta \tilde g^{\mu\nu}} = \frac{1}{F}T_{\mu\nu}^{(m)},
}
which is no longer independent of the scalar field $\phi$.
Variation of the action \eref{fofREinstein} with respect to $\phi$ yields the scalar field equation
\labeq{phi_einstein}{
\tilde \Box \phi - V,_\phi - \sqrt{\frac{4\pi}{3}}\tilde T^{(m)} = 0,
}
where $\tilde T^{(m)} = \tilde g^{\mu\nu}\tilde T_{\mu\nu}^{(m)}$ and
\labeq{11}{
\tilde \Box \phi = \frac{1}{\sqrt{-\tilde g}}\partial_\mu(\sqrt{-\tilde g}\tilde g^{\mu\nu}\partial_\nu\phi).
}
\Eref{phi_einstein} implies that the scalar field is directly coupled to matter.
Finally, variation of the action \eref{fofREinstein} with respect to $\tilde g^{\mu\nu}$ yields
\labeq{fofREinsteinEOM}{
\tilde G_{\mu\nu} = 8\pi (\tilde T_{\mu\nu}^{(m)}+\tilde T_{\mu\nu}^{(\phi)}),
}
where the scalar field stress-energy tensor is
\labeq{12}{
\tilde T_{\mu\nu}^{(\phi)}
= \partial_\mu\phi\partial_\nu\phi-\tilde g_{\mu\nu}\bigg[\frac{1}{2}\tilde g^{\alpha\beta}\partial_\alpha\phi\partial_\beta\phi+V(\phi)\bigg].
}
Note that in the Einstein frame $\tilde\nabla^\mu \tilde T_{\mu\nu}^{(m)} \neq 0$; instead we have
\labeq{13}{
\tilde\nabla^\mu \tilde G_{\mu\nu} = \tilde\nabla^\mu (\tilde T_{\mu\nu}^{(m)}+\tilde T_{\mu\nu}^{(\phi)})= 0,
}
where $\tilde\nabla_\mu$ is the covariant derivative associated with $\tilde g_{\mu\nu}$. It can also be shown that \cite{f_of_R_review}
\labeq{14}{
\tilde\nabla^\mu \tilde T_{\mu\nu}^{(m)}= -\sqrt{\frac{4\pi}{3}}\tilde T^{(m)} \tilde\nabla_\nu \phi, \qquad
\tilde\nabla^\mu \tilde T_{\mu\nu}^{(\phi)}= \sqrt{\frac{4\pi}{3}}\tilde T^{(m)} \tilde\nabla_\nu \phi.
}
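The identity underlying these divergence relations, $\nabla^\mu T^{(\phi)}_{\mu\nu} = (\Box\phi - V_{,\phi})\nabla_\nu\phi$ for a minimally coupled scalar field, can be checked in the simplest setting with sympy; the 2D Minkowski background and the quadratic potential below are illustrative assumptions (the curved-space bookkeeping proceeds term by term in the same way).

```python
import sympy as sp

t, x, m = sp.symbols('t x m')
phi = sp.Function('phi')(t, x)

# 2D Minkowski toy check (an illustrative assumption, not the curved-space
# computation): eta = diag(-1, 1), and V(phi) = m^2 phi^2 / 2 so V' = m^2 phi
eta = sp.diag(-1, 1)
eta_inv = eta.inv()
X = [t, x]
dphi = [sp.diff(phi, v) for v in X]
V, Vp = m**2*phi**2/2, m**2*phi

kin = sum(eta_inv[i, j]*dphi[i]*dphi[j] for i in range(2) for j in range(2))
# T_{mu nu} = d_mu phi d_nu phi - eta_{mu nu} [kin/2 + V], cf. Eq. (12)
T = sp.Matrix(2, 2, lambda i, j: dphi[i]*dphi[j] - eta[i, j]*(kin/2 + V))

box_phi = sum(eta_inv[i, j]*sp.diff(phi, X[i], X[j])
              for i in range(2) for j in range(2))

# residual of  partial^mu T_{mu nu} - (Box phi - V') d_nu phi, which vanishes
residual = [sp.expand(sum(eta_inv[i, j]*sp.diff(T[j, n], X[i])
                          for i in range(2) for j in range(2))
                      - (box_phi - Vp)*dphi[n]) for n in range(2)]
print(residual)  # -> [0, 0]
```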
\subsection{Equivalence with Brans-Dicke gravity}
Another way to cast $f(R)$ gravity into second-order form is to express the theory as a BD theory.
To show that $f(R)$ gravity is equivalent to BD gravity with a potential, the following action
was considered in \cite{Chiba}:
\labeq{BDequiv}{
S =\frac{1}{16\pi } \int \sqrt{-g}d^4 x\big[f(\chi)+f,_\chi(\chi)(R-\chi)\big]+S_m.
}
Varying the action with respect to $\chi$ yields
\labeq{15}{
f,_{\chi\chi}(\chi)(R-\chi) = 0.
}
Thus, if $f,_{\chi\chi}(\chi)\neq 0$ (in agreement with condition 3 in \Sref{Introduction}), then
\labeq{BD1}{
\chi = R.
}
Hence, \Eref{BDequiv}
recovers the Jordan frame $f(R)$ action \eref{fofRaction}. If we now let $\phi = f,_{\chi}(\chi)$, \Eref{BDequiv} can be written as follows
\labeq{BDequiv2}{
S =\frac{1}{16\pi} \int \sqrt{-g}d^4 x\bigg[\phi R - V(\phi)\bigg]+S_m,
}
where the potential is given by
\labeq{16}{
V(\phi) = \chi(\phi)\phi-f(\chi(\phi)).
}
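The Legendre-transform structure of this potential (in particular $V_{,\phi}=\chi$, so that on shell $V_{,\phi}=R$) can be verified with a short sympy sketch; the quadratic model below is an illustrative assumption, not part of the text.

```python
import sympy as sp

chi, a, phi = sp.symbols('chi a phi', positive=True)

# Illustrative toy model (an assumption): f(chi) = chi + a chi^2
f = chi + a*chi**2
phi_of_chi = sp.diff(f, chi)                   # phi = f_{,chi} = 1 + 2 a chi
chi_of_phi = sp.solve(sp.Eq(phi, phi_of_chi), chi)[0]

# Potential of Eq. (16): V(phi) = chi(phi) phi - f(chi(phi))
V = sp.simplify(chi_of_phi*phi - f.subs(chi, chi_of_phi))
print(sp.factor(V))

# Legendre property: dV/dphi = chi(phi), i.e. V_{,phi} = R on shell
assert sp.simplify(sp.diff(V, phi) - chi_of_phi) == 0
```

For this model the potential comes out as $(\phi-1)^2/(4a)$, with its minimum at $\phi=1$, i.e., at $F=f_{,R}=1$, the GR point.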
Action \eref{BDequiv2} is the same as the original BD action with a potential and
without the kinetic term $(\omega/2) g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi$,
i.e., the BD parameter is $\omega = 0$. Varying the action \eref{BDequiv2} with respect to
the metric yields the BD-equivalent $f(R)$ field equations \cite{Sotiriou}
\labeq{BD2}{
G_{\mu\nu} = \frac{8\pi }{\phi} (T_{\mu\nu}^{(m)}+T_{\mu\nu}^{(\phi)}),
}
where
\labeq{TmnBDphi}{
8\pi T_{\mu\nu}^{(\phi)} = \nabla_\mu\nabla_\nu\phi-g_{\mu\nu}\big(\Box\phi+\frac{1}{2} V(\phi)\big)
}
Taking the trace of \Eref{BD2} and using \Eref{BD1} to eliminate $R$, we obtain
\labeq{BD3}{
3\Box \phi + 2V(\phi)-\phi\frac{dV}{d\phi}=8\pi T.
}
Equations \eref{BD2} and \eref{BD3} are the BD-equivalent $f(R)$ field equations.
\section{Constraint Preservation in a spacetime context}
\label{con_pres}
In this section we review the concept
of constraint preservation using the standard Einstein equations in 4D covariant form, i.e., without invoking the machinery
of the 3+1 decomposition of spacetime. The reason for doing so is that so far we have written the most popular
representations of the metric $f(R)$ field equations in a GR-like form
\labeq{GR}{
G_{\mu\nu} = 8\pi \bar T_{\mu\nu},
}
where $\bar T_{\mu\nu}$ is an ``effective'' stress-energy tensor that is conserved, i.e., $\nabla_\mu \bar T^{\mu\nu} =0$.
Thus, it is instructive to first consider the Einstein equations in their familiar 4D covariant form.
For the Einstein equations with a cosmological constant we have $\bar T_{\mu\nu} = T_{\mu\nu}^{(m)}-\Lambda g_{\mu\nu}/8\pi$.
Since the Einstein equations are second-order partial differential equations, the evolution of the 4-metric $g_{\alpha\beta}$ in time can be determined
by specifying $g_{\alpha\beta}$ and $\partial_t g_{\alpha\beta}$, everywhere on a three-dimensional spacelike hypersurface that
corresponds to a given initial time $t$. \Eref{GR} can provide us with expressions for $\partial_t^2 g_{\alpha\beta}$, which we can use to
evolve the metric in time. There are 10 metric components and there are 10 field equations in~\eref{GR}. Hence, it appears that we have
the exact number of equations for the 10 degrees of freedom of the metric.
However, the Bianchi identities $\nabla_\beta G^{\alpha\beta} = 0$, give
\labeq{Bianchi_primit}{
\partial_t G^{\alpha 0} = -\partial_i G^{\alpha i} - G^{\beta\mu}{}^{(4)}\Gamma^{\alpha}{}_{\beta\mu}- G^{\alpha\beta}{}^{(4)}\Gamma^{\mu}{}_{\beta\mu},
}
where we set $\partial_t\equiv\partial_0$ and where ${}^{(4)}\Gamma^{\alpha}{}_{\beta\mu}$ are the Christoffel symbols associated with $g_{\alpha\beta}$.
Since no term on the right-hand side of \Eref{Bianchi_primit} contains third time derivatives or higher, the four quantities $G^{\alpha 0}$ cannot
contain second time derivatives. Thus, the four equations
\labeq{GR2}{
G_{\mu0} = 8\pi\bar T_{\mu0}
}
do not provide any information on the dynamical evolution of the metric. They are instead a set of constraints
that $g_{\alpha\beta}$ and $\partial_t g_{\alpha\beta}$
have to satisfy. The only truly dynamical equations are the six remaining equations
\labeq{GR3}{
G_{ij} = 8\pi\bar T_{ij}.
}
The apparent mismatch between the number of metric components and the number of evolution equations is immediately resolved once we invoke the coordinate
freedom of GR. The theory is four-dimensional, and hence we can always choose four conditions to specify a coordinate system. For example, we can choose
the four $g_{0\beta}$ components and assign them certain values, or demand that they satisfy a given set of four partial differential equations. This way we are left
with six independent metric components, for which we have the exact number of evolution equations \eref{GR3}.
However, solving \Eref{GR3} does not guarantee that the full set of the Einstein equations \eref{GR} will be satisfied. For that to be true,
\Eref{GR2} has to be satisfied for all times. In other words, if one solves \Eref{GR3} starting with initial data that satisfy \Eref{GR2},
one has to prove that the constraints are preserved.
To demonstrate that this is indeed the case we make use of the Bianchi identities in the following form:
\labeq{Bianchi_primit2}{
\nabla_\beta {\mathcal E}^{\alpha\beta} = 0,
}
or
\labeq{Bianchi_primit3}{
\partial_t \mathcal{E}^{\alpha 0} = -\partial_i \mathcal{E}^{\alpha i} - \mathcal{E}^{\beta\mu}{}^{(4)}\Gamma^{\alpha}{}_{\beta\mu}- \mathcal{E}^{\alpha\beta}{}^{(4)}\Gamma^{\mu}{}_{\beta\mu},
}
where
\labeq{17}{
{\mathcal E}^{\alpha\beta} \equiv G^{\alpha\beta} - 8\pi \bar T^{\alpha\beta}.
}
If we let $C = {\mathcal E}^{00}$ and $C^i={\mathcal E}^{i0}$, \Eref{Bianchi_primit3} can be rewritten as
\begin{eqnarray} \label{spacetime_constraint_prop}
\partial_t C =& & -\partial_i C^i - C \big(2{}^{(4)}\Gamma^{0}{}_{00}+{}^{(4)}\Gamma^{i}{}_{0i}\big)\nonumber \\
& & -C^i\big(3{}^{(4)}\Gamma^{0}{}_{i0}+{}^{(4)}\Gamma^{j}{}_{ij}\big) \\
\partial_t C^j = & & -C{}^{(4)}\Gamma^{j}{}_{00}-2C^i{}^{(4)}\Gamma^{j}{}_{i0}-C^j{}^{(4)}\Gamma^{\beta}{}_{0\beta},
\end{eqnarray}
where we have used \Eref{GR3}, ${\mathcal E}^{ij}=0$, to obtain the result. Thus, if the constraints are initially satisfied, then $C=C^i=0$ initially
and from \Eref{spacetime_constraint_prop} the time derivative of the constraints will be zero and hence the constraints
will remain zero for all times. Since this conclusion resulted from setting ${\mathcal E}^{ij}=0$,
the previous statement is equivalent to saying that the evolution equations preserve the constraints,
a result that is well-known.
\section{3+1 Decomposition of $f(R)$ gravity}
\label{3p1_intro}
Well-posedness of the Cauchy problem in metric $f(R)$ gravity has been demonstrated in \cite{fofR_well_posed} using the BD-equivalent $f(R)$ formulation.
In this section we focus on the BD version of $f(R)$ gravity and review the salient features of its 3+1 decomposition that will be useful
in our proof of constraint preservation.
The form of the field equations of the theory is that of \Eref{GR}, where
\labeq{18}{
\bar T_{\mu\nu} = \frac{1}{\phi} (T_{\mu\nu}^{(m)}+T_{\mu\nu}^{(\phi)}).
}
The 3+1 decomposition splits spacetime into space and time. To do this, one assumes that the
four-dimensional spacetime manifold can be foliated by a one-parameter family of nonintersecting
spacelike hypersurfaces, the parameter of this family being taken as the coordinate time.
The spacetime metric is then rewritten as \cite{ADM3p1}
\begin{equation}
ds^2 = -\alpha^2 dt^2 + \gamma_{ij} (dx^i + \beta^i dt) (dx^j + \beta^j dt),
\end{equation}
where $\alpha$ is the lapse function, $\beta^i$ is the shift vector, and $\gamma_{ij}$ is the 3-metric on the spacelike
hypersurfaces, induced by $g_{\alpha\beta}$. The lapse function and the shift vector are gauge quantities; they
dictate how to build the coordinate system and can be freely specified.
The relation between $\gamma_{ij}$ and $g_{\alpha\beta}$ is
\labeq{projection_op}{
\gamma^{\alpha}{}_{\beta} = \delta^{\alpha}{}_{\beta} + n^\alpha n_{\beta},
}
where $\gamma^{\alpha}{}_{\beta}= g^{\alpha\mu}\gamma_{\mu\beta}$, $\delta^{\alpha}{}_{\beta}$ is the Kronecker delta, and
$n^{\alpha}$ is the future directed timelike unit vector normal to the $t = \rm const.$ hypersurfaces.
The tensor $\gamma^{\alpha}{}_{\beta}$ is the operator that projects tensors onto spacelike hypersurfaces.
The field equations can then be decomposed into a set of evolution equations and a set of constraint equations
by using $\gamma^{\alpha}{}_{\beta}$ and $n^{\alpha}$.
Projecting \Eref{GR} twice with the projection operator yields the evolution equations
\labeq{evolution_prim}{
E_{\mu\nu} \equiv (G_{\alpha\beta} - 8\pi \bar T_{\alpha\beta}) \gamma^\alpha{}_\mu \gamma^\beta{}_\nu =
G_{\alpha\beta} \gamma^\alpha{}_\mu \gamma^\beta{}_\nu - 8\pi \bar S_{\mu\nu}=0,
}
where
\labeq{19}{
\bar S_{\mu\nu} \equiv \bar T_{\alpha\beta}\gamma^\alpha{}_\mu \gamma^\beta{}_\nu = S_{\mu\nu} + S_{\mu\nu}^{(\phi)},
}
and where
\labeq{Smn}{
S_{\mu\nu} \equiv T_{\alpha\beta}^{(m)}\gamma^\alpha{}_\mu \gamma^\beta{}_\nu, \qquad
S_{\mu\nu}^{(\phi)} \equiv T_{\alpha\beta}^{(\phi)}\gamma^\alpha{}_\mu \gamma^\beta{}_\nu.
}
Using \Eref{TmnBDphi}, we can write $S_{\mu\nu}^{(\phi)}$ in \Eref{Smn} as follows
\labeq{20}{
S_{\mu\nu}^{(\phi)} = \frac{1}{8\pi}[D_\mu\nabla_\nu \phi - \gamma_{\mu\nu}(\Box \phi +\frac{1}{2} V(\phi))],
}
where $D_\mu$ is the covariant derivative associated with $\gamma_{\mu\nu}$.
Contracting \Eref{GR} twice with $n^{\alpha}$ yields the Hamiltonian constraint
\labeq{Hamiltonian_prim}{
H \equiv (G_{\alpha\beta} - 8\pi \bar T_{\alpha\beta}) n^\alpha n^\beta =
G_{\alpha\beta}n^\alpha n^\beta - 8\pi \bar \rho = 0,
}
where
\labeq{21}{
\bar \rho \equiv \bar T_{\alpha\beta}n^\alpha n^\beta = \rho + \rho^{(\phi)},
}
and where
\labeq{22}{
\rho \equiv T_{\alpha\beta}^{(m)}n^\alpha n^\beta, \qquad \rho^{(\phi)} \equiv T_{\alpha\beta}^{(\phi)}n^\alpha n^\beta.
}
Using \Eref{TmnBDphi} we also obtain
\labeq{23}{
\rho^{(\phi)} = \frac{1}{8\pi} [n^{\mu}n^{\nu}\nabla_\mu\nabla_\nu\phi + (\Box \phi +\frac{1}{2} V(\phi))].
}
Contracting \Eref{GR} once with $n^{\alpha}$ and projecting once with $\gamma^\alpha{}_\beta$ yields the momentum constraints
\begin{eqnarray}
\label{Momentum_prim}
M_\mu & \equiv & -(G_{\alpha\beta} - 8\pi \bar T_{\alpha\beta}) n^\alpha \gamma^\beta{}_\mu \nonumber \\
& = & -G_{\alpha\beta}n^\alpha \gamma^\beta{}_\mu - 8\pi j_{\mu} - 8\pi j_{\mu}^{(\phi)}=0 ,
\end{eqnarray}
where
\labeq{24}{
j_\mu \equiv - T_{\alpha\beta}^{(m)}n^\alpha \gamma^\beta{}_\mu, \qquad j_\mu^{(\phi)} \equiv - T_{\alpha\beta}^{(\phi)}n^\alpha \gamma^\beta{}_\mu,
}
and where from \Eref{TmnBDphi} we find
\labeq{25}{
j_\mu^{(\phi)} = -\frac{1}{8\pi}n^\alpha\gamma^\beta{}_\mu \nabla_\alpha\nabla_\beta\phi.
}
We can now write \Eref{GR} as a linear combination of the evolution and the constraint equations.
\begin{eqnarray}
G^{\alpha\beta} - 8\pi \bar T^{\alpha\beta} &=& (G^{\mu\nu} - 8\pi \bar T^{\mu\nu}) \delta^\alpha{}_\mu \delta^\beta{}_\nu \nonumber \\
&=& (G^{\mu\nu} - 8\pi \bar T^{\mu\nu})( \gamma^\alpha{}_\mu - n^\alpha n_\mu)( \gamma^\beta{}_\nu - n^\beta n_\nu)
\label{Ein_decomp0}
\end{eqnarray}
where we used \Eref{projection_op} in the second line to replace the Kronecker deltas. By use of Equations \eref{evolution_prim}, \eref{Hamiltonian_prim} and \eref{Momentum_prim}, \Eref{Ein_decomp0} becomes
\labeq{Ein}{
G^{\alpha\beta} - 8\pi \bar T^{\alpha\beta} = E^{\alpha\beta} + 2 n^{(\alpha} M^{\beta)} + H n^\alpha n^\beta.
}
This last equation was derived by Frittelli for GR (see Eq. (9) in \cite{Frit97}). Here we have shown that this equation is valid
in $f(R)$ gravity, too, provided that appropriate definitions of $H$ and $M^i$ are given.
As was shown in \cite{Frit97}, setting $E^{\alpha\beta} = 0$ yields the evolution equations of the original ADM formulation
\cite{ADM3p1}, whereas setting $E^{\alpha\beta} = \gamma^{\alpha\beta}H$ yields the evolution equations of the standard ADM formulation
\cite{ADMbyYork,AlcuBook,BSBook}. Following the parametrization of \cite{Frit97} we set $E^{\alpha\beta} = \lambda\gamma^{\alpha\beta}H$,
so that in the ADM language $\lambda=0$ corresponds to the
original ADM formulation, while $\lambda=1$ to the standard ADM formulation, except that here we deal with $f(R)$ 3+1 formulations.
It is now evident from \Eref{Ein} that if $M^\alpha = H = 0$ and $E^{\alpha\beta} = \lambda\gamma^{\alpha\beta}H$,
then the $f(R)$ equations are satisfied.
By introducing the extrinsic curvature $K_{ij}$
\labeq{Kij}{
K_{ij} = -\frac{1}{2}\pounds_n \gamma_{ij},
}
where $\pounds_n$ stands for the Lie derivative along the timelike unit vector $n^\alpha$,
using the Gauss, Codazzi and Ricci equations (see e.g. Equations (2.68), (2.73), (2.82) in \cite{BSBook}), and adopting the usual coordinate basis where
\labeq{nmu}{
n^\mu = (\alpha^{-1}, - \alpha^{-1} \beta^i),
}
one can derive the evolution and constraint equations in 3+1 form, which (for $\lambda=0$) are presented in \cite{fofR_well_posed} and we do not
repeat them here.
A subtlety that must be addressed for our purpose, and which is pointed out in \cite{Salgado06,fofR_well_posed},
is that to remove the time derivatives of the scalar field
$\phi$ from the sources $S_{\mu\nu}^{(\phi)},\rho^{(\phi)},j_{\mu}^{(\phi)}$ one introduces the gradients of $\phi$ as new dynamical variables
\labeq{Pi}{
\Pi \equiv \pounds_n\phi= n^\mu \nabla_\mu \phi,
}
\labeq{Qmu}{
Q_\mu \equiv D_\mu \phi.
}
The $\Box \phi$ operator in the sources $S_{\mu\nu}^{(\phi)},\rho^{(\phi)},j_{\mu}^{(\phi)}$ can be removed by use of \Eref{BD3}.
Furthermore, \Eref{BD3} in combination with \Eref{Pi}, which can be written as
\labeq{Pievol}{
\alpha\Pi = \partial_t \phi - \beta^i Q_i,
}
can be used to derive the evolution equation for $\Pi$. Eventually, one finds \cite{Salgado06}
\labeq{26}{
\pounds_{n} \Pi = \Pi K + Q^iD_i(\ln\alpha)+D_i Q^i-\Box \phi,
}
where $K=\gamma^{ij}K_{ij}$.
To promote $Q_i$ to a dynamical variable we take a time derivative of $Q_i$ and, using \Eref{Pievol}, obtain
\labeq{Qievol}{
\partial_t Q_i = \pounds_\beta Q_i + D_i (\alpha\Pi),
}
where
\labeq{27}{
\pounds_\beta Q_i = \beta^s\partial_s Q_i + Q_s \partial_i \beta^s.
}
The introduction of the new variables $Q_i$ gives rise to an extra constraint, which the evolution equations have to satisfy
\labeq{Cmu}{
C_{i} \equiv Q_{i} - D_i \phi = 0.
}
In addition to this, the ordering constraint
\labeq{Cmunu}{
C_{ij} \equiv D_i Q_j - D_j Q_i = 0
}
has to be satisfied, too.
Thus, constraint preservation means that the evolution equations must preserve all the constraints of the 3+1 decomposition, i.e.,
Equations \eref{Hamiltonian_prim},~\eref{Momentum_prim},~\eref{Cmu}, and~\eref{Cmunu}.
\section{Constraint Propagation Equations of 3+1 $f(R)$ gravity}
\label{General_Con_prop}
The backbone of the Frittelli approach is to express the field equations in the form of \Eref{Ein} and substitute them into the Bianchi
identities in order to derive the evolution equations for the constraints, assuming the evolution equations $E^{\mu\nu}=\lambda \gamma^{\mu\nu}H$ are satisfied.
So far we have extended the Frittelli approach
to general metric $f(R)$ gravity. Since the form of Equations \eref{Ein} is the same as in \cite{Frit97},
the derivation of the 3+1 BD-equivalent $f(R)$ constraint propagation equations
is precisely the same as that in \cite{Frit97}, which is valid for GR, and to which we refer the interested reader for more details.
Here we only sketch the derivation and write the result.
The Bianchi identities are
\labeq{Bianchi}{
\nabla_\mu (G^{\mu\nu}-8\pi \bar T^{\mu\nu}) = 0,
}
or, equivalently, after substituting \Eref{Ein} in \Eref{Bianchi}
\labeq{Bianchi2}{
\nabla_\mu (E^{\mu\nu} + 2 n^{(\mu} M^{\nu)} + H n^\mu n^\nu) = 0.
}
To find the evolution of the Hamiltonian constraint we contract \Eref{Bianchi2} with $n^\alpha$
and after some algebra we find
\begin{eqnarray} \label{Ham_evol_Frit}
0 & = & -E^{\mu\nu} D_\nu n_\mu - 2 n^\nu M^\mu \nabla_\nu n_\mu - D_\nu M^\nu \nonumber \\
& & -\ n^\nu \nabla_\nu H - H D_\nu n^\nu.
\end{eqnarray}
To find the evolution of the Momentum constraint we project \Eref{Bianchi2} with $\gamma^\alpha{}_\beta$
and after some algebra we find \footnote{We note here that Eq. (11) in \cite{Frit97}, differs from our \Eref{Mom_evol_Frit}
by a factor of 2 in the term $ H n^\mu \nabla_\mu n^\alpha$.
We believe that this discrepancy is simply due to a typographical error.\label{ftnt1}}
\begin{eqnarray}
0 & = & D_\mu E^{\mu \alpha} + n^\nu E^{\alpha\mu} \nabla_\nu n_\mu + n^\mu \nabla_\mu M^\alpha - n^\alpha M^\nu n^\mu \nabla_\mu n_\nu \nonumber \\
& & +\ M^\alpha D_\mu n^\mu + M^\mu D_\mu n^\alpha + H n^\mu \nabla_\mu n^\alpha. \label{Mom_evol_Frit}
\end{eqnarray}
Proof that our equations are correct will be provided below when we cast the constraint propagation equations in pure 3+1
language and compare our result with published results in the literature obtained via the 3+1 approach.
Using the following identities
\labeq{28}{\gamma^{\mu\nu} H D_\mu n_\nu = H D_\mu n^\mu,}
\labeq{29}{n^\mu \gamma^{\alpha\nu} H \nabla_\mu n_\nu = H n^\mu \nabla_\mu n^\alpha,}
and substituting $E^{\mu\nu}=\lambda\gamma^{\mu\nu}H$ in Equations~\eref{Ham_evol_Frit} and~\eref{Mom_evol_Frit}
we find that the evolution of the constraints is given by
\labeq{con_prop_ham}{
n^\mu \nabla_\mu H = - 2 n^\mu M^\nu \nabla_\mu n_\nu - D_\mu M^\mu - (1+\lambda) H D_\mu n^\mu,
}
\begin{eqnarray}\label{con_prop_mom}
n^\mu \nabla_\mu M^\nu = & & -\lambda \gamma^{\mu\nu} D_\mu H + n^\nu M^\alpha n^\beta \nabla_\beta n_\alpha - M^\nu D_\mu n^\mu \nonumber \\
& & - M^\mu D_\mu n^\nu - (1+\lambda) H n^\mu \nabla_\mu n^\nu.
\end{eqnarray}
These last two equations have the same mathematical form (except for a factor of 2; see footnote on page \pageref{ftnt1})
as those derived in \cite{Frit97} that applied to the case of GR, i.e. $f(R)=R$.
Here we have proven that the form of the Hamiltonian and momentum constraint
propagation equations is the same for both vacuum ($T_{\mu\nu}^{(m)} = 0$) and non-vacuum spacetimes ($T_{\mu\nu}^{(m)} \neq 0$),
and that it is independent of the $f(R)$ function, because we have absorbed all terms that depend on these quantities
in the definitions of the Hamiltonian and momentum constraints
(see Equations~\eref{Hamiltonian_prim},~\eref{Momentum_prim}).
We deal with the evolution of constraints \eref{Cmu} and \eref{Cmunu}, in the following section.
\section{$f(R)$ Constraint propagation equations in pure 3+1 language}
\label{3p1_lang_con_prog}
Note that Equations~\eref{con_prop_ham} and~\eref{con_prop_mom} involve both spacetime and purely spatial
objects. This is not a form that easily yields a comparison between the constraint propagation equations obtained
in the Frittelli approach and those obtained in the 3+1 approach. Nor is it convenient for
integration in a 3+1 numerical implementation that could serve as a check of the numerical integration of the evolution equations of the
dynamical variables.
For this reason, we now cast these equations
in pure 3+1 language. To our knowledge such a calculation has never been published before, hence it is instructive to include it
here.
Alternative expressions for the extrinsic curvature are (see e.g. Equations~(2.49), (2.52) in \cite{BSBook})
\labeq{ext_curv_2}{D_{(\alpha} n_{\beta)} = - K_{\alpha\beta} \mbox{\ \ and\ \ } K_{\alpha\beta} = -\nabla_\alpha n_\beta - n_\alpha a_\beta, }
where $a_\alpha = n^\beta\nabla_\beta n_\alpha$ is the acceleration of normal observers, also equal to (see Eq.~(2.22) in \cite{BSReview})
\labeq{accel}{
a_\beta = D_\beta \ln\alpha.
}
From \Eref{ext_curv_2} it can be shown that
\labeq{ext_curv_sca}{D_\beta n^\beta = -K \mbox{\ \ and\ \ } D_\beta n^\alpha = - K_\beta{}^\alpha.}
By use of Equations~\eref{ext_curv_2},~\eref{accel} and~\eref{ext_curv_sca}, Equations~\eref{con_prop_ham} and~\eref{con_prop_mom}
can be written as
\begin {eqnarray}
n^\mu \nabla_\mu H &=& - D_\mu M^\mu + (1+ \lambda) H K \nonumber \\
& & - 2 M^\nu D_\nu \ln \alpha \label{Ham_con_new}, \\
n^\mu \nabla_\mu M^\nu &=& -\lambda \gamma^{\mu\nu} D_\mu H + n^\nu M^\mu D_\mu \ln \alpha + M^\nu K \nonumber \\
& & + M^\mu K_\mu{}^\nu - (1+ \lambda) H D^\nu \ln \alpha. \label{Mom_con_new}
\end{eqnarray}
The identities $\nabla_\alpha H = \partial_\alpha H$, $\gamma^{\mu\nu}D_\mu H=\gamma^{\mu\nu}\partial_\mu H$,
$M^{\alpha}D_\alpha \ln \alpha = M^{\alpha}\partial_\alpha \ln \alpha$,
$\nabla_\mu M^\nu = \partial_\mu M^\nu + {}^{(4)}\Gamma^\nu_{\mu\beta} M^\beta$, can be
used to replace the covariant derivatives that occur above.
Furthermore, the timelike unit vector ($n^\mu$) can be replaced by \Eref{nmu}.
Equations~\eref{Ham_con_new}
and~\eref{Mom_con_new} can then be written as
\labeq{Ham_con_new2}{
\partial_t H = \beta^i \partial_i H - 2 M^i \partial_i \alpha - \alpha D_i M^i +(1+ \lambda) \alpha H K,
}
\begin{eqnarray}\label{Mom_con_new2}
\partial_t M^j = & & -\lambda\alpha \gamma^{ij} \partial_i H + \beta^i \partial_i M^j - {}^{(4)}\Gamma^j_{i0} M^i \nonumber \\
& & + {}^{(4)}\Gamma^j_{ik} M^i \beta^k + n^j M^i \partial_i \alpha + \alpha M^j K \nonumber \\
& & + \alpha M^i K_i^j - (1+ \lambda) \gamma^{ij} H \partial_i \alpha,
\end{eqnarray}
where we have focused on the spatial indices of $M^\mu$, since $M^\mu$ is purely spatial.
Using $D_i M^i=\gamma^{ij}\partial_i M_j-\gamma^{ij}\Gamma^{k}{}_{ij}M_k$,
and the expressions for the Lie derivatives of the constraints along $\alpha n^\mu$
\begin{eqnarray}
\pounds_{\alpha n} H &=& \partial_t H - \beta^i \partial_i H \label{eq:old42}, \\
\pounds_{\alpha n} M^j &=& \partial_t M^j - \beta^i \partial_i M^j + M^i \partial_i \beta^j \label{eq:old43}
\end{eqnarray}
we write Equations~\eref{Ham_con_new2} and ~\eref{Mom_con_new2} as
\begin{eqnarray}\label{H_evol3}
\pounds_{\alpha n} H = & & - 2 M^i \partial_i \alpha - \alpha \gamma^{ij}\partial_i M_j \nonumber \\
& & + \alpha\gamma^{ij}\Gamma^{k}{}_{ij}M_k +(1+ \lambda) \alpha H K,
\end{eqnarray}
\begin{eqnarray}
\pounds_{\alpha n} M^j = & & -\lambda\alpha \gamma^{ij} \partial_i H + A^j{}_i M^i + \alpha M^j K \nonumber\\
& & - (1+ \lambda) \gamma^{ij} H \partial_i \alpha+ \gamma^{jk} M^i \partial_i \beta_k \nonumber \\
& & - M^\ell\beta^s\gamma^{jm}\partial_\ell \gamma_{sm}, \label{M_evolb}
\end{eqnarray}
where
\labeq{Aji_mom}{
A^j{}_i\equiv - {}^{(4)}\Gamma^j_{i0} + {}^{(4)}\Gamma^j_{ik} \beta^k -\alpha^{-1}\beta ^j\partial_i\alpha +\alpha K_i^j.
}
We now need to express the Christoffel symbols associated with the spacetime metric $g_{\mu\nu}$, that appear in \Eref{Aji_mom}, in terms
of the 3-metric $\gamma_{ij}$ and the gauge variables. We do this as follows
\begin{eqnarray}\label{Gamma0}
{}^{(4)}\Gamma^{j}{}_{i0} & = & \ \frac{1}{2} g^{j\rho}(\partial_i g_{0\rho}+\partial_0 g_{i\rho}-\partial_\rho g_{i0}) \nonumber \\
& = & \ \frac{1}{2} g^{j0}\partial_i g_{00} + \frac{1}{2} g^{j\ell}(\partial_i g_{0\ell}+\partial_0 g_{i\ell}-\partial_\ell g_{i0}).
\end{eqnarray}
Using the relations between $g_{\mu\nu}$ and $\alpha, \beta^i$, $\gamma_{ij}$ \cite{BSReview}
\labeq{ident}{
g_{00} = -\alpha^2 + \beta_\ell \beta^\ell, \quad g_{0i} = \beta_i, \quad g^{j\ell} = \gamma^{j\ell} - \alpha^{-2}\beta^j\beta^\ell,
}
\Eref{Gamma0} eventually becomes
\begin{eqnarray}\label{Gamma0_2}
{}^{(4)}\Gamma^{j}{}_{i0} =& & -\alpha^{-1}\beta^j\partial_i \alpha + \frac{1}{2}\alpha^{-2}\beta^j\beta^\ell\partial_i\beta_\ell \nonumber \\
& & -\frac{1}{2}\big(\alpha^{-2}\beta^j\beta^\ell\beta^s\partial_i\gamma_{\ell s} - \gamma^{j\ell}\partial_i\beta_\ell
-\gamma^{j\ell}\partial_0\gamma_{i\ell}\big) \nonumber \\
& & -\frac{1}{2}\big(\gamma^{j\ell}\partial_\ell \beta_i +\alpha^{-2}\beta^j\beta^\ell\partial_0 \gamma_{i\ell}-
\alpha^{-2}\beta^j\beta^\ell\partial_\ell \beta_i\big),
\end{eqnarray}
where we have also used the following identities
\labeq{ident2}{
\beta^i = \gamma^{ij}\beta_j, \quad \partial_k\gamma^{ij} = -\gamma^{is}\gamma^{jm}\partial_k \gamma_{sm}.
}
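The identities \eref{ident} can be cross-checked with sympy on a concrete example; the diagonal 3-metric and the two-component shift below are illustrative assumptions.

```python
import sympy as sp

al, b1, b2, g1, g2, g3 = sp.symbols('alpha beta1 beta2 g1 g2 g3', positive=True)

# Illustrative example (an assumption): diagonal 3-metric, two shift components
gam = sp.diag(g1, g2, g3)
beta_up = sp.Matrix([b1, b2, 0])
beta_dn = gam*beta_up

g = sp.zeros(4, 4)
g[0, 0] = -al**2 + (beta_dn.T*beta_up)[0]      # g_00 = -alpha^2 + beta_l beta^l
for i in range(3):
    g[0, i+1] = g[i+1, 0] = beta_dn[i]         # g_0i = beta_i
    for j in range(3):
        g[i+1, j+1] = gam[i, j]

ginv = sp.simplify(g.inv())
gam_inv = gam.inv()

# Check g^{00} = -1/alpha^2, g^{0j} = beta^j/alpha^2,
# and g^{jl} = gamma^{jl} - beta^j beta^l/alpha^2, cf. Eq. (ident)
ok00 = sp.simplify(ginv[0, 0] + 1/al**2) == 0
ok0j = all(sp.simplify(ginv[0, j+1] - beta_up[j]/al**2) == 0 for j in range(3))
okjl = all(sp.simplify(ginv[j+1, l+1] - gam_inv[j, l]
                       + beta_up[j]*beta_up[l]/al**2) == 0
           for j in range(3) for l in range(3))
print(ok00, ok0j, okjl)
```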
The next object that appears in \Eref{Aji_mom} and which we cast in 3+1 language is $\beta^k {}^{(4)}\Gamma^{j}{}_{ik}$. We can write this as
\begin{eqnarray}\label{betaGamma}
\beta^k {}^{(4)}\Gamma^{j}{}_{ik} & = &\ \frac{1}{2} \beta^k g^{j\rho}(\partial_i g_{k\rho}+\partial_k g_{i\rho} - \partial_\rho g_{ik}) \nonumber \\
& = &\ \frac{1}{2} \beta^k g^{j0}(\partial_i g_{k0}+\partial_k g_{i0} - \partial_0 g_{ik}) \nonumber \\
& & + \frac{1}{2} \beta^k g^{j\ell}(\partial_i g_{k\ell}+\partial_k g_{i\ell} - \partial_\ell g_{ik}).
\end{eqnarray}
By virtue of Equations~\eref{ident} and \eref{ident2}, \Eref{betaGamma} finally becomes
\begin{eqnarray}\label{betaGamma2}
\beta^k {}^{(4)}\Gamma^{j}{}_{ik} = & &\ \frac{1}{2}\alpha^{-2} \beta^k\beta^j\partial_i\beta_k + \frac{1}{2}\alpha^{-2} \beta^k\beta^j\partial_k\beta_i \nonumber \\
& & -\frac{1}{2}\alpha^{-2} \beta^k\beta^j\partial_0\gamma_{ik} +\beta^{k}\Gamma^{j}{}_{ik} \nonumber \\
& & -\alpha^{-2}\beta^j\beta^k\beta_\ell\Gamma^{\ell}{}_{ik},
\end{eqnarray}
or equivalently
\begin{eqnarray}\label{betaGamma3}
\beta^k {}^{(4)}\Gamma^{j}{}_{ik} = & &\ \frac{1}{2}\alpha^{-2} \beta^k\beta^j\partial_i\beta_k + \frac{1}{2}\alpha^{-2} \beta^k\beta^j\partial_k\beta_i \nonumber \\
& & - \frac{1}{2}\alpha^{-2} \beta^k\beta^j\partial_0\gamma_{ik} +\beta^{k}\Gamma^{j}{}_{ik} \nonumber \\
& & -\frac{1}{2}\alpha^{-2}\beta^j\beta^k\beta^\ell\partial_i\gamma_{k\ell},
\end{eqnarray}
where $\Gamma^{j}{}_{ik}$ stand for the Christoffel symbols associated with the 3-metric.
By use of Equations \eref{Gamma0_2} and \eref{betaGamma3}, \Eref{Aji_mom} becomes
\begin{eqnarray}\label{Aij2}
A^{j}{}_{i} = & & -\frac{1}{2} \gamma^{j\ell}\partial_i\beta_\ell-\frac{1}{2}\gamma^{j\ell}\partial_0\gamma_{i\ell}+
\frac{1}{2}\gamma^{j\ell}\partial_\ell\beta_i \nonumber \\
& & +\beta^{k}\Gamma^{j}{}_{ik} + \alpha K^{j}{}_{i}.
\end{eqnarray}
From the evolution equation of the 3-metric, \Eref{Kij}, we have
\labeq{gam_dt_gam}{
\frac{1}{2} \gamma^{j\ell}\partial_0\gamma_{i\ell} = -\alpha K^j{}_{i}+\frac{1}{2} \gamma^{j\ell}\partial_i\beta_\ell+\frac{1}{2} \gamma^{j\ell}\partial_\ell\beta_i
-\gamma^{j\ell}\Gamma^s{}_{i\ell}\beta_s.
}
Substitution of \Eref{gam_dt_gam} into \Eref{Aij2} yields
\labeq{Aji3}{
A^j{}_i = -\gamma^{j\ell}\partial_i\beta_\ell + \gamma^{j\ell}\beta^k\partial_i\gamma_{\ell k}+2\alpha K^j{}_i.
}
Finally, substituting \Eref{Aji3} into \Eref{M_evolb} yields the desired result,
\begin{eqnarray}\label{Mupj_evol}
\pounds_{\alpha n} M^j = & &-\lambda\alpha \gamma^{ij} \partial_i H + 2\alpha K^j_i M^i + \alpha M^j K \nonumber \\
& & - (1+ \lambda) \gamma^{ij} H \partial_i \alpha.
\end{eqnarray}
Equations \eref{H_evol3} and \eref{Mupj_evol} are the Hamiltonian and momentum constraint propagation equations in pure 3+1 language, where the Lie
derivatives are given in Equations~\eref{eq:old42} and~\eref{eq:old43}.
We have already established that the form of the constraint propagation equations is the same for both vacuum and non-vacuum spacetimes,
and is independent of the form of the function $f(R)$.
Thus, to validate our equations we can use known results that apply to the case of GR, and have been derived
by using the ``brute force'' method.
For this reason we now compare our results with results published in \cite{VasPas_formulations,YonShin2} that apply for $f(R)=R$, i.e., for the Einstein equations.
In these two papers the evolution equations of the constraints were presented
assuming $T_{\mu\nu}=0$. In \cite{VasPas_formulations} the 3+1 approach was employed to derive
the constraint propagation equations. For direct comparison with these published results we also derive the evolution
equations for $\mathcal{H} = 2 H$ and for $M_i$, which were used in \cite{VasPas_formulations,YonShin2}
instead.
Using
\begin{eqnarray}\label{Mlowj_evol}
\pounds_{\alpha n} M_i & = & M^j\pounds_{\alpha n}\gamma_{ij} + \gamma_{ij} \pounds_{\alpha n} M^j \nonumber \\
& = & - 2\alpha K_{ij} M^j + \gamma_{ij} \pounds_{\alpha n} M^j
\end{eqnarray}
and replacing $H = \mathcal{H}/2$ in \eref{H_evol3} and \eref{Mupj_evol} we obtain the following alternative form for the constraint propagation equations
\begin{eqnarray}\label{H_evol_final}
\pounds_{\alpha n} \mathcal{H} = & & - 4 M^i \partial_i \alpha - 2\alpha \gamma^{ij}\partial_i M_j \nonumber \\
& & + 2\alpha\gamma^{ij}\Gamma^{k}{}_{ij}M_k + (1+ \lambda) \alpha \mathcal{H} K ,
\end{eqnarray}
\labeq{Mlowj_evol_final}{
\pounds_{\alpha n} M_i = -\frac{1}{2}\lambda\alpha \partial_i \mathcal{H} + \alpha M_i K - (1+ \lambda)\frac{1}{2} \mathcal{H} \partial_i \alpha.
}
For $\lambda=1$ Equations~\eref{H_evol_final} and~\eref{Mlowj_evol_final} become precisely the same as the expressions
in \cite{VasPas_formulations}, when the quantities $C_{kij}=\partial_k\gamma_{ij}-D_{kij}$ defined in \cite{VasPas_formulations} satisfy
$C_{kij}=0$. In that work the $C_{kij}$ are constraints that arise from the introduction of the auxiliary variables
$D_{kij}\equiv\partial_k\gamma_{ij}$, which were used to reduce the ADM formulation to first order.
Also, a straightforward calculation shows that the expressions above
are equivalent to the corresponding expressions in \cite{YonShin2}. From Equations~\eref{H_evol_final} and~\eref{Mlowj_evol_final} it is again
evident that the constraints remain satisfied ($\mathcal{H}=M_i=0$), if they are initially satisfied.
We now turn our attention to the evolution equations of $C_{i}$ and $C_{ij}$. To derive the evolution of $C_{i}$ and $C_{ij}$ we simply take a
time derivative of $C_{i}$ and $C_{ij}$, use the commutation relation $\partial_i \partial_t = \partial_t\partial_i$, and
replace the time derivative of variables via Equations \eref{Pievol} and \eref{Qievol} to find that
\labeq{Cievol}{
\partial_t C_i = \beta^sC_{si}
}
and
\labeq{Cijevol}{
\partial_t C_{ij} = \pounds_\beta C_{ij},
}
where
\labeq{30}{
\pounds_\beta C_{ij} = \beta^s\partial_s C_{ij} + C_{sj}\partial_i\beta^s + C_{is}\partial_j\beta^s.
}
Equations \eref{Cievol} and \eref{Cijevol} imply that if the constraints $C_i,C_{ij}$ are initially satisfied,
then the evolution equations will preserve the constraints.
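The propagation equation \eref{Cievol} can be checked directly with sympy in flat 2D coordinates, where $D_i \to \partial_i$ (an illustrative simplification: the Christoffel terms cancel in $C_i$ and $C_{ij}$ anyway); the function names below are placeholders.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
X = [x, y]
al = sp.Function('alpha')(t, x, y)
phi = sp.Function('phi')(t, x, y)
Pi = sp.Function('Pi')(t, x, y)
Q = [sp.Function('Q1')(t, x, y), sp.Function('Q2')(t, x, y)]
beta = [sp.Function('beta1')(t, x, y), sp.Function('beta2')(t, x, y)]

# Evolution equations (Pievol) and (Qievol) with D_i -> partial_i
dphi_dt = al*Pi + sum(beta[s]*Q[s] for s in range(2))
dQ_dt = [sum(beta[s]*sp.diff(Q[i], X[s]) + Q[s]*sp.diff(beta[s], X[i])
             for s in range(2)) + sp.diff(al*Pi, X[i]) for i in range(2)]

# C_si = d_s Q_i - d_i Q_s, Eq. (Cmunu); residual of Eq. (Cievol):
# d_t C_i - beta^s C_si with C_i = Q_i - d_i phi
C_mat = [[sp.diff(Q[i], X[s]) - sp.diff(Q[s], X[i]) for i in range(2)]
         for s in range(2)]
residual = [sp.expand(dQ_dt[i] - sp.diff(dphi_dt, X[i])
                      - sum(beta[s]*C_mat[s][i] for s in range(2)))
            for i in range(2)]
print(residual)  # -> [0, 0]
```

The cancellation is exact and algebraic, mirroring the statement that the $\partial_i(\alpha\Pi)$ terms drop out of $\partial_t C_i$.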
We stress again that Equations~\eref{H_evol_final},~\eref{Mlowj_evol_final}, \eref{Cievol} and \eref{Cijevol} are valid not only for vacuum,
but also for non-vacuum spacetimes, as well as for any viable $f(R)$ model. This is a new result that to our knowledge has not
been pointed out previously, and it is not trivial to prove if one
employs the 3+1 or ``brute force'' method to derive the constraint propagation equations. Here we proved this without prior
knowledge of the evolution equations of the dynamical variables $K_{ij}$ and $\gamma_{ij}$, on the basis of the Frittelli approach.
Finally, we note that
the agreement between our expressions and results obtained by the ``brute force'' approach confirms that
\Eref{Mom_evol_Frit} is correct (see footnote on page \pageref{ftnt1}).
\section{Constraint Propagation in the Jordan and Einstein frames}
\label{jordan_Einstein}
The form of the $f(R)$ field equations both in the Jordan frame (see Equations \eref{TrSigma} and \eref{fofREOM2}) and in the Einstein frame
(see Equations \eref{phi_einstein} and \eref{fofREinsteinEOM}) is the same as the BD formulation of $f(R)$ gravity. For this reason, it is
evident from our discussion in \Sref{General_Con_prop} that the form of the constraint propagation equations in these two
frames must be the same as those in the BD formulation, provided that we define the Hamiltonian and momentum constraints
analogously to Equations \eref{Hamiltonian_prim} and \eref{Momentum_prim}, with one important caveat:
the foliation in the Einstein frame must be based on the conformal (Einstein) metric and not the
physical (Jordan), i.e., the induced 3-metric on spacelike hypersurfaces must be $\tilde \gamma_{ab} = \tilde g_{ab} + \tilde n_a \tilde n_b$, where
the normal timelike vector now satisfies $\tilde g_{ab}\tilde n^a \tilde n^b = -1$ and not $g_{ab}\tilde n^a \tilde n^b = -1$. Note that this last condition
is only a mathematical requirement for the 3+1 machinery to remain the same. Physical conclusions must still be drawn based on the Jordan metric.
Finally, we note that if one applies the general recipe for a 3+1 decomposition (see \Sref{3p1_intro}) to
more general scalar-tensor theories of gravity considered in \cite{fofR_well_posed}, then
the constraint propagation equations will be the same as our Equations~\eref{H_evol_final},~\eref{Mlowj_evol_final},
\eref{Cievol} and \eref{Cijevol}. This is because the covariant (Jordan frame) formulation of these theories takes
the same form as Equations~\eref{BD2} and~\eref{BD3} that we considered here, which leads to the same decomposition \eref{Ein}.
\section{Summary and Discussion}
\label{summary}
We have extended the ADM constraint propagation equations, using the Frittelli method \cite{Frit97}, to
generic metric $f(R)$ gravity represented as a BD theory. For direct comparison with published results, we wrote our
general evolution equations of the constraints (defined via Equations~\eref{Hamiltonian_prim} and~\eref{Momentum_prim})
in the same form as the original equations given in \cite{Frit97}. This mathematical form,
given by Equations~\eref{con_prop_ham} and \eref{con_prop_mom},
combines both spacetime and purely spatial objects.
To make transparent the connection between these
equations and the language of the 3+1 decomposition of spacetime, we
cast Equations \eref{con_prop_ham} and \eref{con_prop_mom} in pure 3+1 form, i.e., in a form that involves only scalar
and purely spatial objects and their derivatives (see Equations~\eref{H_evol_final} and~\eref{Mlowj_evol_final}).
The 3+1 form is the mathematical
form that the evolution equations of the constraints would take if one employed a ``brute force'' 3+1 approach
for performing this derivation. The brute force approach requires prior knowledge of the exact 3+1 equations and is much more
involved.
The main result of this work is that the mathematical form of the constraint
propagation equations is the same for both vacuum and non-vacuum spacetimes, as well as for any (viable) form of the function $f(R)$,
provided that $T_{\mu\nu}^{(m)}$ and $T_{\mu\nu}^{(\phi)}$ (see \Sref{3p1_intro}) are absorbed properly in the definition of the constraints.
We have also argued that the mathematical form of the evolution of the constraints for 3+1 $f(R)$ gravity
in the Jordan frame remains the same as that of the
BD-equivalent 3+1 $f(R)$ gravity. This result holds true in the Einstein frame, too, if the spacetime foliation is chosen based on
the Einstein metric $\tilde g_{\mu\nu}$ and not the physical (Jordan) metric $g_{\mu\nu}$.
Finally, a comparison between our equations and previous GR results, using the 3+1 approach,
shows that all expressions for the constraint propagation equations agree.
We end this work by pointing out that the 3+1 BD-equivalent $f(R)$ equations can be incorporated in current numerical relativity codes
with only minor additional effort. For example, the minimum requirement for studying vacuum spacetimes in viable $f(R)$ models
is to include the contribution of the scalar field stress-energy tensor $T_{\mu\nu}^{(\phi)}$ (see \Sref{3p1_intro})
and implement a scalar field solver in the form of Equations \eref{Pievol}-\eref{Qievol}. As in GR, it is almost certain that the stability
of numerical implementations of the fully non-linear equations of $f(R)$ gravity will be sensitive to the formulation used.
Given that the structure of the $f(R)$ constraint propagation equations is fundamentally the same as that of the ADM formulation,
we believe that dynamical $f(R)$ simulations will benefit from formulations such as the Baumgarte-Shapiro-Shibata-Nakamura (BSSN;
\cite{ShibNakamBSSN,BaumShapirBSSN}) approach or the generalized harmonic decomposition \cite{Pretorius2005a}. If these formalisms fail,
other approaches such as those proposed in \cite{PADM,Calabrese,Fiske} may prove useful. We hope that this work will serve as a starting point
for relativists to develop fully dynamical codes for viable $f(R)$ models.
\ack
We would like to thank Carlos Cunha for useful conversations.
This
paper was supported in part by NSF Grants PHY06-50377 and
PHY09-63136 as well as NASA Grants
NNX07AG96G and NNX10AI73G to the University of
Illinois at Urbana-Champaign. Ignacy Sawicki acknowledges support by
the DFG through TRR33 ``The Dark Universe''.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
Ensemble learning has been proved impactful in traditional machine learning
\cite{breiman2001random, liu1999ensemble, caruana2004ensemble}.
It is also widely applied in deep learning, where it is used to enhance final performance on a variety of tasks~\cite{liu2018path}.
Nevertheless, most applications of ensembling in deep learning follow the traditional strategy of simply combining several individually trained neural networks.
This strategy multiplies the parameter count and the computational cost, which is prohibitive for most practical applications.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{sketch_map.jpg}
\caption{Intra-Ensemble Network (IENet). Sub-network 0 indicates the original network with all channels and layers. Sub-network 1, 2, 3 share different channels and layers of the original network.}
\label{sketch_map}
\end{figure}
The design of many well-known CNN architectures was partly inspired by the idea of ensembling.
The Inception series~\cite{szegedy2017inception} concatenates branches with various filter types or depths in each layer.
Other architectures like ResNet~\cite{he2016deep} and DenseNet~\cite{huang2017densely} sum or concatenate different features from previous layers.
Our work is mainly inspired by the one-shot model~\cite{bender2018understanding} and slimmable neural networks~\cite{yu2018slimmable}.
The one-shot model trains a neural network with multiple optional operations at each position, each dropped with a certain probability during training.
Different sub-networks are naturally generated by keeping different operations at each position.
These positive correlations make it possible to estimate a stand-alone model's accuracy from its corresponding sub-network in the one-shot model.
However, owing to the relatively large search space, there are thousands of possible combinations forming various sub-networks.
Parameters shared among sub-networks conflict with each other, leading to a serious accuracy decrease compared to the corresponding stand-alone networks.
A general method~\cite{yu2018slimmable} has been proposed to train a single network executable at different widths using switchable batch normalization (S-BN).
The sub-networks mutually share weights, so the total parameter size is nearly the same as that of the stand-alone network, apart from the marginal parameters introduced by S-BN.
In our view, S-BN could provide high performance sub-networks, which are fundamental portions for our ensemble technique.
Another important question is whether the sub-networks can reach accuracies similar to that of the original network.
In our experiments, we find that this is possible when the network is relatively over-parameterized for the training dataset.
Based on the common observation that deep neural networks are usually over-parameterized, we exploit this redundancy by applying intra-ensemble among the inner sub-networks.
However, training a switchable network directly creates a series of sub-networks with similar properties, which is obviously harmful to ensembling.
For this reason, the key problem is to train high-accuracy, low-correlation sub-networks that retain their ensemble ability.
Once the sub-networks produce sufficiently uncorrelated outputs to handle different kinds of hard cases, the ensemble takes effect and improves final performance.
To this end, we propose intra-ensemble with stochastic channel recombination operations(a.k.a. IENet) to produce efficient diversity among sub-networks.
The sketch in figure~\ref{sketch_map} illustrates the relation between the original network and its sub-networks, and the concept of intra-ensemble.
With a fixed size of network parameters, we can generate several sub-networks with stochastic channel arrangement.
Given their high accuracy and diversity, we apply simple combination strategies to the sub-networks rather than picking the best one as in the one-shot method.
Interestingly, although each sub-network may suffer a tiny accuracy drop, the intra-ensemble result still surpasses the stand-alone neural network with almost the same parameter size.
The main contributions of this work are as follows:
\begin{itemize}
\item We propose intra-ensemble with stochastic channel recombination operations, which significantly increases the diversity of sub-networks and enhances network performance at nearly the same parameter size.
\item We demonstrate the applicability of intra-ensemble to various datasets and network architectures through extensive experiments. Our IENets achieve highly competitive results on CIFAR-10, CIFAR-100 and other tasks.
\end{itemize}
\section{Intra-Ensemble in One Neural Network}
\subsection{Train Sub-networks with Multiple Widths}
In this work, parameter redundancy is tactfully utilized to train several sub-networks within one neural network while sharing most of the network weights.
Here we define a list of width ratios $W$ to control the channel number in each layer.
For any width ratio $w_i \in W$, $w_i$ is the ratio of used channels per layer.
To be specific, given a layer with $n$ channels, if $w_i=0.8$, the corresponding $i$-th sub-network uses $0.8n$ of the original network's channels in this layer.
However, naive training with different widths will cause a great performance decline for each sub-network.
To address this issue, we use slimmable networks~\cite{yu2018slimmable} as a reference and apply their idea of switchable batch normalization(S-BN) in our work.
With S-BN, we can easily train one neural network at different widths with a marginal increase in parameter size while keeping performance high: layers at different widths employ independent batch normalization to accumulate their own feature statistics, which ensures stability in both training and inference.
Unlike the original work, we apply S-BN to several sub-networks that use more than 90\% of the channels in every layer of the network.
Even though the parameter sharing rate is much higher in our case, S-BN still ensures the sub-networks' performance.
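The bookkeeping behind S-BN can be sketched as follows (a minimal NumPy illustration of the idea, not the implementation of \cite{yu2018slimmable}; the class name and interface are our own):

```python
import numpy as np

class SwitchableBN:
    """Sketch of switchable batch normalization (S-BN): every width
    ratio keeps its own feature statistics, so sub-networks of
    different widths do not pollute each other's normalization."""

    def __init__(self, channels, width_ratios, eps=1e-5):
        self.eps = eps
        # independent (mean, var) accumulators, one pair per width ratio
        self.stats = {w: (np.zeros(int(round(w * channels))),
                          np.ones(int(round(w * channels))))
                      for w in width_ratios}

    def __call__(self, x, width):
        # x: (batch, active_channels) activations of the sub-network at `width`
        mean, var = x.mean(axis=0), x.var(axis=0)
        self.stats[width] = (mean, var)  # update stats for this width only
        return (x - mean) / np.sqrt(var + self.eps)
```

Only the per-width normalization statistics are duplicated, which is the marginal parameter overhead mentioned above.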
\subsection{Stochastic Channel Recombination}\label{diversity}
Sub-networks naively trained with slimmable methods~\cite{yu2018slimmable} usually converge to similar results because they share a large part of their parameters.
Ensembling demands a set of diverse models to take effect.
The crucial point is therefore to increase the diversity among the different sub-networks.
Here we introduce intra-ensemble with stochastic channel recombination operations to significantly enhance ensemble performance within one neural network.
In order to reduce the homogeneity among sub-networks, we reduce the problem to creating structural differences among them.
We propose the following three recombination operations shown in figure \ref{diff_ops} to increase sub-network diversity on width.
Our operations are mainly implemented on rearranging channel indexing.
Suppose we have a layer with $c$ channels in total, with a sub-network containing $n(0 < n \leq c)$ channels in this layer.
The operations are described as follows:
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{diff_ops.jpg}
\caption{Different stochastic channel recombination operations. (a) random offset. (b) random cut. (c) shuffle channel.}
\label{diff_ops}
\end{figure*}
{\bf Random cut (RC)} Random cut removes a random contiguous block of channels from the full channel set of a sub-network.
If we cut out a fraction $p$ of the channels starting at a randomly generated index $t$, the channels with indices $[t, t+1,\dots,t+pc)$ are blocked out, while the channels in $[0,\dots,t)$ and $[t+pc,\dots,c)$ are retained for the forward and backward propagation of the corresponding sub-network.
The channels used in a given layer can thus be described as
$$ \pmb w_{used} = \left[\pmb w_i \right]_{i \in \left[0,\dots,t\right) \cup \left[t+pc,\dots,c\right)}, $$
with $t$ randomly generated from $[0,\dots,c-pc)$ and $\pmb w_i$ the $i$-th channel of the layer.
{\bf Random offset (RO)} Random offset shifts the starting index of a sub-network's channels instead of simply selecting channels from the head.
If a sub-network layer uses a fraction $p$ of the $c$ total channels with offset $t$, its channel index list is $[t,t+1,t+2,\dots,t+pc)$, where the offset is constrained by $0\leq t < c-pc$.
The channels used in a given layer can thus be described as
$$ \pmb w_{used} = \left[\pmb w_i\right]_{i \in \left[t,\dots,t+pc\right)}, $$
with $t$ randomly generated from $[0,\dots,c-pc)$ and $\pmb w_i$ the $i$-th channel of the layer.
{\bf Shuffle channel (SC)} Shuffle channel is inspired by ShuffleNet~\cite{ma2018shufflenet}.
We randomly choose different lists of channel indices and concatenate the features in shuffled order for the sub-network layers.
With the shuffle channel operation, different sub-networks use different channels in different orders, so diversity is greatly enhanced.
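The three operations amount to different ways of generating a channel-index list per sub-network layer. A minimal sketch (the function names, and the convention that $p$ is the cut fraction for RC but the kept fraction for RO and SC, are our own, chosen to match the descriptions above):

```python
import random

def random_cut(c, p, rng=random):
    """RC: drop a random contiguous block of p*c channels, keep the rest."""
    k = int(round(p * c))            # number of channels cut out
    t = rng.randrange(c - k + 1)     # cut start index
    return list(range(0, t)) + list(range(t + k, c))

def random_offset(c, p, rng=random):
    """RO: keep p*c consecutive channels starting at a random offset t."""
    k = int(round(p * c))            # number of channels kept
    t = rng.randrange(c - k + 1)     # offset, 0 <= t <= c - k
    return list(range(t, t + k))

def shuffle_channel(c, p, rng=random):
    """SC: keep p*c channels drawn at random, in shuffled order."""
    return rng.sample(range(c), int(round(p * c)))
```

Each sub-network draws its own index list once and reuses it for forward and backward propagation.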
{\bf Sub-network similarity} Here we propose a simple metric called similarity, denoted $\mathcal{S}$.
Given a test dataset of $N$ images, if all sub-networks produce the same output on $K$ of them, the similarity $\mathcal{S}$ is
$$\mathcal{S}=K/N.$$
More similar sub-network outputs lead to a higher $\mathcal{S}$ score.
Balancing similarity against accuracy is an empirical matter, so we carry out extensive experiments to find the best operation for IENet.
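The similarity score is straightforward to compute from per-sample predictions (a small sketch; the function name is ours):

```python
def similarity(sub_predictions):
    """S = K/N, where K counts test samples on which every sub-network
    predicts the same class label.

    sub_predictions: list of equal-length label lists, one per sub-network."""
    n = len(sub_predictions[0])
    k = sum(1 for labels in zip(*sub_predictions) if len(set(labels)) == 1)
    return k / n
```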
\subsection{Ensemble Strategy}
Since we already have several high-performance and diverse sub-networks, the next step is to decide how to combine the outputs properly.
We simply apply two basic combination strategies of ensemble learning as follows:
Consider the $C$-class softmax outputs of the $N$ sub-networks $\{\pmb o_i \}_{i={1,2,...,N}}$, $\pmb o_i = [x_{i1},x_{i2},...,x_{ic}] \in \mathbb{R}^C $.
{\bf Averaging} Averaging means the probability for each category will be the average of all the sub-networks. The final output is the mean value over softmax outputs.
$$ \pmb o_{avg} = \frac{1}{N} \sum_{i=0}^N \pmb o_i$$
{\bf Stacking} Alternatively, we can use a small stacking network in which each class has its own weights for training and inference; there is no interaction between different classes, which keeps the number of added parameters at $NC$:
$$ \pmb o_{stacking} = diag(\pmb W \cdot \pmb O)$$
$$ \pmb W \in \mathbb{R}^{C \times N}, \pmb O \in \mathbb{R}^{N \times C} $$
Averaging is a simple and effective parameter-free method and needs no extra training.
Benefiting from supervised information, stacking achieves slightly better accuracy in most cases, with only marginal parameters added.
Based on practical experience, we mainly choose and report the performance of intra-ensemble with random cut and stacking.
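Both combination rules are easy to state in code (a NumPy sketch of the formulas above; in practice the stacking weights $\pmb W$ are learned, here we simply apply a given matrix):

```python
import numpy as np

def average_ensemble(outputs):
    """Averaging: mean of the (N, C) softmax outputs over sub-networks."""
    return np.mean(outputs, axis=0)

def stacking_ensemble(outputs, weights):
    """Diagonal stacking: o = diag(W @ O), with W of shape (C, N) and
    O of shape (N, C); each class uses only its own row of weights,
    so only N*C parameters are added."""
    return np.diag(weights @ outputs)
```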
\section{Experiments and Results}
\begin{table*}[ht]
\centering
\begin{tabular}{lcccccc}
\hline
Datasets & Classes & Baseline acc. & IENet acc. & Acc. $\Delta $ & Param $\Delta $\\
\hline
SVHN~\cite{netzer2011reading} & 10 & 97.59 & {\bf 97.93} & +0.34 & +0.09M\\
Fashion-MNIST~\cite{xiao2017fashion} & 10 & 96.05 & {\bf 96.43} & +0.38 & +0.09M\\
Mini-ImageNet~\cite{vinyals2016matching} & 100 & 72.43& {\bf 74.71} & +2.28 & +0.11M\\
Oxford-IIIT Pets~\cite{em2017incorporating} & 37 & 92.86 & {\bf 94.91}& +2.05 & +0.11M\\
FGVC Aircraft(Fam.)~\cite{maji2013fine} & 70 & 82.60 &{\bf 87.22}& +4.62 & +0.11M\\
FGVC Aircraft(Var.)~\cite{maji2013fine} & 102 & 75.34 & {\bf 80.91 }& +5.57 & +0.11M\\
Caltech-101~\cite{fei2007learning} & 101 & 84.50 & {\bf 87.65} & +3.15 & +0.11M\\
Food-101~\cite{bossard14} & 101 & 82.27 & {\bf 85.00} & +2.73 & +0.11M\\
\hline
\end{tabular}
{
\caption{Results on other datasets. All experiments are implemented using 4 sub-networks IENet with random cut(RC) and stacking. {\bf Acc. $\Delta$} means the accuracy difference between IENet and the stand-alone baseline network. {\bf Param $\Delta$} means the parameters added with intra-ensemble.}
\label{other_results}
}
\end{table*}
\subsection{Results on Different Datasets}
Our training setup follows the CIFAR-10 implementation of DARTS~\cite{liu2019darts}.
Data augmentation is exactly the same as DARTS with cutout~\cite{devries2017improved}.
Moreover, we do not add any auxiliary head to assist training.
{\bf CIFAR-10 and CIFAR-100} The typical configuration for the CIFAR-10 experiments is 4 sub-networks with random cut (RC) and the width ratio list $\left[0.9, 0.9, 0.9, 1.0\right]$. It reaches 2.61\% test error using only 2.66M parameters.
With fewer parameters, it outperforms most models found by neural architecture search as well as manually designed ones.
Moreover, a wider version with 4.22M parameters reaches 2.47\% test error and a narrower version with 1.63M parameters reaches 2.91\%, demonstrating the high scalability of intra-ensemble.
A comparison of our models with others can be found in table~\ref{CIFAR-10}.
All models with intra-ensemble show substantial accuracy improvements while introducing few extra parameters.
Note that the 2.57M stand-alone baseline is a slightly modified MobileNet V2 1.0x with wider channels in each layer, which attains 3.10\% test error.
We directly apply the CIFAR-10 configuration to CIFAR-100, changing only the number of output classes.
Although the number of classes increases from 10 to 100 and hard cases become more complex, intra-ensemble still improves performance significantly.
The 4-sub-network IENet with 2.78M parameters gains 1.64\% over a single model of similar parameter size, as shown in table~\ref{CIFAR-100 Results}.
Surprisingly, the 5-sub-network IENet with 2.82M parameters gains 2.62\%. We conjecture this is due to the relatively low top-1 accuracy compared with CIFAR-10,
which leaves intra-ensemble more room for improvement.
\begin{table}[ht]
\centering
\begin{tabular}{lccc}
\hline
Method & Param. & $\mathcal{S}$ & Err.\\
\hline
DenseNet-BC~\cite{huang2017densely} & 25.6M &- & 3.46 \\
NASNet-A~\cite{zoph2018learning} & 3.3M & - & 2.65\\
AmoebaNet-A~\cite{real2018regularized} & 3.2M & - &3.34\\
Darts~\cite{liu2019darts} & 3.3M &- & 2.76 \\
ILRAS~\cite{guo2018irlas} & 3.91M &- & 2.60 \\
\hline
our stand-alone & 2.57M & - & 3.10 \\
\hline
IENet (4 sub-nets, SC, S) & 2.66M & 0.838 & 2.86 \\
IENet (4 sub-nets, RO, S) & 2.66M & 0.845 &2.77 \\
IENet (4 sub-nets, RC, S) & 2.66M & 0.857 &2.61 \\
\hline
IENet (4 sub-nets, RC, S) & 1.63M & - &2.91 \\
IENet (4 sub-nets, RC, A) & 2.66M & - &2.66 \\
IENet (4 sub-nets, RC, S) & 4.22M & - &2.47 \\
IENet (5 sub-nets, RC, S) & 2.70M & - &2.66 \\
\hline
\end{tabular}
{
\caption{Comparison of test error on CIFAR-10. In the annotations, `S' means Stacking; `A' means Averaging.}
\label{CIFAR-10}
}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{lccc}
\hline
Method & Param & $\mathcal{S}$ & Err.\\
\hline
DenseNet-BC~\cite{huang2017densely} & 25.6M & - &17.18 \\
SMASHv2~\cite{brock2017smash} &16M &- & 20.6 \\
ENAS~\cite{pham2018efficient} & 4.6M &- & 17.27\\
PNAS~\cite{liu2018progressive} & 3.2M &- & 17.63 \\
AmoebaNet-B~\cite{real2018regularized} & 34.9M & - &15.80\\
\hline
our stand-alone & 2.71M &- & 18.66 \\
\hline
IENet (4 sub-nets, SC, S) & 2.78M &0.653 & 18.04 \\
IENet (4 sub-nets, RO, S) & 2.78M & 0.685 &17.10 \\
IENet (4 sub-nets, RC, S) & 2.78M & 0.759 & 17.02 \\
\hline
IENet (5 sub-nets, RC, A) & 2.82M & - & 16.10 \\
IENet (5 sub-nets, RC, S) & 2.82M & - & 16.04 \\
\hline
\end{tabular}
{
\caption{Comparison of test error on CIFAR-100. In the annotations, `S' means Stacking; `A' means Averaging.}
\label{CIFAR-100 Results}
}
\end{table}
{\bf Other datasets} We additionally carry out extensive experiments on different kinds of datasets to demonstrate the robustness of our method, as shown in table~\ref{other_results}.
Simple training settings and fewer training epochs are used in order to quickly verify the effectiveness of intra-ensemble on these datasets.
All the extra experiments use the 4-sub-network IENet with random cut (RC) and stacking.
While the datasets differ in image size and number of classes, intra-ensemble consistently improves performance on both simple and complex tasks.
The results in the table demonstrate the general applicability of intra-ensemble to various types of data.
{\bf Similarity analysis} A good trade-off between accuracy and similarity should be carefully considered when using the different stochastic channel operations. As shown in tables~\ref{CIFAR-10} and \ref{CIFAR-100 Results}, these operations significantly decrease the sub-network similarity $\mathcal{S}$.
Intra-ensemble with RC strikes the best accuracy-similarity trade-off compared to RO and SC. Although they yield lower similarity, the intra-ensemble results using RO and SC cannot surpass those using RC because of the larger accuracy drop of their sub-networks. We conjecture that as $\mathcal{S}$ decreases, there are more ``conflicts'' among the weight-sharing sub-networks.
Moreover, the similarity $\mathcal{S}$ on CIFAR-100 is much lower than on CIFAR-10.
Correspondingly, intra-ensemble yields a larger improvement on CIFAR-100.
We deduce that the complexity and variety of CIFAR-100 lead to greater difference among sub-networks, thus ensemble performance is better enhanced.
Besides, in table~\ref{other_results}, the datasets with more classes also show larger improvements with intra-ensemble.
Presumably, intra-ensemble works better on more complex tasks with more image classes.
\section{Conclusion}
We have introduced Intra-Ensemble, a novel strategy which combines several diversified sub-networks within one neural network to enhance final performance.
The stochastic channel recombination operations ensure both high accuracy and diversity.
With marginal parameters added, intra-ensemble achieves competitive results compared with other methods on classification tasks.
Extensive experiments show that our method is effective on various kinds of architectures and datasets.
Moreover, as multi-core computing power becomes increasingly widespread, model parallelism becomes easier, and sub-networks can be trained simultaneously to save training time.
In this work, we only carry out experiments on classification tasks,
but we believe IENet can enhance other CNN architectures and be applied to other CNN-based tasks (e.g., object detection) to improve model performance.
We will continue this work to maximize the utility of the intra-ensemble method.
\input{egbib.bbl}
\bibliographystyle{IEEEbib}
\section{Introduction}
Let $S$ be a standard graded algebra over a field $k$.
For a homogeneous ideal $Q \subseteq S$, we call the function $\depth S/Q^n$, $n \ge 1$
the \emph{depth function} of $Q$.
The goal of this paper is to prove the following conjecture of Herzog and Hibi in \cite{HH} (see also \cite[Problem 3.10]{He}).
\medskip
\begin{conjecture}[Herzog-Hibi] \label{conj.HH}
Let $f: {\mathbb N} \rightarrow {\mathbb Z}_{\ge 0}$ be any function such that $f(n) = f(n+1)$ for all $n \gg 0$. Then there exists a homogeneous ideal $Q$ in a polynomial ring $S$ such that $f$ is the depth function of $Q$.
\end{conjecture}
For simplicity we call a function $f: {\mathbb N} \rightarrow {\mathbb Z}_{\ge 0}$ a \emph{numerical function}
and say that $f$ is \emph{convergent} if $f(n) = f(n+1)$ for all $n \gg 0$. By a classical result of Brodmann \cite{Br}, the depth function of a homogeneous ideal is always convergent. Conjecture \ref{conj.HH} simply says that this is the only constraint for numerical functions to be depth functions of homogeneous ideals. This conjecture is remarkable since the depth function tends to be non-increasing in known examples.
Before this work, Conjecture \ref{conj.HH} has been verified only for non-decreasing functions \cite{HH}
and for some special classes of non-increasing functions \cite{HTT, HH, MST}.
Note that the proof of Conjecture \ref{conj.HH} for non-increasing functions in \cite{HTT} has a gap. Examples of non-monotone depth functions were hard to find \cite{BHH, HS, HH, MV}.
However, Bandari, Herzog and Hibi \cite{BHH} showed that the depth function can have any given number of local maxima. \par
Our main result, Theorem \ref{Herzog-Hibi}, settles Conjecture \ref{conj.HH} in its full generality.
Furthermore, we shall show that the ideal $Q$ can be chosen to be a monomial ideal.
As a consequence, we give a positive answer to the following question of Ratliff,
which has remained open since 1983 \cite[(8.9)]{Ra}.
\medskip
\begin{question}[Ratliff] \label{question.R}
Given a finite set $\Gamma$ of positive integers, do there exist a Noetherian
ring $S$, an ideal $Q$ and a prime ideal $P \supset Q$ in $S$ such that $P$ is an associated prime of $Q^n$ if and only if $n \in \Gamma$?
\end{question}
Inspired by Theorem \ref{Herzog-Hibi}, one may expect that for any convergent positive numerical function $f$, there exists a homogeneous ideal $Q$ such that $f$ is the depth function of the symbolic powers of $Q$. This was recently verified by the second and third authors of this paper \cite{NgT}.
The proof of Conjecture \ref{conj.HH} is based on our recent works on sums of ideals \cite{HNTT, HTT}. The key observation is the additivity of depth functions; that is, the sum of two depth functions is again a depth function. It can also be seen that any convergent numerical function which is not the constant zero function can be written as the sum of a finite number of functions of the following two types:
\begin{itemize}
\item Type I: for some fixed $d \in {\mathbb N}$, $f(n) = \left\{\begin{array}{ll} 0 & \text{if } n < d \\ 1 & \text{if } n \ge d. \end{array}\right.$
\item Type II: for some fixed $d \in {\mathbb N}$, $f(n) = \left\{\begin{array}{ll} 0 & \text{if } n \not= d \\ 1 & \text{if } n = d. \end{array}\right.$
\end{itemize}
Therefore, the proof is completed if we can construct ideals with depth functions of Types I and II.
Our paper is structured as follows. In Section 2 we prove the additivity of depth functions. Ideals with depth functions of Types I and II are constructed in Section 3. Section 4 is devoted to consequences of our solution to Conjecture \ref{conj.HH}.
We assume that the reader is familiar with basic properties of associated primes and depth, which we use without references. For unexplained notions and terminology, we refer the reader to \cite{BrH, E}.
\section{Additivity of depth functions} \label{sect_prel}
Throughout this section, let $A$ and $B$ be polynomial rings over a field $k$ with disjoint sets of variables, and let $R = A \otimes_k B$. Let $I \subseteq A$ and $J \subseteq B$ be nonzero proper homogeneous ideals. By abuse of notations, we shall also use $I$ and $J$ to denote their extensions in $R$.
\begin{lemma}[\protect{\cite[Lemma 1.1]{HT}}] \label{HoaTam}
$I \cap J = IJ$.
\end{lemma}
\begin{lemma}[\protect{\cite[Lemma 2.2]{HT}}] \label{HoaTam2}
$\depth R/IJ = \depth A/I + \depth B/J + 1.$
\end{lemma}
We shall use the above lemmas to prove the following result which yields the additivity of depth functions.
\begin{proposition} \label{additivity}
Let $I \subset A$ and $J \subset B$ be homogeneous ideals as above.
There exists a homogeneous ideal $Q$ in a polynomial ring $S$ such that for all $n > 0$,
$$\depth S/Q^n = \depth A/I^n + \depth B/J^n.$$
Moreover, if $I$ and $J$ are monomial ideals, then $Q$ can be chosen to be a monomial ideal.
\end{proposition}
\begin{proof}
Let $x \in A$ and $y \in B$ be arbitrary variables.
Let $R = A \otimes_k B$. Then $R$ is a polynomial ring in the variables of $A$ and $B$.
By Lemma \ref{HoaTam} we have $IJ = I \cap J$.
The associated primes of $I \cap J$ are extensions of ideals in one of the rings $A,B$.
Therefore, $x-y$ does not belong to any associated prime of $IJ$.
From this it follows that
$$\depth R/(IJ,x-y) = \depth R/IJ -1.$$
By Lemma \ref{HoaTam2} we have
$$\depth R/IJ = \depth A/I + \depth B/J +1.$$
Therefore,
$$\depth R/(IJ,x-y) = \depth A/I + \depth B/J.$$
Obviously, we may replace $I,J$ by $I^n,J^n$ and obtain
$$\depth R/(I^nJ^n,x-y) = \depth A/I^n + \depth B/J^n.$$
Set $S = R/(x-y)$ and $Q = (IJ,x-y)/(x-y)$. Then $S$ is isomorphic to a polynomial ring over $k$
and
$$\depth S/Q^n = \depth R/((IJ)^n,x-y) = \depth A/I^n + \depth B/J^n$$
for all $n > 0$. Moreover,
$Q$ is a monomial ideal if $I, J$ are monomial ideals.
\end{proof}
To ease notation, we shall identify a numerical function $f(n)$ with the sequence of its values $f(1), f(2), \ldots$
If $f$ is the constant function $0,0,0,\ldots$,
then $f$ is the depth function of the maximal homogeneous ideal of any polynomial ring over $k$.
\begin{lemma} \label{Types}
Let $f$ be a convergent numerical function which is not the constant function $0,0,0,\ldots$.
Then $f$ can be written as a sum of numerical functions of the following two types:\par
{\rm Type I:} $0,...,0,1,1,...$, \par
{\rm Type II:} $0,...,0,1,0,0,...$. \par
\end{lemma}
\begin{proof}
Let $f$ be a convergent numerical function of the form $c_1,...,c_n, c,c,...$.
Then $f$ is the sum of the functions $0,\ldots,0,c_i,0,0,\ldots$, $i = 1,\ldots,n$, and the function $0,\ldots,0,c,c,\ldots$.
The function $0,...,0,c_i,0,0,...$ is $c_i$ times the function $0,...,0,1,0,0,...$, where $1$ stands only at the $i$-th place. The function $0,...,0,c,c,...$ is $c$ times the function $0,...,0,1,1,...$, where $1$ starts from the $(n+1)$-th place.
\end{proof}
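As a concrete illustration of this decomposition (our own example, not part of the original argument), consider the convergent function $2,0,3,3,3,\dots$. Following the proof, with $n = 2$, $c_1 = 2$, $c_2 = 0$ and $c = 3$, we obtain

```latex
(2,0,3,3,3,\dots) \;=\; 2\cdot(1,0,0,\dots) \;+\; 3\cdot(0,0,1,1,\dots),
```

i.e., two copies of a Type II function (with $1$ at the first place) plus three copies of a Type I function (with $1$ starting from the third place).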
By Proposition \ref{additivity} and Lemma \ref{Types}, to establish the validity of Conjecture \ref{conj.HH}, it suffices to construct depth functions of types I and II.
\section{Construction of ideals with depth functions of Types I and II}
Herzog and Hibi \cite{HH} already constructed monomial ideals $I$ whose depth functions can be any non-decreasing convergent numerical function. Therefore, the existence of depth functions of Type I follows from their result.
\begin{example}[\protect{\cite[Theorem 4.1]{HH}}] \label{Type I}
Let $A = k[x,y,z]$. For any integer $d \ge 2$, let
$I=(x^{d+2},x^{d+1}y,xy^{d+1},y^{d+2},x^dy^2z).$
Then
$$\depth A/I^n = \begin{cases} 0 &\text{if $n \le d-1$},\\ 1 &\text{if $n \ge d$}. \end{cases}$$
\end{example}
We also know that there are monomial ideals $J$ with the depth function $1,...,1,0,0,...$ \cite{HTT, MST}.
The existence of such depth functions can be used to construct depth functions of Type II as follows.
Let $I$ and $J$ be monomial ideals with the depth functions $0,...,0,1,1,...$ and $1,...,1,0,0,...$, where the first 1 of the first function and the last 1 of the second function are at the same place.
By the proof of Proposition \ref{additivity}, the function
$\depth R/((IJ)^n,x-y)$ is a function of the form $1,...,1,2,1,1,...$ for some variables $x,y$.
If we can find variables $x',y'$ such that $x'-y'$ is a non-zerodivisor in $R/((IJ)^n,x-y)$ for all $n \ge1$,
then
$$\depth R/((IJ)^n,x-y,x'-y') = \depth R/((IJ)^n,x-y)-1$$
is a function of the form $0,...,0,1,0,0,...$.
Clearly, we can identify $S = R/(x-y,x'-y')$ with a polynomial ring
and $(IJ,x-y,x'-y')/(x-y,x'-y')$ with a monomial ideal in $S$.
\par
To find such variables $x',y'$ we need to know the associated primes of the ideal $((IJ)^n,x-y)$ for all $n \ge 1$.
For convenience, we denote the set of the associated primes and the set of the minimal associated primes of an ideal $Q$ by $\Ass(Q)$ and $\Min(Q)$, respectively.
\begin{proposition} \label{control}
Let $A$ and $B$ be polynomial rings over a field $k$.
Let $I \subset A$ and $J \subset B$ be nonzero proper homogeneous ideals.
Let $x \in A$ and $y \in B$ be arbitrary variables. Let $R = A\otimes_kB$.
Then
\begin{align*}
& \Ass (IJ,x-y) = \\
& \{({\mathfrak p},x-y)|\ {\mathfrak p} \in \Ass (I)\} \cup \{({\mathfrak q},x-y)|\ {\mathfrak q} \in \Ass(J)\} \cup \bigcup_{\begin{subarray}{l} {\mathfrak p} \in \Ass (I), x \in {\mathfrak p}\\ {\mathfrak q} \in \Ass (J), y \in {\mathfrak q} \end{subarray}} \Min({\mathfrak p}+{\mathfrak q}).
\end{align*}
\end{proposition}
\begin{proof}
Let $P$ be an arbitrary prime of $\Ass(I,x-y)$. Then $P = {\mathfrak p}+(x-y)$ for some ${\mathfrak p} \in \Ass(I)$.
If $J \subseteq P$, we must have $J \subseteq (y) \subset P$.
This implies $J = y^dJ'$ for some ideal $J' \subset B$, $J' \not\subseteq (y)$, $d \ge 1$.
Let $f \in A$ be an element such that $P = (I,x-y):f$.
It is easy to check that $P = (y^dI,x-y) : y^df.$
Hence, $P \in \Ass(y^dI,x-y)$.
Since $(IJ,x-y)_P = (y^dI,x-y)_P$, this implies $P \in \Ass(IJ,x-y)$.
If $J \not\subseteq P$, we have $(IJ,x-y)_P = (I,x-y)_P$.
Hence, $P \in \Ass(IJ,x-y)$.
So we can conclude that
$$\Ass(I,x-y) = \{({\mathfrak p},x-y)|\ {\mathfrak p} \in \Ass (I)\} \subseteq \Ass(IJ,x-y).$$
\par
Similarly, we also have
$$\Ass(J,x-y) = \{({\mathfrak q},x-y)|\ {\mathfrak q} \in \Ass(J)\} \subseteq \Ass(IJ,x-y).$$
It remains to prove that if $P$ is a prime ideal of $R$, which does not belong to $\Ass(I,x-y)$ nor $\Ass(J,x-y)$,
then $P \in \Ass(IJ,x-y)$ if and only if $P \in \Min({\mathfrak p}+{\mathfrak q})$ for some ${\mathfrak p} \in \Ass (I), x \in {\mathfrak p}$, and ${\mathfrak q} \in \Ass (J), y \in {\mathfrak q} $. \par
Without loss of generality, we may assume that $(IJ,x-y) \subseteq P$.
Since $P \not\in \Ass(I,x-y)$, we have $\depth(R/(I,x-y))_P \ge 1$.
Since $x-y$ is a non-zerodivisor on $R/I$, this implies $\depth (R/I)_P \ge 2$.
Similarly, we also have $\depth (R/J)_P \ge 2$.
Note that $P \in \Ass(IJ,x-y)$ if and only if $\depth (R/(IJ,x-y))_P = 0$.
By Lemma \ref{HoaTam} we have $IJ = I \cap J$. Hence, $x-y$ is a non-zerodivisor in $R/IJ$.
From this it follows that $P \in \Ass(IJ,x-y)$ if and only if $\depth (R/IJ)_P = 1$.
Using the exact sequence
$$0 \to (R/IJ)_P \to (R/I)_P \oplus (R/J)_P \to (R/(I+J))_P \to 0$$
we can deduce that $\depth (R/IJ)_P = 1$ if and only if $\depth (R/(I+J))_P = 0$, which means $P \in \Ass(I+J)$.
By \cite[Theorem 2.5]{HNTT}, we have
$$\Ass(I+J) = \bigcup_{\begin{subarray}{l} {\mathfrak p} \in \Ass (I)\\ {\mathfrak q} \in \Ass (J)\end{subarray}} \Min({\mathfrak p}+{\mathfrak q}).$$
Notice that ${\mathfrak p} +{\mathfrak q}$ is not necessarily a prime ideal (see e.g. \cite[Example 2.3]{HNTT}).
If $P \in \Min({\mathfrak p}+{\mathfrak q})$, then $P \cap A = {\mathfrak p}$ and $P \cap B = {\mathfrak q}$ by \cite[Lemma 2.4]{HNTT}. Moreover,
$P$ is a bihomogeneous ideal with respect to the natural bigraded structure of $R = A \otimes_k B$.
In this case, $x-y \in P$ implies $x \in P \cap A = {\mathfrak p}$ and $y \in P \cap B = {\mathfrak q}$.
So we can conclude that $P \in \Ass(IJ,x-y)$ if and only if $P \in \Min({\mathfrak p}+{\mathfrak q})$ for some ${\mathfrak p} \in \Ass (I), x \in {\mathfrak p}$, and ${\mathfrak q} \in \Ass (J), y \in {\mathfrak q} $.
\end{proof}
\begin{remark}
{\rm Since Proposition \ref{control} is of independent interest,
one may ask whether it is true in a more general setting.
If $I,J$ are not homogeneous, we can use the same arguments to prove the following general formula:
\begin{align*}
\Ass (IJ,x-y) = & \{({\mathfrak p},x-y)|\ {\mathfrak p} \in \Ass (I)\} \cup \{({\mathfrak q},x-y)|\ {\mathfrak q} \in \Ass(J)\} \cup\\
& \bigcup_{\begin{subarray}{l} {\mathfrak p} \in \Ass (I)\\ {\mathfrak q} \in \Ass (J)\end{subarray}} \Min({\mathfrak p}+{\mathfrak q}) \cap V(x-y),
\end{align*}
where $V(x-y)$ denotes the set of prime ideals containing $x-y$. In this case,
$(IJ,x-y)$ may have an associated prime $P \in \Min({\mathfrak p}+{\mathfrak q})$ for some ${\mathfrak p} \in \Ass(I)$ and ${\mathfrak q} \in \Ass(J)$ with $x \not\in {\mathfrak p}$ and $y \not\in {\mathfrak q}$.
}
\end{remark}
\begin{example}
{\rm Let $A = {\mathbb Q}[x,z]$ and $I = (x^2+1,z)$. Let $B = {\mathbb Q}[y,t]$ and $J = (y^2+1,t)$.
Then $I, J$ are prime ideals, $x \not\in I$ and $y \not\in J$. We have
$$\Min(I+J) = \{(x^2+1,t,z,x-y),(x^2+1,t,z,x+y)\}.$$
Hence
$$\Ass(IJ,x-y) = \{(x^2+1,z,x-y), (y^2+1,t,x-y), (x^2+1,z,t,x-y)\}.$$
These primes contain neither $x$ nor $y$.}
\end{example}
Using Proposition \ref{control} we can give a sufficient condition for the existence of variables $x',y'$ such that $x'-y'$ is a non-zerodivisor in $R/((IJ)^n,x-y)$ for all $n \ge 1$.
\begin{proposition} \label{reduction}
Let $I$ be a proper monomial ideal in $A = k[x_1,...,x_r]$, $r \ge 3$, such that $x_3,...,x_r \in \sqrt{I}$.
Let $J$ be a proper monomial ideal in $B=k[y_1,...,y_s]$, $s \ge 3$, such that $y_3,...,y_s \in \sqrt{J}$.
Let $R = k[x_1,...,x_r,y_1,...,y_s]$.
Assume that $\depth A/I^n > 0$ or $\depth B/J^n > 0$ for some $n > 0$. Then
$$\depth R/((IJ)^n,x_1-y_1,x_2-y_2) = \depth A/I^n + \depth B/J^n - 1.$$
\end{proposition}
\begin{proof}
By the proof of Proposition \ref{additivity} we have
$$\depth R/((IJ)^n,x_1-y_1) = \depth A/I^n + \depth B/J^n \ge 1.$$
It remains to show that $x_2-y_2$ is a non-zerodivisor in $R/((IJ)^n,x_1-y_1)$.
Assume to the contrary that $x_2-y_2 \in P$ for some associated prime $P$ of $((IJ)^n,x_1-y_1)$. By Proposition \ref{control}, $P = {\mathfrak p}+{\mathfrak q}$ for some ${\mathfrak p} \in \Ass(I^n)$ with $x_1 \in {\mathfrak p}$ and ${\mathfrak q} \in \Ass(J^n)$ with $y_1 \in {\mathfrak q}$.
Note that ${\mathfrak p}$ and ${\mathfrak q}$ are generated by variables in $A$ and $B$.
Since $x_2-y_2 \in {\mathfrak p}+{\mathfrak q}$, we must have $x_2 \in {\mathfrak p}$ and $y_2 \in {\mathfrak q}$.
The assumption $x_3,...,x_r \in \sqrt{I}$ and $y_3,...,y_s \in \sqrt{J}$ implies $x_3,...,x_r \in {\mathfrak p}$ and $y_3,...,y_s \in {\mathfrak q}$.
Hence, $x_1,...,x_r,y_1,...,y_s \in P$. Therefore, $P = (x_1,...,x_r,y_1,...,y_s)$, which contradicts the fact that
$\depth R/((IJ)^n,x_1-y_1) \ge 1$.
\end{proof}
Now we are going to construct monomial ideals having depth function of Type II.
\begin{example} \label{Type II}
{\rm Let $A = k[x,y,z]$ and $I=(x^{d+2},x^{d+1}y,xy^{d+1},y^{d+2},x^dy^2z)$, $d \ge 2$.
By Example \ref{Type I} we have
$$\depth A/I^n = \begin{cases} 0 &\text{if $n \le d-1$},\\ 1 &\text{if $n \ge d$}. \end{cases}$$
Let $B = k[t,u,v]$. Let $J$ be the integral closure of the ideal
$(t^{3d+3},tu^{3d+1}v,u^{3d+2}v)^3$ or $J = (t^{d+1},tu^{d-1}v,u^dv)$.
By \cite[Example 4.10]{HTT} and \cite[Proposition 1.5]{MST} we have
$$\depth B/J^n=\begin{cases} 1 & \text{if $n\le d$},\\ 0 &\text{if $n\ge d+1$}. \end{cases}$$
Let $R = k[x,y,z,t,u,v]$.
By Proposition \ref{reduction}, we have
$$\depth R/((IJ)^n,y-u,z-v) = \begin{cases} 0 &\text{if $n\neq d$},\\ 1 &\text{if $n = d$}. \end{cases}$$
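Spelling out the computation (a reasoning step we make explicit here), Proposition \ref{reduction} gives $\depth R/((IJ)^n,y-u,z-v) = \depth A/I^n + \depth B/J^n - 1$, which equals
$$0 + 1 - 1 = 0 \ \text{ for } n \le d-1,\qquad 1 + 1 - 1 = 1 \ \text{ for } n = d,\qquad 1 + 0 - 1 = 0 \ \text{ for } n \ge d+1.$$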
If we set $S = k[x,t,u,v]$ and $Q = (x^{d+2},x^{d+1}u,xu^{d+1},u^{d+2},x^du^2v)J$, which is obtained from $IJ$ by setting $y = u$ and $z = v$, then
$$\depth S/Q^n = \depth R/((IJ)^n,y-u,z-v).$$
Hence, the depth function of $Q$ is of Type II.}
\end{example}
\section{Consequences}
By Examples \ref{Type I} and \ref{Type II} we have monomial ideals with depth functions of Types I and II.
Therefore, the solution to Conjecture \ref{conj.HH} immediately follows from Proposition \ref{additivity} and Lemma \ref{Types}.
\begin{theorem} \label{Herzog-Hibi}
Let $f$ be any convergent numerical function.
There exists a monomial ideal $Q$ in a polynomial ring $S$ such that $\depth S/Q^n = f(n)$ for all $n \ge 1$.
\end{theorem}
Theorem \ref{Herzog-Hibi} has the following interesting consequence on the associated primes of powers of ideals, which
gives a positive answer to Question \ref{question.R} of Ratliff.
\begin{corollary} \label{Ratliff}
Let $\Gamma$ be a set of positive integers which is either finite or contains all sufficiently large integers.
Then there exists a monomial ideal $Q$ in a polynomial ring $S$ such that ${\mathfrak m} \in \Ass(Q^n)$ if and only if $n \in \Gamma$, where ${\mathfrak m}$ is the maximal homogeneous ideal of $S$.
\end{corollary}
\begin{proof}
Let $f$ be any convergent numerical function such that $f(n) = 0$ if and only if $n \in \Gamma$.
Then there exists a monomial ideal $Q$ in a polynomial ring $S$ such that $\depth S/Q^n = f(n)$ for all $n \ge 1$.
This is the desired ideal because ${\mathfrak m} \in \Ass(Q^n)$ if and only if $\depth S/Q^n = 0$.
\end{proof}
Corollary \ref{Ratliff} also gives a negative answer to the following question of Ratliff \cite[(8.4)]{Ra}.
\begin{question}[Ratliff] \label{Ratliff 2}
Let $Q$ be an arbitrary ideal in a Noetherian ring $S$. Let $P \supset Q$ be a prime ideal such that $P \in \Ass(Q^m)$ for some $m \ge 1$ and $P \in \Ass(Q^n)$ for all $n$ sufficiently large. Is $P \in \Ass(Q^n)$ for all $n \ge m$?
\end{question}
This question was already answered in the negative by Huckaba \cite[Example 1.1]{Hu}. However, the ideal $Q$ in his example is not a monomial ideal as in the proof of Corollary \ref{Ratliff}.
One may also ask about the possible function of the projective dimension of powers of a homogeneous ideal.
Let $Q$ be an arbitrary homogeneous ideal in a polynomial ring $S$.
By the Auslander-Buchsbaum formula we have
$$\pd Q^n = \dim S - \depth S/Q^n - 1.$$
Since $\depth S/Q^n$ is a convergent numerical function \cite{Br}, $\pd Q^n$ is also a convergent numerical function.
\begin{corollary} \label{pd}
Let $g$ be an arbitrary convergent numerical function.
There exist a homogeneous ideal $Q$ and a number $c$ such that $\pd Q^n = g(n) + c$ for all $n \ge 1$.
\end{corollary}
\begin{proof}
Let $m = \max_{n \ge 1}g(n)$. Then $f(n) = m - g(n)$, $n \ge 1$, is a convergent numerical function.
By Theorem \ref{Herzog-Hibi}, there exists a
homogeneous ideal $Q$ in a polynomial ring $S$ such that $\depth S/Q^n = f(n)$ for all $n \ge 1$.
Let $d$ be the number of variables of $S$. Set $c = d-m-1$.
Then
$$\pd Q^n = d - f(n) - 1 = d - m + g(n) - 1 = g(n) + c$$
for all $n \ge 1$.
\end{proof}
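As a toy instance (ours, for illustration only), take $g = 1,0,2,2,2,...$. Then $m = 2$ and $f(n) = 2 - g(n)$ is the convergent function $1,2,0,0,...$. If the monomial ideal $Q$ given by Theorem \ref{Herzog-Hibi} for this $f$ lies in a polynomial ring $S$ with $d$ variables, then $c = d - 3$ and
$$\pd Q^n = g(n) + d - 3 \quad \text{for all } n \ge 1.$$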
It is of interest to know the smallest possible number $c$ for a given function $g$ in Corollary \ref{pd}.
This number is determined by the smallest number of variables of a polynomial ring which contains a homogeneous ideal with a given depth function $f$. We are not able to compute this number.
The proof of Theorem \ref{Herzog-Hibi} uses a high number of variables compared to the values of $f$.
\medskip
\begin{acknowledgement}
H.T. H\`a is partially supported by the Simons Foundation (grant \#279786) and Louisiana Board of Regents (grant \#LEQSF(2017-19)-ENH-TR-25). Hop D. Nguyen and T.N. Trung are partially supported by Project ICRTM 01$\_$2019.01 of the International Centre for Research and Postgraduate Training in Mathematics.
N.V. Trung is partially supported by Vietnam National Foundation for Science and Technology Development.
The authors would like to thank Aldo Conca and J\"urgen Herzog for useful discussions, Takayuki Hibi for pointing out a gap of the proof of Conjecture \ref{conj.HH} for non-increasing functions in \cite{HTT}, and C\u{a}t\u{a}lin Ciuperc\u{a} for informing that our negative answer to Question \ref{Ratliff 2} of Ratliff was already given by S. Huckaba in \cite{Hu}.
This paper is split from the first version of \cite{HNTT} following a recommendation of its referee.
\end{acknowledgement}
\section{Introduction}
The current values for the Standard Model couplings imply a striking feature for the potential of the Higgs: at large scales there seems to exist a second minimum, with large negative energy-density. This is problematic for early universe physics, as during inflation strong background curvature may cause the Higgs to fluctuate into this second minimum, in clear contradiction with the current observation of the electroweak vacuum. Currently the most accurate calculation \cite{Buttazzo:2013uya} for the potential of the Standard Model (SM) Higgs tells us that the instability scale $\Lambda_I$ where the potential turns over to negative values lies between $10^{10}$--$10^{12}$\,GeV. Furthermore, from the primordial tensor bound \cite{Ade:2015xua} we know that the scale of inflation can be as high as $H\sim10^{14}$\,GeV. As a leading approximation, the probability density of vacuum decay during inflation scales as
\begin{equation}
P\sim \exp\left\{-8\pi^2V_{\rm max}/(3H^4)\right \}\, ,\label{eq:p}
\end{equation}
with respect to the maximum of the potential $V_{\rm max}$ with roughly $V_{\rm max}^{1/4}\sim\Lambda_I$. Hence, a large inflationary scale, $H\gg\Lambda_I$, appears to imply probable vacuum decay during inflation \cite{Fairbairn:2014zia}. But importantly, this conclusion is based on a potential calculated on a flat background, and its validity must be carefully assessed for cases with strong background curvature, such as large scale inflation. Spacetime curvature can be incorporated into the calculation of the potential by using the standard approach of quantum field theory on a curved background; for a large $H$ it turns out that curvature can provide a stabilizing mechanism, but it can also enhance the instability even further \cite{Herranen:2014cua}. The two new ingredients introduced by a curved background are the generation of the so-called non-minimal coupling $\xi$ between the SM Higgs and the scalar curvature $R$, and curvature induced renormalization group flow.
\section{Effective potential in curved space}\label{subsec:prod}
If the SM behaves as a subdominant spectator on an inflating background, we can conveniently introduce the curvature corrections by invoking the resummed Heat Kernel method \cite{Jack:1985mw}. This gives the ultraviolet (UV) portion of the modes as required by stochastic quantization \cite{Starobinsky:1994bd}, which is the most convenient way of deriving (\ref{eq:p}). In de Sitter space for the relevant degrees of freedom to 1-loop order in the 't Hooft-Landau gauge the quantum corrected, or effective, potential reads
\begin{align}
V_{\rm eff}& = -\f{1}{2}m^2\phi^2 + \f{1}{2}\xi R\phi^2 + \f{1}{4}\lambda\phi^4
+ \sum\limits_{i=1}^9 \f{n_i}{64\pi^2}M_i^4(\phi)\left[\log\f{\big|M_i^2(\phi)\big|}{\mu^2} - c_i \right]\,;\label{potential}\\M^2_i(\phi)&=\kappa_i\phi^2-\kappa'_i +\theta_i R\, ,\label{eq:effm}
\end{align}
where the $n_i$ count the degrees of freedom and the $M_i(\phi)$ are the effective masses. The field $\phi$ is the scalar degree of freedom of the Higgs doublet that develops an expectation value at low energies resulting in the known masses for the particles.
\begin{table}
\caption{\label{tab:contributions}The effective potential (\ref{potential}) with $W^{\pm}$, $Z^0$, top quark t, Higgs $\phi$ and the Goldstone bosons $\chi_{1,2,3}$.}
\vspace{2mm}
\begin{center}
\begin{tabular}{|c||cccccc|}
\hline
$\Phi$ & $~~i$ & $~~n_i$ &$~~\kappa_i$ & $\kappa'_i$ & $~~\theta_i$ & $\quad c_i~~$ \\[1mm]\hline
$~$ & $~~1$ & $~~2$ & $~~ g^2/4$ & $0$ & $~~{1}/{12}$ & $\quad{3}/{2}~~$ \\[1mm]
$~W^\pm$ & $~~2$ & $~~6$ & $~~ g^2/4$ & $0$ & $~~{1}/{12}$ & $\quad{5}/{6}~~$ \\[1mm]
$~$ & $~~3$ & $-2$ & $~~g^2/4$ & $0$ & $-{1}/{6}$ & $\quad{3}/{2}~~$ \\[1mm]\hline
$~$ & $~~4$ & $~~1$ & $~~(g^2+g'^2)/4$ & $0$ & $~~{1}/{12}$ & $\quad{3}/{2}~~$ \\[1mm]
$Z^0$ & $~~5$ & $~~3$ & $~~(g^2+g'^2)/4$ & $0$ & $~~{1}/{12}$ & $\quad{5}/{6}~~$ \\[1mm]
$~$ & $~~6$ & $-1$ & $~~(g^2+g'^2)/4$ & $0$ & $-{1}/{6}$ & $\quad{3}/{2}~~$ \\[1mm]\hline
t & $~~7$ & $-12$ & $~~ y_{\rm t}^2/2$ & $0$ & $~~{1}/{12}$ & $\quad{3}/{2}~~$ \\[1mm]\hline
$\phi$ & $~~8$ & $~~1$ & $~~3\lambda$ & $~m^2$ & $~~\xi-{1}/{6}$ & $\quad{3}/{2}~~$
\\[1mm]\hline
$\chi_i$ & $~~9$ & $~~3$ & $~~\lambda$ & $~m^2$ & $~~\xi-{1}/{6}$ & $\quad{3}/{2}~~$
\\[.5mm]\hline
\end{tabular}
\end{center}
\end{table}
The potential in (\ref{potential}) is not yet applicable for scales relevant for vacuum instability $\phi \geq\Lambda_I$ and the reason for this lies in the renormalization scale $\mu$: the parameters are fixed to experiments at much smaller scales than $\Lambda_I$ causing the logarithms to become large. This can be cured by using the well-known technique of renormalization group (RG) improvement, which leads to a potential with coupling constants running with the energy scale. At its core, RG improvement is a statement of $\mu$ independence, so the improved result must satisfy ${d}V_{\rm eff}/{d\mu}=0$. Unfortunately, in a perturbative result the truncated higher order terms always contain a residual $\mu$ dependence. Hence, one should choose $\mu$ in such a way that the expansion is optimized \cite{Ford:1992mv}, essentially by keeping the logarithms small. A good and frequently used choice in flat space for $\phi\gg m$ is $\mu=\phi$, which gives as the leading approximation $V_{\rm eff}\approx(\lambda(\phi)/4)\phi^4$. For a strongly curved background the same prescription forces us to make a different choice as now all effective masses contain curvature contributions.
A good choice in curved space is then
\begin{equation}
\mu^2=\phi^2+R\,.\label{eq:mu}
\end{equation} From the above one can clearly see an important difference to the flat space result: \textit{Curvature induces running of the parameters.} Another important difference is the generation of the non-minimal term $(1/2)\xi R \phi^2$. From the SM $\beta_\xi$-function we can determine that $\xi=0$ is not a fixed point of the RG flow \cite{Herranen:2014cua}, which means that a non-zero $\xi$ will always be generated by a change in energy, such as the slow change in $H$ during inflation. It then follows that: \textit{the SM contains a non-zero $\xi$ parameter}. These two effects are clearly visible in the leading contribution at a large scale ($\phi\gg m$), i.e. running couplings in the tree-level potential
\begin{equation}
V_{\rm eff}=\f{\xi(\mu)}{2}R\phi^2+\f{\lambda(\mu)}{4}\phi^4\, ,\label{eq:lo}
\end{equation}
where $\mu$ is chosen as in (\ref{eq:mu}). The main effects of curvature can be understood from expression (\ref{eq:lo}) and trivially for small $R$ it approaches the flat space form.
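As a sanity check (our own remark, not part of the original text), the choice (\ref{eq:mu}) indeed keeps the logarithms in (\ref{potential}) small: for $\phi\gg m$ the arguments of the logarithms become
$$\f{\big|M_i^2(\phi)\big|}{\mu^2}\approx\f{\big|\kappa_i\phi^2+\theta_i R\big|}{\phi^2+R}\,,$$
which remains of order one for couplings $\kappa_i,\theta_i = \mathcal{O}(1)$ both in the flat-space regime $\phi^2\gg R$ and in the curvature-dominated regime $R\gg\phi^2$, whereas the flat-space choice $\mu=\phi$ would leave a large $\log(R/\phi^2)$ in the latter case.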
\section{Stability during inflation}
\begin{figure}
\begin{center}
\includegraphics[width=0.55\textwidth]{./Potential_curved.pdf}
\end{center}
\caption{\label{f}The RG improved 1-loop effective potential, with $H=10^{10}$GeV and $\xi=0.1$ at the electroweak scale.}
\end{figure}
We can now study the stability of the effective potential by using (\ref{eq:lo}) as a guide. An important observation comes by realizing that the four-point coupling is negative from the very onset of inflation if the scale of inflation $H=(R/12)^{1/2}$ exceeds the instability scale $\Lambda_I$, because of curvature induced running. This means that the only hope of having a positive potential is a large positive non-minimal term $(1/2)\xi R \phi^2$. If $\xi$ is renormalized to zero at the electroweak scale it can be shown to run to a negative value at the inflationary scale \cite{Herranen:2014cua} and hence for this choice the potential is negative and monotonically decreasing, and probably unstable. However, already when choosing $\xi$ to be $\sim 0.1$ at electroweak scales the non-minimal term runs to give a large positive contribution. As the couplings run weakly at large scales, as a first approximation we can solve the maximum of the potential for a negative $\lambda$ but a positive $\xi$ by using (\ref{eq:lo}):
\begin{equation}
\Lambda^2_{\rm max}\approx-\frac{\xi(\mu) }{\lambda(\mu)}R\,,\qquad V_{\rm max}\approx-\frac{\left[\xi(\mu)R\right]^2}{4\lambda(\mu)}\,;\qquad \mu^2\approx R\,.\label{eq:anapp}
\end{equation}
The above tells us that even for modest values of $\xi$ we have $V_{\rm max}\geq H^4$ and (\ref{eq:p}) shows that the vacuum decay probability is significantly diminished. This is also visible in figure \ref{f}, where we plot the full 1-loop RG improved potential in curved space. The dashed lines correspond to the approximations in (\ref{eq:anapp}) and the $x$-axis is normalized with respect to the field value for the maximum in flat space $\overline{\Lambda}_{\rm max}$. We have chosen $H\sim10^{10}$GeV, since in the 1-loop approximation $\lambda$ becomes negative much earlier than in the state-of-the-art derivation \cite{Buttazzo:2013uya}, roughly at $\Lambda_I\sim10^8$GeV, and hence for a 1-loop result this choice corresponds to $H\gg\Lambda_I$. The result clearly shows that the peak of the potential now occurs at a scale that is $\sim 10^{3}$-times larger than $\overline{\Lambda}_{\rm max}$ and importantly that the maximum of the potential is also increased by a similar amount, approximately $V^{1/4}_{\rm max}\sim 2H$. Formula (\ref{eq:p}) then gives the transition probability $P<e^{-400}$, which indicates that $\xi \gtrsim0.1$ at the electroweak scale is enough to stabilize the potential. Note that all values for $\xi$ larger than some threshold will give a stabilizing result and in fact already slightly above $\xi=1/6$ the Higgs starts behaving as a non-fluctuating massive field. The current experimental bounds for $\xi$ are very weak \cite{Atkins:2012yn} and hence the SM can be stable during inflation.
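As a quick numerical cross-check (our own sketch, not code from the paper), one can verify that the quoted estimate $V_{\rm max}^{1/4}\sim 2H$ indeed implies $P<e^{-400}$ via (\ref{eq:p}):

```python
import math

# Evaluate the exponent of Eq. (1), P ~ exp(-8 pi^2 V_max / (3 H^4)),
# using the estimate V_max^{1/4} ~ 2H quoted in the text. The ratio
# V_max / H^4 = 2^4 = 16 is then independent of the actual value of H.
v_max_over_h4 = 2.0 ** 4
exponent = -8.0 * math.pi ** 2 * v_max_over_h4 / 3.0

print(exponent)  # roughly -421, comfortably below -400, so P < e^{-400}
```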
\section{Stability after inflation}
Even if there is no instability during inflation, the dynamics of the subsequent reheating phase can also trigger the instability \cite{Herranen:2015ima}. Reheating is a complicated process and often involves non-perturbative resonance phenomena that are characterised by explosive particle production that result from the oscillations of the inflaton \cite{Kofman:1997yn}. As the SM Higgs will always be coupled to the inflaton due to the generation of $\xi$, it will feel the dynamics of reheating. The Einstein equation gives for an inflaton $\Phi$ with the potential $U(\Phi)$ in Planck units, $R=4 U(\Phi)-\dot{\Phi}^2$,
which shows that if $\Phi$ oscillates, so does $R$ and the Higgs generically has an oscillating mass $\xi R$. Importantly, the oscillations of $R$ periodically become negative, which gives rise to \textit{tachyonic resonance} that can quickly generate a large fluctuation \cite{Bassett:1997az} and possibly vacuum decay \cite{Herranen:2015ima}. In particular, reheating limits \textit{large} values of $\xi$ as the amplitude of the oscillations increases with $\xi$, which in turn makes the effect itself stronger. For $\xi\lesssim\mathcal{O}(1)$ the potential instability can be avoided \cite{Herranen:2015ima}.
\section{Conclusions}
To conclude, gravity cannot be neglected in the quantum dynamics of the early universe vacuum instability for a large scale of inflation. The two main effects are curvature induced running of the constants and the generation of the non-minimal coupling for the Higgs. The non-minimal parameter $\xi$ can provide a stabilizing mechanism during inflation, where requiring stability results in a lower bound for $\xi$. However, in the reheating phase large values of $\xi$ result in the generation of a large fluctuation via tachyonic resonance, which can result in vacuum decay. Combining the inflationary and reheating stability limits gives at the electroweak scale \cite{Herranen:2014cua,Herranen:2015ima}
\begin{equation}
0.1\lesssim\xi\lesssim\mathcal{O}(1)\,.
\end{equation}
\section*{Acknowledgments}
This talk is based on the research articles \cite{Herranen:2014cua,Herranen:2015ima} written in collaboration with M. Herranen, S. Nurmi and A. Rajantie. TM wishes to thank the organizers of the 27th Rencontres de Blois for a vibrant conference. This work was supported by the Osk. Huttunen Foundation.
\section*{References}